I am experiencing poor performance on a single-row insert/update statement against a table when an nvarchar(max) column holds a few MB of data.
Here is my table structure:
CREATE TABLE [dbo].[tbl_set_Cart](
    [ID] [int] NOT NULL,
    [AS400_CUSTOMER_COD] [nvarchar](50) NOT NULL,
    [AS400_LISTIN] [int] NOT NULL,
    [VALUE] [nvarchar](max) NOT NULL,
    [DELIVERY_COSTS] [nvarchar](max) NOT NULL,
    [ITEMS_COUNT] [int] NOT NULL,
    [ITEMS] [nvarchar](max) NOT NULL,
    [KIND] [int] NOT NULL,
    [CHECKOUT_INFO] [nvarchar](max) NOT NULL,
    [ISSUES] [nvarchar](max) NOT NULL,
    [LAST_CHECK] [datetime] NOT NULL,
    [USER_ID] [int] NOT NULL,
    [IMPERSONATED_USER_ID] [int] NOT NULL,
    [OVERRIDE_PRICES] [bit] NOT NULL,
    [HAS_ISSUE] [bit] NOT NULL,
    [IS_CONFIRMED] [bit] NOT NULL,
    [IS_COLLECTED] [bit] NOT NULL,
    [_METADATA] [nvarchar](max) NOT NULL,
    CONSTRAINT [PK_tbl_set_Cart] PRIMARY KEY CLUSTERED
    (
        [ID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
Here is an example of an update statement:
DECLARE @p0 AS INT = [cart_id];
DECLARE @p1 AS INT = [entry_count];
DECLARE @p2 AS NVARCHAR(MAX) = N'..document..'; -- ~8 MB
UPDATE [dbo].[tbl_set_Cart]
SET [ITEMS_COUNT] = @p1, [ITEMS] = @p2
WHERE [ID] = @p0
Executing this command on the same machine as the DB (a development environment, so there is no other workload), the update takes roughly 2.5 s.
It really surprises me that loading text into SQL Server is this slow, so what I am asking is whether there is a better way to achieve it.
The document is a big JSON that will not exceed 10 MB of data (the benchmark reported above was done in the worst-case scenario), and I have no issue with converting it to a BLOB or another data structure if performance improves. If it's a better idea, I can also use a stored procedure to upload the big text.
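For example, I could wrap the update in a procedure along these lines (just a sketch, the procedure name is invented):

CREATE OR ALTER PROCEDURE [dbo].[usp_SetCartItems]  -- hypothetical name
    @CartId     INT,
    @ItemsCount INT,
    @Items      NVARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;

    -- Same single-row update as in the ad-hoc example above
    UPDATE [dbo].[tbl_set_Cart]
    SET [ITEMS_COUNT] = @ItemsCount,
        [ITEMS]       = @Items
    WHERE [ID] = @CartId;
END;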
Environment:
Windows Server 2019, 16 logical processors at 2.1 GHz, Microsoft SQL Server Standard (64-bit) version 15.0.2070.41 with 4 GB reserved RAM, 16 cores licensed.
(It's a development-only environment, and in this case I am the only one working on it, with no other program or scheduled activity running.)
I have tested again with a different cart (a slightly smaller one); here are the times (client statistics) and the execution plan.
link here : https://www.brentozar.com/pastetheplan/?id=SJcPhDUFK
If I insert a new row it's faster:
link here : https://www.brentozar.com/pastetheplan/?id=S1FDjD8KF
For the insert statement I used the data generated by SQL Server (task -> script table -> data only).
What I am expecting to achieve is an update time of < 0.5 s.
In my scenario the JSON usually starts small and then grows day by day up to the size I am testing. Should I start thinking about storing it as a file instead?
I have preallocated space in the log file and checked again that the recovery model is set to SIMPLE. What can I do to further improve performance?
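(For reference, the kind of commands involved; the database and logical file names below are placeholders, not my real ones:)

-- Check / set the recovery model
SELECT name, recovery_model_desc FROM sys.databases WHERE name = N'MyDb';
ALTER DATABASE [MyDb] SET RECOVERY SIMPLE;

-- Pre-grow the log file so the update does not pay for autogrowth (name and size are examples)
ALTER DATABASE [MyDb] MODIFY FILE (NAME = N'MyDb_log', SIZE = 4096MB);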
- A simple update shouldn't take very long, see dbfiddle.uk/… – Paul White ♦ Dec 2, 2021 at 14:54
- Are you using parameters? – Charlieface Dec 4, 2021 at 21:13
2 Answers
Your slow update plan shows a long wait on LOGBUFFER.
You should check that the log file for the database is on fast enough storage for the workload.
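One way to check this is the file-stats DMV (not taken from your plan, just a common check; the averages are cumulative since the last restart):

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id
WHERE mf.type_desc = 'LOG';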
There might also be some benefit from increasing the amount of RAM available to the instance. The PAGEIOLATCH_EX wait is also quite long at 361 ms.
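If the instance's memory cap is the limiting factor, it can be raised with sp_configure (8192 MB below is only an example value):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 8192;  -- size this to the server; 8192 is an example
RECONFIGURE;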
As David Browne suggested, you could also (a sketch of both follows the list):
- COMPRESS the JSON to shift some of the IO to CPU.
- Consider the "large value types out of row" table option to get more rows per page in the clustered index and reduce row migration on update, which should reduce the PAGEIOLATCH waits.
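A rough sketch of both ideas (assuming [ITEMS] is changed to varbinary(max) for the compressed variant; COMPRESS/DECOMPRESS use GZIP and exist from SQL Server 2016 onwards):

-- 1) Store the JSON compressed (requires [ITEMS] to become varbinary(max))
DECLARE @p0 INT = 1;                              -- placeholder cart id
DECLARE @p1 INT = 100;                            -- placeholder item count
DECLARE @p2 NVARCHAR(MAX) = N'{ "items": [] }';   -- placeholder JSON

UPDATE [dbo].[tbl_set_Cart]
SET [ITEMS_COUNT] = @p1,
    [ITEMS] = COMPRESS(@p2)                       -- GZIP-compressed varbinary(max)
WHERE [ID] = @p0;

-- Reading it back
SELECT CAST(DECOMPRESS([ITEMS]) AS NVARCHAR(MAX)) AS ITEMS
FROM [dbo].[tbl_set_Cart]
WHERE [ID] = @p0;

-- 2) Push large value types off-row (only affects values written after the change)
EXEC sys.sp_tableoption N'dbo.tbl_set_Cart', 'large value types out of row', 1;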
- LZMA compression implemented and it dramatically reduced IO time and size. Under my worst-case scenario I can update in 0.3 s (on average) against 2.5 s with the previous implementation. Really good advice, thanks again. – Skary Dec 2, 2021 at 17:28
I suggest you try putting/copying the JSON information into a file in a "local" directory (on or close to the DB server) and then use the OPENROWSET BULK feature:
DECLARE @json NVARCHAR(MAX);

SELECT @json = BulkColumn
FROM OPENROWSET (BULK 'C:\JSON\Books\book.json', SINGLE_CLOB) AS j;
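The loaded variable can then be used in the update, for example (a sketch; note that SINGLE_CLOB reads the file as ANSI text, so for Unicode content saved as UTF-16 SINGLE_NCLOB is the matching option):

DECLARE @cart_id INT = 1;   -- placeholder for the target cart's ID

UPDATE [dbo].[tbl_set_Cart]
SET [ITEMS] = @json
WHERE [ID] = @cart_id;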
- Does anyone have any test data to compare regarding performance? – Ben Jul 6, 2022 at 22:48