Showing posts with label Errors.
Wednesday, October 30, 2019
SQLSTATE 42000 Error 666 and what to do
This morning I was greeted by the following message from a job email:
The maximum system-generated unique value for a duplicate group was exceeded for index with partition ID 72059165481762816. Dropping and re-creating the index may resolve this; otherwise, use another clustering key. [SQLSTATE 42000] (Error 666)
Almost Halloween? check!
Error 666? check!
Ever seen this error before? no!
The job has a step that inserts into a bunch of tables...
The table in question had a clustered index that was created without the UNIQUE property. When you create such an index, SQL Server will create a uniqueifier internally
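If you want to see this in action, here is a minimal sketch (the table, index and column names are made up for the demo): when a clustering key value is duplicated, the duplicate rows carry a few extra bytes for the uniqueifier, which shows up when you compare the minimum and maximum leaf-level record sizes.

-- Minimal sketch with demo names: non-unique clustered index plus duplicate key values
CREATE TABLE dbo.UniqueifierDemo (SomeKey int NOT NULL, Filler char(10) NOT NULL)
CREATE CLUSTERED INDEX IX_UniqueifierDemo ON dbo.UniqueifierDemo (SomeKey) -- note: no UNIQUE property

INSERT INTO dbo.UniqueifierDemo (SomeKey, Filler)
VALUES (1, 'a'), (1, 'b'), (1, 'c'), (2, 'd')

-- Duplicates of SomeKey = 1 get a uniqueifier behind the scenes, so the max record size
-- at the leaf level is larger than the min record size
SELECT index_type_desc, min_record_size_in_bytes, max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.UniqueifierDemo'), NULL, NULL, 'DETAILED')
WHERE index_level = 0

DROP TABLE dbo.UniqueifierDemo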
This part is from the CSS SQL Server Engineers blog post:
A uniqueifier (or uniquifier, as reported by SQL Server internal tools) has been used in the engine for a long time (since SQL Server 7.0), and even though it is known to many and referenced in books and blogs, the SQL Server documentation clearly states that you will not see it exposed externally in the engine (https://docs.microsoft.com/en-us/sql/relational-databases/sql-server-index-design-guide).
"If the clustered index is not created with the UNIQUE property, the Database Engine automatically adds a 4-byte uniqueifier column to the table. When it is required, the Database Engine automatically adds a uniqueifier value to a row to make each key unique. This column and its values are used internally and cannot be seen or accessed by users."
While it's unlikely that you will face an issue related to uniqueifiers, the SQL Server team has seen rare cases where a customer reaches the uniqueifier limit of 2,147,483,648, generating error 666.
Msg 666, Level 16, State 2, Line 1
The maximum system-generated unique value for a duplicate group was exceeded for index with partition ID <PARTITIONID>. Dropping and re-creating the index may resolve this; otherwise, use another clustering key.
So I ran into this rare case :-(
How can you quickly find out what table and index name the error is complaining about?
You can use the following query; just change the partition ID to match the one from your error message:
SELECT SCHEMA_NAME(o.schema_id) AS SchemaName,
       o.name AS ObjectName,
       i.name AS IndexName,
       p.partition_id AS PartitionID
FROM sys.partitions p
JOIN sys.objects o ON p.object_id = o.object_id
JOIN sys.indexes i ON p.object_id = i.object_id
                  AND p.index_id = i.index_id -- match the index that owns the partition
WHERE p.partition_id = 72059165481762816
After running the query, you will have the schema name, the table name and the index name. That is all you need; you can now drop and recreate the index.
In my case this table was not big at all... 5 million rows or so, but we do delete and insert a lot of data into this table many times a day.
We also have rebuild jobs running, but rebuilds do not reset the uniqueifier (see the note below about a change described by the CSS SQL Server Engineers).
To fix this, all I had to do was drop the index and recreate the index (after filling out tickets and testing it on a lower environment first).
DROP INDEX [IX_IndexName] ON [SchemaName].[TableName]
GO

CREATE CLUSTERED INDEX [IX_IndexName] ON [SchemaName].[TableName]
(
    Col1 ASC,
    Col2 ASC,
    Col3 ASC
) ON [PRIMARY]
GO
After dropping and recreating the index.. the code that threw an error earlier did not throw an error anymore
Since my table only had 5 million rows or so.. this was not a big deal and completed in seconds. If you have a large table you might have to wait or think of a different approach
If you want to know more, check out this post by the CSS SQL Server Engineers: Uniqueifier considerations and error 666
The interesting part is:
As of February 2018, the design goal for the storage engine is to not reset uniqueifiers during REBUILDs. As such, a rebuild of the index ideally would not reset uniqueifiers and the issue would continue to occur when inserting new data with a key value for which the uniqueifiers were exhausted. But current engine behavior is different for one specific case: if you use the statement ALTER INDEX ALL ON <TABLE> REBUILD WITH (ONLINE = ON), it will reset the uniqueifiers (across all versions from SQL Server 2005 to SQL Server 2017).
Important: This is something that is not documented and can change in future versions, so our recommendation is that you should review table design to avoid relying on it.
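For reference, the statement they are describing looks like the following (the table name is a placeholder; as they say above, this behavior is undocumented and may change, so do not build a process around it):

-- Undocumented side effect described in the CSS post: an ONLINE rebuild of ALL indexes
-- currently resets the uniqueifiers (placeholder table name; requires an edition that
-- supports online index operations)
ALTER INDEX ALL ON dbo.TableName REBUILD WITH (ONLINE = ON)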
Edit: it turns out I have seen this before and have even blogged about it: http://sqlservercode.blogspot.com/2017/06/having-fun-with-maxed-out-uniqifiers-on.html
Tuesday, January 10, 2017
T-SQL Tuesday #86: String or binary data would be truncated
This month's T-SQL Tuesday is hosted by Brent Ozar, who proposed the following:
Find the most interesting bug or enhancement request (and it can be your own), and write a blog post about it (including a link to the Connect item so that folks who agree with you can upvote the item)
This one was pretty easy for me; it is the following Connect item: Please fix the "String or binary data would be truncated" message to give the column name
This error drives me crazy as well. It should be fairly easy to tell me, if nothing else, what damn column barfed on the data inserted, but no... all you get is something like this:
Msg 8152, Level 16, State 6, Procedure <ProcName>, Line 61
String or binary data would be truncated.
This is like not having the black box after a plane crash: you know the plane crashed, but you don't know exactly why.
Dealing with this issue on a semi-regular basis, I have even written my own T-SQL helper to quickly see where the issue is:
declare @ImportTable varchar(100)
declare @DestinationTable varchar(100)
select @ImportTable = 'temp'
select @DestinationTable = 'TestTrunc'

declare @ImportTableCompare varchar(100)
declare @DestinationTableCompare varchar(100)
select @ImportTableCompare = 'MaxLengths'
select @DestinationTableCompare = 'TempTrunc'

declare @sql varchar(8000)

-- Step 1: grab the max length of the data in each (n)char/(n)varchar column of the import table
select @sql = ''
select @sql = 'select 0 as _col0 ,'
select @sql += 'max(len( ' + column_name + ')) AS ' + column_name + ','
from information_schema.columns
where table_name = @ImportTable
and data_type in ('varchar','char','nvarchar','nchar')

select @sql = left(@sql, len(@sql) - 1)
select @sql += ' into ' + @ImportTableCompare + ' from ' + @ImportTable

--select @sql --debugging so simple, a caveman can do it
exec (@sql)

-- Step 2: grab the defined max length of each (n)char/(n)varchar column of the destination table
select @sql = ''
select @sql = 'select 0 as _col0, '
select @sql += '' + convert(varchar(20), character_maximum_length) + ' AS ' + column_name + ','
from information_schema.columns
where table_name = @DestinationTable
and data_type in ('varchar','char','nvarchar','nchar')

select @sql = left(@sql, len(@sql) - 1)
select @sql += ' into ' + @DestinationTableCompare

--select @sql --debugging so simple, a caveman can do it
exec (@sql)

-- Step 3: compare the two, column by column, and flag where the data is longer than the destination column
select @sql = ''
select @sql = 'select '
select @sql += '' + 'case when t.' + column_name + ' > tt.' + column_name
    + ' then ''truncation'' else ''no truncation'' end as ' + column_name + ','
from information_schema.columns
where table_name = @ImportTableCompare
and column_name <> '_col0'

select @sql = left(@sql, len(@sql) - 1)
select @sql += ' from ' + @ImportTableCompare + ' t join ' + @DestinationTableCompare + ' tt on t._col0 = tt._col0 '

--select @sql --debugging so simple, a caveman can do it
exec (@sql)

-- Clean up the two compare tables
exec ('drop table ' + @ImportTableCompare + ',' + @DestinationTableCompare)
Something like this only helps you if you have the data readily available. What if the insert comes from an application? In that case you need Profiler or Extended Events to capture the statement.
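One way to do that with Extended Events is a session along these lines; it grabs the statement text whenever error 8152 is raised (the session and file names here are just examples):

-- Example session: capture statements that raise error 8152 (String or binary data would be truncated)
CREATE EVENT SESSION CaptureTruncationErrors ON SERVER
ADD EVENT sqlserver.error_reported
(
    ACTION (sqlserver.sql_text, sqlserver.client_app_name, sqlserver.database_name)
    WHERE error_number = 8152
)
ADD TARGET package0.event_file (SET filename = N'CaptureTruncationErrors')
GO

ALTER EVENT SESSION CaptureTruncationErrors ON SERVER STATE = START
GO

Once the session has caught the statement, you can stop it and read the event file with sys.fn_xe_file_target_read_file.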
It is also crazy that this Connect item is almost 9 years old; it was opened in April 2008. As of today it has 1328 upvotes and 5 downvotes (who are these people... probably working on the SQL Server team :-) )
We do have someone from Microsoft commenting on this issue last August:
Posted by David [MSFT] on 8/5/2016 at 1:39 PM
Latest update - the developer working on it understands the challenges involved in creating a full fix. It may be tricky to plumb the information about columns needed to generate a full error message down to the actual conversion function in such a way that won't impact insert or update performance. We may implement something cheap in the short term such as logging the type and length of the data being truncated. It's still too early to know when such a fix would reach a publicly visible release.
So there you have it, that is my contribution to T-SQL Tuesday #86. Keep track of Brent's blog here: https://www.brentozar.com/blog/ where a recap will be posted later in January 2017.
Saturday, October 10, 2015
Your first step in dealing with SQLState = S1000 I/O error while writing a BCP data-file
The other day one of our jobs started to fail every time it ran. The error was:
Server: FancyServerName
Job: SomeJobName
Step: Step 3) SomeStepName
Message:Executed as user: SomeDomain\SomeServiceAccount. Starting copy... SQLState = S1000, NativeError = 0 Error = [Microsoft][ODBC Driver 11 for SQL Server]I/O error while writing BCP data-file 19 rows copied. Network packet size (bytes): 4096 Clock Time (ms.) Total : 1 Average : (19000.00 rows per sec.). Process Exit Code 1. The step failed.
Time: 2015年10月06日 08:16:20
This was very interesting, since the file we were writing had only 19 rows and was very small, less than a kilobyte. So how do you investigate what the error is? It could be all kinds of things:
Permissions on the drive
Problems with the drive
Memory issues
Issue with a format file
If you do a search you will get all kinds of things back. However, the first thing you need to run is the following:
EXEC xp_fixeddrives
Here is what I saw after running xp_fixeddrives
drive MB free
C 235644
D 1479234
E 10123
F 10123
G 10123
L 10123
S 117706
T 10123
Z 0
Take a look at the Z drive. Do you see that? The Z drive is full; there is no space left. It would be nice if the error message indicated that it couldn't write the file because there was no space left on the drive. In a production environment this should not happen, because hopefully you have something in place that sends out an alert when a drive is 90% full.
So, if you ever get this error, the first thing you have to do is check if you have any space left on the drive.
I also want to point out that the xp_fixeddrives stored procedure is undocumented so there is no guarantee it will be available in a future version of SQL Server.
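If you want something documented instead, a query along these lines returns free space per volume, though only for volumes that actually host database files (so it may not cover a drive that is only used for things like BCP output):

-- Documented alternative to xp_fixeddrives: free space for every volume hosting a database file
SELECT DISTINCT
       vs.volume_mount_point,
       vs.total_bytes / 1048576     AS total_mb,
       vs.available_bytes / 1048576 AS available_mb
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs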
Sunday, December 16, 2007
GNET: Even Google Screws Up By Showing A Bunch Of MySQL Messages
When I visited http://www.gnet.com/ today, I was greeted by the following error messages. Messages like these should never be shown to users; what about a friendly error page instead? Anyway, the mess is below.
Warning: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) in /home/gnet/public_html.v2/libs/adodb/drivers/adodb-mysql.inc.php on line 359
Warning: MySQL Connection Failed: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) in /home/gnet/public_html.v2/libs/adodb/drivers/adodb-mysql.inc.php on line 359
Warning: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) in /home/gnet/public_html.v2/libs/adodb/drivers/adodb-mysql.inc.php on line 359
Warning: MySQL Connection Failed: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) in /home/gnet/public_html.v2/libs/adodb/drivers/adodb-mysql.inc.php on line 359
...followed by the same two warnings repeated over and over down the rest of the page.