I have a large generated .sql file containing the data from our UAT database. I'm trying to load it into my (localdb)\v11.0 instance using sqlcmd.
I was able to use the following (the input-file flag is lowercase -i):
sqlcmd -S (localdb)\v11.0 -d myDB -i c:\temp\mydatafile.sql
The .sql file is 5GB. Everything seemed to be going well; however, at the end of the output in the command window I see an error:
Msg 102, Level 15, State 1
Incorrect syntax near...
Apparently the data that it is pulling from the DB doesn't work in the insert statements. Since the file is 5GB it won't open in an editor, so I can't fix it.
I did a quick data compare and I see records in the UAT database that are not in my localdb, which leads me to believe it did not finish inserting all the data.
Is there a way to skip over these types of errors using sqlcmd?
Since I can't open the file, yet sqlcmd runs into errors while executing it, the next best thing would be knowing which table has the error.
My next question is: is there a way to report which table caused the error so I can exclude it from the original script?
You are limited on options unless you can open that file up and fix the syntax. You might also read through this SO question, see if any of the answers work for you. – user507, Oct 25, 2016
3 Answers
I don't have a 5GB file to test against, so I have no idea whether this proposed solution will work for you. Your data may not lend itself to it, but I thought I'd offer it anyway.
I created a table called SqlCommands:
USE [Test]
GO
CREATE TABLE [dbo].[SqlCommands](
[ID] [int] IDENTITY(1,1) NOT NULL,
[TextData] [varchar](max) NOT NULL,
CONSTRAINT [PK_SqlCommands] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
For testing purposes, I used the SSMS scripting wizard to script the data of one of my small tables as INSERT statements into a test file.
I then used a PowerShell script to read that test file (which is basically a series of SQL commands) and insert each line into the SqlCommands table above. The table has an identity column, and the PowerShell script only populates the TextData column; the ID identity column lets me select from the table in the correct order later. Here is the PowerShell script:
$database = "test"
$server = "."
$table = "dbo.SqlCommands"
$ff="C:\Test\TestData.txt"
write-host "Input file used is $ff"
$inrec=""
$counter=0
if ([System.IO.File]::Exists($ff))
{
$obj=new-object System.IO.StreamReader($ff)
$inrec=$obj.ReadLine()
while ($inrec -ne $null)
{
#escape single quotes by doubling them so the insert works for string literals
$inrec = $inrec -Replace("'","''")
Invoke-SqlCmd -Database $database -ServerInstance $server -Query "insert into $table (TextData) values('$inrec')"
$counter++
$inrec=$obj.ReadLine()
}
$obj.close()
write-host "$counter lines read"
}
else
{
write-host "Source file ($ff) does not exist."
}
write-host "All done now!"
Now I have my SQL file in a table. I wrote a cursor to select from the table ordered by ID, to make sure I execute the commands in the correct order. The cursor script reads each row from the SqlCommands table and checks whether it is a 'GO' statement ('GO' lines were automatically added by the scripting wizard when I scripted the table data). If the row is NOT a 'GO', the current row is concatenated to @cmd and the next row is read. If the current row is a 'GO', I execute the accumulated @cmd and blank @cmd out for the next series of statements.
The EXEC is wrapped inside a TRY/CATCH. In the event of an error, I display the row number (the identity ID column) so I can run selects against the SqlCommands table 'around' that row number to find the problem.
SET NOCOUNT ON

DECLARE @cmd VARCHAR(MAX) = ' '
DECLARE @Id INT
DECLARE @textdata VARCHAR(MAX)
DECLARE @count INT = 0

DECLARE _CURSOR CURSOR LOCAL FORWARD_ONLY STATIC READ_ONLY
FOR
SELECT ID
    ,TextData
FROM [Test].[dbo].[SqlCommands]
ORDER BY ID

OPEN _CURSOR

FETCH NEXT
FROM _CURSOR
INTO @Id
    ,@textdata

WHILE @@FETCH_STATUS = 0
BEGIN
    IF @textdata = 'GO'
    BEGIN
        BEGIN TRY
            --PRINT @cmd
            EXEC (@cmd)
        END TRY
        BEGIN CATCH
            PRINT 'Error occurred at line ' + CONVERT(VARCHAR(100), @Id)
            PRINT ERROR_MESSAGE()
        END CATCH

        SET @cmd = ' '
        SET @count += 1  -- count the batches executed
    END
    ELSE
    BEGIN
        -- append a line break so consecutive statements don't run together
        SET @cmd += @textdata + CHAR(13) + CHAR(10)
    END

    FETCH NEXT
    FROM _CURSOR
    INTO @Id
        ,@textdata
END --End While

CLOSE _CURSOR
DEALLOCATE _CURSOR

PRINT @count
If any errors occur, I can use the reported row number to find the problem:
select * from [Test].[dbo].[SqlCommands] where id between 25 and 30
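The same accumulate-until-GO batching can also be done outside the database. A minimal Python sketch of the idea (actually executing each batch is left out, since that would go through a SQL driver; this only shows the grouping logic):

```python
def batches(lines):
    """Yield (first_line_number, batch_text) for each GO-delimited batch,
    mirroring the cursor logic: accumulate lines until a GO, then emit."""
    cmd, start = [], None
    for num, line in enumerate(lines, start=1):
        if line.strip().upper() == "GO":
            if cmd:
                yield (start, "\n".join(cmd))
            cmd, start = [], None
        else:
            if start is None:
                start = num
            cmd.append(line)
    if cmd:  # trailing batch without a final GO
        yield (start, "\n".join(cmd))
```

Each yielded batch could be executed inside a try/except that prints the starting line number on failure, just like the TRY/CATCH above.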
Community Wiki answer generated from question comments by jyao:
In theory, you can open SQL Profiler and capture the error event and the SQL:StmtStarting event; then you will see which statement caused the error, and you can determine the table name (or whatever else you need) from the error message.
Another recommendation is to use UltraEdit to open your file, since it can handle large files. I don't know whether it can handle 5GB, because I have only handled files of about 1.5GB before.
I had the same issue. I don't know why sqlcmd has this problem, but it gave me that exact same error when I was generating my database from a 7GB script. The workaround was to open the script file with software specialized for large text files (in my case, UltraEdit) and split it into several scripts of about 2GB each. Actually, I searched for the last row that made it into the database, found it in the script file, and split there. Finally, I executed all the scripts separately and everything worked like a charm.
In the end, there was no problem in the script file; it just seems that sqlcmd can't handle very big database scripts.
If there is another workaround for this, I'd like to know, because it is a bit tortuous to do it this way. Of course, there is always the option of writing a small application that finds the last entry in a table and then splits the script file after that entry. In my case the error was always raised after an insert block.
Hope it helps.