I am using BULK INSERT to read a CSV file in SQL Server. Is there a way to keep the first row as the column names while reading the file?
If not, after reading the data from the CSV (from the second row on), how can I add column names to it?
Any help/comments/suggestions are much appreciated.
2 Answers
If not, after reading the data from the csv (from second row), how can I add column names to it?
I would put your focus here. First, create your table with your column names, data types, etc.:
create table myTable (column1 <datatype>, column2 <datatype>)
Then, bulk insert into it but ignore the first row.
bulk insert myTable
from 'C:\somefile.csv'
with (firstrow = 2,
      fieldterminator = ',',
      rowterminator = '\n')
If the file structure is dynamic, you may want to look into OPENROWSET.
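As a rough sketch of the OPENROWSET angle (the file path and format-file path below are placeholders, not from the original answers): you can pull the whole file in as one value to inspect the header, or, if you maintain a format file describing the columns, query the CSV directly without creating a table first.

-- Read the entire file into a single value (SINGLE_CLOB);
-- handy for looking at the header row before deciding on a schema.
SELECT BulkColumn
FROM OPENROWSET(BULK 'C:\somefile.csv', SINGLE_CLOB) AS f;

-- With a pre-built format file describing the columns, the CSV
-- can be queried like a table, skipping the header row.
SELECT *
FROM OPENROWSET(BULK 'C:\somefile.csv',
                FORMATFILE = 'C:\somefile.fmt',
                FIRSTROW = 2) AS rows;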
Assuming you are importing a standard CSV, you can create the table dynamically like this:
DECLARE @sql NVARCHAR(MAX)
DECLARE @filePath NVARCHAR(MAX) = 'C:/SomeFolder/yourImportFile.csv'
DECLARE @tableName NVARCHAR(MAX) = 'yourTableName'
DECLARE @colString NVARCHAR(MAX)
SET @sql = 'SELECT @res = LEFT(BulkColumn, CHARINDEX(CHAR(10), BulkColumn) - 1) FROM OPENROWSET(BULK ''' + @filePath + ''', SINGLE_CLOB) AS x'
exec sp_executesql @sql, N'@res NVARCHAR(MAX) output', @colString output;
SELECT @sql = 'DROP TABLE IF EXISTS ' + @tableName + '; CREATE TABLE [dbo].[' + @tableName + ']( ' + STRING_AGG(name, ', ') + ' ) '
FROM (
SELECT ' [' + LTRIM(RTRIM(REPLACE(value, CHAR(13), ''))) + '] nvarchar(max) ' as name
FROM STRING_SPLIT(@colString, ',')
) t
EXECUTE(@sql)
and then bulk insert the data as @scsimon suggested:
BULK INSERT dbo.yourTableName
FROM 'C:/SomeFolder/yourImportFile.csv'
WITH (
FORMAT='CSV',
FIRSTROW = 2,
ROWTERMINATOR = '\n',
FIELDQUOTE= '"',
TABLOCK
)