I have some rather large pandas DataFrames and I'd like to use the new bulk SQL mappings to upload them to a Microsoft SQL Server via SQLAlchemy. The pandas.to_sql() method, while nice, is slow.
I'm having trouble writing the code...
I'd like to be able to pass this function a pandas DataFrame which I'm calling table, a schema name I'm calling schema, and a table name I'm calling name. Ideally, the function will 1) drop the table if it already exists, 2) create a new table, 3) create a mapper, and 4) bulk insert using the mapper and the pandas data. I'm stuck on part 3.
Here's my (admittedly rough) code. I'm struggling with how to get the mapper function to work with my primary keys. I don't really need primary keys, but the mapper function requires them.
Thanks for the insights.
from sqlalchemy import create_engine, Table, Column, MetaData
from sqlalchemy.orm import mapper, create_session
from sqlalchemy.ext.declarative import declarative_base
from pandas.io.sql import SQLTable, SQLDatabase

def bulk_upload(table, schema, name):
    e = create_engine('mssql+pyodbc://MYDB')
    s = create_session(bind=e)
    m = MetaData(bind=e, reflect=True, schema=schema)
    Base = declarative_base(bind=e, metadata=m)
    t = Table(name, m)
    m.remove(t)
    t.drop(checkfirst=True)
    sqld = SQLDatabase(e, schema=schema, meta=m)
    sqlt = SQLTable(name, sqld, table).table
    sqlt.metadata = m
    m.create_all(bind=e, tables=[sqlt])
    class MyClass(Base):
        pass  # this is the part I'm stuck on
    mapper(MyClass, sqlt)
    s.bulk_insert_mappings(MyClass, table.to_dict(orient='records'))
    return
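For reference, here is a rough, untested sketch of the direction I'm leaning: sidestep the mapper (and the primary-key requirement) entirely by letting pandas create an empty table and then doing a SQLAlchemy Core insert. The engine URL is the same placeholder as above, and autoload_with assumes SQLAlchemy 1.4+.

from sqlalchemy import create_engine, MetaData, Table

def bulk_upload_core(table, schema, name):
    # placeholder connection string, same as in the code above
    e = create_engine('mssql+pyodbc://MYDB')
    # let pandas create an empty table with the right column types
    table[:0].to_sql(name, e, schema=schema, if_exists='replace', index=False)
    # reflect the freshly created table (autoload_with needs SQLAlchemy 1.4+)
    t = Table(name, MetaData(schema=schema), autoload_with=e)
    # Core executemany-style insert: no mapper or primary key needed
    with e.begin() as conn:
        conn.execute(t.insert(), table.to_dict(orient='records'))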
I ran into a similar issue with pd.to_sql taking hours to upload data. The below code bulk inserted the same data in a few seconds.
from sqlalchemy import create_engine
import psycopg2 as pg
#load python script that batch loads pandas df to sql
import cStringIO
address = 'postgresql://<username>:<pswd>@<host>:<port>/<database>'
engine = create_engine(address)
connection = engine.raw_connection()
cursor = connection.cursor()
#df is the dataframe containing an index and the columns "Event" and "Day"
#create Index column to use as primary key
df.reset_index(inplace=True)
df.rename(columns={'index':'Index'}, inplace =True)
#create the table but first drop if it already exists
command = '''DROP TABLE IF EXISTS localytics_app2;
CREATE TABLE localytics_app2
(
    "Index" serial primary key,
    "Event" text,
    "Day" timestamp without time zone
);'''
cursor.execute(command)
connection.commit()
#stream the data using 'to_csv' and StringIO(); then use sql's 'copy_from' function
output = cStringIO.StringIO()
#ignore the index
df.to_csv(output, sep='\t', header=False, index=False)
#jump to start of stream
output.seek(0)
contents = output.getvalue()
cur = connection.cursor()
#null values become ''
cur.copy_from(output, 'localytics_app2', null="")
connection.commit()
cur.close()
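If you are on Python 3, where cStringIO no longer exists, the streaming step above can be done with io.StringIO instead. A small, untested sketch reusing the same df, connection, and table name:

import io

# Python 3 variant: build the in-memory buffer with io.StringIO
output = io.StringIO()
df.to_csv(output, sep='\t', header=False, index=False)
output.seek(0)

cur = connection.cursor()
cur.copy_from(output, 'localytics_app2', null="")  # null values become ''
connection.commit()
cur.close()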
Comment: the contents variable didn't get used; might as well drop it, since getvalue() does a read through the whole string buffer, which can get big depending on the db size. Tested without it and it's working fine.
Comment: this answer is Postgres-specific (copy_from), which may help people besides the asker, but the asker very explicitly asked about SQL Server.

This might have been answered by then, but I found the solution by collating different answers on this site and aligning with SQLAlchemy's docs.
- The table needs to already exist in db1, with an auto_increment index set up on it.
- The class Current needs to align with the dataframe imported from the CSV and with the table in db1.
Hope this helps whoever comes here and wants to mix pandas and SQLAlchemy in a quick way.
from urllib import quote_plus as urlquote
import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, Numeric
from sqlalchemy.orm import sessionmaker
import pandas as pd
# Set up of the engine to connect to the database
# the urlquote is used for passing the password which might contain special characters such as "/"
engine = create_engine('mysql://root:%s@localhost/db1' % urlquote('weirdPassword*withsp€cialcharacters'), echo=False)
conn = engine.connect()
Base = declarative_base()
#Declaration of the class in order to write into the database. This structure is standard and should align with SQLAlchemy's doc.
class Current(Base):
__tablename__ = 'tableName'
id = Column(Integer, primary_key=True)
Date = Column(String(500))
Type = Column(String(500))
Value = Column(Numeric())
def __repr__(self):
return "(id='%s', Date='%s', Type='%s', Value='%s')" % (self.id, self.Date, self.Type, self.Value)
# Set up of the table in db and the file to import
fileToRead = 'file.csv'
tableToWriteTo = 'tableName'
# Panda to create a lovely dataframe
df_to_be_written = pd.read_csv(fileToRead)
# orient='records' is the key here: it produces the list-of-dicts format described in the docs for bulk inserts.
listToWrite = df_to_be_written.to_dict(orient='records')
metadata = sqlalchemy.schema.MetaData(bind=engine,reflect=True)
table = sqlalchemy.Table(tableToWriteTo, metadata, autoload=True)
# Open the session
Session = sessionmaker(bind=engine)
session = Session()
# Insert the dataframe into the database in one bulk operation
conn.execute(table.insert(), listToWrite)
# Commit the changes
session.commit()
# Close the session
session.close()
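As a possible alternative, the same list of dicts could also be loaded through the ORM session itself with bulk_insert_mappings. A short sketch reusing the Current class and listToWrite from above (I have not benchmarked it against the Core insert):

# ORM-level bulk insert using the mapped class declared above
session.bulk_insert_mappings(Current, listToWrite)
session.commit()
session.close()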
Comment: why the def __repr__(self)?
Comment: where is the class Current used? It doesn't actually seem to be dereferenced anywhere here.
Comment: Current is subclassing Base, which records all of its ORM definitions in the MetaData object that is used to create the table schema (docs).

Based on @ansonw's answer:
import cStringIO

def to_sql(engine, df, table, if_exists='fail', sep='\t', encoding='utf8'):
    # Create table
    df[:0].to_sql(table, engine, if_exists=if_exists)

    # Prepare data
    output = cStringIO.StringIO()
    df.to_csv(output, sep=sep, header=False, encoding=encoding)
    output.seek(0)

    # Insert data
    connection = engine.raw_connection()
    cursor = connection.cursor()
    cursor.copy_from(output, table, sep=sep, null='')
    connection.commit()
    cursor.close()
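A call would look roughly like this (the table name is just an example):

# creates (or replaces) my_table, then bulk-loads df into it via COPY
to_sql(engine, df, 'my_table', if_exists='replace')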
I insert 200,000 rows in 5 seconds instead of 4 minutes.
Pandas 0.25.1 has a parameter to do multi-inserts, so it's no longer necessary to work around this issue with SQLAlchemy.
Set method='multi' when calling pandas.DataFrame.to_sql.
In this example, it would be
df.to_sql(table, schema=schema, con=e, index=False, if_exists='replace', method='multi')
Answer sourced from docs here
Worth noting that I've only tested this with Redshift. Please let me know how it goes on other databases so I can update this answer.
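If a single multi-row INSERT runs into the database's parameter limits, it should be possible to combine method='multi' with chunksize so each statement stays small. A sketch with an arbitrary batch size:

# send the multi-row INSERTs in batches of 1000 rows
df.to_sql(table, schema=schema, con=e, index=False,
          if_exists='replace', method='multi', chunksize=1000)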
method="multi" didn't improve the speed of bulk-load to SQL Server for me, only create_engine(..., fast_executemany=True) did.As this is an I/O heavy workload you can also use the python threading module through multiprocessing.dummy. This sped things up for me:
import math
from multiprocessing.dummy import Pool as ThreadPool
...
def insert_df(df, *args, **kwargs):
    nworkers = 4

    chunksize = math.floor(df.shape[0] / nworkers)
    chunks = [(chunksize * i, (chunksize * i) + chunksize) for i in range(nworkers)]
    chunks.append((chunksize * nworkers, df.shape[0]))

    pool = ThreadPool(nworkers)

    def worker(chunk):
        i, j = chunk
        df.iloc[i:j, :].to_sql(*args, **kwargs)

    pool.map(worker, chunks)
    pool.close()
    pool.join()
....
insert_df(df, "foo_bar", engine, if_exists='append')
Here is a simple method
Download Drivers for SQL database connectivity
For Linux and Mac OS:
For Windows:
https://www.microsoft.com/en-us/download/details.aspx?id=56567
Creating Connection
from sqlalchemy import create_engine
import urllib
server = '*****'
database = '********'
username = '**********'
password = '*********'
params = urllib.parse.quote_plus(
'DRIVER={ODBC Driver 17 for SQL Server};'+
'SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+ password)
engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
#Checking Connection
connected = pd.io.sql._is_sqlalchemy_connectable(engine)
print(connected) #Output is True if connection established successfully
Data insertion
df.to_sql('Table_Name', con=engine, if_exists='append', index=False)
"""
if_exists: {'fail', 'replace', 'append'}, default 'fail'
fail: If table exists, do nothing.
replace: If table exists, drop it, recreate it, and insert data.
append: If table exists, insert data. Create if does not exist.
"""
If there are many records
# limit based on sp_prepexec parameter count
tsql_chunksize = 2097 // len(df.columns)
# cap at 1000 (limit for number of rows inserted by table-value constructor)
tsql_chunksize = 1000 if tsql_chunksize > 1000 else tsql_chunksize
print(tsql_chunksize)
df.to_sql('table_name', con = engine, if_exists = 'append', index= False, chunksize=tsql_chunksize)
PS: You can change the parameters as per your requirement.
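On SQL Server specifically, it is also worth trying pyodbc's fast_executemany flag, which SQLAlchemy (1.3+) exposes on create_engine for the mssql+pyodbc dialect. A sketch reusing the params string built above:

# same ODBC connection string as above, but let pyodbc batch the INSERTs
engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params,
                       fast_executemany=True)

df.to_sql('Table_Name', con=engine, if_exists='append', index=False)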
My postgres specific solution below auto-creates the database table using your pandas dataframe, and performs a fast bulk insert using the postgres COPY my_table FROM ...
import io
import pandas as pd
from sqlalchemy import create_engine
def write_to_table(df, db_engine, schema, table_name, if_exists='fail'):
    string_data_io = io.StringIO()
    df.to_csv(string_data_io, sep='|', index=False)
    pd_sql_engine = pd.io.sql.pandasSQL_builder(db_engine, schema=schema)
    table = pd.io.sql.SQLTable(table_name, pd_sql_engine, frame=df,
                               index=False, if_exists=if_exists, schema=schema)
    table.create()
    string_data_io.seek(0)
    # no need to strip the header manually: the HEADER option in the
    # COPY command below tells Postgres to skip it
    with db_engine.connect() as connection:
        with connection.connection.cursor() as cursor:
            copy_cmd = "COPY %s.%s FROM STDIN HEADER DELIMITER '|' CSV" % (schema, table_name)
            cursor.copy_expert(copy_cmd, string_data_io)
        connection.connection.commit()
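Usage would be something like this (the connection string, schema, and table name are placeholders):

engine = create_engine('postgresql://<username>:<pswd>@<host>:<port>/<database>')
write_to_table(df, engine, 'public', 'my_table', if_exists='replace')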
For people like me who are trying to implement the aforementioned solutions:
Pandas 0.24.0 now has to_sql with a chunksize and method='multi' option that inserts in bulk...
This worked for me to connect to an Oracle database using cx_Oracle and SQLAlchemy.
import time

import sqlalchemy
import cx_Oracle
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, String
from sqlalchemy.orm import sessionmaker
import pandas as pd

# credentials
username = "username"
password = "password"
connectStr = "connection:/string"
tableName = "tablename"

t0 = time.time()

# connection
dsn = cx_Oracle.makedsn('host', 'port', service_name='servicename')

Base = declarative_base()

class LANDMANMINERAL(Base):
    __tablename__ = 'tablename'

    DOCUMENTNUM = Column(String(500), primary_key=True)
    DOCUMENTTYPE = Column(String(500))
    FILENUM = Column(String(500))
    LEASEPAYOR = Column(String(500))
    LEASESTATUS = Column(String(500))
    PROSPECT = Column(String(500))
    SPLIT = Column(String(500))
    SPLITSTATUS = Column(String(500))

engine = create_engine('oracle+cx_oracle://%s:%s@%s' % (username, password, dsn))
conn = engine.connect()

Base.metadata.bind = engine

# Creating the session
DBSession = sessionmaker(bind=engine)
session = DBSession()

# Bulk insertion
data = pd.read_csv('data.csv')
lists = data.to_dict(orient='records')

table = sqlalchemy.Table('landmanmineral', Base.metadata, autoload=True)
conn.execute(table.insert(), lists)

session.commit()
session.close()

print("time taken %8.8f seconds" % (time.time() - t0))
Below code might help you; I was facing the same issue while loading 695,000K records.
Method: truncate the table before load

with engine.begin() as conn:
    conn.execute(sa.text("TRUNCATE TABLE <schema.table>"))

Note: engine is my connection to the destination server, and sa comes from import sqlalchemy as sa.

table_name = "<destination_table>"
df.to_sql(table_name, engine, schema='schema', if_exists='replace', index=False)

Use append or replace depending on your requirement.
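Put together, the truncate-then-append variant would look roughly like this (the schema and table names are placeholders):

import sqlalchemy as sa

# empty the destination table, then append the dataframe into it
with engine.begin() as conn:
    conn.execute(sa.text("TRUNCATE TABLE my_schema.my_table"))

df.to_sql("my_table", engine, schema="my_schema",
          if_exists="append", index=False)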
For anyone facing this problem and having the destination DB as Redshift, note that Redshift does not implement the full set of Postgres commands, and so some of the answers using either Postgres' COPY FROM or copy_from() will not work.
You will get a psycopg2.ProgrammingError: syntax error at or near "stdin" error when trying to copy_from into Redshift.
The solution for speeding up INSERTs to Redshift is to use a file ingest or Odo.
Reference:
About Odo: http://odo.pydata.org/en/latest/perf.html
Odo with Redshift: https://github.com/blaze/odo/blob/master/docs/source/aws.rst
Redshift COPY (from S3 file): https://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html
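For the file-ingest route, a rough (untested) sketch would be: write the dataframe to a CSV object in S3 with boto3, then have Redshift pull it in with COPY. The bucket, key, IAM role, and table names below are all placeholders, and engine is assumed to be a SQLAlchemy engine pointed at the cluster.

import io

import boto3
from sqlalchemy import text

# 1) upload the dataframe as a CSV object to S3
buf = io.BytesIO(df.to_csv(index=False, header=False).encode('utf-8'))
boto3.client('s3').upload_fileobj(buf, 'my-bucket', 'staging/my_table.csv')

# 2) have Redshift ingest the file with COPY
copy_cmd = """
    COPY my_schema.my_table
    FROM 's3://my-bucket/staging/my_table.csv'
    IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-copy-role'
    CSV;
"""
with engine.begin() as conn:
    conn.execute(text(copy_cmd))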
Comment: with this you are essentially rewriting the to_sql function yourself, and I doubt that it will be faster. The bottleneck when writing data to SQL lies mainly in the Python drivers (pyodbc in your case), and that is something you don't avoid with the above implementation. Furthermore, to_sql does not use the ORM, which is considered to be slower than Core SQLAlchemy even when using bulk inserts (docs.sqlalchemy.org/en/latest/faq/…).
Comment: if to_sql is too slow and you cannot improve it (e.g. by tweaking the connection parameters, the driver used (e.g. pymssql), internet speed, or by removing constraints on the table), a faster alternative is writing the data to CSV and loading that into the SQL table.
Comment: to_sql does not use the ORM at all, as I said before, and it is in fact already doing a bulk insert.