
I need to import a large data dump.

The structure is already created, but the data dump is 16 GB.

If I simply try

psql -d "DATABASE" < "datadump.sql"

I get an out of memory message on the Ubuntu DB server.

I have no idea how to import it.

The DB server has 16 GB RAM and 50 GB of free disk space (it's a developer VM).

Does anyone have an idea how to import this file?


UPDATE

My postgresql.conf

# - Memory -
shared_buffers = 4GB # min 128kB (change requires restart)
work_mem = 1048kB # min 64kB
maintenance_work_mem = 2GB # min 1MB
checkpoint_segments = 128 # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 5min # range 30s-1h
checkpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s # 0 disables
#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------
# - Planner Cost Constants -
effective_cache_size = 12GB
wal_buffers = 16MB

The inserts are single-row and all target one table. The table has 71 columns.

I don't know what is in line 1447540; no text editor can open this file. (The content of the file is correct, though: after splitting the file into much smaller files, the imports run successfully.)
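For reference, a minimal sketch of what such a split-and-import could look like (the chunk size and file prefix are only illustrative, and it assumes each single-row INSERT sits on its own line so that every chunk stays valid SQL):

# split the dump into chunks of 100,000 lines each
split -l 100000 datadump.sql datadump_part_

# import every chunk in order, stopping at the first failure
for f in datadump_part_*; do
    psql -d "DATABASE" -f "$f" || break
done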

The first screenshot shows the free -ht output before importing. The second screenshot was taken 2 seconds before the out of memory message appears.

[screenshot: free -ht output before the import]

[screenshot: free -ht output two seconds before the out of memory message]

asked Oct 11, 2016 at 8:32

1 Answer


It's clear that the issue is related to stdin. If you have control over the dump process, a possible solution is not to execute the .sql file through psql at all. Instead, run the following two commands:

  1. On the original database, execute: COPY (SELECT * FROM mytable WHERE ...) TO '/storage/mytable.dump';
  2. On the destination database (the one with the out of memory issue), execute: CREATE TABLE mynewtable ( ... ); COPY mynewtable FROM '/storage/mytable.dump';

You can play with the various parameters available in COPY. For example, you can export a gzipped file or a CSV with headers and a custom separator: https://www.postgresql.org/docs/current/static/sql-copy.html
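A minimal sketch of that two-step transfer using a CSV with a header row and a semicolon separator (table, column, and file names are only placeholders):

-- on the original database: write the table out as CSV on the server
COPY (SELECT * FROM mytable) TO '/storage/mytable.csv' WITH (FORMAT csv, HEADER true, DELIMITER ';');

-- on the destination database: recreate the structure, then load the file server-side
COPY mynewtable FROM '/storage/mytable.csv' WITH (FORMAT csv, HEADER true, DELIMITER ';');

Because COPY ... FROM 'file' is executed by the server process and reads the file directly from disk (the path must be readable by that process), the 16 GB never has to pass through the psql client.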

You can also issue pg_dumpall > /storage/db.out to export the entire original database cluster and then run psql -f /storage/db.out postgres to recreate it on the new server: https://www.postgresql.org/docs/current/static/app-pg-dumpall.html. This should work; I've just dumped an output file of several tens of GB on OS X (Unix-like) without any problem.

See also pg_dump for more selective database dump options: https://www.postgresql.org/docs/current/static/app-pgdump.html
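As a rough sketch of that more selective route, assuming the one large table is called mytable and the databases are originaldb and destinationdb (all of these names are just assumptions), you could dump only that table in the custom format and restore it:

# dump just one table into PostgreSQL's compressed custom format
pg_dump -Fc -t mytable -f /storage/mytable.dump originaldb

# restore it into the destination database
pg_restore -d destinationdb /storage/mytable.dump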

answered Oct 11, 2016 at 14:24
