I have a database with many large objects. A table has an OID column that refers to these large objects. I dump the database daily, including all large objects, with this command:
pg_dump --format=c --file=/var/backups/pgsql/db-neos.pgdump \
--compress=6 --blobs neos
and then I restore it, on a different machine, with:
createdb neos
pg_restore -d neos /mnt/db-neos.pgdump
but, while all tables are correctly created, the large objects aren't.
Instead, pg_restore displays an error message:
pg_restore: [compress_io] decompressione dei dati fallita: (null)
(english: could not uncompress data)
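A quick way to confirm that the archive contains the large objects at all is to list its table of contents and count the BLOB entries (this check is my own suggestion; the grep pattern is an assumption about how the TOC entries are labelled):
pg_restore --list /mnt/db-neos.pgdump | grep -Eci "BLOB|LARGE OBJECT"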
I faced the same issue some months ago and managed to solve it with a bash script. This is the piece of code that takes care of backing up the large objects, saving each one to a local folder (/path/to/destination/lo/folder/) with filename = oid (the filename is important in order to restore the large object with the same OID):
echo -e "going to export largeobjects belonging to community $i..." && sleep 2
psql -U <username> -X -c "SELECT file_oid FROM <my_table>" \
--single-transaction \
--set AUTOCOMMIT=off \
--set ON_ERROR_STOP=on \
--no-align \
-t \
--field-separator ' ' \
--quiet \
-d <my_source_db> \
| while read file_oid ; do
echo "Exporting largeobject OID: $file_oid"
${PSQL} -d <my_source_db> -c "SELECT lo_export($file_oid, '/path/to/destination/lo/folder/$file_oid');"
done
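As a side note (my own addition, not part of the original script): the server-side lo_export() function writes the file on the database server's filesystem and normally requires elevated privileges; if the script runs on a different machine, psql's client-side \lo_export meta-command is an alternative that writes the file locally:
# hypothetical client-side variant: the file ends up on the machine running psql
psql -U <username> -d <my_source_db> -c "\lo_export $file_oid '/path/to/destination/lo/folder/$file_oid'"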
Then, to restore them, I wrote this other piece of code:
echo "going to import largeobjects belonging to community $i..." && sleep 2
LOBJECTS=/path/to/destination/lo/folder/*
for f in $LOBJECTS
do
echo "Processing $f file..."
filename=$(basename "$f")
oid=${filename}
psql -U <username> -d <my_source_db> -c"
-- check if largeobject is already present, if it is delete it
SELECT lo_unlink(${oid});"
psql -U <username> -d <my_source_db> -c"
-- import largeobjects
SELECT lo_import('$f', ${oid});
"
done
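A quick sanity check after the import (my own addition, assuming PostgreSQL 9.0 or later and that these are the only large objects in the target database) is to compare the number of exported files with the number of large objects the database now holds:
# number of exported files vs. number of large objects in the target database
ls /path/to/destination/lo/folder/ | wc -l
psql -U <username> -d <my_source_db> -t -c "SELECT count(*) FROM pg_largeobject_metadata;"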
A simpler solution may exist out there, but at the time I was not able to find it, which is why I used this approach. I hope you are luckier and find a cleaner solution. I'll follow this post in case I can learn something useful too :)
- So, you are changing the way the backup is done. This is not the same problem I have, and I cannot change my procedure the way you did. But I wonder, why do you extract all the blobs one by one and store them in separate files? pg_dump and pg_restore are able to manage large objects (when the dump is not corrupt...) – eppesuig, Nov 11, 2015 at 15:35
- If I remember well, the reason was that I didn't find a way to ensure a large object would be imported with the same OID. Oh yes, and because I did not have to export all the large objects, but selectively dump them based on some other criteria present in the DB. – lese, Nov 11, 2015 at 15:39
- Indeed, it is the opposite: pg_restore uses exactly the same OID, but since a database is part of a cluster, and OIDs are used across all databases in the cluster, it may happen that you cannot import a large object because its OID is already taken. What I have to do is: create a new cluster, create a database, then restore into this new cluster. I cannot use a cluster that already includes other databases because some OIDs always overlap. – eppesuig, Nov 11, 2015 at 15:45
- Ok, interesting! So you just need to restore into a new cluster and you are done! Can you tell me whether with pg_dump it is possible to back up only certain large objects (selecting them based on some criteria)? – lese, Nov 11, 2015 at 15:55
- I don't think it is possible to extract only some of them: the dump includes all large objects referred to by any table in the database you dump. – eppesuig, Nov 11, 2015 at 16:05
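The fresh-cluster workflow described in the comments above might look roughly like this (a sketch with hypothetical data directory and port, not taken from the discussion):
# initialize a brand new cluster and start it on a spare port
initdb -D /var/lib/pgsql/neos-restore
pg_ctl -D /var/lib/pgsql/neos-restore -o "-p 5433" -w start
# create the database and restore the dump into the empty cluster
createdb -p 5433 neos
pg_restore -p 5433 -d neos /mnt/db-neos.pgdump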
I found the problem: the dump file was corrupt, so while importing the large objects pg_restore failed with this message:
pg_restore: ripristino del large object con OID 19659
pg_restore: ripristino del large object con OID 19660
pg_restore: [compress_io] decompressione dei dati fallita: (null)
(english: restoring large object with OID 19659 / 19660; could not uncompress data)
and it stopped after reaching the end of the file.
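One way to catch this kind of corruption before restoring (my own suggestion, not part of the original answer) is to checksum the archive on both machines after the copy and to force pg_restore to decompress the whole archive without touching any database:
# the sums must match on the source and destination machines
sha256sum /var/backups/pgsql/db-neos.pgdump
# decompress everything into a throwaway SQL script; a corrupt dump fails here
pg_restore --file=/dev/null /mnt/db-neos.pgdump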