I am wondering: what is the industry standard for dump formats when making and storing database dumps?
Is it better to dump to a custom-format, compressed (and unreadable) archive, or simply to plain .sql?
This is a database-agnostic question, but here are PostgreSQL examples to give the idea:
# MYDATABASE to CUSTOM ARCHIVE
pg_dump -Fc mydatabase > mydatabase.dump
vs.
# MYDATABASE to SQL
pg_dump -h localhost -U postgres mydatabase > mydatabase.sql
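The practical difference between the two shows up at restore time: a custom-format archive must go through pg_restore (which allows selective and parallel restore), while a plain-SQL dump is simply fed back through psql. A minimal sketch, assuming the file names above and a hypothetical target database `mydatabase_copy`:

```shell
# Restore the custom-format archive with pg_restore.
# It supports restoring only selected objects (e.g. -t sometable)
# and parallel restore jobs (-j N).
pg_restore -d mydatabase_copy mydatabase.dump

# Restore the plain-SQL dump by replaying it through psql.
psql -d mydatabase_copy -f mydatabase.sql
```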
There is no "industry standard"; one uses the format that's most appropriate for the use case. – mustaccio, Jan 4, 2022 at 15:02
I was just wondering what is typically used by people. Or how to choose the appropriate format... :) Thanks! – weno, Jan 4, 2022 at 15:11
1 Answer
Ask yourself the same question that you ask whenever you need to store any Data:
How are you going to use this ~~Data~~ Backup?
If you need to get "into" the file and manipulate it (as a Developer might, in order to restore part of a dump into a Test database), then something text-based is the obvious choice.
If the database is terabytes in size, then a custom, compressed format might be more appropriate.
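For large databases, pg_dump's directory format is worth knowing about: it is compressed by default and is the only pg_dump format that supports parallel dump workers. A sketch, reusing the `mydatabase` name from the question (the output directory and worker count are illustrative):

```shell
# Directory format (-Fd), four parallel dump workers (-j 4),
# written to the directory mydatabase_dir.
pg_dump -Fd -j 4 -f mydatabase_dir mydatabase

# The same directory can later be restored in parallel:
pg_restore -d mydatabase_copy -j 4 mydatabase_dir
```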
Please bear in mind that backups, in themselves, are not important.
(Yes, I really did say that).
What matters is that you can recover the database ... and backups are a really good way of supporting that.
For straight-line speed of recovery, pg_basebackup - or something based on it - should be your first choice. Whilst pg_dump produces a portable, Logical "copy" of [part of] the database, using the output from pg_basebackup will be much, much faster.
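A minimal pg_basebackup invocation looks like this; it copies the whole cluster (every database) at the file level, so it cannot be restricted to one database the way pg_dump can. The role name and target directory are placeholders, and the role needs the REPLICATION privilege:

```shell
# Physical, binary copy of the entire cluster, streaming the WAL
# needed to make the backup consistent (-X stream), with progress
# reporting (-P). replication_user and /backups/base are assumptions.
pg_basebackup -h localhost -U replication_user -D /backups/base -X stream -P
```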