I’m trying to set up a backup strategy for a production PostgreSQL database. It will hold a large amount of data and needs to run 24x7. Could you recommend backup & recovery strategies that meet the following criteria?
- Large amount of data (over 200 GB).
- Hot backup (online backup)
- Minimal impact on database performance.
- Minimum restore time.
- Allow PITR (Point-In-Time Recovery)
- Can the backup run on a replication slave (standby) server and still meet the above criteria?
- If you know a backup strategy that uses storage snapshots, could you describe it as well?
The usual setup is (if I'm not mistaken) to create a slave using streaming replication and then take the backups on the slave (to minimize impact on the master). You might want to look at pgBarman and repmgr. – user1822, Feb 20, 2014 at 12:02
1 Answer
It sounds like you'd be best suited by physical base backups plus continuous WAL archiving, refreshing the base backup regularly. I strongly recommend taking regular logical dumps (pg_dump) as well.
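A minimal sketch of the WAL-archiving side is below; the archive directory, database name and recovery target time are placeholders you'd adapt, and on 9.6+ you'd use wal_level = replica instead of hot_standby:

```
# postgresql.conf -- turn on continuous WAL archiving
wal_level = hot_standby        # 'replica' on 9.6 and later
archive_mode = on
# Copy each completed WAL segment to the archive, refusing to overwrite existing files
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'

# recovery.conf, placed in a restored base backup, to replay WAL up to a point in time
restore_command = 'cp /mnt/wal_archive/%f %p'
recovery_target_time = '2014-02-20 12:00:00'
```

The periodic logical dump on top of that can be as simple as `pg_dump -Fc -f mydb.dump mydb` (custom format, so you can restore selectively with pg_restore).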
Using newer PostgreSQL versions (9.2 and up, IIRC) you can take fresh base backups from a replica server so you don't have to disrupt the master.
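As a rough example (assuming a standby reachable as standby.example.com and a replication role named backup, both of which are made-up names), a fresh base backup taken from the replica might look like:

```
# Run from the backup host; connects to the standby, not the master.
# -X stream pulls the WAL needed to make the copy consistent, -P shows progress.
pg_basebackup -h standby.example.com -U backup \
    -D /backups/base/$(date +%F) -X stream -P
```

The role needs the REPLICATION attribute and a matching pg_hba.conf replication entry on the standby.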
File-system or logical volume level snapshot backups work fine with PostgreSQL so long as your snapshots are atomic. Restoring one is like starting PostgreSQL back up after unexpected power loss or OS reboot, not a big deal.
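If you go the snapshot route, the general idea (sketched here with LVM; the volume group, snapshot size and paths are made up) is to take an atomic snapshot of the volume holding the data directory, copy it off at leisure, then drop it:

```
# Create an atomic snapshot of the volume holding the PostgreSQL data directory
lvcreate --snapshot --size 20G --name pgdata_snap /dev/vg0/pgdata

# Mount it read-only and copy it to backup storage
mount -o ro /dev/vg0/pgdata_snap /mnt/pgdata_snap
rsync -a /mnt/pgdata_snap/ backuphost:/backups/pgdata_snapshot/

# Clean up the snapshot
umount /mnt/pgdata_snap
lvremove -f /dev/vg0/pgdata_snap
```

The atomicity caveat above is the important part: the snapshot has to cover everything PostgreSQL writes (data directory, tablespaces, pg_xlog) in one atomic operation, otherwise you need to wrap it in pg_start_backup()/pg_stop_backup() and archive WAL as well.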
See also:
- The manual
- A blog entry I wrote on this topic
- Postgres Backup queries on AWS after reading Postgres docs
- PostgreSQL: Can I do pg_start_backup() on a live, running db under load?
- PostgreSQL "freeze"/"unfreeze" command equivalents
- How to do incremental backup every hour in Postgres?
- Running pg_dump on a hot standby server?
- PostgreSQL crash recovery