Hi, I am able to start streaming replication with PostgreSQL, copying the database files after running this command:
select pg_start_backup('clone', true);
That works, but the problem is that I need to copy all the database files in /var/lib/postgresql/data from one server to another. Those files are really big, because of indexes for example.
I want to know if it is possible to do the same using a simple dump and restoring it on the slave, which is smaller and faster to transfer over the network. Will that work? And what should I do to make it work?
1 Answer
You cannot start streaming replication without a physical copy of the data directory, usually created with the pg_basebackup command.
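For example, a typical invocation run on the standby might look like the following; the hostname, user, and target directory here are placeholders for illustration:

pg_basebackup -h primary.example.com -U replication_user -D /var/lib/postgresql/data -X stream -P

The -X stream option streams the WAL generated during the backup alongside the data files, and -P prints progress.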
The reason for this has to do with what streaming replication and WAL-shipping replication do. In both cases you are applying the Write-Ahead Log to the data directory using PostgreSQL's recovery mechanism. So, in order not to break database consistency, you must apply the same change set to the same starting point, and you are restricted to replicating the entire database cluster. This is also why you cannot interrupt streaming or drop WAL segments: once you have broken continuity, the database server cannot guarantee that it is in a consistent state.
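To illustrate, a minimal standby configuration sketch for the PostgreSQL versions current when this answer was written (recovery.conf, used before version 12) could look like this; the connection string and archive path are made-up examples:

# recovery.conf on the standby
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=replication_user'
restore_command = 'cp /mnt/wal_archive/%f %p'

With standby_mode on, the server stays in recovery and keeps applying WAL as it arrives, either streamed over primary_conninfo or fetched from the archive by restore_command.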
In PostgreSQL 10 or later you can use logical replication to replicate only one database in a cluster, or even a single table. In that case you can start with a "logical copy" of the database, such as one produced by pg_dump.
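As a rough sketch of how logical replication is set up (the publication, subscription, table, and connection names below are invented for illustration):

-- On the publisher:
CREATE PUBLICATION my_pub FOR TABLE my_table;

-- On the subscriber, after creating the table schema (e.g. from pg_dump --schema-only):
CREATE SUBSCRIPTION my_sub
 CONNECTION 'host=primary.example.com dbname=mydb user=replication_user'
 PUBLICATION my_pub;

By default the subscription copies the existing table data first and then streams subsequent changes.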
Logical replication is exactly what I really need. When possible, we will migrate to PostgreSQL 10 and will stop the streaming replication in favor of logical replication. – Luciano Andress Martini, Dec 26, 2017 at 20:08
pg_basebackup is the way to go. Logically it is the same as pg_dumpall, and it will be much faster as the "import" step isn't necessary.