Backup and Restore
As with everything that contains valuable data, PostgreSQL
databases should be backed up regularly. While the procedure is
essentially simple, it is important to have a clear understanding of
the underlying techniques and assumptions.
There are three fundamentally different approaches to backing up
PostgreSQL data:
SQL dump
File system level backup
Continuous archiving
Each has its own strengths and weaknesses.
Each is discussed in turn below.
SQL Dump
The idea behind this dump method is to generate a text file with SQL
commands that, when fed back to the server, will recreate the
database in the same state as it was at the time of the dump.
PostgreSQL provides the utility program pg_dump for this
purpose. The basic usage of this command is:
pg_dump dbname > outfile
As you see, pg_dump writes its results to the
standard output. We will see below how this can be useful.
pg_dump is a regular PostgreSQL
client application (albeit a particularly clever one). This means
that you can do this backup procedure from any remote host that has
access to the database. But remember that pg_dump
does not operate with special permissions. In particular, it must
have read access to all tables that you want to back up, so in
practice you almost always have to run it as a database superuser.
To specify which database server pg_dump should
contact, use the -h host and -p port command line options.
Like any other PostgreSQL client application,
pg_dump will by default connect with the database
user name that is equal to the current operating system user name. To override
this, either specify the -U option or set the
environment variable PGUSER. Remember that
pg_dump connections are subject to the normal
client authentication mechanisms (which are described in the chapter
on client authentication).
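For example, a dump taken over the network as a specific user might look like this (the host, user, and database names are only illustrative):
pg_dump -h db.example.com -p 5432 -U postgres mydb > mydb.sql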
Dumps created by pg_dump are internally consistent,
that is, the dump represents a snapshot of the database as of the time
pg_dump begins running. pg_dump does not
block other operations on the database while it is working.
(Exceptions are those operations that need to operate with an
exclusive lock, such as most forms of ALTER TABLE.)
If your database schema relies on OIDs (for instance as foreign
keys) you must instruct pg_dump to dump the OIDs
as well. To do this, use the -o command line option.
Restoring the dump
The text files created by pg_dump are intended to
be read in by the psql program. The
general command form to restore a dump is
psql dbname < infile
where infile is what
you used as outfile
for the pg_dump command. The database dbname will not be created by this
command, so you must create it yourself from template0
before executing psql (e.g., with
createdb -T template0 dbname). psql
supports options similar to pg_dump's for specifying
the database server to connect to and the user name to use. See
the psql reference page for more information.
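For example, a complete restore sequence for a hypothetical database mydb dumped to mydb.sql might be:
createdb -T template0 mydb
psql mydb < mydb.sql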
Before restoring an SQL dump, all the users who own objects or were
granted permissions on objects in the dumped database must already
exist. If they do not, then the restore will fail to recreate the
objects with the original ownership and/or permissions.
(Sometimes this is what you want, but usually it is not.)
By default, the psql script will continue to
execute after an SQL error is encountered. You might wish to use the
following command at the top of the script to alter that
behaviour and have psql exit with an
exit status of 3 if an SQL error occurs:
\set ON_ERROR_STOP on
Either way, you will be left with only a partially restored database.
Alternatively, you can specify that the whole dump should be
restored as a single transaction, so the restore is either fully
completed or fully rolled back. This mode can be specified by
passing the -1 or --single-transaction
command-line options to psql. When using this
mode, be aware that even the smallest of errors can roll back a
restore that has already run for many hours. However, that might
still be preferable to manually cleaning up a complex database
after a partially restored dump.
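For example, a hypothetical all-or-nothing restore could be invoked as:
psql -1 -f mydb.sql mydb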
The ability of pg_dump and psql to
write to or read from pipes makes it possible to dump a database
directly from one server to another, for example:
pg_dump -h host1 dbname | psql -h host2 dbname
The dumps produced by pg_dump are relative to
template0. This means that any languages, procedures,
etc. added via template1 will also be dumped by
pg_dump. As a result, when restoring, if you are
using a customized template1, you must create the
empty database from template0, as in the example
above.
After restoring a backup, it is wise to run ANALYZE on each
database so the query optimizer has useful statistics;
see the documentation on routine database maintenance for more information.
For more advice on how to load large amounts of data
into PostgreSQL efficiently, refer to the performance tips chapter.
Using pg_dumpall
pg_dump dumps only a single database at a time,
and it does not dump information about roles or tablespaces
(because those are cluster-wide rather than per-database).
To support convenient dumping of the entire contents of a database
cluster, the pg_dumpall program is provided.
pg_dumpall backs up each database in a given
cluster, and also preserves cluster-wide data such as role and
tablespace definitions. The basic usage of this command is:
pg_dumpall > outfile
The resulting dump can be restored with psql:
psql -f infile postgres
(Actually, you can specify any existing database name to start from,
but if you are reloading into an empty cluster then postgres
should usually be used.) It is always necessary to have
database superuser access when restoring a pg_dumpall
dump, as that is required to restore the role and tablespace information.
If you use tablespaces, be careful that the tablespace paths in the
dump are appropriate for the new installation.
pg_dumpall works by emitting commands to re-create
roles, tablespaces, and empty databases, then invoking
pg_dump for each database. This means that while
each database will be internally consistent, the snapshots of
different databases might not be exactly in-sync.
Handling large databases
Since PostgreSQL allows tables larger
than the maximum file size on your system, it can be problematic
to dump such a table to a file, since the resulting file will likely
be larger than the maximum size allowed by your system. Since
pg_dump can write to the standard output, you can
use standard Unix tools to work around this possible problem.
There are several ways to do it:
Use compressed dumps.
You can use your favorite compression program, for example
gzip:
pg_dump dbname | gzip > filename.gz
Reload with:
gunzip -c filename.gz | psql dbname
or:
cat filename.gz | gunzip | psql dbname
Use split.
The split command
allows you to split the output into pieces that are
acceptable in size to the underlying file system. For example, to
make chunks of 1 megabyte:
pg_dump dbname | split -b 1m - filename
Reload with:
cat filename* | psql dbname
Use pg_dump's custom dump format.
If PostgreSQL was built on a system with the
zlib compression library installed, the custom dump
format will compress data as it writes it to the output file. This will
produce dump file sizes similar to using gzip, but it
has the added advantage that tables can be restored selectively. The
following command dumps a database using the custom dump format:
pg_dump -Fc dbname > filename
A custom-format dump is not a script for psql, but
instead must be restored with pg_restore, for example:
pg_restore -d dbname filename
See the pg_dump and pg_restore reference pages for details.
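For instance, to restore only a single table (here an illustrative table named mytable) from a custom-format dump:
pg_restore -d dbname -t mytable filename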
For very large databases, you might need to combine split
with one of the other two approaches.
File System Level Backup
An alternative backup strategy is to directly copy the files that
PostgreSQL uses to store the data in the database.
Where these files are located is explained elsewhere in the
documentation, but you have probably found them already if you are
interested in this method. You can use whatever method you prefer
for doing usual file system backups, for example:
tar -cf backup.tar /usr/local/pgsql/data
There are two restrictions, however, which make this method
impractical, or at least inferior to the pg_dump
method:
The database server must be shut down in order to
get a usable backup. Half-way measures such as disallowing all
connections will not work
(in part because tar and similar tools do not take
an atomic snapshot of the state of the file system,
but also because of internal buffering within the server).
Information about stopping the server can be found in the server
administration chapter. Needless to say, you
also need to shut down the server before restoring the data.
If you have dug into the details of the file system layout of the
database, you might be tempted to try to back up or restore only certain
individual tables or databases from their respective files or
directories. This will not work because the
information contained in these files contains only half the
truth. The other half is in the commit log files
pg_clog/*, which contain the commit status of
all transactions. A table file is only usable with this
information. Of course it is also impossible to restore only a
table and the associated pg_clog data
because that would render all other tables in the database
cluster useless. So file system backups only work for complete
backup and restoration of an entire database cluster.
An alternative file-system backup approach is to make a
consistent snapshot of the data directory, if the
file system supports that functionality (and you are willing to
trust that it is implemented correctly). The typical procedure is
to make a frozen snapshot of the volume containing the
database, then copy the whole data directory (not just parts, see
above) from the snapshot to a backup device, then release the frozen
snapshot. This will work even while the database server is running.
However, a backup created in this way saves
the database files in a state where the database server was not
properly shut down; therefore, when you start the database server
on the backed-up data, it will think the previous server instance had
crashed and replay the WAL log. This is not a problem; just be aware of
it (and be sure to include the WAL files in your backup).
If your database is spread across multiple file systems, there might not
be any way to obtain exactly-simultaneous frozen snapshots of all
the volumes. For example, if your data files and WAL log are on different
disks, or if tablespaces are on different file systems, it might
not be possible to use snapshot backup because the snapshots
must be simultaneous.
Read your file system documentation very carefully before trusting
the consistent-snapshot technique in such situations.
If simultaneous snapshots are not possible, one option is to shut down
the database server long enough to establish all the frozen snapshots.
Another option is to perform a continuous archiving base backup
(described below), because such backups are immune to file
system changes during the backup. This requires enabling continuous
archiving just during the backup process; restore is done using
continuous archive recovery (also described below).
Another option is to use rsync to perform a file
system backup. This is done by first running rsync
while the database server is running, then shutting down the database
server just long enough to do a second rsync. The
second rsync will be much quicker than the first,
because it has relatively little data to transfer, and the end result
will be consistent because the server was down. This method
allows a file system backup to be performed with minimal downtime.
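A minimal sketch of this procedure, assuming the data directory /usr/local/pgsql/data and a hypothetical backup destination /backup/pgdata, might look like:
# first pass while the server is running; this copy may be inconsistent
rsync -a /usr/local/pgsql/data/ /backup/pgdata/
# stop the server, make a quick second pass, then restart
pg_ctl stop -D /usr/local/pgsql/data -m fast
rsync -a /usr/local/pgsql/data/ /backup/pgdata/
pg_ctl start -D /usr/local/pgsql/data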
Note that a file system backup will not necessarily be
smaller than an SQL dump. On the contrary, it will most likely be
larger. (pg_dump does not need to dump
the contents of indexes for example, just the commands to recreate
them.) However, taking a file system backup might be faster.
Continuous Archiving and Point-In-Time Recovery (PITR)
At all times, PostgreSQL maintains a
write ahead log (WAL) in the pg_xlog/
subdirectory of the cluster's data directory. The log describes
every change made to the database's data files. This log exists
primarily for crash-safety purposes: if the system crashes, the
database can be restored to consistency by replaying the
log entries made since the last checkpoint. However, the existence
of the log makes it possible to use a third strategy for backing up
databases: we can combine a file-system-level backup with backup of
the WAL files. If recovery is needed, we restore the backup and
then replay from the backed-up WAL files to bring the backup up to
current time. This approach is more complex to administer than
either of the previous approaches, but it has some significant
benefits:
We do not need a perfectly consistent backup as the starting point.
Any internal inconsistency in the backup will be corrected by log
replay (this is not significantly different from what happens during
crash recovery). So we don't need file system snapshot capability,
just tar or a similar archiving tool.
Since we can string together an indefinitely long sequence of WAL files
for replay, continuous backup can be achieved simply by continuing to archive
the WAL files. This is particularly valuable for large databases, where
it might not be convenient to take a full backup frequently.
There is nothing that says we have to replay the WAL entries all the
way to the end. We could stop the replay at any point and have a
consistent snapshot of the database as it was at that time. Thus,
this technique supports point-in-time recovery: it is
possible to restore the database to its state at any time since your base
backup was taken.
If we continuously feed the series of WAL files to another
machine that has been loaded with the same base backup file, we
have a warm standby system: at any point we can bring up
the second machine and it will have a nearly-current copy of the
database.
As with the plain file-system-backup technique, this method can only
support restoration of an entire database cluster, not a subset.
Also, it requires a lot of archival storage: the base backup might be bulky,
and a busy system will generate many megabytes of WAL traffic that
have to be archived. Still, it is the preferred backup technique in
many situations where high reliability is needed.
To recover successfully using continuous archiving (also called
online backup by many database vendors), you need a continuous
sequence of archived WAL files that extends back at least as far as the
start time of your backup. So to get started, you should set up and test
your procedure for archiving WAL files before you take your
first base backup. Accordingly, we first discuss the mechanics of
archiving WAL files.
Setting up WAL archiving
In an abstract sense, a running PostgreSQL system
produces an indefinitely long sequence of WAL records. The system
physically divides this sequence into WAL segment
files, which are normally 16MB apiece (although the segment size
can be altered when building PostgreSQL). The segment
files are given numeric names that reflect their position in the
abstract WAL sequence. When not using WAL archiving, the system
normally creates just a few segment files and then
recycles them by renaming no-longer-needed segment files
to higher segment numbers. It's assumed that a segment file whose
contents precede the checkpoint-before-last is no longer of
interest and can be recycled.
When archiving WAL data, we need to capture the contents of each segment
file once it is filled, and save that data somewhere before the segment
file is recycled for reuse. Depending on the application and the
available hardware, there could be many different ways of saving
the data somewhere: we could copy the segment files to an NFS-mounted
directory on another machine, write them onto a tape drive (ensuring that
you have a way of identifying the original name of each file), or batch
them together and burn them onto CDs, or something else entirely. To
provide the database administrator with as much flexibility as possible,
PostgreSQL tries not to make any assumptions about how
the archiving will be done. Instead, PostgreSQL lets
the administrator specify a shell command to be executed to copy a
completed segment file to wherever it needs to go. The command could be
as simple as a cp, or it could invoke a complex shell
script; it's all up to you.
To enable WAL archiving, set the archive_mode configuration
parameter to on, and specify the shell command to use in the
archive_command configuration parameter. In practice
these settings will always be placed in the
postgresql.conf file.
In archive_command,
any %p is replaced by the path name of the file to
archive, while any %f is replaced by the file name only.
(The path name is relative to the current working directory,
i.e., the cluster's data directory.)
Write %% if you need to embed an actual %
character in the command. The simplest useful command is something
like:
archive_command = 'cp -i %p /mnt/server/archivedir/%f </dev/null'
which will copy archivable WAL segments to the directory
/mnt/server/archivedir. (This is an example, not a
recommendation, and might not work on all platforms.) After the
%p and %f parameters have been replaced,
the actual command executed might look like this:
cp -i pg_xlog/00000001000000A900000065 /mnt/server/archivedir/00000001000000A900000065 </dev/null
A similar command will be generated for each new file to be archived.
The archive command will be executed under the ownership of the same
user that the PostgreSQL server is running as. Since
the series of WAL files being archived contains effectively everything
in your database, you will want to be sure that the archived data is
protected from prying eyes; for example, archive into a directory that
does not have group or world read access.
It is important that the archive command return zero exit status if and
only if it succeeded. Upon getting a zero result,
PostgreSQL will assume that the file has been
successfully archived, and will remove or recycle it. However, a nonzero
status tells PostgreSQL that the file was not archived;
it will try again periodically until it succeeds.
The archive command should generally be designed to refuse to overwrite
any pre-existing archive file. This is an important safety feature to
preserve the integrity of your archive in case of administrator error
(such as sending the output of two different servers to the same archive
directory).
It is advisable to test your proposed archive command to ensure that it
indeed does not overwrite an existing file, and that it returns
nonzero status in this case. We have found that cp -i does
this correctly on some platforms but not others. If the chosen command
does not itself handle this case correctly, you should add a command
to test for pre-existence of the archive file. For example, something
like:
archive_command = 'test ! -f .../%f && cp %p .../%f'
works correctly on most Unix variants.
While designing your archiving setup, consider what will happen if
the archive command fails repeatedly because some aspect requires
operator intervention or the archive runs out of space. For example, this
could occur if you write to tape without an autochanger; when the tape
fills, nothing further can be archived until the tape is swapped.
You should ensure that any error condition or request to a human operator
is reported appropriately so that the situation can be
resolved reasonably quickly. The pg_xlog/ directory will
continue to fill with WAL segment files until the situation is resolved.
(If the file system containing pg_xlog/ fills up,
PostgreSQL will do a PANIC shutdown. No prior
transactions will be lost, but the database will be unavailable until
you free some space.)
The speed of the archiving command is not important, so long as it can keep up
with the average rate at which your server generates WAL data. Normal
operation continues even if the archiving process falls a little behind.
If archiving falls significantly behind, this will increase the amount of
data that would be lost in the event of a disaster. It will also mean that
the pg_xlog/ directory will contain large numbers of
not-yet-archived segment files, which could eventually exceed available
disk space. You are advised to monitor the archiving process to ensure that
it is working as you intend.
In writing your archive command, you should assume that the file names to
be archived can be up to 64 characters long and can contain any
combination of ASCII letters, digits, and dots. It is not necessary to
remember the original relative path (%p) but it is necessary to
remember the file name (%f).
Note that although WAL archiving will allow you to restore any
modifications made to the data in your PostgreSQL database,
it will not restore changes made to configuration files (that is,
postgresql.conf, pg_hba.conf and
pg_ident.conf), since those are edited manually rather
than through SQL operations.
You might wish to keep the configuration files in a location that will
be backed up by your regular file system backup procedures. See
the server configuration chapter for how to relocate the
configuration files.
The archive command is only invoked on completed WAL segments. Hence,
if your server generates only little WAL traffic (or has slack periods
where it does so), there could be a long delay between the completion
of a transaction and its safe recording in archive storage. To put
a limit on how old unarchived data can be, you can set
archive_timeout to force the server to switch
to a new WAL segment file at least that often. Note that archived
files that are ended early due to a forced switch are still the same
length as completely full files. It is therefore unwise to set a very
short archive_timeout; it will bloat your archive
storage. archive_timeout settings of a minute or so are
usually reasonable.
Also, you can force a segment switch manually with the
pg_switch_xlog function, if you want to ensure that a
just-finished transaction is archived as soon as possible. Other utility
functions related to WAL management are listed in the functions chapter
of the documentation.
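For example, to force an immediate segment switch from psql:
SELECT pg_switch_xlog();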
When archive_mode is off some SQL commands
are optimized to avoid WAL logging, as described elsewhere in the
documentation. If archiving were turned on during execution
of one of these statements, WAL would not contain enough information
for archive recovery. (Crash recovery is unaffected.) For
this reason, archive_mode can only be changed at server
start. However, archive_command can be changed with a
configuration file reload. If you wish to temporarily stop archiving,
one way to do it is to set archive_command to the empty
string ('').
This will cause WAL files to accumulate in pg_xlog/ until a
working archive_command is re-established.
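A minimal sketch of that approach (the data directory path is illustrative):
# in postgresql.conf: disable archiving; WAL accumulates in pg_xlog/
archive_command = ''
# then tell the server to re-read its configuration
pg_ctl reload -D /usr/local/pgsql/data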
Making a Base Backup
The procedure for making a base backup is relatively simple:
Ensure that WAL archiving is enabled and working.
Connect to the database as a superuser, and issue the command:
SELECT pg_start_backup('label');
where label is any string you want to use to uniquely
identify this backup operation. (One good practice is to use the
full path where you intend to put the backup dump file.)
pg_start_backup creates a backup label file,
called backup_label, in the cluster directory with
information about your backup.
It does not matter which database within the cluster you connect to
in order to issue this command. You can ignore the result returned by the
function; but if it reports an error, deal with that before proceeding.
By default, pg_start_backup can take a long time to finish.
This is because it performs a checkpoint, and the I/O
required for the checkpoint will be spread out over a significant
period of time, by default half your inter-checkpoint interval
(see the configuration parameter
checkpoint_completion_target). Usually
this is what you want, because it minimizes the impact on query
processing. If you just want to start the backup as soon as
possible, use:
SELECT pg_start_backup('label', true);
This forces the checkpoint to be done as quickly as possible.
Perform the backup, using any convenient file-system-backup tool
such as tar or cpio. It is neither
necessary nor desirable to stop normal operation of the database
while you do this.
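For example, a plain tar of the data directory might look like this (the paths are illustrative):
tar -cf /mnt/server/backup.tar /usr/local/pgsql/data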
Again connect to the database as a superuser, and issue the command:
SELECT pg_stop_backup();
This terminates the backup mode and performs an automatic switch to
the next WAL segment. The reason for the switch is to arrange that
the last WAL segment file written during the backup interval is
immediately ready to archive.
Once the WAL segment files used during the backup are archived, you are
done. The file identified by pg_stop_backup's result is
the last segment that is required to form a complete set of backup files.
pg_stop_backup does not return until the last segment has
been archived.
Archiving of these files happens automatically since you have
already configured archive_command. In most cases this
happens quickly, but you are advised to monitor your archive
system to ensure there are no delays.
If the archive process has fallen behind
because of failures of the archive command, it will keep retrying
until the archive succeeds and the backup is complete.
If you wish to place a time limit on the execution of
pg_stop_backup, set an appropriate
statement_timeout value.
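For example, to allow pg_stop_backup at most ten minutes before the session gives up with an error (the timeout value is illustrative):
SET statement_timeout = '10min';
SELECT pg_stop_backup();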
Some backup tools that you might wish to use emit warnings or errors
if the files they are trying to copy change while the copy proceeds.
This situation is normal, and not an error, when taking a base backup
of an active database; so you need to ensure that you can distinguish
complaints of this sort from real errors. For example, some versions
of rsync return a separate exit code for
vanished source files, and you can write a driver script to
accept this exit code as a non-error case. Also, some versions of
GNU tar return an error code indistinguishable from
a fatal error if a file was truncated while tar was
copying it. Fortunately, GNU tar versions 1.16 and
later exit with 1 if a file was changed during the backup,
and 2 for other errors.
It is not necessary to be very concerned about the amount of time elapsed
between pg_start_backup and the start of the actual backup,
nor between the end of the backup and pg_stop_backup; a
few minutes' delay won't hurt anything. (However, if you normally run the
server with full_page_writes disabled, you might notice a drop
in performance between pg_start_backup and
pg_stop_backup, since full_page_writes is
effectively forced on during backup mode.) You must ensure that these
steps are carried out in sequence without any possible
overlap, or you will invalidate the backup.
Be certain that your backup dump includes all of the files underneath
the database cluster directory (e.g., /usr/local/pgsql/data).
If you are using tablespaces that do not reside underneath this directory,
be careful to include them as well (and be sure that your backup dump
archives symbolic links as links, otherwise the restore will mess up
your tablespaces).
You can, however, omit from the backup dump the files within the
pg_xlog/ subdirectory of the cluster directory. This
slight complication is worthwhile because it reduces the risk
of mistakes when restoring. This is easy to arrange if
pg_xlog/ is a symbolic link pointing to someplace outside
the cluster directory, which is a common setup anyway for performance
reasons.
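With GNU tar, for instance, the subdirectory can be excluded directly (paths again illustrative):
tar -cf /mnt/server/backup.tar --exclude=pg_xlog /usr/local/pgsql/data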
To make use of the backup, you will need to keep around all the WAL
segment files generated during and after the file system backup.
To aid you in doing this, the pg_stop_backup function
creates a backup history file that is immediately
stored into the WAL archive area. This file is named after the first
WAL segment file that you need to have to make use of the backup.
For example, if the starting WAL file is
0000000100001234000055CD the backup history file will be
named something like
0000000100001234000055CD.007C9330.backup. (The second
part of the file name stands for an exact position within the WAL
file, and can ordinarily be ignored.) Once you have safely archived
the file system backup and the WAL segment files used during the
backup (as specified in the backup history file), all archived WAL
segments with numerically lesser names are no longer needed to recover
the file system backup and can be deleted. However, you should
consider keeping several backup sets to be absolutely certain that
you can recover your data.
The backup history file is just a small text file. It contains the
label string you gave to pg_start_backup, as well as
the starting and ending times and WAL segments of the backup.
If you used the label to identify where the associated dump file is kept,
then the archived history file is enough to tell you which dump file to
restore, should you need to do so.
Since you have to keep around all the archived WAL files back to your
last base backup, the interval between base backups should usually be
chosen based on how much storage you want to expend on archived WAL
files. You should also consider how long you are prepared to spend
recovering, if recovery should be necessary: the system will have to
replay all those WAL segments, and that could take a while if it has
been a long time since the last base backup.
It's also worth noting that the pg_start_backup function
makes a file named backup_label in the database cluster
directory, which is then removed again by pg_stop_backup.
This file will of course be archived as a part of your backup dump file.
The backup label file includes the label string you gave to
pg_start_backup, as well as the time at which
pg_start_backup was run, and the name of the starting WAL
file. In case of confusion it will
therefore be possible to look inside a backup dump file and determine
exactly which backup session the dump file came from.
It is also possible to make a backup dump while the server is
stopped. In this case, you obviously cannot use
pg_start_backup or pg_stop_backup, and
you will therefore be left to your own devices to keep track of which
backup dump is which and how far back the associated WAL files go.
It is generally better to follow the continuous archiving procedure above.
Recovering using a Continuous Archive Backup
Okay, the worst has happened and you need to recover from your backup.
Here is the procedure:
1. Stop the server, if it's running.
2. If you have the space to do so,
copy the whole cluster data directory and any tablespaces to a temporary
location in case you need them later. Note that this precaution will
require that you have enough free space on your system to hold two
copies of your existing database. If you do not have enough space,
you should at least copy the contents of the pg_xlog
subdirectory of the cluster data directory, as it might contain logs which
were not archived before the system went down.
3. Clean out all existing files and subdirectories under the cluster data
directory and under the root directories of any tablespaces you are using.
4. Restore the database files from your base backup. Be careful that they
are restored with the right ownership (the database system user, not
root!) and with the right permissions. If you are using
tablespaces,
you should verify that the symbolic links in pg_tblspc/
were correctly restored.
5. Remove any files present in pg_xlog/; these came from the
backup dump and are therefore probably obsolete rather than current.
If you didn't archive pg_xlog/ at all, then recreate it,
being careful to ensure that you re-establish it as a symbolic link
if you had it set up that way before.
6. If you had unarchived WAL segment files that you saved in step 2,
copy them into pg_xlog/. (It is best to copy them,
not move them, so that you still have the unmodified files if a
problem occurs and you have to start over.)
7. Create a recovery command file recovery.conf in the cluster
data directory (see Recovery Settings below). You might
also want to temporarily modify pg_hba.conf to prevent
ordinary users from connecting until you are sure the recovery has worked.
8. Start the server. The server will go into recovery mode and
proceed to read through the archived WAL files it needs. Should the
recovery be terminated because of an external error, the server can
simply be restarted and it will continue recovery. Upon completion
of the recovery process, the server will rename
recovery.conf to recovery.done (to prevent
accidentally re-entering recovery mode in case of a crash later) and then
commence normal database operations.
9. Inspect the contents of the database to ensure you have recovered to
where you want to be. If not, return to step 1. If all is well,
let in your users by restoring pg_hba.conf to normal.
The key part of all this is to set up a recovery command file that
describes how you want to recover and how far the recovery should
run. You can use recovery.conf.sample (normally
installed in the installation's share/ directory) as a
prototype. The one thing that you absolutely must specify in
recovery.conf is the restore_command,
which tells PostgreSQL how to get back archived
WAL file segments. Like the archive_command, this is
a shell command string. It can contain %f, which is
replaced by the name of the desired log file, and %p,
which is replaced by the path name to copy the log file to.
(The path name is relative to the current working directory,
i.e., the cluster's data directory.)
Write %% if you need to embed an actual %
character in the command. The simplest useful command is
something like:
restore_command = 'cp /mnt/server/archivedir/%f %p'
which will copy previously archived WAL segments from the directory
/mnt/server/archivedir. You could of course use something
much more complicated, perhaps even a shell script that requests the
operator to mount an appropriate tape.
It is important that the command return nonzero exit status on failure.
The command will be asked for files that are not present
in the archive; it must return nonzero when so asked. This is not an
error condition. Not all of the requested files will be WAL segment
files; you should also expect requests for files with a suffix of
.backup or .history. Also be aware that
the base name of the %p path will be different from
%f; do not expect them to be interchangeable.
WAL segments that cannot be found in the archive will be sought in
pg_xlog/; this allows use of recent un-archived segments.
However, segments that are available from the archive will be used in
preference to files in pg_xlog/. The system will not
overwrite the existing contents of pg_xlog/ when retrieving
archived files.
Normally, recovery will proceed through all available WAL segments,
thereby restoring the database to the current point in time (or as
close as we can get given the available WAL segments). So a normal
recovery will end with a file not found message, the exact text
of the error message depending upon your choice of
restore_command. You may also see an error message
at the start of recovery for a file named something like
00000001.history. This is also normal and does not
indicate a problem in simple recovery situations. See
the Timelines section below for discussion.
If you want to recover to some previous point in time (say, right before
the junior DBA dropped your main transaction table), just specify the
required stopping point in recovery.conf. You can specify
the stop point, known as the recovery target, either by
date/time or by completion of a specific transaction ID. As of this
writing only the date/time option is very usable, since there are no tools
to help you identify with any accuracy which transaction ID to use.
The stop point must be after the ending time of the base backup, i.e.,
the end time of pg_stop_backup. You cannot use a base backup
to recover to a time when that backup was still going on. (To
recover to such a time, you must go back to your previous base backup
and roll forward from there.)
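For example, a recovery.conf requesting point-in-time recovery might contain (the timestamp is of course illustrative):
restore_command = 'cp /mnt/server/archivedir/%f %p'
recovery_target_time = '2008-05-20 17:13:00'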
If recovery finds corrupted WAL data, recovery will
halt at that point and the server will not start. In such a case the
recovery process could be re-run from the beginning, specifying a
recovery target before the point of corruption so that recovery
can complete normally.
If recovery fails for an external reason, such as a system crash or
if the WAL archive has become inaccessible, then the recovery can simply
be restarted and it will restart almost from where it failed.
Recovery restart works much like checkpointing in normal operation:
the server periodically forces all its state to disk, and then updates
the pg_control file to indicate that the already-processed
WAL data need not be scanned again.
Recovery Settings
These settings can only be made in the recovery.conf
file, and apply only for the duration of the recovery. They must be
reset for any subsequent recovery you wish to perform. They cannot be
changed once recovery has begun.
restore_command (string)
The shell command to execute to retrieve an archived segment of
the WAL file series. This parameter is required.
Any %f in the string is
replaced by the name of the file to retrieve from the archive,
and any %p is replaced by the path name to copy
it to on the server.
(The path name is relative to the current working directory,
i.e., the cluster's data directory.)
Any %r is replaced by the name of the file containing the
last valid restart point. That is the earliest file that must be kept
to allow a restore to be restartable, so this information can be used
to truncate the archive to just the minimum required to support
restart from the current restore. %r would typically be
used in a warm-standby configuration
(see the warm standby section below).
Write %% to embed an actual % character
in the command.
It is important for the command to return a zero exit status if and
only if it succeeds. The command will be asked for file
names that are not present in the archive; it must return nonzero
when so asked. Examples:
restore_command = 'cp /mnt/server/archivedir/%f "%p"'
restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"' # Windows
recovery_end_command (string)
This parameter specifies a shell command that will be executed once only
at the end of recovery. This parameter is optional. The purpose of the
recovery_end_command is to provide a mechanism for cleanup
following replication or recovery.
Any %r is replaced by the name of the file containing the
last valid restart point, as in restore_command above;
%r would typically be used in a warm-standby configuration
(see the warm standby section below).
Write %% to embed an actual % character
in the command.
If the command returns a non-zero exit status then a WARNING log
message will be written and the database will proceed to start up
anyway. An exception is that if the command was terminated by a
signal, the database will not proceed with startup.
recovery_target_time (timestamp)
This parameter specifies the time stamp up to which recovery
will proceed.
At most one of recovery_target_time and
recovery_target_xid can be specified.
The default is to recover to the end of the WAL log.
The precise stopping point is also influenced by
recovery_target_inclusive.
recovery_target_xid (string)
This parameter specifies the transaction ID up to which recovery
will proceed. Keep in mind
that while transaction IDs are assigned sequentially at transaction
start, transactions can complete in a different numeric order.
The transactions that will be recovered are those that committed
before (and optionally including) the specified one.
At most one of recovery_target_xid and
recovery_target_time can be specified.
The default is to recover to the end of the WAL log.
The precise stopping point is also influenced by
recovery_target_inclusive.
recovery_target_inclusive (boolean)
Specifies whether we stop just after the specified recovery target
(true), or just before the recovery target
(false).
Applies to both recovery_target_time
and recovery_target_xid, whichever one is
specified for this recovery. This indicates whether transactions
having exactly the target commit time or ID, respectively, will
be included in the recovery. Default is true.
recovery_target_timeline (string)
Specifies recovering into a particular timeline. The default is
to recover along the same timeline that was current when the
base backup was taken. You would only need to set this parameter
in complex re-recovery situations, where you need to return to
a state that itself was reached after a point-in-time recovery.
See the Timelines section below for discussion.
Timelines
The ability to restore the database to a previous point in time creates
some complexities that are akin to science-fiction stories about time
travel and parallel universes. In the original history of the database,
perhaps you dropped a critical table at 5:15PM on Tuesday evening, but
didn't realize your mistake until Wednesday noon.
Unfazed, you get out your backup, restore to the point-in-time 5:14PM
Tuesday evening, and are up and running. In this history of
the database universe, you never dropped the table at all. But suppose
you later realize this wasn't such a great idea after all, and would like
to return to sometime Wednesday morning in the original history.
You won't be able
to if, while your database was up-and-running, it overwrote some of the
sequence of WAL segment files that led up to the time you now wish you
could get back to. So you really want to distinguish the series of
WAL records generated after you've done a point-in-time recovery from
those that were generated in the original database history.
To deal with these problems, PostgreSQL has a notion
of timelines. Whenever an archive recovery is completed,
a new timeline is created to identify the series of WAL records
generated after that recovery. The timeline
ID number is part of WAL segment file names, and so a new timeline does
not overwrite the WAL data generated by previous timelines. It is
in fact possible to archive many different timelines. While that might
seem like a useless feature, it's often a lifesaver. Consider the
situation where you aren't quite sure what point-in-time to recover to,
and so have to do several point-in-time recoveries by trial and error
until you find the best place to branch off from the old history. Without
timelines this process would soon generate an unmanageable mess. With
timelines, you can recover to any prior state, including
states in timeline branches that you later abandoned.
Each time a new timeline is created, PostgreSQL creates
a timeline history file that shows which timeline it branched
off from and when. These history files are necessary to allow the system
to pick the right WAL segment files when recovering from an archive that
contains multiple timelines. Therefore, they are archived into the WAL
archive area just like WAL segment files. The history files are just
small text files, so it's cheap and appropriate to keep them around
indefinitely (unlike the segment files which are large). You can, if
you like, add comments to a history file to make your own notes about
how and why this particular timeline came to be. Such comments will be
especially valuable when you have a thicket of different timelines as
a result of experimentation.
The default behavior of recovery is to recover along the same timeline
that was current when the base backup was taken. If you want to recover
into some child timeline (that is, you want to return to some state that
was itself generated after a recovery attempt), you need to specify the
target timeline ID in recovery.conf. You cannot recover into
timelines that branched off earlier than the base backup.
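For example, to recover into a hypothetical timeline 3, the recovery.conf entry would be:
recovery_target_timeline = '3'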
Tips and Examples
Some tips for configuring continuous archiving are given here.
Standalone hot backups
It is possible to use PostgreSQL's backup facilities to
produce standalone hot backups. These are backups that cannot be used
for point-in-time recovery, yet are typically much faster to back up and
restore than pg_dump dumps. (They are also much larger
than pg_dump dumps, so in some cases the speed advantage
could be negated.)
To prepare for standalone hot backups, set archive_mode to
on, and set up an archive_command that performs
archiving only when a switch file exists. For example:
archive_command = 'test ! -f /var/lib/pgsql/backup_in_progress || cp -i %p /var/lib/pgsql/archive/%f < /dev/null'
This command will perform archiving when
/var/lib/pgsql/backup_in_progress exists, and otherwise
silently return zero exit status (allowing PostgreSQL
to recycle the unwanted WAL file).
With this preparation, a backup can be taken using a script like the
following:
touch /var/lib/pgsql/backup_in_progress
psql -c "select pg_start_backup('hot_backup');"
tar -cf /var/lib/pgsql/backup.tar /var/lib/pgsql/data/
psql -c "select pg_stop_backup();"
rm /var/lib/pgsql/backup_in_progress
tar -rf /var/lib/pgsql/backup.tar /var/lib/pgsql/archive/
The switch file /var/lib/pgsql/backup_in_progress is
created first, enabling archiving of completed WAL files to occur.
After the backup the switch file is removed. Archived WAL files are
then added to the backup so that both base backup and all required
WAL files are part of the same tar file.
Please remember to add error handling to your backup scripts.
If archive storage size is a concern, you can use the external tool
pg_compresslog to remove unnecessary full-page writes and trailing
space from the WAL files. You can then use
gzip to further compress the output of
pg_compresslog:
archive_command = 'pg_compresslog %p - | gzip > /var/lib/pgsql/archive/%f'
You will then need to use gunzip and
pg_decompresslog during recovery:
restore_command = 'gunzip < /mnt/server/archivedir/%f | pg_decompresslog - %p'
archive_command scripts
Many people choose to use scripts to define their
archive_command, so that their
postgresql.conf entry looks very simple:
archive_command = 'local_backup_script.sh'
Using a separate script file is advisable any time you want to use
more than a single command in the archiving process.
This allows all complexity to be managed within the script, which
can be written in a popular scripting language such as
bash or perl.
Any messages written to stderr from the script will appear
in the database server log, allowing complex configurations to be
diagnosed easily if they fail.
Examples of requirements that might be solved within a script include:
Copying data to secure off-site data storage
Batching WAL files so that they are transferred every three hours,
rather than one at a time
Interfacing with other backup and recovery software
Interfacing with monitoring software to report errors
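As an illustration only, a minimal local_backup_script.sh along these lines might be written as follows; the archive location and the convention of passing %p and %f as script arguments are assumptions you would adapt:
#!/bin/bash
# $1 = %p (path of the WAL file to archive), $2 = %f (file name only),
# so the matching entry is: archive_command = 'local_backup_script.sh %p %f'
ARCHIVE=/var/lib/pgsql/archive    # hypothetical archive directory
# refuse to overwrite an existing archive file, as recommended above
if [ -f "$ARCHIVE/$2" ]; then
    echo "archive file $2 already exists" >&2
    exit 1
fi
cp "$1" "$ARCHIVE/$2"
The exit status of the final cp becomes the script's exit status, which is what PostgreSQL examines.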
Caveats
At this writing, there are several limitations of the continuous archiving
technique. These will probably be fixed in future releases:
Operations on hash indexes are not presently WAL-logged, so
replay will not update these indexes. This will mean that any new inserts
will be ignored by the index, updated rows will apparently disappear and
deleted rows will still retain pointers. In other words, if you modify a
table with a hash index on it then you will get incorrect query results
on a standby server. When recovery completes it is recommended that you
manually REINDEX each such index.
If a CREATE DATABASE
command is executed while a base backup is being taken, and then
the template database that the CREATE DATABASE copied
is modified while the base backup is still in progress, it is
possible that recovery will cause those modifications to be
propagated into the created database as well. This is of course
undesirable. To avoid this risk, it is best not to modify any
template databases while taking a base backup.
CREATE TABLESPACE
commands are WAL-logged with the literal absolute path, and will
therefore be replayed as tablespace creations with the same
absolute path. This might be undesirable if the log is being
replayed on a different machine. It can be dangerous even if the
log is being replayed on the same machine, but into a new data
directory: the replay will still overwrite the contents of the
original tablespace. To avoid potential gotchas of this sort,
the best practice is to take a new base backup after creating or
dropping tablespaces.
It should also be noted that the default WAL
format is fairly bulky since it includes many disk page snapshots.
These page snapshots are designed to support crash recovery, since
we might need to fix partially-written disk pages. Depending on
your system hardware and software, the risk of partial writes might
be small enough to ignore, in which case you can significantly
reduce the total volume of archived logs by turning off page
snapshots using the full_page_writes
parameter. (Read the notes and warnings about that parameter
before you do so.) Turning off page snapshots does not prevent
use of the logs for PITR operations. An area for future
development is to compress archived WAL data by removing
unnecessary page copies even when full_page_writes is
on. In the meantime, administrators might wish to reduce the number
of page snapshots included in WAL by increasing the checkpoint
interval parameters as much as feasible.
Warm Standby Servers for High Availability
Continuous archiving can be used to create a high
availability (HA) cluster configuration with one or more
standby servers ready to take over operations if the
primary server fails. This capability is widely referred to as
warm standby or log shipping.
The primary and standby server work together to provide this capability,
though the servers are only loosely coupled. The primary server operates
in continuous archiving mode, while each standby server operates in
continuous recovery mode, reading the WAL files from the primary. No
changes to the database tables are required to enable this capability,
so it offers low administration overhead in comparison with some other
replication approaches. This configuration also has relatively low
performance impact on the primary server.
Directly moving WAL records from one database server to another
is typically described as log shipping. PostgreSQL
implements file-based log shipping, which means that WAL records are
transferred one file (WAL segment) at a time. WAL files (16MB) can be
shipped easily and cheaply over any distance, whether it be to an
adjacent system, another system at the same site, or another system on
the far side of the globe. The bandwidth required for this technique
varies according to the transaction rate of the primary server.
Record-based log shipping is also possible with custom-developed
procedures, as discussed at the end of this chapter.
It should be noted that the log shipping is asynchronous, i.e., the WAL
records are shipped after transaction commit. As a result there is a
window for data loss should the primary server suffer a catastrophic
failure: transactions not yet shipped will be lost. The length of the
window of data loss can be limited by use of the
archive_timeout parameter, which can be set as low
as a few seconds if required. However, such low settings will
substantially increase the bandwidth requirements for file shipping.
If you need a window of less than a minute or so, it's probably better
to look into record-based log shipping.
The standby server is not available for access, since it is continually
performing recovery processing. Recovery performance is sufficiently
good that the standby will typically be only moments away from full
availability once it has been activated. As a result, we refer to this
capability as a warm standby configuration that offers high
availability. Restoring a server from an archived base backup and
rollforward will take considerably longer, so that technique only
offers a solution for disaster recovery, not high availability.
Planning
It is usually wise to create the primary and standby servers
so that they are as similar as possible, at least from the
perspective of the database server. In particular, the path names
associated with tablespaces will be passed across as-is, so both
primary and standby servers must have the same mount paths for
tablespaces if that feature is used. Keep in mind that if
CREATE TABLESPACE
is executed on the primary, any new mount point needed for it must
be created on both the primary and all standby servers before the command
is executed. Hardware need not be exactly the same, but experience shows
that maintaining two identical systems is easier than maintaining two
dissimilar ones over the lifetime of the application and system.
In any case the hardware architecture must be the same; shipping
from, say, a 32-bit to a 64-bit system will not work.
In general, log shipping between servers running different major
PostgreSQL release
levels will not be possible. It is the policy of the PostgreSQL Global
Development Group not to make changes to disk formats during minor release
upgrades, so it is likely that running different minor release levels
on primary and standby servers will work successfully. However, no
formal support for that is offered and you are advised to keep primary
and standby servers at the same release level as much as possible.
When updating to a new minor release, the safest policy is to update
the standby servers first, since a new minor release is more likely
to be able to read WAL files from a previous minor release than vice
versa.
There is no special mode required to enable a standby server. The
operations that occur on both primary and standby servers are entirely
normal continuous archiving and recovery tasks. The only point of
contact between the two database servers is the archive of WAL files
that both share: primary writing to the archive, standby reading from
the archive. Care must be taken to ensure that WAL archives for separate
primary servers do not become mixed together or confused. The archive
need not be large, if it is only required for the standby operation.
The magic that makes the two loosely coupled servers work together is
simply a restore_command used on the standby that,
when asked for the next WAL file, waits for it to become available from
the primary. The restore_command is specified in the
recovery.conf file on the standby server. Normal recovery
processing would request a file from the WAL archive, reporting failure
if the file was unavailable. For standby processing it is normal for
the next WAL file to be unavailable, so we must be patient and wait for
it to appear. For files ending in .backup or
.history there is no need to wait, and a non-zero return
code must be returned. A waiting restore_command can be
written as a custom script that loops after polling for the existence of
the next WAL file. There must also be some way to trigger failover, which
should interrupt the restore_command, break the loop and
return a file-not-found error to the standby server. This ends recovery
and the standby will then come up as a normal server.
Pseudocode for a suitable restore_command is:
triggered = false;
while (!NextWALFileReady() && !triggered)
{
usleep(100000L); /* wait for ~0.1 sec */
if (CheckForExternalTrigger())
triggered = true;
}
if (!triggered)
CopyWALFileForRecovery();
A working example of a waiting restore_command is provided
as a contrib module named pg_standby. It
should be used as a reference on how to correctly implement the logic
described above. It can also be extended as needed to support specific
configurations or environments.
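For example, on the standby a recovery.conf using pg_standby might look roughly like this (the trigger-file path is an assumption; check the pg_standby documentation for the options available in your version):
restore_command = 'pg_standby -t /tmp/pgsql.trigger /mnt/server/archivedir %f %p %r'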
PostgreSQL does not provide the system
software required to identify a failure on the primary and notify
the standby system and then the standby database server. Many such
tools exist and are well integrated with other aspects required for
successful failover, such as IP address migration.
The means for triggering failover is an important part of planning and
design. The restore_command is executed in full once
for each WAL file. The process running the restore_command
is therefore created and dies for each file, so there is no daemon
or server process and so we cannot use signals and a signal
handler. A more permanent notification is required to trigger the
failover. It is possible to use a simple timeout facility,
especially if used in conjunction with a known
archive_timeout setting on the primary. This is
somewhat error prone since a network problem or busy primary server might
be sufficient to initiate failover. A notification mechanism such
as the explicit creation of a trigger file is less error prone, if
this can be arranged.
The size of the WAL archive can be minimized by using the %r
option of the restore_command. This option specifies the
last archive file name that needs to be kept to allow the recovery to
restart correctly. This can be used to truncate the archive once
files are no longer required, if the archive is writable from the
standby server.
Implementation
The short procedure for configuring a standby server is as follows. For
full details of each step, refer to previous sections as noted.
Set up primary and standby systems as near identically as
possible, including two identical copies of
PostgreSQL> at the same release level.
Set up continuous archiving from the primary to a WAL archive located
in a directory on the standby server. Ensure that
archive_mode>, archive_command> and
archive_timeout> are set appropriately on the primary
(see ).
Make a base backup of the primary server (see ), and load this data onto the standby.
Begin recovery on the standby server from the local WAL
archive, using a recovery.conf> that specifies a
restore_command> that waits as described
previously (see ).
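For reference, the corresponding archiving setup on the primary might look like this in postgresql.conf>; the copy command and paths are illustrative only:
archive_mode = on
archive_command = 'rsync -a %p standby:/mnt/server/archive/%f'
archive_timeout = 60    # switch WAL segments at least once a minute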
Recovery treats the WAL archive as read-only, so once a WAL file has
been copied to the standby system it can be copied to tape at the same
time as it is being read by the standby database server.
Thus, running a standby server for high availability can be performed at
the same time as files are stored for longer term disaster recovery
purposes.
For testing purposes, it is possible to run both primary and standby
servers on the same system. This does not provide any worthwhile
improvement in server robustness, nor would it be described as HA.
Failover
If the primary server fails then the standby server should begin
failover procedures.
If the standby server fails then no failover need take place. If the
standby server can be restarted, even some time later, then the recovery
process can also be immediately restarted, taking advantage of
restartable recovery. If the standby server cannot be restarted, then a
full new standby server instance should be created.
If the primary server fails and then immediately restarts, you must have
a mechanism for informing it that it is no longer the primary. This is
sometimes known as STONITH (Shoot the Other Node In The Head), which is
necessary to avoid situations where both systems think they are the
primary, which will lead to confusion and ultimately data loss.
Many failover systems use just two systems, the primary and the standby,
connected by some kind of heartbeat mechanism to continually verify the
connectivity between the two and the viability of the primary. It is
also possible to use a third system (called a witness server) to prevent
some cases of inappropriate failover, but the additional complexity
might not be worthwhile unless it is set up with sufficient care and
rigorous testing.
Once failover to the standby occurs, we have only a
single server in operation. This is known as a degenerate state.
The former standby is now the primary, but the former primary is down
and might stay down. To return to normal operation we must
fully recreate a standby server,
either on the former primary system when it comes up, or on a third,
possibly new, system. Once complete the primary and standby can be
considered to have switched roles. Some people choose to use a third
server to provide backup to the new primary until the new standby
server is recreated,
though clearly this complicates the system configuration and
operational processes.
So, switching from primary to standby server can be fast but requires
some time to re-prepare the failover cluster. Regular switching from
primary to standby is useful, since it allows regular downtime on
each system for maintenance. This also serves as a test of the
failover mechanism to ensure that it will really work when you need it.
Written administration procedures are advised.
Record-based Log Shipping
PostgreSQL directly supports file-based
log shipping as described above. It is also possible to implement
record-based log shipping, though this requires custom development.
An external program can call the pg_xlogfile_name_offset()>
function (see )
to find out the file name and the exact byte offset within it of
the current end of WAL. It can then access the WAL file directly
and copy the data from the last known end of WAL through the current end
over to the standby server(s). With this approach, the window for data
loss is the polling cycle time of the copying program, which can be very
small, but there is no wasted bandwidth from forcing partially-used
segment files to be archived. Note that the standby servers'
restore_command> scripts still deal in whole WAL files,
so the incrementally copied data is not ordinarily made available to
the standby servers. It is of use only when the primary dies —
then the last partial WAL file is fed to the standby before allowing
it to come up. So correct implementation of this process requires
cooperation of the restore_command> script with the data
copying program.
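As a sketch, the copying program could poll the current end of WAL with a query such as the following; the byte-copying step itself is left out, and the surrounding script is only illustrative:
psql -At -c "SELECT file_name, file_offset
             FROM pg_xlogfile_name_offset(pg_current_xlog_location())"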
Incrementally Updated Backups
In a warm standby configuration, it is possible to offload the expense of
taking periodic base backups from the primary server; instead base backups
can be made by backing
up a standby server's files. This concept is generally known as
incrementally updated backups, log change accumulation, or more simply,
change accumulation.
If we take a backup of the standby server's data directory while it is processing
logs shipped from the primary, we will be able to reload that data and
restart the standby's recovery process from the last restart point.
We no longer need to keep WAL files from before the restart point.
If we need to recover, it will be faster to recover from the incrementally
updated backup than from the original base backup.
Since the standby server is not live>, it is not possible to
use pg_start_backup()> and pg_stop_backup()>
to manage the backup process; it will be up to you to determine how
far back you need to keep WAL segment files to have a recoverable
backup. You can do this by running pg_controldata>
on the standby server to inspect the control file and determine the
current checkpoint WAL location, or by using the
log_checkpoints> option to print values to the server log.
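For example, assuming the standby's data directory is /usr/local/pgsql/standby, the relevant control-file values can be inspected with:
pg_controldata /usr/local/pgsql/standby | grep 'Latest checkpoint'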
Hot Standby
Hot Standby is the term used to describe the ability to connect to
the server and run queries while the server is in archive recovery. This
is useful for both log shipping replication and for restoring a backup
to an exact state with great precision.
The term Hot Standby also refers to the ability of the server to move
from recovery through to normal running while users continue running
queries and/or continue their connections.
Running queries in recovery is in many ways the same as normal running
though there are a large number of usage and administrative points
to note.
User's Overview
Users can connect to the database while the server is in recovery
and perform read-only queries. Read-only access to catalogs and views
will also occur as normal.
The data on the standby takes some time to arrive from the primary server
so there will be a measurable delay between primary and standby. Running the
same query nearly simultaneously on both primary and standby might therefore
return differing results. We say that data on the standby is eventually
consistent with the primary.
Queries executed on the standby will be correct with regard to the transactions
that had been recovered at the start of the query (or at the start of the first
statement, in the case of serializable transactions). In comparison with the primary,
the standby returns query results that could have been obtained on the primary
at some exact moment in the past.
When a transaction is started in recovery, the parameter
transaction_read_only> will be forced to be true, regardless of the
default_transaction_read_only> setting in postgresql.conf>.
It cannot be manually set to false either. As a result, all transactions
started during recovery are limited to read-only actions. In all
other ways, connected sessions will appear identical to sessions
initiated during normal processing mode. There are no special commands
required to initiate a connection at this time, so all interfaces
work normally without change. After recovery finishes, the session
will allow normal read-write transactions at the start of the next
transaction, if these are requested.
Read-only here means "no writes to the permanent database tables".
There are no problems with queries that make use of transient sort and
work files.
The following actions are allowed:
Query access - SELECT, COPY TO including views and SELECT RULEs
Cursor commands - DECLARE, FETCH, CLOSE
Parameters - SHOW, SET, RESET
Transaction management commands
BEGIN, END, ABORT, START TRANSACTION
SAVEPOINT, RELEASE, ROLLBACK TO SAVEPOINT
EXCEPTION blocks and other internal subtransactions
LOCK TABLE, though only when explicitly in one of these modes:
ACCESS SHARE, ROW SHARE or ROW EXCLUSIVE.
Plans and resources - PREPARE, EXECUTE, DEALLOCATE, DISCARD
Plugins and extensions - LOAD
These actions produce error messages:
Data Manipulation Language (DML) - INSERT, UPDATE, DELETE, COPY FROM, TRUNCATE.
Note that there are no allowed actions that result in a trigger
being executed during recovery.
Data Definition Language (DDL) - CREATE, DROP, ALTER, COMMENT.
This currently also applies to temporary tables, because their
definition causes writes to catalog tables.
SELECT ... FOR SHARE | UPDATE which cause row locks to be written
RULEs on SELECT statements that generate DML commands.
LOCK TABLE, in short default form, since it requests ACCESS EXCLUSIVE MODE.
LOCK TABLE that explicitly requests a mode higher than ROW EXCLUSIVE MODE.
Transaction management commands that explicitly set non-read only state
BEGIN READ WRITE,
START TRANSACTION READ WRITE
SET TRANSACTION READ WRITE,
SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE
SET transaction_read_only = off
Two-phase commit commands - PREPARE TRANSACTION, COMMIT PREPARED,
ROLLBACK PREPARED because even read-only transactions need to write
WAL in the prepare phase (the first phase of two phase commit).
sequence update - nextval()
LISTEN, UNLISTEN, NOTIFY since they currently write to system tables
Note that the current behaviour of read-only transactions when not in
recovery is to allow the last two actions, so there are small but
subtle differences in behaviour between read-only transactions
run on the standby and those run during normal operation.
It is possible that the restrictions on LISTEN, UNLISTEN, NOTIFY and
temporary tables may be lifted in a future release, if their internal
implementation is altered to make this possible.
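To illustrate the rules above, a session on the standby might behave as follows; the table name is assumed for this sketch:
BEGIN;
SELECT count(*) FROM accounts;              -- allowed: read-only query
LOCK TABLE accounts IN ACCESS SHARE MODE;   -- allowed: explicitly weak lock mode
INSERT INTO accounts DEFAULT VALUES;        -- fails: transaction is read-only
ROLLBACK;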
If failover or switchover occurs the database will switch to normal
processing mode. Sessions will remain connected while the server
changes mode. Current transactions will continue, though will remain
read-only. After recovery is complete, it will be possible to initiate
read-write transactions.
Users will be able to tell whether their session is read-only by
issuing SHOW transaction_read_only. In addition a set of
functions allow users to
access information about Hot Standby. These allow you to write
functions that are aware of the current state of the database. These
can be used to monitor the progress of recovery, or to allow you to
write complex programs that restore the database to particular states.
In recovery, transactions will not be permitted to take any table lock
higher than RowExclusiveLock. In addition, transactions may never assign
a TransactionId and may never write WAL.
Any LOCK TABLE> command that runs on the standby and requests
a specific lock mode higher than ROW EXCLUSIVE MODE will be rejected.
In general queries will not experience lock conflicts with the database
changes made by recovery. This is because recovery follows normal
concurrency control mechanisms, known as MVCC>. There are
some types of change that will cause conflicts, covered in the following
section.
Handling query conflicts
The primary and standby nodes are in many ways loosely connected. Actions
on the primary will have an effect on the standby. As a result, there is
potential for negative interactions or conflicts between them. The easiest
conflict to understand is performance: if a huge data load is taking place
on the primary then this will generate a similar stream of WAL records on the
standby, so standby queries may contend for system resources, such as I/O.
There are also additional types of conflict that can occur with Hot Standby.
These conflicts are hard conflicts> in the sense that we may
need to cancel queries and in some cases disconnect sessions to resolve them.
The user is provided with a number of optional ways to handle these
conflicts, though we must first understand the possible reasons behind a conflict.
Access Exclusive Locks from primary node, including both explicit
LOCK commands and various kinds of DDL action
Dropping tablespaces on the primary while standby queries are using
those tablespaces for temporary work files (work_mem overflow)
Dropping databases on the primary while users are connected to that
database on the standby.
Waiting to acquire buffer cleanup locks (for which there is no time out)
Early cleanup of data still visible to the current query's snapshot
Some WAL redo actions will be for DDL actions. These DDL actions are
repeating actions that have already committed on the primary node, so
they must not fail on the standby node. These DDL locks take priority
and will automatically cancel any read-only transactions that get in
their way, after a grace period. This is similar to the possibility of
being canceled by the deadlock detector, but in this case the standby
process always wins, since the replayed actions must not fail. This
also ensures that replication doesn't fall behind while we wait for a
query to complete. Again, we assume that the standby is there for high
availability purposes primarily.
An example of the above would be an administrator on the primary server
running DROP TABLE> on a table that is currently being queried
on the standby server.
Clearly the query cannot continue if we let the DROP TABLE>
proceed. If this situation occurred on the primary, the DROP TABLE>
would wait until the query has finished. When the query is on the standby
and the DROP TABLE> is on the primary, the primary doesn't have
information about which queries are running on the standby and so the query
does not wait on the primary. The WAL change records come through to the
standby while the standby query is still running, causing a conflict.
The most common reason for conflict between standby queries and WAL redo is
"early cleanup". Normally, PostgreSQL> allows cleanup of old
row versions when there are no users who may need to see them to ensure correct
visibility of data (the heart of MVCC). If there is a standby query that has
been running for longer than any query on the primary then it is possible
for old row versions to be removed by either a vacuum or HOT. This will
then generate WAL records that, if applied, would remove data on the
standby that might potentially be required by the standby query.
In more technical language, the primary's xmin horizon is later than
the standby's xmin horizon, allowing dead rows to be removed.
Experienced users should note that both row version cleanup and row version
freezing will potentially conflict with recovery queries. Running a
manual VACUUM FREEZE> is likely to cause conflicts even on tables
with no updated or deleted rows.
We have a number of choices for resolving query conflicts. The default
is that we wait and hope the query completes. The server will wait
automatically until the lag between primary and standby is at most
max_standby_delay> seconds. Once that grace period expires,
we take one of the following actions:
If the conflict is caused by a lock, we cancel the conflicting standby
transaction immediately. If the transaction is idle-in-transaction
then currently we abort the session instead, though this may change
in the future.
If the conflict is caused by cleanup records we tell the standby query
that a conflict has occurred and that it must cancel itself to avoid the
risk that it silently fails to read relevant data because
that data has been removed. (This is regrettably very similar to the
much feared and iconic error message "snapshot too old"). Some cleanup
records only cause conflict with older queries, though some types of
cleanup record affect all queries.
If cancellation does occur, the query and/or transaction can always
be re-executed. The error is dynamic and will not necessarily occur
the same way if the query is executed again.
max_standby_delay> is set in postgresql.conf>.
The parameter applies to the server as a whole so if the delay is all used
up by a single query then there may be little or no waiting for queries that
follow immediately, though they will have benefited equally from the initial
waiting period. The server may take time to catch up again before the grace
period is available again, though if there is a heavy and constant stream
of conflicts it may seldom catch up fully.
Users should be clear that tables that are regularly and heavily updated on
the primary server will quickly cause cancellation of longer-running queries on
the standby. In those cases max_standby_delay> can be
considered somewhat but not exactly the same as setting
statement_timeout>.
Other remedial actions exist if the number of cancellations is unacceptable.
The first option is to connect to primary server and keep a query active
for as long as we need to run queries on the standby. This guarantees that
a WAL cleanup record is never generated and we don't ever get query
conflicts as described above. This could be done using contrib/dblink
and pg_sleep(), or via other mechanisms. If you do this, you should note
that this will delay cleanup of dead rows by vacuum or HOT and many
people may find this undesirable. However, we should remember that
primary and standby nodes are linked via the WAL, so this situation is no
different to the case where we ran the query on the primary node itself
except we have the benefit of off-loading the execution onto the standby.
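A crude sketch of this approach, run against the primary (host and database names assumed), simply holds a query open for as long as the standby queries need:
# hold the primary's xmin horizon back for about an hour
psql -h primary -d mydb -c "SELECT pg_sleep(3600)"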
It is also possible to set vacuum_defer_cleanup_age> on the primary
to defer the cleanup of records by autovacuum, vacuum and HOT. This may allow
more time for queries to execute before they are cancelled on the standby,
without the need for setting a high max_standby_delay>.
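For example, in postgresql.conf> on the primary (the value, measured in transactions, is purely illustrative):
vacuum_defer_cleanup_age = 10000    # defer cleanup of the last ~10000 transactions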
Three-way deadlocks are possible between AccessExclusiveLocks arriving from
the primary, cleanup WAL records that require buffer cleanup locks and
user requests that are waiting behind replayed AccessExclusiveLocks. Deadlocks
are currently resolved by the cancellation of user processes that would
need to wait on a lock. This is heavy-handed and generates more query
cancellations than strictly necessary, though it does remove the possibility of deadlock.
This behaviour is expected to improve substantially for the main release
version of 8.5.
Dropping tablespaces or databases is discussed in the administrator's
section since they are not typical user situations.
Administrator's Overview
If there is a recovery.conf> file present the server will start
in Hot Standby mode by default, though recovery_connections> can
be disabled via postgresql.conf>, if required. The server may take
some time to enable recovery connections since the server must first complete
sufficient recovery to provide a consistent state against which queries
can run before enabling read-only connections. Look for these messages
in the server logs:
LOG: initializing recovery connections
... then some time later ...
LOG: consistent recovery state reached
LOG: database system is ready to accept read only connections
Consistency information is recorded once per checkpoint on the primary, as long
as recovery_connections> is enabled (on the primary). If this parameter
is disabled, it will not be possible to enable recovery connections on the standby.
The consistent state can also be delayed in the presence of both of these conditions:
a write transaction has more than 64 subtransactions
very long-lived write transactions
If you are running file-based log shipping ("warm standby"), you may need
to wait until the next WAL file arrives, which could be as long as the
archive_timeout> setting on the primary.
The setting of some parameters on the standby will need reconfiguration
if they have been changed on the primary. The value on the standby must
be equal to or greater than the value on the primary. If these parameters
are not set high enough then the standby will not be able to track work
correctly from recovering transactions. If these values are set too low,
the server will halt. Higher values can then be supplied and the server
restarted to begin recovery again.
max_connections>
max_prepared_transactions>
max_locks_per_transaction>
It is important that the administrator consider the appropriate setting
of max_standby_delay>, set in postgresql.conf>.
There is no optimal setting; it should be chosen according to business
priorities. For example, if the server is primarily tasked as a High
Availability server, then you may wish to lower
max_standby_delay> or even set it to zero, though that is a
very aggressive setting. If the standby server is tasked as an additional
server for decision support queries then it may be acceptable to set this
to a value of many hours (in seconds). It is also possible to set
max_standby_delay> to -1 which means wait forever for queries
to complete, if there are conflicts; this will be useful when performing
an archive recovery from a backup.
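By way of illustration only, a standby dedicated to decision-support queries might use a generous grace period in postgresql.conf>:
max_standby_delay = 3600    # seconds; 0 cancels conflicting queries at once,
                            # -1 waits forever for queries to complete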
Transaction status "hint bits" written on primary are not WAL-logged,
so data on standby will likely re-write the hints again on the standby.
Thus the main database blocks will produce write I/Os even though
all users are read-only; no changes have occurred to the data values
themselves. Users will be able to write large sort temp files and
re-generate relcache info files, so there is no part of the database
that is truly read-only during hot standby mode. There is no restriction
on the use of set returning functions, or other users of tuplestore/tuplesort
code. Note also that writes to remote databases will still be possible,
even though the transaction is read-only locally.
The following types of administrator command are not accepted
during recovery mode:
Data Definition Language (DDL) - e.g. CREATE INDEX
Privilege and Ownership - GRANT, REVOKE, REASSIGN
Maintenance commands - ANALYZE, VACUUM, CLUSTER, REINDEX
Note again that some of these commands are actually allowed during
"read only" mode transactions on the primary.
As a result, you cannot create additional indexes that exist solely
on the standby, nor can you create statistics that exist solely on the standby.
If these administrator commands are needed they should be executed
on the primary so that the changes will propagate through to the
standby.
pg_cancel_backend()> will work on user backends, but not the
Startup process, which performs recovery. pg_stat_activity does not
show an entry for the Startup process, nor do recovering transactions
show as active. As a result, pg_prepared_xacts is always empty during
recovery. If you wish to resolve in-doubt prepared transactions
then look at pg_prepared_xacts on the primary and issue commands to
resolve those transactions there.
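For example, on the primary:
SELECT gid, prepared, owner, database FROM pg_prepared_xacts;
-- then resolve each with COMMIT PREPARED 'gid' or ROLLBACK PREPARED 'gid'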
pg_locks will show locks held by backends as normal. pg_locks also shows
a virtual transaction managed by the Startup process that owns all
AccessExclusiveLocks held by transactions being replayed by recovery.
Note that the Startup process does not acquire locks to
make database changes, and thus locks other than AccessExclusiveLocks
do not show in pg_locks for the Startup process; they are just presumed
to exist.
check_pgsql> will work, but it is very simple.
check_postgres> will also work, though some actions
could give different or confusing results; for example, the last vacuum
time will not be maintained, since no
vacuum occurs on the standby (though vacuums running on the primary do
send their changes to the standby).
WAL file control commands will not work during recovery,
e.g. pg_start_backup>, pg_switch_xlog>, etc.
Dynamically loadable modules work, including pg_stat_statements.
Advisory locks work normally in recovery, including deadlock detection.
Note that advisory locks are never WAL logged, so it is not possible for
an advisory lock on either the primary or the standby to conflict with WAL
replay. Nor is it possible to acquire an advisory lock on the primary
and have it initiate a similar advisory lock on the standby. Advisory
locks relate only to a single server on which they are acquired.
Trigger-based replication systems such as Slony>,
Londiste> and Bucardo> won't run on the
standby at all, though they will run happily on the primary server as
long as the changes are not sent to standby servers to be applied.
WAL replay is not trigger-based so you cannot relay from the
standby to any system that requires additional database writes or
relies on the use of triggers.
New OIDs cannot be assigned, though some UUID> generators may still
work as long as they do not rely on writing new status to the database.
Currently, temp table creation is not allowed during read only
transactions, so in some cases existing scripts will not run correctly.
It is possible we may relax that restriction in a later release. This is
both a SQL Standard compliance issue and a technical issue.
DROP TABLESPACE> can only succeed if the tablespace is empty.
Some standby users may be actively using the tablespace via their
temp_tablespaces> parameter. If there are temp files in the
tablespace we currently cancel all active queries to ensure that temp
files are removed, so that we can remove the tablespace and continue with
WAL replay.
Running DROP DATABASE>, ALTER DATABASE ... SET TABLESPACE>,
or ALTER DATABASE ... RENAME> on primary will generate a log message
that will cause all users connected to that database on the standby to be
forcibly disconnected. This action occurs immediately, whatever the setting of
max_standby_delay>.
In normal running, if you issue DROP USER> or DROP ROLE>
for a role with login capability while that user is still connected then
nothing happens to the connected user - they remain connected. The user cannot
reconnect however. This behaviour applies in recovery also, so a
DROP USER> on the primary does not disconnect that user on the standby.
The statistics collector is active during recovery. All scans, reads, blocks,
index usage, etc. will be recorded normally on the standby. Replayed
actions will not duplicate their effects on primary, so replaying an
insert will not increment the Inserts column of pg_stat_user_tables.
The stats file is deleted at start of recovery, so stats from primary
and standby will differ; this is considered a feature not a bug.
Autovacuum is not active during recovery, though it will start normally
at the end of recovery.
Background writer is active during recovery and will perform
restartpoints (similar to checkpoints on primary) and normal block
cleaning activities. The CHECKPOINT> command is accepted during recovery,
though it performs a restartpoint rather than a new checkpoint.
Hot Standby Parameter Reference
Various parameters have been mentioned above in the
and sections.
On the primary, parameters recovery_connections> and
vacuum_defer_cleanup_age> can be used to enable and control the
primary server to assist the successful configuration of Hot Standby servers.
max_standby_delay> has no effect if set on the primary.
On the standby, parameters recovery_connections> and
max_standby_delay> can be used to enable and control
Hot Standby on the standby server.
vacuum_defer_cleanup_age> has no effect during recovery.
Caveats
At this writing, there are several limitations of Hot Standby.
These can and probably will be fixed in future releases:
Operations on hash indexes are not presently WAL-logged, so
replay will not update these indexes. Hash indexes will not be
used for query plans during recovery.
Full knowledge of running transactions is required before snapshots
may be taken. Transactions that use large numbers of subtransactions
(currently more than 64) will delay the start of read-only
connections until the completion of the longest-running write transaction.
If this situation occurs, explanatory messages will be sent to the server log.
Valid starting points for recovery connections are generated at each
checkpoint on the master. If the standby is shut down while the master
is in a shutdown state, it may not be possible to re-enter Hot Standby
until the primary is started up so that it generates further starting
points in the WAL logs. This is not considered a serious issue
because the standby is usually switched into the primary role while
the first node is taken down.
At the end of recovery, AccessExclusiveLocks held by prepared transactions
will require twice the normal number of lock table entries. If you plan
on running either a large number of concurrent prepared transactions
that normally take AccessExclusiveLocks, or you plan on having one
large transaction that takes many AccessExclusiveLocks then you are
advised to select a larger value of max_locks_per_transaction>,
up to, but never more than, twice the value of the parameter setting on
the primary server. You need not consider this at all if
your setting of max_prepared_transactions> is 0>.
Migration Between Releases
This section discusses how to migrate your database data from one
PostgreSQL> release to a newer one.
The software installation procedure per se> is not the
subject of this section; those details are in .
As a general rule, the internal data storage format is subject to
change between major releases of PostgreSQL> (where
the number after the first dot changes). This does not apply to
different minor releases under the same major release (where the
number after the second dot changes); these always have compatible
storage formats. For example, releases 8.1.1, 8.2.3, and 8.3 are
not compatible, whereas 8.2.3 and 8.2.4 are. When you update
between compatible versions, you can simply replace the executables
and reuse the data directory on disk. Otherwise you need to back
up your data and restore it on the new server. This has to be done
using pg_dump>; file system level backup methods
obviously won't work. There are checks in place that prevent you
from using a data directory with an incompatible version of
PostgreSQL, so no great harm can be done by
trying to start the wrong server version on a data directory.
It is recommended that you use the pg_dump> and
pg_dumpall> programs from the newer version of
PostgreSQL>, to take advantage of any enhancements
that might have been made in these programs. Current releases of the
dump programs can read data from any server version back to 7.0.
The least downtime can be achieved by installing the new server in
a different directory and running both the old and the new servers
in parallel, on different ports. Then you can use something like:
pg_dumpall -p 5432 | psql -d postgres -p 6543
to transfer your data. Or use an intermediate file if you want.
Then you can shut down the old server and start the new server at
the port the old one was running at. You should make sure that the
old database is not updated after you begin to run
pg_dumpall>, otherwise you will lose that data. See for information on how to prohibit
access.
It is also possible to use replication methods, such as
Slony>, to create a slave server with the updated version of
PostgreSQL>. The slave can be on the same computer or
a different computer. Once it has synced up with the master server
(running the older version of PostgreSQL>), you can
switch masters and make the slave the master and shut down the older
database instance. Such a switch-over results in only several seconds
of downtime for an upgrade.
If you cannot or do not want to run two servers in parallel, you can
do the backup step before installing the new version, bring down
the server, move the old version out of the way, install the new
version, start the new server, and restore the data. For example:
pg_dumpall > backup
pg_ctl stop
mv /usr/local/pgsql /usr/local/pgsql.old
cd ~/postgresql-&version;
gmake install
initdb -D /usr/local/pgsql/data
postgres -D /usr/local/pgsql/data
psql -f backup postgres
See about ways to start and stop the
server and other details. The installation instructions will advise
you of strategic places to perform these steps.
When you move the old installation out of the way
it might no longer be perfectly usable. Some of the executable programs
contain absolute paths to various installed programs and data files.
This is usually not a big problem, but if you plan on using two
installations in parallel for a while you should assign them
different installation directories at build time. (This problem
is rectified in PostgreSQL> 8.0 and later, so long
as you move all subdirectories containing installed files together;
for example if /usr/local/postgres/bin/> goes to
/usr/local/postgres.old/bin/>, then
/usr/local/postgres/share/> must go to
/usr/local/postgres.old/share/>. In pre-8.0 releases
moving an installation like this will not work.)
In practice you probably want to test your client applications on the
new version before switching over completely. This is another reason
for setting up concurrent installations of old and new versions. When
testing a PostgreSQL> major upgrade, consider the
following categories of possible changes:
Administration
The capabilities available for administrators to monitor and control
the server often change and improve in each major release.
SQL
Typically this includes new SQL command capabilities and not changes
in behavior, unless specifically mentioned in the release notes.
Library API
Typically libraries like libpq> only add new
functionality, again unless mentioned in the release notes.
System Catalogs
System catalog changes usually only affect database management tools.
Server C-language API
This involves changes to the backend function API, which is written
in the C programming language. Such changes affect code that
references backend functions deep inside the server.