1 <!-- $PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.89 2006/10/02 22:33:02 momjian Exp $ -->
4 <title>Backup and Restore</title>
6 <indexterm zone="backup"><primary>backup</></>
9 As with everything that contains valuable data, <productname>PostgreSQL</>
10 databases should be backed up regularly. While the procedure is
11 essentially simple, it is important to have a basic understanding of
12 the underlying techniques and assumptions.
16 There are three fundamentally different approaches to backing up
17 <productname>PostgreSQL</> data:
19 <listitem><para><acronym>SQL</> dump</para></listitem>
20 <listitem><para>File system level backup</para></listitem>
21 <listitem><para>Continuous Archiving</para></listitem>
23 Each has its own strengths and weaknesses.
26 <sect1 id="backup-dump">
27 <title><acronym>SQL</> Dump</title>
30 The idea behind the SQL-dump method is to generate a text file with SQL
31 commands that, when fed back to the server, will recreate the
32 database in the same state as it was at the time of the dump.
33 <productname>PostgreSQL</> provides the utility program
34 <xref linkend="app-pgdump"> for this purpose. The basic usage of this
37 pg_dump <replaceable class="parameter">dbname</replaceable> > <replaceable class="parameter">outfile</replaceable>
39 As you see, <application>pg_dump</> writes its results to the
40 standard output. We will see below how this can be useful.
44 <application>pg_dump</> is a regular <productname>PostgreSQL</>
45 client application (albeit a particularly clever one). This means
46 that you can do this backup procedure from any remote host that has
47 access to the database. But remember that <application>pg_dump</>
48 does not operate with special permissions. In particular, it must
49 have read access to all tables that you want to back up, so in
50 practice you almost always have to run it as a database superuser.
54 To specify which database server <application>pg_dump</> should
55 contact, use the command line options <option>-h
56 <replaceable>host</></> and <option>-p <replaceable>port</></>. The
57 default host is the local host or whatever your
58 <envar>PGHOST</envar> environment variable specifies. Similarly,
59 the default port is indicated by the <envar>PGPORT</envar>
60 environment variable or, failing that, by the compiled-in default.
(Conveniently, the server will normally have the same compiled-in default.)
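For example, a dump taken over the network from a remote server might look
like this (the host and database names are illustrative):
<programlisting>
pg_dump -h db.example.com -p 5432 mydb > mydb.sql
</programlisting>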
As with any other <productname>PostgreSQL</> client application,
67 <application>pg_dump</> will by default connect with the database
68 user name that is equal to the current operating system user name. To override
69 this, either specify the <option>-U</option> option or set the
70 environment variable <envar>PGUSER</envar>. Remember that
71 <application>pg_dump</> connections are subject to the normal
72 client authentication mechanisms (which are described in <xref
73 linkend="client-authentication">).
77 Dumps created by <application>pg_dump</> are internally consistent,
78 that is, updates to the database while <application>pg_dump</> is
79 running will not be in the dump. <application>pg_dump</> does not
80 block other operations on the database while it is working.
81 (Exceptions are those operations that need to operate with an
82 exclusive lock, such as <command>VACUUM FULL</command>.)
87 If your database schema relies on OIDs (for instance as foreign
88 keys) you must instruct <application>pg_dump</> to dump the OIDs
as well. To do this, use the <option>-o</option> command line option.
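For example:
<programlisting>
pg_dump -o <replaceable class="parameter">dbname</replaceable> > <replaceable class="parameter">outfile</replaceable>
</programlisting>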
94 <sect2 id="backup-dump-restore">
95 <title>Restoring the dump</title>
98 The text files created by <application>pg_dump</> are intended to
99 be read in by the <application>psql</application> program. The
100 general command form to restore a dump is
102 psql <replaceable class="parameter">dbname</replaceable> < <replaceable class="parameter">infile</replaceable>
104 where <replaceable class="parameter">infile</replaceable> is what
105 you used as <replaceable class="parameter">outfile</replaceable>
106 for the <application>pg_dump</> command. The database <replaceable
107 class="parameter">dbname</replaceable> will not be created by this
108 command, so you must create it yourself from <literal>template0</>
109 before executing <application>psql</> (e.g., with
110 <literal>createdb -T template0 <replaceable
111 class="parameter">dbname</></literal>). <application>psql</>
112 supports similar options to <application>pg_dump</> for specifying
113 the database server to connect to and the user name to use. See
114 the <xref linkend="app-psql"> reference page for more information.
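For example, to recreate the database from <literal>template0</> and then
restore the dump into it:
<programlisting>
createdb -T template0 <replaceable class="parameter">dbname</replaceable>
psql <replaceable class="parameter">dbname</replaceable> < <replaceable class="parameter">infile</replaceable>
</programlisting>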
118 Before restoring a SQL dump, all the users who own objects or were
119 granted permissions on objects in the dumped database must already
120 exist. If they do not, then the restore will fail to recreate the
121 objects with the original ownership and/or permissions.
122 (Sometimes this is what you want, but usually it is not.)
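One way to meet this requirement is to restore cluster-wide definitions
first; for example, they can be dumped with <application>pg_dumpall</>'s
<option>-g</> (globals only) option:
<programlisting>
pg_dumpall -g > globals.sql
</programlisting>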
126 By default, the <application>psql</> script will continue to
127 execute after an SQL error is encountered. You may wish to use the
128 following command at the top of the script to alter that
behavior and have <application>psql</application> exit with an
130 exit status of 3 if an SQL error occurs:
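<programlisting>
\set ON_ERROR_STOP on
</programlisting>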
134 Either way, you will only have a partially restored
135 dump. Alternatively, you can specify that the whole dump should be
136 restored as a single transaction, so the restore is either fully
137 completed or fully rolled back. This mode can be specified by
138 passing the <option>-1</> or <option>--single-transaction</>
139 command-line options to <application>psql</>. When using this
mode, be aware that even the smallest of errors can roll back a
141 restore that has already run for many hours. However, that may
142 still be preferable to manually cleaning up a complex database
143 after a partially restored dump.
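For example:
<programlisting>
psql --single-transaction <replaceable class="parameter">dbname</replaceable> < <replaceable class="parameter">infile</replaceable>
</programlisting>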
147 The ability of <application>pg_dump</> and <application>psql</> to
148 write to or read from pipes makes it possible to dump a database
149 directly from one server to another; for example:
151 pg_dump -h <replaceable>host1</> <replaceable>dbname</> | psql -h <replaceable>host2</> <replaceable>dbname</>
157 The dumps produced by <application>pg_dump</> are relative to
158 <literal>template0</>. This means that any languages, procedures,
159 etc. added to <literal>template1</> will also be dumped by
160 <application>pg_dump</>. As a result, when restoring, if you are
161 using a customized <literal>template1</>, you must create the
empty database from <literal>template0</>, as in the example above.
168 After restoring a backup, it is wise to run <xref
169 linkend="sql-analyze" endterm="sql-analyze-title"> on each
170 database so the query optimizer has useful statistics. An easy way
171 to do this is to run <command>vacuumdb -a -z</>; this is
172 equivalent to running <command>VACUUM ANALYZE</> on each database
173 manually. For more advice on how to load large amounts of data
into <productname>PostgreSQL</> efficiently, refer to <xref linkend="populate">.
179 <sect2 id="backup-dump-all">
180 <title>Using <application>pg_dumpall</></title>
183 The above mechanism is cumbersome and inappropriate when backing
184 up an entire database cluster. For this reason the <xref
185 linkend="app-pg-dumpall"> program is provided.
186 <application>pg_dumpall</> backs up each database in a given
187 cluster, and also preserves cluster-wide data such as users and
188 groups. The basic usage of this command is:
190 pg_dumpall > <replaceable>outfile</>
192 The resulting dump can be restored with <application>psql</>:
194 psql -f <replaceable class="parameter">infile</replaceable> postgres
196 (Actually, you can specify any existing database name to start from,
197 but if you are reloading in an empty cluster then <literal>postgres</>
198 should generally be used.) It is always necessary to have
199 database superuser access when restoring a <application>pg_dumpall</>
200 dump, as that is required to restore the user and group information.
204 <sect2 id="backup-dump-large">
205 <title>Handling large databases</title>
208 Since <productname>PostgreSQL</productname> allows tables larger
209 than the maximum file size on your system, it can be problematic
210 to dump such a table to a file, since the resulting file will likely
211 be larger than the maximum size allowed by your system. Since
212 <application>pg_dump</> can write to the standard output, you can
213 just use standard Unix tools to work around this possible problem.
217 <title>Use compressed dumps.</title>
219 You can use your favorite compression program, for example
220 <application>gzip</application>.
223 pg_dump <replaceable class="parameter">dbname</replaceable> | gzip > <replaceable class="parameter">filename</replaceable>.gz
Reload with:
createdb <replaceable class="parameter">dbname</replaceable>
230 gunzip -c <replaceable class="parameter">filename</replaceable>.gz | psql <replaceable class="parameter">dbname</replaceable>
or:
cat <replaceable class="parameter">filename</replaceable>.gz | gunzip | psql <replaceable class="parameter">dbname</replaceable>
242 <title>Use <command>split</>.</title>
244 The <command>split</command> command
245 allows you to split the output into pieces that are
246 acceptable in size to the underlying file system. For example, to
247 make chunks of 1 megabyte:
250 pg_dump <replaceable class="parameter">dbname</replaceable> | split -b 1m - <replaceable class="parameter">filename</replaceable>
Reload with:
createdb <replaceable class="parameter">dbname</replaceable>
257 cat <replaceable class="parameter">filename</replaceable>* | psql <replaceable class="parameter">dbname</replaceable>
263 <title>Use the custom dump format.</title>
265 If <productname>PostgreSQL</productname> was built on a system with the
266 <application>zlib</> compression library installed, the custom dump
267 format will compress data as it writes it to the output file. This will
268 produce dump file sizes similar to using <command>gzip</command>, but it
269 has the added advantage that tables can be restored selectively. The
270 following command dumps a database using the custom dump format:
273 pg_dump -Fc <replaceable class="parameter">dbname</replaceable> > <replaceable class="parameter">filename</replaceable>
276 A custom-format dump is not a script for <application>psql</>, but
277 instead must be restored with <application>pg_restore</>.
278 See the <xref linkend="app-pgdump"> and <xref
279 linkend="app-pgrestore"> reference pages for details.
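For example, a custom-format dump can be restored into a prepared database
along these lines:
<programlisting>
pg_restore -d <replaceable class="parameter">dbname</replaceable> <replaceable class="parameter">filename</replaceable>
</programlisting>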
286 <sect1 id="backup-file">
287 <title>File system level backup</title>
290 An alternative backup strategy is to directly copy the files that
291 <productname>PostgreSQL</> uses to store the data in the database. In
292 <xref linkend="creating-cluster"> it is explained where these files
293 are located, but you have probably found them already if you are
interested in this method. You can use whatever method you prefer
for making ordinary file system backups; for example:
298 tar -cf backup.tar /usr/local/pgsql/data
303 There are two restrictions, however, which make this method
impractical, or at least inferior to the <application>pg_dump</> method:
310 The database server <emphasis>must</> be shut down in order to
311 get a usable backup. Half-way measures such as disallowing all
312 connections will <emphasis>not</emphasis> work
313 (mainly because <command>tar</command> and similar tools do not take an
314 atomic snapshot of the state of the file system at a point in
315 time). Information about stopping the server can be found in
<xref linkend="server-shutdown">. Needless to say, you
317 also need to shut down the server before restoring the data.
323 If you have dug into the details of the file system layout of the
324 database, you may be tempted to try to back up or restore only certain
325 individual tables or databases from their respective files or
directories. This will <emphasis>not</> work because these
files contain only half the truth. The other half is in the commit log files
329 <filename>pg_clog/*</filename>, which contain the commit status of
330 all transactions. A table file is only usable with this
331 information. Of course it is also impossible to restore only a
332 table and the associated <filename>pg_clog</filename> data
333 because that would render all other tables in the database
334 cluster useless. So file system backups only work for complete
335 restoration of an entire database cluster.
342 An alternative file-system backup approach is to make a
343 <quote>consistent snapshot</quote> of the data directory, if the
344 file system supports that functionality (and you are willing to
345 trust that it is implemented correctly). The typical procedure is
346 to make a <quote>frozen snapshot</> of the volume containing the
347 database, then copy the whole data directory (not just parts, see
348 above) from the snapshot to a backup device, then release the frozen
349 snapshot. This will work even while the database server is running.
350 However, a backup created in this way saves
351 the database files in a state where the database server was not
352 properly shut down; therefore, when you start the database server
353 on the backed-up data, it will think the server had crashed
354 and replay the WAL log. This is not a problem, just be aware of
355 it (and be sure to include the WAL files in your backup).
359 If your database is spread across multiple file systems, there may not
360 be any way to obtain exactly-simultaneous frozen snapshots of all
361 the volumes. For example, if your data files and WAL log are on different
362 disks, or if tablespaces are on different file systems, it might
not be possible to use snapshot backup because the snapshots must be simultaneous.
Read your file system documentation very carefully before trusting
the consistent-snapshot technique in such situations. The safest
367 approach is to shut down the database server for long enough to
368 establish all the frozen snapshots.
372 Another option is to use <application>rsync</> to perform a file
373 system backup. This is done by first running <application>rsync</>
374 while the database server is running, then shutting down the database
375 server just long enough to do a second <application>rsync</>. The
376 second <application>rsync</> will be much quicker than the first,
377 because it has relatively little data to transfer, and the end result
378 will be consistent because the server was down. This method
379 allows a file system backup to be performed with minimal downtime.
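A minimal sketch of this approach, assuming the default data directory
location and an illustrative backup target:
<programlisting>
rsync -a /usr/local/pgsql/data/ /backup/pgsql/data/
pg_ctl stop
rsync -a /usr/local/pgsql/data/ /backup/pgsql/data/
pg_ctl start
</programlisting>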
383 Note that a file system backup will not necessarily be
384 smaller than an SQL dump. On the contrary, it will most likely be
385 larger. (<application>pg_dump</application> does not need to dump
the contents of indexes, for example, just the commands to recreate them.)
391 <sect1 id="continuous-archiving">
392 <title>Continuous Archiving and Point-In-Time Recovery (PITR)</title>
394 <indexterm zone="backup">
395 <primary>continuous archiving</primary>
398 <indexterm zone="backup">
399 <primary>point-in-time recovery</primary>
402 <indexterm zone="backup">
403 <primary>PITR</primary>
407 At all times, <productname>PostgreSQL</> maintains a
408 <firstterm>write ahead log</> (WAL) in the <filename>pg_xlog/</>
409 subdirectory of the cluster's data directory. The log describes
410 every change made to the database's data files. This log exists
411 primarily for crash-safety purposes: if the system crashes, the
412 database can be restored to consistency by <quote>replaying</> the
413 log entries made since the last checkpoint. However, the existence
414 of the log makes it possible to use a third strategy for backing up
415 databases: we can combine a file-system-level backup with backup of
416 the WAL files. If recovery is needed, we restore the backup and
417 then replay from the backed-up WAL files to bring the backup up to
418 current time. This approach is more complex to administer than
either of the previous approaches, but it has some significant benefits:
424 We do not need a perfectly consistent backup as the starting point.
425 Any internal inconsistency in the backup will be corrected by log
426 replay (this is not significantly different from what happens during
427 crash recovery). So we don't need file system snapshot capability,
428 just <application>tar</> or a similar archiving tool.
433 Since we can string together an indefinitely long sequence of WAL files
434 for replay, continuous backup can be achieved simply by continuing to archive
435 the WAL files. This is particularly valuable for large databases, where
436 it may not be convenient to take a full backup frequently.
441 There is nothing that says we have to replay the WAL entries all the
442 way to the end. We could stop the replay at any point and have a
443 consistent snapshot of the database as it was at that time. Thus,
444 this technique supports <firstterm>point-in-time recovery</>: it is
possible to restore the database to its state at any time since your base backup was taken.
451 If we continuously feed the series of WAL files to another
452 machine that has been loaded with the same base backup file, we
453 have a <quote>hot standby</> system: at any point we can bring up
the second machine and it will have a nearly-current copy of the database.
462 As with the plain file-system-backup technique, this method can only
463 support restoration of an entire database cluster, not a subset.
464 Also, it requires a lot of archival storage: the base backup may be bulky,
465 and a busy system will generate many megabytes of WAL traffic that
466 have to be archived. Still, it is the preferred backup technique in
467 many situations where high reliability is needed.
To recover successfully using continuous archiving (also called
<quote>online backup</> by many database vendors), you need a continuous
473 sequence of archived WAL files that extends back at least as far as the
474 start time of your backup. So to get started, you should set up and test
475 your procedure for archiving WAL files <emphasis>before</> you take your
first base backup. Accordingly, we first discuss the mechanics of archiving WAL files.
480 <sect2 id="backup-archiving-wal">
481 <title>Setting up WAL archiving</title>
484 In an abstract sense, a running <productname>PostgreSQL</> system
485 produces an indefinitely long sequence of WAL records. The system
486 physically divides this sequence into WAL <firstterm>segment
487 files</>, which are normally 16MB apiece (although the size can be
488 altered when building <productname>PostgreSQL</>). The segment
489 files are given numeric names that reflect their position in the
490 abstract WAL sequence. When not using WAL archiving, the system
491 normally creates just a few segment files and then
492 <quote>recycles</> them by renaming no-longer-needed segment files
493 to higher segment numbers. It's assumed that a segment file whose
494 contents precede the checkpoint-before-last is no longer of
495 interest and can be recycled.
499 When archiving WAL data, we want to capture the contents of each segment
500 file once it is filled, and save that data somewhere before the segment
501 file is recycled for reuse. Depending on the application and the
502 available hardware, there could be many different ways of <quote>saving
503 the data somewhere</>: we could copy the segment files to an NFS-mounted
504 directory on another machine, write them onto a tape drive (ensuring that
505 you have a way of restoring the file with its original file name), or batch
506 them together and burn them onto CDs, or something else entirely. To
507 provide the database administrator with as much flexibility as possible,
508 <productname>PostgreSQL</> tries not to make any assumptions about how
509 the archiving will be done. Instead, <productname>PostgreSQL</> lets
510 the administrator specify a shell command to be executed to copy a
511 completed segment file to wherever it needs to go. The command could be
512 as simple as a <literal>cp</>, or it could invoke a complex shell
513 script — it's all up to you.
517 The shell command to use is specified by the <xref
518 linkend="guc-archive-command"> configuration parameter, which in practice
519 will always be placed in the <filename>postgresql.conf</filename> file.
In this string, any <literal>%p</> is replaced by the absolute path of the file to
522 archive, while any <literal>%f</> is replaced by the file name only.
523 Write <literal>%%</> if you need to embed an actual <literal>%</>
character in the command. The simplest useful command is something like:
527 archive_command = 'cp -i %p /mnt/server/archivedir/%f </dev/null'
529 which will copy archivable WAL segments to the directory
530 <filename>/mnt/server/archivedir</>. (This is an example, not a
531 recommendation, and may not work on all platforms.)
535 The archive command will be executed under the ownership of the same
536 user that the <productname>PostgreSQL</> server is running as. Since
537 the series of WAL files being archived contains effectively everything
538 in your database, you will want to be sure that the archived data is
539 protected from prying eyes; for example, archive into a directory that
540 does not have group or world read access.
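For example, the archive directory might be prepared like this (path
illustrative; run as the operating system user the server runs as):
<programlisting>
mkdir /mnt/server/archivedir
chmod 700 /mnt/server/archivedir
</programlisting>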
544 It is important that the archive command return zero exit status if and
545 only if it succeeded. Upon getting a zero result,
546 <productname>PostgreSQL</> will assume that the WAL segment file has been
547 successfully archived, and will remove or recycle it.
548 However, a nonzero status tells
549 <productname>PostgreSQL</> that the file was not archived; it will try
550 again periodically until it succeeds.
554 The archive command should generally be designed to refuse to overwrite
555 any pre-existing archive file. This is an important safety feature to
556 preserve the integrity of your archive in case of administrator error
(such as sending the output of two different servers to the same archive directory).
559 It is advisable to test your proposed archive command to ensure that it
560 indeed does not overwrite an existing file, <emphasis>and that it returns
561 nonzero status in this case</>. We have found that <literal>cp -i</> does
562 this correctly on some platforms but not others. If the chosen command
563 does not itself handle this case correctly, you should add a command
to test for pre-existence of the archive file. For example, something like:
567 archive_command = 'test ! -f .../%f && cp %p .../%f'
569 works correctly on most Unix variants.
573 While designing your archiving setup, consider what will happen if
574 the archive command fails repeatedly because some aspect requires
575 operator intervention or the archive runs out of space. For example, this
576 could occur if you write to tape without an autochanger; when the tape
577 fills, nothing further can be archived until the tape is swapped.
578 You should ensure that any error condition or request to a human operator
579 is reported appropriately so that the situation can be
580 resolved relatively quickly. The <filename>pg_xlog/</> directory will
581 continue to fill with WAL segment files until the situation is resolved.
585 The speed of the archiving command is not important, so long as it can keep up
586 with the average rate at which your server generates WAL data. Normal
587 operation continues even if the archiving process falls a little behind.
588 If archiving falls significantly behind, this will increase the amount of
589 data that would be lost in the event of a disaster. It will also mean that
590 the <filename>pg_xlog/</> directory will contain large numbers of
591 not-yet-archived segment files, which could eventually exceed available
592 disk space. You are advised to monitor the archiving process to ensure that
593 it is working as you intend.
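One crude check, assuming the default data directory location, is to count
the segments still waiting to be archived:
<programlisting>
ls /usr/local/pgsql/data/pg_xlog/archive_status | grep -c '\.ready$'
</programlisting>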
597 In writing your archive command, you should assume that the file names to
598 be archived may be up to 64 characters long and may contain any
599 combination of ASCII letters, digits, and dots. It is not necessary to
600 remember the original full path (<literal>%p</>) but it is necessary to
601 remember the file name (<literal>%f</>).
605 Note that although WAL archiving will allow you to restore any
modifications made to the data in your <productname>PostgreSQL</> database,
it will not restore changes made to configuration files (that is,
608 <filename>postgresql.conf</>, <filename>pg_hba.conf</> and
609 <filename>pg_ident.conf</>), since those are edited manually rather
610 than through SQL operations.
611 You may wish to keep the configuration files in a location that will
612 be backed up by your regular file system backup procedures. See
<xref linkend="runtime-config-file-locations"> for how to relocate the configuration files.
618 The archive command is only invoked on completed WAL segments. Hence,
if your server generates little WAL traffic (or has slack periods
620 where it does so), there could be a long delay between the completion
621 of a transaction and its safe recording in archive storage. To put
622 a limit on how old unarchived data can be, you can set
623 <xref linkend="guc-archive-timeout"> to force the server to switch
624 to a new WAL segment file at least that often. Note that archived
625 files that are ended early due to a forced switch are still the same
626 length as completely full files. It is therefore unwise to set a very
627 short <varname>archive_timeout</> — it will bloat your archive
storage. <varname>archive_timeout</> settings of a minute or so are usually reasonable.
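For example (the value is in seconds and merely illustrative):
<programlisting>
archive_timeout = 60
</programlisting>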
633 Also, you can force a segment switch manually with
634 <function>pg_switch_xlog</>, if you want to ensure that a
635 just-finished transaction is archived immediately. Other utility
636 functions related to WAL management are listed in <xref
637 linkend="functions-admin-backup-table">.
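For example:
<programlisting>
SELECT pg_switch_xlog();
</programlisting>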
641 <sect2 id="backup-base-backup">
642 <title>Making a Base Backup</title>
645 The procedure for making a base backup is relatively simple:
649 Ensure that WAL archiving is enabled and working.
654 Connect to the database as a superuser, and issue the command
656 SELECT pg_start_backup('label');
658 where <literal>label</> is any string you want to use to uniquely
659 identify this backup operation. (One good practice is to use the
660 full path where you intend to put the backup dump file.)
661 <function>pg_start_backup</> creates a <firstterm>backup label</> file,
662 called <filename>backup_label</>, in the cluster directory with
663 information about your backup.
It does not matter which database within the cluster you connect to when
issuing this command. You can ignore the result returned by the function;
669 but if it reports an error, deal with that before proceeding.
674 Perform the backup, using any convenient file-system-backup tool
675 such as <application>tar</> or <application>cpio</>. It is neither
necessary nor desirable to stop normal operation of the database while you do this.
682 Again connect to the database as a superuser, and issue the command
684 SELECT pg_stop_backup();
686 This should return successfully; however, the backup is not yet fully
687 valid. An automatic switch to the next WAL segment occurs, so all
WAL segment files that relate to the backup will now be marked ready for archiving.
694 Once the WAL segment files used during the backup are archived, you are
695 done. The file identified by <function>pg_stop_backup</>'s result is
696 the last segment that needs to be archived to complete the backup.
697 Archival of these files will happen automatically, since you have
698 already configured <varname>archive_command</>. In many cases, this
699 happens fairly quickly, but you are advised to monitor your archival
system to ensure this has taken place so that you can be certain you have a complete backup.
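Putting the steps together, a minimal base backup session might look like
this (the label and paths are illustrative):
<programlisting>
psql -c "SELECT pg_start_backup('/mnt/server/backups/base-1');" postgres
tar -cf /mnt/server/backups/base-1.tar /usr/local/pgsql/data
psql -c "SELECT pg_stop_backup();" postgres
</programlisting>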
708 Some backup tools that you might wish to use emit warnings or errors
709 if the files they are trying to copy change while the copy proceeds.
710 This situation is normal, and not an error, when taking a base backup of
711 an active database; so you need to ensure that you can distinguish
712 complaints of this sort from real errors. For example, some versions
713 of <application>rsync</> return a separate exit code for <quote>vanished
714 source files</>, and you can write a driver script to accept this exit
715 code as a non-error case. Also,
716 some versions of GNU <application>tar</> consider it an error if a file
717 is changed while <application>tar</> is copying it. There does not seem
718 to be any very convenient way to distinguish this error from other types
719 of errors, other than manual inspection of <application>tar</>'s messages.
GNU <application>tar</> is therefore not the best tool for making base backups.
725 It is not necessary to be very concerned about the amount of time elapsed
726 between <function>pg_start_backup</> and the start of the actual backup,
727 nor between the end of the backup and <function>pg_stop_backup</>; a
728 few minutes' delay won't hurt anything. However, if you normally run the
729 server with <varname>full_page_writes</> disabled, you may notice a drop
730 in performance between <function>pg_start_backup</> and
731 <function>pg_stop_backup</>. You must ensure that these backup operations
732 are carried out in sequence without any possible overlap, or you will
733 invalidate the backup.
740 Be certain that your backup dump includes all of the files underneath
741 the database cluster directory (e.g., <filename>/usr/local/pgsql/data</>).
742 If you are using tablespaces that do not reside underneath this directory,
743 be careful to include them as well (and be sure that your backup dump
archives symbolic links as links, otherwise the restore will mess up your tablespaces).
749 You may, however, omit from the backup dump the files within the
750 <filename>pg_xlog/</> subdirectory of the cluster directory. This
751 slight complication is worthwhile because it reduces the risk
752 of mistakes when restoring. This is easy to arrange if
753 <filename>pg_xlog/</> is a symbolic link pointing to someplace outside
the cluster directory, which is a common setup anyway for performance reasons.
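For example, with GNU <application>tar</> the subdirectory can be excluded
like this (assuming the default data directory location):
<programlisting>
tar -cf backup.tar --exclude=pg_xlog /usr/local/pgsql/data
</programlisting>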
759 To make use of this backup, you will need to keep around all the WAL
760 segment files generated during and after the file system backup.
761 To aid you in doing this, the <function>pg_stop_backup</> function
762 creates a <firstterm>backup history file</> that is immediately
763 stored into the WAL archive area. This file is named after the first
764 WAL segment file that you need to have to make use of the backup.
765 For example, if the starting WAL file is
766 <literal>0000000100001234000055CD</> the backup history file will be
named something like <literal>0000000100001234000055CD.007C9330.backup</>. (The second
769 number in the file name stands for an exact position within the WAL
770 file, and can ordinarily be ignored.) Once you have safely archived
771 the file system backup and the WAL segment files used during the
772 backup (as specified in the backup history file), all archived WAL
segments with names numerically less than that are no longer needed to recover
774 the file system backup and may be deleted. However, you should
775 consider keeping several backup sets to be absolutely certain that
776 you can recover your data.
780 The backup history file is just a small text file. It contains the
781 label string you gave to <function>pg_start_backup</>, as well as
782 the starting and ending times and WAL segments of the backup.
783 If you used the label to identify where the associated dump file is kept,
784 then the archived history file is enough to tell you which dump file to
785 restore, should you need to do so.
789 Since you have to keep around all the archived WAL files back to your
790 last base backup, the interval between base backups should usually be
791 chosen based on how much storage you want to expend on archived WAL
792 files. You should also consider how long you are prepared to spend
793 recovering, if recovery should be necessary — the system will have to
replay all those WAL segments, and that could take a while if it has
795 been a long time since the last base backup.
799 It's also worth noting that the <function>pg_start_backup</> function
800 makes a file named <filename>backup_label</> in the database cluster
801 directory, which is then removed again by <function>pg_stop_backup</>.
802 This file will of course be archived as a part of your backup dump file.
803 The backup label file includes the label string you gave to
804 <function>pg_start_backup</>, as well as the time at which
805 <function>pg_start_backup</> was run, and the name of the starting WAL
806 file. In case of confusion it will
807 therefore be possible to look inside a backup dump file and determine
808 exactly which backup session the dump file came from.
812 It is also possible to make a backup dump while the server is
813 stopped. In this case, you obviously cannot use
814 <function>pg_start_backup</> or <function>pg_stop_backup</>, and
815 you will therefore be left to your own devices to keep track of which
816 backup dump is which and how far back the associated WAL files go.
817 It is generally better to follow the continuous archiving procedure above.
821 <sect2 id="backup-pitr-recovery">
822 <title>Recovering using a Continuous Archive Backup</title>
825 Okay, the worst has happened and you need to recover from your backup.
826 Here is the procedure:
830 Stop the server, if it's running.
835 If you have the space to do so,
836 copy the whole cluster data directory and any tablespaces to a temporary
837 location in case you need them later. Note that this precaution will
838 require that you have enough free space on your system to hold two
839 copies of your existing database. If you do not have enough space,
you need at least to copy the contents of the <filename>pg_xlog</>
841 subdirectory of the cluster data directory, as it may contain logs which
842 were not archived before the system went down.
847 Clean out all existing files and subdirectories under the cluster data
848 directory and under the root directories of any tablespaces you are using.
853 Restore the database files from your backup dump. Be careful that they
854 are restored with the right ownership (the database system user, not
855 root!) and with the right permissions. If you are using tablespaces,
856 you may want to verify that the symbolic links in <filename>pg_tblspc/</>
857 were correctly restored.
862 Remove any files present in <filename>pg_xlog/</>; these came from the
863 backup dump and are therefore probably obsolete rather than current.
864 If you didn't archive <filename>pg_xlog/</> at all, then re-create it,
865 and be sure to re-create the subdirectory
866 <filename>pg_xlog/archive_status/</> as well.
871 If you had unarchived WAL segment files that you saved in step 2,
872 copy them into <filename>pg_xlog/</>. (It is best to copy them,
873 not move them, so that you still have the unmodified files if a
874 problem occurs and you have to start over.)
879 Create a recovery command file <filename>recovery.conf</> in the cluster
880 data directory (see <xref linkend="recovery-config-settings">). You may
881 also want to temporarily modify <filename>pg_hba.conf</> to prevent
882 ordinary users from connecting until you are sure the recovery has worked.
887 Start the server. The server will go into recovery mode and
888 proceed to read through the archived WAL files it needs. Should the
889 recovery be terminated because of an external error, the server can
890 simply be restarted and it will continue recovery. Upon completion
891 of the recovery process, the server will rename
892 <filename>recovery.conf</> to <filename>recovery.done</> (to prevent
893 accidentally re-entering recovery mode in case of a crash later) and then
894 commence normal database operations.
899 Inspect the contents of the database to ensure you have recovered to
900 where you want to be. If not, return to step 1. If all is well,
901 let in your users by restoring <filename>pg_hba.conf</> to normal.
908 The key part of all this is to set up a recovery command file that
909 describes how you want to recover and how far the recovery should
910 run. You can use <filename>recovery.conf.sample</> (normally
911 installed in the installation <filename>share/</> directory) as a
912 prototype. The one thing that you absolutely must specify in
913 <filename>recovery.conf</> is the <varname>restore_command</>,
914 which tells <productname>PostgreSQL</> how to get back archived
915 WAL file segments. Like the <varname>archive_command</>, this is
916 a shell command string. It may contain <literal>%f</>, which is
917 replaced by the name of the desired log file, and <literal>%p</>,
918 which is replaced by the absolute path to copy the log file to.
919 Write <literal>%%</> if you need to embed an actual <literal>%</>
character in the command. The simplest useful command is something like:
923 restore_command = 'cp /mnt/server/archivedir/%f %p'
925 which will copy previously archived WAL segments from the directory
926 <filename>/mnt/server/archivedir</>. You could of course use something
927 much more complicated, perhaps even a shell script that requests the
928 operator to mount an appropriate tape.
932 It is important that the command return nonzero exit status on failure.
933 The command <emphasis>will</> be asked for log files that are not present
934 in the archive; it must return nonzero when so asked. This is not an
935 error condition. Be aware also that the base name of the <literal>%p</>
path will be different from <literal>%f</>; do not expect them to be interchangeable.
941 WAL segments that cannot be found in the archive will be sought in
942 <filename>pg_xlog/</>; this allows use of recent un-archived segments.
However, segments that are available from the archive will be used in
preference to files in <filename>pg_xlog/</>. The system will not
overwrite the existing contents of <filename>pg_xlog/</> when retrieving archived files.
950 Normally, recovery will proceed through all available WAL segments,
951 thereby restoring the database to the current point in time (or as
952 close as we can get given the available WAL segments). But if you want
953 to recover to some previous point in time (say, right before the junior
954 DBA dropped your main transaction table), just specify the required
955 stopping point in <filename>recovery.conf</>. You can specify the stop
956 point, known as the <quote>recovery target</>, either by date/time or
957 by completion of a specific transaction ID. As of this writing only
958 the date/time option is very usable, since there are no tools to help
959 you identify with any accuracy which transaction ID to use.
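For example, a <filename>recovery.conf</> aimed at a particular moment might
contain (the time stamp is illustrative):
<programlisting>
restore_command = 'cp /mnt/server/archivedir/%f %p'
recovery_target_time = '2006-10-02 17:14:00'
</programlisting>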
964 The stop point must be after the ending time of the base backup (the
965 time of <function>pg_stop_backup</>). You cannot use a base backup
966 to recover to a time when that backup was still going on. (To
967 recover to such a time, you must go back to your previous base backup
968 and roll forward from there.)
973 If recovery finds a corruption in the WAL data then recovery will
974 complete at that point and the server will not start. The recovery
975 process could be re-run from the beginning, specifying a
976 <quote>recovery target</> so that recovery can complete normally.
977 If recovery fails for an external reason, such as a system crash or
978 the WAL archive has become inaccessible, then the recovery can be
979 simply restarted and it will restart almost from where it failed.
980 Restartable recovery works by writing a restartpoint record to the control
981 file at the first safely usable checkpoint record found after
982 <varname>checkpoint_timeout</> seconds.
986 <sect3 id="recovery-config-settings" xreflabel="Recovery Settings">
987 <title>Recovery Settings</title>
990 These settings can only be made in the <filename>recovery.conf</>
991 file, and apply only for the duration of the recovery. They must be
992 reset for any subsequent recovery you wish to perform. They cannot be
993 changed once recovery has begun.
998 <varlistentry id="restore-command" xreflabel="restore_command">
999 <term><varname>restore_command</varname> (<type>string</type>)</term>
1002 The shell command to execute to retrieve an archived segment of
1003 the WAL file series. This parameter is required.
1004 Any <literal>%f</> in the string is
1005 replaced by the name of the file to retrieve from the archive,
1006 and any <literal>%p</> is replaced by the absolute path to copy
1007 it to on the server.
Write <literal>%%</> to embed an actual <literal>%</> character in the command.
1012 It is important for the command to return a zero exit status if and
1013 only if it succeeds. The command <emphasis>will</> be asked for file
1014 names that are not present in the archive; it must return nonzero
1015 when so asked. Examples:
1017 restore_command = 'cp /mnt/server/archivedir/%f "%p"'
1018 restore_command = 'copy /mnt/server/archivedir/%f "%p"' # Windows
1024 <varlistentry id="recovery-target-time" xreflabel="recovery_target_time">
1025 <term><varname>recovery_target_time</varname>
1026 (<type>timestamp</type>)
This parameter specifies the time stamp up to which recovery will proceed.
1032 At most one of <varname>recovery_target_time</> and
1033 <xref linkend="recovery-target-xid"> can be specified.
1034 The default is to recover to the end of the WAL log.
1035 The precise stopping point is also influenced by
1036 <xref linkend="recovery-target-inclusive">.
1041 <varlistentry id="recovery-target-xid" xreflabel="recovery_target_xid">
1042 <term><varname>recovery_target_xid</varname> (<type>string</type>)</term>
1045 This parameter specifies the transaction ID up to which recovery
1046 will proceed. Keep in mind
1047 that while transaction IDs are assigned sequentially at transaction
1048 start, transactions can complete in a different numeric order.
1049 The transactions that will be recovered are those that committed
1050 before (and optionally including) the specified one.
1051 At most one of <varname>recovery_target_xid</> and
1052 <xref linkend="recovery-target-time"> can be specified.
1053 The default is to recover to the end of the WAL log.
1054 The precise stopping point is also influenced by
1055 <xref linkend="recovery-target-inclusive">.
1060 <varlistentry id="recovery-target-inclusive"
1061 xreflabel="recovery_target_inclusive">
1062 <term><varname>recovery_target_inclusive</varname>
1063 (<type>boolean</type>)
1067 Specifies whether we stop just after the specified recovery target
1068 (<literal>true</literal>), or just before the recovery target
1069 (<literal>false</literal>).
1070 Applies to both <xref linkend="recovery-target-time">
1071 and <xref linkend="recovery-target-xid">, whichever one is
1072 specified for this recovery. This indicates whether transactions
1073 having exactly the target commit time or ID, respectively, will
1074 be included in the recovery. Default is <literal>true</>.
1079 <varlistentry id="recovery-target-timeline"
1080 xreflabel="recovery_target_timeline">
1081 <term><varname>recovery_target_timeline</varname>
1082 (<type>string</type>)
1086 Specifies recovering into a particular timeline. The default is
1087 to recover along the same timeline that was current when the
1088 base backup was taken. You would only need to set this parameter
1089 in complex re-recovery situations, where you need to return to
1090 a state that itself was reached after a point-in-time recovery.
1091 See <xref linkend="backup-timelines"> for discussion.
1102 <sect2 id="backup-timelines">
1103 <title>Timelines</title>
1105 <indexterm zone="backup">
1106 <primary>timelines</primary>
1110 The ability to restore the database to a previous point in time creates
1111 some complexities that are akin to science-fiction stories about time
1112 travel and parallel universes. In the original history of the database,
1113 perhaps you dropped a critical table at 5:15PM on Tuesday evening.
1114 Unfazed, you get out your backup, restore to the point-in-time 5:14PM
1115 Tuesday evening, and are up and running. In <emphasis>this</> history of
1116 the database universe, you never dropped the table at all. But suppose
1117 you later realize this wasn't such a great idea after all, and would like
1118 to return to some later point in the original history. You won't be able
1119 to if, while your database was up-and-running, it overwrote some of the
1120 sequence of WAL segment files that led up to the time you now wish you
1121 could get back to. So you really want to distinguish the series of
1122 WAL records generated after you've done a point-in-time recovery from
1123 those that were generated in the original database history.
1127 To deal with these problems, <productname>PostgreSQL</> has a notion
1128 of <firstterm>timelines</>. Each time you recover to a point-in-time
1129 earlier than the end of the WAL sequence, a new timeline is created
1130 to identify the series of WAL records generated after that recovery.
1131 (If recovery proceeds all the way to the end of WAL, however, we do not
1132 start a new timeline: we just extend the existing one.) The timeline
1133 ID number is part of WAL segment file names, and so a new timeline does
1134 not overwrite the WAL data generated by previous timelines. It is
1135 in fact possible to archive many different timelines. While that might
1136 seem like a useless feature, it's often a lifesaver. Consider the
1137 situation where you aren't quite sure what point-in-time to recover to,
1138 and so have to do several point-in-time recoveries by trial and error
1139 until you find the best place to branch off from the old history. Without
1140 timelines this process would soon generate an unmanageable mess. With
1141 timelines, you can recover to <emphasis>any</> prior state, including
1142 states in timeline branches that you later abandoned.
1146 Each time a new timeline is created, <productname>PostgreSQL</> creates
1147 a <quote>timeline history</> file that shows which timeline it branched
1148 off from and when. These history files are necessary to allow the system
1149 to pick the right WAL segment files when recovering from an archive that
1150 contains multiple timelines. Therefore, they are archived into the WAL
1151 archive area just like WAL segment files. The history files are just
1152 small text files, so it's cheap and appropriate to keep them around
1153 indefinitely (unlike the segment files which are large). You can, if
1154 you like, add comments to a history file to make your own notes about
1155 how and why this particular timeline came to be. Such comments will be
1156 especially valuable when you have a thicket of different timelines as
1157 a result of experimentation.
1161 The default behavior of recovery is to recover along the same timeline
1162 that was current when the base backup was taken. If you want to recover
1163 into some child timeline (that is, you want to return to some state that
1164 was itself generated after a recovery attempt), you need to specify the
1165 target timeline ID in <filename>recovery.conf</>. You cannot recover into
1166 timelines that branched off earlier than the base backup.
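For example, to recover into a hypothetical timeline 3 rather than the one
that was current when the base backup was taken:
<programlisting>
recovery_target_timeline = '3'
</programlisting>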
1170 <sect2 id="backup-incremental-updated">
1171 <title>Incrementally Updated Backups</title>
1173 <indexterm zone="backup">
1174 <primary>incrementally updated backups</primary>
1177 <indexterm zone="backup">
1178 <primary>change accumulation</primary>
Restartable Recovery can also be utilized to offload the expense of
1183 taking periodic base backups from a main server, by instead backing
1184 up a Standby server's files. This concept is also generally known as
1185 incrementally updated backups, log change accumulation or more simply,
1186 change accumulation.
If we take a backup of the server files while a recovery is in progress,
1191 we will be able to restart the recovery from the last restartpoint.
1192 That backup now has many of the changes from previous WAL archive files,
1193 so this version is now an updated version of the original base backup.
1194 If we need to recover, it will be faster to recover from the
1195 incrementally updated backup than from the base backup.
1199 To make use of this capability you will need to set up a Standby database
1200 on a second system, as described in <xref linkend="warm-standby">. By
1201 taking a backup of the Standby server while it is running you will
1202 have produced an incrementally updated backup. Once this configuration
1203 has been implemented you will no longer need to produce regular base
1204 backups of the Primary server: all base backups can be performed on the
1205 Standby server. If you wish to do this, it is not a requirement that you
1206 also implement the failover features of a Warm Standby configuration,
1207 though you may find it desirable to do both.
1212 <sect2 id="continuous-archiving-caveats">
1213 <title>Caveats</title>
1216 At this writing, there are several limitations of the continuous archiving
1217 technique. These will probably be fixed in future releases:
1222 Operations on hash indexes are
1223 not presently WAL-logged, so replay will not update these indexes.
1224 The recommended workaround is to manually <command>REINDEX</> each
1225 such index after completing a recovery operation.
1231 If a <command>CREATE DATABASE</> command is executed while a base
1232 backup is being taken, and then the template database that the
1233 <command>CREATE DATABASE</> copied is modified while the base backup
1234 is still in progress, it is possible that recovery will cause those
1235 modifications to be propagated into the created database as well.
1236 This is of course undesirable. To avoid this risk, it is best not to
1237 modify any template databases while taking a base backup.
1243 <command>CREATE TABLESPACE</> commands are WAL-logged with the literal
1244 absolute path, and will therefore be replayed as tablespace creations
1245 with the same absolute path. This might be undesirable if the log is
1246 being replayed on a different machine. It can be dangerous even if
1247 the log is being replayed on the same machine, but into a new data
1248 directory: the replay will still overwrite the contents of the original
1249 tablespace. To avoid potential gotchas of this sort, the best practice
1250 is to take a new base backup after creating or dropping tablespaces.
1257 It should also be noted that the default <acronym>WAL</acronym>
1258 format is fairly bulky since it includes many disk page snapshots.
1259 These page snapshots are designed to support crash recovery,
1260 since we may need to fix partially-written disk pages. Depending
1261 on your system hardware and software, the risk of partial writes may
1262 be small enough to ignore, in which case you can significantly reduce
1263 the total volume of archived logs by turning off page snapshots
1264 using the <xref linkend="guc-full-page-writes"> parameter.
1265 (Read the notes and warnings in
1266 <xref linkend="wal"> before you do so.)
Turning off page snapshots does not prevent use of the logs for PITR operations.
1269 An area for future development is to compress archived WAL data by
1270 removing unnecessary page copies even when <varname>full_page_writes</>
1271 is on. In the meantime, administrators
1272 may wish to reduce the number of page snapshots included in WAL by
1273 increasing the checkpoint interval parameters as much as feasible.
1278 <sect1 id="warm-standby">
1279 <title>Warm Standby Servers for High Availability</title>
1281 <indexterm zone="backup">
1282 <primary>Warm Standby</primary>
1285 <indexterm zone="backup">
1286 <primary>PITR Standby</primary>
1289 <indexterm zone="backup">
1290 <primary>Standby Server</primary>
1293 <indexterm zone="backup">
1294 <primary>Log Shipping</primary>
1297 <indexterm zone="backup">
1298 <primary>Witness Server</primary>
1301 <indexterm zone="backup">
1302 <primary>STONITH</primary>
1305 <indexterm zone="backup">
1306 <primary>High Availability</primary>
1310 Continuous Archiving can be used to create a High Availability (HA)
1311 cluster configuration with one or more Standby Servers ready to take
1312 over operations in the case that the Primary Server fails. This
1313 capability is more widely known as Warm Standby Log Shipping.
1317 The Primary and Standby Server work together to provide this capability,
1318 though the servers are only loosely coupled. The Primary Server operates
1319 in Continuous Archiving mode, while the Standby Server operates in a
1320 continuous Recovery mode, reading the WAL files from the Primary. No
1321 changes to the database tables are required to enable this capability,
1322 so it offers a low administration overhead in comparison with other
1323 replication approaches. This configuration also has a very low
1324 performance impact on the Primary server.
Directly moving WAL or <quote>log</> records from one database server to another
1329 is typically described as Log Shipping. PostgreSQL implements file-based
1330 Log Shipping, meaning WAL records are batched one file at a time. WAL
1331 files can be shipped easily and cheaply over any distance, whether it be
1332 to an adjacent system, another system on the same site or another system
1333 on the far side of the globe. The bandwidth required for this technique
1334 varies according to the transaction rate of the Primary Server.
1335 Record-based Log Shipping is also possible with custom-developed
1336 procedures, discussed in a later section. Future developments are likely
to include options for synchronous and/or integrated record-based log shipping.
1342 It should be noted that the log shipping is asynchronous, i.e. the WAL
1343 records are shipped after transaction commit. As a result there can be a
1344 small window of data loss, should the Primary Server suffer a
catastrophic failure. The window of data loss is minimized by the use of
the <varname>archive_timeout</> parameter, which can be set as low as a few seconds
if required. A very low setting can increase the bandwidth requirements for file shipping.
1352 The Standby server is not available for access, since it is continually
1353 performing recovery processing. Recovery performance is sufficiently
1354 good that the Standby will typically be only minutes away from full
1355 availability once it has been activated. As a result, we refer to this
1356 capability as a Warm Standby configuration that offers High
1357 Availability. Restoring a server from an archived base backup and
1358 rollforward can take considerably longer and so that technique only
1359 really offers a solution for Disaster Recovery, not HA.
1363 When running a Standby Server, backups can be performed on the Standby
1364 rather than the Primary, thereby offloading the expense of
1365 taking periodic base backups. (See
1366 <xref linkend="backup-incremental-updated">)
1371 Other mechanisms for High Availability replication are available, both
1372 commercially and as open-source software.
1376 In general, log shipping between servers running different release
levels will not be possible. It is the policy of the PostgreSQL Global
Development Group not to make changes to disk formats during minor release
1379 upgrades, so it is likely that running different minor release levels
1380 on Primary and Standby servers will work successfully. However, no
1381 formal support for that is offered and you are advised not to allow this
1382 to occur over long periods.
1385 <sect2 id="warm-standby-planning">
1386 <title>Planning</title>
1389 On the Standby server all tablespaces and paths will refer to similarly
1390 named mount points, so it is important to create the Primary and Standby
1391 servers so that they are as similar as possible, at least from the
perspective of the database server. Furthermore, any <command>CREATE TABLESPACE</>
commands will be passed across as-is, so any new mount points must be
1394 created on both servers before they are used on the Primary. Hardware
1395 need not be the same, but experience shows that maintaining two
1396 identical systems is easier than maintaining two dissimilar ones over
1397 the whole lifetime of the application and system.
1401 There is no special mode required to enable a Standby server. The
1402 operations that occur on both Primary and Standby servers are entirely
1403 normal continuous archiving and recovery tasks. The primary point of
1404 contact between the two database servers is the archive of WAL files
1405 that both share: Primary writing to the archive, Standby reading from
1406 the archive. Care must be taken to ensure that WAL archives for separate
1407 servers do not become mixed together or confused.
1411 The magic that makes the two loosely coupled servers work together is
1412 simply a restore_command that waits for the next WAL file to be archived
1413 from the Primary. The restore_command is specified in the recovery.conf
1414 file on the Standby Server. Normal recovery processing would request a
1415 file from the WAL archive, causing an error if the file was unavailable.
1416 For Standby processing it is normal for the next file to be unavailable,
1417 so we must be patient and wait for it to appear. A waiting
1418 restore_command can be written as a custom script that loops after
1419 polling for the existence of the next WAL file. There must also be some
1420 way to trigger failover, which should interrupt the restore_command,
1421 break the loop and return a file not found error to the Standby Server.
This ends recovery and the Standby will then come up as a normal
server.
Sample code for the C version of the restore_command would be:

triggered = false;
while (!NextWALFileReady() && !triggered)
{
    usleep(100000L);    /* wait for ~0.1 sec */
    if (CheckForExternalTrigger())
        triggered = true;
}
if (!triggered)
    CopyWALFileForRecovery();
1442 PostgreSQL does not provide the system software required to identify a
1443 failure on the Primary and notify the Standby system and then the
1444 Standby database server. Many such tools exist and are well integrated
with other aspects of system failover, such as IP address migration.
Triggering failover is an important part of planning and design. The
restore_command is executed in full once for each WAL file: the process
running the restore_command is created anew and dies for each file, so
there is no daemon or server process, and signals with a signal handler
cannot be used. A more permanent notification is therefore required to
trigger failover. It is possible to use a simple timeout facility,
especially if used in conjunction with a known archive_timeout setting
on the Primary, but this is somewhat error prone since a network
problem or a busy Primary server might be sufficient to initiate
failover. A notification mechanism such as the explicit creation of a
trigger file is less error prone, if it can be arranged.
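As a sketch of one possible approach (not the only one), a waiting
restore_command could be implemented as a shell script along the
following lines; the archive directory and trigger-file path are
illustrative assumptions:

#!/bin/sh
# Invoked from recovery.conf as:  restore_command = 'wait_for_wal.sh %f %p'
#   $1 = name of the WAL file being requested (%f)
#   $2 = path to copy the file to (%p)
ARCHIVE=/var/lib/pgsql/archive/primary1   # where the Primary archives WAL (illustrative)
TRIGGER=/tmp/pgsql.trigger                # created by the failover mechanism (illustrative)

while true
do
    if [ -f "$ARCHIVE/$1" ]
    then
        cp "$ARCHIVE/$1" "$2"
        exit 0                            # file delivered; recovery continues
    fi
    if [ -f "$TRIGGER" ]
    then
        exit 1                            # report "not found"; recovery ends, Standby comes up
    fi
    sleep 1                               # poll again shortly
done

Note that the script checks for the next WAL file before checking for
the trigger, so any WAL already archived is applied before failover
completes.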
1463 <sect2 id="warm-standby-config">
1464 <title>Implementation</title>
1467 The short procedure for configuring a Standby Server is as follows. For
1468 full details of each step, refer to previous sections as noted.
Set up Primary and Standby systems as near identically as possible,
including two identical copies of PostgreSQL at the same release level.
1478 Set up Continuous Archiving from the Primary to a WAL archive located
1479 in a directory on the Standby Server. Ensure that both <xref
1480 linkend="guc-archive-command"> and <xref linkend="guc-archive-timeout">
1481 are set. (See <xref linkend="backup-archiving-wal">)
1486 Make a Base Backup of the Primary Server. (See <xref
1487 linkend="backup-base-backup">)
1492 Begin recovery on the Standby Server from the local WAL archive,
1493 using a recovery.conf that specifies a restore_command that waits as
described previously; a minimal example follows this procedure. (See
<xref linkend="backup-pitr-recovery">)
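Putting these steps together, the recovery.conf on the Standby might
contain little more than the following, where wait_for_wal.sh stands
for the illustrative waiting script sketched in the previous section:

restore_command = '/usr/local/bin/wait_for_wal.sh %f %p'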
1501 Recovery treats the WAL Archive as read-only, so once a WAL file has
1502 been copied to the Standby system it can be copied to tape at the same
1503 time as it is being used by the Standby database server to recover.
1504 Thus, running a Standby Server for High Availability can be performed at
the same time as files are stored for longer term Disaster Recovery
purposes.
1510 For testing purposes, it is possible to run both Primary and Standby
1511 servers on the same system. This does not provide any worthwhile
1512 improvement on server robustness, nor would it be described as HA.
1516 <sect2 id="warm-standby-failover">
1517 <title>Failover</title>
1520 If the Primary Server fails then the Standby Server should begin
1521 failover procedures.
1525 If the Standby Server fails then no failover need take place. If the
1526 Standby Server can be restarted, even some time later, then the recovery
1527 process can also be immediately restarted, taking advantage of
1528 Restartable Recovery. If the Standby Server cannot be restarted, then a
1529 full new Standby Server should be created.
1533 If the Primary Server fails and then immediately restarts, you must have
1534 a mechanism for informing it that it is no longer the Primary. This is
1535 sometimes known as STONITH (Shoot the Other Node In The Head), which is
1536 necessary to avoid situations where both systems think they are the
1537 Primary, which can lead to confusion and ultimately data loss.
1541 Many failover systems use just two systems, the Primary and the Standby,
1542 connected by some kind of heartbeat mechanism to continually verify the
1543 connectivity between the two and the viability of the Primary. It is
1544 also possible to use a third system, known as a Witness Server to avoid
1545 some problems of inappropriate failover, but the additional complexity
1546 may not be worthwhile unless it is set-up with sufficient care and
1551 At the instant that failover takes place to the Standby, we have only a
1552 single server in operation. This is known as a degenerate state.
1553 The former Standby is now the Primary, but the former Primary is down
1554 and may stay down. We must now fully re-create a Standby server,
1555 either on the former Primary system when it comes up, or on a third,
possibly new, system. Once complete, the Primary and Standby can be
1557 considered to have switched roles. Some people choose to use a third
1558 server to provide additional protection across the failover interval,
1559 though clearly this complicates the system configuration and
1560 operational processes (and this can also act as a Witness Server).
So, switching from Primary to Standby Server can be fast, but requires
some time to re-prepare the failover cluster. Regular switching from
Primary to Standby is encouraged, since it allows regular downtime on
each system for maintenance. This also serves as a test of the failover
mechanism, ensuring that it will really work when you need it. Written
administration procedures are advised.
1573 <sect2 id="warm-standby-record">
1574 <title>Implementing Record-based Log Shipping</title>
1577 The main features for Log Shipping in this release are based
1578 around the file-based Log Shipping described above. It is also
1579 possible to implement record-based Log Shipping using the
1580 <function>pg_xlogfile_name_offset</function> function (see <xref
linkend="functions-admin">), though this requires custom development.
1586 An external program can call pg_xlogfile_name_offset() to find out the
1587 filename and the exact byte offset within it of the latest WAL pointer.
1588 If the external program regularly polls the server it can find out how
1589 far forward the pointer has moved. It can then access the WAL file
directly and copy those bytes across to a less up-to-date copy on a
Standby Server.
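For instance, the polling program might issue a query like the
following (a sketch; it assumes a connection with sufficient
privileges, and uses pg_current_xlog_location() to report the current
end of WAL) to obtain the file name and byte offset up to which it
should copy:

SELECT * FROM pg_xlogfile_name_offset(pg_current_xlog_location());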
1596 <sect1 id="migration">
1597 <title>Migration Between Releases</title>
1599 <indexterm zone="migration">
1600 <primary>upgrading</primary>
1603 <indexterm zone="migration">
1604 <primary>version</primary>
1605 <secondary>compatibility</secondary>
1609 This section discusses how to migrate your database data from one
1610 <productname>PostgreSQL</> release to a newer one.
1611 The software installation procedure <foreignphrase>per se</> is not the
1612 subject of this section; those details are in <xref linkend="installation">.
1616 As a general rule, the internal data storage format is subject to
1617 change between major releases of <productname>PostgreSQL</> (where
1618 the number after the first dot changes). This does not apply to
1619 different minor releases under the same major release (where the
1620 number after the second dot changes); these always have compatible
1621 storage formats. For example, releases 7.2.1, 7.3.2, and 7.4 are
1622 not compatible, whereas 7.2.1 and 7.2.2 are. When you update
1623 between compatible versions, you can simply replace the executables
1624 and reuse the data directory on disk. Otherwise you need to back
1625 up your data and restore it on the new server. This has to be done
1626 using <application>pg_dump</>; file system level backup methods
1627 obviously won't work. There are checks in place that prevent you
1628 from using a data directory with an incompatible version of
1629 <productname>PostgreSQL</productname>, so no great harm can be done by
1630 trying to start the wrong server version on a data directory.
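The major version a data directory was initialized with is recorded in
the PG_VERSION file inside that directory, so a quick check is
possible, for example (path illustrative):

cat /usr/local/pgsql/data/PG_VERSION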
1634 It is recommended that you use the <application>pg_dump</> and
1635 <application>pg_dumpall</> programs from the newer version of
1636 <productname>PostgreSQL</>, to take advantage of any enhancements
1637 that may have been made in these programs. Current releases of the
1638 dump programs can read data from any server version back to 7.0.
1642 The least downtime can be achieved by installing the new server in
1643 a different directory and running both the old and the new servers
1644 in parallel, on different ports. Then you can use something like
1647 pg_dumpall -p 5432 | psql -d postgres -p 6543
1650 to transfer your data. Or use an intermediate file if you want.
1651 Then you can shut down the old server and start the new server at
1652 the port the old one was running at. You should make sure that the
1653 old database is not updated after you run <application>pg_dumpall</>,
1654 otherwise you will obviously lose that data. See <xref
linkend="client-authentication"> for information on how to prohibit
access.
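As a sketch of one possible approach, pg_hba.conf could temporarily be
reduced to entries like the following (the user name is illustrative),
after which the server configuration is reloaded, e.g. with pg_ctl
reload:

# temporary pg_hba.conf while dumping: only the dumping user may connect
local   all   postgres                   ident sameuser
host    all   all        0.0.0.0/0       reject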
1660 In practice you probably want to test your client
1661 applications on the new setup before switching over completely.
1662 This is another reason for setting up concurrent installations
1663 of old and new versions.
1667 If you cannot or do not want to run two servers in parallel you can
1668 do the backup step before installing the new version, bring down
1669 the server, move the old version out of the way, install the new
version, start the new server, and restore the data. For example:
pg_dumpall > backup
pg_ctl stop
mv /usr/local/pgsql /usr/local/pgsql.old
cd ~/postgresql-&version;
gmake install
initdb -D /usr/local/pgsql/data
postgres -D /usr/local/pgsql/data
psql -f backup postgres
1683 See <xref linkend="runtime"> about ways to start and stop the
1684 server and other details. The installation instructions will advise
1685 you of strategic places to perform these steps.
1690 When you <quote>move the old installation out of the way</quote>
1691 it may no longer be perfectly usable. Some of the executable programs
1692 contain absolute paths to various installed programs and data files.
1693 This is usually not a big problem but if you plan on using two
1694 installations in parallel for a while you should assign them
1695 different installation directories at build time. (This problem
1696 is rectified in <productname>PostgreSQL</> 8.0 and later, but you
1697 need to be wary of moving older installations.)