1 <!-- $PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.151 2010/04/12 19:08:28 momjian Exp $ -->
4 <title>Backup and Restore</title>
6 <indexterm zone="backup"><primary>backup</></>
9 As with everything that contains valuable data, <productname>PostgreSQL</>
10 databases should be backed up regularly. While the procedure is
11 essentially simple, it is important to have a clear understanding of
12 the underlying techniques and assumptions.
16 There are three fundamentally different approaches to backing up
17 <productname>PostgreSQL</> data:
19 <listitem><para><acronym>SQL</> dump</para></listitem>
20 <listitem><para>File system level backup</para></listitem>
21 <listitem><para>Continuous archiving</para></listitem>
23 Each has its own strengths and weaknesses; each is discussed in turn below.
26 <sect1 id="backup-dump">
27 <title><acronym>SQL</> Dump</title>
30 The idea behind this dump method is to generate a text file with SQL
31 commands that, when fed back to the server, will recreate the
32 database in the same state as it was at the time of the dump.
33 <productname>PostgreSQL</> provides the utility program
34 <xref linkend="app-pgdump"> for this purpose. The basic usage of this
37 pg_dump <replaceable class="parameter">dbname</replaceable> > <replaceable class="parameter">outfile</replaceable>
39 As you see, <application>pg_dump</> writes its result to the
40 standard output. We will see below how this can be useful.
44 <application>pg_dump</> is a regular <productname>PostgreSQL</>
45 client application (albeit a particularly clever one). This means
46 that you can perform this backup procedure from any remote host that has
47 access to the database. But remember that <application>pg_dump</>
48 does not operate with special permissions. In particular, it must
49 have read access to all tables that you want to back up, so in
50 practice you almost always have to run it as a database superuser.
54 To specify which database server <application>pg_dump</> should
55 contact, use the command line options <option>-h
56 <replaceable>host</></> and <option>-p <replaceable>port</></>. The
57 default host is the local host or whatever your
58 <envar>PGHOST</envar> environment variable specifies. Similarly,
59 the default port is indicated by the <envar>PGPORT</envar>
60 environment variable or, failing that, by the compiled-in default.
61 (Conveniently, the server will normally have the same compiled-in
66 Like any other <productname>PostgreSQL</> client application,
67 <application>pg_dump</> will by default connect with the database
68 user name that is equal to the current operating system user name. To override
69 this, either specify the <option>-U</option> option or set the
70 environment variable <envar>PGUSER</envar>. Remember that
71 <application>pg_dump</> connections are subject to the normal
72 client authentication mechanisms (which are described in <xref
73 linkend="client-authentication">).
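As an illustration, the connection options described above can be supplied either on the command line or through environment variables; the host, port, and user shown here are placeholders, not recommendations:

```shell
# Hypothetical connection settings; substitute your own host, port, and user.
export PGHOST=db.example.com
export PGPORT=5432
export PGUSER=postgres
# With these variables set, the two invocations below are equivalent:
#   pg_dump mydb > mydb.sql
#   pg_dump -h db.example.com -p 5432 -U postgres mydb > mydb.sql
```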
77 Dumps created by <application>pg_dump</> are internally consistent,
78 meaning that the dump represents a snapshot of the database at the time
79 <application>pg_dump</> began running. <application>pg_dump</> does not
80 block other operations on the database while it is working.
81 (Exceptions are those operations that need to operate with an
82 exclusive lock, such as most forms of <command>ALTER TABLE</command>.)
87 If your database schema relies on OIDs (for instance, as foreign
88 keys) you must instruct <application>pg_dump</> to dump the OIDs
89 as well. To do this, use the <option>-o</option> command-line
94 <sect2 id="backup-dump-restore">
95 <title>Restoring the dump</title>
98 The text files created by <application>pg_dump</> are intended to
99 be read in by the <application>psql</application> program. The
100 general command form to restore a dump is
102 psql <replaceable class="parameter">dbname</replaceable> < <replaceable class="parameter">infile</replaceable>
104 where <replaceable class="parameter">infile</replaceable> is the
105 file output by the <application>pg_dump</> command. The database <replaceable
106 class="parameter">dbname</replaceable> will not be created by this
107 command, so you must create it yourself from <literal>template0</>
108 before executing <application>psql</> (e.g., with
109 <literal>createdb -T template0 <replaceable
110 class="parameter">dbname</></literal>). <application>psql</>
111 supports options similar to <application>pg_dump</> for specifying
112 the database server to connect to and the user name to use. See
113 the <xref linkend="app-psql"> reference page for more information.
117 Before restoring an SQL dump, all the users who own objects or were
118 granted permissions on objects in the dumped database must already
119 exist. If they do not, the restore will fail to recreate the
120 objects with the original ownership and/or permissions.
121 (Sometimes this is what you want, but usually it is not.)
125 By default, the <application>psql</> script will continue to
126 execute after an SQL error is encountered. You might wish to run
127 <application>psql</application> with
128 the <literal>ON_ERROR_STOP</> variable set to alter that
129 behavior and have <application>psql</application> exit with an
130 exit status of 3 if an SQL error occurs:
132 psql --set ON_ERROR_STOP=on dbname < infile
134 Either way, you will only have a partially restored database.
135 Alternatively, you can specify that the whole dump should be
136 restored as a single transaction, so the restore is either fully
137 completed or fully rolled back. This mode can be specified by
138 passing the <option>-1</> or <option>--single-transaction</>
139 command-line options to <application>psql</>. When using this
140 mode, be aware that even a minor error can roll back a
141 restore that has already run for many hours. However, that might
142 still be preferable to manually cleaning up a complex database
143 after a partially restored dump.
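For reference, a single-transaction restore is a one-line sketch of the following shape (<replaceable>dbname</> and <replaceable>infile</> are placeholders, as before):

```
psql --single-transaction dbname < infile
```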
147 The ability of <application>pg_dump</> and <application>psql</> to
148 write to or read from pipes makes it possible to dump a database
149 directly from one server to another, for example:
151 pg_dump -h <replaceable>host1</> <replaceable>dbname</> | psql -h <replaceable>host2</> <replaceable>dbname</>
157 The dumps produced by <application>pg_dump</> are relative to
158 <literal>template0</>. This means that any languages, procedures,
159 etc. added via <literal>template1</> will also be dumped by
160 <application>pg_dump</>. As a result, when restoring, if you are
161 using a customized <literal>template1</>, you must create the
162 empty database from <literal>template0</>, as in the example
168 After restoring a backup, it is wise to run <xref
169 linkend="sql-analyze"> on each
170 database so the query optimizer has useful statistics;
171 see <xref linkend="vacuum-for-statistics">
172 and <xref linkend="autovacuum"> for more information.
173 For more advice on how to load large amounts of data
174 into <productname>PostgreSQL</> efficiently, refer to <xref
179 <sect2 id="backup-dump-all">
180 <title>Using <application>pg_dumpall</></title>
183 <application>pg_dump</> dumps only a single database at a time,
184 and it does not dump information about roles or tablespaces
185 (because those are cluster-wide rather than per-database).
186 To support convenient dumping of the entire contents of a database
187 cluster, the <xref linkend="app-pg-dumpall"> program is provided.
188 <application>pg_dumpall</> backs up each database in a given
189 cluster, and also preserves cluster-wide data such as role and
190 tablespace definitions. The basic usage of this command is:
192 pg_dumpall > <replaceable>outfile</>
194 The resulting dump can be restored with <application>psql</>:
196 psql -f <replaceable class="parameter">infile</replaceable> postgres
198 (Actually, you can specify any existing database name to start from,
199 but if you are loading into an empty cluster then <literal>postgres</>
200 should usually be used.) It is always necessary to have
201 database superuser access when restoring a <application>pg_dumpall</>
202 dump, as that is required to restore the role and tablespace information.
203 If you use tablespaces, make sure that the tablespace paths in the
204 dump are appropriate for the new installation.
208 <application>pg_dumpall</> works by emitting commands to re-create
209 roles, tablespaces, and empty databases, then invoking
210 <application>pg_dump</> for each database. This means that while
211 each database will be internally consistent, the snapshots of
212 different databases might not be exactly in sync.
216 <sect2 id="backup-dump-large">
217 <title>Handling large databases</title>
220 Some operating systems have maximum file size limits that cause
221 problems when creating large <application>pg_dump</> output files.
222 Fortunately, <application>pg_dump</> can write to the standard
223 output, so you can use standard Unix tools to work around this
224 potential problem. There are several possible methods:
228 <title>Use compressed dumps.</title>
230 You can use your favorite compression program, for example
231 <application>gzip</application>:
234 pg_dump <replaceable class="parameter">dbname</replaceable> | gzip > <replaceable class="parameter">filename</replaceable>.gz
240 gunzip -c <replaceable class="parameter">filename</replaceable>.gz | psql <replaceable class="parameter">dbname</replaceable>
246 cat <replaceable class="parameter">filename</replaceable>.gz | gunzip | psql <replaceable class="parameter">dbname</replaceable>
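The plumbing of this compress-and-restore pipeline can be checked end to end with an ordinary file, with no database involved; the file names here are arbitrary:

```shell
# Stand-in for a pg_dump output file.
printf 'CREATE TABLE t (id integer);\n' > dump.sql
# Compress, as in: pg_dump dbname | gzip > filename.gz
gzip -c dump.sql > dump.sql.gz
# Decompress, as in: gunzip -c filename.gz | psql dbname
gunzip -c dump.sql.gz > restored.sql
# The round trip must reproduce the original byte for byte.
cmp dump.sql restored.sql && echo "round trip ok"
```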
252 <title>Use <command>split</>.</title>
254 The <command>split</command> command
255 allows you to split the output into smaller files that are
256 acceptable in size to the underlying file system. For example, to
257 make chunks of 1 megabyte:
260 pg_dump <replaceable class="parameter">dbname</replaceable> | split -b 1m - <replaceable class="parameter">filename</replaceable>
266 cat <replaceable class="parameter">filename</replaceable>* | psql <replaceable class="parameter">dbname</replaceable>
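The split-and-reassemble round trip can likewise be rehearsed on a throwaway file before trusting it with a real dump (sizes and names here are arbitrary):

```shell
# Create a 64 kB stand-in for a dump.
dd if=/dev/zero of=big.dump bs=1024 count=64 2>/dev/null
# Split into 16 kB chunks, as in: pg_dump dbname | split -b 1m - filename
split -b 16k big.dump chunk_
# Reassemble in lexicographic order, as in: cat filename* | psql dbname
cat chunk_* > reassembled.dump
cmp big.dump reassembled.dump && echo "round trip ok"
```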
272 <title>Use <application>pg_dump</>'s custom dump format.</title>
274 If <productname>PostgreSQL</productname> was built on a system with the
275 <application>zlib</> compression library installed, the custom dump
276 format will compress data as it writes it to the output file. This will
277 produce dump file sizes similar to using <command>gzip</command>, but it
278 has the added advantage that tables can be restored selectively. The
279 following command dumps a database using the custom dump format:
282 pg_dump -Fc <replaceable class="parameter">dbname</replaceable> > <replaceable class="parameter">filename</replaceable>
285 A custom-format dump is not a script for <application>psql</>, but
286 instead must be restored with <application>pg_restore</>, for example:
289 pg_restore -d <replaceable class="parameter">dbname</replaceable> <replaceable class="parameter">filename</replaceable>
292 See the <xref linkend="app-pgdump"> and <xref
293 linkend="app-pgrestore"> reference pages for details.
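Because the custom format carries a table of contents, individual objects can be restored selectively; for example, to restore only one table (the table and database names below are hypothetical):

```
pg_restore -d dbname -t mytable filename
```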
298 For very large databases, you might need to combine <command>split</>
299 with one of the other two approaches.
305 <sect1 id="backup-file">
306 <title>File System Level Backup</title>
309 An alternative backup strategy is to directly copy the files that
310 <productname>PostgreSQL</> uses to store the data in the database;
311 <xref linkend="creating-cluster"> explains where these files
312 are located. You can use whatever method you prefer
313 for doing file system backups; for example:
316 tar -cf backup.tar /usr/local/pgsql/data
321 There are two restrictions, however, which make this method
322 impractical, or at least inferior to the <application>pg_dump</>
328 The database server <emphasis>must</> be shut down in order to
329 get a usable backup. Half-way measures such as disallowing all
330 connections will <emphasis>not</emphasis> work
331 (in part because <command>tar</command> and similar tools do not take
332 an atomic snapshot of the state of the file system,
333 but also because of internal buffering within the server).
334 Information about stopping the server can be found in
335 <xref linkend="server-shutdown">. Needless to say, you
336 also need to shut down the server before restoring the data.
342 If you have dug into the details of the file system layout of the
343 database, you might be tempted to try to back up or restore only certain
344 individual tables or databases from their respective files or
345 directories. This will <emphasis>not</> work because the
346 information contained in these files is not usable without
347 the commit log files,
348 <filename>pg_clog/*</filename>, which contain the commit status of
349 all transactions. A table file is only usable with this
350 information. Of course it is also impossible to restore only a
351 table and the associated <filename>pg_clog</filename> data
352 because that would render all other tables in the database
353 cluster useless. So file system backups only work for complete
354 backup and restoration of an entire database cluster.
361 An alternative file-system backup approach is to make a
362 <quote>consistent snapshot</quote> of the data directory, if the
363 file system supports that functionality (and you are willing to
364 trust that it is implemented correctly). The typical procedure is
365 to make a <quote>frozen snapshot</> of the volume containing the
366 database, then copy the whole data directory (not just parts, see
367 above) from the snapshot to a backup device, then release the frozen
368 snapshot. This will work even while the database server is running.
369 However, a backup created in this way saves
370 the database files in a state as if the database server was not
371 properly shut down; therefore, when you start the database server
372 on the backed-up data, it will think the previous server instance
373 crashed and will replay the WAL log. This is not a problem; just
374 be aware of it (and be sure to include the WAL files in your backup).
378 If your database is spread across multiple file systems, there might not
379 be any way to obtain exactly-simultaneous frozen snapshots of all
380 the volumes. For example, if your data files and WAL log are on different
381 disks, or if tablespaces are on different file systems, it might
382 not be possible to use snapshot backup because the snapshots
383 <emphasis>must</> be simultaneous.
384 Read your file system documentation very carefully before trusting
385 the consistent-snapshot technique in such situations.
389 If simultaneous snapshots are not possible, one option is to shut down
390 the database server long enough to establish all the frozen snapshots.
391 Another option is to perform a continuous archiving base backup (<xref
392 linkend="backup-base-backup">) because such backups are immune to file
393 system changes during the backup. This requires enabling continuous
394 archiving just during the backup process; restore is done using
395 continuous archive recovery (<xref linkend="backup-pitr-recovery">).
399 Another option is to use <application>rsync</> to perform a file
400 system backup. This is done by first running <application>rsync</>
401 while the database server is running, then shutting down the database
402 server just long enough to do a second <application>rsync</>. The
403 second <application>rsync</> will be much quicker than the first,
404 because it has relatively little data to transfer, and the end result
405 will be consistent because the server was down. This method
406 allows a file system backup to be performed with minimal downtime.
410 Note that a file system backup will typically be larger
411 than an SQL dump. (<application>pg_dump</application> does not need to dump
412 the contents of indexes for example, just the commands to recreate
413 them.) However, taking a file system backup might be faster.
417 <sect1 id="continuous-archiving">
418 <title>Continuous Archiving and Point-In-Time Recovery (PITR)</title>
420 <indexterm zone="backup">
421 <primary>continuous archiving</primary>
424 <indexterm zone="backup">
425 <primary>point-in-time recovery</primary>
428 <indexterm zone="backup">
429 <primary>PITR</primary>
433 At all times, <productname>PostgreSQL</> maintains a
434 <firstterm>write ahead log</> (WAL) in the <filename>pg_xlog/</>
435 subdirectory of the cluster's data directory. The log records
436 every change made to the database's data files. This log exists
437 primarily for crash-safety purposes: if the system crashes, the
438 database can be restored to consistency by <quote>replaying</> the
439 log entries made since the last checkpoint. However, the existence
440 of the log makes it possible to use a third strategy for backing up
441 databases: we can combine a file-system-level backup with backup of
442 the WAL files. If recovery is needed, we restore the file system backup and
443 then replay from the backed-up WAL files to bring the system to a
444 current state. This approach is more complex to administer than
445 either of the previous approaches, but it has some significant
450 We do not need a perfectly consistent file system backup as the starting point.
451 Any internal inconsistency in the backup will be corrected by log
452 replay (this is not significantly different from what happens during
453 crash recovery). So we do not need a file system snapshot capability,
454 just <application>tar</> or a similar archiving tool.
459 Since we can combine an indefinitely long sequence of WAL files
460 for replay, continuous backup can be achieved simply by continuing to archive
461 the WAL files. This is particularly valuable for large databases, where
462 it might not be convenient to take a full backup frequently.
467 It is not necessary to replay the WAL entries all the
468 way to the end. We could stop the replay at any point and have a
469 consistent snapshot of the database as it was at that time. Thus,
470 this technique supports <firstterm>point-in-time recovery</>: it is
471 possible to restore the database to its state at any time since your base
477 If we continuously feed the series of WAL files to another
478 machine that has been loaded with the same base backup file, we
479 have a <firstterm>warm standby</> system: at any point we can bring up
480 the second machine and it will have a nearly-current copy of the
489 <application>pg_dump</application> and
490 <application>pg_dumpall</application> do not produce file-system-level
491 backups and cannot be used as part of a continuous-archiving solution.
492 Such dumps are <emphasis>logical</> and do not contain enough
493 information to be used by WAL replay.
498 As with the plain file-system-backup technique, this method can only
499 support restoration of an entire database cluster, not a subset.
500 Also, it requires a lot of archival storage: the base backup might be bulky,
501 and a busy system will generate many megabytes of WAL traffic that
502 have to be archived. Still, it is the preferred backup technique in
503 many situations where high reliability is needed.
507 To recover successfully using continuous archiving (also called
508 <quote>online backup</> by many database vendors), you need a continuous
509 sequence of archived WAL files that extends back at least as far as the
510 start time of your backup. So to get started, you should set up and test
511 your procedure for archiving WAL files <emphasis>before</> you take your
512 first base backup. Accordingly, we first discuss the mechanics of
516 <sect2 id="backup-archiving-wal">
517 <title>Setting up WAL archiving</title>
520 In an abstract sense, a running <productname>PostgreSQL</> system
521 produces an indefinitely long sequence of WAL records. The system
522 physically divides this sequence into WAL <firstterm>segment
523 files</>, which are normally 16MB apiece (although the segment size
524 can be altered when building <productname>PostgreSQL</>). The segment
525 files are given numeric names that reflect their position in the
526 abstract WAL sequence. When not using WAL archiving, the system
527 normally creates just a few segment files and then
528 <quote>recycles</> them by renaming no-longer-needed segment files
529 to higher segment numbers. It's assumed that segment files whose
530 contents precede the checkpoint-before-last are no longer of
531 interest and can be recycled.
535 When archiving WAL data, we need to capture the contents of each segment
536 file once it is filled, and save that data somewhere before the segment
537 file is recycled for reuse. Depending on the application and the
538 available hardware, there could be many different ways of <quote>saving
539 the data somewhere</>: we could copy the segment files to an NFS-mounted
540 directory on another machine, write them onto a tape drive (ensuring that
541 you have a way of identifying the original name of each file), or batch
542 them together and burn them onto CDs, or something else entirely. To
543 provide the database administrator with flexibility,
544 <productname>PostgreSQL</> tries not to make any assumptions about how
545 the archiving will be done. Instead, <productname>PostgreSQL</> lets
546 the administrator specify a shell command to be executed to copy a
547 completed segment file to wherever it needs to go. The command could be
548 as simple as a <literal>cp</>, or it could invoke a complex shell
549 script — it's all up to you.
553 To enable WAL archiving, set the <xref
554 linkend="guc-archive-mode"> configuration parameter to <literal>on</>,
555 and specify the shell command to use in the <xref
556 linkend="guc-archive-command"> configuration parameter. In practice
557 these settings will always be placed in the
558 <filename>postgresql.conf</filename> file.
559 In <varname>archive_command</>,
560 <literal>%p</> is replaced by the path name of the file to
561 archive, while <literal>%f</> is replaced by only the file name.
562 (The path name is relative to the current working directory,
563 i.e., the cluster's data directory.)
564 Use <literal>%%</> if you need to embed an actual <literal>%</>
565 character in the command. The simplest useful command is something
568 archive_command = 'cp -i %p /mnt/server/archivedir/%f </dev/null'
570 which will copy archivable WAL segments to the directory
571 <filename>/mnt/server/archivedir</>. (This is an example, not a
572 recommendation, and might not work on all platforms.) After the
573 <literal>%p</> and <literal>%f</> parameters have been replaced,
574 the actual command executed might look like this:
576 cp -i pg_xlog/00000001000000A900000065 /mnt/server/archivedir/00000001000000A900000065 </dev/null
578 A similar command will be generated for each new file to be archived.
582 The archive command will be executed under the ownership of the same
583 user that the <productname>PostgreSQL</> server is running as. Since
584 the series of WAL files being archived contains effectively everything
585 in your database, you will want to be sure that the archived data is
586 protected from prying eyes; for example, archive into a directory that
587 does not have group or world read access.
591 It is important that the archive command return zero exit status if and
592 only if it succeeds. Upon getting a zero result,
593 <productname>PostgreSQL</> will assume that the file has been
594 successfully archived, and will remove or recycle it. However, a nonzero
595 status tells <productname>PostgreSQL</> that the file was not archived;
596 it will try again periodically until it succeeds.
600 The archive command should generally be designed to refuse to overwrite
601 any pre-existing archive file. This is an important safety feature to
602 preserve the integrity of your archive in case of administrator error
603 (such as sending the output of two different servers to the same archive
605 It is advisable to test your proposed archive command to ensure that it
606 indeed does not overwrite an existing file, <emphasis>and that it returns
607 nonzero status in this case</>. On many Unix platforms, <command>cp
608 -i</> causes copy to prompt before overwriting a file, and
609 <literal>< /dev/null</> causes the prompt (and overwriting) to
610 fail. If your platform does not support this behavior, you should
611 add a command to test for the existence of the archive file. For
612 example, something like:
614 archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
616 works correctly on most Unix variants.
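The overwrite-refusal logic can be exercised with plain files before wiring it into <varname>archive_command</>; the directory and segment names below are placeholders:

```shell
mkdir -p pg_xlog archivedir
echo 'WAL segment contents' > pg_xlog/000000010000000000000001
# Same shape as the archive_command above, with %p and %f substituted.
archive_one() { test ! -f "archivedir/$1" && cp "pg_xlog/$1" "archivedir/$1"; }
archive_one 000000010000000000000001 && echo "archived (exit 0)"
# A second attempt must fail with nonzero status instead of overwriting.
if archive_one 000000010000000000000001; then
    echo "BUG: archive file was overwritten"
else
    echo "refused to overwrite (nonzero exit)"
fi
```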
620 While designing your archiving setup, consider what will happen if
621 the archive command fails repeatedly because some aspect requires
622 operator intervention or the archive runs out of space. For example, this
623 could occur if you write to tape without an autochanger; when the tape
624 fills, nothing further can be archived until the tape is swapped.
625 You should ensure that any error condition or request to a human operator
626 is reported appropriately so that the situation can be
627 resolved reasonably quickly. The <filename>pg_xlog/</> directory will
628 continue to fill with WAL segment files until the situation is resolved.
629 (If the file system containing <filename>pg_xlog/</> fills up,
630 <productname>PostgreSQL</> will do a PANIC shutdown. No committed
631 transactions will be lost, but the database will remain offline until
632 you free some space.)
636 The speed of the archiving command is unimportant as long as it can keep up
637 with the average rate at which your server generates WAL data. Normal
638 operation continues even if the archiving process falls a little behind.
639 If archiving falls significantly behind, this will increase the amount of
640 data that would be lost in the event of a disaster. It will also mean that
641 the <filename>pg_xlog/</> directory will contain large numbers of
642 not-yet-archived segment files, which could eventually exceed available
643 disk space. You are advised to monitor the archiving process to ensure that
644 it is working as you intend.
648 In writing your archive command, you should assume that the file names to
649 be archived can be up to 64 characters long and can contain any
650 combination of ASCII letters, digits, and dots. It is not necessary to
651 preserve the original relative path (<literal>%p</>) but it is necessary to
652 preserve the file name (<literal>%f</>).
656 Note that although WAL archiving will allow you to restore any
657 modifications made to the data in your <productname>PostgreSQL</> database,
658 it will not restore changes made to configuration files (that is,
659 <filename>postgresql.conf</>, <filename>pg_hba.conf</> and
660 <filename>pg_ident.conf</>), since those are edited manually rather
661 than through SQL operations.
662 You might wish to keep the configuration files in a location that will
663 be backed up by your regular file system backup procedures. See
664 <xref linkend="runtime-config-file-locations"> for how to relocate the
669 The archive command is only invoked on completed WAL segments. Hence,
670 if your server generates little WAL traffic (or has slack periods
671 where it does so), there could be a long delay between the completion
672 of a transaction and its safe recording in archive storage. To put
673 a limit on how old unarchived data can be, you can set
674 <xref linkend="guc-archive-timeout"> to force the server to switch
675 to a new WAL segment file at least that often. Note that
676 files archived early due to a forced switch are still the same
677 length as completely full files. It is therefore unwise to set a very
678 short <varname>archive_timeout</> — it will bloat your archive
679 storage. <varname>archive_timeout</> settings of a minute or so are
684 Also, you can force a segment switch manually with
685 <function>pg_switch_xlog</> if you want to ensure that a
686 just-finished transaction is archived as soon as possible. Other utility
687 functions related to WAL management are listed in <xref
688 linkend="functions-admin-backup-table">.
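For example, a forced segment switch right after an important commit is simply:

```
SELECT pg_switch_xlog();
```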
692 When <varname>archive_mode</> is <literal>off</> some SQL commands
693 are optimized to avoid WAL logging, as described in <xref
694 linkend="populate-pitr">. If archiving were turned on during execution
695 of one of these statements, WAL would not contain enough information
696 for archive recovery. (Crash recovery is unaffected.) For
697 this reason, <varname>archive_mode</> can only be changed at server
698 start. However, <varname>archive_command</> can be changed with a
699 configuration file reload. If you wish to temporarily stop archiving,
700 one way to do it is to set <varname>archive_command</> to the empty
701 string (<literal>''</>).
702 This will cause WAL files to accumulate in <filename>pg_xlog/</> until a
703 working <varname>archive_command</> is re-established.
707 <sect2 id="backup-base-backup">
708 <title>Making a Base Backup</title>
711 The procedure for making a base backup is relatively simple:
715 Ensure that WAL archiving is enabled and working.
720 Connect to the database as a superuser and issue the command:
722 SELECT pg_start_backup('label');
724 where <literal>label</> is any string you want to use to uniquely
725 identify this backup operation. (One good practice is to use the
726 full path where you intend to put the backup dump file.)
727 <function>pg_start_backup</> creates a <firstterm>backup label</> file,
728 called <filename>backup_label</>, in the cluster directory with
729 information about your backup, including the start time and label
734 It does not matter which database within the cluster you connect to in order to
735 issue this command. You can ignore the result returned by the function;
736 but if it reports an error, deal with that before proceeding.
740 By default, <function>pg_start_backup</> can take a long time to finish.
741 This is because it performs a checkpoint, and the I/O
742 required for the checkpoint will be spread out over a significant
743 period of time, by default half your inter-checkpoint interval
744 (see the configuration parameter
745 <xref linkend="guc-checkpoint-completion-target">). This is
746 usually what you want, because it minimizes the impact on query
747 processing. If you want to start the backup as soon as
750 SELECT pg_start_backup('label', true);
752 This forces the checkpoint to be done as quickly as possible.
757 Perform the backup, using any convenient file-system-backup tool
758 such as <application>tar</> or <application>cpio</> (not
759 <application>pg_dump</application> or
760 <application>pg_dumpall</application>). It is neither
761 necessary nor desirable to stop normal operation of the database
767 Again connect to the database as a superuser, and issue the command:
769 SELECT pg_stop_backup();
771 This terminates the backup mode and performs an automatic switch to
772 the next WAL segment. The reason for the switch is to arrange for
773 the last WAL segment file written during the backup interval to be
779 Once the WAL segment files active during the backup are archived, you are
780 done. The file identified by <function>pg_stop_backup</>'s result is
781 the last segment that is required to form a complete set of backup files.
782 <function>pg_stop_backup</> does not return until the last segment has
784 Archiving of these files happens automatically since you have
785 already configured <varname>archive_command</>. In most cases this
786 happens quickly, but you are advised to monitor your archive
787 system to ensure there are no delays.
788 If the archive process has fallen behind
789 because of failures of the archive command, it will keep retrying
790 until the archive succeeds and the backup is complete.
791 If you wish to place a time limit on the execution of
792 <function>pg_stop_backup</>, set an appropriate
793 <varname>statement_timeout</varname> value.
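For example, the timeout can be set for just the session that issues the
command; the ten-minute value here is purely illustrative:

```
SET statement_timeout = '10min';
SELECT pg_stop_backup();
```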
800 Some file system backup tools emit warnings or errors
801 if the files they are trying to copy change while the copy proceeds.
802 When taking a base backup of an active database, this situation is normal
803 and not an error. However, you need to ensure that you can distinguish
804 complaints of this sort from real errors. For example, some versions
805 of <application>rsync</> return a separate exit code for
806 <quote>vanished source files</>, and you can write a driver script to
807 accept this exit code as a non-error case. Also, some versions of
808 GNU <application>tar</> return an error code indistinguishable from
809 a fatal error if a file was truncated while <application>tar</> was
810 copying it. Fortunately, GNU <application>tar</> versions 1.16 and
811 later exit with <literal>1</> if a file was changed during the backup,
812 and <literal>2</> for other errors.
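A driver script for this purpose can simply remap the benign exit code.
The sketch below assumes <application>rsync</>, whose documented exit code
for vanished source files is 24; the paths in the final comment are
illustrative.

```shell
#!/bin/sh
# Hypothetical wrapper: run a copy command and treat rsync's exit
# code 24 ("some files vanished before they could be transferred",
# which is normal when copying a live data directory) as success,
# while passing any real failure through unchanged.
copy_step() {
    status=0
    "$@" || status=$?
    if [ "$status" -eq 24 ]; then
        echo "note: some source files vanished during copy (benign)" >&2
        status=0
    fi
    return "$status"
}

# Typical use (paths illustrative):
#   copy_step rsync -a /usr/local/pgsql/data/ /mnt/server/backup/data/
```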
816 It is not necessary to be concerned about the amount of time elapsed
817 between <function>pg_start_backup</> and the start of the actual backup,
818 nor between the end of the backup and <function>pg_stop_backup</>; a
819 few minutes' delay won't hurt anything. (However, if you normally run the
820 server with <varname>full_page_writes</> disabled, you might notice a drop
821 in performance between <function>pg_start_backup</> and
822 <function>pg_stop_backup</>, since <varname>full_page_writes</> is
823 effectively forced on during backup mode.) You must ensure that these
824 steps are carried out in sequence, without any possible
825 overlap, or you will invalidate the backup.
829 Be certain that your backup dump includes all of the files under
830 the database cluster directory (e.g., <filename>/usr/local/pgsql/data</>).
831 If you are using tablespaces that do not reside underneath this directory,
832 be careful to include them as well (and be sure that your backup dump
833 archives symbolic links as links, otherwise the restore will corrupt
834 your tablespaces!)
838 You can, however, omit from the backup dump the files within the
839 cluster's <filename>pg_xlog/</> subdirectory. This
840 slight adjustment is worthwhile because it reduces the risk
841 of mistakes when restoring. This is easy to arrange if
842 <filename>pg_xlog/</> is a symbolic link pointing to someplace outside
843 the cluster directory, which is a common setup anyway for performance
844 reasons.
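With GNU <application>tar</>, for instance, an <literal>--exclude</>
pattern keeps the WAL segments out of the archive. The following sketch
demonstrates the idea against a throwaway directory standing in for the
real cluster; the layout and file names are made up:

```shell
#!/bin/sh
# Demonstration of omitting pg_xlog/ from a base backup archive,
# using a scratch directory in place of a real data directory.
demo=$(mktemp -d)
mkdir -p "$demo/data/base" "$demo/data/pg_xlog"
echo 'table data'  > "$demo/data/base/12345"
echo 'WAL segment' > "$demo/data/pg_xlog/000000010000000000000001"

# Any path component named pg_xlog is skipped:
tar -cf "$demo/base.tar" --exclude=pg_xlog -C "$demo" data

tar -tf "$demo/base.tar"   # data/base/12345 is archived; pg_xlog/ is not
```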
848 To make use of the backup, you will need to keep all the WAL
849 segment files generated during and after the file system backup.
850 To aid you in doing this, the <function>pg_stop_backup</> function
851 creates a <firstterm>backup history file</> that is immediately
852 stored into the WAL archive area. This file is named after the first
853 WAL segment file that you need for the file system backup.
854 For example, if the starting WAL file is
855 <literal>0000000100001234000055CD</> the backup history file will be
857 named something like
858 <literal>0000000100001234000055CD.007C9330.backup</>. (The second
858 part of the file name stands for an exact position within the WAL
859 file, and can ordinarily be ignored.) Once you have safely archived
860 the file system backup and the WAL segment files used during the
861 backup (as specified in the backup history file), all archived WAL
862 segments with names numerically less are no longer needed to recover
863 the file system backup and can be deleted. However, you should
864 consider keeping several backup sets to be absolutely certain that
865 you can recover your data.
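Because WAL file names are fixed-width hexadecimal, plain string
comparison matches their numeric order, so the pruning rule can be
sketched in a few lines of shell. The directory and file names below are
invented for demonstration; recent releases also ship a contrib program,
<application>pg_archivecleanup</>, that provides a ready-made tool for
this job.

```shell
#!/bin/sh
# Sketch: prune archived WAL segments older than the latest base backup.
# The first 24 characters of the backup history file's name identify the
# oldest segment still needed; anything sorting below that can go.
# All names and the directory here are made up for demonstration.
archive=$(mktemp -d)
touch "$archive/0000000100001234000055CC" \
      "$archive/0000000100001234000055CD" \
      "$archive/0000000100001234000055CE" \
      "$archive/0000000100001234000055CD.007C9330.backup"

history_file=0000000100001234000055CD.007C9330.backup
oldest_needed=${history_file%%.*}    # 0000000100001234000055CD

for f in "$archive"/????????????????????????; do   # 24-char names only
    seg=${f##*/}
    if [ "$seg" \< "$oldest_needed" ]; then
        rm "$f"
    fi
done
```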
869 The backup history file is just a small text file. It contains the
870 label string you gave to <function>pg_start_backup</>, as well as
871 the starting and ending times and WAL segments of the backup.
872 If you used the label to identify the associated dump file,
873 then the archived history file is enough to tell you which dump file to
874 restore.
878 Since you have to keep around all the archived WAL files back to your
879 last base backup, the interval between base backups should usually be
880 chosen based on how much storage you want to expend on archived WAL
881 files. You should also consider how long you are prepared to spend
882 recovering, if recovery should be necessary — the system will have to
883 replay all those WAL segments, and that could take a while if it has
884 been a long time since the last base backup.
888 It's also worth noting that the <function>pg_start_backup</> function
889 makes a file named <filename>backup_label</> in the database cluster
890 directory, which is removed by <function>pg_stop_backup</>.
891 This file will of course be archived as a part of your backup dump file.
892 The backup label file includes the label string you gave to
893 <function>pg_start_backup</>, as well as the time at which
894 <function>pg_start_backup</> was run, and the name of the starting WAL
895 file. In case of confusion it is
896 therefore possible to look inside a backup dump file and determine
897 exactly which backup session the dump file came from.
901 It is also possible to make a backup dump while the server is
902 stopped. In this case, you obviously cannot use
903 <function>pg_start_backup</> or <function>pg_stop_backup</>, and
904 you will therefore be left to your own devices to keep track of which
905 backup dump is which and how far back the associated WAL files go.
906 It is generally better to follow the continuous archiving procedure above.
910 <sect2 id="backup-pitr-recovery">
911 <title>Recovering using a Continuous Archive Backup</title>
914 Okay, the worst has happened and you need to recover from your backup.
915 Here is the procedure:
919 Stop the server, if it's running.
924 If you have the space to do so,
925 copy the whole cluster data directory and any tablespaces to a temporary
926 location in case you need them later. Note that this precaution will
927 require that you have enough free space on your system to hold two
928 copies of your existing database. If you do not have enough space,
929 you should at least save the contents of the cluster's <filename>pg_xlog</>
930 subdirectory, as it might contain logs which
931 were not archived before the system went down.
936 Remove all existing files and subdirectories under the cluster data
937 directory and under the root directories of any tablespaces you are using.
942 Restore the database files from your file system backup. Be sure that they
943 are restored with the right ownership (the database system user, not
944 <literal>root</>!) and with the right permissions. If you are using
945 tablespaces,
946 you should verify that the symbolic links in <filename>pg_tblspc/</>
947 were correctly restored.
952 Remove any files present in <filename>pg_xlog/</>; these came from the
953 file system backup and are therefore probably obsolete rather than current.
954 If you didn't archive <filename>pg_xlog/</> at all, then recreate
955 it with proper permissions,
956 being careful to ensure that you re-establish it as a symbolic link
957 if you had it set up that way before.
962 If you have unarchived WAL segment files that you saved in step 2,
963 copy them into <filename>pg_xlog/</>. (It is best to copy them,
964 not move them, so you still have the unmodified files if a
965 problem occurs and you have to start over.)
970 Create a recovery command file <filename>recovery.conf</> in the cluster
971 data directory (see <xref linkend="recovery-config">). You might
972 also want to temporarily modify <filename>pg_hba.conf</> to prevent
973 ordinary users from connecting until you are sure the recovery was successful.
978 Start the server. The server will go into recovery mode and
979 proceed to read through the archived WAL files it needs. Should the
980 recovery be terminated because of an external error, the server can
981 simply be restarted and it will continue recovery. Upon completion
982 of the recovery process, the server will rename
983 <filename>recovery.conf</> to <filename>recovery.done</> (to prevent
984 accidentally re-entering recovery mode later) and then
985 commence normal database operations.
990 Inspect the contents of the database to ensure you have recovered to
991 the desired state. If not, return to step 1. If all is well,
992 allow your users to connect by restoring <filename>pg_hba.conf</> to normal.
999 The key part of all this is to set up a recovery configuration file that
1000 describes how you want to recover and how far the recovery should
1001 run. You can use <filename>recovery.conf.sample</> (normally
1002 located in the installation's <filename>share/</> directory) as a
1003 prototype. The one thing that you absolutely must specify in
1004 <filename>recovery.conf</> is the <varname>restore_command</>,
1005 which tells <productname>PostgreSQL</> how to retrieve archived
1006 WAL file segments. Like the <varname>archive_command</>, this is
1007 a shell command string. It can contain <literal>%f</>, which is
1008 replaced by the name of the desired log file, and <literal>%p</>,
1009 which is replaced by the path name to copy the log file to.
1010 (The path name is relative to the current working directory,
1011 i.e., the cluster's data directory.)
1012 Write <literal>%%</> if you need to embed an actual <literal>%</>
1013 character in the command. The simplest useful command is
1014 something like:
1016 restore_command = 'cp /mnt/server/archivedir/%f %p'
1018 which will copy previously archived WAL segments from the directory
1019 <filename>/mnt/server/archivedir</>. Of course, you can use something
1020 much more complicated, perhaps even a shell script that requests the
1021 operator to mount an appropriate tape.
1025 It is important that the command return nonzero exit status on failure.
1026 The command <emphasis>will</> be called requesting files that are not present
1027 in the archive; it must return nonzero when so asked. This is not an
1028 error condition. Not all of the requested files will be WAL segment
1029 files; you should also expect requests for files with a suffix of
1030 <literal>.backup</> or <literal>.history</>. Also be aware that
1031 the base name of the <literal>%p</> path will be different from
1032 <literal>%f</>; do not expect them to be interchangeable.
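These rules are easy to honor in a small script. The sketch below (the
script name and archive location are hypothetical) would be configured as
<literal>restore_command = 'restore_wal.sh %f %p'</>; the logic is
written as a shell function so the commented final line can show how a
real script would invoke it.

```shell
#!/bin/sh
# Hypothetical restore_command helper.  Copies the requested file out
# of the archive if present; returns nonzero, quietly, if it is not.
# PostgreSQL will ask for files (e.g. *.history) that may legitimately
# be missing, so "not found" must not be treated as a hard error.
restore_wal() {
    archivedir=$1   # where the archive_command put the files
    file=$2         # %f: bare name of the wanted file
    dest=$3         # %p: path to copy it to (relative to the data directory)
    if [ -f "$archivedir/$file" ]; then
        cp "$archivedir/$file" "$dest"
    else
        return 1
    fi
}

# A real script would end with something like:
#   restore_wal /mnt/server/archivedir "$1" "$2"
```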
1036 WAL segments that cannot be found in the archive will be sought in
1037 <filename>pg_xlog/</>; this allows use of recent un-archived segments.
1038 However, segments that are available from the archive will be used in
1039 preference to files in <filename>pg_xlog/</>. The system will not
1040 overwrite the existing contents of <filename>pg_xlog/</> when retrieving
1041 archived files.
1045 Normally, recovery will proceed through all available WAL segments,
1046 thereby restoring the database to the current point in time (or as
1047 close as possible given the available WAL segments). Therefore, a normal
1048 recovery will end with a <quote>file not found</> message, the exact text
1049 of the error message depending upon your choice of
1050 <varname>restore_command</>. You may also see an error message
1051 at the start of recovery for a file named something like
1052 <filename>00000001.history</>. This is also normal and does not
1053 indicate a problem in simple recovery situations; see
1054 <xref linkend="backup-timelines"> for discussion.
1058 If you want to recover to some previous point in time (say, right before
1059 the junior DBA dropped your main transaction table), just specify the
1060 required stopping point in <filename>recovery.conf</>. You can specify
1061 the stop point, known as the <quote>recovery target</>, either by
1062 date/time or by completion of a specific transaction ID. As of this
1063 writing only the date/time option is very usable, since there are no tools
1064 to help you identify with any accuracy which transaction ID to use.
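For example, to stop replay just before the offending transaction, a
minimal <filename>recovery.conf</> might read as follows (the timestamp
is of course only illustrative):

```
restore_command = 'cp /mnt/server/archivedir/%f %p'
recovery_target_time = '2010-04-13 17:14:00'
```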
1069 The stop point must be after the ending time of the base backup, i.e.,
1070 the end time of <function>pg_stop_backup</>. You cannot use a base backup
1071 to recover to a time when that backup was in progress. (To
1072 recover to such a time, you must go back to your previous base backup
1073 and roll forward from there.)
1078 If recovery finds corrupted WAL data, recovery will
1079 halt at that point and the server will not start. In such a case the
1080 recovery process could be re-run from the beginning, specifying a
1081 <quote>recovery target</> before the point of corruption so that recovery
1082 can complete normally.
1083 If recovery fails for an external reason, such as a system crash or
1084 if the WAL archive has become inaccessible, then the recovery can simply
1085 be restarted and it will restart almost from where it failed.
1086 Recovery restart works much like checkpointing in normal operation:
1087 the server periodically forces all its state to disk, and then updates
1088 the <filename>pg_control</> file to indicate that the already-processed
1089 WAL data need not be scanned again.
1094 <sect2 id="backup-timelines">
1095 <title>Timelines</title>
1097 <indexterm zone="backup">
1098 <primary>timelines</primary>
1102 The ability to restore the database to a previous point in time creates
1103 some complexities that are akin to science-fiction stories about time
1104 travel and parallel universes. For example, in the original history of the database,
1105 suppose you dropped a critical table at 5:15PM on Tuesday evening, but
1106 didn't realize your mistake until Wednesday noon.
1107 Unfazed, you get out your backup, restore to the point-in-time 5:14PM
1108 Tuesday evening, and are up and running. In <emphasis>this</> history of
1109 the database universe, you never dropped the table. But suppose
1110 you later realize this wasn't such a great idea, and would like
1111 to return to sometime Wednesday morning in the original history.
1112 You won't be able
1113 to if, while your database was up-and-running, it overwrote some of the
1114 WAL segment files that led up to the time you now wish you
1115 could get back to. Thus, to avoid this, you need to distinguish the series of
1116 WAL records generated after you've done a point-in-time recovery from
1117 those that were generated in the original database history.
1121 To deal with this problem, <productname>PostgreSQL</> has a notion
1122 of <firstterm>timelines</>. Whenever an archive recovery completes,
1123 a new timeline is created to identify the series of WAL records
1124 generated after that recovery. The timeline
1125 ID number is part of WAL segment file names so a new timeline does
1126 not overwrite the WAL data generated by previous timelines. It is
1127 in fact possible to archive many different timelines. While that might
1128 seem like a useless feature, it's often a lifesaver. Consider the
1129 situation where you aren't quite sure what point-in-time to recover to,
1130 and so have to do several point-in-time recoveries by trial and error
1131 until you find the best place to branch off from the old history. Without
1132 timelines this process would soon generate an unmanageable mess. With
1133 timelines, you can recover to <emphasis>any</> prior state, including
1134 states in timeline branches that you abandoned earlier.
1138 Every time a new timeline is created, <productname>PostgreSQL</> creates
1139 a <quote>timeline history</> file that shows which timeline it branched
1140 off from and when. These history files are necessary to allow the system
1141 to pick the right WAL segment files when recovering from an archive that
1142 contains multiple timelines. Therefore, they are archived into the WAL
1143 archive area just like WAL segment files. The history files are just
1144 small text files, so it's cheap and appropriate to keep them around
1145 indefinitely (unlike the segment files which are large). You can, if
1146 you like, add comments to a history file to record your own notes about
1147 how and why this particular timeline was created. Such comments will be
1148 especially valuable when you have a thicket of different timelines as
1149 a result of experimentation.
1153 The default behavior of recovery is to recover along the same timeline
1154 that was current when the base backup was taken. If you wish to recover
1155 into some child timeline (that is, you want to return to some state that
1156 was itself generated after a recovery attempt), you need to specify the
1157 target timeline ID in <filename>recovery.conf</>. You cannot recover into
1158 timelines that branched off earlier than the base backup.
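For instance, to recover along the second timeline rather than the one
that was current when the base backup was taken,
<filename>recovery.conf</> would add a target timeline (the ID here is
illustrative):

```
restore_command = 'cp /mnt/server/archivedir/%f %p'
recovery_target_timeline = '2'
```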
1162 <sect2 id="backup-tips">
1163 <title>Tips and Examples</title>
1166 Some tips for configuring continuous archiving are given here.
1169 <sect3 id="backup-standalone">
1170 <title>Standalone hot backups</title>
1173 It is possible to use <productname>PostgreSQL</>'s backup facilities to
1174 produce standalone hot backups. These are backups that cannot be used
1175 for point-in-time recovery, yet are typically much faster to back up and
1176 restore than <application>pg_dump</> dumps. (They are also much larger
1177 than <application>pg_dump</> dumps, so in some cases the speed advantage
1178 might be negated.)
1182 To prepare for standalone hot backups, set <varname>archive_mode</> to
1183 <literal>on</>, and set up an <varname>archive_command</> that performs
1184 archiving only when a <emphasis>switch file</> exists. For example:
1186 archive_command = 'test ! -f /var/lib/pgsql/backup_in_progress || cp -i %p /var/lib/pgsql/archive/%f < /dev/null'
1188 This command will perform archiving when
1189 <filename>/var/lib/pgsql/backup_in_progress</> exists, and otherwise
1190 silently return zero exit status (allowing <productname>PostgreSQL</>
1191 to recycle the unwanted WAL file).
1195 With this preparation, a backup can be taken using a script like the
1196 following:
1198 touch /var/lib/pgsql/backup_in_progress
1199 psql -c "select pg_start_backup('hot_backup');"
1200 tar -cf /var/lib/pgsql/backup.tar /var/lib/pgsql/data/
1201 psql -c "select pg_stop_backup();"
1202 rm /var/lib/pgsql/backup_in_progress
1203 tar -rf /var/lib/pgsql/backup.tar /var/lib/pgsql/archive/
1205 The switch file <filename>/var/lib/pgsql/backup_in_progress</> is
1206 created first, enabling archiving of completed WAL files to occur.
1207 After the backup the switch file is removed. Archived WAL files are
1208 then added to the backup so that both base backup and all required
1209 WAL files are part of the same <application>tar</> file.
1210 Please remember to add error handling to your backup scripts.
1214 If archive storage size is a concern, use <application>pg_compresslog</>,
1215 <ulink url="http://pglesslog.projects.postgresql.org"></ulink>, to
1216 remove unnecessary <xref linkend="guc-full-page-writes"> and trailing
1217 space from the WAL files. You can then use
1218 <application>gzip</application> to further compress the output of
1219 <application>pg_compresslog</>:
1221 archive_command = 'pg_compresslog %p - | gzip > /var/lib/pgsql/archive/%f'
1223 You will then need to use <application>gunzip</> and
1224 <application>pg_decompresslog</> during recovery:
1226 restore_command = 'gunzip < /mnt/server/archivedir/%f | pg_decompresslog - %p'
1231 <sect3 id="backup-scripts">
1232 <title><varname>archive_command</varname> scripts</title>
1235 Many people choose to use scripts to define their
1236 <varname>archive_command</varname>, so that their
1237 <filename>postgresql.conf</> entry looks very simple:
1239 archive_command = 'local_backup_script.sh'
1241 Using a separate script file is advisable any time you want to use
1242 more than a single command in the archiving process.
1243 This allows all complexity to be managed within the script, which
1244 can be written in a popular scripting language such as
1245 <application>bash</> or <application>perl</>.
1246 Any messages written to <literal>stderr</> from the script will appear
1247 in the database server log, allowing complex configurations to be
1248 diagnosed easily if they fail.
1252 Examples of requirements that might be solved within a script include:
1256 Copying data to secure off-site data storage
1261 Batching WAL files so that they are transferred every three hours,
1262 rather than one at a time
1267 Interfacing with other backup and recovery software
1272 Interfacing with monitoring software to report errors
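A minimal <filename>local_backup_script.sh</> along these lines might
look as follows, assuming the <varname>archive_command</> entry is
extended to pass <literal>"%p"</> and <literal>"%f"</> as arguments. The
logic is written as a function for clarity; the commented invocation line
and the archive path are illustrative. Note that it refuses to overwrite
a file already present in the archive, and reports problems on
<literal>stderr</> so they reach the server log.

```shell
#!/bin/sh
# Sketch of an archive script, written as a function for clarity.
# An archive command must not overwrite an existing archived file,
# and anything it prints to stderr will appear in the server log.
archive_wal() {
    archivedir=$1   # destination directory
    sourcepath=$2   # %p: path of the WAL segment to archive
    filename=$3     # %f: bare file name of the segment
    if [ -f "$archivedir/$filename" ]; then
        echo "archive_wal: $filename is already archived" >&2
        return 1
    fi
    cp "$sourcepath" "$archivedir/$filename"
}

# A real script would end with something like:
#   archive_wal /mnt/server/archivedir "$1" "$2"
```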
1280 <sect2 id="continuous-archiving-caveats">
1281 <title>Caveats</title>
1284 At this writing, there are several limitations of the continuous archiving
1285 technique. These will probably be fixed in future releases:
1290 Operations on hash indexes are not presently WAL-logged, so
1291 replay will not update these indexes. This will mean that any new inserts
1292 will be ignored by the index, updated rows will apparently disappear and
1293 deleted rows will still retain pointers. In other words, if you modify a
1294 table with a hash index on it then you will get incorrect query results
1295 on a standby server. It is recommended that you
1296 manually <xref linkend="sql-reindex">
1297 each such index after completing a recovery operation.
1303 If a <xref linkend="sql-createdatabase">
1304 command is executed while a base backup is being taken, and then
1305 the template database that the <command>CREATE DATABASE</> copied
1306 is modified while the base backup is still in progress, it is
1307 possible that recovery will cause those modifications to be
1308 propagated into the created database as well. This is of course
1309 undesirable. To avoid this risk, it is best not to modify any
1310 template databases while taking a base backup.
1316 <xref linkend="sql-createtablespace">
1317 commands are WAL-logged with the literal absolute path, and will
1318 therefore be replayed as tablespace creations with the same
1319 absolute path. This might be undesirable if the log is being
1320 replayed on a different machine. It can be dangerous even if the
1321 log is being replayed on the same machine, but into a new data
1322 directory: the replay will still overwrite the contents of the
1323 original tablespace. To avoid potential gotchas of this sort,
1324 the best practice is to take a new base backup after creating or
1325 dropping tablespaces.
1332 It should also be noted that the default <acronym>WAL</acronym>
1333 format is fairly bulky since it includes many disk page snapshots.
1334 These page snapshots are designed to support crash recovery, since
1335 we might need to fix partially-written disk pages. Depending on
1336 your system hardware and software, the risk of partial writes might
1337 be small enough to ignore, in which case you can significantly
1338 reduce the total volume of archived logs by turning off page
1339 snapshots using the <xref linkend="guc-full-page-writes">
1340 parameter. (Read the notes and warnings in <xref linkend="wal">
1341 before you do so.) Turning off page snapshots does not prevent
1342 use of the logs for PITR operations. An area for future
1343 development is to compress archived WAL data by removing
1344 unnecessary page copies even when <varname>full_page_writes</> is
1345 on. In the meantime, administrators might wish to reduce the number
1346 of page snapshots included in WAL by increasing the checkpoint
1347 interval parameters as much as feasible.
1352 <sect1 id="migration">
1353 <title>Migration Between Releases</title>
1355 <indexterm zone="migration">
1356 <primary>upgrading</primary>
1359 <indexterm zone="migration">
1360 <primary>version</primary>
1361 <secondary>compatibility</secondary>
1365 This section discusses how to migrate your database data from one
1366 <productname>PostgreSQL</> release to a newer one.
1367 The software installation procedure <foreignphrase>per se</> is not the
1368 subject of this section; those details are in <xref linkend="installation">.
1372 <productname>PostgreSQL</> major versions are represented by the
1373 first two digit groups of the version number, e.g. 8.4.
1374 <productname>PostgreSQL</> minor versions are represented by the
1375 third group of version digits, i.e., 8.4.2 is the second minor
1376 release of 8.4. Minor releases never change the internal storage
1377 format and are always compatible with earlier and later minor
1378 releases of the same major version number, i.e. 8.4.2 is compatible
1379 with 8.4, 8.4.1 and 8.4.6. To update between compatible versions,
1380 you simply replace the executables while the server is down and
1381 restart the server. The data directory remains unchanged —
1382 minor upgrades are that simple.
1386 For <emphasis>major</> releases of <productname>PostgreSQL</>, the
1387 internal data storage format is subject to change. When migrating
1388 data from one major version of <productname>PostgreSQL</> to another,
1389 you need to back up your data and restore it on the new server.
1390 This must be done using <application>pg_dump</>; file system level
1391 backup methods will not work. There are checks in place that prevent
1392 you from using a data directory with an incompatible version of
1393 <productname>PostgreSQL</productname>, so no great harm can be done
1394 by trying to start the wrong server version on a data directory.
1398 It is recommended that you use the <application>pg_dump</> and
1399 <application>pg_dumpall</> programs from the newer version of
1400 <productname>PostgreSQL</>, to take advantage of enhancements
1401 that might have been made in these programs. Current releases of the
1402 dump programs can read data from any server version back to 7.0.
1406 The least downtime can be achieved by installing the new server in
1407 a different directory and running both the old and the new servers
1408 in parallel, on different ports. Then you can use something like:
1411 pg_dumpall -p 5432 | psql -d postgres -p 6543
1414 to transfer your data. Or use an intermediate file if you wish.
1415 Then you can shut down the old server and start the new server using
1416 the port the old one was running on. You should make sure that the
1417 old database is not updated after you begin to run
1418 <application>pg_dumpall</>, otherwise you will lose that data. See <xref
1419 linkend="client-authentication"> for information on how to prohibit
1420 access.
1424 It is also possible to use replication methods, such as
1425 <productname>Slony</>, to create a slave server with the updated version of
1426 <productname>PostgreSQL</>. The slave can be on the same computer or
1427 a different computer. Once it has synced up with the master server
1428 (running the older version of <productname>PostgreSQL</>), you can
1429 switch masters and make the slave the master and shut down the older
1430 database instance. Such a switch-over results in only several seconds
1431 of downtime for an upgrade.
1435 If you cannot or do not want to run two servers in parallel, you can
1436 do the backup step before installing the new version, bring down
1437 the old server, move the old version out of the way, install the new
1438 version, start the new server, and restore the data. For example:
1441 pg_dumpall > backup
1442 pg_ctl stop
1443 mv /usr/local/pgsql /usr/local/pgsql.old
1444 # Rename any tablespace directories as well
1445 cd ~/postgresql-&version;
1446 gmake install
1447 initdb -D /usr/local/pgsql/data
1448 postgres -D /usr/local/pgsql/data
1449 psql -f backup postgres
1452 See <xref linkend="runtime"> about ways to start and stop the
1453 server and other details. The installation instructions will advise
1454 you of strategic places to perform these steps.
1459 When you <quote>move the old installation out of the way</quote>
1460 it might no longer be perfectly usable. Some of the executable programs
1461 contain absolute paths to various installed programs and data files.
1462 This is usually not a big problem, but if you plan on using two
1463 installations in parallel for a while you should assign them
1464 different installation directories at build time. (This problem
1465 is rectified in <productname>PostgreSQL</> version 8.0 and later, so long
1466 as you move all subdirectories containing installed files together;
1467 for example if <filename>/usr/local/postgres/bin/</> goes to
1468 <filename>/usr/local/postgres.old/bin/</>, then
1469 <filename>/usr/local/postgres/share/</> must go to
1470 <filename>/usr/local/postgres.old/share/</>. In pre-8.0 releases
1471 moving an installation like this will not work.)
1476 In practice you probably want to test your client applications on the
1477 new version before switching over completely. This is another reason
1478 for setting up concurrent installations of old and new versions. When
1479 testing a <productname>PostgreSQL</> major upgrade, consider the
1480 following categories of possible changes:
1486 <term>Administration</term>
1489 The capabilities available for administrators to monitor and control
1490 the server often change and improve in each major release.
1496 <term>SQL</term>
1499 Typically this includes new SQL command capabilities and not changes
1500 in behavior, unless specifically mentioned in the release notes.
1506 <term>Library API</term>
1509 Typically libraries like <application>libpq</> only add new
1510 functionality, again unless mentioned in the release notes.
1516 <term>System Catalogs</term>
1519 System catalog changes usually only affect database management tools.
1525 <term>Server C-language API</term>
1528 This involves changes in the backend function API, which is written
1529 in the C programming language. Such changes affect code that
1530 references backend functions deep inside the server.