1 <!-- $PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.155 2010/05/03 09:14:16 heikki Exp $ -->
4 <title>Backup and Restore</title>
6 <indexterm zone="backup"><primary>backup</></>
9 As with everything that contains valuable data, <productname>PostgreSQL</>
10 databases should be backed up regularly. While the procedure is
11 essentially simple, it is important to have a clear understanding of
12 the underlying techniques and assumptions.
16 There are three fundamentally different approaches to backing up
17 <productname>PostgreSQL</> data:
19 <listitem><para><acronym>SQL</> dump</para></listitem>
20 <listitem><para>File system level backup</para></listitem>
21 <listitem><para>Continuous archiving</para></listitem>
23 Each has its own strengths and weaknesses; each is discussed in turn below.
26 <sect1 id="backup-dump">
27 <title><acronym>SQL</> Dump</title>
30 The idea behind this dump method is to generate a text file with SQL
31 commands that, when fed back to the server, will recreate the
32 database in the same state as it was at the time of the dump.
33 <productname>PostgreSQL</> provides the utility program
34 <xref linkend="app-pgdump"> for this purpose. The basic usage of this
37 pg_dump <replaceable class="parameter">dbname</replaceable> > <replaceable class="parameter">outfile</replaceable>
39 As you see, <application>pg_dump</> writes its result to the
40 standard output. We will see below how this can be useful.
44 <application>pg_dump</> is a regular <productname>PostgreSQL</>
45 client application (albeit a particularly clever one). This means
46 that you can perform this backup procedure from any remote host that has
47 access to the database. But remember that <application>pg_dump</>
48 does not operate with special permissions. In particular, it must
49 have read access to all tables that you want to back up, so in
50 practice you almost always have to run it as a database superuser.
54 To specify which database server <application>pg_dump</> should
55 contact, use the command line options <option>-h
56 <replaceable>host</></> and <option>-p <replaceable>port</></>. The
57 default host is the local host or whatever your
58 <envar>PGHOST</envar> environment variable specifies. Similarly,
59 the default port is indicated by the <envar>PGPORT</envar>
60 environment variable or, failing that, by the compiled-in default.
61 (Conveniently, the server will normally have the same compiled-in
66 Like any other <productname>PostgreSQL</> client application,
67 <application>pg_dump</> will by default connect with the database
68 user name that is equal to the current operating system user name. To override
69 this, either specify the <option>-U</option> option or set the
70 environment variable <envar>PGUSER</envar>. Remember that
71 <application>pg_dump</> connections are subject to the normal
72 client authentication mechanisms (which are described in <xref
73 linkend="client-authentication">).
77 Dumps created by <application>pg_dump</> are internally consistent,
78 meaning that the dump represents a snapshot of the database at the time
79 <application>pg_dump</> began running. <application>pg_dump</> does not
80 block other operations on the database while it is working.
81 (Exceptions are those operations that need to operate with an
82 exclusive lock, such as most forms of <command>ALTER TABLE</command>.)
87 If your database schema relies on OIDs (for instance, as foreign
88 keys) you must instruct <application>pg_dump</> to dump the OIDs
89 as well. To do this, use the <option>-o</option> command-line
94 <sect2 id="backup-dump-restore">
95 <title>Restoring the dump</title>
98 The text files created by <application>pg_dump</> are intended to
99 be read in by the <application>psql</application> program. The
100 general command form to restore a dump is
102 psql <replaceable class="parameter">dbname</replaceable> < <replaceable class="parameter">infile</replaceable>
104 where <replaceable class="parameter">infile</replaceable> is the
105 file output by the <application>pg_dump</> command. The database <replaceable
106 class="parameter">dbname</replaceable> will not be created by this
107 command, so you must create it yourself from <literal>template0</>
108 before executing <application>psql</> (e.g., with
109 <literal>createdb -T template0 <replaceable
110 class="parameter">dbname</></literal>). <application>psql</>
111 supports options similar to <application>pg_dump</> for specifying
112 the database server to connect to and the user name to use. See
113 the <xref linkend="app-psql"> reference page for more information.
117 Before restoring an SQL dump, all the users who own objects or were
118 granted permissions on objects in the dumped database must already
119 exist. If they do not, the restore will fail to recreate the
120 objects with the original ownership and/or permissions.
121 (Sometimes this is what you want, but usually it is not.)
125 By default, the <application>psql</> script will continue to
126 execute after an SQL error is encountered. You might wish to run
127 <application>psql</application> with
128 the <literal>ON_ERROR_STOP</> variable set to alter that
129 behavior and have <application>psql</application> exit with an
130 exit status of 3 if an SQL error occurs:
132 psql --set ON_ERROR_STOP=on dbname < infile
134 Either way, you will only have a partially restored database.
135 Alternatively, you can specify that the whole dump should be
136 restored as a single transaction, so the restore is either fully
137 completed or fully rolled back. This mode can be specified by
138 passing the <option>-1</> or <option>--single-transaction</>
139 command-line options to <application>psql</>. When using this
140 mode, be aware that even a minor error can roll back a
141 restore that has already run for many hours. However, that might
142 still be preferable to manually cleaning up a complex database
143 after a partially restored dump.
147 The ability of <application>pg_dump</> and <application>psql</> to
148 write to or read from pipes makes it possible to dump a database
149 directly from one server to another, for example:
151 pg_dump -h <replaceable>host1</> <replaceable>dbname</> | psql -h <replaceable>host2</> <replaceable>dbname</>
157 The dumps produced by <application>pg_dump</> are relative to
158 <literal>template0</>. This means that any languages, procedures,
159 etc. added via <literal>template1</> will also be dumped by
160 <application>pg_dump</>. As a result, when restoring, if you are
161 using a customized <literal>template1</>, you must create the
162 empty database from <literal>template0</>, as in the example
168 After restoring a backup, it is wise to run <xref
169 linkend="sql-analyze"> on each
170 database so the query optimizer has useful statistics;
171 see <xref linkend="vacuum-for-statistics">
172 and <xref linkend="autovacuum"> for more information.
173 For more advice on how to load large amounts of data
174 into <productname>PostgreSQL</> efficiently, refer to <xref
179 <sect2 id="backup-dump-all">
180 <title>Using <application>pg_dumpall</></title>
183 <application>pg_dump</> dumps only a single database at a time,
184 and it does not dump information about roles or tablespaces
185 (because those are cluster-wide rather than per-database).
186 To support convenient dumping of the entire contents of a database
187 cluster, the <xref linkend="app-pg-dumpall"> program is provided.
188 <application>pg_dumpall</> backs up each database in a given
189 cluster, and also preserves cluster-wide data such as role and
190 tablespace definitions. The basic usage of this command is:
192 pg_dumpall > <replaceable>outfile</>
194 The resulting dump can be restored with <application>psql</>:
196 psql -f <replaceable class="parameter">infile</replaceable> postgres
198 (Actually, you can specify any existing database name to start from,
199 but if you are loading into an empty cluster then <literal>postgres</>
200 should usually be used.) It is always necessary to have
201 database superuser access when restoring a <application>pg_dumpall</>
202 dump, as that is required to restore the role and tablespace information.
203 If you use tablespaces, make sure that the tablespace paths in the
204 dump are appropriate for the new installation.
208 <application>pg_dumpall</> works by emitting commands to re-create
209 roles, tablespaces, and empty databases, then invoking
210 <application>pg_dump</> for each database. This means that while
211 each database will be internally consistent, the snapshots of
212 different databases might not be exactly in sync.
216 <sect2 id="backup-dump-large">
217 <title>Handling large databases</title>
220 Some operating systems have maximum file size limits that cause
221 problems when creating large <application>pg_dump</> output files.
222 Fortunately, <application>pg_dump</> can write to the standard
223 output, so you can use standard Unix tools to work around this
224 potential problem. There are several possible methods:
228 <title>Use compressed dumps.</title>
230 You can use your favorite compression program, for example
231 <application>gzip</application>:
234 pg_dump <replaceable class="parameter">dbname</replaceable> | gzip > <replaceable class="parameter">filename</replaceable>.gz
240 gunzip -c <replaceable class="parameter">filename</replaceable>.gz | psql <replaceable class="parameter">dbname</replaceable>
246 cat <replaceable class="parameter">filename</replaceable>.gz | gunzip | psql <replaceable class="parameter">dbname</replaceable>
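The compression round trip can be sanity-checked locally before relying on it, using a scratch file in place of a real dump (all file names here are illustrative):

```shell
# Sanity-check the gzip round trip with a scratch file standing in for a dump
printf 'CREATE TABLE t (a integer);\n' > /tmp/scratch.sql
gzip -c /tmp/scratch.sql > /tmp/scratch.sql.gz
# Decompress to stdout and compare byte-for-byte against the original
gunzip -c /tmp/scratch.sql.gz | cmp - /tmp/scratch.sql && echo "round trip OK"
```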
252 <title>Use <command>split</>.</title>
254 The <command>split</command> command
255 allows you to split the output into smaller files that are
256 acceptable in size to the underlying file system. For example, to
257 make chunks of 1 megabyte:
260 pg_dump <replaceable class="parameter">dbname</replaceable> | split -b 1m - <replaceable class="parameter">filename</replaceable>
266 cat <replaceable class="parameter">filename</replaceable>* | psql <replaceable class="parameter">dbname</replaceable>
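Because <command>split</> generates its suffixes in lexicographic order (<literal>aa</>, <literal>ab</>, and so on), the shell glob in the <command>cat</> step reassembles the pieces in the correct order. This can be verified locally with scratch data standing in for a real dump (file names are illustrative):

```shell
# Verify that split pieces reassemble byte-for-byte, using scratch data
rm -f /tmp/dumpfile /tmp/dumpfile.part.*
head -c 3145728 /dev/urandom > /tmp/dumpfile       # 3 MB of test data
split -b 1m /tmp/dumpfile /tmp/dumpfile.part.      # pieces: .aa, .ab, .ac
# Glob expansion is lexicographic, matching split's suffix order
cat /tmp/dumpfile.part.* | cmp - /tmp/dumpfile && echo "pieces reassemble correctly"
```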
272 <title>Use <application>pg_dump</>'s custom dump format.</title>
274 If <productname>PostgreSQL</productname> was built on a system with the
275 <application>zlib</> compression library installed, the custom dump
276 format will compress data as it writes it to the output file. This will
277 produce dump file sizes similar to using <command>gzip</command>, but it
278 has the added advantage that tables can be restored selectively. The
279 following command dumps a database using the custom dump format:
282 pg_dump -Fc <replaceable class="parameter">dbname</replaceable> > <replaceable class="parameter">filename</replaceable>
285 A custom-format dump is not a script for <application>psql</>, but
286 instead must be restored with <application>pg_restore</>, for example:
289 pg_restore -d <replaceable class="parameter">dbname</replaceable> <replaceable class="parameter">filename</replaceable>
292 See the <xref linkend="app-pgdump"> and <xref
293 linkend="app-pgrestore"> reference pages for details.
298 For very large databases, you might need to combine <command>split</>
299 with one of the other two approaches.
305 <sect1 id="backup-file">
306 <title>File System Level Backup</title>
309 An alternative backup strategy is to directly copy the files that
310 <productname>PostgreSQL</> uses to store the data in the database;
311 <xref linkend="creating-cluster"> explains where these files
312 are located. You can use whatever method you prefer
313 for doing file system backups; for example:
316 tar -cf backup.tar /usr/local/pgsql/data
321 There are two restrictions, however, which make this method
322 impractical, or at least inferior to the <application>pg_dump</>
328 The database server <emphasis>must</> be shut down in order to
329 get a usable backup. Half-way measures such as disallowing all
330 connections will <emphasis>not</emphasis> work
331 (in part because <command>tar</command> and similar tools do not take
332 an atomic snapshot of the state of the file system,
333 but also because of internal buffering within the server).
334 Information about stopping the server can be found in
335 <xref linkend="server-shutdown">. Needless to say, you
336 also need to shut down the server before restoring the data.
342 If you have dug into the details of the file system layout of the
343 database, you might be tempted to try to back up or restore only certain
344 individual tables or databases from their respective files or
345 directories. This will <emphasis>not</> work because the
346 information contained in these files is not usable without
347 the commit log files,
348 <filename>pg_clog/*</filename>, which contain the commit status of
349 all transactions. A table file is only usable with this
350 information. Of course it is also impossible to restore only a
351 table and the associated <filename>pg_clog</filename> data
352 because that would render all other tables in the database
353 cluster useless. So file system backups only work for complete
354 backup and restoration of an entire database cluster.
361 An alternative file-system backup approach is to make a
362 <quote>consistent snapshot</quote> of the data directory, if the
363 file system supports that functionality (and you are willing to
364 trust that it is implemented correctly). The typical procedure is
365 to make a <quote>frozen snapshot</> of the volume containing the
366 database, then copy the whole data directory (not just parts, see
367 above) from the snapshot to a backup device, then release the frozen
368 snapshot. This will work even while the database server is running.
369 However, a backup created in this way saves
370 the database files in a state as if the database server was not
371 properly shut down; therefore, when you start the database server
372 on the backed-up data, it will think the previous server instance
373 crashed and will replay the WAL log. This is not a problem; just
374 be aware of it (and be sure to include the WAL files in your backup).
378 If your database is spread across multiple file systems, there might not
379 be any way to obtain exactly-simultaneous frozen snapshots of all
380 the volumes. For example, if your data files and WAL log are on different
381 disks, or if tablespaces are on different file systems, it might
382 not be possible to use snapshot backup because the snapshots
383 <emphasis>must</> be simultaneous.
384 Read your file system documentation very carefully before trusting
385 the consistent-snapshot technique in such situations.
389 If simultaneous snapshots are not possible, one option is to shut down
390 the database server long enough to establish all the frozen snapshots.
391 Another option is to perform a continuous archiving base backup (<xref
392 linkend="backup-base-backup">) because such backups are immune to file
393 system changes during the backup. This requires enabling continuous
394 archiving just during the backup process; restore is done using
395 continuous archive recovery (<xref linkend="backup-pitr-recovery">).
399 Another option is to use <application>rsync</> to perform a file
400 system backup. This is done by first running <application>rsync</>
401 while the database server is running, then shutting down the database
402 server just long enough to do a second <application>rsync</>. The
403 second <application>rsync</> will be much quicker than the first,
404 because it has relatively little data to transfer, and the end result
405 will be consistent because the server was down. This method
406 allows a file system backup to be performed with minimal downtime.
410 Note that a file system backup will typically be larger
411 than an SQL dump. (<application>pg_dump</application> does not need to dump
412 the contents of indexes for example, just the commands to recreate
413 them.) However, taking a file system backup might be faster.
417 <sect1 id="continuous-archiving">
418 <title>Continuous Archiving and Point-In-Time Recovery (PITR)</title>
420 <indexterm zone="backup">
421 <primary>continuous archiving</primary>
424 <indexterm zone="backup">
425 <primary>point-in-time recovery</primary>
428 <indexterm zone="backup">
429 <primary>PITR</primary>
433 At all times, <productname>PostgreSQL</> maintains a
434 <firstterm>write ahead log</> (WAL) in the <filename>pg_xlog/</>
435 subdirectory of the cluster's data directory. The log records
436 every change made to the database's data files. This log exists
437 primarily for crash-safety purposes: if the system crashes, the
438 database can be restored to consistency by <quote>replaying</> the
439 log entries made since the last checkpoint. However, the existence
440 of the log makes it possible to use a third strategy for backing up
441 databases: we can combine a file-system-level backup with backup of
442 the WAL files. If recovery is needed, we restore the file system backup and
443 then replay from the backed-up WAL files to bring the system to a
444 current state. This approach is more complex to administer than
445 either of the previous approaches, but it has some significant
450 We do not need a perfectly consistent file system backup as the starting point.
451 Any internal inconsistency in the backup will be corrected by log
452 replay (this is not significantly different from what happens during
453 crash recovery). So we do not need a file system snapshot capability,
454 just <application>tar</> or a similar archiving tool.
459 Since we can combine an indefinitely long sequence of WAL files
460 for replay, continuous backup can be achieved simply by continuing to archive
461 the WAL files. This is particularly valuable for large databases, where
462 it might not be convenient to take a full backup frequently.
467 It is not necessary to replay the WAL entries all the
468 way to the end. We could stop the replay at any point and have a
469 consistent snapshot of the database as it was at that time. Thus,
470 this technique supports <firstterm>point-in-time recovery</>: it is
471 possible to restore the database to its state at any time since your base
477 If we continuously feed the series of WAL files to another
478 machine that has been loaded with the same base backup file, we
479 have a <firstterm>warm standby</> system: at any point we can bring up
480 the second machine and it will have a nearly-current copy of the
489 <application>pg_dump</application> and
490 <application>pg_dumpall</application> do not produce file-system-level
491 backups and cannot be used as part of a continuous-archiving solution.
492 Such dumps are <emphasis>logical</> and do not contain enough
493 information to be used by WAL replay.
498 As with the plain file-system-backup technique, this method can only
499 support restoration of an entire database cluster, not a subset.
500 Also, it requires a lot of archival storage: the base backup might be bulky,
501 and a busy system will generate many megabytes of WAL traffic that
502 have to be archived. Still, it is the preferred backup technique in
503 many situations where high reliability is needed.
507 To recover successfully using continuous archiving (also called
508 <quote>online backup</> by many database vendors), you need a continuous
509 sequence of archived WAL files that extends back at least as far as the
510 start time of your backup. So to get started, you should set up and test
511 your procedure for archiving WAL files <emphasis>before</> you take your
512 first base backup. Accordingly, we first discuss the mechanics of
516 <sect2 id="backup-archiving-wal">
517 <title>Setting up WAL archiving</title>
520 In an abstract sense, a running <productname>PostgreSQL</> system
521 produces an indefinitely long sequence of WAL records. The system
522 physically divides this sequence into WAL <firstterm>segment
523 files</>, which are normally 16MB apiece (although the segment size
524 can be altered when building <productname>PostgreSQL</>). The segment
525 files are given numeric names that reflect their position in the
526 abstract WAL sequence. When not using WAL archiving, the system
527 normally creates just a few segment files and then
528 <quote>recycles</> them by renaming no-longer-needed segment files
529 to higher segment numbers. It's assumed that segment files whose
530 contents precede the checkpoint-before-last are no longer of
531 interest and can be recycled.
535 When archiving WAL data, we need to capture the contents of each segment
536 file once it is filled, and save that data somewhere before the segment
537 file is recycled for reuse. Depending on the application and the
538 available hardware, there could be many different ways of <quote>saving
539 the data somewhere</>: we could copy the segment files to an NFS-mounted
540 directory on another machine, write them onto a tape drive (ensuring that
541 you have a way of identifying the original name of each file), or batch
542 them together and burn them onto CDs, or something else entirely. To
543 provide the database administrator with flexibility,
544 <productname>PostgreSQL</> tries not to make any assumptions about how
545 the archiving will be done. Instead, <productname>PostgreSQL</> lets
546 the administrator specify a shell command to be executed to copy a
547 completed segment file to wherever it needs to go. The command could be
548 as simple as a <literal>cp</>, or it could invoke a complex shell
549 script — it's all up to you.
553 To enable WAL archiving, set the <xref linkend="guc-wal-level">
554 configuration parameter to <literal>archive</> (or <literal>hot_standby</>),
555 <xref linkend="guc-archive-mode"> to <literal>on</>,
556 and specify the shell command to use in the <xref
557 linkend="guc-archive-command"> configuration parameter. In practice
558 these settings will always be placed in the
559 <filename>postgresql.conf</filename> file.
560 In <varname>archive_command</>,
561 <literal>%p</> is replaced by the path name of the file to
562 archive, while <literal>%f</> is replaced by only the file name.
563 (The path name is relative to the current working directory,
564 i.e., the cluster's data directory.)
565 Use <literal>%%</> if you need to embed an actual <literal>%</>
566 character in the command. The simplest useful command is something
569 archive_command = 'cp -i %p /mnt/server/archivedir/%f </dev/null'
571 which will copy archivable WAL segments to the directory
572 <filename>/mnt/server/archivedir</>. (This is an example, not a
573 recommendation, and might not work on all platforms.) After the
574 <literal>%p</> and <literal>%f</> parameters have been replaced,
575 the actual command executed might look like this:
577 cp -i pg_xlog/00000001000000A900000065 /mnt/server/archivedir/00000001000000A900000065 </dev/null
579 A similar command will be generated for each new file to be archived.
583 The archive command will be executed under the ownership of the same
584 user that the <productname>PostgreSQL</> server is running as. Since
585 the series of WAL files being archived contains effectively everything
586 in your database, you will want to be sure that the archived data is
587 protected from prying eyes; for example, archive into a directory that
588 does not have group or world read access.
592 It is important that the archive command return zero exit status if and
593 only if it succeeds. Upon getting a zero result,
594 <productname>PostgreSQL</> will assume that the file has been
595 successfully archived, and will remove or recycle it. However, a nonzero
596 status tells <productname>PostgreSQL</> that the file was not archived;
597 it will try again periodically until it succeeds.
601 The archive command should generally be designed to refuse to overwrite
602 any pre-existing archive file. This is an important safety feature to
603 preserve the integrity of your archive in case of administrator error
604 (such as sending the output of two different servers to the same archive
606 It is advisable to test your proposed archive command to ensure that it
607 indeed does not overwrite an existing file, <emphasis>and that it returns
608 nonzero status in this case</>. On many Unix platforms, <command>cp
609 -i</> causes copy to prompt before overwriting a file, and
610 <literal>< /dev/null</> causes the prompt (and overwriting) to
611 fail. If your platform does not support this behavior, you should
612 add a command to test for the existence of the archive file. For
613 example, something like:
615 archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
617 works correctly on most Unix variants.
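The behavior of this pattern can be checked locally, with scratch files standing in for the WAL segment and the archive directory (the paths below are illustrative stand-ins, not a recommendation):

```shell
# Demonstrate the refuse-to-overwrite pattern with scratch files
# standing in for %p (the segment) and the archive directory
rm -rf /tmp/archivedir
mkdir -p /tmp/archivedir
echo "segment data" > /tmp/segfile

status1=0
test ! -f /tmp/archivedir/segfile && cp /tmp/segfile /tmp/archivedir/segfile || status1=$?
echo "first attempt: $status1"      # zero: segment archived successfully

status2=0
test ! -f /tmp/archivedir/segfile && cp /tmp/segfile /tmp/archivedir/segfile || status2=$?
echo "second attempt: $status2"     # nonzero: refused to overwrite
```

A nonzero status on the second attempt is exactly what the server needs in order to recognize the failure and retry rather than silently clobber an archived segment.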
621 While designing your archiving setup, consider what will happen if
622 the archive command fails repeatedly because some aspect requires
623 operator intervention or the archive runs out of space. For example, this
624 could occur if you write to tape without an autochanger; when the tape
625 fills, nothing further can be archived until the tape is swapped.
626 You should ensure that any error condition or request to a human operator
627 is reported appropriately so that the situation can be
628 resolved reasonably quickly. The <filename>pg_xlog/</> directory will
629 continue to fill with WAL segment files until the situation is resolved.
630 (If the file system containing <filename>pg_xlog/</> fills up,
631 <productname>PostgreSQL</> will do a PANIC shutdown. No committed
632 transactions will be lost, but the database will remain offline until
633 you free some space.)
637 The speed of the archiving command is unimportant as long as it can keep up
638 with the average rate at which your server generates WAL data. Normal
639 operation continues even if the archiving process falls a little behind.
640 If archiving falls significantly behind, this will increase the amount of
641 data that would be lost in the event of a disaster. It will also mean that
642 the <filename>pg_xlog/</> directory will contain large numbers of
643 not-yet-archived segment files, which could eventually exceed available
644 disk space. You are advised to monitor the archiving process to ensure that
645 it is working as you intend.
649 In writing your archive command, you should assume that the file names to
650 be archived can be up to 64 characters long and can contain any
651 combination of ASCII letters, digits, and dots. It is not necessary to
652 preserve the original relative path (<literal>%p</>) but it is necessary to
653 preserve the file name (<literal>%f</>).
657 Note that although WAL archiving will allow you to restore any
658 modifications made to the data in your <productname>PostgreSQL</> database,
659 it will not restore changes made to configuration files (that is,
660 <filename>postgresql.conf</>, <filename>pg_hba.conf</> and
661 <filename>pg_ident.conf</>), since those are edited manually rather
662 than through SQL operations.
663 You might wish to keep the configuration files in a location that will
664 be backed up by your regular file system backup procedures. See
665 <xref linkend="runtime-config-file-locations"> for how to relocate the
670 The archive command is only invoked on completed WAL segments. Hence,
671 if your server generates little WAL traffic (or has slack periods
672 where it does so), there could be a long delay between the completion
673 of a transaction and its safe recording in archive storage. To put
674 a limit on how old unarchived data can be, you can set
675 <xref linkend="guc-archive-timeout"> to force the server to switch
676 to a new WAL segment file at least that often. Note that
677 files archived early due to a forced switch are still the same
678 length as completely full files. It is therefore unwise to set a very
679 short <varname>archive_timeout</> — it will bloat your archive
680 storage. <varname>archive_timeout</> settings of a minute or so are
685 Also, you can force a segment switch manually with
686 <function>pg_switch_xlog</> if you want to ensure that a
687 just-finished transaction is archived as soon as possible. Other utility
688 functions related to WAL management are listed in <xref
689 linkend="functions-admin-backup-table">.
693 When <varname>wal_level</> is <literal>minimal</> some SQL commands
694 are optimized to avoid WAL logging, as described in <xref
695 linkend="populate-pitr">. If archiving or streaming replication were
696 turned on during execution of one of these statements, WAL would not
697 contain enough information for archive recovery. (Crash recovery is
698 unaffected.) For this reason, <varname>wal_level</> can only be changed at
699 server start. However, <varname>archive_command</> can be changed with a
700 configuration file reload. If you wish to temporarily stop archiving,
701 one way to do it is to set <varname>archive_command</> to the empty
702 string (<literal>''</>).
703 This will cause WAL files to accumulate in <filename>pg_xlog/</> until a
704 working <varname>archive_command</> is re-established.
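Such a temporary pause might look like this in <filename>postgresql.conf</filename> (followed by a configuration reload, not a restart):

```
archive_command = ''    # archiving paused; pg_xlog/ will grow until a
                        # working command is restored and the file reloaded
```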
708 <sect2 id="backup-base-backup">
709 <title>Making a Base Backup</title>
712 The procedure for making a base backup is relatively simple:
716 Ensure that WAL archiving is enabled and working.
721 Connect to the database as a superuser and issue the command:
723 SELECT pg_start_backup('label');
725 where <literal>label</> is any string you want to use to uniquely
726 identify this backup operation. (One good practice is to use the
727 full path where you intend to put the backup dump file.)
728 <function>pg_start_backup</> creates a <firstterm>backup label</> file,
729 called <filename>backup_label</>, in the cluster directory with
730 information about your backup, including the start time and label
735 It does not matter which database within the cluster you connect to in order to
736 issue this command. You can ignore the result returned by the function;
737 but if it reports an error, deal with that before proceeding.
741 By default, <function>pg_start_backup</> can take a long time to finish.
742 This is because it performs a checkpoint, and the I/O
743 required for the checkpoint will be spread out over a significant
744 period of time, by default half your inter-checkpoint interval
745 (see the configuration parameter
746 <xref linkend="guc-checkpoint-completion-target">). This is
747 usually what you want, because it minimizes the impact on query
748 processing. If you want to start the backup as soon as
751 SELECT pg_start_backup('label', true);
753 This forces the checkpoint to be done as quickly as possible.
758 Perform the backup, using any convenient file-system-backup tool
759 such as <application>tar</> or <application>cpio</> (not
760 <application>pg_dump</application> or
761 <application>pg_dumpall</application>). It is neither
762 necessary nor desirable to stop normal operation of the database
768 Again connect to the database as a superuser, and issue the command:
770 SELECT pg_stop_backup();
772 This terminates the backup mode and performs an automatic switch to
773 the next WAL segment. The reason for the switch is to arrange for
774 the last WAL segment file written during the backup interval to be
780 Once the WAL segment files active during the backup are archived, you are
781 done. The file identified by <function>pg_stop_backup</>'s result is
782 the last segment that is required to form a complete set of backup files.
783 If <varname>archive_mode</> is enabled,
784 <function>pg_stop_backup</> does not return until the last segment has
786 Archiving of these files happens automatically since you have
787 already configured <varname>archive_command</>. In most cases this
788 happens quickly, but you are advised to monitor your archive
789 system to ensure there are no delays.
790 If the archive process has fallen behind
791 because of failures of the archive command, it will keep retrying
792 until the archive succeeds and the backup is complete.
793 If you wish to place a time limit on the execution of
794 <function>pg_stop_backup</>, set an appropriate
795 <varname>statement_timeout</varname> value.
802 Some file system backup tools emit warnings or errors
803 if the files they are trying to copy change while the copy proceeds.
804 When taking a base backup of an active database, this situation is normal
805 and not an error. However, you need to ensure that you can distinguish
806 complaints of this sort from real errors. For example, some versions
807 of <application>rsync</> return a separate exit code for
808 <quote>vanished source files</>, and you can write a driver script to
809 accept this exit code as a non-error case. Also, some versions of
810 GNU <application>tar</> return an error code indistinguishable from
811 a fatal error if a file was truncated while <application>tar</> was
812 copying it. Fortunately, GNU <application>tar</> versions 1.16 and
813 later exit with <literal>1</> if a file was changed during the backup,
814 and <literal>2</> for other errors.
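For example, a driver script around <application>rsync</> might map that
exit status as follows. This is only a sketch; the data and backup paths
in the usage comment are placeholders for your own locations.

```shell
#!/bin/sh
# Sketch: treat rsync's exit status 24 ("some source files vanished
# during transfer") as success, since that is expected when copying a
# live database cluster; pass every other status through unchanged.
map_rsync_status() {
    if [ "$1" -eq 24 ]; then echo 0; else echo "$1"; fi
}

# Typical use (paths are placeholders):
#   rsync -a /usr/local/pgsql/data/ /backup/pgdata/
#   exit "$(map_rsync_status $?)"
```

The same idea applies to GNU <application>tar</> 1.16 and later: accept
exit status 1 as a warning, and treat only 2 and above as fatal.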
818 It is not necessary to be concerned about the amount of time elapsed
819 between <function>pg_start_backup</> and the start of the actual backup,
820 nor between the end of the backup and <function>pg_stop_backup</>; a
821 few minutes' delay won't hurt anything. (However, if you normally run the
822 server with <varname>full_page_writes</> disabled, you might notice a drop
823 in performance between <function>pg_start_backup</> and
824 <function>pg_stop_backup</>, since <varname>full_page_writes</> is
825 effectively forced on during backup mode.) You must ensure that these
826 steps are carried out in sequence, without any possible
827 overlap, or you will invalidate the backup.
831 Be certain that your backup dump includes all of the files under
832 the database cluster directory (e.g., <filename>/usr/local/pgsql/data</>).
833 If you are using tablespaces that do not reside underneath this directory,
834 be careful to include them as well (and be sure that your backup dump
archives symbolic links as links, otherwise the restore will corrupt
your tablespaces).
840 You can, however, omit from the backup dump the files within the
841 cluster's <filename>pg_xlog/</> subdirectory. This
842 slight adjustment is worthwhile because it reduces the risk
843 of mistakes when restoring. This is easy to arrange if
844 <filename>pg_xlog/</> is a symbolic link pointing to someplace outside
the cluster directory, which is a common setup anyway for performance
reasons.
850 To make use of the backup, you will need to keep all the WAL
851 segment files generated during and after the file system backup.
852 To aid you in doing this, the <function>pg_stop_backup</> function
853 creates a <firstterm>backup history file</> that is immediately
854 stored into the WAL archive area. This file is named after the first
855 WAL segment file that you need for the file system backup.
856 For example, if the starting WAL file is
<literal>0000000100001234000055CD</> the backup history file will be
named something like
859 <literal>0000000100001234000055CD.007C9330.backup</>. (The second
860 part of the file name stands for an exact position within the WAL
861 file, and can ordinarily be ignored.) Once you have safely archived
862 the file system backup and the WAL segment files used during the
863 backup (as specified in the backup history file), all archived WAL
864 segments with names numerically less are no longer needed to recover
865 the file system backup and can be deleted. However, you should
866 consider keeping several backup sets to be absolutely certain that
867 you can recover your data.
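As a sketch, the pruning just described could be scripted as shown
below. The single-archive-directory layout and the decision to drive
everything from the newest <literal>.backup</> history file are
assumptions for illustration; in practice you would keep several backup
sets rather than pruning this aggressively.

```shell
#!/bin/sh
# Sketch: remove archived WAL segments that predate the newest base backup.
# The newest ".backup" history file is named after the first segment that
# backup still needs; anything sorting strictly before it can be deleted.
prune_archive() {
    archive=$1
    latest_history=$(ls "$archive"/*.backup 2>/dev/null | sort | tail -n 1)
    [ -n "$latest_history" ] || return 0    # no base backup yet: keep everything
    first_needed=$(basename "$latest_history" | cut -d. -f1)
    for f in "$archive"/*; do
        name=$(basename "$f")
        # keep history and backup-label files
        case "$name" in *.backup|*.history) continue ;; esac
        # lexical comparison is safe because segment names are fixed-width hex
        older=$(printf '%s\n%s\n' "$name" "$first_needed" | sort | head -n 1)
        if [ "$older" = "$name" ] && [ "$name" != "$first_needed" ]; then
            rm "$f"
        fi
    done
    return 0
}

# prune_archive /mnt/server/archivedir
```

A contrib program with a similar purpose, <application>pg_archivecleanup</>,
may also be available in your installation and is less error-prone than a
hand-rolled script.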
871 The backup history file is just a small text file. It contains the
872 label string you gave to <function>pg_start_backup</>, as well as
873 the starting and ending times and WAL segments of the backup.
874 If you used the label to identify the associated dump file,
then the archived history file is enough to tell you which dump file to
restore.
880 Since you have to keep around all the archived WAL files back to your
881 last base backup, the interval between base backups should usually be
882 chosen based on how much storage you want to expend on archived WAL
883 files. You should also consider how long you are prepared to spend
884 recovering, if recovery should be necessary — the system will have to
replay all those WAL segments, and that could take a while if it has
886 been a long time since the last base backup.
890 It's also worth noting that the <function>pg_start_backup</> function
891 makes a file named <filename>backup_label</> in the database cluster
892 directory, which is removed by <function>pg_stop_backup</>.
893 This file will of course be archived as a part of your backup dump file.
894 The backup label file includes the label string you gave to
895 <function>pg_start_backup</>, as well as the time at which
896 <function>pg_start_backup</> was run, and the name of the starting WAL
897 file. In case of confusion it is
898 therefore possible to look inside a backup dump file and determine
899 exactly which backup session the dump file came from.
903 It is also possible to make a backup dump while the server is
904 stopped. In this case, you obviously cannot use
905 <function>pg_start_backup</> or <function>pg_stop_backup</>, and
906 you will therefore be left to your own devices to keep track of which
907 backup dump is which and how far back the associated WAL files go.
908 It is generally better to follow the continuous archiving procedure above.
912 <sect2 id="backup-pitr-recovery">
913 <title>Recovering using a Continuous Archive Backup</title>
916 Okay, the worst has happened and you need to recover from your backup.
917 Here is the procedure:
921 Stop the server, if it's running.
926 If you have the space to do so,
927 copy the whole cluster data directory and any tablespaces to a temporary
928 location in case you need them later. Note that this precaution will
929 require that you have enough free space on your system to hold two
930 copies of your existing database. If you do not have enough space,
931 you should at least save the contents of the cluster's <filename>pg_xlog</>
932 subdirectory, as it might contain logs which
933 were not archived before the system went down.
938 Remove all existing files and subdirectories under the cluster data
939 directory and under the root directories of any tablespaces you are using.
944 Restore the database files from your file system backup. Be sure that they
945 are restored with the right ownership (the database system user, not
<literal>root</>!) and with the right permissions. If you are using
tablespaces,
948 you should verify that the symbolic links in <filename>pg_tblspc/</>
949 were correctly restored.
954 Remove any files present in <filename>pg_xlog/</>; these came from the
955 file system backup and are therefore probably obsolete rather than current.
956 If you didn't archive <filename>pg_xlog/</> at all, then recreate
957 it with proper permissions,
958 being careful to ensure that you re-establish it as a symbolic link
959 if you had it set up that way before.
964 If you have unarchived WAL segment files that you saved in step 2,
965 copy them into <filename>pg_xlog/</>. (It is best to copy them,
966 not move them, so you still have the unmodified files if a
967 problem occurs and you have to start over.)
972 Create a recovery command file <filename>recovery.conf</> in the cluster
973 data directory (see <xref linkend="recovery-config">). You might
974 also want to temporarily modify <filename>pg_hba.conf</> to prevent
975 ordinary users from connecting until you are sure the recovery was successful.
980 Start the server. The server will go into recovery mode and
981 proceed to read through the archived WAL files it needs. Should the
982 recovery be terminated because of an external error, the server can
983 simply be restarted and it will continue recovery. Upon completion
984 of the recovery process, the server will rename
985 <filename>recovery.conf</> to <filename>recovery.done</> (to prevent
986 accidentally re-entering recovery mode later) and then
987 commence normal database operations.
992 Inspect the contents of the database to ensure you have recovered to
993 the desired state. If not, return to step 1. If all is well,
994 allow your users to connect by restoring <filename>pg_hba.conf</> to normal.
1001 The key part of all this is to set up a recovery configuration file that
1002 describes how you want to recover and how far the recovery should
1003 run. You can use <filename>recovery.conf.sample</> (normally
1004 located in the installation's <filename>share/</> directory) as a
1005 prototype. The one thing that you absolutely must specify in
1006 <filename>recovery.conf</> is the <varname>restore_command</>,
1007 which tells <productname>PostgreSQL</> how to retrieve archived
1008 WAL file segments. Like the <varname>archive_command</>, this is
1009 a shell command string. It can contain <literal>%f</>, which is
1010 replaced by the name of the desired log file, and <literal>%p</>,
1011 which is replaced by the path name to copy the log file to.
1012 (The path name is relative to the current working directory,
1013 i.e., the cluster's data directory.)
1014 Write <literal>%%</> if you need to embed an actual <literal>%</>
1015 character in the command. The simplest useful command is
1018 restore_command = 'cp /mnt/server/archivedir/%f %p'
1020 which will copy previously archived WAL segments from the directory
1021 <filename>/mnt/server/archivedir</>. Of course, you can use something
1022 much more complicated, perhaps even a shell script that requests the
1023 operator to mount an appropriate tape.
1027 It is important that the command return nonzero exit status on failure.
1028 The command <emphasis>will</> be called requesting files that are not present
1029 in the archive; it must return nonzero when so asked. This is not an
1030 error condition. Not all of the requested files will be WAL segment
1031 files; you should also expect requests for files with a suffix of
1032 <literal>.backup</> or <literal>.history</>. Also be aware that
1033 the base name of the <literal>%p</> path will be different from
1034 <literal>%f</>; do not expect them to be interchangeable.
1038 WAL segments that cannot be found in the archive will be sought in
1039 <filename>pg_xlog/</>; this allows use of recent un-archived segments.
1040 However, segments that are available from the archive will be used in
1041 preference to files in <filename>pg_xlog/</>. The system will not
overwrite the existing contents of <filename>pg_xlog/</> when retrieving
archived files.
1047 Normally, recovery will proceed through all available WAL segments,
1048 thereby restoring the database to the current point in time (or as
1049 close as possible given the available WAL segments). Therefore, a normal
1050 recovery will end with a <quote>file not found</> message, the exact text
1051 of the error message depending upon your choice of
1052 <varname>restore_command</>. You may also see an error message
1053 at the start of recovery for a file named something like
1054 <filename>00000001.history</>. This is also normal and does not
1055 indicate a problem in simple recovery situations; see
1056 <xref linkend="backup-timelines"> for discussion.
1060 If you want to recover to some previous point in time (say, right before
1061 the junior DBA dropped your main transaction table), just specify the
1062 required stopping point in <filename>recovery.conf</>. You can specify
1063 the stop point, known as the <quote>recovery target</>, either by
1064 date/time or by completion of a specific transaction ID. As of this
1065 writing only the date/time option is very usable, since there are no tools
1066 to help you identify with any accuracy which transaction ID to use.
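As a sketch, a <filename>recovery.conf</> that stops replay just before a
mistake made at 5:15PM might look like this (the timestamp and archive
path are illustrative):

```
restore_command = 'cp /mnt/server/archivedir/%f %p'
recovery_target_time = '2010-05-04 17:14:00'
```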
1071 The stop point must be after the ending time of the base backup, i.e.,
1072 the end time of <function>pg_stop_backup</>. You cannot use a base backup
1073 to recover to a time when that backup was in progress. (To
1074 recover to such a time, you must go back to your previous base backup
1075 and roll forward from there.)
1080 If recovery finds corrupted WAL data, recovery will
1081 halt at that point and the server will not start. In such a case the
1082 recovery process could be re-run from the beginning, specifying a
1083 <quote>recovery target</> before the point of corruption so that recovery
1084 can complete normally.
1085 If recovery fails for an external reason, such as a system crash or
1086 if the WAL archive has become inaccessible, then the recovery can simply
1087 be restarted and it will restart almost from where it failed.
1088 Recovery restart works much like checkpointing in normal operation:
1089 the server periodically forces all its state to disk, and then updates
1090 the <filename>pg_control</> file to indicate that the already-processed
1091 WAL data need not be scanned again.
1096 <sect2 id="backup-timelines">
1097 <title>Timelines</title>
1099 <indexterm zone="backup">
1100 <primary>timelines</primary>
1104 The ability to restore the database to a previous point in time creates
1105 some complexities that are akin to science-fiction stories about time
1106 travel and parallel universes. For example, in the original history of the database,
1107 suppose you dropped a critical table at 5:15PM on Tuesday evening, but
1108 didn't realize your mistake until Wednesday noon.
1109 Unfazed, you get out your backup, restore to the point-in-time 5:14PM
1110 Tuesday evening, and are up and running. In <emphasis>this</> history of
1111 the database universe, you never dropped the table. But suppose
1112 you later realize this wasn't such a great idea, and would like
1113 to return to sometime Wednesday morning in the original history.
You won't be able
to if, while your database was up-and-running, it overwrote some of the
1116 WAL segment files that led up to the time you now wish you
1117 could get back to. Thus, to avoid this, you need to distinguish the series of
1118 WAL records generated after you've done a point-in-time recovery from
1119 those that were generated in the original database history.
1123 To deal with this problem, <productname>PostgreSQL</> has a notion
1124 of <firstterm>timelines</>. Whenever an archive recovery completes,
1125 a new timeline is created to identify the series of WAL records
1126 generated after that recovery. The timeline
1127 ID number is part of WAL segment file names so a new timeline does
1128 not overwrite the WAL data generated by previous timelines. It is
1129 in fact possible to archive many different timelines. While that might
1130 seem like a useless feature, it's often a lifesaver. Consider the
1131 situation where you aren't quite sure what point-in-time to recover to,
1132 and so have to do several point-in-time recoveries by trial and error
1133 until you find the best place to branch off from the old history. Without
1134 timelines this process would soon generate an unmanageable mess. With
1135 timelines, you can recover to <emphasis>any</> prior state, including
1136 states in timeline branches that you abandoned earlier.
1140 Every time a new timeline is created, <productname>PostgreSQL</> creates
1141 a <quote>timeline history</> file that shows which timeline it branched
1142 off from and when. These history files are necessary to allow the system
1143 to pick the right WAL segment files when recovering from an archive that
1144 contains multiple timelines. Therefore, they are archived into the WAL
1145 archive area just like WAL segment files. The history files are just
1146 small text files, so it's cheap and appropriate to keep them around
1147 indefinitely (unlike the segment files which are large). You can, if
1148 you like, add comments to a history file to record your own notes about
1149 how and why this particular timeline was created. Such comments will be
1150 especially valuable when you have a thicket of different timelines as
1151 a result of experimentation.
1155 The default behavior of recovery is to recover along the same timeline
1156 that was current when the base backup was taken. If you wish to recover
1157 into some child timeline (that is, you want to return to some state that
1158 was itself generated after a recovery attempt), you need to specify the
1159 target timeline ID in <filename>recovery.conf</>. You cannot recover into
1160 timelines that branched off earlier than the base backup.
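For example, to recover into a child timeline, <filename>recovery.conf</>
might specify (the timeline ID shown is illustrative; read the real one
from the history files in your archive):

```
restore_command = 'cp /mnt/server/archivedir/%f %p'
recovery_target_timeline = '2'
```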
1164 <sect2 id="backup-tips">
1165 <title>Tips and Examples</title>
1168 Some tips for configuring continuous archiving are given here.
1171 <sect3 id="backup-standalone">
1172 <title>Standalone hot backups</title>
1175 It is possible to use <productname>PostgreSQL</>'s backup facilities to
1176 produce standalone hot backups. These are backups that cannot be used
for point-in-time recovery, yet are typically much faster to back up and
1178 restore than <application>pg_dump</> dumps. (They are also much larger
than <application>pg_dump</> dumps, so in some cases the speed advantage
might be negated.)
1184 To prepare for standalone hot backups, set <varname>wal_level</> to
1185 <literal>archive</> (or <literal>hot_standby</>), <varname>archive_mode</> to
1186 <literal>on</>, and set up an <varname>archive_command</> that performs
1187 archiving only when a <emphasis>switch file</> exists. For example:
1189 archive_command = 'test ! -f /var/lib/pgsql/backup_in_progress || cp -i %p /var/lib/pgsql/archive/%f < /dev/null'
1191 This command will perform archiving when
1192 <filename>/var/lib/pgsql/backup_in_progress</> exists, and otherwise
1193 silently return zero exit status (allowing <productname>PostgreSQL</>
1194 to recycle the unwanted WAL file).
1198 With this preparation, a backup can be taken using a script like the
1201 touch /var/lib/pgsql/backup_in_progress
1202 psql -c "select pg_start_backup('hot_backup');"
1203 tar -cf /var/lib/pgsql/backup.tar /var/lib/pgsql/data/
1204 psql -c "select pg_stop_backup();"
1205 rm /var/lib/pgsql/backup_in_progress
1206 tar -rf /var/lib/pgsql/backup.tar /var/lib/pgsql/archive/
1208 The switch file <filename>/var/lib/pgsql/backup_in_progress</> is
1209 created first, enabling archiving of completed WAL files to occur.
1210 After the backup the switch file is removed. Archived WAL files are
1211 then added to the backup so that both base backup and all required
1212 WAL files are part of the same <application>tar</> file.
1213 Please remember to add error handling to your backup scripts.
1217 If archive storage size is a concern, use <application>pg_compresslog</>,
1218 <ulink url="http://pglesslog.projects.postgresql.org"></ulink>, to
1219 remove unnecessary <xref linkend="guc-full-page-writes"> and trailing
1220 space from the WAL files. You can then use
1221 <application>gzip</application> to further compress the output of
1222 <application>pg_compresslog</>:
1224 archive_command = 'pg_compresslog %p - | gzip > /var/lib/pgsql/archive/%f'
1226 You will then need to use <application>gunzip</> and
1227 <application>pg_decompresslog</> during recovery:
1229 restore_command = 'gunzip < /mnt/server/archivedir/%f | pg_decompresslog - %p'
1234 <sect3 id="backup-scripts">
1235 <title><varname>archive_command</varname> scripts</title>
1238 Many people choose to use scripts to define their
1239 <varname>archive_command</varname>, so that their
1240 <filename>postgresql.conf</> entry looks very simple:
1242 archive_command = 'local_backup_script.sh'
1244 Using a separate script file is advisable any time you want to use
1245 more than a single command in the archiving process.
1246 This allows all complexity to be managed within the script, which
1247 can be written in a popular scripting language such as
1248 <application>bash</> or <application>perl</>.
1249 Any messages written to <literal>stderr</> from the script will appear
1250 in the database server log, allowing complex configurations to be
1251 diagnosed easily if they fail.
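A minimal sketch of such a script follows. It assumes the <literal>%p</>
and <literal>%f</> values are passed as arguments (i.e.,
<literal>archive_command = 'local_backup_script.sh "%p" "%f"'</literal>)
and that the archive directory shown exists; both are illustrative
choices, not requirements.

```shell
#!/bin/sh
# Hypothetical local_backup_script.sh: $1 is the path of the WAL file
# to archive (%p), $2 its file name (%f).  The archive directory is an
# assumption; error text goes to stderr and thus into the server log.
archive_wal() {
    src=$1; name=$2; dest=${3:-/var/lib/pgsql/archive}
    if [ -f "$dest/$name" ]; then
        # Never silently overwrite: a duplicate name usually means
        # a misconfiguration.
        echo "archive failure: $dest/$name already exists" >&2
        return 1
    fi
    cp "$src" "$dest/$name"
}

if [ $# -ge 2 ]; then
    archive_wal "$1" "$2"
fi
```

Remember that the command must return a nonzero exit status on any
failure, since <productname>PostgreSQL</> uses that status to decide
whether the WAL file has been safely archived.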
1255 Examples of requirements that might be solved within a script include:
1259 Copying data to secure off-site data storage
1264 Batching WAL files so that they are transferred every three hours,
1265 rather than one at a time
1270 Interfacing with other backup and recovery software
1275 Interfacing with monitoring software to report errors
1283 <sect2 id="continuous-archiving-caveats">
1284 <title>Caveats</title>
1287 At this writing, there are several limitations of the continuous archiving
1288 technique. These will probably be fixed in future releases:
1293 Operations on hash indexes are not presently WAL-logged, so
1294 replay will not update these indexes. This will mean that any new inserts
1295 will be ignored by the index, updated rows will apparently disappear and
1296 deleted rows will still retain pointers. In other words, if you modify a
1297 table with a hash index on it then you will get incorrect query results
1298 on a standby server. When recovery completes it is recommended that you
1299 manually <xref linkend="sql-reindex">
1300 each such index after completing a recovery operation.
1306 If a <xref linkend="sql-createdatabase">
1307 command is executed while a base backup is being taken, and then
1308 the template database that the <command>CREATE DATABASE</> copied
1309 is modified while the base backup is still in progress, it is
1310 possible that recovery will cause those modifications to be
1311 propagated into the created database as well. This is of course
1312 undesirable. To avoid this risk, it is best not to modify any
1313 template databases while taking a base backup.
1319 <xref linkend="sql-createtablespace">
1320 commands are WAL-logged with the literal absolute path, and will
1321 therefore be replayed as tablespace creations with the same
1322 absolute path. This might be undesirable if the log is being
1323 replayed on a different machine. It can be dangerous even if the
1324 log is being replayed on the same machine, but into a new data
1325 directory: the replay will still overwrite the contents of the
1326 original tablespace. To avoid potential gotchas of this sort,
1327 the best practice is to take a new base backup after creating or
1328 dropping tablespaces.
1335 It should also be noted that the default <acronym>WAL</acronym>
1336 format is fairly bulky since it includes many disk page snapshots.
1337 These page snapshots are designed to support crash recovery, since
1338 we might need to fix partially-written disk pages. Depending on
1339 your system hardware and software, the risk of partial writes might
1340 be small enough to ignore, in which case you can significantly
1341 reduce the total volume of archived logs by turning off page
1342 snapshots using the <xref linkend="guc-full-page-writes">
1343 parameter. (Read the notes and warnings in <xref linkend="wal">
1344 before you do so.) Turning off page snapshots does not prevent
1345 use of the logs for PITR operations. An area for future
1346 development is to compress archived WAL data by removing
1347 unnecessary page copies even when <varname>full_page_writes</> is
1348 on. In the meantime, administrators might wish to reduce the number
1349 of page snapshots included in WAL by increasing the checkpoint
1350 interval parameters as much as feasible.
1355 <sect1 id="migration">
1356 <title>Migration Between Releases</title>
1358 <indexterm zone="migration">
1359 <primary>upgrading</primary>
1362 <indexterm zone="migration">
1363 <primary>version</primary>
1364 <secondary>compatibility</secondary>
1368 This section discusses how to migrate your database data from one
1369 <productname>PostgreSQL</> release to a newer one.
1370 The software installation procedure <foreignphrase>per se</> is not the
1371 subject of this section; those details are in <xref linkend="installation">.
1375 <productname>PostgreSQL</> major versions are represented by the
1376 first two digit groups of the version number, e.g. 8.4.
1377 <productname>PostgreSQL</> minor versions are represented by the
third group of version digits, i.e., 8.4.2 is the second minor
1379 release of 8.4. Minor releases never change the internal storage
1380 format and are always compatible with earlier and later minor
1381 releases of the same major version number, i.e. 8.4.2 is compatible
1382 with 8.4, 8.4.1 and 8.4.6. To update between compatible versions,
1383 you simply replace the executables while the server is down and
1384 restart the server. The data directory remains unchanged —
1385 minor upgrades are that simple.
1389 For <emphasis>major</> releases of <productname>PostgreSQL</>, the
1390 internal data storage format is subject to change. When migrating
1391 data from one major version of <productname>PostgreSQL</> to another,
1392 you need to back up your data and restore it on the new server.
1393 This must be done using <application>pg_dump</>; file system level
1394 backup methods will not work. There are checks in place that prevent
1395 you from using a data directory with an incompatible version of
1396 <productname>PostgreSQL</productname>, so no great harm can be done
1397 by trying to start the wrong server version on a data directory.
1401 It is recommended that you use the <application>pg_dump</> and
1402 <application>pg_dumpall</> programs from the newer version of
1403 <productname>PostgreSQL</>, to take advantage of enhancements
1404 that might have been made in these programs. Current releases of the
1405 dump programs can read data from any server version back to 7.0.
1409 The least downtime can be achieved by installing the new server in
1410 a different directory and running both the old and the new servers
1411 in parallel, on different ports. Then you can use something like:
1414 pg_dumpall -p 5432 | psql -d postgres -p 6543
1417 to transfer your data. Or use an intermediate file if you wish.
1418 Then you can shut down the old server and start the new server using
1419 the port the old one was running on. You should make sure that the
1420 old database is not updated after you begin to run
1421 <application>pg_dumpall</>, otherwise you will lose that data. See <xref
linkend="client-authentication"> for information on how to prohibit
access.
1427 It is also possible to use replication methods, such as
1428 <productname>Slony</>, to create a slave server with the updated version of
1429 <productname>PostgreSQL</>. The slave can be on the same computer or
1430 a different computer. Once it has synced up with the master server
1431 (running the older version of <productname>PostgreSQL</>), you can
1432 switch masters and make the slave the master and shut down the older
1433 database instance. Such a switch-over results in only several seconds
1434 of downtime for an upgrade.
1438 If you cannot or do not want to run two servers in parallel, you can
1439 do the backup step before installing the new version, bring down
1440 the old server, move the old version out of the way, install the new
1441 version, start the new server, and restore the data. For example:
pg_dumpall > backup
pg_ctl stop
mv /usr/local/pgsql /usr/local/pgsql.old
# Rename any tablespace directories as well
cd ~/postgresql-&version;
gmake install
initdb -D /usr/local/pgsql/data
postgres -D /usr/local/pgsql/data
psql -f backup postgres
1455 See <xref linkend="runtime"> about ways to start and stop the
1456 server and other details. The installation instructions will advise
1457 you of strategic places to perform these steps.
1462 When you <quote>move the old installation out of the way</quote>
1463 it might no longer be perfectly usable. Some of the executable programs
1464 contain absolute paths to various installed programs and data files.
1465 This is usually not a big problem, but if you plan on using two
1466 installations in parallel for a while you should assign them
1467 different installation directories at build time. (This problem
1468 is rectified in <productname>PostgreSQL</> version 8.0 and later, so long
1469 as you move all subdirectories containing installed files together;
1470 for example if <filename>/usr/local/postgres/bin/</> goes to
1471 <filename>/usr/local/postgres.old/bin/</>, then
1472 <filename>/usr/local/postgres/share/</> must go to
1473 <filename>/usr/local/postgres.old/share/</>. In pre-8.0 releases
1474 moving an installation like this will not work.)
1479 In practice you probably want to test your client applications on the
1480 new version before switching over completely. This is another reason
1481 for setting up concurrent installations of old and new versions. When
1482 testing a <productname>PostgreSQL</> major upgrade, consider the
1483 following categories of possible changes:
1489 <term>Administration</term>
1492 The capabilities available for administrators to monitor and control
1493 the server often change and improve in each major release.
1502 Typically this includes new SQL command capabilities and not changes
1503 in behavior, unless specifically mentioned in the release notes.
1509 <term>Library API</term>
1512 Typically libraries like <application>libpq</> only add new
1513 functionality, again unless mentioned in the release notes.
1519 <term>System Catalogs</term>
1522 System catalog changes usually only affect database management tools.
1528 <term>Server C-language API</term>
This involves changes in the backend function API, which is written
in the C programming language. Such changes affect code that
references backend functions deep inside the server.