From: Tom Lane
Date: Tue, 28 Dec 2004 19:08:58 +0000 (+0000)
Subject: More minor updates and copy-editing.
X-Git-Tag: REL8_0_0RC3~14
X-Git-Url: https://granicus.if.org/sourcecode?a=commitdiff_plain;h=7737d01ece4823a6de70e57a35ab1c735984bd0b;p=postgresql

More minor updates and copy-editing.
---

diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index bdfae16869..76457b6842 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -1,5 +1,5 @@
 Backup and Restore
@@ -7,7 +7,7 @@ $PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.53 2004/12/13 18:05:07 petere Ex
 backup

-  As everything that contains valuable data, PostgreSQL
+  As with everything that contains valuable data, PostgreSQL
   databases should be backed up regularly. While the procedure is
   essentially simple, it is important to have a basic understanding of
   the underlying techniques and assumptions.
@@ -46,9 +46,9 @@ pg_dump dbname >
   pg_dump
-  does not operate with special permissions. In particular, you must
+  does not operate with special permissions. In particular, it must
   have read access to all tables that you want to back up, so in
-  practice you almost always have to be a database superuser.
+  practice you almost always have to run it as a database superuser.
@@ -111,26 +111,25 @@ psql dbname <
   template0 before executing psql (e.g., with
   createdb -T template0 dbname).

-  psql supports similar options to pg_dump
+  psql supports options similar to pg_dump
   for controlling the database server location and the user name. See
-  its reference page for more information.
+  psql's reference page for more information.

-  If the objects in the original database were owned by different
-  users, then the dump will instruct psql to connect
-  as each affected user in turn and then create the relevant
-  objects. This way the original ownership is preserved. This also
-  means, however, that all these users must already exist, and
-  furthermore that you must be allowed to connect as each of them.
-  It might therefore be necessary to temporarily relax the client
-  authentication settings.
+  Not only must the target database already exist before starting to
+  run the restore, but so must all the users who own objects in the
+  dumped database or were granted permissions on the objects. If they
+  do not, then the restore will fail to recreate the objects with the
+  original ownership and/or permissions. (Sometimes this is what you want,
+  but usually it is not.)

   Once restored, it is wise to run ANALYZE
   on each database so the optimizer has
-  useful statistics. You can also run vacuumdb -a -z to
+  useful statistics. An easy way to do this is to run
+  vacuumdb -a -z to
   VACUUM ANALYZE all databases; this is equivalent to running
   VACUUM ANALYZE manually.
@@ -189,7 +188,7 @@ psql template1 < infile

-  Large Databases
+  Handling large databases

   Since PostgreSQL allows tables larger
@@ -249,17 +248,19 @@ cat filename* | psql
   Use the custom dump format.

-  If PostgreSQL was built on a system with the zlib compression library
-  installed, the custom dump format will compress data as it writes it
-  to the output file. For large databases, this will produce similar dump
-  sizes to using gzip, but has the added advantage that the tables can be
-  restored selectively. The following command dumps a database using the
-  custom dump format:
+  If PostgreSQL was built on a system with the
+  zlib compression library installed, the custom dump
+  format will compress data as it writes it to the output file. This will
+  produce dump file sizes similar to using gzip, but it
+  has the added advantage that tables can be restored selectively. The
+  following command dumps a database using the custom dump format:

 pg_dump -Fc dbname > filename

+  A custom-format dump is not a script for psql, but
+  instead must be restored with pg_restore. See the
+  pg_dump and pg_restore reference pages for details.
@@ -276,7 +277,8 @@ pg_dump -Fc dbname >
 backup

   To dump large objects you must use either the custom or the tar
   output format, and use the
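
As a concrete illustration of the plain dump-and-restore workflow the
patched text describes (a sketch only: the hosts "source.example.com"
and "target.example.com", the user "postgres", and the database "mydb"
are placeholder assumptions, not part of the patch), pg_dump, createdb,
psql, and vacuumdb all accept the same -h and -U connection options, and
vacuumdb -a -z performs the recommended post-restore VACUUM ANALYZE on
every database:

pg_dump -h source.example.com -U postgres mydb > mydb.sql
createdb -h target.example.com -U postgres -T template0 mydb
psql -h target.example.com -U postgres mydb < mydb.sql
vacuumdb -h target.example.com -U postgres -a -z

Note that, per the revised paragraph above, the roles owning or holding
permissions on the dumped objects must already exist on the target
server before the psql step is run.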
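
Likewise, a minimal sketch of the custom-format workflow added by the
last hunks, assuming the same placeholder database "mydb" and a dump
file named "mydb.dump": the dump is compressed automatically when zlib
is available, and because it is not a psql script it must be fed to
pg_restore, which can also restore a single selected table:

pg_dump -Fc mydb > mydb.dump
pg_restore -d mydb mydb.dump
pg_restore -d mydb -t mytable mydb.dump

The -t switch here names a hypothetical table "mytable"; selective
restore of this kind is the advantage over a gzip-compressed plain
script dump, which can only be replayed in full.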