From: Heikki Linnakangas Date: Wed, 31 Mar 2010 19:13:01 +0000 (+0000) Subject: Enhance documentation of the built-in standby mode, explaining the retry X-Git-Tag: REL9_0_ALPHA5~11 X-Git-Url: https://granicus.if.org/sourcecode?a=commitdiff_plain;h=991bfe11d28a9d2c70d54203bac2562995af504a;p=postgresql Enhance documentation of the built-in standby mode, explaining the retry loop in standby mode, trying to restore from archive, pg_xlog and streaming. Move sections around to make the high availability chapter more coherent: the most prominent part is now a "Log-Shipping Standby Servers" section that describes what a standby server is (like the old "Warm Standby Servers for High Availability" section), and how to set up a warm standby server, including streaming replication, using the built-in standby mode. The pg_standby method is described in another section called "Alternative method for log shipping", with the added caveat that it doesn't work with streaming replication. --- diff --git a/doc/src/sgml/high-availability.sgml b/doc/src/sgml/high-availability.sgml index 00f2779229..9b24ebbb2a 100644 --- a/doc/src/sgml/high-availability.sgml +++ b/doc/src/sgml/high-availability.sgml @@ -1,4 +1,4 @@ - + High Availability, Load Balancing, and Replication @@ -455,32 +455,10 @@ protocol to make nodes agree on a serializable transactional order. - - File-based Log Shipping - - - warm standby - - - - PITR standby - - - - standby server - - - - log shipping - + + Log-Shipping Standby Servers - - witness server - - - - STONITH - Continuous archiving can be used to create a high @@ -510,8 +488,8 @@ protocol to make nodes agree on a serializable transactional order. adjacent system, another system at the same site, or another system on the far side of the globe. The bandwidth required for this technique varies according to the transaction rate of the primary server. - Record-based log shipping is also possible with custom-developed - procedures, as discussed in .
+ Record-based log shipping is also possible with streaming replication + (see ). @@ -519,26 +497,52 @@ protocol to make nodes agree on a serializable transactional order. records are shipped after transaction commit. As a result, there is a window for data loss should the primary server suffer a catastrophic failure; transactions not yet shipped will be lost. The size of the - data loss window can be limited by use of the + data loss window in file-based log shipping can be limited by use of the archive_timeout parameter, which can be set as low as a few seconds. However such a low setting will substantially increase the bandwidth required for file shipping. If you need a window of less than a minute or so, consider using - . + streaming replication (see ). - The standby server is not available for access, since it is continually - performing recovery processing. Recovery performance is sufficiently - good that the standby will typically be only moments away from full + Recovery performance is sufficiently good that the standby will + typically be only moments away from full availability once it has been activated. As a result, this is called a warm standby configuration which offers high availability. Restoring a server from an archived base backup and rollforward will take considerably longer, so that technique only offers a solution for disaster recovery, not high availability. + A standby server can also be used for read-only queries, in which case + it is called a Hot Standby server. See for + more information. - + + warm standby + + + + PITR standby + + + + standby server + + + + log shipping + + + + witness server + + + + STONITH + + + Planning @@ -573,188 +577,114 @@ protocol to make nodes agree on a serializable transactional order. versa. - - There is no special mode required to enable a standby server. The - operations that occur on both primary and standby servers are - normal continuous archiving and recovery tasks. 
The only point of - contact between the two database servers is the archive of WAL files - that both share: primary writing to the archive, standby reading from - the archive. Care must be taken to ensure that WAL archives from separate - primary servers do not become mixed together or confused. The archive - need not be large if it is only required for standby operation. - + - - The magic that makes the two loosely coupled servers work together is - simply a restore_command used on the standby that, - when asked for the next WAL file, waits for it to become available from - the primary. The restore_command is specified in the - recovery.conf file on the standby server. Normal recovery - processing would request a file from the WAL archive, reporting failure - if the file was unavailable. For standby processing it is normal for - the next WAL file to be unavailable, so the standby must wait for - it to appear. For files ending in .backup or - .history there is no need to wait, and a non-zero return - code must be returned. A waiting restore_command can be - written as a custom script that loops after polling for the existence of - the next WAL file. There must also be some way to trigger failover, which - should interrupt the restore_command, break the loop and - return a file-not-found error to the standby server. This ends recovery - and the standby will then come up as a normal server. - + + Standby Server Operation - Pseudocode for a suitable restore_command is: - -triggered = false; -while (!NextWALFileReady() && !triggered) -{ - sleep(100000L); /* wait for ~0.1 sec */ - if (CheckForExternalTrigger()) - triggered = true; -} -if (!triggered) - CopyWALFileForRecovery(); - + In standby mode, the server continuously applies WAL received from the + master server. The standby server can read WAL from a WAL archive + (see restore_command) or directly from the master + over a TCP connection (streaming replication).
The standby server will + also attempt to restore any WAL found in the standby cluster's + pg_xlog directory. That typically happens after a server + restart, when the standby again replays WAL that was streamed from the + master before the restart, but you can also manually copy files to + pg_xlog at any time to have them replayed. - A working example of a waiting restore_command is provided - as a contrib module named pg_standby. It - should be used as a reference on how to correctly implement the logic - described above. It can also be extended as needed to support specific - configurations and environments. + At startup, the standby begins by restoring all WAL available in the + archive location, calling restore_command. Once it + reaches the end of WAL available there and restore_command + fails, it tries to restore any WAL available in the pg_xlog directory. + If that fails, and streaming replication has been configured, the + standby tries to connect to the primary server and start streaming WAL + from the last valid record found in archive or pg_xlog. If that fails + or streaming replication is not configured, or if the connection is + later disconnected, the standby goes back to the beginning and tries to + restore the file from the archive again. This loop of retries from the + archive, pg_xlog, and via streaming replication goes on until the server + is stopped or failover is triggered by a trigger file. - PostgreSQL does not provide the system - software required to identify a failure on the primary and notify - the standby database server. Many such tools exist and are well - integrated with the operating system facilities required for - successful failover, such as IP address migration. + Standby mode is exited and the server switches to normal operation + when a trigger file is found (trigger_file).
Before failover, it will + restore any WAL available in the archive or in pg_xlog, but won't try + to connect to the master or wait for files to become available in the + archive. + - - The method for triggering failover is an important part of planning - and design. One potential option is the restore_command - command. It is executed once for each WAL file, but the process - running the restore_command is created and dies for - each file, so there is no daemon or server process, and - signals or a signal handler cannot be used. Therefore, the - restore_command is not suitable to trigger failover. - It is possible to use a simple timeout facility, especially if - used in conjunction with a known archive_timeout - setting on the primary. However, this is somewhat error prone - since a network problem or busy primary server might be sufficient - to initiate failover. A notification mechanism such as the explicit - creation of a trigger file is ideal, if this can be arranged. - + + Preparing Master for Standby Servers - The size of the WAL archive can be minimized by using the %r - option of the restore_command. This option specifies the - last archive file name that needs to be kept to allow the recovery to - restart correctly. This can be used to truncate the archive once - files are no longer required, assuming the archive is writable from the - standby server. + Set up continuous archiving to a WAL archive on the master, as described + in . The archive location should be + accessible from the standby even when the master is down, i.e., it should + reside on the standby server itself or another trusted server, not on + the master server. - - - - Implementation - The short procedure for configuring a standby server is as follows. For - full details of each step, refer to previous sections as noted. - - - - Set up primary and standby systems as nearly identical as - possible, including two identical copies of - PostgreSQL at the same release level.
- - - - - Set up continuous archiving from the primary to a WAL archive - directory on the standby server. Ensure that - , - and - - are set appropriately on the primary - (see ). - - - - - Make a base backup of the primary server (see ), and load this data onto the standby. - - - - - Begin recovery on the standby server from the local WAL - archive, using a recovery.conf that specifies a - restore_command that waits as described - previously (see ). - - - + If you want to use streaming replication, set up authentication to allow + streaming replication connections and set max_wal_senders in + the configuration file of the primary server. - Recovery treats the WAL archive as read-only, so once a WAL file has - been copied to the standby system it can be copied to tape at the same - time as it is being read by the standby database server. - Thus, running a standby server for high availability can be performed at - the same time as files are stored for longer term disaster recovery - purposes. + Take a base backup as described in + to bootstrap the standby server. + + + + Setting up the standby server - For testing purposes, it is possible to run both primary and standby - servers on the same system. This does not provide any worthwhile - improvement in server robustness, nor would it be described as HA. + To set up the standby server, restore the base backup taken from the primary + server (see ). In the recovery command file + recovery.conf in the standby's cluster data directory, + turn on standby_mode. Set restore_command to + a simple command to copy files from the WAL archive. If you want to + use streaming replication, set primary_conninfo. - - - Record-based Log Shipping + + + Do not use pg_standby or similar tools with the built-in standby mode + described here. restore_command should return immediately + if the file does not exist; the server will retry the command if + necessary. See + for using tools like pg_standby.
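Putting the settings just described together, a minimal recovery.conf for the built-in standby mode might look like the sketch below. The archive path and trigger-file location are illustrative placeholders, and the connection string reuses the example from the Authentication section; primary_conninfo and trigger_file are only needed if you use streaming replication or plan to fail over.

```
# recovery.conf on the standby -- example values only
standby_mode     = 'on'
# Must fail immediately when the segment is not in the archive yet;
# a plain cp does that (unlike pg_standby, which waits):
restore_command  = 'cp /path/to/archive/%f "%p"'
# Only needed for streaming replication:
primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
# Creating this file triggers failover:
trigger_file     = '/path/to/trigger'
```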
+ + - PostgreSQL directly supports file-based - log shipping as described above. It is also possible to implement - record-based log shipping, though this requires custom development. + You can use restartpoint_command to prune the archive of files no longer + needed by the standby. - An external program can call the pg_xlogfile_name_offset() - function (see ) - to find out the file name and the exact byte offset within it of - the current end of WAL. It can then access the WAL file directly - and copy the data from the last known end of WAL through the current end - over to the standby servers. With this approach, the window for data - loss is the polling cycle time of the copying program, which can be very - small, and there is no wasted bandwidth from forcing partially-used - segment files to be archived. Note that the standby servers' - restore_command scripts can only deal with whole WAL files, - so the incrementally copied data is not ordinarily made available to - the standby servers. It is of use only when the primary dies — - then the last partial WAL file is fed to the standby before allowing - it to come up. The correct implementation of this process requires - cooperation of the restore_command script with the data - copying program. + If you're setting up the standby server for high availability purposes, + set up WAL archiving, connections and authentication like the primary + server, because the standby server will work as a primary server after + failover. If you're setting up the standby server for reporting + purposes, with no plans to fail over to it, configure the standby + accordingly. - Starting with PostgreSQL version 9.0, you can use - streaming replication (see ) to - achieve the same benefits with less effort. + You can have any number of standby servers, but if you use streaming + replication, make sure you set max_wal_senders high enough in + the primary to allow them to be connected simultaneously. 
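The primary-side settings named in the preceding paragraphs can be sketched as a postgresql.conf fragment like the following; the archive path is a placeholder and the sender count is just an example sized for a few standbys.

```
# postgresql.conf on the primary -- example values only
archive_mode    = on
archive_command = 'cp "%p" /path/to/archive/%f'
max_wal_senders = 3   # at least one per streaming standby
```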
- - + Streaming Replication @@ -785,101 +715,40 @@ if (!triggered) delete old WAL files still required by the standby. - - Setup - - The short procedure for configuring streaming replication is as follows. - For full details of each step, refer to other sections as noted. - - - - - Set up primary and standby systems as near identically as possible, - including two identical copies of PostgreSQL at the - same release level. - - - - - Set up continuous archiving from the primary to a WAL archive located - in a directory on the standby server. In particular, set - and - - to archive WAL files in a location accessible from the standby - (see ). - - + + To use streaming replication, set up a file-based log-shipping standby + server as described in . The step that + turns a file-based log-shipping standby into a streaming replication + standby is setting the primary_conninfo setting in the + recovery.conf file to point to the primary server. Set + and authentication options + (see pg_hba.conf) on the primary so that the standby server + can connect to the replication pseudo-database on the primary + server (see ). + - - - Set and authentication options - (see pg_hba.conf) on the primary so that the standby server can connect to - the replication pseudo-database on the primary server (see - ). - - - On systems that support the keepalive socket option, setting - , - and - helps the master promptly - notice a broken connection. - - - - - Set the maximum number of concurrent connections from the standby servers - (see for details). - - - - - Start the PostgreSQL server on the primary. - - - - - Make a base backup of the primary server (see - ), and load this data onto the - standby. Note that all files present in pg_xlog - and pg_xlog/archive_status on the standby - server should be removed because they might be obsolete.
- - - - - If you're setting up the standby server for high availability purposes, - set up WAL archiving, connections and authentication like the primary - server, because the standby server will work as a primary server after - failover. If you're setting up the standby server for reporting - purposes, with no plans to fail over to it, configure the standby - accordingly. - - - - - Create a recovery command file recovery.conf in the data - directory on the standby server. Set restore_command - as you would in normal recovery from a continuous archiving backup - (see ). pg_standby or - similar tools that wait for the next WAL file to arrive cannot be used - with streaming replication, as the server handles retries and waiting - itself. Enable standby_mode. Set - primary_conninfo to point to the primary server. - + + On systems that support the keepalive socket option, setting + , + and + helps the master promptly + notice a broken connection. + - - - - Start the PostgreSQL server on the standby. The standby - server will go into recovery mode and proceed to receive WAL records - from the primary and apply them continuously. - - - - - + + Set the maximum number of concurrent connections from the standby servers + (see for details). + + + + When the standby is started and primary_conninfo is set + correctly, the standby will connect to the primary after replaying all + WAL files available in the archive. If the connection is established + successfully, you will see a walreceiver process in the standby, and + a corresponding walsender process in the primary. + - + Authentication It is very important that the access privilege for replication be setup @@ -928,7 +797,8 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' automatically. If you mention the database parameter at all within primary_conninfo then a FATAL error will be raised. 
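One quick way to confirm that streaming is active is to look for those two processes in a process listing. The snippet below runs the grep patterns against sample ps lines whose titles match what 9.0 reports; on a live cluster, pipe real `ps ax` output through the same patterns instead.

```shell
# Sample `ps` lines for a streaming pair (process titles as of 9.0);
# on a live system, replace the echo with real `ps ax` output.
standby_ps='12345 ?  Ss  0:00 postgres: wal receiver process   streaming 0/3000158'
primary_ps='12346 ?  Ss  0:00 postgres: wal sender process rep 192.168.1.50(54321) streaming 0/3000158'

echo "$standby_ps" | grep -q 'wal receiver process' && echo 'standby is streaming'
echo "$primary_ps" | grep -q 'wal sender process'   && echo 'primary has a wal sender'
```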
- + + @@ -989,8 +859,220 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' failover mechanism to ensure that it will really work when you need it. Written administration procedures are advised. + + + To trigger failover of a log-shipping standby server, create a trigger + file with the filename and path specified by the trigger_file + setting in recovery.conf. If trigger_file is + not given, there is no way to exit recovery in the standby and promote + it to a master. That can be useful for, e.g., reporting servers that are + only used to offload read-only queries from the primary, not for high + availability purposes. + + + Alternative method for log shipping + + + An alternative to the built-in standby mode described in the previous + sections is to use a restore_command that polls the archive location. + This was the only option available in versions 8.4 and below. In this + setup, set standby_mode off, because you are implementing + the polling required for standby operation yourself. See + contrib/pg_standby () for a reference + implementation of this. + + + + Note that in this mode, the server will apply WAL one file at a + time, so if you use the standby server for queries (see Hot Standby), + there is a bigger delay between an action in the master and when the + action becomes visible in the standby, corresponding to the time it takes + to fill up the WAL file. archive_timeout can be used to make that delay + shorter. Also note that you can't combine streaming replication with + this method. + + + + The operations that occur on both primary and standby servers are + normal continuous archiving and recovery tasks. The only point of + contact between the two database servers is the archive of WAL files + that both share: primary writing to the archive, standby reading from + the archive. Care must be taken to ensure that WAL archives from separate + primary servers do not become mixed together or confused.
The archive + need not be large if it is only required for standby operation. + + + + The magic that makes the two loosely coupled servers work together is + simply a restore_command used on the standby that, + when asked for the next WAL file, waits for it to become available from + the primary. The restore_command is specified in the + recovery.conf file on the standby server. Normal recovery + processing would request a file from the WAL archive, reporting failure + if the file was unavailable. For standby processing it is normal for + the next WAL file to be unavailable, so the standby must wait for + it to appear. For files ending in .backup or + .history there is no need to wait, and a non-zero return + code must be returned. A waiting restore_command can be + written as a custom script that loops after polling for the existence of + the next WAL file. There must also be some way to trigger failover, which + should interrupt the restore_command, break the loop and + return a file-not-found error to the standby server. This ends recovery + and the standby will then come up as a normal server. + + + + Pseudocode for a suitable restore_command is: + +triggered = false; +while (!NextWALFileReady() && !triggered) +{ + sleep(100000L); /* wait for ~0.1 sec */ + if (CheckForExternalTrigger()) + triggered = true; +} +if (!triggered) + CopyWALFileForRecovery(); + + + + + A working example of a waiting restore_command is provided + as a contrib module named pg_standby. It + should be used as a reference on how to correctly implement the logic + described above. It can also be extended as needed to support specific + configurations and environments. + + + + PostgreSQL does not provide the system + software required to identify a failure on the primary and notify + the standby database server. Many such tools exist and are well + integrated with the operating system facilities required for + successful failover, such as IP address migration. 
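As one possible rendering of the pseudocode above, the waiting loop could be written as a shell function like the sketch below. The function name, paths, and poll interval are illustrative only; pg_standby remains the reference implementation described in the text.

```shell
# wait_for_wal: poll the archive for the requested WAL segment until it
# appears, or until a trigger file requests failover -- a shell rendering
# of the pseudocode above. Names and the poll interval are examples.
wait_for_wal() {
  archive=$1 walfile=$2 dest=$3 trigger=$4
  case "$walfile" in                  # never wait for .backup/.history files
    *.backup|*.history) cp "$archive/$walfile" "$dest" 2>/dev/null; return;;
  esac
  while :; do
    [ -f "$trigger" ] && return 1     # failover: report file-not-found
    if [ -f "$archive/$walfile" ]; then
      cp "$archive/$walfile" "$dest"  # the next segment has arrived
      return 0
    fi
    sleep 1                           # poll roughly once per second
  done
}

# Demonstration against a throwaway archive directory:
dir=$(mktemp -d)
echo 'wal-data' > "$dir/000000010000000000000001"
wait_for_wal "$dir" 000000010000000000000001 "$dir/restored" "$dir/trigger" \
  && echo 'restored WAL segment'
```

It would be invoked from recovery.conf as, for example, `restore_command = 'wait_for_wal.sh /path/to/archive %f %p /path/to/trigger'`, with failover triggered by creating the trigger file.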
+ + + + The method for triggering failover is an important part of planning + and design. One potential option is the restore_command + command. It is executed once for each WAL file, but the process + running the restore_command is created and dies for + each file, so there is no daemon or server process, and + signals or a signal handler cannot be used. Therefore, the + restore_command is not suitable to trigger failover. + It is possible to use a simple timeout facility, especially if + used in conjunction with a known archive_timeout + setting on the primary. However, this is somewhat error prone + since a network problem or busy primary server might be sufficient + to initiate failover. A notification mechanism such as the explicit + creation of a trigger file is ideal, if this can be arranged. + + + + The size of the WAL archive can be minimized by using the %r + option of the restore_command. This option specifies the + last archive file name that needs to be kept to allow the recovery to + restart correctly. This can be used to truncate the archive once + files are no longer required, assuming the archive is writable from the + standby server. + + + + Implementation + + + The short procedure for configuring a standby server is as follows. For + full details of each step, refer to previous sections as noted. + + + + Set up primary and standby systems as nearly identical as + possible, including two identical copies of + PostgreSQL at the same release level. + + + + + Set up continuous archiving from the primary to a WAL archive + directory on the standby server. Ensure that + , + and + + are set appropriately on the primary + (see ). + + + + + Make a base backup of the primary server (see ), and load this data onto the standby. + + + + + Begin recovery on the standby server from the local WAL + archive, using a recovery.conf that specifies a + restore_command that waits as described + previously (see ). 
+ + + + + + + Recovery treats the WAL archive as read-only, so once a WAL file has + been copied to the standby system it can be copied to tape at the same + time as it is being read by the standby database server. + Thus, running a standby server for high availability can be performed at + the same time as files are stored for longer term disaster recovery + purposes. + + + + For testing purposes, it is possible to run both primary and standby + servers on the same system. This does not provide any worthwhile + improvement in server robustness, nor would it be described as HA. + + + + + Record-based Log Shipping + + + PostgreSQL directly supports file-based + log shipping as described above. It is also possible to implement + record-based log shipping, though this requires custom development. + + + + An external program can call the pg_xlogfile_name_offset() + function (see ) + to find out the file name and the exact byte offset within it of + the current end of WAL. It can then access the WAL file directly + and copy the data from the last known end of WAL through the current end + over to the standby servers. With this approach, the window for data + loss is the polling cycle time of the copying program, which can be very + small, and there is no wasted bandwidth from forcing partially-used + segment files to be archived. Note that the standby servers' + restore_command scripts can only deal with whole WAL files, + so the incrementally copied data is not ordinarily made available to + the standby servers. It is of use only when the primary dies — + then the last partial WAL file is fed to the standby before allowing + it to come up. The correct implementation of this process requires + cooperation of the restore_command script with the data + copying program. + + + + Starting with PostgreSQL version 9.0, you can use + streaming replication (see ) to + achieve the same benefits with less effort. + + + + Hot Standby