From: Alvaro Herrera
Date: Mon, 23 Jul 2007 17:22:00 +0000 (+0000)
Subject: Reword paragraph about the autovacuum_max_workers setting. Patch from
X-Git-Tag: REL8_3_BETA1~406
X-Git-Url: https://granicus.if.org/sourcecode?a=commitdiff_plain;h=aa81c558ee5293fef96aa4049cd5a4d9da90954e;p=postgresql

Reword paragraph about the autovacuum_max_workers setting.  Patch from
Jim Nasby.
---

diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml
index bbe7319826..40e9527d0c 100644
--- a/doc/src/sgml/maintenance.sgml
+++ b/doc/src/sgml/maintenance.sgml
@@ -1,4 +1,4 @@
-
+

 Routine Database Maintenance Tasks

@@ -496,16 +496,16 @@
 HINT:  Stop the postmaster and use a standalone backend to VACUUM in "mydb".

-    There is a limit of <xref linkend="guc-autovacuum-max-workers"> worker
-    processes that may be running at any time, so if the VACUUM
-    and ANALYZE work to do takes too long to run, the deadline may
-    be failed to meet for other databases.  Also, if a particular database
-    takes a long time to process, more than one worker may be processing it
-    simultaneously.  The workers are smart enough to avoid repeating work that
-    other workers have done, so this is normally not a problem.  Note that the
-    number of running workers does not count towards the <xref
-    linkend="guc-max-connections"> nor the <xref
-    linkend="guc-superuser-reserved-connections"> limits.
+    The <xref linkend="guc-autovacuum-max-workers"> setting limits how many
+    workers may be running at any time.  If several large tables all become
+    eligible for vacuuming in a short amount of time, all autovacuum workers
+    may end up vacuuming those tables for a very long time.  This would result
+    in other tables and databases not being vacuumed until a worker became
+    available.  There is also no limit on how many workers might be in a
+    single database, but workers do try to avoid repeating work that has
+    already been done by other workers.  Note that the number of running
+    workers does not count towards the <xref linkend="guc-max-connections">
+    nor the <xref linkend="guc-superuser-reserved-connections"> limits.
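
For readers applying the setting this patch documents, a minimal postgresql.conf sketch follows. It is illustrative only, not part of the patch; the value 3 is the shipped default for autovacuum_max_workers in the 8.3 series, and in that release the parameter can only be changed with a server restart.

```
# postgresql.conf — illustrative sketch, assuming PostgreSQL 8.3 defaults
autovacuum = on               # enable the autovacuum launcher
autovacuum_max_workers = 3    # max concurrent worker processes (8.3 default: 3;
                              # takes effect only at server start)
```

Raising autovacuum_max_workers lets more databases and tables be vacuumed concurrently, at the cost of additional worker processes competing for I/O and the vacuum cost-limit budget.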