This was broken by my xloginsert scaling patch: the XLogCtl global variable
needs to be initialized in each process, as it's not inherited by fork() on
Windows.
This patch replaces WALInsertLock with a number of WAL insertion slots,
allowing multiple backends to insert WAL records to the WAL buffers
concurrently. This is particularly useful for loading large amounts of data
in parallel on a system with many CPUs.
This has one user-visible change: switching to a new WAL segment with
pg_switch_xlog() now fills the remaining unused portion of the segment with
zeros. This potentially adds some overhead, but it has been a very common
practice by DBAs to clear the "tail" of the segment with an external
pg_clearxlogtail utility anyway, to make the WAL files compress better.
With this patch, it's no longer necessary to do that.
This patch adds a new GUC, xloginsert_slots, to tune the number of WAL
insertion slots. Performance testing suggests that the default, 8, works
pretty well for all kinds of workloads, but I left the GUC in place to allow
others with different hardware to test that easily. We might want to remove
that before release.
Tom Lane [Mon, 8 Jul 2013 02:37:24 +0000 (22:37 -0400)]
Fix planning of parameterized appendrel paths with expensive join quals.
The code in set_append_rel_pathlist() for building parameterized paths
for append relations (inheritance and UNION ALL combinations) supposed
that the cheapest regular path for a child relation would still be cheapest
when reparameterized. Which might not be the case, particularly if the
added join conditions are expensive to compute, as in a recent example from
Jeff Janes. Fix it to compare child path costs *after* reparameterizing.
We can short-circuit that if the cheapest pre-existing path is already
parameterized correctly, which seems likely to be true often enough to be
worth checking for.
Back-patch to 9.2 where parameterized paths were introduced.
Jeff Davis [Sat, 6 Jul 2013 20:46:04 +0000 (13:46 -0700)]
Handle posix_fallocate() errors.
On some platforms, posix_fallocate() is available but may still return
EINVAL if the underlying filesystem does not support it. So, in case
of an error, fall through to the alternate implementation that just
writes zeros.
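
For illustration, a minimal sketch of that fallback pattern (not the actual
xlog.c code; the helper name, buffer size, error handling, and the
HAVE_POSIX_FALLOCATE configure symbol are simplified assumptions here):

    #include <errno.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* Extend fd to 'size' bytes: try posix_fallocate(), else write zeros. */
    static int
    zero_fill_file(int fd, off_t size)
    {
    #ifdef HAVE_POSIX_FALLOCATE
        if (posix_fallocate(fd, 0, size) == 0)
            return 0;
        /* e.g. EINVAL from an unsupported filesystem: fall through and write */
    #endif
        {
            char    zbuf[8192];
            off_t   written = 0;

            memset(zbuf, 0, sizeof(zbuf));
            while (written < size)
            {
                size_t  chunk = sizeof(zbuf);
                ssize_t n;

                if ((off_t) chunk > size - written)
                    chunk = (size_t) (size - written);
                n = write(fd, zbuf, chunk);
                if (n < 0)
                {
                    if (errno == EINTR)
                        continue;
                    return -1;
                }
                written += n;
            }
        }
        return 0;
    }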
Tom Lane [Sat, 6 Jul 2013 15:16:50 +0000 (11:16 -0400)]
Rename a function to avoid naming conflict in parallel regression tests.
Commit 31a891857a128828d47d93c63e041f3b69cbab70 added some tests in
plpgsql.sql that used a function rather unthinkingly named "foo()".
However, rangefuncs.sql has some much older tests that create a function
of that name, and since these test scripts run in parallel, there is a
chance of failures if the timing is just right. Use another name to
avoid that. Per buildfarm (failure seen today on "hamerkop", but
probably it's happened before and not been noticed).
The old implementation converted PostgreSQL numeric to Python float,
which was always considered a shortcoming. Now numeric is converted to
the Python Decimal object, using either the external cdecimal module or the
standard library decimal module.
Jeff Davis [Fri, 5 Jul 2013 19:30:29 +0000 (12:30 -0700)]
Use posix_fallocate() for new WAL files, where available.
This function is more efficient than actually writing out zeroes to
the new file, per microbenchmarks by Jon Nelson. Also, it may reduce
the likelihood of WAL file fragmentation.
Jon Nelson, with review by Andres Freund, Greg Smith and me.
Use type "int64" for memory accounting in tuplesort.c/tuplestore.c.
Commit 263865a48973767ce8ed7b7788059a38a24a9f37 switched tuplesort.c and
tuplestore.c variables representing memory usage from type "long" to
type "Size". This was unnecessary; I thought doing so avoided overflow
scenarios on 64-bit Windows, but guc.c already limited work_mem so as to
prevent the overflow. It was also incomplete, not touching the logic
that assumed a signed data type. Change the affected variables to
"int64". This is perfect for 64-bit platforms, and it reduces the need
to contemplate platform-specific overflow scenarios. It also puts us
close to being able to support work_mem over 2 GiB on 64-bit Windows.
Bruce Momjian [Thu, 4 Jul 2013 17:09:52 +0000 (13:09 -0400)]
Add C comment about \copy bug in CSV mode
Comment: This code erroneously assumes '\.' on a line alone inside a
quoted CSV string terminates the \copy.
http://www.postgresql.org/message-id/E1TdNVQ-0001ju-GO@wrigleys.postgresql.org
Robert Haas [Thu, 4 Jul 2013 15:24:24 +0000 (11:24 -0400)]
Add new GUC, max_worker_processes, limiting number of bgworkers.
In 9.3, there's no particular limit on the number of bgworkers;
instead, we just count up the number that are actually registered,
and use that to set MaxBackends. However, that approach causes
problems for Hot Standby, which needs both MaxBackends and the
size of the lock table to be the same on the standby as on the
master, yet it may not be desirable to run the same bgworkers in
both places. 9.3 handles that by failing to notice the problem,
which will probably work fine in nearly all cases anyway, but is
not theoretically sound.
A further problem with simply counting the number of registered
workers is that new workers can't be registered without a
postmaster restart. This is inconvenient for administrators,
since bouncing the postmaster causes an interruption of service.
Moreover, there are a number of applications for background
processes where, by necessity, the background process must be
started on the fly (e.g. parallel query). While this patch
doesn't actually make it possible to register new background
workers after startup time, it's a necessary prerequisite.
Treat the TOAST index just the same as a normal one, and get the OID
of the TOAST index from pg_index rather than pg_class.reltoastidxid.
This change allows us to handle multiple TOAST indexes, which is
required infrastructure for the upcoming REINDEX CONCURRENTLY feature.
Patch by Michael Paquier, reviewed by Andres Freund and me.
Robert Haas [Wed, 3 Jul 2013 16:24:26 +0000 (12:24 -0400)]
Hopefully-portable regression tests for CREATE/ALTER/DROP COLLATION.
The collate.linux.utf8 test covers some of the same territory, but
isn't portable and so probably does not get run often, or on
non-Linux platforms. If this approach turns out to be sufficiently
portable, we may want to look at trimming the redundant tests out
of that file to avoid duplication.
Robins Tharakan, reviewed by Michael Paquier and Fabien Coelho,
with further changes and cleanup by me.
Tom Lane [Wed, 3 Jul 2013 16:26:19 +0000 (12:26 -0400)]
Fix handling of auto-updatable views on inherited tables.
An INSERT into such a view should work just like an INSERT into its base
table, ie the insertion should go directly into that table ... not be
duplicated into each child table, as was happening before, per bug #8275
from Rushabh Lathia. On the other hand, the current behavior for
UPDATE/DELETE seems reasonable: the update/delete traverses the child
tables, or not, depending on whether the view specifies ONLY or not.
Add some regression tests covering this area.
In patch 82233ce7ea42, AbortStartTime wasn't being reset appropriately
after the restart sequence, causing subsequent iterations through
ServerLoop to malfunction.
Specifically, permit attaching them to the error in RAISE and retrieving
them from a caught error in GET STACKED DIAGNOSTICS. RAISE enforces
nothing about the content of the fields; for its purposes, they are just
additional string fields. Consequently, clarify in the protocol and
libpq documentation that the usual relationships between error fields,
like a schema name appearing wherever a table name appears, are not
universal. This freedom has other applications; consider an FDW
propagating an error from an RDBMS having no schema support.
Back-patch to 9.3, where core support for the error fields was
introduced. This prevents the confusion of having a release where libpq
exposes the fields and PL/pgSQL does not.
Robert Haas [Tue, 2 Jul 2013 17:35:14 +0000 (13:35 -0400)]
Add support for multiple kinds of external toast datums.
To that end, support tags rather than lengths for external datums.
As an example of how this can be used, add support for "indirect"
tuples which point to some externally allocated memory containing
a toast tuple. Similar infrastructure could be used for other
purposes, including, perhaps, support for alternative compression
algorithms.
Andres Freund, reviewed by Hitoshi Harada and myself
Silence compiler warning in assertion-enabled builds.
With -Wtype-limits, gcc correctly points out that size_t can never be < 0.
Backpatch to 9.3 and 9.2. It's been like this forever, but in <= 9.1 you got
a lot of other warnings with -Wtype-limits anyway (at least with my version of
gcc).
Bruce Momjian [Tue, 2 Jul 2013 14:29:27 +0000 (10:29 -0400)]
pg_upgrade: revert changing '' to ""
On the command line, GUC option strings are handled by the guc parser,
not by the shell parser, so '' is the proper way to represent a
zero-length string. This reverts commit 3132a9b7ab3d76c15f88cfa29792fd888e7a959e.
Robert Haas [Tue, 2 Jul 2013 13:47:01 +0000 (09:47 -0400)]
Use an MVCC snapshot, rather than SnapshotNow, for catalog scans.
SnapshotNow scans have the undesirable property that, in the face of
concurrent updates, the scan can fail to see either the old or the new
versions of the row. In many cases, we work around this by requiring
DDL operations to hold AccessExclusiveLock on the object being
modified; in some cases, the existing locking is inadequate and random
failures occur as a result. This commit doesn't change anything
related to locking, but will hopefully pave the way to allowing lock
strength reductions in the future.
The major issue that has held us back from making this change in the past
is that taking an MVCC snapshot is significantly more expensive than
using a static special snapshot such as SnapshotNow. However, testing
of various worst-case scenarios reveals that this problem is not
severe except under fairly extreme workloads. To mitigate those
problems, we avoid retaking the MVCC snapshot for each new scan;
instead, we take a new snapshot only when invalidation messages have
been processed. The catcache machinery already requires that
invalidation messages be sent before releasing the related heavyweight
lock; else other backends might rely on locally-cached data rather
than scanning the catalog at all. Thus, making snapshot reuse
dependent on the same guarantees shouldn't break anything that wasn't
already subtly broken.
Patch by me. Review by Michael Paquier and Andres Freund.
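
As a rough illustration of the reuse rule (all names here except
GetTransactionSnapshot() are hypothetical, not the real snapmgr/catcache API):

    /* Reuse one catalog snapshot until invalidation messages arrive. */
    static Snapshot catalog_snapshot = NULL;
    static uint64   last_inval_count = 0;

    static Snapshot
    get_catalog_snapshot_sketch(void)
    {
        uint64  inval_count = SharedInvalMessagesProcessed();  /* hypothetical */

        if (catalog_snapshot == NULL || inval_count != last_inval_count)
        {
            /* Invalidations were processed since the last scan, so the cached
             * snapshot might miss committed catalog changes; take a new one. */
            catalog_snapshot = GetTransactionSnapshot();
            last_inval_count = inval_count;
        }
        return catalog_snapshot;
    }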
The dependencies on the spi and dummy_seclabel contrib modules were
incomplete, because they did not pick up automatically generated
dependencies on header files. This will manifest itself especially when
switching major versions, where the contrib modules would not be
recompiled to contain the new version number, leading to regression test
failures.
To fix this, use the submake approach already in use elsewhere, so that
the contrib modules are built using their full rules.
Bruce Momjian [Mon, 1 Jul 2013 18:52:56 +0000 (14:52 -0400)]
pg_dump docs: use escaped double-quotes, for Windows
On Unix, you can embed double-quotes in single-quotes, and vice versa.
However, on Windows, you can only escape double-quotes in double-quotes,
so use that in the pg_dump -t/table example.
Backpatch to 9.3.
Report from Mike Toews
Bruce Momjian [Mon, 1 Jul 2013 18:45:45 +0000 (14:45 -0400)]
pg_upgrade: use "" rather than '', for Windows
If we ever support unix sockets on Windows, we should use "" rather than
'' for zero-length strings on the command-line, so use that.
Bruce Momjian [Mon, 1 Jul 2013 17:40:18 +0000 (13:40 -0400)]
Add timezone offset output option to to_char()
Add ability for to_char() to output the timezone's UTC offset (OF). We
already have the ability to return the timezone abbreviation (TZ/tz).
Per request from Andrew Dunstan
Andrew Dunstan [Mon, 1 Jul 2013 16:53:05 +0000 (12:53 -0400)]
Improve support for building PGXS modules with VPATH.
A VPATH build will be performed when the module's make file path is not
the current directory or when USE_VPATH is set.
This will assist packagers and others who prefer to build without
polluting the source directories.
There is still a bit of work to do here, notably documentation, but it's
probably a good idea to commit what we have so far and let people test
it out on their modules.
Bruce Momjian [Mon, 1 Jul 2013 16:40:02 +0000 (12:40 -0400)]
Remove undocumented -h (help) option
The -h option was not supported by many tools, and not documented, so
remove it for consistency from pg_upgrade, pg_test_fsync, and
pg_test_timing.
The pglz compressor has a significant startup cost, because it has to
initialize to zeros the history-tracking hash table. On a 64-bit system, the
hash table was 64kB in size. While clearing memory is pretty fast, for very
short inputs the relative cost of that was quite large.
This patch alleviates that in two ways. First, instead of storing pointers
in the hash table, store 16-bit indexes into the hist_entries array. That
slashes the size of the hash table to 1/2 or 1/4 of the original, depending
on the pointer width. Secondly, adjust the size of the hash table based on
input size. For very small inputs, you don't need a large hash table to
avoid collisions.
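
The effect is roughly this (a sketch; the thresholds below are illustrative,
not necessarily the exact values chosen in pg_lzcompress.c):

    #include <stdint.h>

    typedef uint16_t hist_index;        /* 2 bytes vs. 4- or 8-byte pointers */

    /* Pick a hash table size appropriate for the input length. */
    static int
    choose_hash_size(int input_len)
    {
        if (input_len < 128)
            return 512;
        if (input_len < 1024)
            return 4096;
        return 8192;                    /* full-size table for large inputs */
    }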
We don't normally bother retrying when the number of bytes written by
write() is short of what was requested. It is generally assumed that a
write() to disk doesn't return short, unless you run out of disk space.
While writing the WAL, however, it seems prudent to try a bit harder,
because a failure leads to PANIC. The write() is also much larger than most
write()s in the backend (up to wal_buffers), so there's more room for
surprises.
Also retry on EINTR. All signals used in the backend are flagged SA_RESTART
nowadays, so it shouldn't happen, but better to be defensive.
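
The retry pattern, as a hedged sketch (simplified; the real loop in xlog.c
also tracks its position within the WAL buffers):

    #include <errno.h>
    #include <unistd.h>

    /* Write the whole buffer, retrying on short writes and EINTR. */
    static int
    write_all(int fd, const char *buf, size_t nbytes)
    {
        while (nbytes > 0)
        {
            ssize_t written = write(fd, buf, nbytes);

            if (written < 0)
            {
                if (errno == EINTR)
                    continue;       /* interrupted by a signal: just retry */
                return -1;          /* real error: caller will PANIC */
            }
            buf += written;
            nbytes -= written;      /* short write: loop to write the rest */
        }
        return 0;
    }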
Bruce Momjian [Fri, 28 Jun 2013 22:01:46 +0000 (18:01 -0400)]
pg_upgrade: trim down --help and doc option descriptions
Previous code had old/new prefixes on option values, e.g.
--old-datadir=OLDDATADIR. Remove them, for simplicity; now:
--old-datadir=DATADIR. Also update docs to do the same.
Alvaro Herrera [Fri, 28 Jun 2013 21:20:53 +0000 (17:20 -0400)]
Send SIGKILL to children if they don't die quickly in immediate shutdown
On immediate shutdown, or during a restart-after-crash sequence,
postmaster used to send SIGQUIT (and then, if shutting down, abandon ship); but
this is not a good strategy if backends don't die because of that
signal. (This might happen, for example, if a backend gets tangled
trying to malloc() due to gettext(), as in an example illustrated by
MauMau.) This causes problems when later trying to restart the server,
because some processes are still attached to the shared memory segment.
Instead of just abandoning such backends to their fates, we now have
postmaster hang around for a little while longer, send a SIGKILL after
some reasonable waiting period, and then exit. This makes immediate
shutdown more reliable.
There is disagreement on whether it's best for postmaster to exit after
sending SIGKILL, or to stick around until all children have reported
death. If this controversy is resolved differently than what this patch
implements, it's an easy change to make.
Bruce Momjian [Fri, 28 Jun 2013 21:27:02 +0000 (17:27 -0400)]
pg_upgrade: change -u to -U, for consistency
Change -u (user) option to -U, for consistency with other tools like
pg_dump and psql. Also expand --user to --username, again for
consistency.
BACKWARD INCOMPATIBILITY
Robert Haas [Fri, 28 Jun 2013 14:18:00 +0000 (10:18 -0400)]
Make the OVER keyword unreserved.
This results in a slightly less specific error message when OVER
is used in a context where we don't accept window functions, but
per discussion, it's worth it to get the benefit of not needing
to reserve this keyword any more. This same refactoring will
also let us avoid reserving some other keywords that we expect
to add in upcoming patches (specifically, IGNORE, RESPECT, and
FILTER).
On many platforms the OS will round the sleep time to millisecond
resolution, but there is no reason for us to pre-emptively round the
argument to pg_usleep.
When the delay was measured in milliseconds and started from 1 ms, it
sometimes took many attempts until the logic that increases the delay by
multiplying with a random value between 1 and 2 actually managed to bump it
from 1 ms to 2 ms. That led to a sequence of 1 ms waits until the delay
started to increase. This wasn't really a problem but it looked odd if you
observed the waits. There is no measurable difference in performance, but
it's more readable this way.
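
In microseconds the backoff looks roughly like this (a sketch; the cap and
the use of random() mirror the general pattern, not the exact s_lock.c code):

    #include <stdlib.h>

    #define MAX_DELAY_USEC  1000000     /* cap at 1 s */

    /* Increase the delay by a random factor between 1x and 2x. */
    static int
    next_delay_usec(int cur_delay)
    {
        cur_delay += (int) (cur_delay *
                            ((double) random() / (double) 0x7fffffff) + 0.5);
        if (cur_delay > MAX_DELAY_USEC)
            cur_delay = MAX_DELAY_USEC;
        return cur_delay;
    }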
Noah Misch [Thu, 27 Jun 2013 18:53:57 +0000 (14:53 -0400)]
Permit super-MaxAllocSize allocations with MemoryContextAllocHuge().
The MaxAllocSize guard is convenient for most callers, because it
reduces the need for careful attention to overflow, data type selection,
and the SET_VARSIZE() limit. A handful of callers are happy to navigate
those hazards in exchange for the ability to allocate a larger chunk.
Introduce MemoryContextAllocHuge() and repalloc_huge(). Use this in
tuplesort.c and tuplestore.c, enabling internal sorts of up to INT_MAX
tuples, a factor-of-48 increase. In particular, B-tree index builds can
now benefit from much-larger maintenance_work_mem settings.
Reviewed by Stephen Frost, Simon Riggs and Jeff Janes.
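
In backend code the new entry points are used like this (a minimal hedged
sketch; the helper and the growth policy are made up, but the two functions
and their (context, size) / (pointer, size) signatures come from this commit):

    #include "postgres.h"
    #include "utils/memutils.h"

    /* Grow an array past the old 1 GB MaxAllocSize ceiling. */
    static void *
    grow_huge_array(void *old_array, Size newsize)
    {
        if (old_array == NULL)
            return MemoryContextAllocHuge(CurrentMemoryContext, newsize);
        /* repalloc_huge() preserves the existing contents, like repalloc() */
        return repalloc_huge(old_array, newsize);
    }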
Tom Lane [Thu, 27 Jun 2013 17:54:50 +0000 (13:54 -0400)]
Mark index-constraint comments with correct dependency in pg_dump.
When there's a comment on an index that was created with UNIQUE or PRIMARY
KEY constraint syntax, we need to label the comment as depending on the
constraint not the index, since only the constraint object actually appears
in the dump. This incorrect dependency can lead to parallel pg_restore
trying to restore the comment before the index has been created, per bug
#8257 from Lloyd Albin.
This patch fixes pg_dump to produce the right dependency in dumps made
in the future. Usually we also try to hack pg_restore to work around
bogus dependencies, so that existing (wrong) dumps can still be restored in
parallel mode; but that doesn't seem practical here since there's no easy
way to relate the constraint dump entry to the comment after the fact.
Tom Lane [Thu, 27 Jun 2013 16:36:44 +0000 (12:36 -0400)]
Expect EWOULDBLOCK from a non-blocking connect() call only on Windows.
On Unix-ish platforms, EWOULDBLOCK may be the same as EAGAIN, which is
*not* a success return, at least not on Linux. We need to treat it as a
failure to avoid giving a misleading error message. Per the Single Unix
Spec, only EINPROGRESS and EINTR returns indicate that the connection
attempt is in progress.
On Windows, on the other hand, EWOULDBLOCK (WSAEWOULDBLOCK) is the expected
case. We must accept EINPROGRESS as well because Cygwin will return that,
and it doesn't seem worth distinguishing Cygwin from native Windows here.
It's not very clear whether EINTR can occur on Windows, but let's leave
that part of the logic alone in the absence of concrete trouble reports.
Also, remove the test for errno == 0, effectively reverting commit
da9501bddb42222dc33c031b1db6ce2133bcee7b, which AFAICS was just a thinko;
or at best it might have been a workaround for a platform-specific bug,
which we can hope is gone now thirteen years later. In any case, since
libpq makes no effort to reset errno to zero before calling connect(),
it seems unlikely that that test has ever reliably done anything useful.
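
For reference, the Unix-side classification reduces to something like this
(a simplified sketch, not the fe-connect.c code; on Windows the in-progress
set additionally includes WSAEWOULDBLOCK):

    #include <errno.h>
    #include <sys/socket.h>

    /* Start a non-blocking connect and classify the immediate result. */
    static int
    start_connect(int sock, const struct sockaddr *addr, socklen_t addrlen)
    {
        if (connect(sock, addr, addrlen) == 0)
            return 1;                   /* connected immediately */

        if (errno == EINPROGRESS || errno == EINTR)
            return 0;                   /* in progress: wait for completion */

        /* Anything else, including EAGAIN/EWOULDBLOCK on Unix, is a failure. */
        return -1;
    }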
Tom Lane [Thu, 27 Jun 2013 04:23:37 +0000 (00:23 -0400)]
Tweak wording in sequence-function docs to avoid PDF build failures.
Adjust the wording in the first para of "Sequence Manipulation Functions"
so that neither of the link phrases in it break across line boundaries,
in either A4- or US-page-size PDF output. This fixes a reported build
failure for the 9.3beta2 A4 PDF docs, and future-proofs this particular
para against causing similar problems in future. (Perhaps somebody will
fix this issue in the SGML/TeX documentation tool chain someday, but I'm
not holding my breath.)
Back-patch to all supported branches, since the same problem could rise up
to bite us in future updates if anyone changes anything earlier than this
in func.sgml.
Noah Misch [Thu, 27 Jun 2013 00:00:08 +0000 (20:00 -0400)]
Cooperate with the Valgrind instrumentation framework.
Valgrind "client requests" in aset.c and mcxt.c teach Valgrind and its
Memcheck tool about the PostgreSQL allocator. This makes Valgrind
roughly as sensitive to memory errors involving palloc chunks as it is
to memory errors involving malloc chunks. Further client requests in
PageAddItem() and printtup() verify that all bits being added to a
buffer page or furnished to an output function are predictably-defined.
Those tests catch failures of C-language functions to fully initialize
the bits of a Datum, which in turn stymie optimizations that rely on
_equalConst(). Define the USE_VALGRIND symbol in pg_config_manual.h to
enable these additions. An included "suppression file" silences nominal
errors we don't plan to fix.
Reviewed in earlier versions by Peter Geoghegan and Korry Douglas.
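
The client requests involved look like this (a hedged sketch; the macros are
from valgrind/memcheck.h, but the two helpers are illustrative stand-ins for
the actual aset.c changes):

    #include <stddef.h>
    #include <valgrind/memcheck.h>

    /* Freshly handed-out chunk: addressable, but contents are undefined. */
    static void
    mark_chunk_allocated(void *chunk, size_t size)
    {
        VALGRIND_MAKE_MEM_UNDEFINED(chunk, size);
    }

    /* Chunk returned to the allocator: any further access is an error. */
    static void
    mark_chunk_freed(void *chunk, size_t size)
    {
        VALGRIND_MAKE_MEM_NOACCESS(chunk, size);
    }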
Noah Misch [Wed, 26 Jun 2013 23:56:03 +0000 (19:56 -0400)]
Refactor aset.c and mcxt.c in preparation for Valgrind cooperation.
Move some repeated debugging code into functions and store intermediates
in variables where not presently necessary. No code-generation changes
in a production build, and no functional changes. This simplifies and
focuses the main patch.
Noah Misch [Wed, 26 Jun 2013 23:55:15 +0000 (19:55 -0400)]
Initialize pad bytes in GinFormTuple().
Every other core buffer page consumer initializes the bytes it furnishes
to PageAddItem(). For consistency, do the same here. No back-patch;
regardless, we couldn't count on the fix so long as binary upgrade can
carry forward affected index builds.
Noah Misch [Wed, 26 Jun 2013 15:17:33 +0000 (11:17 -0400)]
Renovate display of non-ASCII messages on Windows.
GNU gettext selects a default encoding for the messages it emits in a
platform-specific manner; it uses the Windows ANSI code page on Windows
and follows LC_CTYPE on other platforms. This is inconvenient for
PostgreSQL server processes, so realize consistent cross-platform
behavior by calling bind_textdomain_codeset() on Windows each time we
permanently change LC_CTYPE. This primarily affects SQL_ASCII databases
and processes like the postmaster that do not attach to a database,
making their behavior consistent with PostgreSQL on non-Windows
platforms. Messages from SQL_ASCII databases use the encoding implied
by the database LC_CTYPE, and messages from non-database processes use
LC_CTYPE from the postmaster system environment. PlatformEncoding
becomes unused, so remove it.
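
The gettext call at the heart of this is roughly (a sketch; the text domain
and the helper are illustrative, and the real code ties this to LC_CTYPE
changes rather than a standalone function):

    #include <libintl.h>

    /* On Windows, make gettext emit messages in the given encoding instead
     * of the Windows ANSI code page. */
    static void
    pin_message_codeset(const char *codeset)    /* e.g. "UTF-8" */
    {
    #ifdef WIN32
        bind_textdomain_codeset("postgres", codeset);
    #endif
    }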
Make write_console() prefer WriteConsoleW() to write() regardless of the
encodings in use. In this situation, write() will invariably mishandle
non-ASCII characters.
elog.c has assumed that messages conform to the database encoding.
While usually true, this does not hold for SQL_ASCII and MULE_INTERNAL.
Introduce MessageEncoding to track the actual encoding of message text.
The present consumers are Windows-specific code for converting messages
to UTF16 for use in system interfaces. This fixes the appearance in
Windows event logs and consoles of translated messages from SQL_ASCII
processes like the postmaster. Note that SQL_ASCII inherently disclaims
a strong notion of encoding, so non-ASCII byte sequences interpolated
into messages by %s may yet yield a nonsensical message. MULE_INTERNAL
has similar problems at present, albeit for a different reason: its lack
of libiconv support or a conversion to UTF8.
Consequently, one need no longer restart Windows with a different
Windows ANSI code page to broadly test backend logging under a given
language. Changing the user's locale ("Format") is enough. Several
accounts can simultaneously run postmasters under different locales, all
correctly logging localized messages to Windows event logs and consoles.
Alvaro Herrera [Tue, 25 Jun 2013 20:36:29 +0000 (16:36 -0400)]
Avoid inconsistent type declaration
Clang 3.3 correctly complains that a variable of type enum
MultiXactStatus cannot hold a value of -1, which makes sense. Change
the declared type of the variable to int instead, and apply casting as
necessary to avoid the warning.
Andrew Dunstan [Tue, 25 Jun 2013 17:46:34 +0000 (13:46 -0400)]
Properly dump dropped foreign table cols in binary-upgrade mode.
In binary upgrade mode, we need to recreate and then drop dropped
columns so that all the columns get the right attribute number. This is
true for foreign tables as well as for native tables. For foreign
tables we have been getting the first part right but not the second,
leading to bogus columns in the upgraded database. Fix this all the way
back to 9.1, where foreign tables were introduced.
Fujii Masao [Tue, 25 Jun 2013 17:14:37 +0000 (02:14 +0900)]
Support clean switchover.
In replication, when we shut down the master, walsender tries to send
all the outstanding WAL records to the standby, and then to exit. This
basically means that all the WAL records are fully synced between
two servers after the clean shutdown of the master. So, after
promoting the standby to new master, we can restart the stopped
master as new standby without the need for a fresh backup from
new master.
But there was one problem so far: though walsender tries to send all
the outstanding WAL records, it doesn't wait for them to be replicated
to the standby. Then, before receiving all the WAL records,
walreceiver can detect the closure of the connection and exit. We cannot
guarantee that there is no missing WAL in the standby after clean
shutdown of the master. In this case, backup from new master is
required when restarting the stopped master as new standby.
This patch fixes this problem. It just changes walsender so that it
waits for all the outstanding WAL records to be replicated to the
standby before closing the replication connection.
Per discussion, this is a fix that needs to be back-patched rather than a
new feature. So, back-patch to 9.1 where enough infrastructure for
this exists.
Tom Lane [Sun, 23 Jun 2013 18:43:10 +0000 (14:43 -0400)]
Add a comment warning against use of pg_usleep() for long sleeps.
Follow-up to commit 873ab97219caabeb2f7b390268a4fe01e2b7518c, in which
I noted that WaitLatch was a better solution in the commit log message,
but neglected to add any documentation in the code.
Peter Eisentraut [Sat, 22 Jun 2013 02:48:06 +0000 (22:48 -0400)]
doc: Fix date in EPUB manifest
If there is no <date> element, the publication date for the EPUB
manifest is taken from the copyright year. But something like
"1996-2013" is not a legal date specification. So the EPUB output
currently fails epubcheck.
Put in a separate <date> element with the current year. Put it in
legal.sgml, because copyright.pl already instructs to update that
manually, so it hopefully won't be missed.
Peter Eisentraut [Fri, 21 Jun 2013 03:03:18 +0000 (23:03 -0400)]
Clarify terminology standalone backend vs. single-user mode
Most of the documentation uses "single-user mode", so use that in the
code as well. Adjust the documentation to match the new error message
wording. Also add a documentation index entry for "single-user mode".
Based-on-patch-by: Jeff Janes <jeff.janes@gmail.com>