Robert Haas [Tue, 16 Jul 2013 17:02:15 +0000 (13:02 -0400)]
Allow background workers to be started dynamically.
There is a new API, RegisterDynamicBackgroundWorker, which allows
an ordinary user backend to register a new background worker during
normal running. This means that it's no longer necessary for all
background workers to be registered during processing of
shared_preload_libraries, although the option of registering workers
at that time remains available.
When a background worker exits and will not be restarted, the
slot previously used by that background worker is automatically
released and becomes available for reuse. Slots used by background
workers that are configured for automatic restart can't (yet) be
released without shutting down the system.
This commit adds a new source file, bgworker.c, and moves some
of the existing control logic for background workers there.
Previously, there was little enough logic that it made sense to
keep everything in postmaster.c, but not any more.
This commit also makes the worker_spi contrib module into an
extension and adds a new function, worker_spi_launch, which can
be used to demonstrate the new facility.
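A hedged sketch of how the new API can be called (field names follow
the bgworker interface as of this commit; my_worker_main is a
hypothetical entry-point function, and the exact signature may differ
in later releases):

    #include "postmaster/bgworker.h"

    static bool
    launch_worker(void)
    {
        BackgroundWorker worker;

        MemSet(&worker, 0, sizeof(worker));
        worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
        worker.bgw_start_time = BgWorkerStart_ConsistentState;
        worker.bgw_restart_time = BGW_NEVER_RESTART;  /* slot freed on exit */
        worker.bgw_main = my_worker_main;             /* hypothetical */
        snprintf(worker.bgw_name, BGW_MAXLEN, "demo dynamic worker");

        return RegisterDynamicBackgroundWorker(&worker);
    }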
Stephen Frost [Mon, 15 Jul 2013 18:53:17 +0000 (14:53 -0400)]
Check get_tle_by_resno() result before deref
When creating a sort to support a group by, we need to look up the
target entry in the target list by the resno using get_tle_by_resno().
This particular code-path didn't check the result prior to attempting
to dereference it, while all other callers did. While I can't see a
way for this usage of get_tle_by_resno() to fail (you can't ask for
a column to be sorted on which isn't included in the group by), it's
better to check for a NULL result anyway than to risk a segfault.
I'm willing to back-patch this if others feel it's necessary, but my
guess is new features are what might tickle this rather than anything
existing.
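A minimal sketch of the added check (the tlist and resno variable
names are hypothetical):

    TargetEntry *tle = get_tle_by_resno(tlist, resno);

    if (tle == NULL)        /* defensive; should not happen, per above */
        elog(ERROR, "no targetlist entry at resno %d", resno);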
Stephen Frost [Mon, 15 Jul 2013 14:42:27 +0000 (10:42 -0400)]
Correct off-by-one when reading from pipe
In pg_basebackup.c:reached_end_position(), we're reading from an
internal pipe with our own background process but we're possibly
reading more bytes than will actually fit into our buffer due to
an off-by-one error. As we're reading from an internal pipe
there's no real risk here, but it's good form to not depend on
such convenient arrangements.
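A sketch of the corrected read (names loosely follow pg_basebackup.c;
this illustrates the fix rather than reproducing the exact patch):

    char    xlogend[64];
    int     r;

    /* read at most sizeof(xlogend) - 1 bytes, leaving room for the NUL */
    r = read(bgpipe[0], xlogend, sizeof(xlogend) - 1);
    if (r < 0)
    {
        fprintf(stderr, "could not read from pipe: %s\n", strerror(errno));
        exit(1);
    }
    xlogend[r] = '\0';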
Stephen Frost [Sun, 14 Jul 2013 21:44:29 +0000 (17:44 -0400)]
Fix resource leak in initdb -X option
When creating the symlink for the xlog directory, free the string
which stores the link location. Not really an issue, but it doesn't
hurt to be tidy about this; prior cleanups have fixed similar
issues.
Leak found by the Coverity scanner.
Not back-patching as I don't see it being worth the code churn.
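A minimal sketch of the pattern (the path construction here is
illustrative, not the exact initdb code):

    char *linkloc = pg_malloc(strlen(pg_data) + 8 + 1);

    sprintf(linkloc, "%s/pg_xlog", pg_data);
    if (symlink(xlog_dir, linkloc) != 0)
    {
        fprintf(stderr, "could not create symbolic link \"%s\"\n", linkloc);
        exit_nicely();
    }
    free(linkloc);      /* the fix: don't leak the link path */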
Stephen Frost [Sun, 14 Jul 2013 21:30:43 +0000 (17:30 -0400)]
Be sure to close() file descriptor on error case
In receivelog.c:writeTimeLineHistoryFile(), we were not properly
closing the opened file descriptor in error cases. While this
wouldn't matter much if we were about to exit due to such an
error, that's not the case with pg_receivexlog as it can be a
long-running process and these errors are non-fatal.
This resource leak was found by the Coverity scanner.
Back-patch to 9.3 where this issue first appeared.
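The shape of the fix, sketched with error handling abbreviated:

    fd = open(path, O_WRONLY | O_CREAT | O_EXCL, S_IRUSR | S_IWUSR);
    if (fd < 0)
        return false;

    if (write(fd, content, size) != size)
    {
        fprintf(stderr, "could not write timeline history file\n");
        close(fd);          /* the fix: don't leak the descriptor */
        return false;
    }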
Stephen Frost [Sun, 14 Jul 2013 20:26:16 +0000 (16:26 -0400)]
Ensure 64bit arithmetic when calculating tapeSpace
In tuplesort.c:inittapes(), we calculate tapeSpace by first figuring
out how many 'tapes' we can use (maxTapes) and then multiplying the
result by the tape buffer overhead for each. Unfortunately, when
we are on a system with an 8-byte long, we allow work_mem to be
larger than 2GB and that allows maxTapes to be large enough that the
32bit arithmetic can overflow when multiplied against the buffer
overhead.
When this overflow happens, we end up adding the overflow to the
amount of space available, causing the amount of memory allocated to
be larger than work_mem.
Note that to reach this point, you have to set work_mem to at least
24GB and be sorting a set which is at least that size. Given that a
user who can set work_mem to 24GB could also set it even higher, if
they were looking to run the system out of memory, this isn't
considered a security issue.
This overflow risk was found by the Coverity scanner.
Back-patch to all supported branches, as this issue has existed
since before 8.4.
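A sketch of the fix: cast one operand so the multiplication happens
in 64-bit arithmetic (names follow tuplesort.c):

    /* Before, int arithmetic could overflow once maxTapes grew large
     * enough; widening one operand forces a 64-bit multiplication. */
    int64   tapeSpace;

    tapeSpace = (int64) maxTapes * TAPE_BUFFER_OVERHEAD;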
Stephen Frost [Sun, 14 Jul 2013 19:31:23 +0000 (15:31 -0400)]
pg_receivexlog - Exit on failure to parse
In streamutil.c:GetConnection(), upgrade failure to parse the
connection string to an exit(1) instead of simply returning NULL.
Most callers already immediately exited, but pg_receivexlog would
loop on this case, continually trying to re-parse the connection
string (which can't be changed after pg_receivexlog has started).
GetConnection() was already expected to exit(1) in some cases
(eg: failure to allocate memory or if unable to determine the
integer_datetimes flag), so this change shouldn't surprise anyone.
Began looking at this due to the Coverity scanner complaining that
we were leaking err_msg in this case; that's no longer an issue since
we just exit(1) immediately.
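A hedged sketch of the change, using the libpq call involved
(connection_string and progname stand in for the real variables):

    PQconninfoOption *conn_opts;
    char             *err_msg = NULL;

    conn_opts = PQconninfoParse(connection_string, &err_msg);
    if (conn_opts == NULL)
    {
        fprintf(stderr, "%s: %s", progname, err_msg);
        exit(1);    /* looping is pointless: the string cannot change */
    }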
Stephen Frost [Sun, 14 Jul 2013 18:35:26 +0000 (14:35 -0400)]
During parallel pg_dump, free commands from master
The command strings read by the child processes during parallel
pg_dump, after being read and handled, were not being free'd.
This patch corrects this relatively minor memory leak.
Leak found by the Coverity scanner.
Back patch to 9.3 where parallel pg_dump was introduced.
This is like shared_preload_libraries except that it takes effect at
backend start and can be changed without a full postmaster restart. It
is like local_preload_libraries except that it is still only settable by
a superuser. This can be a better way to load modules such as
auto_explain.
Since there are now three preload parameters, regroup the documentation
a bit. Put all parameters into one section, explain common
functionality only once, update the descriptions to reflect current and
future realities.
Switch user ID to the object owner when populating a materialized view.
This makes superuser-issued REFRESH MATERIALIZED VIEW safe regardless of
the object's provenance. REINDEX is an earlier example of this pattern.
As a downside, functions called from materialized views must tolerate
running in a security-restricted operation. CREATE MATERIALIZED VIEW
need not change user ID. Nonetheless, avoid creation of materialized
views that will invariably fail REFRESH by making it, too, start a
security-restricted operation.
Back-patch to 9.3 so materialized views have this from the beginning.
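A simplified sketch of the pattern, using the miscadmin.h primitives
that REINDEX already employs (matview_owner is a stand-in for the
object's owner):

    Oid     save_userid;
    int     save_sec_context;

    GetUserIdAndSecContext(&save_userid, &save_sec_context);
    SetUserIdAndSecContext(matview_owner,
                           save_sec_context | SECURITY_RESTRICTED_OPERATION);

    /* ... run the view's query and populate the storage ... */

    SetUserIdAndSecContext(save_userid, save_sec_context);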
Bruce Momjian [Thu, 11 Jul 2013 13:43:22 +0000 (09:43 -0400)]
pg_upgrade: document possible pg_hba.conf options
Previously, pg_upgrade docs recommended using .pgpass if using MD5
authentication to avoid being prompted for a password. Turns out pg_ctl
never prompts for a password, so MD5 requires .pgpass --- document that.
Also recommend 'peer' authentication.
Backpatch back to 9.1.
path_encode's "closed" argument used to take three values: TRUE, FALSE,
or -1, while being of type bool. Replace that with a three-valued enum
for more clarity.
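A sketch of the replacement (the enum and value names here are
illustrative, not necessarily the committed ones):

    typedef enum
    {
        PATH_NONE,      /* caller doesn't know or care */
        PATH_OPEN,      /* assume path is open */
        PATH_CLOSED     /* assume path is closed */
    } PathMode;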
Was broken by my xloginsert scaling patch. XLogCtl global variable needs
to be initialized in each process, as it's not inherited by fork() on
Windows.
This patch replaces WALInsertLock with a number of WAL insertion slots,
allowing multiple backends to insert WAL records to the WAL buffers
concurrently. This is particularly useful for loading large amounts
of data in parallel on a system with many CPUs.
This has one user-visible change: switching to a new WAL segment with
pg_switch_xlog() now fills the remaining unused portion of the segment with
zeros. This potentially adds some overhead, but it has been a very common
practice among DBAs to clear the "tail" of the segment with an external
pg_clearxlogtail utility anyway, to make the WAL files compress better.
With this patch, it's no longer necessary to do that.
This patch adds a new GUC, xloginsert_slots, to tune the number of WAL
insertion slots. Performance testing suggests that the default, 8, works
pretty well for all kinds of workloads, but I left the GUC in place to allow
others with different hardware to test that easily. We might want to remove
that before release.
Tom Lane [Mon, 8 Jul 2013 02:37:24 +0000 (22:37 -0400)]
Fix planning of parameterized appendrel paths with expensive join quals.
The code in set_append_rel_pathlist() for building parameterized paths
for append relations (inheritance and UNION ALL combinations) supposed
that the cheapest regular path for a child relation would still be cheapest
when reparameterized. Which might not be the case, particularly if the
added join conditions are expensive to compute, as in a recent example from
Jeff Janes. Fix it to compare child path costs *after* reparameterizing.
We can short-circuit that if the cheapest pre-existing path is already
parameterized correctly, which seems likely to be true often enough to be
worth checking for.
Back-patch to 9.2 where parameterized paths were introduced.
Jeff Davis [Sat, 6 Jul 2013 20:46:04 +0000 (13:46 -0700)]
Handle posix_fallocate() errors.
On some platforms, posix_fallocate() is available but may still return
EINVAL if the underlying filesystem does not support it. So, in case
of an error, fall through to the alternate implementation that just
writes zeros.
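A simplified sketch of the fallback (note that posix_fallocate
reports errors via its return value rather than errno):

    char    zbuffer[XLOG_BLCKSZ];
    int     nbytes;

    if (posix_fallocate(fd, 0, XLogSegSize) != 0)
    {
        /* e.g. EINVAL when the filesystem lacks support: fall back
         * to writing zeros, as before */
        memset(zbuffer, 0, sizeof(zbuffer));
        for (nbytes = 0; nbytes < XLogSegSize; nbytes += sizeof(zbuffer))
        {
            if (write(fd, zbuffer, sizeof(zbuffer)) != sizeof(zbuffer))
                break;              /* error handling elided */
        }
    }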
Tom Lane [Sat, 6 Jul 2013 15:16:50 +0000 (11:16 -0400)]
Rename a function to avoid naming conflict in parallel regression tests.
Commit 31a891857a128828d47d93c63e041f3b69cbab70 added some tests in
plpgsql.sql that used a function rather unthinkingly named "foo()".
However, rangefuncs.sql has some much older tests that create a function
of that name, and since these test scripts run in parallel, there is a
chance of failures if the timing is just right. Use another name to
avoid that. Per buildfarm (failure seen today on "hamerkop", but
probably it's happened before and not been noticed).
The old implementation converted PostgreSQL numeric to Python float,
which was always considered a shortcoming. Now numeric is converted to
the Python Decimal object. Either the external cdecimal module or the
standard library decimal module is supported.
Jeff Davis [Fri, 5 Jul 2013 19:30:29 +0000 (12:30 -0700)]
Use posix_fallocate() for new WAL files, where available.
This function is more efficient than actually writing out zeroes to
the new file, per microbenchmarks by Jon Nelson. Also, it may reduce
the likelihood of WAL file fragmentation.
Jon Nelson, with review by Andres Freund, Greg Smith and me.
Use type "int64" for memory accounting in tuplesort.c/tuplestore.c.
Commit 263865a48973767ce8ed7b7788059a38a24a9f37 switched tuplesort.c and
tuplestore.c variables representing memory usage from type "long" to
type "Size". This was unnecessary; I thought doing so avoided overflow
scenarios on 64-bit Windows, but guc.c already limited work_mem so as to
prevent the overflow. It was also incomplete, not touching the logic
that assumed a signed data type. Change the affected variables to
"int64". This is perfect for 64-bit platforms, and it reduces the need
to contemplate platform-specific overflow scenarios. It also puts us
close to being able to support work_mem over 2 GiB on 64-bit Windows.
Bruce Momjian [Thu, 4 Jul 2013 17:09:52 +0000 (13:09 -0400)]
Add C comment about \copy bug in CSV mode
Comment: This code erroneously assumes '\.' on a line alone inside a
quoted CSV string terminates the \copy.
http://www.postgresql.org/message-id/E1TdNVQ-0001ju-GO@wrigleys.postgresql.org
Robert Haas [Thu, 4 Jul 2013 15:24:24 +0000 (11:24 -0400)]
Add new GUC, max_worker_processes, limiting number of bgworkers.
In 9.3, there's no particular limit on the number of bgworkers;
instead, we just count up the number that are actually registered,
and use that to set MaxBackends. However, that approach causes
problems for Hot Standby, which needs both MaxBackends and the
size of the lock table to be the same on the standby as on the
master, yet it may not be desirable to run the same bgworkers in
both places. 9.3 handles that by failing to notice the problem,
which will probably work fine in nearly all cases anyway, but is
not theoretically sound.
A further problem with simply counting the number of registered
workers is that new workers can't be registered without a
postmaster restart. This is inconvenient for administrators,
since bouncing the postmaster causes an interruption of service.
Moreover, there are a number of applications for background
processes where, by necessity, the background process must be
started on the fly (e.g. parallel query). While this patch
doesn't actually make it possible to register new background
workers after startup time, it's a necessary prerequisite.
Treat the TOAST index just the same as a normal one, and get the OID
of the TOAST index from pg_index rather than pg_class.reltoastidxid.
This change allows us to handle multiple TOAST indexes, which is
required infrastructure for the upcoming REINDEX CONCURRENTLY feature.
Patch by Michael Paquier, reviewed by Andres Freund and me.
Robert Haas [Wed, 3 Jul 2013 16:24:26 +0000 (12:24 -0400)]
Hopefully-portable regression tests for CREATE/ALTER/DROP COLLATION.
The collate.linux.utf8 test covers some of the same territory, but
isn't portable and so probably does not get run often, or on
non-Linux platforms. If this approach turns out to be sufficiently
portable, we may want to look at trimming the redundant tests out
of that file to avoid duplication.
Robins Tharakan, reviewed by Michael Paquier and Fabien Coelho,
with further changes and cleanup by me.
Tom Lane [Wed, 3 Jul 2013 16:26:19 +0000 (12:26 -0400)]
Fix handling of auto-updatable views on inherited tables.
An INSERT into such a view should work just like an INSERT into its base
table, ie the insertion should go directly into that table ... not be
duplicated into each child table, as was happening before, per bug #8275
from Rushabh Lathia. On the other hand, the current behavior for
UPDATE/DELETE seems reasonable: the update/delete traverses the child
tables, or not, depending on whether the view specifies ONLY or not.
Add some regression tests covering this area.
In patch 82233ce7ea42, AbortStartTime wasn't being reset appropriately
after the restart sequence, causing subsequent iterations through
ServerLoop to malfunction.
Specifically, permit attaching them to the error in RAISE and retrieving
them from a caught error in GET STACKED DIAGNOSTICS. RAISE enforces
nothing about the content of the fields; for its purposes, they are just
additional string fields. Consequently, clarify in the protocol and
libpq documentation that the usual relationships between error fields,
like a schema name appearing wherever a table name appears, are not
universal. This freedom has other applications; consider an FDW
propagating an error from an RDBMS having no schema support.
Back-patch to 9.3, where core support for the error fields was
introduced. This prevents the confusion of having a release where libpq
exposes the fields and PL/pgSQL does not.
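On the libpq side, the fields are read with PQresultErrorField; a
minimal sketch, where res is a PGresult from a failed command:

    /* Any of these may be NULL: as noted above, the usual
     * relationships between error fields are not guaranteed. */
    const char *schema_name = PQresultErrorField(res, PG_DIAG_SCHEMA_NAME);
    const char *table_name  = PQresultErrorField(res, PG_DIAG_TABLE_NAME);
    const char *constraint  = PQresultErrorField(res, PG_DIAG_CONSTRAINT_NAME);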
Robert Haas [Tue, 2 Jul 2013 17:35:14 +0000 (13:35 -0400)]
Add support for multiple kinds of external toast datums.
To that end, support tags rather than lengths for external datums.
As an example of how this can be used, add support for "indirect"
tuples which point to some externally allocated memory containing
a toast tuple. Similar infrastructure could be used for other
purposes, including, perhaps, support for alternative compression
algorithms.
Andres Freund, reviewed by Hitoshi Harada and myself
Silence compiler warning in assertion-enabled builds.
With -Wtype-limits, gcc correctly points out that size_t can never be < 0.
Backpatch to 9.3 and 9.2. It's been like this forever, but in <= 9.1 you
got a lot of other warnings with -Wtype-limits anyway (at least with my
version of gcc).
Bruce Momjian [Tue, 2 Jul 2013 14:29:27 +0000 (10:29 -0400)]
pg_upgrade: revert changing '' to ""
On the command line, GUC option strings are handled by the guc parser,
not by the shell parser, so '' is the proper way to represent a
zero-length string. This reverts commit 3132a9b7ab3d76c15f88cfa29792fd888e7a959e.
Robert Haas [Tue, 2 Jul 2013 13:47:01 +0000 (09:47 -0400)]
Use an MVCC snapshot, rather than SnapshotNow, for catalog scans.
SnapshotNow scans have the undesirable property that, in the face of
concurrent updates, the scan can fail to see either the old or the new
versions of the row. In many cases, we work around this by requiring
DDL operations to hold AccessExclusiveLock on the object being
modified; in some cases, the existing locking is inadequate and random
failures occur as a result. This commit doesn't change anything
related to locking, but will hopefully pave the way to allowing lock
strength reductions in the future.
The major issue that has held us back from making this change in the past
is that taking an MVCC snapshot is significantly more expensive than
using a static special snapshot such as SnapshotNow. However, testing
of various worst-case scenarios reveals that this problem is not
severe except under fairly extreme workloads. To mitigate those
problems, we avoid retaking the MVCC snapshot for each new scan;
instead, we take a new snapshot only when invalidation messages have
been processed. The catcache machinery already requires that
invalidation messages be sent before releasing the related heavyweight
lock; else other backends might rely on locally-cached data rather
than scanning the catalog at all. Thus, making snapshot reuse
dependent on the same guarantees shouldn't break anything that wasn't
already subtly broken.
Patch by me. Review by Michael Paquier and Andres Freund.
The dependencies on the spi and dummy_seclabel contrib modules were
incomplete, because they did not pick up automatically generated
dependencies on header files. This will manifest itself especially when
switching major versions, where the contrib modules would not be
recompiled to contain the new version number, leading to regression test
failures.
To fix this, use the submake approach already in use elsewhere, so that
the contrib modules are built using their full rules.
Bruce Momjian [Mon, 1 Jul 2013 18:52:56 +0000 (14:52 -0400)]
pg_dump docs: use escaped double-quotes, for Windows
On Unix, you can embed double-quotes in single-quotes, and vice versa.
However, on Windows, you can only escape double-quotes in double-quotes,
so use that in the pg_dump -t/table example.
Backpatch to 9.3.
Report from Mike Toews
Bruce Momjian [Mon, 1 Jul 2013 18:45:45 +0000 (14:45 -0400)]
pg_upgrade: use "" rather than '', for Windows
If we ever support unix sockets on Windows, we should use "" rather than
'' for zero-length strings on the command-line, so use that.
Bruce Momjian [Mon, 1 Jul 2013 17:40:18 +0000 (13:40 -0400)]
Add timezone offset output option to to_char()
Add ability for to_char() to output the timezone's UTC offset (OF). We
already have the ability to return the timezone abbreviation (TZ/tz).
Per request from Andrew Dunstan
Andrew Dunstan [Mon, 1 Jul 2013 16:53:05 +0000 (12:53 -0400)]
Improve support for building PGXS modules with VPATH.
A VPATH build will be performed when the module's makefile path is not
the current directory or when USE_VPATH is set.
This will assist packagers and others who prefer to build without
polluting the source directories.
There is still a bit of work to do here, notably documentation, but it's
probably a good idea to commit what we have so far and let people test
it out on their modules.
Bruce Momjian [Mon, 1 Jul 2013 16:40:02 +0000 (12:40 -0400)]
Remove undocumented -h (help) option
The -h option was not supported by many tools and was not documented,
so remove it from pg_upgrade, pg_test_fsync, and pg_test_timing for
consistency.
The pglz compressor has a significant startup cost, because it has to
initialize to zeros the history-tracking hash table. On a 64-bit system, the
hash table was 64kB in size. While clearing memory is pretty fast, for very
short inputs the relative cost of that was quite large.
This patch alleviates that in two ways. First, instead of storing pointers
in the hash table, store 16-bit indexes into the hist_entries array. That
slashes the size of the hash table to 1/2 or 1/4 of the original, depending
on the pointer width. Secondly, adjust the size of the hash table based on
input size. For very small inputs, you don't need a large hash table to
avoid collisions.
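A sketch of the first change (structure and names are illustrative):

    /* before (64-bit build): 8-byte pointers, all zeroed at startup
     *     PGLZ_HistEntry *hist_start[PGLZ_HISTORY_LISTS];
     * after: 16-bit indexes into hist_entries[], 0 meaning "empty",
     * so only a quarter as much memory must be cleared
     */
    int16   hist_start[PGLZ_HISTORY_LISTS];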
We don't normally bother retrying when the number of bytes written by
write() is short of what was requested. It is generally assumed that a
write() to disk doesn't return short, unless you run out of disk space.
While writing the WAL, however, it seems prudent to try a bit harder,
because a failure leads to PANIC. The write() is also much larger than most
write()s in the backend (up to wal_buffers), so there's more room for
surprises.
Also retry on EINTR. All signals used in the backend are flagged SA_RESTART
nowadays, so it shouldn't happen, but better to be defensive.
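A minimal sketch of the retry loop (simplified; error reporting
abbreviated):

    char   *from = buf;
    Size    nleft = nbytes;

    do
    {
        ssize_t written;

        errno = 0;
        written = write(fd, from, nleft);
        if (written <= 0)
        {
            if (errno == EINTR)
                continue;       /* retry after an interrupting signal */
            ereport(PANIC, (errmsg("could not write to WAL file")));
        }
        from += written;        /* short write: advance and try again */
        nleft -= written;
    } while (nleft > 0);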
Bruce Momjian [Fri, 28 Jun 2013 22:01:46 +0000 (18:01 -0400)]
pg_upgrade: trim down --help and doc option descriptions
Previous code had old/new prefixes on option values, e.g.
--old-datadir=OLDDATADIR. Remove them, for simplicity; now:
--old-datadir=DATADIR. Also update docs to do the same.
Alvaro Herrera [Fri, 28 Jun 2013 21:20:53 +0000 (17:20 -0400)]
Send SIGKILL to children if they don't die quickly in immediate shutdown
On immediate shutdown, or during a restart-after-crash sequence,
postmaster used to send SIGQUIT (and then, if shutting down, abandon ship); but
this is not a good strategy if backends don't die because of that
signal. (This might happen, for example, if a backend gets tangled
trying to malloc() due to gettext(), as in an example illustrated by
MauMau.) This causes problems when later trying to restart the server,
because some processes are still attached to the shared memory segment.
Instead of just abandoning such backends to their fates, we now have
postmaster hang around for a little while longer, send a SIGKILL after
some reasonable waiting period, and then exit. This makes immediate
shutdown more reliable.
There is disagreement on whether it's best for postmaster to exit after
sending SIGKILL, or to stick around until all children have reported
death. If this controversy is resolved differently than what this patch
implements, it's an easy change to make.
Bruce Momjian [Fri, 28 Jun 2013 21:27:02 +0000 (17:27 -0400)]
pg_upgrade: change -u to -U, for consistency
Change -u (user) option to -U, for consistency with other tools like
pg_dump and psql. Also expand --user to --username, again for
consistency.
BACKWARD INCOMPATIBILITY
Robert Haas [Fri, 28 Jun 2013 14:18:00 +0000 (10:18 -0400)]
Make the OVER keyword unreserved.
This results in a slightly less specific error message when OVER
is used in a context where we don't accept window functions, but
per discussion, it's worth it to get the benefit of not needing
to reserve this keyword any more. This same refactoring will
also let us avoid reserving some other keywords that we expect
to add in upcoming patches (specifically, IGNORE, RESPECT, and
FILTER).
On many platforms the OS will round the sleep time to millisecond
resolution, but there is no reason for us to pre-emptively round the
argument to pg_usleep.
When the delay was measured in milliseconds and started from 1 ms, it
sometimes took many attempts until the logic that increases the delay by
multiplying with a random value between 1 and 2 actually managed to bump it
from 1 ms to 2 ms. That led to a sequence of 1 ms waits until the delay
started to increase. This wasn't really a problem but it looked odd if you
observed the waits. There is no measurable difference in performance, but
it's more readable this way.
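For reference, a sketch of the backoff logic in s_lock.c: with
cur_delay counted in milliseconds and starting at 1, the increment
below rounds to zero whenever the random factor is under 0.5, which
is what produced the runs of 1 ms waits; counting in microseconds
makes the multiplication effective from the first iteration.

    /* increase delay by a random fraction between 1X and 2X */
    cur_delay += (int) (cur_delay *
                        ((double) random() / (double) MAX_RANDOM_VALUE) + 0.5);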