Tom Lane [Sun, 8 Mar 2015 17:35:28 +0000 (13:35 -0400)]
Fix documentation for libpq's PQfn().
The SGML docs claimed that 1-byte integers could be sent or received with
the "isint" options, but no such behavior has ever been implemented in
pqGetInt() or pqPutInt(). The in-code documentation header for PQfn() was
even less in tune with reality, and the code itself used parameter names
matching neither the SGML docs nor its libpq-fe.h declaration. Do a bit
of additional wordsmithing on the SGML docs while at it.
Since the business about 1-byte integers is a clear documentation bug,
back-patch to all supported branches.
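For reference, a minimal sketch of a fast-path call matching the corrected
docs (the function OID and helper are hypothetical; with isint = 1, len must
be 2 or 4, since 1-byte integers have never been supported):

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Call a backend function taking one int4 argument and returning
     * int4 via the fast-path interface. */
    int
    call_int4_function(PGconn *conn, Oid func_oid, int arg)
    {
        PQArgBlock argv[1];
        int        result = 0;
        int        result_len = 0;
        PGresult  *res;

        argv[0].isint = 1;        /* pass by value, not by reference */
        argv[0].len = 4;          /* only 2 or 4 are accepted */
        argv[0].u.integer = arg;

        res = PQfn(conn, func_oid, &result, &result_len,
                   1 /* integer result */, argv, 1);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "PQfn failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return result;
    }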
Tom Lane [Fri, 6 Mar 2015 18:27:46 +0000 (13:27 -0500)]
Rethink function argument sorting in pg_dump.
Commit 7b583b20b1c95acb621c71251150beef958bb603 created an unnecessary
dump failure hazard by applying pg_get_function_identity_arguments()
to every function in the database, even those that won't get dumped.
This could result in snapshot-related problems if concurrent sessions are,
for example, creating and dropping temporary functions, as noted by Marko
Tiikkaja in bug #12832. While this is by no means pg_dump's only such
issue with concurrent DDL, it's unfortunate that we added a new failure
mode for cases that used to work, and even more so that the failure was
created for basically cosmetic reasons (ie, to sort overloaded functions
more deterministically).
To fix, revert that patch and instead sort function arguments using
information that pg_dump has available anyway, namely the names of the
argument types. This will produce a slightly different sort ordering for
overloaded functions than the previous coding; but applying strcmp
directly to the output of pg_get_function_identity_arguments really was
a bit odd anyway. The sorting will still be name-based and hence
independent of possibly-installation-specific OID assignments. A small
additional benefit is that sorting now works regardless of server version.
Back-patch to 9.3, where the previous commit appeared.
Alvaro Herrera [Thu, 5 Mar 2015 21:03:16 +0000 (18:03 -0300)]
Fix user mapping object description
We were using "user mapping for user XYZ" as the description for user
mappings, but that's ambiguous because a user can have mappings on multiple
foreign servers; therefore change it to "user mapping for user XYZ on
server UVW" instead.
Object identities for user mappings are also updated in the same way, in
branches 9.3 and above.
The incomplete description string was introduced together with the whole
SQL/MED infrastructure by commit cae565e503 in the 8.4 era, so backpatch all
the way back.
Stephen Frost [Mon, 2 Mar 2015 19:12:28 +0000 (14:12 -0500)]
Fix pg_dump handling of extension config tables
Since 9.1, we've provided extensions with a way to denote
"configuration" tables- tables created by an extension which the user
may modify. By marking these as "configuration" tables, the extension
is asking for the data in these tables to be pg_dump'd (tables which
are not marked in this way are assumed to be entirely handled during
CREATE EXTENSION and are not included at all in a pg_dump).
Unfortunately, pg_dump neglected to consider foreign key relationships
between extension configuration tables and therefore could end up
trying to reload the data in an order which would cause FK violations.
This patch teaches pg_dump about these dependencies, so that the data
is dumped out in an order that respects them. Note that there's no
way to handle circular dependencies, but those have yet to be seen in
the wild.
The release notes for this should include a caution to users that
existing pg_dump-based backups may be invalid due to this issue. The
data is all there, but restoring from it will require extracting the
data for the configuration tables and then loading them in the correct
order by hand.
Discussed initially back in bug #6738, more recently brought up by
Gilles Darold, who provided an initial patch which was further reworked
by Michael Paquier. Further modifications and documentation updates
by me.
Back-patch to 9.1 where we added the concept of extension configuration
tables.
Stephen Frost [Sun, 1 Mar 2015 20:27:06 +0000 (15:27 -0500)]
Fix targetRelation initialization in prepsecurity
In 6f9bd50eabb0a4960e94c83dac8855771c9f340d, we modified
expand_security_quals() to tell expand_security_qual() about when the
current RTE was the targetRelation. Unfortunately, that commit
initialized the targetRelation variable outside of the loop over the
RTEs instead of at the start of each iteration.
This patch moves the variable and the initialization of it into the
loop, where it should have been to begin with.
Noah Misch [Sun, 1 Mar 2015 18:05:23 +0000 (13:05 -0500)]
Unlink static libraries before rebuilding them.
When the library already exists in the build directory, "ar" preserves
members not named on its command line. This mattered when, for example,
a "configure" rerun dropped a file from $(LIBOBJS). libpgport carried
the obsolete member until "make clean". Back-patch to 9.0 (all
supported versions).
Tom Lane [Sat, 28 Feb 2015 17:43:04 +0000 (12:43 -0500)]
Fix planning of star-schema-style queries.
Part of the intent of the parameterized-path mechanism was to handle
star-schema queries efficiently, but some overly-restrictive search
limiting logic added in commit e2fa76d80ba571d4de8992de6386536867250474
prevented such cases from working as desired. Fix that and add a
regression test about it. Per gripe from Marc Cousin.
This is arguably a bug rather than a new feature, so back-patch to 9.2
where parameterized paths were introduced.
Tom Lane [Fri, 27 Feb 2015 23:19:22 +0000 (18:19 -0500)]
Suppress uninitialized-variable warning from less-bright compilers.
The type variable must get set on first iteration of the while loop,
but there are reasonably modern gcc versions that don't realize that.
Initialize it with a dummy value. This undoes a removal of initialization
in commit 654809e770ce270c0bb9de726c5df1ab193d60f0.
Alvaro Herrera [Fri, 27 Feb 2015 21:54:49 +0000 (18:54 -0300)]
Fix a couple of trivial issues in jsonb.c
Typo "aggreagate" appeared three times, and the return value of function
JsonbIteratorNext() was being assigned to an int variable in a bunch of
places.
Andrew Dunstan [Thu, 26 Feb 2015 17:34:43 +0000 (12:34 -0500)]
Render infinite date/timestamps as 'infinity' for json/jsonb
Commit ab14a73a6c raised an error in these cases, and the behaviour was
later copied to jsonb. That is what the XML code (from which this code
was adopted) does, as the XSD types don't accept infinite values.
However, json dates and timestamps are just strings as far as json is
concerned, so there is no reason not to render these values as
'infinity'.
The json portion of this is backpatched to 9.4 where the behaviour was
introduced. The jsonb portion only affects the development branch.
Andres Freund [Thu, 26 Feb 2015 11:50:07 +0000 (12:50 +0100)]
Reconsider when to wait for WAL flushes/syncrep during commit.
Up to now RecordTransactionCommit() waited for WAL to be flushed (if
synchronous_commit != off) and to be synchronously replicated (if
enabled), even if a transaction did not have an xid assigned. The primary
reason for that is that a sequence's nextval() did not assign an xid, but
its WAL records are worthwhile to wait for on commit.
This can be problematic because sometimes read-only transactions do
write WAL, e.g. HOT page prune records. That could then lead to read-only
transactions having to wait during commit, which is not something people
expect in a read-only transaction.
This led to such strange symptoms as backends being seemingly stuck
during connection establishment when all synchronous replicas are
down. That is especially annoying when the stuck connection is the
standby trying to reconnect to allow syncrep again...
This behavior is also involved in a rather complicated <= 9.4 bug where
the transaction started by catchup interrupt processing waited for
syncrep using latches, but didn't get the wakeup because it was already
running inside the same overloaded signal handler. The fix here doesn't
properly solve that issue, it merely papers over the problem. In
9.5 catchup interrupts aren't processed out of signal handlers anymore.
To fix all this, make nextval() acquire a top-level xid, and only wait for
transaction commit if a transaction both acquired an xid and emitted WAL
records. If only an xid has been assigned, we don't want to uselessly
wait just because of writes to temporary/unlogged tables; if only WAL
has been written, we don't want to wait just because of HOT prunes.
The xid assignment in nextval() is unlikely to cause overhead in
real-world workloads. For one, it only happens once every SEQ_LOG_VALS
(32) values anyway; for another, only usage of nextval() without using
the result in an insert or similar is affected.
Noah Misch [Thu, 26 Feb 2015 04:48:28 +0000 (23:48 -0500)]
Free SQLSTATE and SQLERRM no earlier than other PL/pgSQL variables.
"RETURN SQLERRM" prompted plpgsql_exec_function() to read from freed
memory. Back-patch to 9.0 (all supported versions). Little code ran
between the premature free and the read, so non-assert builds are
unlikely to witness user-visible consequences.
Stephen Frost [Thu, 26 Feb 2015 02:36:40 +0000 (21:36 -0500)]
Add locking clause for SB views for update/delete
In expand_security_qual(), we were handling locking correctly when a
PlanRowMark existed, but not when we were working with the target
relation (which doesn't have any PlanRowMarks, but the subquery created
for the security barrier quals still needs to lock the rows under it).
Noted by Etsuro Fujita when working with the Postgres FDW, which wasn't
properly issuing a SELECT ... FOR UPDATE to the remote side under a
DELETE.
Back-patch to 9.4 where updatable security barrier views were
introduced.
Tom Lane [Wed, 25 Feb 2015 17:01:12 +0000 (12:01 -0500)]
Fix dumping of views that are just VALUES(...) but have column aliases.
The "simple" path for printing VALUES clauses doesn't work if we need
to attach nondefault column aliases, because there's noplace to do that
in the minimal VALUES() syntax. So modify get_simple_values_rte() to
detect nondefault aliases and treat that as a non-simple case. This
further exposes that the "non-simple" path never actually worked;
it didn't produce valid syntax. Fix that too. Per bug #12789 from
Curtis McEnroe, and analysis by Andrew Gierth.
Back-patch to all supported branches. Before 9.3, this also requires
back-patching the part of commit 092d7ded29f36b0539046b23b81b9f0bf2d637f1
that created get_simple_values_rte() to begin with; inserting the extra
test into the old factorization of that logic would've been too messy.
Andres Freund [Mon, 23 Feb 2015 15:11:11 +0000 (16:11 +0100)]
Guard against spurious signals in LockBufferForCleanup.
When LockBufferForCleanup() has to wait to get a cleanup lock on a
buffer, it does so by setting a flag in the buffer header and then
waiting for other backends to signal it using ProcWaitForSignal().
Unfortunately LockBufferForCleanup() missed that ProcWaitForSignal() can
return for reasons other than the signal it is hoping for. If such a
spurious signal arrives, the wait flag on the buffer header will still
be set. That then triggers "ERROR: multiple backends attempting to wait
for pincount 1".
The fix is simple: unset the flag if it is still set when retrying. That
implies an additional spinlock acquisition/release, but that's unlikely
to matter given the cost of waiting for a cleanup lock. Alternatively
it would have been possible to move responsibility for maintaining the
relevant flag to the waiter altogether, but that might have had
negative consequences due to possible floods of signals, besides being
more invasive.
This looks to be a very longstanding bug. The relevant code in
LockBufferForCleanup() hasn't changed materially since its introduction
and ProcWaitForSignal() was documented to return for unrelated reasons
since 8.2. The master-only patch series removing ImmediateInterruptOK
made it much easier to hit, though, as ProcSendSignal/ProcWaitForSignal
now use a latch shared with other tasks.
Per discussion with Kevin Grittner, Tom Lane and me.
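Schematically, the fixed wait loop looks like this (a sketch with
abbreviated bufmgr.c names, not the actual code):

    for (;;)
    {
        LockBufHdr(bufHdr);
        if (bufHdr->refcount == 1)
        {
            UnlockBufHdr(bufHdr);
            break;              /* got the cleanup lock */
        }
        bufHdr->flags |= BM_PIN_COUNT_WAITER;   /* advertise the wait */
        UnlockBufHdr(bufHdr);

        ProcWaitForSignal();    /* may return spuriously */

        /* The fix: on any wakeup, clear the flag if it is still set,
         * so a retry doesn't trigger "multiple backends attempting
         * to wait for pincount 1". */
        LockBufHdr(bufHdr);
        bufHdr->flags &= ~BM_PIN_COUNT_WAITER;
        UnlockBufHdr(bufHdr);
    }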
Fix potential deadlock with libpq non-blocking mode.
If libpq's output buffer is full, the pqSendSome() function tries to drain
any incoming data. This avoids a deadlock when the server, for example,
sends a lot of NOTICE messages and blocks until we read them. However,
pqSendSome() only did that in blocking mode. In non-blocking mode, the
deadlock could still happen.
To fix, take a two-pronged approach:
1. Change the documentation to instruct that when PQflush() returns 1, you
should wait for both read- and write-ready, and call PQconsumeInput() if it
becomes read-ready. That fixes the deadlock, but applications are not going
to change overnight.
2. In pqSendSome(), drain the input buffer before returning 1. This
alleviates the problem for applications that only wait for write-ready. In
particular, a slow but steady stream of NOTICE messages during COPY FROM
STDIN will no longer cause a deadlock. The risk remains that the server
attempts to send a large burst of data and fills its output buffer, and at
the same time the client also sends enough data to fill its output buffer.
The application will deadlock if it goes to sleep, waiting for the socket
to become write-ready, before the server's data arrives. In practice,
NOTICE messages and such that the server might be sending are usually
short, so it's highly unlikely that the server would fill its output buffer
so quickly.
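The documented recipe from point 1, as a minimal client-side sketch
(assuming the connection has already been put into non-blocking mode):

    #include <sys/select.h>
    #include <libpq-fe.h>

    static int
    flush_until_done(PGconn *conn)
    {
        int sock = PQsocket(conn);

        for (;;)
        {
            int r = PQflush(conn);

            if (r == 0)
                return 0;       /* all data sent */
            if (r == -1)
                return -1;      /* error; see PQerrorMessage(conn) */

            /* r == 1: wait for read- OR write-ready.  Reading drains
             * NOTICEs etc. that could otherwise deadlock both sides. */
            fd_set rfds, wfds;
            FD_ZERO(&rfds);
            FD_ZERO(&wfds);
            FD_SET(sock, &rfds);
            FD_SET(sock, &wfds);
            if (select(sock + 1, &rfds, &wfds, NULL, NULL) < 0)
                return -1;
            if (FD_ISSET(sock, &rfds) && !PQconsumeInput(conn))
                return -1;
        }
    }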
Tom Lane [Sat, 21 Feb 2015 17:59:25 +0000 (12:59 -0500)]
Fix misparsing of empty value in conninfo_uri_parse_params().
After finding an "=" character, the pointer was advanced twice when it
should only advance once. This is harmless as long as the value after "="
has at least one character; but if it doesn't, we'd miss the terminator
character and include too much in the value.
In principle this could lead to reading off the end of memory. It does not
seem worth treating as a security issue though, because it would happen on
the client side; and besides, client logic that takes conninfo strings from
untrusted sources has much worse security problems than this.
Report and patch received off-list from Thomas Fanghaenel.
Back-patch to 9.2 where the faulty code was introduced.
Tom Lane [Wed, 18 Feb 2015 16:43:00 +0000 (11:43 -0500)]
Fix failure to honor -Z compression level option in pg_dump -Fd.
cfopen() and cfopen_write() failed to pass the compression level through
to zlib, so that you always got the default compression level if you got
any at all.
In passing, also fix these and related functions so that the correct errno
is reliably returned on failure; the original coding supposes that free()
cannot change errno, which is untrue on at least some platforms.
Per bug #12779 from Christoph Berg. Back-patch to 9.1 where the faulty
code was introduced.
Tom Lane [Tue, 17 Feb 2015 17:49:18 +0000 (12:49 -0500)]
Remove code to match IPv4 pg_hba.conf entries to IPv4-in-IPv6 addresses.
In investigating yesterday's crash report from Hugo Osvaldo Barrera, I only
looked back as far as commit f3aec2c7f51904e7 where the breakage occurred
(which is why I thought the IPv4-in-IPv6 business was undocumented). But
actually the logic dates back to commit 3c9bb8886df7d56a and was simply
broken by erroneous refactoring in the later commit. A bit of archives
excavation shows that we added the whole business in response to a report
that some 2003-era Linux kernels would report IPv4 connections as having
IPv4-in-IPv6 addresses. The fact that we've had no complaints since 9.0
seems to be sufficient confirmation that no modern kernels do that, so
let's just rip it all out rather than trying to fix it.
Do this in the back branches too, thus essentially deciding that our
effective behavior since 9.0 is correct. If there are any platforms on
which the kernel reports IPv4-in-IPv6 addresses as such, yesterday's fix
would have made for a subtle and potentially security-sensitive change in
the effective meaning of IPv4 pg_hba.conf entries, which does not seem like
a good thing to do in minor releases. So let's let the post-9.0 behavior
stand, and change the documentation to match it.
In passing, I failed to resist the temptation to wordsmith the description
of pg_hba.conf IPv4 and IPv6 address entries a bit. A lot of this text
hasn't been touched since we were IPv4-only.
Robert Haas [Tue, 17 Feb 2015 15:19:30 +0000 (10:19 -0500)]
Improve pg_check_dir code and comments.
Avoid losing errno if readdir() fails and closedir() works. Consistently
return 4 rather than 3 if both a lost+found directory and other files are
found, rather than returning one value or the other depending on the
order of the directory listing. Update comments to match the actual
behavior.
Tom Lane [Mon, 16 Feb 2015 21:17:48 +0000 (16:17 -0500)]
Fix misuse of memcpy() in check_ip().
The previous coding copied garbage into a local variable, pretty much
ensuring that the intended test of an IPv6 connection address against a
promoted IPv4 address from pg_hba.conf would never match. The lack of
field complaints likely indicates that nobody realized this was supposed
to work, which is unsurprising considering that no user-facing docs suggest
it should work.
In principle this could have led to a SIGSEGV due to reading off the end of
memory, but since the source address would have pointed to somewhere in the
function's stack frame, that's quite unlikely. What led to discovery of
the bug is Hugo Osvaldo Barrera's report of a crash after an OS upgrade,
which is probably because he is now running a system in which memcpy raises
abort() upon detecting overlapping source and destination areas. (You'd
have to additionally suppose some things about the stack frame layout to
arrive at this conclusion, but it seems plausible.)
This has been broken since the code was added, in commit f3aec2c7f51904e7,
so back-patch to all supported branches.
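For illustration, the general C rule at play here (not the check_ip()
code itself): memcpy() is undefined for overlapping regions, and some
modern C libraries abort() when they detect that; memmove() is the
overlap-safe variant:

    #include <string.h>

    void
    overlap_demo(void)
    {
        char buf[16] = "abcdef";

        /* memcpy(buf + 2, buf, 7);     undefined: regions overlap */
        memmove(buf + 2, buf, 7);    /* well-defined: buf = "ababcdef" */
    }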
Tom Lane [Mon, 16 Feb 2015 04:26:45 +0000 (23:26 -0500)]
Fix null-pointer-deref crash while doing COPY IN with check constraints.
In commit bf7ca15875988a88e97302e012d7c4808bef3ea9 I introduced an
assumption that an RTE referenced by a whole-row Var must have a valid eref
field. This is false for RTEs constructed by DoCopy, and there are other
places taking similar shortcuts. Perhaps we should make all those places
go through addRangeTableEntryForRelation or its siblings instead of having
ad-hoc logic, but the most reliable fix seems to be to make the new code in
ExecEvalWholeRowVar cope if there's no eref. We can reasonably assume that
there's no need to insert column aliases if no aliases were provided.
Add a regression test case covering this, and also verifying that a sane
column name is in fact available in this situation.
Although the known case only crashes in 9.4 and HEAD, it seems prudent to
back-patch the code change to 9.2, since all the ingredients for a similar
failure exist in the variant patch applied to 9.3 and 9.2.
Bruce Momjian [Thu, 12 Feb 2015 02:02:07 +0000 (21:02 -0500)]
pg_upgrade: preserve freeze info for postgres/template1 dbs
pg_database.datfrozenxid and pg_database.datminmxid were not preserved
for the 'postgres' and 'template1' databases. This could cause missing
clog file errors on access to user tables and indexes after upgrades in
these databases.
Tom Lane [Thu, 12 Feb 2015 00:09:54 +0000 (19:09 -0500)]
Fix minor memory leak in ident_inet().
We'd leak the ident_serv data structure if the second pg_getaddrinfo_all
(the one for the local address) failed. This is not of great consequence
because a failure return here just leads directly to backend exit(), but
if this function is going to try to clean up after itself at all, it should
not have such holes in the logic. Try to fix it in a future-proof way by
having all the failure exits go through the same cleanup path, rather than
"optimizing" some of them.
Per Coverity. Back-patch to 9.2, which is as far back as this patch
applies cleanly.
Tom Lane [Wed, 11 Feb 2015 23:35:23 +0000 (18:35 -0500)]
Fix more memory leaks in failure path in buildACLCommands.
We already had one go at this issue in commit d73b7f973db5ec7e, but we
failed to notice that buildACLCommands also leaked several PQExpBuffers
along with a simply malloc'd string. This time let's try to make the
fix a bit more future-proof by eliminating the separate exit path.
It's still not exactly critical because pg_dump will curl up and die on
failure; but since the amount of the potential leak is now several KB,
it seems worth back-patching as far as 9.2 where the previous fix landed.
Per Coverity, which evidently is smarter than clang's static analyzer.
Michael Meskes [Wed, 11 Feb 2015 09:45:46 +0000 (10:45 +0100)]
Fixed array handling in ecpg.
When ecpg was rewritten for the new protocol version, not all variable
types were corrected. This patch rewrites the code for those types to fix
that. It also fixes the documentation to correctly describe the status of
array handling.
Tom Lane [Wed, 11 Feb 2015 03:38:17 +0000 (22:38 -0500)]
Fix pg_dump's heuristic for deciding which casts to dump.
Back in 2003 we had a discussion about how to decide which casts to dump.
At the time pg_dump really only considered an object's containing schema
to decide what to dump (ie, dump whatever's not in pg_catalog), and so
we chose a complicated idea involving whether the underlying types were to
be dumped (cf commit a6790ce85752b67ad994f55fdf1a450262ccc32e). But users
are allowed to create casts between built-in types, and we failed to dump
such casts. Let's get rid of that heuristic, which has accreted even more
ugliness since then, in favor of just looking at the cast's OID to decide
if it's a built-in cast or not.
In passing, also fix some really ancient code that supposed that it had to
manufacture a dependency for the cast on its cast function; that's only
true when dumping from a pre-7.3 server. This just resulted in some wasted
cycles and duplicate dependency-list entries with newer servers, but we
might as well improve it.
Per gripes from a number of people, most recently Greg Sabino Mullane.
Back-patch to all supported branches.
Tom Lane [Wed, 11 Feb 2015 01:37:22 +0000 (20:37 -0500)]
Fix GEQO to not assume its join order heuristic always works.
Back in commit 400e2c934457bef4bc3cc9a3e49b6289bd761bc0 I rewrote GEQO's
gimme_tree function to improve its heuristic for modifying the given tour
into a legal join order. In what can only be called a fit of hubris,
I supposed that this new heuristic would *always* find a legal join order,
and ripped out the old logic that allowed gimme_tree to sometimes fail.
The folly of this is exposed by bug #12760, in which the "greedy" clumping
behavior of merge_clump() can lead it into a dead end which could only be
recovered from by un-clumping. We have no code for that and wouldn't know
exactly what to do with it if we did. Rather than try to improve the
heuristic rules still further, let's just recognize that it *is* a
heuristic and probably must always have failure cases. So, put back the
code removed in the previous commit to allow for failure (but comment it
a bit better this time).
It's possible that this code was actually fully correct at the time and
has only been broken by the introduction of LATERAL. But having seen this
example I no longer have much faith in that proposition, so back-patch to
all supported branches.
Tom Lane [Mon, 9 Feb 2015 17:30:55 +0000 (12:30 -0500)]
Minor cleanup/code review for "indirect toast" stuff.
Fix some issues I noticed while fooling with an extension to allow an
additional kind of toast pointer. Much of this is just comment
improvement, but there are a couple of actual bugs, which might or might
not be reachable today depending on what can happen during logical
decoding. An example is that toast_flatten_tuple() failed to cover the
possibility of an indirection pointer in its input. Back-patch to 9.4
just in case that is reachable now.
In HEAD, also correct some really minor issues with recent compression
reorganization, such as dangerously underparenthesized macros.
Report WAL flush, not insert, position in replication IDENTIFY_SYSTEM
When beginning streaming replication, the client usually issues the
IDENTIFY_SYSTEM command, which used to return the current WAL insert
position. That's not suitable for the intended purpose of that field,
however. pg_receivexlog uses it to start replication from the reported
point, but if it hasn't been flushed to disk yet, it will fail. Change
IDENTIFY_SYSTEM to report the flush position instead.
Backpatch to 9.1 and above. 9.0 doesn't report any WAL position.
Fix reference-after-free when waiting for another xact due to constraint.
If an insertion or update had to wait for another transaction to finish,
because there was another insertion with conflicting key in progress,
we would pass a just-free'd item pointer to XactLockTableWait().
All calls to XactLockTableWait() and MultiXactIdWait() had similar issues.
Some passed a pointer to a buffer in the buffer cache, after already
releasing the lock. The call in EvalPlanQualFetch had already released the
pin too. All but the call in execUtils.c would merely lead to reporting a
bogus ctid, however (or an assertion failure, if enabled).
All the callers that passed HeapTuple->t_data->t_ctid were slightly bogus
anyway: if the tuple was updated (again) in the same transaction, its ctid
field would point to the next tuple in the chain, not the tuple itself.
Backpatch to 9.4, where the 'ctid' argument to XactLockTableWait was added
(in commit f88d4cfc).
Tom Lane [Tue, 3 Feb 2015 20:20:48 +0000 (15:20 -0500)]
Fix breakage in GEODEBUG debug code.
LINE doesn't have an "m" field (anymore anyway). Also fix unportable
assumption that %x can print the result of pointer subtraction.
In passing, improve single_decode() in minor ways:
* Remove unnecessary leading-whitespace skip (strtod does that already).
* Make GEODEBUG message more intelligible.
* Remove entirely-useless test to see if strtod returned a silly pointer.
* Don't bother computing trailing-whitespace skip unless caller wants
an ending pointer.
Be more careful to not lose sync in the FE/BE protocol.
If any error occurred while we were in the middle of reading a protocol
message from the client, we could lose sync, and incorrectly try to
interpret a part of another message as a new protocol message. That will
usually lead to an "invalid frontend message" error that terminates the
connection. However, this is a security issue because an attacker might
be able to deliberately cause an error, inject a Query message in what's
supposed to be just user data, and have the server execute it.
We were quite careful to not have CHECK_FOR_INTERRUPTS() calls or other
operations that could ereport(ERROR) in the middle of processing a message,
but a query cancel interrupt or statement timeout could nevertheless cause
it to happen. Also, the V2 fastpath and COPY handling were not so careful.
It's very difficult to recover in the V2 COPY protocol, so we will just
terminate the connection on error. In practice, that's what happened
previously anyway, as we lost protocol sync.
To fix, add a new variable in pqcomm.c, PqCommReadingMsg, that is set
whenever we're in the middle of reading a message. When it's set, we cannot
safely ERROR out and continue running, because we might've read only part
of a message. PqCommReadingMsg acts somewhat similarly to critical sections
in that if an error occurs while it's set, the error handler will force the
connection to be terminated, as if the error was FATAL. It's not
implemented by promoting ERROR to FATAL in elog.c, like ERROR is promoted
to PANIC in critical sections, because we want to be able to use
PG_TRY/CATCH to recover and regain protocol sync. pq_getmessage() takes
advantage of that to prevent an OOM error from terminating the connection.
To prevent unnecessary connection terminations, add a holdoff mechanism
similar to HOLD/RESUME_INTERRUPTS() that can be used to hold off query
cancel interrupts, but still allow die interrupts. The rules on which interrupts
are processed when are now a bit more complicated, so refactor
ProcessInterrupts() and the calls to it in signal handlers so that the
signal handlers always call it if ImmediateInterruptOK is set, and
ProcessInterrupts() can decide to not do anything if the other conditions
are not met.
Reported by Emil Lenngren. Patch reviewed by Noah Misch and Andres Freund.
Backpatch to all supported versions.
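A sketch of the holdoff mechanism described above, modeled on the existing
HOLD/RESUME_INTERRUPTS() macros (the macro and variable names here are
assumptions, not quoted from the patch):

    extern volatile uint32 QueryCancelHoldoffCount;

    #define HOLD_CANCEL_INTERRUPTS()  (QueryCancelHoldoffCount++)

    #define RESUME_CANCEL_INTERRUPTS() \
        do { \
            Assert(QueryCancelHoldoffCount > 0); \
            QueryCancelHoldoffCount--; \
        } while (0)

    /* In ProcessInterrupts(), a pending query cancel is ignored while
     * the holdoff count is nonzero, but die interrupts still apply. */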
Noah Misch [Mon, 2 Feb 2015 15:00:45 +0000 (10:00 -0500)]
Prevent Valgrind Memcheck errors around px_acquire_system_randomness().
This function uses uninitialized stack and heap buffers as supplementary
entropy sources. Mark them so Memcheck will not complain. Back-patch
to 9.4, where Valgrind Memcheck cooperation first appeared.
Noah Misch [Mon, 2 Feb 2015 15:00:45 +0000 (10:00 -0500)]
Cherry-pick security-relevant fixes from upstream imath library.
This covers alterations to buffer sizing and zeroing made between imath
1.3 and imath 1.20. Valgrind Memcheck identified the buffer overruns
and reliance on uninitialized data; their exploit potential is unknown.
Builds specifying --with-openssl are unaffected, because they use the
OpenSSL BIGNUM facility instead of imath. Back-patch to 9.0 (all
supported versions).
Noah Misch [Mon, 2 Feb 2015 15:00:45 +0000 (10:00 -0500)]
Fix buffer overrun after incomplete read in pullf_read_max().
Most callers pass a stack buffer. The ensuing stack smash can crash the
server, and we have not ruled out the viability of attacks that lead to
privilege escalation. Back-patch to 9.0 (all supported versions).
Bruce Momjian [Mon, 2 Feb 2015 15:00:45 +0000 (10:00 -0500)]
port/snprintf(): fix overflow and do padding
Prevent port/snprintf() from overflowing its local fixed-size
buffer and pad to the desired number of digits with zeros, even
if the precision is beyond the ability of the native sprintf().
port/snprintf() is only used on systems that lack a native
snprintf().
Reported by Bruce Momjian. Patch by Tom Lane. Backpatch to all
supported versions.
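A quick illustration of the padding requirement, in standard C (nothing
PostgreSQL-specific assumed):

    #include <stdio.h>

    void
    precision_demo(void)
    {
        char buf[600];

        /* Must yield 500 digits, zero-padded on the left, even if the
         * native sprintf() cannot handle such a precision, and the
         * replacement snprintf() must not overflow while doing it. */
        snprintf(buf, sizeof(buf), "%.500d", 42);
    }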
doc: Improve claim about location of pg_service.conf
The previous wording claimed that the file was always in /etc, but of
course this varies with the installation layout. Write instead that it
can be found via `pg_config --sysconfdir`. Even though this is still
somewhat incorrect because it doesn't account for moved installations, it
at least conveys that the location depends on the installation.
Tom Lane [Sat, 31 Jan 2015 23:35:17 +0000 (18:35 -0500)]
Fix documentation of psql's ECHO all mode.
"ECHO all" is ignored for interactive input, and has been for a very long
time, though possibly not for as long as the documentation has claimed the
opposite. Fix that, and also note that empty lines aren't echoed, which
while dubious is another longstanding behavior (it's embedded in our
regression test files for one thing). Per bug #12721 from Hans Ginzel.
In HEAD, also improve the code comments in this area, and suppress an
unnecessary fflush(stdout) when we're not echoing. That would likely
be safe to back-patch, but I'll not risk it mere hours before a release
wrap.
Tom Lane [Fri, 30 Jan 2015 19:44:49 +0000 (14:44 -0500)]
Fix jsonb Unicode escape processing, and in consequence disallow \u0000.
We've been trying to support \u0000 in JSON values since commit 78ed8e03c67d7333, and have introduced increasingly worse hacks to try to
make it work, such as commit 0ad1a816320a2b53. However, it fundamentally
can't work in the way envisioned, because the stored representation looks
the same as for \\u0000 which is not the same thing at all. It's also
entirely bogus to output \u0000 when de-escaped output is called for.
The right way to do this would be to store an actual 0x00 byte, and then
throw error only if asked to produce de-escaped textual output. However,
getting to that point seems likely to take considerable work and may well
never be practical in the 9.4.x series.
To preserve our options for better behavior while getting rid of the nasty
side-effects of 0ad1a816320a2b53, revert that commit in toto and instead
throw error if \u0000 is used in a context where it needs to be de-escaped.
(These are the same contexts where non-ASCII Unicode escapes throw error
if the database encoding isn't UTF8, so this behavior is by no means
without precedent.)
In passing, make both the \u0000 case and the non-ASCII Unicode case report
ERRCODE_UNTRANSLATABLE_CHARACTER / "unsupported Unicode escape sequence"
rather than claiming there's something wrong with the input syntax.
Back-patch to 9.4, where we have to do something because 0ad1a816320a2b53
broke things for many cases having nothing to do with \u0000. 9.3 also has
bogus behavior, but only for that specific escape value, so given the lack
of field complaints it seems better to leave 9.3 alone.
Tom Lane [Fri, 30 Jan 2015 18:04:59 +0000 (13:04 -0500)]
Fix Coverity warning about contrib/pgcrypto's mdc_finish().
Coverity points out that mdc_finish returns a pointer to a local buffer
(which of course is gone as soon as the function returns), leaving open
a risk of misbehaviors possibly as bad as a stack overwrite.
In reality, the only possible call site is in process_data_packets()
which does not examine the returned pointer at all. So there's no
live bug, but nonetheless the code is confusing and risky. Refactor
to avoid the issue by letting process_data_packets() call mdc_finish()
directly instead of going through the pullf_read() API.
Although this is only cosmetic, it seems good to back-patch so that
the logic in pgp-decrypt.c stays in sync across all branches.
Tom Lane [Fri, 30 Jan 2015 17:30:41 +0000 (12:30 -0500)]
Fix assorted oversights in range selectivity estimation.
calc_rangesel() failed outright when comparing range variables to empty
constant ranges with < or >=, as a result of missing cases in a switch.
It also produced a bogus estimate for > comparison to an empty range.
On top of that, the >= and > cases were mislabeled throughout. For
nonempty constant ranges, they managed to produce the right answers
anyway as a result of counterbalancing typos.
Also, default_range_selectivity() omitted cases for elem <@ range,
range &< range, and range &> range, so that rather dubious defaults
were applied for these operators.
In passing, rearrange the code in rangesel() so that the elem <@ range
case is handled in a less opaque fashion.
Report and patch by Emre Hasegeli, some additional work by me.
The requiredEntries / additionalEntries arrays were not freed in
freeScanKeys() like other per-key stuff.
It's not obvious, but startScanKey() was only ever called after the keys
have been initialized with ginNewScanKey(). That's why it doesn't need to
worry about freeing existing arrays. The ginIsNewKey() test in gingetbitmap
was never true, because ginrescan frees the existing keys, and it's not OK
to call gingetbitmap twice in a row without calling ginrescan in between.
To make that clear, remove the unnecessary ginIsNewKey(). And just to be
extra sure that nothing funny happens if there is an existing key after all,
call freeScanKeys() to free it if it exists. This makes the code more
straightforward.
(I'm seeing other similar leaks in testing a query that rescans a GIN index
scan, but that's a different issue. This just fixes the obvious leak with
those two arrays.)
Kevin Grittner [Fri, 30 Jan 2015 14:57:53 +0000 (08:57 -0600)]
Allow pg_dump to use jobs and serializable transactions together.
Since 9.3, when the --jobs option was introduced, using it together
with the --serializable-deferrable option generated multiple
errors. We can get correct behavior by allowing the connection
which acquires the snapshot to use SERIALIZABLE, READ ONLY,
DEFERRABLE and pass that to the workers running the other
connections using REPEATABLE READ, READ ONLY. This is a bit of a
kluge since the SERIALIZABLE behavior is achieved by running some
of the participating connections at a different isolation level,
but it is a simple and safe change, suitable for back-patching.
This will be followed by a proposal for a more invasive fix with
some slight behavioral changes on just the master branch, based on
suggestions from Andres Freund, but the kluge will be applied to
master until something is agreed along those lines.
Back-patched to 9.3, where the --jobs option was added.
Stephen Frost [Fri, 30 Jan 2015 02:59:43 +0000 (21:59 -0500)]
Fix BuildIndexValueDescription for expressions
In 804b6b6db4dcfc590a468e7be390738f9f7755fb we modified
BuildIndexValueDescription to pay attention to which columns are visible
to the user, but unfortunately that commit neglected to consider indexes
which are built on expressions.
Handle error-reporting of violations of constraint indexes based on
expressions by not returning any detail when the user does not have
table-level SELECT rights.
Tom Lane [Fri, 30 Jan 2015 01:18:37 +0000 (20:18 -0500)]
Handle unexpected query results, especially NULLs, safely in connectby().
connectby() didn't adequately check that the constructed SQL query returns
what it's expected to; in fact, since commit 08c33c426bfebb32 it wasn't
checking that at all. This could result in a null-pointer-dereference
crash if the constructed query returns only one column instead of the
expected two. Less excitingly, it could also result in surprising data
conversion failures if the constructed query returned values that were
not I/O-conversion-compatible with the types specified by the query
calling connectby().
In all branches, insist that the query return at least two columns;
this seems like a minimal sanity check that can't break any reasonable
use-cases.
In HEAD, insist that the constructed query return the types specified by
the outer query, including checking for typmod incompatibility, which the
code never did even before it got broken. This is to hide the fact that
the implementation does a conversion to text and back; someday we might
want to improve that.
In back branches, leave that alone, since adding a type check in a minor
release is more likely to break things than make people happy. Type
inconsistencies will continue to work so long as the actual type and
declared type are I/O representation compatible, and otherwise will fail
the same way they used to.
Also, in all branches, be on guard for NULL results from the constructed
query, which formerly would cause null-pointer dereference crashes.
We now print the row with the NULL but don't recurse down from it.
In passing, get rid of the rather pointless idea that
build_tuplestore_recursively() should return the same tuplestore that's
passed to it.
Andres Freund [Thu, 29 Jan 2015 16:49:03 +0000 (17:49 +0100)]
Properly terminate the array returned by GetLockConflicts().
GetLockConflicts() has for a long time not properly terminated the
returned array. During normal processing the returned array is zero
initialized which, while not pretty, is sufficient to be recognized as
an invalid virtual transaction id. But the HotStandby case is more than
aesthetically broken: the allocated (and reused) array is neither
zeroed upon allocation, nor reinitialized, nor terminated.
Not having a terminating element means that the end of the array will
not be recognized and that recovery conflict handling will thus read
ahead into adjacent memory, only terminating when it hits memory
content that looks like an invalid virtual transaction id. Luckily
this seems so far not to have caused significant problems, besides
making recovery conflict handling more expensive.
Fix bug where GIN scan keys were not initialized with gin_fuzzy_search_limit.
When gin_fuzzy_search_limit was used, we could jump out of startScan()
without calling startScanKey(). That was harmless in 9.3 and below, because
startScanKey() didn't do anything interesting, but in 9.4 it initializes
information needed for skipping entries (aka GIN fast scans), and you
readily get a segfault if it's not done. Nevertheless, it was clearly wrong
all along, so backpatch all the way to 9.1 where the early return was
introduced.
(AFAICS startScanKey() did nothing useful in 9.3 and below, because the
fields it initialized were already initialized in ginFillScanKey(), but I
don't dare to change that in a minor release. ginFillScanKey() is always
called in gingetbitmap() even though there's a check there to see if the
scan keys have already been initialized, because they never are; ginrescan()
frees them.)
In passing, remove the unnecessary if-check from the second inner loop in
startScan(). We already check in the first loop that the condition is true
for all entries.
Reported by Olaf Gawenda, bug #12694. Backpatch to 9.1 and above, although
AFAICS it causes a live bug only in 9.4.
Stephen Frost [Wed, 28 Jan 2015 22:42:51 +0000 (17:42 -0500)]
Clean up range-table building in copy.c
Commit 804b6b6db4dcfc590a468e7be390738f9f7755fb added the build of a
range table in copy.c to initialize the EState es_range_table since it
can be needed in error paths. Unfortunately, that commit didn't
appreciate that some code paths might end up not initializing the rte
which is used to build the range table.
Fix that and clean up a couple of other things along the way: build it
only once, and don't explicitly set it on the !is_from path as it
doesn't make any sense there (cstate is palloc0'd, so this isn't an
issue from an initialization standpoint either).
The prior commit went back to 9.0, but this only goes back to 9.1 as
prior to that the range table build happens immediately after building
the RTE and therefore doesn't suffer from this issue.
Stephen Frost [Mon, 12 Jan 2015 22:04:11 +0000 (17:04 -0500)]
Fix column-privilege leak in error-message paths
While building error messages to return to the user,
BuildIndexValueDescription, ExecBuildSlotValueDescription and
ri_ReportViolation would happily include the entire key or entire row in
the result returned to the user, even if the user didn't have access to
view all of the columns being included.
Instead, include only those columns which the user is providing or which
the user has select rights on. If the user does not have any rights
to view the table or any of the columns involved then no detail is
provided and a NULL value is returned from BuildIndexValueDescription
and ExecBuildSlotValueDescription. Note that, for key cases, the user
must have access to all of the columns for the key to be shown; a
partial key will not be returned.
Back-patch all the way, as column-level privileges are now in all
supported versions.
This has been assigned CVE-2014-8161, but since the issue and the patch
have already been publicized on pgsql-hackers, there's no point in trying
to hide this commit.
Tom Lane [Tue, 27 Jan 2015 17:06:36 +0000 (12:06 -0500)]
Fix NUMERIC field access macros to treat NaNs consistently.
Commit 145343534c153d1e6c3cff1fa1855787684d9a38 arranged to store numeric
NaN values as short-header numerics, but the field access macros did not
get the memo: they thought only "SHORT" numerics have short headers.
Most of the time this makes no difference because we don't access the
weight or dscale of a NaN; but numeric_send does that. As pointed out
by Andrew Gierth, this led to fetching uninitialized bytes.
AFAICS this could not have any worse consequences than that; in particular,
an unaligned stored numeric would have been detoasted by PG_GETARG_NUMERIC,
so that there's no risk of a fetch off the end of memory. Still, the code
is wrong on its own terms, and it's not hard to foresee future changes that
might expose us to real risks. So back-patch to all affected branches.
Tom Lane [Mon, 26 Jan 2015 20:17:36 +0000 (15:17 -0500)]
Fix volatile-safety issue in dblink's materializeQueryResult().
Some fields of the sinfo struct are modified within PG_TRY and then
referenced within PG_CATCH, so as with recent patch to async.c, "volatile"
is necessary for strict POSIX compliance; and that propagates to a couple
of subroutines as well as materializeQueryResult() itself. I think the
risk of actual issues here is probably higher than in async.c, because
storeQueryResult() is likely to get inlined into materializeQueryResult(),
leaving the compiler free to conclude that its stores into sinfo fields are
dead code.
Tom Lane [Mon, 26 Jan 2015 17:18:25 +0000 (12:18 -0500)]
Fix volatile-safety issue in pltcl_SPI_execute_plan().
The "callargs" variable is modified within PG_TRY and then referenced
within PG_CATCH, which is exactly the coding pattern we've now found
to be unsafe. Marking "callargs" volatile would be problematic because
it is passed by reference to some Tcl functions, so fix the problem
by not modifying it within PG_TRY. We can just postpone the free()
till we exit the PG_TRY construct, as is already done elsewhere in this
same file.
Also, fix failure to free(callargs) when exiting on too-many-arguments
error. This is only a minor memory leak, but a leak nonetheless.
In passing, remove some unnecessary "volatile" markings in the same
function. Those doubtless are there because gcc 2.95.3 whinged about
them, but we now know that its algorithm for complaining is many bricks
shy of a load.
This is certainly a live bug with compilers that optimize similarly
to current gcc, so back-patch to all active branches.
Tom Lane [Mon, 26 Jan 2015 16:57:36 +0000 (11:57 -0500)]
Fix volatile-safety issue in asyncQueueReadAllNotifications().
The "pos" variable is modified within PG_TRY and then referenced
within PG_CATCH, so for strict POSIX conformance it must be marked
volatile. Superficially the code looked safe because pos's address
was taken, which was sufficient to force it into memory ... but it's
not sufficient to ensure that the compiler applies updates exactly
where the program text says to. The volatility marking has to extend
into a couple of subroutines too, but I think that's probably a good
thing because the risk of out-of-order updates is mostly in those
subroutines not asyncQueueReadAllNotifications() itself. In principle
the compiler could have re-ordered operations such that an error could
be thrown while "pos" had an incorrect value.
It's unclear how real the risk is here, but for safety back-patch
to all active branches.
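The pattern common to this and the two preceding fixes, as a schematic
(the helper functions are hypothetical):

    #include "postgres.h"           /* PG_TRY/PG_CATCH/PG_END_TRY */

    extern int  compute_position(void);    /* hypothetical helpers */
    extern void risky_operation(int pos);  /* may elog(ERROR) */
    extern void cleanup_at(int pos);

    void
    volatile_pattern_demo(void)
    {
        volatile int pos = 0;   /* set in PG_TRY, read in PG_CATCH */

        PG_TRY();
        {
            pos = compute_position();
            risky_operation(pos);
        }
        PG_CATCH();
        {
            /* Without "volatile", the compiler may keep the update
             * to pos in a register that siglongjmp() discards. */
            cleanup_at(pos);
            PG_RE_THROW();
        }
        PG_END_TRY();
    }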
Tom Lane [Mon, 26 Jan 2015 03:49:59 +0000 (22:49 -0500)]
Further cleanup of ReorderBufferCommit().
On closer inspection, we can remove the "volatile" qualifier on
"using_subtxn" so long as we initialize that before the PG_TRY block,
which there's no particularly good reason not to do.
Also, push the "change" variable inside the PG_TRY so as to remove
all question of whether it needs "volatile", and remove useless
early initializations of "snapshow_now" and "using_subtxn".
Tom Lane [Mon, 26 Jan 2015 01:19:07 +0000 (20:19 -0500)]
Clean up assorted issues in ALTER SYSTEM coding.
Fix unsafe use of a non-volatile variable in PG_TRY/PG_CATCH in
AlterSystemSetConfigFile(). While at it, clean up a bundle of other
infelicities and outright bugs, including corner-case-incorrect linked list
manipulation, a poorly designed and worse documented parse-and-validate
function (which even included some randomly chosen hard-wired substitutes
for the specified elevel in one code path ... wtf?), direct use of open()
instead of fd.c's facilities, inadequate checking of write()'s return
value, and generally poorly written commentary.
Tom Lane [Sat, 24 Jan 2015 18:25:22 +0000 (13:25 -0500)]
Fix unsafe coding in ReorderBufferCommit().
"iterstate" must be marked volatile since it's changed inside the PG_TRY
block and then used in the PG_CATCH stanza. Noted by Mark Wilding of
Salesforce. (We really need to see if we can't get the C compiler to warn
about this.)
Also, reset iterstate to NULL after the mainline ReorderBufferIterTXNFinish
call, to ensure the PG_CATCH block doesn't try to do that a second time.
Tom Lane [Sat, 24 Jan 2015 18:05:45 +0000 (13:05 -0500)]
Replace a bunch more uses of strncpy() with safer coding.
strncpy() has a well-deserved reputation for being unsafe, so make an
effort to get rid of nearly all occurrences in HEAD.
A large fraction of the remaining uses were passing length less than or
equal to the known strlen() of the source, in which case no null-padding
can occur and the behavior is equivalent to memcpy(), though doubtless
slower and certainly harder to reason about. So just use memcpy() in
these cases.
In other cases, use either StrNCpy() or strlcpy() as appropriate (depending
on whether padding to the full length of the destination buffer seems
useful).
I left a few strncpy() calls alone in the src/timezone/ code, to keep it
in sync with upstream (the IANA tzcode distribution). There are also a
few such calls in ecpg that could possibly do with more analysis.
AFAICT, none of these changes are more than cosmetic, except for the four
occurrences in fe-secure-openssl.c, which are in fact buggy: an overlength
source leads to a non-null-terminated destination buffer and ensuing
misbehavior. These don't seem like security issues, first because no stack
clobber is possible and second because if your values of sslcert etc are
coming from untrusted sources then you've got problems way worse than this.
Still, it's undesirable to have unpredictable behavior for overlength
inputs, so back-patch those four changes to all active branches.
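The two replacement idioms, sketched (strlcpy() comes from src/port on
platforms that lack it):

    #include <string.h>

    void
    copy_idioms(char *dst, size_t dstsize, const char *src)
    {
        /* Length known to be <= strlen(src): strncpy() would never
         * null-pad here, so memcpy() is equivalent and clearer.
         * (Assumes the caller verified strlen(src) < dstsize.) */
        memcpy(dst, src, strlen(src) + 1);

        /* Truncation must be safe and the destination must always
         * end up null-terminated: */
        strlcpy(dst, src, dstsize);
    }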
Tom Lane [Tue, 20 Jan 2015 04:44:22 +0000 (23:44 -0500)]
In pg_regress, remove the temporary installation upon successful exit.
This results in a very substantial reduction in disk space usage during
"make check-world", since that sequence involves creation of numerous
temporary installations. It should also help a bit in the buildfarm, even
though the buildfarm script doesn't create as many temp installations,
because the current script misses deleting some of them; and anyway it
seems better to do this once in one place rather than expecting that
script to get it right every time.
In 9.4 and HEAD, also undo the unwise choice in commit b1aebbb6a86e96d7
to report strerror(errno) after a rmtree() failure. rmtree has already
reported that, possibly for multiple failures with distinct errnos; and
what's more, by the time it returns there is no good reason to assume
that errno still reflects the last reportable error. So reporting errno
here is at best redundant and at worst badly misleading.
Back-patch to all supported branches, so that future revisions of the
buildfarm script can rely on this behavior.
Tom Lane [Tue, 20 Jan 2015 04:01:36 +0000 (23:01 -0500)]
Adjust "pgstat wait timeout" message to be a translatable LOG message.
Per discussion, change the log level of this message to be LOG not WARNING.
The main point of this change is to avoid causing buildfarm run failures
when the stats collector is exceptionally slow to respond, which it not
infrequently is on some of the smaller/slower buildfarm members.
This change does lose notice to an interactive user when his stats query
is looking at out-of-date stats, but the majority opinion (not necessarily
that of yours truly) is that WARNING messages would probably not get
noticed anyway on heavily loaded production systems. A LOG message at
least ensures that the problem is recorded somewhere where bulk auditing
for the issue is possible.
Also, instead of an untranslated "pgstat wait timeout" message, provide
a translatable and hopefully more understandable message "using stale
statistics instead of current ones because stats collector is not
responding". The original text was written hastily under the assumption
that it would never really happen in practice, which we now know to be
unduly optimistic.
Back-patch to all active branches, since we've seen the buildfarm issue
in all branches.
Previously, the xml value resulting from an xpath query would not have
namespace declarations if the namespace declarations were attached to
an ancestor element in the input xml value. That means the output value
was not correct XML. Fix that by running the result value through
xmlCopyNode(), which produces the correct namespace declarations.
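A minimal sketch of that approach with libxml2 (an assumption about the
shape of the fix, not its exact code):

    #include <libxml/tree.h>

    /* Copying an XPath result node makes it self-contained: namespace
     * declarations inherited from ancestor elements are emitted on
     * the copy, so serializing it yields well-formed XML. */
    xmlNodePtr
    copy_with_namespaces(xmlNodePtr result_node)
    {
        return xmlCopyNode(result_node, 1);   /* 1 = recursive copy */
    }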
Another attempt at fixing Windows Norwegian locale.
Previous fix mapped "Norwegian (Bokmål)" locale, which contains a non-ASCII
character, to the pure ASCII alias "norwegian-bokmal". However, it turns
out that more recent versions of the CRT library, in particular MSVCR110
(Visual Studio 2012), changed the behaviour of setlocale() so that if
you pass "norwegian-bokmal" to setlocale, it returns "Norwegian_Norway".
That meant trouble, when setlocale(..., NULL) first returned
"Norwegian (Bokmål)_Norway", which we mapped to "norwegian-bokmal_Norway",
but another call to setlocale(..., "norwegian-bokmal_Norway") returned
"Norwegian_Norway". That caused PostgreSQL to think that they are different
locales, and therefore not compatible. That caused initdb to fail at
CREATE DATABASE.
Older CRT versions seem to accept "Norwegian_Norway" too, so change the
mapping to return "Norwegian_Norway" instead of "norwegian-bokmal".
Backpatch to 9.2 like the previous attempt. We haven't made a release that
includes the previous fix yet, so we don't need to worry about changing the
locale of existing clusters from "norwegian-bokmal" to "Norwegian_Norway".
(Doing any mapping like this at all requires changing the locale of
existing databases; the release notes need to include instructions for
that.)
Noah Misch [Fri, 16 Jan 2015 06:27:31 +0000 (01:27 -0500)]
Update "pg_regress --no-locale" for Darwin and Windows.
Commit 894459e59ffa5c7fee297b246c17e1f72564db1d revealed this option to
be broken for NLS builds on Darwin, but "make -C contrib/unaccent check"
and the buildfarm client rely on it. Fix that configuration by
redefining the option to imply LANG=C on Darwin. In passing, use LANG=C
instead of LANG=en on Windows; since only postmaster startup uses that
value, testers are unlikely to notice the change. Back-patch to 9.0,
like the predecessor commit.
Tom Lane [Thu, 15 Jan 2015 23:52:25 +0000 (18:52 -0500)]
Fix use-of-already-freed-memory problem in EvalPlanQual processing.
Up to now, the "child" executor state trees generated for EvalPlanQual
rechecks have simply shared the ResultRelInfo arrays used for the original
execution tree. However, this leads to dangling-pointer problems, because
ExecInitModifyTable() is all too willing to scribble on some fields of the
ResultRelInfo(s) even when it's being run in one of those child trees.
This trashes those fields from the perspective of the parent tree, because
even if the generated subtree is logically identical to what was in use in
the parent, it's in a memory context that will go away when we're done
with the child state tree.
We do however want to share information in the direction from the parent
down to the children; in particular, fields such as es_instrument *must*
be shared or we'll lose the stats arising from execution of the children.
So the simplest fix is to make a copy of the parent's ResultRelInfo array,
but not copy any fields back at end of child execution.
Per report from Manuel Kniep. The added isolation test is based on his
example. In an unpatched memory-clobber-enabled build it will reliably
fail with "ctid is NULL" errors in all branches back to 9.1, as a
consequence of junkfilter->jf_junkAttNo being overwritten with 0x7f7f.
This test cannot be run as-is before that for lack of WITH syntax; but
I have no doubt that some variant of this problem can arise in older
branches, so apply the code change all the way back.
Tom Lane [Thu, 15 Jan 2015 18:18:16 +0000 (13:18 -0500)]
Improve performance of EXPLAIN with large range tables.
As of 9.3, ruleutils.c goes to some lengths to ensure that table and column
aliases used in its output are unique. Of course this takes more time than
was required before, which in itself isn't fatal. However, EXPLAIN was set
up so that recalculation of the unique aliases was repeated for each
subexpression printed in a plan. That results in O(N^2) time and memory
consumption for large plan trees, which did not happen in older branches.
Fortunately, the expensive work is the same across a whole plan tree,
so there is no need to repeat it; we can do most of the initialization
just once per query and re-use it for each subexpression. This buys
back most (not all) of the performance loss since 9.2.
We need an extra ExplainState field to hold the precalculated deparse
context. That's no problem in HEAD, but in the back branches, expanding
sizeof(ExplainState) seems risky because third-party extensions might
have local variables of that struct type. So, in 9.4 and 9.3, introduce
an auxiliary struct to keep sizeof(ExplainState) the same. We should
refactor the APIs to avoid such local variables in future, but that's
material for a separate HEAD-only commit.
Per gripe from Alexey Bashtanov. Back-patch to 9.3 where the issue
was introduced.
Robert Haas [Thu, 15 Jan 2015 14:26:03 +0000 (09:26 -0500)]
pg_standby: Avoid writing one byte beyond the end of the buffer.
Previously, read() might have returned a length equal to the buffer
length, and then the subsequent store to buf[len] would write a zero
byte one position past the end of the buffer. This doesn't seem likely to be
a security issue, but there's some chance it could result in
pg_standby misbehaving.
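The pattern, in sketch form (hypothetical variable names, not the
verbatim pg_standby code):

    char        buf[BUFSIZ];
    ssize_t     len;

    /* Broken: read() may fill the entire buffer, in which case the
     * store to buf[len] writes one byte past the end. */
    len = read(fd, buf, sizeof(buf));
    buf[len] = '\0';

    /* Fixed: reserve room for the terminating zero byte. */
    len = read(fd, buf, sizeof(buf) - 1);
    if (len >= 0)
        buf[len] = '\0';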
Spotted by Coverity; patch by Michael Paquier, reviewed by me.
Tom Lane [Wed, 14 Jan 2015 16:08:17 +0000 (11:08 -0500)]
Allow CFLAGS from configure's environment to override automatic CFLAGS.
Previously, configure would add any switches that it chose of its own
accord to the end of the user-specified CFLAGS string. Since most
compilers process these left-to-right, this meant that configure's choices
would override the user-specified flags in case of conflicts. We'd rather
that worked the other way around, so adjust the logic to put the user's
string at the end not the beginning.
There does not seem to be a need for a similar behavior change for CPPFLAGS
or LDFLAGS: in those, the earlier switches tend to win (think -I or -L
behavior) so putting the user's string at the front is fine.
Backpatch to 9.4 but not earlier. I'm not planning to run buildfarm member
guar on older branches, and it seems a bit risky to change this behavior
in long-stable branches.
Andres Freund [Tue, 13 Jan 2015 20:02:47 +0000 (21:02 +0100)]
Make logging_collector=on work with non-windows EXEC_BACKEND again.
Commit b94ce6e80 reordered postmaster's startup sequence so that the
tempfile directory is only cleaned up after all the necessary state
for pg_ctl is collected. Unfortunately the chosen location is after
the syslogger has been started, which normally is fine, except for
!WIN32 EXEC_BACKEND builds, which pass information to children via
files in the temp directory.
Move the call to RemovePgTempFiles() to just before the syslogger is
started; the syslogger is the first child we fork.
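Schematically (a simplified sketch of the PostmasterMain sequence, not
the verbatim code):

    /* Clean out the temp directory before forking any children ... */
    RemovePgTempFiles();

    /* ... so that the parameter files used to pass state to exec'd
     * children (the !WIN32 EXEC_BACKEND case) cannot be removed after
     * the first child, the syslogger, has been forked. */
    SysLoggerPID = SysLogger_Start();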
Luckily EXEC_BACKEND is pretty much only used by end users on Windows,
which has a separate method of passing information to children. That
means the real-world impact of this bug is very small.
Tom Lane [Mon, 12 Jan 2015 20:13:31 +0000 (15:13 -0500)]
Avoid unexpected slowdown in vacuum regression test.
I noticed the "vacuum" regression test taking really significantly longer
than it used to on a slow machine. Investigation pointed the finger at
commit e415b469b33ba328765e39fd62edcd28f30d9c3c, which added creation of
an index using an extremely expensive index function. That function was
evidently meant to be applied only twice ... but the test re-used an
existing test table, which until a couple of lines earlier had contained
over two thousand rows. Depending on the timing of the concurrent
regression tests,
the intervening VACUUMs might have been unable to remove those
recently-dead rows, and then the index build would need to create index
entries for them too, leading to the wrap_do_analyze() function being
executed 2000+ times not twice. Avoid this by using a different table
that is guaranteed to have only the intended two rows in it.
Back-patch to 9.0, like the commit that created the problem.
Tom Lane [Mon, 12 Jan 2015 17:40:12 +0000 (12:40 -0500)]
Use correct text domain for errcontext() appearing within ereport().
The mechanism added in commit dbdf9679d7d61b03a3bf73af9b095831b7010eb5
for associating the correct translation domain with errcontext strings
potentially fails in cases where errcontext() is used within an ereport()
macro. Such usage was not originally envisioned for errcontext(), but we
do have a few places that do it. In this situation, the intended comma
expression becomes just a couple of arguments to errfinish(), which the
compiler might choose to evaluate right-to-left.
Fortunately, in such cases the textdomain for the errcontext string must
be the same as for the surrounding ereport. So we can fix this by letting
errstart initialize context_domain along with domain; then it will have
the correct value no matter which order the calls occur in. (Note that
error stack callback functions are not invoked until errfinish, so normal
usage of errcontext won't affect what happens for errcontext calls within
the ereport macro.)
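In sketch form, the fix in errstart looks roughly like this (simplified,
not the verbatim code):

    /* Establish both domains up front; then context_domain is correct
     * no matter in which order the compiler evaluates the errcontext()
     * call relative to the other errfinish() arguments. */
    edata->domain = domain ? domain : PG_TEXTDOMAIN("postgres");
    edata->context_domain = edata->domain;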
In passing, make sure that errcontext calls within the main backend set
context_domain to something non-NULL. This isn't a live bug because
NULL would select the current textdomain() setting which should be the
right thing anyway --- but it seems better to handle this completely
consistently with the regular domain field.
Per report from Dmitry Voronin. Backpatch to 9.3; before that, there
wasn't any attempt to ensure that errcontext strings were translated
in an appropriate domain.
Stephen Frost [Mon, 12 Jan 2015 15:13:18 +0000 (10:13 -0500)]
Skip dead backends in MinimumActiveBackends
Back in ed0b409, PGPROC was split and moved to static variables in
procarray.c, with procs in ProcArrayStruct replaced by an array of
integers representing process numbers (pgprocnos), with -1 indicating a
dead process which has yet to be removed. Access to procArray is
generally done under ProcArrayLock and therefore most code does not have
to concern itself with -1 entries.
However, MinimumActiveBackends intentionally does not take
ProcArrayLock, which means it has to be extra careful when accessing
procArray. Prior to ed0b409, this was handled by checking for a NULL
in the pointer array, but that check was no longer valid after the
split. Coverity pointed out that the check could never happen and so
it was removed in 5592eba. That didn't make anything worse, but it
didn't fix the issue either.
The correct fix is to check for pgprocno == -1 and skip over that entry
if it is encountered.
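In sketch form (simplified from the loop in procarray.c):

    for (index = 0; index < arrayP->numProcs; index++)
    {
        int         pgprocno = arrayP->pgprocnos[index];
        volatile PGPROC *proc;

        /* Without ProcArrayLock, dead entries (-1) may be present;
         * skip them rather than computing &allProcs[-1]. */
        if (pgprocno == -1)
            continue;

        proc = &allProcs[pgprocno];
        /* ... existing heuristic checks on 'proc' ... */
    }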
Back-patch to 9.2, since otherwise there can be attempts to read memory
before the start of the array. Note that the changes prior to 9.4 will
look a bit different due to the change in 5592eba.
Note that MinimumActiveBackends only returns a bool for heuristic
purposes; any before-the-array accesses are strictly read-only, so there
is no security implication, and the lack of field complaints indicates
it's very unlikely that anyone has run into issues due to this.
Tom Lane [Sun, 11 Jan 2015 17:35:47 +0000 (12:35 -0500)]
Fix libpq's behavior when /etc/passwd isn't readable.
Some users run their applications in chroot environments that lack an
/etc/passwd file. This means that the current UID's user name and home
directory are not obtainable. libpq used to be all right with that,
so long as the database role name to use was specified explicitly.
But commit a4c8f14364c27508233f8a31ac4b10a4c90235a9 broke such cases by
causing any failure of pg_fe_getauthname() to be treated as a hard error.
In any case it did little to advance its nominal goal of causing errors
in pg_fe_getauthname() to be reported better. So revert that and instead
put some real error-reporting code in place. This requires changes to the
APIs of pg_fe_getauthname() and pqGetpwuid(), since the latter had
departed from the POSIX-specified API of getpwuid_r() in a way that made
it impossible to distinguish actual lookup errors from "no such user".
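For reference, the POSIX getpwuid_r() API distinguishes the two cases
like this (a sketch, not the exact libpq code):

    #include <sys/types.h>
    #include <pwd.h>
    #include <unistd.h>

    struct passwd   pwbuf;
    struct passwd  *pw = NULL;
    char            buf[1024];
    int             rc;

    rc = getpwuid_r(getuid(), &pwbuf, buf, sizeof(buf), &pw);
    if (rc != 0)
    {
        /* hard lookup failure: report strerror(rc) */
    }
    else if (pw == NULL)
    {
        /* the lookup itself worked, but there is no such user */
    }
    else
    {
        /* success: pw->pw_name and pw->pw_dir are usable */
    }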
To allow such failures to be reported, while not failing if the caller
supplies a role name, add a second call of pg_fe_getauthname() in
connectOptions2(). This is a tad ugly, and could perhaps be avoided with
some refactoring of PQsetdbLogin(), but I'll leave that idea for later.
(Note that the complained-of misbehavior only occurs in PQsetdbLogin,
not when using the PQconnect functions, because in the latter we will
never bother to call pg_fe_getauthname() if the user gives a role name.)
In passing, also clean up the Windows-side usage of GetUserName(): the
recommended buffer size is 257 bytes, the passed buffer length should be
the full buffer size rather than the buffer size less 1, and any error
is reported via GetLastError(), not errno.
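On the Windows side the corrected usage is roughly (a sketch):

    /* 257 bytes (UNLEN + 1) is the documented buffer size */
    char    username[257];
    DWORD   namesize = sizeof(username);    /* full size, not size - 1 */

    if (!GetUserName(username, &namesize))
    {
        /* failure is reported via GetLastError(), not errno */
    }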
Per report from Christoph Berg. Back-patch to 9.4 where the chroot
failure case was introduced. The generally poor reporting of errors
here is of very long standing, of course, but given the lack of field
complaints about it we won't risk changing these APIs further back
(even though they're theoretically internal to libpq).
Alvaro Herrera [Fri, 9 Jan 2015 15:34:25 +0000 (12:34 -0300)]
xlogreader.c: Fix report_invalid_record translatability flag
For some reason, in 7fcbf6a405ffc12a4546a25b98592ee6733783fc I neglected
to specify in GETTEXT_TRIGGERS which argument of report_invalid_record
should be read by gettext. This will drop the translation percentages
for the backend all the way back to 9.3 ...
Andres Freund [Thu, 8 Jan 2015 12:35:04 +0000 (13:35 +0100)]
Protect against XLogReaderAllocate() failing to allocate memory.
logical.c's StartupDecodingContext() forgot to check whether
XLogReaderAllocate() returns NULL indicating a memory allocation
failure. This could, although it is quite unlikely, lead to a NULL
pointer dereference.
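The missing check, in sketch form (simplified from
StartupDecodingContext; the exact error wording is an assumption):

    ctx->reader = XLogReaderAllocate(read_page, ctx);
    if (ctx->reader == NULL)
        ereport(ERROR,
                (errcode(ERRCODE_OUT_OF_MEMORY),
                 errmsg("out of memory")));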
This only applies to 9.4 as earlier versions don't do logical
decoding, and later versions don't return NULL after allocation
failures in XLogReaderAllocate().
Noah Misch [Thu, 8 Jan 2015 03:35:44 +0000 (22:35 -0500)]
On Darwin, detect and report a multithreaded postmaster.
Darwin --enable-nls builds use a substitute setlocale() that may start a
thread. Buildfarm member orangutan experienced BackendList corruption
on account of different postmaster threads executing signal handlers
simultaneously. Furthermore, a multithreaded postmaster risks undefined
behavior from sigprocmask() and fork(). Emit LOG messages about the
problem and its workaround. Back-patch to 9.0 (all supported versions).
Noah Misch [Thu, 8 Jan 2015 03:34:57 +0000 (22:34 -0500)]
Always set the six locale category environment variables in main().
Typical server invocations already achieved that. Invalid locale
settings in the initial postmaster environment interfered, as could
malloc() failure. Setting "LC_MESSAGES=pt_BR.utf8 LC_ALL=invalid" in
the postmaster environment will now choose C-locale messages, not
Brazilian Portuguese messages. Most localized programs, including all
PostgreSQL frontend executables, do likewise. Users are unlikely to
observe changes involving locale categories other than LC_MESSAGES.
CheckMyDatabase() ensures that we successfully set LC_COLLATE and
LC_CTYPE; main() sets the remaining three categories to locale "C",
which almost cannot fail. Back-patch to 9.0 (all supported versions).
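Schematically, main() now does something like this (a simplified sketch;
the real code's fallback handling is more involved):

    /* LC_COLLATE and LC_CTYPE are validated later by CheckMyDatabase();
     * LC_MESSAGES comes from the environment, and the remaining three
     * categories are forced to "C", which almost cannot fail. */
    setlocale(LC_MESSAGES, "");
    setlocale(LC_MONETARY, "C");
    setlocale(LC_NUMERIC, "C");
    setlocale(LC_TIME, "C");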
Noah Misch [Thu, 8 Jan 2015 03:33:58 +0000 (22:33 -0500)]
Reject ANALYZE commands during VACUUM FULL or another ANALYZE.
vacuum()'s static variable handling makes it non-reentrant; an ensuing
null pointer dereference crashed the backend. Back-patch to 9.0 (all
supported versions).
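The guard amounts to something like this (a sketch; the flag name and
the message wording are assumptions):

    /* vacuum() keeps static state, so re-entrant calls must be rejected */
    static bool in_vacuum = false;

    if (in_vacuum)
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("%s cannot be executed from VACUUM or ANALYZE",
                        stmttype)));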