Don't archive bogus recycled or preallocated files after timeline switch.
After a timeline switch, we would leave behind recycled WAL segments that
are in the future, but on the old timeline. After promotion, and after they
become old enough to be recycled again, we would notice that they don't have
a .ready or .done file, create a .ready file for them, and archive them.
That's bogus, because the files contain garbage, recycled from an older
timeline (or preallocated as zeros). We shouldn't archive such files.
This could happen when we're following a timeline switch during replay, or
when we switch to a new timeline at end-of-recovery.
To fix, whenever we switch to a new timeline, scan the data directory for
WAL segments on the old timeline, but with a higher segment number, and
remove them. Those don't belong to our timeline history, and are most
likely bogus recycled or preallocated files. They could also be valid files
that we streamed from the primary ahead of time, but in any case, they're
not needed to recover to the new timeline.
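The check itself is simple. A minimal sketch of the decision, assuming the usual 24-hex-digit segment naming (8 hex digits each for timeline, log, and segment); the helper name and switchpoint arguments are illustrative, not the actual patch:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /*
     * Does this WAL file name belong to the old timeline, with a segment
     * number at or past the switch point?  Such files are recycled or
     * preallocated garbage (or segments streamed ahead of time) and can
     * be removed.  Hypothetical helper.
     */
    static bool
    is_stale_old_timeline_segment(const char *fname,
                                  unsigned int oldTLI,
                                  unsigned int switchLog,
                                  unsigned int switchSeg)
    {
        unsigned int tli, log, seg;

        /* segment names are 24 hex digits: TTTTTTTTXXXXXXXXYYYYYYYY */
        if (strlen(fname) != 24 ||
            sscanf(fname, "%08X%08X%08X", &tli, &log, &seg) != 3)
            return false;       /* not a WAL segment name at all */

        return tli == oldTLI &&
            (log > switchLog || (log == switchLog && seg >= switchSeg));
    }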
It was previously possible to have the launcher re-execute its main loop
before shutting down if some other signal was received or an error
occurred after getting SIGTERM, as reported by Qingqing Zhou.
While investigating, Tom Lane further noticed that if autovacuum had
been disabled in the config file, it would misbehave by trying to start
a new worker instead of bailing out immediately -- it would consider
itself to have been invoked in emergency mode.
Fix both problems by checking the shutdown flag in a few more places.
These problems have existed since autovacuum was introduced, so
backpatch all the way back.
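Schematically, the fix is the standard sig_atomic_t flag pattern, with the flag re-tested after every wait or operation that can fail. A minimal sketch with illustrative names, not the actual launcher code:

    #include <signal.h>
    #include <stdbool.h>

    static volatile sig_atomic_t got_SIGTERM = false;

    static void
    sigterm_handler(int sig)
    {
        got_SIGTERM = true;
    }

    static void
    launcher_main_loop(void)
    {
        while (!got_SIGTERM)
        {
            /* sleep, process other signals, maybe recover from errors */

            if (got_SIGTERM)
                break;          /* re-check after anything that can wait */

            /* only now consider launching a worker */
        }
        /* fall through to shutdown; never loop back in */
    }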
Tom Lane [Tue, 7 Apr 2015 20:56:21 +0000 (16:56 -0400)]
Fix assorted inconsistent function declarations.
While gcc doesn't complain if you declare a function "static" and then
define it not-static, other compilers do; and in any case the code is
highly misleading this way. Add the missing "static" keywords to a
couple of recent patches. Per buildfarm member pademelon.
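For illustration, the inconsistency in question (hypothetical function name):

    /* gcc accepts this silently; other compilers reject it, and it
     * misleads readers either way: */
    static int frobnicate(int x);       /* declared static ... */

    int
    frobnicate(int x)                   /* ... defined without "static" */
    {
        return x + 1;
    }

    /* The fix: make the definition match the declaration. */
    static int
    frobnicate_fixed(int x)
    {
        return x + 1;
    }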
Tom Lane [Sat, 4 Apr 2015 23:55:15 +0000 (19:55 -0400)]
Fix incorrect matching of subexpressions in outer-join plan nodes.
Previously we would re-use input subexpressions in all expression trees
attached to a Join plan node. However, if it's an outer join and the
subexpression appears in the nullable-side input, this is potentially
incorrect for apparently-matching subexpressions that came from above
the outer join (ie, targetlist and qpqual expressions), because the
executor will treat the subexpression value as NULL when maybe it should
not be.
The case is fairly hard to hit because (a) you need a non-strict
subexpression (else NULL is correct), and (b) we don't usually compute
expressions in the outputs of non-toplevel plan nodes. But we might do
so if the expressions are sort keys for a mergejoin, for example.
Probably in the long run we should make a more explicit distinction between
Vars appearing above and below an outer join, but that will be a major
planner redesign and not at all back-patchable. For the moment, just hack
set_join_references so that it will not match any non-Var expressions
coming from nullable inputs to expressions that came from above the join.
(This is somewhat overkill, in that a strict expression could still be
matched, but it doesn't seem worth the effort to check that.)
Per report from Qingqing Zhou. The added regression test case is based
on his example.
This has been broken for a very long time, so back-patch to all active
branches.
Tom Lane [Fri, 3 Apr 2015 20:49:12 +0000 (16:49 -0400)]
Remove unnecessary variables in _hash_splitbucket().
Commit ed9cc2b5df59fdbc50cce37399e26b03ab2c1686 made it unnecessary to pass
start_nblkno to _hash_splitbucket(), and for that matter unnecessary to
have the internal nblkno variable either. My compiler didn't complain
about that, but some did. I also rearranged the use of oblkno a bit to
make that case more parallel.
Report and initial patch by Petr Jelinek, rearranged a bit by me.
Back-patch to all branches, like the previous patch.
psql was already accepting conninfo strings as the first parameter in
\connect, but the way it worked wasn't sane; some of the other
parameters would get the previous connection's values, causing it to
connect to a completely unexpected server or, more likely, to find no
server at all because of a completely wrong combination of parameters.
Fix by explicitly checking for a conninfo-looking parameter in the
dbname position; if one is found, use its complete specification rather
than mix with the other arguments. Also, change tab-completion to not
try to complete conninfo/URI-looking "dbnames" and document that
conninfos are accepted as first argument.
There was a weak consensus to backpatch this, because while the behavior
of using the dbname as a conninfo is nowhere documented for \connect, it
is reasonable to expect that it works because it does work in many other
contexts. Therefore this is backpatched all the way back to 9.0.
To implement this, routines previously private to libpq have been
duplicated so that psql can decide what looks like a conninfo/URI
string. In back branches, just duplicate the same code all the way back
to 9.2, where URIs were introduced; 9.0 and 9.1 have a simpler version.
In master, the routines are moved to src/common and renamed.
Author: David Fetter, Andrew Dunstan. Some editorialization by me
(probably earning a Gierth "Sloppy" badge in the process.)
Reviewers: Andrew Gierth, Erik Rijkers, Pavel Stěhule, Stephen Frost,
Robert Haas, Andrew Dunstan.
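Roughly, a "conninfo-looking" dbname is one that contains a keyword=value pair or starts with a connection-URI scheme. A minimal sketch of that test (illustrative name, not the duplicated libpq code):

    #include <stdbool.h>
    #include <string.h>

    static bool
    looks_like_conninfo(const char *dbname)
    {
        /* connection URIs start with one of these prefixes */
        if (strncmp(dbname, "postgresql://", 13) == 0 ||
            strncmp(dbname, "postgres://", 11) == 0)
            return true;

        /* keyword=value conninfo strings contain an '=' */
        return strchr(dbname, '=') != NULL;
    }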
Tom Lane [Wed, 1 Apr 2015 00:02:40 +0000 (20:02 -0400)]
Fix incorrect markup in documentation of window frame clauses.
You're required to write either RANGE or ROWS to start a frame clause,
but the documentation incorrectly implied this is optional. Noted by
David Johnston.
Andrew Dunstan [Mon, 30 Mar 2015 21:17:54 +0000 (17:17 -0400)]
Run pg_upgrade and pg_resetxlog with restricted token on Windows
As with initdb, these programs need to run with a restricted token, and
if they don't, pg_upgrade will fail when run as a user with Administrator
privileges.
Backpatch to all live branches. On the development branch the code is
reorganized so that the restricted token code is now in a single
location. On the stable branches a less invasive change is made by
simply copying the relevant code to pg_upgrade.c and pg_resetxlog.c.
Patches and bug report from Muhammad Asif Naeem, reviewed by Michael
Paquier, slightly edited by me.
Tom Lane [Mon, 30 Mar 2015 20:40:05 +0000 (16:40 -0400)]
Fix bogus concurrent use of _hash_getnewbuf() in bucket split code.
_hash_splitbucket() obtained the base page of the new bucket by calling
_hash_getnewbuf(), but it held no exclusive lock that would prevent some
other process from calling _hash_getnewbuf() at the same time. This is
contrary to _hash_getnewbuf()'s API spec and could in fact cause failures.
In practice, we must only call that function while holding write lock on
the hash index's metapage.
An additional problem was that we'd already modified the metapage's bucket
mapping data, meaning that failure to extend the index would leave us with
a corrupt index.
Fix both issues by moving the _hash_getnewbuf() call to just before we
modify the metapage in _hash_expandtable().
Unfortunately there's still a large problem here, which is that we could
also incur ENOSPC while trying to get an overflow page for the new bucket.
That would leave the index corrupt in a more subtle way, namely that some
index tuples that should be in the new bucket might still be in the old
one. Fixing that seems substantially more difficult; even preallocating as
many pages as we could possibly need wouldn't entirely guarantee that the
bucket split would complete successfully. So for today let's just deal
with the base case.
Per report from Antonin Houska. Back-patch to all active branches.
Tom Lane [Sun, 29 Mar 2015 19:04:09 +0000 (15:04 -0400)]
Add vacuum_delay_point call in compute_index_stats's per-sample-row loop.
Slow functions in index expressions might cause this loop to take long
enough to make it worth being cancellable. Probably it would be enough
to call CHECK_FOR_INTERRUPTS here, but for consistency with other
per-sample-row loops in this file, let's use vacuum_delay_point.
Report and patch by Jeff Janes. Back-patch to all supported branches.
Tom Lane [Tue, 24 Mar 2015 19:53:06 +0000 (15:53 -0400)]
Fix ExecOpenScanRelation to take a lock on a ROW_MARK_COPY relation.
ExecOpenScanRelation assumed that any relation listed in the ExecRowMark
list has been locked by InitPlan; but this is not true if the rel's
markType is ROW_MARK_COPY, which is possible if it's a foreign table.
In most (possibly all) cases, failure to acquire a lock here isn't really
problematic because the parser, planner, or plancache would have taken the
appropriate lock already. In principle though it might leave us vulnerable
to working with a relation that we hold no lock on, and in any case if the
executor isn't depending on previously-taken locks otherwise then it should
not do so for ROW_MARK_COPY relations.
Noted by Etsuro Fujita. Back-patch to all active versions, since the
inconsistency has been there a long time. (It's almost certainly
irrelevant in 9.0, since that predates foreign tables, but the code's
still wrong on its own terms.)
Tom Lane [Mon, 16 Mar 2015 03:22:03 +0000 (23:22 -0400)]
Replace insertion sort in contrib/intarray with qsort().
It's all very well to claim that a simplistic sort is fast in easy
cases, but O(N^2) in the worst case is not good ... especially if the
worst case is as easy to hit as "descending order input". Replace that
bit with our standard qsort.
Per bug #12866 from Maksym Boguk. Back-patch to all active branches.
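For reference, the shape of the replacement, using the standard library's qsort() with an overflow-safe comparator (a sketch; contrib/intarray's actual comparator may differ):

    #include <stdlib.h>

    static int
    compare_int32(const void *a, const void *b)
    {
        int av = *(const int *) a;
        int bv = *(const int *) b;

        /* avoid the overflow hazard of "return av - bv" */
        return (av > bv) - (av < bv);
    }

    static void
    sort_array(int *vals, size_t nvals)
    {
        qsort(vals, nvals, sizeof(int), compare_int32);
    }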
Tom Lane [Sat, 14 Mar 2015 17:43:00 +0000 (13:43 -0400)]
Remove workaround for ancient incompatibility between readline and libedit.
GNU readline defines the return value of write_history() as "zero if OK,
else an errno code". libedit's version of that function used to have a
different definition (to wit, "-1 if error, else the number of lines
written to the file"). We tried to work around that by checking whether
errno had become nonzero, but this method has never been kosher according
to the published API of either library. It's reportedly completely broken
in recent Ubuntu releases: psql bleats about "No such file or directory"
when saving ~/.psql_history, even though the write worked fine.
However, libedit has been following the readline definition since somewhere
around 2006, so it seems all right to finally break compatibility with
ancient libedit releases and trust that the return value is what readline
specifies. (I'm not sure when the various Linux distributions incorporated
this fix, but I did find that OS X has been shipping fixed versions since
10.5/Leopard.)
If anyone is still using such an ancient libedit, they will find that psql
complains it can't write ~/.psql_history at exit, even when the file was
written correctly. This is no worse than the behavior we're fixing for
current releases.
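A call site can now simply trust the readline contract ("zero if OK, else an errno code"). A minimal sketch with illustrative names:

    #include <stdio.h>
    #include <string.h>
    #include <readline/history.h>

    static void
    save_history(const char *fname)
    {
        int rc = write_history(fname);

        if (rc != 0)
            fprintf(stderr, "could not save history to %s: %s\n",
                    fname, strerror(rc));   /* rc itself is the errno code */
    }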
Tom Lane [Thu, 12 Mar 2015 17:38:49 +0000 (13:38 -0400)]
Ensure tableoid reads correctly in EvalPlanQual-manufactured tuples.
The ROW_MARK_COPY path in EvalPlanQualFetchRowMarks() was just setting
tableoid to InvalidOid, I think on the assumption that the referenced
RTE must be a subquery or other case without a meaningful OID. However,
foreign tables also use this code path, and they do have meaningful
table OIDs; so failure to set the tuple field can lead to user-visible
misbehavior. Fix that by fetching the appropriate OID from the range
table.
There's still an issue about whether CTID can ever have a meaningful
value in this case; at least with postgres_fdw foreign tables, it does.
But that is a different problem that seems to require a significantly
different patch --- it's debatable whether postgres_fdw really wants to
use this code path at all.
Simplified version of a patch by Etsuro Fujita, who also noted the
problem to begin with. The issue can be demonstrated in all versions
having FDWs, so back-patch to 9.1.
Tom Lane [Sun, 8 Mar 2015 17:35:28 +0000 (13:35 -0400)]
Fix documentation for libpq's PQfn().
The SGML docs claimed that 1-byte integers could be sent or received with
the "isint" options, but no such behavior has ever been implemented in
pqGetInt() or pqPutInt(). The in-code documentation header for PQfn() was
even less in tune with reality, and the code itself used parameter names
matching neither the SGML docs nor its libpq-fe.h declaration. Do a bit
of additional wordsmithing on the SGML docs while at it.
Since the business about 1-byte integers is a clear documentation bug,
back-patch to all supported branches.
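For illustration, a PQfn() call consistent with the corrected docs; conn and fnid (the backend function's OID) are assumed to be supplied by the caller, and only 2- or 4-byte "isint" values are legal:

    #include <libpq-fe.h>

    static PGresult *
    call_int_function(PGconn *conn, Oid fnid, int arg_value)
    {
        PQArgBlock arg;
        int        result = 0;
        int        result_len = 0;

        arg.isint = 1;
        arg.len = 4;            /* 2 or 4 only; len 1 has never worked */
        arg.u.integer = arg_value;

        return PQfn(conn, fnid, &result, &result_len,
                    1 /* result is an int */, &arg, 1);
    }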
Alvaro Herrera [Thu, 5 Mar 2015 21:03:16 +0000 (18:03 -0300)]
Fix user mapping object description
We were using "user mapping for user XYZ" as the description for user mappings, but
that's ambiguous because users can have mappings on multiple foreign
servers; therefore change it to "for user XYZ on server UVW" instead.
Object identities for user mappings are also updated in the same way, in
branches 9.3 and above.
The incomplete description string was introduced together with the whole
SQL/MED infrastructure by commit cae565e503 of the 8.4 era, so backpatch
all the way back.
Stephen Frost [Mon, 2 Mar 2015 19:12:43 +0000 (14:12 -0500)]
Fix pg_dump handling of extension config tables
Since 9.1, we've provided extensions with a way to denote
"configuration" tables- tables created by an extension which the user
may modify. By marking these as "configuration" tables, the extension
is asking for the data in these tables to be pg_dump'd (tables which
are not marked in this way are assumed to be entirely handled during
CREATE EXTENSION and are not included at all in a pg_dump).
Unfortunately, pg_dump neglected to consider foreign key relationships
between extension configuration tables and therefore could end up
trying to reload the data in an order which would cause FK violations.
This patch teaches pg_dump about these dependencies, so that the data is
dumped out in the best order possible. Note that there's no
way to handle circular dependencies, but those have yet to be seen in
the wild.
The release notes for this should include a caution to users that
existing pg_dump-based backups may be invalid due to this issue. The
data is all there, but restoring from it will require extracting the
data for the configuration tables and then loading them in the correct
order by hand.
Discussed initially back in bug #6738, more recently brought up by
Gilles Darold, who provided an initial patch which was further reworked
by Michael Paquier. Further modifications and documentation updates
by me.
Back-patch to 9.1 where we added the concept of extension configuration
tables.
Noah Misch [Sun, 1 Mar 2015 18:05:23 +0000 (13:05 -0500)]
Unlink static libraries before rebuilding them.
When the library already exists in the build directory, "ar" preserves
members not named on its command line. This mattered when, for example,
a "configure" rerun dropped a file from $(LIBOBJS). libpgport carried
the obsolete member until "make clean". Back-patch to 9.0 (all
supported versions).
Andres Freund [Thu, 26 Feb 2015 11:50:08 +0000 (12:50 +0100)]
Reconsider when to wait for WAL flushes/syncrep during commit.
Up to now RecordTransactionCommit() waited for WAL to be flushed (if
synchronous_commit != off) and to be synchronously replicated (if
enabled), even if a transaction did not have a xid assigned. The primary
reason for that is that sequence's nextval() did not assign a xid, but
are worthwhile to wait for on commit.
This can be problematic because sometimes read only transactions do
write WAL, e.g. HOT page prune records. That then could lead to read only
transactions having to wait during commit. Not something people expect
in a read only transaction.
This led to such strange symptoms as backends being seemingly stuck
during connection establishment when all synchronous replicas are
down. Especially annoying when said stuck connection is the standby
trying to reconnect to allow syncrep again...
This behavior also is involved in a rather complicated <= 9.4 bug where
the transaction started by catchup interrupt processing waited for
syncrep using latches, but didn't get the wakeup because it was already
running inside the same overloaded signal handler. Fix the issue here
doesn't properly solve that issue, merely papers over the problems. In
9.5 catchup interrupts aren't processed out of signal handlers anymore.
To fix all this, make nextval() acquire a top level xid, and only wait for
transaction commit if a transaction both acquired a xid and emitted WAL
records. If only an xid has been assigned, we don't want to wait
uselessly just because of writes to temporary/unlogged tables; if only
WAL has been written, we don't want to wait just because of HOT prunes.
The xid assignment in nextval() is unlikely to cause overhead in
real-world workloads. For one, it only happens once every SEQ_LOG_VALS
(32) values anyway; for another, only usage of nextval() without using
the result in an insert or similar is affected.
Noah Misch [Thu, 26 Feb 2015 04:48:28 +0000 (23:48 -0500)]
Free SQLSTATE and SQLERRM no earlier than other PL/pgSQL variables.
"RETURN SQLERRM" prompted plpgsql_exec_function() to read from freed
memory. Back-patch to 9.0 (all supported versions). Little code ran
between the premature free and the read, so non-assert builds are
unlikely to witness user-visible consequences.
Tom Lane [Wed, 25 Feb 2015 17:01:12 +0000 (12:01 -0500)]
Fix dumping of views that are just VALUES(...) but have column aliases.
The "simple" path for printing VALUES clauses doesn't work if we need
to attach nondefault column aliases, because there's noplace to do that
in the minimal VALUES() syntax. So modify get_simple_values_rte() to
detect nondefault aliases and treat that as a non-simple case. This
further exposes that the "non-simple" path never actually worked;
it didn't produce valid syntax. Fix that too. Per bug #12789 from
Curtis McEnroe, and analysis by Andrew Gierth.
Back-patch to all supported branches. Before 9.3, this also requires
back-patching the part of commit 092d7ded29f36b0539046b23b81b9f0bf2d637f1
that created get_simple_values_rte() to begin with; inserting the extra
test into the old factorization of that logic would've been too messy.
Andres Freund [Mon, 23 Feb 2015 15:11:11 +0000 (16:11 +0100)]
Guard against spurious signals in LockBufferForCleanup.
When LockBufferForCleanup() has to wait to get a cleanup lock on a
buffer, it does so by setting a flag in the buffer header and then
waiting for other backends to signal it using ProcWaitForSignal().
Unfortunately LockBufferForCleanup() missed that ProcWaitForSignal() can
return for other reasons than the signal it is hoping for. If such a
spurious signal arrives the wait flags on the buffer header will still
be set. That then triggers "ERROR: multiple backends attempting to wait
for pincount 1".
The fix is simple: unset the flag if still set when retrying. That
implies an additional spinlock acquisition/release, but that's unlikely
to matter given the cost of waiting for a cleanup lock. Alternatively
it'd have been possible to move responsibility for maintaining the
relevant flag to the waiter altogether, but that might have had
negative consequences due to possible floods of signals, besides being
more invasive.
This looks to be a very longstanding bug. The relevant code in
LockBufferForCleanup() hasn't changed materially since its introduction
and ProcWaitForSignal() was documented to return for unrelated reasons
since 8.2. The master-only patch series removing ImmediateInterruptOK
made it much easier to hit, though, as ProcSendSignal/ProcWaitForSignal
now use a latch shared with other tasks.
Per discussion with Kevin Grittner, Tom Lane and me.
Fix potential deadlock with libpq non-blocking mode.
If libpq's output buffer is full, the pqSendSome() function tries to drain
any incoming data. This avoids a deadlock if the server, e.g., sends a lot
of NOTICE messages and blocks until we read them. However, pqSendSome()
only did that in blocking mode; in non-blocking mode, the deadlock could
still happen.
To fix, take a two-pronged approach:
1. Change the documentation to instruct that when PQflush() returns 1, you
should wait for both read- and write-ready, and call PQconsumeInput() if it
becomes read-ready (see the sketch after this list). That fixes the
deadlock, but applications are not going to change overnight.
2. In pqSendSome(), drain the input buffer before returning 1. This
alleviates the problem for applications that only wait for write-ready. In
particular, a slow but steady stream of NOTICE messages during COPY FROM
STDIN will no longer cause a deadlock. The risk remains that the server
attempts to send a large burst of data and fills its output buffer, and at
the same time the client also sends enough data to fill its output buffer.
The application will deadlock if it goes to sleep, waiting for the socket
to become write-ready, before the server's data arrives. In practice,
NOTICE messages and such that the server might be sending are usually
short, so it's highly unlikely that the server would fill its output buffer
so quickly.
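The loop the updated documentation asks non-blocking applications to use looks roughly like this (a select()-based sketch, not the documentation's exact wording):

    #include <sys/select.h>
    #include <libpq-fe.h>

    static int
    flush_nonblocking(PGconn *conn)
    {
        int sock = PQsocket(conn);

        for (;;)
        {
            int r = PQflush(conn);

            if (r == 0)
                return 0;           /* all data sent */
            if (r < 0)
                return -1;          /* error */

            /* r == 1: wait for read- OR write-ready, never write alone */
            fd_set rfds, wfds;

            FD_ZERO(&rfds);
            FD_ZERO(&wfds);
            FD_SET(sock, &rfds);
            FD_SET(sock, &wfds);
            if (select(sock + 1, &rfds, &wfds, NULL, NULL) < 0)
                return -1;
            if (FD_ISSET(sock, &rfds) && !PQconsumeInput(conn))
                return -1;          /* drain server data to unblock it */
        }
    }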
Tom Lane [Wed, 18 Feb 2015 16:43:00 +0000 (11:43 -0500)]
Fix failure to honor -Z compression level option in pg_dump -Fd.
cfopen() and cfopen_write() failed to pass the compression level through
to zlib, so that you always got the default compression level if you got
any at all.
In passing, also fix these and related functions so that the correct errno
is reliably returned on failure; the original coding supposes that free()
cannot change errno, which is untrue on at least some platforms.
Per bug #12779 from Christoph Berg. Back-patch to 9.1 where the faulty
code was introduced.
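The errno half of the fix is the classic save/restore pattern, shown here in miniature with illustrative names rather than the actual cfopen() code:

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    static FILE *
    open_compressed(const char *path)
    {
        char *fullpath = malloc(1024);
        FILE *fp;

        if (fullpath == NULL)
            return NULL;
        snprintf(fullpath, 1024, "%s.gz", path);

        fp = fopen(fullpath, "wb");
        if (fp == NULL)
        {
            int save_errno = errno; /* capture before free() can change it */

            free(fullpath);
            errno = save_errno;     /* caller sees fopen's errno */
            return NULL;
        }
        free(fullpath);
        return fp;
    }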
Stephen Frost [Tue, 17 Feb 2015 20:36:24 +0000 (15:36 -0500)]
Minor cleanup of column-level priv fix
Commit 9406884af19e2620a14059e64d4eb6ab430ab328 cleaned up
column-privilege related leaks in various error-message paths, but ended
up including a few more things than it should have in the back branches.
Specifically, there's no need for the GetModifiedColumns macro in
execMain.c as 9.1 and older didn't include the row in check constraint
violations. Further, the regression tests added to check those cases
aren't necessary.
This patch removes the GetModifiedColumns macro from execMain.c, removes
the comment which was added to trigger.c related to the duplicate macro
definition, and removes the check-constraint-related regression tests.
Tom Lane [Tue, 17 Feb 2015 17:49:18 +0000 (12:49 -0500)]
Remove code to match IPv4 pg_hba.conf entries to IPv4-in-IPv6 addresses.
In investigating yesterday's crash report from Hugo Osvaldo Barrera, I only
looked back as far as commit f3aec2c7f51904e7 where the breakage occurred
(which is why I thought the IPv4-in-IPv6 business was undocumented). But
actually the logic dates back to commit 3c9bb8886df7d56a and was simply
broken by erroneous refactoring in the later commit. A bit of archives
excavation shows that we added the whole business in response to a report
that some 2003-era Linux kernels would report IPv4 connections as having
IPv4-in-IPv6 addresses. The fact that we've had no complaints since 9.0
seems to be sufficient confirmation that no modern kernels do that, so
let's just rip it all out rather than trying to fix it.
Do this in the back branches too, thus essentially deciding that our
effective behavior since 9.0 is correct. If there are any platforms on
which the kernel reports IPv4-in-IPv6 addresses as such, yesterday's fix
would have made for a subtle and potentially security-sensitive change in
the effective meaning of IPv4 pg_hba.conf entries, which does not seem like
a good thing to do in minor releases. So let's let the post-9.0 behavior
stand, and change the documentation to match it.
In passing, I failed to resist the temptation to wordsmith the description
of pg_hba.conf IPv4 and IPv6 address entries a bit. A lot of this text
hasn't been touched since we were IPv4-only.
Robert Haas [Tue, 17 Feb 2015 15:19:30 +0000 (10:19 -0500)]
Improve pg_check_dir's handling of closedir() failures.
Avoid losing errno if readdir() fails and closedir() works. This also
avoids leaking the directory handle when readdir() fails. Commit
6f03927fce038096f53ca67eeab9adb24938f8a6 introduced logic to better
handle readdir() and closedir() failures, but it missed these cases.
Extracted from a larger patch by Marco Nenciarini.
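The corrected pattern, in miniature (an illustrative function, not the pg_check_dir() code):

    #include <dirent.h>
    #include <errno.h>

    static int
    scan_dir(const char *path)
    {
        DIR *dir = opendir(path);
        struct dirent *de;

        if (dir == NULL)
            return -1;

        errno = 0;
        while ((de = readdir(dir)) != NULL)
        {
            /* examine de->d_name here */
            errno = 0;
        }
        if (errno != 0)
        {
            int save_errno = errno; /* readdir() failed */

            (void) closedir(dir);   /* close anyway: no handle leak */
            errno = save_errno;     /* report readdir's errno, not closedir's */
            return -1;
        }
        return closedir(dir);       /* closedir()'s own failure still reported */
    }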
Tom Lane [Mon, 16 Feb 2015 21:17:48 +0000 (16:17 -0500)]
Fix misuse of memcpy() in check_ip().
The previous coding copied garbage into a local variable, pretty much
ensuring that the intended test of an IPv6 connection address against a
promoted IPv4 address from pg_hba.conf would never match. The lack of
field complaints likely indicates that nobody realized this was supposed
to work, which is unsurprising considering that no user-facing docs suggest
it should work.
In principle this could have led to a SIGSEGV due to reading off the end of
memory, but since the source address would have pointed to somewhere in the
function's stack frame, that's quite unlikely. What led to discovery of
the bug is Hugo Osvaldo Barrera's report of a crash after an OS upgrade,
which is probably because he is now running a system whose memcpy()
aborts upon detecting overlapping source and destination areas. (You'd
have to additionally suppose some things about the stack frame layout to
arrive at this conclusion, but it seems plausible.)
This has been broken since the code was added, in commit f3aec2c7f51904e7,
so back-patch to all supported branches.
Bruce Momjian [Thu, 12 Feb 2015 02:02:07 +0000 (21:02 -0500)]
pg_upgrade: preserve freeze info for postgres/template1 dbs
pg_database.datfrozenxid and pg_database.datminmxid were not preserved
for the 'postgres' and 'template1' databases. This could cause missing
clog file errors on access to user tables and indexes after upgrades in
these databases.
Michael Meskes [Wed, 11 Feb 2015 10:13:11 +0000 (11:13 +0100)]
Fixed array handling in ecpg.
When ecpg was rewritten for the new protocol version, not all variable types
were handled correctly. This patch rewrites the code for those types to fix
that. It also fixes the documentation to correctly describe the status of
array handling.
Tom Lane [Wed, 11 Feb 2015 03:38:26 +0000 (22:38 -0500)]
Fix pg_dump's heuristic for deciding which casts to dump.
Back in 2003 we had a discussion about how to decide which casts to dump.
At the time pg_dump really only considered an object's containing schema
to decide what to dump (ie, dump whatever's not in pg_catalog), and so
we chose a complicated idea involving whether the underlying types were to
be dumped (cf commit a6790ce85752b67ad994f55fdf1a450262ccc32e). But users
are allowed to create casts between built-in types, and we failed to dump
such casts. Let's get rid of that heuristic, which has accreted even more
ugliness since then, in favor of just looking at the cast's OID to decide
if it's a built-in cast or not.
In passing, also fix some really ancient code that supposed that it had to
manufacture a dependency for the cast on its cast function; that's only
true when dumping from a pre-7.3 server. This just resulted in some wasted
cycles and duplicate dependency-list entries with newer servers, but we
might as well improve it.
Per gripes from a number of people, most recently Greg Sabino Mullane.
Back-patch to all supported branches.
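The new heuristic reduces to an OID threshold test; in miniature (FirstNormalObjectId is PostgreSQL's first user-assignable OID, 16384, per access/transam.h):

    #include <stdbool.h>

    #define FirstNormalObjectId 16384

    /* casts with smaller OIDs were created by initdb and need not be dumped */
    static bool
    cast_needs_dump(unsigned int cast_oid)
    {
        return cast_oid >= FirstNormalObjectId;
    }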
Tom Lane [Wed, 11 Feb 2015 01:37:29 +0000 (20:37 -0500)]
Fix GEQO to not assume its join order heuristic always works.
Back in commit 400e2c934457bef4bc3cc9a3e49b6289bd761bc0 I rewrote GEQO's
gimme_tree function to improve its heuristic for modifying the given tour
into a legal join order. In what can only be called a fit of hubris,
I supposed that this new heuristic would *always* find a legal join order,
and ripped out the old logic that allowed gimme_tree to sometimes fail.
The folly of this is exposed by bug #12760, in which the "greedy" clumping
behavior of merge_clump() can lead it into a dead end which could only be
recovered from by un-clumping. We have no code for that and wouldn't know
exactly what to do with it if we did. Rather than try to improve the
heuristic rules still further, let's just recognize that it *is* a
heuristic and probably must always have failure cases. So, put back the
code removed in the previous commit to allow for failure (but comment it
a bit better this time).
It's possible that this code was actually fully correct at the time and
has only been broken by the introduction of LATERAL. But having seen this
example I no longer have much faith in that proposition, so back-patch to
all supported branches.
Report WAL flush, not insert, position in replication IDENTIFY_SYSTEM
When beginning streaming replication, the client usually issues the
IDENTIFY_SYSTEM command, which used to return the current WAL insert
position. That's not suitable for the intended purpose of that field,
however. pg_receivexlog uses it to start replication from the reported
point, but if it hasn't been flushed to disk yet, it will fail. Change
IDENTIFY_SYSTEM to report the flush position instead.
Backpatch to 9.1 and above. 9.0 doesn't report any WAL position.
Be more careful to not lose sync in the FE/BE protocol.
If any error occurred while we were in the middle of reading a protocol
message from the client, we could lose sync, and incorrectly try to
interpret a part of another message as a new protocol message. That will
usually lead to an "invalid frontend message" error that terminates the
connection. However, this is a security issue because an attacker might
be able to deliberately cause an error, inject a Query message in what's
supposed to be just user data, and have the server execute it.
We were quite careful to not have CHECK_FOR_INTERRUPTS() calls or other
operations that could ereport(ERROR) in the middle of processing a message,
but a query cancel interrupt or statement timeout could nevertheless cause
it to happen. Also, the V2 fastpath and COPY handling were not so careful.
It's very difficult to recover in the V2 COPY protocol, so we will just
terminate the connection on error. In practice, that's what happened
previously anyway, as we lost protocol sync.
To fix, add a new variable in pqcomm.c, PqCommReadingMsg, that is set
whenever we're in the middle of reading a message. When it's set, we cannot
safely ERROR out and continue running, because we might've read only part
of a message. PqCommReadingMsg acts somewhat similarly to critical sections
in that if an error occurs while it's set, the error handler will force the
connection to be terminated, as if the error was FATAL. It's not
implemented by promoting ERROR to FATAL in elog.c, like ERROR is promoted
to PANIC in critical sections, because we want to be able to use
PG_TRY/CATCH to recover and regain protocol sync. pq_getmessage() takes
advantage of that to prevent an OOM error from terminating the connection.
To prevent unnecessary connection terminations, add a holdoff mechanism
similar to HOLD/RESUME_INTERRUPTS() that can be used to hold off query cancel
interrupts, but still allow die interrupts. The rules on which interrupts
are processed when are now a bit more complicated, so refactor
ProcessInterrupts() and the calls to it in signal handlers so that the
signal handlers always call it if ImmediateInterruptOK is set, and
ProcessInterrupts() can decide to not do anything if the other conditions
are not met.
Reported by Emil Lenngren. Patch reviewed by Noah Misch and Andres Freund.
Backpatch to all supported versions.
Noah Misch [Mon, 2 Feb 2015 15:00:45 +0000 (10:00 -0500)]
Cherry-pick security-relevant fixes from upstream imath library.
This covers alterations to buffer sizing and zeroing made between imath
1.3 and imath 1.20. Valgrind Memcheck identified the buffer overruns
and reliance on uninitialized data; their exploit potential is unknown.
Builds specifying --with-openssl are unaffected, because they use the
OpenSSL BIGNUM facility instead of imath. Back-patch to 9.0 (all
supported versions).
Noah Misch [Mon, 2 Feb 2015 15:00:45 +0000 (10:00 -0500)]
Fix buffer overrun after incomplete read in pullf_read_max().
Most callers pass a stack buffer. The ensuing stack smash can crash the
server, and we have not ruled out the viability of attacks that lead to
privilege escalation. Back-patch to 9.0 (all supported versions).
Bruce Momjian [Mon, 2 Feb 2015 15:00:45 +0000 (10:00 -0500)]
port/snprintf(): fix overflow and do padding
Prevent port/snprintf() from overflowing its local fixed-size
buffer and pad to the desired number of digits with zeros, even
if the precision is beyond the ability of the native sprintf().
port/snprintf() is only used on systems that lack a native
snprintf().
Reported by Bruce Momjian. Patch by Tom Lane. Backpatch to all
supported versions.
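The guaranteed behavior, expressed with standard C semantics (a self-contained example, not the port/snprintf.c code):

    #include <stdio.h>

    int
    main(void)
    {
        char buf[64];

        /* integer precision pads with leading zeros, even when it exceeds
         * what the native sprintf() might handle */
        snprintf(buf, sizeof(buf), "%.10d", 42);
        printf("%s\n", buf);    /* prints "0000000042" */
        return 0;
    }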
doc: Improve claim about location of pg_service.conf
The previous wording claimed that the file was always in /etc, but of
course this varies with the installation layout. Write instead that it
can be found via `pg_config --sysconfdir`. Even though this is still
somewhat incorrect because it doesn't account for relocated installations,
it at least conveys that the location depends on the installation.
Tom Lane [Sat, 31 Jan 2015 23:35:31 +0000 (18:35 -0500)]
Fix documentation of psql's ECHO all mode.
"ECHO all" is ignored for interactive input, and has been for a very long
time, though possibly not for as long as the documentation has claimed the
opposite. Fix that, and also note that empty lines aren't echoed, which
while dubious is another longstanding behavior (it's embedded in our
regression test files for one thing). Per bug #12721 from Hans Ginzel.
In HEAD, also improve the code comments in this area, and suppress an
unnecessary fflush(stdout) when we're not echoing. That would likely
be safe to back-patch, but I'll not risk it mere hours before a release
wrap.
Tom Lane [Fri, 30 Jan 2015 18:05:07 +0000 (13:05 -0500)]
Fix Coverity warning about contrib/pgcrypto's mdc_finish().
Coverity points out that mdc_finish returns a pointer to a local buffer
(which of course is gone as soon as the function returns), leaving open
a risk of misbehaviors possibly as bad as a stack overwrite.
In reality, the only possible call site is in process_data_packets()
which does not examine the returned pointer at all. So there's no
live bug, but nonetheless the code is confusing and risky. Refactor
to avoid the issue by letting process_data_packets() call mdc_finish()
directly instead of going through the pullf_read() API.
Although this is only cosmetic, it seems good to back-patch so that
the logic in pgp-decrypt.c stays in sync across all branches.
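The bug class Coverity flagged, in miniature (generic C, not the pgcrypto code):

    #include <string.h>

    /* WRONG: returns a pointer to storage that dies with the function. */
    char *
    broken_finish(void)
    {
        char buf[20];

        memset(buf, 0, sizeof(buf));
        return buf;             /* dangling as soon as we return */
    }

    /* Safe: the caller owns the storage. */
    void
    fixed_finish(char *out, size_t outlen)
    {
        memset(out, 0, outlen);
    }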
Stephen Frost [Fri, 30 Jan 2015 03:00:03 +0000 (22:00 -0500)]
Fix BuildIndexValueDescription for expressions
In 804b6b6db4dcfc590a468e7be390738f9f7755fb we modified
BuildIndexValueDescription to pay attention to which columns are visible
to the user, but unfortunately that commit neglected to consider indexes
which are built on expressions.
Handle error-reporting of violations of constraint indexes based on
expressions by not returning any detail when the user does not have
table-level SELECT rights.
Tom Lane [Fri, 30 Jan 2015 01:18:44 +0000 (20:18 -0500)]
Handle unexpected query results, especially NULLs, safely in connectby().
connectby() didn't adequately check that the constructed SQL query returns
what it's expected to; in fact, since commit 08c33c426bfebb32 it wasn't
checking that at all. This could result in a null-pointer-dereference
crash if the constructed query returns only one column instead of the
expected two. Less excitingly, it could also result in surprising data
conversion failures if the constructed query returned values that were
not I/O-conversion-compatible with the types specified by the query
calling connectby().
In all branches, insist that the query return at least two columns;
this seems like a minimal sanity check that can't break any reasonable
use-cases.
In HEAD, insist that the constructed query return the types specified by
the outer query, including checking for typmod incompatibility, which the
code never did even before it got broken. This is to hide the fact that
the implementation does a conversion to text and back; someday we might
want to improve that.
In back branches, leave that alone, since adding a type check in a minor
release is more likely to break things than make people happy. Type
inconsistencies will continue to work so long as the actual type and
declared type are I/O representation compatible, and otherwise will fail
the same way they used to.
Also, in all branches, be on guard for NULL results from the constructed
query, which formerly would cause null-pointer dereference crashes.
We now print the row with the NULL but don't recurse down from it.
In passing, get rid of the rather pointless idea that
build_tuplestore_recursively() should return the same tuplestore that's
passed to it.
Andres Freund [Thu, 29 Jan 2015 16:49:03 +0000 (17:49 +0100)]
Properly terminate the array returned by GetLockConflicts().
GetLockConflicts() has for a long time not properly terminated the
returned array. During normal processing the returned array is zero
initialized which, while not pretty, is sufficient to be recognized as
an invalid virtual transaction id. But the HotStandby case is more than
aesthetically broken: The allocated (and reused) array is neither
zeroed upon allocation, nor reinitialized, nor terminated.
Not having a terminating element means that the end of the array will
not be recognized and that recovery conflict handling will thus read
ahead into adjacent memory, terminating only when it hits memory content
that looks like an invalid virtual transaction id. Luckily this seems so
far not to have caused significant problems, besides making recovery
conflict handling more expensive.
Fix bug where GIN scan keys were not initialized with gin_fuzzy_search_limit.
When gin_fuzzy_search_limit was used, we could jump out of startScan()
without calling startScanKey(). That was harmless in 9.3 and below, because
startScanKey() didn't do anything interesting, but in 9.4 it initializes
information needed for skipping entries (aka GIN fast scans), and you
readily get a segfault if it's not done. Nevertheless, it was clearly wrong
all along, so backpatch all the way to 9.1 where the early return was
introduced.
(AFAICS startScanKey() did nothing useful in 9.3 and below, because the
fields it initialized were already initialized in ginFillScanKey(), but I
don't dare to change that in a minor release. ginFillScanKey() is always
called in gingetbitmap() even though there's a check there to see if the
scan keys have already been initialized, because they never are; ginrescan()
frees them.)
In passing, remove an unnecessary if-check from the second inner loop in
startScan(). We already check in the first loop that the condition is true
for all entries.
Reported by Olaf Gawenda, bug #12694. Backpatch to 9.1 and above, although
AFAICS it causes a live bug only in 9.4.
Stephen Frost [Wed, 28 Jan 2015 22:43:22 +0000 (17:43 -0500)]
Clean up range-table building in copy.c
Commit 804b6b6db4dcfc590a468e7be390738f9f7755fb added the build of a
range table in copy.c to initialize the EState es_range_table since it
can be needed in error paths. Unfortunately, that commit didn't
appreciate that some code paths might end up not initializing the rte
which is used to build the range table.
Fix that and clean up a couple of other things along the way: build it
only once, and don't explicitly set it on the !is_from path, as it
doesn't make any sense there (cstate is palloc0'd, so this isn't an
issue from an initialization standpoint either).
The prior commit went back to 9.0, but this only goes back to 9.1 as
prior to that the range table build happens immediately after building
the RTE and therefore doesn't suffer from this issue.
Stephen Frost [Mon, 12 Jan 2015 22:04:11 +0000 (17:04 -0500)]
Fix column-privilege leak in error-message paths
While building error messages to return to the user,
BuildIndexValueDescription and ri_ReportViolation would happily include
the entire key or entire row in the result returned to the user, even if
the user didn't have access to view all of the columns being included.
Instead, include only those columns which the user is providing or which
the user has select rights on. If the user does not have any rights
to view the table or any of the columns involved then no detail is
provided and a NULL value is returned from BuildIndexValueDescription.
Note that, for key cases, the user must have access to all of the
columns for the key to be shown; a partial key will not be returned.
Back-patch all the way, as column-level privileges are now in all
supported versions.
This has been assigned CVE-2014-8161, but since the issue and the patch
have already been publicized on pgsql-hackers, there's no point in trying
to hide this commit.
Tom Lane [Tue, 27 Jan 2015 17:06:43 +0000 (12:06 -0500)]
Fix NUMERIC field access macros to treat NaNs consistently.
Commit 145343534c153d1e6c3cff1fa1855787684d9a38 arranged to store numeric
NaN values as short-header numerics, but the field access macros did not
get the memo: they thought only "SHORT" numerics have short headers.
Most of the time this makes no difference because we don't access the
weight or dscale of a NaN; but numeric_send does that. As pointed out
by Andrew Gierth, this led to fetching uninitialized bytes.
AFAICS this could not have any worse consequences than that; in particular,
an unaligned stored numeric would have been detoasted by PG_GETARG_NUMERIC,
so that there's no risk of a fetch off the end of memory. Still, the code
is wrong on its own terms, and it's not hard to foresee future changes that
might expose us to real risks. So back-patch to all affected branches.
Tom Lane [Mon, 26 Jan 2015 17:18:25 +0000 (12:18 -0500)]
Fix volatile-safety issue in pltcl_SPI_execute_plan().
The "callargs" variable is modified within PG_TRY and then referenced
within PG_CATCH, which is exactly the coding pattern we've now found
to be unsafe. Marking "callargs" volatile would be problematic because
it is passed by reference to some Tcl functions, so fix the problem
by not modifying it within PG_TRY. We can just postpone the free()
till we exit the PG_TRY construct, as is already done elsewhere in this
same file.
Also, fix failure to free(callargs) when exiting on too-many-arguments
error. This is only a minor memory leak, but a leak nonetheless.
In passing, remove some unnecessary "volatile" markings in the same
function. Those doubtless are there because gcc 2.95.3 whinged about
them, but we now know that its algorithm for complaining is many bricks
shy of a load.
This is certainly a live bug with compilers that optimize similarly
to current gcc, so back-patch to all active branches.
Tom Lane [Mon, 26 Jan 2015 16:57:44 +0000 (11:57 -0500)]
Fix volatile-safety issue in asyncQueueReadAllNotifications().
The "pos" variable is modified within PG_TRY and then referenced
within PG_CATCH, so for strict POSIX conformance it must be marked
volatile. Superficially the code looked safe because pos's address
was taken, which was sufficient to force it into memory ... but it's
not sufficient to ensure that the compiler applies updates exactly
where the program text says to. The volatility marking has to extend
into a couple of subroutines too, but I think that's probably a good
thing because the risk of out-of-order updates is mostly in those
subroutines, not in asyncQueueReadAllNotifications() itself. In principle
the compiler could have re-ordered operations such that an error could
be thrown while "pos" had an incorrect value.
It's unclear how real the risk is here, but for safety back-patch
to all active branches.
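The underlying C rule is the same one that governs setjmp()/longjmp(): a local modified after setjmp() and read after longjmp() must be volatile, or it may live in a register the longjmp() doesn't restore. A generic analogue (not the async.c code):

    #include <setjmp.h>

    static jmp_buf env;

    int
    example(void)
    {
        volatile int pos = 0;   /* "volatile" is the essential fix */

        if (setjmp(env) == 0)
        {
            pos = 42;           /* modified inside the "try" block */
            /* code here may longjmp(env, 1) on error */
        }
        else
        {
            /* "catch" block: without volatile, pos could read as 0 */
            return pos;
        }
        return pos;
    }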
Tom Lane [Sat, 24 Jan 2015 18:05:56 +0000 (13:05 -0500)]
Replace a bunch more uses of strncpy() with safer coding.
strncpy() has a well-deserved reputation for being unsafe, so make an
effort to get rid of nearly all occurrences in HEAD.
A large fraction of the remaining uses were passing length less than or
equal to the known strlen() of the source, in which case no null-padding
can occur and the behavior is equivalent to memcpy(), though doubtless
slower and certainly harder to reason about. So just use memcpy() in
these cases.
In other cases, use either StrNCpy() or strlcpy() as appropriate (depending
on whether padding to the full length of the destination buffer seems
useful).
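Illustrative before/after for those two cases (strlcpy() is the BSD function, also provided by PostgreSQL's src/port, not standard C):

    #include <string.h>

    void
    copy_examples(char *dst, size_t dstlen, const char *src)
    {
        /*
         * Case 1: the copy length is known to be <= strlen(src).  strncpy()
         * never null-pads in that case, so it was just a slower, harder to
         * reason about memcpy().
         */
        memcpy(dst, src, dstlen);

        /*
         * Case 2: we want a null-terminated copy without padding the whole
         * destination buffer: strlcpy() truncates safely at dstlen - 1.
         */
        strlcpy(dst, src, dstlen);
    }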
I left a few strncpy() calls alone in the src/timezone/ code, to keep it
in sync with upstream (the IANA tzcode distribution). There are also a
few such calls in ecpg that could possibly do with more analysis.
AFAICT, none of these changes are more than cosmetic, except for the four
occurrences in fe-secure-openssl.c, which are in fact buggy: an overlength
source leads to a non-null-terminated destination buffer and ensuing
misbehavior. These don't seem like security issues, first because no stack
clobber is possible and second because if your values of sslcert etc are
coming from untrusted sources then you've got problems way worse than this.
Still, it's undesirable to have unpredictable behavior for overlength
inputs, so back-patch those four changes to all active branches.
Tom Lane [Wed, 21 Jan 2015 02:21:44 +0000 (21:21 -0500)]
Improve documentation of random() function.
Move random() and setseed() to a separate table, to have them grouped
together. Also add a notice that random() is not cryptographically secure.
Back-patch of commit 75fdcec14543b60cc0c67483d8cc47d5c7adf1a8 into
all supported versions, per discussion of the need to document that
random() is just a wrapper around random(3).
Tom Lane [Tue, 20 Jan 2015 04:44:30 +0000 (23:44 -0500)]
In pg_regress, remove the temporary installation upon successful exit.
This results in a very substantial reduction in disk space usage during
"make check-world", since that sequence involves creation of numerous
temporary installations. It should also help a bit in the buildfarm, even
though the buildfarm script doesn't create as many temp installations,
because the current script misses deleting some of them; and anyway it
seems better to do this once in one place rather than expecting that
script to get it right every time.
In 9.4 and HEAD, also undo the unwise choice in commit b1aebbb6a86e96d7
to report strerror(errno) after a rmtree() failure. rmtree has already
reported that, possibly for multiple failures with distinct errnos; and
what's more, by the time it returns there is no good reason to assume
that errno still reflects the last reportable error. So reporting errno
here is at best redundant and at worst badly misleading.
Back-patch to all supported branches, so that future revisions of the
buildfarm script can rely on this behavior.
Tom Lane [Tue, 20 Jan 2015 04:01:44 +0000 (23:01 -0500)]
Adjust "pgstat wait timeout" message to be a translatable LOG message.
Per discussion, change the log level of this message to be LOG not WARNING.
The main point of this change is to avoid causing buildfarm run failures
when the stats collector is exceptionally slow to respond, which it not
infrequently is on some of the smaller/slower buildfarm members.
This change does lose notice to an interactive user when his stats query
is looking at out-of-date stats, but the majority opinion (not necessarily
that of yours truly) is that WARNING messages would probably not get
noticed anyway on heavily loaded production systems. A LOG message at
least ensures that the problem is recorded somewhere where bulk auditing
for the issue is possible.
Also, instead of an untranslated "pgstat wait timeout" message, provide
a translatable and hopefully more understandable message "using stale
statistics instead of current ones because stats collector is not
responding". The original text was written hastily under the assumption
that it would never really happen in practice, which we now know to be
unduly optimistic.
Back-patch to all active branches, since we've seen the buildfarm issue
in all branches.
Previously, the xml value resulting from an xpath query would not have
namespace declarations if the namespace declarations were attached to
an ancestor element in the input xml value. That means the output value
was not correct XML. Fix that by running the result value through
xmlCopyNode(), which produces the correct namespace declarations.
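In miniature, the fix's core is a recursive node copy (an illustrative wrapper; the real change is inside the xpath code):

    #include <libxml/tree.h>

    /* copying the node (second argument 1 = recursive) materializes
     * namespace declarations inherited from ancestors, so the result is
     * self-contained, well-formed XML */
    static xmlNodePtr
    copy_with_namespaces(xmlNodePtr result_node)
    {
        return xmlCopyNode(result_node, 1);
    }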
Noah Misch [Fri, 16 Jan 2015 06:27:31 +0000 (01:27 -0500)]
Update "pg_regress --no-locale" for Darwin and Windows.
Commit 894459e59ffa5c7fee297b246c17e1f72564db1d revealed this option to
be broken for NLS builds on Darwin, but "make -C contrib/unaccent check"
and the buildfarm client rely on it. Fix that configuration by
redefining the option to imply LANG=C on Darwin. In passing, use LANG=C
instead of LANG=en on Windows; since only postmaster startup uses that
value, testers are unlikely to notice the change. Back-patch to 9.0,
like the predecessor commit.
Tom Lane [Thu, 15 Jan 2015 23:52:34 +0000 (18:52 -0500)]
Fix use-of-already-freed-memory problem in EvalPlanQual processing.
Up to now, the "child" executor state trees generated for EvalPlanQual
rechecks have simply shared the ResultRelInfo arrays used for the original
execution tree. However, this leads to dangling-pointer problems, because
ExecInitModifyTable() is all too willing to scribble on some fields of the
ResultRelInfo(s) even when it's being run in one of those child trees.
This trashes those fields from the perspective of the parent tree, because
even if the generated subtree is logically identical to what was in use in
the parent, it's in a memory context that will go away when we're done
with the child state tree.
We do however want to share information in the direction from the parent
down to the children; in particular, fields such as es_instrument *must*
be shared or we'll lose the stats arising from execution of the children.
So the simplest fix is to make a copy of the parent's ResultRelInfo array,
but not copy any fields back at end of child execution.
Per report from Manuel Kniep. The added isolation test is based on his
example. In an unpatched memory-clobber-enabled build it will reliably
fail with "ctid is NULL" errors in all branches back to 9.1, as a
consequence of junkfilter->jf_junkAttNo being overwritten with $7f7f.
This test cannot be run as-is before that for lack of WITH syntax; but
I have no doubt that some variant of this problem can arise in older
branches, so apply the code change all the way back.
Robert Haas [Thu, 15 Jan 2015 14:26:03 +0000 (09:26 -0500)]
pg_standby: Avoid writing one byte beyond the end of the buffer.
Previously, read() might have returned a length equal to the buffer
length, and then the subsequent store to buf[len] would write a
zero-byte one byte past the end. This doesn't seem likely to be
a security issue, but there's some chance it could result in
pg_standby misbehaving.
Spotted by Coverity; patch by Michael Paquier, reviewed by me.
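The bug class and fix, in miniature (a generic sketch, not pg_standby's actual code):

    #include <unistd.h>

    static ssize_t
    read_with_terminator(int fd, char *buf, size_t buflen)
    {
        /* read at most buflen - 1 bytes; with a read of buflen, a full
         * buffer made the store to buf[len] land one byte past the end */
        ssize_t len = read(fd, buf, buflen - 1);

        if (len >= 0)
            buf[len] = '\0';    /* now always in bounds */
        return len;
    }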
Andres Freund [Tue, 13 Jan 2015 20:02:47 +0000 (21:02 +0100)]
Make logging_collector=on work with non-windows EXEC_BACKEND again.
Commit b94ce6e80 reordered postmaster's startup sequence so that the
tempfile directory is only cleaned up after all the necessary state
for pg_ctl is collected. Unfortunately the chosen location is after
the syslogger has been started; which normally is fine, except for
!WIN32 EXEC_BACKEND builds, which pass information to children via
files in the temp directory.
Move the call to RemovePgTempFiles() to just before the syslogger is
started. That's the first child we fork.
Luckily EXEC_BACKEND is pretty much only used by end users on Windows,
which has a separate method to pass information to children. That
means the real-world impact of this bug is very small.
Tom Lane [Mon, 12 Jan 2015 20:13:41 +0000 (15:13 -0500)]
Avoid unexpected slowdown in vacuum regression test.
I noticed the "vacuum" regression test taking really significantly longer
than it used to on a slow machine. Investigation pointed the finger at
commit e415b469b33ba328765e39fd62edcd28f30d9c3c, which added creation of
an index using an extremely expensive index function. That function was
evidently meant to be applied only twice ... but the test re-used an
existing test table, which up till a couple lines before that had had over
two thousand rows. Depending on timing of the concurrent regression tests,
the intervening VACUUMs might have been unable to remove those
recently-dead rows, and then the index build would need to create index
entries for them too, leading to the wrap_do_analyze() function being
executed 2000+ times not twice. Avoid this by using a different table
that is guaranteed to have only the intended two rows in it.
Back-patch to 9.0, like the commit that created the problem.
Noah Misch [Thu, 8 Jan 2015 03:35:44 +0000 (22:35 -0500)]
On Darwin, detect and report a multithreaded postmaster.
Darwin --enable-nls builds use a substitute setlocale() that may start a
thread. Buildfarm member orangutan experienced BackendList corruption
on account of different postmaster threads executing signal handlers
simultaneously. Furthermore, a multithreaded postmaster risks undefined
behavior from sigprocmask() and fork(). Emit LOG messages about the
problem and its workaround. Back-patch to 9.0 (all supported versions).
Noah Misch [Thu, 8 Jan 2015 03:34:57 +0000 (22:34 -0500)]
Always set the six locale category environment variables in main().
Typical server invocations already achieved that. Invalid locale
settings in the initial postmaster environment interfered, as could
malloc() failure. Setting "LC_MESSAGES=pt_BR.utf8 LC_ALL=invalid" in
the postmaster environment will now choose C-locale messages, not
Brazilian Portuguese messages. Most localized programs, including all
PostgreSQL frontend executables, do likewise. Users are unlikely to
observe changes involving locale categories other than LC_MESSAGES.
CheckMyDatabase() ensures that we successfully set LC_COLLATE and
LC_CTYPE; main() sets the remaining three categories to locale "C",
which almost cannot fail. Back-patch to 9.0 (all supported versions).
Noah Misch [Thu, 8 Jan 2015 03:33:58 +0000 (22:33 -0500)]
Reject ANALYZE commands during VACUUM FULL or another ANALYZE.
vacuum()'s static variable handling makes it non-reentrant; an ensuing
null pointer dereference crashed the backend. Back-patch to 9.0 (all
supported versions).
Andres Freund [Tue, 6 Jan 2015 23:10:18 +0000 (00:10 +0100)]
Improve relcache invalidation handling of currently invisible relations.
The corner case where a relcache invalidation tried to rebuild the
entry for a referenced relation but couldn't find it in the catalog
wasn't handled correctly.
The code tried to RelationCacheDelete/RelationDestroyRelation the
entry. That didn't work when assertions are enabled because the latter
contains an assertion ensuring the refcount is zero. It's also more
generally a bad idea, because by virtue of being referenced somebody
might actually look at the entry, which is possible if the error is
trapped and handled via a subtransaction abort.
Instead just error out, without deleting the entry. As the entry is
marked invalid, the worst that can happen is that the invalid (and at
some point unused) entry lingers in the relcache.
There should be no way to hit this case before 9.4, where logical
decoding introduced a bug that can hit it. But since the code for
handling the corner case is there, it should do something halfway
sane, so backpatch all the way back. The logical decoding bug will be
handled in a separate commit.
Andres Freund [Sat, 3 Jan 2015 19:51:52 +0000 (20:51 +0100)]
Prevent WAL files created by pg_basebackup -x/X from being archived again.
WAL (and timeline history) files created by pg_basebackup did not
maintain the new base backup's archive status. That's currently not a
problem if the new node is used as a standby - but if that node is
promoted all still existing files can get archived again. With a high
wal_keep_segment settings that can happen a significant time later -
which is quite confusing.
Change both the backend (for the -x/-X fetch case) and pg_basebackup
(for -X stream) itself to always mark WAL/timeline files included in
the base backup as .done. That's in line with walreceiver.c doing so.
The verbosity of the pg_basebackup changes shows pretty clearly that it
needs some refactoring, but that'd result in changes that are not
backpatchable.
Backpatch to 9.1 where pg_basebackup was introduced.
Tom Lane [Wed, 31 Dec 2014 21:42:54 +0000 (16:42 -0500)]
Docs: improve descriptions of ISO week-numbering date features.
Use the phraseology "ISO 8601 week-numbering year" in place of just
"ISO year", and make related adjustments to other terminology.
The point of this change is that it seems some people see "ISO year"
and think "standard year", whereupon they're surprised when constructs
like to_char(..., 'IYYY-MM-DD') produce nonsensical results. Perhaps
hanging a few more adjectives on it will discourage them from jumping
to false conclusions. I put in an explicit warning against that
specific usage, too, though the main point is to discourage people
who haven't read this far down the page.
In passing fix some nearby markup and terminology inconsistencies.
Tom Lane [Wed, 31 Dec 2014 17:17:08 +0000 (12:17 -0500)]
Improve consistency of parsing of psql's magic variables.
For simple boolean variables such as ON_ERROR_STOP, psql has for a long
time recognized variant spellings of "on" and "off" (such as "1"/"0"),
and it also made a point of warning you if you'd misspelled the setting.
But these conveniences did not exist for other keyword-valued variables.
In particular, though ECHO_HIDDEN and ON_ERROR_ROLLBACK include "on" and
"off" as possible values, none of the alternative spellings for those were
recognized; and to make matters worse the code would just silently assume
"on" was meant for any unrecognized spelling. Several people have reported
getting bitten by this, so let's fix it. In detail, this patch:
* Allows all spellings recognized by ParseVariableBool() for ECHO_HIDDEN
and ON_ERROR_ROLLBACK.
* Reports a warning for unrecognized values for COMP_KEYWORD_CASE, ECHO,
ECHO_HIDDEN, HISTCONTROL, ON_ERROR_ROLLBACK, and VERBOSITY.
* Recognizes all values for all these variables case-insensitively;
previously there was a mishmash of case-sensitive and case-insensitive
behaviors.
Back-patch to all supported branches. There is a small risk of breaking
existing scripts that were accidentally failing to malfunction; but the
consensus is that the chance of detecting real problems and preventing
future mistakes outweighs this.
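For flavor, a standalone sketch of case-insensitive boolean parsing
in the same spirit (the spelling set ParseVariableBool actually
accepts may differ; strcasecmp stands in for the backend's
pg_strcasecmp):

    #include <stdbool.h>
    #include <strings.h>

    /* Map common boolean spellings, case-insensitively.  Returns false
     * for unrecognized input so the caller can warn instead of
     * silently assuming "on". */
    static bool
    parse_bool_keyword(const char *value, bool *result)
    {
        if (strcasecmp(value, "on") == 0 || strcasecmp(value, "true") == 0 ||
            strcasecmp(value, "yes") == 0 || strcasecmp(value, "1") == 0)
            *result = true;
        else if (strcasecmp(value, "off") == 0 || strcasecmp(value, "false") == 0 ||
                 strcasecmp(value, "no") == 0 || strcasecmp(value, "0") == 0)
            *result = false;
        else
            return false;
        return true;
    }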
Tom Lane [Mon, 29 Dec 2014 19:21:06 +0000 (14:21 -0500)]
Assorted minor fixes for psql metacommand docs.
Document the long forms of \H \i \ir \o \p \r \w ... apparently, we have
a long and dishonorable history of leaving out the unabbreviated names of
psql backslash commands.
Avoid saying "Unix shell"; we can just say "shell" with equal clarity,
and not leave Windows users wondering whether the feature works for them.
Improve consistency of documentation of \g \o \w metacommands. There's
no reason to use slightly different wording or markup for each one.
Noah Misch [Thu, 25 Dec 2014 18:52:03 +0000 (13:52 -0500)]
Have config_sspi_auth() permit IPv6 localhost connections.
Windows versions later than Windows Server 2003 map "localhost" to ::1.
Account for that in the generated pg_hba.conf, fixing another oversight
in commit f6dc6dd5ba54d52c0733aaafc50da2fbaeabb8b0. Back-patch to 9.0,
like that commit.
Tom Lane [Wed, 24 Dec 2014 21:35:23 +0000 (16:35 -0500)]
Add CST (China Standard Time) to our lists of timezone abbreviations.
For some reason this seems to have been missed when the lists in
src/timezone/tznames/ were first constructed. We can't put it in Default
because of the conflict with US CST, but we should certainly list it among
the alternative entries in Asia.txt. (I checked for other oversights, but
all the other abbreviations that are in current use according to the IANA
files seem to be accounted for.) Noted while responding to bug #12326.
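For reference, tznames entries pair an abbreviation with its UTC
offset in seconds; the Asia.txt addition would look something like
this (line shown is illustrative, UTC+8 = 28800 seconds):

    CST    28800    # China Standard Time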
Tom Lane [Sun, 21 Dec 2014 20:30:39 +0000 (15:30 -0500)]
Docs: clarify treatment of variadic functions with zero variadic arguments.
Explain that you have to use "VARIADIC ARRAY[]" to pass an empty array
to a variadic parameter position. This was already implicit in the text
but it seems better to spell it out.
Per a suggestion from David Johnston, though I didn't use his proposed
wording. Back-patch to all supported branches.
Andres Freund [Fri, 19 Dec 2014 13:29:52 +0000 (14:29 +0100)]
Prevent potentially hazardous compiler/cpu reordering during lwlock release.
In LWLockRelease() (and in 9.4+ LWLockUpdateVar()) we release enqueued
waiters using PGSemaphoreUnlock(). As there are other sources of such
unlocks, backends only consider themselves woken up once
MyProc->lwWaiting is set to false, which is done only in the
aforementioned functions.
Before this commit there was a danger that the store to lwWaiting
could become visible before the store to lwWaitLink. This could happen
both due to compiler reordering (on most compilers) and, on some
platforms, due to the CPU reordering stores.
The possible consequence of this is that a backend stops waiting
before lwWaitLink is set to NULL. If that backend then tries to
acquire another lock and has to wait there the list could become
corrupted once the lwWaitLink store is finally performed.
Add a write memory barrier to prevent that issue.
Unfortunately, barrier support was only added in 9.2. Given that the
issue has not knowingly been observed in practice, it seems sufficient
to prohibit compiler reordering using volatile for 9.0 and 9.1. Actual
problems due to compiler reordering are more likely anyway.
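The 9.2+ release path, sketched with the added barrier (a fragment
using backend types; head is the local wait-list head):

    PGPROC *proc = head;            /* first enqueued waiter */

    head = proc->lwWaitLink;        /* unlink it from the wait list */
    proc->lwWaitLink = NULL;

    pg_write_barrier();             /* make the stores above visible
                                     * before lwWaiting is cleared */

    proc->lwWaiting = false;        /* waiter may now stop waiting */
    PGSemaphoreUnlock(&proc->sem);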
Tom Lane [Thu, 18 Dec 2014 21:39:04 +0000 (16:39 -0500)]
Improve documentation about CASE and constant subexpressions.
The possibility that constant subexpressions of a CASE might be evaluated
at planning time was touched on in 9.17.1 (CASE expressions), but it really
ought to be explained in 4.2.14 (Expression Evaluation Rules) which is the
primary discussion of such topics. Add text and an example there, and
revise the <note> under CASE to link there.
Back-patch to all supported branches, since it's acted like this for a
long time (though 9.2+ is probably worse because of its more aggressive
use of constant-folding via replanning of nominally-prepared statements).
Pre-9.4, also back-patch text added in commit 0ce627d4 about CASE versus
aggregate functions.
Tom Lane and David Johnston, per discussion of bug #12273.
Noah Misch [Thu, 18 Dec 2014 08:55:17 +0000 (03:55 -0500)]
Recognize Makefile line continuations in fetchRegressOpts().
Back-patch to 9.0 (all supported versions). This is mere
future-proofing in the context of the master branch, but commit
f6dc6dd5ba54d52c0733aaafc50da2fbaeabb8b0 requires it of older branches.
Noah Misch [Thu, 18 Dec 2014 03:48:40 +0000 (22:48 -0500)]
Lock down regression testing temporary clusters on Windows.
Use SSPI authentication to allow connections exclusively from the OS
user that launched the test suite. This closes on Windows the
vulnerability that commit be76a6d39e2832d4b88c0e1cc381aa44a7f86881
closed on other platforms. Users of "make installcheck" or custom test
harnesses can run "pg_regress --config-auth=DATADIR" to activate the
same authentication configuration that "make check" would use.
Back-patch to 9.0 (all supported versions).
Tom Lane [Tue, 16 Dec 2014 20:35:46 +0000 (15:35 -0500)]
Fix off-by-one loop count in MapArrayTypeName, and get rid of static array.
MapArrayTypeName would copy up to NAMEDATALEN-1 bytes of the base type
name, which of course is wrong: after prepending '_' there is only room for
NAMEDATALEN-2 bytes. Aside from being the wrong result, this case would
lead to overrunning the statically allocated work buffer. This would be a
security bug if the function were ever used outside bootstrap mode, but it
isn't, at least not in any currently supported branches.
Aside from fixing the off-by-one loop logic, this patch gets rid of the
static work buffer by having MapArrayTypeName pstrdup its result; the sole
caller was already doing that, so this just requires moving the pstrdup
call. This saves a few bytes but mainly it makes the API a lot cleaner.
Back-patch on the off chance that there is some third-party code using
MapArrayTypeName with less-secure input. Pushing pstrdup into the function
should not cause any serious problems for such hypothetical code; at worst
there might be a short term memory leak.
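The fixed logic, sketched standalone (NAMEDATALEN and strdup stand in
for the backend's definitions and pstrdup):

    #include <stdlib.h>
    #include <string.h>

    #define NAMEDATALEN 64          /* the backend's default */

    /* Prepend '_' to a type name.  After the '_' and the terminating
     * NUL, only NAMEDATALEN - 2 bytes of the base name fit. */
    static char *
    map_array_type_name(const char *typeName)
    {
        char buf[NAMEDATALEN];
        int  i, j = 1;

        buf[0] = '_';
        for (i = 0; typeName[i] != '\0' && i < NAMEDATALEN - 2; i++)
            buf[j++] = typeName[i];
        buf[j] = '\0';
        return strdup(buf);         /* the real code uses pstrdup */
    }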
Tom Lane [Tue, 16 Dec 2014 18:31:42 +0000 (13:31 -0500)]
Fix file descriptor leak after failure of a \setshell command in pgbench.
If the called command fails to return data, runShellCommand forgot to
pclose() the pipe before returning. This is fairly harmless in the current
code, because pgbench would then abandon further processing of that client
thread; so no more than nclients descriptors could be leaked this way. But
it's not hard to imagine future improvements whereby that wouldn't be true.
In any case, it's sloppy coding, so patch all branches. Found by Coverity.
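The shape of the fix, standalone (buffer handling is illustrative;
the point is the pclose() on the error path):

    #include <stdio.h>
    #include <stdbool.h>

    /* Run a shell command and read one line of its output, closing
     * the pipe on every exit path.  The bug was leaking the pipe
     * when fgets() failed. */
    static bool
    read_command_line(const char *command, char *res, int reslen)
    {
        FILE *fp = popen(command, "r");

        if (fp == NULL)
            return false;
        if (fgets(res, reslen, fp) == NULL)
        {
            fprintf(stderr, "%s: cannot read the result\n", command);
            (void) pclose(fp);      /* previously missing */
            return false;
        }
        return pclose(fp) != -1;
    }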
Tom Lane [Fri, 12 Dec 2014 02:02:38 +0000 (21:02 -0500)]
Fix planning of SELECT FOR UPDATE on child table with partial index.
Ordinarily we can omit checking of a WHERE condition that matches a partial
index's condition, when we are using an indexscan on that partial index.
However, in SELECT FOR UPDATE we must include the "redundant" filter
condition in the plan so that it gets checked properly in an EvalPlanQual
recheck. The planner got this mostly right, but improperly omitted the
filter condition if the index in question was on an inheritance child
table. In READ COMMITTED mode, this could result in incorrectly returning
just-updated rows that no longer satisfy the filter condition.
The cause of the error is using get_parse_rowmark() when get_plan_rowmark()
is what should be used during planning. In 9.3 and up, also fix the same
mistake in contrib/postgres_fdw. It's currently harmless there (for lack
of inheritance support) but wrong is wrong, and the incorrect code might
get copied to someplace where it's more significant.
Report and fix by Kyotaro Horiguchi. Back-patch to all supported branches.
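The distinction, sketched as planner fragments (declarations per the
parser and planner headers; placement is illustrative):

    /* Wrong during planning: the parser's RowMarkClause list knows
     * nothing about inheritance children. */
    RowMarkClause *rmc = get_parse_rowmark(root->parse, rti);

    /* Right: the PlanRowMark list has entries for child tables too,
     * so the "redundant" filter qual is kept for them. */
    PlanRowMark *prm = get_plan_rowmark(root->rowMarks, rti);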
Tom Lane [Fri, 12 Dec 2014 00:37:14 +0000 (19:37 -0500)]
Fix corner case where SELECT FOR UPDATE could return a row twice.
In READ COMMITTED mode, if a SELECT FOR UPDATE discovers it has to redo
WHERE-clause checking on rows that have been updated since the SELECT's
snapshot, it invokes EvalPlanQual processing to do that. If this first
occurs within a non-first child table of an inheritance tree, the previous
coding could accidentally re-return a matching row from an earlier,
already-scanned child table. (And, to add insult to injury, I think this
could make it miss returning a row that should have been returned, if the
updated row that this happens on should still have passed the WHERE qual.)
Per report from Kyotaro Horiguchi; the added isolation test is based on his
test case.
This has been broken for quite a while, so back-patch to all supported
branches.