I changed the loop in 9.3 to use "goto send_failure" instead of "break" on
errors, but I missed this one case. It was a relatively harmless bug: if
the flush fails once, it will most likely fail again the next time we try
to flush the output. But it's a bug nevertheless.
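For illustration, here is a minimal standalone sketch of the goto-on-error
pattern being described; the function and variable names are hypothetical
and this is not the code the commit actually touched:
    #include <stdio.h>
    #include <stdbool.h>

    /* Hypothetical send loop: every failure path jumps to one exit label,
     * so an error exit cannot be missed the way a stray "break" can. */
    static bool
    send_all(FILE *out, const char *bufs[], int nbufs)
    {
        for (int i = 0; i < nbufs; i++)
        {
            if (fputs(bufs[i], out) == EOF)
                goto send_failure;
            if (fflush(out) != 0)
                goto send_failure;      /* the kind of case that was missed */
        }
        return true;

    send_failure:
        fprintf(stderr, "send failed\n");
        return false;
    }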
Tom Lane [Sat, 1 Mar 2014 20:21:00 +0000 (15:21 -0500)]
Allow regex operations to be terminated early by query cancel requests.
The regex code didn't have any provision for query cancel, which is
unsurprising given its non-Postgres origin, but still problematic since
some operations can take a long time. Introduce a callback function to
check for a pending query cancel or session termination request, and
call it in a couple of strategic spots where we can make the regex code
exit with an error indicator.
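As a rough sketch (from memory; treat the exact names and the REG_CANCEL
error code as assumptions rather than a quote of the patch), the callback
simply polls the backend's interrupt flags:
    /* Sketch only; assumes the PostgreSQL backend environment. */
    #include "postgres.h"
    #include "miscadmin.h"

    /*
     * Check for a pending query cancel or session termination request.
     * A nonzero result makes the regex code stop and return an error
     * indicator (REG_CANCEL), which the caller converts into the usual
     * query-cancel error.
     */
    static int
    rcancelrequested(void)
    {
        return InterruptPending && (QueryCancelPending || ProcDiePending);
    }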
If we ever actually split out the regex code as a standalone library,
some additional work will be needed to let the cancel callback function
be specified externally to the library. But that's straightforward
(certainly so by comparison to putting the locale-dependent character
classification logic on a similar arms-length basis), and there seems
no need to do it right now.
A bigger issue is that there may be more places than these two where
we need to check for cancels. We can always add more checks later,
now that the infrastructure is in place.
Since there are known examples of not-terribly-long regexes that can
lock up a backend for a long time, back-patch to all supported branches.
I have hopes of fixing the known performance problems later, but adding
query cancel ability seems like a good idea even if they were all fixed.
Report by KONDO Mitsumasa. Backpatch to 9.3. The commit that introduced
this was also applied to 9.2, but not the bogus while-loop part, because
the code in 9.2 looks quite different.
Alvaro Herrera [Thu, 27 Feb 2014 14:13:39 +0000 (11:13 -0300)]
Fix WAL replay of locking an updated tuple
We were resetting the tuple's HEAP_HOT_UPDATED flag as well as t_ctid on
WAL replay of a tuple-lock operation, which is incorrect when the tuple
is already updated.
Back-patch to 9.3. The clearing of both header elements was there
previously, but since no update could be present on a tuple that was
being locked, it was harmless.
Bug reported by Peter Geoghegan and Greg Stark in
CAM3SWZTMQiCi5PV5OWHb+bYkUcnCk=O67w0cSswPvV7XfUcU5g@mail.gmail.com and
CAM-w4HPTOeMT4KP0OJK+mGgzgcTOtLRTvFZyvD0O4aH-7dxo3Q@mail.gmail.com
respectively; diagnosis by Andres Freund.
Tom Lane [Tue, 25 Feb 2014 21:04:09 +0000 (16:04 -0500)]
Use SnapshotDirty rather than an active snapshot to probe index endpoints.
If there are lots of uncommitted tuples at the end of the index range,
get_actual_variable_range() ends up fetching each one and doing an MVCC
visibility check on it, until it finally hits a visible tuple. This is
bad enough in isolation, considering that we don't need an exact answer,
only an approximate one. But because the tuples are not yet committed,
each visibility check does a TransactionIdIsInProgress() test, which
involves scanning the ProcArray. When multiple sessions do this
concurrently, the ensuing contention results in horrid performance loss.
20X overall throughput loss on not-too-complicated queries is easy to
demonstrate in the back branches (though someone's made it noticeably
less bad in HEAD).
We can dodge the problem fairly effectively by using SnapshotDirty rather
than a normal MVCC snapshot. This will cause the index probe to take
uncommitted tuples as good, so that we incur only one tuple fetch and test
even if there are many such tuples. The extent to which this degrades the
estimate is debatable: it's possible the result is actually a more accurate
prediction than before, if the endmost tuple has become committed by the
time we actually execute the query being planned. In any case, it's not
very likely that it makes the estimate a lot worse.
SnapshotDirty will still reject tuples that are known committed dead, so
we won't give bogus answers if an invalid outlier has been deleted but not
yet vacuumed from the index. (Because btrees know how to mark such tuples
dead in the index, we shouldn't have a big performance problem in the case
that there are many of them at the end of the range.) This consideration
motivates not using SnapshotAny, which was also considered as a fix.
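A hedged sketch of what the dirty-snapshot probe looks like in backend
code; the relation, index, and scan key variables are assumed, and this is
a simplification rather than the literal get_actual_variable_range() code:
    /* Sketch; assumes PostgreSQL backend headers, an opened heap relation
     * (heapRel), its index (indexRel), and a prepared ScanKeyData scankey. */
    SnapshotData SnapshotDirty;
    IndexScanDesc scan;
    HeapTuple   tup;

    InitDirtySnapshot(SnapshotDirty);   /* accepts in-progress tuples, but
                                         * still rejects known-dead ones */
    scan = index_beginscan(heapRel, indexRel, &SnapshotDirty, 1, 0);
    index_rescan(scan, &scankey, 1, NULL, 0);

    /* With a dirty snapshot, the first live-or-uncommitted tuple ends the
     * probe, instead of wading through many uncommitted tuples. */
    tup = index_getnext(scan, ForwardScanDirection);
    if (tup != NULL)
    {
        /* extract the endpoint value from tup here */
    }
    index_endscan(scan);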
Note: the back branches were using SnapshotNow instead of an MVCC snapshot,
but the problem and solution are the same.
Per performance complaints from Bartlomiej Romanski, Josh Berkus, and
others. Back-patch to 9.0, where the issue was introduced (by commit 40608e7f949fb7e4025c0ddd5be01939adc79eec).
Tom Lane [Fri, 21 Feb 2014 22:10:49 +0000 (17:10 -0500)]
Do ScalarArrayOp estimation correctly when array is a stable expression.
Most estimation functions apply estimate_expression_value to see if they
can reduce an expression to a constant; the key difference is that it
allows evaluation of stable as well as immutable functions in hopes of
ending up with a simple Const node. scalararraysel didn't get the memo
though, and neither did gincost_opexpr/gincost_scalararrayopexpr. Fix
that, and remove a now-unnecessary estimate_expression_value step in the
subsidiary function scalararraysel_containment.
Per complaint from Alexey Klyukin. Back-patch to 9.3. The problem
goes back further, but I'm hesitant to change estimation behavior in
long-stable release branches.
Tom Lane [Tue, 18 Feb 2014 17:44:24 +0000 (12:44 -0500)]
Remove broken code that tried to handle OVERLAPS with a single argument.
The SQL standard says that OVERLAPS should have a two-element row
constructor on each side. The original coding of OVERLAPS support in
our grammar attempted to extend that by allowing a single-element row
constructor, which it internally duplicated ... or tried to, anyway.
But that code has certainly not worked since our List infrastructure was
rewritten in 2004, and I'm none too sure it worked before that. As it
stands, it ends up building a List that includes itself, leading to
assorted undesirable behaviors later in the parser.
Even if it worked as intended, it'd be a bit evil because of the
possibility of duplicate evaluation of a volatile function that the user
had written only once. Given the lack of documentation, test cases, or
complaints, let's just get rid of the idea and only support the standard
syntax.
While we're at it, improve the error cursor positioning for the
wrong-number-of-arguments errors, and inline the makeOverlaps() function
since it's only called in one place anyway.
Per bug #9227 from Joshua Yanovski. Initial patch by Joshua Yanovski,
extended a bit by me.
Magnus Hagander [Tue, 18 Feb 2014 13:45:58 +0000 (14:45 +0100)]
Disable RandomizedBaseAddress on MSVC builds
The ASLR in Windows 8/Windows 2012 can break PostgreSQL's shared memory. It
doesn't fail every time (which is explained by the Random part in ASLR), but
can fail with errors about failing to reserve a shared memory region.
Tom Lane [Mon, 17 Feb 2014 16:24:38 +0000 (11:24 -0500)]
Document risks of "make check" in the regression testing instructions.
Since the temporary server started by "make check" uses "trust"
authentication, another user on the same machine could connect to it
as database superuser, and then potentially exploit the privileges of
the operating-system user who started the tests. We should change
the testing procedures to prevent this risk; but discussion is required
about the best way to do that, as well as more testing than is practical
for an undisclosed security problem. Besides, the same issue probably
affects some user-written test harnesses. So for the moment, we'll just
warn people against using "make check" when there are untrusted users on
the same machine.
In passing, remove some ancient advice that suggested making the
regression testing subtree world-writable if you'd built as root.
That looks dangerously insecure in modern contexts, and anyway we
should not be encouraging people to build Postgres as root.
Tom Lane [Mon, 17 Feb 2014 16:20:24 +0000 (11:20 -0500)]
Prevent potential overruns of fixed-size buffers.
Coverity identified a number of places in which it couldn't prove that a
string being copied into a fixed-size buffer would fit. We believe that
most, perhaps all of these are in fact safe, or are copying data that is
coming from a trusted source so that any overrun is not really a security
issue. Nonetheless it seems prudent to forestall any risk by using
strlcpy() and similar functions.
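The mechanical pattern is to bound every copy into a fixed-size buffer. A
simplified, hypothetical example (note that strlcpy() comes from the BSDs
and from PostgreSQL's own src/port fallback; its availability elsewhere is
an assumption):
    #include <string.h>     /* strlcpy() is BSD/libpgport, not ISO C */

    /* Hypothetical example of the pattern applied by this commit. */
    static void
    copy_name(char *dst, size_t dstsize, const char *src)
    {
        /* before: strcpy(dst, src);  -- overruns dst if src is too long */
        strlcpy(dst, src, dstsize);   /* after: truncates instead */
    }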
Fixes by Peter Eisentraut and Jozef Mlich based on Coverity reports.
In addition, fix a potential null-pointer-dereference crash in
contrib/chkpass. The crypt(3) function is defined to return NULL on
failure, but chkpass.c didn't check for that before using the result.
The main practical case in which this could be an issue is if libc is
configured to refuse to execute unapproved hashing algorithms (e.g.,
"FIPS mode"). This ideally should've been a separate commit, but
since it touches code adjacent to one of the buffer overrun changes,
I included it in this commit to avoid last-minute merge issues.
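A hedged sketch of the chkpass-style fix, in plain C with a hypothetical
wrapper function: check crypt(3)'s result before dereferencing it.
    #define _XOPEN_SOURCE 700
    #include <unistd.h>     /* crypt() may instead live in <crypt.h> */
    #include <stdio.h>

    /* Returns 0 on success, -1 if the hash could not be computed
     * (e.g. the algorithm is refused in FIPS mode). */
    static int
    hash_password(const char *password, const char *salt,
                  char *out, size_t outsize)
    {
        const char *crypted = crypt(password, salt);

        if (crypted == NULL)            /* the check that was missing */
        {
            fprintf(stderr, "crypt() failed\n");
            return -1;
        }
        snprintf(out, outsize, "%s", crypted);
        return 0;
    }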
This issue was reported by Honza Horak.
Security: CVE-2014-0065 for buffer overruns, CVE-2014-0066 for crypt()
Noah Misch [Mon, 17 Feb 2014 14:33:31 +0000 (09:33 -0500)]
Predict integer overflow to avoid buffer overruns.
Several functions, mostly type input functions, calculated an allocation
size such that the calculation wrapped to a small positive value when
arguments implied a sufficiently-large requirement. Writes past the end
of the inadvertent small allocation followed shortly thereafter.
Coverity identified the path_in() vulnerability; code inspection led to
the rest. In passing, add check_stack_depth() to prevent stack overflow
in related functions.
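The fix pattern is to validate the element count against the largest
representable allocation before multiplying, instead of letting the size
computation wrap. A standalone illustration in plain C, with hypothetical
types standing in for the PostgreSQL ones:
    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef struct { double x, y; } Point;
    typedef struct { int npts; Point p[1]; } Path;   /* hypothetical */

    static Path *
    alloc_path(int npts)
    {
        size_t base = offsetof(Path, p);

        /* Reject counts whose size computation would wrap around; without
         * this, a huge npts yields a tiny allocation and the writes that
         * follow run off the end of it. */
        if (npts <= 0 || (size_t) npts > (SIZE_MAX - base) / sizeof(Point))
            return NULL;

        return malloc(base + sizeof(Point) * (size_t) npts);
    }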
Back-patch to 8.4 (all supported versions). The non-comment hstore
changes touch code that did not exist in 8.4, so that part stops at 9.0.
Noah Misch and Heikki Linnakangas, reviewed by Tom Lane.
Noah Misch [Mon, 17 Feb 2014 14:33:31 +0000 (09:33 -0500)]
Fix handling of wide datetime input/output.
Many server functions use the MAXDATELEN constant to size a buffer for
parsing or displaying a datetime value. It was much too small for the
longest possible interval output and slightly too small for certain
valid timestamp input, particularly input with a long timezone name.
The long input was rejected needlessly; the long output caused
interval_out() to overrun its buffer. ECPG's pgtypes library has a copy
of the vulnerable functions, which bore the same vulnerabilities along
with some of its own. In contrast to the server, certain long inputs
caused stack overflow rather than failing cleanly. Back-patch to 8.4
(all supported versions).
Reported by Daniel Schüssler, reviewed by Tom Lane.
Robert Haas [Mon, 17 Feb 2014 14:33:31 +0000 (09:33 -0500)]
Avoid repeated name lookups during table and index DDL.
If the name lookups come to different conclusions due to concurrent
activity, we might perform some parts of the DDL on a different table
than other parts. At least in the case of CREATE INDEX, this can be
used to cause the permissions checks to be performed against a
different table than the index creation, allowing for a privilege
escalation attack.
This changes the calling convention for DefineIndex, CreateTrigger,
transformIndexStmt, transformAlterTableStmt, CheckIndexCompatible
(in 9.2 and newer), and AlterTable (in 9.1 and older). In addition,
CheckRelationOwnership is removed in 9.2 and newer and the calling
convention is changed in older branches. A field has also been added
to the Constraint node (FkConstraint in 8.4). Third-party code calling
these functions or using the Constraint node will require updating.
Report by Andres Freund. Patch by Robert Haas and Andres Freund,
reviewed by Tom Lane.
Noah Misch [Mon, 17 Feb 2014 14:33:31 +0000 (09:33 -0500)]
Prevent privilege escalation in explicit calls to PL validators.
The primary role of PL validators is to be called implicitly during
CREATE FUNCTION, but they are also normal functions that a user can call
explicitly. Add a permissions check to each validator to ensure that a
user cannot use explicit validator calls to achieve things he could not
otherwise achieve. Back-patch to 8.4 (all supported versions).
Non-core procedural language extensions ought to make the same two-line
change to their own validators.
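From memory, the two-line change amounts to calling the new
permissions-check helper at the top of the validator; treat the exact
function name and header as assumptions and copy the in-core PLs for the
authoritative spelling:
    /* Sketch; assumes the PostgreSQL backend environment.  The helper's
     * declaring header is omitted here (treat its location as unknown). */
    #include "postgres.h"
    #include "fmgr.h"

    Datum
    myplugin_validator(PG_FUNCTION_ARGS)
    {
        Oid         funcoid = PG_GETARG_OID(0);

        /* Refuse explicit validator calls the caller has no business
         * making; this is the check added by the fix. */
        CheckFunctionValidatorAccess(fcinfo->flinfo->fn_oid, funcoid);

        /* ... existing validation of funcoid continues here ... */
        PG_RETURN_VOID();
    }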
Andres Freund, reviewed by Tom Lane and Noah Misch.
Noah Misch [Mon, 17 Feb 2014 14:33:31 +0000 (09:33 -0500)]
Shore up ADMIN OPTION restrictions.
Granting a role without ADMIN OPTION is supposed to prevent the grantee
from adding or removing members from the granted role. Issuing SET ROLE
before the GRANT bypassed that, because the role itself had an implicit
right to add or remove members. Plug that hole by recognizing that
implicit right only when the session user matches the current role.
Additionally, do not recognize it during a security-restricted operation
or during execution of a SECURITY DEFINER function. The restriction on
SECURITY DEFINER is not security-critical. However, it seems best for a
user testing his own SECURITY DEFINER function to see the same behavior
others will see. Back-patch to 8.4 (all supported versions).
The SQL standards do not conflate roles and users as PostgreSQL does;
only SQL roles have members, and only SQL users initiate sessions. An
application using PostgreSQL users and roles as SQL users and roles will
never attempt to grant membership in the role that is the session user,
so the implicit right to add or remove members will never arise.
The security impact was mostly that a role member could revoke access
from others, contrary to the wishes of his own grantor. Unapproved role
member additions are less notable, because the member can still largely
achieve that by creating a view or a SECURITY DEFINER function.
Reviewed by Andres Freund and Tom Lane. Reported, independently, by
Jonas Sundman and Noah Misch.
Tom Lane [Sun, 16 Feb 2014 17:37:10 +0000 (12:37 -0500)]
PGDLLIMPORT'ify DateStyle and IntervalStyle.
This is needed on Windows to support contrib/postgres_fdw. Although it's
been broken since last March, we didn't notice until recently because there
were no active buildfarm members that complained about missing PGDLLIMPORT
marking. Efforts are underway to improve that situation, in support of
which we're delaying fixing some other cases of global variables that
should be marked PGDLLIMPORT. However, this case affects 9.3, so we
can't wait any longer to fix it.
I chose to mark DateOrder as well, though it's not strictly necessary
for postgres_fdw.
Tom Lane [Sat, 15 Feb 2014 22:09:54 +0000 (17:09 -0500)]
Fix unportable coding in DetermineSleepTime().
We should not assume that struct timeval.tv_sec is a long, because
it ain't necessarily. (POSIX says that it's a time_t, which might
well be 64 bits now or in the future; or for that matter might be
32 bits on machines with 64-bit longs.) Per buildfarm member panther.
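A small standalone illustration of the portability point: tv_sec is a
time_t, so widen it explicitly rather than assuming it fits a long.
    #include <stdio.h>
    #include <sys/time.h>

    int
    main(void)
    {
        struct timeval tv;

        gettimeofday(&tv, NULL);

        /* Wrong wherever time_t is wider than long:
         *     printf("%ld\n", tv.tv_sec);
         * Portable: widen explicitly before formatting or arithmetic. */
        printf("%lld.%06ld\n", (long long) tv.tv_sec, (long) tv.tv_usec);
        return 0;
    }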
Back-patch to 9.3 where the dubious coding was introduced.
Tom Lane [Sat, 15 Feb 2014 02:59:13 +0000 (21:59 -0500)]
Update time zone data files to tzdata release 2013i.
DST law changes in Jordan; historical changes in Cuba.
Also, remove the zones Asia/Riyadh87, Asia/Riyadh88, and Asia/Riyadh89.
Per the upstream announcement:
The files solar87, solar88, and solar89 are no longer distributed.
They were a negative experiment -- that is, a demonstration that
tz data can represent solar time only with some difficulty and error.
Their presence in the distribution caused confusion, as Riyadh
civil time was generally not solar time in those years.
Tom Lane [Fri, 14 Feb 2014 21:50:25 +0000 (16:50 -0500)]
Update regression testing instructions.
This documentation never got the word about the existence of check-world or
installcheck-world. Revise to recommend use of those, and document all the
subsidiary test suites. Do some minor wordsmithing elsewhere, too.
In passing, remove markup related to generation of plain-text regression
test instructions, since we don't do that anymore.
Back-patch to 9.1 where check-world was added. (installcheck-world exists
in 9.0; but since check-world doesn't, this patch would need additional
work to cover that branch, and it doesn't seem worth the effort.)
Tom Lane [Fri, 14 Feb 2014 17:54:43 +0000 (12:54 -0500)]
Suggest shell here-documents instead of psql -c for multiple commands.
The documentation suggested using "echo | psql", but not the often-superior
alternative of a here-document. Also, be more direct about suggesting
that people avoid -c for multiple commands. Per discussion.
Change the order that pg_xlog and WAL archive are polled for WAL segments.
If there is a WAL segment with same ID but different TLI present in both
the WAL archive and pg_xlog, prefer the one with higher TLI. Before this
patch, the archive was polled first, for all expected TLIs, and only if no
file was found was pg_xlog scanned. This was a change in behavior from 9.2,
which first scanned archive and pg_xlog for the highest TLI, then archive
and pg_xlog for the next highest TLI and so forth. This patch reverts the
behavior back to what it was in 9.2.
The reason for this is that if for example you try to do archive recovery
to timeline 2, which branched off timeline 1, but the WAL for timeline 2 is
not archived yet, we would replay past the timeline switch point on
timeline 1 using the archived files, before even looking at timeline 2's
files in pg_xlog.
Report and patch by Kyotaro Horiguchi. Backpatch to 9.3 where the behavior
was changed.
Tom Lane [Thu, 13 Feb 2014 23:45:15 +0000 (18:45 -0500)]
Clean up error cases in psql's COPY TO STDOUT/FROM STDIN code.
Adjust handleCopyOut() to stop trying to write data once it's failed
one time. For typical cases such as out-of-disk-space or broken-pipe,
additional attempts aren't going to do anything but waste time, and
in any case clean truncation of the output seems like a better behavior
than randomly dropping blocks in the middle.
Also remove dubious (and misleadingly documented) attempt to force our way
out of COPY_OUT state if libpq didn't do that. If we did have a situation
like that, it'd be a bug in libpq and would be better fixed there, IMO.
We can hope that commit fa4440f51628d692f077d54b8313aea31af087ea took care
of any such problems, anyway.
Also fix longstanding bug in handleCopyIn(): PQputCopyEnd() only supports
a non-null errormsg parameter in protocol version 3, and will actively
fail if one is passed in version 2. This would've made our attempts
to get out of COPY_IN state after a failure into infinite loops when
talking to pre-7.4 servers.
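A hedged sketch of the client-side rule: only hand PQputCopyEnd() an error
message when the connection speaks protocol 3.
    #include <libpq-fe.h>

    /* End a COPY FROM STDIN, optionally reporting a client-side error.
     * Sketch of the psql fix; the wrapper function is hypothetical. */
    static int
    finish_copy_in(PGconn *conn, const char *errmsg)
    {
        /* PQputCopyEnd() supports a non-NULL errormsg only with protocol 3;
         * passing one to a protocol-2 (pre-7.4) server makes it fail. */
        if (errmsg != NULL && PQprotocolVersion(conn) < 3)
            errmsg = NULL;

        return PQputCopyEnd(conn, errmsg);
    }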
Back-patch the COPY_OUT state change business back to 9.2 where it was
introduced, and the other two fixes into all supported branches.
Alvaro Herrera [Thu, 13 Feb 2014 22:30:30 +0000 (19:30 -0300)]
Separate multixact freezing parameters from xid's
Previously we were piggybacking on transaction ID parameters to freeze
multixacts; but since there isn't necessarily any relationship between
rates of Xid and multixact consumption, this turns out not to be a good
idea.
Therefore, we now have multixact-specific freezing parameters:
vacuum_multixact_freeze_min_age: when to remove multis as we come across
them in vacuum (defaults to 5 million, i.e. early in comparison to Xid's
default of 50 million)
vacuum_multixact_freeze_table_age: when to force whole-table scans
instead of scanning only the pages marked as not all-visible in the
visibility map (defaults to 150 million, same as for Xids). Whichever of
the two limits is reached earlier will cause a whole-table scan.
autovacuum_multixact_freeze_max_age: when to force emergency,
uninterruptible whole-table scans (defaults to 400 million, double the
value for Xids). This means there shouldn't be more frequent emergency
vacuuming than previously, unless multixacts are being used very
rapidly.
Backpatch to 9.3 where multixacts were made to persist enough to require
freezing. To avoid an ABI break in 9.3, VacuumStmt has a couple of
fields in an unnatural place, and StdRdOptions is split in two so that
the newly added fields can go at the end.
Patch by me, reviewed by Robert Haas, with additional input from Andres
Freund and Tom Lane.
Tom Lane [Thu, 13 Feb 2014 19:24:45 +0000 (14:24 -0500)]
Fix length checking for Unicode identifiers containing escapes (U&"...").
We used the length of the input string, not the de-escaped string, as
the trigger for NAMEDATALEN truncation. AFAICS this would only result
in sometimes printing a phony truncation warning; but it's just luck
that there was no worse problem, since we were violating the API spec
for truncate_identifier(). Per bug #9204 from Joshua Yanovski.
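From memory, the fix boils down to measuring the de-escaped identifier
rather than the raw input when applying NAMEDATALEN truncation; the
de-escaping helper named here is an assumption, not a quote of the patch:
    /* Sketch; assumes the PostgreSQL backend scanner environment. */
    char   *ident = litbuf_udeescape('\\', yyscanner);   /* assumed helper */

    /* Truncate based on the de-escaped length, not the length of the
     * raw U&"..." input text. */
    truncate_identifier(ident, strlen(ident), true);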
This has been wrong since the Unicode-identifier support was added,
so back-patch to all supported branches.
Tom Lane [Thu, 13 Feb 2014 00:09:21 +0000 (19:09 -0500)]
Improve cross-references between minor version release notes.
We have a practice of providing a "bread crumb" trail between the minor
versions where the migration section actually tells you to do something.
Historically that was just plain text, eg, "see the release notes for
9.2.4"; but if you're using a browser or PDF reader, it's a lot nicer
if it's a live hyperlink. So use "<xref>" instead. Any argument against
doing this vanished with the recent decommissioning of plain-text release
notes.
Tom Lane [Wed, 12 Feb 2014 22:50:10 +0000 (17:50 -0500)]
Improve libpq's error recovery for connection loss during COPY.
In pqSendSome, if the connection is already closed at entry, discard any
queued output data before returning. There is no possibility of ever
sending the data, and anyway this corresponds to what we'd do if we'd
detected a hard error while trying to send(). This avoids possible
indefinite bloat of the output buffer if the application keeps trying
to send data (or even just keeps trying to do PQputCopyEnd, as psql
indeed will).
Because PQputCopyEnd won't transition out of PGASYNC_COPY_IN state
until it's successfully queued the COPY END message, and pqPutMsgEnd
doesn't distinguish a queuing failure from a pqSendSome failure,
this omission allowed an infinite loop in psql if the connection closure
occurred when we had at least 8K queued to send. It might be worth
refactoring so that we can make that distinction, but for the moment
the other changes made here seem to offer adequate defenses.
To guard against other variants of this scenario, do not allow
PQgetResult to return a PGRES_COPY_XXX result if the connection is
already known dead. Make sure it returns PGRES_FATAL_ERROR instead.
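From an application's point of view, the upshot is that a dead connection
now surfaces as PGRES_FATAL_ERROR instead of a COPY state you cannot
leave. A minimal, hedged client-side sketch:
    #include <stdio.h>
    #include <libpq-fe.h>

    /* Drain results after a COPY attempt.  With this fix, a lost
     * connection shows up as PGRES_FATAL_ERROR rather than a PGRES_COPY_*
     * result that would keep a naive client looping forever. */
    static void
    drain_results(PGconn *conn)
    {
        PGresult   *res;

        while ((res = PQgetResult(conn)) != NULL)
        {
            ExecStatusType st = PQresultStatus(res);

            if (st == PGRES_FATAL_ERROR)
                fprintf(stderr, "%s", PQerrorMessage(conn));
            PQclear(res);

            if (st == PGRES_COPY_IN || st == PGRES_COPY_OUT)
                break;          /* still in COPY; handle that elsewhere */
        }
    }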
Per report from Stephen Frost. Back-patch to all active branches.
Tom Lane [Wed, 12 Feb 2014 19:52:20 +0000 (14:52 -0500)]
In XLogReadBufferExtended, don't assume P_NEW yields consecutive pages.
In a database that's not yet reached consistency, it's possible that some
segments of a relation are not full-size but are not the last ones either.
Because of the way smgrnblocks() works, asking for a new page with P_NEW
will fill in the last not-full-size segment --- and if that makes it full
size, the apparent EOF of the relation will increase by more than one page,
so that the next P_NEW request will yield a page past the next consecutive
one. This breaks the relation-extension logic in XLogReadBufferExtended,
possibly allowing a page update to be applied to some page far past where
it was intended to go. This appears to be the explanation for reports of
table bloat on replication slaves compared to their masters, and probably
explains some corrupted-slave reports as well.
Fix the loop to check the page number it actually got, rather than merely
Assert()'ing that dead reckoning got it to the desired place. AFAICT,
there are no other places that make assumptions about exactly which page
they'll get from P_NEW.
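A hedged sketch of the corrected extension loop (simplified from memory;
not the literal xlogutils.c code). The relation, fork number, and target
block are assumed to be in scope:
    /* Sketch; assumes the PostgreSQL backend environment, with "rel" the
     * relation being extended and "blkno" the page we need to reach. */
    Buffer      buffer = InvalidBuffer;

    /* Extend with P_NEW until we reach the target block, checking the
     * block number actually obtained instead of trusting dead reckoning. */
    do
    {
        if (buffer != InvalidBuffer)
            ReleaseBuffer(buffer);
        buffer = ReadBufferExtended(rel, forknum, P_NEW, RBM_NORMAL, NULL);
    } while (BufferGetBlockNumber(buffer) < blkno);

    /* P_NEW may even have skipped past blkno; if so, read it directly. */
    if (BufferGetBlockNumber(buffer) != blkno)
    {
        ReleaseBuffer(buffer);
        buffer = ReadBufferExtended(rel, forknum, blkno, RBM_NORMAL, NULL);
    }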
Problem identified by Greg Stark, though this is not the same as his
proposed patch.
It's been like this for a long time, so back-patch to all supported
branches.
Magnus Hagander [Sun, 9 Feb 2014 12:10:14 +0000 (13:10 +0100)]
Kill pg_basebackup background process when exiting
If an error occurs in the foreground (backup) process of pg_basebackup,
and we exit in a controlled way, the background process (streaming
xlog process) would stay around and keep streaming.
Tom Lane [Tue, 11 Feb 2014 01:48:12 +0000 (20:48 -0500)]
Don't generate plain-text HISTORY and src/test/regress/README anymore.
Providing this information as plain text was doubtless worth the trouble
ten years ago, but it seems likely that hardly anyone reads it in this
format anymore. And the effort required to maintain these files (in the
form of extra-complex markup rules in the relevant parts of the SGML
documentation) is significant. So, let's stop doing that and rely solely
on the other documentation formats.
Per discussion, the plain-text INSTALL instructions might still be worth
their keep, so we continue to generate that file.
Rather than remove HISTORY and src/test/regress/README from distribution
tarballs entirely, replace them with simple stub files that tell the reader
where to find the relevant documentation. This is mainly to avoid possibly
breaking packaging recipes that expect these files to exist.
Back-patch to all supported branches, because simplifying the markup
requirements for release notes won't help much unless we do it in all
branches.
Tom Lane [Tue, 4 Feb 2014 02:30:05 +0000 (21:30 -0500)]
Improve connection-failure error handling in contrib/postgres_fdw.
postgres_fdw tended to say "unknown error" if it tried to execute a command
on an already-dead connection, because some paths in libpq just return a
null PGresult for such cases. Out-of-memory might result in that, too.
To fix, pass the PGconn to pgfdw_report_error, and look at its
PQerrorMessage() string if we can't get anything out of the PGresult.
Also, fix the transaction-exit logic to reliably drop a dead connection.
It was attempting to do that already, but it assumed that only connection
cache entries with xact_depth > 0 needed to be examined. The folly in that
is that if we fail while issuing START TRANSACTION, we'll not have bumped
xact_depth. (At least for the case I was testing, this fix masks the
other problem; but it still seems like a good idea to have the PGconn
fallback logic.)
Per investigation of bug #9087 from Craig Lucas. Backpatch to 9.3 where
this code was introduced.
Tom Lane [Mon, 3 Feb 2014 19:46:54 +0000 (14:46 -0500)]
Fix *-qualification of named parameters in SQL-language functions.
Given a composite-type parameter named x, "$1.*" worked fine, but "x.*"
not so much. This has been broken since named parameter references were
added in commit 9bff0780cf5be2193a5bad0d3df2dbe143085264, so patch back
to 9.2. Per bug #9085 from Hardy Falk.
Tom Lane [Sat, 1 Feb 2014 23:26:58 +0000 (18:26 -0500)]
Fix some wide-character bugs in the text-search parser.
In p_isdigit and other character class test functions generated by the
p_iswhat macro, the code path for non-C locales with multibyte encodings
contained a bogus pointer cast that would accidentally fail to malfunction
if types wchar_t and wint_t have the same width. Apparently that is true
on most platforms, but not on recent Cygwin releases. Remove the cast,
as it seems completely unnecessary (I think it arose from a false analogy
to the need to cast to unsigned char when dealing with the <ctype.h>
functions). Per bug #8970 from Marco Atzeri.
In the same functions, the code path for C locale with a multibyte encoding
simply ANDed each wide character with 0xFF before passing it to the
corresponding <ctype.h> function. This could result in false positive
answers for some non-ASCII characters, so use a range test instead.
Noted by me while investigating Marco's complaint.
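A standalone illustration of the C-locale problem: masking a wide
character with 0xFF before handing it to <ctype.h> can make an arbitrary
non-ASCII character test as a digit, while an explicit range check cannot.
    #include <ctype.h>
    #include <wchar.h>

    /* Buggy: wc = 0x1031 masks to 0x31 ('1') and reports "digit". */
    static int
    is_digit_masked(wchar_t wc)
    {
        return isdigit((unsigned char) (wc & 0xFF));
    }

    /* Safer for the C locale: only ASCII-range characters can qualify,
     * so test the value before consulting <ctype.h>. */
    static int
    is_digit_ranged(wchar_t wc)
    {
        if ((unsigned long) wc > 127)
            return 0;
        return isdigit((int) wc);
    }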
Also, remove some useless though not actually buggy maskings and casts
in the hand-coded p_isalnum and p_isalpha functions, which evidently
got tested a bit more carefully than the macro-generated functions.
Tom Lane [Sat, 1 Feb 2014 21:21:00 +0000 (16:21 -0500)]
Fix some more bugs in signal handlers and process shutdown logic.
WalSndKill was doing things exactly backwards: it should first clear
MyWalSnd (to stop signal handlers from touching MyWalSnd->latch),
then disown the latch, and only then mark the WalSnd struct unused by
clearing its pid field.
Also, WalRcvSigUsr1Handler and worker_spi_sighup failed to preserve
errno, which is surely a requirement for any signal handler.
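The errno point is the classic signal-handler pattern; a standalone sketch
of what "preserve errno" means in practice (the handler body is
hypothetical):
    #include <errno.h>
    #include <signal.h>

    static volatile sig_atomic_t got_sigusr1 = 0;

    /* A handler that calls anything which may set errno must save and
     * restore it, or it can corrupt errno in the interrupted code. */
    static void
    sigusr1_handler(int signo)
    {
        int         save_errno = errno;

        (void) signo;
        got_sigusr1 = 1;        /* real handlers also set a latch, etc. */

        errno = save_errno;
    }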
Per discussion of recent buildfarm failures. Back-patch as far
as the relevant code exists.
Andrew Dunstan [Sat, 1 Feb 2014 20:16:06 +0000 (15:16 -0500)]
Copy the libpq DLL to the bin directory on Mingw and Cygwin.
This has long been done by the MSVC build system, and has caused
confusion in the past when programs like psql have failed to start
because they can't find the DLL. If it's in the same directory, as it now
will be, they will find it.
Robert Haas [Sat, 1 Feb 2014 02:31:08 +0000 (21:31 -0500)]
Clear MyProc and MyProcSignalState before they become invalid.
Evidence from buildfarm member crake suggests that the new test_shm_mq
module is routinely crashing the server due to the arrival of a SIGUSR1
after the shared memory segment has been unmapped. Although processes
using the new dynamic background worker facilities are more likely to
receive a SIGUSR1 around this time, the problem is also possible on older
branches, so I'm back-patching the parts of this change that apply to
older branches as far as they apply.
It's already generally the case that code checks whether these pointers
are NULL before dereferencing them, so the important thing is mostly to
make sure that they do get set to NULL before they become invalid. But
in master, there's one case in procsignal_sigusr1_handler that lacks a
NULL guard, so add that.
Tom Lane [Thu, 30 Jan 2014 23:10:04 +0000 (18:10 -0500)]
Fix potential coredump on bad locale value in pg_upgrade.
Thinko in error report (and a typo in the message text, too). We're
failing anyway, but it would be good to print something useful first.
Noted while reviewing a patch to make pg_upgrade's locale code laxer.
Tom Lane [Thu, 30 Jan 2014 19:51:19 +0000 (14:51 -0500)]
Fix bogus handling of "postponed" lateral quals.
When pulling a "postponed" qual from a LATERAL subquery up into the quals
of an outer join, we must make sure that the postponed qual is included
in those seen by make_outerjoininfo(). Otherwise we might compute a
too-small min_lefthand or min_righthand for the outer join, leading to
"JOIN qualification cannot refer to other relations" failures from
distribute_qual_to_rels. Subtler errors in the created plan seem possible,
too, if the extra qual would only affect join ordering constraints.
Per bug #9041 from David Leverton. Back-patch to 9.3.
Tom Lane [Thu, 30 Jan 2014 01:04:01 +0000 (20:04 -0500)]
Fix unsafe references to errno within error messaging logic.
Various places were supposing that errno could be expected to hold still
within an ereport() nest or similar contexts. This isn't true necessarily,
though in some cases it accidentally failed to fail depending on how the
compiler chanced to order the subexpressions. This class of thinko
explains recent reports of odd failures on clang-built versions, typically
missing or inappropriate HINT fields in messages.
Problem identified by Christian Kruse, who also submitted the patch this
commit is based on. (I fixed a few issues in his patch and found a couple
of additional places with the same disease.)
Back-patch as appropriate to all supported branches.
Stephen Frost [Fri, 24 Jan 2014 20:10:08 +0000 (15:10 -0500)]
Avoid minor leak in parallel pg_dump
During parallel pg_dump, a worker process closing the connection caused
a minor memory leak (particularly minor as we are likely about to exit
anyway). Instead, free the memory in this case prior to returning NULL
to indicate connection closed.
Fujii Masao [Thu, 23 Jan 2014 14:00:30 +0000 (23:00 +0900)]
Fix bugs in PQhost().
On platforms that don't support Unix-domain sockets, when neither host
nor hostaddr is specified, the default host 'localhost' is used to
connect to the server, and PQhost() must return that, but it didn't.
This patch fixes PQhost() so that it returns the default host in that
case.
Also, this patch fixes PQhost() so that it doesn't return the
Unix-domain socket directory path on platforms that don't support
Unix-domain sockets.
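A tiny libpq usage sketch of the behavior being fixed; after the patch,
PQhost() is expected to report the effective default host rather than NULL
or a socket directory on such platforms:
    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* No host or hostaddr given, so the default host is used. */
        PGconn *conn = PQconnectdb("dbname=postgres");

        /* On platforms without Unix-domain sockets this should now print
         * the default host ("localhost") instead of nothing. */
        printf("connected host: %s\n",
               PQhost(conn) ? PQhost(conn) : "(null)");

        PQfinish(conn);
        return 0;
    }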
Stephen Frost [Wed, 22 Jan 2014 03:49:22 +0000 (22:49 -0500)]
Allow type_func_name_keywords in even more places
A while back, 2c92edad48796119c83d7dbe6c33425d1924626d allowed
type_func_name_keywords to be used in more places, including role
identifiers. Unfortunately, that commit missed out on cases where
name_list was used for lists-of-roles, eg: for DROP ROLE. This
resulted in the unfortunate situation that you could CREATE a role
with a type_func_name_keywords-allowed identifier, but not DROP it
(directly; ALTER could be used to rename it to something which
could be DROP'd).
This extends allowing type_func_name_keywords to places where role
lists can be used.
Tom Lane [Tue, 21 Jan 2014 21:34:31 +0000 (16:34 -0500)]
Tweak parse location assignment for CURRENT_DATE and related constructs.
All these constructs generate parse trees consisting of a Const and
a run-time type coercion (perhaps a FuncExpr or a CoerceViaIO). Modify
the raw parse output so that we end up with the original token's location
attached to the type coercion node while the Const has location -1;
before, it was the other way around. This makes no difference in terms
of what exprLocation() will say about the parse tree as a whole, so it
should not have any user-visible impact. The point of changing it is that
we do not want contrib/pg_stat_statements to treat these constructs as
replaceable constants. It will do the right thing if the Const has
location -1 rather than a valid location.
This is a pretty ugly hack, but then this code is ugly already; we should
someday replace this translation with special-purpose parse node(s) that
would allow ruleutils.c to reconstruct the original query text.
Robert Haas [Tue, 21 Jan 2014 16:42:37 +0000 (11:42 -0500)]
Fix inadvertent semantics change in last patch to plug memory leaks.
Commit a5bca4ef034f71175d46462963af2329d22068c2 accidentally changed
the semantics when the "skipping missing configuration file" is
emitted, because it forced OK to true instead of leaving the value
untouched.
Stephen Frost [Sat, 18 Jan 2014 23:41:52 +0000 (18:41 -0500)]
Allow SET TABLESPACE to database default
We've always allowed CREATE TABLE to create tables in the database's default
tablespace without checking for CREATE permissions on that tablespace.
Unfortunately, the original implementation of ALTER TABLE ... SET TABLESPACE
didn't pick up on that exception.
This changes ALTER TABLE ... SET TABLESPACE to allow the database's default
tablespace without checking for CREATE rights on that tablespace, just as
CREATE TABLE works today. Users could always do this through a series of
commands (CREATE TABLE ... AS SELECT * FROM ...; DROP TABLE ...; etc), so
let's fix the oversight in SET TABLESPACE's original implementation.
Fix Hot Standby feedback sending when streaming busily.
Commit 6f60fdd7015b032bf49273c99f80913d57eac284 accidentally removed a
call to XLogWalRcvSendHSFeedback() after flushing received WAL to disk.
The consequence is that when walsender is busy streaming WAL, it doesn't
send HS feedback messages. One is sent if nothing is received from the
master for 100ms, but if there's a steady stream of WAL, it never happens.
Tom Lane [Wed, 15 Jan 2014 00:27:57 +0000 (19:27 -0500)]
Improve FILES section of psql reference page.
Primarily, explain where to find the system-wide psqlrc file, per recent
gripe from John Sutton. Do some general wordsmithing and improve the
markup, too.
Also adjust psqlrc.sample so its comments about file location are somewhat
trustworthy. (Not sure why we bother with this file when it's empty,
but whatever.)
Back-patch to 9.2 where the startup file naming scheme was last changed.
Tom Lane [Tue, 14 Jan 2014 22:34:51 +0000 (17:34 -0500)]
Fix multiple bugs in index page locking during hot-standby WAL replay.
In ordinary operation, VACUUM must be careful to take a cleanup lock on
each leaf page of a btree index; this ensures that no indexscans could
still be "in flight" to heap tuples due to be deleted. (Because of
possible index-tuple motion due to concurrent page splits, it's not enough
to lock only the pages we're deleting index tuples from.) In Hot Standby,
the WAL replay process must likewise lock every leaf page. There were
several bugs in the code for that:
* The replay scan might come across unused, all-zero pages in the index.
While btree_xlog_vacuum itself did the right thing (ie, nothing) with
such pages, xlogutils.c supposed that such pages must be corrupt and
would throw an error. This accounts for various reports of replication
failures with "PANIC: WAL contains references to invalid pages". To
fix, add a ReadBufferMode value that instructs XLogReadBufferExtended
not to complain when we're doing this.
* btree_xlog_vacuum performed the extra locking if standbyState ==
STANDBY_SNAPSHOT_READY, but that's not the correct test: we won't open up
for hot standby queries until the database has reached consistency, and
we don't want to do the extra locking till then either, for fear of reading
corrupted pages (which bufmgr.c would complain about). Fix by exporting a
new function from xlog.c that will report whether we're actually in hot
standby replay mode.
* To ensure full coverage of the index in the replay scan, btvacuumscan
would emit a dummy WAL record for the last page of the index, if no
vacuuming work had been done on that page. However, if the last page
of the index is all-zero, that would result in corruption of said page,
since the functions called on it weren't prepared to handle that case.
There's no need to lock any such pages, so change the logic to target
the last normal leaf page instead.
The first two of these bugs were diagnosed by Andres Freund, the other one
by me. Fixes based on ideas from Heikki Linnakangas and myself.
This has been wrong since Hot Standby was introduced, so back-patch to 9.0.
Tom Lane [Mon, 13 Jan 2014 18:07:13 +0000 (13:07 -0500)]
Fix possible buffer overrun in contrib/pg_trgm.
Allow for the possibility that folding a string to lower case makes it
longer (due to replacing a character with a longer multibyte character).
This doesn't change the number of trigrams that will be extracted, but
it does affect the required size of an intermediate buffer in
generate_trgm(). Per bug #8821 from Ufuk Kayserilioglu.
Also install some checks that the input string length is not so large
as to cause overflow in the calculations of palloc request sizes.
Tom Lane [Sun, 12 Jan 2014 00:03:15 +0000 (19:03 -0500)]
Disallow LATERAL references to the target table of an UPDATE/DELETE.
On second thought, commit 0c051c90082da0b7e5bcaf9aabcbd4f361137cdc was
over-hasty: rather than allowing this case, we ought to reject it for now.
That leaves the field clear for a future feature that allows the target
table to be re-specified in the FROM (or USING) clause, which will enable
left-joining the target table to something else. We can then also allow
LATERAL references to such an explicitly re-specified target table.
But allowing them right now will create ambiguities or worse for such a
feature, and it isn't something we documented 9.3 as supporting.
While at it, add a convenience subroutine to avoid having several copies
of the ereport for disallowed-LATERAL-reference cases.
Tom Lane [Sat, 11 Jan 2014 21:35:30 +0000 (16:35 -0500)]
Fix possible crashes due to using elog/ereport too early in startup.
Per reports from Andres Freund and Luke Campbell, a server failure during
set_pglocale_pgservice results in a segfault rather than a useful error
message, because the infrastructure needed to use ereport hasn't been
initialized; specifically, MemoryContextInit hasn't been called.
One known cause of this is starting the server in a directory it
doesn't have permission to read.
We could try to prevent set_pglocale_pgservice from using anything that
depends on palloc or elog, but that would be messy, and the odds of future
breakage seem high. Moreover there are other things being called in main.c
that look likely to use palloc or elog too --- perhaps those things
shouldn't be there, but they are there today. The best solution seems to
be to move the call of MemoryContextInit to very early in the backend's
real main() function. I've verified that an elog or ereport occurring
immediately after that is now capable of sending something useful to
stderr.
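From memory, the shape of the fix is simply to run the memory-context
bootstrap as the first real action in main(); a hedged sketch, not the
literal main.c diff:
    /* Sketch; assumes the PostgreSQL backend environment. */
    int
    main(int argc, char *argv[])
    {
        /*
         * Create the top-level memory contexts (TopMemoryContext,
         * ErrorContext) before any code path that might call palloc()
         * or report an error via elog()/ereport().
         */
        MemoryContextInit();

        set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("postgres"));

        /* ... the rest of startup proceeds as before ... */
    }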
I also added code to elog.c to print something intelligible rather than
just crashing if MemoryContextInit hasn't created the ErrorContext.
This could happen if MemoryContextInit itself fails (due to malloc
failure), and provides some future-proofing against someone trying to
sneak in new code even earlier in server startup.
Back-patch to all supported branches. Since we've only heard reports of
this type of failure recently, it may be that some recent change has made
it more likely to see a crash of this kind; but it sure looks like it's
broken all the way back.
Tom Lane [Sat, 11 Jan 2014 18:41:41 +0000 (13:41 -0500)]
Fix compute_scalar_stats() for case that all values exceed WIDTH_THRESHOLD.
The standard typanalyze functions skip over values whose detoasted size
exceeds WIDTH_THRESHOLD (1024 bytes), so as to limit memory bloat during
ANALYZE. However, we (I think I, actually :-() failed to consider the
possibility that *every* non-null value in a column is too wide. While
compute_minimal_stats() seems to behave reasonably anyway in such a case,
compute_scalar_stats() just fell through and generated no pg_statistic
entry at all. That's unnecessarily pessimistic: we can still produce
valid stanullfrac and stawidth values in such cases, since we do include
too-wide values in the average-width calculation. Furthermore, since the
general assumption in this code is that too-wide values are probably all
distinct from each other, it seems reasonable to set stadistinct to -1
("all distinct").
Per complaint from Kadri Raudsepp. This has been like this since roughly
neolithic times, so back-patch to all supported branches.
Alvaro Herrera [Fri, 10 Jan 2014 21:03:18 +0000 (18:03 -0300)]
Accept pg_upgraded tuples during multixact freezing
The new MultiXact freezing routines introduced by commit 8e9a16ab8f7
neglected to consider tuples that came from a pg_upgrade'd database; a
vacuum run that tried to freeze such tuples would die with an error such
as
ERROR: MultiXactId 11415437 does no longer exist -- apparent wraparound
To fix, ensure that GetMultiXactIdMembers is allowed to return empty
multis when the infomask bits are right, as is done in other callsites.
Per trouble report from F-Secure.
In passing, fix a copy&paste bug reported by Andrey Karpov of VIVA64,
found with their PVS-Studio static checker: instead of setting relminmxid
to Invalid, we were setting relfrozenxid twice. Not an important
mistake, because that code branch is about relations for which we don't
use the frozenxid/minmxid values at all in the first place, but it seems
to warrant a fix nonetheless.
Michael Meskes [Thu, 9 Jan 2014 14:41:51 +0000 (15:41 +0100)]
Fix descriptor output in ECPG.
While it worked on most platforms, the old way sometimes created alignment
problems. This should fix it. Also, the regression tests were updated to
test for the reported case.
Tom Lane [Thu, 9 Jan 2014 01:18:10 +0000 (20:18 -0500)]
Fix "cannot accept a set" error when only some arms of a CASE return a set.
In commit c1352052ef1d4eeb2eb1d822a207ddc2d106cb13, I implemented an
optimization that assumed that a function's argument expressions would
either always return a set (ie multiple rows), or always not. This is
wrong however: we allow CASE expressions in which some arms return a set
of some type and others just return a scalar of that type. There may be
other examples as well. To fix, replace the run-time test of whether an
argument returned a set with a static precheck (expression_returns_set).
This adds a little bit of query startup overhead, but it seems barely
measurable.
Per bug #8228 from David Johnston. This has been broken since 8.0,
so patch all supported branches.
If pause_at_recovery_target is set, recovery pauses *before* applying the
target record, even if recovery_target_inclusive is set. If you then
continue with pg_xlog_replay_resume(), it will apply the target record
before ending recovery. In other words, if you log in while it's paused
and verify that the database looks OK, ending recovery changes its state
again, possibly destroying data that you were trying to salvage with PITR.
Backpatch to 9.1, this has been broken since pause_at_recovery_target was
added.
Fix bug in determining when recovery has reached consistency.
When starting WAL replay from an online checkpoint, the last replayed WAL
record variable was initialized using the checkpoint record's location, even
though the records between the REDO location and the checkpoint record had
not been replayed yet. That was noted as "slightly confusing" but harmless
in the comment, but in some cases, it fooled CheckRecoveryConsistency to
incorrectly conclude that we had already reached a consistent state
immediately at the beginning of WAL replay. That caused the system to accept
read-only connections in hot standby mode too early, and also PANICs with
message "WAL contains references to invalid pages".
Fix by initializing the variables to the REDO location instead.
In 9.2 and above, change CheckRecoveryConsistency() to use
lastReplayedEndRecPtr variable when checking if backup end location has
been reached. It was inconsistently using EndRecPtr for that check, but
lastReplayedEndRecPtr when checking min recovery point. It made no
difference before this patch, because in all the places where
CheckRecoveryConsistency was called the two variables were the same, but
it was always an accident waiting to happen, and would have been wrong
after this patch anyway.
Report and analysis by Tomonari Katsumata, bug #8686. Backpatch to 9.0,
where hot standby was introduced.
Tom Lane [Tue, 7 Jan 2014 20:25:19 +0000 (15:25 -0500)]
Fix LATERAL references to target table of UPDATE/DELETE.
I failed to think much about UPDATE/DELETE when implementing LATERAL :-(.
The implemented behavior ended up being that subqueries in the FROM or
USING clause (respectively) could access the update/delete target table as
though it were a lateral reference; which seems fine if they said LATERAL,
but certainly ought to draw an error if they didn't. Fix it so you get a
suitable error when you omit LATERAL. Per report from Emre Hasegeli.
Magnus Hagander [Tue, 7 Jan 2014 16:47:52 +0000 (17:47 +0100)]
Move permissions check from do_pg_start_backup to pg_start_backup
And the same for do_pg_stop_backup. The code in do_pg_* is not allowed
to access the catalogs. For manual base backups, the permissions
check can be handled in the calling function, and for streaming
base backups only users with the required permissions can get past
the authentication step in the first place.
Reported by Antonin Houska, diagnosed by Andres Freund
Magnus Hagander [Tue, 7 Jan 2014 16:04:40 +0000 (17:04 +0100)]
Avoid including tablespaces inside PGDATA twice in base backups
If a tablespace was created inside PGDATA, it was backed up both as part
of the PGDATA backup and as the backup of the tablespace. Avoid this
by skipping any directory inside PGDATA that contains one of the active
tablespaces.
Tom Lane [Sat, 4 Jan 2014 21:05:20 +0000 (16:05 -0500)]
Fix translatability markings in psql, and add defenses against future bugs.
Several previous commits have added columns to various \d queries without
updating their translate_columns[] arrays, leading to potentially incorrect
translations in NLS-enabled builds. Offenders include commit 893686762
(added prosecdef to \df+), c9ac00e6e (added description to \dc+) and 3b17efdfd (added description to \dC+). Fix those cases back to 9.3 or
9.2 as appropriate.
Since this is evidently more easily missed than one would like, in HEAD
also add an Assert that the supplied array is long enough. This requires
an API change for printQuery(), so it seems inappropriate for back
branches, but presumably all future changes will be tested in HEAD anyway.
In HEAD and 9.3, also clean up a whole lot of sloppiness in the emitted
SQL for \dy (event triggers): lack of translatability due to failing to
pass words-to-be-translated through gettext_noop(), inadequate schema
qualification, and sloppy formatting resulting in unnecessarily ugly
-E output.
Peter Eisentraut and Tom Lane, per bug #8702 from Sergey Burladyan
Alvaro Herrera [Thu, 2 Jan 2014 21:17:29 +0000 (18:17 -0300)]
Handle 5-char filenames in SlruScanDirectory
Original users of slru.c were all producing 4-digit filenames, so that
was all that that code was prepared to handle. Changes to multixact.c
in the course of commit 0ac5ad5134f made pg_multixact/members create
5-digit filenames once a certain threshold was reached, which
SlruScanDirectory wasn't prepared to deal with; in particular,
5-digit-name files were not removed during truncation. Change that
routine to make it aware of those files, and have it process them just
like any others.
Right now, some pg_multixact/members directories will contain a mixture
of 4-char and 5-char filenames. A future commit is expected to fix things
so that each slru.c user declares the correct maximum width for the
files it produces, to avoid such unsightly mixtures.
Noticed while investigating bug #8673 reported by Serge Negodyuck.
Alvaro Herrera [Thu, 2 Jan 2014 21:17:07 +0000 (18:17 -0300)]
Wrap multixact/members correctly during extension
In the 9.2 code for extending multixact/members, the logic was very
simple because the number of entries in a members page was a proper
divisor of 2^32, and thus at 2^32 wraparound the logic for page switch
was identical to that at any other page boundary. In commit 0ac5ad5134f I
failed to realize this and introduced code that was not able to go over
the 2^32 boundary. Fix that by ensuring that when we reach the last
page of the last segment we correctly zero the initial page of the
initial segment, using correct uint32-wraparound-safe arithmetic.
Noticed while investigating bug #8673 reported by Serge Negodyuck, as
diagnosed by Andres Freund.
Alvaro Herrera [Thu, 2 Jan 2014 21:16:54 +0000 (18:16 -0300)]
Handle wraparound during truncation in multixact/members
In pg_multixact/members, relying on modulo-2^32 arithmetic for
wraparound handling doesn't work all that well. Because we don't
explicitly track wraparound of the allocation counter for members, it
is possible that the "live" area exceeds 2^31 entries; trying to remove
SLRU segments that are "old" according to the original logic might lead
to removal of segments still in use. To fix, have the truncation
routine use a tailored SlruScanDirectory callback that keeps track of
the live area in actual use; that way, when the live range exceeds 2^31
entries, the oldest segments still live will not get removed untimely.
This new SlruScanDir callback needs to take care not to remove segments
that are "in the future": if new SLRU segments appear while the
truncation is ongoing, make sure we don't remove them. This requires
examination of shared memory state to recheck for false positives, but
testing suggests that this doesn't cause a problem. The original coding
didn't suffer from this pitfall because segments created when truncation
is running are never considered to be removable.
Per Andres Freund's investigation of bug #8673 reported by Serge
Negodyuck.
Tom Lane [Mon, 30 Dec 2013 19:00:05 +0000 (14:00 -0500)]
Fix broken support for event triggers as extension members.
CREATE EVENT TRIGGER forgot to mark the event trigger as a member of its
extension, and pg_dump didn't pay any attention anyway when deciding
whether to dump the event trigger. Per report from Moshe Jacobson.
Given the obvious lack of testing here, it's rather astonishing that
ALTER EXTENSION ADD/DROP EVENT TRIGGER work, but they seem to.