Robert Haas [Fri, 23 Jan 2015 16:58:31 +0000 (11:58 -0500)]
Don't use abbreviated keys for the final merge pass.
When we write tuples out to disk and read them back in, the abbreviated
keys become non-abbreviated, because the readtup routines don't know
anything about abbreviation. But without this fix, the rest of the
code still thinks the abbreviation-aware comparator should be used,
so chaos ensues.
Report by Andrew Gierth; patch by Peter Geoghegan.
Tom Lane [Thu, 22 Jan 2015 23:10:47 +0000 (18:10 -0500)]
Prevent duplicate escape-string warnings when using pg_stat_statements.
contrib/pg_stat_statements will sometimes run the core lexer a second time
on submitted statements. Formerly, if you had standard_conforming_strings
turned off, this led to sometimes getting two copies of any warnings
enabled by escape_string_warning. While this is probably no longer a big
deal in the field, it's a pain for regression testing.
To fix, change the lexer so it doesn't consult the escape_string_warning
GUC variable directly, but looks at a copy in the core_yy_extra_type state
struct. Then, pg_stat_statements can change that copy to disable warnings
while it's redoing the lexing.
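For illustration, the kind of call sequence this enables could look roughly
like the sketch below. The struct, field, and function names are assumptions
based on the 9.x core scanner API, not part of this commit.

    /* Hypothetical sketch: re-lex a statement with warnings silenced.
     * scanner_init() copies the GUC values into yyextra, so overriding the
     * copy affects only this lexer instance, never the global settings. */
    core_yy_extra_type yyextra;
    core_yyscan_t      yyscanner;

    yyscanner = scanner_init(query, &yyextra, ScanKeywords, NumScanKeywords);
    yyextra.escape_string_warning = false;  /* suppress duplicate warnings */
    /* ... call core_yylex() in a loop as usual ... */
    scanner_finish(yyscanner);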
It seemed like a good idea to make this happen for all three of the GUCs
consulted by the lexer, not just escape_string_warning. There's not an
immediate use-case for callers to adjust the other two AFAIK, but making
it possible is easy enough and seems like good future-proofing.
Arguably this is a bug fix, but there doesn't seem to be enough interest to
justify a back-patch. We'd not be able to back-patch exactly as-is anyway,
for fear of breaking ABI compatibility of the struct. (We could perhaps
back-patch the addition of only escape_string_warning by adding it at the
end of the struct, where there's currently alignment padding space.)
Alvaro Herrera [Thu, 22 Jan 2015 20:01:09 +0000 (17:01 -0300)]
Tweak BRIN minmax operator class
In the union support proc, we were not checking the hasnulls flag of
value A early enough, so it could be skipped if the "allnulls" flag in
value B is set. Also, a check on the allnulls flag of value B was
redundant, so remove it.
Also change inet_minmax_ops to not be the default opclass for type inet,
as a future inclusion operator class would be more useful and it's
pretty difficult to change default opclass for a datatype later on.
(There is no catversion bump for this catalog change; this shouldn't be
a problem.)
Extracted from a larger patch to add an "inclusion" operator class.
The split between which things need to happen in the C-locale case and
which need to happen in the locale-aware case was a few bricks short
of a load. Try to fix that.
Bruce Momjian [Thu, 22 Jan 2015 17:36:34 +0000 (12:36 -0500)]
Adjust ACL owners for REASSIGN and ALTER OWNER TO
When REASSIGN and ALTER OWNER TO are used, both the object owner and ACL
list should be changed from the old owner to the new owner. This patch
fixes types, foreign data wrappers, and foreign servers to change their
ACL list properly; they already changed owners properly.
Robert Haas [Thu, 22 Jan 2015 16:58:58 +0000 (11:58 -0500)]
More fixes for abbreviated keys infrastructure.
First, when LC_COLLATE = C, bttext_abbrev_convert should use memcpy()
rather than strxfrm() to construct the abbreviated key, because the
authoritative comparator uses memcpy(). If we do anything else here,
we might get inconsistent answers, and the buildfarm says this risk
is not theoretical. It should be faster this way, too.
Second, while I'm looking at bttext_abbrev_convert, convert a needless
use of goto that implements a loop into an actual loop.
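A minimal sketch of the first point, with illustrative variable names rather
than the actual bttext_abbrev_convert code:

    /* With LC_COLLATE = C the authoritative comparator is a bytewise
     * comparison, so the abbreviated key must be built with memcpy() of the
     * leading bytes rather than with strxfrm(). */
    if (collate_is_c)
        memcpy(pres, original_bytes, Min(len, sizeof(Datum)));
    else
        abbreviate_via_strxfrm(pres, text_value);   /* hypothetical helper */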
Robert Haas [Thu, 22 Jan 2015 15:46:42 +0000 (10:46 -0500)]
Heavily refactor btsortsupport_worker.
Prior to commit 4ea51cdfe85ceef8afabceb03c446574daa0ac23, this function
only had one job, which was to decide whether we could avoid trampolining
through the fmgr layer when performing sort comparisons. As of that
commit, it has a second job, which is to decide whether we can use
abbreviated keys. Unfortunately, those two tasks are somewhat intertwined
in the existing coding, which is likely why neither Peter Geoghegan nor
I noticed prior to commit that this calls pg_newlocale_from_collation() in
cases where it didn't previously. The buildfarm noticed, though.
To fix, rewrite the logic so that the decision as to which comparator to
use is more cleanly separated from the decision about abbreviation.
Robert Haas [Wed, 21 Jan 2015 01:32:21 +0000 (20:32 -0500)]
Disable abbreviated keys on Windows.
Most of the Windows buildfarm members (bowerbird, hamerkop, currawong,
jacana, brolga) are unhappy with yesterday's abbreviated keys patch,
although there are some (narwhal, frogmouth) that seem OK with it.
Since there's no obvious pattern to explain why some are working and
others are failing, just disable this across-the-board on Windows for
now. This is a bit unfortunate since the optimization will be a big
win in some cases, but we can't leave the buildfarm broken.
Tom Lane [Tue, 20 Jan 2015 04:44:19 +0000 (23:44 -0500)]
In pg_regress, remove the temporary installation upon successful exit.
This results in a very substantial reduction in disk space usage during
"make check-world", since that sequence involves creation of numerous
temporary installations. It should also help a bit in the buildfarm, even
though the buildfarm script doesn't create as many temp installations,
because the current script misses deleting some of them; and anyway it
seems better to do this once in one place rather than expecting that
script to get it right every time.
In 9.4 and HEAD, also undo the unwise choice in commit b1aebbb6a86e96d7
to report strerror(errno) after a rmtree() failure. rmtree has already
reported that, possibly for multiple failures with distinct errnos; and
what's more, by the time it returns there is no good reason to assume
that errno still reflects the last reportable error. So reporting errno
here is at best redundant and at worst badly misleading.
Back-patch to all supported branches, so that future revisions of the
buildfarm script can rely on this behavior.
Tom Lane [Tue, 20 Jan 2015 04:01:33 +0000 (23:01 -0500)]
Adjust "pgstat wait timeout" message to be a translatable LOG message.
Per discussion, change the log level of this message to be LOG not WARNING.
The main point of this change is to avoid causing buildfarm run failures
when the stats collector is exceptionally slow to respond, which it not
infrequently is on some of the smaller/slower buildfarm members.
This change does lose notice to an interactive user when his stats query
is looking at out-of-date stats, but the majority opinion (not necessarily
that of yours truly) is that WARNING messages would probably not get
noticed anyway on heavily loaded production systems. A LOG message at
least ensures that the problem is recorded somewhere where bulk auditing
for the issue is possible.
Also, instead of an untranslated "pgstat wait timeout" message, provide
a translatable and hopefully more understandable message "using stale
statistics instead of current ones because stats collector is not
responding". The original text was written hastily under the assumption
that it would never really happen in practice, which we now know to be
unduly optimistic.
Back-patch to all active branches, since we've seen the buildfarm issue
in all branches.
Andres Freund [Mon, 19 Jan 2015 17:28:11 +0000 (18:28 +0100)]
Fix various shortcomings of the new PrivateRefCount infrastructure.
As noted by Tom Lane the improvements in 4b4b680c3d6 had the problem
that in some situations we searched, entered and modified entries in
the private refcount hash while holding a spinlock. I had tried to
keep the logic entirely local to PinBuffer_Locked(), but that's not
really possible given it's called with a spinlock held...
Besides being disadvantageous from a performance point of view, this
also has problems with error handling safety. If we failed inserting
an entry into the hashtable due to an out of memory error, we'd error
out with a held spinlock. Not good.
Change the way private refcounts are manipulated: Before a buffer can
be tracked an entry has to be reserved using
ReservePrivateRefCountEntry(); then, if an entry is not found using
GetPrivateRefCountEntry(), it can be entered with
NewPrivateRefCountEntry().
Also take advantage of the fact that PinBuffer_Locked() currently is
never called for buffers that already have been pinned by the current
backend and don't search the private refcount entries for preexisting
local pins. That results in a small, but measurable, performance
improvement.
Additionally make ReleaseBuffer() always call UnpinBuffer() for shared
buffers. That avoids duplicating work in an eventual UnpinBuffer()
call that already has been done in ReleaseBuffer() and also saves some
code.
Robert Haas [Mon, 19 Jan 2015 20:20:31 +0000 (15:20 -0500)]
Use abbreviated keys for faster sorting of text datums.
This commit extends the SortSupport infrastructure to allow operator
classes the option to provide abbreviated representations of Datums;
in the case of text, we abbreviate by taking the first few characters
of the strxfrm() blob. If the abbreviated comparison is insufficient
to resolve the comparison, we fall back on the normal comparator.
This can be much faster than the old way of doing sorting if the
first few bytes of the string are usually sufficient to resolve the
comparison.
There is the potential for a performance regression if all of the
strings to be sorted are identical for the first 8+ characters and
differ only in later positions; therefore, the SortSupport machinery
now provides an infrastructure to abort the use of abbreviation if
it appears that abbreviation is producing comparatively few distinct
keys. HyperLogLog, a streaming cardinality estimator, is included in
this commit and used to make that determination for text.
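To make the technique concrete, here is a small standalone C sketch (not
PostgreSQL code) of abbreviated-key comparison for strings: keep an 8-byte
prefix of the strxfrm() blob, compare prefixes first, and fall back to the
authoritative strcoll() comparison only when the prefixes tie.

    #include <stdlib.h>
    #include <string.h>

    /* Assumes the caller has already done setlocale(LC_COLLATE, ...). */
    typedef struct
    {
        char        abbrev[8];      /* prefix of the strxfrm() blob */
        const char *full;           /* original string, for tie-breaking */
    } AbbrevItem;

    static void
    make_abbrev(AbbrevItem *it, const char *s)
    {
        size_t  need = strxfrm(NULL, s, 0) + 1;
        char   *blob = malloc(need);
        size_t  n;

        strxfrm(blob, s, need);
        n = strlen(blob);
        memset(it->abbrev, 0, sizeof(it->abbrev));
        memcpy(it->abbrev, blob, n < sizeof(it->abbrev) ? n : sizeof(it->abbrev));
        free(blob);
        it->full = s;
    }

    static int
    cmp_abbrev(const AbbrevItem *a, const AbbrevItem *b)
    {
        int r = memcmp(a->abbrev, b->abbrev, sizeof(a->abbrev));

        if (r != 0)
            return r;                       /* abbreviation resolved it cheaply */
        return strcoll(a->full, b->full);   /* authoritative comparison */
    }

If most input strings share the same 8-byte prefix, the abbreviated
comparison resolves almost nothing, which is exactly the situation the
HyperLogLog-based abort logic described above is meant to detect.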
Tom Lane [Sun, 18 Jan 2015 22:04:11 +0000 (17:04 -0500)]
Fix ancient thinko in default table rowcount estimation.
The code used sizeof(ItemPointerData) where sizeof(ItemIdData) is correct,
since we're trying to account for a tuple's line pointer. Spotted by
Tomonari Katsumata (bug #12584).
Although this mistake is of very long standing, no back-patch, since it's
a relatively harmless error and changing it would risk changing default
planner behavior in stable branches. (I don't see any change in regression
test outputs here, but the buildfarm may think differently.)
Andres Freund [Sat, 17 Jan 2015 12:00:42 +0000 (13:00 +0100)]
Replace walsender's latch with the general shared latch.
Relying on the normal shared latch simplifies interrupt/signal
handling because we can rely on all signal handlers setting the proc
latch. That in turn allows us to avoid the use of
ImmediateInterruptOK, which arguably isn't correct because
WaitLatchOrSocket isn't declared to be immediately interruptible.
Also change sections that wait on the walsender's latch to notice
interrupts quicker/more reliably and make them more consistent with
each other.
This is part of a larger "get rid of ImmediateInterruptOK" series.
Tom Lane [Fri, 16 Jan 2015 23:18:52 +0000 (18:18 -0500)]
Show sort ordering options in EXPLAIN output.
Up to now, EXPLAIN has contented itself with printing the sort expressions
in a Sort or Merge Append plan node. This patch improves that by
annotating the sort keys with COLLATE, DESC, USING, and/or NULLS FIRST/LAST
whenever nondefault sort ordering options are used. The output is now a
reasonably close approximation of an ORDER BY clause equivalent to the
plan's ordering.
Marius Timmer, Lukas Kreft, and Arne Scheffer; reviewed by Mike Blackwell.
Some additional hacking by me.
Advance backend's advertised xmin more aggressively.
Currently, a backend will reset its PGXACT->xmin value when it doesn't
have any registered snapshots left. That covered the common case that a
transaction in read committed mode runs several queries, one after each
other, as there would be no snapshots active between those queries.
However, if you hold cursors across each of the queries, we didn't get a
chance to reset xmin.
To make that better, keep all the registered snapshots in a pairing heap,
ordered by xmin so that it's always quick to find the snapshot with the
smallest xmin. That allows us to advance PGXACT->xmin whenever the oldest
snapshot is deregistered, even if there are others still active.
Per discussion originally started by Jeff Davis back in 2009 and more
recently by Robert Haas.
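A standalone sketch of the idea (not the actual snapmgr.c/procarray.c code,
and ignoring xid wraparound): keep registered snapshot xmins in a min-heap
so the smallest is always at the root, and re-advertise it whenever the
oldest snapshot goes away.

    #include <stddef.h>
    #include <stdint.h>

    typedef uint32_t TransactionId;
    #define InvalidTransactionId 0

    static TransactionId reg_xmins[1024];   /* xmins of registered snapshots */
    static size_t        nreg;
    static TransactionId advertised_xmin = InvalidTransactionId;

    static void
    sift_down(size_t i)
    {
        for (;;)
        {
            size_t l = 2 * i + 1, r = l + 1, m = i;

            if (l < nreg && reg_xmins[l] < reg_xmins[m]) m = l;
            if (r < nreg && reg_xmins[r] < reg_xmins[m]) m = r;
            if (m == i)
                break;
            TransactionId tmp = reg_xmins[i];
            reg_xmins[i] = reg_xmins[m];
            reg_xmins[m] = tmp;
            i = m;
        }
    }

    /* Deregistering the oldest snapshot (assumes nreg > 0) lets the
     * advertised xmin advance immediately, even while other, younger
     * snapshots stay registered.  Removing a non-root entry would leave
     * advertised_xmin unchanged. */
    static void
    deregister_oldest(void)
    {
        reg_xmins[0] = reg_xmins[--nreg];
        sift_down(0);
        advertised_xmin = (nreg > 0) ? reg_xmins[0] : InvalidTransactionId;
    }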
Tom Lane [Fri, 16 Jan 2015 18:28:30 +0000 (13:28 -0500)]
Improve new caching logic in tbm_add_tuples().
For no significant extra complexity, we can cache knowledge that the
target page is lossy, and save a hash_search per iteration in that
case as well. This probably makes little difference, since the extra
rechecks that must occur when pages are lossy are way more expensive
than anything we can save here ... but we might as well do it if we're
going to cache anything.
Andres Freund [Fri, 16 Jan 2015 16:47:59 +0000 (17:47 +0100)]
Make tbm_add_tuples more efficient by caching the last accessed page.
When adding a large number of tuples to a TID bitmap using
tbm_add_tuples(), sometimes a lot of time was spent looking up a page's
entry in the bitmap's internal hashtable.
Improve efficiency by caching the last accessed page, while iterating
over the passed in tuples, hoping consecutive tuples will often be on
the same page. In many cases that's a good bet, and in the rest the
added overhead isn't big.
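The caching pattern, sketched with names that are illustrative rather than
the exact tidbitmap.c symbols:

    /* Remember the page entry used for the previous TID; consecutive TIDs
     * usually fall on the same heap page, so most iterations can skip the
     * hashtable lookup entirely. */
    PagetableEntry *cached_page = NULL;
    BlockNumber     cached_blk = InvalidBlockNumber;
    int             i;

    for (i = 0; i < ntids; i++)
    {
        BlockNumber blk = ItemPointerGetBlockNumber(&tids[i]);

        if (cached_page == NULL || blk != cached_blk)
        {
            cached_page = tbm_get_pageentry(tbm, blk);  /* hash lookup */
            cached_blk = blk;
        }
        /* ... set the bit for this tuple's offset in cached_page ... */
    }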
Another attempt at fixing Windows Norwegian locale.
Previous fix mapped "Norwegian (Bokmål)" locale, which contains a non-ASCII
character, to the pure ASCII alias "norwegian-bokmal". However, it turns
out that more recent versions of the CRT library, in particular MSVCR110
(Visual Studio 2012), changed the behaviour of setlocale() so that if
you pass "norwegian-bokmal" to setlocale, it returns "Norwegian_Norway".
That meant trouble, when setlocale(..., NULL) first returned
"Norwegian (Bokmål)_Norway", which we mapped to "norwegian-bokmal_Norway",
but another call to setlocale(..., "norwegian-bokmal_Norway") returned
"Norwegian_Norway". That caused PostgreSQL to think that they are different
locales, and therefore not compatible. That caused initdb to fail at
CREATE DATABASE.
Older CRT versions seem to accept "Norwegian_Norway" too, so change the
mapping to return "Norwegian_Norway" instead of "norwegian-bokmal".
Backpatch to 9.2 like the previous attempt. We haven't made a release that
includes the previous fix yet, so we don't need to worry about changing the
locale of existing clusters from "norwegian-bokmal" to "Norwegian_Norway".
(Doing any mapping like this at all requires changing the locale of
existing databases; the release notes need to include instructions for
that).
Noah Misch [Fri, 16 Jan 2015 06:27:31 +0000 (01:27 -0500)]
Update "pg_regress --no-locale" for Darwin and Windows.
Commit 894459e59ffa5c7fee297b246c17e1f72564db1d revealed this option to
be broken for NLS builds on Darwin, but "make -C contrib/unaccent check"
and the buildfarm client rely on it. Fix that configuration by
redefining the option to imply LANG=C on Darwin. In passing, use LANG=C
instead of LANG=en on Windows; since only postmaster startup uses that
value, testers are unlikely to notice the change. Back-patch to 9.0,
like the predecessor commit.
Tom Lane [Thu, 15 Jan 2015 23:52:22 +0000 (18:52 -0500)]
Fix use-of-already-freed-memory problem in EvalPlanQual processing.
Up to now, the "child" executor state trees generated for EvalPlanQual
rechecks have simply shared the ResultRelInfo arrays used for the original
execution tree. However, this leads to dangling-pointer problems, because
ExecInitModifyTable() is all too willing to scribble on some fields of the
ResultRelInfo(s) even when it's being run in one of those child trees.
This trashes those fields from the perspective of the parent tree, because
even if the generated subtree is logically identical to what was in use in
the parent, it's in a memory context that will go away when we're done
with the child state tree.
We do however want to share information in the direction from the parent
down to the children; in particular, fields such as es_instrument *must*
be shared or we'll lose the stats arising from execution of the children.
So the simplest fix is to make a copy of the parent's ResultRelInfo array,
but not copy any fields back at end of child execution.
Per report from Manuel Kniep. The added isolation test is based on his
example. In an unpatched memory-clobber-enabled build it will reliably
fail with "ctid is NULL" errors in all branches back to 9.1, as a
consequence of junkfilter->jf_junkAttNo being overwritten with $7f7f.
This test cannot be run as-is before that for lack of WITH syntax; but
I have no doubt that some variant of this problem can arise in older
branches, so apply the code change all the way back.
Tom Lane [Thu, 15 Jan 2015 18:39:33 +0000 (13:39 -0500)]
Rearrange explain.c's API so callers need not embed sizeof(ExplainState).
The folly of the previous arrangement was just demonstrated: there's no
convenient way to add fields to ExplainState without breaking ABI, even
if callers have no need to touch those fields. Since we might well need
to do that again someday in back branches, let's change things so that
only explain.c has to have sizeof(ExplainState) compiled into it. This
costs one extra palloc() per EXPLAIN operation, which is surely pretty
negligible.
Tom Lane [Thu, 15 Jan 2015 18:18:12 +0000 (13:18 -0500)]
Improve performance of EXPLAIN with large range tables.
As of 9.3, ruleutils.c goes to some lengths to ensure that table and column
aliases used in its output are unique. Of course this takes more time than
was required before, which in itself isn't fatal. However, EXPLAIN was set
up so that recalculation of the unique aliases was repeated for each
subexpression printed in a plan. That results in O(N^2) time and memory
consumption for large plan trees, which did not happen in older branches.
Fortunately, the expensive work is the same across a whole plan tree,
so there is no need to repeat it; we can do most of the initialization
just once per query and re-use it for each subexpression. This buys
back most (not all) of the performance loss since 9.2.
We need an extra ExplainState field to hold the precalculated deparse
context. That's no problem in HEAD, but in the back branches, expanding
sizeof(ExplainState) seems risky because third-party extensions might
have local variables of that struct type. So, in 9.4 and 9.3, introduce
an auxiliary struct to keep sizeof(ExplainState) the same. We should
refactor the APIs to avoid such local variables in future, but that's
material for a separate HEAD-only commit.
Per gripe from Alexey Bashtanov. Back-patch to 9.3 where the issue
was introduced.
Robert Haas [Thu, 15 Jan 2015 14:26:03 +0000 (09:26 -0500)]
pg_standby: Avoid writing one byte beyond the end of the buffer.
Previously, read() might have returned a length equal to the buffer
length, and then the subsequent store to buf[len] would write a
zero-byte one byte past the end. This doesn't seem likely to be
a security issue, but there's some chance it could result in
pg_standby misbehaving.
Spotted by Coverity; patch by Michael Paquier, reviewed by me.
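The general shape of the fix, as a sketch rather than the exact pg_standby
code (MAXLINE and fd are illustrative): reserve one byte for the terminator
so the store to buf[len] can never land past the end of the buffer.

    char    buf[MAXLINE];
    ssize_t len;

    len = read(fd, buf, sizeof(buf) - 1);   /* leave room for the '\0' */
    if (len < 0)
    {
        /* handle the read error */
    }
    else
        buf[len] = '\0';                    /* now always within bounds */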
Andres Freund [Thu, 15 Jan 2015 12:10:24 +0000 (13:10 +0100)]
Blindly try to fix a warning in s_lock.h when compiling with gcc on HPPA.
Depending on compiler settings, the generated warning was
"warning: `S_UNLOCK' redefined".
The hppa spinlock implementation doesn't follow the rules of s_lock.h
and provides a gcc specific implementation outside of the part of
the file that's supposed to do that. It does so to avoid duplication
between the HP compiler and gcc. That unfortunately means that
S_UNLOCK is already defined when the HPPA specific section is reached.
Undefine the generic fallback S_UNLOCK definition inside the HPPA
section. That's far from pretty, but has the big advantage of being
simple. If somebody is interested to fix this in a prettier way...
This presumably got broken in the course of 0709b7ee72.
Andres Freund [Wed, 14 Jan 2015 17:45:22 +0000 (18:45 +0100)]
Add a default local latch for use in signal handlers.
To do so, move InitializeLatchSupport() into the new common process
initialization functions, and add a new global variable MyLatch.
MyLatch is usable as soon as InitPostmasterChild() has been called
(i.e. very early during startup). Initially it points to a process
local latch that exists in all processes. InitProcess/InitAuxiliaryProcess
then replaces that local latch with PGPROC->procLatch. During shutdown
the reverse happens.
This is primarily advantageous for two reasons: For one it simplifies
dealing with the shared process latch, especially in signal handlers,
because instead of having to check for MyProc, MyLatch can be used
unconditionally. For another, a later patch that makes FE/BE
communication use latches can now rely on the existence of a latch,
even before having gone through InitProcess.
Tom Lane [Wed, 14 Jan 2015 16:08:13 +0000 (11:08 -0500)]
Allow CFLAGS from configure's environment to override automatic CFLAGS.
Previously, configure would add any switches that it chose of its own
accord to the end of the user-specified CFLAGS string. Since most
compilers process these left-to-right, this meant that configure's choices
would override the user-specified flags in case of conflicts. We'd rather
that worked the other way around, so adjust the logic to put the user's
string at the end not the beginning.
There does not seem to be a need for a similar behavior change for CPPFLAGS
or LDFLAGS: in those, the earlier switches tend to win (think -I or -L
behavior) so putting the user's string at the front is fine.
Backpatch to 9.4 but not earlier. I'm not planning to run buildfarm member
guar on older branches, and it seems a bit risky to change this behavior
in long-stable branches.
Tom Lane [Wed, 14 Jan 2015 03:52:11 +0000 (22:52 -0500)]
Remove duplicate specification of -Ae for HP-UX C compiler.
Autoconf has known about automatically selecting -Ae when needed for
quite some time now, so remove the redundant addition in template/hpux.
Noted while setting up buildfarm member pademelon.
Andres Freund [Tue, 13 Jan 2015 12:12:37 +0000 (13:12 +0100)]
Commonalize process startup code.
Move common code that was duplicated in every postmaster child and
every standalone process into two functions in miscinit.c. Not only does
that already result in a fair amount of net code reduction but it also
makes it much easier to remove more duplication in the future. The
prime motivation wasn't code deduplication though, but easier addition
of new common code.
Andres Freund [Tue, 13 Jan 2015 20:02:47 +0000 (21:02 +0100)]
Make logging_collector=on work with non-Windows EXEC_BACKEND again.
Commit b94ce6e80 reordered postmaster's startup sequence so that the
tempfile directory is only cleaned up after all the necessary state
for pg_ctl is collected. Unfortunately the chosen location is after
the syslogger has been started; which normally is fine, except for
!WIN32 EXEC_BACKEND builds, which pass information to children via
files in the temp directory.
Move the call to RemovePgTempFiles() to just before the syslogger is
started. That's the first child we fork.
Luckily EXEC_BACKEND is pretty much only used by end users on Windows,
which has a separate method to pass information to children. That
means the real world impact of this bug is very small.
Andres Freund [Tue, 13 Jan 2015 11:58:43 +0000 (12:58 +0100)]
Add barriers to the latch code.
Since their introduction latches have required barriers in SetLatch
and ResetLatch - but when they were introduced there wasn't any
barrier abstraction. Instead latches were documented to rely on the
callsites to provide barrier semantics.
Now that the barrier support looks halfway complete, add the necessary
barriers to both latch implementations.
Also remove a now superfluous lock acquisition from syncrep.c and a
superfluous (and insufficient) barrier from freelist.c. There might be
other cases that can now be simplified, but those are the only ones
I've seen on a quick scan.
We might want to backpatch this at some later point, but right now the
barrier infrastructure in the backbranches isn't totally on par with
master.
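Conceptually, the barriers enforce the ordering sketched below; the internals
shown for SetLatch()/ResetLatch() are hypothetical and only illustrate where
the fences sit relative to the is_set flag.

    /* In SetLatch(): make everything the caller wrote before setting the
     * latch visible before the owner can observe is_set. */
    pg_memory_barrier();
    latch->is_set = true;
    /* ... wake the owner if it is sleeping ... */

    /* In ResetLatch(): clear the flag first, then fence, so the caller's
     * subsequent re-check of shared state cannot be reordered before the
     * reset and miss a concurrent SetLatch(). */
    latch->is_set = false;
    pg_memory_barrier();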
Andres Freund [Tue, 13 Jan 2015 11:58:43 +0000 (12:58 +0100)]
Allow latches to wait for socket writability without waiting for readability.
So far WaitLatchOrSocket() required callers to pass in
WL_SOCKET_READABLE, as that alone was used to indicate error
conditions, like EOF. Waiting only for WL_SOCKET_WRITEABLE would have
meant busy-waiting upon socket errors.
Adjust the API to signal errors by returning the socket as readable,
writable or both, depending on WL_SOCKET_READABLE/WL_SOCKET_WRITEABLE
being specified. It would arguably be nicer to return WL_SOCKET_ERROR
but that's not possible on all platforms and would probably also result in
more complex callsites.
This previously had explicitly been forbidden in e42a21b9e6c9, as
there was no strong use case at that point. We now are looking into
making FE/BE communication use latches, so changing this makes sense.
There also are some portability concerns because there are cases of older
platforms where select(2) is known to, in violation of POSIX, not
return a socket as writable after the peer has closed it. So far the
platforms where that's the case provide a working poll(2). If we find
one where that's not the case, we'll need to add a workaround for that
platform.
Discussion: 20140927191243.GD5423@alap3.anarazel.de
Reviewed-By: Heikki Linnakangas, Noah Misch
Tom Lane [Mon, 12 Jan 2015 20:13:28 +0000 (15:13 -0500)]
Avoid unexpected slowdown in vacuum regression test.
I noticed the "vacuum" regression test taking really significantly longer
than it used to on a slow machine. Investigation pointed the finger at
commit e415b469b33ba328765e39fd62edcd28f30d9c3c, which added creation of
an index using an extremely expensive index function. That function was
evidently meant to be applied only twice ... but the test re-used an
existing test table, which up till a couple lines before that had had over
two thousand rows. Depending on timing of the concurrent regression tests,
the intervening VACUUMs might have been unable to remove those
recently-dead rows, and then the index build would need to create index
entries for them too, leading to the wrap_do_analyze() function being
executed 2000+ times not twice. Avoid this by using a different table
that is guaranteed to have only the intended two rows in it.
Back-patch to 9.0, like the commit that created the problem.
Alvaro Herrera [Mon, 12 Jan 2015 18:32:48 +0000 (15:32 -0300)]
Fix get_object_address argument type for extension statement
Commit 3f88672a4 neglected to update the AlterExtensionContentsStmt
production in the grammar to use TypeName to represent types when
passing objects to get_object_address.
Tom Lane [Mon, 12 Jan 2015 17:40:09 +0000 (12:40 -0500)]
Use correct text domain for errcontext() appearing within ereport().
The mechanism added in commit dbdf9679d7d61b03a3bf73af9b095831b7010eb5
for associating the correct translation domain with errcontext strings
potentially fails in cases where errcontext() is used within an ereport()
macro. Such usage was not originally envisioned for errcontext(), but we
do have a few places that do it. In this situation, the intended comma
expression becomes just a couple of arguments to errfinish(), which the
compiler might choose to evaluate right-to-left.
Fortunately, in such cases the textdomain for the errcontext string must
be the same as for the surrounding ereport. So we can fix this by letting
errstart initialize context_domain along with domain; then it will have
the correct value no matter which order the calls occur in. (Note that
error stack callback functions are not invoked until errfinish, so normal
usage of errcontext won't affect what happens for errcontext calls within
the ereport macro.)
In passing, make sure that errcontext calls within the main backend set
context_domain to something non-NULL. This isn't a live bug because
NULL would select the current textdomain() setting which should be the
right thing anyway --- but it seems better to handle this completely
consistently with the regular domain field.
Per report from Dmitry Voronin. Backpatch to 9.3; before that, there
wasn't any attempt to ensure that errcontext strings were translated
in an appropriate domain.
Stephen Frost [Mon, 12 Jan 2015 15:13:18 +0000 (10:13 -0500)]
Skip dead backends in MinimumActiveBackends
Back in ed0b409, PGPROC was split and moved to static variables in
procarray.c, with procs in ProcArrayStruct replaced by an array of
integers representing process numbers (pgprocnos), with -1 indicating a
dead process which has yet to be removed. Access to procArray is
generally done under ProcArrayLock and therefore most code does not have
to concern itself with -1 entries.
However, MinimumActiveBackends intentionally does not take
ProcArrayLock, which means it has to be extra careful when accessing
procArray. Prior to ed0b409, this was handled by checking for a NULL
in the pointer array, but that check was no longer valid after the
split. Coverity pointed out that the check could never happen and so
it was removed in 5592eba. That didn't make anything worse, but it
didn't fix the issue either.
The correct fix is to check for pgprocno == -1 and skip over that entry
if it is encountered.
Back-patch to 9.2, since there can be attempts to access the arrays
prior to their start otherwise. Note that the changes prior to 9.4 will
look a bit different due to the change in 5592eba.
Note that MinimumActiveBackends only returns a bool for heuristic
purposes and any pre-array accesses are strictly read-only, so there
is no security implication, and the lack of field complaints indicates
it's very unlikely to run into issues due to this.
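The shape of the fix, with slightly simplified, illustrative names:

    int     index;

    for (index = 0; index < arrayP->numProcs; index++)
    {
        int         pgprocno = arrayP->pgprocnos[index];
        PGPROC     *proc;

        /* Dead entries stay marked -1 until removed, and we scan without
         * ProcArrayLock, so we can legitimately see them here: skip them. */
        if (pgprocno == -1)
            continue;

        proc = &allProcs[pgprocno];
        /* ... heuristic activity checks on 'proc' ... */
    }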
Tom Lane [Sun, 11 Jan 2015 18:28:26 +0000 (13:28 -0500)]
Fix portability breakage in pg_dump.
Commit 0eea8047bf0e15b402b951e383e39236bdfe57d5 introduced some overly
optimistic assumptions about what could be in a local struct variable's
initializer. (This might in fact be valid code according to C99, but I've
got at least one pre-C99 compiler that falls over on those nonconstant
address expressions.) There is no reason whatsoever for main()'s workspace
to not be static, so revert long_options[] to a static and make the
DumpOptions struct static as well.
Tom Lane [Sun, 11 Jan 2015 17:52:37 +0000 (12:52 -0500)]
Remove configure test for nonstandard variants of getpwuid_r().
We had code that supposed that some platforms might offer a nonstandard
version of getpwuid_r() with only four arguments. However, the 5-argument
definition has been standardized at least since the Single Unix Spec v2,
which is our normal reference for what's portable across all Unix-oid
platforms. (What's more, this wasn't the only pre-standardization version
of getpwuid_r(); my old HPUX 10.20 box has still another signature.)
So let's just get rid of the now-useless configure step.
Tom Lane [Sun, 11 Jan 2015 17:35:44 +0000 (12:35 -0500)]
Fix libpq's behavior when /etc/passwd isn't readable.
Some users run their applications in chroot environments that lack an
/etc/passwd file. This means that the current UID's user name and home
directory are not obtainable. libpq used to be all right with that,
so long as the database role name to use was specified explicitly.
But commit a4c8f14364c27508233f8a31ac4b10a4c90235a9 broke such cases by
causing any failure of pg_fe_getauthname() to be treated as a hard error.
In any case it did little to advance its nominal goal of causing errors
in pg_fe_getauthname() to be reported better. So revert that and instead
put some real error-reporting code in place. This requires changes to the
APIs of pg_fe_getauthname() and pqGetpwuid(), since the latter had
departed from the POSIX-specified API of getpwuid_r() in a way that made
it impossible to distinguish actual lookup errors from "no such user".
To allow such failures to be reported, while not failing if the caller
supplies a role name, add a second call of pg_fe_getauthname() in
connectOptions2(). This is a tad ugly, and could perhaps be avoided with
some refactoring of PQsetdbLogin(), but I'll leave that idea for later.
(Note that the complained-of misbehavior only occurs in PQsetdbLogin,
not when using the PQconnect functions, because in the latter we will
never bother to call pg_fe_getauthname() if the user gives a role name.)
In passing also clean up the Windows-side usage of GetUserName(): the
recommended buffer size is 257 bytes, the passed buffer length should
be the buffer size not buffer size less 1, and any error is reported
by GetLastError() not errno.
Per report from Christoph Berg. Back-patch to 9.4 where the chroot
failure case was introduced. The generally poor reporting of errors
here is of very long standing, of course, but given the lack of field
complaints about it we won't risk changing these APIs further back
(even though they're theoretically internal to libpq).
Andres Freund [Sun, 11 Jan 2015 00:15:29 +0000 (01:15 +0100)]
Provide a generic fallback for pg_compiler_barrier using an extern function.
If the compiler/arch combination does not provide compiler barriers,
provide a fallback. That fallback simply consists of a call to an
externally defined function. That should guarantee compiler barrier
semantics except for compilers that do inter-translation-unit/global
optimization - those better provide an actual compiler barrier.
Hopefully this fixes Tom's report of linker failures due to
pg_compiler_barrier_impl not being provided.
I'm not backpatching this commit as it builds on the new atomics
infrastructure. If we decide an equivalent fix needs to be
backpatched, I'll do so in a separate commit.
Andres Freund [Sun, 11 Jan 2015 00:06:37 +0000 (01:06 +0100)]
Fix alignment of pg_atomic_uint64 variables on some 32bit platforms.
I failed to recognize that pg_atomic_uint64 wasn't guaranteed to be 8
byte aligned on some 32bit platforms - which it has to be on some
platforms to guarantee the desired atomicity and which we assert.
As this is all compiler specific code anyway we can just rely on
compiler specific tricks to enforce alignment.
I've been unable to find concrete documentation about the version that
introduced the sunpro alignment support, so that might need additional
guards.
I've verified that this works with gcc x86 32bit, but I don't have
access to any other 32bit environment.
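For gcc/clang the compiler-specific trick boils down to an alignment
attribute on the type, roughly as in this sketch (the sunpro and MSVC
spellings differ, and the type name here is illustrative):

    #include <stdint.h>

    /* Force 8-byte alignment so a 64-bit atomic never straddles the
     * boundary required for atomicity on 32-bit platforms. */
    typedef struct pg_atomic_uint64_sketch
    {
        volatile uint64_t value;
    } __attribute__((aligned(8))) pg_atomic_uint64_sketch;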
Alvaro Herrera [Fri, 9 Jan 2015 15:34:25 +0000 (12:34 -0300)]
xlogreader.c: Fix report_invalid_record translatability flag
For some reason I overlooked specifying in GETTEXT_TRIGGERS the right
argument to be read by gettext in 7fcbf6a405ffc12a4546a25b98592ee6733783fc. This
will drop the translation percentages for the backend all the way back
to 9.3 ...
Stephen Frost [Fri, 2 Jan 2015 20:09:39 +0000 (15:09 -0500)]
Move rowsecurity event trigger test
The event trigger test for rowsecurity can cause problems for other
tests which are run in parallel with it. Instead of running that test
in the rowsecurity set, move it to the event_trigger set, which runs
isolated from other tests.
Also reverts 7161b08, which moved rowsecurity into its own test group.
That's no longer necessary, now that the event trigger test is gone from
the rowsecurity set of tests.
Andres Freund [Thu, 8 Jan 2015 11:57:09 +0000 (12:57 +0100)]
Fix logging of pages skipped due to pins during vacuum.
The new logging introduced in 35192f06 made the incorrect assumption
that scan_all vacuums would always wait for buffer pins; but they only
do so if the page actually needs to be frozen.
Fix that inaccuracy by removing the difference in log output based on
scan_all and just always using the same message. I chose to keep the
split log message from the original commit for now; it seems likely
that it'll be of use in the future.
Also merge the line about buffer pins in autovacuum's log output into
the existing "pages: ..." line. It seems odd to have a separate line
about pins, without the "topic: " prefix others have.
Also rename the new 'pinned_pages' variable to 'pinskipped_pages'
because it actually tracks the number of pages that could *not* be
pinned.
Noah Misch [Thu, 8 Jan 2015 03:46:59 +0000 (22:46 -0500)]
On Darwin, refuse postmaster startup when multithreaded.
The previous commit introduced its report at LOG level to avoid
surprises at minor release upgrade time. Compel users deploying the
next major release to also deploy the reported workaround.
Noah Misch [Thu, 8 Jan 2015 03:35:44 +0000 (22:35 -0500)]
On Darwin, detect and report a multithreaded postmaster.
Darwin --enable-nls builds use a substitute setlocale() that may start a
thread. Buildfarm member orangutan experienced BackendList corruption
on account of different postmaster threads executing signal handlers
simultaneously. Furthermore, a multithreaded postmaster risks undefined
behavior from sigprocmask() and fork(). Emit LOG messages about the
problem and its workaround. Back-patch to 9.0 (all supported versions).
Noah Misch [Thu, 8 Jan 2015 03:34:57 +0000 (22:34 -0500)]
Always set the six locale category environment variables in main().
Typical server invocations already achieved that. Invalid locale
settings in the initial postmaster environment interfered, as could
malloc() failure. Setting "LC_MESSAGES=pt_BR.utf8 LC_ALL=invalid" in
the postmaster environment will now choose C-locale messages, not
Brazilian Portuguese messages. Most localized programs, including all
PostgreSQL frontend executables, do likewise. Users are unlikely to
observe changes involving locale categories other than LC_MESSAGES.
CheckMyDatabase() ensures that we successfully set LC_COLLATE and
LC_CTYPE; main() sets the remaining three categories to locale "C",
which almost cannot fail. Back-patch to 9.0 (all supported versions).
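In outline, the behaviour described above corresponds to something like the
following sketch of early startup (illustrative, not the exact main.c code):

    #include <locale.h>

    /* Take the user-facing categories from the environment, falling back
     * to "C" when the environment's setting is invalid; pin the rest to
     * "C" outright. */
    if (setlocale(LC_COLLATE, "") == NULL)
        setlocale(LC_COLLATE, "C");
    if (setlocale(LC_CTYPE, "") == NULL)
        setlocale(LC_CTYPE, "C");
    if (setlocale(LC_MESSAGES, "") == NULL)
        setlocale(LC_MESSAGES, "C");
    setlocale(LC_MONETARY, "C");
    setlocale(LC_NUMERIC, "C");
    setlocale(LC_TIME, "C");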
Noah Misch [Thu, 8 Jan 2015 03:33:58 +0000 (22:33 -0500)]
Reject ANALYZE commands during VACUUM FULL or another ANALYZE.
vacuum()'s static variable handling makes it non-reentrant; an ensuing
null pointer dereference crashed the backend. Back-patch to 9.0 (all
supported versions).
Don't open a WAL segment for writing at end of recovery.
Since commit ba94518a, we used XLogFileOpen to open the next segment for
writing, but if the end-of-recovery happens exactly at a segment boundary,
the new segment might not exist yet. (Before ba94518a, XLogFileOpen was
correct, because we would open the previous segment if the switch happened
at the boundary.)
Instead of trying to create it if necessary, it's simpler to not bother
opening the segment at all. XLogWrite() will open or create it soon anyway,
after writing the checkpoint or end-of-recovery record.
Previously, the xml value resulting from an xpath query would not have
namespace declarations if the namespace declarations were attached to
an ancestor element in the input xml value. That means the output value
was not correct XML. Fix that by running the result value through
xmlCopyNode(), which produces the correct namespace declarations.
Andres Freund [Tue, 6 Jan 2015 23:10:19 +0000 (00:10 +0100)]
Correctly handle relcache invalidation corner case during logical decoding.
When using a historic snapshot for logical decoding it can validly
happen that a relation that's in the relcache isn't visible to that
historic snapshot. E.g. if a newly created relation is referenced in
the query that uses the SQL interface for logical decoding and a
sinval reset occurs.
The earlier commit that fixed the error handling for that corner case
already improves the situation as an ERROR is better than hitting an
assertion... But it's obviously not good enough. So additionally
allow that case without an error if a historic snapshot is set up -
that won't allow an invalid entry to stay in the cache because a) it's
already marked invalid and will thus be rebuilt during the next access
and b) the syscaches will be reset at the end of decoding.
There might be prettier solutions to handle this case, but all that we
could think of so far end up being much more complex than this quite
simple fix.
This fixes the assertion failures reported by the buildfarm (markhor,
tick, leech) after the introduction of new regression tests in
89fd41b390a4. The failures there weren't actually directly caused by
CLOBBER_CACHE_ALWAYS, but the extraordinarily long runtimes due to it
led to sinval resets triggering the behaviour.
Andres Freund [Tue, 6 Jan 2015 23:10:18 +0000 (00:10 +0100)]
Improve relcache invalidation handling of currently invisible relations.
The corner case where a relcache invalidation tried to rebuild the
entry for a referenced relation but couldn't find it in the catalog
wasn't handled correctly.
The code tried to RelationCacheDelete/RelationDestroyRelation the
entry. That didn't work when assertions are enabled because the latter
contains an assertion ensuring the refcount is zero. It's also more
generally a bad idea, because by virtue of being referenced somebody
might actually look at the entry, which is possible if the error is
trapped and handled via a subtransaction abort.
Instead just error out, without deleting the entry. As the entry is
marked invalid, the worst that can happen is that the invalid (and at
some point unused) entry lingers in the relcache.
There should be no way to hit this case before 9.4, where logical
decoding introduced a bug that can trigger it. But since the code for
handling the corner case is there it should do something halfway sane,
so backpatch all the way back. The logical decoding bug will be
handled in a separate commit.
Andres Freund [Mon, 5 Jan 2015 11:22:50 +0000 (12:22 +0100)]
Fix oversight in recent pg_basebackup fix causing pg_receivexlog failures.
An oversight in 2c0a485896 causes 'could not create archive status file
"...": No such file or directory' errors in pg_receivexlog if the
target directory doesn't happen to contain an archive_status
directory. That's due to a stupidly left over 'true' constant instead
of mark_done being passed down to ProcessXLogDataMsg().
The bug is only present in the master branch, and luckily wasn't
released.
Alvaro Herrera [Sun, 4 Jan 2015 18:48:29 +0000 (15:48 -0300)]
Fix thinko in lock mode enum
Commit 0e5680f4737a9c6aa94aa9e77543e5de60411322 contained a thinko
mixing LOCKMODE with LockTupleMode. This caused misbehavior in the case
where a tuple is marked with a multixact with at most a FOR SHARE lock,
and another transaction tries to acquire a FOR NO KEY EXCLUSIVE lock;
this case should block but doesn't.
Include a new isolation tester spec file to explicitly try all the
tuple lock combinations; without the fix it shows the problem:
starting permutation: s1_begin s1_lcksvpt s1_tuplock2 s2_tuplock3 s1_commit
step s1_begin: BEGIN;
step s1_lcksvpt: SELECT * FROM multixact_conflict FOR KEY SHARE; SAVEPOINT foo;
a
1
step s1_tuplock2: SELECT * FROM multixact_conflict FOR SHARE;
a
1
step s2_tuplock3: SELECT * FROM multixact_conflict FOR NO KEY UPDATE;
a
1
step s1_commit: COMMIT;
With the fixed code, step s2_tuplock3 blocks until session 1 commits,
which is the correct behavior.
All other cases behave correctly.
Backpatch to 9.3, like the commit that introduced the problem.
Andres Freund [Sun, 4 Jan 2015 15:47:23 +0000 (16:47 +0100)]
Add error handling for failing fstat() calls in copy.c.
These calls are pretty much guaranteed not to fail unless something
has gone horribly wrong, and even in that case we'd just error out a
short time later. But since several code checkers complain about the
missing check it seems worthwhile to fix it nonetheless.
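The added checks follow the usual backend pattern for file-access failures,
roughly as sketched here (variable names and message wording are
illustrative):

    struct stat st;

    if (fstat(fileno(copy_file), &st) != 0)
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not stat file \"%s\": %m", filename)));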
Andres Freund [Sun, 4 Jan 2015 14:44:49 +0000 (15:44 +0100)]
Correctly handle test durations of more than 2147s in pg_test_timing.
Previously the computation of the total test duration, measured in
microseconds, overflowed due to accidentally using signed 32bit
arithmetic. As the only consequence is that pg_test_timing invocations
with such overly large durations never finished, the practical
consequences of this bug are minor.
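The overflow is easy to reproduce in isolation; this standalone sketch shows
the failure mode and the fix of promoting to 64-bit arithmetic before
multiplying:

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int32_t duration_secs = 2200;   /* just over the ~2147s limit */

        /* 2200 * 1000000 does not fit in 32 bits: signed overflow. */
        int32_t wrong = (int32_t) (duration_secs * 1000000);
        /* Promote one operand first so the multiplication is done in 64 bits. */
        int64_t right = (int64_t) duration_secs * 1000000;

        printf("wrong: %d, right: %lld\n", (int) wrong, (long long) right);
        return 0;
    }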
Andres Freund [Sun, 4 Jan 2015 13:36:21 +0000 (14:36 +0100)]
Fix inconsequential fd leak in the new mark_file_as_archived() function.
As every error in mark_file_as_archived() will lead to a failure of
pg_basebackup the FD leak couldn't ever lead to a real problem. It
seems better to fix the leak anyway though, rather than silence
Coverity, as the usage of the function might get extended or copied at
some point in the future.
Pointed out by Coverity.
Backpatch to 9.2, like the relevant part of the previous patch.
Andres Freund [Sat, 3 Jan 2015 19:51:52 +0000 (20:51 +0100)]
Prevent WAL files created by pg_basebackup -x/X from being archived again.
WAL (and timeline history) files created by pg_basebackup did not
maintain the new base backup's archive status. That's currently not a
problem if the new node is used as a standby - but if that node is
promoted all still existing files can get archived again. With a high
wal_keep_segments setting that can happen a significant time later -
which is quite confusing.
Change both the backend (for the -x/-X fetch case) and pg_basebackup
(for -X stream) itself to always mark WAL/timeline files included in
the base backup as .done. That's in line with walreceiver.c doing so.
The verbosity of the pg_basebackup changes shows pretty clearly that it
needs some refactoring, but that'd result in changes that would not be
backpatchable.
Backpatch to 9.1 where pg_basebackup was introduced.
Andres Freund [Sat, 3 Jan 2015 19:51:52 +0000 (20:51 +0100)]
Add pg_string_endswith as the start of a string helper library in src/common.
Backpatch to 9.3 where src/common was introduced, because a bugfix that
needs to be backpatched requires the function. Earlier branches will
have to duplicate the code.
Tom Lane [Sat, 3 Jan 2015 18:14:03 +0000 (13:14 -0500)]
Treat negative values of recovery_min_apply_delay as having no effect.
At one point in the development of this feature, it was claimed that
allowing negative values would be useful to compensate for timezone
differences between master and slave servers. That was based on a mistaken
assumption that commit timestamps are recorded in local time; but of course
they're in UTC. Nor is a negative apply delay likely to be a sane way of
coping with server clock skew. However, the committed patch still treated
negative delays as doing something, and the timezone misapprehension
survived in the user documentation as well.
If recovery_min_apply_delay were a proper GUC we'd just set the minimum
allowed value to be zero; but for the moment it seems better to treat
negative settings as if they were zero.
In passing do some extra wordsmithing on the parameter's documentation,
including correcting a second misstatement that the parameter affects
processing of Restore Point records.
Issue noted by Michael Paquier, who also provided the code patch; doc
changes by me. Back-patch to 9.4 where the feature was introduced.
Tom Lane [Wed, 31 Dec 2014 22:04:27 +0000 (17:04 -0500)]
Don't run rowsecurity in parallel with other regression tests.
The short-lived event trigger in the rowsecurity test causes irreproducible
failures when the concurrent tests do something that the event trigger
can't cope with. Per buildfarm.