Noah Misch [Thu, 8 Jan 2015 03:33:58 +0000 (22:33 -0500)]
Reject ANALYZE commands during VACUUM FULL or another ANALYZE.
vacuum()'s static variable handling makes it non-reentrant; an ensuing
null pointer dereference crashed the backend. Back-patch to 9.0 (all
supported versions).
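In outline the fix looks like this (a hedged sketch of vacuum() in
commands/vacuum.c, not the exact committed code):
    void
    vacuum(VacuumStmt *vacstmt, Oid relid, bool do_toast,
           BufferAccessStrategy bstrategy, bool for_wraparound, bool isTopLevel)
    {
        static bool in_vacuum = false;  /* guards the non-reentrant statics */
        const char *stmttype =
            (vacstmt->options & VACOPT_VACUUM) ? "VACUUM" : "ANALYZE";

        if (in_vacuum)
            ereport(ERROR,
                    (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                     errmsg("%s cannot be executed from VACUUM or ANALYZE",
                            stmttype)));
        in_vacuum = true;
        PG_TRY();
        {
            /* ... existing vacuum/analyze processing ... */
        }
        PG_CATCH();
        {
            in_vacuum = false;      /* reset so later commands can run */
            PG_RE_THROW();
        }
        PG_END_TRY();
        in_vacuum = false;
    }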
Andres Freund [Tue, 6 Jan 2015 23:10:19 +0000 (00:10 +0100)]
Correctly handle relcache invalidation corner case during logical decoding.
When using a historic snapshot for logical decoding it can validly
happen that a relation that's in the relcache isn't visible to that
historic snapshot. E.g. if a newly created relation is referenced in
the query that uses the SQL interface for logical decoding and a
sinval reset occurs.
The earlier commit that fixed the error handling for that corner case
already improved the situation, as an ERROR is better than hitting an
assertion, but it's obviously not good enough. So additionally allow
that case without an error if a historic snapshot is set up. That
won't allow an invalid entry to stay in the cache, because a) the
entry is already marked invalid and will thus be rebuilt during the
next access, and b) the syscaches will be reset at the end of decoding.
There might be prettier solutions to handle this case, but all that we
could think of so far end up being much more complex than this quite
simple fix.
This fixes the assertion failures reported by the buildfarm (markhor,
tick, leech) after the introduction of new regression tests in
89fd41b390a4. The failures there weren't actually directly caused by
CLOBBER_CACHE_ALWAYS, but the extraordinarily long runtimes due to it
led to sinval resets triggering the behaviour.
Andres Freund [Tue, 6 Jan 2015 23:10:18 +0000 (00:10 +0100)]
Improve relcache invalidation handling of currently invisible relations.
The handling of the corner case where a relcache invalidation tried
to rebuild the entry for a referenced relation but couldn't find it
in the catalog wasn't correct.
The code tried to RelationCacheDelete/RelationDestroyRelation the
entry. That doesn't work when assertions are enabled, because the
latter contains an assertion ensuring the refcount is zero. It's also
more generally a bad idea, because by virtue of being referenced
somebody might actually look at the entry, which is possible if the
error is trapped and handled via a subtransaction abort.
Instead just error out, without deleting the entry. As the entry is
marked invalid, the worst that can happen is that the invalid (and at
some point unused) entry lingers in the relcache.
There should be no way to hit this case before 9.4, where logical
decoding introduced a bug that can trigger it. But since the code for
handling the corner case is there, it should do something halfway
sane, so back-patch all the way back. The logical decoding bug will be
handled in a separate commit.
Alvaro Herrera [Sun, 4 Jan 2015 18:48:29 +0000 (15:48 -0300)]
Fix thinko in lock mode enum
Commit 0e5680f4737a9c6aa94aa9e77543e5de60411322 contained a thinko
mixing LOCKMODE with LockTupleMode. This caused misbehavior in the case
where a tuple is marked with a multixact with at most a FOR SHARE lock,
and another transaction tries to acquire a FOR NO KEY EXCLUSIVE lock;
this case should block but doesn't.
Include a new isolation tester spec file to explicitly try all the
tuple lock combinations; without the fix it shows the problem:
starting permutation: s1_begin s1_lcksvpt s1_tuplock2 s2_tuplock3 s1_commit
step s1_begin: BEGIN;
step s1_lcksvpt: SELECT * FROM multixact_conflict FOR KEY SHARE; SAVEPOINT foo;
a
1
step s1_tuplock2: SELECT * FROM multixact_conflict FOR SHARE;
a
1
step s2_tuplock3: SELECT * FROM multixact_conflict FOR NO KEY UPDATE;
a
1
step s1_commit: COMMIT;
With the fixed code, step s2_tuplock3 blocks until session 1 commits,
which is the correct behavior.
All other cases behave correctly.
Backpatch to 9.3, like the commit that introduced the problem.
Andres Freund [Sun, 4 Jan 2015 14:44:49 +0000 (15:44 +0100)]
Correctly handle test durations of more than 2147s in pg_test_timing.
Previously the computation of the total test duration, measured in
microseconds, overflowed due to the use of signed 32-bit arithmetic.
As the only consequence is that pg_test_timing invocations with such
overly large durations never finished, the practical consequences of
this bug are minor.
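The arithmetic issue in miniature (a self-contained illustration with
made-up variable names, not the actual pg_test_timing code):
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int32_t duration = 3000;    /* seconds; above 2147 is trouble */
        int32_t bad = duration * 1000000;   /* 32-bit product wraps */
        int64_t good = (int64_t) duration * 1000000; /* widen first */

        printf("bad = %d, good = %lld\n", bad, (long long) good);
        return 0;
    }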
Andres Freund [Sun, 4 Jan 2015 13:36:21 +0000 (14:36 +0100)]
Fix inconsequential fd leak in the new mark_file_as_archived() function.
As every error in mark_file_as_archived() will lead to a failure of
pg_basebackup, the FD leak couldn't ever lead to a real problem. It
seems better to fix the leak anyway though, rather than silence
Coverity, as the usage of the function might get extended or copied at
some point in the future.
Pointed out by Coverity.
Backpatch to 9.2, like the relevant part of the previous patch.
Andres Freund [Sat, 3 Jan 2015 19:51:52 +0000 (20:51 +0100)]
Prevent WAL files created by pg_basebackup -x/X from being archived again.
WAL (and timeline history) files created by pg_basebackup did not
maintain the new base backup's archive status. That's currently not a
problem if the new node is used as a standby - but if that node is
promoted, all still existing files can get archived again. With a high
wal_keep_segments setting that can happen a significant time later -
which is quite confusing.
Change both the backend (for the -x/-X fetch case) and pg_basebackup
(for -X stream) itself to always mark WAL/timeline files included in
the base backup as .done. That's in line with walreceiver.c doing so.
The verbosity of the pg_basebackup changes shows pretty clearly that
it needs some refactoring, but that'd result in changes that wouldn't
be backpatchable.
Backpatch to 9.1 where pg_basebackup was introduced.
Andres Freund [Sat, 3 Jan 2015 19:51:52 +0000 (20:51 +0100)]
Add pg_string_endswith as the start of a string helper library in src/common.
Backpatch to 9.3, where src/common was introduced, because a bugfix
that needs to be backpatched requires the function. Earlier branches
will have to duplicate the code.
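The helper itself is tiny; roughly (a sketch of the src/common
function, whose exact name and layout may differ):
    #include <stdbool.h>
    #include <string.h>

    /* Returns whether the string "str" has the postfix "end". */
    bool
    pg_str_endswith(const char *str, const char *end)
    {
        size_t  slen = strlen(str);
        size_t  elen = strlen(end);

        /* can't be a postfix if longer */
        if (elen > slen)
            return false;
        return strcmp(str + slen - elen, end) == 0;
    }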
Tom Lane [Sat, 3 Jan 2015 18:14:03 +0000 (13:14 -0500)]
Treat negative values of recovery_min_apply_delay as having no effect.
At one point in the development of this feature, it was claimed that
allowing negative values would be useful to compensate for timezone
differences between master and slave servers. That was based on a mistaken
assumption that commit timestamps are recorded in local time; but of course
they're in UTC. Nor is a negative apply delay likely to be a sane way of
coping with server clock skew. However, the committed patch still treated
negative delays as doing something, and the timezone misapprehension
survived in the user documentation as well.
If recovery_min_apply_delay were a proper GUC we'd just set the minimum
allowed value to be zero; but for the moment it seems better to treat
negative settings as if they were zero.
In passing do some extra wordsmithing on the parameter's documentation,
including correcting a second misstatement that the parameter affects
processing of Restore Point records.
Issue noted by Michael Paquier, who also provided the code patch; doc
changes by me. Back-patch to 9.4 where the feature was introduced.
Tom Lane [Wed, 31 Dec 2014 21:42:45 +0000 (16:42 -0500)]
Docs: improve descriptions of ISO week-numbering date features.
Use the phraseology "ISO 8601 week-numbering year" in place of just
"ISO year", and make related adjustments to other terminology.
The point of this change is that it seems some people see "ISO year"
and think "standard year", whereupon they're surprised when constructs
like to_char(..., "IYYY-MM-DD") produce nonsensical results. Perhaps
hanging a few more adjectives on it will discourage them from jumping
to false conclusions. I put in an explicit warning against that
specific usage, too, though the main point is to discourage people
who haven't read this far down the page.
In passing fix some nearby markup and terminology inconsistencies.
Tom Lane [Wed, 31 Dec 2014 17:16:57 +0000 (12:16 -0500)]
Improve consistency of parsing of psql's magic variables.
For simple boolean variables such as ON_ERROR_STOP, psql has for a long
time recognized variant spellings of "on" and "off" (such as "1"/"0"),
and it also made a point of warning you if you'd misspelled the setting.
But these conveniences did not exist for other keyword-valued variables.
In particular, though ECHO_HIDDEN and ON_ERROR_ROLLBACK include "on" and
"off" as possible values, none of the alternative spellings for those were
recognized; and to make matters worse the code would just silently assume
"on" was meant for any unrecognized spelling. Several people have reported
getting bitten by this, so let's fix it. In detail, this patch:
* Allows all spellings recognized by ParseVariableBool() for ECHO_HIDDEN
and ON_ERROR_ROLLBACK.
* Reports a warning for unrecognized values for COMP_KEYWORD_CASE, ECHO,
ECHO_HIDDEN, HISTCONTROL, ON_ERROR_ROLLBACK, and VERBOSITY.
* Recognizes all values for all these variables case-insensitively;
previously there was a mishmash of case-sensitive and case-insensitive
behaviors.
Back-patch to all supported branches. There is a small risk of breaking
existing scripts that were accidentally failing to malfunction; but the
consensus is that the chance of detecting real problems and preventing
future mistakes outweighs this.
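For illustration, the assignment hook for ECHO_HIDDEN now looks
roughly like this (a sketch; the committed psql code differs in
detail):
    static void
    echo_hidden_hook(const char *newval)
    {
        if (newval == NULL)
            pset.echo_hidden = PSQL_ECHO_HIDDEN_OFF;
        else if (pg_strcasecmp(newval, "noexec") == 0)
            pset.echo_hidden = PSQL_ECHO_HIDDEN_NOEXEC;
        else if (ParseVariableBool(newval, "ECHO_HIDDEN"))
            pset.echo_hidden = PSQL_ECHO_HIDDEN_ON;
        else    /* ParseVariableBool printed a warning if needed */
            pset.echo_hidden = PSQL_ECHO_HIDDEN_OFF;
    }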
Revert the GinMaxItemSize calculation so that we fit 3 tuples per page.
Commit 36a35c55 changed the divisor from 3 to 6, for no apparent reason.
Reducing GinMaxItemSize like that created a dump/reload hazard: loading a
9.3 database to 9.4 might fail with "index row size XXX exceeds maximum 1352
for index ..." error. Revert the change.
While we're at it, make the calculation slightly more accurate. It used to
divide the available space on page by three, then subtract
sizeof(ItemIdData), and finally round down. That's not totally accurate; the
item pointers for the three items are packed tight right after the page
header, but there is alignment padding after the item pointers. Change the
calculation to reflect that, like BTMaxItemSize does. I tested this with
different block sizes on systems with 4- and 8-byte alignment, and the value
after the final MAXALIGN_DOWN was the same with both methods on all
configurations. So this does not make any difference currently, but let's be
tidy.
Also add a comment explaining what the macro does.
This fixes bug #12292 reported by Robert Thaler. Backpatch to 9.4, where the
bug was introduced.
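The corrected macro has roughly this shape (hedged; see the real
definition in the GIN headers for the authoritative form): subtract
the page header plus all three line pointers, with alignment padding,
before dividing by three, then MAXALIGN_DOWN the result.
    #define GinMaxItemSize \
        Min(INDEX_SIZE_MASK, \
            MAXALIGN_DOWN(((BLCKSZ - \
                MAXALIGN(SizeOfPageHeaderData + 3 * sizeof(ItemIdData)) - \
                MAXALIGN(sizeof(GinPageOpaqueData))) / 3)))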
Tom Lane [Mon, 29 Dec 2014 19:20:56 +0000 (14:20 -0500)]
Assorted minor fixes for psql metacommand docs.
Document the long forms of \H \i \ir \o \p \r \w ... apparently, we have
a long and dishonorable history of leaving out the unabbreviated names of
psql backslash commands.
Avoid saying "Unix shell"; we can just say "shell" with equal clarity,
and not leave Windows users wondering whether the feature works for them.
Improve consistency of documentation of \g \o \w metacommands. There's
no reason to use slightly different wording or markup for each one.
Alvaro Herrera [Fri, 26 Dec 2014 16:52:27 +0000 (13:52 -0300)]
Grab heavyweight tuple lock only before sleeping
We were trying to acquire the lock even when we were subsequently
not sleeping in some other transaction, which opens us up unnecessarily
to deadlocks. In particular, this is troublesome if an update tries to
lock an updated version of a tuple and finds itself doing EvalPlanQual
update chain walking; more than two sessions doing this concurrently
will find themselves sleeping on each other because the HW tuple lock
acquisition in heap_lock_tuple called from EvalPlanQualFetch races with
the same tuple lock being acquired in heap_update -- one of these
sessions sleeps on the other one to finish while holding the tuple lock,
and the other one sleeps on the tuple lock.
Per trouble report from Andrew Sackville-West in
http://www.postgresql.org/message-id/20140731233051.GN17765@andrew-ThinkPad-X230
His scenario can be simplified down to a relatively simple
isolationtester spec file which I don't include in this commit; the
reason is that the current isolationtester is not able to deal with more
than one blocked session concurrently and it blocks instead of raising
the expected deadlock. In the future, if we improve isolationtester, it
would be good to include the spec file in the isolation schedule. I
posted it in
http://www.postgresql.org/message-id/20141212205254.GC1768@alvh.no-ip.org
Hat tip to Mark Kirkwood, who helped diagnose the trouble.
Noah Misch [Thu, 25 Dec 2014 18:52:03 +0000 (13:52 -0500)]
Have config_sspi_auth() permit IPv6 localhost connections.
Windows versions later than Windows Server 2003 map "localhost" to ::1.
Account for that in the generated pg_hba.conf, fixing another oversight
in commit f6dc6dd5ba54d52c0733aaafc50da2fbaeabb8b0. Back-patch to 9.0,
like that commit.
Tom Lane [Wed, 24 Dec 2014 21:35:23 +0000 (16:35 -0500)]
Add CST (China Standard Time) to our lists of timezone abbreviations.
For some reason this seems to have been missed when the lists in
src/timezone/tznames/ were first constructed. We can't put it in Default
because of the conflict with US CST, but we should certainly list it among
the alternative entries in Asia.txt. (I checked for other oversights, but
all the other abbreviations that are in current use according to the IANA
files seem to be accounted for.) Noted while responding to bug #12326.
Andrew Dunstan [Mon, 22 Dec 2014 19:20:19 +0000 (14:20 -0500)]
Fix documentation of argument type of json_agg and jsonb_agg
json_agg was originally designed to aggregate records. However, it soon
became clear that it is useful for aggregating all kinds of values, and
that's what it does in 9.3 and 9.4, and what it and jsonb_agg do in HEAD.
The documentation suggested otherwise, so this fixes it.
Tom Lane [Sun, 21 Dec 2014 20:30:39 +0000 (15:30 -0500)]
Docs: clarify treatment of variadic functions with zero variadic arguments.
Explain that you have to use "VARIADIC ARRAY[]" to pass an empty array
to a variadic parameter position. This was already implicit in the text
but it seems better to spell it out.
Per a suggestion from David Johnston, though I didn't use his proposed
wording. Back-patch to all supported branches.
Andres Freund [Fri, 19 Dec 2014 13:29:52 +0000 (14:29 +0100)]
Prevent potentially hazardous compiler/cpu reordering during lwlock release.
In LWLockRelease() (and in 9.4+ LWLockUpdateVar()) we release enqueued
waiters using PGSemaphoreUnlock(). As there are other sources of such
unlocks, backends only wake up if MyProc->lwWaiting is set to false,
which is only done in the aforementioned functions.
Before this commit there were dangers because the store to lwWaiting
could become visible before the store to lwWaitLink. This could happen
both due to compiler reordering (on most compilers) and, on some
platforms, due to the CPU reordering stores.
The possible consequence of this is that a backend stops waiting
before lwWaitLink is set to NULL. If that backend then tries to
acquire another lock and has to wait there, the list could become
corrupted once the lwWaitLink store is finally performed.
Add a write memory barrier to prevent that issue.
Unfortunately barrier support has only been added in 9.2. Given that
the issue has not knowingly been observed in practice, it seems
sufficient to prohibit compiler reordering using volatile for 9.0 and
9.1. Actual problems due to compiler reordering are more likely
anyway.
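The release path in lwlock.c now does, in essence (a condensed
sketch of the wakeup loop):
    while (head != NULL)
    {
        proc = head;
        head = (PGPROC *) proc->lwWaitLink;
        proc->lwWaitLink = NULL;
        /*
         * Guarantee that the unlinking is visible before the backend is
         * allowed to proceed; otherwise it could re-enqueue itself while
         * our lwWaitLink store is still pending.
         */
        pg_write_barrier();
        proc->lwWaiting = false;
        PGSemaphoreUnlock(&proc->sem);
    }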
Tom Lane [Thu, 18 Dec 2014 21:38:55 +0000 (16:38 -0500)]
Improve documentation about CASE and constant subexpressions.
The possibility that constant subexpressions of a CASE might be evaluated
at planning time was touched on in 9.17.1 (CASE expressions), but it really
ought to be explained in 4.2.14 (Expression Evaluation Rules) which is the
primary discussion of such topics. Add text and an example there, and
revise the <note> under CASE to link there.
Back-patch to all supported branches, since it's acted like this for a
long time (though 9.2+ is probably worse because of its more aggressive
use of constant-folding via replanning of nominally-prepared statements).
Pre-9.4, also back-patch text added in commit 0ce627d4 about CASE versus
aggregate functions.
Tom Lane and David Johnston, per discussion of bug #12273.
Noah Misch [Thu, 18 Dec 2014 08:55:17 +0000 (03:55 -0500)]
Recognize Makefile line continuations in fetchRegressOpts().
Back-patch to 9.0 (all supported versions). This is mere
future-proofing in the context of the master branch, but commit f6dc6dd5ba54d52c0733aaafc50da2fbaeabb8b0 requires it of older branches.
Andres Freund [Thu, 18 Dec 2014 07:35:27 +0000 (08:35 +0100)]
Fix (re-)starting from a basebackup taken off a standby after a failure.
When starting up from a basebackup taken off a standby, extra logic
has to be applied to compute the point where the data directory is
consistent. Normal base backups use a WAL record for that purpose, but
that isn't possible on a standby.
That logic had an error check ensuring that the cluster's control file
indicates being in recovery. Unfortunately that check was too strict,
disregarding the fact that the control file could also indicate that
the cluster was shut down while in recovery.
That's possible when a cluster starting from a basebackup is shut
down before the backup label has been removed. When everything goes
well that's a short window, but when either restore_command or
primary_conninfo isn't configured correctly the window can get much
wider. That's because in between reading and unlinking the label we
restore the last checkpoint from WAL which can need additional WAL.
To fix simply also allow starting when the control file indicates
"shutdown in recovery". There's nicer fixes imaginable, but they'd be
more invasive.
Backpatch to 9.2 where support for taking basebackups from standbys
was added.
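The fix amounts to relaxing the startup check along these lines (a
sketch; the error wording here is approximate, the two control file
states are the real ones):
    /* accept a cluster that was shut down while in recovery, too */
    if (ControlFile->state != DB_IN_ARCHIVE_RECOVERY &&
        ControlFile->state != DB_SHUTDOWNED_IN_RECOVERY)
        ereport(FATAL,
                (errmsg("backup_label contains data inconsistent "
                        "with control file")));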
Noah Misch [Thu, 18 Dec 2014 03:48:40 +0000 (22:48 -0500)]
Lock down regression testing temporary clusters on Windows.
Use SSPI authentication to allow connections exclusively from the OS
user that launched the test suite. This closes on Windows the
vulnerability that commit be76a6d39e2832d4b88c0e1cc381aa44a7f86881
closed on other platforms. Users of "make installcheck" or custom test
harnesses can run "pg_regress --config-auth=DATADIR" to activate the
same authentication configuration that "make check" would use.
Back-patch to 9.0 (all supported versions).
Tom Lane [Tue, 16 Dec 2014 20:35:36 +0000 (15:35 -0500)]
Fix off-by-one loop count in MapArrayTypeName, and get rid of static array.
MapArrayTypeName would copy up to NAMEDATALEN-1 bytes of the base type
name, which of course is wrong: after prepending '_' there is only room for
NAMEDATALEN-2 bytes. Aside from being the wrong result, this case would
lead to overrunning the statically allocated work buffer. This would be a
security bug if the function were ever used outside bootstrap mode, but it
isn't, at least not in any currently supported branches.
Aside from fixing the off-by-one loop logic, this patch gets rid of the
static work buffer by having MapArrayTypeName pstrdup its result; the sole
caller was already doing that, so this just requires moving the pstrdup
call. This saves a few bytes but mainly it makes the API a lot cleaner.
Back-patch on the off chance that there is some third-party code using
MapArrayTypeName with less-secure input. Pushing pstrdup into the function
should not cause any serious problems for such hypothetical code; at worst
there might be a short term memory leak.
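With both changes, the function reduces to roughly (a hedged sketch
of bootstrap's MapArrayTypeName):
    char *
    MapArrayTypeName(const char *s)
    {
        int     i,
                j;
        char    newStr[NAMEDATALEN];    /* array name = '_' + base name */

        newStr[0] = '_';
        j = 1;
        /* NAMEDATALEN - 2: leave room for both '_' and the terminator */
        for (i = 0; i < NAMEDATALEN - 2 && s[i] != '\0'; i++)
            newStr[j++] = s[i];
        newStr[j] = '\0';

        return pstrdup(newStr);     /* no more static result buffer */
    }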
Tom Lane [Tue, 16 Dec 2014 18:31:42 +0000 (13:31 -0500)]
Fix file descriptor leak after failure of a \setshell command in pgbench.
If the called command fails to return data, runShellCommand forgot to
pclose() the pipe before returning. This is fairly harmless in the current
code, because pgbench would then abandon further processing of that client
thread; so no more than nclients descriptors could be leaked this way. But
it's not hard to imagine future improvements whereby that wouldn't be true.
In any case, it's sloppy coding, so patch all branches. Found by Coverity.
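The problematic path, in outline (a simplified sketch; the function
and message here are illustrative, not pgbench's exact code):
    #include <stdbool.h>
    #include <stdio.h>

    static bool
    run_shell_command(const char *command, char *res, size_t reslen)
    {
        FILE   *fp;

        if ((fp = popen(command, "r")) == NULL)
            return false;       /* nothing opened, nothing to leak */
        if (fgets(res, (int) reslen, fp) == NULL)
        {
            fprintf(stderr, "cannot read the result of \"%s\"\n", command);
            (void) pclose(fp);  /* the fix: close the pipe here too */
            return false;
        }
        if (pclose(fp) < 0)
            return false;
        return true;
    }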
Tom Lane [Sun, 14 Dec 2014 23:09:55 +0000 (18:09 -0500)]
Improve documentation around parameter-setting and ALTER SYSTEM.
The ALTER SYSTEM ref page hadn't been held to a very high standard, nor
was the feature well integrated into section 18.1 (parameter setting).
Also, though commit 4c4654afe had improved the structure of 18.1, it also
introduced a lot of poor wording, imprecision, and outright falsehoods.
Try to clean that up.
Tom Lane [Sat, 13 Dec 2014 18:46:46 +0000 (13:46 -0500)]
Improve recovery target settings documentation.
Commit 815d71dee hadn't bothered to update the documentation to match the
behavioral change, and a lot of other text in this section was badly in
need of copy-editing.
Tom Lane [Sat, 13 Dec 2014 16:49:20 +0000 (11:49 -0500)]
Repair corner-case bug in array version of percentile_cont().
The code for advancing through the input rows overlooked the case that we
might already be past the first row of the row pair now being considered,
in case the previous percentile also fell between the same two input rows.
Report and patch by Andrew Gierth; logic rewritten a bit for clarity by me.
Tom Lane [Fri, 12 Dec 2014 17:41:52 +0000 (12:41 -0500)]
Revert misguided change to postgres_fdw FOR UPDATE/SHARE code.
In commit 462bd95705a0c23ba0b0ba60a78d32566a0384c1, I changed postgres_fdw
to rely on get_plan_rowmark() instead of get_parse_rowmark(). I still
think that's a good idea in the long run, but as Etsuro Fujita pointed out,
it doesn't work today because planner.c forces PlanRowMarks to have
markType = ROW_MARK_COPY for all foreign tables. There's no urgent reason
to change this in the back branches, so let's just revert that part of
yesterday's commit rather than trying to design a better solution under
time pressure.
Also, add a regression test case showing what postgres_fdw does with FOR
UPDATE/SHARE. I'd blithely assumed there was one already, else I'd have
realized yesterday that this code didn't work.
Tom Lane [Fri, 12 Dec 2014 02:02:28 +0000 (21:02 -0500)]
Fix planning of SELECT FOR UPDATE on child table with partial index.
Ordinarily we can omit checking of a WHERE condition that matches a partial
index's condition, when we are using an indexscan on that partial index.
However, in SELECT FOR UPDATE we must include the "redundant" filter
condition in the plan so that it gets checked properly in an EvalPlanQual
recheck. The planner got this mostly right, but improperly omitted the
filter condition if the index in question was on an inheritance child
table. In READ COMMITTED mode, this could result in incorrectly returning
just-updated rows that no longer satisfy the filter condition.
The cause of the error is using get_parse_rowmark() when get_plan_rowmark()
is what should be used during planning. In 9.3 and up, also fix the same
mistake in contrib/postgres_fdw. It's currently harmless there (for lack
of inheritance support) but wrong is wrong, and the incorrect code might
get copied to someplace where it's more significant.
Report and fix by Kyotaro Horiguchi. Back-patch to all supported branches.
Tom Lane [Fri, 12 Dec 2014 00:37:03 +0000 (19:37 -0500)]
Fix corner case where SELECT FOR UPDATE could return a row twice.
In READ COMMITTED mode, if a SELECT FOR UPDATE discovers it has to redo
WHERE-clause checking on rows that have been updated since the SELECT's
snapshot, it invokes EvalPlanQual processing to do that. If this first
occurs within a non-first child table of an inheritance tree, the previous
coding could accidentally re-return a matching row from an earlier,
already-scanned child table. (And, to add insult to injury, I think this
could make it miss returning a row that should have been returned, if the
updated row that this happens on should still have passed the WHERE qual.)
Per report from Kyotaro Horiguchi; the added isolation test is based on his
test case.
This has been broken for quite awhile, so back-patch to all supported
branches.
Tom Lane [Thu, 11 Dec 2014 00:06:27 +0000 (19:06 -0500)]
Fix minor thinko in convertToJsonb().
The amount of space to reserve for the value's varlena header is
VARHDRSZ, not sizeof(VARHDRSZ). The latter coding accidentally
failed to fail because of the way the VARHDRSZ macro is currently
defined; but if we ever change it to return size_t (as one might
reasonably expect it to do), convertToJsonb() would have failed.
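Why it accidentally worked: c.h defines VARHDRSZ as
((int32) sizeof(int32)), so sizeof(VARHDRSZ) is sizeof(int32) and also
happens to be 4. The intended call is simply (paraphrased from the
jsonb buffer code):
    /* reserve space for the varlena header before copying data in */
    reserveFromBuffer(&buffer, VARHDRSZ);   /* right: the constant 4 */
    /* not sizeof(VARHDRSZ), which is only 4 by accident */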
Fix PGXS vpath build when PostgreSQL is built with vpath
PGXS computes srcdir from VPATH, PostgreSQL proper computes VPATH from
srcdir, and doing both results in an error from make. Conditionalize so
only one of these takes effect.
Fix SHLIB_PREREQS use in contrib, allowing PGXS builds
dblink and postgres_fdw use SHLIB_PREREQS = submake-libpq to build libpq
first. This doesn't work in a PGXS build, because there is no libpq to
build. So just omit setting SHLIB_PREREQS in this case.
Note that PGXS users can still use SHLIB_PREREQS (although it is not
documented). The problem here is only that contrib modules can be built
in-tree or using PGXS, and the prerequisite is only applicable in the
former case.
Commit 6697aa2bc25c83b88d6165340348a31328c35de6 previously attempted to
address this by creating a somewhat fake submake-libpq target in
Makefile.global. That was not the right fix, and it was also done in a
nonportable way, so revert that.
Since this is not something that a user should change,
pg_config_manual.h was an inappropriate place for it.
In initdb.c, remove the use of the macro, because utils/guc.h can't be
included by non-backend code. But we hardcode all the other
configuration file names there, so this isn't a disaster.
Tom Lane [Tue, 2 Dec 2014 23:23:20 +0000 (18:23 -0500)]
Improve error messages for malformed array input strings.
Make the error messages issued by array_in() uniformly follow the style
ERROR: malformed array literal: "actual input string"
DETAIL: specific complaint here
and rewrite many of the specific complaints to be clearer.
The immediate motivation for doing this is a complaint from Josh Berkus
that json_to_record() produced an unintelligible error message when
dealing with an array item, because it tries to feed the JSON-format
array value to array_in(). Really it ought to be smart enough to
perform JSON-to-Postgres array conversion, but that's a future feature
not a bug fix. In the meantime, this change is something we agreed
we could back-patch into 9.4, and it should help de-confuse things a bit.
Andres Freund [Tue, 2 Dec 2014 22:42:26 +0000 (23:42 +0100)]
Don't skip SQL backends in logical decoding for visibility computation.
The logical decoding patchset introduced the PROC_IN_LOGICAL_DECODING
PGXACT flag, which allows such backends to be skipped when computing
the xmin horizon/snapshots. That's fine and sensible for walsenders
streaming out logical changes, but not at all fine for SQL backends
doing logical decoding. If the latter set that flag, any changes they
have performed outside of logical decoding will not be regarded as
visible - which e.g. can lead to those changes being vacuumed away.
Note that not setting the flag for SQL backends isn't particularly
bothersome - the SQL backend doesn't do streaming, so it only runs for
a limited amount of time.
Per buildfarm member 'tick' and Alvaro.
Backpatch to 9.4, where logical decoding was introduced.
Tom Lane [Tue, 2 Dec 2014 20:02:40 +0000 (15:02 -0500)]
Fix JSON aggregates to work properly when final function is re-executed.
Davide S. reported that json_agg() sometimes produced multiple trailing
right brackets. This turns out to be because json_agg_finalfn() attaches
the final right bracket, and was doing so by modifying the aggregate state
in-place. That's verboten, though unfortunately it seems there's no way
for nodeAgg.c to check for such mistakes.
Fix that back to 9.3 where the broken code was introduced. In 9.4 and
HEAD, likewise fix json_object_agg(), which had copied the erroneous logic.
Make some cosmetic cleanups as well.
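The safe pattern for the final function is to build the closing
bracket into fresh memory (a sketch loosely following json.c; the
helper name follows the commit's approach but details may differ):
    Datum
    json_agg_finalfn(PG_FUNCTION_ARGS)
    {
        StringInfo  state;

        state = PG_ARGISNULL(0) ? NULL :
            (StringInfo) PG_GETARG_POINTER(0);
        if (state == NULL)
            PG_RETURN_NULL();   /* no rows were aggregated */
        /* append ']' into a newly palloc'd value, not into the state */
        PG_RETURN_TEXT_P(catenate_stringinfo_string(state, "]"));
    }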
Tom Lane [Mon, 1 Dec 2014 20:25:05 +0000 (15:25 -0500)]
Guard against bad "dscale" values in numeric_recv().
We were not checking to see if the supplied dscale was valid for the given
digit array when receiving binary-format numeric values. While dscale can
validly be more than the number of nonzero fractional digits, it shouldn't
be less; that case causes fractional digits to be hidden on display even
though they're there and participate in arithmetic.
Bug #12053 from Tommaso Sala indicates that there's at least one broken
client library out there that sometimes supplies an incorrect dscale value,
leading to strange behavior. This suggests that simply throwing an error
might not be the best response; it would lead to failures in applications
that might seem to be working fine today. What seems the least risky fix
is to truncate away any digits that would be hidden by dscale. This
preserves the existing behavior in terms of what will be printed for the
transmitted value, while preventing subsequent arithmetic from producing
results inconsistent with that.
In passing, throw a specific error for the case of dscale being outside
the range that will fit into a numeric's header. Before you got "value
overflows numeric format", which is a bit misleading.
Andrew Dunstan [Mon, 1 Dec 2014 16:28:45 +0000 (11:28 -0500)]
Fix hstore_to_json_loose's detection of valid JSON number values.
We expose a function IsValidJsonNumber that internally calls the lexer
for json numbers. That allows us to use the same test everywhere,
instead of inventing a broken test for hstore conversions. The new
function is also used in datum_to_json, replacing the code that is now
moved to the new function.
Backpatch to 9.3 where hstore_to_json_loose was introduced.
Coverity complained that the "else" added to fillPGconn() was unreachable,
which it was. Remove the dead code. In passing, rearrange the tests so as
not to bother trying to fetch values for options that can't be assigned.
Pre-9.3 did not have that issue, but it did have a "return" that should be
"goto oom_error" to ensure that a suitable error message gets filled in.
Apart from ignoring "hostaddr" set to the empty string, this behaves
identically to its predecessor. Back-patch to 9.4, where the original
commit first appeared.
Noah Misch [Sat, 29 Nov 2014 17:31:21 +0000 (12:31 -0500)]
Revert "Add libpq function PQhostaddr()."
This reverts commit 9f80f4835a55a1cbffcda5d23a617917f3286c14. The
function returned the raw value of a connection parameter, a task served
by PQconninfo(). The next commit will reimplement the psql \conninfo
change that way. Back-patch to 9.4, where that commit first appeared.
Fujii Masao [Thu, 27 Nov 2014 17:42:43 +0000 (02:42 +0900)]
Make \watch respect the user's \pset null setting.
Previously \watch always ignored the user's \pset null setting.
\pset null setting should be ignored for \d and similar queries.
For those, the code can reasonably have an opinion about what
the presentation should be like, since it knows what SQL query
it's issuing. This argument surely doesn't apply to \watch,
so this commit makes \watch use the user's \pset null setting.
Tom Lane [Thu, 27 Nov 2014 16:12:47 +0000 (11:12 -0500)]
Free libxml2/libxslt resources in a safer order.
Mark Simonetti reported that libxslt sometimes crashes for him, and that
swapping xslt_process's object-freeing calls around to do them in reverse
order of creation seemed to fix it. I've not reproduced the crash, but
valgrind clearly shows a reference to already-freed memory, which is
consistent with the idea that shutdown of the xsltTransformContext is
trying to reference the already-freed stylesheet or input document.
With this patch, valgrind is no longer unhappy.
I have an inquiry in to see if this is a libxslt bug or if we're just
abusing the library; but even if it's a library bug, we'd want to adjust
our code so it doesn't fail with unpatched libraries.
Back-patch to all supported branches, because we've been doing this in
the wrong(?) order for a long time.
Allow "dbname" from connection string to be overridden in PQconnectDBParams
If the "dbname" attribute in PQconnectDBParams contained a connection string
or URI (and expand_dbname = TRUE), the database name from the connection
string could not be overridden by a subsequent "dbname" keyword in the
array. That was not intentional; all other options can be overridden.
Furthermore, any subsequent "dbname" caused the connection string from the
first dbname value to be processed again, overriding any values for the same
options that were given between the connection string and the second dbname
option.
In passing, clarify in the docs that only the first dbname option in the
array is parsed as a connection string.
Alex Shulgin. Backpatch to all supported versions.
Check return value of strdup() in libpq connection option parsing.
An out-of-memory failure in most of these would lead to strange
behavior, like connecting to a different database than intended, but
some would lead to an outright segfault.
Alex Shulgin and me. Backpatch to all supported versions.
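The repeated pattern now looks roughly like this (a sketch; the
committed code handles this option by option in fillPGconn):
    conn->dbName = strdup(value);
    if (conn->dbName == NULL)
        goto oom_error;     /* previously the NULL result went unchecked */

    /* ... likewise for every other strdup'd option ... */

    oom_error:
        conn->status = CONNECTION_BAD;
        printfPQExpBuffer(&conn->errorMessage,
                          libpq_gettext("out of memory\n"));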
Tom Lane [Sat, 22 Nov 2014 21:01:08 +0000 (16:01 -0500)]
Fix mishandling of system columns in FDW queries.
postgres_fdw would send query conditions involving system columns to the
remote server, even though it makes no effort to ensure that system
columns other than CTID match what the remote side thinks. tableoid,
in particular, probably won't match and might have some use in queries.
Hence, prevent sending conditions that include non-CTID system columns.
Also, create_foreignscan_plan neglected to check local restriction
conditions while determining whether to set fsSystemCol for a foreign
scan plan node. This again would bollix the results for queries that
test a foreign table's tableoid.
Back-patch the first fix to 9.3 where postgres_fdw was introduced.
Back-patch the second to 9.2. The code is probably broken in 9.1 as
well, but the patch doesn't apply cleanly there; given the weak state
of support for FDWs in 9.1, it doesn't seem worth fixing.
Etsuro Fujita, reviewed by Ashutosh Bapat, and somewhat modified by me
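The first fix boils down to a stricter Var test in postgres_fdw's
expression walker (paraphrased; SelfItemPointerAttributeNumber is
ctid's attribute number):
    /* Vars of the foreign table: system columns other than ctid stay local */
    if (var->varattno < 0 &&
        var->varattno != SelfItemPointerAttributeNumber)
        return false;       /* not safe to ship to the remote server */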
Tom Lane [Wed, 19 Nov 2014 21:00:27 +0000 (16:00 -0500)]
Improve documentation's description of JOIN clauses.
In bug #12000, Andreas Kunert complained that the documentation was
misleading in saying "FROM T1 CROSS JOIN T2 is equivalent to FROM T1, T2".
That's correct as far as it goes, but the equivalence doesn't hold when
you consider three or more tables, since JOIN binds more tightly than
comma. I added a <note> to explain this, and ended up rearranging some
of the existing text so that the note would make sense in context.
In passing, rewrite the description of JOIN USING, which was unnecessarily
vague, and hadn't been helped any by somebody's reliance on markup as a
substitute for clear writing. (Mostly this involved reintroducing a
concrete example that was unaccountably removed by commit 032f3b7e166cfa28.)
Fujii Masao [Wed, 19 Nov 2014 10:11:03 +0000 (19:11 +0900)]
Fix bug in the test of file descriptor of current WAL file in pg_receivexlog.
In pg_receivexlog, in order to check whether the current WAL file is
open or not, its file descriptor has to be checked against -1 as an
invalid value. But, oops, 7900e94 added the incorrect test checking
the descriptor against 1. This commit fixes that bug.
Fujii Masao [Wed, 19 Nov 2014 05:11:48 +0000 (14:11 +0900)]
Fix pg_receivexlog --slot so that it doesn't prevent the server shutdown.
When pg_receivexlog --slot is connecting to the server, at shutdown
of the server walsender keeps waiting for the last WAL record to be
replicated and flushed in pg_receivexlog. But previously pg_receivexlog
issued a sync only when the WAL file was switched. So there was a case
where the last WAL record was never flushed and walsender had to keep
waiting infinitely. This caused the server shutdown to get stuck.
pg_recvlogical handles this problem by calling fsync() when it receives
the request of immediate reply from the server. That is, at shutdown,
walsender sends the request, pg_recvlogical receives it, flushes the last
WAL record, and sends the flush location back to the server. Since
walsender can see that the last WAL record is successfully flushed, it can
exit cleanly.
This commit introduces the same logic as pg_recvlogical has,
to pg_receivexlog.
Back-patch to 9.4 where pg_receivexlog was changed so that it can use
the replication slot.
Original patch by Michael Paquier, rewritten by me.
Bug report by Furuya Osamu.
Tom Lane [Wed, 19 Nov 2014 02:36:43 +0000 (21:36 -0500)]
Don't require bleeding-edge timezone data in timestamptz regression test.
The regression test cases added in commits b2cbced9e et al depended in part
on the Russian timezone offset changes of Oct 2014. While this is of no
particular concern for a default Postgres build, it was possible for a
build using --with-system-tzdata to fail the tests if the system tzdata
database wasn't au courant. Bjorn Munch and Christoph Berg both complained
about this while packaging 9.4rc1, so we probably shouldn't insist on the
system tzdata being up-to-date. Instead, make an equivalent test using a
zone change that occurred in Venezuela in 2007. With this patch, the
regression tests should pass using any tzdata set from 2012 or later.
(I can't muster much sympathy for somebody using --with-system-tzdata
on a machine whose system tzdata is more than three years out-of-date.)
Tom Lane [Tue, 18 Nov 2014 18:28:09 +0000 (13:28 -0500)]
Fix some bogus direct uses of realloc().
pg_dump/parallel.c was using realloc() directly with no error check.
While the odds of an actual failure here seem pretty low, Coverity
complains about it, so fix by using pg_realloc() instead.
While looking for other instances, I noticed a couple of places in
psql that hadn't gotten the memo about the availability of pg_realloc.
These aren't bugs, since they did have error checks, but verbosely
inconsistent code is not a good thing.
Back-patch as far as 9.3. 9.2 did not have pg_dump/parallel.c, nor
did it have pg_realloc available in all frontend code.
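For reference, the frontend wrapper's shape (a sketch of pg_realloc
in src/common/fe_memutils.c; frontend code exits on OOM rather than
returning NULL):
    void *
    pg_realloc(void *ptr, size_t size)
    {
        void   *tmp;

        /* Avoid unportable behavior of realloc(NULL, 0) */
        if (ptr == NULL && size == 0)
            size = 1;
        tmp = realloc(ptr, size);
        if (!tmp)
        {
            fprintf(stderr, "out of memory\n");
            exit(EXIT_FAILURE);
        }
        return tmp;
    }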
Tom Lane [Mon, 17 Nov 2014 17:08:02 +0000 (12:08 -0500)]
Update time zone data files to tzdata release 2014j.
DST law changes in the Turks & Caicos Islands (America/Grand_Turk) and
in Fiji. New zone Pacific/Bougainville for portions of Papua New Guinea.
Historical changes for Korea and Vietnam.
Fix WAL-logging of B-tree "unlink halfdead page" operation.
There was some confusion on how to record the case that the operation
unlinks the last non-leaf page in the branch being deleted.
_bt_unlink_halfdead_page set the "topdead" field in the WAL record to
the leaf page, but the redo routine assumed that it would be an invalid
block number in that case. This commit fixes _bt_unlink_halfdead_page to
do what the redo routine expected.