Tom Lane [Sun, 14 Dec 2014 23:09:55 +0000 (18:09 -0500)]
Improve documentation around parameter-setting and ALTER SYSTEM.
The ALTER SYSTEM ref page hadn't been held to a very high standard, nor
was the feature well integrated into section 18.1 (parameter setting).
Also, though commit 4c4654afe had improved the structure of 18.1, it also
introduced a lot of poor wording, imprecision, and outright falsehoods.
Try to clean that up.
Tom Lane [Sat, 13 Dec 2014 18:46:46 +0000 (13:46 -0500)]
Improve recovery target settings documentation.
Commit 815d71dee hadn't bothered to update the documentation to match the
behavioral change, and a lot of other text in this section was badly in
need of copy-editing.
Tom Lane [Sat, 13 Dec 2014 16:49:20 +0000 (11:49 -0500)]
Repair corner-case bug in array version of percentile_cont().
The code for advancing through the input rows overlooked the case that we
might already be past the first row of the row pair now being considered,
in case the previous percentile also fell between the same two input rows.
Report and patch by Andrew Gierth; logic rewritten a bit for clarity by me.
Tom Lane [Fri, 12 Dec 2014 17:41:52 +0000 (12:41 -0500)]
Revert misguided change to postgres_fdw FOR UPDATE/SHARE code.
In commit 462bd95705a0c23ba0b0ba60a78d32566a0384c1, I changed postgres_fdw
to rely on get_plan_rowmark() instead of get_parse_rowmark(). I still
think that's a good idea in the long run, but as Etsuro Fujita pointed out,
it doesn't work today because planner.c forces PlanRowMarks to have
markType = ROW_MARK_COPY for all foreign tables. There's no urgent reason
to change this in the back branches, so let's just revert that part of
yesterday's commit rather than trying to design a better solution under
time pressure.
Also, add a regression test case showing what postgres_fdw does with FOR
UPDATE/SHARE. I'd blithely assumed there was one already, else I'd have
realized yesterday that this code didn't work.
Tom Lane [Fri, 12 Dec 2014 02:02:28 +0000 (21:02 -0500)]
Fix planning of SELECT FOR UPDATE on child table with partial index.
Ordinarily we can omit checking of a WHERE condition that matches a partial
index's condition, when we are using an indexscan on that partial index.
However, in SELECT FOR UPDATE we must include the "redundant" filter
condition in the plan so that it gets checked properly in an EvalPlanQual
recheck. The planner got this mostly right, but improperly omitted the
filter condition if the index in question was on an inheritance child
table. In READ COMMITTED mode, this could result in incorrectly returning
just-updated rows that no longer satisfy the filter condition.
The cause of the error is using get_parse_rowmark() when get_plan_rowmark()
is what should be used during planning. In 9.3 and up, also fix the same
mistake in contrib/postgres_fdw. It's currently harmless there (for lack
of inheritance support) but wrong is wrong, and the incorrect code might
get copied to someplace where it's more significant.
Report and fix by Kyotaro Horiguchi. Back-patch to all supported branches.
Tom Lane [Fri, 12 Dec 2014 00:37:03 +0000 (19:37 -0500)]
Fix corner case where SELECT FOR UPDATE could return a row twice.
In READ COMMITTED mode, if a SELECT FOR UPDATE discovers it has to redo
WHERE-clause checking on rows that have been updated since the SELECT's
snapshot, it invokes EvalPlanQual processing to do that. If this first
occurs within a non-first child table of an inheritance tree, the previous
coding could accidentally re-return a matching row from an earlier,
already-scanned child table. (And, to add insult to injury, I think this
could make it miss returning a row that should have been returned, if the
updated row that this happens on should still have passed the WHERE qual.)
Per report from Kyotaro Horiguchi; the added isolation test is based on his
test case.
This has been broken for quite a while, so back-patch to all supported
branches.
Tom Lane [Thu, 11 Dec 2014 00:06:27 +0000 (19:06 -0500)]
Fix minor thinko in convertToJsonb().
The amount of space to reserve for the value's varlena header is
VARHDRSZ, not sizeof(VARHDRSZ). The latter coding accidentally
failed to fail because of the way the VARHDRSZ macro is currently
defined; but if we ever change it to return size_t (as one might
reasonably expect it to do), convertToJsonb() would have failed.
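As a standalone illustration (a minimal sketch, not PostgreSQL's code; it assumes VARHDRSZ expands to ((int32) sizeof(int32)), as it does today):

    #include <stdio.h>
    #include <stdint.h>

    /* Assumed local stand-in for PostgreSQL's macro. */
    #define VARHDRSZ ((int32_t) sizeof(int32_t))

    int main(void)
    {
        /* Correct: reserve 4 bytes for the varlena header. */
        size_t reserve_ok = VARHDRSZ;
        /* Accidentally also 4 today, since sizeof an int32 expression is 4 ... */
        size_t reserve_bad = sizeof(VARHDRSZ);
        /* ... but if VARHDRSZ were redefined to yield size_t, sizeof(VARHDRSZ)
         * would become sizeof(size_t), typically 8, reserving the wrong amount. */
        printf("%zu %zu\n", reserve_ok, reserve_bad);
        return 0;
    }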
Fix PGXS vpath build when PostgreSQL is built with vpath
PGXS computes srcdir from VPATH, PostgreSQL proper computes VPATH from
srcdir, and doing both results in an error from make. Conditionalize so
only one of these takes effect.
Fix SHLIB_PREREQS use in contrib, allowing PGXS builds
dblink and postgres_fdw use SHLIB_PREREQS = submake-libpq to build libpq
first. This doesn't work in a PGXS build, because there is no libpq to
build. So just omit setting SHLIB_PREREQS in this case.
Note that PGXS users can still use SHLIB_PREREQS (although it is not
documented). The problem here is only that contrib modules can be built
in-tree or using PGXS, and the prerequisite is only applicable in the
former case.
Commit 6697aa2bc25c83b88d6165340348a31328c35de6 previously attempted to
address this by creating a somewhat fake submake-libpq target in
Makefile.global. That was not the right fix, and it was also done in a
nonportable way, so revert that.
Since this is not something that a user should change,
pg_config_manual.h was an inappropriate place for it.
In initdb.c, remove the use of the macro, because utils/guc.h can't be
included by non-backend code. But we hardcode all the other
configuration file names there, so this isn't a disaster.
Tom Lane [Tue, 2 Dec 2014 23:23:20 +0000 (18:23 -0500)]
Improve error messages for malformed array input strings.
Make the error messages issued by array_in() uniformly follow the style
ERROR: malformed array literal: "actual input string"
DETAIL: specific complaint here
and rewrite many of the specific complaints to be clearer.
The immediate motivation for doing this is a complaint from Josh Berkus
that json_to_record() produced an unintelligible error message when
dealing with an array item, because it tries to feed the JSON-format
array value to array_in(). Really it ought to be smart enough to
perform JSON-to-Postgres array conversion, but that's a future feature
not a bug fix. In the meantime, this change is something we agreed
we could back-patch into 9.4, and it should help de-confuse things a bit.
Andres Freund [Tue, 2 Dec 2014 22:42:26 +0000 (23:42 +0100)]
Don't skip SQL backends in logical decoding for visibility computation.
The logical decoding patchset introduced the PROC_IN_LOGICAL_DECODING
PGXACT flag, which allows such backends to be skipped when computing
the xmin horizon/snapshots. That's fine and sensible for walsenders
streaming out logical changes, but not at all fine for SQL backends
doing logical decoding. If the latter set that flag, any change they
have performed outside of logical decoding will not be regarded as
visible - which can, for example, lead to that change being vacuumed away.
Note that not setting the flag for SQL backends isn't particularly
bothersome - the SQL backend doesn't do streaming, so it only runs for
a limited amount of time.
Per buildfarm member 'tick' and Alvaro.
Backpatch to 9.4, where logical decoding was introduced.
Tom Lane [Tue, 2 Dec 2014 20:02:40 +0000 (15:02 -0500)]
Fix JSON aggregates to work properly when final function is re-executed.
Davide S. reported that json_agg() sometimes produced multiple trailing
right brackets. This turns out to be because json_agg_finalfn() attaches
the final right bracket, and was doing so by modifying the aggregate state
in-place. That's verboten, though unfortunately it seems there's no way
for nodeAgg.c to check for such mistakes.
Fix that back to 9.3 where the broken code was introduced. In 9.4 and
HEAD, likewise fix json_object_agg(), which had copied the erroneous logic.
Make some cosmetic cleanups as well.
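A simplified standalone sketch of the rule at issue, using a plain C string rather than PostgreSQL's actual StringInfo/fcinfo machinery (the function names here are hypothetical):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* WRONG: appends ']' directly to the shared transition state, so a
     * re-executed final function yields "...]]" . */
    static char *finalfn_bad(char *state)
    {
        strcat(state, "]");
        return state;
    }

    /* OK: build the result in a fresh buffer and leave the state untouched. */
    static char *finalfn_good(const char *state)
    {
        size_t len = strlen(state);
        char *result = malloc(len + 2);

        memcpy(result, state, len);
        result[len] = ']';
        result[len + 1] = '\0';
        return result;
    }

    int main(void)
    {
        char state[32] = "[1, 2";        /* transition state so far */

        char *a = finalfn_good(state);
        char *b = finalfn_good(state);   /* re-execution: still "[1, 2]" */
        printf("%s / %s\n", a, b);
        free(a);
        free(b);

        finalfn_bad(state);
        finalfn_bad(state);              /* re-execution: now "[1, 2]]" */
        printf("%s\n", state);
        return 0;
    }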
Tom Lane [Mon, 1 Dec 2014 20:25:05 +0000 (15:25 -0500)]
Guard against bad "dscale" values in numeric_recv().
We were not checking to see if the supplied dscale was valid for the given
digit array when receiving binary-format numeric values. While dscale can
validly be more than the number of nonzero fractional digits, it shouldn't
be less; that case causes fractional digits to be hidden on display even
though they're there and participate in arithmetic.
Bug #12053 from Tommaso Sala indicates that there's at least one broken
client library out there that sometimes supplies an incorrect dscale value,
leading to strange behavior. This suggests that simply throwing an error
might not be the best response; it would lead to failures in applications
that might seem to be working fine today. What seems the least risky fix
is to truncate away any digits that would be hidden by dscale. This
preserves the existing behavior in terms of what will be printed for the
transmitted value, while preventing subsequent arithmetic from producing
results inconsistent with that.
In passing, throw a specific error for the case of dscale being outside
the range that will fit into a numeric's header. Before you got "value
overflows numeric format", which is a bit misleading.
Andrew Dunstan [Mon, 1 Dec 2014 16:28:45 +0000 (11:28 -0500)]
Fix hstore_to_json_loose's detection of valid JSON number values.
We expose a function IsValidJsonNumber that internally calls the lexer
for json numbers. That allows us to use the same test everywhere,
instead of inventing a broken test for hstore conversions. The new
function is also used in datum_to_json, replacing the code that is now
moved to the new function.
Backpatch to 9.3 where hstore_to_json_loose was introduced.
Coverity complained that the "else" added to fillPGconn() was unreachable,
which it was. Remove the dead code. In passing, rearrange the tests so as
not to bother trying to fetch values for options that can't be assigned.
Pre-9.3 did not have that issue, but it did have a "return" that should be
"goto oom_error" to ensure that a suitable error message gets filled in.
Apart from ignoring "hostaddr" set to the empty string, this behaves
identically to its predecessor. Back-patch to 9.4, where the original
commit first appeared.
Noah Misch [Sat, 29 Nov 2014 17:31:21 +0000 (12:31 -0500)]
Revert "Add libpq function PQhostaddr()."
This reverts commit 9f80f4835a55a1cbffcda5d23a617917f3286c14. The
function returned the raw value of a connection parameter, a task served
by PQconninfo(). The next commit will reimplement the psql \conninfo
change that way. Back-patch to 9.4, where that commit first appeared.
Fujii Masao [Thu, 27 Nov 2014 17:42:43 +0000 (02:42 +0900)]
Make \watch respect the user's \pset null setting.
Previously \watch always ignored the user's \pset null setting.
Ignoring that setting is defensible for \d and similar queries: for those,
the code can reasonably have an opinion about what the presentation
should be like, since it knows what SQL query it's issuing.
it's issuing. This argument surely doesn't apply to \watch,
so this commit makes \watch use the user's \pset null setting.
Tom Lane [Thu, 27 Nov 2014 16:12:47 +0000 (11:12 -0500)]
Free libxml2/libxslt resources in a safer order.
Mark Simonetti reported that libxslt sometimes crashes for him, and that
swapping xslt_process's object-freeing calls around to do them in reverse
order of creation seemed to fix it. I've not reproduced the crash, but
valgrind clearly shows a reference to already-freed memory, which is
consistent with the idea that shutdown of the xsltTransformContext is
trying to reference the already-freed stylesheet or input document.
With this patch, valgrind is no longer unhappy.
I have an inquiry in to see if this is a libxslt bug or if we're just
abusing the library; but even if it's a library bug, we'd want to adjust
our code so it doesn't fail with unpatched libraries.
Back-patch to all supported branches, because we've been doing this in
the wrong(?) order for a long time.
Allow "dbname" from connection string to be overridden in PQconnectDBParams
If the "dbname" attribute in PQconnectDBParams contained a connection string
or URI (and expand_dbname = TRUE), the database name from the connection
string could not be overridden by a subsequent "dbname" keyword in the
array. That was not intentional; all other options can be overridden.
Furthermore, any subsequent "dbname" caused the connection string from the
first dbname value to be processed again, overriding any values for the same
options that were given between the connection string and the second dbname
option.
In passing, clarify in the docs that only the first dbname option in the
array is parsed as a connection string.
Alex Shulgin. Backpatch to all supported versions.
Check return value of strdup() in libpq connection option parsing.
An out-of-memory in most of these would lead to strange behavior, like
connecting to a different database than intended, but some would lead to
an outright segfault.
Alex Shulgin and me. Backpatch to all supported versions.
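A minimal sketch of the checked pattern (hypothetical names, not libpq's actual code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Copy an option value, reporting out-of-memory instead of silently
     * leaving the destination unset. */
    static int set_option(char **dst, const char *value)
    {
        char *copy = strdup(value);

        if (copy == NULL)
            return -1;          /* caller turns this into "out of memory" */
        free(*dst);
        *dst = copy;
        return 0;
    }

    int main(void)
    {
        char *dbname = NULL;

        if (set_option(&dbname, "mydb") != 0)
        {
            fprintf(stderr, "out of memory\n");
            return 1;
        }
        printf("dbname = %s\n", dbname);
        free(dbname);
        return 0;
    }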
Tom Lane [Sat, 22 Nov 2014 21:01:08 +0000 (16:01 -0500)]
Fix mishandling of system columns in FDW queries.
postgres_fdw would send query conditions involving system columns to the
remote server, even though it makes no effort to ensure that system
columns other than CTID match what the remote side thinks. tableoid,
in particular, probably won't match and might have some use in queries.
Hence, prevent sending conditions that include non-CTID system columns.
Also, create_foreignscan_plan neglected to check local restriction
conditions while determining whether to set fsSystemCol for a foreign
scan plan node. This again would bollix the results for queries that
test a foreign table's tableoid.
Back-patch the first fix to 9.3 where postgres_fdw was introduced.
Back-patch the second to 9.2. The code is probably broken in 9.1 as
well, but the patch doesn't apply cleanly there; given the weak state
of support for FDWs in 9.1, it doesn't seem worth fixing.
Etsuro Fujita, reviewed by Ashutosh Bapat, and somewhat modified by me
Tom Lane [Wed, 19 Nov 2014 21:00:27 +0000 (16:00 -0500)]
Improve documentation's description of JOIN clauses.
In bug #12000, Andreas Kunert complained that the documentation was
misleading in saying "FROM T1 CROSS JOIN T2 is equivalent to FROM T1, T2".
That's correct as far as it goes, but the equivalence doesn't hold when
you consider three or more tables, since JOIN binds more tightly than
comma. I added a <note> to explain this, and ended up rearranging some
of the existing text so that the note would make sense in context.
In passing, rewrite the description of JOIN USING, which was unnecessarily
vague, and hadn't been helped any by somebody's reliance on markup as a
substitute for clear writing. (Mostly this involved reintroducing a
concrete example that was unaccountably removed by commit 032f3b7e166cfa28.)
Fujii Masao [Wed, 19 Nov 2014 10:11:03 +0000 (19:11 +0900)]
Fix bug in the test of file descriptor of current WAL file in pg_receivexlog.
In pg_receivexlog, in order to check whether the current WAL file is
being opened or not, its file descriptor has to be checked against -1
as an invalid value. But, oops, 7900e94 added the incorrect test
checking the descriptor against 1. This commit fixes that bug.
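A standalone illustration of the test in question (the variable name is hypothetical):

    #include <stdio.h>

    int main(void)
    {
        int walfile = -1;                  /* no WAL file currently open */

        if (walfile != -1)
            printf("correct test: a file is considered open\n");
        if (walfile != 1)
            printf("buggy test: file wrongly considered open\n");
        return 0;
    }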
Fujii Masao [Wed, 19 Nov 2014 05:11:48 +0000 (14:11 +0900)]
Fix pg_receivexlog --slot so that it doesn't prevent the server shutdown.
When pg_receivexlog --slot is connected to the server and the server is
shut down, walsender keeps waiting for the last WAL record to be
replicated and flushed by pg_receivexlog. But previously pg_receivexlog
issued a sync command only when the WAL file was switched, so there were
cases where the last WAL record was never flushed and walsender had to
keep waiting indefinitely. This caused the server shutdown to get stuck.
pg_recvlogical handles this problem by calling fsync() when it receives
the request for an immediate reply from the server. That is, at shutdown,
walsender sends the request, pg_recvlogical receives it, flushes the last
WAL record, and sends the flush location back to the server. Since
walsender can see that the last WAL record is successfully flushed, it can
exit cleanly.
This commit introduces the same logic into pg_receivexlog.
Back-patch to 9.4 where pg_receivexlog was changed so that it can use
the replication slot.
Original patch by Michael Paquier, rewritten by me.
Bug report by Furuya Osamu.
Tom Lane [Wed, 19 Nov 2014 02:36:43 +0000 (21:36 -0500)]
Don't require bleeding-edge timezone data in timestamptz regression test.
The regression test cases added in commits b2cbced9e et al depended in part
on the Russian timezone offset changes of Oct 2014. While this is of no
particular concern for a default Postgres build, it was possible for a
build using --with-system-tzdata to fail the tests if the system tzdata
database wasn't au courant. Bjorn Munch and Christoph Berg both complained
about this while packaging 9.4rc1, so we probably shouldn't insist on the
system tzdata being up-to-date. Instead, make an equivalent test using a
zone change that occurred in Venezuela in 2007. With this patch, the
regression tests should pass using any tzdata set from 2012 or later.
(I can't muster much sympathy for somebody using --with-system-tzdata
on a machine whose system tzdata is more than three years out-of-date.)
Tom Lane [Tue, 18 Nov 2014 18:28:09 +0000 (13:28 -0500)]
Fix some bogus direct uses of realloc().
pg_dump/parallel.c was using realloc() directly with no error check.
While the odds of an actual failure here seem pretty low, Coverity
complains about it, so fix by using pg_realloc() instead.
While looking for other instances, I noticed a couple of places in
psql that hadn't gotten the memo about the availability of pg_realloc.
These aren't bugs, since they did have error checks, but verbosely
inconsistent code is not a good thing.
Back-patch as far as 9.3. 9.2 did not have pg_dump/parallel.c, nor
did it have pg_realloc available in all frontend code.
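For reference, a sketch of an error-checked wrapper in the spirit of pg_realloc() (an illustrative stand-in, not PostgreSQL's implementation):

    #include <stdio.h>
    #include <stdlib.h>

    static void *xrealloc(void *ptr, size_t size)
    {
        void *newptr = realloc(ptr, size);

        if (newptr == NULL && size != 0)
        {
            fprintf(stderr, "out of memory\n");
            exit(EXIT_FAILURE);
        }
        return newptr;
    }

    int main(void)
    {
        int *buf = NULL;

        buf = xrealloc(buf, 16 * sizeof(int));   /* grow without a bare realloc() */
        buf[15] = 42;
        printf("%d\n", buf[15]);
        free(buf);
        return 0;
    }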
Tom Lane [Mon, 17 Nov 2014 17:08:02 +0000 (12:08 -0500)]
Update time zone data files to tzdata release 2014j.
DST law changes in the Turks & Caicos Islands (America/Grand_Turk) and
in Fiji. New zone Pacific/Bougainville for portions of Papua New Guinea.
Historical changes for Korea and Vietnam.
Fix WAL-logging of B-tree "unlink halfdead page" operation.
There was some confusion on how to record the case that the operation
unlinks the last non-leaf page in the branch being deleted.
_bt_unlink_halfdead_page set the "topdead" field in the WAL record to
the leaf page, but the redo routine assumed that it would be an invalid
block number in that case. This commit fixes _bt_unlink_halfdead_page to
do what the redo routine expected.
Andres Freund [Fri, 14 Nov 2014 17:22:12 +0000 (18:22 +0100)]
Fix initdb --sync-only to also sync tablespaces.
630cd14426dc added initdb --sync-only, for use by pg_upgrade, by just
exposing the existing fsync code. That's insufficient, because initdb so far
had absolutely no reason to deal with tablespaces, so that code doesn't sync them.
Fix --sync-only by additionally explicitly syncing each of the
tablespaces.
Backpatch to 9.3 where --sync-only was introduced.
Andres Freund [Fri, 14 Nov 2014 17:21:30 +0000 (18:21 +0100)]
Sync unlogged relations to disk after they have been reset.
Unlogged relations are only reset when performing an unclean
restart. That means they have to be synced to disk during clean
shutdowns. During normal processing that's achieved by registering a
buffer's file to be fsynced at the next checkpoint when it is flushed. But
ResetUnloggedRelations() doesn't go through the buffer manager, so
nothing will force the reset relations to disk before the next shutdown
checkpoint.
So just make ResetUnloggedRelations() fsync the newly created main
forks to disk.
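A generic sketch of the primitive relied on here (plain POSIX calls, not the actual storage-layer code): a newly written file must be fsync'd for its contents to be guaranteed durable.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("main_fork.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0600);

        if (fd < 0) { perror("open"); return 1; }
        if (write(fd, "data", 4) != 4) { perror("write"); return 1; }
        if (fsync(fd) != 0) { perror("fsync"); return 1; }  /* durable before shutdown */
        close(fd);
        return 0;
    }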
Andres Freund [Fri, 14 Nov 2014 17:20:59 +0000 (18:20 +0100)]
Ensure unlogged tables are reset even if crash recovery errors out.
Unlogged relations are reset at the end of crash recovery as they're
only synced to disk during a proper shutdown. Unfortunately that and
later steps can fail, e.g. due to running out of space. This reset
was, up to now, performed after marking the database as having finished
crash recovery successfully. Since out-of-space errors trigger a crash
restart, that could lead to a situation where not all unlogged
relations are reset.
Once that happened, use of unlogged relations could yield errors like
"could not open file "...": No such file or directory". Luckily
clusters that show the problem can be fixed by performing an immediate
shutdown and starting the database again.
To fix, just call ResetUnloggedRelations(UNLOGGED_RELATION_INIT)
earlier, before marking the database as having successfully recovered.
Tom Lane [Fri, 14 Nov 2014 22:19:29 +0000 (17:19 -0500)]
Document evaluation-order considerations for aggregate functions.
The SELECT reference page didn't really address the question of when
aggregate function evaluation occurs, nor did the "expression evaluation
rules" documentation mention that CASE can't be used to control whether
an aggregate gets evaluated or not. Improve that.
Per discussion of bug #11661. Original text by Marti Raudsepp and Michael
Paquier, rewritten significantly by me.
Stephen Frost [Fri, 14 Nov 2014 20:16:01 +0000 (15:16 -0500)]
Revert change to ALTER TABLESPACE summary.
When ALTER TABLESPACE MOVE ALL was changed to be ALTER TABLE ALL IN
TABLESPACE, the ALTER TABLESPACE summary should have been adjusted back
to its original definition.
Alvaro Herrera [Fri, 14 Nov 2014 18:14:02 +0000 (15:14 -0300)]
Allow interrupting GetMultiXactIdMembers
This function has a loop which can lead to uninterruptible process
"stalls" (actually infinite loops) when some bugs are triggered. Avoid
that unpleasant situation by adding a check for interrupts in a place
that shouldn't degrade performance in the normal case.
Backpatch to 9.3. Older branches have an identical loop here, but the
aforementioned bugs are only a problem starting in 9.3 so there doesn't
seem to be any point in backpatching any further.
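The general pattern, sketched as a standalone program (PostgreSQL itself uses its CHECK_FOR_INTERRUPTS() macro rather than a signal flag like this):

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>

    static volatile sig_atomic_t cancel_pending = 0;

    static void handle_sigint(int signo)
    {
        (void) signo;
        cancel_pending = 1;
    }

    int main(void)
    {
        signal(SIGINT, handle_sigint);

        for (long i = 0; i < 2000000000L; i++)
        {
            /* The equivalent of a check-for-interrupts call inside the loop:
             * without it, the process cannot be canceled until the loop ends. */
            if (cancel_pending)
            {
                fprintf(stderr, "canceled\n");
                exit(1);
            }
            /* ... one unit of potentially long-running work ... */
        }
        return 0;
    }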
Tom Lane [Thu, 13 Nov 2014 23:19:28 +0000 (18:19 -0500)]
Fix pg_dumpall to restore its ability to dump from ancient servers.
Fix breakage induced by commits d8d3d2a4f37f6df5d0118b7f5211978cca22091a
and 463f2625a5fb183b6a8925ccde98bb3889f921d9: pg_dumpall has crashed when
attempting to dump from pre-8.1 servers since then, due to faulty
construction of the query used for dumping roles from older servers.
The query was erroneous as of the earlier commit, but it wasn't exposed
unless you tried to use --binary-upgrade, which you presumably wouldn't
with a pre-8.1 server. However commit 463f2625a made it fail always.
In HEAD, also fix additional breakage induced in the same query by
commit 491c029dbc4206779cf659aa0ff986af7831d2ff, which evidently wasn't
tested against pre-8.1 servers either.
The bug is only latent in 9.1 because 463f2625a hadn't landed yet, but
it seems best to back-patch all branches containing the faulty query.
Andres Freund [Thu, 13 Nov 2014 18:06:43 +0000 (19:06 +0100)]
Fix and improve cache invalidation logic for logical decoding.
There are basically three situations in which logical decoding needs
to perform cache invalidation: during/after replaying a transaction
with catalog changes, when skipping an uninteresting transaction that
performed catalog changes, and when erroring out while replaying a
transaction. Unfortunately these three cases were all handled slightly
differently - partially because 8de3e410fa, which greatly simplifies
matters, got committed in the midst of the development of logical
decoding.
The actually problematic case was when logical decoding skipped
transaction commits (and thus processed invalidations). When used via
the SQL interface, cache invalidation could access the catalog - bad,
because we didn't set up enough state to allow that correctly. It
wouldn't be hard to set up sufficient state, but the simpler solution is
to always perform cache invalidation outside a valid transaction.
Also make the different cache invalidation cases look as similar as
possible, to ease code review.
This fixes the assertion failure reported by Antonin Houska in 53EE02D9.7040702@gmail.com. The presented testcase has been expanded
into a regression test.
Backpatch to 9.4, where logical decoding was introduced.
Andres Freund [Thu, 13 Nov 2014 18:06:43 +0000 (19:06 +0100)]
Fix xmin/xmax horizon computation during logical decoding initialization.
When building the initial historic catalog snapshot there were
scenarios where snapbuild.c would use incorrect xmin/xmax values when
starting from an xl_running_xacts record. The values used were always a
bit suspect, but happened to be correct in the easy-to-test
cases; notably, the values used when the initial snapshot was
computed while no other transactions were running were correct.
This is likely to be the cause of the occasional buildfarm failures on
animals markhor and tick; but it's quite possible to reproduce
problems without CLOBBER_CACHE_ALWAYS.
Backpatch to 9.4, where logical decoding was introduced.
Fix race condition between hot standby and restoring a full-page image.
There was a window in RestoreBackupBlock where a page would be zeroed out,
but not yet locked. If a backend pinned and locked the page in that window,
it saw the zeroed page instead of the old page or new page contents, which
could lead to missing rows in a result set, or errors.
To fix, replace RBM_ZERO with RBM_ZERO_AND_LOCK, which atomically pins,
zeroes, and locks the page, if it's not in the buffer cache already.
In stable branches, the old RBM_ZERO constant is renamed to RBM_DO_NOT_USE,
to avoid breaking any 3rd party extensions that might use RBM_ZERO. More
importantly, this avoids renumbering the other enum values, which would
cause even bigger confusion in extensions that use ReadBufferExtended, but
haven't been recompiled.
Backpatch to all supported versions; this has been racy since hot standby
was introduced.
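A generic illustration of the renaming tactic (a hypothetical enum, not the actual ReadBufferMode definition): retiring an enumerator in place keeps the numeric values that already-compiled extensions were built with.

    #include <stdio.h>

    typedef enum
    {
        MODE_NORMAL = 0,
        MODE_DO_NOT_USE = 1,   /* formerly MODE_ZERO; slot kept so later values stay stable */
        MODE_OTHER = 2
    } Mode;

    int main(void)
    {
        printf("%d\n", (int) MODE_OTHER);   /* still 2, as old binaries expect */
        return 0;
    }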
Tom Lane [Wed, 12 Nov 2014 20:58:40 +0000 (15:58 -0500)]
Explicitly support the case that a plancache's raw_parse_tree is NULL.
This only happens if a client issues a Parse message with an empty query
string, which is a bit odd; but since it is explicitly called out as legal
by our FE/BE protocol spec, we'd probably better continue to allow it.
Fix by adding tests everywhere that the raw_parse_tree field is passed to
functions that don't or shouldn't accept NULL. Also make it clear in the
relevant comments that NULL is an expected case.
This reverts commits a73c9dbab0165b3395dfe8a44a7dfd16166963c4 and 2e9650cbcff8c8fb0d9ef807c73a44f241822eee, which fixed specific crash
symptoms by hacking things at what now seems to be the wrong end, ie the
callee functions. Making the callees allow NULL is superficially more
robust, but it's not always true that there is a defensible thing for the
callee to do in such cases. The caller has more context and is better
able to decide what the empty-query case ought to do.
Per followup discussion of bug #11335. Back-patch to 9.2. The code
before that is sufficiently different that it would require development
of a separate patch, which doesn't seem worthwhile for what is believed
to be an essentially cosmetic change.
Andres Freund [Wed, 12 Nov 2014 17:52:49 +0000 (18:52 +0100)]
Fix several weaknesses in slot and logical replication on-disk serialization.
Heikki noticed in 544E23C0.8090605@vmware.com that slot.c and
snapbuild.c were missing the FIN_CRC32 call when computing/checking
checksums of on-disk files. That doesn't lower the error detection
capabilities of the checksum, but is inconsistent with other usages.
In a followup mail Heikki also noticed that, contrary to a comment,
the 'version' and 'length' struct fields of a replication slot's on-disk
data were not covered by the checksum. That's not likely to lead to
actually missed corruption, as those fields are cross-checked with the
expected version and the actual file length. But it's wrong
nonetheless.
As fixing these issues makes existing on-disk files unreadable, bump
the expected versions of on-disk files for both slots and logical
decoding historic catalog snapshots. This means that loading old
files will fail with
ERROR: "replication slot file ... has unsupported version 1"
and
ERROR: "snapbuild state file ... has unsupported version 1 instead of 2"
respectively. Given the low likelihood of anybody already using
these new features in a production setup, that seems acceptable.
Fixing these issues made me notice that there's no regression test
covering the loading of historic snapshot from disk - so add one.
Backpatch to 9.4 where these features were introduced.
Andres Freund [Wed, 12 Nov 2014 17:52:49 +0000 (18:52 +0100)]
Add interrupt checks to contrib/pg_prewarm.
Previously the extension's pg_prewarm() function didn't check for
interrupts once it started "warming" data. Since individual calls can
take a long while it's important for them to be interruptible.
Noah Misch [Wed, 12 Nov 2014 12:33:17 +0000 (07:33 -0500)]
Use just one database connection in the "tablespace" test.
On Windows, DROP TABLESPACE has a race condition when run concurrently
with other processes having opened files in the tablespace. This led to
a rare failure on buildfarm member frogmouth. Back-patch to 9.4, where
the reconnection was introduced.
Tom Lane [Tue, 11 Nov 2014 22:00:18 +0000 (17:00 -0500)]
Fix dependency searching for case where column is visited before table.
When the recursive search in dependency.c visits a column and then later
visits the whole table containing the column, it needs to propagate the
drop-context flags for the table to the existing target-object entry for
the column. Otherwise we might refuse the DROP (if not CASCADE) on the
incorrect grounds that there was no automatic drop pathway to the column.
Remarkably, this has not been reported before, though it's possible at
least when an extension creates both a datatype and a table using that
datatype.
Rather than just marking the column as allowed to be dropped, it might
seem good to skip the DROP COLUMN step altogether, since the later DROP
of the table will surely get the job done. The problem with that is that
the datatype would then be dropped before the table (since the whole
situation occurred because we visited the datatype, and then recursed to
the dependent column, before visiting the table). That seems pretty risky,
and the case is rare enough that it doesn't seem worth expending a lot of
effort or risk to make the drops happen in a safe order. So we just play
dumb and delete the column separately according to the existing drop
ordering rules.
Per report from Petr Jelinek, though this is different from his proposed
patch.
Back-patch to 9.1, where extensions were introduced. There's currently
no evidence that such cases can arise before 9.1, and in any case we would
also need to back-patch cb5c2ba2d82688d29b5902d86b993a54355cad4d to 9.0
if we wanted to back-patch this.
Tom Lane [Mon, 10 Nov 2014 20:21:14 +0000 (15:21 -0500)]
Ensure that RowExprs and whole-row Vars produce the expected column names.
At one time it wasn't terribly important what column names were associated
with the fields of a composite Datum, but since the introduction of
operations like row_to_json(), it's important that looking up the rowtype
ID embedded in the Datum returns the column names that users would expect.
That did not work terribly well before this patch: you could get the column
names of the underlying table, or column aliases from any level of the
query, depending on minor details of the plan tree. You could even get
totally empty field names, which is disastrous for cases like row_to_json().
To fix this for whole-row Vars, look to the RTE referenced by the Var, and
make sure its column aliases are applied to the rowtype associated with
the result Datums. This is a tad scary because we might have to return
a transient RECORD type even though the Var is declared as having some
named rowtype. In principle it should be all right because the record
type will still be physically compatible with the named rowtype; but
I had to weaken one Assert in ExecEvalConvertRowtype, and there might be
third-party code containing similar assumptions.
Similarly, RowExprs have to be willing to override the column names coming
from a named composite result type and produce a RECORD when the column
aliases visible at the site of the RowExpr differ from the underlying
table's column names.
In passing, revert the decision made in commit 398f70ec070fe601 to add
an alias-list argument to ExecTypeFromExprList: better to provide that
functionality in a separate function. This also reverts most of the code
changes in d68581483564ec0f, which we don't need because we're no longer
depending on the tupdesc found in the child plan node's result slot to be
blessed.
Back-patch to 9.4, but not earlier, since this solution changes the results
in some cases that users might not have realized were buggy. We'll apply a
more restricted form of this patch in older branches.
pg_basebackup: Adjust tests for long file name issues
Work around accidental test failures because the working directory path
is too long by creating a temporary directory in the (hopefully shorter)
system location, symlinking that to the working directory, and creating
the tablespaces using the shorter path.
Tom Lane [Fri, 7 Nov 2014 01:52:40 +0000 (20:52 -0500)]
Cope with more than 64K phrases in a thesaurus dictionary.
dict_thesaurus stored phrase IDs in uint16 fields, so it would get confused
and even crash if there were more than 64K entries in the configuration
file. It turns out to be basically free to widen the phrase IDs to uint32,
so let's just do so.
This was complained of some time ago by David Boutin (in bug #7793);
he later submitted an informal patch but it was never acted on.
We now have another complaint (bug #11901 from Luc Ouellette) so it's
time to make something happen.
This is basically Boutin's patch, but for future-proofing I also added a
defense against too many words per phrase. Note that we don't need any
explicit defense against overflow of the uint32 counters, since before that
happens we'd hit array allocation sizes that repalloc rejects.
Back-patch to all supported branches because of the crash risk.
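A standalone illustration of why 16-bit IDs break down past 64K entries (plain C unsigned wraparound, unrelated to the dictionary code itself):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t id16 = 65535;
        uint32_t id32 = 65535;

        id16++;                 /* wraps to 0: two phrases would share an ID */
        id32++;                 /* 65536: still unique */
        printf("%u %u\n", (unsigned) id16, (unsigned) id32);
        return 0;
    }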
Tom Lane [Thu, 6 Nov 2014 16:41:06 +0000 (11:41 -0500)]
Fix normalization of numeric values in JSONB GIN indexes.
The default JSONB GIN opclass (jsonb_ops) converts numeric data values
to strings for storage in the index. It must ensure that numeric values
that would compare equal (such as 12 and 12.00) produce identical strings,
else index searches would have behavior different from regular JSONB
comparisons. Unfortunately the function charged with doing this was
completely wrong: it could reduce distinct numeric values to the same
string, or reduce equivalent numeric values to different strings. The
former type of error would only lead to search inefficiency, but the
latter type of error would cause index entries that should be found by
a search to not be found.
Repairing this bug therefore means that it will be necessary for 9.4 beta
testers to reindex GIN jsonb_ops indexes, if they care about getting
correct results from index searches involving numeric data values within
the comparison JSONB object.
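A toy sketch of the normalization requirement (not PostgreSQL's numeric code): values that compare equal, such as 12 and 12.00, must map to the identical key string, or an index search can miss entries.

    #include <stdio.h>
    #include <string.h>

    /* Strip trailing fractional zeros (and a dangling '.') from a decimal string. */
    static void normalize(char *s)
    {
        size_t len;

        if (strchr(s, '.') == NULL)
            return;
        len = strlen(s);
        while (len > 0 && s[len - 1] == '0')
            s[--len] = '\0';
        if (len > 0 && s[len - 1] == '.')
            s[--len] = '\0';
    }

    int main(void)
    {
        char a[16] = "12.00";
        char b[16] = "12";

        normalize(a);
        normalize(b);
        printf("%s %s -> %s\n", a, b,
               strcmp(a, b) == 0 ? "same index key" : "different index keys");
        return 0;
    }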
Fujii Masao [Thu, 6 Nov 2014 12:24:40 +0000 (21:24 +0900)]
Prevent the unnecessary creation of .ready file for the timeline history file.
Previously a .ready file was created for the timeline history file at the end
of an archive recovery even when WAL archiving was not enabled.
This creation is unnecessary and causes the .ready file to remain forever.
This commit changes archive recovery so that it creates the .ready file for
the timeline history file only when WAL archiving is enabled.
Tom Lane [Wed, 5 Nov 2014 16:34:13 +0000 (11:34 -0500)]
Fix volatility markings of some contrib I/O functions.
In general, datatype I/O functions are supposed to be immutable or at
worst stable. Some contrib I/O functions were, through oversight, not
marked with any volatility property at all, which made them VOLATILE.
Since (most of) these functions actually behave immutably, the erroneous
marking isn't terribly harmful; but it can be user-visible in certain
circumstances, as per a recent bug report from Joe Van Dyk in which a
cast to text was disallowed in an expression index definition.
To fix, just adjust the declarations in the extension SQL scripts. If we
were being very fussy about this, we'd bump the extension version numbers,
but that seems like more trouble (for both developers and users) than the
problem is worth.
A fly in the ointment is that chkpass_in actually is volatile, because
of its use of random() to generate a fresh salt when presented with a
not-yet-encrypted password. This is bad because of the general assumption
that I/O functions aren't volatile: the consequence is that records or
arrays containing chkpass elements may have input behavior a bit different
from a bare chkpass column. But there seems no way to fix this without
breaking existing usage patterns for chkpass, and the consequences of the
inconsistency don't seem bad enough to justify that. So for the moment,
just document it in a comment.
Since we're not bumping version numbers, there seems no harm in
back-patching these fixes; at least future installations will get the
functions marked correctly.
Tom Lane [Tue, 4 Nov 2014 18:24:10 +0000 (13:24 -0500)]
Drop no-longer-needed buffers during ALTER DATABASE SET TABLESPACE.
The previous coding assumed that we could just let buffers for the
database's old tablespace age out of the buffer arena naturally.
The folly of that is exposed by bug #11867 from Marc Munro: the user could
later move the database back to its original tablespace, after which any
still-surviving buffers would match lookups again and appear to contain
valid data. But they'd be missing any changes applied while the database
was in the new tablespace.
This has been broken since ALTER SET TABLESPACE was introduced, so
back-patch to all supported branches.
Noah Misch [Mon, 3 Nov 2014 02:43:25 +0000 (21:43 -0500)]
Make ECPG test programs depend on "ecpg$(X)", not "ecpg".
Cygwin builds require this of dependencies pertaining to pattern rules.
On Cygwin, stat("foo") in the absence of a file with that exact name can
locate foo.exe. While GNU make uses stat() for dependencies of ordinary
rules, it uses readdir() to assess dependencies of pattern rules.
Therefore, a pattern rule dependency should match any underlying file
name exactly. Back-patch to 9.4, where the dependency was introduced.
Tom Lane [Thu, 30 Oct 2014 17:03:25 +0000 (13:03 -0400)]
Test IsInTransactionChain, not IsTransactionBlock, in vac_update_relstats.
As noted by Noah Misch, my initial cut at fixing bug #11638 didn't cover
all cases where ANALYZE might be invoked in an unsafe context. We need to
test the result of IsInTransactionChain not IsTransactionBlock; which is
notationally a pain because IsInTransactionChain requires an isTopLevel
flag, which would have to be passed down through several levels of callers.
I chose to pass in_outer_xact (ie, the result of IsInTransactionChain)
rather than isTopLevel per se, as that seemed marginally more apropos
for the intermediate functions to know about.
Robert Haas [Thu, 30 Oct 2014 15:35:55 +0000 (11:35 -0400)]
"Pin", rather than "keep", dynamic shared memory mappings and segments.
Nobody seemed concerned about this naming when it originally went in,
but there's a pending patch that implements the opposite of
dsm_keep_mapping, and the term "unkeep" was judged unpalatable.
"unpin" has existing precedent in the PostgreSQL code base, and the
English language, so use this terminology instead.
Tom Lane [Wed, 29 Oct 2014 22:12:04 +0000 (18:12 -0400)]
Avoid corrupting tables when ANALYZE inside a transaction is rolled back.
VACUUM and ANALYZE update the target table's pg_class row in-place, that is
nontransactionally. This is OK, more or less, for the statistical columns,
which are mostly nontransactional anyhow. It's not so OK for the DDL hint
flags (relhasindex etc), which might get changed in response to
transactional changes that could still be rolled back. This isn't a
problem for VACUUM, since it can't be run inside a transaction block nor
in parallel with DDL on the table. However, we allow ANALYZE inside a
transaction block, so if the transaction had earlier removed the last
index, rule, or trigger from the table, and then we roll back the
transaction after ANALYZE, the table would be left in a corrupted state
with the hint flags not set though they should be.
To fix, suppress the hint-flag updates if we are InTransactionBlock().
This is safe enough because it's always OK to postpone hint maintenance
some more; the worst-case consequence is a few extra searches of pg_index
et al. There was discussion of instead using a transactional update,
but that would change the behavior in ways that are not all desirable:
in most scenarios we're better off keeping ANALYZE's statistical values
even if the ANALYZE itself rolls back. In any case we probably don't want
to change this behavior in back branches.
Per bug #11638 from Casey Shobe. This has been broken for a good long
time, so back-patch to all supported branches.
Tom Lane and Michael Paquier, initial diagnosis by Andres Freund
If PQreset() was called repeatedly and the connection could not be
re-established, the error messages from the failed connection attempts
kept accumulating in the error string.
Fixes bug #11455 reported by Caleb Epstein. Backpatch to all supported
versions.
Tom Lane [Tue, 28 Oct 2014 22:36:02 +0000 (18:36 -0400)]
Remove obsolete commentary.
Since we got rid of non-MVCC catalog scans, the fourth reason given for
using a non-transactional update in index_update_stats() is obsolete.
The other three are still good, so we're not going to change the code,
but fix the comment.