Tom Lane [Tue, 25 Aug 2015 17:09:48 +0000 (13:09 -0400)]
Limit the verbosity of memory context statistics dumps.
We had a report from Stefan Kaltenbrunner of a case in which postmaster
log files overran available disk space because multiple backends spewed
enormous context stats dumps upon hitting an out-of-memory condition.
Given the lack of similar reports, this isn't a common problem, but it
still seems worth doing something about. However, we don't want to just
blindly truncate the output, because that might prevent diagnosis of OOM
problems. What seems like a workable compromise is to limit the dump to
100 child contexts per parent, and summarize the space used within any
additional child contexts. That should help because practical cases where
the dump gets long will typically be huge numbers of siblings under the
same parent context; while the additional debugging value from seeing
details about individual siblings beyond 100 will not be large, we hope.
Anyway it doesn't take much code or memory space to do this, so let's try
it like this and see how things go.
Since the summarization mechanism requires passing totals back up anyway,
I took the opportunity to add a "grand total" line to the end of the
printout.
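For illustration only, a minimal sketch of the capping scheme described above, using hypothetical structure and function names rather than the actual MemoryContext code (grandchildren of skipped contexts are ignored in this simplification):

    #include <stdio.h>
    #include <stddef.h>

    #define MAX_CHILDREN_PER_PARENT 100    /* cap on children printed per parent */

    typedef struct ContextNode
    {
        const char *name;
        size_t      self_bytes;            /* space used by this context alone */
        struct ContextNode *first_child;
        struct ContextNode *next_sibling;
    } ContextNode;

    static void
    print_context_stats(const ContextNode *ctx, int level, size_t *grand_total)
    {
        const ContextNode *child;
        int         printed = 0;
        int         skipped = 0;
        size_t      skipped_bytes = 0;

        printf("%*s%s: %zu bytes\n", level * 2, "", ctx->name, ctx->self_bytes);
        *grand_total += ctx->self_bytes;

        for (child = ctx->first_child; child; child = child->next_sibling)
        {
            if (printed < MAX_CHILDREN_PER_PARENT)
            {
                print_context_stats(child, level + 1, grand_total);
                printed++;
            }
            else
            {
                /* summarize the remainder, but still feed the grand total */
                skipped++;
                skipped_bytes += child->self_bytes;
                *grand_total += child->self_bytes;
            }
        }

        if (skipped > 0)
            printf("%*s%d more child contexts containing %zu total bytes\n",
                   (level + 1) * 2, "", skipped, skipped_bytes);
    }

The caller would print the accumulated grand total after the top-level call.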
Tom Lane [Tue, 25 Aug 2015 15:43:37 +0000 (11:43 -0400)]
Fix potential platform dependence in gist regression test.
The results of the KNN-search test cases were indeterminate, as they asked
the system to sort pairs of points that are exactly equidistant from the
query reference point. It's a bit surprising that we've seen no
platform-specific failures from this in the buildfarm. Perhaps IEEE-float
math is well enough standardized that no such failures will ever occur on
supported platforms ... but since this entire regression test has yet to be
shipped in any non-alpha release, that seems like an unduly optimistic
assumption. Tweak the queries so that the correct output is uniquely
defined.
(The other queries in this test are also underdetermined; but it looks like
they are regurgitating index rows in insertion order, so for the moment
assume that that behavior is stable enough.)
Per Greg Stark's experiments with VAX. Back-patch to 9.5 where this test
script was introduced.
Tom Lane [Sun, 23 Aug 2015 19:15:47 +0000 (15:15 -0400)]
Reduce number of bytes examined by convert_one_string_to_scalar().
Previously, convert_one_string_to_scalar() would examine up to 20 bytes of
the input string, producing a scalar conversion with theoretical precision
far greater than is of any possible use considering the other limitations
on the accuracy of the resulting selectivity estimate. (I think this
choice might pre-date the caller-level logic that strips any common prefix
of the strings; before that, there could have been value in scanning the
strings far enough to use all the precision available in a double.)
Aside from wasting cycles to little purpose, this choice meant that the
"denom" variable could grow to as much as 256^21 = 3.74e50, which could
overflow in some non-IEEE float arithmetics. While we don't really support
any machines with non-IEEE arithmetic anymore, this still seems like quite
an unnecessary platform dependency. Limit the scan to 12 bytes instead,
thus limiting "denom" to 256^13 = 2.03e31, a value more likely to be
computable everywhere.
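A minimal sketch of the bounded conversion, under the simplifying assumption that raw byte values are used directly as base-256 digits (the real code works with the observed character range rather than raw bytes):

    #include <stddef.h>

    #define MAX_CONVERT_BYTES 12   /* cap chosen in this commit */

    static double
    string_to_scalar_sketch(const unsigned char *str, size_t len)
    {
        double      num = 0.0;
        double      denom = 256.0;
        size_t      i;

        if (len > MAX_CONVERT_BYTES)
            len = MAX_CONVERT_BYTES;

        for (i = 0; i < len; i++)
        {
            num += ((double) str[i]) / denom;
            denom *= 256.0;        /* after 12 bytes, denom tops out at 256^13 */
        }
        return num;                /* a value in [0, 1) */
    }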
Per testing by Greg Stark, which showed overflow failures in our standard
regression tests on VAX.
Tom Lane [Sun, 23 Aug 2015 17:02:13 +0000 (13:02 -0400)]
Avoid use of float arithmetic in bipartite_match.c.
Since the distances used in this algorithm are small integers (not more
than the size of the U set, in fact), there is no good reason to use float
arithmetic for them. Use short ints instead: they're smaller, faster, and
require no special portability assumptions.
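As a sketch of the data-representation change, with hypothetical field names rather than the actual bipartite_match.c structures:

    #include <limits.h>

    /* distances are bounded by the size of U, so a short is plenty */
    typedef struct BipartiteMatchSketch
    {
        int     u_size;            /* number of vertices in the U set */
        short  *distance;          /* per-vertex BFS distance */
    } BipartiteMatchSketch;

    /* stands in for "infinite" distance; no IEEE float infinity required */
    #define BM_UNREACHABLE  SHRT_MAX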
Per testing by Greg Stark, which disclosed that the code got into an
infinite loop on VAX for lack of IEEE-style float infinities. We don't
really care all that much whether Postgres can run on a VAX anymore,
but there seems sufficient reason to change this code anyway.
In passing, make a few other small adjustments to make the code match
usual Postgres coding style a bit better.
Tom Lane [Sat, 22 Aug 2015 00:32:11 +0000 (20:32 -0400)]
Avoid O(N^2) behavior when enlarging SPI tuple table in spi_printtup().
For no obvious reason, spi_printtup() was coded to enlarge the tuple
pointer table by just 256 slots at a time, rather than doubling the size at
each reallocation, as is our usual habit. For very large SPI results, this
makes for O(N^2) time spent in repalloc(), which of course soon comes to
dominate the runtime. Use the standard doubling approach instead.
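The general pattern, as a self-contained sketch rather than the actual spi.c code: doubling the array on overflow keeps total reallocation cost linear in the number of appended items, whereas growing by a fixed 256 slots makes it quadratic.

    #include <stdlib.h>

    typedef struct PtrTable
    {
        void  **vals;
        size_t  used;
        size_t  alloced;
    } PtrTable;

    static int
    ptrtable_append(PtrTable *t, void *item)
    {
        if (t->used >= t->alloced)
        {
            size_t  newalloced = (t->alloced > 0) ? t->alloced * 2 : 8;
            void  **newvals = realloc(t->vals, newalloced * sizeof(void *));

            if (newvals == NULL)
                return -1;                  /* out of memory */
            t->vals = newvals;
            t->alloced = newalloced;        /* geometric growth */
        }
        t->vals[t->used++] = item;
        return 0;
    }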
This is a longstanding performance bug, so back-patch to all active
branches.
Tom Lane [Sat, 22 Aug 2015 00:17:19 +0000 (20:17 -0400)]
Detect mismatched CONTINUE and EXIT statements at plpgsql compile time.
With a bit of tweaking of the compile namestack data structure, we can
verify at compile time whether a CONTINUE or EXIT is legal. This is
surely better than leaving it to runtime, both because earlier is better
and because we can issue a proper error pointer. Also, we can get rid
of the ad-hoc old way of detecting the problem, which only took care of
CONTINUE not EXIT.
Stephen Frost [Fri, 21 Aug 2015 19:51:24 +0000 (15:51 -0400)]
Clean up roles from roleattributes test
Having the roles remain after the test ends up causing repeated 'make
installcheck' runs to fail and may also be risky from a security
perspective, so remove them at the end of the test.
Alvaro Herrera [Fri, 21 Aug 2015 17:36:54 +0000 (14:36 -0300)]
Do not allow *timestamp to be passed as NULL
The code had bugs that would cause crashes if NULL was passed as that
argument (originally intended to mean not to bother returning its
value), and after inspection it turns out that nothing seems interested
in the case that *ts is NULL anyway. Therefore, remove the partial
checks intended to support that case.
Author: Michael Paquier, though I didn't include a proposed Assert.
Tom Lane [Fri, 21 Aug 2015 16:21:37 +0000 (12:21 -0400)]
Fix plpython crash when returning string representation of a RECORD result.
PLyString_ToComposite() blithely overwrote proc->result.out.d, even though
for a composite result type the other union variant proc->result.out.r is
the one that should be valid. This could result in a crash if out.r had
in fact been filled in (proc->result.is_rowtype == 1) and then somebody
later attempted to use that data; as per bug #13579 from Paweł Michalak.
Just to add insult to injury, it didn't work for RECORD results anyway,
because record_in() would refuse the case.
Fix by doing the I/O function lookup in a local PLyTypeInfo variable,
as we were doing already in PLyObject_ToComposite(). This is not a great
technique because any fn_extra data allocated by the input function will
be leaked permanently (thanks to using TopMemoryContext as fn_mcxt).
But that's a pre-existing issue that is much less serious than a crash,
so leave it to be fixed separately.
This bug would be a potential security issue, except that plpython is
only available to superusers and the crash requires coding the function
in a way that didn't work before today's patches.
Add regression test cases covering all the supported methods of converting
composite results.
Back-patch to 9.1 where the faulty coding was introduced.
Tom Lane [Fri, 21 Aug 2015 15:19:33 +0000 (11:19 -0400)]
Allow record_in() and record_recv() to work for transient record types.
If we have the typmod that identifies a registered record type, there's no
reason that record_in() should refuse to perform input conversion for it.
Now, in direct SQL usage, record_in() will always be passed typmod = -1
with type OID RECORDOID, because no typmodin exists for type RECORD, so the
case can't arise. However, some InputFunctionCall users such as PLs may be
able to supply the right typmod, so we should allow this to support them.
Note: the previous coding and comment here predate commit 59c016aa9f490b53.
There has been no case since 8.1 in which the passed type OID wouldn't be
valid; and if it weren't, this error message wouldn't be apropos anyway.
Better to let lookup_rowtype_tupdesc complain about it.
Back-patch to 9.1, as this is necessary for my upcoming plpython fix.
I'm committing it separately just to make it a bit more visible in the
commit history.
Stephen Frost [Fri, 21 Aug 2015 12:22:22 +0000 (08:22 -0400)]
Rename 'cmd' to 'cmd_name' in CreatePolicyStmt
To avoid confusion, rename CreatePolicyStmt's 'cmd' to 'cmd_name',
parse_policy_command's 'cmd' to 'polcmd', and AlterPolicy's 'cmd_datum'
to 'polcmd_datum', per discussion with Noah and as a follow-up to his
correction of copynodes/equalnodes handling of the CreatePolicyStmt
'cmd' field.
Back-patch to 9.5 where the CreatePolicyStmt was introduced, as we
are still only in alpha.
Stephen Frost [Fri, 21 Aug 2015 12:22:22 +0000 (08:22 -0400)]
In AlterRole, make bypassrls an int
When reworking bypassrls in AlterRole to operate the same way the other
attribute handling is done, I missed that the variable was incorrectly a
bool rather than an int. This meant that on platforms with an unsigned
char, we could end up with incorrect behavior during ALTER ROLE.
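An illustration of the pitfall with a hypothetical helper, not the AlterRole code itself: a tri-state option value (-1 = not specified, 0 = false, 1 = true) must be an int. On platforms where bool is an unsigned char, assigning -1 to a bool stores 255, so a "was it specified?" test against a negative value no longer behaves as intended.

    #include <stdbool.h>

    static void
    apply_bypassrls(int bypassrls_setting, bool *role_bypassrls)
    {
        if (bypassrls_setting >= 0)
            *role_bypassrls = (bypassrls_setting != 0);
        /* else: option not specified in ALTER ROLE, keep the existing value */
    }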
Pointed out by Andres, thanks to tests he did changing our bool to the one
from stdbool.h, which showed this and a number of other issues.
Add regression tests to test CREATE/ALTER role for the various role
attributes. Arrange to leave roles behind for testing pg_dumpall, but
none which have the LOGIN attribute.
Tom Lane [Thu, 20 Aug 2015 16:28:15 +0000 (12:28 -0400)]
Remove xpath namespace-handling change from 9.5 release notes.
Although commit 79af9a1d2 was initially applied to HEAD only, we later
back-patched the change into all branches (commits 6bbf75192 et al).
So it's not a new behavior in 9.5 and should not be release-noted here.
Kevin Grittner [Wed, 19 Aug 2015 13:20:55 +0000 (08:20 -0500)]
Fix bug in calculations of hash join buckets.
Commit 8cce08f168481c5fc5be4e7e29b968e314f1b41e used a left-shift
on a literal of 1 that could (in large allocations) be shifted by
31 or more bits. This was assigned to a local variable that was
already declared to be a long to protect against overruns of int,
but the literal in this shift needs to be declared long to allow it
to work correctly in some compilers.
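As an illustration of the fix, with hypothetical names and assuming a 64-bit long (not the committed nodeHash.c code):

    /*
     * "1" is an int, so "1 << log2_nbuckets" overflows once the shift
     * reaches 31 bits, before the result is ever widened to long.
     * Writing the literal as 1L performs the shift in the wider type.
     *
     *     long nbuckets = 1  << log2_nbuckets;    -- wrong: int-width shift
     *     long nbuckets = 1L << log2_nbuckets;    -- right: long-width shift
     */
    static long
    buckets_for_log2(int log2_nbuckets)
    {
        return 1L << log2_nbuckets;
    }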
Backpatch to 9.5, where the bug was introduced.
Report and patch by KaiGai Kohei, slightly modified based on
discussion.
Tom Lane [Tue, 18 Aug 2015 23:22:37 +0000 (19:22 -0400)]
Fix a few bogus statement type names in plpgsql error messages.
plpgsql's error location context messages ("PL/pgSQL function fn-name line
line-no at stmt-type") would misreport a CONTINUE statement as being an
EXIT, and misreport a MOVE statement as being a FETCH. These are clear
bugs that have been there a long time, so back-patch to all supported
branches.
In addition, in 9.5 and HEAD, change the description of EXECUTE from
"EXECUTE statement" to just plain EXECUTE; there seems no good reason why
this statement type should be described differently from others that have
a well-defined head keyword. And distinguish GET STACKED DIAGNOSTICS from
plain GET DIAGNOSTICS. These are a bit more of a judgment call, and also
affect existing regression-test outputs, so I did not back-patch into
stable branches.
Tom Lane [Mon, 17 Aug 2015 23:39:35 +0000 (19:39 -0400)]
Fix performance bug from conflict between two previous improvements.
My expanded-objects patch (commit 1dc5ebc9077ab742) included code to make
plpgsql pass expanded-object variables as R/W pointers to certain functions
that are trusted for modifying such variables in-place. However, that
optimization got broken by commit 6c82d8d1fdb1f126, which arranged to share
a single ParamListInfo across most expressions evaluated by a plpgsql
function. We don't want a R/W pointer to be passed to other functions
just because we decided one function was safe! Fortunately, the breakage
was in the other direction, of never passing a R/W pointer at all, because
we'd always have pre-initialized the shared array slot with a R/O pointer.
So it was still functionally correct, but we were back to O(N^2)
performance for repeated use of "a := a || x". To fix, force an unshared
param array to be used when the R/W param optimization is active.
Commit 6c82d8d1fdb1f126 is in HEAD only, so no need for a back-patch.
Andres Freund [Mon, 17 Aug 2015 09:15:46 +0000 (11:15 +0200)]
Improve configure test for the sse4.2 crc instruction.
With optimizations enabled, at least one compiler (clang 3.7) optimized
away the crc intrinsics, knowing that the result went unused and the calls
have no side effects. That can trigger errors in code generation when the
intrinsic is actually used, as we chose to use the intrinsics without any
additional compiler flag. Return the computed value to prevent that.
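As a hedged sketch, the probe this implies has roughly the following shape (close to, but not necessarily identical to, what the configure test generates); it only compiles where the SSE 4.2 intrinsics are available:

    #include <nmmintrin.h>

    int
    main(void)
    {
        unsigned int crc = 0;

        crc = _mm_crc32_u8(crc, 0);
        crc = _mm_crc32_u32(crc, 0);
        /* return the value so the calls have an observable effect */
        return (int) crc;
    }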
With some more pedantic warning flags (-Wold-style-definition) the
configure test also failed to recognize the existence of the _mm_crc32_u*
intrinsics, due to an unrelated warning in the test combined with the test
turning on -Werror; -Werror isn't actually needed here.
Discussion: 20150814092039.GH4955@awork2.anarazel.de
Backpatch: 9.5, where the use of crc intrinsics was integrated.
Tom Lane [Sat, 15 Aug 2015 18:31:04 +0000 (14:31 -0400)]
Add docs about postgres_fdw's setting of search_path and other GUCs.
This behavior wasn't documented, but it should be because it's user-visible
in triggers and other functions executed on the remote server.
Per question from Adam Fuchs.
Tom Lane [Sat, 15 Aug 2015 17:30:16 +0000 (13:30 -0400)]
Improve documentation about MVCC-unsafe utility commands.
The table-rewriting forms of ALTER TABLE are MVCC-unsafe, in much the same
way as TRUNCATE, because they replace all rows of the table with newly-made
rows with a new xmin. (Ideally, concurrent transactions with old snapshots
would continue to see the old table contents, but the data is not there
anymore --- and if it were there, it would be inconsistent with the table's
updated rowtype, so there would be serious implementation problems to fix.)
This was nowhere documented though, and the problem was only documented for
TRUNCATE in a note in the TRUNCATE reference page. Create a new "Caveats"
section in the MVCC chapter that can be home to this and other limitations
on serializable consistency.
In passing, fix a mistaken statement that VACUUM and CLUSTER would reclaim
space occupied by a dropped column. They don't reconstruct existing tuples
so they couldn't do that.
Tom Lane [Sat, 15 Aug 2015 16:00:36 +0000 (12:00 -0400)]
Repair unsafe use of shared typecast-lookup table in plpgsql DO blocks.
DO blocks use private simple_eval_estates to avoid intra-transaction memory
leakage, cf commit c7b849a89. I had forgotten about that while writing
commit 0fc94a5ba, but it means that expression execution trees created
within a DO block disappear immediately on exiting the DO block, and hence
can't safely be linked into plpgsql's session-wide cast hash table.
To fix, give a DO block a private cast hash table to go with its private
simple_eval_estate. This is less efficient than one could wish, since
DO blocks can no longer share any cast lookup work with other plpgsql
execution, but it shouldn't be too bad; in any case it's no worse than
what happened in DO blocks before commit 0fc94a5ba.
Per bug #13571 from Feike Steenbergen. Preliminary analysis by
Oleksandr Shulgin.
Andres Freund [Sat, 15 Aug 2015 15:25:00 +0000 (17:25 +0200)]
Don't use function definitions looking like old-style ones.
This fixes a bunch of somewhat pedantic warnings with new
compilers. Since by far the majority of other functions definitions use
the (void) style it just seems to be consistent to do so as well in the
remaining few places.
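For illustration, with hypothetical function names: an empty parameter list makes a definition "old-style" and draws -Wold-style-definition warnings, while (void) gives it a proper prototype.

    int counter = 0;

    void
    bump_old()            /* old-style definition: warns with -Wold-style-definition */
    {
        counter++;
    }

    void
    bump_new(void)        /* prototyped definition: no warning */
    {
        counter++;
    }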
Andres Freund [Sat, 15 Aug 2015 15:02:47 +0000 (17:02 +0200)]
Correct type of waitMode variable in ExecInsertIndexTuples().
It was a bool, even though it should be CEOUC_WAIT_MODE. That's unlikely
to have a negative effect with the current definition of bool (char),
but it's definitely wrong.
Discussion: 20150812084351.GD8470@awork2.anarazel.de
Backpatch: 9.5, where ON CONFLICT was merged
Andres Freund [Wed, 12 Aug 2015 14:49:36 +0000 (16:49 +0200)]
vacuumdb: Don't assign negative values to a boolean.
Since a17923204736 (vacuumdb: enable parallel mode) -1 has been assigned
to a boolean. That can, justifiably, trigger compiler warnings. There's
also no need for ternary logic: the result was only ever set to 0 or -1,
so don't use it.
Andres Freund [Wed, 12 Aug 2015 14:02:20 +0000 (16:02 +0200)]
Don't use 'bool' as a struct member name in help_config.c.
Doing so doesn't work if bool is a macro rather than a typedef.
Although c.h spends some effort to support configurations where bool is
a preexisting macro, help_config.c has existed this way since
2003 (b700a6), and there have not been any reports of
problems. Backpatch anyway since this is as riskless as it gets.
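An illustration of the hazard, not the help_config.c code itself:

    /* what a preexisting platform macro might do */
    #define bool char

    struct option_info
    {
        char   *name;
        /*
         * A member literally named "bool" would expand to "char char" here
         * and fail to compile, so use a neutral name instead.
         */
        char    is_boolean;
    };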
Robert Haas [Sat, 15 Aug 2015 02:39:09 +0000 (22:39 -0400)]
Remove bogus step from test_decoding isolation tests.
Commit 43b4a16817c8b5568cec72f3b0e1c8209f5ac7f7 made the isolation
tester reject duplicate step names, and it turns out that the
test_decoding module's concurrent_ddl_dml isolation test has a
duplicate name. I think the second definition isn't actually getting
used, so just remove it.
Noah Misch [Sat, 15 Aug 2015 00:23:13 +0000 (20:23 -0400)]
Encoding PG_UHC is code page 949.
This fixes presentation of non-ASCII messages to the Windows event log
and console in rare cases involving Korean locale. Processes like the
postmaster and checkpointer, but not processes attached to databases,
were affected. Back-patch to 9.4, where MessageEncoding was introduced.
The problem exists in all supported versions, but this change has no
effect in the absence of the code recognizing PG_UHC MessageEncoding.
Noticed while investigating bug #13427 from Dmitri Bourlatchkov.
Noah Misch [Sat, 15 Aug 2015 00:23:09 +0000 (20:23 -0400)]
Restore old pgwin32_message_to_UTF16() behavior outside transactions.
Commit 49c817eab78c6f0ce8c3bf46766b73d6cf3190b7 replaced with a hard
error the dubious pg_do_encoding_conversion() behavior when outside a
transaction. Reintroduce the historic soft failure locally within
pgwin32_message_to_UTF16(). This fixes errors when writing messages in
less-common encodings to the Windows event log or console. Back-patch
to 9.4, where the aforementioned commit first appeared.
Simon Riggs [Fri, 14 Aug 2015 13:19:28 +0000 (14:19 +0100)]
Reduce lock levels for ALTER TABLE SET autovacuum storage options
Reduce lock levels down to ShareUpdateExclusiveLock for all autovacuum-related
relation options when setting them using ALTER TABLE.
Add infrastructure to allow varying lock levels for relation options in later
patches. Setting multiple options together uses the highest lock level required
for any option. Works for both main and toast tables.
Fabrízio Mello, reviewed by Michael Paquier, mild edit and additional regression
tests from myself
The error message wording for AttributeError has changed in Python 3.5.
For the plpython_error test, add a new expected file. In the
plpython_subtransaction test, we didn't really care what the exception
is, only that it is something coming from Python. So use a generic
exception instead, which has a message that doesn't vary across
versions.
Alvaro Herrera [Thu, 13 Aug 2015 17:41:52 +0000 (14:41 -0300)]
Re-add BRIN isolation test
This time, instead of using a core isolation test, put it on its own
test module; this way it can require the pageinspect module to be
present before running.
The module's Makefile is loosely modeled after test_decoding's, so that
it's easy to add further tests for either pg_regress or isolationtester
later.
Tom Lane [Thu, 13 Aug 2015 17:25:01 +0000 (13:25 -0400)]
Improve regression test case to avoid depending on system catalog stats.
In commit 95f4e59c32866716 I added a regression test case that examined
the plan of a query on system catalogs. That isn't a terribly great idea
because the catalogs tend to change from version to version, or even
within a version if someone makes an unrelated regression-test change that
populates the catalogs a bit differently. Usually I try to make planner
test cases rely on test tables that have not changed since Berkeley days,
but I got sloppy in this case because the submitted crasher example queried
the catalogs and I didn't spend enough time on rewriting it. But it was a
problem waiting to happen, as I was rudely reminded when I tried to port
that patch into Salesforce's Postgres variant :-(. So spend a little more
effort and rewrite the query to not use any system catalogs. I verified
that this version still provokes the Assert if 95f4e59c32866716's code fix
is reverted.
I also removed the EXPLAIN output from the test, as it turns out that the
assertion occurs while considering a plan that isn't the one ultimately
selected anyway; so there's no value in risking any cross-platform
variation in that printout.
Alvaro Herrera [Thu, 13 Aug 2015 16:02:10 +0000 (13:02 -0300)]
Use materialize SRF mode in brin_page_items
This function was using the single-value-per-call mechanism, but the
code relied on a relcache entry that wasn't kept open across calls.
This manifested as weird errors in buildfarm during the short time that
the "brin-1" isolation test lived.
Run autoheader to add a few missing #defines to pg_config.h.in.
These are emitted by the new ax_pthread.m4 script version. They are not
used for anything in PostgreSQL, but let's keep the generated header file
up-to-date.
Tom Lane [Thu, 13 Aug 2015 01:18:45 +0000 (21:18 -0400)]
Undo mistaken tightening in join_is_legal().
One of the changes I made in commit 8703059c6b55c427 turns out not to have
been such a good idea: we still need the exception in join_is_legal() that
allows a join if both inputs already overlap the RHS of the special join
we're checking. Otherwise we can miss valid plans, and might indeed fail
to find a plan at all, as in recent report from Andreas Seltenreich.
That code was added way back in commit c17117649b9ae23d, but I failed to
include a regression test case then; my bad. Put it back with a better
explanation, and a test this time. The logic does end up a bit different
than before though: I now believe it's appropriate to make this check
first, thereby allowing such a case whether or not we'd consider the
previous SJ(s) to commute with this one. (Presumably, we already decided
they did; but it was confusing to have this consideration in the middle
of the code that was handling the other case.)
Back-patch to all active branches, like the previous patch.
Alvaro Herrera [Wed, 12 Aug 2015 17:20:38 +0000 (14:20 -0300)]
Close some holes in BRIN page assignment
In some corner cases, it is possible for the BRIN index relation to be
extended by brin_getinsertbuffer but the new page not be used
immediately for anything by its callers; when this happens, the page is
initialized and the FSM is updated (by brin_getinsertbuffer) with the
info about that page, but these actions are not WAL-logged. A later
index insert/update can use the page, but since the page is already
initialized, the initialization itself is not WAL-logged then either.
Replay of this sequence of events causes recovery to fail altogether.
There is a related corner case within brin_getinsertbuffer itself, in
which we extend the relation to put a new index tuple there, but later
find out that we cannot do so, and do not return the buffer; the page
obtained from extension is not even initialized. The resulting page is
lost forever.
To fix, shuffle the code so that initialization is not the
responsibility of brin_getinsertbuffer anymore, in normal cases;
instead, the initialization is done by its callers (brin_doinsert and
brin_doupdate) once they're certain that the page is going to be used.
When either those functions determine that the new page cannot be used,
before bailing out they initialize the page as an empty regular page,
enter it in FSM and WAL-log all this. This way, the page is usable for
future index insertions, and WAL replay no longer finds itself trying to
insert tuples into pages whose initialization never made it to the WAL. The
same strategy is used in brin_getinsertbuffer when it cannot return the
new page.
Additionally, add a new step to vacuuming so that all pages of the index
are scanned; whenever an uninitialized page is found, it is initialized
as empty and WAL-logged. This closes the hole that the relation is
extended but the system crashes before anything is WAL-logged about it.
We also take this opportunity to update the FSM, in case it has gotten
out of date.
Thanks to Heikki Linnakangas for finding the problem that kicked some
additional analysis of BRIN page assignment code.
Andres Freund [Wed, 12 Aug 2015 15:35:50 +0000 (17:35 +0200)]
Fix two off-by-one errors in bufmgr.c.
In 4b4b680c I passed a buffer index number (starting from 0) instead of
a proper Buffer id (which starts from 1 for shared buffers) in two
places.
This wasn't noticed so far as one of those locations isn't compiled at
all (PrintPinnedBufs) and the other one (InvalidBuffer) requires an
unlikely, but possible, set of circumstances to trigger a symptom.
To reduce the likelihood of such incidents a bit, also convert existing
open-coded mappings from buffer descriptors to buffer ids to use
BufferDescriptorGetBuffer().
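The mapping in question, sketched with simplified stand-in types rather than the real bufmgr definitions: descriptors are indexed from 0, while Buffer ids for shared buffers start at 1 (0 is InvalidBuffer), so the conversion is a "+ 1" best centralized in one macro.

    typedef int Buffer;
    #define InvalidBuffer 0

    typedef struct BufferDescSketch
    {
        int     buf_id;              /* 0-based index into the descriptor array */
    } BufferDescSketch;

    #define BufferDescriptorGetBufferSketch(bdesc)  ((Buffer) ((bdesc)->buf_id + 1))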
Author: Qingqing Zhou
Reported-By: Qingqing Zhou
Discussion: CAJjS0u2ai9ooUisKtkV8cuVUtEkMTsbK8c7juNAjv8K11zeCQg@mail.gmail.com
Backpatch: 9.5 where the private ref count infrastructure was introduced
Tom Lane [Wed, 12 Aug 2015 04:48:11 +0000 (00:48 -0400)]
Fix some possible low-memory failures in regexp compilation.
newnfa() failed to set the regex error state when malloc() fails.
Several places in regcomp.c failed to check for an error after calling
subre(). Each of these mistakes could lead to null-pointer-dereference
crashes in memory-starved backends.
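A generic illustration of the pattern being enforced (hypothetical names, not the regcomp.c code): on allocation failure, record the error in the compile state and let callers bail out before the null pointer can be dereferenced.

    #include <stdlib.h>

    struct compile_ctx
    {
        int     err;                /* nonzero once an error has been recorded */
    };

    static void *
    ctx_alloc(struct compile_ctx *ctx, size_t size)
    {
        void   *p = malloc(size);

        if (p == NULL)
            ctx->err = 1;           /* record the out-of-memory condition */
        return p;
    }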
Report and patch by Andreas Seltenreich. Back-patch to all branches.
Tom Lane [Wed, 12 Aug 2015 03:48:37 +0000 (23:48 -0400)]
Postpone extParam/allParam calculations until the very end of planning.
Until now we computed these Param ID sets at the end of subquery_planner,
but that approach depends on subquery_planner returning a concrete Plan
tree. We would like to switch over to returning one or more Paths for a
subquery, and in that representation the necessary details aren't fully
fleshed out (not to mention that we don't really want to do this work for
Paths that end up getting discarded). Hence, refactor so that we can
compute the param ID sets at the end of planning, just before
set_plan_references is run.
The main change necessary to make this work is that we need to capture
the set of outer-level Param IDs available to the current query level
before exiting subquery_planner, since the outer levels' plan_params lists
are transient. (That's not going to pose a problem for returning Paths,
since all the work involved in producing that data is part of expression
preprocessing, which will continue to happen before Paths are produced.)
On the plus side, this change gets rid of several existing kluges.
Eventually I'd like to get rid of SS_finalize_plan altogether in favor of
doing this work during set_plan_references, but that will require some
complex rejiggering because SS_finalize_plan needs to visit subplans and
initplans before the main plan. So leave that idea for another day.
Tom Lane [Tue, 11 Aug 2015 14:46:51 +0000 (10:46 -0400)]
Fix broken markup, and copy-edit a bit.
Fix docs build failure introduced by commit 6fcd88511f8e69e3.
I failed to resist the temptation to rearrange the description of
pg_create_physical_replication_slot(), too.
Andres Freund [Tue, 11 Aug 2015 10:34:31 +0000 (12:34 +0200)]
Allow pg_create_physical_replication_slot() to reserve WAL.
When creating a physical slot it's often useful to immediately reserve
the current WAL position instead of only doing so after the first feedback
message arrives. That e.g. allows slots to guarantee that all the WAL
for a base backup will be available afterwards.
Logical slots already have to reserve WAL during creation, so generalize
that logic into being usable for both physical and logical slots.
Catversion bump because of the new parameter.
Author: Gurjeet Singh
Reviewed-By: Andres Freund
Discussion: CABwTF4Wh_dBCzTU=49pFXR6coR4NW1ynb+vBqT+Po=7fuq5iCw@mail.gmail.com
Tom Lane [Tue, 11 Aug 2015 00:10:15 +0000 (20:10 -0400)]
Fix privilege dumping from servers too old to have that type of privilege.
pg_dump produced fairly silly GRANT/REVOKE commands when dumping types from
pre-9.2 servers, and when dumping functions or procedural languages from
pre-7.3 servers. Those server versions lack the typacl, proacl, and/or
lanacl columns respectively, and pg_dump substituted default values that
were in fact incorrect. We ended up revoking all the owner's own
privileges for the object while granting all privileges to PUBLIC.
Of course the owner would then have those privileges again via PUBLIC, so
long as she did not try to revoke PUBLIC's privileges; which may explain
the lack of field reports. Nonetheless this is pretty silly behavior.
The stakes were raised by my recent patch to make pg_dump dump shell types,
because 9.2 and up pg_dump would proceed to emit bogus GRANT/REVOKE
commands for a shell type if dumping from a pre-9.2 server; and the server
will not accept GRANT/REVOKE commands for a shell type. (Perhaps it
should, but that's a topic for another day.) So the resulting dump script
wouldn't load without errors.
The right thing to do is to act as though these objects have default
privileges (null ACL entries), which causes pg_dump to print no
GRANT/REVOKE commands at all for them. That fixes the silly results
and also dodges the problem with shell types.
In passing, modify getProcLangs() to be less creatively different about
how to handle missing columns when dumping from older server versions.
Every other data-acquisition function in pg_dump does that by substituting
appropriate default values in the version-specific SQL commands, and I see
no reason why this one should march to its own drummer. Its use of
"SELECT *" was likewise not conformant with anyplace else, not to mention
it's not considered good SQL style for production queries.
Back-patch to all supported versions. Although 9.0 and 9.1 pg_dump don't
have the issue with typacl, they are more likely than newer versions to be
used to dump from ancient servers, so we ought to fix the proacl/lanacl
issues all the way back.
Tom Lane [Mon, 10 Aug 2015 21:18:17 +0000 (17:18 -0400)]
Further mucking with PlaceHolderVar-related restrictions on join order.
Commit 85e5e222b1dd02f135a8c3bf387d0d6d88e669bd turns out not to have taken
care of all cases of the partially-evaluatable-PlaceHolderVar problem found
by Andreas Seltenreich's fuzz testing. I had set it up to check for risky
PHVs only in the event that we were making a star-schema-based exception to
the param_source_rels join ordering heuristic. However, it turns out that
the problem can occur even in joins that satisfy the param_source_rels
heuristic, in which case allow_star_schema_join() isn't consulted.
Refactor so that we check for risky PHVs whenever the proposed join has
any remaining parameterization.
Back-patch to 9.2, like the previous patch (except for the regression test
case, which only works back to 9.3 because it uses LATERAL).
Note that this discovery implies that problems of this sort could've
occurred in 9.2 and up even before the star-schema patch; though I've not
tried to prove that experimentally.
Andrew Dunstan [Mon, 10 Aug 2015 14:42:05 +0000 (10:42 -0400)]
Work around an apparent bug in the Msys DTK perl's regex engine.
Several versions of the perl that comes with the Msys DTK have been
found to have a bug that fails to recognize a ' before a multiline $ in
some circumstances. To work around the problem, use a character class
for the '. Another solution would have been to use \n instead of $, but
that would have changed the test semantics very slightly.
Tom Lane [Mon, 10 Aug 2015 14:22:37 +0000 (10:22 -0400)]
Temporarily(?) remove BRIN isolation test.
Commit 2834855cb added a not-very-carefully-thought-out isolation test
to check a BRIN index bug fix. The test depended on the availability
of the pageinspect contrib module, which meant it did not work in
several common testing scenarios such as "make check-world". It's not
clear whether we want a core test depending on a contrib module like
that, but in any case, failing to deal with the possibility that the
module isn't present in the installation-under-test is not acceptable.
Andres Freund [Mon, 10 Aug 2015 11:28:18 +0000 (13:28 +0200)]
Add confirmed_flush column to pg_replication_slots.
There's no reason not to expose both restart_lsn and confirmed_flush
since they have rather distinct meanings. The former is the oldest WAL
still required and valid for both physical and logical slots, whereas
the latter is the location up to which a logical slot's consumer has
confirmed receiving data. Most of the time a slot will require older
WAL (i.e. restart_lsn) than the confirmed
position (i.e. confirmed_flush_lsn).
Author: Marko Tiikkaja, editorialized by me
Discussion: 559D110B.1020109@joh.to
Andres Freund [Mon, 10 Aug 2015 11:28:18 +0000 (13:28 +0200)]
Fix copy & paste mistake in pg_get_replication_slots().
XLogRecPtr was compared with InvalidTransactionId instead of
InvalidXLogRecPtr. As both are defined to the same value this doesn't
cause any actual problems, but it's still wrong.
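For illustration, with simplified stand-in declarations rather than the real xlogdefs.h types:

    typedef unsigned long long XLogRecPtrSketch;     /* stand-in for XLogRecPtr */

    #define InvalidXLogRecPtrSketch   ((XLogRecPtrSketch) 0)

    static int
    slot_restart_lsn_is_valid(XLogRecPtrSketch restart_lsn)
    {
        /*
         * Compare an LSN against the LSN "invalid" constant, not against
         * InvalidTransactionId; both happen to be zero, which is why the
         * original mistake had no visible effect.
         */
        return restart_lsn != InvalidXLogRecPtrSketch;
    }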
Andres Freund [Mon, 10 Aug 2015 11:28:18 +0000 (13:28 +0200)]
Don't start to stream after pg_receivexlog --create-slot.
Immediately starting to stream after --create-slot is inconvenient in a
number of situations (e.g. when configuring a slot for use in
recovery.conf) and it's easy to just call pg_receivexlog twice in the
rest of the cases.
Author: Michael Paquier
Discussion: CAB7nPqQ9qEtuDiKY3OpNzHcz5iUA+DUX9FcN9K8GUkCZvG7+Ew@mail.gmail.com
Backpatch: 9.5, where the option was introduced
Tom Lane [Sun, 9 Aug 2015 23:01:04 +0000 (19:01 -0400)]
Remove gram.y's precedence declaration for OVERLAPS.
The allowed syntax for OVERLAPS, viz "row OVERLAPS row", is sufficiently
constrained that we don't actually need a precedence declaration for
OVERLAPS; indeed removing this declaration does not change the generated
gram.c file at all. Let's remove it to avoid confusion about whether
OVERLAPS has precedence or not. If we ever generalize what we allow for
OVERLAPS, we might need to put back a precedence declaration for it,
but we might want some other level than what it has today --- and leaving
the declaration there would just risk confusion about whether that would
be an incompatible change.
Likewise, remove OVERLAPS from the documentation's precedence table.
Per discussion with Noah Misch. Back-patch to 9.5 where we hacked up some
nearby precedence decisions.
Andres Freund [Fri, 7 Aug 2015 23:19:02 +0000 (01:19 +0200)]
Attempt to work around a 32bit xlc compiler bug from a different place.
In de6fd1c8 I moved the workaround from 53f73879 into the AIX
template. The previous location was removed in the former commit, and I
thought that it would be nice to emit a warning when running configure.
That didn't turn out to work because at the point the template is
included we don't know whether we're compiling a 32/64 bit binary and
it's possible to install compilers for both on a 64 bit kernel/OS.
So go back to a less ambitious approach and define
PG_FORCE_DISABLE_INLINE in port/aix.h, without emitting a warning. We
could try a more fancy approach, but it doesn't seem worth it.
This requires moving the check for PG_FORCE_DISABLE_INLINE in c.h to
after the system headers that c.h itself includes, which isn't perfect,
as it seems slightly more robust to include all system headers
in a similar environment. Oh well.
Andres Freund [Fri, 7 Aug 2015 23:19:02 +0000 (01:19 +0200)]
Fix bug slowing down pgbench when -P is used.
A removed check in ba3deeefb made all threads but the main one busy-loop
when -P was used. All threads computed the time at which the next progress
report should be printed, but only the main thread actually printed it and
rescheduled the next report for a future time.
Tom Lane [Fri, 7 Aug 2015 18:13:38 +0000 (14:13 -0400)]
Further adjustments to PlaceHolderVar removal.
A new test case from Andreas Seltenreich showed that we were still a bit
confused about removing PlaceHolderVars during join removal. Specifically,
remove_rel_from_query would remove a PHV that was used only underneath
the removable join, even if the place where it's used was the join partner
relation and not the join clause being deleted. This would lead to a
"too late to create a new PlaceHolderInfo" error later on. We can defend
against that by checking ph_eval_at to see if the PHV could possibly be
getting used at some partner rel.
Also improve some nearby LATERAL-related logic. I decided that the check
on ph_lateral needed to take precedence over the check on ph_needed, in
case there's a lateral reference underneath the join being considered.
(That may be impossible, but I'm not convinced of it, and it's easy enough
to defend against the case.) Also, I realized that remove_rel_from_query's
logic for updating LateralJoinInfos is dead code, because we don't build
those at all until after join removal.
Back-patch to 9.3. Previous versions didn't have the LATERAL issues, of
course, and they also didn't attempt to remove PlaceHolderInfos during join
removal. (I'm starting to wonder if changing that was really such a great
idea.)
Andres Freund [Fri, 7 Aug 2015 13:10:56 +0000 (15:10 +0200)]
Don't include low level locking code from frontend code.
Some frontend code, e.g. pg_xlogdump or pg_resetxlog, has to use backend
headers. Unfortunately, until now that code included most of the locking
code. It's generally not nice to expose such low-level details, but
de6fd1c898 made that a hard problem: we fall back to defining 'inline'
away if the compiler doesn't support it, which can cause linker errors,
as on buildfarm animal pademelon, if an inline function references
backend-only code.
To fix that problem separate definitions from lock.h that are required
from frontend code into lockdefs.h and use it in the relevant
places. I've only removed the minimal amount of necessary definitions
for now - it might turn out that we want more for other reasons.
To avoid such details being exposed again put some checks against being
included from frontend code into atomics.h, lock.h, lwlock.h and
s_lock.h. It's otherwise fairly easy to indirectly include these
headers.
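The guard added to those headers is essentially of this shape (the exact wording of the message may differ):

    #ifdef FRONTEND
    #error "lock.h may not be included from frontend code"
    #endif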
Andres Freund [Fri, 7 Aug 2015 13:08:51 +0000 (15:08 +0200)]
Address points made in post-commit review of replication origins.
Amit reviewed the replication origins patch and made some good
points. Address them. This fixes typos in error messages, docs and
comments and adds a missing error check (although in a
should-never-happen scenario).
Discussion: CAA4eK1JqUBVeWWKwUmBPryFaje4190ug0y-OAUHWQ6tD83V4xg@mail.gmail.com
Backpatch: 9.5, where replication origins were introduced.
Tom Lane [Fri, 7 Aug 2015 02:14:07 +0000 (22:14 -0400)]
Fix old oversight in join removal logic.
Commit 9e7e29c75ad441450f9b8287bd51c13521641e3b introduced an Assert that
join removal didn't reduce the eval_at set of any PlaceHolderVar to empty.
At first glance it looks like join_is_removable ensures that's true --- but
actually, the loop in join_is_removable skips PlaceHolderVars that are not
referenced above the join due to be removed. So, if we don't want any
empty eval_at sets, the right thing to do is to delete any now-unreferenced
PlaceHolderVars from the data structure entirely.
Per fuzz testing by Andreas Seltenreich. Back-patch to 9.3 where the
aforesaid Assert was added.
Tom Lane [Fri, 7 Aug 2015 00:14:37 +0000 (20:14 -0400)]
Fix eclass_useful_for_merging to give valid results for appendrel children.
Formerly, this function would always return "true" for an appendrel child
relation, because it would think that the appendrel parent was a potential
join target for the child. In principle that should only lead to some
inefficiency in planning, but fuzz testing by Andreas Seltenreich disclosed
that it could lead to "could not find pathkey item to sort" planner errors
in odd corner cases. Specifically, we would think that all columns of a
child table's multicolumn index were interesting pathkeys, causing us to
generate a MergeAppend path that sorts by all the columns. However, if any
of those columns weren't actually used above the level of the appendrel,
they would not get added to that rel's targetlist, which would result in
being unable to resolve the MergeAppend's sort keys against its targetlist
in createplan.c.
Backpatch to 9.3. In older versions, columns of an appendrel get added
to its targetlist even if they're not mentioned above the scan level,
so that the failure doesn't occur. It might be worth back-patching this
fix to older versions anyway, but I'll refrain for the moment.
Tom Lane [Thu, 6 Aug 2015 19:35:27 +0000 (15:35 -0400)]
Further fixes for degenerate outer join clauses.
Further testing revealed that commit f69b4b9495269cc4 was still a few
bricks shy of a load: minor tweaking of the previous test cases resulted
in the same wrong-outer-join-order problem coming back. After study
I concluded that my previous changes in make_outerjoininfo() were just
accidentally masking the problem, and should be reverted in favor of
forcing syntactic join order whenever an upper outer join's predicate
doesn't mention a lower outer join's LHS. This still allows the
chained-outer-joins style that is the normally optimizable case.
I also tightened things up some more in join_is_legal(). It seems to me
on review that what's really happening in the exception case where we
ignore a mismatched special join is that we're allowing the proposed join
to associate into the RHS of the outer join we're comparing it to. As
such, we should *always* insist that the proposed join be a left join,
which eliminates a bunch of rather dubious argumentation. The case where
we weren't enforcing that was the one that was already known buggy anyway
(it had a violatable Assert before the aforesaid commit) so it hardly
deserves a lot of deference.
Back-patch to all active branches, like the previous patch. The added
regression test case failed in all branches back to 9.1, and I think it's
only an unrelated change in costing calculations that kept 9.0 from
choosing a broken plan.
Robert Haas [Thu, 6 Aug 2015 17:25:45 +0000 (13:25 -0400)]
Fix incorrect calculation in shm_mq_receive.
If some, but not all, of the length word has already been read, and the
next attempt to read sees exactly the number of bytes needed to complete
the length word, or fewer, then we'll incorrectly read less than all of
the available data.
Robert Haas [Thu, 6 Aug 2015 15:52:51 +0000 (11:52 -0400)]
Reduce ProcArrayLock contention by removing backends in batches.
When a write transaction commits, it must clear its XID advertised via
the ProcArray, which requires that we hold ProcArrayLock in exclusive
mode in order to prevent concurrent processes running GetSnapshotData
from seeing inconsistent results. When many processes try to commit
at once, ProcArrayLock must change hands repeatedly, with each
concurrent process trying to commit waking up to acquire the lock in
turn. To make things more efficient, when more than one backend is
trying to commit a write transaction at the same time, have just one
of them acquire ProcArrayLock in exclusive mode and clear the XIDs of
all processes in the group. Benchmarking reveals that this is much
more efficient at very high client counts.
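A highly simplified sketch of the group-clearing idea using C11 atomics; the real patch works with PGPROC lists, LWLocks and latches, and handles many details omitted here (the names below are hypothetical):

    #include <stdatomic.h>
    #include <stddef.h>

    typedef struct ProcSketch
    {
        _Atomic unsigned int xid;    /* advertised XID; 0 means none */
        _Atomic int cleared;         /* set by the leader once our XID is cleared */
        struct ProcSketch *next;     /* link in the pending-clear list */
    } ProcSketch;

    static _Atomic(ProcSketch *) pending_head;
    static atomic_flag procarray_lock = ATOMIC_FLAG_INIT;  /* stand-in for ProcArrayLock */

    static void
    clear_xid_grouped(ProcSketch *me)
    {
        ProcSketch *head;
        ProcSketch *p;

        /* add ourselves to the list of backends waiting to be cleared */
        atomic_store(&me->cleared, 0);
        do
        {
            head = atomic_load(&pending_head);
            me->next = head;
        } while (!atomic_compare_exchange_weak(&pending_head, &head, me));

        if (head != NULL)
        {
            /* someone else is (or will become) the leader; wait to be cleared
             * (the real code sleeps on a latch instead of spinning) */
            while (!atomic_load(&me->cleared))
                ;
            return;
        }

        /* we are the leader: take the lock once on behalf of the whole group */
        while (atomic_flag_test_and_set(&procarray_lock))
            ;
        head = atomic_exchange(&pending_head, NULL);
        for (p = head; p != NULL; p = p->next)
            atomic_store(&p->xid, 0);
        atomic_flag_clear(&procarray_lock);

        /* wake the members; read each "next" before waking its owner */
        p = head;
        while (p != NULL)
        {
            ProcSketch *next = p->next;

            atomic_store(&p->cleared, 1);
            p = next;
        }
    }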
Amit Kapila, heavily revised by me, with some review also from Pavan
Deolasee.
Kevin Grittner [Thu, 6 Aug 2015 15:47:47 +0000 (10:47 -0500)]
Fix `make installcheck` for serializable transactions.
Commit e5550d5fec66aa74caad1f79b79826ec64898688 added some new
tests for ALTER TABLE which involved table scans. When
default_transaction_isolation = 'serializable' these acquire
relation-level SIReadLocks. The test results didn't cope with
that. Add SIReadLock as the minimum lock level for purposes of
these tests.
This could also be fixed by excluding this type of lock from the
my_locks view, but it would be a bug for SIReadLock to show up for
a relation which was not otherwise locked, so do it this way to
allow that sort of condition to cause a regression test failure.
There is some question whether we could avoid taking SIReadLocks
during these operations, but confirming the safety of that and
figuring out how to avoid the locks is not trivial, and would be
a separate patch.
Andres Freund [Thu, 6 Aug 2015 10:38:35 +0000 (12:38 +0200)]
Improve includes introduced in the replication origins patch.
pg_resetxlog.h contained two superfluous includes, origin.h superfluously
depended on logical.h, and pg_xlogdump's rmgrdesc.h only indirectly
included origin.h.
Backpatch: 9.5, where replication origins were introduced.
Noah Misch [Thu, 6 Aug 2015 00:44:27 +0000 (20:44 -0400)]
Reconcile nodes/*funcs.c with recent work.
A few of the discrepancies had semantic significance, but I did not
track down the resulting user-visible bugs, if any. Back-patch to 9.5,
where all but one discrepancy appeared. The _equalCreateEventTrigStmt()
situation dates to 9.3 but does not affect semantics.
catversion bump due to readfuncs.c field order changes.
Noah Misch [Thu, 6 Aug 2015 00:43:07 +0000 (20:43 -0400)]
Link $(WIN32RES) into single-file modules only when PGFILEDESC is set.
Commit 0ffc201a51395ca71fe429ef86c872850a5850ee included this object
unconditionally. Being unprepared for that, most external, single-file
modules failed to build. This better aligns the GNU make build system
with the heuristic in the MSVC build's Project::AddDirResourceFile().
In-tree, installed modules set PGFILEDESC, so they will see no change.
Also, under PGXS, omit the nonfunctioning rule to build win32ver.rc.
Back-patch to 9.5, where the aforementioned commit first appeared.