Andres Freund [Sat, 12 Nov 2016 13:01:48 +0000 (05:01 -0800)]
Add minimal set of regression tests for pg_stat_statements.
While the set of covered functionality is fairly small, the added tests
still are useful to get some basic buildfarm testing of
pg_stat_statements itself, but also to exercise the lwlock tranche code
on the buildfarm.
Author: Amit Kapila, slightly editorialized by me
Reviewed-By: Ashutosh Sharma, Andres Freund
Discussion: <CAA4eK1JOjkdXYtHxh=2aDK4VgDtN-LNGKY_YqX0N=YEvuzQVWg@mail.gmail.com>
Tom Lane [Fri, 11 Nov 2016 17:03:49 +0000 (12:03 -0500)]
Doc: fix data types of FuncCallContext's call_cntr and max_calls fields.
Commit 23a27b039 widened these from uint32 to uint64, but I overlooked
that the documentation explicitly showed them as uint32. Per report
from Vicky Vergara.
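For reference, the corrected declarations in funcapi.h (abridged; the
remaining fields are unchanged) now read:

    typedef struct FuncCallContext
    {
        uint64      call_cntr;  /* 0 on first call, incremented thereafter */
        uint64      max_calls;  /* optional max number of calls */
        /* ... remaining fields unchanged ... */
    } FuncCallContext;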
Tom Lane [Thu, 10 Nov 2016 21:16:33 +0000 (16:16 -0500)]
Cleanup of rewriter and planner handling of Query.hasRowSecurity flag.
Be sure to pull up the subquery's hasRowSecurity flag when flattening a
subquery in pull_up_simple_subquery(). This isn't a bug today because
we don't look at the hasRowSecurity flag during planning, but it could
easily be a bug tomorrow.
Likewise, make rewriteRuleAction() pull up the hasRowSecurity flag when
absorbing RTEs from a rule action. This isn't a bug either, for the
opposite reason: the flag should never be set yet. But again, it seems
like good future proofing.
Add a comment explaining why rewriteTargetView() should *not* set
hasRowSecurity when adding stuff to securityQuals.
Improve some nearby comments about securityQuals processing, and document
that field more completely in parsenodes.h.
Tom Lane [Thu, 10 Nov 2016 20:00:58 +0000 (15:00 -0500)]
Re-allow user_catalog_table option for materialized views.
The reloptions stuff allows this option to be set on a matview.
While it's questionable whether that is useful or was really intended,
it does work, and we shouldn't change that in minor releases. Commit
e3e66d8a9 disabled the option since I didn't realize that it was
possible for it to be set on a matview. Tweak the test to re-allow it.
Tom Lane [Thu, 10 Nov 2016 16:31:56 +0000 (11:31 -0500)]
Fix partial aggregation for the case of a degenerate GROUP BY clause.
The plan generated for sorted partial aggregation with "GROUP BY constant"
included a Sort node with no sort keys, which the executor does not like.
Per report from Steve Randall. I'd add a regression test case if I could
think of a compact one, but it doesn't seem worth expending lots of cycles
on.
Tom Lane [Tue, 8 Nov 2016 22:39:45 +0000 (17:39 -0500)]
Simplify code by getting rid of SPI_push, SPI_pop, SPI_restore_connection.
The idea behind SPI_push was to allow transitioning back into an
"unconnected" state when a SPI-using procedure calls unrelated code that
might or might not invoke SPI. That sounds good, but in practice the only
thing it does for us is to catch cases where a called SPI-using function
forgets to call SPI_connect --- which is a highly improbable failure mode,
since it would be exposed immediately by direct testing of said function.
As against that, we've had multiple bugs induced by forgetting to call
SPI_push/SPI_pop around code that might invoke SPI-using functions; these
are much harder to catch and indeed have gone undetected for years in some
cases. And we've had to band-aid around some problems of this ilk by
introducing conditional push/pop pairs in some places, which really kind
of defeats the purpose altogether; if we can't draw bright lines between
connected and unconnected code, what's the point?
Hence, get rid of SPI_push[_conditional], SPI_pop[_conditional], and the
underlying state variable _SPI_curid. It turns out SPI_restore_connection
can go away too, which is a nice side benefit since it was never more than
a kluge. Provide no-op macros for the deleted functions so as to avoid an
API break for external modules.
A side effect of this removal is that SPI_palloc and allied functions no
longer permit being called when unconnected; they'll throw an error
instead. The apparent usefulness of the previous behavior was a mirage
as well, because it was depended on by only a few places (which I fixed in
preceding commits), and it posed a risk of allocations being unexpectedly
long-lived if someone forgot a SPI_push call.
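The compatibility shims look roughly like this (the exact spellings in
spi.h may differ slightly):

    /* These used to be functions; now no-ops for backwards compatibility */
    #define SPI_push()                  ((void) 0)
    #define SPI_pop()                   ((void) 0)
    #define SPI_push_conditional()      false
    #define SPI_pop_conditional(pushed) ((void) 0)
    #define SPI_restore_connection()    ((void) 0)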
Robert Haas [Tue, 8 Nov 2016 21:27:09 +0000 (16:27 -0500)]
psql: Tab completion for renaming enum values.
For ALTER TYPE .. RENAME, add "VALUE" to the list of possible
completions. Complete ALTER TYPE .. RENAME VALUE with possible
enum values. After that, complete with "TO".
Dagfinn Ilmari Mannsåker, reviewed by Artur Zakirov.
Tom Lane [Tue, 8 Nov 2016 20:36:36 +0000 (15:36 -0500)]
Replace uses of SPI_modifytuple that intend to allocate in current context.
Invent a new function heap_modify_tuple_by_cols() that is functionally
equivalent to SPI_modifytuple except that it always allocates its result
by simple palloc. I chose however to make the API details a bit more
like heap_modify_tuple: pass a tupdesc rather than a Relation, and use
bool convention for the isnull array.
Use this function in place of SPI_modifytuple at all call sites where the
intended behavior is to allocate in current context. (There actually are
only two call sites left that depend on the old behavior, which makes me
wonder if we should just drop this function rather than keep it.)
This new function is easier to use than heap_modify_tuple() for purposes
of replacing a single column (or, really, any fixed number of columns).
There are a number of places where it would simplify the code to change
over, but I resisted that temptation for the moment ... everywhere except
in plpgsql's exec_assign_value(); changing that might offer some small
performance benefit, so I did it.
This is on the way to removing SPI_push/SPI_pop, but it seems like
good code cleanup in its own right.
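A usage sketch for the single-column case (column number and value are
illustrative only):

    /* Replace just column 2 of an existing tuple, in the current context */
    int    replCol    = 2;
    Datum  replValue  = Int32GetDatum(42);
    bool   replIsnull = false;

    newtup = heap_modify_tuple_by_cols(oldtup, tupdesc,
                                       1,           /* number of columns */
                                       &replCol,    /* attnums to replace */
                                       &replValue,
                                       &replIsnull);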
Tom Lane [Tue, 8 Nov 2016 18:11:15 +0000 (13:11 -0500)]
Make SPI_fnumber() reject dropped columns.
There's basically no scenario where it's sensible for this to match
dropped columns, so put a test for dropped-ness into SPI_fnumber()
itself, and excise the test from the small number of callers that
were paying attention to the case. (Most weren't :-(.)
In passing, normalize tests at call sites: always reject attnum <= 0
if we're disallowing system columns. Previously there was a mixture
of "< 0" and "<= 0" tests. This makes no practical difference since
SPI_fnumber() never returns 0, but I'm feeling pedantic today.
Also, in the places that are actually live user-facing code and not
legacy cruft, distinguish "column not found" from "can't handle
system column".
Per discussion with Jim Nasby; this supersedes his original patch
that just changed the behavior at one call site.
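The normalized call-site pattern looks roughly like this (error texts
illustrative):

    int attnum = SPI_fnumber(tupdesc, colname);

    if (attnum == SPI_ERROR_NOATTRIBUTE)
        ereport(ERROR,
                (errcode(ERRCODE_UNDEFINED_COLUMN),
                 errmsg("column \"%s\" does not exist", colname)));
    if (attnum <= 0)
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("cannot set system column \"%s\"", colname)));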
Robert Haas [Tue, 8 Nov 2016 17:09:18 +0000 (12:09 -0500)]
Fix mistake in XLOG_SEG_SIZE test.
The intent of the test is to check whether XLOG_SEG_SIZE is in a
particular range, but actually in one case it compares XLOG_BLCKSZ
by mistake. Repair.
Tom Lane [Tue, 8 Nov 2016 17:00:24 +0000 (12:00 -0500)]
Use heap_modify_tuple not SPI_modifytuple in pl/python triggers.
The code here would need some change anyway given planned change in
SPI_modifytuple semantics, since this executes after we've exited the
SPI environment. But really it's better to just use heap_modify_tuple.
While at it, normalize use of SPI_fnumber: make error messages distinguish
no-such-column from can't-set-system-column, and remove test for deleted
column which is going to migrate into SPI_fnumber. The lack of a check
for system column names is actually a pre-existing bug here, and might
even qualify as a security bug except that we don't have any trusted
version of plpython.
Tom Lane [Tue, 8 Nov 2016 16:35:01 +0000 (11:35 -0500)]
Use heap_modify_tuple not SPI_modifytuple in pl/perl triggers.
The code here would need some change anyway given planned change in
SPI_modifytuple semantics, since this executes after we've exited the
SPI environment. But really it's better to just use heap_modify_tuple.
The code's actually shorter this way, and this avoids depending on some
rather indirect reasoning about why the temporary arrays can't be overrun.
(I think the old code is safe, as long as Perl hashes can't contain
duplicate keys; but with this way we don't need that assumption, only
the assumption that SPI_fnumber doesn't return an out-of-range attnum.)
While at it, normalize use of SPI_fnumber: make error messages distinguish
no-such-column from can't-set-system-column, and remove test for deleted
column which is going to migrate into SPI_fnumber.
Robert Haas [Tue, 8 Nov 2016 15:47:52 +0000 (10:47 -0500)]
Improve handling of dead tuples in hash indexes.
When squeezing a bucket during vacuum, it's not necessary to retain
any tuples already marked as dead, so ignore them when deciding which
tuples must be moved in order to empty a bucket page. Similarly, when
splitting a bucket, relocating dead tuples to the new bucket is a
waste of effort; instead, just ignore them.
Amit Kapila, reviewed by me. Testing help provided by Ashutosh
Sharma.
Noah Misch [Tue, 8 Nov 2016 01:27:30 +0000 (20:27 -0500)]
Change qr/foo$/m to qr/foo\n/m, for Perl 5.8.8.
In each case, absence of a trailing newline would itself constitute a
PostgreSQL bug. Therefore, this slightly enhances the changed tests.
This works around a bug that last appeared in Perl 5.8.8, fixing
src/test/modules/test_pg_dump when run against that version. Commit
e7293e3271bf618eeb2d4779a15fc516a69fe463 worked around the bug, but the
subsequent addition of test_pg_dump introduced affected code. As that
commit had shown, slight increases in pattern complexity can suppress
the bug. This commit edits qr/foo$/m patterns too complex to encounter
the bug today, for style consistency and robustness against unrelated
pattern changes. Back-patch to 9.6, where test_pg_dump was introduced.
As of this writing, a fresh MSYS installation includes an affected Perl
5.8.8. The Perl 5.8.8 in Red Hat Enterprise Linux 5.11 carries a patch
that renders it unaffected, but the Perl 5.8.5 of Red Hat Enterprise
Linux 4.4 is affected.
Tom Lane [Mon, 7 Nov 2016 17:08:18 +0000 (12:08 -0500)]
Band-aid fix for incorrect use of view options as StdRdOptions.
We really ought to make StdRdOptions and the other decoded forms of
reloptions self-identifying, but for the moment, assume that only plain
relations could possibly be user_catalog_tables. Fixes problem with bogus
"ON CONFLICT is not supported on table ... used as a catalog table" error
when target is a view with cascade option.
Tom Lane [Mon, 7 Nov 2016 15:27:52 +0000 (10:27 -0500)]
Revert "Delete contrib/xml2's legacy implementation of xml_is_well_formed()."
This partly reverts commit 20540710e83f2873707c284a0c0693f0b57156c4.
Since we've given up on adding PGDLLEXPORT markers to PG_FUNCTION_INFO_V1,
there's no need to remove the legacy compatibility function. I kept the
documentation changes, though, as they seem appropriate anyway.
Tom Lane [Mon, 7 Nov 2016 15:19:22 +0000 (10:19 -0500)]
Revert "Provide DLLEXPORT markers for C functions via PG_FUNCTION_INFO_V1 macro."
This reverts commit c8ead2a3974d3eada145a0e18940150039493cc9.
Seems there is no way to do this that doesn't cause MSVC to give
warnings, so let's just go back to the way we've been doing it.
Peter Eisentraut [Thu, 27 Oct 2016 16:00:00 +0000 (12:00 -0400)]
Save redundant code for pseudotype I/O functions
Use a macro to generate the in and out functions for pseudotypes that
reject all input and output, saving many lines of redundant code.
Parameterize the error messages to reduce translatable strings.
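A sketch of the idea (macro name and exact message wording hypothetical,
not necessarily what the commit used):

    /* Generate an input function that always rejects its input */
    #define PSEUDOTYPE_DUMMY_INPUT_FUNC(typname) \
    Datum \
    typname##_in(PG_FUNCTION_ARGS) \
    { \
        ereport(ERROR, \
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), \
                 errmsg("cannot accept a value of type %s", #typname))); \
        PG_RETURN_VOID();   /* keep compiler quiet */ \
    } \
    extern int no_such_variable    /* swallow the trailing semicolon */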
Tom Lane [Sun, 6 Nov 2016 21:09:57 +0000 (16:09 -0500)]
Modernize result-tuple construction in pltcl_trigger_handler().
Use Tcl_ListObjGetElements instead of Tcl_SplitList. Aside from being
possibly more efficient in its own right, this means we are no longer
responsible for freeing a malloc'd result array, so we can get rid of
a PG_TRY/PG_CATCH block.
Use heap_form_tuple instead of SPI_modifytuple. We don't need the
extra generality of the latter, since we're always replacing all
columns. Nor do we need its memory-context-munging, since at this
point we're already out of the SPI environment.
Per comparison of this code to tuple-building code submitted by Jim Nasby.
I've abandoned the thought of merging the two cases into a single routine,
but we may as well make the older code simpler and faster where we can.
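A sketch of the new pattern (variable names illustrative); the returned
element array points into the list object, so there is nothing to free:

    Tcl_Obj **elems;
    int       nelems;

    if (Tcl_ListObjGetElements(interp, listObj, &nelems, &elems) != TCL_OK)
        return TCL_ERROR;        /* error already left in interp's result */

    for (int i = 0; i < nelems; i++)
        process_element(elems[i]);   /* no Tcl_Free() needed afterwards */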
Tom Lane [Sun, 6 Nov 2016 19:43:13 +0000 (14:43 -0500)]
Rationalize and document pltcl's handling of magic ".tupno" array element.
For a very long time, pltcl's spi_exec and spi_execp commands have had
a behavior of storing the current row number as an element of output
arrays, but this was never documented. Fix that.
For an equally long time, pltcl_trigger_handler had a behavior of silently
ignoring ".tupno" as an output column name, evidently so that the result
of spi_exec could be used directly as a trigger result tuple. Not sure
how useful that really is, but in any case it's bad that it would break
attempts to use ".tupno" as an actual column name. We can fix it by not
checking for ".tupno" until after we check for a column name match. This
comports with the effective behavior of spi_exec[p] that ".tupno" is only
magic when you don't have an actual column named that.
In passing, wordsmith the description of returning modified tuples from
a pltcl trigger.
Noted while working on Jim Nasby's patch to support composite results
from pltcl. The inability to return trigger tuples using ".tupno" as
a column name is a bug, so back-patch to all supported branches.
Tom Lane [Sun, 6 Nov 2016 17:09:36 +0000 (12:09 -0500)]
Need to do SPI_push/SPI_pop around expression evaluation in plpgsql.
We must do this in case the expression evaluation results in calling
another plpgsql function (or, really, anything using SPI). I missed
the need for this when I converted exec_cast_value() from doing a
simple InputFunctionCall() to doing ExecEvalExpr() in commit 1345cc67b.
There is a SPI_push_conditional in InputFunctionCall(), so that there
was no bug before that.
Per bug #14414 from Marcos Castedo. Add a regression test based on his
example, which was that a plpgsql function in a domain check constraint
didn't work when assigning to a domain-type variable within plpgsql.
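The shape of the fix, roughly (the four-argument ExecEvalExpr signature
shown is the 9.6-era one):

    SPI_push();     /* evaluation might re-enter SPI via a plpgsql function */
    retval = ExecEvalExpr(exprstate, econtext, &isnull, NULL);
    SPI_pop();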
Tom Lane [Sun, 6 Nov 2016 15:45:58 +0000 (10:45 -0500)]
More zic cleanup.
The workaround the IANA guys chose to get rid of the clang warning
we'd silenced in commit 23ed2ba81 turns out not to satisfy Coverity.
Go back to the previous solution, ie, remove the useless comparison
to SIZE_MAX. (In principle, there could be machines out there where
it's not useless because ptrdiff_t is wider than size_t. But the whole
thing is pretty academic anyway, as we could never approach this limit
for any sane estimate of the amount of data that zic will ever be asked
to work with.)
Also, s/lineno/lineno_t/g, because if we accept their decision to start
using "lineno" as a typedef, it is going to have very unpleasant
consequences in our next pgindent run. Noted that while fooling with
pltcl yesterday.
Tom Lane [Sat, 5 Nov 2016 21:32:29 +0000 (17:32 -0400)]
Improve minor error-handling details in pltcl.
Don't ask Tcl_GetIndexFromObj to store an error message in the interpreter
in cases where the next argument isn't necessarily one of the options
we're asking it to check for. At best that is a waste of time, and at
worst it might cause an inappropriate error result to get left behind.
Be sure to check for valid syntax (ie, no command arguments) in
pltcl_SPI_lastoid.
Extracted from a larger and otherwise-unrelated patch.
Tom Lane [Sat, 5 Nov 2016 17:48:11 +0000 (13:48 -0400)]
Adjust cost_merge_append() to reflect use of binaryheap_replace_first().
Commit 7a2fe9bd0 improved merge append so that replacement of a tuple
takes log(N) operations, not twice log(N). Since cost_merge_append knew
about that explicitly, we should adjust it. This probably makes little
difference in practice, but the obsolete comment is confusing.
Ideally this would have been put in in 9.3 with the underlying behavior
change; but I'm not going to back-patch it, since there's some small chance
of changing a plan choice that somebody's optimized for.
Tom Lane [Fri, 4 Nov 2016 23:04:56 +0000 (19:04 -0400)]
Provide DLLEXPORT markers for C functions via PG_FUNCTION_INFO_V1 macro.
Second try at the change originally made in commit 8518583cd;
this time with contrib updates so that manual extern declarations
are also marked with PGDLLEXPORT. The release notes should point
this out as a significant source-code change for extension authors,
since they'll have to make similar additions to avoid trouble on Windows.
Tom Lane [Fri, 4 Nov 2016 22:29:53 +0000 (18:29 -0400)]
Delete contrib/xml2's legacy implementation of xml_is_well_formed().
This function is unreferenced in modern usage; it was superseded in 9.1
by a core function of the same name. It has been left in place in the C
code only so that pre-9.1 SQL definitions of the contrib/xml2 functions
would continue to work. Six years seems like enough time for people to
have updated to the extension-style version of the xml2 module, so let's
drop this.
The key reason for not keeping it any longer is that we want to stick
an explicit PGDLLEXPORT into PG_FUNCTION_INFO_V1(), and the similarity
of name to the core function creates a conflict that compilers will
complain about.
Extracted from a larger patch for that purpose. I'm committing this
change separately to give it more visibility in the commit logs.
While at it, remove the documentation entry that claimed that
xml_is_well_formed() is a function provided by contrib/xml2, and
instead mention the even more ancient alias xml_valid().
Tom Lane [Fri, 4 Nov 2016 17:26:49 +0000 (13:26 -0400)]
Be more consistent about masking xl_info with ~XLR_INFO_MASK.
Generally, WAL resource managers are only supposed to examine the
top 4 bits of a WAL record's xl_info; the rest are reserved for
the WAL mechanism itself. A few places were not consistent about
doing this with respect to XLOG_CHECKPOINT and XLOG_SWITCH records.
There's no bug currently, since no additional bits ever get set in
these specific record types, but that might not be true forever.
Let's follow the generic coding rule here too.
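The generic coding rule, in a sketch:

    /* keep only the top 4 bits that belong to the resource manager */
    uint8 info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;

    if (info == XLOG_CHECKPOINT_SHUTDOWN ||
        info == XLOG_CHECKPOINT_ONLINE)
    {
        /* examine the checkpoint record ... */
    }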
Tom Lane [Fri, 4 Nov 2016 16:11:47 +0000 (12:11 -0400)]
Fix gin_leafpage_items().
On closer inspection, commit 84ad68d64 broke gin_leafpage_items(),
because the aligned copy of the page got palloc'd in a short-lived
context whereas it needs to be in the SRF's multi_call_memory_ctx.
This was not exposed by the regression test, because the regression
test doesn't actually exercise the function in a meaningful way.
Fix the code bug, and extend the test in what I hope is a portable
fashion.
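The relevant SRF coding pattern, sketched: allocations that must survive
across calls have to be made in multi_call_memory_ctx, not the per-call
context.

    if (SRF_IS_FIRSTCALL())
    {
        fctx = SRF_FIRSTCALL_INIT();
        oldcontext = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);

        /* the aligned page copy must live across calls */
        page = palloc(BLCKSZ);
        memcpy(page, VARDATA(raw_page), BLCKSZ);
        fctx->user_fctx = page;

        MemoryContextSwitchTo(oldcontext);
    }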
Kevin Grittner [Fri, 4 Nov 2016 15:49:50 +0000 (10:49 -0500)]
Implement syntax for transition tables in AFTER triggers.
This is infrastructure for the complete SQL standard feature. No
support is included at this point for execution nodes or PLs. The
intent is to add that soon.
As this patch leaves things, standard syntax can create tuplestores
to contain old and/or new versions of rows affected by a statement.
References to these tuplestores are in the TriggerData structure.
C triggers can access the tuplestores directly, so they are usable,
but they cannot yet be referenced within a SQL statement.
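A hedged sketch of how a C trigger might reach the tuplestores (the
tg_oldtable/tg_newtable fields are the TriggerData additions from this
commit; the function itself is hypothetical):

    Datum
    my_after_trigger(PG_FUNCTION_ARGS)
    {
        TriggerData *trigdata = (TriggerData *) fcinfo->context;

        /* NULL unless the trigger declared a transition table */
        Tuplestorestate *old_rows = trigdata->tg_oldtable;
        Tuplestorestate *new_rows = trigdata->tg_newtable;

        /* scan them with tuplestore_gettupleslot(), etc. */

        return PointerGetDatum(NULL);   /* result ignored for AFTER */
    }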
pageinspect: Fix unaligned struct access in GIN functions
The raw page data that is passed into the functions will not be aligned
at 8-byte boundaries. Casting that to a struct and accessing int64
fields will result in unaligned access. On most platforms, you get away
with it, but it will result in a crash on pickier platforms such as ia64
and sparc64.
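The fix pattern is to copy the raw bytes into a palloc'd (hence
MAXALIGN'd) buffer before casting; a sketch:

    /* copy into an aligned buffer instead of casting VARDATA directly */
    Page page = palloc(BLCKSZ);

    memcpy(page, VARDATA(raw_page), BLCKSZ);
    opaque = GinPageGetOpaque(page);    /* now safe on ia64/sparc64 too */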
Robert Haas [Fri, 4 Nov 2016 13:27:49 +0000 (09:27 -0400)]
Add API to check if an existing exclusive lock allows cleanup.
LockBufferForCleanup() acquires a cleanup lock unconditionally, and
ConditionalLockBufferForCleanup() acquires a cleanup lock if it is
possible to do so without waiting; this patch adds a new API,
IsBufferCleanupOK(), which tests whether an exclusive lock already
held happens to be a cleanup lock. This is possible because a cleanup
lock simply means an exclusive lock plus the assurance any other pins
on the buffer are newer than our own pin. Therefore, just as the
existing functions decide that the exclusive lock that they've just
taken is a cleanup lock if they observe the pin count to be 1, this
new function allows us to observe that the pin count is 1 on a buffer
we've already locked.
This is useful in situations where a backend definitely wishes to
modify the buffer and also wishes to perform cleanup operations if
possible. The patch to eliminate heavyweight locking by hash indexes
uses this, and it may have other applications as well.
Amit Kapila, per a suggestion from me. Some comment adjustments by me
as well.
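Intended usage, sketched:

    LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);

    /* ... decide the page needs to be modified ... */

    if (IsBufferCleanupOK(buf))
    {
        /* pin count is 1: safe to also do cleanup, e.g. remove dead tuples */
    }
    else
    {
        /* only an ordinary exclusive lock: do the modification alone */
    }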
Tom Lane [Fri, 4 Nov 2016 02:24:34 +0000 (22:24 -0400)]
Sync our copy of the timezone library with IANA tzcode master.
This patch absorbs some unreleased fixes for symlink manipulation bugs
introduced in tzcode 2016g. Ordinarily I'd wait around for a released
version, but in this case it seems like we could do with extra testing,
in particular checking whether it works in EDB's VMware build environment.
This corresponds to commit aec59156abbf8472ba201b6c7ca2592f9c10e077 in
https://github.com/eggert/tz.
Per a report from Sandeep Thakkar, building in an environment where hard
links are not supported in the timezone data installation directory failed,
because upstream code refactoring had broken the case of symlinking from an
existing symlink. Further experimentation also showed that the symlinks
were sometimes made incorrectly, with too many or too few "../"'s in the
symlink contents.
This should get back-patched, but first let's see what the buildfarm
makes of it. I'm not too sure about the new dependency on linkat(2).
Tom Lane [Wed, 2 Nov 2016 19:50:15 +0000 (15:50 -0400)]
Don't make FK-based selectivity estimates in inheritance situations.
The foreign-key-aware logic for estimation of join sizes (added in commit
100340e2d) blindly tried to apply the concept to rels that are actually
parents of inheritance trees. This is just plain wrong so far as the
referenced relation is concerned, since the inheritance scan may well
produce lots of rows that are not participating in the constraint. It's
wrong for the referencing relation too, for the same reason; although on
that end we could conceivably detect whether all members of the inheritance
tree have equivalent FK constraints pointing to the same referenced rel,
and then proceed more or less as we do now. But pending somebody writing
code to do that, we must disable this, because it's producing completely
silly estimates when there's an FK linking the heads of inheritance trees.
Per bug #14404 from Clinton Adams. Back-patch to 9.6 where the new
estimation logic came in.
Tom Lane [Wed, 2 Nov 2016 18:32:13 +0000 (14:32 -0400)]
Don't convert Consts into Vars during setrefs.c processing.
While converting expressions in an upper-level plan node so that they
reference Vars and expressions provided by the input plan node(s),
don't convert plain Const items, even if there happens to be a matching
Const in the input. It's silly to do so because a Var is more expensive to
execute than a Const. Moreover, converting can fool ExecCheckPlanOutput's
check that an insert or update query inserts nulls into dropped columns,
leading to "query provides a value for a dropped column" errors during
INSERT or UPDATE on a table with a dropped column. We could solve this
by making that check more complicated, but I don't see the point; this fix
should save a marginal number of cycles, and it also makes for less messy
EXPLAIN output, as shown by the ensuing regression test result changes.
Per report from Pavel Hanák. I have not incorporated a test case based
on that example, as there doesn't seem to be a simple way of checking
this in isolation without making a bunch of assumptions about other
planner and SQL-function behavior.
Back-patch to 9.6. This setrefs.c behavior exists much further back,
but there is currently no reason to think that it causes problems
before 9.6.
Tom Lane [Wed, 2 Nov 2016 04:09:27 +0000 (00:09 -0400)]
Fix portability bug in gin_page_opaque_info().
Somebody apparently thought that "if Int32GetDatum is good,
Int64GetDatum must be better". Per buildfarm failures now
that Peter has added some regression tests here.
Peter Eisentraut [Fri, 28 Oct 2016 16:00:00 +0000 (12:00 -0400)]
Add make rules to download raw Unicode mapping files
This serves as implicit documentation and is handy if someone wants to
tweak things. The rules are not part of a normal build, like this
entire directory.
Robert Haas [Mon, 31 Oct 2016 13:14:46 +0000 (09:14 -0400)]
Remove declarations for pq_putmessage_hook and pq_flush_hook.
Commit 2bd9e412f92bc6a68f3e8bcb18e04955cc35001d added these in error.
They were part of an earlier design for that patch and survived in the
committed version only by inadvertency.
Tom Lane [Sun, 30 Oct 2016 21:35:42 +0000 (17:35 -0400)]
Fix nasty performance problem in tsquery_rewrite().
tsquery_rewrite() tries to find matches to subsets of AND/OR conditions;
for example, in the query 'a | b | c' the substitution subquery 'a | c'
should match and lead to replacement of the first and third items.
That's fine, but the matching algorithm apparently takes about O(2^N)
for an N-clause query (I say "apparently" because the code is also both
unintelligible and uncommented). We could probably do better than that
even without any extra assumptions --- but actually, we know that the
subclauses are sorted, indeed are depending on that elsewhere in this very
same function. So we can just scan the two lists a single time to detect
matches, as though we were doing a merge join.
Also do a re-flattening call (QTNTernary()) in tsquery_rewrite_query, just
to make sure that the tree fits the expectations of the next search cycle.
I didn't try to devise a test case for this, but I'm pretty sure that the
oversight could have led to failure to match in some cases where a match
would be expected.
Improve comments, and also stick a CHECK_FOR_INTERRUPTS into
dofindsubquery, just in case it's still too slow for somebody.
Per report from Andreas Seltenreich. Back-patch to all supported branches.
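Since both clause lists are sorted, the match test can be one
simultaneous scan, O(N+M) instead of exponential; a sketch with
hypothetical names, not the committed code:

    static bool
    subset_matches(QueryItem **query, int nquery,
                   QueryItem **subst, int nsubst)
    {
        int i = 0, j = 0;

        while (i < nquery && j < nsubst)
        {
            int cmp = compare_items(query[i], subst[j]);  /* hypothetical */

            if (cmp < 0)
                i++;            /* query item not needed; skip it */
            else if (cmp == 0)
            {
                i++;
                j++;            /* matched one substitution item */
            }
            else
                return false;   /* subst[j] can't appear later in sorted list */
        }
        return (j == nsubst);   /* all substitution items were found */
    }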
Tom Lane [Sun, 30 Oct 2016 19:24:40 +0000 (15:24 -0400)]
Fix bogus tree-flattening logic in QTNTernary().
QTNTernary() contains logic to flatten, eg, '(a & b) & c' into 'a & b & c',
which is all well and good, but it tries to do that to NOT nodes as well,
so that '!!a' gets changed to '!a'. Explicitly restrict the conversion to
be done only on AND and OR nodes, and add a test case illustrating the bug.
In passing, provide some comments for the sadly naked functions in
tsquery_util.c, and simplify some baroque logic in QTNFree(), which
I think may have been leaking some items it intended to free.
Noted while investigating a complaint from Andreas Seltenreich.
Back-patch to all supported versions.
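The guard amounts to flattening only under AND/OR parents; sketched
(field spellings approximate, helper hypothetical):

    /* flatten '(a & b) & c' but never touch NOT nodes, so '!!a' survives */
    if (node->valnode->type == QI_OPR &&
        (node->valnode->qoperator.oper == OP_AND ||
         node->valnode->qoperator.oper == OP_OR))
        flatten_same_operator_children(node);   /* hypothetical helper */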
Tom Lane [Sun, 30 Oct 2016 16:27:41 +0000 (12:27 -0400)]
Improve speed of aggregates that use array_append as transition function.
In the previous coding, if an aggregate's transition function returned an
expanded array, nodeAgg.c and nodeWindowAgg.c would always copy it and thus
force it into the flat representation. This led to ping-ponging between
flat and expanded formats, which costs a lot. For an aggregate using
array_append as transition function, I measured about a 15X slowdown
compared to the pre-9.5 code, when working on simple int[] arrays.
Of course, the old code was already O(N^2) in this usage due to copying
flat arrays all the time, but it wasn't quite this inefficient.
To fix, teach nodeAgg.c and nodeWindowAgg.c to allow expanded transition
values without copying, so long as the transition function takes care to
return the transition value already properly parented under the aggcontext.
That puts a bit of extra responsibility on the transition function, but
doing it this way allows us to not need any extra logic in the fast path
of advance_transition_function (ie, with a pass-by-value transition value,
or with a modified-in-place pass-by-reference value). We already know
that that's a hot spot so I'm loath to add any cycles at all there. Also,
while only array_append currently knows how to follow this convention,
this solution allows other transition functions to opt-in without needing
to have a whitelist in the core aggregation code.
(The reason we would need a whitelist is that currently, if you pass a
R/W expanded-object pointer to an arbitrary function, it's allowed to do
anything with it including deleting it; that breaks the core agg code's
assumption that it should free discarded values. Returning a value under
aggcontext is the transition function's signal that it knows it is an
aggregate transition function and will play nice. Possibly the API rules
for expanded objects should be refined, but that would not be a
back-patchable change.)
With this fix, an aggregate using array_append is no longer O(N^2), so it's
much faster than pre-9.5 code rather than much slower. It's still a bit
slower than the bespoke infrastructure for array_agg, but the differential
seems to be only about 10%-20% rather than orders of magnitude.
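A minimal sketch of the convention an opting-in transition function
follows (function name hypothetical; element-append details elided):

    Datum
    my_array_transfn(PG_FUNCTION_ARGS)
    {
        MemoryContext aggcontext;
        ExpandedArrayHeader *eah;

        if (!AggCheckCallContext(fcinfo, &aggcontext))
            elog(ERROR, "my_array_transfn called in non-aggregate context");

        /*
         * Expand (or re-fetch) the state array with its memory parented
         * directly under aggcontext; returning it that way signals
         * nodeAgg.c that no defensive copy is required.
         */
        eah = expand_array(PG_GETARG_DATUM(0), aggcontext, NULL);

        /* ... append the new element to eah in place ... */

        PG_RETURN_DATUM(EOHPGetRWDatum(&eah->hdr));
    }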
Robert Haas [Fri, 28 Oct 2016 16:21:15 +0000 (12:21 -0400)]
pgstattuple: Don't take heavyweight locks when examining a hash index.
It's currently necessary to take a heavyweight lock when scanning a
hash bucket, but pgstattuple only examines individual pages, so it
doesn't need to do this. If, for some hypothetical reason, it did
need to do any heavyweight locking here, this logic would probably
still be incorrect, because most of the locks that it is taking are
meaningless. Only a heavyweight lock on a primary bucket page has any
meaning, but this takes heavyweight locks on all pages regardless of
function - and in particular overflow pages, where you might imagine
that we'd want to lock the primary bucket page if we needed to lock
anything at all.
This is arguably a bug that has existed since this code was added in
commit dab42382f483c3070bdce14a4d93c5d0cf61e82b, but I'm not going to
bother back-patching it because in most cases the only consequence is
that running pgstattuple() on a hash index is a little slower than it
otherwise might be, which is no big deal.
Extracted from a vastly larger patch by Amit Kapila which eliminates
heavyweight
locking for hash indexes entirely; analysis of why this can be done
independently of the rest by me.
Peter Eisentraut [Thu, 27 Oct 2016 16:00:00 +0000 (12:00 -0400)]
Remove invitation to report a bug about unknown encoding
The error message when we couldn't determine the encoding from a locale
said to report a bug about that. That might have been appropriate when
this code was first added, but by now this works pretty solidly and any
encodings we don't recognize we probably just don't support. We still
print the warning, but no longer invite the bug report.
Peter Eisentraut [Thu, 27 Oct 2016 16:00:00 +0000 (12:00 -0400)]
Add function name to PyArg_ParseTuple()
This causes the supplied function name to appear in any error message,
making the error message friendlier and relieving us from having to
provide our own in some cases.
Robert Haas [Thu, 27 Oct 2016 15:19:51 +0000 (11:19 -0400)]
Fix possible pg_basebackup failure on standby with "include WAL".
If a restartpoint flushed no dirty buffers, it could fail to update
the minimum recovery point, leading to a minimum recovery point prior
to the starting REDO location. perform_base_backup() would interpret
that as meaning that no WAL files at all needed to be included in the
backup, failing an internal sanity check. To fix, have restartpoints
always update the minimum recovery point to just after the checkpoint
record itself, so that the file (or files) containing the checkpoint
record will always be included in the backup.
Code by Amit Kapila, per a design suggestion by me, with some
additional work on the code comment by me. Test case by Michael
Paquier. Report by Kyotaro Horiguchi.
Tom Lane [Wed, 26 Oct 2016 21:05:06 +0000 (17:05 -0400)]
Fix incorrect trigger-property updating in ALTER CONSTRAINT.
The code to change the deferrability properties of a foreign-key constraint
updated all the associated triggers to match; but a moment's examination of
the code that creates those triggers in the first place shows that only
some of them should track the constraint's deferrability properties. This
leads to odd failures in subsequent exercise of the foreign key, as the
triggers are fired at the wrong times. Fix that, and add a regression test
comparing the trigger properties produced by ALTER CONSTRAINT with those
you get by creating the constraint as-intended to begin with.
Per report from James Parks. Back-patch to 9.4 where this ALTER
functionality was introduced.
Tom Lane [Wed, 26 Oct 2016 17:40:41 +0000 (13:40 -0400)]
Fix not-HAVE_SYMLINK code in zic.c.
I broke this in commit f3094920a. Apparently it's dead code anyway,
at least as far as our buildfarm is concerned (and the upstream IANA
code doesn't worry at all about symlink() not being present).
But as long as the rest of our code is willing to guard against not
having symlink(), this should too. Noted while investigating a
tangentially-related complaint from Sandeep Thakkar.
Tom Lane [Wed, 26 Oct 2016 15:46:25 +0000 (11:46 -0400)]
Doc: improve documentation about inheritance.
Clarify documentation about inheritance of check constraints, in
particular mentioning the NO INHERIT option, which didn't exist when
this text was written.
Document that in an inherited query, the applicable row security policies
are those of the explicitly-named table, not its children. This is the
intended behavior (per off-list discussion with Stephen Frost), and there
are regression tests for it, but it wasn't documented anywhere user-facing
as far as I could find.
Do a bit of wordsmithing on the description of inherited access-privilege
checks.
Turns out that the output format of Python Decimal isn't totally platform-
independent either. There are other tests for multi-dimensional arrays,
so rather than try to fix this test case, just remove it.
Instead of treating all python sequence types as array dimensions, except
for tuples and various kinds of strings, only treat Python lists as
dimensions. The PyBytes_Check() function used previously is only available
on Python 2.6 and newer, and it was a bit fiddly anyway. The list of
exceptions would require adjustment if Python got a new kind of a sequence
similar to bytes/unicodes/strings, so only checking for Lists seems more
future-proof. The documentation only mentioned using Lists, so this is
closer to what was documented, anyway.
This should fix the buildfarm failures on systems building with Python 2.5,
although I don't have Python 2.5 installed myself to test with.
Avoid using platform-dependent floats in test case.
The number of decimals printed for floats varied in this test case, as
noted by several buildfarm members. There's nothing special about floats
and arrays in the code being tested, so replace the floats with numerics to
make the output platform-independent.
Multi-dimensional arrays can now be used as arguments to a PL/python function
(used to throw an error), and they can be returned as nested Python lists.
This makes a backwards-incompatible change to the handling of composite
types in arrays. Previously, you could return an array of composite types
as "[[col1, col2], [col1, col2]]", but now that is interpreted as a two-
dimensional array. Composite types in arrays must now be returned as
Python tuples, not lists, to resolve the ambiguity. I.e. "[(col1, col2),
(col1, col2)]".
To avoid breaking backwards-compatibility, when not necessary, () is still
accepted for arrays at the top-level, but it is always treated as a
single-dimensional array. Likewise, [] is still accepted for composite types,
when they are not in an array. Update the documentation to recommend using []
for arrays, and () for composite types, with a mention that those other things
are also accepted in some contexts.
This needs to be mentioned in the release notes.
Alexey Grishchenko, Dave Cramer and me. Reviewed by Pavel Stehule.
Peter Eisentraut [Tue, 25 Oct 2016 16:00:00 +0000 (12:00 -0400)]
pg_dump: Simplify internal archive version handling
The ArchiveHandle structure contained the archive format version number
twice, once as a single field and once split into components. Simplify
that by just keeping the single field and adding some macros to extract
the components. Introduce some macros for composing version numbers, to
eliminate the repeated use of magic formulas. Drop the unused trailing
zero byte from the run-time composite version representation.
Robert Haas [Tue, 25 Oct 2016 02:36:24 +0000 (22:36 -0400)]
postgres_fdw: Try again to stabilize aggregate pushdown regression tests.
A query that only aggregates one row isn't a great argument for pushdown,
and buildfarm member brolga decides against it. Adjust the query a bit
in the hopes of getting remote aggregation to win consistently.