Tom Lane [Mon, 4 Jul 2016 20:09:11 +0000 (16:09 -0400)]
Fix failure to handle conflicts in non-arbiter exclusion constraints.
ExecInsertIndexTuples treated an exclusion constraint as subject to
noDupErr processing even when it was not listed in arbiterIndexes, and
would therefore not error out for a conflict in such a constraint, instead
returning it as an arbiter-index failure. That led to an infinite loop in
ExecInsert, since ExecCheckIndexConstraints ignored the index as-intended
and therefore didn't throw the expected error. To fix, make the exclusion
constraint code path use the same condition as the index_insert call does
to decide whether no-error-for-duplicates behavior is appropriate. While
at it, refactor a little bit to avoid unnecessary list_member_oid calls.
(That surely wouldn't save anything worth noticing, but I find the code
a bit clearer this way.)
Per bug report from Heikki Rauhala. Back-patch to 9.5 where ON CONFLICT
was introduced.
Tom Lane [Sun, 3 Jul 2016 22:24:49 +0000 (18:24 -0400)]
Allow RTE_SUBQUERY rels to be considered parallel-safe.
There isn't really any reason not to; the original comments here were
partly confused about subplans versus subquery-in-FROM, and partly
dependent on restrictions that no longer apply now that subqueries return
Paths not Plans. Depending on what's inside the subquery, it might fail
to produce any parallel_safe Paths, but that's fine.
Tom Lane [Sun, 3 Jul 2016 21:57:28 +0000 (17:57 -0400)]
Fix up parallel-safety marking for appendrels.
The previous coding assumed that the value derived by
set_rel_consider_parallel() for an appendrel parent would be accurate for
all the appendrel's children; but this is not so, for example because one
child might scan a temp table. Instead, apply set_rel_consider_parallel()
to each child rel as well as the parent, and then take the AND of the
results as controlling parallel safety for the appendrel as a whole.
(We might someday be able to deal more intelligently than this with cases
in which some of the childrels are parallel-safe and others not, but that's
for later.)
Tom Lane [Sun, 3 Jul 2016 20:55:27 +0000 (16:55 -0400)]
Allow treating TABLESAMPLE scans as parallel-safe.
This was the intention all along, but an extraneous "return;" in
set_rel_consider_parallel() caused sampled rels to never be marked
consider_parallel.
Since we don't have any partial tablesample path/plan type yet, there's
no possibility of parallelizing the sample scan itself; but this fix
allows such a scan to appear below a parallel join, for example.
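As a hypothetical illustration (table names invented), the sample scan itself still
runs serially, but with consider_parallel set it can now sit underneath a parallel
join:

    EXPLAIN (COSTS OFF)
    SELECT count(*)
    FROM big_table b
         JOIN sample_me s TABLESAMPLE SYSTEM (1) ON b.id = s.id;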
Tom Lane [Sun, 3 Jul 2016 19:35:29 +0000 (15:35 -0400)]
Set correct cost data in Gather node added by force_parallel_mode.
We were just leaving the cost fields zeroed, which produces obviously bogus
output with force_parallel_mode = on. With force_parallel_mode = regress,
the zeroes are hidden, but I wonder if they wouldn't still confuse add-on
code such as auto_explain.
Tom Lane [Sun, 3 Jul 2016 18:53:37 +0000 (14:53 -0400)]
Round rowcount estimate for a partial path to an integer.
I'd been wondering why I was sometimes seeing fractional rowcount
estimates in parallel-query situations, and this seems to be the
reason. (You won't see the fractional parts in EXPLAIN, because it
prints rowcounts with %.0f, but they are apparent in the debugger.)
A fractional rowcount is not any saner for a partial path than any
other kind of path, and it's equally likely to break cost estimation
for higher paths, so apply clamp_row_est() like we do in other places.
PL/Python: Report argument parsing errors using exceptions
Instead of calling PLy_elog() for reporting Python argument parsing
errors, generate appropriate exceptions. This matches the existing plpy
functions and is more consistent with the behavior of the Python
argument parsing routines.
Tom Lane [Sat, 2 Jul 2016 17:23:02 +0000 (13:23 -0400)]
Fix failure to mark all aggregates with appropriate transtype.
In commit 915b703e1 I gave get_agg_clause_costs() the responsibility of
marking Aggref nodes with the appropriate aggtranstype. I failed to notice
that where it was being called from, it might see only a subset of the
Aggref nodes that were in the original targetlist. Specifically, if there
are duplicate aggregate calls in the tlist, either make_sort_input_target
or make_window_input_target might put just a single instance into the
grouping_target, and then only that instance would get marked. Fix by
moving the call back into grouping_planner(), before we start building
assorted PathTargets from the query tlist. Per report from Stefan Huehner.
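A hypothetical sketch of the problematic query shape (names invented, not from the
report): the same aggregate appears more than once in the target list, so
make_sort_input_target may keep only a single instance in the grouping target,
leaving the other Aggref unmarked.

    SELECT sum(amount) AS total_a,
           sum(amount) AS total_b,
           customer_id
    FROM orders
    GROUP BY customer_id
    ORDER BY customer_id, total_a;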
Bruce Momjian [Sat, 2 Jul 2016 15:22:36 +0000 (11:22 -0400)]
doc: mention dependency on collation libraries
Document that index storage is dependent on the operating system's
collation library ordering, and any change in that ordering can create
invalid indexes.
Tom Lane [Sat, 2 Jul 2016 00:05:55 +0000 (20:05 -0400)]
Fix some interrelated planner issues with initPlans and Param munging.
In commit 68fa28f77 I tried to teach SS_finalize_plan() to cope with
initPlans attached anywhere in the plan tree, by dint of moving its
handling of those into the recursion in finalize_plan(). It turns out that
that doesn't really work: if a lower-level plan node emits an initPlan
output parameter in its targetlist, it's legitimate for upper levels to
reference those Params --- and at the point where this code runs, those
references look just like the Param itself, so finalize_plan() quite
properly rejects them as being in the wrong place. We could lobotomize
the checks enough to allow that, probably, but then it's not clear that
we'd have any meaningful check for misplaced Params at all. What seems
better, at least in the near term, is to tweak standard_planner() a bit
so that initPlans are never placed anywhere but the topmost plan node
for a query level, restoring the behavior that occurred pre-9.6. Possibly
we can do better if this code is ever merged into setrefs.c: then it would
be possible to check a Param's placement only when we'd failed to replace
it with a Var referencing a child plan node's targetlist.
BTW, I'm now suspicious that finalize_plan is doing the wrong thing by
returning the node's allParam rather than extParam to be incorporated
in the parent node's set of used parameters. However, it makes no
difference given that initPlans only appear at top level, so I'll leave
that alone for now.
Another thing that emerged from this is that standard_planner() needs
to check for initPlans before deciding that it's safe to stick a Gather
node on top in force_parallel_mode mode. We previously guarded against
that by deciding the plan wasn't wholePlanParallelSafe if any subplans
had been found, but after commit 5ce5e4a12 it's necessary to have this
substitute test, because path parallel_safe markings don't account for
initPlans. (Normally, we'd have decided the paths weren't safe anyway
due to appearances of SubPlan nodes, Params, or CTE scans somewhere in
the tree --- but it's possible for those all to be optimized away while
initPlans still remain.)
Tom Lane [Fri, 1 Jul 2016 19:08:55 +0000 (15:08 -0400)]
Provide and use a makefile target to build all generated headers.
As of 9.6, pg_regress doesn't build unless storage/lwlocknames.h has been
created; but there was nothing forcing that to happen if you just went into
src/test/regress/ and built there. We previously had a similar complaint
about plpython.
To fix in a way that won't break next time we invent a generated header,
make src/backend/Makefile expose a phony target for updating all the
include files it builds, and invoke that before building pg_regress or
plpython. In principle, maybe we ought to invoke that everywhere; but
it would add a lot of usually-useless make cycles, so let's just do it
in the places where people have complained.
I made a couple of cosmetic adjustments in src/backend/Makefile as well,
to deal with the generated headers in consistent orders.
There are two problems in the original coding: one is that if one
walreceiver process exits, the ready_to_display flag remains set in
shared memory, exposing the conninfo of the next walreceiver before it
has been obfuscated. Fix by having WalRcvDie reset the flag.
Second, the sleep-and-retry behavior that waited until walreceiver had
set ready_to_display wasn't liked; the preference is to have it return
no data instead, so let's do that.
Bugs in 9ed551e0a reported by Fujii Masao and Michaël Paquier.
Tom Lane [Fri, 1 Jul 2016 17:12:34 +0000 (13:12 -0400)]
Rethink the GetForeignUpperPaths API (again).
In the previous design, the GetForeignUpperPaths FDW callback hook was
called before we got around to labeling upper relations with the proper
consider_parallel flag; this meant that any upper paths created by an FDW
would be marked not-parallel-safe. While that's probably just as well
right now, we aren't going to want it to be true forever. Hence, abandon
the idea that FDWs should be allowed to inject upper paths before the core
code has gotten around to creating the relevant upper relation. (Well,
actually they still can, but it's on their own heads how well it works.)
Instead, adopt the same API already designed for create_upper_paths_hook:
we call GetForeignUpperPaths after each upperrel has been created and
populated with the paths the core planner knows how to make.
Robert Haas [Fri, 1 Jul 2016 15:43:19 +0000 (11:43 -0400)]
Set consider_parallel correctly for upper planner rels.
Commit 3fc6e2d7f5b652b417fa6937c34de2438d60fa9f introduced new "upper"
RelOptInfo structures but didn't set consider_parallel for them
correctly, a point I completely missed when reviewing it. Later,
commit e06a38965b3bcdaa881e7e06892d4d8ab6c2c980 made the situation
worse by doing it incorrectly for the grouping relation. Try to
straighten all of that out. Along the way, get rid of the annoying
wholePlanParallelSafe flag, which was only necessary because of
the fact that upper planning stages didn't use paths at the time
that code was written.
The most important immediate impact of these changes is that
force_parallel_mode will provide useful test coverage in quite a few
more scenarios than it did previously, but it's also necessary
preparation for fixing some problems related to subqueries.
Tom Lane [Fri, 1 Jul 2016 15:40:22 +0000 (11:40 -0400)]
Be more paranoid in ruleutils.c's get_variable().
We were merely Assert'ing that the Var matched the RTE it's supposedly
from. But if the user passes incorrect information to pg_get_expr(),
the RTE might in fact not match; this led either to Assert failures
or core dumps, as reported by Chris Hanks in bug #14220. To fix, just
convert the Asserts to test-and-elog. Adjust an existing test-and-elog
elsewhere in the same function to be consistent in wording.
(If we really felt these were user-facing errors, we might promote them to
ereport's; but I can't convince myself that they're worth translating.)
Back-patch to 9.3; the problematic code doesn't exist before that, and
a quick check says that 9.2 doesn't crash on such cases.
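A hypothetical example of the kind of misuse covered (not the exact case from bug
#14220): passing pg_get_expr() a stored expression together with a relation OID it
does not belong to may now draw an error rather than an Assert failure or crash.

    -- Deliberately pair a column default expression with the wrong relation OID.
    SELECT pg_get_expr(d.adbin, 'pg_class'::regclass)
    FROM pg_attrdef d
    LIMIT 1;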
Robert Haas [Fri, 1 Jul 2016 15:29:25 +0000 (11:29 -0400)]
postgres_fdw: Fix cache lookup failure while creating error context.
This is fallout from join pushdown; get_relid_attribute_name can't
handle an attribute number of 0, indicating a whole-row reference,
and shouldn't be called in that case.
Robert Haas [Fri, 1 Jul 2016 14:13:06 +0000 (10:13 -0400)]
postgres_fdw: Remove schema-qualification from cast to text.
As pointed out by Ashutosh Bapat, the header comments for this file
say that schema-qualification is needed for all and only those types
outside pg_catalog. pg_catalog.text is not outside pg_catalog.
Robert Haas [Fri, 1 Jul 2016 12:51:58 +0000 (08:51 -0400)]
Fix crash bug in RestoreSnapshot.
If serialized_snapshot->subxcnt > 0 and serialized_snapshot->xcnt == 0,
the old coding would do the wrong thing and crash. This can happen
on standby servers.
Report by Andreas Seltenreich. Patch by Thomas Munro, reviewed by
Amit Kapila and tested by Andreas Seltenreich.
Robert Haas [Thu, 30 Jun 2016 22:35:32 +0000 (18:35 -0400)]
Fix several mistakes around parallel workers and client_encoding.
Previously, workers sent data to the leader using the client encoding.
That mostly worked, but the leader then converted the data back to the
server encoding. Since not all encoding conversions are reversible,
that could provoke failures. Fix by using the database encoding for
all communication between worker and leader.
Also, while temporary changes to GUC settings, as from the SET clause
of a function, are in general OK for parallel query, changing
client_encoding this way inside of a parallel worker is not OK.
Previously, that would have confused the leader; with these changes,
it would not confuse the leader, but it wouldn't do anything either.
So refuse such changes in parallel workers.
Also, the previous code naively assumed that when it received a
NotifyResponse from the worker, it could pass that directly back to the
user. But now that worker-to-leader communication always uses the
database encoding, that's clearly no longer correct --- though,
actually, the old way was always broken for V2 clients. So
disassemble and reconstitute the message instead.
Issues reported by Peter Eisentraut. Patch by me, reviewed by
Peter Eisentraut.
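A hypothetical sketch of the second point (function name invented): a function
whose SET clause changes client_encoding is fine in general, but that change is now
refused if it happens inside a parallel worker.

    CREATE FUNCTION label_row(i int) RETURNS text
        LANGUAGE sql
        SET client_encoding = 'LATIN1'
        AS $$ SELECT 'row ' || i $$;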
Tom Lane [Thu, 30 Jun 2016 16:37:02 +0000 (12:37 -0400)]
Fix typo in ReorderBufferIterTXNInit().
This looks like it would cause changes from subtransactions to be missed
by the iterator being constructed, if those changes had been spilled to
disk previously. This implies that large subtransactions might be lost
(in whole or in part) by logical replication. Found and fixed by
Petru-Florin Mihancea, per bug #14208.
Tom Lane [Wed, 29 Jun 2016 23:07:19 +0000 (19:07 -0400)]
Dodge compiler bug in Visual Studio 2013.
VS2013 apparently has a problem with taking the address of a formal
parameter in some cases. We do that elsewhere without trouble, but
in this case the address is being passed to a subroutine that will
probably get inlined, so maybe the combination of those things is
what tickles the bug. Anyway, introducing an extra copy of the
parameter value is enough to work around it. Per trouble report
from Umair Shahid.
Tom Lane [Wed, 29 Jun 2016 22:51:20 +0000 (18:51 -0400)]
Fix some infelicities in EXPLAIN output for parallel query plans.
In non-text output formats, parallelized aggregates were reporting
"Partial" or "Finalize" as a field named "Operation", which might be all
right in the absence of any context --- but other plan node types use that
field to report SQL-visible semantics, such as Select/Insert/Update/Delete.
So that naming choice didn't seem good to me. I changed it to "Partial
Mode".
Also, the field did not appear at all for a non-parallelized Agg plan node,
which is contrary to expectation in non-text formats. We're notionally
producing objects that conform to a schema, so the set of fields for a
given node type and EXPLAIN mode should be well-defined. I set it up to
fill in "Simple" in such cases.
Other fields that were added for parallel query, namely "Parallel Aware"
and Gather's "Single Copy", had not gotten the word on that point either.
Make them appear always in non-text output.
Also, the latter two fields were nominally producing boolean output, but
were getting it wrong, because bool values shouldn't be quoted in JSON or
YAML. Somehow we'd not needed an ExplainPropertyBool formatting subroutine
before 9.6; but now we do, so invent it.
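For example (table name is a placeholder), in JSON output the Aggregate node now
always carries a "Partial Mode" field ("Simple", "Partial", or "Finalize"), and
"Parallel Aware" and Gather's "Single Copy" appear unconditionally as unquoted
booleans:

    EXPLAIN (FORMAT JSON, COSTS OFF)
    SELECT count(*) FROM some_table;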
Alvaro Herrera [Wed, 29 Jun 2016 20:57:17 +0000 (16:57 -0400)]
Add conninfo to pg_stat_wal_receiver
Commit b1a9bad9e744 introduced a stats view to provide insight into the
running WAL receiver, but neglected to include the connection string in
it, as reported by Michaël Paquier. This commit fixes that omission.
(No security-sensitive information is disclosed.)
While at it, close the mild security hole that we were exposing the
password in the connection string in shared memory. This isn't
user-accessible, but it still looks like a good idea to avoid having the
cleartext password in memory.
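For example, on a standby the connection string can now be examined directly (the
password, if any, is not exposed):

    SELECT pid, status, conninfo FROM pg_stat_wal_receiver;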
Tom Lane [Wed, 29 Jun 2016 20:02:08 +0000 (16:02 -0400)]
Fix match_foreign_keys_to_quals for FKs linking to unused rtable entries.
Since get_relation_foreign_keys doesn't try to determine whether RTEs
are actually part of the query semantics, it might make FK info records
linking to RTEs that won't have a RelOptInfo at all. Cope with that.
Per bug #14219 from Andrew Gierth.
Tom Lane [Wed, 29 Jun 2016 19:00:25 +0000 (15:00 -0400)]
Adjust text search documentation for recent commits.
Fix some now-obsolete statements that were overlooked in commits 6734a1cac,
3dbbd0f02, and 028350f61. Document the behavior of <0>.
Also do a little bit of rearranging and copy-editing for clarity.
Tom Lane [Tue, 28 Jun 2016 14:43:11 +0000 (10:43 -0400)]
Don't apply sortgroupref labels to a tlist that might not match.
If we need to use a gating Result node for pseudoconstant quals,
create_scan_plan() intentionally suppresses use_physical_tlist's checks
on whether there are matches for sortgroupref labels, on the grounds that
we don't need matches because we can label the Result's projection output
properly. However, it then called apply_pathtarget_labeling_to_tlist
anyway. This oversight was harmless when written, but in commit aeb9ae645
I made that function throw an error if there was no match. Thus, the
combination of a table scan, pseudoconstant quals, and a non-simple-Var
sortgroupref column threw the dreaded "ORDER/GROUP BY expression not found
in targetlist" error. To fix, just skip applying the labeling in this
case. Per report from Rushabh Lathia.
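A hypothetical sketch of the shape that used to throw that error (names invented):
a plain table scan, a pseudoconstant qual forcing a gating Result node, and an
ORDER BY expression that is not a simple Var.

    SELECT id, val + 1 AS val1
    FROM t
    WHERE current_setting('myapp.flag') = 'on'   -- pseudoconstant qual
    ORDER BY val1;                               -- non-simple-Var sortgroupref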
Tom Lane [Mon, 27 Jun 2016 19:57:21 +0000 (15:57 -0400)]
Fix CREATE MATVIEW/CREATE TABLE AS ... WITH NO DATA to not plan the query.
Previously, these commands always planned the given query and went through
executor startup before deciding not to actually run the query if WITH NO
DATA is specified. This behavior is problematic for pg_dump because it
may cause errors to be raised that we would rather not see before a
REFRESH MATERIALIZED VIEW command is issued. See for example bug #13907
from Marian Krucina. This change is not sufficient to fix that particular
bug, because we also need to tweak pg_dump to issue the REFRESH later,
but it's a necessary step on the way.
A user-visible side effect of doing things this way is that the returned
command tag for WITH NO DATA cases will now be "CREATE MATERIALIZED VIEW"
or "CREATE TABLE AS", not "SELECT 0". We could preserve the old behavior
but it would take more code, and arguably that was just an implementation
artifact not intended behavior anyhow.
In 9.5 and HEAD, also get rid of the static variable CreateAsReladdr, which
was trouble waiting to happen; there is not any prohibition on nested
CREATE commands.
Back-patch to 9.3 where CREATE MATERIALIZED VIEW was introduced.
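For example (names invented), the WITH NO DATA form no longer plans or begins
executing the query, and its command tag is now CREATE MATERIALIZED VIEW rather
than SELECT 0:

    CREATE MATERIALIZED VIEW mv_orders AS
        SELECT * FROM orders
    WITH NO DATA;

    -- Populate it later, e.g. after a pg_dump restore:
    REFRESH MATERIALIZED VIEW mv_orders;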
Teodor Sigaev [Mon, 27 Jun 2016 17:55:24 +0000 (20:55 +0300)]
Change precedence of phrase operator.
The <-> operator now has higher precedence than the & (AND) operator. This
change was motivated by the unexpected difference between similar queries:
'a & b <-> c'::tsquery and 'b <-> c & a'::tsquery. Previously the first meant
(a & b) <-> c while the second meant (b <-> c) & a; now the phrase operator
is evaluated first in both.
Per suggestion from Tom Lane 32260.1465402409@sss.pgh.pa.us
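With the new precedence, both spellings parse the phrase operator first:

    SELECT 'a & b <-> c'::tsquery;   -- now means   a & (b <-> c)
    SELECT 'b <-> c & a'::tsquery;   -- still means (b <-> c) & a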
Teodor Sigaev [Mon, 27 Jun 2016 17:47:32 +0000 (20:47 +0300)]
Do not fall back to AND for FTS phrase operator.
If there is no positional information for the lexemes, the phrase operator no
longer falls back to the AND operator. This change requires modifying the
TS_execute() interface, because in some places (in indexes, for example)
positional information is inaccessible and in those cases we need to force
the fallback to AND.
Tom Lane [Sun, 26 Jun 2016 19:55:01 +0000 (15:55 -0400)]
Avoid making a separate pass over the query to check for partializability.
It's rather silly to make a separate pass over the tlist + HAVING qual,
and a separate set of visits to the syscache, when get_agg_clause_costs
already has all the required information in hand. This nets out as less
code as well as fewer cycles.
Tom Lane [Sun, 26 Jun 2016 18:33:38 +0000 (14:33 -0400)]
Rethink node-level representation of partial-aggregation modes.
The original coding had three separate booleans representing partial
aggregation behavior, which was confusing, unreadable, and error-prone,
not least because the booleans weren't always listed in the same order.
It was also inadequate for the allegedly-desirable future extension to
support intermediate partial aggregation, because we'd need separate
markers for serialization and deserialization in such a case.
Merge these bools into an enum "AggSplit" to provide symbolic names for
the supported operating modes (and document what those are). By assigning
the values of the enum constants carefully, we can treat AggSplit values
as options bitmasks so that tests of what to do aren't noticeably more
expensive than before.
While at it, get rid of Aggref.aggoutputtype. That's not needed since
commit 59a3795c2 got rid of setrefs.c's special-purpose Aggref comparison
code, and it likewise seemed more confusing than helpful.
Assorted comment cleanup as well (there's still more that I want to do
in that line).
catversion bump for change in Aggref node contents. Should be the last
one for partial-aggregation changes.
Tom Lane [Sun, 26 Jun 2016 16:08:12 +0000 (12:08 -0400)]
Simplify planner's final setup of Aggrefs for partial aggregation.
Commit e06a38965's original coding for constructing the execution-time
expression tree for a combining aggregate was rather messy, involving
duplicating quite a lot of code in setrefs.c so that it could inject
a nonstandard matching rule for Aggrefs. Get rid of that in favor of
explicitly constructing a combining Aggref with a partial Aggref as input,
then allowing setrefs.c's normal matching logic to match the partial Aggref
to the output of the lower plan node and hence replace it with a Var.
In passing, rename and redocument make_partialgroup_input_target to have
some connection to what it actually does.
Alvaro Herrera [Fri, 24 Jun 2016 22:29:28 +0000 (18:29 -0400)]
Fix handling of multixacts predating pg_upgrade
After pg_upgrade, it is possible that some tuples' Xmax have multixacts
corresponding to the old installation; such multixacts cannot have
running members anymore. In many code sites we already know not to read
them and clobber them silently, but at least when VACUUM tries to freeze
a multixact or determine whether one needs freezing, there's an attempt
to resolve it to its member transactions by calling GetMultiXactIdMembers,
and if the multixact value is "in the future" with regards to the
current valid multixact range, an error like this is raised:
ERROR: MultiXactId 123 has not been created yet -- apparent wraparound
and vacuuming fails. Per discussion with Andrew Gierth, it is completely
bogus to try to resolve multixacts coming from before a pg_upgrade,
regardless of where they stand with regards to the current valid
multixact range.
It's possible to get from under this problem by doing SELECT FOR UPDATE
of the problem tuples, but if tables are large, this is slow and
tedious, so a more thorough solution is desirable.
To fix, we realize that multixacts in xmax created in 9.2 and previous
have a specific bit pattern that is never used in 9.3 and later (we
already knew this, per comments and infomask tests sprinkled in various
places, but we weren't leveraging this knowledge appropriately).
Whenever the infomask of the tuple matches that bit pattern, we just
ignore the multixact completely as if Xmax wasn't set; or, in the case
of tuple freezing, we act as if an unwanted value is set and clobber it
without decoding. This guarantees that no errors will be raised, and
that the values will be progressively removed until all tables are
clean. Most callers of GetMultiXactIdMembers are patched to recognize
directly that the value is a removable "empty" multixact and avoid
calling GetMultiXactIdMembers altogether.
To avoid changing the signature of GetMultiXactIdMembers() in back
branches, we keep the "allow_old" boolean flag but rename it to
"from_pgupgrade"; if the flag is true, we always return an empty set
instead of looking up the multixact. (I suppose we could remove the
argument in the master branch, but I chose not to do so in this commit).
This was broken all along, but the user-facing error message first appeared
because of commit 8e9a16ab8f7f and was partially fixed in a25c2b7c4db3.
This fix, backpatched all the way back to 9.3, goes approximately in the
same direction as a25c2b7c4db3 but should cover all cases.
Bug analysis by Andrew Gierth and Álvaro Herrera.
A number of public reports match this bug:
https://www.postgresql.org/message-id/20140330040029.GY4582@tamriel.snowman.net
https://www.postgresql.org/message-id/538F3D70.6080902@publicrelay.com
https://www.postgresql.org/message-id/556439CF.7070109@pscs.co.uk
https://www.postgresql.org/message-id/SG2PR06MB0760098A111C88E31BD4D96FB3540@SG2PR06MB0760.apcprd06.prod.outlook.com
https://www.postgresql.org/message-id/20160615203829.5798.4594@wrigleys.postgresql.org
Tom Lane [Fri, 24 Jun 2016 20:57:36 +0000 (16:57 -0400)]
Fix building of large (bigger than shared_buffers) hash indexes.
When the index is predicted to need more than NBuffers buckets,
CREATE INDEX attempts to sort the index entries by hash key before
insertion, so as to reduce thrashing. This code path got broken by
commit 9f03ca915196dfc8, which overlooked that _hash_form_tuple() is not
just an alias for index_form_tuple(). The index got built anyway, but
with garbage data, so that searches for pre-existing tuples always
failed. Fix by refactoring to separate construction of the indexable
data from calling index_form_tuple().
Per bug #14210 from Daniel Newman. Back-patch to 9.5 where the
bug was introduced.
Robert Haas [Fri, 24 Jun 2016 19:06:19 +0000 (15:06 -0400)]
postgres_fdw: Fix incorrect NULL handling in join pushdown.
something.* IS NOT NULL means that every attribute of the row is not
NULL, not that the row itself is non-NULL (e.g. because it's coming
from below an outer join). Use (somevar.*)::pg_catalog.text IS NOT
NULL instead.
Ashutosh Bapat, per a report by Rushabh Lathia. Reviewed by
Amit Langote and Etsuro Fujita. Schema-qualification added by me.
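A hypothetical local illustration of the distinction (table names invented): for a
row produced by the nullable side of an outer join, the whole-row IS NOT NULL test
checks every column, whereas the text cast tests whether the row itself is NULL.

    SELECT r.*,
           ((r.*) IS NOT NULL)                    AS every_column_not_null,
           ((r.*)::pg_catalog.text IS NOT NULL)   AS row_itself_not_null
    FROM l LEFT JOIN r ON l.id = r.id;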
Tom Lane [Thu, 23 Jun 2016 14:55:59 +0000 (10:55 -0400)]
Fix small memory leak in partial-aggregate deserialization functions.
A deserialize function's result is short-lived data during partial
aggregation, since we're just going to pass it to the combine function
and then it's of no use anymore. However, the built-in deserialize
functions allocated their results in the aggregate state context,
resulting in a query-lifespan memory leak. It's probably not possible for
this to amount to anything much at present, since the number of leaked
results would only be the number of worker processes. But it might become
a problem in future. To fix, don't use the same convenience subroutine for
setting up results that the aggregate transition functions use.
Tom Lane [Wed, 22 Jun 2016 23:14:16 +0000 (19:14 -0400)]
Improve user-facing documentation for partial/parallel aggregation.
Add a section to xaggr.sgml, as we have done in the past for other
extensions to the aggregation functionality. Assorted wordsmithing
and other minor improvements.
Tom Lane [Wed, 22 Jun 2016 20:52:41 +0000 (16:52 -0400)]
Fix type-safety problem with parallel aggregate serial/deserialization.
The original specification for this called for the deserialization function
to have signature "deserialize(serialtype) returns transtype", which is a
security violation if transtype is INTERNAL (which it always would be in
practice) and serialtype is not (which ditto). The patch blithely overrode
the opr_sanity check for that, which was sloppy-enough work in itself,
but the indisputable reason this cannot be allowed to stand is that CREATE
FUNCTION will reject such a signature and thus it'd be impossible for
extensions to create parallelizable aggregates.
The minimum fix to make the signature type-safe is to add a second, dummy
argument of type INTERNAL. But to lock it down a bit more and make misuse
of INTERNAL-accepting functions less likely, let's get rid of the ability
to specify a "serialtype" for an aggregate and just say that the only
useful serialtype is BYTEA --- which, in practice, is the only interesting
value anyway, due to the usefulness of the send/recv infrastructure for
this purpose. That means we only have to allow "serialize(internal)
returns bytea" and "deserialize(bytea, internal) returns internal" as
the signatures for these support functions.
In passing fix bogus signature of int4_avg_combine, which I found thanks
to adding an opr_sanity check on combinefunc signatures.
catversion bump due to removing pg_aggregate.aggserialtype and adjusting
signatures of assorted built-in functions.
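A hypothetical CREATE AGGREGATE sketch showing the signatures now required (all
function names are placeholders; the transition, combine, serialize, and
deserialize functions would be C-language functions supplied by an extension):

    -- serialize(internal) returns bytea
    -- deserialize(bytea, internal) returns internal
    CREATE AGGREGATE my_avg(float8) (
        sfunc        = my_avg_transfn,
        stype        = internal,
        combinefunc  = my_avg_combinefn,
        serialfunc   = my_avg_serialfn,
        deserialfunc = my_avg_deserialfn,
        finalfunc    = my_avg_finalfn,
        parallel     = safe
    );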
Tom Lane [Wed, 22 Jun 2016 15:55:18 +0000 (11:55 -0400)]
Make "postgres -C guc" print "" not "(null)" for null-valued GUCs.
Commit 0b0baf262 et al made this case print "(null)" on the grounds that
that's what happened on platforms that didn't crash. But neither behavior
was actually intentional. What we should print is just an empty string,
for compatibility with the behavior of SHOW and other ways of examining
string GUCs. Those code paths don't distinguish NULL from empty strings,
so we should not here either. Per gripe from Alain Radix.
Like the previous patch, back-patch to 9.2 where -C option was introduced.
Tom Lane [Wed, 22 Jun 2016 00:07:58 +0000 (20:07 -0400)]
Document that dependency tracking doesn't consider function bodies.
If there's anyplace in our SGML docs that explains this behavior, I can't
find it right at the moment. Add an explanation in "Dependency Tracking"
which seems like the authoritative place for such a discussion. Per
gripe from Michelle Schwan.
While at it, update this section's example of a dependency-related
error message: they last looked like that in 8.3. And remove the
explanation of dependency updates from pre-7.3 installations, which
is probably no longer worth anybody's brain cells to read.
The bogus error message example seems like an actual documentation bug,
so back-patch to all supported branches.
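A minimal illustration of the behavior now documented (names invented): the
function body references a table, but no dependency is recorded, so the table can
be dropped and the function fails only when next executed.

    CREATE TABLE my_table (id int);

    CREATE FUNCTION count_my_table() RETURNS bigint
        LANGUAGE sql
        AS $$ SELECT count(*) FROM my_table $$;

    DROP TABLE my_table;        -- succeeds despite the reference in the body
    SELECT count_my_table();    -- fails at run time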
Tom Lane [Tue, 21 Jun 2016 22:38:20 +0000 (18:38 -0400)]
Refactor planning of projection steps that don't need a Result plan node.
The original upper-planner-pathification design (commit 3fc6e2d7f5b652b4)
assumed that we could always determine during Path formation whether or not
we would need a Result plan node to perform projection of a targetlist.
That turns out not to work very well, though, because createplan.c still
has some responsibilities for choosing the specific target list associated
with sorting/grouping nodes (in particular it might choose to add resjunk
columns for sorting). We might not ever refactor that --- doing so would
push more work into Path formation, which isn't attractive --- and we
certainly won't do so for 9.6. So, while create_projection_path and
apply_projection_to_path can tell for sure what will happen if the subpath
is projection-capable, they can't tell for sure when it isn't. This is at
least a latent bug in apply_projection_to_path, which might think it can
apply a target to a non-projecting node when the node will end up computing
something different.
Also, I'd tied the creation of a ProjectionPath node to whether or not a
Result is needed, but it turns out that we sometimes need a ProjectionPath
node anyway to avoid modifying a possibly-shared subpath node. Callers had
to use create_projection_path for such cases, and we added code to them
that knew about the potential omission of a Result node and attempted to
adjust the cost estimates for that. That was uncertainly correct and
definitely ugly/unmaintainable.
To fix, have create_projection_path explicitly check whether a Result
is needed and adjust its cost estimate accordingly, though it creates
a ProjectionPath in either case. apply_projection_to_path is now mostly
just an optimized version that can avoid creating an extra Path node when
the input is known to not be shared with any other live path. (There
is one case that create_projection_path doesn't handle, which is pushing
parallel-safe expressions below a Gather node. We could make it do that
by duplicating the GatherPath, but there seems no need as yet.)
create_projection_plan still has to recheck the tlist-match condition,
which means that if the matching situation does get changed by createplan.c
then we'll have made a slightly incorrect cost estimate. But there seems
no help for that in the near term, and I doubt it occurs often enough,
let alone would change planning decisions often enough, to be worth
stressing about.
I added a "dummypp" field to ProjectionPath to track whether
create_projection_path thinks a Result is needed. This is not really
necessary as-committed because create_projection_plan doesn't look at the
flag; but it seems like a good idea to remember what we thought when
forming the cost estimate, if only for debugging purposes.
In passing, get rid of the target_parallel parameter added to
apply_projection_to_path by commit 54f5c5150. I don't think that's a good
idea because it involves callers in what should be an internal decision,
and opens us up to missing optimization opportunities if callers think they
don't need to provide a valid flag, as most don't. For the moment, this
just costs us an extra has_parallel_hazard call when planning a Gather.
If that starts to look expensive, I think a better solution would be to
teach PathTarget to carry/cache knowledge of parallel-safety of its
contents.
Tom Lane [Mon, 20 Jun 2016 14:49:19 +0000 (10:49 -0400)]
Fix comparison of similarity to threshold in GIST trigram searches.
There was some very strange code here, dating to commit b525bf77, that
purported to work around an ancient gcc bug by forcing a float4 comparison
to be done as int instead. Commit 5871b8848 broke that when it changed
one side of the comparison to "double" but left the comparison code alone.
Commit f576b17cd doubled down on the weirdness by introducing a "volatile"
marker, which had nothing to do with the actual problem.
Guess that the gcc bug, even if it's still present in the wild, was
triggered by comparison of float4's and can be avoided if we store the
result of cnt_sml() into a double before comparing to the double "nlimit".
This will at least work correctly on non-broken compilers, and it's way
more readable.
Per bug #14202 from Greg Navis. Add a regression test based on his
example.
Tom Lane [Sun, 19 Jun 2016 17:11:40 +0000 (13:11 -0400)]
Docs: improve description of psql's %R prompt escape sequence.
Dilian Palauzov pointed out in bug #14201 that the docs failed to mention
the possibility of %R producing '(' due to an unmatched parenthesis.
He proposed just adding that in the same style as the other options were
listed; but it seemed to me that the sentence was already nearly
unintelligible, so I rewrote it a bit more extensively.
Tom Lane [Sat, 18 Jun 2016 23:38:59 +0000 (19:38 -0400)]
Improve error message annotation for GRANT/REVOKE on untrusted PLs.
The annotation for "ERROR: language "foo" is not trusted" used to say
"HINT: Only superusers can use untrusted languages", which was fairly
poorly thought out. For one thing, it's not a hint about what to do,
but a statement of fact, which makes it errdetail. But also, this
fails to clarify things much, because there's a missing step in the
chain of reasoning. I think it's more useful to say "GRANT and REVOKE
are not allowed on untrusted languages, because only superusers can use
untrusted languages".
It's been like this for a long time, but given the lack of previous
complaints, I don't think this is worth back-patching.
Tom Lane [Sat, 18 Jun 2016 19:22:34 +0000 (15:22 -0400)]
Restore foreign-key-aware estimation of join relation sizes.
This patch provides a new implementation of the logic added by commit
137805f89 and later removed by 77ba61080. It differs from the original
primarily in expending much less effort per joinrel in large queries,
which it accomplishes by doing most of the matching work once per query not
once per joinrel. Hopefully, it's also less buggy and better commented.
The never-documented enable_fkey_estimates GUC remains gone.
There remains work to be done to make the selectivity estimates account
for nulls in FK referencing columns; but that was true of the original
patch as well. We may be able to address this point later in beta.
In the meantime, any error should be in the direction of overestimating
rather than underestimating joinrel sizes, which seems like the direction
we want to err in.
Tom Lane [Sat, 18 Jun 2016 04:28:51 +0000 (00:28 -0400)]
Still another try at fixing scanjoin_target insertion into parallel plans.
The previous code neglected the fact that the scanjoin_target might
carry sortgroupref labelings that we need to absorb. Instead, do
create_projection_path() unconditionally, and tweak the path's cost
estimate after the fact. (I'm now convinced that we ought to refactor
the way we account for sometimes not needing a separate projection step,
but right now is not the time for that sort of cleanup.)
Tom Lane [Sat, 18 Jun 2016 03:08:21 +0000 (23:08 -0400)]
Fix parallel-safety markings for contrib/dblink.
As shown by buildfarm reports, dblink_build_sql_insert and
dblink_build_sql_update are *not* parallel safe, because they
may attempt to access temporary tables of the local session.
Although dblink_build_sql_delete doesn't actually touch the
contents of the referenced table, it seems consistent and prudent
to mark it PARALLEL RESTRICTED too.
Tom Lane [Sat, 18 Jun 2016 01:44:37 +0000 (21:44 -0400)]
Fix handling of argument and result datatypes for partial aggregation.
When doing partial aggregation, the args list of the upper (combining)
Aggref node is replaced by a Var representing the output of the partial
aggregation steps, which has either the aggregate's transition data type
or a serialized representation of that. However, nodeAgg.c blindly
continued to use the args list as an indication of the user-level argument
types. This broke resolution of polymorphic transition datatypes at
executor startup (though it accidentally failed to fail for the ANYARRAY
case, which is likely the only one anyone had tested). Moreover, the
constructed FuncExpr passed to the finalfunc contained completely wrong
information, which would have led to bogus answers or crashes for any case
where the finalfunc examined that information (which is only likely to be
with polymorphic aggregates using a non-polymorphic transition type).
As an independent bug, apply_partialaggref_adjustment neglected to resolve
a polymorphic transition datatype before assigning it as the output type
of the lower-level Aggref node. This again accidentally failed to fail
for ANYARRAY but would be unlikely to work in other cases.
To fix the first problem, record the user-level argument types in a
separate OID-list field of Aggref, and look to that rather than the args
list when asking what the argument types were. (It turns out to be
convenient to include any "direct" arguments in this list too, although
those are not currently subject to being overwritten.)
Rather than adding yet another resolve_aggregate_transtype() call to fix
the second problem, add an aggtranstype field to Aggref, and store the
resolved transition type OID there when the planner first computes it.
(By doing this in the planner and not the parser, we can allow the
aggregate's transition type to change from time to time, although no DDL
support yet exists for that.) This saves nothing of consequence for
simple non-polymorphic aggregates, but for polymorphic transition types
we save a catalog lookup during executor startup as well as several
planner lookups that are new in 9.6 due to parallel query planning.
In passing, fix an error that was introduced into count_agg_clauses_walker
some time ago: it was applying exprTypmod() to something that wasn't an
expression node at all, but a TargetEntry. exprTypmod silently returned
-1 so that there was not an obvious failure, but this broke the intended
sensitivity of aggregate space consumption estimates to the typmod of
varchar and similar data types. This part needs to be back-patched.
Catversion bump due to change of stored Aggref nodes.
This requires some core changes as well so that we can properly
WAL-log the truncation. Specifically, it changes the format of the
XLOG_SMGR_TRUNCATE WAL record, so bump XLOG_PAGE_MAGIC.
Patch by me, reviewed but not fully endorsed by Andres Freund.
Robert Haas [Fri, 17 Jun 2016 20:25:02 +0000 (16:25 -0400)]
Try again to fix the way the scanjoin_target is used with partial paths.
Commit 04ae11f62e643e07c411c4935ea6af46cb112aa9 removed some broken
code to apply the scan/join target to partial paths, but its theory
that this processing step is totally unnecessary turns out to be wrong.
Put similar code back again, but this time, check for parallel-safety
and avoid in-place modifications to paths that may already have been
used as part of some other path.
(This is not an entirely elegant solution to this problem; it might
be better, for example, to postpone generate_gather_paths for the
topmost scan/join rel until after the scan/join target has been
applied. But this is not the time for such redesign work.)
Robert Haas [Fri, 17 Jun 2016 19:48:57 +0000 (15:48 -0400)]
Add VACUUM (DISABLE_PAGE_SKIPPING) for emergencies.
If you really want to vacuum every single page in the relation,
regardless of apparent visibility status or anything else, you can use
this option. In previous releases, this behavior could be achieved
using VACUUM (FREEZE), but because we can now recognize all-frozen
pages as not needing to be frozen again, that no longer works. There
should be no need for routine use of this option, but maybe bugs or
disaster recovery will necessitate its use.
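For example (table name is a placeholder), to force every page of a relation to be
visited regardless of the visibility map:

    VACUUM (DISABLE_PAGE_SKIPPING) my_table;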
Robert Haas [Fri, 17 Jun 2016 19:09:57 +0000 (15:09 -0400)]
Update dblink extension for parallel query.
Almost all functions provided by this extension are PARALLEL
RESTRICTED. Mostly, that's because the leader's TCP connections won't
be shared with the workers, but in some cases like dblink_get_pkey
it's because they obtain locks which might be released early if taken
within a parallel worker. dblink_fdw_validator probably can't be used
in a query anyway, but there would be no problem from the point of
view of parallel query if it were, so it's PARALLEL SAFE.
Robert Haas [Fri, 17 Jun 2016 12:35:47 +0000 (08:35 -0400)]
Attempt to fix broken regression test.
In commit 8c1d9d56e9a00680a035b8b333a98ea16b121eb7, I attempted to
add a regression test that would fail if the target list was pushed
into a parallel worker, but due to brain fade on my part, it just
randomly fails whether or not anything bad has happened, because the error check
inside the parallel_restricted() function tests whether there is
*any process in the system* that is not connected to a client, not
whether the process running the query is not connected to a client.
A little experimentation has left me pessimistic about the
prospects of doing better here in a short amount of time, so let's
just fall back to checking that the plan is as we expect and leave
the execution-time check for another day.
Tom Lane [Thu, 16 Jun 2016 21:16:32 +0000 (17:16 -0400)]
Fix validation of overly-long IPv6 addresses.
The inet/cidr types sometimes failed to reject IPv6 inputs with too many
colon-separated fields, instead translating them to '::/0'. This is the
result of a thinko in the original ISC code that seems to be as yet
unreported elsewhere. Per bug #14198 from Stefan Kaltenbrunner.
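For example, an address with nine colon-separated fields is now rejected rather
than being silently accepted:

    SELECT '1:2:3:4:5:6:7:8:9'::inet;   -- now raises an invalid-input error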
Tom Lane [Thu, 16 Jun 2016 19:20:29 +0000 (15:20 -0400)]
Fix fuzzy thinking in ReinitializeParallelDSM().
The fact that no workers were successfully launched in the previous
iteration does not excuse us from setting up properly to try again.
This appears to explain crashes I saw in parallel regression testing
due to error_mqh being NULL when it shouldn't be.
Tom Lane [Thu, 16 Jun 2016 17:47:20 +0000 (13:47 -0400)]
Invent min_parallel_relation_size GUC to replace a hard-wired constant.
The main point of doing this is to allow the cutoff to be set very small,
even zero, to allow parallel-query behavior to be tested on relatively
small tables such as we typically use in the regression tests. But it
might be of use to users too. The number-of-workers scaling behavior in
create_plain_partial_paths() is pretty ad-hoc and subject to change, so
we won't expose anything about that, but the notion of not considering
parallel query at all for tables below size X seems reasonably stable.
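For example, to let the planner consider parallel scans even on very small tables,
as the regression tests do:

    SET min_parallel_relation_size = 0;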
Tom Lane [Thu, 16 Jun 2016 16:17:03 +0000 (12:17 -0400)]
Avoid crash in "postgres -C guc" for a GUC with a null string value.
Emit "(null)" instead, which was the behavior all along on platforms
that don't crash, eg OS X. Per report from Jehan-Guillaume de Rorthais.
Back-patch to 9.2 where -C option was introduced.
Tom Lane [Wed, 15 Jun 2016 23:31:08 +0000 (19:31 -0400)]
Use strftime("%c") to format timestamps in psql's \watch command.
This allows the timestamps to follow local conventions (in particular,
they respond to the LC_TIME environment setting). In C locale you get
the same results as before. It seems like a good idea to do this now not
later because we already changed the format of \watch headers for 9.6.
Also, increase the buffer sizes a tad to ensure there's enough space for
translated strings.
Robert Haas [Wed, 15 Jun 2016 18:59:07 +0000 (14:59 -0400)]
Fix regression test for force_parallel_mode=on.
Commit 14a254fb52423c57059851abafbd1247261f7f03 managed not to
exercise the code it was intended to test, and the comment explaining
why no "parallel worker" line showed up in the context wasn't right.
Robert Haas [Wed, 15 Jun 2016 18:33:58 +0000 (14:33 -0400)]
Add integrity-checking functions to pg_visibility.
The new pg_check_visible() and pg_check_frozen() functions can be used to
verify that the visibility map bits for a relation's data pages match the
actual state of the tuples on those pages.
Amit Kapila and Robert Haas, reviewed (in earlier versions) by Andres
Freund. Additional testing help by Thomas Munro.
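For example (table name is a placeholder), both functions return the TIDs of any
tuples that contradict the corresponding visibility-map bit:

    CREATE EXTENSION pg_visibility;
    SELECT * FROM pg_check_visible('my_table');
    SELECT * FROM pg_check_frozen('my_table');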
Robert Haas [Wed, 15 Jun 2016 18:23:39 +0000 (14:23 -0400)]
Fix lazy_scan_heap so that it won't mark pages all-frozen too soon.
Commit a892234f830e832110f63fc0a2afce2fb21d1584 added a new bit per
page to the visibility map fork indicating whether the page is
all-frozen, but incorrectly assumed that if lazy_scan_heap chose to
freeze a tuple then that tuple would not need to later be frozen
again. This turns out to be false, because xmin and xmax (and
conceivably xvac, if dealing with tuples from very old releases) could
be frozen at separate times.
Thanks to Andres Freund for help in uncovering and tracking down this
issue.
Robert Haas [Wed, 15 Jun 2016 15:40:07 +0000 (11:40 -0400)]
Mark some functions parallel-unsafe.
currtid() and currtid2() call GetLatestSnapshot(), which fails in
parallel mode. pg_export_snapshot() calls ExportSnapshot() which
attempts to assign an XID for the current transaction if it does not
already have one; that, too, will fail in parallel mode.