Andres Freund [Fri, 16 Feb 2018 05:55:31 +0000 (21:55 -0800)]
Do execGrouping.c via expression eval machinery, take two.
This has a performance benefit on its own, although not hugely so. The
primary benefit is that it will allow JIT compilation of tuple deforming
and comparator invocations.
Large parts of this were previously committed (773aec7aa), but the
commit contained an omission around cross-type comparisons and was
thus reverted.
Author: Andres Freund
Discussion: https://postgr.es/m/20171129080934.amqqkke2zjtekd4t@alap3.anarazel.de
elog(FATAL) would end up calling PortalCleanup(), which would call
executor shutdown code, which could fail and crash, especially under
parallel query. This was introduced by commit
8561e4840c81f7e345be2df170839846814fa004, which no longer wanted to mark
an active portal as failed on a normal transaction abort. But we
do need to do that for an elog(FATAL) exit. Introduce a variable
shmem_exit_inprogress similar to the existing proc_exit_inprogress, so
we can tell whether we are in the FATAL exit scenario.
Tom Lane [Fri, 16 Feb 2018 17:14:08 +0000 (12:14 -0500)]
Remove some inappropriate #includes.
Other header files should never #include postgres.h (nor postgres_fe.h,
nor c.h), per project policy. Also, there's no need for any backend .c
file to explicitly include elog.h or palloc.h, because postgres.h pulls
those in already.
Extracted from a larger patch by Kyotaro Horiguchi. The rest of the
removals he suggests require more study, but these are no-brainers.
There's an unresolved issue in the reverted commit: It only creates
one comparator function, but for the nodeSubplan.c case we need
more (c.f. FindTupleHashEntry vs LookupTupleHashEntry calls in
nodeSubplan.c).
This isn't too difficult to fix, but it's not entirely trivial
either. The fact that the issue only causes breakage on 32bit systems
shows that the current test coverage isn't that great. To avoid
turning half the buildfarm red till those two issues are addressed,
revert.
Andres Freund [Fri, 16 Feb 2018 05:55:31 +0000 (21:55 -0800)]
Do execGrouping.c via expression eval machinery.
This has a performance benefit on its own, although not hugely so. The
primary benefit is that it will allow JIT compilation of tuple deforming
and comparator invocations.
Author: Andres Freund
Discussion: https://postgr.es/m/20171129080934.amqqkke2zjtekd4t@alap3.anarazel.de
Tom Lane [Thu, 15 Feb 2018 21:25:19 +0000 (16:25 -0500)]
Fix plpgsql to enforce domain checks when returning a NULL domain value.
If a plpgsql function is declared to return a domain type, and the domain's
constraints forbid a null value, it was nonetheless possible to return
NULL, because we didn't bother to check the constraints for a null result.
I'd noticed this while fooling with domains-over-composite, but had not
gotten around to fixing it immediately.
Add a regression test script exercising this and various other domain
cases, largely borrowed from the plpython_types test.
Although this is clearly a bug fix, I'm not sure whether anyone would
thank us for changing the behavior in stable branches, so I'm inclined
not to back-patch.
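A minimal sketch of the now-enforced behavior (hypothetical names):

    CREATE DOMAIN positive_int AS int NOT NULL CHECK (VALUE > 0);

    CREATE FUNCTION get_count() RETURNS positive_int LANGUAGE plpgsql AS
    $$ BEGIN RETURN NULL; END $$;

    SELECT get_count();  -- now raises an error: the domain forbids NULL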
Tom Lane [Thu, 15 Feb 2018 18:56:38 +0000 (13:56 -0500)]
Doc: fix minor bug in CREATE TABLE example.
One example in create_table.sgml claimed to be showing table constraint
syntax, but it was really column constraint syntax due to the omission
of a comma. This is both wrong and confusing, so fix it in all
supported branches.
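The distinction, sketched along the lines of the corrected example:

    CREATE TABLE distributors (
        did     integer,
        name    varchar(40),
        CONSTRAINT con1 CHECK (did > 100 AND name <> '')
    );

With the comma after varchar(40), CONSTRAINT con1 is a table constraint;
without it, the same text parses as a column constraint on "name".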
Tom Lane [Thu, 15 Feb 2018 18:41:30 +0000 (13:41 -0500)]
Cast to void in StaticAssertExpr, not its callers.
Seems a bit silly that many (in fact all, as of today) uses of
StaticAssertExpr would need to cast it to void to avoid warnings from
pickier compilers. Let's just do the cast right in the macro, instead.
In passing, change StaticAssertExpr to StaticAssertStmt in one
place where that seems more apropos.
Tom Lane [Thu, 15 Feb 2018 00:43:33 +0000 (19:43 -0500)]
Move the extern declaration for ExceptionalCondition into c.h.
This is the logical conclusion of our decision to support Assert()
in both frontend and backend code: it should be possible to use that
after including just c.h. But as things were arranged before, if
you wanted to use Assert() in code that might be compiled for either
environment, you had to include postgres.h for the backend case.
Let's simplify that.
Per buildfarm, some of whose members started throwing warnings after
commit 0c62356cc added an Assert in src/port/snprintf.c.
It's possible that some other src/port files that use the stanza
could now be simplified to just say '#include "c.h"'. I have not
tested for that, though, and it'd be unlikely to apply for more
than a small number of them.
Tom Lane [Wed, 14 Feb 2018 23:42:14 +0000 (18:42 -0500)]
Revert "Stabilize output of new regression test case".
This effectively reverts commit 9edc97b71 (although the test is now
in a different place and has different contents). We don't need that
hack anymore, because since commit 4b93f5799, this test case no longer
throws an error and so there's no displayed CONTEXT that could vary
depending on CLOBBER_CACHE_ALWAYS. The underlying unstable-output
problem isn't really gone, of course, but it no longer manifests here.
Tom Lane [Wed, 14 Feb 2018 23:17:22 +0000 (18:17 -0500)]
Stabilize new plpgsql_record regression tests.
The buildfarm's CLOBBER_CACHE_ALWAYS animals aren't happy with some
of the test cases added in commit 4b93f5799. There are two different
problems:
* In two places, a different CONTEXT stack is shown because the error
is detected in a different place, due to recompiling an expression
from scratch rather than re-using a previously cached plan for it.
I fixed these via the expedient of hiding the CONTEXT stack altogether.
* In one place, a test expected to fail (because a cached plan hadn't
been updated) actually succeeds (because the forced recompile makes
it good). I couldn't think of a simple workaround for this, so I've
just commented out that test step altogether.
I have hopes of improving things enough that both of these kluges can
be reverted eventually. The first one is the same kind of problem
previously discussed at
https://postgr.es/m/31545.1512924904@sss.pgh.pa.us
but there was insufficient agreement about how to fix it, so we
just hacked around the output instability (commit 9edc97b71).
The second issue should be fixed by allowing the plan to be rebuilt
when a type conflict is detected. But for today, let's just make the
buildfarm green again.
Andres Freund [Wed, 14 Feb 2018 22:17:28 +0000 (14:17 -0800)]
Return implementation-defined value if pg_$op_s$bit_overflow overflows.
Some older compilers otherwise sometimes complain about undefined
values, even though the return value should not be used in the
overflow case. We assume that any decent compiler will optimize away
the unnecessary assignment in performance critical cases.
We do not want to constrain the returned value to a specific value,
e.g. 0 or the wrapped-around value, because some fast ways to
implement overflow-detecting math do not easily allow for
that (e.g. MSVC intrinsics). Since the function documentation already
states that the returned value is implementation defined in the
overflow case, no documentation has to be updated.
Per complaint from Tom Lane and his buildfarm member prairiedog.
Author: Andres Freund
Discussion: https://postgr.es/m/18169.1513958454@sss.pgh.pa.us
Tom Lane [Wed, 14 Feb 2018 21:06:49 +0000 (16:06 -0500)]
Silence assorted "variable may be used uninitialized" warnings.
All of these are false positives, but in each case a fair amount of
analysis is needed to see that, and it's not too surprising that not all
compilers are smart enough. (In particular, in the logtape.c case, a
compiler lacking the knowledge provided by the Assert would almost surely
complain, so that this warning will be seen in any non-assert build.)
Some of these are of long standing while others are pretty recent,
but it only seems worth fixing them in HEAD.
Tom Lane [Wed, 14 Feb 2018 20:06:01 +0000 (15:06 -0500)]
Add an assertion that we don't pass NULL to snprintf("%s").
Per commit e748e902d, we appear to have little or no coverage in the
buildfarm of machines that will dump core when asked to printf a
null string pointer. Let's try to improve that situation by adding
an assertion that will make src/port/snprintf.c behave that way.
Since it's just an assertion, it won't break anything in production
builds, but it will help developers find this type of oversight.
Note that while our buildfarm coverage of machines that use that
snprintf implementation is pretty thin on the Unix side (apparently
amounting only to gaur/pademelon), all of the MSVC critters use it.
Tom Lane [Wed, 14 Feb 2018 19:47:18 +0000 (14:47 -0500)]
Fix broken logic for reporting PL/Python function names in errcontext.
plpython_error_callback() reported the name of the function associated
with the topmost PL/Python execution context. This was not merely
wrong if there were nested PL/Python contexts, but it risked a core
dump if the topmost one is an inline code block rather than a named
function. That will have proname = NULL, and so we were passing a NULL
pointer to snprintf("%s"). It seems that none of the PL/Python-testing
machines in the buildfarm will dump core for that, but some platforms do,
as reported by Marina Polyakova.
Investigation finds that there actually is an existing regression test
that used to prove that the behavior was wrong, though apparently no one
had noticed that it was printing the wrong function name. It stopped
showing the problem in 9.6 when we adjusted psql to not print CONTEXT
by default for NOTICE messages. The problem is masked (if your platform
avoids the core dump) in error cases, because PL/Python will throw away
the originally generated error info in favor of a new traceback produced
at the outer level.
Repair by using ErrorContextCallback.arg to pass the correct context to
the error callback. Add a regression test illustrating correct behavior.
Back-patch to all supported branches, since they're all broken this way.
Tom Lane [Wed, 14 Feb 2018 03:15:08 +0000 (22:15 -0500)]
Support CONSTANT/NOT NULL/initial value for plpgsql composite variables.
These features were never implemented previously for composite or record
variables ... not that the documentation admitted it, so there are no doc
updates here.
This also fixes some issues concerning enforcing DOMAIN NOT NULL
constraints against plpgsql variables, although I'm not sure that
that topic is completely dealt with.
I created a new plpgsql test file for these features, and moved the
one relevant existing test case into that file.
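A sketch of the newly supported declarations (hypothetical type and
values):

    CREATE TYPE pair AS (x int, y int);

    DO $$
    DECLARE
        p pair NOT NULL := ROW(1, 2);  -- previously rejected for composites
        c CONSTANT pair := ROW(0, 0);  -- likewise
    BEGIN
        RAISE NOTICE 'p = %, c = %', p, c;
        -- c := ROW(3, 4);  would fail: "c" is declared CONSTANT
    END $$;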
Tom Lane [Wed, 14 Feb 2018 00:20:37 +0000 (19:20 -0500)]
Speed up plpgsql trigger startup by introducing "promises".
Over the years we've accreted quite a few special variables that are
predefined in plpgsql trigger functions. The cost of initializing these
variables to their defined values turns out to be a significant part of
the runtime of simple triggers; but, undoubtedly, most real-world triggers
never examine the values of most of these variables.
To improve matters, invent the notion of a variable that has a "promise"
attached to it, specifying which of the predetermined values should be
assigned to the variable if anything ever reads it. This eliminates all
the unneeded startup overhead, in return for a small penalty on accesses
to these variables.
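For example, in a hypothetical trigger function like

    CREATE FUNCTION audit_trg() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
        IF TG_OP = 'DELETE' THEN
            RETURN OLD;
        END IF;
        RETURN NEW;
    END $$;

only TG_OP's promise is ever evaluated; TG_TABLE_NAME, TG_ARGV, and the
other unreferenced special variables are no longer initialized at all.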
Tom Lane [Wed, 14 Feb 2018 00:10:43 +0000 (19:10 -0500)]
Speed up plpgsql function startup by doing fewer pallocs.
Previously, copy_plpgsql_datum did a separate palloc for each variable
needing instance-local storage. In simple benchmarks this made for a
noticeable fraction of the total runtime. Improve it by precalculating
the space needed for all of a function's variables and doing just one
palloc for all of them.
In passing, remove PLPGSQL_DTYPE_EXPR from the list of plpgsql "datum"
types, since in fact it has nothing in common with the others, and there
is no place that needs to discriminate on the basis of dtype between an
expression and any type of datum. And add comments clarifying which
datum struct fields are generic and which aren't.
Tom Lane [Tue, 13 Feb 2018 23:52:21 +0000 (18:52 -0500)]
Make plpgsql use its DTYPE_REC code paths for composite-type variables.
Formerly, DTYPE_REC was used only for variables declared as "record";
variables of named composite types used DTYPE_ROW, which is faster for
some purposes but much less flexible. In particular, the ROW code paths
are entirely incapable of dealing with DDL-caused changes to the number
or data types of the columns of a row variable, once a particular plpgsql
function has been parsed for the first time in a session. And, since the
stored representation of a ROW isn't a tuple, there wasn't any easy way
to deal with variables of domain-over-composite types, since the domain
constraint checking code would expect the value to be checked to be a
tuple. A lesser, but still real, annoyance is that ROW format cannot
represent a true NULL composite value, only a row of per-field NULL
values, which is not exactly the same thing.
Hence, switch to using DTYPE_REC for all composite-typed variables,
whether "record", named composite type, or domain over named composite
type. DTYPE_ROW remains but is used only for its native purpose, to
represent a fixed-at-compile-time list of variables, for instance the
targets of an INTO clause.
To accomplish this without taking significant performance losses, introduce
infrastructure that allows storing composite-type variables as "expanded
objects", similar to the "expanded array" infrastructure introduced in
commit 1dc5ebc90. A composite variable's value is thereby kept (most of
the time) in the form of separate Datums, so that field accesses and
updates are not much more expensive than they were in the ROW format.
This holds the line, more or less, on performance of variables of named
composite types in field-access-intensive microbenchmarks, and makes
variables declared "record" perform much better than before in similar
tests. In addition, the logic involved with enforcing composite-domain
constraints against updates of individual fields is in the expanded
record infrastructure not plpgsql proper, so that it might be reusable
for other purposes.
In further support of this, introduce a typcache feature for assigning a
unique-within-process identifier to each distinct tuple descriptor of
interest; in particular, DDL alterations on composite types result in a new
identifier for that type. This allows very cheap detection of the need to
refresh tupdesc-dependent data. This improves on the "tupDescSeqNo" idea
I had in commit 687f096ea: that assigned identifying sequence numbers to
successive versions of individual composite types, but the numbers were not
unique across different types, nor was there support for assigning numbers
to registered record types.
In passing, allow plpgsql functions to accept as well as return type
"record". There was no good reason for the old restriction, and it
was out of step with most of the other PLs.
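As a sketch of that last point, something like this (hypothetical names)
is now accepted:

    CREATE FUNCTION fmt(r record) RETURNS text LANGUAGE plpgsql AS
    $$ BEGIN RETURN r::text; END $$;

    SELECT fmt(t) FROM (VALUES (1, 'a'), (2, 'b')) AS t(id, label);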
Peter Eisentraut [Tue, 13 Feb 2018 14:12:45 +0000 (09:12 -0500)]
In LDAP test, restart after pg_hba.conf changes
Instead of issuing a reload after pg_hba.conf changes between test
cases, run a full restart. With a reload, an error in the new
pg_hba.conf is ignored and the tests will continue to run with the old
settings, invalidating the subsequent test cases. With a restart, a
faulty pg_hba.conf will lead to the test being aborted, which is what
we'd rather want.
Robert Haas [Mon, 12 Feb 2018 17:55:12 +0000 (12:55 -0500)]
Fix parallel index builds for dynamic_shared_memory_type=none.
The previous code failed to realize that this setting effectively
disables parallelism, and would crash if it decided to attempt
parallelism anyway. Instead, treat it as a disabling condition.
Kyotaro Horiguchi, who also reported the issue. Reviewed by Michael
Paquier and Peter Geoghegan.
Tom Lane [Sun, 11 Feb 2018 18:24:15 +0000 (13:24 -0500)]
Fix assorted errors in pg_dump's handling of extended statistics objects.
pg_dump supposed that a stats object necessarily shares the same schema
as its underlying table, and that it doesn't have a separate owner.
These things may have been true during early development of the feature,
but they are not true as of v10 release.
Failure to track the object's schema separately turns out to have only
limited consequences, because pg_get_statisticsobjdef() always schema-
qualifies the target object name in the generated CREATE STATISTICS command
(a decision out of step with the rest of ruleutils.c, but I digress).
Therefore the restored object would be in the right schema, so that the
only problem is that the TOC entry would be mislabeled as to schema. That
could lead to wrong decisions for schema-selective restores, for example.
The ownership issue is a bit more serious: not only was the TOC entry
potentially mislabeled as to owner, but pg_dump didn't bother to issue an
ALTER OWNER command at all, so that after restore the stats object would
continue to be owned by the restoring superuser.
A final point is that decisions as to whether to dump a stats object or
not were driven by whether the underlying table was dumped or not. While
that's not wrong on its face, it won't scale nicely to the planned future
extension to cross-table statistics. Moreover, that design decision comes
out of the view of stats objects as being auxiliary to a particular table,
like a rule or trigger, which is exactly where the above problems came
from. Since we're now treating stats objects more like independent objects
in their own right, they ought to behave like standalone objects for this
purpose too. So change to using the generic selectDumpableObject() logic
for them (which presently amounts to "dump if containing schema is to be
dumped").
Along the way to fixing this, restructure so that getExtendedStatistics
collects the identity info (only) for all extended stats objects in one
query, and then for each object actually being dumped, we retrieve the
definition in dumpStatisticsExt. This is necessary to ensure that
schema-qualification in the generated CREATE STATISTICS command happens
with respect to the search path that pg_dump will now be using at restore
time (ie, the schema the stats object is in, not that of the underlying
table). It's probably also significantly faster in the typical scenario
where only a minority of tables have extended stats.
Back-patch to v10 where extended stats were introduced.
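For illustration, a stats object whose schema and owner differ from its
table's (hypothetical names, including role "alice") is now dumped
faithfully:

    CREATE SCHEMA stats;
    CREATE TABLE public.t (a int, b int);
    CREATE STATISTICS stats.t_ab ON a, b FROM public.t;
    ALTER STATISTICS stats.t_ab OWNER TO alice;
    -- pg_dump now labels the TOC entry with schema "stats" and owner
    -- "alice", and emits the corresponding ALTER ... OWNER on restore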
Tom Lane [Sat, 10 Feb 2018 18:37:12 +0000 (13:37 -0500)]
Avoid premature free of pass-by-reference CALL arguments.
Prematurely freeing the EState used to evaluate CALL arguments led, in some
cases, to passing dangling pointers to the procedure. This was masked in
trivial cases because the argument pointers would point to Const nodes in
the original expression tree, and in some other cases because the result
value would end up in the standalone ExprContext rather than in memory
belonging to the EState --- but that wasn't exactly high quality
programming either, because the standalone ExprContext was never
explicitly freed, breaking assorted API contracts.
In addition, using a separate EState for each argument was just silly.
So let's use just one EState, and one ExprContext, and make the latter
belong to the former rather than be standalone, and clean up the EState
(and hence the ExprContext) post-call.
While at it, improve the function's commentary a bit.
Tom Lane [Sat, 10 Feb 2018 18:05:14 +0000 (13:05 -0500)]
Fix oversight in CALL argument handling, and do some minor cleanup.
CALL statements cannot support sub-SELECTs in the arguments of the called
procedure, since they just use ExecEvalExpr to evaluate such arguments.
Teach transformSubLink() to reject the case, as it already does for other
contexts in which subqueries are not supported.
In passing, s/EXPR_KIND_CALL/EXPR_KIND_CALL_ARGUMENT/ to make that enum
symbol line up more closely with the phrasing of the error messages it is
associated with. And fix someone's weak grasp of English grammar in the
preceding EXPR_KIND_PARTITION_EXPRESSION addition. Also update an
incorrect comment in resolve_unique_index_expr (possibly it was correct
when written, but nowadays transformExpr definitely does reject SRFs here).
Per report from Pavel Stehule --- but this resolves only one of the bugs
he mentions.
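A sketch of the newly rejected case (hypothetical procedure):

    CREATE PROCEDURE note(n int) LANGUAGE plpgsql AS
    $$ BEGIN RAISE NOTICE 'n = %', n; END $$;

    CALL note(1 + 2);       -- ordinary expressions are fine
    CALL note((SELECT 3));  -- now fails cleanly: subqueries are not
                            -- supported in CALL arguments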
Alvaro Herrera [Sat, 10 Feb 2018 13:01:37 +0000 (10:01 -0300)]
Mention partitioned indexes in "Data Definition" chapter
We can now create indexes more easily than before, so update this
chapter to use the simpler instructions.
After an idea of Amit Langote. I (Álvaro) opted to do more invasive
surgery and remove the previous suggestion to create per-partition
indexes, which his patch left in place.
Discussion: https://postgr.es/m/eafaaeb1-f0fd-d010-dd45-07db0300f645@lab.ntt.co.jp
Author: Amit Langote, Álvaro Herrera
Robert Haas [Fri, 9 Feb 2018 20:48:18 +0000 (15:48 -0500)]
Clear stmt_timeout_active if we disable_all_timeouts.
Otherwise, we can end up with the flag set when the timeout is
actually disabled, leading to misbehavior. Commit f8e5f156b30efee5d0038b03e38735773abcb7ed introduced this bug.
Reported by Peter Eisentraut. Analysis and fix by Thomas Munro,
tweaked by me.
Robert Haas [Fri, 9 Feb 2018 20:24:35 +0000 (15:24 -0500)]
postgres_fdw: Attempt to stabilize regression tests.
Even after commit 882ea509fe7a4711fe25463427a33262b873dfa1, some
buildfarm members are still failing in the postgres_fdw tests.
Try to fix that by disabling use of remote statistics for some
test cases.
Robert Haas [Thu, 8 Feb 2018 17:31:48 +0000 (12:31 -0500)]
Fix possible infinite loop with Parallel Append.
When the previously-chosen plan was non-partial, all pa_finished
flags for partial plans are now set, and pa_next_plan has not yet
been set to INVALID_SUBPLAN_INDEX, the previous code could go into
an infinite loop.
Report by Rajkumar Raghuwanshi. Patch by Amit Khandekar and me.
Review by Kyotaro Horiguchi.
Instead of using the psql/libpq connection string as the displayed test
name and relying on "notes" and source code comments to explain the
tests, give the tests self-explanatory names, like we do elsewhere.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Robert Haas [Thu, 8 Feb 2018 01:38:08 +0000 (20:38 -0500)]
postgres_fdw: Remove CTID output from some tests.
Commit 1bc0100d270e5bcc980a0629b8726a32a497e788 added these tests,
but they're not stable enough to survive in the buildfarm. Remove
CTIDs from the output in the hopes of fixing that.
Robert Haas [Wed, 7 Feb 2018 20:34:30 +0000 (15:34 -0500)]
postgres_fdw: Push down UPDATE/DELETE joins to remote servers.
Commit 0bf3ae88af330496517722e391e7c975e6bad219 allowed direct
foreign table modification; instead of fetching each row, updating
it locally, and then pushing the modification back to the remote
side, we would instead do all the work on the remote server via a
single remote UPDATE or DELETE command. However, that commit only
enabled this optimization when the join tree consisted only of the
target table.
This change allows the same optimization when an UPDATE statement
has a FROM clause or a DELETE statement has a USING clause. This
works much like ordinary foreign join pushdown, in that the tables
must be on the same remote server, relevant parts of the query
must be pushdown-safe, and so forth.
Etsuro Fujita, reviewed by Ashutosh Bapat, Rushabh Lathia, and me.
Some formatting corrections by me.
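A sketch, assuming ft1 and ft2 are foreign tables (with hypothetical
columns id and x) on the same remote server:

    EXPLAIN (VERBOSE, COSTS OFF)
    UPDATE ft1 SET x = ft2.x FROM ft2 WHERE ft1.id = ft2.id;
    -- when the join is pushdown-safe, the plan shows a direct foreign
    -- update whose Remote SQL is a single UPDATE ... FROM on the server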
Add explicit collation on the trigger name to avoid locale dependencies.
Also restrict the tables selected, to avoid interference from
concurrently running tests.
Magnus Hagander [Wed, 7 Feb 2018 09:53:56 +0000 (10:53 +0100)]
Change default git repo URL to https
Since we now support the server side handler for git over https (so
we're no longer using the "dumb protocol"), make https the primary
choice for cloning the repository, and the git protocol the secondary
choice.
In passing, also change the links to git-scm.com from http to https.
Reviewed by Stefan Kaltenbrunner and David G. Johnston
Tom Lane [Wed, 7 Feb 2018 05:06:50 +0000 (00:06 -0500)]
Support all SQL:2011 options for window frame clauses.
This patch adds the ability to use "RANGE offset PRECEDING/FOLLOWING"
frame boundaries in window functions. We'd punted on that back in the
original patch to add window functions, because it was not clear how to
do it in a reasonably data-type-extensible fashion. That problem is
resolved here by adding the ability for btree operator classes to provide
an "in_range" support function that defines how to add or subtract the
RANGE offset value. Factoring it this way also allows the operator class
to avoid overflow problems near the ends of the datatype's range, if it
wishes to expend effort on that. (In the committed patch, the integer
opclasses handle that issue, but it did not seem worth the trouble to
avoid overflow failures for datetime types.)
The patch includes in_range support for the integer_ops opfamily
(int2/int4/int8) as well as the standard datetime types. Support for
other numeric types has been requested, but that seems like suitable
material for a follow-on patch.
In addition, the patch adds GROUPS mode which counts the offset in
ORDER-BY peer groups rather than rows, and it adds the frame_exclusion
options specified by SQL:2011. As far as I can see, we are now fully
up to spec on window framing options.
Existing behaviors remain unchanged, except that I changed the errcode
for a couple of existing error reports to meet the SQL spec's expectation
that negative "offset" values should be reported as SQLSTATE 22013.
Internally and in relevant parts of the documentation, we now consistently
use the terminology "offset PRECEDING/FOLLOWING" rather than "value
PRECEDING/FOLLOWING", since the term "value" is confusingly vague.
Oliver Ford, reviewed and whacked around some by me
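A sketch of the new frame options over an integer ordering column:

    SELECT x,
           sum(x) OVER (ORDER BY x
                        RANGE BETWEEN 2 PRECEDING AND CURRENT ROW) AS r,
           sum(x) OVER (ORDER BY x
                        GROUPS BETWEEN 1 PRECEDING AND CURRENT ROW
                        EXCLUDE CURRENT ROW) AS g
    FROM (VALUES (1), (2), (4), (4), (7)) AS v(x);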
Robert Haas [Tue, 6 Feb 2018 19:24:57 +0000 (14:24 -0500)]
Avoid valgrind complaint about write() of uninitialized bytes.
LogicalTapeFreeze() may write out its first block when it is dirty but
not full, and then immediately read the first block back in from its
BufFile as a BLCKSZ-width block. This can only occur in rare cases
where very few tuples were written out, which is currently only
possible with parallel external tuplesorts. To avoid valgrind
complaints, tell it to treat the tail of logtape.c's buffer as
defined.
Commit 9da0cc35284bdbe8d442d732963303ff0e0a40bc exposed this problem
but did not create it. LogicalTapeFreeze() has always tended to write
out some amount of garbage bytes, but previously never wrote less than
one block of data in total, so the problem was masked.
Per buildfarm members lousyjack and skink.
Peter Geoghegan, based on a suggestion from Tom Lane and me. Some
comment revisions by me.
Tom Lane [Tue, 6 Feb 2018 18:52:27 +0000 (13:52 -0500)]
Doc: move info for btree opclass implementors into main documentation.
Up to now, useful info for writing a new btree opclass has been buried
in the backend's nbtree/README file. Let's move it into the SGML docs,
in preparation for extending it with info about "in_range" functions
in the upcoming window RANGE patch.
To do this, I chose to create a new chapter for btree indexes in Part VII
(Internals), parallel to the chapters that exist for the newer index AMs.
This is a pretty short chapter as-is. At some point somebody might care
to flesh it out with more detail about btree internals, but that is
beyond the scope of my ambition for today.
Robert Haas [Mon, 5 Feb 2018 22:31:57 +0000 (17:31 -0500)]
Fix possible crash in partition-wise join.
The previous code assumed that we'd always succeed in creating
child-joins for a joinrel for which partition-wise join was considered,
but that's not guaranteed, at least in the case where dummy rels
are involved.
Tom Lane [Mon, 5 Feb 2018 15:58:27 +0000 (10:58 -0500)]
Ensure that all temp files made during pg_upgrade are non-world-readable.
pg_upgrade has always attempted to ensure that the transient dump files
it creates are inaccessible except to the owner. However, refactoring
in commit 76a7650c4 broke that for the file containing "pg_dumpall -g"
output; since then, that file was protected according to the process's
default umask. Since that file may contain role passwords (hopefully
encrypted, but passwords nonetheless), this is a particularly unfortunate
oversight. Prudent users of pg_upgrade on multiuser systems would
probably run it under a umask tight enough that the issue is moot, but
perhaps some users are depending only on pg_upgrade's umask changes to
protect their data.
To fix this in a future-proof way, let's just tighten the umask at
process start. There are no files pg_upgrade needs to write at a
weaker security level; and if there were, transiently relaxing the
umask around where they're created would be a safer approach.
Report and patch by Tom Lane; the idea for the fix is due to Noah Misch.
Back-patch to all supported branches.
Tom Lane [Mon, 5 Feb 2018 15:37:30 +0000 (10:37 -0500)]
Fix RelationBuildPartitionKey's processing of partition key expressions.
Failure to advance the list pointer while reading partition expressions
from a list results in invoking an input function with inappropriate data,
possibly leading to crashes or, with carefully crafted input, disclosure
of arbitrary backend memory.
Bug discovered independently by Álvaro Herrera and David Rowley.
This patch is by Álvaro but owes something to David's proposed fix.
Back-patch to v10 where the issue was introduced.
Tom Lane [Mon, 5 Feb 2018 03:14:07 +0000 (22:14 -0500)]
Skip setting up shared instrumentation for Hash node if not needed.
We don't need to set up the shared space for hash join instrumentation data
if instrumentation hasn't been requested. Let's follow the example of the
similar Sort node code and save a few cycles by skipping that when we can.
This reverts commit d59ff4ab3 and instead allows us to use the safer choice
of passing noError = false to shm_toc_lookup in ExecHashInitializeWorker,
since if we reach that call there should be a TOC entry to be found.
Tom Lane [Fri, 2 Feb 2018 23:32:05 +0000 (18:32 -0500)]
Fix another instance of unsafe coding for shm_toc_lookup failure.
One or another author of commit 5bcf389ec seems to have thought that
computing an offset from a NULL pointer would yield another NULL pointer.
There may possibly be architectures where that works, but common machines
don't work like that. Per a quick code review of places calling
shm_toc_lookup and not using noError = false.
Tom Lane [Fri, 2 Feb 2018 23:26:07 +0000 (18:26 -0500)]
Be more wary about shm_toc_lookup failure.
Commit 445dbd82a basically missed the point of commit d46633506,
which was that we shouldn't allow shm_toc_lookup() failure to lead
to a core dump or assertion crash, because the odds of such a
failure should never be considered negligible. It's correct that
we can't expect the PARALLEL_KEY_ERROR_QUEUE TOC entry to be there
if we have no workers. But if we have no workers, we're not going
to do anything in this function with the lookup result anyway,
so let's just skip it. That lets the code use the easy-to-prove-safe
noError=false case, rather than anything requiring effort to review.
Investigation of 2d2d06b7e27e3177d5bef0061801c75946871db3 revealed that
identity values were not applied in some further cases, including
logical replication subscribers, VALUES RTEs, and ALTER TABLE ... ADD
COLUMN. To fix all that, apply the identity column expression in
build_column_default() instead of repeating the same logic at each call
site.
For ALTER TABLE ... ADD COLUMN ... IDENTITY, the previous coding
completely ignored that existing rows for the new column should have
values filled in from the identity sequence. The coding using
build_column_default() fails for this because the sequence ownership
isn't registered until after ALTER TABLE, and we can't do it before
because we don't have the column in the catalog yet. So we specially
remember in ColumnDef the sequence name that we decided on and build a
custom NextValueExpr using that.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Robert Haas [Fri, 2 Feb 2018 18:25:55 +0000 (13:25 -0500)]
Support parallel btree index builds.
To make this work, tuplesort.c and logtape.c must also support
parallelism, so this patch adds that infrastructure and then applies
it to the particular case of parallel btree index builds. Testing
to date shows that this can often be 2-3x faster than a serial
index build.
The model for deciding how many workers to use is fairly primitive
at present, but it's better than not having the feature. We can
refine it as we get more experience.
Peter Geoghegan with some help from Rushabh Lathia. While Heikki
Linnakangas is not an author of this patch, he wrote other patches
without which this feature would not have been possible, and
therefore the release notes should possibly credit him as an author
of this feature. Reviewed by Claudio Freire, Heikki Linnakangas,
Thomas Munro, Tels, Amit Kapila, me.
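A sketch of exercising the feature, assuming the
max_parallel_maintenance_workers GUC added as part of this work and
hypothetical table/column names:

    SET max_parallel_maintenance_workers = 4;
    -- or force the worker count for one table:
    ALTER TABLE big_table SET (parallel_workers = 4);
    CREATE INDEX big_table_col_idx ON big_table (col);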
Robert Haas [Fri, 2 Feb 2018 14:23:42 +0000 (09:23 -0500)]
Refactor code for partition bound searching
Remove partition_bound_cmp() and partition_bound_bsearch(), whose
void * argument could be, depending on the situation, of any of
three different types: PartitionBoundSpec *, PartitionRangeBound *,
Datum *.
Instead, introduce separate bound-searching functions for each
situation: partition_list_bsearch, partition_range_bsearch,
partition_range_datum_bsearch, and partition_hash_bsearch. This
requires duplicating the code for binary search, but it makes the
code much more type safe, involves fewer branches at runtime, and
at least in my opinion, is much easier to understand.
Along the way, add an option to partition_range_datum_bsearch
allowing the number of keys to be specified, so that we can search
for partitions based on a prefix of the full list of partition
keys. This is important for pending work to improve partition
pruning.
Robert Haas [Fri, 2 Feb 2018 14:00:59 +0000 (09:00 -0500)]
Add new function WaitForParallelWorkersToAttach.
Once this function has been called, we know that all workers have
started and attached to their error queues -- so if any of them
subsequently exit uncleanly, we'll be sure to throw an ERROR promptly.
Otherwise, users of the ParallelContext machinery must be careful not
to wait forever for a worker that has failed to start. Parallel query
manages to work without needing this for reasons explained in new
comments added by this patch, but it's a useful primitive for other
parallel operations, such as the pending patch to make creating a
btree index run in parallel.
Amit Kapila, revised by me. Additional review by Peter Geoghegan.
Bruce Momjian [Thu, 1 Feb 2018 13:30:43 +0000 (08:30 -0500)]
psql: Add quit/help behavior/hint, for other tool portability
Issuing 'quit'/'exit' in an empty psql buffer exits psql. Issuing
'quit'/'exit' alone on a line with no leading whitespace in a non-empty
psql buffer issues a hint on how to exit.
Also add similar 'help' hints for 'help' in a non-empty psql buffer.
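For example (hint text approximate):

    postgres=# SELECT 1 +
    postgres-# quit
    Use \q to quit.
    postgres-#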
Bruce Momjian [Wed, 31 Jan 2018 21:43:32 +0000 (16:43 -0500)]
doc: Improve pg_upgrade rsync examples to use clusterdir
Commit 9521ce4a7a1125385fb4de9689f345db594c516a, from Sep 13, 2017 and
backpatched through 9.5, used rsync examples with datadir. The reporter
has pointed out, and testing has verified, that clusterdir must be used,
so update the docs accordingly.
Reported-by: Don Seiler
Discussion: https://postgr.es/m/CAHJZqBD0u9dCERpYzK6BkRv=663AmH==DFJpVC=M4Xg_rq2=CQ@mail.gmail.com
Bruce Momjian [Wed, 31 Jan 2018 21:25:21 +0000 (16:25 -0500)]
doc: mention datadir locations are actually config locations
Technically, pg_upgrade's --old-datadir and --new-datadir are
configuration directories, not necessarily data directories. This is
reflected in the 'postgres' manual page, so do the same for pg_upgrade.
Robert Haas [Wed, 31 Jan 2018 20:43:11 +0000 (15:43 -0500)]
Fix list partition constraints for partition keys of array type.
The old code always generated a constraint of the form
col = ANY(ARRAY[val1, val2, ...]), but that's invalid when col is an
array type. Instead, generate col = val when there's only one value,
col = val1 OR col = val2 OR ... when there are multiple values and
col is of array type, and the old form when there are multiple values
and col is not of an array type.
As a side benefit, this makes constraint exclusion able to prune
a list partition declared to accept a single Boolean value, which
didn't work before.
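A sketch of the array-type case (hypothetical names):

    CREATE TABLE events (tags text[]) PARTITION BY LIST (tags);
    CREATE TABLE events_a PARTITION OF events
        FOR VALUES IN ('{a}', '{a,b}');
    -- the implied constraint is now: tags = '{a}' OR tags = '{a,b}'
    -- rather than tags = ANY (ARRAY[...]), which would be invalid here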
Move the generic functions into a new file fe-secure-common.c, so the
calls generally go fe-connect.c -> fe-secure.c -> fe-secure-${impl}.c ->
fe-secure-common.c, although there is a bit of back-and-forth between
the last two.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Robert Haas [Tue, 30 Jan 2018 19:27:38 +0000 (14:27 -0500)]
Fix test case for 'outer pathkeys do not match mergeclauses' fix.
Commit 4bbf6edfbd5d03743ff82dda2f00c738fb3208f5 added a test case,
but it turns out that the test case doesn't reliably test for the
bug, and in the context of the regression test suite did not because
ANALYZE had not been run.
Report and patch by Etsuro Fujita. I added a comment along lines
previously suggested by Tom Lane.
Andres Freund [Mon, 29 Jan 2018 20:16:53 +0000 (12:16 -0800)]
Introduce ExecQualAndReset() helper.
It's a common task to evaluate a qual and reset the corresponding
expression context. Currently that requires storing the result of the
qual eval, resetting the context, and then reacting to the result. As
that's awkward, several places only reset the context the next time
through a node. That's not great, so introduce a helper that evaluates and
resets.
It's a bit ugly that it currently uses MemoryContextReset() instead of
ResetExprContext(), but that seems easier than reordering all of
executor.h.
Author: Andres Freund
Discussion: https://postgr.es/m/20180109222544.f7loxrunqh3xjl5f@alap3.anarazel.de
Tom Lane [Mon, 29 Jan 2018 20:13:07 +0000 (15:13 -0500)]
Save a few bytes by removing useless last argument to SearchCatCacheList.
There's never any value in giving a fully specified cache key to
SearchCatCacheList: you might as well call SearchCatCache instead,
since there could be only one match. So the maximum useful number of
key arguments is one less than the supported number of key columns.
We might as well remove the useless extra argument and save some few
bytes per call site, as well as a cycle or so per call.
I believe the reason it was coded like this is that originally, callers
had to write out all the dummy arguments in each call, and so it seemed
less confusing if SearchCatCache and SearchCatCacheList took the same
number of key arguments. But since commit e26c539e9, callers only write
their live arguments explicitly, making that a non-factor; and there's
surely been enough time for third-party modules to adapt to that coding
style. So this is only an ABI break not an API break for callers.
Per discussion with Oliver Ford, this might also make it less confusing
how to use SearchCatCacheList correctly.
Andres Freund [Wed, 24 Jan 2018 07:20:02 +0000 (23:20 -0800)]
Initialize unused ExprEvalStep fields.
ExecPushExprSlots didn't initialize ExprEvalStep's resvalue/resnull
fields, as it didn't use them. That caused wrong valgrind warnings for
an upcoming patch, so zero-initialize.
Also zero-initialize all scratch ExprEvalStep's allocated on the
stack, to avoid issues with similar future omissions of non-critical
data.
Peter Eisentraut [Mon, 29 Jan 2018 19:26:17 +0000 (14:26 -0500)]
doc: Clarify pg_upgrade documentation
Clarify that the restriction against reg* types only applies to table
columns using these types, not to the type appearing in any other way,
for example as a function argument.
Andres Freund [Mon, 29 Jan 2018 19:02:09 +0000 (11:02 -0800)]
Prevent growth of simplehash tables when they're "too empty".
In cases where simplehash tables were filled with either a lot of
conflicting hash values, or values that hash to consecutive
values (i.e. build "chains"), the growth heuristics in
d4c62a6b623d6eef88218158e9fa3cf974c6c7e5 could trigger rather
explosively.
To fix that, address some of the reasons (see previous commit) why the
growth heuristics were needed, and only allow growth when the table
isn't too empty. While that means there are a few cases of bad input
that can be slower, that seems a lot better than very quickly running
out of memory.
Author: Tomas Vondra and Andres Freund, with additional input by
Thomas Munro, Tom Lane, and Todd A. Cook
Reported-By: Todd A. Cook, Tomas Vondra, Thomas Munro
Discussion: https://postgr.es/m/20171127185700.1470.20362@wrigleys.postgresql.org
Backpatch: 10, where simplehash was introduced
Andres Freund [Mon, 29 Jan 2018 19:02:09 +0000 (11:02 -0800)]
Improve bit perturbation in TupleHashTableHash.
The changes in b81b5a96f424531b97cdd1dba97d9d1b9c9d372e did not fully
address the issue, because the bit-mixing of the IV into the final
hash key didn't prevent clustering in the input data from surviving
in the output data.
This didn't cause a lot of problems because of the additional growth
conditions added in d4c62a6b623d6eef88218158e9fa3cf974c6c7e5. But as we
want to rein those in due to explosive growth in some edge cases, this
needs to be fixed.
Author: Andres Freund
Discussion: https://postgr.es/m/20171127185700.1470.20362@wrigleys.postgresql.org
Backpatch: 10, where simplehash was introduced
Tom Lane [Mon, 29 Jan 2018 17:57:09 +0000 (12:57 -0500)]
Avoid misleading psql password prompt when username is multiply specified.
When a password is needed, cases such as
psql -d "postgresql://alice@localhost/testdb" -U bob
would incorrectly prompt for "Password for user bob: ", when actually the
connection will be attempted with username alice. The priority order of
which name to use isn't that important here, but the misleading prompt is.
When we are prompting for a password after initial connection failure,
we can fix this reliably by looking at PQuser(conn) to see how libpq
interpreted the connection arguments. But when we're doing a forced
password prompt because of a -W switch, we can't use that solution.
Fortunately, because the main use of -W is for noninteractive situations,
it's less critical to produce a helpful prompt in such cases. I made
the startup prompt for -W just say "Password: " all the time, rather
than expending extra code on trying to identify which username to use.
In the case of a \c command (after -W has been given), there's already
logic in do_connect that determines whether the "dbname" is a connstring
or URI, so we can avoid lobotomizing the prompt except in cases that
are actually dubious. (We could do similarly in startup.c if anyone
complains, but for now it seems not worthwhile, especially since that
would still be only a partial solution.)
Per bug #15025 from Akos Vandra. Although this is arguably a bug fix,
it doesn't seem worth back-patching. The case where it matters seems
like a very corner-case usage, and someone might complain that we'd
changed the behavior of -W in a minor release.
Tom Lane [Sun, 28 Jan 2018 18:39:07 +0000 (13:39 -0500)]
Add stack-overflow guards in set-operation planning.
create_plan_recurse lacked any stack depth check. This is not per
our normal coding rules, but I'd supposed it was safe because earlier
planner processing is more complex and presumably should eat more
stack. But bug #15033 from Andrew Grossman shows this isn't true,
at least not for queries having the form of a many-thousand-way
INTERSECT stack.
Further testing showed that recurse_set_operations is also capable
of being crashed in this way, since it likewise will recurse to the
bottom of a parsetree before calling any support functions that
might themselves contain any stack checks. However, its stack
consumption is only perhaps a third of create_plan_recurse's.
It's possible that this particular problem with create_plan_recurse can
only manifest in 9.6 and later, since before that we didn't build a Path
tree for set operations. But having seen this example, I now have no
faith in the proposition that create_plan_recurse doesn't need a stack
check, so back-patch to all supported branches.
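One way to reproduce the symptom, using psql's \gexec to run a generated
many-thousand-way INTERSECT; with the new checks it now fails cleanly
with a stack-depth error instead of crashing:

    SELECT string_agg('SELECT 1', ' INTERSECT ')
    FROM generate_series(1, 20000) \gexec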