Fujii Masao [Fri, 16 Dec 2016 17:20:59 +0000 (02:20 +0900)]
Ensure that num_sync is greater than zero in synchronous_standby_names.
Previously num_sync could be set to zero, and this setting caused
an assertion failure, because the multiple-synchronous-standbys
code assumes that num_sync is greater than zero.
Setting num_sync to zero is also nonsensical, because
synchronous_standby_names is fundamentally the configuration for
synchronous replication. Users who do not want transaction commits
to wait for any standbys should empty synchronous_standby_names to
disable synchronous replication, rather than set num_sync to zero.
This patch forbids users from setting num_sync to zero in
synchronous_standby_names. If zero is specified, an error is raised
while the parameter setting is processed.
Back-patch to 9.6, where the multiple-synchronous-standbys feature
was added.
Patch by me. Reviewed by Tom Lane.
Discussion: <CAHGQGwHWB3izc6cXuFLh5kOcAbFXaRhhgwd-X5PeN9TEjxqXwg@mail.gmail.com>
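A minimal sketch of the new validation, as it might appear in the
synchronous_standby_names GUC check hook (details approximate):

    if (syncrep_parse_result->num_sync <= 0)
    {
        GUC_check_errmsg("number of synchronous standbys (%d) must be greater than zero",
                         syncrep_parse_result->num_sync);
        return false;       /* reject the setting */
    }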
Tom Lane [Fri, 16 Dec 2016 16:50:07 +0000 (11:50 -0500)]
Improve documentation around TS_execute().
I got frustrated by the lack of commentary in this area, so here is some
reverse-engineered documentation, along with minor stylistic cleanup.
No code changes more significant than removal of unused variables.
Back-patch to 9.6, not because that's useful in itself, but because
we have some bugs to fix in phrase search and this would cause merge
failures if it's only in HEAD.
Robert Haas [Fri, 16 Dec 2016 16:29:23 +0000 (11:29 -0500)]
Simplify LWLock tranche machinery by removing array_base/array_stride.
array_base and array_stride were added so that we could identify the
offset of an LWLock within a tranche, but this facility is only very
marginally used apart from the main tranche. So, give every lock in
the main tranche its own tranche ID and get rid of array_base,
array_stride, and all that's attached. For the debugging facilities
(Trace_lwlocks and LWLOCK_STATS), print the pointer address of the
LWLock using %p instead of the offset. This is arguably more useful,
and certainly a lot cheaper. Drop the offset-within-tranche from
the information reported to dtrace and from one can't-happen message
inside lwlock.c.
The main user-visible impact of this change is that pg_stat_activity
will now report all waits for LWLocks as "LWLock" rather than
reporting some as "LWLockTranche" and others as "LWLockNamed".
The main motivation for this change is that the need to specify an
array_base and an array_stride is awkward for parallel query. There
is only a very limited supply of tranche IDs so we can't just keep
allocating new ones, and if we try to use the same tranche IDs every
time then we run into trouble when multiple parallel contexts are
used simultaneously. So if we didn't get rid of this mechanism, we'd
have to make it even more complicated. By simplifying it in this
way, we instead reduce the size of the generated code for lwlock.c
by about 5%.
Robert Haas [Fri, 16 Dec 2016 15:03:08 +0000 (10:03 -0500)]
Unbreak Finalize HashAggregate over Partial HashAggregate.
Commit 5dfc198146b49ce7ecc8a1fc9d5e171fb75f6ba5 introduced the use
of a new type of hash table with linear reprobing for hash aggregates.
Such a hash table behaves very poorly if keys are inserted in hash
order, which does in fact happen in the case where a query uses a
Finalize HashAggregate node fed (via Gather) by a Partial
HashAggregate node. In fact, queries with this type of plan tend
to run effectively forever.
Fix that by seeding the hash value differently in each worker
(and in the leader, if it participates).
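A sketch of the idea; hash_uint32() and ParallelWorkerNumber are real,
but the hash_iv field and use_variable_hash_iv flag names here are
illustrative, not necessarily the patch's exact ones:

    /* give each participant a distinct initial hash value ... */
    hashtable->hash_iv = use_variable_hash_iv ?
        DatumGetUInt32(hash_uint32(ParallelWorkerNumber)) : 0;

    /* ... which is then folded into every grouping key's hash */
    uint32      hashkey = hashtable->hash_iv;
    /* per-column hashing proceeds as before */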
Robert Haas [Fri, 16 Dec 2016 14:52:04 +0000 (09:52 -0500)]
Fix more hash index bugs around marking buffers dirty.
In _hash_freeovflpage(), if we're freeing the overflow page that
immediately follows the page to which tuples are being moved (the
confusingly-named "write buffer"), don't forget to mark that
page dirty after updating its hasho_nextblkno.
In _hash_squeezebucket(), it's not necessary to mark the primary
bucket page dirty if there are no overflow pages, because there's
nothing to squeeze in that case.
Amit Kapila, with help from Kuntal Ghosh and Dilip Kumar, after
an initial trouble report by Jeff Janes.
Robert Haas [Fri, 16 Dec 2016 14:29:21 +0000 (09:29 -0500)]
Remove _hash_wrtbuf() in favor of calling MarkBufferDirty().
The whole concept of _hash_wrtbuf() is that we need to know at the
time we're releasing the buffer lock (and pin) whether we dirtied the
buffer, but this is easy to get wrong. This patch actually fixes one
non-obvious bug of that form: hashbucketcleanup forgot to tell
_hash_squeezebucket, which receives the primary bucket page already
locked, whether it had already dirtied the page. Calling
MarkBufferDirty() at the places where we dirty the buffer is more
intuitive and lets us simplify the code in various places as well.
On top of all that, the ultimate goal here is to make hash indexes
WAL-logged, and as the comments to _hash_wrtbuf() note, it should
go away when that happens. Making it go away a little earlier than
that seems like a good preparatory step.
Report by Jeff Janes. Diagnosis by Amit Kapila, Kuntal Ghosh,
and Dilip Kumar. Patch by me, after studying an alternative patch
submitted by Amit Kapila.
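The resulting pattern, sketched (MarkBufferDirty() and _hash_relbuf()
are real functions; the call sites are approximate):

    /* ... modify the page contents ... */
    MarkBufferDirty(buf);           /* dirty it right where we change it */

    /* later, when done with the page: */
    _hash_relbuf(rel, buf);         /* just drops the lock and pin */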
Fix off-by-one in memory allocation for quote_literal_cstr().
The calculation didn't take into account the NULL terminator. That led
to overwriting the palloc'd buffer by one byte, if the input consists
entirely of backslashes. For example "format('%L', E'\\')".
Fixes bug #14468. Backpatch to all supported versions.
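A sketch of the off-by-one and its fix in quote_literal_cstr()
(the exact expression is approximate):

    len = strlen(rawstr);

    /* worst case: every input byte doubled, plus quotes and 'E' prefix */
    result = palloc(len * 2 + 3);       /* before: no room for trailing '\0' */
    result = palloc(len * 2 + 3 + 1);   /* after: NULL terminator accounted for */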
Tom Lane [Thu, 15 Dec 2016 19:32:42 +0000 (14:32 -0500)]
Sync our copy of the timezone library with IANA release tzcode2016j.
This is a trivial update (consisting, in fact, only of the addition of
a comment). The point is just to get back to being synced with an
official release of tzcode, rather than some ad-hoc point in their
commit history, which is where commit 1f87181e1 left it.
Tom Lane [Tue, 13 Dec 2016 21:33:03 +0000 (16:33 -0500)]
Improve handling of array elements as getdiag_targets and cursor_variables.
There's no good reason why plpgsql's GET DIAGNOSTICS statement can't
support an array element as target variable, since the execution code
already uses the generic exec_assign_value() function to assign to it.
Hence, refactor the grammar to allow that, by making getdiag_target
depend on the assign_var production.
Ideally we'd also let a cursor_variable expand to an element of a
refcursor[] array, but that's substantially harder since those statements
also have to handle bound-cursor-variable cases. For now, just make sure
the reported error is sensible, ie "cursor variable must be a simple
variable" not "variable must be of type cursor or refcursor". The latter
was quite confusing from the user's viewpoint, since what he wrote
satisfies the claimed restriction.
Per bug #14463 from Zhou Digoal. Given the lack of previous complaints,
I see no need for a back-patch.
Tom Lane [Tue, 13 Dec 2016 18:20:16 +0000 (13:20 -0500)]
Prevent planagg.c from failing on queries containing CTEs.
The existing tests in preprocess_minmax_aggregates() usually prevent it
from trying to do anything with queries containing CTEs, but there's an
exception: a CTE could be present as a member of an appendrel, if we
flattened a UNION ALL that contains CTE references. If it did try to
generate an optimized path for a query using a CTE, it failed with
"could not find plan for CTE", as reported by Torsten Förtsch.
The proximate cause is an unwise decision in commit 3fc6e2d7f to clear
subroot->cte_plan_ids in build_minmax_path(). That left the subroot's
cte_plan_ids list out of step with its parse->cteList.
Removing the "subroot->cte_plan_ids = NIL;" assignment is enough to let
the case work again, but really it's pretty silly to be expending any
cycles at all in this module when there are CTEs: we always treat their
outputs as unordered so there's no way for the optimization to win.
Hence, also add an early-exit test so we don't waste time like that.
Back-patch to 9.6 where the misbehavior was introduced.
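A sketch of the added early exit at the top of
preprocess_minmax_aggregates() (exact placement approximate):

    /* CTE outputs are always treated as unordered, so the optimization
     * can never win; don't waste cycles, or risk "could not find plan
     * for CTE" failures. */
    if (parse->cteList)
        return;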
Robert Haas [Tue, 13 Dec 2016 17:16:02 +0000 (12:16 -0500)]
Fix bug in hashbulkdelete.
Commit 6d46f4783efe457f74816a75173eb23ed8930020 failed to account for
the possibility that hashbulkdelete() might encounter a bucket that
has been split since it began scanning the bucket array. Repair.
Extracted from a larger patch by Amit Kapila; I rewrote the comment.
Robert Haas [Tue, 13 Dec 2016 13:18:00 +0000 (08:18 -0500)]
doc: Improve documentation related to table partitioning feature.
Commit f0e44751d7175fa3394da2c8f85e3ceb3cdbfe63 implemented table
partitioning, but failed to mention the "no row movement"
restriction in the documentation. Fix that and a few other issues.
Amit Langote, with some additional wordsmithing by me.
Robert Haas [Mon, 12 Dec 2016 20:57:35 +0000 (15:57 -0500)]
Remove should_free arguments to tuplesort routines.
Since commit e94568ecc10f2638e542ae34f2990b821bbf90ac, the answer is
always "false", and we do not need to complicate the API by arranging
to return a constant value.
Tom Lane [Mon, 12 Dec 2016 19:32:09 +0000 (14:32 -0500)]
Fix race condition in test_decoding "slot" test.
This test, just added in commit a924c327e, sometimes fails because
the old backend hasn't finished dropping the temporary replication slot
when the new backend looks. Borrow the previously-invented methodology
for waiting for the old process to disappear from pg_stat_activity.
Tom Lane [Mon, 12 Dec 2016 18:32:10 +0000 (13:32 -0500)]
Make the different Unix-y semaphore implementations ABI-compatible.
Previously, the "sem" field of PGPROC varied in size depending on which
kernel semaphore API we were using. That was okay as long as there was
only one likely choice per platform, but in the wake of commit ecb0d20a9,
that assumption seems rather shaky. It doesn't seem out of the question
anymore that an extension compiled against one API choice might be loaded
into a postmaster built with another choice. Moreover, this prevents any
possibility of selecting the semaphore API at postmaster startup, which
might be something we want to do in future.
Hence, change PGPROC.sem to be PGSemaphore (i.e. a pointer) for all Unix
semaphore APIs, and turn the pointed-to data into an opaque struct whose
contents are only known within the responsible modules.
For the SysV and unnamed-POSIX APIs, the pointed-to data has to be
allocated elsewhere in shared memory, which takes a little bit of
rejiggering of the InitShmemAllocation code sequence. (I invented a
ShmemAllocUnlocked() function to make that a little cleaner than it used
to be. That function is not meant for any uses other than the ones it
has now, but it beats having InitShmemAllocation() know explicitly about
allocation of space for semaphores and spinlocks.) This change means an
extra indirection to access the semaphore data, but since we only touch
that when blocking or awakening a process, there shouldn't be any
meaningful performance penalty. Moreover, at least for the unnamed-POSIX
case on Linux, the sem_t type is quite a bit wider than a pointer, so this
reduces sizeof(PGPROC) which seems like a good thing.
For the named-POSIX API, there's effectively no change: the PGPROC.sem
field was and still is a pointer to something returned by sem_open() in
the postmaster's memory space. Document and check the pre-existing
limitation that this case can't work in EXEC_BACKEND mode.
It did not seem worth unifying the Windows semaphore ABI with the Unix
cases, since there's no likelihood of needing ABI compatibility much less
runtime switching across those cases. However, we can simplify the Windows
code a bit if we define PGSemaphore as being directly a HANDLE, rather than
pointer to HANDLE, so let's do that while we're here. (This also ends up
being no change in what's physically stored in PGPROC.sem. We're just
moving the HANDLE fetch from callees to callers.)
It would take a bunch of additional code shuffling to get to the point of
actually choosing a semaphore API at postmaster start, but the effects
of that would now be localized in the port/XXX_sema.c files, so it seems
like fit material for a separate patch. The need for it is unproven as
yet, anyhow, whereas the ABI risk to extensions seems real enough.
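A sketch of the resulting declarations (struct and field names per the
description above; details approximate):

    /* contents known only within the responsible port/XXX_sema.c module */
    typedef struct PGSemaphoreData *PGSemaphore;

    struct PGPROC
    {
        /* ... */
        PGSemaphore sem;    /* now pointer-sized for every Unix semaphore API */
        /* ... */
    };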
Tom Lane [Mon, 12 Dec 2016 16:23:23 +0000 (11:23 -0500)]
Fix creative, but unportable, spelling of "ptr != NULL".
Or at least I suppose that's what was really meant here. But even
aside from the not-per-project-style use of "0" to mean "NULL",
I doubt it's safe to assume that all valid pointers are > NULL.
Per buildfarm member pademelon.
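For illustration (the producer/consumer functions here are placeholders):

    char       *ptr = fetch_pointer();      /* hypothetical */

    if (ptr > 0)            /* unportable: spells NULL as 0, and assumes all
                             * valid pointers compare greater than NULL */
        use_pointer(ptr);

    if (ptr != NULL)        /* the portable, project-style spelling */
        use_pointer(ptr);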
Split md5_crypt_verify() into three functions:
* get_role_password() to fetch user's password from pg_authid, and check
its expiration.
* md5_crypt_verify() to check an MD5 authentication challenge
* plain_crypt_verify() to check a plaintext password.
get_role_password() will be needed as a separate function by the upcoming
SCRAM authentication patch set. Most of the remaining functionality in
md5_crypt_verify() was different for MD5 and plaintext authentication, so
split that for readability.
While we're at it, simplify the *_crypt_verify functions by using
stack-allocated buffers to hold the temporary MD5 hashes, instead of
pallocing.
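The resulting division of labor, sketched (signatures approximate):

    /* fetch the role's password from pg_authid and check its expiration */
    int get_role_password(const char *role, char **shadow_pass,
                          char **logdetail);

    /* check an MD5 authentication challenge against the stored password */
    int md5_crypt_verify(const char *role, const char *shadow_pass,
                         const char *client_pass,
                         const char *md5_salt, int md5_salt_len,
                         char **logdetail);

    /* check a plaintext password */
    int plain_crypt_verify(const char *role, const char *shadow_pass,
                           const char *client_pass, char **logdetail);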
Tom Lane [Sun, 11 Dec 2016 19:54:25 +0000 (14:54 -0500)]
Use "%option prefix" to set API names in ecpg's lexer.
Clean up some technical debt left behind by commit 72b1e3a21: instead of
quickly hacking the name of base_yylex() with a #define, set it properly
with "%option prefix". This causes the names of pgc.l's other exported
symbols to change as well, so run around and modify the outside references
to them as needed. Similarly, make pgc.l's external references to
base_yylval use that variable's true name instead of a macro.
The reason for doing this now is that the quick-hack solution will fail
with future versions of flex, as reported by Дилян Палаузов.
Hence, back-patch into 9.6 where the previous commit appeared, since
it's likely people will build 9.6 with newer flex versions during
its lifetime.
Tom Lane [Sun, 11 Dec 2016 18:09:57 +0000 (13:09 -0500)]
Prevent crash when ts_rewrite() replaces a non-top-level subtree with null.
When ts_rewrite()'s replacement argument is an empty tsquery, it's supposed
to simplify any operator nodes whose operand(s) become NULL; but it failed
to do that reliably, because dropvoidsubtree() only examined the top level
of the result tree. Rather than make a second recursive pass, let's just
give the responsibility to dofindsubquery() to simplify while it's doing
the main replacement pass. Per report from Andreas Seltenreich.
Artur Zakirov, with some cosmetic changes by me. Back-patch to all
supported branches.
Tom Lane [Fri, 9 Dec 2016 20:27:23 +0000 (15:27 -0500)]
Be more careful about Python refcounts while creating exception objects.
PLy_generate_spi_exceptions neglected to do Py_INCREF on the new exception
objects, evidently supposing that PyModule_AddObject would do that --- but
it doesn't. This left us in a situation where a Python garbage collection
cycle could result in deletion of exception object(s), causing server
crashes or wrong answers if the exception objects are used later in the
session.
In addition, PLy_generate_spi_exceptions didn't bother to test for
a null result from PyErr_NewException, which at best is inconsistent
with the code in PLy_add_exceptions. And PLy_add_exceptions, while it
did do Py_INCREF on the exceptions it makes, waited to do that till
after some PyModule_AddObject calls, creating a similar risk for
failure if garbage collection happened within those calls.
To fix, refactor to have just one piece of code that creates an
exception object and adds it to the spiexceptions module, bumping the
refcount first.
Also, let's add an additional refcount to represent the pointer we're
going to store in a C global variable or hash table. This should only
matter if the user does something weird like delete the spiexceptions
Python module, but lack of paranoia has caused us enough problems in
PL/Python already.
The fact that PyModule_AddObject doesn't do a Py_INCREF of its own
explains the need for the Py_INCREF added in commit 4c966d920, so we
can improve the comment about that; also, this means we really want
to do that before, not after, the PyModule_AddObject call.
The missing Py_INCREF in PLy_generate_spi_exceptions was reported and
diagnosed by Rafa de la Torre; the other fixes by me. Back-patch
to all supported branches.
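The refactored creation sequence, sketched (the helper name here is
hypothetical; the Python C-API calls are real):

    static void
    add_spi_exception(PyObject *mod, const char *dotted_name,
                      PyObject *base, PyObject *dict)
    {
        PyObject   *exc = PyErr_NewException(dotted_name, base, dict);

        if (exc == NULL)
            elog(ERROR, "could not create exception \"%s\"", dotted_name);
        Py_INCREF(exc);     /* take our long-lived reference first ... */
        PyModule_AddObject(mod, dotted_name, exc);  /* ... this steals one */
    }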
Alvaro Herrera [Fri, 9 Dec 2016 15:42:17 +0000 (12:42 -0300)]
Fix crasher bug in array_position(s)
array_position and its cousin array_positions were caching the element
type equality function's FmgrInfo without being careful enough to put it
in a long-lived context. This is obviously broken, but it didn't matter
in most cases; it only becomes a problem when using arrays of records
(which involve record_eq). The fix is to ensure that the type's equality
function's FmgrInfo is cached in the array_position's flinfo->fn_mcxt
rather than the current memory context.
Apart from record types, the only other case that seems complex enough
to possibly cause the same problem is range types. I didn't find a way
to reproduce the problem with those, so I only include the test case
submitted with the bug report as a regression test.
Bug report and patch: Junseok Yang
Discussion: https://postgr.es/m/CAE+byMupUURYiZ6bKYgMZb9pgV1CYAijJGqWj-90W=nS7uEOeA@mail.gmail.com
Backpatch to 9.5, where array_position appeared.
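The gist of the fix, sketched (fmgr_info_cxt() is the real API; local
variable names are approximate):

    /* cache the equality function in a context that lives as long as the
     * flinfo itself, not in the (possibly short-lived) current context */
    fmgr_info_cxt(typentry->eq_opr_finfo.fn_oid, &my_extra->proc,
                  fcinfo->flinfo->fn_mcxt);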
Robert Haas [Thu, 8 Dec 2016 19:09:09 +0000 (14:09 -0500)]
Log the creation of an init fork unconditionally.
Previously, it was thought that this only needed to be done for the
benefit of possible standbys, so wal_level = minimal skipped it.
But that's not safe, because during crash recovery we might replay
XLOG_DBASE_CREATE or XLOG_TBLSPC_CREATE record which recursively
removes the directory that contains the new init fork. So log it
always.
The user-visible effect of this bug is that if you create a database
or tablespace, then create an unlogged table, then crash without
checkpointing, then restart, accessing the table will fail, because
it won't have been properly reset. This commit fixes that.
Michael Paquier, per a report from Konstantin Knizhnik. Wording of
the comments per a suggestion from me.
Tom Lane [Thu, 8 Dec 2016 16:40:02 +0000 (11:40 -0500)]
Fix reporting of column typmods for multi-row VALUES constructs.
expandRTE() and get_rte_attribute_type() reported the exprType() and
exprTypmod() values of the expressions in the first row of the VALUES as
being the column type/typmod returned by the VALUES RTE. That's fine for
the data type, since we coerce all expressions in a column to have the same
common type. But we don't coerce them to have a common typmod, so it was
possible for rows after the first one to return values that violate the
claimed column typmod. This leads to the incorrect result seen in bug
#14448 from Hassan Mahmood, as well as some other corner-case misbehaviors.
The desired behavior is the same as we use in other type-unification
cases: report the common typmod if there is one, but otherwise return -1
indicating no particular constraint. It's cheap for transformValuesClause
to determine the common typmod while transforming a multi-row VALUES, but
it'd be less cheap for expandRTE() and get_rte_attribute_type() to
re-determine that info every time they're asked --- possibly a lot less
cheap, if the VALUES has many rows. Therefore, the best fix is to record
the common typmods explicitly in a list in the VALUES RTE, as we were
already doing for column collations. This looks quite a bit like what
we're doing for CTE RTEs, so we can save a little bit of space and code by
unifying the representation for those two RTE types. They both now share
coltypes/coltypmods/colcollations fields. (At some point it might seem
desirable to populate those fields for all RTE types; but right now it
looks like constructing them for other RTE types would add more code and
cycles than it would save.)
The RTE change requires a catversion bump, so this fix is only usable
in HEAD. If we fix this at all in the back branches, the patch will
need to look quite different.
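A sketch of the typmod-unification rule as transformValuesClause might
apply it to one column's expressions (variable names approximate):

    int32       coltypmod = exprTypmod((Node *) linitial(colexprs));
    ListCell   *lc;

    foreach(lc, colexprs)
    {
        if (exprTypmod((Node *) lfirst(lc)) != coltypmod)
        {
            coltypmod = -1;     /* no common typmod: report "no constraint" */
            break;
        }
    }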
Commit fe0a0b59, which moved code to do MD5 authentication to a separate
CheckMD5Auth() function, left behind a comment that really belongs inside
the function, too. Also move the check for db_user_namespace inside the
function; it seems clearer that way.
Now that the md5 salt is passed as argument to md5_crypt_verify, it's a bit
silly that it peeks into the Port struct to see if MD5 authentication was
used. Seems more straightforward to treat it as MD5 authentication if
the md5 salt argument is given. And after that, md5_crypt_verify only used
the Port argument to look at port->user_name, but that is redundant,
because it is also passed as a separate 'role' argument. So remove the Port
argument altogether.
We allegedly allocated all remaining memory for the read buffers of the
sort tapes, but we allocated the merge heap only after that. That means
that the allocation of the merge heap was guaranteed to go over the memory
limit. Fix by allocating the merge heap first. This makes little difference
in practice, because the merge heap is tiny, but let's be tidy.
While we're at it, add a safeguard for the case that we are already over
the limit when allocating the read buffers. That shouldn't happen, but
better safe than sorry.
The memory accounting error was reported off-list by Peter Geoghegan.
Robert Haas [Wed, 7 Dec 2016 18:17:43 +0000 (13:17 -0500)]
Implement table partitioning.
Table partitioning is like table inheritance and reuses much of the
existing infrastructure, but there are some important differences.
The parent is called a partitioned table and is always empty; it may
not have indexes or non-inherited constraints, since those make no
sense for a relation with no data of its own. The children are called
partitions and contain all of the actual data. Each partition has an
implicit partitioning constraint. Multiple inheritance is not
allowed, and partitioning and inheritance can't be mixed. Partitions
can't have extra columns and may not allow nulls unless the parent
does. Tuples inserted into the parent are automatically routed to the
correct partition, so tuple-routing ON INSERT triggers are not needed.
Tuple routing isn't yet supported for partitions which are foreign
tables, and it doesn't handle updates that cross partition boundaries.
Currently, tables can be range-partitioned or list-partitioned. List
partitioning is limited to a single column, but range partitioning can
involve multiple columns. A partitioning "column" can be an
expression.
Because table partitioning is less general than table inheritance, it
is hoped that it will be easier to reason about properties of
partitions, and therefore that this will serve as a better foundation
for a variety of possible optimizations, including query planner
optimizations. The tuple routing that this patch does, based on
the implicit partitioning constraints, is an example of this, but it
seems likely that many other useful optimizations are also possible.
Amit Langote, reviewed and tested by Robert Haas, Ashutosh Bapat,
Amit Kapila, Rajkumar Raghuwanshi, Corey Huinker, Jaime Casanova,
Rushabh Lathia, Erik Rijkers, among others. Minor revisions by me.
Tom Lane [Wed, 7 Dec 2016 17:39:24 +0000 (12:39 -0500)]
Restore psql's SIGPIPE setting if popen() fails.
Ancient oversight in PageOutput(): if popen() fails, we'd better reset
the SIGPIPE handler before returning stdout, because ClosePager() won't.
Noticed while fixing the empty-PAGER issue.
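The gist of the fix, sketched (the sigpipe helpers are the ones psql's
print code already uses; the placement is approximate):

    fflush(stdout);
    disable_sigpipe_trap();
    pagerpipe = popen(pagerprog, "w");
    if (pagerpipe)
        return pagerpipe;

    /* popen failed: undo the SIGPIPE fiddling before falling back */
    restore_sigpipe_trap();
    return stdout;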
Tom Lane [Wed, 7 Dec 2016 17:19:56 +0000 (12:19 -0500)]
Handle empty or all-blank PAGER setting more sanely in psql.
If the PAGER environment variable is set but contains an empty string,
psql would pass it to "sh" which would silently exit, causing whatever
query output we were printing to vanish entirely. This is quite
mystifying; it took a long time for us to figure out that this was the
cause of Joseph Brenner's trouble report. Rather than allowing that
to happen, we should treat this as another way to specify "no pager".
(We could alternatively treat it as selecting the default pager, but
it seems more likely that the former is what the user meant to achieve
by setting PAGER this way.)
Nonempty, but all-white-space, PAGER values have the same behavior, and
it's pretty easy to test for that, so let's handle that case the same way.
Most other cases of faulty PAGER values will result in the shell printing
some kind of complaint to stderr, which should be enough to diagnose the
problem, so we don't need to work harder than this. (Note that there's
been an intentional decision not to be very chatty about apparent failure
returns from the pager process, since that may happen if, eg, the user
quits the pager with control-C or some such. I'd just as soon not start
splitting hairs about which exit codes might merit making our own report.)
libpq's old PQprint() function was already on board with ignoring empty
PAGER values, but for consistency, make it ignore all-white-space values
as well.
It's been like this a long time, so back-patch to all supported branches.
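A sketch of the test (placement inside PageOutput()/PQprint() is
approximate; isspace() does the all-white-space check):

    const char *pagerprog = getenv("PAGER");

    /* treat empty or all-white-space PAGER as "no pager" */
    if (pagerprog)
    {
        while (isspace((unsigned char) *pagerprog))
            pagerprog++;
        if (*pagerprog == '\0')
            pagerprog = NULL;
    }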
In commit fe0a0b59, the datatype used for MyCancelKey and other variables
that store cancel keys were changed from long to uint32, but I missed this
one. That broke query cancellation on platforms where long is wider than 32
bits.
Stephen Frost [Wed, 7 Dec 2016 04:02:38 +0000 (23:02 -0500)]
Silence compiler warnings
Rearrange a bit of code to ensure that 'mode' in LWLockRelease is
obviously always set, which seems a bit cleaner and avoids a compiler
warning (thanks to Robert for the suggestion!).
In GetCachedPlan(), initialize 'plan' to silence a compiler warning, but
also add an Assert() to make sure we don't ever actually fall through
with 'plan' still being set to NULL, since we are about to dereference
it.
Neither of these appears to be a live bug, but at least gcc
5.4.0-6ubuntu1~16.04.4 doesn't quite have the smarts to realize that.
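The GetCachedPlan() part, sketched:

    CachedPlan *plan = NULL;    /* silences the warning */

    /* ... every branch below is expected to set plan ... */

    Assert(plan != NULL);       /* we are about to dereference it */
    return plan;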
Tom Lane [Wed, 7 Dec 2016 00:52:34 +0000 (19:52 -0500)]
Fix unsafe assumption that struct timeval.tv_sec is a "long".
It typically is a "long", but it seems possible that on some platforms
it wouldn't be. In any case, this silences a compiler warning on
OpenBSD (cf buildfarm member curculio).
While at it, use snprintf not sprintf. This format string couldn't
possibly overrun the supplied buffer, but that doesn't seem like
a good reason not to use the safer style.
Oversight in commit f828654e1. Back-patch to 9.6 where that came in.
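The portable spelling, sketched (assuming <sys/time.h> and <stdio.h>):

    struct timeval tv;
    char        timebuf[32];

    gettimeofday(&tv, NULL);
    /* cast explicitly rather than assuming tv_sec is a long */
    snprintf(timebuf, sizeof(timebuf), "%ld.%06d",
             (long) tv.tv_sec, (int) tv.tv_usec);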
Tom Lane [Wed, 7 Dec 2016 00:34:29 +0000 (19:34 -0500)]
Put AC_MSG_RESULT() call in the right place.
Thinko in ecb0d20a9 --- this needs to go one level further out in
the "if" nest. As it stood, nothing got printed in the case of
selecting named POSIX semaphores. Cosmetic issue only, but a bug.
Robert Haas [Tue, 6 Dec 2016 16:11:54 +0000 (11:11 -0500)]
Fix interaction of parallel query with prepared statements.
Previously, a prepared statement created via a Parse message could get
a parallel plan, but one created with a PREPARE statement could not.
This state of affairs was due to confusion on my (rhaas) part: I
erroneously believed that a CREATE TABLE .. AS EXECUTE statement could
only be performed with a prepared statement created by PREPARE, but in fact
one created by a Prepare message works just as well. Therefore, it
makes no sense to allow parallel query in one case but not the other.
To fix, allow parallel query with all prepared statements, but run
the parallel plan serially (i.e. without workers) in the case of
CREATE TABLE .. AS EXECUTE. Also, document this.
Amit Kapila and Tobias Bussman, plus an extra sentence of
documentation by me.
Tom Lane [Tue, 6 Dec 2016 01:27:55 +0000 (20:27 -0500)]
Remove extraneous semicolon from uses of relptr_declare().
If we're going to write a semicolon after calls of relptr_declare(),
then we don't need one inside the macro, and removing it suppresses
"empty declaration" warnings from pickier compilers (eg pademelon).
While at it, we might as well use relptr() inside relptr_declare(),
because otherwise that macro would likely go unused altogether.
Also improve the comment, which I for one found unclear,
and provide a specific example of intended usage.
Robert Haas [Mon, 5 Dec 2016 20:54:28 +0000 (15:54 -0500)]
Ensure gatherstate->nextreader is properly initialized.
The previous code worked OK as long as a Gather node was never
rescanned, or if it was rescanned, as long as it got at least as
many workers on rescan as it had originally. But if the number
of workers ever decreased on a rescan, then it could crash.
Stephen Frost [Mon, 5 Dec 2016 20:50:55 +0000 (15:50 -0500)]
Add support for restrictive RLS policies
We have had support for restrictive RLS policies since 9.5, but they
were only available through extensions which use the appropriate hooks.
This adds support into the grammar, catalog, psql and pg_dump for
restrictive RLS policies, thus reducing the cases where an extension is
necessary.
In passing, also move away from using "AND"d and "OR"d in comments.
As pointed out by Alvaro, it's not really appropriate to attempt
to make verbs out of "AND" and "OR", so reword those comments which
attempted to.
Reviewed By: Jeevan Chalke, Dean Rasheed
Discussion: https://postgr.es/m/20160901063404.GY4028@tamriel.snowman.net
If we failed to connect to one or more hosts, and then afterwards we
find one that fails to be read-write, the latter error message was
clobbering any earlier ones. Repair.
Robert Haas [Mon, 5 Dec 2016 15:53:21 +0000 (10:53 -0500)]
Reduce the default for max_worker_processes back to 8.
Commit b460f5d6693103076dc554aa7cbb96e1e53074f9 -- at my suggestion --
increased the default value of max_worker_processes from 8 to 16, on
the theory that this would be harmless and convenient for users.
Unfortunately, this caused some buildfarm machines with low connection
limits to start failing, so apparently it's not harmless after all.
Robert Haas [Mon, 5 Dec 2016 15:00:49 +0000 (10:00 -0500)]
Try to fix some DSA-related compiler warnings.
Commit 13df76a537cca3b8884911d8fdf7c89a457a8dd3 was overconfident
about how portable %016lx is. Some compilers complain because they
need %016llx, while platforms where DSA pointers are only 32 bits
get unhappy about using a 64-bit format for a 32-bit quantity.
Replace PostmasterRandom() with a stronger source, second attempt.
This adds a new routine, pg_strong_random() for generating random bytes,
for use in both frontend and backend. At the moment, it's only used in
the backend, but the upcoming SCRAM authentication patches need strong
random numbers in libpq as well.
pg_strong_random() is based on, and replaces, the existing implementation
in pgcrypto. It can acquire strong random numbers from a number of sources,
depending on what's available:
- OpenSSL RAND_bytes(), if built with OpenSSL
- On Windows, the native cryptographic functions are used
- /dev/urandom
Unlike the current pgcrypto function, the source is chosen by configure.
That makes it easier to test different implementations, and ensures that
we don't accidentally fall back to a less secure implementation, if the
primary source fails. All of those methods are quite reliable; it would be
pretty surprising for them to fail, so we'd rather find out by failing
hard.
If no strong random source is available, we fall back to using erand48(),
seeded from current timestamp, like PostmasterRandom() was. That isn't
cryptographically secure, but allows us to still work on platforms that
don't have any of the above stronger sources. Because it's not very secure,
the built-in implementation is only used if explicitly requested with
--disable-strong-random.
This replaces the more complicated Fortuna algorithm we used to have in
pgcrypto, which is unfortunate, but all modern platforms have /dev/urandom,
so it doesn't seem worth the maintenance effort to keep that. pgcrypto
functions that require strong random numbers will be disabled with
--disable-strong-random.
Original patch by Magnus Hagander, tons of further work by Michael Paquier
and me.
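The new routine's contract, sketched (the signature is as understood
from this description; treat it as approximate):

    /* Fill buf with len cryptographically strong random bytes; returns
     * true on success, false if the configured source fails. */
    bool pg_strong_random(void *buf, size_t len);

    /* typical usage */
    char        nonce[16];

    if (!pg_strong_random(nonce, sizeof(nonce)))
        elog(ERROR, "could not generate random nonce");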
Fujii Masao [Mon, 5 Dec 2016 11:29:41 +0000 (20:29 +0900)]
Fix incorrect output from gin_desc().
Previously gin_desc() displayed incorrect output "unknown action 0"
for XLOG_GIN_INSERT and XLOG_GIN_VACUUM_DATA_LEAF_PAGE records with
valid actions. The cause of this problem was that gin_desc() wrongly
used XLogRecGetData() to extract data from those records.
Since they were registered by XLogRegisterBufData(), gin_desc() should
have used XLogRecGetBlockData(), instead, like gin_redo().
There were also other differences between gin_desc() and gin_redo()
in how they treated the XLOG_GIN_INSERT record.
This commit fixes gin_desc() routine so that it treats those records
in the same way as gin_redo().
Back-patch to 9.5, where the WAL record format was revamped and
XLogRegisterBufData() was added.
Reported-By: Andres Freund
Reviewed-By: Tom Lane
Discussion: <20160509194645.7lewnpw647zegx2m@alap3.anarazel.de>
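The gist of the fix, sketched (the decoding calls are existing WAL-reader
APIs; the surrounding details are approximate):

    Size        datalen;

    /* data registered with XLogRegisterBufData() lives with the block, so: */
    char       *payload = XLogRecGetBlockData(record, 0, &datalen);

    /* ... not XLogRecGetData(record), which returns the main data chunk */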
Tom Lane [Sun, 4 Dec 2016 20:02:27 +0000 (15:02 -0500)]
Don't mess up pstate->p_next_resno in transformOnConflictClause().
transformOnConflictClause incremented p_next_resno while generating the
phony targetlist for the EXCLUDED pseudo-rel. Then that field got
incremented some more during transformTargetList, possibly leading to
free_parsestate concluding that we'd overrun the allowed length of a tlist,
as reported by Justin Pryzby.
We could fix this by resetting p_next_resno to 1 after using it for the
EXCLUDED pseudo-rel tlist, but it seems easier and less coupled to other
places if we just don't use that field at all in this loop. (Note that
this doesn't change anything about the resnos that end up appearing in
the main target list, because those are all replaced with target-column
numbers by updateTargetListEntry.)
In passing, fix incorrect type OID assigned to the whole-row Var for
"EXCLUDED.*" (somehow this escaped having any bad consequences so far,
but it's certainly wrong); remove useless assignment to var->location;
pstrdup the column names in case of a relcache flush; and improve
nearby comments.
Back-patch to 9.5 where ON CONFLICT was introduced.
Noah Misch [Sun, 4 Dec 2016 05:16:54 +0000 (00:16 -0500)]
Make pgwin32_putenv() probe every known CRT, regardless of compiler.
This extends to MinGW builds the provision for MSVC-built libraries to
see putenv() effects. Doing so repairs, for example, the handling of
the krb_server_keyfile parameter when linked with MSVC-built MIT
Kerberos. Like the previous commit, no back-patch.
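A sketch of the probing loop (the module list and variable names here are
illustrative; the real table lives in win32env.c):

    typedef int (__cdecl *PUTENVPROC) (const char *);

    static const char *const crt_names[] = {
        "msvcrt", "msvcrtd", "msvcr120", "msvcr120d", "ucrtbase", "ucrtbased"
    };
    int         i;

    for (i = 0; i < lengthof(crt_names); i++)
    {
        HMODULE     hmodule = GetModuleHandle(crt_names[i]);

        if (hmodule != NULL)
        {
            PUTENVPROC  putenvFunc = (PUTENVPROC)
                GetProcAddress(hmodule, "_putenv");

            if (putenvFunc != NULL)
                putenvFunc(envval);     /* propagate the change into this CRT */
        }
    }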
Noah Misch [Sat, 3 Dec 2016 20:46:36 +0000 (15:46 -0500)]
Make pgwin32_putenv() follow DLL loading and unloading.
Until now, the first putenv() call of a given postgres.exe process would
cache the set of loaded CRTs. If a CRT unloaded after that call, the
next putenv() would crash. That risk was largely theoretical, because
the first putenv() precedes all PostgreSQL-initiated module loading.
However, this might explain bad interactions with antivirus and other
software that injects threads asynchronously. If an additional CRT
loaded after the first putenv(), pgwin32_putenv() would not discover it.
That CRT would have all environment changes predating its load, but it
would not receive later PostgreSQL-initiated changes. An additional CRT
loading concurrently with the first putenv() might miss that change in
addition to missing later changes. Fix all those problems. This
removes the cache mechanism from pgwin32_putenv(); the cost, less than
100 μs per backend startup, is negligible.
No resulting misbehavior was known to be user-visible given the core
distribution alone, but one can readily construct an affected extension
module. No back-patch given the lack of complaints and the potential
for behavior changes in non-PostgreSQL code running in the backend.
Noah Misch [Sat, 3 Dec 2016 20:46:36 +0000 (15:46 -0500)]
Make pgwin32_putenv() visit debug CRTs.
This has no effect in the most conventional case, where no relevant DLL
uses a debug build. For an example where it does matter, given a debug
build of MIT Kerberos, the krb_server_keyfile parameter usually had no
effect. Since nobody wants a Heisenbug, back-patch to 9.2 (all
supported versions).
Noah Misch [Sat, 3 Dec 2016 20:46:35 +0000 (15:46 -0500)]
Remove wrong CloseHandle() call.
In accordance with its own documentation, invoke CloseHandle() only when
directed in the documentation for the function that furnished the
handle. GetModuleHandle() does not so direct. We have been issuing
this call only in the rare event that a CRT DLL contains no "_putenv"
symbol, so lack of bug reports is uninformative. Back-patch to 9.2 (all
supported versions).
Noah Misch [Sat, 3 Dec 2016 20:46:35 +0000 (15:46 -0500)]
Refine win32env.c cosmetics.
Replace use of plain 0 as a null pointer constant. In comments, update
terminology and lessen redundancy. Back-patch to 9.2 (all supported
versions) for the convenience of back-patching the next two commits.
Christian Ullrich and Noah Misch, reviewed (in earlier versions) by
Michael Paquier.
Tom Lane [Fri, 2 Dec 2016 22:23:54 +0000 (17:23 -0500)]
Fix broken wait-for-previous-process-to-exit loop in regression test.
Must do pg_stat_clear_snapshot() inside test's loop, or our snapshot of
pg_stat_activity will never change :-(. Thinko in b3427dade -- evidently
my workstation never really iterated the loop in testing. Per buildfarm.
Tom Lane [Fri, 2 Dec 2016 19:57:35 +0000 (14:57 -0500)]
Delete deleteWhatDependsOn() in favor of more performDeletion() flag bits.
deleteWhatDependsOn() had grown an uncomfortably large number of
assumptions about what it's used for. There are actually only two minor
differences between what it does and what a regular performDeletion() call
can do, so let's invent additional bits in performDeletion's existing flags
argument that specify those behaviors, and get rid of deleteWhatDependsOn()
as such. (We'd probably have done it this way from the start, except that
performDeletion didn't originally have a flags argument, IIRC.)
Also, add a SKIP_EXTENSIONS flag bit that prevents ever recursing to an
extension, and use that when dropping temporary objects at session end.
This provides a more general solution to the problem addressed in a hacky
way in commit 08dd23cec: if an extension script creates temp objects and
forgets to remove them again, the whole extension went away when its
contained temp objects were deleted. The previous solution only covered
temp relations, but this solves it for all object types.
These changes require minor additions in dependency.c to pass the flags
to subroutines that previously didn't get them, but it's still a net
savings of code, and it seems cleaner than before.
Having done this, revert the special-case code added in 08dd23cec that
prevented addition of pg_depend records for temp table extension
membership, because that caused its own oddities: dropping an extension
that had created such a table didn't automatically remove the table,
leading to a failure if the table had another dependency on the extension
(such as use of an extension data type), or to a duplicate-name failure if
you then tried to recreate the extension. But we keep the part that
prevents the pg_temp_nnn schema from becoming an extension member; we never
want that to happen. Add a regression test case covering these behaviors.
Although this fixes some arguable bugs, we've heard few field complaints,
and any such problems are easily worked around by explicitly dropping temp
objects at the end of extension scripts (which seems like good practice
anyway). So I won't risk a back-patch.
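With those bits available, dropping temp objects at session end becomes an
ordinary deletion call, sketched here (the SKIP_* flag names are per this
commit; the exact call site is approximate):

    performDeletion(&object, DROP_CASCADE,
                    PERFORM_DELETION_INTERNAL |
                    PERFORM_DELETION_QUIETLY |
                    PERFORM_DELETION_SKIP_ORIGINAL |
                    PERFORM_DELETION_SKIP_EXTENSIONS);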
Robert Haas [Fri, 2 Dec 2016 17:34:36 +0000 (12:34 -0500)]
Introduce dynamic shared memory areas.
Programmers discovered decades ago that it was useful to have a simple
interface for allocating and freeing memory, which is why malloc() and
free() were invented. Unfortunately, those handy tools don't work
with dynamic shared memory segments because those are specific to
PostgreSQL and are not necessarily mapped at the same address in every
cooperating process. So invent our own allocator instead. This makes
it possible for processes cooperating as part of parallel query
execution to allocate and free chunks of memory without having to
reserve them prior to the start of execution. It could also be used
for longer lived objects; for example, we could consider storing data
for pg_stat_statements or the stats collector in shared memory using
these interfaces, rather than writing them to files. Basically,
anything that needs shared memory but can't predict in advance how
much it's going to need might find this useful.
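The resulting interface, sketched (function names as introduced here;
signatures approximate):

    dsa_area   *area = dsa_create(tranche_id);  /* new area, in DSM segments */
    dsa_pointer dp = dsa_allocate(area, 1024);  /* segment-relative address */
    char       *p = dsa_get_address(area, dp);  /* translate for this process */

    dsa_free(area, dp);
    dsa_detach(area);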
Thomas Munro and Robert Haas. The original code (of mine) on which
Thomas based his work was actually designed to be a new backend-local
memory allocator for PostgreSQL, but that hasn't gone anywhere - or
not yet, anyway. Thomas took that work and performed major
refactoring and extensive modifications to make it work with dynamic
shared memory, including the addition of appropriate locking.
Robert Haas [Fri, 2 Dec 2016 17:03:30 +0000 (12:03 -0500)]
Management of free memory pages.
This is intended as infrastructure for a full-fledged allocator for
dynamic shared memory. The interface looks a bit like a real
allocator, but only supports allocating and freeing memory in
multiples of the 4kB page size. Further, to free memory, you must
know the size of the span you wish to free, in pages. While these
constraints make it unsuitable as an allocator in and of itself, it
still serves
as very useful scaffolding for a full-fledged allocator.
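The page-granularity interface, sketched (signatures approximate):

    /* hand npages pages starting at first_page over to the manager */
    void FreePageManagerPut(FreePageManager *fpm,
                            Size first_page, Size npages);

    /* try to grab a run of npages pages; false means nothing suitable */
    bool FreePageManagerGet(FreePageManager *fpm,
                            Size npages, Size *first_page);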
Robert Haas and Thomas Munro. This code is mostly the same as my 2014
submission, but Thomas fixed quite a few bugs and made some changes to
the interface.
Robert Haas [Fri, 2 Dec 2016 16:29:01 +0000 (11:29 -0500)]
Add a crude facility for dealing with relative pointers.
C doesn't have any sort of built-in understanding of a pointer
relative to some arbitrary base address, but dynamic shared memory
segments can be mapped at different addresses in different processes,
so any sort of shared data structure stored within a dynamic shared
memory segment can't use absolute pointers. We could use something
like Size to represent a relative pointer, but then the compiler
provides no type-checking. Use stupid macro tricks to get some
type-checking.
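The macro tricks, sketched (macro names per this commit's relptr.h; usage
details approximate):

    typedef struct node
    {
        int         value;
        relptr(struct node) next;   /* segment-relative, type-checked pointer */
    } node;

    /* "base" is the address at which this process mapped the segment */
    node       *n = relptr_access(base, some_node->next);   /* decode */

    relptr_store(base, some_node->next, other_node);        /* encode */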
Patch originally by me. Concept suggested by Andres Freund. Recently
resubmitted as part of Thomas Munro's work on dynamic shared memory
allocation.
Robert Haas [Fri, 2 Dec 2016 13:58:41 +0000 (08:58 -0500)]
Clarify that pg_stat_activity.query has a length limit.
There was always documentation of the GUC that controlled what the
limit actually was, but previously the documentation of the field
itself made no mention of that limit.
Robert Haas [Fri, 2 Dec 2016 12:42:58 +0000 (07:42 -0500)]
Add max_parallel_workers GUC.
Increase the default value of the existing max_worker_processes GUC
from 8 to 16, and add a new max_parallel_workers GUC with a maximum
of 8. This way, even if the maximum amount of parallel query is
happening, there is still room for background workers that do other
things, as originally envisioned when max_worker_processes was added.
Julien Rouhaud, reviewed by Amit Kapila and revised by me.
Alvaro Herrera [Fri, 2 Dec 2016 03:34:01 +0000 (00:34 -0300)]
Permit dump/reload of not-too-large >1GB tuples
Our documentation states that our maximum field size is 1 GB, and that
our maximum row size is 1.6 TB. However, while this might be attainable
in theory with enough contortions, it is not workable in practice; for
starters, pg_dump fails to dump tables containing rows larger than 1 GB,
even if individual columns are well below the limit; and even if one
does manage to manufacture a dump file containing a row that large, the
server refuses to load it anyway.
This commit enables dumping and reloading of such tuples, provided two
conditions are met:
1. no single column is larger than 1 GB (in output size -- for bytea
this includes the formatting overhead)
2. the whole row is not larger than 2 GB
There are three related changes to enable this:
a. StringInfo's API now has two additional functions that allow creating
a string that grows beyond the typical 1GB limit (a "long" string; see
the sketch after this list).
ABI compatibility is maintained. We still limit these strings to 2 GB,
though, for reasons explained below.
b. COPY now uses long StringInfos, so that pg_dump doesn't choke
trying to emit rows longer than 1GB.
c. heap_form_tuple now uses the MCXT_ALLOW_HUGE flag in its allocation
for the input tuple, which means that large tuples are accepted on
input. Note that at this point we do not apply any further limit to the
input tuple size.
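For item (a), the new entry points, sketched (these are the names this
change is understood to introduce; treat them as approximate):

    StringInfo  buf = makeLongStringInfo();     /* heap-allocated, 2GB-capable */

    StringInfoData sbuf;
    initLongStringInfo(&sbuf);                  /* ditto, for a local variable */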
The main reason to limit to 2 GB is that the FE/BE protocol uses 32 bit
length words to describe each row; and because the documentation is
ambiguous on its signedness and libpq does consider it signed, we cannot
use the highest-order bit. Additionally, the StringInfo API uses "int"
(which is 4 bytes wide on most platforms) in many places, so we'd need
to change that API too in order to improve things, which would have lots
of fallout.
Backpatch to 9.5, which is the oldest that has
MemoryContextAllocExtended, a necessary piece of infrastructure. We
could apply to 9.4 with very minimal additional effort, but any further
than that would require backpatching "huge" allocations too.
This is the largest set of changes we could find that can be
back-patched without breaking compatibility with existing systems.
Fixing a bigger set of problems (for example, dumping tuples bigger than
2GB, or dumping fields bigger than 1GB) would require changing the FE/BE
protocol and/or changing the StringInfo API in an ABI-incompatible way,
neither of which would be back-patchable.
Authors: Daniel Vérité, Álvaro Herrera
Reviewed by: Tomas Vondra
Discussion: https://postgr.es/m/20160229183023.GA286012@alvherre.pgsql
Peter Eisentraut [Wed, 30 Nov 2016 17:00:00 +0000 (12:00 -0500)]
Refactor libpqwalreceiver
The whole walreceiver API is now wrapped into a struct, like most of our
other loadable module APIs. The libpq connection is no longer a global
variable in libpqwalreceiver. Instead, it is encapsulated into a struct
that is passed around the functions. This allows multiple walreceivers
to run at the same time.
Add some rudimentary support for logical replication connections to
libpqwalreceiver.
These changes are mostly cosmetic and are going to be useful for the
future logical replication patches.
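A sketch of the shape of the wrapped API (struct and member names as
understood from this refactoring; signatures approximate):

    typedef struct WalReceiverConn WalReceiverConn;     /* opaque; wraps the libpq conn */

    typedef struct WalReceiverFunctionsType
    {
        WalReceiverConn *(*walrcv_connect) (const char *conninfo, bool logical,
                                            const char *appname);
        void        (*walrcv_disconnect) (WalReceiverConn *conn);
        /* ... pointers for the remaining walreceiver operations ... */
    } WalReceiverFunctionsType;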
Peter Eisentraut [Thu, 15 Sep 2016 17:00:00 +0000 (12:00 -0500)]
Use grammar symbol function_with_argtypes consistently
Instead of sometimes referring to a function signature like func_name
func_args, use the existing function_with_argtypes symbol, which
combines the two.
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Robert Haas [Thu, 1 Dec 2016 19:36:39 +0000 (14:36 -0500)]
libpq: Fix inadvertent change in PQhost() behavior.
Commit 274bb2b3857cc987cfa21d14775cae9b0dababa5 caused PQhost() to
return the value of the hostaddr parameter rather than the relevant
host when the latter parameter was specified. That's wrong. Commit 9a1d0af4ad2cbd419115b453d811c141b80d872b then amplified the damage by
using PQhost() in more places, so that the SSL test suite started
failing.
Andres Freund [Thu, 1 Dec 2016 01:30:09 +0000 (17:30 -0800)]
Use narrower representative tuples in the hash-agg hashtable.
So far the hashtable stored representative tuples in the form of its
input slot, with all columns in the hashtable that are not
needed (i.e. not grouped upon or functionally dependent) set to NULL.
That's good for saving memory, but it turns out that having tuples full
of NULLs isn't free. slot_deform_tuple is faster if there's no NULL
bitmap even if no NULLs are encountered, and skipping over leading NULLs
isn't free.
So compute a separate tuple descriptor that only contains the needed
columns. As columns have already been moved in/out of the slot for the
hashtable, that does not imply additional per-row overhead.
Author: Andres Freund
Reviewed-By: Heikki Linnakangas
Discussion: https://postgr.es/m/20161103110721.h5i5t5saxfk5eeik@alap3.anarazel.de
Andres Freund [Thu, 1 Dec 2016 00:08:11 +0000 (16:08 -0800)]
Perform only one projection to compute agg arguments.
Previously we did a ExecProject() for each individual aggregate
argument. That turned out to be a performance bottleneck in queries with
multiple aggregates.
Doing all the argument computations in one ExecProject() is quite a bit
cheaper because ExecProject's fastpath can do the work at once in a
relatively tight loop, and because it can get all the required columns
with a single slot_getsomeattr and save some other redundant setup
costs.
Author: Andres Freund
Reviewed-By: Heikki Linnakangas
Discussion: https://postgr.es/m/20161103110721.h5i5t5saxfk5eeik@alap3.anarazel.de
Robert Haas [Wed, 30 Nov 2016 20:39:21 +0000 (15:39 -0500)]
Improve hash index bucket split behavior.
Previously, the right to split a bucket was represented by a
heavyweight lock on the page number of the primary bucket page.
Unfortunately, this meant that every scan needed to take a heavyweight
lock on that bucket also, which was bad for concurrency. Instead, use
a cleanup lock on the primary bucket page to indicate the right to
begin a split, so that scans only need to retain a pin on that page,
which they would have to acquire anyway, and which is also much
cheaper.
In addition to reducing the locking cost, this also avoids locking out
scans and inserts for the entire lifetime of the split: while the new
bucket is being populated with copies of the appropriate tuples from
the old bucket, scans and inserts can happen in parallel. There are
minor concurrency improvements for vacuum operations as well, though
the situation there is still far from ideal.
This patch also removes the unworldly assumption that a split will
never be interrupted. With the new code, a split is done in a series
of small steps and the system can pick up where it left off if it is
interrupted prior to completion. While this patch does not itself add
write-ahead logging for hash indexes, it is clearly a necessary first
step, since one of the things that could interrupt a split is the
removal of electrical power from the machine performing it.
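The key primitive, sketched (ConditionalLockBufferForCleanup() is a real
bufmgr API; its use here is approximate):

    /* the right to begin a split now requires a cleanup lock, i.e. no
     * scan may still hold a pin on the old bucket's primary page */
    if (!ConditionalLockBufferForCleanup(bucket_buf))
        return;     /* someone's scanning; try the split again later */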
Amit Kapila. I wrote the original design on which this patch is
based, and did a good bit of work on the comments and README through
multiple rounds of review, but all of the code is Amit's. Also
reviewed by Jesper Pedersen, Jeff Janes, and others.