Robert Haas [Fri, 26 Feb 2016 11:03:37 +0000 (16:33 +0530)]
On second thought, disable parallelism for prepared statements.
CREATE TABLE .. AS EXECUTE can turn an apparently read-only query into
a write operation, which parallel query can't handle. It's a bit of a
shame that this requires us to avoid parallel query for queries prepared via
PREPARE in all cases, but for right now it does.
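To illustrate the hazard (a sketch; the table and statement names are made up):

    -- Prepared, this looks like a read-only query:
    PREPARE q AS SELECT * FROM big_table WHERE x > 100;
    -- But executed this way, it becomes a write operation:
    CREATE TABLE big_table_copy AS EXECUTE q;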
Robert Haas [Fri, 26 Feb 2016 10:44:46 +0000 (16:14 +0530)]
Add new FDW API to test for parallel-safety.
This is basically a bug fix; the old code assumes that a ForeignScan
is always parallel-safe, but for postgres_fdw, for example, this is
definitely false. It should be true for file_fdw, though, since a
worker can read a file from the filesystem just as well as any other
backend process.
Original patch by Thomas Munro. Documentation, and changes to the
comments, by me.
Robert Haas [Thu, 25 Feb 2016 07:32:18 +0000 (13:02 +0530)]
Enable parallelism for prepared statements and extended query protocol.
Parallel query can't handle running a query only partially rather than
to completion. However, there seems to be no way to run a statement
prepared via SQL PREPARE other than to completion, so we can enable it
there without a problem.
The situation is more complicated for the extended query protocol.
libpq seems to provide no way to send an Execute message with a
non-zero rowcount, but some other client might. If that happens, and
a parallel plan was chosen, we'll execute the parallel plan without
using any workers, which may be somewhat inefficient but should still
work. Hopefully this won't be a problem; users can always set
max_parallel_degree=0 to avoid choosing parallel plans in the first
place.
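The workaround is a one-liner (using the GUC name of this development cycle):

    SET max_parallel_degree = 0;  -- never choose a parallel plan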
Tom Lane [Mon, 22 Feb 2016 19:31:43 +0000 (14:31 -0500)]
Create a function to reliably identify which sessions block which others.
This patch introduces "pg_blocking_pids(int) returns int[]", which returns
the PIDs of any sessions that are blocking the session with the given PID.
Historically people have obtained such information using a self-join on
the pg_locks view, but it's unreasonably tedious to do it that way with any
modicum of correctness, and the addition of parallel queries has pretty
much broken that approach altogether. (Given some more columns in the view
than there are today, you could imagine handling parallel-query cases with
a 4-way join; but ugh.)
The new function has the following behaviors that are painful or impossible
to get right via pg_locks:
1. Correctly understands which lock modes block which other ones.
2. In soft-block situations (two processes both waiting for conflicting lock
modes), only the one that's in front in the wait queue is reported to
block the other.
3. In parallel-query cases, reports all sessions blocking any member of
the given PID's lock group, and reports a session by naming its leader
process's PID, which will be the pg_backend_pid() value visible to
clients.
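A minimal sketch of the new function in use:

    -- Which sessions block backend 12345?
    SELECT pg_blocking_pids(12345);

    -- All currently blocked sessions and their blockers:
    SELECT pid, pg_blocking_pids(pid) AS blocked_by
    FROM pg_stat_activity
    WHERE cardinality(pg_blocking_pids(pid)) > 0;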
The motivation for doing this right now is mostly to fix the isolation
tests. Commit 38f8bdcac4982215beb9f65a19debecaf22fd470 lobotomized
isolationtester's is-it-waiting query by removing its ability to recognize
nonconflicting lock modes, as a crude workaround for the inability to
handle soft-block situations properly. But even without the lock mode
tests, the old query was excessively slow, particularly in
CLOBBER_CACHE_ALWAYS builds; some of our buildfarm animals fail the new
deadlock-hard test because the deadlock timeout elapses before they can
probe the waiting status of all eight sessions. Replacing the pg_locks
self-join with use of pg_blocking_pids() is not only much more correct, but
a lot faster: I measure it at about 9X faster in a typical dev build with
Asserts, and 3X faster in CLOBBER_CACHE_ALWAYS builds. That should provide
enough headroom for the slower CLOBBER_CACHE_ALWAYS animals to pass the
test, without having to lengthen deadlock_timeout yet more and thus slow
down the test for everyone else.
Tom Lane [Mon, 22 Feb 2016]
Remove redundant PGPROC.lockGroupLeaderIdentifier field.
We don't really need this field, because it's either zero or redundant with
PGPROC.pid. The use of zero to mark "not a group leader" is not necessary
since we can just as well test whether lockGroupLeader is NULL. This does
not save very much, either as to code or data, but the simplification seems
worthwhile anyway.
Andres Freund [Mon, 22 Feb 2016 06:48:44 +0000 (22:48 -0800)]
Fix wrong keysize in PrivateRefCountHash creation.
In 4b4b680c3 I accidentally used sizeof(PrivateRefCountArray) instead of
sizeof(PrivateRefCountEntry) when creating the refcount overflow
hashtable. As the former is bigger than the latter, this luckily only
resulted in slightly increased memory usage when many buffers are
pinned in a backend.
Reported-By: Takashi Horikawa
Discussion: 73FA3881462C614096F815F75628AFCD035A48C3@BPXM01GP.gisp.nec.co.jp
Backpatch: 9.5, where the new ref count infrastructure was introduced
Tom Lane [Sun, 21 Feb 2016 20:23:17 +0000 (15:23 -0500)]
Docs: make prose discussion match the ordering of Table 9-58.
The "Session Information Functions" table seems to be sorted mostly
alphabetically (although it's not perfect), which would be all right
if it didn't lead to some related functions being described in a
pretty nonintuitive order. Also, the prose discussions after the table
were in an order that hardly matched the table at all. Rearrange to
make things a bit easier to follow.
Tom Lane [Sun, 21 Feb 2016 16:38:24 +0000 (11:38 -0500)]
Cosmetic improvements in new config_info code.
Coverity griped about use of unchecked strcpy() into a local variable.
There's unlikely to be any actual bug there, since no caller would be
passing a path longer than MAXPGPATH, but nonetheless use of strlcpy()
seems preferable.
While at it, get rid of unmaintainable separation between list of
field names and list of field values in favor of initializing them
in parallel. And we might as well declare get_configdata()'s path
argument as const char *, even though no current caller needs that.
Andrew Dunstan [Sun, 21 Feb 2016 15:30:49 +0000 (10:30 -0500)]
Fix two-argument jsonb_object when called with empty arrays
Some over-eager copy-and-pasting on my part resulted in a nonsense
result being returned in this case. I have adopted the same pattern for
handling this case as is used in the one argument form of the function,
i.e. we just skip over the code that adds values to the object.
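A sketch of the behavior after the fix:

    SELECT jsonb_object('{}'::text[], '{}'::text[]);
    -- returns {} instead of a nonsense result
    SELECT jsonb_object('{a,b}'::text[], '{1,2}'::text[]);
    -- returns {"a": "1", "b": "2"} as before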
Diagnosis and patch from Michael Paquier, although not quite his
solution.
Fixes bug #13936.
Backpatch to 9.5 where jsonb_object was introduced.
Robert Haas [Sun, 21 Feb 2016 10:12:02 +0000 (15:42 +0530)]
Cosmetic improvements to group locking.
Reflow text in lock manager README so that it fits within 80 columns.
Correct some mistakes. Expand the README to explain not only why group
locking exists but also the data structures that support it. Improve
comments related to group locking in several files. Change the name of a
macro argument for improved clarity.
Most of these problems were reported by Tom Lane, but I found a few
of them myself.
Robert Haas [Sun, 21 Feb 2016 08:47:50 +0000 (14:17 +0530)]
postgres_fdw: Avoid sharing list substructure.
list_concat(list_concat(a, b), c) destructively changes both a and b;
to avoid such perils, copy lists of remote_conds before incorporating
them into larger lists via list_concat().
Dean Rasheed [Sat, 20 Feb 2016 11:03:04 +0000 (11:03 +0000)]
Fix pg_size_bytes() to be more portable.
Commit 53874c5228fe16589a4d01b3e1fab3678e0fd8e3 broke various 32-bit
buildfarm machines because it incorrectly used an 'L' suffix for what
needed to be a 64-bit literal. Thanks to Michael Paquier for helping
to diagnose this.
Dean Rasheed [Sat, 20 Feb 2016 09:57:27 +0000 (09:57 +0000)]
Add pg_size_bytes() to parse human-readable size strings.
This will parse strings in the format produced by pg_size_pretty() and
return sizes in bytes. This allows queries to be written with clauses
like "pg_total_relation_size(oid) > pg_size_bytes('10 GB')".
Author: Pavel Stehule with various improvements by Vitaly Burovoy
Discussion: http://www.postgresql.org/message-id/CAFj8pRD-tGoDKnxdYgECzA4On01_uRqPrwF-8LdkSE-6bDHp0w@mail.gmail.com
Reviewed-by: Vitaly Burovoy, Oleksandr Shulgin, Kyotaro Horiguchi, Michael Paquier and Robert Haas
Noah Misch [Sat, 20 Feb 2016 03:47:50 +0000 (22:47 -0500)]
Call xlc __isync() after, not before, associated compare-and-swap.
Architecture reference material specifies this order, and s_lock.h
inline assembly agrees. The former order failed to provide mutual
exclusion to lwlock.c and perhaps to other clients. The two xlc
buildfarm members, hornet and mandrill, have failed sixteen times with
duplicate key errors involving pg_class_oid_index or pg_type_oid_index.
Back-patch to 9.5, where commit b64d92f1a5602c55ee8b27a7ac474f03b7aee340
introduced atomics.
Simon Riggs [Fri, 19 Feb 2016 08:31:12 +0000 (08:31 +0000)]
Correct StartupSUBTRANS for page wraparound
StartupSUBTRANS() incorrectly handled cases near the max pageid in the subtrans
data structure, which in some cases could lead to errors in startup for Hot
Standby.
This patch wraps the pageids correctly, avoiding any such errors.
Identified by exhaustive crash testing by Jeff Janes.
Peter Eisentraut [Fri, 19 Feb 2016 01:12:38 +0000 (20:12 -0500)]
pg_dump: Fix inconsistent sscanf() conversions
It was using %u to read back, into a signed integer variable, a string
that was earlier produced by snprintf() with %d. This seems to work in
practice but is incorrect.
Tom Lane [Fri, 19 Feb 2016 01:01:49 +0000 (20:01 -0500)]
Add an explicit representation of the output targetlist to Paths.
Up to now, there's been an assumption that all Paths for a given relation
compute the same output column set (targetlist). However, there are good
reasons to remove that assumption. For example, an indexscan on an
expression index might be able to return the value of an expensive function
"for free". While we have the ability to generate such a plan today in
simple cases, we don't have a way to model that it's cheaper than a plan
that computes the function from scratch, nor a way to create such a plan
in join cases (where the function computation would normally happen at
the topmost join node). Also, we need this so that we can have Paths
representing post-scan/join steps, where the targetlist may well change
from one step to the next. Therefore, invent a "struct PathTarget"
representing the columns we expect a plan step to emit. It's convenient
to include the output tuple width and tlist evaluation cost in this struct,
and there will likely be additional fields in future.
While Path nodes that actually do have custom outputs will need their own
PathTargets, it will still be true that most Paths for a given relation
will compute the same tlist. To reduce the overhead added by this patch,
keep a "default PathTarget" in RelOptInfo, and allow Paths that compute
that column set to just point to their parent RelOptInfo's reltarget.
(In the patch as committed, actually every Path is like that, since we
do not yet have any cases of custom PathTargets.)
I took this opportunity to provide some more-honest costing of
PlaceHolderVar evaluation. Up to now, the assumption that "scan/join
reltargetlists have cost zero" was applied not only to Vars, where it's
reasonable, but also PlaceHolderVars where it isn't. Now, we add the eval
cost of a PlaceHolderVar's expression to the first plan level where it can
be computed, by including it in the PathTarget cost field and adding that
to the cost estimates for Paths. This isn't perfect yet but it's much
better than before, and there is a way forward to improve it more. This
costing change affects the join order chosen for a couple of the regression
tests, changing expected row ordering.
Tom Lane [Thu, 18 Feb 2016 20:40:35 +0000 (15:40 -0500)]
Fix multiple bugs in contrib/pgstattuple's pgstatindex() function.
Dead or half-dead index leaf pages were incorrectly reported as live, as a
consequence of a code rearrangement I made (during a moment of severe brain
fade, evidently) in commit d287818eb514d431.
The index metapage was not counted in index_size, causing that result to
not agree with the actual index size on-disk.
Index root pages were not counted in internal_pages, which is inconsistent
compared to the case of a root that's also a leaf (one-page index), where
the root would be counted in leaf_pages. Aside from that inconsistency,
this could lead to additional transient discrepancies between the reported
page counts and index_size, since it's possible for pgstatindex's scan to
see zero or multiple pages marked as BTP_ROOT, if the root moves due to
a split during the scan. With these fixes, index_size will always be
exactly one page more than the sum of the displayed page counts.
Also, the index_size result was incorrectly documented as being measured in
pages; it's always been measured in bytes. (While fixing that, I couldn't
resist doing some small additional wordsmithing on the pgstattuple docs.)
Including the metapage causes the reported index_size to not be zero for
an empty index. To preserve the desired property that the pgstattuple
regression test results are platform-independent (ie, BLCKSZ configuration
independent), scale the index_size result in the regression tests.
The documentation issue was reported by Otsuka Kenji, and the inconsistent
root page counting by Peter Geoghegan; the other problems noted by me.
Back-patch to all supported branches, because this has been broken for
a long time.
Joe Conway [Wed, 17 Feb 2016 18:00:34 +0000 (10:00 -0800)]
Revert inadvertent change in pg_config behavior
In commit a5c43b88 the behavior of command line pg_config was
inadvertently changed to include the config name when specific
configs are requested, similar to when none are requested and
all are emitted. This breaks scripts that use pg_config, e.g.
for PGXS. Revert to the previous behavior.
Joe Conway [Wed, 17 Feb 2016 17:12:06 +0000 (09:12 -0800)]
Add new system view, pg_config
Move and refactor the underlying code for the pg_config client
application to src/common in support of sharing it with a new
system information SRF called pg_config() which makes the same
information available via SQL. Additionally, wrap the SRF with a
new system view called pg_config.
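A quick sketch of querying the new view:

    SELECT name, setting FROM pg_config
    WHERE name IN ('BINDIR', 'VERSION');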
Patch by me with extensive input and review by Michael Paquier
and additional review by Alvaro Herrera.
Robert Haas [Wed, 17 Feb 2016 10:10:00 +0000 (15:40 +0530)]
Reuse abbreviated keys in ordered [set] aggregates.
When processing ordered aggregates following a sort that could make use
of the abbreviated key optimization, only call the equality operator to
compare successive pairs of tuples when their abbreviated keys were not
equal.
Peter Geoghegan, reviewed by Andreas Karlsson and by me.
Tom Lane [Wed, 17 Feb 2016 02:08:15 +0000 (21:08 -0500)]
Make plpython cope with funny characters in function names.
A function name that's double-quoted in SQL can contain almost any
characters, but we were using that name directly as part of the name
generated for the Python-level function, and Python doesn't like
anything that isn't pretty much a standard identifier. To fix,
replace anything that isn't an ASCII letter or digit with an underscore
in the generated name. This doesn't create any risk of duplicate Python
function names because we were already appending the function OID to
the generated name to ensure uniqueness. Per bug #13960 from Jim Nasby.
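A sketch of a name that now works (assuming the plpythonu language is installed):

    CREATE FUNCTION "my func!"() RETURNS int
    AS $$ return 42 $$ LANGUAGE plpythonu;
    SELECT "my func!"();  -- 42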
Patch by Jim Nasby, modified a bit by me. Back-patch to all
supported branches.
Tom Lane [Tue, 16 Feb 2016 18:43:03 +0000 (13:43 -0500)]
Improve documentation about CREATE INDEX CONCURRENTLY.
Clarify the description of which transactions will block a CREATE INDEX
CONCURRENTLY command from proceeding, and mention that the index might
still not be usable after CREATE INDEX completes. (This happens if the
index build detected broken HOT chains, so that pg_index.indcheckxmin gets
set, and there are open old transactions preventing the xmin horizon from
advancing past the index's initial creation. I didn't want to explain what
broken HOT chains are, though, so I omitted an explanation of exactly when
old transactions prevent the index from being used.)
Per discussion with Chris Travers. Back-patch to all supported branches,
since the same text appears in all of them.
Fujii Masao [Tue, 16 Feb 2016 05:49:47 +0000 (14:49 +0900)]
Correct the formulas for System V IPC parameters SEMMNI and SEMMNS in docs.
In runtime.sgml, the old formulas for calculating the reasonable
values of SEMMNI and SEMMNS were incorrect: they failed to count
the semaphores needed by the checkpointer process (introduced in
9.2) and by the background worker processes (introduced in 9.3).
This commit fixes those formulas so that they count the semaphores
which the checkpointer process and the background worker processes
need.
Report and patch by Kyotaro Horiguchi. Only the patch for 9.3 was
modified by me. Back-patch to 9.2 where the checkpointer process was
added and the number of needed semaphores was increased.
Joe Conway [Tue, 16 Feb 2016 00:37:03 +0000 (16:37 -0800)]
Move DATA entry to correct position
In commit 7b4bfc87 the DATA and DESCR entries for the new
row_security_active() function were inadvertently put after
the PROVOLATILE defines, rather than before as they should
have been placed. Move them up where they belong.
Backpatch to 9.5 where the new entries were introduced.
Andres Freund [Mon, 15 Feb 2016 22:52:38 +0000 (23:52 +0100)]
Allow the WAL writer to flush WAL at a reduced rate.
Commit 4de82f7d7 increased the WAL flush rate, mainly to increase the
likelihood that hint bits can be set quickly. More quickly set hint bits
can reduce contention around the clog et al. But unfortunately the
increased flush rate can have a significant negative performance impact,
I have measured up to a factor of ~4. The reason for this slowdown is
that if there are independent writes to the underlying devices, for
example because shared buffers is a lot smaller than the hot data set,
or because a checkpoint is ongoing, the fdatasync() calls force cache
flushes to be emitted to the storage.
The reduced rate is achieved by flushing WAL only if the last flush
was longer than wal_writer_delay ago, or if more than
wal_writer_flush_after (new GUC) unflushed blocks are pending. Based
on some tests the default for wal_writer_flush_after is 1MB, which
seems to work well both on SSD and rotational media.
To avoid negative performance impact due to 4de82f7d7, an earlier
commit (db76b1e) made SetHintBits() more likely to succeed, preventing
performance regressions in the pgbench tests I performed.
Alvaro Herrera [Mon, 15 Feb 2016 23:33:43 +0000 (20:33 -0300)]
pgbench: avoid FD_ISSET on an invalid file descriptor
The original code wasn't careful to test the file descriptor returned by
PQsocket() for an invalid socket. If an invalid socket did turn up,
that would amount to calling FD_ISSET with fd = -1, whereby undefined
behavior can be invoked.
To fix, test file descriptor for validity and stop further processing if
that fails.
Problem noticed by Coverity.
There is an existing FD_ISSET callsite that does check for invalid
sockets beforehand, but the error message reported by it was
strerror(errno); in testing the aforementioned change, that turns out
to result in "bad socket: Success", which isn't terribly helpful.
Instead use PQerrorMessage() in both places, which is more likely to
contain a useful error message.
Tom Lane [Mon, 15 Feb 2016 22:11:51 +0000 (17:11 -0500)]
Suppress compiler warnings about useless comparison of unsigned to zero.
Reportedly, some compilers warn about tests like "c < 0" if c is unsigned,
and hence complain about the character range checks I added in commit 3bb3f42f3749d40b8d4de65871e8d828b18d4a45. This is a bit of a pain since
the regex library doesn't really want to assume that chr is unsigned.
However, since any such reconfiguration would involve manual edits of
regcustom.h anyway, we can put it on the shoulders of whoever wants to
do that to adjust this new range-checking macro correctly.
Andres Freund [Mon, 15 Feb 2016 21:15:35 +0000 (22:15 +0100)]
Allow SetHintBits() to succeed if the buffer's LSN is new enough.
Previously we only allowed SetHintBits() to succeed if the commit LSN of
the last transaction touching the page has already been flushed to
disk. We can't generally change the LSN of the page, because we don't
necessarily have the required locks on the page. But the required LSN
interlock does not mean the commit record has to be flushed immediately;
it just requires that the commit record will be flushed before the page
is written out. Therefore if the buffer LSN is newer than the commit LSN,
the hint bit can be safely set.
In a number of scenarios (e.g. pgbench) this noticeably increases the
number of hint bits that are set. But more importantly it also keeps the
success rate up when flushing WAL less frequently. That was the original
reason for commit 4de82f7d7, which has negative performance consequences
in a number of scenarios. This will allow a followup commit to reduce
the flush rate.
Fujii Masao [Mon, 15 Feb 2016 17:15:44 +0000 (02:15 +0900)]
Make concurrent refresh check early that there is a unique index on matview.
In REFRESH MATERIALIZED VIEW command, CONCURRENTLY option is only
allowed if there is at least one unique index with no WHERE clause on
one or more columns of the matview. Previously, concurrent refresh
checked for the existence of a unique index on the matview only after
filling the data into the new snapshot, i.e., after calling
refresh_matview_datafill(). So, when there was no unique index, we
could wait a long time before detecting that and raising the error,
which was a waste of time.
To eliminate this wasted time, this commit changes concurrent refresh
so that it checks for the existence of a unique index at the beginning
of the refresh operation, i.e., before starting any time-consuming
jobs. If the CONCURRENTLY option is not allowed due to the lack of a
unique index, concurrent refresh detects that immediately and emits an
error.
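A sketch of the resulting behavior (names are hypothetical):

    CREATE MATERIALIZED VIEW mv AS SELECT id, sum(x) AS total FROM t GROUP BY id;
    REFRESH MATERIALIZED VIEW CONCURRENTLY mv;  -- now errors immediately
    CREATE UNIQUE INDEX ON mv (id);
    REFRESH MATERIALIZED VIEW CONCURRENTLY mv;  -- OK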
Author: Masahiko Sawada
Reviewed-by: Michael Paquier, Fujii Masao
Tom Lane [Sat, 13 Feb 2016 20:42:31 +0000 (15:42 -0500)]
Make GetLockStatusData's header comment resemble reality.
The API spec for this function was changed completely (and for the better)
by commit 3cba8999b343648c4c528432ab3d51400194e93b, but it didn't bother
with anything as mundane as updating the comments.
Joe Conway [Fri, 12 Feb 2016 22:23:59 +0000 (14:23 -0800)]
Change delimiter used for display of NextXID
NextXID has been rendered in the form of a pg_lsn even though it
really is not. This can cause confusion, so change the format from
%u/%u to %u:%u, per discussion on hackers.
Complaint by me, patch by me and Bruce, reviewed by Michael Paquier
and Alvaro. Applied to HEAD only.
Author: Joe Conway, Bruce Momjian
Reviewed-by: Michael Paquier, Alvaro Herrera
Backpatch-through: master
Tom Lane [Fri, 12 Feb 2016 22:22:42 +0000 (17:22 -0500)]
Increase deadlock_timeout some more in the deadlock-hard isolation test.
The previous value of 5s is inadequate for the buildfarm's
CLOBBER_CACHE_ALWAYS animals: they take long enough to do the is-it-waiting
queries that the timeout expires, allowing the database state to change,
before isolationtester is done looking. Perhaps 10s will be enough.
(If it isn't, I'm inclined to reduce the number of sessions involved.)
Tom Lane [Fri, 12 Feb 2016 22:12:23 +0000 (17:12 -0500)]
Revert "isolationtester: don't repeat the is-it-waiting query when retrying a step."
This mostly reverts commit 9c9782f066e0ce5424b8706df2cce147cb78170f.
I left in the parts that rearranged removal of completed waiting steps;
but the idea of not rechecking a step's blocked-ness isn't working.
Tom Lane [Fri, 12 Feb 2016 22:02:59 +0000 (17:02 -0500)]
Revert "Still further tweaking of deadlock isolation tests."
This reverts commit d03130d378b5fb071d231a7822784ad87268583a.
That was dependent on an isolationtester.c change that now proves
to be broken; we will need to find another solution.
Alvaro Herrera [Fri, 12 Feb 2016 20:30:46 +0000 (17:30 -0300)]
pgbench: cleanup use of a "logfile" parameter
There is no reason to have the per-thread logfile file pointer as a
separate parameter in various functions: it's much simpler to put it in
the per-thread state struct instead, which is already being passed to
all functions that need the log file anyway. Change the callsites in
which it was used as a boolean to test whether logging is active, so
that they use the use_log global variable instead.
No backpatch, even though this exists since commit a887c486d5df of March
2010, because this is just for cleanliness' sake and the surrounding
code has been modified a lot recently anyway.
Alvaro Herrera [Fri, 12 Feb 2016 20:14:45 +0000 (17:14 -0300)]
pgbench: fix segfault with empty sql file
Commit 1d0c3b3f8a introduced a bug that causes pgbench to crash if an
empty script file is specified. Fix it by rejecting such files at
startup, which is the historical and intended behavior.
Reported-By: Jeff Janes
Discussion: https://www.postgresql.org/message-id/CAMkU=1zxKUbLPOt9hQWFp14pTc=V0cGo2GQBbn2GsK2Pu+8ZfA@mail.gmail.com
Tom Lane [Fri, 12 Feb 2016 19:19:57 +0000 (14:19 -0500)]
Still further tweaking of deadlock isolation tests.
It turns out that there is a second race condition in the new deadlock-hard
test: once the deadlock detector fires, it's uncertain whether step s7a8 or
step s8a1 will report first, because killing s8's transaction unblocks s7.
So far, s7 has only been seen to report first in CLOBBER_CACHE_ALWAYS
builds, but it's pretty reproducible there, and in theory it should
sometimes occur in normal builds too. If s7 were a bit slower than usual,
that could also break the test, since the existing expected-file assumes
that we'll see s7a8 report the first time we check it after s8a1 completes.
To fix, add a post-lock delay to s7a8.
Tom Lane [Fri, 12 Feb 2016 19:10:36 +0000 (14:10 -0500)]
isolationtester: don't repeat the is-it-waiting query when retrying a step.
If we're retrying a step, then we already decided it was blocked on a lock,
and there's no need to recheck that. The original coding of commit 38f8bdcac4982215beb9f65a19debecaf22fd470 resulted in a large number of
is-it-waiting queries when dealing with multiple concurrently-blocked
sessions, which is fairly pointless and also results in test failures in
CLOBBER_CACHE_ALWAYS builds, where the is-it-waiting query is quite slow.
This definition also permits appending pg_sleep() calls to steps where it's
needed to control the order of finish of concurrent steps. Before, that
did not work nicely because we'd decide that a step performing a sleep was
not blocked and hang up waiting for it to finish, rather than noticing the
completion of the concurrent step we're supposed to notice first.
In passing, revise handling of removal of completed waiting steps
to make it a bit less messy.
Robert Haas [Fri, 12 Feb 2016 14:31:16 +0000 (09:31 -0500)]
Introduce extensible node types.
An extensible node is always tagged T_Extensible, but the extnodename
field identifies it more specifically; it may also include arbitrary
private data. Extensible nodes can be copied, tested for equality,
serialized, and deserialized, but the core system doesn't know
anything about them otherwise. Some extensions may find it useful to
include these nodes in fdw_private or custom_private lists in lieu of
arm-wrestling their data into a format that the core code can
understand.
Along the way, so as not to burden the authors of such extensible
node types too much, expose the functions for writing serialized
tokens, and for serializing and deserializing bitmapsets.
KaiGai Kohei, per a design suggested by me. Reviewed by Andres Freund
and by me, and further edited by me.
Tom Lane [Fri, 12 Feb 2016 04:21:33 +0000 (23:21 -0500)]
Further tweaking of deadlock isolation tests.
The new deadlock-soft-2 test has a timing dependency too: it supposes
that isolationtester will detect step s1b as waiting before the deadlock
detector runs and grants it the lock. Adjust deadlock_timeout to ensure
that that's true even in CLOBBER_CACHE_ALWAYS builds, where the wait
detection query is quite slow. Per buildfarm member jaguarundi.
Tom Lane [Thu, 11 Feb 2016 22:52:03 +0000 (17:52 -0500)]
Refactor check_functional_grouping() to use get_primary_key_attnos().
If we ever get around to allowing functional dependency to be proven
from other things besides simple primary keys, this code will need to
be rethought, but that was true anyway. In the meantime, we might as
well not have two very-similar routines for scanning pg_constraint.
Tom Lane [Thu, 11 Feb 2016 22:34:59 +0000 (17:34 -0500)]
Remove GROUP BY columns that are functionally dependent on other columns.
If a GROUP BY clause includes all columns of a non-deferred primary key,
as well as other columns of the same relation, those other columns are
redundant and can be dropped from the grouping; the pkey is enough to
ensure that each row of the table corresponds to a separate group.
Getting rid of the excess columns will reduce the cost of the sorting or
hashing needed to implement GROUP BY, and can indeed remove the need for
a sort step altogether.
This seems worth testing for since many query authors are not aware of
the GROUP-BY-primary-key exception to the rule about queries not being
allowed to reference non-grouped-by columns in their targetlists or
HAVING clauses. Thus, redundant GROUP BY items are not uncommon. Also,
we can make the test pretty cheap in most queries where it won't help
by not looking up a rel's primary key until we've found that at least
two of its columns are in GROUP BY.
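For example (a sketch):

    CREATE TABLE emp (id int PRIMARY KEY, name text, dept text);
    -- name and dept are functionally dependent on the pkey, so this is
    -- now planned as though it were GROUP BY id alone:
    SELECT id, name, dept, count(*) FROM emp GROUP BY id, name, dept;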
Tom Lane [Thu, 11 Feb 2016 20:51:28 +0000 (15:51 -0500)]
Move pg_constraint.h function declarations to new file pg_constraint_fn.h.
A pending patch requires exporting a function returning Bitmapset from
catalog/pg_constraint.c. As things stand, that would mean including
nodes/bitmapset.h in pg_constraint.h, which might be hazardous for the
client-side includability of that header. It's not entirely clear whether
any client-side code needs to include pg_constraint.h, but it seems prudent
to assume that there is some such code somewhere. Therefore, split off the
function definitions into a new file pg_constraint_fn.h, similarly to what
we've done for some other catalog header files.
Tom Lane [Thu, 11 Feb 2016 19:14:07 +0000 (14:14 -0500)]
Shift the responsibility for emitting "database system is shut down".
Historically this message has been emitted at the end of ShutdownXLOG().
That's not an insane place for it in a standalone backend, but in the
postmaster environment we've grown a fair amount of stuff that happens
later, including archiver/walsender shutdown, stats collector shutdown,
etc. Recent buildfarm experimentation showed that on slower machines
there could be many seconds' delay between finishing ShutdownXLOG() and
actual postmaster exit. That's fairly confusing, both for testing
purposes and for DBAs. Hence, move the code that prints this message
into UnlinkLockFiles(), so that it comes out just after we remove the
postmaster's pidfile. That is a more appropriate definition of "is shut
down" from the point of view of "pg_ctl stop", for example. In general,
removing the pidfile should be the last externally-visible action of
either a postmaster or a standalone backend; compare commit d73d14c271653dff10c349738df79ea03b85236c for instance. So this seems
like a reasonably future-proof approach.
Robert Haas [Thu, 11 Feb 2016 19:07:33 +0000 (14:07 -0500)]
Use separate lwlock tranches for buffer, lock, and predicate lock managers.
This finishes the work - spread across many commits over the last
several months - of putting each type of lock other than the named
individual locks into a separate tranche.
Tom Lane [Thu, 11 Feb 2016 16:59:11 +0000 (11:59 -0500)]
Make new deadlock isolation test more reproducible.
The original formulation of 4c9864b9b4d87d02f07f40bb27976da737afdcab
was extremely timing-sensitive, because it arranged for the deadlock
detector to be running (and possibly unblocking the current query)
at almost exactly the same time as isolationtester would be probing
to see if the query is blocked. The committed expected-file assumed
that the deadlock detection would finish first, but we see the opposite
on both fast and slow buildfarm animals. Adjust the deadlock timeout
settings to make it predictable that isolationtester *will* see the
query as waiting before deadlock detection unblocks it.
Tom Lane [Thu, 11 Feb 2016 16:30:46 +0000 (11:30 -0500)]
Code review for isolationtester changes.
Fix a few oversights in 38f8bdcac4982215beb9f65a19debecaf22fd470:
don't leak memory in run_permutation(), remember when we've issued
a cancel rather than issuing another one every 10ms,
fix some typos in comments.
Robert Haas [Thu, 11 Feb 2016 13:55:24 +0000 (08:55 -0500)]
Rename PGPROC fields related to group XID clearing again.
Commit 0e141c0fbb211bdd23783afa731e3eef95c9ad7a introduced a new
facility to reduce ProcArrayLock contention by clearing several XIDs
from the ProcArray under a single lock acquisition. The names
initially chosen were deemed not to be very good choices, so commit 4aec49899e5782247e134f94ce1c6ee926f88e1c renamed them. But now it
seems like we still didn't get it right. A pending patch wants to
add similar infrastructure for batching CLOG updates, so the names
need to be clear enough to allow a new set of structure members with
a related purpose.
Robert Haas [Thu, 11 Feb 2016 13:36:30 +0000 (08:36 -0500)]
Modify the isolation tester so that multiple sessions can wait.
This allows testing of deadlock scenarios. Scenarios that would
previously have been considered invalid are now simply taken as a
scenario in which more than one backend will wait.
Robert Haas [Thu, 11 Feb 2016 13:23:46 +0000 (08:23 -0500)]
Specify permutations for isolation tests with "invalid" permutations.
This is a necessary prerequisite for forthcoming changes to allow deadlock
scenarios to be tested by the isolation tester. It is also a good idea on
general principle, since these scenarios add no useful test coverage not
provided by other scenarios, but do take time to execute.
Noah Misch [Thu, 11 Feb 2016 01:34:57 +0000 (20:34 -0500)]
In pg_rewind test suite, triple promote timeout to 90s.
Thirty seconds was not consistently enough for promotion to complete on
buildfarm members sungazer and tern. Experiments suggest 43s would have
been enough. Back-patch to 9.5, where pg_rewind was introduced.
Noah Misch [Thu, 11 Feb 2016 01:34:02 +0000 (20:34 -0500)]
Accept pg_ctl timeout from the PGCTLTIMEOUT environment variable.
Many automated test suites call pg_ctl. Buildfarm members axolotl,
hornet, mandrill, shearwater, sungazer and tern have failed when server
shutdown took longer than the pg_ctl default 60s timeout. This addition
permits slow hosts to easily raise the timeout without us editing a
--timeout argument into every test suite pg_ctl call. Back-patch to 9.1
(all supported versions) for the sake of automated testing.
Tom Lane [Thu, 11 Feb 2016 00:30:11 +0000 (19:30 -0500)]
Avoid use of sscanf() to parse ispell dictionary files.
It turns out that on FreeBSD-derived platforms (including OS X), the
*scanf() family of functions is pretty much brain-dead about multibyte
characters. In particular it will apply isspace() to individual bytes
of input even when those bytes are part of a multibyte character, thus
allowing false recognition of a field-terminating space.
We appear to have little alternative other than instituting a coding
rule that *scanf() is not to be used if the input string might contain
multibyte characters. (There was some discussion of relying on "%ls",
but that probably just moves the portability problem somewhere else,
and besides it doesn't fully prevent BSD *scanf() from using isspace().)
This patch is a down payment on that: it gets rid of use of sscanf()
to parse ispell dictionary files, which are certainly at great risk
of having a problem. The code is cleaner this way anyway, though
a bit longer.
In passing, improve a few comments.
Report and patch by Artur Zakirov, reviewed and somewhat tweaked by me.
Back-patch to all supported branches.
Robert Haas [Wed, 10 Feb 2016 14:58:09 +0000 (09:58 -0500)]
Code cleanup in the wake of recent LWLock refactoring.
As of commit c1772ad9225641c921545b35c84ee478c326b95e, there's no
longer any way of requesting additional LWLocks in the main tranche,
so we don't need NumLWLocks() or LWLockAssign() any more. Also,
some of the allocation counters that we had previously aren't needed
any more either.
Tom Lane [Wed, 10 Feb 2016 00:36:21 +0000 (19:36 -0500)]
Add still more chattiness in server shutdown.
Further investigation says that there may be some slow operations after
we've finished ShutdownXLOG(), so add some more log messages to try to
isolate that. This is all temporary code too.
Robert Haas [Tue, 9 Feb 2016 19:00:50 +0000 (14:00 -0500)]
postgres_fdw: Push down joins to remote servers.
If we've got a relatively straightforward join between two tables,
this pushes that join down to the remote server instead of fetching
the rows for each table and performing the join locally. Some cases
are not handled yet, such as SEMI and ANTI joins. Also, we don't
yet attempt to create presorted join paths or parameterized join
paths even though these options do get tried for a base relation
scan. Nevertheless, this seems likely to be a very significant win
in many practical cases.
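For instance (a sketch; ft1 and ft2 stand for foreign tables on the same
remote server):

    SELECT f1.a, f2.b
    FROM ft1 f1 JOIN ft2 f2 ON f1.id = f2.id;
    -- EXPLAIN now shows a single Foreign Scan over the join instead of
    -- two foreign scans joined locally.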
Shigeru Hanada and Ashutosh Bapat, reviewed by Robert Haas, with
additional review at various points by Tom Lane, Etsuro Fujita,
KaiGai Kohei, and Jeevan Chalke.
Tom Lane [Tue, 9 Feb 2016 16:21:46 +0000 (11:21 -0500)]
Add more chattiness in server shutdown.
Early returns from the buildfarm show that there's a bit of a gap in the
logging I added in 3971f64843b02e4a: the portion of CreateCheckPoint()
after CheckPointGuts() can take a fair amount of time. Add a few more
log messages in that section of code. This too shall be reverted later.
Tom Lane [Mon, 8 Feb 2016 23:43:11 +0000 (18:43 -0500)]
Temporarily make pg_ctl and server shutdown a whole lot chattier.
This is a quick hack, due to be reverted when its purpose has been served,
to try to gather information about why some of the buildfarm critters
regularly fail with "postmaster does not shut down" complaints. Maybe they
are just really overloaded, but maybe something else is going on. Hence,
instrument pg_ctl to print the current time when it starts waiting for
postmaster shutdown and when it gives up, and add a lot of logging of the
current time in the server's checkpoint and shutdown code paths.
No attempt has been made to make this pretty. I'm not even totally sure
if it will build on Windows, but we'll soon find out.
Tom Lane [Mon, 8 Feb 2016 20:15:56 +0000 (15:15 -0500)]
Rename typedef "string" to "VarString".
Since pgindent treats typedef names as global, the original coding of b47b4dbf683f13e6 would have had rather nasty effects on the formatting
of other files in which "string" is used as a variable or field name.
Use a less generic name for this typedef, and rename some other
identifiers to match.
Tom Lane [Mon, 8 Feb 2016 15:25:40 +0000 (10:25 -0500)]
Fix some regex issues with out-of-range characters and large char ranges.
Previously, our regex code defined CHR_MAX as 0xfffffffe, which is a
bad choice because it is outside the range of type "celt" (int32).
Characters approaching that limit could lead to infinite loops in logic
such as "for (c = a; c <= b; c++)" where c is of type celt but the
range bounds are chr. Such loops will work safely only if CHR_MAX+1
is representable in celt, since c must advance to beyond b before the
loop will exit.
Fortunately, there seems no reason not to restrict CHR_MAX to 0x7ffffffe.
It's highly unlikely that Unicode will ever assign codes that high, and
none of our other backend encodings need characters beyond that either.
In addition to modifying the macro, we have to explicitly enforce character
range restrictions on the values of \u, \U, and \x escape sequences, else
the limit is trivially bypassed.
Also, the code for expanding case-independent character ranges in bracket
expressions had a potential integer overflow in its calculation of the
number of characters it could generate, which could lead to allocating too
small a character vector and then overwriting memory. An attacker with the
ability to supply arbitrary regex patterns could easily cause transient DOS
via server crashes, and the possibility for privilege escalation has not
been ruled out.
Quite aside from the integer-overflow problem, the range expansion code was
unnecessarily inefficient in that it always produced a result consisting of
individual characters, abandoning the knowledge that we had a range to
start with. If the input range is large, this requires excessive memory.
Change it so that the original range is reported as-is, and then we add on
any case-equivalent characters that are outside that range. With this
approach, we can bound the number of individual characters allowed without
sacrificing much. This patch allows at most 100000 individual characters,
which I believe to be more than the number of case pairs existing in
Unicode, so that the restriction will never be hit in practice.
It's still possible for range() to take a while given a large character code
range, so also add statement-cancel detection to its loop. The downstream
function dovec() also lacked cancel detection, and could take a long time
given a large output from range().
Per fuzz testing by Greg Stark. Back-patch to all supported branches.
Fujii Masao [Mon, 8 Feb 2016 14:41:46 +0000 (23:41 +0900)]
Make GIN regression test stable.
Commit 7f46eaf added the regression test which checks that
gin_clean_pending_list() cleans up the GIN pending list and returns >0.
This usually works fine. But if autovacuum comes along and cleans
the list before gin_clean_pending_list() starts, the function may
return 0, and then the regression test may fail.
To fix the problem, this commit disables autovacuum on the target
index of gin_clean_pending_list() by setting autovacuum_enabled
reloption to off when creating the table.
Also this commit sets gin_pending_list_limit reloption to 4MB on
the target index. Otherwise when running "make installcheck" with
small gin_pending_list_limit GUC, insertions of data may trigger
the cleanup of pending list before gin_clean_pending_list() starts
and the function may return 0. This could cause the regression test
to fail.
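Roughly, the test setup now looks like this (a sketch):

    CREATE TABLE gin_test (t text) WITH (autovacuum_enabled = off);
    CREATE INDEX gin_test_idx ON gin_test USING gin (to_tsvector('english', t))
        WITH (gin_pending_list_limit = 4096);  -- in kB, i.e. 4MB
    SELECT gin_clean_pending_list('gin_test_idx');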
Andres Freund [Mon, 8 Feb 2016 10:03:31 +0000 (11:03 +0100)]
Fix overeager pushdown of HAVING clauses when grouping sets are used.
In 61444bfb we started to allow HAVING clauses to be fully pushed down
into WHERE, even when grouping sets are in use. That turns out not to
work correctly, because grouping sets can "produce" NULLs, meaning that
filtering in WHERE and HAVING can have different results, even when no
aggregates or volatile functions are involved.
Instead only allow pushdown of empty grouping sets.
It'd be nice to do better, but the exact mechanics of deciding which
cases are safe are still being debated. It's important to give correct
results till we find a good solution, and such a solution might not be
appropriate for backpatching anyway.
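A sketch of why the general pushdown is unsafe:

    -- In the (b) grouping set, column a comes out NULL, so these two
    -- queries are NOT equivalent:
    SELECT a, b, count(*) FROM t
    GROUP BY GROUPING SETS ((a), (b))
    HAVING a IS NOT NULL;   -- filters grouped rows, removing the (b) groups

    SELECT a, b, count(*) FROM t
    WHERE a IS NOT NULL     -- filters only input rows; (b) groups survive
    GROUP BY GROUPING SETS ((a), (b));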
Bug: #13863
Reported-By: 'wrb'
Diagnosed-By: Dean Rasheed
Author: Andrew Gierth
Reviewed-By: Dean Rasheed and Andres Freund
Discussion: 20160113183558.12989.56904@wrigleys.postgresql.org
Backpatch: 9.5, where grouping sets were introduced
Tom Lane [Sun, 7 Feb 2016 21:02:44 +0000 (16:02 -0500)]
Improve documentation about PRIMARY KEY constraints.
Get rid of the false implication that PRIMARY KEY is exactly equivalent to
UNIQUE + NOT NULL. That was more-or-less true at one time in our
implementation, but the standard doesn't say that, and we've grown various
features (many of them required by spec) that treat a pkey differently from
less-formal constraints. Per recent discussion on pgsql-general.
I failed to resist the temptation to do some other wordsmithing in the
same area.
Tom Lane [Sun, 7 Feb 2016 19:57:24 +0000 (14:57 -0500)]
Fix deparsing of ON CONFLICT arbiter WHERE clauses.
The parser doesn't allow qualification of column names appearing in
these clauses, but ruleutils.c would sometimes qualify them, leading
to dump/reload failures. Per bug #13891 from Onder Kalaci.
(In passing, make stanzas in ruleutils.c that save/restore varprefix
more consistent.)
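A sketch of the construct involved:

    CREATE TABLE t (id int, active bool);
    CREATE UNIQUE INDEX ON t (id) WHERE active;
    -- The arbiter WHERE clause must be deparsed without qualifying
    -- "active", or dump/reload of rules containing it fails:
    INSERT INTO t VALUES (1, true)
    ON CONFLICT (id) WHERE active DO NOTHING;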
Tom Lane [Sun, 7 Feb 2016 17:29:17 +0000 (12:29 -0500)]
ExecHashRemoveNextSkewBucket must physically copy tuples to main hashtable.
Commit 45f6240a8fa9d355 added an assumption in ExecHashIncreaseNumBatches
and ExecHashIncreaseNumBuckets that they could find all tuples in the main
hash table by iterating over the "dense storage" introduced by that patch.
However, ExecHashRemoveNextSkewBucket continued its old practice of simply
re-linking deleted skew tuples into the main table's hashchains. Hence,
such tuples got lost during any subsequent increase in nbatch or nbuckets,
and would never get joined, as reported in bug #13908 from Seth P.
I (tgl) think that the aforesaid commit has got multiple design issues
and should be reworked rather completely; but there is no time for that
right now, so band-aid the problem by making ExecHashRemoveNextSkewBucket
physically copy deleted skew tuples into the "dense storage" arena.
The added test case is able to exhibit the problem by means of fooling the
planner with a WHERE condition that it will underestimate the selectivity
of, causing the initial nbatch estimate to be too small.
Tomas Vondra and Tom Lane. Thanks to David Johnston for initial
investigation into the bug report.