Tom Lane [Fri, 21 Apr 2017 19:55:56 +0000 (15:55 -0400)]
Avoid depending on non-POSIX behavior of fcntl(2).
The POSIX standard does not say that the success return value for
fcntl(F_SETFD) and fcntl(F_SETFL) is zero; it says only that it's not -1.
We had several calls that were making the stronger assumption. Adjust
them to test specifically for -1 for strict spec compliance.
The standard further leaves open the possibility that the O_NONBLOCK
flag bit is not the only active one in F_SETFL's argument. Formally,
therefore, one ought to get the current flags with F_GETFL and store
them back with only the O_NONBLOCK bit changed when trying to change
the nonblock state. In port/noblock.c, we were doing the full pushup
in pg_set_block but not in pg_set_noblock, which is just weird. Make
both of them do it properly, since they have little business making
any assumptions about the socket they're handed. The other places
where we're issuing F_SETFL are working with FDs we just got from
pipe(2), so it's reasonable to assume the FDs' properties are all
default, so I didn't bother adding F_GETFL steps there.
Also, while pg_set_block deserves some points for trying to do things
right, somebody had decided that it'd be even better to cast fcntl's
third argument to "long". Which is completely loony, because POSIX
clearly says the third argument for an F_SETFL call is "int".
Given the lack of field complaints, these missteps apparently are not
of significance on any common platforms. But they're still wrong,
so back-patch to all supported branches.
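A minimal sketch of the spec-compliant pattern described above (the
helper name is illustrative, not PostgreSQL code): error checks test
specifically for -1, and F_SETFL preserves whatever flag bits besides
O_NONBLOCK the descriptor already has.

    #include <fcntl.h>

    static int
    set_nonblock(int sock, int enable)
    {
        int flags = fcntl(sock, F_GETFL);

        if (flags == -1)
            return -1;
        if (enable)
            flags |= O_NONBLOCK;
        else
            flags &= ~O_NONBLOCK;
        /* POSIX only promises "not -1" on success, so don't test for 0 */
        if (fcntl(sock, F_SETFL, flags) == -1)
            return -1;
        return 0;
    }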
Change the on-disk format of SCRAM verifiers to conform to RFC 5803.
It doesn't make any immediate difference to PostgreSQL, but might as well
follow the standard, since one exists. (I looked at RFC 5803 earlier, but
didn't fully understand it back then.)
The new format uses Base64 instead of hex to encode StoredKey and
ServerKey, which makes the verifiers slightly smaller. Using the same
encoding for the salt and the keys also means that you only need one
encoder/decoder instead of two. Although we have code in the backend to
do both, we are talking about teaching libpq how to create SCRAM verifiers
for PQencodePassword(), and libpq doesn't currently have any code for hex
encoding.
Bump catversion, because this renders any existing SCRAM verifiers in
pg_authid invalid.
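For reference, an RFC 5803-style verifier is a single string of the
general shape shown below (placeholders, not a real verifier); after
this commit StoredKey and ServerKey use the same Base64 encoding as
the salt.

    SCRAM-SHA-256$<iteration count>:<Base64 salt>$<Base64 StoredKey>:<Base64 ServerKey>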
Tom Lane [Fri, 21 Apr 2017 18:48:29 +0000 (14:48 -0400)]
Remove long-obsolete catering for platforms without F_SETFD/FD_CLOEXEC.
SUSv2 mandates that <fcntl.h> provide both F_SETFD and FD_CLOEXEC,
so it seems pretty unlikely that any platforms remain without those.
Remove the #ifdef-ery installed by commit 7627b91cd to see if the
buildfarm agrees.
Extended statistics commit 7b504eb282c did not include appropriate
documentation next to where we document regular planner statistics (I
ripped what was submitted before commit and then forgot to put it back),
and while later commit 2686ee1b7ccf added some material, it structurally
depended on what I had ripped out, so the end result wasn't proper.
Fix those problems by shuffling what was added by 2686ee1b7ccf and
including some additional material, so that chapter 14 "Performance
Tips" now describes the types of multivariate statistics we currently
have, and chapter 68 "How the Planner Uses Statistics" shows some
examples. The new text should be more in line with previous material,
in (hopefully) the appropriate depth.
While at it, fix a small bug in pg_statistic_ext docs: one column was
listed in the wrong spot.
Tom Lane [Thu, 20 Apr 2017 18:41:48 +0000 (14:41 -0400)]
Sync pg_ctl documentation and usage message with reality.
Commit 05cd12ed5 ("pg_ctl: Change default to wait for all actions")
was a tad sloppy about updating the documentation to match. The
documentation was also sorely in need of a copy-editing pass, having
been adjusted at different times by different people who took little
care to maintain consistency of style.
Prevent log_replication_commands from causing SQL commands to be logged.
Commit 7c4f524 allowed walsender to execute normal SQL commands
to support the table sync feature in logical replication. Previously,
when log_statement caused such SQL commands to be logged,
log_replication_commands caused them to be logged as well, so such
commands were unexpectedly logged twice when both settings were enabled.
This commit forces log_replication_commands to log only replication
commands, to prevent normal SQL commands from being logged twice.
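With the fix, a configuration like the following (an illustrative
postgresql.conf excerpt) logs each walsender SQL command exactly once:
log_statement covers the SQL commands, while log_replication_commands
covers only true replication commands.

    log_statement = 'all'
    log_replication_commands = on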
In pg_subscription, subconninfo, subslotname, subsynccommit and
subpublications are expected not to be NULL. Therefore this patch
adds BKI_FORCE_NOT_NULL markings to them.
This patch is basically unnecessary unless the code has a bug which
wrongly sets any of those columns to NULL. But it's good to have
this as a safeguard.
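As a rough sketch of what such markings look like in a catalog header
(the types and neighboring fields here are illustrative, not the exact
pg_subscription.h contents):

    CATALOG(pg_subscription,6100)
    {
        NameData    subslotname     BKI_FORCE_NOT_NULL; /* slot name on publisher */
        text        subsynccommit   BKI_FORCE_NOT_NULL; /* synchronous_commit setting */
        text        subconninfo     BKI_FORCE_NOT_NULL; /* connection string */
        text        subpublications[1] BKI_FORCE_NOT_NULL; /* subscribed publications */
    } FormData_pg_subscription;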
Don't call the function that may raise an error while holding spinlock.
It's not safe to raise an error while holding a spinlock. But previously
the logical replication worker for table sync called, while holding a
spinlock, a function that reads the system catalog and may raise an
error. That could lead to a situation where the spinlock is never
released and the server hangs.
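The safe pattern is to copy whatever is needed while the spinlock is
held and defer anything that can elog() until after the release; a
hedged sketch (field and variable names are illustrative):

    SpinLockAcquire(&worker->relmutex);
    relid = worker->relid;              /* copy plain data only */
    SpinLockRelease(&worker->relmutex);

    /* catalog access, which may raise an error, happens after release */
    relname = get_rel_name(relid);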
Author: Petr Jelinek
Reviewed-by: Kyotaro Horiguchi and Fujii Masao
Reported-by: Fujii Masao
Discussion: http://postgr.es/m/CAHGQGwFDWh_Qr-q_GEMpD+qH=vYPMdVqw=ZOSY3kX_Pna9R9SA@mail.gmail.com
Tom Lane [Tue, 18 Apr 2017 19:43:56 +0000 (15:43 -0400)]
Fix testing of parallel-safety of SubPlans.
is_parallel_safe() supposed that the only relevant property of a SubPlan
was the parallel safety of the referenced subplan tree. This is wrong:
the testexpr or args subtrees might contain parallel-unsafe stuff, as
demonstrated by the test case added here. However, just recursing into the
subtrees fails in a different way: we'll typically find PARAM_EXEC Params
representing the subplan's output columns in the testexpr. The previous
coding supposed that any Param must be treated as parallel-restricted, so
that a naive attempt at fixing this disabled parallel pushdown of SubPlans
altogether. We must instead determine, for any visited Param, whether it
is one that would be computed by a surrounding SubPlan node; if so, it's
safe to push down along with the SubPlan node.
We might later be able to extend this logic to cope with Params used for
correlated subplans and other cases; but that's a task for v11 or beyond.
Tom Lane [Tue, 18 Apr 2017 17:20:59 +0000 (13:20 -0400)]
Provide an error cursor for "can't call an SRF here" errors.
Since it appears that v10 is going to move the goalposts by some amount
in terms of where you can and can't invoke set-returning functions,
arrange for the executor's "set-valued function called in context that
cannot accept a set" errors to include a syntax position if possible,
pointing to the specific SRF that can't be called where it's located.
The main bit of infrastructure needed for this is to make the query source
text accessible in the executor; but it turns out that commit 4c728f382
already did that. We just need a new function executor_errposition()
modeled on parser_errposition(), and we're ready to rock.
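A sketch of what such a function looks like, by analogy with
parser_errposition() (a sketch built from the description above, not a
verbatim copy of the committed code):

    static int
    executor_errposition(EState *estate, int location)
    {
        int     pos;

        /* No-op if we have no location or no source text */
        if (location < 0 || estate == NULL || estate->es_sourceText == NULL)
            return 0;
        /* Convert byte offset to 1-based character number */
        pos = pg_mbstrlen_with_len(estate->es_sourceText, location) + 1;
        return errposition(pos);
    }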
While experimenting with this, I noted that the error position wasn't
properly reported if it occurred in a plpgsql FOR-over-query loop,
which turned out to be because SPI_cursor_open_internal wasn't providing
an error context callback during PortalStart. Fix that.
There's a whole lot more that could be done with this infrastructure
now that it's there, but this is not the right time in the development
cycle for that sort of work. Hence, resist the temptation to plaster
executor_errposition() calls everywhere ... for the moment.
A collection of small fixes for logical replication.
* Be sure to reset the launcher's pid (LogicalRepCtx->launcher_pid) to 0
even when the launcher emits an error.
* Declare ApplyLauncherWakeup() as a static function because it's called
only in launcher.c.
* Previously IsBackendPid() was used to check whether the launcher's pid
was valid. IsBackendPid() was necessary because of a bug where the
launcher's pid was not reset to 0. Now that that bug is fixed,
IsBackendPid() is no longer necessary, so this patch removes it.
Rename "scram" to "scram-sha-256" in pg_hba.conf and password_encryption.
Per discussion, plain "scram" is confusing because we actually implement
SCRAM-SHA-256 rather than the original SCRAM that uses SHA-1 as the hash
algorithm. If we add support for SCRAM-SHA-512 or some other mechanism in
the SCRAM family in the future, that would become even more confusing.
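After the rename, the settings look like this (illustrative pg_hba.conf
line and postgresql.conf setting):

    # pg_hba.conf
    host    all    all    0.0.0.0/0    scram-sha-256

    # postgresql.conf
    password_encryption = 'scram-sha-256'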
Most of the internal files and functions still use just "scram" as a
shorthand for SCRAM-SHA-256, but I did change PASSWORD_TYPE_SCRAM to
PASSWORD_TYPE_SCRAM_SHA_256, as that could potentially be used by 3rd
party extensions that hook into the password-check hook.
Michael Paquier did this in an earlier version of the SCRAM patch set
already, but I didn't include that in the version that was committed.
Simon Riggs [Tue, 18 Apr 2017 10:14:05 +0000 (11:14 +0100)]
Don't push nextid too far forwards in recovery
Doing so allows various crash possibilities. Fix by avoiding
having PrescanPreparedTransactions() increment
ShmemVariableCache->nextXid when it has no 2PC files.
Bug found by Jeff Janes, diagnosis and patch by Pavan Deolasee,
then patch re-designed for clarity and full accuracy by
Michael Paquier.
Reported-by: Jeff Janes
Author: Pavan Deolasee, Michael Paquier
Discussion: https://postgr.es/m/CAMkU=1zMLnH_i1-PVQ-biZzvNx7VcuatriquEnh7HNk6K8Ss3Q@mail.gmail.com
Peter Eisentraut [Tue, 18 Apr 2017 03:22:04 +0000 (23:22 -0400)]
Set range table for CopyFrom() in tablesync
CopyFrom() needs a range table for formatting certain errors for
constraint violations.
This changes the mechanism of how the range table is passed to the
CopyFrom() executor state. We used to generate the range table and one
entry for the relation manually inside DoCopy(). Now we use
addRangeTableEntryForRelation() to set up the range table and relation
entry for the ParseState, which is then passed down by BeginCopyFrom().
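A hedged sketch of the new setup in DoCopy() (argument lists and
variable names approximate; exact signatures may differ):

    ParseState *pstate = make_parsestate(NULL);

    pstate->p_sourcetext = queryString;
    /* build the RTE for the target relation and add it to the range table */
    addRangeTableEntryForRelation(pstate, rel, NULL, false, false);
    /* BeginCopyFrom() now receives the ParseState, and with it the RTE */
    cstate = BeginCopyFrom(pstate, rel, filename, is_program,
                           NULL, attnamelist, options);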
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reported-by: Euler Taveira <euler@timbira.com.br>
The new pg_statistic_ext catalog reused the column prefix "sta" from
pg_statistic, but this is undesirable, so change the catalog to use
prefix "stx" instead.
Also, rename the column that lists enabled statistic kinds as "stxkind"
rather than "enabled".
Tighten up relation kind checks for extended statistics
We were accepting creation of extended statistics only for regular
tables, but they can usefully be created for foreign tables, partitioned
tables, and materialized views, too. Allow those cases.
While at it, make sure all the rejected cases throw a consistent error
message, and add regression tests for the whole thing.
Author: David Rowley, Álvaro Herrera
Discussion: https://postgr.es/m/CAKJS1f-BmGo410bh5RSPZUvOO0LhmHL2NYmdrC_Jm8pk_FfyCA@mail.gmail.com
Tom Lane [Mon, 17 Apr 2017 19:29:00 +0000 (15:29 -0400)]
Always build a custom plan node's targetlist from the path's pathtarget.
We were applying the use_physical_tlist optimization to all relation
scan plans, even those implemented by custom scan providers. However,
that's a bad idea for a couple of reasons. The custom provider might
be unable to provide columns that it hadn't expected to be asked for
(for example, the custom scan might depend on an index-only scan).
Even more to the point, there's no good reason to suppose that this
"optimization" is a win for a custom scan; whatever the custom provider
is doing is likely not based on simply returning physical heap tuples.
(As a counterexample, if the custom scan is an interface to a column store,
demanding all columns would be a huge loss.) If it is a win, the custom
provider could make that decision for itself and insert a suitable
pathtarget into the path, anyway.
Per discussion with Dmitry Ivanov. Back-patch to 9.5 where custom scan
support was introduced. The argument that the custom provider can adjust
the behavior by changing the pathtarget only applies to 9.6+, but on
balance it seems more likely that use_physical_tlist will hurt custom
scans than help them.
Either because of a previous ALTER TABLE .. SET STATISTICS 0 or because
of being invoked with a partial column list, ANALYZE could fail to
acquire sufficient data to build extended statistics. Previously, this
would draw an ERROR and fail to collect any statistics at all (extended
and regular). Change things so that we raise a WARNING instead, and
remove a hint that was wrong in half the cases.
Reported by: David Rowley
Discussion: https://postgr.es/m/CAKJS1f9Kk0NF6Fg7TA=JUXsjpS9kX6NVu27pb5QDCpOYAvb-Og@mail.gmail.com
Tom Lane [Mon, 17 Apr 2017 03:23:44 +0000 (23:23 -0400)]
Ensure BackgroundWorker struct contents are well-defined.
Coverity complained because bgw.bgw_extra wasn't being filled in by
ApplyLauncherRegister(). The most future-proof fix is to memset the
whole BackgroundWorker struct to zeroes. While at it, let's apply the
same coding rule to other places that set up BackgroundWorker structs;
four out of five had the same or related issues.
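The resulting coding rule looks roughly like this (a sketch; the
particular worker settings are illustrative):

    BackgroundWorker bgw;

    /* zero everything first so unset fields (e.g. bgw_extra) are defined */
    memset(&bgw, 0, sizeof(bgw));
    bgw.bgw_flags = BGWORKER_SHMEM_ACCESS |
        BGWORKER_BACKEND_DATABASE_CONNECTION;
    bgw.bgw_start_time = BgWorkerStart_RecoveryFinished;
    snprintf(bgw.bgw_library_name, BGW_MAXLEN, "postgres");
    snprintf(bgw.bgw_function_name, BGW_MAXLEN, "ApplyLauncherMain");
    snprintf(bgw.bgw_name, BGW_MAXLEN, "logical replication launcher");
    bgw.bgw_restart_time = 5;

    RegisterBackgroundWorker(&bgw);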
Tom Lane [Sun, 16 Apr 2017 18:02:47 +0000 (14:02 -0400)]
Sync addRangeTableEntryForENR() with its peer functions.
addRangeTableEntryForENR had a check for pstate != NULL, which Coverity
pointed out was rather useless since it'd already dereferenced pstate
before that. More to the point, we'd established policy in commit bc93ac12c that we'd require non-NULL pstate for all addRangeTableEntryFor*
functions; this test was evidently copied-and-pasted from some older
version of one of those functions. Make it look more like the others.
In passing, make an elog message look more like the rest of the code,
too.
Tom Lane [Sat, 15 Apr 2017 21:27:38 +0000 (17:27 -0400)]
Provide a way to control SysV shmem attach address in EXEC_BACKEND builds.
In standard non-Windows builds, there's no particular reason to care what
address the kernel chooses to map the shared memory segment at. However,
when building with EXEC_BACKEND, there's a risk that the chosen address
won't be available in all child processes. Linux with ASLR enabled (which
it is by default) seems particularly at risk because it puts shmem segments
into the same area where it maps shared libraries. We can work around
that by specifying a mapping address that's outside the range where
shared libraries could get mapped. On x86_64 Linux, 0x7e0000000000
seems to work well.
This is only meant for testing/debugging purposes, so it doesn't seem
necessary to go as far as providing a GUC (or any user-visible
documentation, though we might change that later). Instead, it's just
controlled by setting an environment variable PG_SHMEM_ADDR to the
desired attach address.
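Conceptually, the attach code now does something like this (a sketch,
not the exact sysv_shmem.c coding):

    void   *requested = NULL;
    char   *addr_str = getenv("PG_SHMEM_ADDR");

    if (addr_str != NULL)
        requested = (void *) strtoul(addr_str, NULL, 0); /* e.g. 0x7e0000000000 */

    /* shmat() tries the requested address; NULL lets the kernel choose */
    memAddress = shmat(shmid, requested, 0);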
Back-patch to all supported branches, since the point here is to
remove intermittent buildfarm failures on EXEC_BACKEND animals.
Owners of affected animals will need to add a suitable setting of
PG_SHMEM_ADDR to their build_env configuration.
Tom Lane [Sat, 15 Apr 2017 18:22:26 +0000 (14:22 -0400)]
Fix erroneous cross-reference in comment.
Seems to have been introduced in commit c219d9b0a. I think there indeed
was a "tupbasics.h" in some early drafts of that refactoring, but it
didn't survive into the committed version.
Andrew Dunstan [Sat, 15 Apr 2017 13:47:36 +0000 (09:47 -0400)]
Downcase "Wincrypt.h"
This is consistent with how we refer to other Windows include files, and
prevents a failure when cross-compiling on a system with case sensitive
file names.
Tom Lane [Sat, 15 Apr 2017 03:50:16 +0000 (23:50 -0400)]
Avoid passing function pointers across process boundaries.
We'd already recognized that we can't pass function pointers across process
boundaries for functions in loadable modules, since a shared library could
get loaded at different addresses in different processes. But actually the
practice doesn't work for functions in the core backend either, if we're
using EXEC_BACKEND. This is the cause of recent failures on buildfarm
member culicidae. Switch to passing a string function name in all cases.
Something like this needs to be back-patched into 9.6, but let's see
if the buildfarm likes it first.
Petr Jelinek, with a bunch of basically-cosmetic adjustments by me
Tom Lane [Fri, 14 Apr 2017 21:51:25 +0000 (17:51 -0400)]
Use one transaction while reading postgres.bki, not one per line.
AFAICT, the only actual benefit of closing a bootstrap transaction
is to reclaim transient memory. We can do that a lot more cheaply
by just doing a MemoryContextReset on a suitable context. This
gets the runtime of the "bootstrap" phase of initdb down to the
point where, at least by eyeball, it's quite negligible compared
to the rest of the phases. Per discussion with Andres Freund.
Tom Lane [Fri, 14 Apr 2017 21:04:25 +0000 (17:04 -0400)]
Clean up manipulations of hash indexes' hasho_flag field.
Standardize on testing a hash index page's type by doing

    (opaque->hasho_flag & LH_PAGE_TYPE) == LH_xxx_PAGE

Various places were taking shortcuts like

    opaque->hasho_flag & LH_BUCKET_PAGE

which, while not actually wrong, is still bad practice because it
encourages use of

    opaque->hasho_flag & LH_UNUSED_PAGE

which *is* wrong (LH_UNUSED_PAGE == 0, so the above is constant false).
hash_xlog.c's hash_mask() contained such an incorrect test.
This also ensures that we mask out the additional flag bits that
hasho_flag has accreted since 9.6. pgstattuple's pgstat_hash_page(),
for one, was failing to do that and was thus actively broken.
Also fix assorted comments that hadn't been updated to reflect the
extended usage of hasho_flag, and fix some macros that were testing
just "(hasho_flag & bit)" to use the less dangerous, project-approved
form "((hasho_flag & bit) != 0)".
Coverity found the bug in hash_mask(); I noted the one in
pgstat_hash_page() through code reading.
Tom Lane [Fri, 14 Apr 2017 18:52:03 +0000 (14:52 -0400)]
Further fix pg_trgm's extraction of trigrams from regular expressions.
Commit 9e43e8714 turns out to have been insufficient: not only is it
necessary to track tentative parent links while considering a set of
arc removals, but it's necessary to track tentative flag additions
as well. This is because we always merge arc target states into
arc source states; therefore, when considering a merge of the final
state with some other, it is the other state that will acquire a new
TSTATE_FIN bit. If there's another arc for the same color trigram
that would cause merging of that state with the initial state, we
failed to recognize the problem. The test cases for the prior commit
evidently only exercised situations where a tentative merge with the
initial state occurs before one with the final state. If it goes the
other way around, we'll happily merge the initial and final states,
either producing a broken final graph that would never match anything,
or triggering the Assert added by the prior commit.
It's tempting to consider switching the merge direction when the merge
involves the final state, but I lack the time to analyze that idea in
detail. Instead just keep track of the flag changes that would result
from proposed merges, in the same way that the prior commit tracked
proposed parent links.
Along the way, add some more debugging support, because I'm not entirely
confident that this is the last bug here. And tweak matters so that
the transformed.dot file uses small integers rather than pointer values
to identify states; that makes it more readable if you're just eyeballing
it rather than fooling with Graphviz. And rename a couple of identically
named struct fields to reduce confusion.
Per report from Corey Csuhta. Add a test case based on his example.
(Note: this case does not trigger the bug under 9.3, apparently because
its different measurement of costs causes it to stop merging states before
it hits the failure. I spent some time trying to find a variant that would
fail in 9.3, without success; but I'm sure such cases exist.)
Like the previous patch, back-patch to 9.3 where this code was added.
Tom Lane [Thu, 13 Apr 2017 21:18:35 +0000 (17:18 -0400)]
Fix regexport.c to behave sanely with lookaround constraints.
regexport.c thought it could just ignore LACON arcs, but the correct
behavior is to treat them as satisfiable while consuming zero input
(rather reminiscently of commit 9f1e642d5). Otherwise, the emitted
simplified-NFA representation may contain no paths leading from initial
to final state, which unsurprisingly confuses pg_trgm, as seen in
bug #14623 from Jeff Janes.
Since regexport's output representation has no concept of an arc that
consumes zero input, recurse internally to find the next normal arc(s)
after any LACON transitions. We'd be forced into changing that
representation if a LACON could be the last arc reaching the final
state, but fortunately the regex library never builds NFAs with such
a configuration, so there always is a next normal arc.
Back-patch to 9.3 where this logic was introduced.
This contains some protocol changes to SASL authentication (which is new
in v10):
* For future-proofing, in the AuthenticationSASL message that begins SASL
authentication, provide a list of SASL mechanisms that the server
supports, for the client to choose from. Currently, it's always just
SCRAM-SHA-256.
* Add a separate authentication message type for the final server->client
SASL message, which the client doesn't need to respond to. This makes
it unambiguous whether the client is supposed to send a response or not.
The SASL mechanism should know that anyway, but better to be explicit.
Also, in the server, support clients that don't send an Initial Client
response in the first SASLInitialResponse message. The server is supposed
to first send an empty request in that case, to which the client will
respond with the data that usually comes in the Initial Client Response.
libpq uses the Initial Client Response field and doesn't need this, and I
would expect any other sensible implementation to use the Initial Client
Response too, but let's follow the SASL spec.
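The resulting message flow, assuming the client picks SCRAM-SHA-256 and
sends an Initial Client Response (a simplified sketch of the exchange):

    S: AuthenticationSASL          (lists mechanisms; currently SCRAM-SHA-256)
    C: SASLInitialResponse         (chosen mechanism + client-first-message)
    S: AuthenticationSASLContinue  (server-first-message)
    C: SASLResponse                (client-final-message)
    S: AuthenticationSASLFinal     (server-final-message; no response expected)
    S: AuthenticationOk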
Improve the documentation on SASL authentication in protocol. Add a
section describing the SASL message flow, and some details on our
SCRAM-SHA-256 implementation.
Document the different kinds of PasswordMessages that the frontend sends
in different phases of SASL authentication, as well as GSS/SSPI
authentication as separate message formats. Even though they're all 'p'
messages, and the exact format depends on the context, describing them as
separate message formats makes the documentation more clear.
Reviewed by Michael Paquier and Álvaro Hernández Tortosa.
Move the responsibility of reading the data from the authentication request
message from PQconnectPoll() to pg_fe_sendauth(). This way, PQconnectPoll()
doesn't need to know about all the different authentication request types,
and we don't need the extra fields in the pg_conn struct to pass the data
from PQconnectPoll() to pg_fe_sendauth() anymore.
Tom Lane [Thu, 13 Apr 2017 16:07:47 +0000 (12:07 -0400)]
Move bootstrap-time lookup of regproc OIDs into genbki.pl.
Formerly, the bootstrap backend looked up the OIDs corresponding to
names in regproc catalog entries using brute-force searches of pg_proc.
It was somewhat remarkable that that worked at all, since it was used
while populating other pretty-fundamental catalogs like pg_operator.
And it was also quite slow, and getting slower as pg_proc gets bigger.
This patch moves the lookup work into genbki.pl, so that the values in
postgres.bki for regproc columns are always numeric OIDs, an option
that regprocin() already supported. Perl isn't the world's speediest
language, so this about doubles the time needed to run genbki.pl (from
0.3 to 0.6 sec on my machine). But we only do that at most once per
build. The time needed to run initdb drops significantly --- on my
machine, initdb --no-sync goes from 1.8 to 1.3 seconds. So this is
a small net win even for just one initdb per build, and it becomes
quite a nice win for test sequences requiring many initdb runs.
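For example, a regproc column that used to be emitted into postgres.bki
as a function name is now emitted as the corresponding numeric OID,
which regprocin() accepts directly (the pair shown is illustrative):

    boolin  ->  1242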
Strip out the now-dead code for brute-force catalog searching in
regprocin. We'd also cargo-culted similar logic into regoperin
and some (not all) of the other reg*in functions. That is all
dead code too since we currently have no need to load such values
during bootstrap. I removed it all, reasoning that if we ever
need such functionality it'd be much better to do it in a similar
way to this patch.
There might be some simplifications possible in the backend now that
regprocin doesn't require doing catalog reads so early in bootstrap.
I've not looked into that, though.
Andreas Karlsson, with some small adjustments by me
Peter Eisentraut [Thu, 13 Apr 2017 02:12:30 +0000 (22:12 -0400)]
pg_dump: Always dump subscriptions NOCONNECT
This removes the pg_dump option --no-subscription-connect and makes it
the default. Dumping a subscription so that it activates right away
when restored is not very useful, because the state of the publication
server is unclear.
Free each SASL message after sending it. It's not a lot of wasted memory,
and it's short-lived, but the authentication code in general tries to
pfree() stuff, so let's follow the example.
Adding the pfree() revealed a little bug in build_server_first_message().
It attempts to keep a copy of the sent message, but it was missing a
pstrdup(), so the pointer started to dangle after the pfree() was added
into CheckSCRAMAuth().
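The fix amounts to the usual keep-your-own-copy idiom (a sketch; the
field and buffer names are illustrative):

    /* the caller will pfree() the buffer after sending it, so copy it */
    state->server_first_message = pstrdup(buf.data);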
Reword comments and debug messages slightly, while we're at it.
It was created as an equivalent of pg_stats, but since the code underlying
pg_statistic_ext is more convenient than that for pg_statistic,
pg_stats_ext is no longer useful.
Author: David Rowley
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/CAKJS1f9zAkPUf9nQrqpFBAsrOHvb5eYa2FVNsmCJy1wegcO_TQ@mail.gmail.com
Move pg_stat_progress_vacuum to the table of Dynamic Statistics Views in doc.
Previously the description of pg_stat_progress_vacuum was in the table
of "Collected Statistics Views" in the doc. But since it reports dynamic
information, i.e., the current progress of VACUUM, its description should
be in the table of "Dynamic Statistics Views".
Back-patch to 9.6 where pg_stat_progress_vacuum was added.
Author: Amit Langote
Discussion: http://postgr.es/m/7ab51b59-8d4d-6193-c60a-b75f222efb12@lab.ntt.co.jp
Tom Lane [Wed, 12 Apr 2017 20:17:53 +0000 (16:17 -0400)]
Speed up hash_index regression test.
Commit f5ab0a14e made this test take substantially longer than it used
to. With a bit more care, we can get the runtime back down while
achieving the same, or even a bit better, code coverage.
Tom Lane [Wed, 12 Apr 2017 20:06:49 +0000 (16:06 -0400)]
Avoid transferring parallel-unsafe subplans to parallel workers.
Commit 5e6d8d2bb allowed parallel workers to execute parallel-safe
subplans, but it transmitted the query's entire list of subplans to
the worker(s). Since execMain.c blindly does ExecInitNode and later
ExecEndNode on every list element, this resulted in parallel-unsafe plan
nodes nonetheless getting started up and shut down in parallel workers.
That seems mostly harmless as far as core plan node types go (but
maybe not so much for Gather?). But it resulted in postgres_fdw
opening and then closing extra remote connections, and it's likely
that other non-parallel-safe FDWs or custom scan providers would have
worse reactions.
To fix, just make ExecSerializePlan replace parallel-unsafe subplans
with NULLs in the cut-down plan tree that it transmits to workers.
This relies on ExecInitNode and ExecEndNode to do nothing on NULL
input, but they do anyway. If anything else is touching the dropped
subplans in a parallel worker, that would be a bug to be fixed.
(This thus provides a strong guarantee that we won't try to do
something with a parallel-unsafe subplan in a worker.)
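A sketch of the substitution in ExecSerializePlan (variable names
approximate):

    ListCell   *lc;

    /* replace parallel-unsafe subplans with NULL in the transmitted tree */
    foreach(lc, estate->es_plannedstmt->subplans)
    {
        Plan       *subplan = (Plan *) lfirst(lc);

        if (subplan && !subplan->parallel_safe)
            subplan = NULL;
        pstmt->subplans = lappend(pstmt->subplans, subplan);
    }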
This is, I think, the last fix directly occasioned by Andreas Seltenreich's
bug report of a few days ago.
Peter Eisentraut [Wed, 12 Apr 2017 19:37:03 +0000 (15:37 -0400)]
doc: Tweak CSS
Tweak CSS a bit to match latest similar changes to web site style. Also
move some CSS out of the HTML to the stylesheet so that the web site
stylesheet can override it. This should ensure that notes and such are
back to being centered.
Tom Lane [Wed, 12 Apr 2017 19:13:23 +0000 (15:13 -0400)]
Mark finished Plan nodes with parallel_safe flags.
We'd managed to avoid doing this so far, but it seems pretty obvious
that it would be forced on us some day, and this is much the cleanest
way of approaching the open problem that parallel-unsafe subplans are
being transmitted to parallel workers. Anyway there's no space cost
due to alignment considerations, and the time cost is pretty minimal
since we're just copying the flag from the corresponding Path node.
(At least in most cases ... some of the klugier spots in createplan.c
have to work a bit harder.)
In principle we could perhaps get rid of SubPlan.parallel_safe,
but I thought it better to keep that in case there are reasons to
consider a SubPlan unsafe even when its child plan is parallel-safe.
This patch doesn't actually do anything with the new flags, but
I thought I'd commit it separately anyway.
Note: although this touches outfuncs/readfuncs, there's no need for
a catversion bump because Plan trees aren't stored on disk.
Robert Haas [Wed, 12 Apr 2017 15:53:00 +0000 (11:53 -0400)]
Fix pgstattuple's handling of unused hash pages.
Hash indexes can contain both pages which are all-zeroes (i.e.
PageIsNew()) and pages which have been initialized but currently
aren't used. The latter category can happen either when a page
has been reserved but not yet used or when it is used for a time
and then freed. pgstattuple was only prepared to deal with the
pages that are actually-zeroes, which it called zero_pages.
Rename the column to unused_pages (extension version 1.5 is
as-yet-unreleased) and make it count both kinds of unused pages.
Along the way, slightly tidy up the way we test for pages of
various types.
Robert Haas and Ashutosh Sharma, reviewed by Amit Kapila
validateCheckConstraint() shouldn't try to access the storage for
a partitioned table, because it no longer has any. Creating a
_RETURN table on a partitioned table shouldn't be allowed, both
because there's no value in it and because trying to do so would
involve a validation scan against its nonexistent storage.
Amit Langote, reviewed by Tom Lane. Regression test outputs
updated to pass by me.
Magnus Hagander [Wed, 12 Apr 2017 11:43:59 +0000 (13:43 +0200)]
Fix reversed check of return value from sync
While at it also update the comments in walmethods.h to make it less
likely for mistakes like this to appear in the future (thanks to Tom for
improvements to the comments).
And finally, in passing change the return type of walmethod.getlasterror
to being const, also per suggestion from Tom.
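The bug was of this shape (a simplified sketch; fsync-style calls return
0 on success and -1 on failure):

    /* before: success was treated as failure */
    if (fsync(fd) == 0)
        return -1;

    /* after: test for the error return instead */
    if (fsync(fd) != 0)
        return -1;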
Tom Lane [Tue, 11 Apr 2017 19:32:33 +0000 (15:32 -0400)]
Remove bogus redefinition of _MSC_VER.
Commit a4777f355 was a shade too mechanical: we don't want to override
MSVC's own definition of _MSC_VER, as that breaks tests on its numerical
value. Per buildfarm.
Tom Lane [Tue, 11 Apr 2017 17:53:13 +0000 (13:53 -0400)]
Simplify handling of remote-qual pass-forward in postgres_fdw.
Commit 0bf3ae88a encountered a need to pass the finally chosen remote qual
conditions forward from postgresGetForeignPlan to postgresPlanDirectModify.
It solved that by sticking them into the plan node's fdw_private list,
which in hindsight was a pretty bad idea. In the first place, there's no
use for those qual trees either in EXPLAIN or execution; indeed they could
never safely be used for any post-planning purposes, because they would not
get processed by setrefs.c. So they're just dead weight to carry around in
the finished plan tree, plus being an attractive nuisance for somebody who
might get the idea that they could be used that way. Secondly, because
those qual trees (sometimes) contained RestrictInfos, they created a
plan-transmission hazard for parallel query, which is how come we noticed a
problem. We dealt with that symptom in commit 28b047875, but really a more
straightforward and more efficient fix is to pass the data through in a new
field of struct PgFdwRelationInfo. So do it that way. (There's no need
to revert 28b047875, as it has sufficient reason to live anyway.)
Robert Haas [Tue, 11 Apr 2017 17:03:44 +0000 (13:03 -0400)]
Add an Assert() to max_parallel_workers enforcement.
To prevent future bugs along the lines of the one corrected by commit 8ff518699f19dd0a5076f5090bac8400b8233f7f, or find any that remain
in the current code, add an Assert() that the difference between
parallel_register_count and parallel_terminate_count is in a sane
range.
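The new invariant looks roughly like this (a sketch; the limit macro is
illustrative):

    /* registered minus terminated = workers still active; keep it sane */
    Assert(BackgroundWorkerData->parallel_register_count -
           BackgroundWorkerData->parallel_terminate_count <=
           MAX_PARALLEL_WORKER_LIMIT);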
Kuntal Ghosh, with considerable tidying-up by me, per a suggestion
from Neha Khatri. Reviewed by Tomas Vondra.
Robert Haas [Tue, 11 Apr 2017 16:31:00 +0000 (12:31 -0400)]
Fix confusion of max_parallel_workers mechanism following crash.
Commit b460f5d6693103076dc554aa7cbb96e1e53074f9 failed to contemplate
the possibility that a parallel worker registered before a crash would
be unregistered only after the crash; if that happened, we'd end up
with parallel_terminate_count > parallel_register_count and the
system would refuse to launch any more parallel workers.
The easiest way to fix that seems to be to forget BGW_NEVER_RESTART
workers in ResetBackgroundWorkerCrashTimes() rather than leaving them
around to be cleaned up after the conclusion of the restart, so that
they go away before rather than after shared memory is reset.
To make sure that this fix is water-tight, don't allow parallel
workers to be anything other than BGW_NEVER_RESTART, so that after
recovering from a crash, 0 is guaranteed to be the correct starting
value for parallel_register_count. The core code wouldn't do this
anyway, but somebody might try to do it in extension code.
Report by Tomas Vondra. Patch by me, reviewed by Kuntal Ghosh.
Robert Haas [Tue, 11 Apr 2017 16:03:12 +0000 (12:03 -0400)]
Fix failure when a shared tidbitmap has only one page.
Commit 98e6e89040a0534ca26914c66cae9dd49ef62ad9 made inadequate
provision for the case of a single-page shared tidbitmap. It
allocated space for a shared PagetableEntry, but failed to
initialize it.
Report by Thomas Munro. Patch by Dilip Kumar, with some comment
changes by me.
Tom Lane [Tue, 11 Apr 2017 15:58:59 +0000 (11:58 -0400)]
Handle restriction clause lists more uniformly in postgres_fdw.
Clauses in the lists retained by postgres_fdw during planning were
sometimes bare boolean clauses, sometimes RestrictInfos, and sometimes
a mixture of the two in the same list. The comment about that situation
didn't come close to telling the full truth, either. Aside from being
confusing, this had a couple of bad practical consequences:
* waste of planning cycles due to inability to cache per-clause selectivity
and cost estimates;
* sometimes, RestrictInfos would sneak into the fdw_private list of a
finished Plan node, causing failures if, for example, we tried to ship
the Plan tree to a parallel worker.
(It may well be that it's a bug in the parallel-query logic that we
would ever try to ship such a plan to a parallel worker, but in any
case this deserves to be cleaned up.)
To fix, rearrange so that clause lists in PgFdwRelationInfo are always
lists of RestrictInfos, and then strip the RestrictInfos at the last
minute when making a Plan node. In passing do a bit of refactoring and
comment cleanup in postgresGetForeignPlan and foreign_join_ok.
Although the messiness here dates back at least to 9.6, there's no evidence
that it causes anything worse than wasted planning cycles in 9.6, so no
back-patch for now.
Add max_sync_workers_per_subscription to postgresql.conf.sample.
This commit also does the following (see the sample excerpt below):
- add REPLICATION_SUBSCRIBERS into config_group
- mark max_logical_replication_workers and max_sync_workers_per_subscription
as REPLICATION_SUBSCRIBERS parameters
- move those parameters into "Subscribers" section in postgresql.conf.sample
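The new section looks roughly like this (an illustrative excerpt; the
default values are the ones in the current sources):

    # - Subscribers -

    #max_logical_replication_workers = 4    # taken from max_worker_processes
    #max_sync_workers_per_subscription = 2  # taken from max_logical_replication_workers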
Author: Masahiko Sawada, Petr Jelinek and me
Reported-by: Masahiko Sawada
Discussion: http://postgr.es/m/CAD21AoAonSCoa=v=87ZO3vhfUZA1k_E2XRNHTt=xioWGUa+0ug@mail.gmail.com
Bruce Momjian [Tue, 11 Apr 2017 14:47:40 +0000 (10:47 -0400)]
docs: Improve window function docs
Specifically, the behavior of general-purpose and statistical aggregates
as window functions was not clearly documented, and terms were
inconsistently used. Also add docs about the difference between
cume_dist and percent_rank, rather than just the formulas.
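For reference, with N the number of rows in the window partition, the
two are defined as:

    percent_rank = (rank - 1) / (N - 1)
    cume_dist    = (number of rows preceding or peer with current row) / N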
Magnus Hagander [Tue, 11 Apr 2017 13:21:25 +0000 (15:21 +0200)]
Remove symbol WIN32_ONLY_COMPILER
This used to mean "Visual C++ except in those parts where Borland C++
was supported where it meant one of those". Now that we don't support
Borland C++ anymore, simplify by using _MSC_VER, which is the normal way
to detect Visual C++.
Magnus Hagander [Tue, 11 Apr 2017 13:14:26 +0000 (15:14 +0200)]
Remove support for bcc and msvc standalone libpq builds
This removes the support for building just libpq using Borland C++ or
Visual C++. This has not worked properly for years, and given the number
of complaints it's clearly not worth the maintenance burden.
Building libpq using the standard MSVC build system is of course still
supported, along with mingw.