Stephen Frost [Sun, 7 May 2017 02:58:12 +0000 (22:58 -0400)]
pg_dump: Don't leak memory in buildDefaultACLCommands()
In certain cases, buildDefaultACLCommands() returned without destroying
the string buffer it had created, leading to a memory leak. Fix by
destroying the buffer before returning from the function.
Spotted by Coverity.
Author: Michael Paquier
Back-patch to 9.6 where buildDefaultACLCommands() was added.
Stephen Frost [Sun, 7 May 2017 01:46:35 +0000 (21:46 -0400)]
RLS: Fix ALL vs. SELECT+UPDATE policy usage
When we add the SELECT-privilege based policies to the RLS with check
options (such as for an UPDATE statement, or when we have INSERT ...
RETURNING), we need to be sure to use the 'USING' case if the policy is
actually an 'ALL' policy (which could have both a USING clause and an
independent WITH CHECK clause).
This could result in policies acting differently when built using ALL
(when the ALL had both USING and WITH CHECK clauses) and when building
the policies independently as SELECT and UPDATE policies.
Fix this by adding an explicit boolean to add_with_check_options() to
indicate when the USING expression should be used, even if the policy
has both USING and WITH CHECK clauses.
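For illustration, a minimal sketch with hypothetical names. An ALL
policy can carry both clauses, and the SELECT-rights check added for
UPDATE ... RETURNING must come from the USING expression, just as it
would if separate SELECT and UPDATE policies existed:

    CREATE TABLE docs (owner text, body text);
    ALTER TABLE docs ENABLE ROW LEVEL SECURITY;
    CREATE POLICY docs_all ON docs FOR ALL
        USING (owner = current_user)     -- which rows are visible
        WITH CHECK (body IS NOT NULL);   -- which rows may be written
    -- The SELECT-privilege check applied for the RETURNING clause here
    -- must use the USING expression, not the WITH CHECK expression:
    UPDATE docs SET body = 'x' RETURNING *;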
Andres Freund [Sun, 7 May 2017 00:03:04 +0000 (17:03 -0700)]
Fix duplicated words in comment.
Reported-By: Peter Geoghegan
Discussion: https://postgr.es/m/CAH2-Wzn3rY2N0gTWndaApD113T+O8L6oz8cm7_F3P8y4awdoOg@mail.gmail.com
Backpatch: no, only present in master
Andres Freund [Sat, 6 May 2017 23:47:40 +0000 (16:47 -0700)]
Fix off-by-one possibly leading to skipped XLOG_RUNNING_XACTS records.
Since 6ef2eba3f57f1 ("Skip checkpoints, archiving on idle systems."),
GetLastImportantRecPtr() is used to avoid superfluous checkpoints, xlog
switches, and running-xact records when the system is idle.
Unfortunately the check concerning running-xact records had an
off-by-one error, leading to such records potentially being skipped
when only a single record had been inserted since the last running-xact
record.
An alternative approach would have been to change
GetLastImportantRecPtr()'s definition to point to the end of records,
but that would make the checkpoint code more complicated.
Author: Andres Freund
Discussion: https://postgr.es/m/20170505012447.wsrympaxnfis6ojt@alap3.anarazel.de
Backpatch: no, code only present in master
Tom Lane [Sat, 6 May 2017 20:28:20 +0000 (16:28 -0400)]
Second pass on 9.6.3 release notes.
Improve description of logical decoding snapshot issues, per suggestion
from Petr Jelinek. Mention possible need to re-sync logical replicas
as a post-upgrade task. Minor copy-editing for some other items.
Tom Lane [Fri, 5 May 2017 23:33:34 +0000 (19:33 -0400)]
First-draft release notes for 9.6.3.
As usual, the release notes for other branches will be made by cutting
these down, but put them up for community review first. Note there
are some entries that really only apply to pre-9.6 branches.
Alvaro Herrera [Fri, 5 May 2017 15:05:34 +0000 (12:05 -0300)]
Allow MSVC to build with Tcl 8.6.
Commit eaba54c20c5 added support for Tcl 8.6 for configure-supported
platforms after verifying that pltcl works without further changes, but
the MSVC tooling wasn't updated accordingly. Update MSVC to match,
restructuring the code to avoid duplicating the logic for every Tcl
version supported.
Backpatch to all live branches, like eaba54c20c5. In 9.4 and previous,
change the patch to use backslashes rather than forward slashes, as in the rest
of the file.
Reported by Paresh More, who also tested the patch I provided.
Discussion: https://postgr.es/m/CAAgiCNGVw3ssBtSi3ZNstrz5k00ax=UV+_ZEHUeW_LMSGL2sew@mail.gmail.com
When the checkpointer writes the shutdown checkpoint, it checks
afterwards whether any WAL has been written since it started and throws
a PANIC if so. At that point, only walsenders are still active, so one
might think this could not happen, but walsenders can also generate WAL,
for instance in BASE_BACKUP and certain variants of
CREATE_REPLICATION_SLOT. So they can trigger this panic if such a
command is run while the shutdown checkpoint is being written.
To fix this, divide the walsender shutdown into two phases. First, the
postmaster sends a SIGUSR2 signal to all walsenders. The walsenders
then put themselves into the "stopping" state. In this state, they
reject any new commands. (For simplicity, we reject all new commands,
so that in the future we do not have to track meticulously which
commands might generate WAL.) The checkpointer waits for all walsenders
to reach this state before proceeding with the shutdown checkpoint.
After the shutdown checkpoint is done, the postmaster sends
SIGINT (previously unused) to the walsenders. This triggers the
existing shutdown behavior of sending out the shutdown checkpoint record
and then terminating.
Author: Michael Paquier <michael.paquier@gmail.com>
Reported-by: Fujii Masao <masao.fujii@gmail.com>
The salt is stored base64-encoded. With the old 10 bytes raw length, it was
always padded to 16 bytes after encoding. We might as well use 12 raw bytes
for the salt, and it's still encoded into 16 bytes.
Similarly for the random nonces, use a raw length that's divisible by 3, so
that there's no padding after base64 encoding. Make the nonces longer while
we're at it. 10 bytes was probably enough to prevent replay attacks, but
there's no reason to be skimpy here.
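As a worked check (base64 encodes each 3-byte group into 4 output
characters, so a 10-byte input pads out to the same 16 characters that
12 bytes fill exactly):

    SELECT encode(decode(repeat('00', 10), 'hex'), 'base64');
    -- 16 chars, ends in '=='
    SELECT encode(decode(repeat('00', 12), 'hex'), 'base64');
    -- 16 chars, no padding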
* Remove is_scram_verifier() function. It was unused.
* Fix sanitize_char() function, used in error messages on protocol
violations, to print bytes >= 0x7F correctly.
* Change spelling of scram_MockSalt() function to be more consistent with
the surroundings.
* Change a few more references to "server proof" to "server signature" that
I missed in commit d981074c24.
Don't use SCRAM-specific "e=invalid-proof" on invalid password.
Instead, send the same FATAL message as with other password-based
authentication mechanisms. This gives a more user-friendly message:
psql: FATAL: password authentication failed for user "test"
instead of:
psql: error received from server in SASL exchange: invalid-proof
Even before this patch, the server sent that FATAL message, after the
SCRAM-specific "e=invalid-proof" message. But libpq would stop at the
SCRAM error message, and not process the ErrorResponse that would come
after that. We could've taught libpq to check for an ErrorResponse after
failed authentication, but it's simpler to modify the server to send only
the ErrorResponse. The SCRAM specification allows for aborting the
authentication at any point, using an application-defined error mechanism,
like PostgreSQL's ErrorResponse. Using the e=invalid-proof message is
optional.
Stephen Frost [Fri, 5 May 2017 02:17:52 +0000 (22:17 -0400)]
Change the way pg_dump retrieves partitioning info
This gets rid of the code that issued separate queries to retrieve the
partitioning parent-child relationship, parent partition key, and child
partition bound information. With this patch, the information is
retrieved instead using the queries issued from getTables() and
getInherits(), which is more efficient than the previous approach and
requires no new code.
Since the partitioning parent-child relationship is now retrieved with
the same old code that handles inheritance, partition attributes receive
proper flagInhAttrs() treatment (which they didn't receive before),
ensuring that inherited NOT NULL constraints are not emitted again if
we already emitted them for the parent.
Also, fix a bug in pg_dump's --binary-upgrade code, which caused pg_dump
to emit an invalid command to attach a partition to its parent.
Author: Amit Langote, with some additional changes by me.
Tom Lane [Thu, 4 May 2017 17:59:13 +0000 (13:59 -0400)]
Fix pfree-of-already-freed-tuple when rescanning a GiST index-only scan.
GiST's getNextNearest() function attempts to pfree the previously-returned
tuple if any (that is, scan->xs_hitup in HEAD, or scan->xs_itup in older
branches). However, if we are rescanning a plan node after ending a
previous scan early, those tuple pointers could be pointing to garbage,
because they would be pointing into the scan's pageDataCxt or queueCxt
which has been reset. In a debug build this reliably results in a crash,
although I think it might sometimes accidentally fail to fail in
production builds.
To fix, clear the pointer field anyplace we reset a context it might
be pointing into. This may be overkill --- I think probably only the
queueCxt case is involved in this bug, so that resetting in gistrescan()
would be sufficient --- but dangling pointers are generally bad news,
so let's avoid them.
Another plausible answer might be to just not bother with the pfree in
getNextNearest(). The reconstructed tuples would go away anyway in the
context resets, and I'm far from convinced that freeing them a bit earlier
really saves anything meaningful. I'll stick with the original logic in
this patch, but if we find more problems in the same area we should
consider that approach.
Per bug #14641 from Denis Smirnov. Back-patch to 9.5 where this
logic was introduced.
Fix PQencryptPasswordConn to work with older server versions.
password_encryption was a boolean before version 10, so cope with "on" and
"off".
Also, change the behavior with "plain", to treat it the same as "md5".
We're discussing removing the password_encryption='plain' option from the
server altogether, which will make this the only reasonable choice, but
even if we kept it, it seems best to never send the password in cleartext.
Tom Lane [Wed, 3 May 2017 23:37:01 +0000 (19:37 -0400)]
Remove useless and rather expensive stanza in matview regression test.
This removes a test case added by commit b69ec7cc9, which was intended
to exercise a corner case involving the rule used at that time that
materialized views were unpopulated iff they had physical size zero.
We got rid of that rule very shortly later, in commit 1d6c72a55, but
kept the test case. However, because the case now asks what VACUUM
will do to a zero-sized physical file, it would be pretty surprising
if the answer were ever anything but "nothing" ... and if things were
indeed that broken, surely we'd find it out from other tests. Since
the test involves a table that's fairly large by regression-test
standards (100K rows), it's quite slow to run. Dropping it should
save some buildfarm cycles, so let's do that.
Alvaro Herrera [Wed, 3 May 2017 18:12:09 +0000 (15:12 -0300)]
pg_dump/t/002: append terminating semicolon to SQL commands
It's easy to overlook the need for one, and its lack is annoying for the
next developer wanting to create a new test. Rather than expect every
individual command to add the semicolon, just append one automatically.
Tom Lane [Wed, 3 May 2017 01:50:35 +0000 (21:50 -0400)]
Improve performance of timezone loading, especially pg_timezone_names view.
tzparse() would attempt to load the "posixrules" timezone database file on
each call. That might seem like it would only be an issue when selecting a
POSIX-style zone name rather than a zone defined in the timezone database,
but it turns out that each zone definition file contains a POSIX-style zone
string and tzload() will call tzparse() to parse that. Thus, when scanning
the whole timezone file tree as we do in the pg_timezone_names view,
"posixrules" was read repetitively for each zone definition file. Fix
that by caching the file on first use within any given process. (We cache
other zone definitions for the life of the process, so there seems little
reason not to cache this one as well.) This probably won't help much in
processes that never run pg_timezone_names, but even one additional SET
of the timezone GUC would come out ahead.
An even worse problem for pg_timezone_names is that pg_open_tzfile()
has an inefficient way of identifying the canonical case of a zone name:
it basically re-descends the directory tree to the zone file. That's not
awful for an individual "SET timezone" operation, but it's pretty horrid
when we're inspecting every zone in the database. And it's pointless too
because we already know the canonical spelling, having just read it from
the filesystem. Fix by teaching pg_open_tzfile() to avoid the directory
search if it's not asked for the canonical name, and backfilling the
proper result in pg_tzenumerate_next().
In combination these changes seem to make the pg_timezone_names view
about 3x faster to read, for me. Since a scan of pg_timezone_names
has up to now been one of the slowest queries in the regression tests,
this should help some little bit for buildfarm cycle times.
Back-patch to all supported branches, not so much because it's likely
that users will care much about the view's performance as because
tracking changes in the upstream IANA timezone code is really painful
if we don't keep all the branches in sync.
Tom Lane [Wed, 3 May 2017 00:41:37 +0000 (20:41 -0400)]
Remove create_singleton_array(), hard-coding the case in its sole caller.
create_singleton_array() was not really as useful as we perhaps thought
when we added it. It had never accreted more than one call site, and is
only saving a dozen lines of code at that one, which is considerably less
bulk than the function itself. Moreover, because of its insistence on
using the caller's fn_extra cache space, it's arguably a coding hazard.
text_to_array_internal() does not currently use fn_extra in any other way,
but if it did it would be subtly broken, since the conflicting fn_extra
uses could be needed within a single query, in the seldom-tested case that
the field separator varies during the query. The same objection seems
likely to apply to any other potential caller.
The replacement code is a bit uglier, because it hardwires knowledge of
the storage parameters of type TEXT, but it's not like we haven't got
dozens or hundreds of other places that do the same. Uglier seems like
a good tradeoff for smaller, faster, and safer.
Tom Lane [Tue, 2 May 2017 22:05:53 +0000 (18:05 -0400)]
Ensure commands in extension scripts see the results of preceding DDL.
Due to a missing CommandCounterIncrement() call, parsing of a non-utility
command in an extension script would not see the effects of the immediately
preceding DDL command, unless that command's execution ends with
CommandCounterIncrement() internally ... which some do but many don't.
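A minimal sketch of the failure shape, with hypothetical script
contents (whether a given pair actually misbehaved depended on the
preceding command's internals):

    -- fragment of an extension script
    CREATE TABLE ext_config (key text PRIMARY KEY, value text);
    -- Parsing of this non-utility command could previously miss the
    -- new table, unless CREATE TABLE happened to end with an internal
    -- CommandCounterIncrement():
    INSERT INTO ext_config VALUES ('version', '1.0');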
Report by Philippe Beaudoin, diagnosis by Julien Rouhaud.
Rather remarkably, this bug has evaded detection since extensions were
invented, so back-patch to all supported branches.
doc: Improve order in ALTER PUBLICATION/SUBSCRIPTION ref pages
Move the OWNER and RENAME clauses to the end, so the interesting
functionality is listed first. This is more typical on nearby reference
pages, whereas the previous order was the order in which the clauses
were added.
Avoid unnecessary catalog updates in ALTER SEQUENCE
ALTER SEQUENCE can do nontransactional changes to the sequence (RESTART
clause) and transactional updates to the pg_sequence catalog (most other
clauses). When just calling RESTART, the code would still needlessly do
a catalog update without any changes. This would entangle that
operation in the concurrency issues of a catalog update (causing either
locking or concurrency errors, depending on how that issue is to be
resolved).
Fix by keeping track during options parsing whether a catalog update is
needed, and skip it if not.
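For illustration (hypothetical sequence name):

    CREATE SEQUENCE s;
    ALTER SEQUENCE s RESTART;
    -- nontransactional; now skips the needless pg_sequence update
    ALTER SEQUENCE s INCREMENT BY 2 RESTART;
    -- still updates the catalog, for the INCREMENT BY change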
Magnus Hagander [Tue, 2 May 2017 09:12:30 +0000 (11:12 +0200)]
Change hot_standby default value to 'on'
This goes together with the changes made to enable replication on the
sending side by default (wal_level, max_wal_senders etc) by making the
receiving standby node also enable it by default.
Tom Lane [Mon, 1 May 2017 19:31:41 +0000 (15:31 -0400)]
Improve function header comment for create_singleton_array().
Mentioning the caller is neither future-proof nor an adequate substitute
for giving an API specification. Per gripe from Neha Khatri, though
I changed the patch around some.
Tom Lane [Mon, 1 May 2017 18:53:42 +0000 (14:53 -0400)]
Reduce semijoins with unique inner relations to plain inner joins.
If the inner relation can be proven unique, that is, it can have no more
than one matching row for any row of the outer query, then we might as
well implement the semijoin as a plain inner join, allowing substantially
more freedom to the planner. This is a form of outer join strength
reduction, but it can't be implemented in reduce_outer_joins() because
we don't have enough info about the individual relations at that stage.
Instead do it much like remove_useless_joins(): once we've built base
relations, we can make another pass over the SpecialJoinInfo list and
get rid of any entries representing reducible semijoins.
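A sketch of the kind of query that benefits, with hypothetical tables;
the primary key makes the inner side provably unique:

    CREATE TABLE orders (customer_id int);
    CREATE TABLE customers (id int PRIMARY KEY);
    EXPLAIN SELECT *
    FROM orders o
    WHERE EXISTS (SELECT 1 FROM customers c WHERE c.id = o.customer_id);
    -- The semijoin can now be planned as a plain inner join, e.g. a
    -- Hash Join instead of a Hash Semi Join.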
This is essentially a followon to the inner-unique patch (commit 9c7f5229a)
and makes use of the proof machinery that that patch created. We need only
minor refactoring of innerrel_is_unique's API to support this usage.
Tom Lane [Mon, 1 May 2017 18:39:11 +0000 (14:39 -0400)]
Fix mis-optimization of semijoins with more than one LHS relation.
The inner-unique patch (commit 9c7f5229a) supposed that if we're
considering a JOIN_UNIQUE_INNER join path, we can always set inner_unique
for the join, because the inner path produced by create_unique_path should
be unique relative to the outer relation. However, that's true only if
we're considering joining to the whole outer relation --- otherwise we may
be applying only some of the join quals, and so the inner path might be
non-unique from the perspective of this join. Adjust the test to only
believe that we can set inner_unique if we have the whole semijoin LHS on
the outer side.
There is more that can be done in this area, but this commit is only
intended to provide the minimal fix needed to get correct plans.
Per report from Teodor Sigaev. Thanks to David Rowley for preliminary
investigation.
Tom Lane [Mon, 1 May 2017 15:52:59 +0000 (11:52 -0400)]
Update time zone data files to tzdata release 2017b.
DST law changes in Chile, Haiti, and Mongolia. Historical corrections for
Ecuador, Kazakhstan, Liberia, and Spain.
The IANA crew continue their campaign to replace invented time zone
abbreviations with numeric GMT offsets. This update changes numerous zones
in South America, the Pacific and Indian oceans, and some Asian and Middle
Eastern zones. I kept these abbreviations in the tznames/ data files,
however, so that we will still accept them for input. (We may want to
start trimming those files someday, but I think we should wait for the
upstream dust to settle before deciding what to do.)
In passing, add MESZ (Mitteleuropaeische Sommerzeit) to the tznames lists;
since we accept MEZ (Mitteleuropaeische Zeit) it seems rather strange not
to take the other one. And fix some incorrect, or at least obsolete,
comments that certain abbreviations are not traceable to the IANA data.
Robert Haas [Mon, 1 May 2017 15:27:09 +0000 (11:27 -0400)]
libpq: Fix inadvertent change in .pgpass lookup behavior.
Commit 274bb2b3857cc987cfa21d14775cae9b0dababa5 caused password file
lookups to use the hostaddr in preference to the host, but that was
not intended and the documented behavior is the opposite.
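That is, for a connection such as (hypothetical values)

    psql "host=db.example.com hostaddr=192.0.2.5 dbname=test"

the .pgpass lookup once again matches entries against db.example.com,
not 192.0.2.5.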
Andrew Dunstan [Mon, 1 May 2017 14:12:02 +0000 (10:12 -0400)]
Allow vcregress.pl to run an arbitrary TAP test set
Previously, provision was made only for running the bin checks in a
single step. Now these tests can be run individually, as well as tests
in other locations (e.g. src/test/recovery).
Also provide for suppressing unnecessary temp installs by setting the
NO_TEMP_INSTALL environment variable just as the Makefiles do.
Fix logical replication launcher wake up and reset
After the logical replication launcher was told to wake up at
commit (for example, by a CREATE SUBSCRIPTION command), the flag to wake
up was not reset, so it would be woken up at every following commit as
well. Fix that by resetting the flag.
Also, we don't need to wake up anything if the transaction was rolled
back. Just reset the flag in that case.
Robert Haas [Mon, 1 May 2017 12:23:01 +0000 (08:23 -0400)]
Fire per-statement triggers on partitioned tables.
Even though no actual tuples are ever inserted into a partitioned
table (the actual tuples are in the partitions, not the partitioned
table itself), we still need to have a ResultRelInfo for the
partitioned table, or per-statement triggers won't get fired.
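A minimal sketch with hypothetical names:

    CREATE TABLE measurements (ts date, v int) PARTITION BY RANGE (ts);
    CREATE TABLE measurements_2017 PARTITION OF measurements
        FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
    CREATE FUNCTION note_stmt() RETURNS trigger LANGUAGE plpgsql AS
        $$ BEGIN RAISE NOTICE 'statement trigger fired'; RETURN NULL; END $$;
    CREATE TRIGGER measurements_stmt AFTER INSERT ON measurements
        FOR EACH STATEMENT EXECUTE PROCEDURE note_stmt();
    -- The NOTICE now appears even though the row lands in the partition:
    INSERT INTO measurements VALUES ('2017-05-01', 1);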
Amit Langote, per a report from Rajkumar Raghuwanshi. Reviewed by me.
Tom Lane [Sun, 30 Apr 2017 19:13:51 +0000 (15:13 -0400)]
Sync our copy of the timezone library with IANA release tzcode2017b.
zic no longer mishandles some transitions in January 2038 when it
attempts to work around Qt bug 53071. This fixes a bug affecting
Pacific/Tongatapu that was introduced in zic 2016e. localtime.c
now contains a workaround, useful when loading a file generated by
a buggy zic.
There are assorted cosmetic changes as well, notably relocation
of a bunch of #defines.
Tom Lane [Sun, 30 Apr 2017 16:21:02 +0000 (12:21 -0400)]
Fix possible null pointer dereference or invalid warning message.
Thinko in commit de4389712: this warning message references the wrong
"LogicalRepWorker *" variable. This would often result in a core dump,
but if it didn't, the message would show the wrong subscription OID.
In passing, adjust the message text to format a subscription OID
similarly to how that's done elsewhere in the function; and fix
grammatical issues in some nearby messages.
Tom Lane [Sun, 30 Apr 2017 00:14:52 +0000 (20:14 -0400)]
Micro-optimize some slower queries in the opr_sanity regression test.
Convert the binary_coercible() and physically_coercible() functions from
SQL to plpgsql. It's not that plpgsql is inherently better at doing
queries; if you simply convert the previous single SQL query into one
RETURN expression, it's no faster. The problem with the existing code
is that it fools the plancache into deciding that it's worth re-planning
the query every time, since constant-folding with a concrete value for $2
allows elimination of at least one sub-SELECT. In reality that's using the
planner to do the equivalent of a few runtime boolean tests, causing the
function to run much slower than it should. Splitting the AND/OR logic
into separate plpgsql statements allows each if-expression to acquire a
static plan.
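A sketch of the pattern (hypothetical function, not the actual test
code): each IF is its own statement and hence gets its own static
cached plan, instead of one large expression that invites re-planning
on every call:

    CREATE FUNCTION is_castable(src oid, dst oid) RETURNS bool
    LANGUAGE plpgsql STABLE AS $$
    BEGIN
        IF src = dst THEN
            RETURN true;
        END IF;
        IF EXISTS (SELECT 1 FROM pg_cast
                   WHERE castsource = src AND casttarget = dst) THEN
            RETURN true;
        END IF;
        RETURN false;
    END
    $$;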
Also, get rid of some uses of obj_description() in favor of explicitly
joining to pg_description, allowing the joins to be optimized better.
(Someday we might improve the SQL-function-inlining logic enough that
this happens automatically, but today is not that day.)
Together, these changes reduce the runtime of the opr_sanity regression
test by about a factor of two on one of my slower machines. They don't
seem to help as much on a fast machine, but this should at least benefit
the buildfarm.
Robert Haas [Fri, 28 Apr 2017 18:48:38 +0000 (14:48 -0400)]
Fix VALIDATE CONSTRAINT to consider NO INHERIT attribute.
Currently, trying to validate a NO INHERIT constraint on the parent will
search for the constraint in child tables (where it is not supposed to
exist), wrongly causing a "constraint does not exist" error.
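A minimal reproduction sketch (hypothetical names):

    CREATE TABLE parent (a int);
    CREATE TABLE child () INHERITS (parent);
    ALTER TABLE parent ADD CONSTRAINT a_positive
        CHECK (a > 0) NO INHERIT NOT VALID;
    -- Previously this wrongly failed with "constraint does not exist",
    -- because the constraint is (correctly) absent from child:
    ALTER TABLE parent VALIDATE CONSTRAINT a_positive;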
Peter Eisentraut [Thu, 27 Apr 2017 18:57:26 +0000 (14:57 -0400)]
Wait between tablesync worker restarts
Before restarting a tablesync worker for the same relation, wait
wal_retrieve_retry_interval (currently 5s by default). This avoids
restarting failing workers in a tight loop.
We keep the last start times in a hash table last_start_times that is
separate from the table_states list, because that list is cleared out on
syscache invalidation, which happens whenever a table finishes syncing.
The hash table is kept until all tables have finished syncing.
A future project might be to unify these two and keep everything in one
data structure, but for now this is a less invasive change to accomplish
the original purpose.
For the test suite, set wal_retrieve_retry_interval to its minimum
value, to avoid increasing the test suite run time.
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reported-by: Masahiko Sawada <sawada.mshk@gmail.com>
* Move computation of SaltedPassword to a separate function from
scram_ClientOrServerKey(). This saves a lot of cycles in libpq, by
computing SaltedPassword only once per authentication. (Computing
SaltedPassword is expensive by design.)
* Split scram_ClientOrServerKey() into two functions. Improves
readability, by making the calling code less verbose.
* Rename "server proof" to "server signature", to better match the
nomenclature used in RFC 5802.
* Rename SCRAM_SALT_LEN to SCRAM_DEFAULT_SALT_LEN, to make it more clear
that the salt can be of any length, and the constant only specifies how
long a salt we use when we generate a new verifier. Also rename
SCRAM_ITERATIONS_DEFAULT to SCRAM_DEFAULT_ITERATIONS, for consistency.
These things caught my eye while working on other upcoming changes.
Declarative partitioning duplicated the TypedTableElement productions,
evidently to remove the need to specify WITH OPTIONS when creating
partitions. Instead, simply make WITH OPTIONS optional in the
TypedTableElement production and remove all of the duplicate
PartitionElement-related productions. This change simplifies the
syntax and makes WITH OPTIONS optional when adding defaults, constraints
or storage parameters to columns when creating either typed tables or
partitions.
Also update pg_dump to no longer include WITH OPTIONS, since it's not
necessary, and update the documentation to reflect that WITH OPTIONS is
now optional.
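For example (hypothetical names), both spellings are now accepted when
creating a partition:

    CREATE TABLE parted (a int, b text) PARTITION BY RANGE (a);
    CREATE TABLE part1 PARTITION OF parted
        (b WITH OPTIONS NOT NULL)        -- old spelling, still valid
        FOR VALUES FROM (1) TO (10);
    CREATE TABLE part2 PARTITION OF parted
        (b NOT NULL)                     -- WITH OPTIONS now optional
        FOR VALUES FROM (10) TO (20);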
Andres Freund [Thu, 27 Apr 2017 22:49:22 +0000 (15:49 -0700)]
Don't build full initial logical decoding snapshot if NOEXPORT_SNAPSHOT.
Earlier commits (56e19d938dd14 and 2bef06d5164) make it cheaper to
create a logical slot if not exporting the initial snapshot. If
NOEXPORT_SNAPSHOT is specified, we can skip the overhead, not just
when creating a slot via SQL (which can't export snapshots). As
NOEXPORT_SNAPSHOT has only recently been introduced, this shouldn't be
backpatched.
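For example, over a replication connection, using the walsender
grammar (hypothetical slot name):

    CREATE_REPLICATION_SLOT myslot LOGICAL test_decoding NOEXPORT_SNAPSHOT;
    -- skips building the full initial snapshot, since nothing will be
    -- exported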
Andres Freund [Thu, 27 Apr 2017 22:28:24 +0000 (15:28 -0700)]
Don't use on-disk snapshots for exported logical decoding snapshot.
Logical decoding stores historical snapshots on disk, so that logical
decoding can restart without having to reconstruct a snapshot from
scratch (for which the resources are not guaranteed to be present
anymore). These serialized snapshots were also used when creating a
new slot via the walsender interface, which can export a "full"
snapshot (i.e. one that can read all tables, not just catalog ones).
The problem is that the serialized snapshots are only useful for
catalogs and not for normal user tables. Thus the use of such a
serialized snapshot could result in an inconsistent snapshot being
exported, which could lead to queries returning wrong data. This
would only happen if logical slots are created while another logical
slot already exists.
Author: Petr Jelinek
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/f37e975c-908f-858e-707f-058d3b1eb214@2ndquadrant.com
Backport: 9.4, where logical decoding was introduced.
Tom Lane [Thu, 27 Apr 2017 22:27:02 +0000 (18:27 -0400)]
Avoid slow shutdown of pg_basebackup.
pg_basebackup's child process did not pay any attention to the pipe
from its parent while waiting for input from the source server.
If no server data was arriving, it would only wake up and check the
pipe every standby_message_timeout or so. This creates a problem
since the parent process might determine and send the desired stop
position only after the server has reached end-of-WAL and stopped
sending data. In the src/test/recovery regression tests, the timing
is repeatably such that it takes nearly 10 seconds for the child
process to realize that it should shut down. It's not clear how
often that would happen in real-world cases, but it sure seems like
a bug --- and if the user turns off standby_message_timeout or sets
it very large, the delay could be a lot worse.
To fix, expand the StreamCtl API to allow the pipe input FD to be
passed down to the low-level wait routine, and watch both sockets
when sleeping.
(Note: AFAICS this issue doesn't affect the Windows port, since
it doesn't rely on a pipe to transfer the stop position to the
child thread.)
Fix bug so that the logical rep launcher correctly saves the time of the worker's last startup.
Previously the logical replication launcher stored the timestamp of the
worker's last startup in the local variable "last_start_time", in order
to check whether wal_retrieve_retry_interval has elapsed since that
startup. If it has, the launcher scans pg_subscription and starts a new
worker if necessary. This limits worker startups to once per
wal_retrieve_retry_interval.
The bug was that the variable "last_start_time" was declared, and
initialized to 0, inside the launcher's main loop. So even when it was
set to the last startup timestamp later in an iteration, it was reset
to 0 on the next one, and the launcher could never correctly check
whether wal_retrieve_retry_interval had elapsed since the last startup.
This patch moves the variable "last_start_time" outside the main loop
so that it will not be reset.
Reviewed-by: Petr Jelinek
Discussion: http://postgr.es/m/CAHGQGwGJrPO++XM4mFENAwpy1eGXKsGdguYv43GUgLgU-x8nTQ@mail.gmail.com
Tom Lane [Thu, 27 Apr 2017 21:13:29 +0000 (17:13 -0400)]
Cope with glibc too old to have epoll_create1().
Commit fa31b6f4e supposed that we didn't have to worry about that
anymore, but it seems that RHEL5 is like that, and that's still
a supported platform. Put back the prior coding under an #ifdef,
adding an explicit fcntl() to retain the desired CLOEXEC property.
Andres Freund [Mon, 24 Apr 2017 03:41:29 +0000 (20:41 -0700)]
Preserve required !catalog tuples while computing initial decoding snapshot.
The logical decoding machinery already preserved all the required
catalog tuples, which is sufficient in the course of normal logical
decoding, but did not guarantee that non-catalog tuples were preserved
during computation of the initial snapshot when creating a slot over
the replication protocol.
This could cause a corrupted initial snapshot to be exported. The
time window for issues is usually not terribly large, but on a busy
server it's perfectly possible to hit it. Ongoing decoding is not
affected by this bug.
To avoid increased overhead for the SQL API, only retain additional
tuples when a logical slot is being created over the replication
protocol. To do so this commit changes the signature of
CreateInitDecodingContext(), but it seems unlikely that it's being
used in an extension, so that's probably ok.
In a drive-by fix, fix handling of
ReplicationSlotsComputeRequiredXmin's already_locked argument, which
should only apply to ProcArrayLock, not ReplicationSlotControlLock.
Reported-By: Erik Rijkers
Analyzed-By: Petr Jelinek
Author: Petr Jelinek, heavily editorialized by Andres Freund
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/9a897b86-46e1-9915-ee4c-da02e4ff6a95@2ndquadrant.com
Backport: 9.4, where logical decoding was introduced.
Tom Lane [Thu, 27 Apr 2017 19:07:36 +0000 (15:07 -0400)]
Make latch.c more paranoid about child-process cases.
Although the postmaster doesn't currently create a self-pipe or any
latches, there's discussion of it doing so in future. It's also
conceivable that a shared_preload_libraries extension would try to
create such a thing in the postmaster process today. In that case
the self-pipe FDs would be inherited by forked child processes.
latch.c was entirely unprepared for such a case and could suffer an
assertion failure, or worse try to use the inherited pipe if somebody
called WaitLatch without having called InitializeLatchSupport in that
process. Make it keep track of whether InitializeLatchSupport has been
called in the *current* process, and do the right thing if state has
been inherited from a parent.
Apply FD_CLOEXEC to file descriptors created in latch.c (the self-pipe,
as well as epoll event sets). This ensures that child processes spawned
in backends, the archiver, etc cannot accidentally or intentionally mess
with these FDs. It also ensures that we end up with the right state
for the self-pipe in EXEC_BACKEND processes, which otherwise wouldn't
know to close the postmaster's self-pipe FDs.
Back-patch to 9.6, mainly to keep latch.c looking similar in all branches
it exists in.
Simon Riggs [Thu, 27 Apr 2017 12:41:22 +0000 (14:41 +0200)]
Rework handling of subtransactions in 2PC recovery
The bug fixed by 0874d4f3e183757ba15a4b3f3bf563e0393dd9c2 caused us to
question and rework the handling of subtransactions in 2PC during and
at the end of recovery. This patch adds checks and tests to guard
against further bugs.
Author: Simon Riggs
Reviewed-by: Tom Lane, Michael Paquier
Discussion: http://postgr.es/m/CANP8+j+vvXmruL_i2buvdhMeVv5TQu0Hm2+C5N+kdVwHJuor8w@mail.gmail.com
Tom Lane [Wed, 26 Apr 2017 20:17:29 +0000 (16:17 -0400)]
Allow multiple bgworkers to be launched per postmaster iteration.
Previously, maybe_start_bgworker() would launch at most one bgworker
process per call, on the grounds that the postmaster might otherwise
neglect its other duties for too long. However, that seems overly
conservative, especially since bad effects only become obvious when
many hundreds of bgworkers need to be launched at once. On the other
side of the coin is that the existing logic could result in substantial
delay of bgworker launches, because ServerLoop isn't guaranteed to
iterate immediately after a signal arrives. (My attempt to fix that
by using pselect(2) encountered too many portability question marks,
and in any case could not help on platforms without pselect().)
One could also question the wisdom of using an O(N^2) processing
method if the system is intended to support so many bgworkers.
As a compromise, allow that function to launch up to 100 bgworkers
per call (and in consequence, rename it to maybe_start_bgworkers).
This will allow any normal parallel-query request for workers
to be satisfied immediately during sigusr1_handler, avoiding the
question of whether ServerLoop will be able to launch more promptly.
There is talk of rewriting the postmaster to use a WaitEventSet to
avoid the signal-response-delay problem, but I'd argue that this change
should be kept even after that happens (if it ever does).
Backpatch to 9.6 where parallel query was added. The issue exists
before that, but previous uses of bgworkers typically aren't as
sensitive to how quickly they get launched.
Stephen Frost [Wed, 26 Apr 2017 18:59:22 +0000 (14:59 -0400)]
pg_get_partkeydef: return NULL for non-partitions
Our general rule for pg_get_X(oid) functions is to simply return NULL
when passed an invalid or inappropriate OID. Teach pg_get_partkeydef to
do this also, making it easier for users to apply this function when
querying a catalog, such as pg_class, that lists both partitioned and
non-partitioned tables.
As a concrete example, this makes pg_dump's life a little easier.
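For example, a query like the following now returns NULL for
non-partitioned relations instead of erroring out:

    SELECT c.relname, pg_get_partkeydef(c.oid) AS partkey
    FROM pg_class c
    WHERE c.relkind IN ('r', 'p');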
Peter Eisentraut [Wed, 26 Apr 2017 14:43:04 +0000 (10:43 -0400)]
Fix various concurrency issues in logical replication worker launching
The code was originally written with the assumption that the launcher
is the only process starting workers. However, that hasn't been true
since commit 7c4f52409, which failed to modify the worker management
code adequately.
This patch adds an in_use field to the LogicalRepWorker struct to
indicate whether the worker slot is being used and uses proper locking
everywhere this flag is set or read.
However if the parent process dies while the new worker is starting and
the new worker fails to attach to shared memory, this flag would never
get cleared. We solve this rare corner case by adding a sort of garbage
collector for in_use slots. This uses another field in the
LogicalRepWorker struct named launch_time that contains the time when
the worker was started. If a request to start a new worker does not
find a free slot, we check for workers that were supposed to start but
took too long to actually do so, and reuse their slot.
In passing also fix possible race conditions when stopping a worker that
hasn't finished starting yet.
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reported-by: Fujii Masao <masao.fujii@gmail.com>
Stephen Frost [Tue, 25 Apr 2017 20:57:43 +0000 (16:57 -0400)]
Allow ALTER TABLE ONLY on partitioned tables
There is no need to forbid ALTER TABLE ONLY on partitioned tables,
when no partitions exist yet. This can be handy for users who are
building up their partitioned table independently and will create actual
partitions later.
In addition, this is how pg_dump likes to operate in certain instances.
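For illustration (hypothetical names):

    CREATE TABLE events (id int, created date) PARTITION BY RANGE (created);
    -- Now allowed while the table has no partitions yet:
    ALTER TABLE ONLY events ADD CONSTRAINT id_positive CHECK (id > 0);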
Author: Amit Langote, with some error message word-smithing by me
Peter Eisentraut [Tue, 25 Apr 2017 18:40:33 +0000 (14:40 -0400)]
Wake up launcher when enabling a subscription
Otherwise one would have to wait up to DEFAULT_NAPTIME_PER_CYCLE until
the subscription worker is considered for starting.
There is a small race condition: if one enables a subscription right
after disabling it, the launcher might not yet have registered the stop
when the wakeup signal for the re-enabling arrives. The start will then
not happen right away but only after the full cycle time.
Set the priorities of all quorum synchronous standbys to 1.
In quorum-based synchronous replication, all the standbys listed in
synchronous_standby_names have an equal chance of being chosen as
synchronous standbys, so they should have the same priority.
However, previously, quorum standbys whose names appeared earlier in
the list were given higher priority values, even though the difference
in those values didn't affect the selection of synchronous standbys.
Users could see those "meaningless" priority values in
pg_stat_replication, and this was confusing.
This commit gives all the quorum synchronous standbys the same
highest priority, i.e., 1, in order to remove such confusion.
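For example, with a quorum-based setting such as (hypothetical standby
names)

    synchronous_standby_names = 'ANY 2 (s1, s2, s3)'

all three standbys now report sync_priority = 1:

    SELECT application_name, sync_priority, sync_state
    FROM pg_stat_replication;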
Robert Haas [Tue, 25 Apr 2017 14:57:13 +0000 (10:57 -0400)]
Adjust outdated comment.
Commit 5dfc198146b49ce7ecc8a1fc9d5e171fb75f6ba5 removed the only
existing caller of hash_freeze, but left behind a comment indicating
that hash_freeze was still used. Adjust.
Peter Eisentraut [Tue, 25 Apr 2017 02:50:07 +0000 (22:50 -0400)]
postgres_fdw: Fix join push down with extensions
Objects in an extension are shippable to a foreign server if the
extension is part of the foreign server definition's shippable
extensions list. But this was not properly taken into account when
checking whether a join condition that uses an object from a shippable
extension could be pushed to the foreign server, so such joins were
never pushed down.
So, the list of extensions needs to be made available in the fpinfo of
the relation being considered for pushdown before any expressions are
assessed for shippability. Fix foreign_join_ok() to do that for a join
relation.
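A sketch of the setup this enables (hypothetical server name; the
"extensions" option is postgres_fdw's existing shippable-extensions
list):

    ALTER SERVER remote_srv OPTIONS (ADD extensions 'cube');
    -- Join quals that use operators or functions from the listed
    -- extension can now be treated as shippable when considering join
    -- pushdown.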
The code to save FDW options in fpinfo is scattered at multiple places.
Bring all of that together into functions apply_server_options(),
apply_table_options(), and merge_fdw_options().
David Rowley and Ashutosh Bapat, per report from David Rowley