Tom Lane [Mon, 19 Jun 2017 22:32:22 +0000 (18:32 -0400)]
Fix materialized-view documentation oversights.
When materialized views were added, psql's \d commands were made to
treat them as a separate object category ... but not everyplace in the
documentation or comments got the memo.
Noted by David Johnston. Back-patch to 9.3 where matviews came in.
Tom Lane [Mon, 19 Jun 2017 19:33:41 +0000 (15:33 -0400)]
Avoid regressions in foreign-key-based selectivity estimates.
David Rowley found that the "use the smallest per-column selectivity"
heuristic applied in some cases by get_foreign_key_join_selectivity()
was badly off if the FK columns are independent, producing estimates
much worse than we got before that code was added in 9.6.
One case where that heuristic was used was for LEFT and FULL outer joins
with the referenced rel on the outside of the join. But we should not
really need to special-case those here. eqjoinsel() never has had such a
special case; the correction is applied by calc_joinrel_size_estimate()
instead. Let's just estimate such cases like inner joins and rely on that
later adjustment. (I think there was something of a thinko here, in that
the comments seem to be thinking about the selectivity as defined for
semi/anti joins; but that shouldn't apply to left/full joins.) Add a
regression test exercising such a case to show that this is sane in
at least some cases.
The other case where we used that heuristic was for SEMI/ANTI outer joins,
either if the referenced rel was on the outside, or if it was on the inside
but was part of a join within the RHS. In either case, the FK doesn't give
us a lot of traction towards estimating the selectivity. To ensure that
we don't have regressions from what happened before 9.6, let's punt by
ignoring the FK in such cases and applying the traditional selectivity
calculation. (We might be able to improve on that later, but for now
I just want to be sure it's not worse than 9.5.)
Report and patch by David Rowley, simplified a bit by me. Back-patch
to 9.6 where this code was added.
Tom Lane [Mon, 19 Jun 2017 15:02:45 +0000 (11:02 -0400)]
On Windows, make pg_dump use binary mode for compressed plain text output.
The combination of -Z -Fp and output to stdout resulted in corrupted
output data, because we left stdout in text mode, resulting in newline
conversion being done on the compressed stream. Switch stdout to binary
mode for this case, at the same place where we do it for non-text output
formats.
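For illustration only, a minimal sketch (not the actual pg_dump code) of the kind of mode switch involved on Windows, using the CRT's _setmode():

    #ifdef WIN32
    #include <stdio.h>
    #include <stdlib.h>
    #include <io.h>
    #include <fcntl.h>

    /*
     * Sketch only: on Windows, stdout defaults to text mode, so newline
     * translation would corrupt a compressed stream written to it.  Put the
     * underlying descriptor into binary mode before writing compressed data.
     */
    static void
    set_stdout_binary(void)
    {
        if (_setmode(_fileno(stdout), _O_BINARY) == -1)
        {
            fprintf(stderr, "could not switch stdout to binary mode\n");
            exit(1);
        }
    }
    #endif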
Report and patch by Kuntal Ghosh, tested by Ashutosh Sharma and Neha
Sharma. Back-patch to all supported branches.
Andres Freund [Mon, 19 Jun 2017 01:48:22 +0000 (18:48 -0700)]
Fix leaking of small spilled subtransactions during logical decoding.
When, during logical decoding, a transaction gets too big, its
contents get spilled to disk. Not just the top-level transaction gets
spilled, but *also* all of its subtransactions, even if they're not
that large themselves. Unfortunately we didn't clean up
such small spilled subtransactions from disk.
Fix that, by keeping better track of whether a transaction has been
spilled to disk.
Author: Andres Freund
Reported-By: Dmitriy Sarafannikov, Fabrízio de Royes Mello
Discussion:
https://postgr.es/m/1457621358.355011041@f382.i.mail.ru
https://postgr.es/m/CAFcNs+qNMhNYii4nxpO6gqsndiyxNDYV0S=JNq0v_sEE+9PHXg@mail.gmail.com
Backpatch: 9.4-, where logical decoding was introduced
Peter Eisentraut [Sat, 17 Jun 2017 01:23:22 +0000 (21:23 -0400)]
Define HAVE_UCOL_STRCOLLUTF8 on Windows
This should normally be determined by a configure check, but until
someone figures out how to do that on Windows, it's better that the code
uses the new function by default.
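For context, a simplified sketch (not the actual pg_locale.c code) of how the macro gates the direct UTF-8 collation path in ICU:

    #include <unicode/ucol.h>

    /*
     * Sketch only: with HAVE_UCOL_STRCOLLUTF8 defined, UTF-8 strings can be
     * compared directly; otherwise they must first be converted to UChar and
     * compared with ucol_strcoll() (fallback omitted here).
     */
    static int
    compare_utf8(UCollator *coll, const char *a, int alen, const char *b, int blen)
    {
    #ifdef HAVE_UCOL_STRCOLLUTF8
        UErrorCode      status = U_ZERO_ERROR;
        UCollationResult r = ucol_strcollUTF8(coll, a, alen, b, blen, &status);

        /* a real caller would also check U_FAILURE(status) */
        return (r == UCOL_EQUAL) ? 0 : (r == UCOL_GREATER) ? 1 : -1;
    #else
        return 0;               /* placeholder for the UChar conversion path */
    #endif
    }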
Tom Lane [Sat, 17 Jun 2017 03:14:27 +0000 (23:14 -0400)]
Teach pgindent to skip files generated by bison or flex automatically.
If a .c or .h file corresponds to a .y or .l file, skip indenting it.
There's no point in reindenting derived files, and these files tend to
confuse pgindent. (Which probably indicates a bug in BSD indent, but
I can't get excited about trying to fix it.)
For the same reasons, add src/backend/utils/fmgrtab.c to the set of
files excluded by src/tools/pgindent/exclude_file_patterns.
The point of doing this is that it makes it safe to run pgindent over
the tree without doing "make maintainer-clean" first. While these are
not the only derived .c/.h files in the tree, they are the only ones
pgindent fails on. Removing that prerequisite step results in one less
way to mess up a pgindent run, and it's necessary if we ever hope to get
to the ease of running pgindent via "make indent".
Fix dependency tracking when changing a function's argument/return type.
When a new base type is created using the old-style procedure of first
creating the input/output functions with "opaque" in place of the base
type, the "opaque" argument/return type is changed to the final base type
at CREATE TYPE time. However, we did not create a pg_depend record when
doing that, so the functions were left not depending on the type.
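In essence the fix amounts to recording an explicit dependency; a hedged sketch of that step (shape assumed, not the committed code):

    #include "postgres.h"
    #include "catalog/dependency.h"
    #include "catalog/objectaddress.h"
    #include "catalog/pg_proc.h"
    #include "catalog/pg_type.h"

    /*
     * Sketch only: after rewriting an I/O function's "opaque" argument/return
     * type to the new base type, record a normal dependency of the function
     * on that type so the dependency machinery knows about it.
     */
    static void
    record_function_type_dependency(Oid funcoid, Oid typeoid)
    {
        ObjectAddress func;
        ObjectAddress type;

        ObjectAddressSet(func, ProcedureRelationId, funcoid);
        ObjectAddressSet(type, TypeRelationId, typeoid);

        recordDependencyOn(&func, &type, DEPENDENCY_NORMAL);
    }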
Noah Misch [Fri, 16 Jun 2017 07:16:11 +0000 (00:16 -0700)]
Reconcile nodes/*funcs.c with PostgreSQL 10 work.
The _equalTableFunc() omission of coltypmods has semantic significance,
but I did not track down resulting user-visible bugs, if any. The other
changes are cosmetic only, affecting order. catversion bump due to
readfuncs.c field order change.
Tom Lane [Thu, 15 Jun 2017 19:56:12 +0000 (15:56 -0400)]
Make configure check for IPC::Run when --enable-tap-tests is specified.
The TAP tests mostly don't work without IPC::Run, and the reason for
the failure is not immediately obvious from the error messages you get.
So teach configure to reject --enable-tap-tests unless IPC::Run exists.
Mostly this just involves adding ax_prog_perl_modules.m4 from the GNU
autoconf archives.
This was discussed last year, but we held off on the theory that we might
be switching to CMake soon. That's evidently not happening for v10,
so let's absorb this now.
Tom Lane [Thu, 15 Jun 2017 19:03:39 +0000 (15:03 -0400)]
Fix low-probability leaks of PGresult objects in the backend.
We had three occurrences of essentially the same coding pattern
wherein we tried to retrieve a query result from a libpq connection
without blocking. In the case where PQconsumeInput failed (typically
indicating a lost connection), all three loops simply gave up and
returned, forgetting to clear any previously-collected PGresult
object. Since those are malloc'd not palloc'd, the oversight results
in a process-lifespan memory leak.
One instance, in libpqwalreceiver, is of little significance because
the walreceiver process would just quit anyway if its connection fails.
But we might as well fix it.
The other two instances, in postgres_fdw, are somewhat more worrisome
because at least in principle the scenario could be repeated, allowing
the amount of memory leaked to build up to something worth worrying
about. Moreover, in these cases the loops contain CHECK_FOR_INTERRUPTS
calls, as well as other calls that could potentially elog(ERROR),
providing another way to exit without having cleared the PGresult.
Here we need to add PG_TRY logic similar to what exists in quite a
few other places in postgres_fdw.
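As a hedged sketch of the pattern being fixed (names and structure assumed; the real code also waits on the socket rather than spinning, and postgres_fdw wraps this differently), the idea is to clear any collected PGresult on every exit path, since PGresults are malloc'd, not palloc'd:

    #include "postgres.h"
    #include "libpq-fe.h"
    #include "miscadmin.h"

    static PGresult *
    get_result_nonblocking(PGconn *conn)
    {
        PGresult   *volatile last_res = NULL;

        PG_TRY();
        {
            for (;;)
            {
                PGresult   *res;

                while (PQisBusy(conn))
                {
                    CHECK_FOR_INTERRUPTS();     /* may throw an ERROR */

                    if (!PQconsumeInput(conn))  /* typically a lost connection */
                    {
                        PQclear(last_res);      /* don't leak what we collected */
                        last_res = NULL;
                        goto done;
                    }
                }

                res = PQgetResult(conn);
                if (res == NULL)
                    break;                      /* query is complete */

                PQclear(last_res);
                last_res = res;
            }
    done:       ;
        }
        PG_CATCH();
        {
            PQclear(last_res);                  /* free before re-throwing */
            PG_RE_THROW();
        }
        PG_END_TRY();

        return last_res;
    }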
Coverity noted the libpqwalreceiver bug; I found the other two cases
by checking all calls of PQconsumeInput.
Back-patch to all supported versions as appropriate (9.2 lacks
postgres_fdw, so this is really quite unexciting for that branch).
Bruce Momjian [Thu, 15 Jun 2017 16:30:02 +0000 (12:30 -0400)]
docs: Fix pg_upgrade standby server upgrade docs
It was unsafe to instruct users to start/stop the server after
pg_upgrade was run but before the standby servers were rsync'ed. The
new instructions avoid this.
RELEASE NOTES: This fix should be mentioned in the minor release notes.
Reported-by: Dmitriy Sarafannikov and Sergey Burladyan
Discussion: https://postgr.es/m/87wp8o506b.fsf@seb.koffice.internal
Backpatch-through: 9.5, where standby server upgrade instructions first appeared
Tatsuo Ishii [Thu, 15 Jun 2017 01:01:39 +0000 (10:01 +0900)]
Fix documentation bug regarding read-only transactions.
The documentation claimed that read-only transactions (not on a standby)
allow sequences to be updated. This has been wrong since commit
05d8a561ff85db1545f5768fe8d8dc9d99ad2ef7.
Robert Haas [Wed, 14 Jun 2017 20:19:46 +0000 (16:19 -0400)]
Fix problems related to RangeTblEntry members enrname and enrtuples.
Commit 18ce3a4ab22d2984f8540ab480979c851dae5338 failed to update
the comments in parsenodes.h for the new members, and made only
incomplete updates to src/backend/nodes
Andres Freund [Wed, 14 Jun 2017 18:57:21 +0000 (11:57 -0700)]
Don't force-assign transaction id when exporting a snapshot.
Previously we required every exported transaction to have an xid
assigned. That was used to check that the exporting transaction is
still running, which in turn is needed to guarantee that the
necessary rows haven't been removed between exporting and importing
the snapshot.
The exported xid caused unnecessary problems with logical decoding,
because slot creation has to wait for all concurrent xids to finish,
which in turn serializes concurrent slot creation. It also
prevented snapshots from being exported on hot-standby replicas.
Instead, export the virtual transaction id, which avoids the unnecessary
serialization and the inability to export snapshots on standbys. This
changes the file name of the exported snapshot, but since we never
documented what that one means, that seems ok.
Author: Petr Jelinek, slightly editorialized by me
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/f598b4b8-8cd7-0d54-0939-adda763d8c34@2ndquadrant.com
Robert Haas [Wed, 14 Jun 2017 17:13:11 +0000 (13:13 -0400)]
Teach predtest.c about CHECK clauses to fix partitioning bugs.
In a CHECK clause, a null result means true, whereas in a WHERE clause
it means false. predtest.c provided different functions depending on
which set of semantics applied to the predicate being proved, but had
no option to control what a null meant in the clauses provided as
axioms. Add one.
Use that in the partitioning code when figuring out whether the
validation scan on a new partition can be skipped. Rip out the
old logic that attempted (not very successfully) to compensate
for the absence of the necessary support in predtest.c.
Ashutosh Bapat and Robert Haas, reviewed by Amit Langote and
incorporating feedback from Tom Lane.
Alvaro Herrera [Wed, 14 Jun 2017 15:29:05 +0000 (11:29 -0400)]
Avoid bogus TwoPhaseState locking sequences
The optimized code in 728bd991c3c4 contains a few invalid locking
sequences. To wit, the original code would try to acquire an lwlock
that it already holds. Avoid this by moving lock acquisitions to
higher-level code, and install appropriate assertions in the low-level
code to verify that the correct lock mode is held.
Authors: Michael Paquier, Álvaro Herrera
Reported-By: chuanting wang
Bug: #14680
Discussion: https://postgr.es/m/20170531033228.1487.10124@wrigleys.postgresql.org
Tom Lane [Wed, 14 Jun 2017 15:10:05 +0000 (11:10 -0400)]
Fix no-longer-valid shortcuts in expression_returns_set().
expression_returns_set() used to short-circuit its recursion upon
seeing certain node types, such as DistinctExpr, that it knew the
executor did not support set-valued arguments for. That was never
inherent, though, just a reflection of laziness in execQual.c.
With the new implementation of SRFs there is no reason to think
that any scalar-valued expression node could not have a set-valued
subexpression, except for AggRefs and WindowFuncs where we know there
is a parser check rejecting it. And indeed, the shortcut causes
unexpected failures for cases such as a SRF underneath DistinctExpr,
because the planner stops looking for SRFs too soon.
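A simplified sketch (not the exact committed code) of the walker behavior described here: recurse through everything, and only stop early where a set-valued subexpression is provably impossible:

    #include "postgres.h"
    #include "nodes/nodeFuncs.h"

    static bool
    returns_set_walker(Node *node, void *context)
    {
        if (node == NULL)
            return false;
        if (IsA(node, FuncExpr) && ((FuncExpr *) node)->funcretset)
            return true;
        if (IsA(node, OpExpr) && ((OpExpr *) node)->opretset)
            return true;

        /* Aggref/WindowFunc arguments cannot contain SRFs; the parser rejects that */
        if (IsA(node, Aggref) || IsA(node, WindowFunc))
            return false;

        /* no shortcut for DistinctExpr and friends: keep recursing */
        return expression_tree_walker(node, returns_set_walker, context);
    }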
Tom Lane [Wed, 14 Jun 2017 14:26:46 +0000 (10:26 -0400)]
Fix violations of CatalogTupleInsert/Update/Delete abstraction.
In commits 2f5c9d9c9 and ab0289651 we invented an abstraction layer
to insulate catalog manipulations from direct heap update calls.
But evidently some patches that hadn't landed in-tree at that point
didn't get the memo completely. Fix a couple of direct calls to
simple_heap_delete to use CatalogTupleDelete instead; these appear
to have been added in commits 7c4f52409 and 7b504eb28. This change is
purely cosmetic ATM, but there's no point in having an abstraction layer
if we allow random code to break it.
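The substitution itself is tiny; an illustrative sketch of what catalog code should look like after the change:

    #include "postgres.h"
    #include "access/htup.h"
    #include "catalog/indexing.h"
    #include "utils/rel.h"

    /*
     * Illustration only: go through the catalog abstraction layer rather
     * than touching the heap directly.
     */
    static void
    delete_catalog_tuple(Relation catrel, HeapTuple tup)
    {
        /* was: simple_heap_delete(catrel, &tup->t_self); */
        CatalogTupleDelete(catrel, &tup->t_self);
    }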
Dean Rasheed [Wed, 14 Jun 2017 08:00:01 +0000 (09:00 +0100)]
Teach PL/pgSQL about partitioned tables.
Table partitioning, introduced in commit f0e44751d7, added a new
relkind - RELKIND_PARTITIONED_TABLE. Update a couple of places in
PL/pgSQL to handle it. Specifically plpgsql_parse_cwordtype() and
build_row_from_class() needed updating in order to make table%ROWTYPE
and table.col%TYPE work for partitioned tables.
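The same pattern recurs in this and several other partitioned-table fixes in this batch; roughly (illustrative sketch only, not taken verbatim from any of the patches):

    #include "postgres.h"
    #include "catalog/pg_class.h"

    /*
     * Illustration only: places that previously accepted plain relations now
     * need to treat partitioned tables the same way.
     */
    static bool
    relkind_is_table_like(char relkind)
    {
        return (relkind == RELKIND_RELATION ||
                relkind == RELKIND_PARTITIONED_TABLE);
    }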
Dean Rasheed [Wed, 14 Jun 2017 07:43:40 +0000 (08:43 +0100)]
Teach RemoveRoleFromObjectPolicy() about partitioned tables.
Table partitioning, introduced in commit f0e44751d7, added a new
relkind - RELKIND_PARTITIONED_TABLE. Update
RemoveRoleFromObjectPolicy() to handle it, otherwise DROP OWNED BY
will fail if the role has any RLS policies referring to partitioned
tables.
Tom Lane [Wed, 14 Jun 2017 03:46:39 +0000 (23:46 -0400)]
Disallow set-returning functions inside CASE or COALESCE.
When we reimplemented SRFs in commit 69f4b9c85, our initial choice was
to allow the behavior to vary from historical practice in cases where a
SRF call appeared within a conditional-execution construct (currently,
only CASE or COALESCE). But that was controversial to begin with, and
subsequent discussion has resulted in a consensus that it's better to
throw an error instead of executing the query differently from before,
so long as we can provide a reasonably clear error message and a way to
rewrite the query.
Hence, add a parser mechanism to allow detection of such cases during
parse analysis. The mechanism just requires storing, in the ParseState,
a pointer to the set-returning FuncExpr or OpExpr most recently emitted
by parse analysis. Then the parsing functions for CASE and COALESCE can
detect the presence of a SRF in their arguments by noting whether this
pointer changes while analyzing their arguments. Furthermore, if it does,
it provides a suitable error cursor location for the complaint. (This
means that if there's more than one SRF in the arguments, the error will
point at the last one to be analyzed, not the first. While connoisseurs of
parsing behavior might find that odd, it's unlikely the average user would
ever notice.)
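A hedged sketch of the detection mechanism just described (the field name p_last_srf is assumed here): the transform records the pointer before analyzing its arguments, and if it changed afterwards, one of the arguments contained a SRF and the error cursor can point at it.

    #include "postgres.h"
    #include "nodes/nodeFuncs.h"
    #include "parser/parse_node.h"

    static void
    check_no_srf_in_args(ParseState *pstate, Node *srf_seen_before)
    {
        if (pstate->p_last_srf != srf_seen_before)
            ereport(ERROR,
                    (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                     errmsg("set-returning functions are not allowed in CASE or COALESCE arguments"),
                     parser_errposition(pstate,
                                        exprLocation(pstate->p_last_srf))));
    }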
While at it, we can also provide more specific error messages than before
about some pre-existing restrictions, such as no-SRFs-within-aggregates.
Also, reject at parse time cases where a NULLIF or IS DISTINCT FROM
construct would need to return a set. We've never supported that, but the
restriction is depended on in more subtle ways now, so it seems wise to
detect it at the start.
Also, provide some documentation about how to rewrite a SRF-within-CASE
query using a custom wrapper SRF.
It turns out that the information_schema.user_mapping_options view
contained an instance of exactly the behavior we're now forbidding; but
rewriting it makes it more clear and safer too.
initdb forced because of user_mapping_options change.
Patch by me, with error message suggestions from Alvaro Herrera and
Andres Freund, pursuant to a complaint from Regina Obe.
Peter Eisentraut [Tue, 13 Jun 2017 20:10:11 +0000 (16:10 -0400)]
doc: Update example version numbers in pg_upgrade documentation
The exact numbers don't matter, since they are examples, but they were
looking quite dated.
For the target version, we now automatically substitute the current
major version. The updated example source version should be good for a
couple of years.
Tom Lane [Tue, 13 Jun 2017 17:05:59 +0000 (13:05 -0400)]
Re-run pgindent.
This is just to have a clean base state for testing of Piotr Stefaniak's
latest version of FreeBSD indent. I fixed up a couple of places where
pgindent would have changed format not-nicely. perltidy not included.
This doesn't actually matter at present, because the current code
never consults null_index for range partitions. However, leaving
it uninitialized is still a bad idea, so let's not do that.
Dean Rasheed [Tue, 13 Jun 2017 16:30:36 +0000 (17:30 +0100)]
Teach relation_is_updatable() about partitioned tables.
Table partitioning, introduced in commit f0e44751d7, added a new
relkind - RELKIND_PARTITIONED_TABLE. Update relation_is_updatable() to
handle it. Specifically, partitioned tables and simple views built on
top of them are updatable.
This affects the SQL-callable functions pg_relation_is_updatable() and
pg_column_is_updatable(), and the views information_schema.views and
information_schema.columns.
Tom Lane [Tue, 13 Jun 2017 14:54:39 +0000 (10:54 -0400)]
In initdb, defend against assignment of NULL values to not-null columns.
Previously, you could write _null_ in a BKI DATA line for a column that's
supposed to be NOT NULL and initdb would let it pass, probably breaking
subsequent accesses to the row. No doubt the original coding overlooked
this simple sanity check because in the beginning we didn't have any way
to mark catalog columns NOT NULL at initdb time.
Tom Lane [Tue, 13 Jun 2017 03:29:44 +0000 (23:29 -0400)]
Fix confusion about number of subplans in partitioned INSERT setup.
ExecInitModifyTable() thought there was a plan per partition, but no,
there's only one. The problem had escaped detection so far because there
would only be visible misbehavior if there were a SubPlan (not an InitPlan)
in the quals being duplicated for each partition. However, valgrind
detected a bogus memory access in test cases added by commit 4f7a95be2,
and investigation of that led to discovery of the bug. The additional
test case added here crashes without the patch.
Tom Lane [Tue, 13 Jun 2017 00:04:32 +0000 (20:04 -0400)]
Assert that we don't invent relfilenodes or type OIDs in binary upgrade.
During pg_upgrade's restore run, all relfilenode choices should be
overridden by commands in the dump script. If we ever find ourselves
choosing a relfilenode in the ordinary way, someone blew it. Likewise for
pg_type OIDs. Since pg_upgrade might well succeed anyway, if there happens
not to be a conflict during the regression test run, we need assertions
here to keep us on the straight and narrow.
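A minimal sketch of the kind of assertion meant (IsBinaryUpgrade is the existing flag indicating a pg_upgrade restore run; exact placement is assumed):

    #include "postgres.h"
    #include "miscadmin.h"

    /*
     * Sketch only: during a binary upgrade, every relfilenode and pg_type OID
     * should have been forced by the dump script, so reaching the
     * "pick a fresh value" path indicates a bug.
     */
    static void
    check_not_choosing_oids_in_binary_upgrade(void)
    {
        Assert(!IsBinaryUpgrade);
    }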
We might someday be able to remove the assertion in GetNewRelFileNode,
if pg_upgrade is rewritten to remove its assumption that old and new
relfilenodes always match. But it's hard to see how to get rid of the
pg_type OID constraint, since those OIDs are embedded in user tables
in some cases.
Back-patch as far as 9.5, because of the risk of back-patches breaking
something here even if it works in HEAD. I'd prefer to go back further,
but 9.4 fails both assertions due to get_rel_infos()'s use of a temporary
table. We can't use the later-branch solution of a CTE for compatibility
reasons (cf commit 5d16332e9), and it doesn't seem worth inventing some
other way to do the query. (I did check, by dint of changing the Asserts
to elog(WARNING), that there are no other cases of unwanted OID assignments
during 9.4's regression test run.)
Tom Lane [Mon, 12 Jun 2017 20:57:31 +0000 (16:57 -0400)]
Fix ALTER SEQUENCE OWNED BY to not rewrite the sequence relation.
It's not necessary for it to do that, since OWNED BY requires only ordinary
catalog updates and doesn't affect future sequence values. And pg_upgrade
needs to use OWNED BY without having it change the sequence's relfilenode.
Commit 3d79013b9 broke this by making all forms of ALTER SEQUENCE change
the relfilenode; that seems to be the explanation for the hard-to-reproduce
buildfarm failures we've been seeing since then.
Peter Eisentraut [Mon, 12 Jun 2017 19:57:22 +0000 (15:57 -0400)]
Add ICU_CFLAGS to global CPPFLAGS
The original code only added ICU_CFLAGS to the backend build. But it is
also needed for building external modules that include pg_locale.h. So
add it to the global CPPFLAGS. (This is only relevant if ICU is not in
a compiler default path, so it apparently hasn't bitten many.)
Stop table sync workers when subscription relation entry is removed
When a table sync worker is in waiting state and the subscription table
entry is removed because of a concurrent subscription refresh, the
worker could be left orphaned. To avoid that, explicitly stop the
worker when the pg_subscription_rel entry is removed.
Tom Lane [Sun, 11 Jun 2017 23:00:01 +0000 (19:00 -0400)]
Handle unqualified SEQUENCE NAME options properly in parse_utilcmd.c.
generateSerialExtraStmts() was sloppy about handling the case where
SEQUENCE NAME is given with a not-schema-qualified name. It was generating
a CreateSeqStmt with an unqualified sequence name, and an AlterSeqStmt
whose "owned_by" DefElem contained a T_String Value with a null string
pointer in the schema-name position. The generated nextval() argument was
also underqualified. This accidentally failed to fail at runtime, but only
so long as the current default creation namespace at runtime is the right
namespace. That's bogus; the parse-time transformation is supposed to be
inserting the right schema name in all cases, so as to avoid any possible
skew in that selection. I'm not sure this could fail in pg_dump's usage,
but it's still wrong; we have had real bugs in this area before adopting
the policy that parse_utilcmd.c should generate only fully-qualified
auxiliary commands. A slightly lesser problem, which is what led me to
notice this in the first place, is that pprint() dumped core on the
AlterSeqStmt because of the bogus T_String.
Noted while poking into the open problem with ALTER SEQUENCE breaking
pg_upgrade.
Joe Conway [Sun, 11 Jun 2017 15:51:18 +0000 (08:51 -0700)]
Apply RLS policies to partitioned tables.
The new partitioned table capability added a new relkind, namely
RELKIND_PARTITIONED_TABLE. Update fireRIRrules() to apply RLS
policies on RELKIND_PARTITIONED_TABLE as it does for RELKIND_RELATION.
In addition, add RLS regression test coverage for partitioned tables.
Issue raised by Fakhroutdinov Evgenievich and patch by Mike Palmiotto.
Regression test editorializing by me.
Andrew Dunstan [Sat, 10 Jun 2017 14:19:06 +0000 (10:19 -0400)]
Take PROVE_FLAGS from the command line but not the environment
This reverts commit 56b6ef893fee9e9bf47d927a02f4d1ea911f4d9c and instead
makes vcregress.pl parse out PROVE_FLAGS from a command line argument
when doing a TAP test, thus making it consistent with the makefile
treatment.
Silence warning about uninitialized 'ret' variable on some compilers.
If the compiler doesn't notice that the switch-statement handles all
possible values of the enum, it might complain that 'ret' is being used
without initialization. Jeff Janes reported that on gcc 4.4.7.
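A generic illustration of the warning and the usual fix (not the actual PostgreSQL code): give the variable a dummy initial value so compilers that can't prove the switch is exhaustive stay quiet.

    enum color { RED, GREEN, BLUE };

    static int
    color_code(enum color c)
    {
        int     ret = 0;        /* silences "may be used uninitialized" */

        switch (c)
        {
            case RED:
                ret = 1;
                break;
            case GREEN:
                ret = 2;
                break;
            case BLUE:
                ret = 3;
                break;
        }
        return ret;
    }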
Improve tablesync behavior with concurrent changes
When a table is removed from a subscription before the tablesync worker
could start, this would previously result in an error when reading
pg_subscription_rel. Now we just ignore this.
Give a better error message on invalid hostaddr option.
If you accidentally pass a host name in the hostaddr option, e.g.
hostaddr=localhost, you get an error like:
psql: could not translate host name "localhost" to address: Name or service not known
That's a bit confusing, because it implies that we tried to look up
"localhost" in DNS, but it failed. To make it more clear that we tried to
parse "localhost" as a numeric network address, change the message to:
psql: could not parse network address "localhost": Name or service not known
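For illustration, hostaddr must be a numeric address while host may be a name; passing a name in hostaddr is the misuse that now produces the clearer message. A small libpq example of the correct usage:

    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* correct: numeric address in hostaddr, name (if any) in host */
        PGconn     *conn = PQconnectdb("host=localhost hostaddr=127.0.0.1 dbname=postgres");

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "%s", PQerrorMessage(conn));

        PQfinish(conn);
        return 0;
    }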
Andres Freund [Thu, 8 Jun 2017 22:00:53 +0000 (15:00 -0700)]
Use standard interrupt handling in logical replication launcher.
Previously the exit handling was only able to exit from within the
main loop, and not from within the backend code it calls. Fix that by
using the standard die() SIGTERM handler, and adding the necessary
CHECK_FOR_INTERRUPTS() call.
This requires adding yet another process-type-specific branch to
ProcessInterrupts(), which hints that we probably should generalize
that handling. But that's work for another day.
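A minimal sketch of the standard handling described here (simplified; the real launcher also waits on its latch rather than busy-looping, and the function name is illustrative):

    #include "postgres.h"

    #include <signal.h>

    #include "miscadmin.h"
    #include "postmaster/bgworker.h"
    #include "tcop/tcopprot.h"

    /*
     * Sketch only: install the standard die() SIGTERM handler and check for
     * interrupts inside the loop, so a shutdown request is honored even
     * while running backend code.
     */
    void
    launcher_like_main(Datum main_arg)
    {
        pqsignal(SIGTERM, die);
        BackgroundWorkerUnblockSignals();

        for (;;)
        {
            CHECK_FOR_INTERRUPTS();     /* processes a pending SIGTERM */

            /* ... do one unit of work here ... */
        }
    }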
Author: Petr Jelinek
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/fe072153-babd-3b5d-8052-73527a6eb657@2ndquadrant.com
Andres Freund [Thu, 8 Jun 2017 21:42:18 +0000 (14:42 -0700)]
Again report a useful error message when walreceiver's connection closes.
Since 7c4f52409a8c (merged in v10), a shutdown master is reported as
FATAL: unexpected result after CommandComplete: server closed the connection unexpectedly
by walsender. It used to be
LOG: replication terminated by primary server
FATAL: could not send end-of-streaming message to primary: no COPY in progress
While the old message clearly is not perfect, it's definitely better
than what's reported now.
The change comes from the attempt to handle finished COPYs without
erroring out, needed for the new logical replication, which wasn't
needed before.
There's probably better ways to handle this, but for now just
explicitly check for a closed connection.
Author: Petr Jelinek
Reviewed-By: Andres Freund
Discussion: https://postgr.es/m/f7c7dd08-855c-e4ed-41f4-d064a6c0665a@2ndquadrant.com
Backpatch: -
Tom Lane [Thu, 8 Jun 2017 17:48:27 +0000 (13:48 -0400)]
Fix bit-rot in pg_upgrade's test.sh, and improve documentation.
Doing a cross-version upgrade test with test.sh evidently hasn't been
tested since circa 9.2, because the script lacked case branches for
old-version servers newer than 9.1. Future-proof that a bit, and
clean up breakage induced by our recent drop of V0 function call
protocol (namely that oldstyle_length() isn't in the regression
suite anymore).
(This isn't enough to make the test work perfectly cleanly across
versions, but at least it finishes and provides dump files that
you can diff manually. One issue I didn't touch is that we might
want to execute the "reindex_hash.sql" file in the new DB before
dumping it, so that the hash indexes don't vanish from the dump.)
Improve the TESTING doc file: put the tl;dr version at the top not
the bottom, and bring its explanation of how to run a cross-version
test up to speed, since the installcheck target isn't there and won't
be resurrected. Improve the comment in the Makefile about why not.
In passing, teach .gitignore and "make clean" about a couple more
junk output files.
Most of the improvements were in the new SCRAM code:
* In SCRAM protocol violation messages, use errdetail to provide the
details.
* If pg_backend_random() fails, throw an ERROR rather than just LOG. We
shouldn't continue authentication if we can't generate a random nonce.
* Use ereport() rather than elog() for the "invalid SCRAM verifier"
messages. They shouldn't happen, if everything works, but it's not
inconceivable that someone would have invalid SCRAM verifiers in
pg_authid, e.g. if a broken client application was used to generate the
verifier.
But this change applied to old code:
* Use ERROR rather than COMMERROR for protocol violation errors. There's
no reason to not tell the client what they did wrong. The client might be
confused already, so that it cannot read and display the error correctly,
but let's at least try. In the "invalid password packet size" case, we
used to actually continue with authentication anyway, but that is now a
hard error.
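In code form, the reporting pattern being adopted looks roughly like this (message wording illustrative, not the committed text): a generic primary message for the protocol violation, with the specifics in errdetail, thrown as a plain ERROR so the client gets to see it.

    #include "postgres.h"

    static void
    report_protocol_violation(const char *detail_msg)
    {
        ereport(ERROR,
                (errcode(ERRCODE_PROTOCOL_VIOLATION),
                 errmsg("malformed SCRAM message"),
                 errdetail("%s", detail_msg)));
    }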
Patch by Michael Paquier and me. Thanks to Daniel Varrazzo for spotting
the typo in one of the messages that spurred the discussion and these
larger changes.
Joe Conway [Thu, 8 Jun 2017 00:54:33 +0000 (17:54 -0700)]
Fix contrib/sepgsql regr tests for tup-routing constraint check change.
Commit 15ce775 changed tuple-routing constraint checking logic.
This affects the expected output for contrib/sepgsql, because
there are no longer LOG entries reporting allowance of int4eq()
execution. Per buildfarm.
Tom Lane [Wed, 7 Jun 2017 21:23:38 +0000 (17:23 -0400)]
Docs: improve CREATE TABLE ref page's discussion of partition bounds.
Clarify in the syntax synopsis that partition bound values must be
exactly numeric literals or string literals; previously it
said "bound_literal" which was defined nowhere.
Replace confusing --- and, I think, incorrect in detail --- definition
of how range bounds work with a reference to row-wise comparison plus
a concrete example (which I stole from Robert Haas).
Robert Haas [Wed, 7 Jun 2017 19:14:55 +0000 (15:14 -0400)]
postgres_fdw: Allow cancellation of transaction control commands.
Commit f039eaac7131ef2a4cf63a10cf98486f8bcd09d2, later back-patched
with commit 1b812afb0eafe125b820cc3b95e7ca03821aa675, allowed many of
the queries issued by postgres_fdw to fetch remote data to respond to
cancel interrupts in a timely fashion. However, it didn't do anything
about the transaction control commands, which remained
noninterruptible.
Improve the situation by changing do_sql_command() to retrieve query
results using pgfdw_get_result(), which uses the asynchronous
interface to libpq so that it can check for interrupts every time
libpq returns control. Since this might result in a situation
where we can no longer be sure that the remote transaction state
matches the local transaction state, add a facility to force all
levels of the local transaction to abort if we've lost track of
the remote state; without this, an apparently-successful commit of
the local transaction might fail to commit changes made on the
remote side. Also, add a 60-second timeout for queries issued during
transaction abort; if that expires, give up and mark the state of
the connection as unknown. Drop all such connections when we exit
the local transaction. Together, these changes mean that if we're
aborting the local toplevel transaction anyway, we can just drop the
remote connection in lieu of waiting (possibly for a very long time)
for it to complete an abort.
This still leaves quite a bit of room for improvement. PQcancel()
has no asynchronous interface, so if we get stuck sending the cancel
request we'll still hang. Also, PQsetnonblocking() is not used, which
means we could block uninterruptibly when sending a query. There
might be some other optimizations possible as well. Nonetheless,
this allows us to escape a wait for an unresponsive remote server
quickly in many more cases than previously.
Report by Suraj Kharage. Patch by me and Rafia Sabih. Review
and testing by Amit Kapila and Tushar Ahuja.
A logical replication worker should not insert new rows into
pg_subscription_rel, only update existing rows, so that there are no
races if a concurrent refresh removes rows. Adjust the API to be able
to choose that behavior.