Robert Haas [Tue, 24 Jan 2017 21:59:18 +0000 (16:59 -0500)]
Add a SHOW command to the replication command language.
This is useful infrastructure for an upcoming proposed patch to
allow the WAL segment size to be changed at initdb time; tools like
pg_basebackup need the ability to interrogate the server setting.
But it also doesn't seem like a bad thing to have independently of
that; it may find other uses in the future.
Robert Haas and Beena Emerson. (The original patch here was by
Beena, but I rewrote it to such a degree that most of the code
being committed here is mine.)
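To illustrate, a minimal sketch of what this enables; the parameter name
is just an example, any run-time setting can be queried:

    -- over a connection opened with the replication option, e.g.
    --   psql "dbname=postgres replication=database"
    -- the walsender now accepts SHOW with the usual syntax:
    SHOW wal_segment_size;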
Robert Haas [Tue, 24 Jan 2017 21:53:56 +0000 (16:53 -0500)]
Add a new DestReceiver for printing tuples without catalog access.
If you create a DestReceiver of type DestRemote and try to use it from
a replication connection that is not bound to a specific database, or
any other hypothetical type of backend that is not bound to a specific
database, it will fail because it doesn't have a pg_proc catalog to
look up properties of the types being printed. In general, that's
an unavoidable problem, but we can hardwire the properties of a few
builtin types in order to support utility commands. This new
DestReceiver of type DestRemoteSimple does just that.
Robert Haas [Tue, 24 Jan 2017 20:46:50 +0000 (15:46 -0500)]
Fix things so that updatable views work with partitioned tables.
Previously, ExecInitModifyTable was missing handling for WITH CHECK
OPTION, and view_query_is_auto_updatable was missing handling for
RELKIND_PARTITIONED_TABLE.
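A minimal sketch of the now-working case (all names hypothetical):

    CREATE TABLE p (a int) PARTITION BY LIST (a);
    CREATE TABLE p1 PARTITION OF p FOR VALUES IN (1);
    CREATE VIEW pv AS SELECT * FROM p WHERE a = 1
        WITH CHECK OPTION;          -- the view is now auto-updatable
    INSERT INTO pv VALUES (1);      -- routed to p1 through the view
    INSERT INTO pv VALUES (2);      -- rejected by the CHECK OPTION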
Robert Haas [Tue, 24 Jan 2017 20:34:39 +0000 (15:34 -0500)]
Set ecxt_scantuple correctly for tuple routing.
In 2ac3ef7a01df859c62d0a02333b646d65eaec5ff, we changed things so that
it's possible for a different TupleTableSlot to be used for partitioned
tables at successively lower levels. If we do end up changing the slot
from the original, we must update ecxt_scantuple to point to the new one
for the partition key of the tuple to be computed correctly.
Reported by Rajkumar Raghuwanshi. Patch by Amit Langote.
Robert Haas [Tue, 24 Jan 2017 15:20:02 +0000 (10:20 -0500)]
Reindent table partitioning code.
We've accumulated quite a bit of stuff with which pgindent is not
quite happy in this code; clean it up to provide a less-annoying base
for future pgindent runs.
Robert Haas [Tue, 24 Jan 2017 13:50:16 +0000 (08:50 -0500)]
Fix interaction of partitioned tables with BulkInsertState.
When copying into a partitioned table, the target heap may change from
one tuple to the next. We must ask ReadBufferBI() to get a new buffer
every time such a change occurs. To do that, use the new function
ReleaseBulkInsertStatePin(). This fixes the bug that tuples ended up
being inserted into the wrong partition, which occurred exactly
because the wrong buffer was used.
Amit Langote, per a suggestion from Robert Haas. Some cosmetic
adjustments by me.
Reports by 高增琦 (Gao Zengqi), Venkata B Nagothi, and
Ragnar Ouchterlony.
Peter Eisentraut [Mon, 23 Jan 2017 19:00:58 +0000 (14:00 -0500)]
Fix default minimum value for descending sequences
For some reason that is lost in history, a descending sequence would
default its minimum value to -2^63+1 (-PG_INT64_MAX) instead of
-2^63 (PG_INT64_MIN), even though explicitly specifying a minimum value
of -2^63 would work. Fix this inconsistency by using the full range by
default.
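A quick sketch of the change (sequence name hypothetical):

    CREATE SEQUENCE desc_seq INCREMENT BY -1;
    -- the default MINVALUE is now -9223372036854775808 (PG_INT64_MIN),
    -- where it previously was -9223372036854775807 (-PG_INT64_MAX)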
Reported-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Peter Eisentraut [Mon, 23 Jan 2017 18:45:32 +0000 (13:45 -0500)]
Don't error when no system locales were found
initdb used to warn about this, but the check was turned into an error
when the logic was moved into pg_import_system_locales, and some
buildfarm members failed because of that. Change it back to a warning.
Alvaro Herrera [Mon, 23 Jan 2017 15:55:18 +0000 (12:55 -0300)]
Prefetch blocks during lazy vacuum's truncation scan
The vacuum truncation scan can be sped up on rotating media by
prefetching blocks in the forward direction. That way the blocks are
already present in memory by the time they are needed, while also
letting OS read-ahead kick in.
The truncate scan has been measured to be five times faster than without
this patch (that was on a slow disk, but it shouldn't hurt on fast
disks).
Author: Álvaro Herrera, loosely based on a submission by Claudio Freire
Discussion: https://postgr.es/m/CAGTBQpa6NFGO_6g_y_7zQx8L9GcHDSQKYdo1tGuh791z6PYgEg@mail.gmail.com
Tom Lane [Mon, 23 Jan 2017 14:38:36 +0000 (09:38 -0500)]
Fix example plan in optimizer/README.
Joining three tables only takes two join nodes. I think when I (tgl)
wrote this, I was envisioning possible additional joins; but since the
example doesn't show any fourth table, it's just confusing to write
a third join node.
Tom Lane [Mon, 23 Jan 2017 14:15:49 +0000 (09:15 -0500)]
Volatile-ize some plperl variables that must survive into PG_CATCH blocks.
This appears to be necessary to fix a failure seen on buildfarm member
sittella. It shouldn't be necessary according to the letter of the C
standard, because we don't change the values of these variables within
the PG_TRY blocks; but somehow gcc 4.7.2 is dropping the ball.
Peter Eisentraut [Mon, 23 Jan 2017 13:28:39 +0000 (08:28 -0500)]
pg_dump: Fix minor memory leak
Missing a destroyPQExpBuffer() in the early exit branch. The early
exits aren't really necessary. Most similar functions just proceed
running the rest of the code zero times and clean up at the end.
Tom Lane [Sun, 22 Jan 2017 19:08:26 +0000 (14:08 -0500)]
Relocate static function declarations to be after typedefs in jsonfuncs.c.
Project style is to put things in this order, for the good and sufficient
reason that you often need the typedefs in the function declarations.
There already was one function declaration that needed a typedef, which
was randomly placed away from all the other static function declarations
in consequence. And the submitted patch for better json_populate_record
functionality jumped through even more hoops in order to preserve this
bad idea.
This patch only moves lines from point A to point B, no other changes.
Tom Lane [Sun, 22 Jan 2017 16:47:38 +0000 (11:47 -0500)]
Remove no-longer-needed loop in ExecGather().
Coverity complained quite properly that commit ea15e1867 had introduced
unreachable code into ExecGather(); to wit, it was no longer possible to
iterate the final for-loop more or less than once. So remove the for().
In passing, clean up a couple of comments, and make better use of a local
variable.
Tom Lane [Sat, 21 Jan 2017 20:15:39 +0000 (15:15 -0500)]
Fix cross-shlib linking in temporary installs on HPUX 10.
Turns out this has been broken for years and we'd not noticed. The one
case that was getting exercised in the buildfarm, or probably anywhere
else, was postgres_fdw.sl's reference to libpq.sl; and it turns out that
that was always going to libpq.sl in the actual installation directory,
not the temporary install. We'd not noticed because the buildfarm script
does "make install" before it tests contrib. However, the recent addition
of a logical-replication test to the core regression scripts resulted in
trying to use libpqwalreceiver.sl before "make install" happens, and that
failed for lack of finding libpq.sl, as shown by failures on buildfarm
members gaur and pademelon.
There are two changes needed to fix it: the magic environment variable to
specify shlib search path at runtime is SHLIB_PATH not LD_LIBRARY_PATH,
and the shlib link command needs to specify the +s switch else the library
will not honor SHLIB_PATH.
I'm not quite sure why buildfarm members anole and gharial (HPUX 11) didn't
show the same failure. Consulting man pages on the web says that HPUX 11
honors both LD_LIBRARY_PATH and SHLIB_PATH, which would explain half of it,
and the rather confusing wording I've been able to find suggests that +s
might effectively be the default in HPUX 11. But it seems at least as
likely that there's just a libpq.so installed in /usr/lib on that machine;
as long as it's not too ancient, that would satisfy the test. In any case
I do not think this patch will break HPUX 11.
At the moment I don't see a need to back-patch this, since it only matters
for testing purposes, not to mention that HPUX 10 is probably dead in the
real world anyway.
Robert Haas [Fri, 20 Jan 2017 20:55:45 +0000 (15:55 -0500)]
Avoid uselessly respawning the autovacuum launcher at high speed.
When (1) autovacuum = off and (2) there's at least one database with
an XID age greater than autovacuum_freeze_max_age and (3) all tables
in that database that need vacuuming are already being processed by a
worker and (4) the autovacuum launcher is started, a kind of infinite
loop occurs. The launcher starts a worker and immediately exits. The
worker, finding no work to do, immediately starts the launcher,
supposedly so that the next database can be processed. But because
datfrozenxid for that database hasn't been advanced yet, the new
worker gets put right back into the same database as the old one,
where it once again starts the launcher and exits. High-speed ping
pong ensues.
There are several possible ways to break the cycle; this seems like
the safest one.
Amit Khandekar (code) and Robert Haas (comments), reviewed by
Álvaro Herrera.
Alvaro Herrera [Fri, 20 Jan 2017 18:03:27 +0000 (15:03 -0300)]
tests: Use the right Perl operator
We were using != to compare strings, for which "ne" is the right thing.
It's not clear why it works everywhere except on Pavan's machine, but
it's clearly bogus anyway.
Author and reporter: Pavan Deolasee
Discussion: https://postgr.es/m/CABOikdPhsHM+pX8skoEY1_T0OtKdO1udzUj4VCjU5VEt+bj4eA@mail.gmail.com
Tom Lane [Fri, 20 Jan 2017 17:51:31 +0000 (12:51 -0500)]
Try to fix non-MSVC Windows builds in the wake of logical replication.
pgoutput evidently needs to be built without -DBUILDING_DLL. (It seems
like a pretty bad idea that these makefiles need to know exactly where
all the shlibs are in the tree, or maybe what's bad is putting them under
src/backend/. But right now is not the time to redesign that.)
Also, remove "override CPPFLAGS" in pgoutput's Makefile. I don't think
that that actually has any bad consequences, but it's certainly useless
in a directory that has no .h files, and it might be contributing to the
failure somehow.
Tom Lane [Fri, 20 Jan 2017 16:10:02 +0000 (11:10 -0500)]
Allow backslash line continuations in pgbench's meta commands.
A pgbench meta command can now be continued onto additional line(s) of a
script file by writing backslash-return. The continuation marker is
equivalent to white space in that it separates tokens.
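A hypothetical script fragment showing the new syntax; the
backslash-return continues the second \set meta command onto the next
line:

    \set aid random(1, 100000 * :scale)
    \set delta random(-5000, 5000) \
        + :scale
    UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;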
Eventually it'd be nice to have the same thing in psql, but that will
be a much larger project.
Peter Eisentraut [Fri, 20 Jan 2017 17:00:00 +0000 (12:00 -0500)]
Paper over pg_upgrade test failure
The publication test did not drop all the publications it created,
though it apparently intended to. There is still a bug with
dependency tracking in there, but this should at least quiet down the
build farm.
Peter Eisentraut [Thu, 19 Jan 2017 17:00:00 +0000 (12:00 -0500)]
Logical replication
- Add PUBLICATION catalogs and DDL
- Add SUBSCRIPTION catalog and DDL
- Define logical replication protocol and output plugin
- Add logical replication workers
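The new DDL, as a sketch (connection string and object names are
hypothetical):

    -- on the publisher
    CREATE PUBLICATION mypub FOR TABLE users;
    -- on the subscriber
    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher.example.com dbname=appdb'
        PUBLICATION mypub;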
From: Petr Jelinek <petr@2ndquadrant.com>
Reviewed-by: Steve Singer <steve@ssinger.info>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Erik Rijkers <er@xs4all.nl>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Tom Lane [Fri, 20 Jan 2017 00:52:13 +0000 (19:52 -0500)]
Avoid core dump for empty prepared statement in an aborted transaction.
Brown-paper-bag bug in commit ab1f0c822: the old code here coped with
null CachedPlanSource.raw_parse_tree, the new code not so much.
Per report from Dave Cramer.
No regression test, because our core testing infrastructure doesn't
provide any easy way to exercise this path. Fortunately, the JDBC
crew test it regularly.
I'd somehow talked myself into believing that set_append_rel_size
doesn't need to worry about getting back an AND clause when it applies
eval_const_expressions to the result of adjust_appendrel_attrs (that is,
transposing the appendrel parent's restriction clauses for one child).
But that is nonsense, and Andreas Seltenreich's fuzz tester soon
turned up a counterexample. Put back the make_ands_implicit step
that was there before, and add a regression test covering the case.
Andres Freund [Thu, 19 Jan 2017 22:21:26 +0000 (14:21 -0800)]
Fix platform-dependent regression output triggered by 69f4b9c85f16.
Due to the changed costing in that commit, hash aggregates started to
be used, which results in big-endian vs. little-endian output
differences. Disable hash aggregation for those tests.
Author: Andres Freund, with input from Tom Lane
Discussion: https://postgr.es/m/22891.1484791792@sss.pgh.pa.us
Andres Freund [Thu, 19 Jan 2017 22:12:38 +0000 (14:12 -0800)]
Remove obsoleted code relating to targetlist SRF evaluation.
Since 69f4b9c, plain expression evaluation (and thus normal projection)
can't return sets of tuples anymore. Thus remove code dealing with
that possibility.
This will require adjustments in external code using
ExecEvalExpr()/ExecProject() - that should neither be hard nor very
common.
Author: Andres Freund and Tom Lane
Discussion: https://postgr.es/m/20160822214023.aaxz5l4igypowyri@alap3.anarazel.de
Alvaro Herrera [Thu, 19 Jan 2017 21:23:09 +0000 (18:23 -0300)]
Fix race condition in reading commit timestamps
If a user requests the commit timestamp for a transaction old enough
that its data is concurrently being truncated away by vacuum at just the
right time, they would receive an ugly internal file-not-found error
message from slru.c rather than the expected NULL return value.
In a primary server, the window for the race is very small: the lookup
has to occur exactly between the two calls by vacuum, and there's not a
lot that happens between them (mostly just a multixact truncate). In a
standby server, however, the window is larger because the truncation is
executed as soon as the WAL record for it is replayed, but the advance
of the oldest-Xid is not executed until the next checkpoint record.
To fix in the primary, simply reverse the order of operations in
vac_truncate_clog. To fix in the standby, augment the WAL truncation
record so that the standby is aware of the new oldest-XID value and can
apply the update immediately. WAL version bumped because of this.
No backpatch, because of the low importance of the bug and its rarity.
Author: Craig Ringer
Reviewed-By: Petr Jelínek, Peter Eisentraut
Discussion: https://postgr.es/m/CAMsr+YFhVtRQT1VAwC+WGbbxZZRzNou=N9Ed-FrCqkwQ8H8oJQ@mail.gmail.com
Peter Eisentraut [Thu, 19 Jan 2017 17:00:00 +0000 (12:00 -0500)]
initdb: Fix for mixed-case superuser names
The previous coding did not properly quote the user name before casting
it to regrole. To avoid all that, just pass in BOOTSTRAP_SUPERUSERID
numerically.
Also fix one place where the BOOTSTRAP_SUPERUSERID was hardcoded as 10.
Robert Haas [Thu, 19 Jan 2017 18:56:13 +0000 (13:56 -0500)]
Fix some problems in check_new_partition_bound().
Account for the fact that the highest bound less than or equal to the
upper bound might be either the lower or the upper bound of the
overlapping partition, depending on whether the proposed partition
completely contains the existing partition or merely overlaps it.
Also, we need not continue searching for an even greater bound in
partition_bound_bsearch() once we find the first bound that is *equal*
to the probe, because we don't have duplicate datums. That spends
cycles needlessly.
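As a sketch, the two failure modes that must both be detected (names
hypothetical):

    CREATE TABLE r (a int) PARTITION BY RANGE (a);
    CREATE TABLE r1 PARTITION OF r FOR VALUES FROM (1) TO (10);
    -- rejected: the proposed partition completely contains r1
    CREATE TABLE r2 PARTITION OF r FOR VALUES FROM (0) TO (20);
    -- rejected: the proposed partition merely overlaps r1
    CREATE TABLE r3 PARTITION OF r FOR VALUES FROM (5) TO (15);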
Amit Langote, per a report from Amul Sul. Cosmetic changes by me.
Robert Haas [Thu, 19 Jan 2017 18:20:11 +0000 (13:20 -0500)]
Fix RETURNING to work correctly with partition tuple routing.
In ExecInsert(), do not switch back to the root partitioned table
ResultRelInfo until after we finish ExecProcessReturning(), so that
RETURNING projection is done using the partition's descriptor. For
the projection to work correctly, we must initialize it for each
leaf partition during ModifyTableState initialization.
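A minimal sketch of the fixed behavior (names hypothetical):

    CREATE TABLE p (a int, b text) PARTITION BY LIST (a);
    CREATE TABLE p1 PARTITION OF p FOR VALUES IN (1);
    -- RETURNING is now projected using the leaf partition's descriptor:
    INSERT INTO p VALUES (1, 'x') RETURNING a, b;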
Robert Haas [Thu, 19 Jan 2017 17:30:27 +0000 (12:30 -0500)]
Fix failure to enforce partitioning constraint for internal partitions.
When a tuple is inserted into a partitioning root, no partition
constraints need to be enforced; when it is inserted into a leaf, the
parent's partitioning quals need to be enforced. The previous
coding got both of those cases right. When a tuple is inserted into
an intermediate level of the partitioning hierarchy (i.e. a table
which is both a partition itself and in turn partitioned), it must
enforce the partitioning qual inherited from its parent. That case
got overlooked; repair.
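A sketch of the previously overlooked case (names hypothetical):

    CREATE TABLE root (a int, b int) PARTITION BY RANGE (a);
    CREATE TABLE mid PARTITION OF root
        FOR VALUES FROM (1) TO (10) PARTITION BY RANGE (b);
    CREATE TABLE leaf PARTITION OF mid FOR VALUES FROM (1) TO (10);
    -- inserting directly into the intermediate level must enforce the
    -- qual mid inherits from root, so this is now rejected:
    INSERT INTO mid VALUES (100, 5);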
Stephen Frost [Thu, 19 Jan 2017 17:06:21 +0000 (12:06 -0500)]
Dump sequence data based on the TableDataInfo flag
When considering a sequence's Data entry in dumpSequenceData, we were
actually looking at the sequence definition's dump flag to decide if we
should dump the data or not. That's generally fine, except for when the
sequence data entry was created by processExtensionTables() because it's
a config sequence. In that case, the sequence itself won't be marked as
dumping data because it's part of an extension, leading to the need for
processExtensionTables() to create the sequence data entry.
This leads to extension config sequence data not being included in the
dump when it should be. Fix this by looking at the sequence data's dump
flag instead, just as dumpTableData() was doing for tables (which is why
config tables were correctly being handled), and add a regression test
to make sure we don't break it moving forward.
All of this is a bit round-about since we can now represent which
components of a given dump item should be dumped out through the dump
flag. A future improvement might be to change checkExtensionMembership()
to check for config sequences/tables and set the dump flag based on that
directly, possibly removing the need for processExtensionTables().
Tom Lane [Wed, 18 Jan 2017 23:10:23 +0000 (18:10 -0500)]
Doc: improve documentation of new SRF-in-tlist behavior.
Correct a misstatement about how things used to work: we did allow nested
SRFs before, as long as no function had more than one set-returning input.
Also, attempt to document the fact that the new implementation changes the
behavior for SRFs within conditional constructs (eg CASE): the conditional
construct no longer gates whether the SRF is run, and thus cannot affect
the number of rows emitted. We might want to change this behavior, but
first it behooves us to see if we can explain it.
Minor other wordsmithing on what I wrote yesterday, too.
Andres Freund [Wed, 18 Jan 2017 20:46:50 +0000 (12:46 -0800)]
Move targetlist SRF handling from expression evaluation to new executor node.
Evaluation of set-returning functions (SRFs) in the targetlist (like SELECT
generate_series(1,5)) so far was done in the expression evaluation (i.e.
ExecEvalExpr()) and projection (i.e. ExecProject/ExecTargetList) code.
This meant that most executor nodes performing projection, and most
expression evaluation functions, had to deal with the possibility that an
evaluated expression could return a set of return values.
That's bad because it leads to repeated code in a lot of places. It also,
and that's my (Andres's) motivation, made it a lot harder to implement a
more efficient way of doing expression evaluation.
To fix this, introduce a new executor node (ProjectSet) that can evaluate
targetlists containing one or more SRFs. To avoid the complexity of the old
way of handling nested expressions returning sets (e.g. having to pass up
ExprDoneCond, and dealing with arguments to functions returning sets etc.),
those SRFs can only be at the top level of the node's targetlist. The
planner makes sure (via split_pathtarget_at_srfs()) that SRF evaluation is
only necessary in ProjectSet nodes and that SRFs are only present at the
top level of the node's targetlist. If there are nested SRFs the planner
creates multiple stacked ProjectSet nodes. The ProjectSet nodes always get
input from an underlying node.
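The new node shows up in plans, roughly like this (exact output may
vary):

    EXPLAIN (COSTS OFF) SELECT generate_series(1, 3);
    --  ProjectSet
    --    ->  Result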
We also discussed and prototyped evaluating targetlist SRFs using ROWS
FROM(), but that turned out to be more complicated than we'd hoped.
While moving SRF evaluation to ProjectSet would allow retaining the old
"least common multiple" behavior when multiple SRFs are present in one
targetlist (i.e. continue returning rows until all SRFs are at the end of
their input at the same time), we decided to instead only return rows till
all SRFs are exhausted, returning NULL for already exhausted ones. We
deemed the previous behavior to be too confusing, unexpected and actually
not particularly useful.
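For example, as a sketch of the semantics change:

    SELECT generate_series(1, 3), generate_series(1, 2);
    -- new behavior: 3 rows: (1,1), (2,2), (3,NULL)
    -- old behavior: 6 rows, running both SRFs to their least common multiple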
As a side effect, the previously prohibited case of multiple set-returning
arguments to a function is now allowed. Not because it's particularly
desirable, but because it ends up working and there seems to be no argument
for adding code to prohibit it.
The behavior of COALESCE and CASE containing SRFs has changed: they now
return multiple rows from the expression, even when the SRF-containing
"arm" of the expression is not evaluated. That's because the SRFs are
evaluated in a separate ProjectSet node. As that's quite confusing, we're
likely to instead prohibit SRFs in those places. But that's still being
discussed, and the code would reside in places not touched here, so that's
a task for later.
There's a lot of now-superfluous code dealing with set-returning
expressions around. But as the changes to get rid of it are verbose and
largely boring, it seems better for readability to keep the cleanup as a
separate commit.
Author: Tom Lane and Andres Freund
Discussion: https://postgr.es/m/20160822214023.aaxz5l4igypowyri@alap3.anarazel.de
Tom Lane [Wed, 18 Jan 2017 21:33:18 +0000 (16:33 -0500)]
Reset the proper GUC in create_index test.
Thinko in commit a4523c5aa. It doesn't really affect anything at
present, but it would be a problem if any tests added later in this
file ought to get index-only-scan plans. Back-patch, like the previous
commit, just to avoid surprises in case we add such a test and then
back-patch it.
Alvaro Herrera [Wed, 18 Jan 2017 21:06:13 +0000 (18:06 -0300)]
Change some test macros to return true booleans
These macros work fine when they are used directly in an "if" test or
similar, but as soon as the return values are assigned to boolean
variables (or passed as boolean arguments to some function), they become
bugs, hopefully caught by compiler warnings. To avoid future problems,
fix the definitions so that they return actual booleans.
To further minimize the risk that somebody uses them in back-patched
fixes that only work correctly in branches starting from the current
master and not in old ones, back-patch the change to supported branches
as appropriate.
Magnus Hagander [Wed, 18 Jan 2017 20:37:59 +0000 (21:37 +0100)]
Implement array version of jsonb_delete and operator
This makes it possible to delete multiple keys from a jsonb value by
passing in an array of text values, which makes the operation much
faster than individually deleting the keys (which would require copying
the jsonb structure over and over again).
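For example:

    SELECT '{"a": 1, "b": 2, "c": 3}'::jsonb - ARRAY['a', 'b'];
    -- result: {"c": 3}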
Tom Lane [Wed, 18 Jan 2017 20:21:52 +0000 (15:21 -0500)]
Disable transforms that replaced AT TIME ZONE with RelabelType.
These resulted in wrong answers if the relabeled argument could be matched
to an index column, as shown in bug #14504 from Evgeniy Kozlov. We might
be able to resurrect these optimizations by adjusting the planner's
treatment of RelabelType, or by adjusting btree's rules for selecting
comparison functions, but either solution will take careful analysis
and does not sound like a fit candidate for backpatching.
I left the catalog infrastructure in place and just reduced the transform
functions to always-return-NULL. This would be necessary anyway in the
back branches, and it doesn't seem important to be more invasive in HEAD.
Bug introduced by commit b8a18ad48. Back-patch to 9.5 where that came in.
Robert Haas [Wed, 18 Jan 2017 19:43:14 +0000 (14:43 -0500)]
Add some more tests for tuple routing.
Commit a25665088d812d08bb888e961f208eaebf522050 fixed some issues with
how PartitionDispatch related code handled multi-level partitioned
tables, but didn't add any tests.
Alvaro Herrera [Wed, 18 Jan 2017 19:08:20 +0000 (16:08 -0300)]
Make messages mentioning type names more uniform
This avoids additional translatable strings for each distinct type, as
well as making our quoting style around type names more consistent
(namely, that we don't quote type names). This continues what started
as f402b9950120.
Tom Lane [Wed, 18 Jan 2017 18:44:19 +0000 (13:44 -0500)]
Avoid conflicts with collation aliases generated by stripping.
This resulted in failures depending on the order of "locale -a" output.
The original coding in initdb sorted the results, but that should be
unnecessary as long as "locale -a" doesn't print duplicate names. The
original entries will then all be non-dups, and while we might generate
duplicate aliases by stripping, they should be for different encodings and
thus not conflict. Even if the latter assumption fails somehow, it won't
be fatal because we're using if_not_exists mode for the aliases.
Tom Lane [Wed, 18 Jan 2017 17:58:20 +0000 (12:58 -0500)]
Improve RLS planning by marking individual quals with security levels.
In an RLS query, we must ensure that security filter quals are evaluated
before ordinary query quals, in case the latter contain "leaky" functions
that could expose the contents of sensitive rows. The original
implementation of RLS planning ensured this by pushing the scan of a
secured table into a sub-query that it marked as a security-barrier view.
Unfortunately this results in very inefficient plans in many cases, because
the sub-query cannot be flattened and gets planned independently of the
rest of the query.
To fix, drop the use of sub-queries to enforce RLS qual order, and instead
mark each qual (RestrictInfo) with a security_level field establishing its
priority for evaluation. Quals must be evaluated in security_level order,
except that "leakproof" quals can be allowed to go ahead of quals of lower
security_level, if it's helpful to do so. This has to be enforced within
the ordering of any one list of quals to be evaluated at a table scan node,
and we also have to ensure that quals are not chosen for early evaluation
(i.e., use as an index qual or TID scan qual) if they're not allowed to go
ahead of other quals at the scan node.
This is sufficient to fix the problem for RLS quals, since we only support
RLS policies on simple tables and thus RLS quals will always exist at the
table scan level only. Eventually these qual ordering rules should be
enforced for join quals as well, which would permit improving planning for
explicit security-barrier views; but that's a task for another patch.
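A sketch of the scenario this improves (schema hypothetical):

    CREATE TABLE accounts (id int, owner name, balance int);
    ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
    CREATE POLICY own_rows ON accounts USING (owner = current_user);
    -- the policy qual gets a lower security_level than the user-supplied
    -- qual below, so a leaky operator cannot see rows the policy filters
    -- out, yet the scan no longer has to be wrapped in an unflattenable
    -- security-barrier subquery:
    SELECT * FROM accounts WHERE balance::text LIKE '1%';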
Note that FDWs would need to be aware of these rules --- and not, for
example, send an insecure qual for remote execution --- but since we do
not yet allow RLS policies on foreign tables, the case doesn't arise.
This will need to be addressed before we can allow such policies.
Patch by me, reviewed by Stephen Frost and Dean Rasheed.
Peter Eisentraut [Wed, 18 Jan 2017 17:00:00 +0000 (12:00 -0500)]
Add function to import operating system collations
Move this logic out of initdb into a user-callable function. This
simplifies the code and makes it possible to update the standard
collations later on if additional operating system collations appear.
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Euler Taveira <euler@timbira.com.br>
Peter Eisentraut [Wed, 28 Dec 2016 17:00:00 +0000 (12:00 -0500)]
Generate fmgr prototypes automatically
Gen_fmgrtab.pl creates a new file fmgrprotos.h, which contains
prototypes for all functions registered in pg_proc.h. This avoids
having to manually maintain these prototypes across a random variety of
header files. It also automatically enforces a correct function
signature, and since there are warnings about missing prototypes, it
will detect functions that are defined but not registered in
pg_proc.h (or otherwise used).
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Fujii Masao [Tue, 17 Jan 2017 08:27:32 +0000 (17:27 +0900)]
Fix an assertion failure related to an exclusive backup.
Previously multiple sessions could execute pg_start_backup() and
pg_stop_backup() to start and stop an exclusive backup at the same time.
This could trigger the assertion failure
FailedAssertion("!(XLogCtl->Insert.exclusiveBackup)").
This happened because, even while pg_start_backup() was starting
an exclusive backup, another session could run pg_stop_backup()
concurrently and mark the backup as not-in-progress unconditionally.
This patch introduces ExclusiveBackupState indicating the state of
an exclusive backup. This state is used to ensure that there is only
one session running pg_start_backup() or pg_stop_backup() at
the same time, to avoid the assertion failure.
Back-patch to all supported versions.
Author: Michael Paquier
Reviewed-By: Kyotaro Horiguchi and me
Reported-By: Andreas Seltenreich
Discussion: <87mvktojme.fsf@credativ.de>
Tom Lane [Mon, 16 Jan 2017 20:23:11 +0000 (15:23 -0500)]
Fix check_srf_call_placement() to handle VALUES cases correctly.
INSERT ... VALUES with a single VALUES row is implemented quite differently
from the general VALUES case. A user-visible implication of that is that
we accept SRFs in the single-row case, but not in the multi-row case.
That's a historical artifact no doubt, but in view of the lack of field
complaints, I'm not excited about fixing it right now.
However, check_srf_call_placement() needs to know about this, first because
it should throw an error in the unsupported case, and second because it
should set p_hasTargetSRFs in the single-row case (because we treat that
like a SELECT tlist). That's an oversight in commit a4c35ea1c.
To fix, split EXPR_KIND_VALUES into two values. So far as I can see,
this is the only place where we need to distinguish the two cases at
present; but there might be more later.
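Concretely (table name hypothetical):

    CREATE TABLE t (a int);
    -- single-row VALUES: the SRF is accepted, as in a SELECT targetlist
    INSERT INTO t VALUES (generate_series(1, 3));
    -- multi-row VALUES: now draws a proper error instead of misbehaving
    INSERT INTO t VALUES (generate_series(1, 3)), (4);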
Tom Lane [Mon, 16 Jan 2017 18:53:40 +0000 (13:53 -0500)]
Fix NULL pointer dereference in tuplesort.c.
Oversight in commit e94568ecc. This could cause a crash when an external
datum tuplesort of a pass-by-value type required multiple passes.
Per report from Mithun Cy.
Magnus Hagander [Mon, 16 Jan 2017 12:56:43 +0000 (13:56 +0100)]
Make pg_basebackup use temporary replication slots
Temporary replication slots will be used by default when WAL streaming
is used and no slot name is specified with -S. If a slot name is
specified, then a permanent slot with that name is used. If --no-slot is
specified, then no permanent or temporary slot will be used.
Temporary slots are only used on 10.0 and newer, of course.
Tom Lane [Sun, 15 Jan 2017 19:09:35 +0000 (14:09 -0500)]
Fix matching of boolean index columns to sort ordering.
Normally, if we have a WHERE clause like "indexcol = constant",
the planner will figure out that that index column can be ignored
when determining whether the index has a desired sort ordering.
But this failed to work for boolean index columns, because a
condition like "boolcol = true" is canonicalized to just "boolcol"
which does not give rise to an EquivalenceClass. Add a check to
allow the same type of deduction to be made in this case too.
Per a complaint from Dima Pavlov. Arguably this is a bug, but given the
limited impact and the small number of complaints so far, I won't risk
destabilizing plans in stable branches by back-patching.
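A sketch of the case that now works (schema hypothetical):

    CREATE TABLE t (boolcol bool, id int);
    CREATE INDEX t_bool_id_idx ON t (boolcol, id);
    -- "boolcol = true" is canonicalized to just "boolcol", which used to
    -- defeat the deduction that the index delivers rows in id order;
    -- the planner can now use the index to satisfy ORDER BY without a sort:
    SELECT * FROM t WHERE boolcol = true ORDER BY id;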
Tom Lane [Sat, 14 Jan 2017 21:17:30 +0000 (16:17 -0500)]
Teach contrib/pg_stat_statements to handle multi-statement commands better.
Make use of the statement boundary info added by commit ab1f0c822
to let pg_stat_statements behave more sanely when multiple SQL queries
are jammed into one query string. It now records just the relevant
part of the source string, not the whole thing, for each individual
query.
Even when no multi-statement strings are involved, users may notice small
changes in the output: leading and trailing whitespace and semicolons will
be stripped from statements, which did not happen before.
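For example, as a sketch:

    -- sent to the server as a single query string:
    SELECT 1; SELECT 2;
    -- pg_stat_statements now records two entries, each covering only its
    -- own piece of the source string, with surrounding whitespace and
    -- semicolons stripped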
Also, significantly expand pg_stat_statements' regression test script.
Fabien Coelho, reviewed by Craig Ringer and Kyotaro Horiguchi,
some mods by me
Tom Lane [Sat, 14 Jan 2017 21:02:35 +0000 (16:02 -0500)]
Change representation of statement lists, and add statement location info.
This patch makes several changes that improve the consistency of
representation of lists of statements. It's always been the case
that the output of parse analysis is a list of Query nodes, whatever
the types of the individual statements in the list. This patch brings
similar consistency to the outputs of raw parsing and planning steps:
* The output of raw parsing is now always a list of RawStmt nodes;
the statement-type-dependent nodes are one level down from that.
* The output of pg_plan_queries() is now always a list of PlannedStmt
nodes, even for utility statements. In the case of a utility statement,
"planning" just consists of wrapping a CMD_UTILITY PlannedStmt around
the utility node. This list representation is now used in Portal and
CachedPlan plan lists, replacing the former convention of intermixing
PlannedStmts with bare utility-statement nodes.
Now, every list of statements has a consistent head-node type depending
on how far along it is in processing. This allows changing many places
that formerly used generic "Node *" pointers to use a more specific
pointer type, thus reducing the number of IsA() tests and casts needed,
as well as improving code clarity.
Also, the post-parse-analysis representation of DECLARE CURSOR is changed
so that it looks more like EXPLAIN, PREPARE, etc. That is, the contained
SELECT remains a child of the DeclareCursorStmt rather than getting flipped
around to be the other way. It's now true for both Query and PlannedStmt
that utilityStmt is non-null if and only if commandType is CMD_UTILITY.
That allows simplifying a lot of places that were testing both fields.
(I think some of those were just defensive programming, but in many places,
it was actually necessary to avoid confusing DECLARE CURSOR with SELECT.)
Because PlannedStmt carries a canSetTag field, we're also able to get rid
of some ad-hoc rules about how to reconstruct canSetTag for a bare utility
statement; specifically, the assumption that a utility is canSetTag if and
only if it's the only one in its list. While I see no near-term need for
relaxing that restriction, it's nice to get rid of the ad-hocery.
The API of ProcessUtility() is changed so that what it's passed is the
wrapper PlannedStmt not just the bare utility statement. This will affect
all users of ProcessUtility_hook, but the changes are pretty trivial; see
the affected contrib modules for examples of the minimum change needed.
(Most compilers should give pointer-type-mismatch warnings for uncorrected
code.)
There's also a change in the API of ExplainOneQuery_hook, to pass through
cursorOptions instead of expecting hook functions to know what to pick.
This is needed because of the DECLARE CURSOR changes, but really should
have been done in 9.6; it's unlikely that any extant hook functions
know about using CURSOR_OPT_PARALLEL_OK.
Finally, teach gram.y to save statement boundary locations in RawStmt
nodes, and pass those through to Query and PlannedStmt nodes. This allows
more intelligent handling of cases where a source query string contains
multiple statements. This patch doesn't actually do anything with the
information, but a follow-on patch will. (Passing this information through
cleanly is the true motivation for these changes; while I think this is all
good cleanup, it's unlikely we'd have bothered without this end goal.)
catversion bump because addition of location fields to struct Query
affects stored rules.
This patch is by me, but it owes a good deal to Fabien Coelho who did
a lot of preliminary work on the problem, and also reviewed the patch.
Tom Lane [Sat, 14 Jan 2017 18:27:47 +0000 (13:27 -0500)]
Throw suitable error for COPY TO STDOUT/FROM STDIN in a SQL function.
A client copy can't work inside a function because the FE/BE wire protocol
doesn't support nesting of a COPY operation within query results. (Maybe
it could, but the protocol spec doesn't suggest that clients should support
this, and libpq for one certainly doesn't.)
In most PLs, this prohibition is enforced by spi.c, but SQL functions don't
use SPI. A comparison of _SPI_execute_plan() and init_execution_state()
shows that rejecting client COPY is the only discrepancy in what they
allow, so there are no other similar bugs.
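A sketch of the now-rejected case (function and table hypothetical):

    CREATE FUNCTION dump_t() RETURNS void LANGUAGE sql AS
    $$ COPY t TO STDOUT $$;
    -- using this function now draws a suitable error, since the wire
    -- protocol cannot nest a client COPY inside query results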
This is an astonishingly ancient oversight, so back-patch to all supported
branches.
Peter Eisentraut [Fri, 13 Jan 2017 17:00:00 +0000 (12:00 -0500)]
pg_ctl: Change default to wait for all actions
The different actions in pg_ctl had different defaults for -w and -W,
mostly for historical reasons. Most users will want the -w behavior, so
make that the default.
Remove the -w option in most example and test code, to avoid confusion
and reduce verbosity. pg_upgrade is not touched, so it can continue to
work with older installations.
Reviewed-by: Beena Emerson <memissemerson@gmail.com>
Reviewed-by: Ryan Murphy <ryanfmurphy@gmail.com>
Tom Lane [Fri, 13 Jan 2017 22:32:37 +0000 (17:32 -0500)]
Fix some more regression test row-order-instability issues.
Commit 0563a3a8b just introduced another instance of the same unsafe
testing methodology that appeared in 2ac3ef7a0, which I corrected in 257d81572. Robert/Amit, please stop doing that.
Also look through the rest of f0e44751d's test cases, and correct some
other queries with underdetermined ordering of results from the system
catalogs. These haven't failed in the buildfarm yet, but I don't
have any confidence in that staying true.
Tom Lane [Fri, 13 Jan 2017 21:59:52 +0000 (16:59 -0500)]
In PL/Tcl tests, don't choke if optional error fields are missing.
This fixes a portability issue introduced by commit 961bed020: with a
compiler that doesn't support PG_FUNCNAME_MACRO, the "funcname" field of
errorCode won't be provided, leading to a failure of the unset command.
I added -nocomplain to the unset commands for filename and lineno too, just
in case, though I know of no platform that wouldn't populate those fields.
(BTW, -nocomplain is new in Tcl 8.4, but fortunately we dropped support
for pre-8.4 Tcl some time ago.)
Peter Eisentraut [Fri, 13 Jan 2017 17:00:00 +0000 (12:00 -0500)]
pg_upgrade: Fix for changed pg_ctl default stop mode
In 9.5, the default pg_ctl stop mode was changed from "smart" to "fast".
pg_upgrade still thought the default mode was "smart" and only specified
the mode when "fast" was asked for. This results in using "fast" all
the time. It's not clear what the effect in practice is, but fix it
nonetheless to restore the previous behavior.
Robert Haas [Fri, 13 Jan 2017 19:03:52 +0000 (14:03 -0500)]
Fix a bug in how we generate partition constraints.
Move the code that maps parent attnos to child attnos for Vars in
partition constraint expressions into a separate function
map_partition_varattnos() and call it from the appropriate places.
Doing it in get_qual_from_partbound(), as it was done until now, would
produce wrong results in certain multi-level partitioning cases, because it only
considers the current pair of parent-child relations. In certain
multi-level partitioning cases, attnums for the same key attribute(s)
might differ between various levels, causing the same attribute to be
numbered differently in different instances of the Var corresponding
to a given attribute.
With this commit, in generate_partition_qual(), we first generate the
whole partition constraint (considering all levels of partitioning)
and then do the mapping, so that Vars in the final expression are
numbered according to the leaf relation (to which the constraint is
supposed to apply).
Robert Haas [Fri, 13 Jan 2017 18:29:31 +0000 (13:29 -0500)]
Fix cardinality estimates for parallel joins.
For a partial path, the cardinality estimate needs to reflect the
number of rows we think each worker will see, rather than the total
number of rows; otherwise, costing will go wrong. The previous coding
got this completely wrong for parallel joins.
Unfortunately, this change may destabilize plans for users of 9.6 who
have enabled parallel query, but since 9.6 is still fairly new I'm
hoping expectations won't be too settled yet. Also, this is really a
brown-paper-bag bug, so leaving it unfixed for the entire lifetime of
9.6 seems unwise.
Related reports (whose import I initially failed to recognize) by
Tomas Vondra and Tom Lane.
Tom Lane [Thu, 12 Jan 2017 23:59:46 +0000 (18:59 -0500)]
Fix field order in struct catcache.
Somebody failed to grasp the point of having the #ifdef CATCACHE_STATS
fields at the end of the struct. Put that back the way it should be,
and add a comment making it more explicit why it should be that way.