Tom Lane [Tue, 9 Apr 2019 19:47:26 +0000 (15:47 -0400)]
Prevent inlining of multiply-referenced CTEs with outer recursive refs.
This has to be prevented because inlining would result in multiple
self-references, which we don't support (and in fact that's disallowed
by the SQL spec, see statements about linearly vs. nonlinearly
recursive queries). Bug fix for commit 608b167f9.
Per report from Yaroslav Schekin (via Andrew Gierth)
Commit 25ee70511ec2 introduced a memory leak in pgbench: some PGresult
structs were not being freed during error bailout, because we're now
doing more PQgetResult() calls than previously. Since there's more
cleanup code outside the discard_response() routine than in it, refactor
the cleanup code, removing the routine.
This has little effect currently, since we abandon processing after
hitting errors, but if we ever get further pgbench features (such as
testing for serializable transactions), it'll matter.
Tom Lane [Tue, 9 Apr 2019 15:42:53 +0000 (11:42 -0400)]
Test some more cases with partitioned tables in EvalPlanQual.
We weren't testing anything involving EPQ on UPDATEs that move tuples
into different partitions. Depending on the implementation,
it might be that these cases aren't actually very interesting ...
but given our thin coverage of EPQ in general, I think it's a good
idea to have a test case.
Avoid "could not reattach" by providing space for concurrent allocation.
We've long had reports of intermittent "could not reattach to shared
memory" errors on Windows. Buildfarm member dory fails that way when
PGSharedMemoryReAttach() execution overlaps with creation of a thread
for the process's "default thread pool". Fix that by providing a second
region to receive asynchronous allocations that would otherwise intrude
into UsedShmemSegAddr. In pgwin32_ReserveSharedMemoryRegion(), stop
trying to free reservations landing at incorrect addresses; the caller's
next step has been to terminate the affected process. Back-patch to 9.4
(all supported versions).
Tom Lane [Mon, 8 Apr 2019 20:09:06 +0000 (16:09 -0400)]
Fix improper interaction of FULL JOINs with lateral references.
join_is_legal() needs to reject forming certain outer joins in cases
where that would lead the planner down a blind alley. However, it
mistakenly supposed that the way to handle full joins was to treat them
as applying the same constraints as for left joins, only to both sides.
That doesn't work, as shown in bug #15741 from Anthony Skorski: given
a lateral reference out of a join that's fully enclosed by a full join,
the code would fail to believe that any join ordering is legal, resulting
in errors like "failed to build any N-way joins".
However, we don't really need to consider full joins at all for this
purpose, because we effectively force them to be evaluated in syntactic
order, and that order is always legal for lateral references. Hence,
get rid of this broken logic for full joins and just ignore them instead.
This seems to have been an oversight in commit 7e19db0c0.
Back-patch to all supported branches, as that was.
Tom Lane [Mon, 8 Apr 2019 16:20:22 +0000 (12:20 -0400)]
Fix EvalPlanQualStart to handle partitioned result rels correctly.
The es_root_result_relations array needs to be shallow-copied in the
same way as the main es_result_relations array, else EPQ rechecks on
partitioned result relations fail, as seen in bug #15677 from
Norbert Benkocs.
doc: Add note about generated columns in foreign tables
Explain that it is not enforced that querying a generated column
returns data consistent with the data that was stored. This is
similar to the nearby note about constraints.
Reported-by: Amit Langote <amitlangote09@gmail.com>
vacuum_truncate controls whether vacuum tries to truncate off
any empty pages at the end of the table. Previously vacuum always
tried to do the truncation. However, the truncation could cause
some problems; for example, an ACCESS EXCLUSIVE lock needs to
be taken on the table during the truncation, which can cause
query cancellations on the standby even if hot_standby_feedback
is true. Setting this reloption to false can be helpful to avoid
such problems.
Author: Tsunakawa Takayuki Reviewed-By: Julien Rouhaud, Masahiko Sawada, Michael Paquier, Kirk Jamison and Fujii Masao
Discussion: https://postgr.es/m/CAHGQGwE5UqFqSq1=kV3QtTUtXphTdyHA-8rAj4A=Y+e4kyp3BQ@mail.gmail.com
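For example (hypothetical table name):

    ALTER TABLE measurements SET (vacuum_truncate = false);
    ALTER TABLE measurements RESET (vacuum_truncate);  -- back to the default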
Andres Freund [Mon, 8 Apr 2019 05:42:42 +0000 (22:42 -0700)]
Reset memory context once per tuple in validateForeignKeyConstraint.
When using tableam ExecFetchSlotHeapTuple() might return a separately
allocated tuple. We could use the shouldFree argument to explicitly
free it, but it seems more robust to protect the relevant code with a
memory context that is reset once per tuple.
Also add a CHECK_FOR_INTERRUPTS() after each tuple. It's likely that
each AM has (heap does) a CFI somewhere in the relevant path, but it
seems more robust to have one in validateForeignKeyConstraint()
itself.
Note that this only affects the cases that couldn't be optimized to be
verified with a query.
Author: Andres Freund Reviewed-By: Tom Lane (in an earlier version)
Discussion:
https://postgr.es/m/19030.1554574075@sss.pgh.pa.us
https://postgr.es/m/CAKJS1f_SHKcPYMsi39An5aUjhAcEMZb6Cx1Sj1QWEWSiKJkBVQ@mail.gmail.com
https://postgr.es/m/20180711185628.mrvl46bjgk2uxoki@alap3.anarazel.de
Andres Freund [Mon, 8 Apr 2019 05:14:47 +0000 (22:14 -0700)]
Fix a number of issues around modifying a previously updated row.
This commit fixes three, unfortunately related, issues:
1) Since 5db6df0c01, the introduction of DML via tableam, it was
possible to trigger "ERROR: unexpected table_lock_tuple status: 1"
when updating a row that was previously updated in the same
transaction - but only when the row had previously been updated in a
concurrent transaction (and READ COMMITTED was used). The reason was
that that case simply wasn't expected. Fixing that led to:
2) Even before the above commit, there were error checks (introduced
in 6868ed7491b7) preventing a row being updated by different
commands within the same statement (say in a function called by an
UPDATE) - but that check wasn't performed when the row was first
updated in a concurrent transaction - instead the second update was
silently skipped in that case. After this change we throw the same
error as we would without the concurrent transaction.
3) The error checks (introduced in 6868ed7491b7) preventing such
updates emitted the same error message for both DELETE and
UPDATE ("tuple to be updated was already modified by an operation
triggered by the current command"). While that could be changed
separately, it made it hard to write tests that verify the correct
behavior of the code.
This commit changes heap's implementation of table_lock_tuple() to
return TM_SelfModified instead of TM_Invisible (previously loosely
modeled after EvalPlanQualFetch), and teaches nodeModifyTable.c to
handle that in response to table_lock_tuple() and not just in response
to table_(delete|update).
Additionally it fixes the wrong error message (see 3 above). The
comment for table_lock_tuple() is also adjusted to state that
TM_Deleted won't return information in TM_FailureData - it'll not
always be available.
This also adds tests to ensure that DELETE/UPDATE correctly error out
when affecting a row that concurrently was modified by another
transaction.
Author: Andres Freund Reported-By: Tom Lane, when investigating a bug fix to another bug
by Amit Langote
Discussion: https://postgr.es/m/19321.1554567786@sss.pgh.pa.us
Michael Paquier [Mon, 8 Apr 2019 04:44:55 +0000 (13:44 +0900)]
Add more tests for partition tuple routing with dropped attributes
As bug #15733 has proved, we were lacking coverage for partition tuple
routing with dropped attributes when three or more levels of
partitioning are involved. The bug was only active in v11, and HEAD
proves to handle those scenarios fine, but it still lacked coverage
for the previous problem.
Author: Amit Langote, Michael Paquier
Discussion: https://postgr.es/m/15733-7692379e310b80ec@postgresql.org
Tom Lane [Sun, 7 Apr 2019 22:18:58 +0000 (18:18 -0400)]
Avoid fetching past the end of the indoption array.
pg_get_indexdef_worker carelessly fetched indoption entries even for
non-key index columns that don't have one. 99.999% of the time this
would be harmless, since the code wouldn't examine the value ... but
some fine day this will be a fetch off the end of memory, resulting
in SIGSEGV.
Detected through valgrind testing. Odd that the buildfarm's valgrind
critters haven't noticed.
The new command lists partitioned relations (tables and/or indexes),
possibly with their sizes, possibly including partitioned partitions;
their parents (if not top-level); if indexes, the tables they belong
to; and their descriptions.
While there are various possible improvements to this, having it in this
form is already a great improvement over not having any way to obtain
this report.
Author: Pavel Stěhule, with help from Mathias Brossard, Amit Langote and
Justin Pryzby. Reviewed-by: Amit Langote, Mathias Brossard, Melanie Plageman,
Michaël Paquier, Álvaro Herrera
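A sketch of the new \dP family in psql (output depends on the
installed schema):

    \dP     -- list partitioned tables and indexes
    \dP+    -- same, with sizes
    \dPt    -- partitioned tables only
    \dPi    -- partitioned indexes only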
Before those commits, partitioning-related code in the executor could
assume that ModifyTableState.resultRelInfo[] contains only leaf partitions.
However, now a fully-pruned update results in a dummy ModifyTable that
references the root partitioned table, and that breaks some stuff.
In v11, this led to an assertion or core dump in the tuple routing code.
Fix by disabling tuple routing, since we don't need that anyway.
(I chose to do that in HEAD as well for safety, even though the problem
doesn't manifest in HEAD as it stands.)
In v10, this confused ExecInitModifyTable's decision about whether it
needed to close the root table. But we can get rid of that altogether
by being smarter about where to find the root table.
Note that since the referenced commits haven't shipped yet, this
isn't fixing any bug observed in the field.
This uses the same infrastructure that the CREATE INDEX progress
reporting uses. Add a column to pg_stat_progress_create_index to
report the OID of the index being worked on. This was not necessary
for CREATE INDEX, but it's useful for REINDEX.
Also edit the phase descriptions a bit to be more consistent with the
source code comments.
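A sketch of monitoring a running REINDEX from another session (column
names as in the view):

    SELECT pid, relid::regclass AS tbl, index_relid::regclass AS idx, phase
    FROM pg_stat_progress_create_index;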
Tom Lane [Sat, 6 Apr 2019 21:54:29 +0000 (17:54 -0400)]
Avoid Python memory leaks in hstore_plpython and jsonb_plpython.
Fix some places where we might fail to do Py_DECREF() on a Python
object (thereby leaking it for the rest of the session). Almost
all of the risks were in error-recovery paths, which we don't really
expect to hit anyway. Hence, while this is definitely a bug fix,
it doesn't quite seem worth back-patching.
Tom Lane [Sat, 6 Apr 2019 19:09:09 +0000 (15:09 -0400)]
Fix failures in validateForeignKeyConstraint's slow path.
The foreign-key-checking loop in ATRewriteTables failed to ignore
relations without storage (e.g., partitioned tables), unlike the
initial loop. This accidentally worked as long as RI_Initial_Check
succeeded, which it does in most practical cases (including all the
ones exercised in the existing regression tests :-(). However, if
that failed, as for instance when there are permissions issues,
then we entered the slow fire-the-trigger-on-each-tuple path.
And that would try to read from the referencing relation, and fail
if it lacks storage.
A second problem, recently introduced in HEAD, was that this loop
had been broken by sloppy refactoring for the tableam API changes.
Repair both issues, and add a regression test case so we have some
coverage on this code path. Back-patch as needed to v11.
(It looks like this code could do with additional bulletproofing,
but let's get a working test case in place first.)
Michael Paquier [Sat, 6 Apr 2019 06:23:37 +0000 (15:23 +0900)]
Add support for TCP user timeout in libpq and the backend server
Similarly to the set of parameters for keepalive, a connection parameter
for libpq is added as well as a backend GUC, called tcp_user_timeout.
Increasing the TCP user timeout is useful to allow a connection to
survive extended periods without end-to-end connectivity, and
decreasing it allows applications to fail faster. By default, the
parameter is 0, which makes the connection use the system default; its
handling follows a logic close to that of the keepalive parameters.
When connecting through a Unix-domain socket, the parameters have no
effect.
Author: Ryohei Nagaura Reviewed-by: Fabien Coelho, Robert Haas, Kyotaro Horiguchi, Kirk
Jamison, Mikalai Keida, Takayuki Tsunakawa, Andrei Yahorau
Discussion: https://postgr.es/m/EDA4195584F5064680D8130B1CA91C45367328@G01JPEXMBYT04
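For example (values are illustrative; the setting is in milliseconds):

    ALTER SYSTEM SET tcp_user_timeout = '10s';
    SELECT pg_reload_conf();
    -- or per connection, via the new libpq parameter:
    --   psql "host=db.example.com tcp_user_timeout=10000"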
Tom Lane [Fri, 5 Apr 2019 23:20:30 +0000 (19:20 -0400)]
Use Append rather than MergeAppend for scanning ordered partitions.
If we need ordered output from a scan of a partitioned table, but
the ordering matches the partition ordering, then we don't need to
use a MergeAppend to combine the pre-ordered per-partition scan
results: a plain Append will produce the same results. This
both saves useless comparison work inside the MergeAppend proper,
and allows us to start returning tuples after starting up just
the first child node, not all of them.
However, all is not peaches and cream, because if some of the
child nodes have high startup costs then there will be big
discontinuities in the tuples-returned-versus-elapsed-time curve.
The planner's cost model cannot handle that (yet, anyway).
If we model the Append's startup cost as being just the first
child's startup cost, we may drastically underestimate the cost
of fetching slightly more tuples than are available from the first
child. Since we've had bad experiences with over-optimistic choices
of "fast start" plans for ORDER BY LIMIT queries, that seems scary.
As a klugy workaround, set the startup cost estimate for an ordered
Append to be the sum of its children's startup costs (as MergeAppend
would). This doesn't really describe reality, but it's less likely
to cause a bad plan choice than an underestimated startup cost would.
In practice, the cases where we really care about this optimization
will have child plans that are IndexScans with zero startup cost,
so that the overly conservative estimate is still just zero.
David Rowley, reviewed by Julien Rouhaud and Antonin Houska
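A sketch of a query shape that can now use an ordered Append
(hypothetical table; the chosen plan also depends on statistics):

    CREATE TABLE events (ts timestamptz) PARTITION BY RANGE (ts);
    CREATE TABLE events_q1 PARTITION OF events
        FOR VALUES FROM ('2019-01-01') TO ('2019-04-01');
    CREATE TABLE events_q2 PARTITION OF events
        FOR VALUES FROM ('2019-04-01') TO ('2019-07-01');
    CREATE INDEX ON events (ts);
    EXPLAIN (COSTS OFF) SELECT * FROM events ORDER BY ts;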
This allows the user to create duplicates of existing replication slots,
either logical or physical, and even changing properties such as whether
they are temporary or the output plugin used.
There are multiple uses for this, such as initializing multiple replicas
using the slot for one base backup; investigating logical replication
issues; and selecting a different output plugin.
Author: Masahiko Sawada Reviewed-by: Michael Paquier, Andres Freund, Petr Jelinek
Discussion: https://postgr.es/m/CAD21AoAm7XX8y_tOPP6j4Nzzch12FvA1wPqiO690RCk+uYVstg@mail.gmail.com
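For example (slot names are hypothetical; the optional third and
fourth arguments override temporariness and plugin):

    SELECT pg_copy_physical_replication_slot('node_a_slot', 'node_b_slot');
    SELECT pg_copy_logical_replication_slot('src_slot', 'dst_slot', true, 'test_decoding');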
Thomas Munro [Fri, 5 Apr 2019 20:31:48 +0000 (09:31 +1300)]
Wake up interested backends when a checkpoint fails.
Commit c6c9474a switched to condition variables instead of sleep
loops to notify backends of checkpoint start and stop, but forgot
to broadcast in case of checkpoint failure.
Author: Thomas Munro
Discussion: https://postgr.es/m/CA%2BhUKGJKbCd%2B_K%2BSEBsbHxVT60SG0ivWHHAdvL0bLTUt2xpA2w%40mail.gmail.com
Tom Lane [Fri, 5 Apr 2019 16:59:46 +0000 (12:59 -0400)]
Ensure consistent name matching behavior in processSQLNamePattern().
Prior to v12, if you used a collation-sensitive regex feature in a
pattern handled by processSQLNamePattern() (for instance, \d '\\w+'
in psql), the behavior you got matched the database's default collation.
Since commit 586b98fdf you'd usually get C-collation behavior, because
the catalog "name"-type columns are now marked as COLLATE "C". Add
explicit COLLATE specifications to restore the prior behavior.
(Note for whoever writes the v12 release notes: the need for this shows
that while 586b98fdf preserved pre-v12 behavior of "name" columns for
simple comparison operators, it changed the behavior of regex operators
on those columns. Although this patch fixes it for pattern matches
generated by our own tools, user-written queries will still be affected.
So we'd better mention this issue as a compatibility item.)
Doc: Update documentation on partitioning vs. foreign tables.
The limitation that it is not allowed to create or attach a foreign
table as a partition of an indexed partitioned table was not documented.
Reported-By: Stepan Yankevych
Author: Etsuro Fujita Reviewed-By: Amit Langote
Backpatch-through: 11 where partitioned index was introduced
Discussion: https://postgr.es/m/1553869152.858391073.5f8m3n0x@frv53.fwdcdn.com
Thomas Munro [Fri, 5 Apr 2019 03:39:47 +0000 (16:39 +1300)]
Fix bugs in mdsyncfiletag().
Commit 3eb77eba moved a _mdfd_getseg() call from mdsync() into a new
callback function mdsyncfiletag(), but didn't get the arguments quite
right. Without the EXTENSION_DONT_CHECK_SIZE flag we fail to open a
segment if lower-numbered segments have been truncated, and it wants
a block number rather than a segment number.
While comparing with the older coding, also remove an unnecessary
clobbering of errno, and adjust the code in mdunlinkfiletag() to
resemble the original code from mdpostckpt() more closely instead
of using an unnecessary call to smgropen().
Author: Thomas Munro
Discussion: https://postgr.es/m/CA%2BhUKGL%2BYLUOA0eYiBXBfwW%2BbH5kFgh94%3DgQH0jHEJ-t5Y91wQ%40mail.gmail.com
Stephen Frost [Fri, 5 Apr 2019 02:52:42 +0000 (22:52 -0400)]
Handle errors during GSSAPI startup better
There was some confusion over the format of the error message returned
from the server during GSSAPI startup; specifically, it was expected
that a length would be returned when, in reality, at this early stage in
the startup sequence, no length is returned from the server as part of
an error message.
Correct the client-side code for dealing with error messages sent by the
server during startup by simply reading what's available into our
buffer, after we've discovered it's an error message, and then reporting
back what was returned.
In passing, also add in documentation of the environment variable
PGGSSENCMODE which was missed previously, and adjust the code to look
for the PGGSSENCMODE variable (the environment variable change was
missed in the prior GSSMODE -> GSSENCMODE commit).
Error-handling issue discovered by Peter Eisentraut, the rest were items
discovered during testing of the error handling.
Michael Paquier [Fri, 5 Apr 2019 01:37:59 +0000 (10:37 +0900)]
Fix some documentation in pg_rewind
Since 11, it is possible to use a non-superuser role when using an
online source cluster with pg_rewind, as long as the role has proper
permissions to execute on the source all the functions used by
pg_rewind; yet the documentation stated that a superuser is necessary.
Let's add at the same time all the details needed to create such a
role.
A second point of confusion common among users is that it is necessary
to issue a checkpoint on a freshly promoted standby so that its control
file has the up-to-date timeline information used by pg_rewind to
validate the operation. Let's document that properly. This is
back-patched down to 9.5, where pg_rewind was introduced.
Author: Michael Paquier Reviewed-by: Magnus Hagander
Discussion: https://postgr.es/m/CABUevEz5bpvbwVsYCaSMV80CBZ5-82nkMzbb+Bu=h1m=rLdn=g@mail.gmail.com
Backpatch-through: 9.5
Andres Freund [Fri, 5 Apr 2019 00:17:50 +0000 (17:17 -0700)]
Harden tableam against nonexistent / wrong kind of AMs.
Previously it was allowed to set default_table_access_method to an
empty string. That makes sense for default_tablespace, from which this
was copied, as there it signals falling back to the database's default
tablespace. As there is no equivalent for table AMs, forbid that.
Also make sure to throw a usable error when creating a table using an
index AM, by using get_am_type_oid() to implement get_table_am_oid()
instead of a separate copy. Previously we'd error out only later, in
GetTableAmRoutine().
Thirdly remove GetTableAmRoutineByAmId() - it was only used in an
earlier version of 8586bf7ed8.
Add tests for the above (some for index AMs as well).
Peter Geoghegan [Fri, 5 Apr 2019 00:25:35 +0000 (17:25 -0700)]
Add test coverage for rootdescend verification.
Commit c1afd175, which added support for rootdescend verification to
amcheck, added only minimal regression test coverage. Address this by
making sure that rootdescend verification is run on a multi-level index.
In passing, simplify some of the regression tests that exercise
multi-level nbtree page deletion.
Both issues spotted while rereviewing coverage of the nbtree patch
series using gcov.
Andres Freund [Thu, 4 Apr 2019 22:47:19 +0000 (15:47 -0700)]
tableam: Add table_multi_insert() and revamp/speed-up COPY FROM buffering.
This adds table_multi_insert(), and converts COPY FROM, the only user
of heap_multi_insert, to it.
A simple conversion of COPY FROM to use slots would have yielded a
slowdown when inserting into a partitioned table for some
workloads. Different partitions might need different slots (both slot
types and their descriptors), and dropping / creating slots when
there are constant partition changes is measurable.
Thus instead revamp the COPY FROM buffering for partitioned tables to
allow buffering inserts into multiple tables, flushing only when
limits are reached across all partition buffers. By only dropping
slots when there have been inserts into too many different partitions,
the aforementioned overhead is gone. By allowing larger batches, even
when there are frequent partition changes, we actually speed such
cases up significantly.
By using slots, COPY of very narrow rows into unlogged / temporary
tables might slow down very slightly (due to the indirect function calls).
Author: David Rowley, Andres Freund, Haribabu Kommi
Discussion:
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
https://postgr.es/m/20190327054923.t3epfuewxfqdt22e@alap3.anarazel.de
Tom Lane [Thu, 4 Apr 2019 21:22:02 +0000 (17:22 -0400)]
Add a "SQLSTATE-only" error verbosity option to libpq and psql.
This is intended for use mostly in test scripts for external tools,
which could do without cross-PG-version variations in error message
wording. Of course, the SQLSTATE isn't guaranteed stable either, but
it should be more so than the error message text.
Note: there's a bit of an ABI change for libpq here, but it seems
OK because if somebody compiles against a newer version of libpq-fe.h,
and then tries to pass PQERRORS_SQLSTATE to PQsetErrorVerbosity()
of an older libpq library, it will be accepted and then act like
PQERRORS_DEFAULT, thanks to the way the tests in pqBuildErrorMessage3
have historically been phrased. That seems acceptable.
Didier Gautheron, reviewed by Dagfinn Ilmari Mannsåker
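In psql the new verbosity is selected like this (sketch; 22012 is
division_by_zero):

    \set VERBOSITY sqlstate
    SELECT 1/0;
    ERROR:  22012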
The previous convention of selecting stdout by default when nothing
was specified was just too error-prone.
After a suggestion from Andrew Gierth.
Author: Euler Taveira Reviewed-by: Yoshikazu Imai, José Arthur Benetasso Villanova
Discussion: https://postgr.es/m/87sgwrmhdv.fsf@news-spur.riddles.org.uk
Tom Lane [Thu, 4 Apr 2019 19:12:51 +0000 (15:12 -0400)]
Make queries' locking of indexes more consistent.
The assertions added by commit b04aeb0a0 exposed that there are some
code paths wherein the executor will try to open an index without
holding any lock on it. We do have some lock on the index's table,
so it seems likely that there's no fatal problem with this (for
instance, the index couldn't get dropped from under us). Still,
it's bad practice and we should fix it.
To do so, remove the optimizations in ExecInitIndexScan and friends
that tried to avoid taking a lock on an index belonging to a target
relation, and just take the lock always. In non-bug cases, this
will result in no additional shared-memory access, since we'll find
in the local lock table that we already have a lock of the desired
type; hence, no significant performance degradation should occur.
Also, adjust the planner and executor so that the type of lock taken
on an index is always identical to the type of lock taken for its table,
by relying on the recently added RangeTblEntry.rellockmode field.
This avoids some corner cases where that might not have been true
before (possibly resulting in extra locking overhead), and prevents
future maintenance issues from having multiple bits of logic that
all needed to be in sync. In addition, this change removes all core
calls to ExecRelationIsTargetRelation, which avoids a possible O(N^2)
startup penalty for queries with large numbers of target relations.
(We'd probably remove that function altogether, were it not that we
advertise it as something that FDWs might want to use.)
Also adjust some places in selfuncs.c to not take any lock on indexes
they are transiently opening, since we can assume that plancat.c
did that already.
In passing, change gin_clean_pending_list() to take RowExclusiveLock
not AccessShareLock on its target index. Although it's not clear that
that's actually a bug, it seemed very strange for a function that's
explicitly going to modify the index to use only AccessShareLock.
David Rowley, reviewed by Julien Rouhaud and Amit Langote,
a bit of further tweaking by me
Robert Haas [Thu, 4 Apr 2019 18:58:53 +0000 (14:58 -0400)]
Allow VACUUM to be run with index cleanup disabled.
This commit adds a new reloption, vacuum_index_cleanup, which
controls whether index cleanup is performed for a particular
relation by default. It also adds a new option to the VACUUM
command, INDEX_CLEANUP, which can be used to override the
reloption. If neither the reloption nor the VACUUM option is
used, the default is true, as before.
Masahiko Sawada, reviewed and tested by Nathan Bossart, Alvaro
Herrera, Kyotaro Horiguchi, Darafei Praliaskouski, and me.
The wording of the documentation is mostly due to me.
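For example (hypothetical table name):

    VACUUM (INDEX_CLEANUP FALSE) big_table;                    -- one-off override
    ALTER TABLE big_table SET (vacuum_index_cleanup = false);  -- per-relation default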
Peter Geoghegan [Thu, 4 Apr 2019 16:38:08 +0000 (09:38 -0700)]
Invalidate binary search bounds consistently.
_bt_check_unique() failed to invalidate binary search bounds in the
event of a live conflict following commit e5adcb78. This resulted in
problems after waiting for the conflicting xact to commit or abort. The
subsequent call to _bt_check_unique() would restore the initial binary
search bounds, rather than starting a new search. Fix by explicitly
invalidating bounds when it becomes clear that there is a live conflict
that insertion will have to wait to resolve.
Ashutosh Sharma, with a few additional tweaks by me.
Stephen Frost [Thu, 4 Apr 2019 15:11:46 +0000 (11:11 -0400)]
Move the be_gssapi_get_* prototypes
The be_gssapi_get_* prototypes were put close to similar ones for SSL,
but a bit too close: they ended up being included only in SSL-enabled
builds. Move them to be under ENABLE_GSS instead.
Thomas Munro [Thu, 4 Apr 2019 08:56:03 +0000 (21:56 +1300)]
Refactor the fsync queue for wider use.
Previously, md.c and checkpointer.c were tightly integrated so that
fsync calls could be handed off and processed in the background.
Introduce a system of callbacks and file tags, so that other modules
can hand off fsync work in the same way.
For now only md.c uses the new interface, but other users are being
proposed. Since there may be use cases that are not strictly SMGR
implementations, use a new function table for sync handlers rather
than extending the traditional SMGR one.
Instead of using a bitmapset of segment numbers for each RelFileNode
in the checkpointer's hash table, make the segment number part of the
key. This requires sending explicit "forget" requests for every
segment individually when relations are dropped, but suits the file
layout schemes of proposed future users better (ie sparse or high
segment numbers).
Author: Shawn Debnath and Thomas Munro Reviewed-by: Thomas Munro, Andres Freund
Discussion: https://postgr.es/m/CAEepm=2gTANm=e3ARnJT=n0h8hf88wqmaZxk0JYkxw+b21fNrw@mail.gmail.com
file_fdw uses COPY internally, but COPY doesn't allow listing
generated columns in its column list, so we need to make sure that we
don't add generated columns to the column lists file_fdw generates
internally.
Make src/test/recovery/t/017_shm.pl safe for concurrent execution.
Buildfarm members idiacanthus and komodoensis, which share a host, both
executed this test in the same second. That failed. Back-patch to 9.6,
where the test first appeared.
Michael Paquier [Thu, 4 Apr 2019 01:24:56 +0000 (10:24 +0900)]
Improve readability of some tests in strings.sql
c251336 has added some tests to check if a toast relation should be
empty or not, hardcoding the toast relation name when calling
pg_relation_size(). pg_class.reltoastrelid offers the same information,
so simplify the tests to use that.
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20190403065949.GH3298@paquier.xyz
Andres Freund [Thu, 4 Apr 2019 00:37:00 +0000 (17:37 -0700)]
tableam: basic documentation.
This adds documentation about the user-oriented parts of table access
methods (i.e. the default_table_access_method GUC and the USING clause
for CREATE TABLE etc), adds a basic chapter about the table access
method interface, and adds a note to storage.sgml that its contents
don't necessarily apply to non-builtin AMs.
Author: Haribabu Kommi and Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
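A minimal illustration of the user-visible pieces:

    CREATE TABLE t (a int) USING heap;
    SET default_table_access_method = 'heap';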
Assert that pgwin32_signal_initialize() has been called early enough.
Before the pgwin32_signal_initialize() call, the backend version of
pg_usleep() has no effect. No in-tree code falls afoul of that today,
but temporary commit 23078689a9921968ac0873b017be6e7f772f10bc did so.
Handle USE_MODULE_DB for all tests able to use an installed postmaster.
When $(MODULES) and $(MODULE_big) are empty, derive the database name
from the first element of $(REGRESS) instead of using a constant string.
When deriving the database name from $(MODULES), use its first element
instead of the entire list; the earlier approach would fail if any
multi-module directory had $(REGRESS) tests. Treat isolation suites and
src/pl correspondingly. Under USE_MODULE_DB=1, installcheck-world and
check-world no longer reuse any database name in a given postmaster.
Buildfarm members axolotl, mandrill and frogfish saw spurious "is being
accessed by other users" failures that would not have happened without
database name reuse. (The CountOtherDBBackends() 5s deadline expired
during DROP DATABASE; a backend for an earlier test suite had used the
same database name and had not yet exited.) Back-patch to 9.4 (all
supported versions), except bits pertaining to isolation suites.
Concept reviewed by Andrew Dunstan, Andres Freund and Tom Lane.
postmaster startup scrutinizes any shared memory segment recorded in
postmaster.pid, exiting if that segment matches the current data
directory and has an attached process. When the postmaster.pid file was
missing, a starting postmaster used weaker checks. Change to use the
same checks in both scenarios. This increases the chance of a startup
failure, in lieu of data corruption, if the DBA does "kill -9 `head -n1
postmaster.pid` && rm postmaster.pid && pg_ctl -w start". A postmaster
will no longer recycle segments pertaining to other data directories.
That's good for production, but it's bad for integration tests that
crash a postmaster and immediately delete its data directory. Such a
test now leaks a segment indefinitely. No "make check-world" test does
that. win32_shmem.c already avoided all these problems. In 9.6 and
later, enhance PostgresNode to facilitate testing. Back-patch to 9.4
(all supported versions).
Reviewed by Daniel Gustafsson and Kyotaro HORIGUCHI.
Tomas Vondra [Wed, 3 Apr 2019 22:04:31 +0000 (00:04 +0200)]
Add SETTINGS option to EXPLAIN, to print modified settings.
Query planning is affected by a number of configuration options, and it
may be crucial to know which of those options were set to non-default
values. With this patch you can say EXPLAIN (SETTINGS ON) to include
that information in the query plan. Only options affecting planning,
with values different from the built-in default, are printed.
This patch also adds auto_explain.log_settings option, providing the
same capability in auto_explain module.
Author: Tomas Vondra Reviewed-by: Rafia Sabih, John Naylor
Discussion: https://postgr.es/m/e1791b4c-df9c-be02-edc5-7c8874944be0@2ndquadrant.com
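For example (the exact formatting of the Settings line may vary):

    SET enable_seqscan = off;
    EXPLAIN (SETTINGS ON) SELECT 1;
    -- the plan output ends with a line like:
    --   Settings: enable_seqscan = 'off'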
Author: Justin Pryzby, partly after a suggestion from Masahiko Sawada
Discussion: https://postgr.es/m/20190328135918.GA27808@telsasoft.com
Discussion: https://postgr.es/m/CAD21AoB9+y8N4+Fan-ne-_7J5yTybPttxeVKfwUocKp4zT1vNQ@mail.gmail.com
Tomas Vondra [Wed, 3 Apr 2019 19:08:36 +0000 (21:08 +0200)]
Reduce overhead of pg_mcv_list (de)serialization
Commit ea4e1c0e8f resolved issues with memory alignment in serialized
pg_mcv_list values, but it required copying data to/from the varlena
buffer during serialization and deserialization. As the MCV lists may
be fairly large, the overhead (memory consumption, CPU usage) can get
rather significant too.
This change tweaks the serialization format so that the alignment is
correct with respect to the varlena value, and so the parts may be
accessed directly without copying the data.
Catversion bump, as it affects existing pg_statistic_ext data.
Stephen Frost [Wed, 3 Apr 2019 19:02:33 +0000 (15:02 -0400)]
GSSAPI encryption support
On both the frontend and backend, prepare for GSSAPI encryption
support by moving common code for error handling into a separate file.
Fix a TODO for handling multiple status messages in the process.
Eliminate the OIDs, which have not been needed for some time.
Add frontend and backend encryption support functions. Keep the
context initiation for authentication-only separate on both the
frontend and backend in order to avoid concerns about changing the
requested flags to include encryption support.
In postmaster, pull GSSAPI authorization checking into a shared
function. Also share the initiator name between the encryption and
non-encryption codepaths.
For HBA, add "hostgssenc" and "hostnogssenc" entries that behave
similarly to their SSL counterparts. "hostgssenc" requires either
"gss", "trust", or "reject" for its authentication.
Similarly, add a "gssencmode" parameter to libpq. Supported values are
"disable", "require", and "prefer". Notably, negotiation will only be
attempted if credentials can be acquired. Move credential acquisition
into its own function to support this behavior.
Add a simple pg_stat_gssapi view similar to pg_stat_ssl, for monitoring
if GSSAPI authentication was used, what principal was used, and if
encryption is being used on the connection.
Finally, add documentation for everything new, and update existing
documentation on connection security.
Thanks to Michael Paquier for the Windows fixes.
Author: Robbie Harwood, with changes to the read/write functions by me.
Reviewed in various forms and at different times by: Michael Paquier,
Andres Freund, David Steele.
Discussion: https://www.postgresql.org/message-id/flat/jlg1tgq1ktm.fsf@thriss.redhat.com
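For example, checking one's own connection against the new view:

    SELECT gss_authenticated, principal, encrypted
    FROM pg_stat_gssapi
    WHERE pid = pg_backend_pid();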
Support foreign keys that reference partitioned tables
Previously, while primary keys could be made on partitioned tables, it
was not possible to define foreign keys that reference those primary
keys. Now it is possible to do that.
Author: Álvaro Herrera Reviewed-by: Amit Langote, Jesper Pedersen
Discussion: https://postgr.es/m/20181102234158.735b3fevta63msbj@alvherre.pgsql
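For example (hypothetical tables):

    CREATE TABLE customers (id int PRIMARY KEY) PARTITION BY RANGE (id);
    CREATE TABLE customers_low PARTITION OF customers
        FOR VALUES FROM (1) TO (1000000);
    -- now allowed:
    CREATE TABLE orders (customer_id int REFERENCES customers (id));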
Generate less WAL during GiST, GIN and SP-GiST index build.
Instead of WAL-logging every modification during the build separately,
first build the index without any WAL-logging, and make a separate pass
through the index at the end, to write all pages to the WAL. This
significantly reduces the amount of WAL generated, and is usually also
faster, despite the extra I/O needed for the extra scan through the index.
WAL generated this way is also faster to replay.
For GiST, the LSN-NSN interlock makes this a little tricky. All pages must
be marked with a valid (i.e. non-zero) LSN, so that the parent-child
LSN-NSN interlock works correctly. We now use magic value 1 for that during
index build. Change the fake LSN counter to begin from 1000, so that 1 is
safely smaller than any real or fake LSN. 2 would've been enough for our
purposes, but let's reserve a bigger range, in case we need more special
values in the future.
Author: Anastasia Lubennikova, Andrey V. Lepikhov Reviewed-by: Heikki Linnakangas, Dmitry Dolgov
Valgrind was rightly complaining that IndexVacuumInfo->report_progress
(added by commit ab0dfc961b6a) was not being initialized in some code
paths. Repair.
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Stephen Frost [Tue, 2 Apr 2019 16:35:32 +0000 (12:35 -0400)]
Add support for partial TOAST decompression
When asked for a slice of a TOAST entry, decompress enough to return the
slice instead of decompressing the entire object.
For use cases where the slice is at, or near, the beginning of the entry,
this avoids a lot of unnecessary decompression work.
This changes the signature of pglz_decompress() by adding a boolean to
indicate if it's ok for the call to finish before consuming all of the
source or destination buffers.
Author: Paul Ramsey Reviewed-By: Rafia Sabih, Darafei Praliaskouski, Regina Obe
Discussion: https://postgr.es/m/CACowWR07EDm7Y4m2kbhN_jnys%3DBBf9A6768RyQdKm_%3DNpkcaWg%40mail.gmail.com
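A sketch of the kind of query that benefits (hypothetical table whose
column values are large enough to be compressed):

    SELECT substr(body, 1, 100) FROM documents;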
postgres_fdw: Perform the (FINAL, NULL) upperrel operations remotely.
The upper-planner pathification allows FDWs to arrange to push down
different types of upper-stage operations to the remote side. This
commit teaches postgres_fdw to do it for the (FINAL, NULL) upperrel,
which is responsible for doing LockRows, LIMIT, and/or ModifyTable.
This provides the ability for postgres_fdw to handle SELECT commands
so that it 1) skips the LockRows step (if any) (note that this is
safe since it performs early locking) and 2) pushes down the LIMIT
and/or OFFSET restrictions (if any) to the remote side. This doesn't
handle the INSERT/UPDATE/DELETE cases.
Author: Etsuro Fujita Reviewed-By: Antonin Houska and Jeff Janes
Discussion: https://postgr.es/m/87pnz1aby9.fsf@news-spur.riddles.org.uk
postgres_fdw: Perform the (ORDERED, NULL) upperrel operations remotely.
The upper-planner pathification allows FDWs to arrange to push down
different types of upper-stage operations to the remote side. This
commit teaches postgres_fdw to do it for the (ORDERED, NULL) upperrel,
which is responsible for evaluating the query's ORDER BY ordering.
Since postgres_fdw is already able to evaluate that ordering remotely
for foreign baserels and foreign joinrels (see commit aa09cd242f et al.),
this adds support for that for foreign grouping relations.
Author: Etsuro Fujita Reviewed-By: Antonin Houska and Jeff Janes
Discussion: https://postgr.es/m/87pnz1aby9.fsf@news-spur.riddles.org.uk
Dean Rasheed [Tue, 2 Apr 2019 07:13:59 +0000 (08:13 +0100)]
Perform RLS subquery checks as the right user when going via a view.
When accessing a table with RLS via a view, the RLS checks are
performed as the view owner. However, the code neglected to propagate
that to any subqueries in the RLS checks. Fix that by calling
setRuleCheckAsUser() for all RLS policy quals and withCheckOption
checks for RTEs with RLS.
Michael Paquier [Tue, 2 Apr 2019 01:58:07 +0000 (10:58 +0900)]
Add progress reporting to pg_checksums
This adds a new option to pg_checksums called -P/--progress, which
reports once per second some information about the state of the
operation for --check and --enable (--disable only updates the control
file and is quick). This requires a pre-scan of the data folder so
that the total size of checksummable items can be calculated; this is
then compared to the amount processed.
Similarly to what is done for pg_rewind and pg_basebackup, the
information printed in the progress report consists of the current
amount of data computed and the total amount of data to compute. This
could be extended later on.
Author: Michael Banck, Bernd Helmle Reviewed-by: Fabien Coelho, Michael Paquier
Discussion: https://postgr.es/m/1535719851.1286.17.camel@credativ.de
Thomas Munro [Tue, 2 Apr 2019 01:37:14 +0000 (14:37 +1300)]
Add wal_recycle and wal_init_zero GUCs.
On at least ZFS, it can be beneficial to create new WAL files every
time and not to bother zero-filling them. Since it's not clear which
other filesystems might benefit from one or both of those things,
add individual GUCs to control those two behaviors independently and
make only very general statements in the docs.
Author: Jerry Jelinek, with some adjustments by Thomas Munro Reviewed-by: Alvaro Herrera, Andres Freund, Tomas Vondra, Robert Haas and others
Discussion: https://postgr.es/m/CACPQ5Fo00QR7LNAcd1ZjgoBi4y97%2BK760YABs0vQHH5dLdkkMA%40mail.gmail.com
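For example, on such a filesystem (both settings take effect on reload):

    ALTER SYSTEM SET wal_recycle = off;
    ALTER SYSTEM SET wal_init_zero = off;
    SELECT pg_reload_conf();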
Andres Freund [Mon, 1 Apr 2019 21:57:21 +0000 (14:57 -0700)]
Only allow heap in a number of contrib modules.
Contrib modules pgrowlocks, pgstattuple and some functionality in
pageinspect currently only support the heap table AM. As they are all
concerned with low-level details that aren't reasonably exposed via
tableam, error out if invoked on a non-heap relation.
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Andres Freund [Mon, 1 Apr 2019 21:41:42 +0000 (14:41 -0700)]
tableam: Add table_finish_bulk_insert().
This replaces the previous calls of heap_sync() in places using
bulk-insert. By passing in the flags used for bulk-insert the AM can
decide (first at insert time and then during the finish call) which of
the optimizations apply to it, and what operations are necessary to
finish a bulk insert operation.
Also change HEAP_INSERT_* flags to TABLE_INSERT, and rename hi_options
to ti_options.
These changes are made even in copy.c, which hasn't yet been converted
to tableam. There's no harm in doing so.
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Tom Lane [Mon, 1 Apr 2019 21:37:26 +0000 (17:37 -0400)]
Restrict pgbench's zipfian parameter to ensure good performance.
Remove the code that supported zipfian distribution parameters less
than 1.0, as it had undocumented performance hazards, and it's not
clear that the case is useful enough to justify either fixing or
documenting those hazards.
Also, since the code path for parameter > 1.0 could perform badly
for values very close to 1.0, establish a minimum allowed value
of 1.001. This solution seems superior to the previous vague
documentation warning about small values not performing well.
Thomas Munro [Mon, 1 Apr 2019 20:08:15 +0000 (09:08 +1300)]
Fix deadlock in heap_compute_xid_horizon_for_tuples().
We can't call code that uses syscache while we hold buffer locks
on a catalog relation. If passed such a relation, just fall back
to the general effective_io_concurrency GUC rather than trying to
look up the containing tablespace's IO concurrency setting.
We might find a better way to control prefetching in follow-up
work, but for now this is enough to avoid the deadlock introduced
by commit 558a9165e0.
Reviewed-by: Andres Freund Diagnosed-by: Peter Geoghegan
Discussion: https://postgr.es/m/CA%2BhUKGLCwPF0S4Mk7S8qw%2BDK0Bq65LueN9rofAA3HHSYikW-Zw%40mail.gmail.com
Discussion: https://postgr.es/m/962831d8-c18d-180d-75fb-8b842e3a2742%40chrullrich.net
Tom Lane [Mon, 1 Apr 2019 20:20:22 +0000 (16:20 -0400)]
Improve documentation about our XML functionality.
Add a section explaining how our XML features depart from current
versions of the SQL standard. Update and clarify the descriptions
of some XML functions.
This unifies the various ad hoc logging (message printing, error
printing) systems used throughout the command-line programs.
Features:
- Program name is automatically prefixed.
- Message string does not end with newline. This removes a common
source of inconsistencies and omissions.
- Additionally, a final newline is automatically stripped, simplifying
use of PQerrorMessage() etc., another common source of mistakes.
- I converted error message strings to use %m where possible.
- As a result of the above several points, more translatable message
strings can be shared between different components and between
frontends and backend, without gratuitous punctuation or whitespace
differences.
- There is support for setting a "log level". This is not meant to be
user-facing, but can be used internally to implement debug or
verbose modes.
- Lazy argument evaluation, so no significant overhead if logging at
some level is disabled.
- Some color in the messages, similar to gcc and clang. Set
PG_COLOR=auto to try it out. Some colors are predefined, but can be
customized by setting PG_COLORS.
- Common files (common/, fe_utils/, etc.) can handle logging much more
simply by just using one API without worrying too much about the
context of the calling program, requiring callbacks, or having to
pass "progname" around everywhere.
- Some programs called setvbuf() to make sure that stderr is
unbuffered, even on Windows. But not all programs did that. This
is now done centrally.
Soft goals:
- Reduces vertical space use and visual complexity of error reporting
in the source code.
- Encourages more deliberate classification of messages. For example,
in some cases it wasn't clear without analyzing the surrounding code
whether a message was meant as an error or just an info.
- Concepts and terms are vaguely aligned with popular logging
frameworks such as log4j and Python logging.
This is all just about printing stuff out. Nothing affects program
flow (e.g., fatal exits). The uses are just too varied to do that.
Some existing code had wrappers that do some kind of print-and-exit,
and I adapted those.
I tried to keep the output mostly the same, but there is a lot of
historical baggage to unwind and special cases to consider, and I
might not always have succeeded. One significant change is that
pg_rewind used to write all error messages to stdout. That is now
changed to stderr.
Reviewed-by: Donald Dong <xdong@csumb.edu> Reviewed-by: Arthur Zakirov <a.zakirov@postgrespro.ru>
Discussion: https://www.postgresql.org/message-id/flat/6a609b43-4f57-7348-6480-bd022f924310@2ndquadrant.com
Throw error in jsonb_path_match() when result is not single boolean
jsonb_path_match() checks whether a jsonb document matches a jsonpath
query; therefore, the jsonpath query should return a single boolean.
Currently, if the result of the jsonpath is not a single boolean, NULL
is returned regardless of whether silent mode is on or off. That
appears to be wrong when silent mode is off. This commit makes
jsonb_path_match() throw an error in this case.
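For example:

    SELECT jsonb_path_match('{"a": 1}', '$.a == 1');  -- true
    SELECT jsonb_path_match('{"a": 1}', '$.a');       -- now errors: not a single boolean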
Restrict some cases in parsing numerics in jsonpath
Jsonpath now accepts integers with leading zeroes and floats starting with
a dot. However, the SQL standard requires following the JSON
specification, which allows neither of these cases. Our json[b]
datatypes also reject them. So, restrict them in jsonpath as well.
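For example:

    SELECT '1.23'::jsonpath;  -- OK
    SELECT '.23'::jsonpath;   -- now rejected (leading dot)
    SELECT '023'::jsonpath;   -- now rejected (leading zero)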
This commit makes the existing GIN operator classes jsonb_ops and
jsonb_path_ops support the "jsonb @@ jsonpath" and "jsonb @? jsonpath"
operators. The basic idea is to extract statements of the following
form out of a jsonpath:
key1.key2. ... .keyN = const
The rest of the jsonpath is rechecked from heap.
Catversion is bumped.
Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Author: Nikita Glukhov, Alexander Korotkov Reviewed-by: Jonathan Katz, Pavel Stehule
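For example (hypothetical table with a jsonb column "data"):

    CREATE INDEX ON books USING gin (data jsonb_path_ops);
    SELECT * FROM books WHERE data @? '$.tags[*] ? (@ == "postgres")';
    SELECT * FROM books WHERE data @@ '$.price < 10';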
The syntax GENERATED BY DEFAULT AS (expr) is not allowed but we have
to accept it in the grammar to avoid shift/reduce conflicts because of
the similar syntax for identity columns. The existing code just
ignored this, incorrectly. Add an explicit error check and a bespoke
error message.
One should almost always terminate an old process, not use a manual
removal tool like ipcrm. Removal of the ipcclean script eleven years
ago (39627b1ae680cba44f6e56ca5facec4fdbfe9495) and its non-replacement
corroborate that manual shm removal is now a niche goal. Back-patch to
9.4 (all supported versions).
Reviewed by Daniel Gustafsson and Kyotaro HORIGUCHI.
Andres Freund [Mon, 1 Apr 2019 00:51:49 +0000 (17:51 -0700)]
tableam: bitmap table scan.
This moves bitmap heap scan support to below an optional tableam
callback. It's optional as the whole concept of bitmap heapscans is
fairly block specific.
This basically moves the work previously done in bitgetpage() into the
new scan_bitmap_next_block callback, and the direct poking into the
buffer done in BitmapHeapNext() into the new scan_bitmap_next_tuple()
callback.
The abstraction is currently somewhat leaky because
nodeBitmapHeapscan.c's prefetching and visibilitymap based logic
remains - it's likely that we'll later have to move more into the
AM. But it's not trivial to do so without introducing a significant
amount of code duplication between the AMs, so that's a project for
later.
Note that now nodeBitmapHeapscan.c and the associated node types are a
bit misnamed. But it's not clear whether renaming wouldn't be a cure
worse than the disease. Either way, that'd be best done in a separate
commit.
Author: Andres Freund Reviewed-By: Robert Haas (in an older version)
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Andres Freund [Sun, 31 Mar 2019 03:18:53 +0000 (20:18 -0700)]
tableam: sample scan.
This moves sample scan support to below tableam. It's not optional as
there is, in contrast to e.g. bitmap heap scans, no alternative way to
perform tablesample queries. If an AM can't deal with the block based
API, it will have to throw an ERROR.
The tableam callbacks for this are block based, but given the current
TsmRoutine interface, that seems to be required.
The new interface doesn't require TsmRoutines to perform visibility
checks anymore - that requires the TsmRoutine to know details about
the AM, which we want to avoid. To continue to allow taking the
returned number of tuples into account, SampleScanState now has a
donetuples field (which previously e.g. existed in
SystemRowsSamplerData), which is only incremented after the visibility
check succeeds.
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Peter Geoghegan [Mon, 1 Apr 2019 00:24:04 +0000 (17:24 -0700)]
Fix nbtree high key "continuescan" row compare bug.
Commit 29b64d1d mishandled skipping over truncated high key attributes
during row comparisons. The row comparison key matching loop would loop
forever when a truncated attribute was encountered for a row compare
subkey. Fix by following the example of other code in the loop: advance
the current subkey, or break out of the loop when the last subkey is
reached.
Add test coverage for the relevant _bt_check_rowcompare() code path.
The new test case is somewhat tied to nbtree implementation details,
which isn't ideal, but seems unavoidable.
Tom Lane [Sun, 31 Mar 2019 19:49:06 +0000 (15:49 -0400)]
Add test case exercising formerly-unreached code in inheritance_planner.
There was some debate about whether the code I'd added to remap
AppendRelInfos obtained from the initial SELECT planning run is
actually necessary. Add a test case demonstrating that it is.