postmaster startup scrutinizes any shared memory segment recorded in
postmaster.pid, exiting if that segment matches the current data
directory and has an attached process. When the postmaster.pid file was
missing, a starting postmaster used weaker checks. Change to use the
same checks in both scenarios. This increases the chance of a startup
failure, in lieu of data corruption, if the DBA does "kill -9 `head -n1
postmaster.pid` && rm postmaster.pid && pg_ctl -w start". A postmaster
will no longer recycle segments pertaining to other data directories.
That's good for production, but it's bad for integration tests that
crash a postmaster and immediately delete its data directory. Such a
test now leaks a segment indefinitely. No "make check-world" test does
that. win32_shmem.c already avoided all these problems. In 9.6 and
later, enhance PostgresNode to facilitate testing. Back-patch to 9.4
(all supported versions).
Reviewed by Daniel Gustafsson and Kyotaro HORIGUCHI.
Tomas Vondra [Wed, 3 Apr 2019 22:04:31 +0000 (00:04 +0200)]
Add SETTINGS option to EXPLAIN, to print modified settings.
Query planning is affected by a number of configuration options, and it
may be crucial to know which of those options were set to non-default
values. With this patch you can say EXPLAIN (SETTINGS ON) to include
that information in the query plan. Only options affecting planning,
with values different from the built-in default are printed.
This patch also adds auto_explain.log_settings option, providing the
same capability in auto_explain module.
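For example (the table name is illustrative), a session that overrides
a planner GUC might show:
SET enable_nestloop = off;
EXPLAIN (SETTINGS ON) SELECT * FROM pgbench_accounts WHERE aid = 1;
The plan output then carries a line of the form
Settings: enable_nestloop = 'off'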
Author: Tomas Vondra Reviewed-by: Rafia Sabih, John Naylor
Discussion: https://postgr.es/m/e1791b4c-df9c-be02-edc5-7c8874944be0@2ndquadrant.com
Author: Justin Pryzby, partly after a suggestion from Masahiko Sawada
Discussion: https://postgr.es/m/20190328135918.GA27808@telsasoft.com
Discussion: https://postgr.es/m/CAD21AoB9+y8N4+Fan-ne-_7J5yTybPttxeVKfwUocKp4zT1vNQ@mail.gmail.com
Tomas Vondra [Wed, 3 Apr 2019 19:08:36 +0000 (21:08 +0200)]
Reduce overhead of pg_mcv_list (de)serialization
Commit ea4e1c0e8f resolved issues with memory alignment in serialized
pg_mcv_list values, but it required copying data to/from the varlena
buffer during serialization and deserialization. As the MCV lists may
be fairly large, the overhead (memory consumption, CPU usage) can get
rather significant too.
This change tweaks the serialization format so that the alignment is
correct with respect to the varlena value, and so the parts may be
accessed directly without copying the data.
Catversion bump, as it affects existing pg_statistic_ext data.
Stephen Frost [Wed, 3 Apr 2019 19:02:33 +0000 (15:02 -0400)]
GSSAPI encryption support
On both the frontend and backend, prepare for GSSAPI encryption
support by moving common code for error handling into a separate file.
Fix a TODO for handling multiple status messages in the process.
Eliminate the OIDs, which have not been needed for some time.
Add frontend and backend encryption support functions. Keep the
context initiation for authentication-only separate on both the
frontend and backend in order to avoid concerns about changing the
requested flags to include encryption support.
In postmaster, pull GSSAPI authorization checking into a shared
function. Also share the initiator name between the encryption and
non-encryption codepaths.
For HBA, add "hostgssenc" and "hostnogssenc" entries that behave
similarly to their SSL counterparts. "hostgssenc" requires either
"gss", "trust", or "reject" for its authentication.
Similarly, add a "gssencmode" parameter to libpq. Supported values are
"disable", "require", and "prefer". Notably, negotiation will only be
attempted if credentials can be acquired. Move credential acquisition
into its own function to support this behavior.
Add a simple pg_stat_gssapi view similar to pg_stat_ssl, for monitoring
if GSSAPI authentication was used, what principal was used, and if
encryption is being used on the connection.
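As a sketch of how the pieces fit together (host names are
illustrative):
# pg_hba.conf: accept only GSSAPI-encrypted connections for this range
hostgssenc all all 0.0.0.0/0 gss
$ psql "host=db.example.com dbname=postgres gssencmode=require"
-- monitor the GSSAPI state of current connections
SELECT pid, gss_authenticated, principal, encrypted FROM pg_stat_gssapi;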
Finally, add documentation for everything new, and update existing
documentation on connection security.
Thanks to Michael Paquier for the Windows fixes.
Author: Robbie Harwood, with changes to the read/write functions by me.
Reviewed in various forms and at different times by: Michael Paquier,
Andres Freund, David Steele.
Discussion: https://www.postgresql.org/message-id/flat/jlg1tgq1ktm.fsf@thriss.redhat.com
Support foreign keys that reference partitioned tables
Previously, while primary keys could be made on partitioned tables, it
was not possible to define foreign keys that reference those primary
keys. Now it is possible to do that.
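A minimal illustration (names are hypothetical):
CREATE TABLE parted (a int PRIMARY KEY) PARTITION BY RANGE (a);
CREATE TABLE parted_1 PARTITION OF parted FOR VALUES FROM (0) TO (100);
CREATE TABLE referencing (a int REFERENCES parted);  -- now allowed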
Author: Álvaro Herrera Reviewed-by: Amit Langote, Jesper Pedersen
Discussion: https://postgr.es/m/20181102234158.735b3fevta63msbj@alvherre.pgsql
Generate less WAL during GiST, GIN and SP-GiST index build.
Instead of WAL-logging every modification during the build separately,
first build the index without any WAL-logging, and make a separate pass
through the index at the end, to write all pages to the WAL. This
significantly reduces the amount of WAL generated, and is usually also
faster, despite the extra I/O needed for the extra scan through the index.
WAL generated this way is also faster to replay.
For GiST, the LSN-NSN interlock makes this a little tricky. All pages must
be marked with a valid (i.e. non-zero) LSN, so that the parent-child
LSN-NSN interlock works correctly. We now use magic value 1 for that during
index build. Change the fake LSN counter to begin from 1000, so that 1 is
safely smaller than any real or fake LSN. 2 would've been enough for our
purposes, but let's reserve a bigger range, in case we need more special
values in the future.
Author: Anastasia Lubennikova, Andrey V. Lepikhov Reviewed-by: Heikki Linnakangas, Dmitry Dolgov
Valgrind was rightly complaining that IndexVacuumInfo->report_progress
(added by commit ab0dfc961b6a) was not being initialized in some code
paths. Repair.
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
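While a build is running, its progress can be watched from another
session, for example:
SELECT pid, phase, blocks_done, blocks_total
FROM pg_stat_progress_create_index;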
Stephen Frost [Tue, 2 Apr 2019 16:35:32 +0000 (12:35 -0400)]
Add support for partial TOAST decompression
When asked for a slice of a TOAST entry, decompress enough to return the
slice instead of decompressing the entire object.
For use cases where the slice is at, or near, the beginning of the entry,
this avoids a lot of unnecessary decompression work.
This changes the signature of pglz_decompress() by adding a boolean to
indicate if it's ok for the call to finish before consuming all of the
source or destination buffers.
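For example, a query that reads only the prefix of a large TOASTed
value (table and column names are hypothetical) now decompresses only
enough data to produce the requested slice:
SELECT substr(payload, 1, 100) FROM messages;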
Author: Paul Ramsey Reviewed-By: Rafia Sabih, Darafei Praliaskouski, Regina Obe
Discussion: https://postgr.es/m/CACowWR07EDm7Y4m2kbhN_jnys%3DBBf9A6768RyQdKm_%3DNpkcaWg%40mail.gmail.com
postgres_fdw: Perform the (FINAL, NULL) upperrel operations remotely.
The upper-planner pathification allows FDWs to arrange to push down
different types of upper-stage operations to the remote side. This
commit teaches postgres_fdw to do it for the (FINAL, NULL) upperrel,
which is responsible for doing LockRows, LIMIT, and/or ModifyTable.
This provides the ability for postgres_fdw to handle SELECT commands
so that it 1) skips the LockRows step (if any) (note that this is
safe since it performs early locking) and 2) pushes down the LIMIT
and/or OFFSET restrictions (if any) to the remote side. This doesn't
handle the INSERT/UPDATE/DELETE cases.
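As an illustrative sketch (the foreign table name is hypothetical):
EXPLAIN (VERBOSE, COSTS OFF)
SELECT * FROM remote_tab LIMIT 10;
The plan's Remote SQL should now carry the LIMIT clause, instead of a
local Limit node being applied on top of the foreign scan.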
Author: Etsuro Fujita Reviewed-By: Antonin Houska and Jeff Janes
Discussion: https://postgr.es/m/87pnz1aby9.fsf@news-spur.riddles.org.uk
postgres_fdw: Perform the (ORDERED, NULL) upperrel operations remotely.
The upper-planner pathification allows FDWs to arrange to push down
different types of upper-stage operations to the remote side. This
commit teaches postgres_fdw to do it for the (ORDERED, NULL) upperrel,
which is responsible for evaluating the query's ORDER BY ordering.
Since postgres_fdw is already able to evaluate that ordering remotely
for foreign baserels and foreign joinrels (see commit aa09cd242f et al.),
this adds support for that for foreign grouping relations.
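For example (again with a hypothetical foreign table), the sort below a
grouped query can now run on the remote server:
EXPLAIN (VERBOSE, COSTS OFF)
SELECT a, count(*) FROM remote_tab GROUP BY a ORDER BY a;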
Author: Etsuro Fujita Reviewed-By: Antonin Houska and Jeff Janes
Discussion: https://postgr.es/m/87pnz1aby9.fsf@news-spur.riddles.org.uk
Dean Rasheed [Tue, 2 Apr 2019 07:13:59 +0000 (08:13 +0100)]
Perform RLS subquery checks as the right user when going via a view.
When accessing a table with RLS via a view, the RLS checks are
performed as the view owner. However, the code neglected to propagate
that to any subqueries in the RLS checks. Fix that by calling
setRuleCheckAsUser() for all RLS policy quals and withCheckOption
checks for RTEs with RLS.
Michael Paquier [Tue, 2 Apr 2019 01:58:07 +0000 (10:58 +0900)]
Add progress reporting to pg_checksums
This adds a new option to pg_checksums called -P/--progress, showing
every second some information about the computation state of an
operation for --check and --enable (--disable only updates the control
file and is quick). This requires a pre-scan of the data folder so
that the total size of checksummable items can be calculated, and then it
gets compared to the amount processed.
Similarly to what is done for pg_rewind and pg_basebackup, the
information printed in the progress report consists of the current
amount of data computed and the total amount of data to compute. This
could be extended later on.
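A typical invocation (the data directory path is illustrative):
pg_checksums --check --progress -D /srv/pgdata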
Author: Michael Banck, Bernd Helmle Reviewed-by: Fabien Coelho, Michael Paquier
Discussion: https://postgr.es/m/1535719851.1286.17.camel@credativ.de
Thomas Munro [Tue, 2 Apr 2019 01:37:14 +0000 (14:37 +1300)]
Add wal_recycle and wal_init_zero GUCs.
On at least ZFS, it can be beneficial to create new WAL files every
time and not to bother zero-filling them. Since it's not clear which
other filesystems might benefit from one or both of those things,
add individual GUCs to control those two behaviors independently and
make only very general statements in the docs.
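For example, on a copy-on-write filesystem such as ZFS one might set,
in postgresql.conf:
wal_recycle = off
wal_init_zero = off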
Author: Jerry Jelinek, with some adjustments by Thomas Munro Reviewed-by: Alvaro Herrera, Andres Freund, Tomas Vondra, Robert Haas and others
Discussion: https://postgr.es/m/CACPQ5Fo00QR7LNAcd1ZjgoBi4y97%2BK760YABs0vQHH5dLdkkMA%40mail.gmail.com
Andres Freund [Mon, 1 Apr 2019 21:57:21 +0000 (14:57 -0700)]
Only allow heap in a number of contrib modules.
Contrib modules pgrowlocks, pgstattuple and some functionality in
pageinspect currently only support the heap table AM. As they are all
concerned with low-level details that aren't reasonably exposed via
tableam, error out if invoked on a non-heap relation.
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Andres Freund [Mon, 1 Apr 2019 21:41:42 +0000 (14:41 -0700)]
tableam: Add table_finish_bulk_insert().
This replaces the previous calls of heap_sync() in places using
bulk-insert. By passing in the flags used for bulk-insert the AM can
decide (first at insert time and then during the finish call) which of
the optimizations apply to it, and what operations are necessary to
finish a bulk insert operation.
Also change HEAP_INSERT_* flags to TABLE_INSERT, and rename hi_options
to ti_options.
These changes are made even in copy.c, which hasn't yet been converted
to tableam. There's no harm in doing so.
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Tom Lane [Mon, 1 Apr 2019 21:37:26 +0000 (17:37 -0400)]
Restrict pgbench's zipfian parameter to ensure good performance.
Remove the code that supported zipfian distribution parameters less
than 1.0, as it had undocumented performance hazards, and it's not
clear that the case is useful enough to justify either fixing or
documenting those hazards.
Also, since the code path for parameter > 1.0 could perform badly
for values very close to 1.0, establish a minimum allowed value
of 1.001. This solution seems superior to the previous vague
documentation warning about small values not performing well.
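For example, in a pgbench script (the variable name is illustrative):
\set aid random_zipfian(1, 1000000, 1.5)
A parameter below the new minimum, such as random_zipfian(1, 1000000,
1.0), is now rejected.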
Thomas Munro [Mon, 1 Apr 2019 20:08:15 +0000 (09:08 +1300)]
Fix deadlock in heap_compute_xid_horizon_for_tuples().
We can't call code that uses syscache while we hold buffer locks
on a catalog relation. If passed such a relation, just fall back
to the general effective_io_concurrency GUC rather than trying to
look up the containing tablespace's IO concurrency setting.
We might find a better way to control prefetching in follow-up
work, but for now this is enough to avoid the deadlock introduced
by commit 558a9165e0.
Reviewed-by: Andres Freund Diagnosed-by: Peter Geoghegan
Discussion: https://postgr.es/m/CA%2BhUKGLCwPF0S4Mk7S8qw%2BDK0Bq65LueN9rofAA3HHSYikW-Zw%40mail.gmail.com
Discussion: https://postgr.es/m/962831d8-c18d-180d-75fb-8b842e3a2742%40chrullrich.net
Tom Lane [Mon, 1 Apr 2019 20:20:22 +0000 (16:20 -0400)]
Improve documentation about our XML functionality.
Add a section explaining how our XML features depart from current
versions of the SQL standard. Update and clarify the descriptions
of some XML functions.
This unifies the various ad hoc logging (message printing, error
printing) systems used throughout the command-line programs.
Features:
- Program name is automatically prefixed.
- Message string does not end with newline. This removes a common
source of inconsistencies and omissions.
- Additionally, a final newline is automatically stripped, simplifying
use of PQerrorMessage() etc., another common source of mistakes.
- I converted error message strings to use %m where possible.
- As a result of the above several points, more translatable message
strings can be shared between different components and between
frontends and backend, without gratuitous punctuation or whitespace
differences.
- There is support for setting a "log level". This is not meant to be
user-facing, but can be used internally to implement debug or
verbose modes.
- Lazy argument evaluation, so no significant overhead if logging at
some level is disabled.
- Some color in the messages, similar to gcc and clang. Set
PG_COLOR=auto to try it out. Some colors are predefined, but can be
customized by setting PG_COLORS.
- Common files (common/, fe_utils/, etc.) can handle logging much more
simply by just using one API without worrying too much about the
context of the calling program, requiring callbacks, or having to
pass "progname" around everywhere.
- Some programs called setvbuf() to make sure that stderr is
unbuffered, even on Windows. But not all programs did that. This
is now done centrally.
Soft goals:
- Reduces vertical space use and visual complexity of error reporting
in the source code.
- Encourages more deliberate classification of messages. For example,
in some cases it wasn't clear without analyzing the surrounding code
whether a message was meant as an error or just an info.
- Concepts and terms are vaguely aligned with popular logging
frameworks such as log4j and Python logging.
This is all just about printing stuff out. Nothing affects program
flow (e.g., fatal exits). The uses are just too varied to do that.
Some existing code had wrappers that do some kind of print-and-exit,
and I adapted those.
I tried to keep the output mostly the same, but there is a lot of
historical baggage to unwind and special cases to consider, and I
might not always have succeeded. One significant change is that
pg_rewind used to write all error messages to stdout. That is now
changed to stderr.
Reviewed-by: Donald Dong <xdong@csumb.edu> Reviewed-by: Arthur Zakirov <a.zakirov@postgrespro.ru>
Discussion: https://www.postgresql.org/message-id/flat/6a609b43-4f57-7348-6480-bd022f924310@2ndquadrant.com
Throw error in jsonb_path_match() when result is not single boolean
jsonb_path_match() checks whether a jsonb document matches a jsonpath
query, so the jsonpath query should return a single boolean. Previously,
if the result of the jsonpath query was not a single boolean, NULL was
returned regardless of whether silent mode was on or off. That is wrong
when silent mode is off, so this commit makes jsonb_path_match() throw
an error in this case.
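For example:
SELECT jsonb_path_match('{"a": 1}', '$.a == 1');  -- true
SELECT jsonb_path_match('{"a": 1}', '$.a');       -- now an error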
Restrict some cases in parsing numerics in jsonpath
Jsonpath currently accepts integers with leading zeroes and floats
starting with a dot. However, the SQL standard requires following the
JSON specification, which allows neither of these. Our json[b] datatypes
also reject them, so restrict them in jsonpath altogether.
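For example, both of the following are now rejected with a syntax
error:
SELECT '$ ? (@ == 012)'::jsonpath;  -- integer with a leading zero
SELECT '$ ? (@ > .1)'::jsonpath;    -- float starting with a dot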
This commit makes the existing GIN operator classes jsonb_ops and
jsonb_path_ops support the "jsonb @@ jsonpath" and "jsonb @? jsonpath"
operators. The basic idea is to extract statements of the following
form out of the jsonpath:
key1.key2. ... .keyN = const
The rest of the jsonpath is rechecked from the heap.
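A sketch of the intended usage (table and column names are
hypothetical):
CREATE INDEX ON docs USING gin (jdoc jsonb_path_ops);
SELECT * FROM docs WHERE jdoc @? '$.tags[*] ? (@ == "qui")';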
Catversion is bumped.
Discussion: https://postgr.es/m/fcc6fc6a-b497-f39a-923d-aa34d0c588e8%402ndQuadrant.com
Author: Nikita Glukhov, Alexander Korotkov Reviewed-by: Jonathan Katz, Pavel Stehule
is not allowed but we have to accept it in the grammar to avoid
shift/reduce conflicts because of the similar syntax for identity
columns. The existing code just ignored this, incorrectly. Add an
explicit error check and a bespoke error message.
One should almost always terminate an old process, not use a manual
removal tool like ipcrm. Removal of the ipcclean script eleven years
ago (39627b1ae680cba44f6e56ca5facec4fdbfe9495) and its non-replacement
corroborate that manual shm removal is now a niche goal. Back-patch to
9.4 (all supported versions).
Reviewed by Daniel Gustafsson and Kyotaro HORIGUCHI.
Andres Freund [Mon, 1 Apr 2019 00:51:49 +0000 (17:51 -0700)]
tableam: bitmap table scan.
This moves bitmap heap scan support to below an optional tableam
callback. It's optional as the whole concept of bitmap heapscans is
fairly block specific.
This basically moves the work previously done in bitgetpage() into the
new scan_bitmap_next_block callback, and the direct poking into the
buffer done in BitmapHeapNext() into the new scan_bitmap_next_tuple()
callback.
The abstraction is currently somewhat leaky because
nodeBitmapHeapscan.c's prefetching and visibilitymap based logic
remains - it's likely that we'll later have to move more into the
AM. But it's not trivial to do so without introducing a significant
amount of code duplication between the AMs, so that's a project for
later.
Note that now nodeBitmapHeapscan.c and the associated node types are a
bit misnamed. But it's not clear whether renaming wouldn't be a cure
worse than the disease. Either way, that'd be best done in a separate
commit.
Author: Andres Freund Reviewed-By: Robert Haas (in an older version)
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Andres Freund [Sun, 31 Mar 2019 03:18:53 +0000 (20:18 -0700)]
tableam: sample scan.
This moves sample scan support to below tableam. It's not optional as
there is, in contrast to e.g. bitmap heap scans, no alternative way to
perform tablesample queries. If an AM can't deal with the block based
API, it will have to throw an ERROR.
The tableam callbacks for this are block based, but given the current
TsmRoutine interface, that seems to be required.
The new interface doesn't require TsmRoutines to perform visibility
checks anymore - that requires the TsmRoutine to know details about
the AM, which we want to avoid. To continue to allow taking the
returned number of tuples into account, SampleScanState now has a
donetuples field (which previously existed in e.g. SystemRowsSamplerData)
that is only incremented after the visibility check succeeds.
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Peter Geoghegan [Mon, 1 Apr 2019 00:24:04 +0000 (17:24 -0700)]
Fix nbtree high key "continuescan" row compare bug.
Commit 29b64d1d mishandled skipping over truncated high key attributes
during row comparisons. The row comparison key matching loop would loop
forever when a truncated attribute was encountered for a row compare
subkey. Fix by following the example of other code in the loop: advance
the current subkey, or break out of the loop when the last subkey is
reached.
Add test coverage for the relevant _bt_check_rowcompare() code path.
The new test case is somewhat tied to nbtree implementation details,
which isn't ideal, but seems unavoidable.
Tom Lane [Sun, 31 Mar 2019 19:49:06 +0000 (15:49 -0400)]
Add test case exercising formerly-unreached code in inheritance_planner.
There was some debate about whether the code I'd added to remap
AppendRelInfos obtained from the initial SELECT planning run is
actually necessary. Add a test case demonstrating that it is.
Tom Lane [Sun, 31 Mar 2019 17:47:41 +0000 (13:47 -0400)]
Compute root->qual_security_level in a less random place.
We can set this up once and for all in subquery_planner's initial survey
of the flattened rangetable, rather than incrementally adjusting it in
build_simple_rel. The previous approach made it rather hard to reason
about exactly when the value would be available, and we were definitely
using it in some places before the final value was computed.
Noted while fooling around with Amit Langote's patch to delay creation
of inheritance child rels. That didn't break this code, but it made it
even more fragile, IMO.
Michael Paquier [Sun, 31 Mar 2019 13:59:12 +0000 (22:59 +0900)]
Skip redundant anti-wraparound vacuums
An anti-wraparound vacuum has to be aggressive by definition, as it
needs to work on all the pages of a relation. However, it can happen
that due to some concurrent activity an anti-wraparound vacuum is
marked as non-aggressive, which makes it redundant with a previous run,
and actually useless, as an anti-wraparound vacuum should process all
the pages of a relation. This commit makes such vacuums be skipped.
A non-aggressive anti-wraparound vacuum can be triggered easily by
mixing low values of autovacuum_freeze_max_age (to control
anti-wraparound) and autovacuum_freeze_table_age (to control the
aggressiveness).
28a8fa9 added some extra logging printing all the possible combinations
of anti-wraparound and aggressive vacuums; this now gets simplified, as
a non-aggressive anti-wraparound vacuum is skipped.
Per discussion mainly between Andrew Dunstan, Robert Haas, Álvaro
Herrera, Kyotaro Horiguchi, Masahiko Sawada, and myself.
Author: Kyotaro Horiguchi, Michael Paquier Reviewed-by: Andrew Dunstan, Álvaro Herrera
Discussion: https://postgr.es/m/20180914153554.562muwr3uwujno75@alvherre.pgsql
Andres Freund [Sat, 30 Mar 2019 23:40:33 +0000 (16:40 -0700)]
tableam: Move heap specific logic from estimate_rel_size below tableam.
This just moves the table/matview[/toast] determination of relation
size to a callback, and uses a copy of the existing logic to implement
that callback for heap.
It probably would make sense to also move the index specific logic
into a callback, so the metapage handling (and probably more) can be
index specific. But that's a separate task.
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Andres Freund [Sat, 30 Mar 2019 23:21:09 +0000 (16:21 -0700)]
tableam: VACUUM and ANALYZE support.
This is a relatively straightforward move of the current
implementation to sit below tableam. As the current analyze sampling
implementation is pretty inherently block based, the tableam analyze
interface is as well. It might make sense to generalize that at some
point, but that seems like a larger project that shouldn't be
undertaken at the same time as the introduction of tableam.
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Tom Lane [Sat, 30 Mar 2019 22:58:55 +0000 (18:58 -0400)]
Speed up planning when partitions can be pruned at plan time.
Previously, the planner created RangeTblEntry and RelOptInfo structs
for every partition of a partitioned table, even though many of them
might later be deemed uninteresting thanks to partition pruning logic.
This incurred significant overhead when there are many partitions.
Arrange to postpone creation of these data structures until after
we've processed the query enough to identify restriction quals for
the partitioned table, and then apply partition pruning before, not
after, creation of each partition's data structures. In this way
we need not open the partition relations at all for partitions that
the planner has no real interest in.
For queries that can be proven at plan time to access only a small
number of partitions, this patch improves the practical maximum
number of partitions from under 100 to perhaps a few thousand.
Amit Langote, reviewed at various times by Dilip Kumar, Jesper Pedersen,
Yoshikazu Imai, and David Rowley
Tomas Vondra [Sat, 30 Mar 2019 17:34:59 +0000 (18:34 +0100)]
Additional fixes of memory alignment in pg_mcv_list code
Commit d85e0f366a tried to fix memory alignment issues in serialization
and deserialization of pg_mcv_list values, but it was a few bricks shy
of a load. The arrays of uint16 indexes in serialized items were not
aligned, and both the values and isnull flags were using the same
pointer.
Tom Lane [Sat, 30 Mar 2019 16:48:19 +0000 (12:48 -0400)]
Avoid crash in partitionwise join planning under GEQO.
While trying to plan a partitionwise join, we may be faced with cases
where one or both input partitions for a particular segment of the join
have been pruned away. In HEAD and v11, this is problematic because
earlier processing didn't bother to make a pruned RelOptInfo fully
valid. With an upcoming patch to make partition pruning more efficient,
this'll be even more problematic because said RelOptInfo won't exist at
all.
The existing code attempts to deal with this by retroactively making the
RelOptInfo fully valid, but that causes crashes under GEQO because join
planning is done in a short-lived memory context. In v11 we could
probably have fixed this by switching to the planner's main context
while fixing up the RelOptInfo, but that idea doesn't scale well to the
upcoming patch. It would be better not to mess with the base-relation
data structures during join planning, anyway --- that's just a recipe
for order-of-operations bugs.
In many cases, though, we don't actually need the child RelOptInfo,
because if the input is certainly empty then the join segment's result
is certainly empty, so we can skip making a join plan altogether. (The
existing code ultimately arrives at the same conclusion, but only after
doing a lot more work.) This approach works except when the pruned-away
partition is on the nullable side of a LEFT, ANTI, or FULL join, and the
other side isn't pruned. But in those cases the existing code leaves a
lot to be desired anyway --- the correct output is just the result of
the unpruned side of the join, but we were emitting a useless outer join
against a dummy Result. Pending somebody writing code to handle that
more nicely, let's just abandon the partitionwise-join optimization in
such cases.
When the modified code skips making a join plan, it doesn't make a
join RelOptInfo either; this requires some upper-level code to
cope with nulls in part_rels[] arrays. We would have had to have
that anyway after the upcoming patch.
Back-patch to v11 since the crash is demonstrable there.
Peter Eisentraut [Sat, 30 Mar 2019 07:13:09 +0000 (08:13 +0100)]
Generated columns
This is an SQL-standard feature that allows creating columns that are
computed from expressions rather than assigned, similar to a view or
materialized view but on a column basis.
This implements one kind of generated column: stored (computed on
write). Another kind, virtual (computed on read), is planned for the
future, and some room is left for it.
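A minimal example of the syntax:
CREATE TABLE people (
    height_cm numeric,
    height_in numeric GENERATED ALWAYS AS (height_cm / 2.54) STORED
);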
Reviewed-by: Michael Paquier <michael@paquier.xyz> Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b151f851-4019-bdb1-699e-ebab07d2f40a@2ndquadrant.com
Tomas Vondra [Fri, 29 Mar 2019 17:50:51 +0000 (18:50 +0100)]
Fix memory alignment in pg_mcv_list serialization
Blind attempt at fixing ia64, hppa and sparc builds.
The serialized representation of MCV lists did not enforce proper memory
alignment for internal fields, resulting in deserialization issues on
platforms that are more sensitive to this (ia64, sparc and hppa).
This forces a catalog version bump, because the layout of serialized
pg_mcv_list changes.
Michael Paquier [Fri, 29 Mar 2019 14:00:51 +0000 (23:00 +0900)]
Reorganize Notes section in documentation of pg_checksums
This commit reorders the paragraphs of the Notes section in order of
importance, and better clarifies the safe uses of pg_checksums for
replication setups.
Robert Haas [Fri, 29 Mar 2019 12:22:49 +0000 (08:22 -0400)]
Allow existing VACUUM options to take a Boolean argument.
This makes VACUUM work more like EXPLAIN already does without changing
the meaning of any commands that already work. It is intended to
facilitate the addition of future VACUUM options that may take
non-Boolean parameters or that default to false.
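For example, the following forms are now all accepted (the table name
is hypothetical):
VACUUM (VERBOSE) mytable;
VACUUM (VERBOSE TRUE) mytable;
VACUUM (VERBOSE FALSE, FREEZE TRUE) mytable;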
Robert Haas [Fri, 29 Mar 2019 12:09:39 +0000 (08:09 -0400)]
Warn more strongly about the dangers of exclusive backup mode.
Especially, warn about the hazards of mishandling the backup_label
file. Adjust a couple of server messages to be more clear about
the hazards associated with removing backup_label files, too.
David Steele and Robert Haas, reviewed by Laurenz Albe, Martín
Marqués, Peter Eisentraut, and Magnus Hagander.
Peter Eisentraut [Fri, 29 Mar 2019 09:53:40 +0000 (10:53 +0100)]
Fix incorrect code in new REINDEX CONCURRENTLY code
The previous code was adding pointers to transient variables to a
list, but by the time the list was read, the variable might be gone,
depending on the compiler. Fix it by making copies in the proper
memory context.
Peter Eisentraut [Fri, 29 Mar 2019 07:25:20 +0000 (08:25 +0100)]
REINDEX CONCURRENTLY
This adds the CONCURRENTLY option to the REINDEX command. A REINDEX
CONCURRENTLY on a specific index creates a new index (like CREATE
INDEX CONCURRENTLY), then renames the old index out of the way and the
new index into place, adjusts the dependencies, and then drops the old
index (like DROP INDEX CONCURRENTLY). The REINDEX command also has
the capability to run its other variants (TABLE, DATABASE) with the
CONCURRENTLY option (but not SYSTEM).
The reindexdb command gets the --concurrently option.
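For example (index, table, and database names are illustrative):
REINDEX INDEX CONCURRENTLY my_index;
REINDEX TABLE CONCURRENTLY my_table;
reindexdb --concurrently mydb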
Author: Michael Paquier, Andreas Karlsson, Peter Eisentraut Reviewed-by: Andres Freund, Fujii Masao, Jim Nasby, Sergei Kornilov
Discussion: https://www.postgresql.org/message-id/flat/60052986-956b-4478-45ed-8bd119e9b9cf%402ndquadrant.com#74948a1044c56c5e817a5050f554ddee
Andres Freund [Fri, 29 Mar 2019 03:01:14 +0000 (20:01 -0700)]
tableam: relation creation, VACUUM FULL/CLUSTER, SET TABLESPACE.
This moves the responsibility for:
- creating the storage necessary for a relation, including creating a
new relfilenode for a relation with existing storage
- non-transactional truncation of a relation
- VACUUM FULL / CLUSTER's rewrite of a table
below tableam.
This is fairly straightforward, with a bit of complexity smattered in
to move the computation of xid / multixid horizons below the AM, as
they don't make sense for every table AM.
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Tomas Vondra [Thu, 28 Mar 2019 19:03:14 +0000 (20:03 +0100)]
Fix deserialization of pg_mcv_list values
There were multiple issues in deserialization of pg_mcv_list values.
Firstly, the data is loaded from syscache, but the deserialization was
performed after ReleaseSysCache(), at which point the data might have
already disappeared. Fixed by moving the calls in statext_mcv_load,
and using the same NULL-handling code as existing stats.
Secondly, the deserialized representation used pointers into the
serialized representation. But that is also unsafe, because the data
may disappear at any time. Fixed by reworking and simplifying the
deserialization code to always copy all the data.
And thirdly, when deserializing values for types passed by value, the
code simply did memcpy(d,s,typlen) which however does not work on
big-endian machines. Fixed by using fetch_att/store_att_byval.
Thomas Munro [Wed, 27 Mar 2019 21:59:19 +0000 (10:59 +1300)]
Use FullTransactionId for the transaction stack.
Provide GetTopFullTransactionId() and GetCurrentFullTransactionId().
The intended users of these interfaces are access methods that use
xids for visibility checks but don't want to have to go back and
"freeze" existing references some time later before the 32 bit xid
counter wraps around.
Use a new struct to serialize the transaction state for parallel
query, because FullTransactionId doesn't fit into the previous
serialization scheme very well.
Author: Thomas Munro Reviewed-by: Heikki Linnakangas
Discussion: https://postgr.es/m/CAA4eK1%2BMv%2Bmb0HFfWM9Srtc6MVe160WFurXV68iAFMcagRZ0dQ%40mail.gmail.com
Thomas Munro [Wed, 27 Mar 2019 21:34:43 +0000 (10:34 +1300)]
Add basic infrastructure for 64 bit transaction IDs.
Instead of inferring epoch progress from xids and checkpoints,
introduce a 64 bit FullTransactionId type and use it to track xid
generation. This fixes an unlikely bug where the epoch is reported
incorrectly if the range of active xids wraps around more than once
between checkpoints.
The only user-visible effect of this commit is to correct the epoch
used by txid_current() and txid_status(), also visible with
pg_controldata, in those rare circumstances. It also creates some
basic infrastructure so that later patches can use 64 bit
transaction IDs in more places.
The new type is a struct that we pass by value, as a form of strong
typedef. This prevents the sort of accidental confusion between
TransactionId and FullTransactionId that would be possible if we
were to use a plain old uint64.
Author: Thomas Munro Reported-by: Amit Kapila Reviewed-by: Andres Freund, Tom Lane, Heikki Linnakangas
Discussion: https://postgr.es/m/CAA4eK1%2BMv%2Bmb0HFfWM9Srtc6MVe160WFurXV68iAFMcagRZ0dQ%40mail.gmail.com
Andres Freund [Thu, 28 Mar 2019 02:59:06 +0000 (19:59 -0700)]
tableam: Support for an index build's initial table scan(s).
To support building indexes over tables of different AMs, the scans to
do so need to be routed through the table AM. While moving a fair
amount of code, nearly all the changes are just moving code to below a
callback.
Currently the range based interface wouldn't make much sense for non
block based table AMs. But that seems acceptable for now.
Author: Andres Freund
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Peter Eisentraut [Wed, 27 Mar 2019 21:57:43 +0000 (22:57 +0100)]
doc: Add some images
Add infrastructure for having images in the documentation, in SVG
format. Add two images to start with. See the included README file
for instructions.
Author: Jürgen Purtz <juergen@purtz.de>
Author: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/aaa54502-05c0-4ea5-9af8-770411a6bf4b@purtz.de
Peter Eisentraut [Wed, 27 Mar 2019 20:12:10 +0000 (21:12 +0100)]
Use Pandoc also for plain-text documentation output
The makefile rule for the (rarely used) plain-text output postgres.txt
was still written to use lynx, but in 96b8b8b6f9d8de4af01a77797273ad88c7a8e32e, where the INSTALL file was
switched to pandoc, the rest of the makefile support for lynx was
removed, so this was broken. Rewrite the rule to also use pandoc for
postgres.txt.
Tomas Vondra [Wed, 27 Mar 2019 19:07:41 +0000 (20:07 +0100)]
Minor improvements for the multivariate MCV lists
The MCV build should always call get_mincount_for_mcv_list(), as
there is no other logic to decide whether the MCV list represents all
the data. So just remove the (ngroups > nitems) condition.
Also, when building MCV lists, the number of items was limited by the
statistics target (i.e. up to 10000). But when deserializing the MCV
list, a different value (8192) was used to check the input, causing
an error. Simply ensure that the same value is used in both places.
This should have been included in 7300a69950, but I forgot to include it
in that commit.
Tomas Vondra [Wed, 27 Mar 2019 17:32:18 +0000 (18:32 +0100)]
Add support for multivariate MCV lists
Introduce a third extended statistic type, supported by the CREATE
STATISTICS command - MCV lists, a generalization of the statistic
already built and used for individual columns.
Compared to the already supported types (n-distinct coefficients and
functional dependencies), MCV lists are more complex, include column
values and allow estimation of a much wider range of common clauses
(equality and inequality conditions, IS NULL, IS NOT NULL etc.).
Similarly to the other types, a new pseudo-type (pg_mcv_list) is used.
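For example, to build an MCV list on a pair of correlated columns
(names are hypothetical):
CREATE STATISTICS s_ab (mcv) ON a, b FROM t;
ANALYZE t;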
Author: Tomas Vondra Reviewed-by: Dean Rasheed, David Rowley, Mark Dilger, Alvaro Herrera
Discussion: https://postgr.es/m/dfdac334-9cf2-2597-fb27-f0fb3753f435@2ndquadrant.com
Tom Lane [Wed, 27 Mar 2019 16:57:41 +0000 (12:57 -0400)]
Avoid passing query tlist around separately from root->processed_tlist.
In the dim past, the planner kept the fully-processed version of the query
targetlist (the result of preprocess_targetlist) in grouping_planner's
local variable "tlist", and only grudgingly passed it to individual other
routines as needed. Later we discovered a need to still have it available
after grouping_planner finishes, and invented the root->processed_tlist
field for that purpose, but it wasn't used internally to grouping_planner;
the tlist was still being passed around separately in the same places as
before.
Now comes a proposed patch to allow appendrel expansion to add entries
to the processed tlist, well after preprocess_targetlist has finished
its work. To avoid having to pass around the tlist explicitly, it's
proposed to allow appendrel expansion to modify root->processed_tlist.
That makes aliasing the tlist with assorted parameters and local
variables really scary. It would accidentally work as long as the
tlist is initially nonempty, because then the List header won't move
around, but it's not exactly hard to think of ways for that to break.
Aliased values are poor programming practice anyway.
Hence, get rid of local variables and parameters that can be identified
with root->processed_tlist, in favor of just using that field directly.
And adjust comments to match. (Some of the new comments speak as though
it's already possible for appendrel expansion to modify the tlist; that's
not true yet, but will happen in a later patch.)
Alvaro Herrera [Wed, 27 Mar 2019 15:17:19 +0000 (12:17 -0300)]
pgbench: doExecuteCommand -> executeMetaCommand
The new function is only in charge of meta commands, not SQL commands.
This change makes the code a little clearer: now all the state changes
are effected by advanceConnectionState. It also removes one indent
level, which makes the diff look bulkier than it really is.
Michael Paquier [Wed, 27 Mar 2019 12:04:25 +0000 (21:04 +0900)]
Improve error handling of column references in expression transformation
Column references are not allowed in default expressions and partition
bound expressions, and are restricted as such once the transformation of
their expressions is done. However, trying to use more complex column
references can lead to confusing error messages. For example, trying to
use a two-field column reference name for default expressions and
partition bounds leads to "missing FROM-clause entry for table", which
makes no sense in their respective context.
In order to make the errors generated more useful, this commit adds more
verbose messages when transforming column references depending on the
context. This has one small consequence, though: for example, an
expression using an aggregate with a column reference as its argument
now causes an error to be generated for the column reference, whereas
before this commit the aggregate was the problem reported, because
column references get transformed first.
The confusion has existed for default expressions for a long time,
while the problem is new as of v12 for partition bounds. Still, given
the lack of complaints on the matter, no backpatch is done.
The patch has been written by Amit Langote and me, and Tom Lane has
provided the improvement of the documentation for default expressions on
the CREATE TABLE page.
Author: Amit Langote, Michael Paquier Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20190326020853.GM2558@paquier.xyz
Thomas Munro [Wed, 27 Mar 2019 08:16:50 +0000 (21:16 +1300)]
Fix off-by-one error in txid_status().
The transaction ID returned by GetNextXidAndEpoch() is in the future,
so we can't attempt to access its status or we might try to read a
CLOG page that doesn't exist. The > vs >= confusion probably stemmed
from the choice of a variable name containing the word "last" instead
of "next", so fix that too.
Back-patch to 10 where the function arrived.
Author: Thomas Munro
Discussion: https://postgr.es/m/CA%2BhUKG%2Buua_BV5cyfsioKVN2d61Lukg28ECsWTXKvh%3DBtN2DPA%40mail.gmail.com
Michael Paquier [Wed, 27 Mar 2019 03:02:50 +0000 (12:02 +0900)]
Switch some palloc/memset calls to palloc0
Some code paths have been doing an allocation followed by an immediate
memset() to initialize the allocated area with zeros. This is a bit
overkill, as there are already interfaces that do both things in one
call.
Author: Daniel Gustafsson Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/vN0OodBPkKs7g2Z1uyk3CUEmhdtspHgYCImhlmSxv1Xn6nY1ZnaaGHL8EWUIQ-NEv36tyc4G5-uA3UXUF2l4sFXtK_EQgLN1hcgunlFVKhA=@yesql.se
Michael Paquier [Wed, 27 Mar 2019 02:35:12 +0000 (11:35 +0900)]
Switch function current_schema[s]() to be parallel-unsafe
When invoked for the first time in a session, current_schema() and
current_schemas() can finish by creating a temporary schema. Currently
those functions are parallel-safe; however, if for one reason or another
they get launched across multiple parallel workers, they would fail when
attempting to create a temporary schema as temporary contexts are not
supported in this case.
The original issue has been spotted by buildfarm members crake and
lapwing, after commit c5660e0 has introduced the first regression tests
based on current_schema() in the tree. After that, 396676b has
introduced a workaround to avoid parallel plans but that was not
completely right either.
Catversion is bumped.
Author: Michael Paquier Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20190118024618.GF1883@paquier.xyz
Tomas Vondra [Wed, 27 Mar 2019 01:39:39 +0000 (02:39 +0100)]
Track unowned relations in doubly-linked list
Relations dropped in a single transaction are tracked in a list of
unowned relations. With a large number of dropped relations this resulted
in poor performance at the end of a transaction, when the relations are
removed from the singly linked list one by one.
Commit b4166911 attempted to address this issue (particularly when it
happens during recovery) by removing the relations in a reverse order,
resulting in O(1) lookups in the list of unowned relations. This did
not work reliably, though, and it was possible to trigger the O(N^2)
behavior in various ways.
Instead of trying to remove the relations in a specific order with
respect to the linked list, which seems rather fragile, switch to a
regular doubly linked list. That allows us to remove relations cheaply no
matter where in the list they are.
As b4166911 was a bugfix, backpatched to all supported versions, do the
same thing here.
Andres Freund [Tue, 26 Mar 2019 21:41:46 +0000 (14:41 -0700)]
Compute XID horizon for page level index vacuum on primary.
Previously the xid horizon was only computed during WAL replay. That
had two major problems:
1) It relied on knowing what the table pointed to by the index looks
   like. That was easy enough before the introduction of tableam (we
   knew it had to be
heap, although some trickery around logging the heap relfilenodes
was required). But to properly handle table AMs we need
per-database catalog access to look up the AM handler, which
recovery doesn't allow.
2) Not knowing the xid horizon also makes it hard to support logical
decoding on standbys. When on a catalog table, we need to be able
to conflict with slots that have an xid horizon that's too old. But
computing the horizon by visiting the heap only works once
consistency is reached, whereas we always need to be able to detect
conflicts.
There's also a secondary problem, in that the current method performs
redundant work on every standby. But that's counterbalanced by
potentially computing the value when not necessary (either because
there's no standby, or because there's no connected backends).
Solve 1) and 2) by moving computation of the xid horizon to the
primary and by involving tableam in the computation of the horizon.
To address the potentially increased overhead, increase the efficiency
of the xid horizon computation for heap by sorting the tids, and
eliminating redundant buffer accesses. When prefetching is available,
additionally perform prefetching of buffers. As this is more of a
maintenance task, rather than something routinely done in every read
only query, we add an arbitrary 10 to the effective concurrency -
thereby using IO concurrency, when not globally enabled. That's
possibly not the perfect formula, but seems good enough for now.
Bumps WAL format, as latestRemovedXid is now part of the records, and
the heap's relfilenode isn't anymore.
Author: Andres Freund, Amit Khandekar, Robert Haas Reviewed-By: Robert Haas
Discussion:
https://postgr.es/m/20181212204154.nsxf3gzqv3gesl32@alap3.anarazel.de
https://postgr.es/m/20181214014235.dal5ogljs3bmlq44@alap3.anarazel.de
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Alvaro Herrera [Tue, 26 Mar 2019 23:19:28 +0000 (20:19 -0300)]
Fix partitioned index creation bug with dropped columns
ALTER INDEX .. ATTACH PARTITION fails if the partitioned table where the
index is defined contains more dropped columns than its partition, with
this message:
ERROR: incorrect attribute map
The cause was that one caller of CompareIndexInfo was passing the number
of attributes of the partition rather than the parent, which confused
the length check. Repair.
This can cause pg_upgrade to fail when used on such a database. Leave
some more objects around after regression tests, so that the case is
detected by the pg_upgrade test suite.
Remove some spurious empty lines noticed while looking for other cases
of the same problem.
Tom Lane [Tue, 26 Mar 2019 22:21:10 +0000 (18:21 -0400)]
Build "other rels" of appendrel baserels in a separate step.
Up to now, otherrel RelOptInfos were built at the same time as baserel
RelOptInfos, thanks to recursion in build_simple_rel(). However,
nothing in query_planner's preprocessing cares at all about otherrels,
only baserels, so we don't really need to build them until just before
we enter make_one_rel. This has two benefits:
* create_lateral_join_info did a lot of extra work to propagate
lateral-reference information from parents to the correct children.
But if we delay creation of the children till after that, it's
trivial (and much harder to break, too).
* Since we have all the restriction quals correctly assigned to
parent appendrels by this point, it'll be possible to do plan-time
pruning and never make child RelOptInfos at all for partitions that
can be pruned away. That's not done here, but will be later on.
Amit Langote, reviewed at various times by Dilip Kumar, Jesper Pedersen,
Yoshikazu Imai, and David Rowley
Tom Lane [Tue, 26 Mar 2019 17:32:30 +0000 (13:32 -0400)]
Fix oversight in data-type change for autovacuum_vacuum_cost_delay.
Commit caf626b2c missed that the relevant reloptions entry needs
to be moved from the intRelOpts[] array to realRelOpts[].
Somewhat surprisingly, it seems to work anyway, perhaps because
the desired default and limit values are all integers. We ought
to have either a simpler data structure or better cross-checking
here, but that's for another patch.
Tom Lane [Tue, 26 Mar 2019 16:03:27 +0000 (12:03 -0400)]
Get rid of duplicate child RTE for a partitioned table.
We've been creating duplicate RTEs for partitioned tables just
because we do so for regular inheritance parent tables. But unlike
regular-inheritance parents which are themselves regular tables
and thus need to be scanned, partitioned tables don't need the
extra RTE.
This makes the conditions for building a child RTE the same as those
for building an AppendRelInfo, allowing minor simplification in
expand_single_inheritance_child. Since the planner's actual processing
is driven off the AppendRelInfo list, nothing much changes beyond that,
we just have one fewer useless RTE entry.
Alvaro Herrera [Tue, 26 Mar 2019 14:14:34 +0000 (11:14 -0300)]
Improve psql's \d display of foreign key constraints
When used on a partition containing foreign keys coming from one of its
ancestors, \d would (rather unhelpfully) print the details about the
pg_constraint row in the partition. This becomes a bit frustrating when
the user tries things like dropping the FK in the partition; instead,
show the details for the foreign key on the table where it is defined.
Also, when a table is referenced by a foreign key on a partitioned
table, we would show multiple "Referenced by" lines, one for each
partition, which gets unwieldy pretty fast. Modify that so that it
shows only one line for the ancestor partitioned table where the FK is
defined.
Discussion: https://postgr.es/m/20181204143834.ym6euxxxi5aeqdpn@alvherre.pgsql Reviewed-by: Tom Lane, Amit Langote, Peter Eisentraut
Peter Eisentraut [Tue, 26 Mar 2019 08:23:08 +0000 (09:23 +0100)]
Fix misplaced const
These instances were apparently trying to carry the const qualifier
from the arguments through the complex casts, but for that the const
qualifier was misplaced.
Andres Freund [Tue, 26 Mar 2019 02:02:36 +0000 (19:02 -0700)]
Remove heap_hot_search().
After 71bdc99d0d7, "tableam: Add helper for indexes to check if a
corresponding table tuples exist." there's no in-core user left. As
there's unlikely to be an external user, and such an external user
could easily be adjusted to use table_index_fetch_tuple_check(),
remove heap_hot_search().
Per complaint from Peter Geoghegan
Author: Andres Freund
Discussion: https://postgr.es/m/CAH2-Wzn0Oq4ftJrTqRAsWy2WGjv0QrJcwoZ+yqWsF_Z5vjUBFw@mail.gmail.com
Michael Paquier [Tue, 26 Mar 2019 01:09:14 +0000 (10:09 +0900)]
Fix crash when using partition bound expressions
Since 7c079d7, partition bounds are able to use generalized expression
syntax when processed, treating "minvalue" and "maxvalue" as specific
cases, as they get passed down for transformation as column references.
The checks for infinite bounds in range expressions have been lax
though, causing crashes when trying to use column reference names with
more than one field. Here is an example causing a crash:
CREATE TABLE list_parted (a int) PARTITION BY LIST (a);
CREATE TABLE part_list_crash PARTITION OF list_parted
FOR VALUES IN (somename.somename);
Note that the creation of the second relation should fail, as partition
bounds cannot have column references in their expressions. So, when an
expression which does not match the expected infinite bounds is found,
this commit lets the generic transformation machinery check it. The
error message generated in this case also references a missing RTE,
which is confusing. That problem will be treated separately, as it has
also impacted default expressions for some time; for now only the cases
where a crash can happen are fixed.
While on it, extend the set of regression tests in place for list
partition bounds and add an extra set for range partition bounds.
Reported-by: Alexander Lakhin
Author: Michael Paquier Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/15668-0377b1981aa1a393@postgresql.org