Tom Lane [Wed, 22 Feb 2017 20:04:07 +0000 (15:04 -0500)]
Fix contrib/pg_trgm's extraction of trigrams from regular expressions.
The logic for removing excess trigrams from the result was faulty.
It intends to avoid merging the initial and final states of the NFA,
which is necessary, but in testing whether removal of a specific trigram
would cause that, it failed to consider the combined effects of all the
state merges that that trigram's removal would cause. This could result
in a broken final graph that would never match anything, leading to GIN
or GiST indexscans not finding anything.
To fix, add a "tentParent" field that is used only within this loop,
and set it to show state merges that we are tentatively going to do.
While examining a particular arc, we must chase up through tentParent
links as well as regular parent links (the former can only appear atop
the latter), and we must account for state init/fin flag merges that
haven't actually been done yet.
To simplify the latter, combine the separate init and fin bool fields
into a bitmap flags field. I also chose to get rid of the "children"
state list, which seems entirely inessential.
Per bug #14563 from Alexey Isayko, which the added test cases are based on.
Back-patch to 9.3 where this code was added.
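For reference, a sketch of the sort of query affected (the table, index,
and pattern here are hypothetical):

    CREATE EXTENSION IF NOT EXISTS pg_trgm;
    CREATE TABLE docs (body text);
    CREATE INDEX docs_body_trgm ON docs USING gin (body gin_trgm_ops);
    -- Regex searches like this are what drive the trigram extraction
    -- being fixed; with the bug, such an indexscan could return nothing
    -- even when matching rows exist.
    SELECT count(*) FROM docs WHERE body ~ '(foo|bar)baz';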
Andrew Dunstan [Wed, 22 Feb 2017 16:10:49 +0000 (11:10 -0500)]
Correctly handle array pseudotypes in to_json and to_jsonb
Columns with array pseudotypes were not identified as arrays, so they
were rendered as strings by the json and jsonb conversion routines.
This change allows them to be rendered as json arrays, making
it possible to deal correctly with the anyarray columns in pg_stats.
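For example, pg_stats.histogram_bounds is of pseudotype anyarray; with
this change it is rendered as a json array rather than a quoted string:

    SELECT to_json(histogram_bounds)
      FROM pg_stats
     WHERE tablename = 'pg_class' AND attname = 'relname';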
Robert Haas [Wed, 22 Feb 2017 06:45:17 +0000 (12:15 +0530)]
Pass the source text for a parallel query to the workers.
With this change, you can see the query that a parallel worker is
executing in pg_stat_activity, and if the worker crashes you can
see what query it was executing when it crashed.
Rafia Sabih, reviewed by Kuntal Ghosh and Amit Kapila and slightly
revised by me.
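For example, while a parallel query is running, something like this now
shows the same query text for the leader and its workers (illustrative):

    SELECT pid, state, query FROM pg_stat_activity;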
Robert Haas [Wed, 22 Feb 2017 02:29:27 +0000 (07:59 +0530)]
Shut down Gather's children before shutting down Gather itself.
It turns out that the original shutdown order here does not work well.
Multiple people attempting to develop further parallel query patches
have discovered that they need to do cleanup before the DSM goes away,
and you can't do that if the parent node gets cleaned up first.
Patch by me, reviewed by KaiGai Kohei and Dilip Kumar.
Tom Lane [Tue, 21 Feb 2017 22:51:27 +0000 (17:51 -0500)]
Fix sloppy handling of corner-case errors in fd.c.
Several places in fd.c had badly-thought-through handling of error returns
from lseek() and close(). The fact that those would seldom fail on valid
FDs is probably the reason we've not noticed this up to now; but if they
did fail, we'd get quite confused.
LruDelete and LruInsert actually just Assert'd that lseek never fails,
which is pretty awful on its face.
In LruDelete, we indeed can't throw an error, because that's likely to get
called during error abort and so throwing an error would probably just lead
to an infinite loop. But by the same token, throwing an error from the
close() right after that was ill-advised, not to mention that it would've
left the LRU state corrupted since we'd already unlinked the VFD from the
list. I also noticed that really, most of the time, we should know the
current seek position and it shouldn't be necessary to do an lseek here at
all. As patched, if we don't have a seek position and an lseek attempt
doesn't give us one, we'll close the file but then subsequent re-open
attempts will fail (except in the somewhat-unlikely case that a
FileSeek(SEEK_SET) call comes between and allows us to re-establish a known
target seek position). This isn't great but it won't result in any state
corruption.
Meanwhile, having an Assert instead of an honest test in LruInsert is
really dangerous: if that lseek failed, a subsequent read or write would
read or write from the start of the file, not where the caller expected,
leading to data corruption.
In both LruDelete and FileClose, if close() fails, just LOG that and mark
the VFD closed anyway. Possibly leaking an FD is preferable to getting
into an infinite loop or corrupting the VFD list. Besides, as far as I can
tell from the POSIX spec, it's unspecified whether or not the file has been
closed, so treating it as still open could be the wrong thing anyhow.
I also fixed a number of other places that were being sloppy about
behaving correctly when the seekPos is unknown.
Also, I changed FileSeek to return -1 with EINVAL for the cases where it
detects a bad offset, rather than throwing a hard elog(ERROR). It seemed
pretty inconsistent that some bad-offset cases would get a failure return
while others got elog(ERROR). It was missing an offset validity check for
the SEEK_CUR case on a closed file, too.
Back-patch to all supported branches, since all this code is fundamentally
identical in all of them.
Fujii Masao [Tue, 21 Feb 2017 18:11:58 +0000 (03:11 +0900)]
Make walsender always initialize the buffers.
Walsender uses local buffers for each outgoing and incoming message.
Previously, when creating a replication slot, walsender failed to
initialize one of them, which could cause a segmentation fault. To fix
this, change walsender so that it always initializes them before
executing the requested replication command.
Back-patch to 9.4 where replication slots were introduced.
Problem report and initial patch by Stas Kelvich, modified by me.
Report: https://www.postgresql.org/message-id/A1E9CB90-1FAC-4CAD-8DBA-9AA62A6E97C5@postgrespro.ru
Tom Lane [Tue, 21 Feb 2017 17:18:22 +0000 (12:18 -0500)]
Use less-generic table name in new regression test case.
Creating global objects named "foo" isn't an especially wise thing to do,
but especially not in a test script that has already used that name
for something else, and most especially not in a script that runs
in parallel with other scripts that use that name :-(
Tom Lane [Tue, 21 Feb 2017 16:28:23 +0000 (11:28 -0500)]
Reject too-old Python versions a bit sooner.
Commit 04aad4018 added this check after the search for a Python shared
library, which seems to me to be a pretty unfriendly ordering. The
search might fail for what are basically version-related reasons, and
in such a case it'd be better to say "your Python is too old" than
"could not find shared library for Python".
Peter Eisentraut [Tue, 21 Feb 2017 14:27:02 +0000 (09:27 -0500)]
Drop support for Python 2.3
There is no specific reason for this right now, but keeping support for
old Python versions around indefinitely increases the maintenance
burden. The oldest supported Python version is now Python 2.4, which is
still shipped in RHEL/CentOS 5 by default.
In configure, add a check for the required Python version and give a
friendly error message for an old version, instead of relying on an
obscure build error later on.
Tom Lane [Mon, 20 Feb 2017 15:27:48 +0000 (10:27 -0500)]
Improve error message for misuse of TZ, tz, OF formatting patterns.
Be specific about which pattern is being complained of, and avoid saying
"it's not supported in to_date", which is just confusing if the error is
actually coming out of to_timestamp. We can phrase it as "is only
supported in to_char", instead. Also, use the term "formatting field" not
"format pattern", because other error messages in the same file prefer that
terminology. (This isn't terribly consistent with the documentation, so
maybe we should change all these error messages?)
Tom Lane [Mon, 20 Feb 2017 15:05:00 +0000 (10:05 -0500)]
Fix documentation of to_char/to_timestamp TZ, tz, OF formatting patterns.
These are only supported in to_char, not in the other direction, but the
documentation failed to mention that. Also, describe TZ/tz as printing the
time zone "abbreviation", not "name", because what they print is elsewhere
referred to that way. Per bug #14558.
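To illustrate both points (values illustrative):

    SELECT to_char(now(), 'HH24:MI:SS OF');                -- works
    SELECT to_timestamp('12:00:00 +05', 'HH24:MI:SS OF');
    -- ERROR: formatting field "OF" is only supported in to_char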
Tom Lane [Sun, 19 Feb 2017 21:41:51 +0000 (16:41 -0500)]
Dept of second thoughts: rename new perl script.
It didn't take long at all for me to become irritated that the original
choice of name for this script resulted in "warning" showing up in several
places in build logs, because I tend to grep for that. Change the script
name to avoid that.
Tom Lane [Sun, 19 Feb 2017 21:14:52 +0000 (16:14 -0500)]
Adjust PL/Tcl regression test to dodge a possible bug or zone dependency.
One case in the PL/Tcl tests is observed to fail on RHEL5 with a Turkish
time zone setting. It's not clear if this is an old Tcl bug or something
odd about the zone data, but in any case that test is meant to see if the
Tcl [clock] command works at all, not what its corner-case behaviors are.
Therefore we have no need to test exactly which week a Sunday midnight is
considered to fall into. Probe the following Tuesday instead.
Tom Lane [Sun, 19 Feb 2017 18:04:30 +0000 (13:04 -0500)]
Suppress "unused variable" warnings with older versions of flex.
Versions of flex before 2.5.36 might generate code that results in an
"unused variable" warning, when using %option reentrant. Historically
we've worked around that by specifying -Wno-error, but that's an
unsatisfying solution. The official "fix" for this was just to insert a
dummy reference to the variable, so write a small perl script that edits
the generated C code similarly.
The MSVC side of this is untested, but the buildfarm should soon reveal
if I broke that.
Robert Haas [Sun, 19 Feb 2017 10:23:59 +0000 (15:53 +0530)]
Add optimizer and executor support for parallel index-only scans.
Commit 5262f7a4fc44f651241d2ff1fa688dd664a34874 added similar support
for parallel index scans; this extends that work to index-only scans.
As with parallel index scans, this requires support from the index AM,
so currently parallel index-only scans will only be possible for btree
indexes.
Rafia Sabih, reviewed and tested by Rahila Syed, Tushar Ahuja,
and Amit Kapila
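A sketch of the plan shape this enables (table, index, and setting are
hypothetical):

    SET min_parallel_index_scan_size = 0;
    EXPLAIN (COSTS OFF)
      SELECT count(*) FROM measurements WHERE id < 100000;
    --  Finalize Aggregate
    --    -> Gather
    --         -> Partial Aggregate
    --              -> Parallel Index Only Scan using measurements_id_idx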
Robert Haas [Sun, 19 Feb 2017 08:29:53 +0000 (13:59 +0530)]
Make dsa_allocate interface more like MemoryContextAlloc.
A new function dsa_allocate_extended now takes flags which indicate
that huge allocations should be permitted, that out-of-memory
conditions should not throw an error, and/or that the returned memory
should be zero-filled, just like MemoryContextAllocExtended.
Commit 9acb85597f1223ac26a5b19a9345849c43d0ff54, which added
dsa_allocate0, was broken because it failed to account for the
possibility that dsa_allocate() might return InvalidDsaPointer.
This fixes that problem along the way.
Magnus Hagander [Sat, 18 Feb 2017 12:45:14 +0000 (13:45 +0100)]
Fix help message for pg_basebackup -R
The recovery.conf file that's generated is specifically for replication,
and not needed (or wanted) for regular backup restore, so indicate that
in the message.
Peter Eisentraut [Sat, 18 Feb 2017 00:32:15 +0000 (19:32 -0500)]
Optimize query for information_schema.constraint_column_usage
The way the old query was written prevented some join optimizations
because the join conditions were hidden inside a CASE expression. With
a large number of constraints, the query became unreasonably slow. The
new query performs much better.
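The sort of query that benefits (illustrative):

    SELECT table_name, column_name, constraint_name
      FROM information_schema.constraint_column_usage
     WHERE table_schema = 'public';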
Tom Lane [Thu, 16 Feb 2017 16:30:07 +0000 (11:30 -0500)]
Doc: remove duplicate index entry.
This causes a warning with the old html-docs toolchain, though not with the
new. I had originally supposed that we needed both <indexterm> entries to
get both a primary index entry and a see-also link; but evidently not,
as pointed out by Fabien Coelho.
Tom Lane [Wed, 15 Feb 2017 23:15:47 +0000 (18:15 -0500)]
Formatting and docs corrections for logical decoding output plugins.
Make the typedefs for output plugins consistent with project style;
they were previously not even consistent with each other as to layout
or inclusion of parameter names. Make the documentation look the same,
and fix errors therein (missing and misdescribed parameters).
Tom Lane [Wed, 15 Feb 2017 21:40:05 +0000 (16:40 -0500)]
Make sure that hash join's bulk-tuple-transfer loops are interruptible.
The loops in ExecHashJoinNewBatch(), ExecHashIncreaseNumBatches(), and
ExecHashRemoveNextSkewBucket() are all capable of iterating over many
tuples without ever doing a CHECK_FOR_INTERRUPTS, so that the backend
might fail to respond to SIGINT or SIGTERM for an unreasonably long time.
Fix that. In the case of ExecHashJoinNewBatch(), it seems useful to put
the added CHECK_FOR_INTERRUPTS into ExecHashJoinGetSavedTuple() rather
than directly in the loop, because that will also ensure that both
principal code paths through ExecHashJoinOuterGetTuple() will do a
CHECK_FOR_INTERRUPTS, which seems like a good idea to avoid surprises.
Tom Lane [Wed, 15 Feb 2017 20:23:19 +0000 (15:23 -0500)]
Fix tab completion for "ALTER SYSTEM SET variable ...".
It wouldn't complete "TO" after the variable name, which is certainly
minor enough. But since we do complete "TO" after "SET variable ...",
and since this case used to work pre-9.6, I think this is a bug.
Also, fix the query used to collect the variable names; whoever last
touched it evidently didn't understand how the pieces are supposed
to fit together. It accidentally worked anyway, because readline
ignores irrelevant completions, but it was randomly unlike the ones
around it, and could be a source of actual bugs if someone copied
it as a prototype for another query.
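For reference, the statement form whose completion regressed (variable
and value illustrative):

    ALTER SYSTEM SET work_mem TO '64MB';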
Tom Lane [Wed, 15 Feb 2017 19:43:59 +0000 (14:43 -0500)]
Fix YA unwanted behavioral difference with operator_precedence_warning.
Jeff Janes noted that the error cursor position shown for some errors
would vary when operator_precedence_warning is turned on. We'd prefer
that option to have no undocumented effects, so this isn't desirable.
To fix, make sure that an AEXPR_PAREN node has the same exprLocation
as its child node.
(Note: it would be a little cheaper to use @2 here instead of an
exprLocation call, but there are cases where that wouldn't produce
the identical answer, so don't do it like that.)
Back-patch to 9.5 where this feature was introduced.
Robert Haas [Wed, 15 Feb 2017 18:53:24 +0000 (13:53 -0500)]
Add optimizer and executor support for parallel index scans.
In combination with 569174f1be92be93f5366212cc46960d28a5c5cd, which
taught the btree AM how to perform parallel index scans, this allows
parallel index scan plans on btree indexes. This infrastructure
should be general enough to support parallel index scans for other
index AMs as well, if someone updates them to support parallel
scans.
Amit Kapila, reviewed and tested by Anastasia Lubennikova, Tushar
Ahuja, and Haribabu Kommi, and me.
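A sketch of the resulting plan shape (table and index are hypothetical):

    EXPLAIN (COSTS OFF)
      SELECT * FROM orders WHERE order_date > date '2017-01-01';
    --  Gather
    --    Workers Planned: 2
    --    -> Parallel Index Scan using orders_order_date_idx on orders
    --         Index Cond: (order_date > '2017-01-01'::date)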
Robert Haas [Wed, 15 Feb 2017 18:37:24 +0000 (13:37 -0500)]
Replace min_parallel_relation_size with two new GUCs.
When min_parallel_relation_size was added, the only supported type
of parallel scan was a parallel sequential scan, but there are
pending patches for parallel index scan, parallel index-only scan,
and parallel bitmap heap scan. Those patches introduce two new
types of complications: first, what's relevant is not really the
total size of the relation but the portion of it that we will scan;
and second, index pages and heap pages shouldn't necessarily be
treated in exactly the same way. Typically, the number of index
pages will be quite small, but that doesn't necessarily mean that
a parallel index scan can't pay off.
Therefore, we introduce min_parallel_table_scan_size, which works
out a degree of parallelism for scans based on the number of table
pages that will be scanned (and which is therefore equivalent to
min_parallel_relation_size for parallel sequential scans) and also
min_parallel_index_scan_size which can be used to work out a degree
of parallelism based on the number of index pages that will be
scanned.
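The new settings, shown here at their defaults; either can be lowered
to encourage parallelism on smaller relations:

    SET min_parallel_table_scan_size = '8MB';
    SET min_parallel_index_scan_size = '512kB';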
Robert Haas [Wed, 15 Feb 2017 16:03:41 +0000 (11:03 -0500)]
Document new libpq connection statuses for target_session_attrs.
I didn't realize these would ever be visible to clients, but Michael
figured out that it can happen when using asynchronous interfaces
such as PQconnectPoll.
The core of the functionality was already implemented when
pg_import_system_collations was added. This just exposes it as an
option in the SQL command.
Robert Haas [Wed, 15 Feb 2017 12:41:14 +0000 (07:41 -0500)]
btree: Support parallel index scans.
This isn't exposed to the optimizer or the executor yet; we'll add
support for those things in a separate patch. But this puts the
basic mechanism in place: several processes can attach to a parallel
btree index scan, and each one will get a subset of the tuples that
would have been produced by a non-parallel scan. Each index page
becomes the responsibility of a single worker, which then returns
all of the TIDs on that page.
Rahila Syed, Amit Kapila, Robert Haas, reviewed and tested by
Anastasia Lubennikova, Tushar Ahuja, and Haribabu Kommi.
Robert Haas [Tue, 14 Feb 2017 23:09:47 +0000 (18:09 -0500)]
Allow parallel workers to execute subplans.
This doesn't do anything to make Param nodes anything other than
parallel-restricted, so this only helps with uncorrelated subplans,
and it's not necessarily very cheap because each worker will run the
subplan separately (just as a Hash Join will build a separate copy of
the hash table in each participating process), but it's a first step
toward supporting cases that are more likely to help in practice, and
is occasionally useful on its own.
Amit Kapila, reviewed and tested by Rafia Sabih, Dilip Kumar, and
me.
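A sketch of an uncorrelated subplan this helps (tables hypothetical):

    EXPLAIN (COSTS OFF)
      SELECT count(*) FROM big_table
       WHERE val NOT IN (SELECT val FROM small_table);
    -- Each worker under the Gather node evaluates the hashed SubPlan
    -- against its chunk of big_table.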
Robert Haas [Tue, 14 Feb 2017 20:37:59 +0000 (15:37 -0500)]
Split index xlog headers from other private index headers.
The xlog-specific headers need to be included in both frontend code -
specifically, pg_waldump - and the backend, but the remainder of the
private headers for each index are only needed by the backend. By
splitting the xlog stuff out into separate headers, pg_waldump pulls
in fewer backend headers, which is a good thing.
Patch by me, reviewed by Michael Paquier and Andres Freund, per a
complaint from Dilip Kumar.
Tom Lane [Tue, 14 Feb 2017 16:47:12 +0000 (11:47 -0500)]
Remove duplicate code in planner.c.
I noticed while hacking on join UNION transforms that planner.c's
function get_base_rel_indexes() just duplicates the functionality of
get_relids_in_jointree(). It doesn't even have the excuse of being
older code :-(. Drop it and use the latter function instead.
Fujii Masao [Mon, 13 Feb 2017 17:30:46 +0000 (02:30 +0900)]
Replace references to "xlog" with "wal" in docs.
Commit f82ec32ac30ae7e3ec7c84067192535b2ff8ec0e renamed the pg_xlog
directory to pg_wal. To make things consistent, we decided to eliminate
"xlog" from user-visible docs.
Robert Haas [Mon, 13 Feb 2017 16:02:23 +0000 (11:02 -0500)]
Remove contrib/tsearch2.
This module was intended to ease migrations of applications that used
the pre-8.3 version of text search to the in-core version introduced
in that release. However, since all pre-8.3 releases of the database
have been out of support for more than 5 years at this point, we
expect that few people are depending on it anymore. If some
people still need it, nothing prevents it from being maintained as a
separate extension, outside of core.
Noah Misch [Sun, 12 Feb 2017 21:03:41 +0000 (16:03 -0500)]
Ignore tablespace ACLs when ignoring schema ACLs.
The ALTER TABLE ALTER TYPE implementation can issue DROP INDEX and
CREATE INDEX to refit existing indexes for the new column type. Since
this CREATE INDEX is an implementation detail of an index alteration,
the ensuing DefineIndex() should skip ACL checks specific to index
creation. It already skips the namespace ACL check. Make it skip the
tablespace ACL check, too. Back-patch to 9.2 (all supported versions).
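For reference, the operation affected (names illustrative); the implicit
index rebuild no longer requires CREATE privilege on the index's
tablespace:

    ALTER TABLE t ALTER COLUMN c TYPE bigint;  -- refits indexes on t.c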
Peter Eisentraut [Fri, 10 Feb 2017 20:12:32 +0000 (15:12 -0500)]
Add CREATE SEQUENCE AS <data type> clause
This stores a data type, required to be an integer type, with the
sequence. The sequence's min and max values default to the range
supported by the type, and they cannot be set to values exceeding that
range. The internal implementation of the sequence is not affected.
Change the serial types to create sequences of the appropriate type.
This makes sure that the min and max values of the sequence for a serial
column match the range of values supported by the table column. So the
sequence can no longer overflow the table column.
This also makes monitoring for sequence exhaustion/wraparound easier,
which currently requires various contortions to cross-reference the
sequences with the table columns they are used with.
This commit also effectively reverts the pg_sequence column reordering
in f3b421da5f4addc95812b9db05a24972b8fd9739, because the new seqtypid
column allows us to fill the hole in the struct and create a more
natural overall column ordering.
Reviewed-by: Steve Singer <steve@ssinger.info>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
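For example (illustrative):

    CREATE SEQUENCE pages_seq AS smallint;  -- maxvalue defaults to 32767
    SELECT seqtypid::regtype, seqmax
      FROM pg_sequence
     WHERE seqrelid = 'pages_seq'::regclass;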
Simon Riggs [Fri, 10 Feb 2017 10:03:28 +0000 (10:03 +0000)]
Update ddl.sgml for declarative partitioning syntax
Add a section titled "Partitioned Tables" describing what partitioned
tables and partitions are, and how they relate to inheritance.
The existing section on inheritance is retained for clarity.
Then add examples to the partitioning chapter that show the syntax for
partitioned tables; they implement the same partitioning scheme that
is currently shown using inheritance.
Amit Langote, with additional details and explanatory text by me
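A sketch of the syntax those examples demonstrate:

    CREATE TABLE measurement (
        logdate  date NOT NULL,
        peaktemp int
    ) PARTITION BY RANGE (logdate);

    CREATE TABLE measurement_y2017 PARTITION OF measurement
        FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');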
Tom Lane [Thu, 9 Feb 2017 20:49:57 +0000 (15:49 -0500)]
Blind try to fix portability issue in commit 8f93bd851 et al.
The S/390 members of the buildfarm are showing failures indicating
that they're having trouble with the rint() calls I added yesterday.
There's no good reason for that, and I wonder if it is a compiler bug
similar to the one we worked around in d9476b838. Try to fix it using
the same method as before, namely to store the result of rint() back
into a "double" variable rather than immediately converting to int64.
(This isn't entirely waving a dead chicken, since on machines with
wider-than-double float registers, the extra store forces a width
conversion. I don't know if S/390 is like that, but it seems worth
trying.)
In passing, merge duplicate ereport() calls in float8_timestamptz().
Robert Haas [Thu, 9 Feb 2017 20:10:09 +0000 (15:10 -0500)]
Remove all references to "xlog" from SQL-callable functions in pg_proc.
Commit f82ec32ac30ae7e3ec7c84067192535b2ff8ec0e renamed the pg_xlog
directory to pg_wal. To make things consistent, and because "xlog" is
terrible terminology for either "transaction log" or "write-ahead log",
rename all SQL-callable functions that contain "xlog" in the name to
instead contain "wal". (Note that this may pose an upgrade hazard for
some users.)
Similarly, rename the xlog_position argument of the functions that
create slots to be called wal_position.
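For example (names as of this commit; some were adjusted again before
the final release):

    SELECT pg_switch_wal();  -- formerly pg_switch_xlog()
    SELECT pg_walfile_name(pg_current_wal_location());
    -- formerly pg_xlogfile_name(pg_current_xlog_location())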
Robert Haas [Thu, 9 Feb 2017 19:59:57 +0000 (14:59 -0500)]
simplehash: Additional tweaks to make specifying an allocator work.
Even if we don't emit definitions for SH_ALLOCATE and SH_FREE, we
still need prototypes. The user can't define them before including
simplehash.h because SH_TYPE isn't available yet.
For the allocator to be able to access private_data, it needs to
become an argument to SH_CREATE. Previously we relied on callers
to set that after returning from SH_CREATE, but SH_CREATE calls
SH_ALLOCATE before returning.
Robert Haas [Thu, 9 Feb 2017 19:02:58 +0000 (14:02 -0500)]
pageinspect: Fix hash_bitmap_info not to read the underlying page.
It did that to verify that the page was an overflow page rather than
anything else, but that means that checking the status of all the
overflow bits requires reading the entire index. So don't do that.
The new code validates that the page is not a primary bucket page
or bitmap page by looking at the metapage, so that using this on
large numbers of pages can be reasonably efficient.
Ashutosh Sharma, per a complaint from me, and with further
modifications by me.
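Usage is unchanged (index name and block number here are hypothetical),
but the validation now consults only the metapage:

    SELECT * FROM hash_bitmap_info('con_hash_index', 2050);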
Tom Lane [Thu, 9 Feb 2017 16:52:12 +0000 (11:52 -0500)]
Allow index AMs to cache data across aminsert calls within a SQL command.
It's always been possible for index AMs to cache data across successive
amgettuple calls within a single SQL command: the IndexScanDesc.opaque
field is meant for precisely that. However, no comparable facility
exists for amortizing setup work across successive aminsert calls.
This patch adds such a feature and teaches GIN, GIST, and BRIN to use it
to amortize catalog lookups they'd previously been doing on every call.
(The other standard index AMs keep everything they need in the relcache,
so there's little to improve there.)
For GIN, the overall improvement in a statement that inserts many rows
can be as much as 10%, though it seems a bit less for the other two.
In addition, this makes a really significant difference in runtime
for CLOBBER_CACHE_ALWAYS tests, since in those builds the repeated
catalog lookups are vastly more expensive.
The reason this has been hard up to now is that the aminsert function is
not passed any useful place to cache per-statement data. What I chose to
do is to add suitable fields to struct IndexInfo and pass that to aminsert.
That's not widening the index AM API very much because IndexInfo is already
within the ken of ambuild; in fact, by passing the same info to aminsert
as to ambuild, this is really removing an inconsistency in the AM API.
Andres Freund [Thu, 9 Feb 2017 00:58:21 +0000 (16:58 -0800)]
Add explicit ORDER BY to a few tests that exercise hash-join code.
A proposed patch, also by Thomas and in the same thread, would change
the output order of these. Independent of the follow-up patches
getting committed, nailing down the order in these specific tests at
worst seems harmless.
Author: Thomas Munro
Discussion: https://postgr.es/m/CAEepm=1D4-tP7j7UAgT_j4ZX2j4Ehe1qgZQWFKBMb8F76UW5Rg@mail.gmail.com
Tom Lane [Wed, 8 Feb 2017 23:04:59 +0000 (18:04 -0500)]
Fix roundoff problems in float8_timestamptz() and make_interval().
When converting a float value to integer microseconds, we should be careful
to round the value to the nearest integer, typically with rint(); simply
assigning to an int64 variable will truncate, causing apparently off-by-one
values in cases that should work. Most places in the datetime code got
this right, but not these two.
float8_timestamptz() is new as of commit e511d878f (9.6). Previous
versions effectively depended on interval_mul() to do roundoff correctly,
which it does, so this fixes an accuracy regression in 9.6.
The problem in make_interval() dates to its introduction in 9.4. Aside
from being careful to round not truncate, let's incorporate the hours and
minutes inputs into the result with exact integer arithmetic, rather than
risk introducing roundoff error where there need not have been any.
float8_timestamptz() problem reported by Erik Nordström, though this is
not his proposed patch. make_interval() problem found by me.
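Examples of the affected conversions (values illustrative); fractional
seconds are now rounded to the nearest microsecond rather than
truncated:

    SELECT to_timestamp(946684800.0000006::float8);  -- float8_timestamptz()
    SELECT make_interval(hours => 1, mins => 30, secs => 0.0000006);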
Robert Haas [Wed, 8 Feb 2017 20:45:30 +0000 (15:45 -0500)]
Add WAL consistency checking facility.
When the new GUC wal_consistency_checking is set to a non-empty value,
it triggers recording of additional full-page images, which are
compared on the standby against the results of applying the WAL record
(without regard to those full-page images). Allowable differences
such as hints are masked out, and the resulting pages are compared;
any difference results in a FATAL error on the standby.
Kuntal Ghosh, based on earlier patches by Michael Paquier and Heikki
Linnakangas. Extensively reviewed and revised by Michael Paquier and
by me, with additional reviews and comments from Amit Kapila, Álvaro
Herrera, Simon Riggs, and Peter Eisentraut.
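To enable it (the value is illustrative; 'all' covers every resource
manager):

    ALTER SYSTEM SET wal_consistency_checking = 'all';
    SELECT pg_reload_conf();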
doc: Some improvements in CREATE SUBSCRIPTION ref page
Add link to description of libpq connection strings. Add link to
explanation of replication access control. The latter currently points
to the description of streaming replication access control, which is
for now the same as for logical replication, but that might be refined
later.
Also remove plain-text passwords from the examples, to not encourage
that dubious practice.
Tom Lane [Tue, 7 Feb 2017 21:34:11 +0000 (16:34 -0500)]
Speed up "brin" regression test a little bit.
In the large DO block, collect row TIDs into array variables instead of
creating and dropping a pile of temporary tables. In a normal build,
this reduces the brin test script's runtime from about 1.1 sec to 0.4 sec
on my workstation. That's not all that exciting perhaps, but in a
CLOBBER_CACHE_ALWAYS test build, the runtime drops from 20 min to 17 min,
which is a little more useful. In combination with some other changes
I plan to propose, this will help provide a noticeable reduction in
cycle time for CLOBBER_CACHE_ALWAYS buildfarm critters.
Robert Haas [Tue, 7 Feb 2017 17:24:25 +0000 (12:24 -0500)]
Cache hash index's metapage in rel->rd_amcache.
This avoids a very significant amount of buffer manager traffic and
contention when scanning hash indexes, because it's no longer
necessary to lock and pin the metapage for every scan. We do need
some way of figuring out when the cache is too stale to use any more,
so that when we lock the primary bucket page to which the cached
metapage points us, we can tell whether a split has occurred since we
cached the metapage data. To do that, we use the hash_prevblkno field
in the primary bucket page, which would otherwise always be set to
InvalidBlockNumber.
This patch contains code so that it will continue working (although
less efficiently) with hash indexes built before this change, but
perhaps we should consider bumping the hash version and ripping out
the compatibility code. That decision can be made later, though.
Mithun Cy, reviewed by Jesper Pedersen, Amit Kapila, and by me.
Before committing, I made a number of cosmetic changes to the last
posted version of the patch, adjusted _hash_getcachedmetap to be more
careful about order of operation, and made some necessary updates to
the pageinspect documentation and regression tests.
Tom Lane [Tue, 7 Feb 2017 15:24:24 +0000 (10:24 -0500)]
Correct thinko in last-minute release note item.
The CREATE INDEX CONCURRENTLY bug can only be triggered by row updates,
not inserts, since the problem would arise from an update incorrectly
being made HOT. Noted by Alvaro.
Document the privileges required for each of the sequence functions.
This was already in the GRANT reference page, but also add it to the
function description for easier reference.
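In brief (grants illustrative):

    GRANT USAGE  ON SEQUENCE s TO reader;  -- enough for currval() and nextval()
    GRANT UPDATE ON SEQUENCE s TO writer;  -- needed for setval(); also allows nextval()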
Avoid permission failure in pg_sequences.last_value
Before, reading pg_sequences.last_value would fail unless the user had
appropriate sequence permissions, which would make the pg_sequences view
cumbersome to use. Instead, return null in place of the real value when
the user lacks the necessary permissions.
From: Michael Paquier <michael.paquier@gmail.com>
Reported-by: Shinoda, Noriyoshi <noriyoshi.shinoda@hpe.com>
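So a plain query over the view now succeeds for any user (illustrative):

    SELECT schemaname, sequencename, last_value
      FROM pg_sequences;  -- last_value is null where permissions are lacking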
Tom Lane [Mon, 6 Feb 2017 18:19:50 +0000 (13:19 -0500)]
Avoid returning stale attribute bitmaps in RelationGetIndexAttrBitmap().
The problem with the original coding here is that we might receive (and
clear) a relcache invalidation signal for the target relation down inside
one of the index_open calls we're doing. Since the target is open, we
would not drop the relcache entry, just reset its rd_indexvalid and
rd_indexlist fields. But RelationGetIndexAttrBitmap() kept going, and
would eventually cache and return potentially-obsolete attribute bitmaps.
The case where this matters is where the inval signal was from a CREATE
INDEX CONCURRENTLY telling us about a new index on a formerly-unindexed
column. (In all other cases, the lock we hold on the target rel should
prevent any concurrent change in index state.) Even just returning the
stale attribute bitmap is not such a problem, because it shouldn't matter
during the transaction in which we receive the signal. What hurts is
caching the stale data, because it can survive into later transactions,
breaking CREATE INDEX CONCURRENTLY's expectation that later transactions
will not create new broken HOT chains. The upshot is that there's a window
for building corrupted indexes during CREATE INDEX CONCURRENTLY.
This patch fixes the problem by rechecking that the set of index OIDs
is still the same at the end of RelationGetIndexAttrBitmap() as it was
at the start. If not, we loop back and try again. That's a little
more than is strictly necessary to fix the bug --- in principle, we
could return the stale data but not cache it --- but it seems like a
bad idea on general principles for relcache to return data it knows
is stale.
There might be more hazards of the same ilk, or there might be a better
way to fix this one, but this patch definitely improves matters and seems
unlikely to make anything worse. So let's push it into today's releases
even as we continue to study the problem.
The example of using CREATE DATABASE with the ENCODING option no longer
worked (except in special circumstances) and was not a good
general-purpose example anyway, so write some new examples.
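The replacement examples are along these lines (a sketch; see the
reference page for the exact text):

    CREATE DATABASE music
        ENCODING 'LATIN1'
        TEMPLATE template0;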