Tom Lane [Fri, 29 Sep 2017 20:26:21 +0000 (16:26 -0400)]
Fix inadequate locking during get_rel_oids().
get_rel_oids used to not take any relation locks at all, but that stopped
being a good idea with commit 3c3bb9933, which inserted a syscache lookup
into the function. A concurrent DROP TABLE could now produce "cache lookup
failed", which we don't want to have happen in normal operation. The best
solution seems to be to transiently take a lock on the relation named by
the RangeVar (which also makes the result of RangeVarGetRelid a lot less
spongy). But we shouldn't hold the lock beyond this function, because we
don't want VACUUM to lock more than one table at a time. (That would not
be a big problem right now, but it will become one after the pending
feature patch to allow multiple tables to be named in VACUUM.)
In passing, adjust vacuum_rel and analyze_rel to document that we don't
trust the passed RangeVar to be accurate, and allow the RangeVar to
possibly be NULL --- which it is anyway for a whole-database VACUUM,
though we accidentally didn't crash for that case.
The passed RangeVar is in fact inaccurate when dealing with a child
partition, as of v10, and it has been wrong for a whole long time in the
case of vacuum_rel() recursing to a TOAST table. None of these things
present visible bugs up to now, because the passed RangeVar is in fact
only consulted for autovacuum logging, and in that particular context it's
always accurate because autovacuum doesn't let vacuum.c expand partitions
nor recurse to toast tables. Still, this seems like trouble waiting to
happen, so let's nail the door at least partly shut. (Further cleanup
is planned, in HEAD only, as part of the pending feature patch.)
Fix some sadly inaccurate/obsolete comments too. Back-patch to v10.
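For illustration, a sketch of the transient-lock pattern described above (assumed context, not the committed code; "vacrel" stands for the RangeVar passed in):

    Oid     relid;

    /* Taking a lock here makes the syscache lookup safe against DROP. */
    relid = RangeVarGetRelid(vacrel, AccessShareLock, false);

    /* ... do the syscache-based checks on relid ... */

    /* Don't hold the lock beyond this function. */
    UnlockRelationOid(relid, AccessShareLock);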
Robert Haas [Fri, 29 Sep 2017 19:59:11 +0000 (15:59 -0400)]
psql: Don't try to print a partition constraint we didn't fetch.
If \d rather than \d+ is used, then verbose is false and we don't ask
the server for the partition constraint; so we shouldn't print it in
that case either.
Maksim Milyutin, per a report from Jesper Pedersen. Reviewed by
Jesper Pedersen and Amit Langote.
Peter Eisentraut [Mon, 25 Sep 2017 15:59:46 +0000 (11:59 -0400)]
psql: Update \d sequence display
For \d sequencename, the psql code just did SELECT * FROM sequencename
to get the information to display, but this does not contain much
interesting information anymore in PostgreSQL 10, because the metadata
has been moved to a separate system catalog.
This patch creates a newly designed sequence display that is not merely
an extension of the general relation/table display as it was previously.
Tom Lane [Fri, 29 Sep 2017 15:32:05 +0000 (11:32 -0400)]
Marginal improvement for generated code in execExprInterp.c.
Avoid the coding pattern "*op->resvalue = f();", as some compilers think
that requires them to evaluate "op->resvalue" before the function call.
Unless there are lots of free registers, this can lead to a useless
register spill and reload across the call.
I changed all the cases like this in ExecInterpExpr(), but didn't bother
in the out-of-line opcode eval subroutines, since those are presumably
not as performance-critical.
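To make the pattern concrete, a minimal sketch ("f" is the placeholder from the quotation above):

    /* problematic: some compilers evaluate op->resvalue before the
     * call, forcing a register spill and reload across it */
    *op->resvalue = f();

    /* preferred: call first, then store */
    Datum   d = f();

    *op->resvalue = d;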
Peter Eisentraut [Thu, 31 Aug 2017 16:24:47 +0000 (12:24 -0400)]
Add background worker type
Add bgw_type field to background worker structure. It is intended to be
set to the same value for all workers of the same type, so they can be
grouped in pg_stat_activity, for example.
The backend_type column in pg_stat_activity now shows bgw_type for a
background worker. The ps listing also no longer calls out that a
process is a background worker but just shows the bgw_type. That way,
being a background worker becomes an implementation detail that
is not shown to the user. However, most log messages still refer to
'background worker "%s"'; otherwise constructing sensible and
translatable log messages would become tricky.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
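For illustration, a minimal sketch of filling in the new field when registering a worker (the name and type strings here are invented, and the other required fields are only hinted at):

    BackgroundWorker worker;

    memset(&worker, 0, sizeof(worker));
    /* bgw_name stays per-process; bgw_type is shared by the whole class */
    snprintf(worker.bgw_name, BGW_MAXLEN, "example worker #1");
    snprintf(worker.bgw_type, BGW_MAXLEN, "example worker");
    /* set bgw_flags, bgw_start_time, bgw_library_name, etc. as usual */
    RegisterBackgroundWorker(&worker);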
Robert Haas [Fri, 29 Sep 2017 14:20:44 +0000 (10:20 -0400)]
Remove replacement selection sort.
At the time replacement_sort_tuples was introduced, there were still
cases where replacement selection sort noticeably outperformed using
quicksort even for the first run. However, those cases seem to have
evaporated as a result of further improvements made since that time
(and perhaps also advances in CPU technology). So remove replacement
selection and the controlling GUC entirely. This makes tuplesort.c
noticeably simpler and probably paves the way for further
optimizations someone might want to do later.
Peter Geoghegan, with review and testing by Tomas Vondra and me.
Peter Eisentraut [Fri, 11 Aug 2017 03:33:47 +0000 (23:33 -0400)]
Add lcov --initial
By just running lcov on the produced .gcda data files, we don't account
for source files that are not touched by tests at all. To fix that, run
lcov --initial to create a base line info file with all zero counters,
and merge that with the actual counters when creating the final report.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Peter Eisentraut [Thu, 28 Sep 2017 20:17:28 +0000 (16:17 -0400)]
Remove SGML marked sections
For XML compatibility, replace marked sections <![IGNORE[ ]]> with
comments <!-- -->. In some cases it seemed better to remove the ignored
text altogether, and in one case the text should not have been ignored.
Vacuum calls page-level HOT prune to remove dead HOT tuples before doing
liveness checks (HeapTupleSatisfiesVacuum) on the remaining tuples. But
concurrent transaction commit/abort may turn DEAD some of the HOT tuples
that survived the prune, before HeapTupleSatisfiesVacuum tests them.
This happens to activate the code that decides to freeze the tuple ...
which resuscitates it, duplicating data.
(This is especially bad if there are any unique constraints, because those
are now internally violated due to the duplicate entries, though you
won't know until you try to REINDEX or dump/restore the table.)
One possible fix would be to simply skip doing anything to the tuple,
and hope that the next HOT prune would remove it. But there is a
problem: if the tuple is older than freeze horizon, this would leave an
unfrozen XID behind, and if no HOT prune happens to clean it up before
the containing pg_clog segment is truncated away, it'd later cause an
error when the XID is looked up.
Fix the problem by having the tuple freezing routines cope with the
situation: don't freeze the tuple (and keep it dead). In the cases that
the XID is older than the freeze age, set the HEAP_XMAX_COMMITTED flag
so that there is no need to look up the XID in pg_clog later on.
An isolation test is included, authored by Michael Paquier, loosely
based on Daniel Wood's original reproducer. It only tests one
particular scenario, though, not all the possible ways for this problem
to surface; it'd be good to have a more reliable way to test this more
fully, but it'd require more work.
In message https://postgr.es/m/20170911140103.5akxptyrwgpc25bw@alvherre.pgsql
I outlined another test case (more closely matching Dan Wood's) that
exposed a few more ways for the problem to occur.
Backpatch all the way back to 9.3, where this problem was introduced by
multixact juggling. In branches 9.3 and 9.4, this includes a backpatch
of commit e5ff9fefcd50 (of 9.5 era), since the original is not
correctable without matching the coding pattern in 9.5 up.
Reported-by: Daniel Wood
Diagnosed-by: Daniel Wood
Reviewed-by: Yi Wen Wong, Michaël Paquier
Discussion: https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com
Peter Eisentraut [Fri, 11 Aug 2017 03:33:47 +0000 (23:33 -0400)]
Run only top-level recursive lcov
This is the way lcov was intended to be used. It is much faster and
more robust and makes the makefiles simpler than running it in each
subdirectory.
The previous coding ran gcov before lcov, but that is useless because
lcov/geninfo call gcov internally and use that information. Moreover,
this led to complications and failures during parallel make. This
separates the two targets: You either use "make coverage" to get
textual output from gcov or "make coverage-html" to get an HTML report
via lcov. (Using both is still problematic because they write the same
output files.)
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Tom Lane [Wed, 27 Sep 2017 21:05:53 +0000 (17:05 -0400)]
Fix behavior when converting a float infinity to numeric.
float8_numeric() and float4_numeric() failed to consider the possibility
that the input is an IEEE infinity. The results depended on the
platform-specific behavior of sprintf(): on most platforms you'd get
something like
ERROR: invalid input syntax for type numeric: "inf"
but at least on Windows it's possible for the conversion to succeed and
deliver a finite value (typically 1), due to a nonstandard output format
from sprintf and lack of syntax error checking in these functions.
Since our numeric type lacks the concept of infinity, a suitable conversion
is impossible; the best thing to do is throw an explicit error before
letting sprintf do its thing.
While at it, let's use snprintf not sprintf. Overrunning the buffer
should be impossible if sprintf does what it's supposed to, but this
is cheap insurance against a stack smash if it doesn't.
Problem reported by Taiki Kondo. Patch by me based on fix suggestion
from KaiGai Kohei. Back-patch to all supported branches.
Tom Lane [Wed, 27 Sep 2017 20:14:37 +0000 (16:14 -0400)]
Revert to 9.6 treatment of ALTER TYPE enumtype ADD VALUE.
This reverts commit 15bc038f9, along with the followon commits 1635e80d3
and 984c92074 that tried to clean up the problems exposed by bug #14825.
The result was incomplete because it failed to address parallel-query
requirements. With the 10.0 release so close upon us, now does not seem like
the time to be adding more code to fix that. I hope we can un-revert this
code and add the missing parallel query support during the v11 cycle.
Peter Eisentraut [Wed, 27 Sep 2017 19:51:04 +0000 (15:51 -0400)]
Fix plperl build
The changes in 639928c988c1c2f52bbe7ca89e8c7c78a041b3e2 turned out to
require Perl 5.9.3, which is newer than our minimum required version.
So revert back to the old code for the normal case and only use the new
variant when both coverage and vpath are used. As the minimum Perl
version moves forward, we can drop the old code sometime.
Dean Rasheed [Wed, 27 Sep 2017 16:16:15 +0000 (17:16 +0100)]
Improve the CREATE POLICY documentation.
Provide a correct description of how multiple policies are combined,
clarify when SELECT permissions are required, mention SELECT FOR
UPDATE/SHARE, and do some other more minor tidying up.
Peter Eisentraut [Fri, 11 Aug 2017 03:33:47 +0000 (23:33 -0400)]
Improve vpath support in plperl build
Run xsubpp with the -output option instead of redirecting stdout. That
ensures that the #line directives in the output file point to the right
place in a vpath build. This in turn fixes an error in coverage builds
where the source files could not be found.
Refactor the makefile rules while we're here.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Peter Eisentraut [Fri, 15 Sep 2017 14:17:37 +0000 (10:17 -0400)]
Get rid of parameterized marked sections in SGML
Previously, we created a variant of the installation instructions for
producing the plain-text INSTALL file by marking up certain parts of
installation.sgml using SGML parameterized marked sections. Marked
sections will not work anymore in XML, so before we can convert the
documentation to XML, we need a new approach.
DocBook provides a "profiling" feature that allows selecting content
based on attributes, which would work here. But it imposes a noticeable
overhead when building the full documentation and causes complications
when building some output formats, and given that we recently spent a
fair amount of effort optimizing the documentation build time, it seems
sad to have to accept that.
So as an alternative, (1) we create our own mini-profiling layer that
adjusts just the text we want, and (2) assemble the pieces of content
that we want in the INSTALL file using XInclude. That way, there is no
overhead when building the full documentation and most of the "ugly"
stuff in installation.sgml can be removed and dealt with out of line.
Peter Eisentraut [Tue, 26 Sep 2017 20:07:52 +0000 (16:07 -0400)]
pg_basebackup: Add option to create replication slot
When requesting a particular replication slot, the new pg_basebackup
option -C/--create-slot creates it before starting to replicate from it.
Further refactor the slot creation logic to include the temporary slot
creation logic into the same function. Add new arguments is_temporary
and preserve_wal to CreateReplicationSlot(). Print in --verbose mode
that a slot has been created.
It drops objects outside information_schema that depend on objects
inside information_schema. For example, it will drop a user-defined
view if the view query refers to information_schema.
Peter Eisentraut [Tue, 26 Sep 2017 20:41:20 +0000 (16:41 -0400)]
Add some more pg_receivewal tests
Add some more tests for the --create-slot and --drop-slot options,
verifying that the right kind of slot was created and that the slot was
dropped. While working on an unrelated patch for pg_basebackup, some of
this was temporarily broken without any tests noticing.
posix_fallocate() is not quite a drop-in replacement for fallocate(),
because it is defined to return the error code as its function result,
not in "errno". I (tgl) missed this because RHEL6's version seems
to set errno as well. That is not the case on more modern Linuxen,
though, as per buildfarm results.
Aside from fixing the return-convention confusion, remove the test
for ENOSYS; we expect that glibc will mask that for posix_fallocate,
though it does not for fallocate. Keep the test for EINTR, because
POSIX specifies that as a possible result, and buildfarm results
suggest that it can happen in practice.
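A minimal sketch of the corrected convention ("fd" and "size" are assumed from the surrounding code):

    int     rc;

    do
    {
        rc = posix_fallocate(fd, 0, size);
    } while (rc == EINTR);          /* POSIX allows EINTR, and it happens */

    if (rc != 0)
    {
        errno = rc;     /* error comes back as the result, not in errno */
        return false;
    }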
Tom Lane [Tue, 26 Sep 2017 17:12:13 +0000 (13:12 -0400)]
Remove heuristic same-transaction test from check_safe_enum_use().
The blacklist mechanism added by the preceding commit directly fixes
most of the practical cases that the same-transaction test was meant
to cover. What remains is use-cases like
begin;
create type e as enum('x');
alter type e add value 'y';
-- use 'y' somehow
commit;
However, because the same-transaction test is heuristic, it fails on
small variants of that, such as renaming the type or changing its
owner. Rather than try to explain the behavior to users, let's
remove it and just have a rule that the newly added value can't be
used before being committed, full stop. Perhaps later it will be
worth the implementation effort and overhead to have a more accurate
test for type-was-created-in-this-transaction. We'll wait for some
field experience with v10 before deciding to do that.
Tom Lane [Tue, 26 Sep 2017 17:12:03 +0000 (13:12 -0400)]
Use a blacklist to distinguish original from add-on enum values.
Commit 15bc038f9 allowed ALTER TYPE ADD VALUE to be executed inside
transaction blocks, by disallowing the use of the added value later
in the same transaction, except under limited circumstances. However,
the test for "limited circumstances" was heuristic and could reject
references to enum values that were created during CREATE TYPE AS ENUM,
not just later. This breaks the use-case of restoring pg_dump scripts
in a single transaction, as reported in bug #14825 from Balazs Szilfai.
We can improve this by keeping a "blacklist" table of enum value OIDs
created by ALTER TYPE ADD VALUE during the current transaction. Any
visible-but-uncommitted value whose OID is not in the blacklist must
have been created by CREATE TYPE AS ENUM, and can be used safely
because it could not have a lifespan shorter than its parent enum type.
This change also removes the restriction that a renamed enum value
can't be used before being committed (unless it was on the blacklist).
Andrew Dunstan, with cosmetic improvements by me.
Back-patch to v10.
Peter Eisentraut [Tue, 26 Sep 2017 15:58:22 +0000 (11:58 -0400)]
Sort pg_basebackup options better
The --slot option somehow ended up under options controlling the output,
and some other options were in a nonsensical place or were not moved
after recent renamings, so tidy all that up a bit.
Peter Eisentraut [Tue, 26 Sep 2017 14:03:56 +0000 (10:03 -0400)]
Handle heap rewrites better in logical replication
A FOR ALL TABLES publication naturally considers all base tables to be
candidates for replication. This includes transient heaps that are
created during a table rewrite during DDL. This causes failures on the
subscriber side because it will not have a table like pg_temp_16386 to
receive data (and if it did, it would be the wrong table).
To prevent this problem, we filter out any tables that match this
naming pattern and match an actual table from FOR ALL TABLES
publications. This is only a heuristic, meaning that user tables that
match that naming could accidentally be omitted. A more robust solution
might require an explicit marking of such tables in pg_class somehow.
Reported-by: yxq <yxq@o2.pl>
Bug: #14785
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Robert Haas [Tue, 26 Sep 2017 13:16:45 +0000 (09:16 -0400)]
Remove lsn from HashScanPosData.
This was intended as infrastructure for weakening VACUUM's locking
requirements, similar to what was done for btree indexes in commit 2ed5b87f96d473962ec5230fd820abfeaccb2069. However, for hash indexes,
it seems that the improvements which are possible are actually
extremely marginal. Furthermore, performing the LSN cross-check will
end up skipping cleanup far more often than is necessary; we only care
about page modifications due to a VACUUM, but the LSN check will fail
if ANY modification has occurred. So, rather than pressing forward
with that "optimization", just rip the LSN field out.
Patch by me, reviewed by Ashutosh Sharma and Amit Kapila
Tom Lane [Mon, 25 Sep 2017 20:09:19 +0000 (16:09 -0400)]
Avoid SIGBUS on Linux when a DSM memory request overruns tmpfs.
On Linux, shared memory segments created with shm_open() are backed by
swap files created in tmpfs. If the swap file needs to be extended,
but there's no tmpfs space left, you get a very unfriendly SIGBUS trap.
To avoid this, force allocation of the full request size when we create
the segment. This adds a few cycles, but none that we wouldn't expend
later anyway, assuming the request isn't hugely bigger than the actual
need.
Make this code #ifdef __linux__, because (a) there's not currently a
reason to think the same problem exists on other platforms, and (b)
applying posix_fallocate() to an FD created by shm_open() isn't very
portable anyway.
Tom Lane [Mon, 25 Sep 2017 15:55:24 +0000 (11:55 -0400)]
Make construct_[md_]array return a valid empty array for zero-size input.
If construct_array() or construct_md_array() were given a dimension of
zero, they'd produce an array that contains no elements but has positive
dimension. This violates a general expectation that empty arrays should
have ndims = 0; in particular, while arrays like this print as empty,
they don't compare equal to other empty arrays.
Up to now we've expected callers to avoid making such calls and instead
be careful to call construct_empty_array() if there would be no elements.
But this has always been an easily missed case, and we've repeatedly had to
fix callers to do it right. In bug #14826, Erwin Brandstetter pointed out
yet another such oversight, in ts_lexize(); and a bit of examination of
other call sites found at least two more with similar issues. So let's
fix the problem centrally and permanently by changing these two functions
to construct a proper zero-D empty array whenever the array would be empty.
This renders a few explicit calls of construct_empty_array() redundant,
but the only such place I found that really seemed worth changing was in
ExecEvalArrayExpr().
Although this fixes some very old bugs, no back-patch: the problem is
pretty minor and the risk of changing behavior seems to outweigh the
benefit in stable branches.
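The shape of the centralized fix, as a hedged sketch (variable names assumed from the functions' parameters):

    /* at the top of construct_md_array(), before building the result: */
    if (ndims == 0 || nelems == 0)
        return construct_empty_array(elmtype);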
Tom Lane [Sun, 24 Sep 2017 16:05:06 +0000 (12:05 -0400)]
Fix assorted infelicities in new SetWALSegSize() function.
* Failure to check for malloc failure (ok, pretty unlikely here, but
that's not an excuse).
* Leakage of open fd on read error, and of malloc'd buffer always.
* Incorrect assumption that a short read would set errno to zero.
* Failure to adhere to message style conventions (in particular,
not reporting errno where relevant; using "couldn't open" rather than
"could not open" is not really in line with project style either).
* Missing newlines on some messages.
Coverity spotted the leak problems; I noticed the rest while
fixing the leaks.
Peter Eisentraut [Sun, 24 Sep 2017 04:56:31 +0000 (00:56 -0400)]
Allow ICU to use SortSupport on Windows with UTF-8
There is no reason to ever prevent the use of SortSupport on Windows
when ICU locales are used. We previously avoided SortSupport on Windows
with UTF-8 server encoding and a non-C locale due to restrictions in
Windows' libc functionality.
This is now considered to be a restriction in one platform's libc
collation provider, and not a more general platform restriction.
Peter Eisentraut [Sun, 24 Sep 2017 04:29:59 +0000 (00:29 -0400)]
doc: Expand user documentation on SCRAM
Explain more about how the different password authentication methods and
the password_encryption settings relate to each other, give some
upgrading advice, and set a better link from the release notes.
Peter Eisentraut [Sun, 24 Sep 2017 02:59:26 +0000 (22:59 -0400)]
Fix pg_basebackup test to original intent
One test case was meant to check that pg_basebackup does not succeed
when a slot is specified with -S but WAL streaming is not selected,
which used to require specifying -X stream. Since -X stream is the
default in PostgreSQL 10, this test case no longer covers that meaning,
but the pg_basebackup invocation happened to fail anyway for the
unrelated reason that the specified replication slot does not exist. To
fix, move the test case to later in the file where the slot does exist,
and add -X none to the invocation so that it covers the originally meant
behavior.
extracted from a patch by Michael Banck <michael.banck@credativ.de>
Tom Lane [Sat, 23 Sep 2017 17:28:16 +0000 (13:28 -0400)]
Improve memory management in autovacuum.c.
Invoke vacuum(), as well as "work item" processing, in the PortalContext
that do_autovacuum() has manufactured, which will be reset before each
such invocation. This ensures cleanup of any memory leaked by these
operations. It also avoids the rather dangerous practice of calling
vacuum() in a context that vacuum() itself will destroy while it runs.
There's no known live bug there, but it's not hard to imagine introducing
one if we leave it like this.
Tom Lane, reviewed by Michael Paquier and Alvaro Herrera
Tom Lane [Sat, 23 Sep 2017 17:02:30 +0000 (13:02 -0400)]
Remove pgbench "progress" test pending solution of its timing issues.
Buildfarm member skink shows that this is even more flaky than
I thought. There are probably some actual pgbench bugs here
as well as a timing dependency. But we can't have stuff this
unstable in the buildfarm; it obscures other issues.
Peter Eisentraut [Sat, 23 Sep 2017 13:49:22 +0000 (09:49 -0400)]
Refactor new file permission handling
The file handling functions from fd.c were called with a diverse mix of
notations for the file permissions when they were opening new files.
Almost all files created by the server should have the same permissions
set. So change the API so that e.g. OpenTransientFile() automatically
uses the standard permissions set, and OpenTransientFilePerm() is a new
function that takes an explicit permissions set for the few cases where
it is needed. This also saves an unnecessary argument for call sites
that are just opening an existing file.
While we're reviewing these APIs, get rid of the FileName typedef and
use the standard const char * for the file name and mode_t for the file
mode. This makes these functions match other file handling functions
and removes an unnecessary layer of mysteriousness. We can also get rid
of a few casts that way.
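A sketch of the resulting API ("path" is assumed; the flag and mode values are illustrative only):

    /* common case: the standard permissions set is applied automatically */
    fd = OpenTransientFile(path, O_RDWR | O_CREAT | PG_BINARY);

    /* rare case that really needs an explicit permissions set */
    fd = OpenTransientFilePerm(path, O_RDWR | O_CREAT | PG_BINARY,
                               S_IRUSR | S_IWUSR);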
Peter Eisentraut [Fri, 22 Sep 2017 20:50:59 +0000 (16:50 -0400)]
Fix saving and restoring umask
In two cases, we set a different umask for some piece of code and
restore it afterwards. But if the contained code errors out, the umask
is not restored. So add TRY/CATCH blocks to fix that.
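The pattern the fix installs, sketched (the specific umask value is illustrative):

    mode_t  oumask = umask(S_IRWXG | S_IRWXO);

    PG_TRY();
    {
        /* ... code that creates files ... */
    }
    PG_CATCH();
    {
        umask(oumask);      /* restore even on error */
        PG_RE_THROW();
    }
    PG_END_TRY();
    umask(oumask);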
The build farm client would run the pg_upgrade tests twice, once as part
of the existing pg_upgrade check run and once as part of picking up all
TAP tests by looking for "t" directories. Since the pg_upgrade tests
are pretty slow, we will need a better solution or possibly a build farm
client change before we can proceed with this.
Andres Freund [Fri, 22 Sep 2017 18:29:05 +0000 (11:29 -0700)]
Expand expected output for recovery test even further.
I'd assumed that the backend being killed should be able to get out an
error message - but it turns out it's not guaranteed that it's not
still sending a ready-for-query. Really need to do something about
getting these error messages to the client.
Reported-By: Thomas Munro, Tom Lane
Discussion:
https://postgr.es/m/CAEepm=0TE90nded+bNthP45_PEvGAAr=3gxhHJObL4xmOLtX0w@mail.gmail.com
https://postgr.es/m/14968.1506101414@sss.pgh.pa.us
Robert Haas [Fri, 22 Sep 2017 17:26:25 +0000 (13:26 -0400)]
hash: Implement page-at-a-time scan.
Commit 09cb5c0e7d6fbc9dee26dc429e4fc0f2a88e5272 added a similar
optimization to btree back in 2006, but nobody bothered to implement
the same thing for hash indexes, probably because they weren't
WAL-logged and had lots of other performance problems as well. As
with the corresponding btree case, this eliminates the problem of
potentially needing to refind our position within the page, and cuts
down on pin/unpin traffic as well.
Ashutosh Sharma, reviewed by Alexander Korotkov, Jesper Pedersen,
Amit Kapila, and me. Some final edits to comments and README by
me.
Tom Lane [Fri, 22 Sep 2017 16:59:44 +0000 (12:59 -0400)]
Allow up to 3 "-P 1" reports per thread in pgbench run of 2 seconds.
There seems to be some considerable imprecision in the timing of -P
progress reports. Nominally each thread ought to produce 2 reports
in this test, but about 10% of the time we only get one, and 1% of
the time we get three, as per buildfarm results so far. Pending
further investigation, treat the last case as a "pass". (I, tgl,
am suspicious that this still might not be lax enough, now that it's
obvious that the behavior is load-dependent; but there's not yet
buildfarm evidence to confirm that suspicion.)
Peter Eisentraut [Fri, 22 Sep 2017 14:59:46 +0000 (10:59 -0400)]
Remove contrib/chkpass
The recent addition of a test suite for this module revealed a few
problems. It uses a crypt() method that is no longer considered secure
and doesn't work anymore on some platforms. Using a volatile input
function violates internal sanity check assumptions and leads to
failures on the build farm.
So this module is neither a usable security tool nor a good example for
an extension. No one wanted to argue for keeping or improving it, so
remove it.
Tom Lane [Fri, 22 Sep 2017 15:00:58 +0000 (11:00 -0400)]
Assume wcstombs(), towlower(), and sibling functions are always present.
These functions are required by SUS v2, which is our minimum baseline
for Unix platforms, and are present on all interesting Windows versions
as well. Even our oldest buildfarm members have them. Thus, we were not
testing the "!USE_WIDE_UPPER_LOWER" code paths, which explains why the bug
fixed in commit e6023ee7f escaped detection. Per discussion, there seems
to be no more real-world value in maintaining this option. Hence, remove
the configure-time tests for wcstombs() and towlower(), remove the
USE_WIDE_UPPER_LOWER symbol, and remove all the !USE_WIDE_UPPER_LOWER code.
There's not actually all that much of the latter, but simplifying the #if
nests is a win in itself.
Adjust commentary in regc_pg_locale.c to remove mention of the possibility
of not having <wctype.h> functions, since we no longer consider that.
Eliminate duplicate code in wparser_def.c by generalizing the p_iswhat
macro to take a parameter saying what to return for non-ASCII chars
in C locale. (That's not really a consequence of the
USE_WIDE_UPPER_LOWER-ectomy, but I noticed it while doing that.)
Reported-by: Jeremy Schneider
Author: Michael Paquier
Discussion: https://postgr.es/m/CA+fnDAZaPCwfY8Lp-pfLnUGFAXRu1VfLyRgdup-L-kwcBj8MqQ@mail.gmail.com
Tom Lane [Fri, 22 Sep 2017 04:04:21 +0000 (00:04 -0400)]
Sync our copy of the timezone library with IANA tzcode master.
This patch absorbs a few unreleased fixes in the IANA code.
It corresponds to commit 2d8b944c1cec0808ac4f7a9ee1a463c28f9cd00a
in https://github.com/eggert/tz. Non-cosmetic changes include:
TZDEFRULESTRING is updated to match current US DST practice,
rather than what it was over ten years ago. This only matters
for interpretation of POSIX-style zone names (e.g., "EST5EDT"),
and only if the timezone database doesn't include either an exact
match for the zone name or a "posixrules" entry. The latter
should not be true in any current Postgres installation, but
this could possibly matter when using --with-system-tzdata.
Get rid of a nonportable use of "++var" on a bool var.
This is part of a larger fix that eliminates some vestigial
support for consecutive leap seconds, and adds checks to
the "zic" compiler that the data files do not specify that.
Remove a couple of ancient compatibility hacks. The IANA
crew think these are obsolete, and I tend to agree. But
perhaps our buildfarm will think different.
Back-patch to all supported branches, in line with our policy
that all branches should be using current IANA code. Before v10,
this includes application of current pgindent rules, to avoid
whitespace problems in future back-patches.
Andrew Dunstan [Thu, 21 Sep 2017 23:02:23 +0000 (19:02 -0400)]
Provide a test for variable existence in psql
"\if :{?variable_name}" will be translated to "\if TRUE" if the variable
exists and "\if FALSE" otherwise. Thus it will be possible to execute code
conditionally on the existence of the variable, regardless of its value.
Fabien Coelho, with some review by Robins Tharakan and some light text
editing by me.
Tom Lane [Thu, 21 Sep 2017 22:13:11 +0000 (18:13 -0400)]
Give a better error for duplicate entries in VACUUM/ANALYZE column list.
Previously, the code didn't think about this case and would just try to
analyze such a column twice. That would fail at the point of inserting
the second version of the pg_statistic row, with obscure error messages
like "duplicate key value violates unique constraint" or "tuple already
updated by self", depending on context and PG version. We could allow
the case by ignoring duplicate column specifications, but it seems better
to reject it explicitly.
The bogus error messages are arguably a bug in themselves, so back-patch
to all supported versions.
Nathan Bossart, per a report from Michael Paquier, and whacked
around a bit by me.
Andrew Dunstan [Thu, 21 Sep 2017 12:41:14 +0000 (08:41 -0400)]
Quieten warnings about unused variables
These variables are only ever written to in assertion-enabled builds,
and the latest Microsoft compilers complain about such variables in
non-assertion-enabled builds.
Apparently they don't worry so much about variables that are written to
but not read from, so most of our PG_USED_FOR_ASSERTS_ONLY variables
don't cause the problem.
Robert Haas [Thu, 21 Sep 2017 03:33:04 +0000 (23:33 -0400)]
Associate partitioning information with each RelOptInfo.
This is not used for anything yet, but it is necessary infrastructure
for partition-wise join and for partition pruning without constraint
exclusion.
Ashutosh Bapat, reviewed by Amit Langote and with quite a few changes,
mostly cosmetic, by me. Additional review and testing of this patch
series by Antonin Houska, Amit Khandekar, Rafia Sabih, Rajkumar
Raghuwanshi, Thomas Munro, and Dilip Kumar.
Tom Lane [Wed, 20 Sep 2017 17:52:36 +0000 (13:52 -0400)]
Improve dubious memory management in pg_newlocale_from_collation().
pg_newlocale_from_collation() used malloc() and strdup() directly,
which is generally not per backend coding style, and it didn't bother
to check for failure results, but would just SIGSEGV instead. Also,
if one of the numerous error checks in the middle of the function
failed, the already-allocated memory would be leaked permanently.
Admittedly, it's not a lot of memory, but it could build up if this
function were called repeatedly for a bad collation.
The first two problems are easily cured by palloc'ing in TopMemoryContext
instead of calling libc directly. We can fairly easily dodge the leakage
problem for the struct pg_locale_struct by filling in a temporary variable
and allocating permanent storage only once we reach the bottom of the
function. It's harder to get rid of the potential leakage for ICU's copy
of the collcollate string, but at least that's only allocated after most
of the error checks; so live with that aspect.
Back-patch to v10 where this code came in, with one or another of the
ICU patches.
Tom Lane [Wed, 20 Sep 2017 15:28:34 +0000 (11:28 -0400)]
Fix instability in subscription regression test.
005_encoding.pl neglected to wait for the subscriber's initial
synchronization to happen. While we have not seen this fail in
the buildfarm, it's pretty easy to demonstrate there's an issue
by hacking logicalrep_worker_launch() to fail most of the time.
Tom Lane [Wed, 20 Sep 2017 15:10:36 +0000 (11:10 -0400)]
Fix erroneous documentation about noise word GROUP.
GRANT, REVOKE, and some allied commands allow the noise word GROUP
before a role name (cf. grantee production in gram.y). This option
does not exist elsewhere, but it had nonetheless snuck into the
documentation for ALTER ROLE, ALTER USER, and CREATE SCHEMA.
Seems to be a copy-and-pasteo in commit 31eae6028, which did expand the
syntax choices here, but not in that way. Back-patch to 9.5 where that
came in.
Magnus Hagander [Wed, 20 Sep 2017 12:09:05 +0000 (14:09 +0200)]
Mention need for --no-inc-recursive in rsync command
Since rsync 3.0.0 (released in 2008), the default way to enumerate
changes was changed in a way that makes it less likely that the hardlink
sync mode works. Since the whole point of the documented procedure is
for the hardlinks to work, change our docs to suggest using the
backwards compatibility switch.
Andres Freund [Wed, 20 Sep 2017 05:03:48 +0000 (22:03 -0700)]
Make WAL segment size configurable at initdb time.
For performance reasons a larger segment size than the default 16MB
can be useful. A larger segment size has two main benefits: Firstly,
in setups using archiving, it makes it easier to write scripts that
can keep up with higher amounts of WAL; secondly, the WAL has to be
written and synced to disk less frequently.
But at the same time large segment sizes are disadvantageous for
smaller databases. So far the segment size had to be configured at
compile time, often making it unrealistic to choose one fitting a
particular load. Therefore change it to an initdb-time setting.
This includes a breaking change to the xlogreader.h API, which now
requires the current segment size to be configured. For that and
similar reasons a number of binaries had to be taught how to recognize
the current segment size.
Author: Beena Emerson, editorialized by Andres Freund
Reviewed-By: Andres Freund, David Steele, Kuntal Ghosh, Michael Paquier, Peter Eisentraut, Robert Haas, Tushar Ahuja
Discussion: https://postgr.es/m/CAOG9ApEAcQ--1ieKbhFzXSQPw_YLmepaa4hNdnY5+ZULpt81Mw@mail.gmail.com
Andres Freund [Wed, 20 Sep 2017 04:29:51 +0000 (21:29 -0700)]
Accept that server might not be able to send error in crash recovery test.
As it turns out we can't rely that the script's monitoring session is
terminated with a proper error by the server, because the session
might be terminated while already trying to send data.
Also improve robustness and error reporting facilities of the test,
developed while debugging this issue.
Tom Lane [Wed, 20 Sep 2017 03:32:45 +0000 (23:32 -0400)]
Remove no-op GiST support functions in the core GiST opclasses.
The preceding patch allowed us to remove useless GiST support functions.
This patch actually does that for all the no-op cases in the core GiST
code. This buys us whatever performance gain is to be had, and more
importantly exercises the preceding patch.
There remain no-op functions in the contrib GiST opclasses, but those
will take more work to remove.
Tom Lane [Wed, 20 Sep 2017 03:32:27 +0000 (23:32 -0400)]
Allow no-op GiST support functions to be omitted.
There are common use-cases in which the compress and/or decompress
functions can be omitted, with the result being that we make no
data transformation when storing or retrieving index values.
Previously, you had to provide a no-op function anyway, but this
patch allows such opclass support functions to be omitted.
Furthermore, if the compress function is omitted, then the core code
knows that the stored representation is the same as the original data.
This means we can allow index-only scans without requiring a fetch
function to be provided either. Previously you had to provide a
no-op fetch function if you wanted IOS to work.
This reportedly provides a small performance benefit in such cases,
but IMO the real reason for doing it is just to reduce the amount of
useless boilerplate code that has to be written for GiST opclasses.
Peter Eisentraut [Tue, 19 Sep 2017 22:29:12 +0000 (18:29 -0400)]
Add basic TAP test setup for pg_upgrade
The plan is to convert the current pg_upgrade test to the TAP
framework. This commit just puts a basic TAP test in place so that we
can see how the build farm behaves, since the build farm client has some
special knowledge of the pg_upgrade tests.
Author: Michael Paquier <michael.paquier@gmail.com>
Andres Freund [Tue, 19 Sep 2017 21:17:20 +0000 (14:17 -0700)]
Avoid use of non-portable strnlen() in pgstat_clip_activity().
The use of strnlen rather than strlen was just paranoia. Instead of
giving up on the paranoia, just implement the safeguard
differently. And add a comment explaining why we're careful.
Author: Andres Freund
Discussion: https://postgr.es/m/E1duOkJ-0001Mc-U5@gemulon.postgresql.org
Andres Freund [Tue, 19 Sep 2017 18:46:07 +0000 (11:46 -0700)]
Speedup pgstat_report_activity by moving mb-aware truncation to read side.
Previously multi-byte aware truncation was done on every
pgstat_report_activity() call - proving to be a bottleneck for
workloads with long query strings that execute quickly.
Instead move the truncation to the read side, which commonly is
executed far less frequently. That's possible because in all server
encodings the length of a multi-byte character can be determined from
its first byte.
Rename PgBackendStatus.st_activity to st_activity_raw so existing
extension users of the field break - their code has to be adjusted to
use pgstat_clip_activity().
Author: Andres Freund
Tested-By: Kuntal Ghosh
Reviewed-By: Robert Haas, Tom Lane
Discussion: https://postgr.es/m/20170912071948.pa7igbpkkkviecpz@alap3.anarazel.de
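A sketch of what a reading extension now does ("beentry" is an assumed PgBackendStatus pointer; this is not quoted from the commit):

    char   *activity;

    /* returns a palloc'd copy, truncated at a character boundary */
    activity = pgstat_clip_activity(beentry->st_activity_raw);
    /* ... use activity ... */
    pfree(activity);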
Andrew Dunstan [Tue, 19 Sep 2017 19:31:37 +0000 (15:31 -0400)]
Disable multi-byte citext tests
This reverts commit 890faaf1 which attempted unsuccessfully to deal with
the problem, and instead just comments out these tests like other similar
tests elsewhere in the script.
Andres Freund [Tue, 19 Sep 2017 17:32:51 +0000 (10:32 -0700)]
Make new crash restart test a bit more robust.
Add timeouts in case psql doesn't deliver the expected output, and try
to cause the monitoring psql to be fully connected to a backend. This
isn't necessarily everything needed, but at least the timeouts should
reduce the pain for buildfarm owners.
Author: Andres Freund
Reported-By: Tom Lane, BF animals prairiedog and calliphoridae
Discussion: https://postgr.es/m/E1du6ZT-00043I-91@gemulon.postgresql.org
Andres Freund [Tue, 19 Sep 2017 02:36:44 +0000 (19:36 -0700)]
Rearm statement_timeout after each executed query.
Previously statement_timeout, in the extended protocol, affected all
messages till a Sync message. For clients that pipeline/batch query
execution that's problematic.
Instead disable timeout after each Execute message, and enable, if
necessary, the timer in start_xact_command(). As that's done only for
Execute and not Parse / Bind, pipelining the latter two could still
cause undesirable timeouts. But a survey of protocol implementations
shows that all drivers issue Sync messages when preparing, and adding
timeout rearming to both is fairly expensive for the common parse /
bind / execute sequence.
Author: Tatsuo Ishii, editorialized by Andres Freund
Reviewed-By: Takayuki Tsunakawa, Andres Freund
Discussion: https://postgr.es/m/20170222.115044.1665674502985097185.t-ishii@sraoss.co.jp
Andres Freund [Tue, 19 Sep 2017 00:43:37 +0000 (17:43 -0700)]
Fix uninitialized variable in dshash.c.
A bugfix for commit 8c0d7bafad36434cb08ac2c78e69ae72c194ca20. The code
would have crashed if hashtable->size_log2 ever had the same value as
hashtable->control->size_log2 by coincidence.
Per Valgrind.
Author: Thomas Munro
Reported-By: Tomas Vondra
Discussion: https://postgr.es/m/e72fb33c-4f31-f276-e972-263d9b59554d%402ndquadrant.com
The bug was caused by not re-reading the control file during crash
recovery restarts, which led to an attempt to pfree() shared memory
contents. The fix is to re-read the control file, which seems good
anyway.
It's unclear at this moment whether we want to keep the
refactoring introduced in the commit referenced above or come up with
an alternative approach. But fixing the bug in the meantime seems
like a good idea regardless.
A followup commit will introduce regression test coverage for crash
restarts.
Reported-By: Tom Lane
Discussion: https://postgr.es/m/14134.1505572349@sss.pgh.pa.us
Tom Lane [Mon, 18 Sep 2017 20:36:28 +0000 (16:36 -0400)]
Minor code-cleanliness improvements for btree.
Make the btree page-flags test macros (P_ISLEAF and friends) return clean
boolean values, rather than values that might not fit in a bool. Use them
in a few places that were randomly referencing the flag bits directly.
In passing, change access/nbtree/'s only direct use of BUFFER_LOCK_SHARE to
BT_READ. (Some think we should go the other way, but as long as we have
BT_READ/BT_WRITE, let's use them consistently.)
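As an illustration of the flavor of change (a sketch; the committed macros may differ in detail):

    /* before: yields the raw flag bits, which need not fit in a bool */
    #define P_ISLEAF(opaque)    ((opaque)->btpo_flags & BTP_LEAF)

    /* after: yields a clean boolean */
    #define P_ISLEAF(opaque)    (((opaque)->btpo_flags & BTP_LEAF) != 0)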
Tom Lane [Mon, 18 Sep 2017 19:21:23 +0000 (15:21 -0400)]
Make DatumGetFoo/PG_GETARG_FOO/PG_RETURN_FOO macro names more consistent.
By project convention, these names should include "P" when dealing with a
pointer type; that is, if the result of a GETARG macro is of type FOO *,
it should be called PG_GETARG_FOO_P not just PG_GETARG_FOO. Some newer
types such as JSONB and ranges had not followed the convention, and a
number of contrib modules hadn't gotten that memo either. Rename the
offending macros to improve consistency.
In passing, fix a few places that thought PG_DETOAST_DATUM() returns
a Datum; it does not, it returns "struct varlena *". Applying
DatumGetPointer to that happens not to cause any bad effects today,
but it's formally wrong. Also, adjust an ltree macro that was designed
without any thought for what pgindent would do with it.
This is all cosmetic and shouldn't have any impact on generated code.
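For illustration, a hedged sketch of the convention ("my_jsonb_fn" is an invented function name):

    Datum
    my_jsonb_fn(PG_FUNCTION_ARGS)
    {
        /* Jsonb * is a pointer type, so the macro carries a _P suffix;
         * this was formerly spelled PG_GETARG_JSONB. */
        Jsonb  *jb = PG_GETARG_JSONB_P(0);

        PG_RETURN_JSONB_P(jb);
    }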
Tom Lane [Mon, 18 Sep 2017 15:39:44 +0000 (11:39 -0400)]
Fix, or at least ameliorate, bugs in logicalrep_worker_launch().
If we failed to get a background worker slot, the code just walked
away from the logicalrep-worker slot it already had, leaving that
looking like the worker is still starting up. This led to an indefinite
hang in subscription startup, as reported by Thomas Munro. We must
release the slot on failure.
Also fix a thinko: we must capture the worker slot's generation before
releasing LogicalRepWorkerLock the first time, else testing to see if
it's changed is pretty meaningless.
BTW, the CHECK_FOR_INTERRUPTS() in WaitForReplicationWorkerAttach is a
ticking time bomb, even without considering the possibility of elog(ERROR)
in one of the other functions it calls. Really, this entire business needs
a redesign with some actual thought about error recovery. But for now
I'm just band-aiding the case observed in testing.
Peter Eisentraut [Mon, 18 Sep 2017 01:37:02 +0000 (21:37 -0400)]
Fix DROP SUBSCRIPTION hang
When ALTER SUBSCRIPTION DISABLE is run in the same transaction before
DROP SUBSCRIPTION, the latter will hang because workers will still be
running, not having seen the DISABLE committed, and DROP SUBSCRIPTION
will wait until the workers have vacated the replication origin slots.
Previously, DROP SUBSCRIPTION killed the logical replication workers
immediately only if it was going to drop the replication slot, otherwise
it scheduled the worker killing for the end of the transaction, as a
result of 7e174fa793a2df89fe03d002a5087ef67abcdde8. This, however,
causes the present problem. To fix, kill the workers immediately in all
cases. This covers all cases: A subscription that doesn't have a
replication slot must be disabled. It was either disabled in the same
transaction, or it was already disabled before the current transaction,
but then there shouldn't be any workers left and this won't make a
difference.