Kevin Grittner [Fri, 6 May 2016 12:47:12 +0000 (07:47 -0500)]
Fix hash index vs "snapshot too old" problems
Hash indexes are not WAL-logged, and so do not maintain the LSN of
index pages. Since the "snapshot too old" feature counts on
detecting error conditions using the LSN of a table and all indexes
on it, this makes it impossible to safely do early vacuuming on any
table with a hash index, so add this to the tests for whether the
xid used to vacuum a table can be adjusted based on
old_snapshot_threshold.
While at it, add a paragraph to the docs for old_snapshot_threshold
which specifically mentions this and other aspects of the feature
which may otherwise surprise users.
Problem reported and patch reviewed by Amit Kapila
Dean Rasheed [Fri, 6 May 2016 11:48:27 +0000 (12:48 +0100)]
Fix psql's \ev and \sv commands so that they handle view reloptions.
Commit 8eb6407aaeb6cbd972839e356b436bb698f51cff added support for
editing and showing view definitions, but neglected to account for
view options such as security_barrier and WITH CHECK OPTION which are
not returned by pg_get_viewdef() and so need special handling.
Author: Dean Rasheed
Reviewed-by: Peter Eisentraut
Discussion: http://www.postgresql.org/message-id/CAEZATCWZjCgKRyM-agE0p8ax15j9uyQoF=qew7D2xB6cF76T8A@mail.gmail.com
Dean Rasheed [Fri, 6 May 2016 11:45:36 +0000 (12:45 +0100)]
Move and rename fmtReloptionsArray().
Move fmtReloptionsArray() from pg_dump.c to string_utils.c so that it
is available to other frontend code. In particular psql's \ev and \sv
commands need it to handle view reloptions. Also rename the function
to appendReloptionsArray(), which is a more accurate description of
what it does.
Author: Dean Rasheed
Reviewed-by: Peter Eisentraut
Discussion: http://www.postgresql.org/message-id/CAEZATCWZjCgKRyM-agE0p8ax15j9uyQoF=qew7D2xB6cF76T8A@mail.gmail.com
Tom Lane [Fri, 6 May 2016 02:37:30 +0000 (22:37 -0400)]
Further 9.6 release note improvements.
Call out the major enhancements in this release as identified by
pgsql-advocacy discussion, and rearrange some of the entries to
make those items more prominent. Other minor improvements per
advice from Vitaly Burovoy, Masahiko Sawada, Peter Geoghegan,
and Andres Freund.
Tom Lane [Fri, 6 May 2016 00:08:58 +0000 (20:08 -0400)]
Update time zone data files to tzdata release 2016d.
DST law changes in Russia (Magadan, Tomsk regions) and Venezuela.
Historical corrections for Russia. There are new zone names Europe/Kirov
and Asia/Tomsk reflecting the fact that these regions now have different
time zone histories from adjacent regions.
Tom Lane [Thu, 5 May 2016 18:51:00 +0000 (14:51 -0400)]
Rename pgbench min/max to least/greatest, and fix handling of double args.
These functions behave like the backend's least/greatest functions,
not like min/max, so the originally-chosen names invite confusion.
Per discussion, rename to least/greatest.
I also took it upon myself to make them return double if any input is
double. The previous behavior of silently coercing all inputs to int
surely does not meet the principle of least astonishment.
Copy-edit some of the other new functions' documentation, too.
Tom Lane [Thu, 5 May 2016 16:33:12 +0000 (12:33 -0400)]
Fix ordering/categorization of some recently-added system views.
Somebody added pg_replication_origin, pg_replication_origin_status and
pg_replication_slots to catalogs.sgml without a whole lot of concern for
either alphabetical order or the difference between a table and a view.
Clean up the mess.
Back-patch to 9.5, not so much because this is critical as because if
I don't it will result in a cross-branch divergence in release-9.5.sgml,
which would be a maintenance hazard.
Dean Rasheed [Thu, 5 May 2016 10:16:17 +0000 (11:16 +0100)]
Fix corner-case loss of precision in numeric pow() calculation
Commit 7d9a4737c268f61fb8800957631f12d3f13be218 greatly improved the
accuracy of the numeric transcendental functions; however, it failed to
consider the case where the result from pow() is close to the overflow
threshold, for example 0.12 ^ -2345.6. For such inputs, where the
result has more than 2000 digits before the decimal point, the decimal
result weight estimate was being clamped to 2000, leading to a loss of
precision in the final calculation.
Fix this by replacing the clamping code with an overflow test that
aborts the calculation early if the final result is sure to overflow,
based on the overflow limit in exp_var(). This provides the same
protection against integer overflow in the subsequent result scale
computation as the original clamping code, but it also ensures that
precision is never lost and saves compute cycles in cases that are
sure to overflow.
The new early overflow test works with the initial low-precision
result (expected to be accurate to around 8 significant digits) and
includes a small fuzz factor to ensure that it doesn't kick in for
values that would not overflow exp_var(), so the overall overflow
threshold of pow() is unchanged and consistent for all inputs with
non-integer exponents.
Author: Dean Rasheed
Reviewed-by: Tom Lane
Discussion: http://www.postgresql.org/message-id/CAEZATCUj3U-cQj0jjoia=qgs0SjE3auroxh8swvNKvZWUqegrg@mail.gmail.com
See-also: http://www.postgresql.org/message-id/CAEZATCV7w+8iB=07dJ8Q0zihXQT1semcQuTeK+4_rogC_zq5Hw@mail.gmail.com
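For illustration, here is a rough C sketch of the early-test idea described
above; it is not the numeric.c code, and the digit limit, fuzz factor, and
function names are invented for the sketch:

    #include <math.h>
    #include <stdbool.h>

    /* Illustrative threshold only; the real test is derived from the
     * overflow limit used by exp_var() in numeric.c. */
    #define POW_OVERFLOW_DIGITS 130000

    static bool
    pow_sure_to_overflow(double base, double exponent)
    {
        /* Low-precision estimate of the result's decimal weight, good to
         * roughly 8 significant digits: log10(|base|^exponent). */
        double approx_digits = exponent * log10(fabs(base));

        /* Small fuzz factor so borderline cases still go through the full
         * calculation instead of erroring out prematurely. */
        return approx_digits > POW_OVERFLOW_DIGITS * 1.01;
    }

Unlike the old clamp to 2000 digits, a test of this shape never feeds a
truncated weight into the result-scale computation, so precision is preserved
for large-but-representable results.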
This feature has shown enough immaturity that it was deemed better to
rip it out before rushing some more fixes at the last minute. There are
discussions on larger changes in this area for the next release.
Andres Freund [Wed, 4 May 2016 08:54:20 +0000 (01:54 -0700)]
Fix transient mdsync() errors of truncated relations due to 72a98a6395.
Unfortunately the segment size checks from 72a98a6395 had the negative
side-effect of breaking a corner case in mdsync(): when processing an
fsync request for a truncated-away segment, mdsync() could fail with
"could not fsync file" (if the previous segment < RELSEG_SIZE) because
_mdfd_getseg() now wouldn't return the relevant segment anymore.
The cleanest fix seems to be to allow the caller of _mdfd_getseg() to
specify whether checks for RELSEG_SIZE are performed. To allow doing so,
change the ExtensionBehavior enum into a bitmask. Besides allowing for
the addition of EXTENSION_DONT_CHECK_SIZE, this makes for a nicer
implementation of EXTENSION_REALLY_RETURN_NULL.
Besides mdsync() the only callsite that should change behaviour due to
this is mdprefetch() which now doesn't create segments anymore, even in
recovery. Given the uses of mdprefetch() that seems better.
Reported-By: Thom Brown
Discussion: CAA-aLv72QazLvPdKZYpVn4a_Eh+i4_cxuB03k+iCuZM_xjc+6Q@mail.gmail.com
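A minimal sketch of the enum-to-bitmask change described above (the real
definitions live in md.c; the bit values and the exact flag set here are
assumptions):

    /* Sketch only -- not the actual md.c definitions. */
    typedef int ExtensionBehavior;

    #define EXTENSION_FAIL                (1 << 0)  /* ERROR on missing segment */
    #define EXTENSION_RETURN_NULL         (1 << 1)  /* return NULL if missing */
    #define EXTENSION_REALLY_RETURN_NULL  (1 << 2)  /* ... and never create/extend */
    #define EXTENSION_CREATE              (1 << 3)  /* create missing segments */
    #define EXTENSION_DONT_CHECK_SIZE     (1 << 4)  /* skip RELSEG_SIZE checks */

With flags instead of an enum, a caller such as mdsync() can combine
behaviors, e.g. EXTENSION_RETURN_NULL | EXTENSION_DONT_CHECK_SIZE, so a
truncated-away segment can still be found and fsync'd.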
Robert Haas [Tue, 3 May 2016 18:36:38 +0000 (14:36 -0400)]
Fix more things to be parallel-safe.
Conversion functions were previously marked as parallel-unsafe, since
that is the default, but in fact they are safe. Parallel-safe
functions defined in pg_proc.h and redefined in system_views.sql were
ending up as parallel-unsafe because the redeclarations were not
marked PARALLEL SAFE. While editing system_views.sql, mark ts_debug()
parallel safe also.
Robert Haas [Tue, 3 May 2016 14:52:25 +0000 (10:52 -0400)]
Tweak a few more things in preparation for upcoming pgindent run.
These changes adjust code and comments in minor ways to prevent
pgindent from mangling them. Among other things, I tried to avoid
situations where pgindent would emit "a +b" instead of "a + b", and I
tried to avoid having it break up inline comments across multiple
lines.
Alvaro Herrera [Mon, 2 May 2016 19:04:29 +0000 (16:04 -0300)]
Fix code comments regarding logical decoding
Back in 3b02ea4f0780 I added some comments in various places to explain
how logical decoding and other things worked. Not all of the changes
were welcome, because they were misleading or wrong. This changes them
a little bit to make them more accurate.
Some other comments are also changed to be more accurate. Also, fix a
bunch of typos.
Tom Lane [Mon, 2 May 2016 15:18:10 +0000 (11:18 -0400)]
Fix configure's incorrect version tests for flex and perl.
awk's equality-comparison operator is "==" not "=". We got this right
in many places, but not in configure's checks for supported version
numbers of flex and perl. It hadn't been noticed because unsupported
versions are so old as to be basically extinct in the wild, and because
the only consequence is whether or not a WARNING flies by during
configure.
Daniel Gustafsson noted the problem with respect to the test for flex;
I found the other by reviewing other awk calls.
Robert Haas [Mon, 2 May 2016 14:42:34 +0000 (10:42 -0400)]
Fix parallel safety markings for pg_start_backup.
Commit 7117685461af50f50c03f43e6a622284c8d54694 made pg_start_backup
parallel-restricted rather than parallel-safe, because it now relies
on backend-private state that won't be synchronized with the parallel
worker. However, it didn't update pg_proc.h. Separately, Andreas
Karlsson observed that system_views.sql neglected to reiterate the
parallel-safety markings when redefining various functions, including
this one; so add a PARALLEL RESTRICTED declaration there to match
the new value in pg_proc.h.
CHECK_PAGE_OFFSET_RANGE() has been unused forever.
CHECK_RELATION_BLOCK_RANGE() has been unused in pgstatindex.c ever since
the bt_page_stats() and bt_page_items() functions were moved from pgstattuple
to the pageinspect module. It still exists in pageinspect/btreefuncs.c.
Tom Lane [Sun, 1 May 2016 15:24:32 +0000 (11:24 -0400)]
Add a --non-master-only option to git_changelog.
This has the inverse effect of --master-only. It's needed to help find
cases where a commit should not be described in major release notes
because it was back-patched into older branches, though not at the same
time as the HEAD commit.
Tom Lane [Sat, 30 Apr 2016 18:08:00 +0000 (14:08 -0400)]
Small improvements to OPTIMIZER_DEBUG code.
Now that Paths have their own rows field, print that rather than
the parent relation's rowcount.
Show the relid sets associated with Paths using table names rather
than numbers; since this code is able to print simple Var references
using table names, it seems a bit silly that print_relids can't.
Print the cheapest_parameterized_paths list for a RelOptInfo, and
include information about a parameterized path's required_outer rels.
Noted while trying to use this feature to debug Alexander Kirkouski's
recent bug report.
Tom Lane [Sat, 30 Apr 2016 16:29:21 +0000 (12:29 -0400)]
Fix planner crash from pfree'ing a partial path that a GatherPath uses.
We mustn't run generate_gather_paths() during add_paths_to_joinrel(),
because that function can be invoked multiple times for the same target
joinrel. Not only is it wasteful to build GatherPaths repeatedly, but
a later add_partial_path() could delete the partial path that a previously
created GatherPath depends on. Instead establish the convention that we
do generate_gather_paths() for a rel only just before set_cheapest().
The code was accidentally not broken for baserels, because as of today there
never is more than one partial path for a baserel. But that assumption
obviously has a pretty short half-life, so move the generate_gather_paths()
calls for those cases as well.
Also add some generic comments explaining how and why this all works.
Tom Lane [Sat, 30 Apr 2016 14:54:45 +0000 (10:54 -0400)]
Remove warning about num_sync being too large in synchronous_standby_names.
If we're not going to reject such setups entirely, throwing a WARNING in
check_synchronous_standby_names() is unhelpful, because it will cause the
warning to be logged again every time the postmaster receives SIGHUP.
Per discussion, just remove the warning.
In passing, improve the documentation for synchronous_commit, which had not
gotten the word that now there can be more than one synchronous standby.
Tom Lane [Sat, 30 Apr 2016 00:19:38 +0000 (20:19 -0400)]
Fix mishandling of equivalence-class tests in parameterized plans.
Given a three-or-more-way equivalence class, such as X.X = Y.Y = Z.Z,
it was possible for the planner to omit one of the quals needed to
enforce that all members of the equivalence class are actually equal.
This only happened in the case of a parameterized join node for two
of the relations, that is a plan tree like
    Nested Loop
      ->  Scan X
      ->  Nested Loop
            ->  Scan Y
            ->  Scan Z
                  Filter: Z.Z = X.X
The eclass machinery normally expects to apply X.X = Y.Y when those
two relations are joined, but in this shape of plan tree they aren't
joined until the top node --- and, if the lower nested loop is marked
as parameterized by X, the top node will assume that the relevant eclass
condition(s) got pushed down into the lower node. On the other hand,
the scan of Z assumes that it's only responsible for constraining Z.Z
to match any one of the other eclass members. So one or another of
the required quals sometimes fell between the cracks, depending on
whether consideration of the eclass in get_joinrel_parampathinfo()
for the lower nested loop chanced to generate X.X = Y.Y or X.X = Z.Z
as the appropriate constraint there. If it generated the latter,
it'd erroneously suppose that the Z scan would take care of matters.
To fix, force X.X = Y.Y to be generated and applied at that join node
when this case occurs.
This is *extremely* hard to hit in practice, because various planner
behaviors conspire to mask the problem; starting with the fact that the
planner doesn't really like to generate a parameterized plan of the
above shape. (It might have been impossible to hit it before we
tweaked things to allow this plan shape for star-schema cases.) Many
thanks to Alexander Kirkouski for submitting a reproducible test case.
The bug can be demonstrated in all branches back to 9.2 where parameterized
paths were introduced, so back-patch that far.
Kevin Grittner [Fri, 29 Apr 2016 21:46:08 +0000 (16:46 -0500)]
Add a few entries to the tail of time mapping, to see old values.
Without a few entries beyond old_snapshot_threshold, the lookup
would often fail, resulting in the more aggressive pruning or
vacuum being skipped often enough to matter. This was very clearly
shown by a Python test script posted by Ants Aasma, and was likely
a factor in an earlier but somewhat less clear-cut test case posted
by Jeff Janes.
This patch makes no change to the logic, per se -- it just makes
the array of mapping entries big enough to make lookup misses based
on timing much less likely. An occasional miss is still possible
if a thread stalls for more than 10 minutes, but that does not
create any problem with correctness of behavior. Besides, if
things are so busy that a thread is stalling for more than 10
minutes, it is probably OK to skip the more aggressive cleanup at
that particular point in time.
Andrew Dunstan [Fri, 29 Apr 2016 11:59:47 +0000 (07:59 -0400)]
Support building with Visual Studio 2015
Adjust the way we detect the locale. As a result, the minimum Windows
version supported by VS2015 and later is Windows Vista. Add some tweaks
to remove new compiler warnings. Remove documentation references to the
now obsolete msysGit.
Michael Paquier, somewhat edited by me, reviewed by Christian Ullrich.
Andres Freund [Fri, 29 Apr 2016 05:05:37 +0000 (22:05 -0700)]
Remember asking for feedback during walsender shutdown.
Since 5a991ef8 we're explicitly asking for feedback from the receiving
side when shutting down walsender, if there is data that has not yet been
replicated.
Unfortunately we didn't remember (i.e. set waiting_for_ping_response to
true) having asked for feedback, leading to scenarios in which replies
were requested at a high frequency.
I can't reproduce this problem on my laptop; I think that's because the
problem requires a significant TCP window to manifest due to the
!pq_is_send_pending() condition. But since this clearly is a bug, let's
fix it. There's quite possibly more wrong than just this though.
While fiddling with WalSndDone(), I rewrote a hard-to-understand comment
about looking at the flush vs. the write position.
Reported-By: Nick Cleaton, Magnus Hagander
Author: Nick Cleaton
Discussion: CAFgz3kus=rC_avEgBV=+hRK5HYJ8vXskJRh8yEAbahJGTzF2VQ@mail.gmail.com
CABUevExsjROqDcD0A2rnJ6HK6FuKGyewJr3PL12pw85BHFGS2Q@mail.gmail.com
Backpatch: 9.4, where 5a991ef8 introduced the use of feedback messages
during shutdown.
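The shape of the fix is roughly the following fragment (illustrative only;
the real code is in walsender.c, where WalSndKeepalive() and the
waiting_for_ping_response flag already exist):

    if (!waiting_for_ping_response)
    {
        WalSndKeepalive(true);              /* ask the standby to reply */
        waiting_for_ping_response = true;   /* ...and remember that we asked */
    }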
Tom Lane [Thu, 28 Apr 2016 15:50:58 +0000 (11:50 -0400)]
Adjust DatumGetBool macro, this time for sure.
Commit 23a41573c attempted to fix the DatumGetBool macro to ignore bits
in a Datum that are to the left of the actual bool value. But it did that
by casting the Datum to bool; and on compilers that use C99 semantics for
bool, that ends up being a whole-word test, not a 1-byte test. This seems
to be the true explanation for contrib/seg failing in VS2015. To fix, use
GET_1_BYTE() explicitly. I think in the previous patch, I'd had some idea
of not having to commit to bool being exactly 1 byte wide, but regardless
of what the compiler's bool is, boolean columns and Datums are certainly
1 byte wide.
The previous fix was (eventually) back-patched into all active versions,
so do likewise with this one.
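A simplified sketch of the macro change (the typedef is a stand-in for
postgres.h; GET_1_BYTE() is the real postgres.h macro):

    #include <stdbool.h>
    #include <stdint.h>

    typedef uintptr_t Datum;            /* stand-in for postgres.h */

    #define GET_1_BYTE(datum)  (((Datum) (datum)) & 0x000000ff)

    /* Old form: under C99 bool semantics this tests the whole word. */
    #define DatumGetBoolOld(X) ((bool) (X))

    /* Fixed form: only the low-order byte of the Datum is significant. */
    #define DatumGetBool(X)    ((bool) (GET_1_BYTE(X) != 0))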
Tom Lane [Thu, 28 Apr 2016 15:46:07 +0000 (11:46 -0400)]
Revert "Convert contrib/seg's bool-returning SQL functions to V1 call convention."
This reverts commit c8e81afc60093b199a128ccdfbb692ced8e0c9cd.
That turns out to have been based on a faulty diagnosis of why the
VS2015 build was misbehaving. Instead, we need to fix DatumGetBool().
Prevent multiple cleanup processes for the pending list in GIN.
Previously, ginInsertCleanup could exit early if it detected that someone else
was cleaning up the pending list, without waiting for that other process to
finish the job. In that case vacuum could miss tuples that should be deleted.
The cleanup process now locks the metapage with the help of a heavyweight
LockPage(ExclusiveLock), which guarantees that no other cleanup process runs
at the same time. The lock is taken differently depending on the caller:
vacuum and gin_clean_pending_list() block until the lock becomes available,
while an ordinary insert takes a conditional lock to avoid waiting
indefinitely.
Insertion into the pending list does not take this lock, so inserts are not
blocked.
The patch also stops the cleanup process once it reaches the tail of the
pending list as it stood when cleanup started, to prevent unbounded cleanup
in the face of massive concurrent insertion. It stops early only for
automatic maintenance tasks such as autovacuum.
The memory limit used is chosen among autovacuum_work_mem,
maintenance_work_mem, and work_mem, depending on the call path.
The patch for previous releases will need to be reworked, owing to changes in
this area between 9.6 and earlier branches.
Discovered and diagnosed by Jeff Janes and Tomas Vondra
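The locking pattern amounts to roughly the following fragment (a sketch, not
the ginfast.c code; LockPage(), ConditionalLockPage(), UnlockPage() and
GIN_METAPAGE_BLKNO are real, while the forceCleanup flag and control flow are
illustrative):

    if (forceCleanup)
    {
        /* VACUUM and gin_clean_pending_list(): wait for the lock. */
        LockPage(index, GIN_METAPAGE_BLKNO, ExclusiveLock);
    }
    else if (!ConditionalLockPage(index, GIN_METAPAGE_BLKNO, ExclusiveLock))
    {
        /* Ordinary insert: someone else is already cleaning up; don't wait. */
        return;
    }

    /* ... drain the pending list, up to the tail seen when cleanup started ... */

    UnlockPage(index, GIN_METAPAGE_BLKNO, ExclusiveLock);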
Tom Lane [Wed, 27 Apr 2016 22:19:28 +0000 (18:19 -0400)]
Use memmove() not memcpy() to slide some pointers down.
The previous coding here was formally undefined, though it seems to
accidentally work on most platforms in the buildfarm. Caught by some
OpenBSD platforms in which libc contains an assertion check for
overlapping areas passed to memcpy().
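A self-contained illustration of the distinction (not PostgreSQL code):

    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        char buf[] = "abcdef";

        /* Slide "cdef" down two positions; source and destination overlap,
         * so memmove() is required -- memcpy() would be undefined here. */
        memmove(buf, buf + 2, strlen(buf + 2) + 1);

        printf("%s\n", buf);    /* prints "cdef" */
        return 0;
    }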
Tom Lane [Wed, 27 Apr 2016 21:55:19 +0000 (17:55 -0400)]
Clean up parsing of synchronous_standby_names GUC variable.
Commit 989be0810dffd08b added a flex/bison lexer/parser to interpret
synchronous_standby_names. It was done in a pretty crufty way, though,
making assorted end-use sites responsible for calling the parser at the
right times. That was not only vulnerable to errors of omission, but made
it possible that lexer/parser errors occur at very undesirable times,
and created memory leakages even if there was no error.
Instead, perform the parsing once during check_synchronous_standby_names
and let guc.c manage the resulting data. To do that, we have to flatten
the parsed representation into a single hunk of malloc'd memory, but that
is not very hard.
While at it, work a little harder on making useful error reports for
parsing problems; the previous code felt that "synchronous_standby_names
parser returned 1" was an appropriate user-facing error message. (To
be fair, it did also log a syntax error message, but separately from the
GUC problem report, which is at best confusing.) It had some outright
bugs in the face of invalid input, too.
I (tgl) also concluded that we need to restrict unquoted names in
synchronous_standby_names to be just SQL identifiers. The previous coding
would accept darn near anything, which (1) makes the quoting convention
both nearly-unnecessary and formally ambiguous, (2) makes it very hard to
understand what is a syntax error and what is a creative interpretation of
the input as a standby name, and (3) makes it impossible to further extend
the syntax in future without a compatibility break. I presume that we're
intending future extensions of the syntax, else this parsing infrastructure
is massive overkill, so (3) is an important objection. Since we've taken
a compatibility hit for non-identifier names with this change anyway, we
might as well lock things down now and insist that users use double quotes
for standby names that aren't identifiers.
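One way to picture the flattened representation is a single contiguous block
with the member names packed after a fixed header; the struct below is
illustrative only and may differ from the real syncrep.h definition:

    typedef struct SyncRepConfigSketch
    {
        int     config_size;    /* total size in bytes, including names */
        int     num_sync;       /* number of synchronous standbys required */
        int     nmembers;       /* number of standby names that follow */
        char    member_names[]; /* nmembers NUL-terminated strings, packed */
    } SyncRepConfigSketch;

Because the whole thing is one contiguous hunk of malloc'd memory, guc.c can
own it as the GUC's "extra" data and release it with a single free().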
Robert Haas [Wed, 27 Apr 2016 15:47:28 +0000 (11:47 -0400)]
Update typedefs.list file in preparation for pgindent run
In addition to adding new typedefs, I also re-sorted the file so that
various entries added piecemeal, mostly or entirely by me, were alphabetized
the same way as other entries in the file.
Robert Haas [Wed, 27 Apr 2016 15:29:45 +0000 (11:29 -0400)]
Clean up a few parallelism-related things that pgindent wants to mangle.
In nodeFuncs.c, pgindent wants to introduce spurious indentation into
the definitions of planstate_tree_walker and planstate_walk_subplans.
Fix that by spreading the definition out across several lines, similar
to what is already done for other walker functions in that file.
In execParallel.c, in the definition of SharedExecutorInstrumentation,
pgindent wants to insert more whitespace between the type name and the
member name. That causes it to mangle comments later on the line. Fix
by moving the comments out of line. Now that we have a bit more room,
add some more details that may be useful to the next person reading
this code.
Robert Haas [Wed, 27 Apr 2016 11:33:33 +0000 (07:33 -0400)]
Fix EXPLAIN VERBOSE output for parallel aggregate.
The way that PartialAggregate and FinalizeAggregate plan nodes were
displaying output columns before was bogus. Now, FinalizeAggregate
produces the same outputs as an Aggregate would have produced, while
PartialAggregate produces each of those outputs prefixed by the word
PARTIAL.
Andres Freund [Wed, 27 Apr 2016 03:32:51 +0000 (20:32 -0700)]
Don't open formally non-existent segments in _mdfd_getseg().
Before this commit _mdfd_getseg(), in contrast to mdnblocks(), did not
verify whether all segments leading up to the to-be-opened one were
RELSEG_SIZE sized. That is e.g. not the case after truncating a
relation, because later segments just get truncated to zero length, not
removed.
Once a "non-existent" segment has been opened in a session, mdnblocks()
will return wrong results, causing errors like "could not read block %u
in file" when accessing blocks. Closing the session, or the later
arrival of relevant invalidation messages, would "fix" the problem.
That, so far, was mostly harmless, because most segment accesses are
only done after an mdnblocks() call. But since 428b1d6b29ca we try to
open segments that might have been deleted, to trigger kernel writeback
from a backend's queue of recent writes.
To fix, check segment sizes in _mdfd_getseg() when opening previously
unopened segments. In practice this shouldn't imply a lot of additional
lseek() calls, because mdnblocks() will most of the time already have
opened all relevant segments.
This commit also fixes a second problem, namely that
_mdfd_getseg(EXTENSION_RETURN_NULL) extends files during recovery, which is not
desirable for the mdwriteback() case. Add EXTENSION_REALLY_RETURN_NULL,
which does not behave that way, and use it.
Reported-By: Thom Brown
Author: Andres Freund, Abhijit Menon-Sen
Reviewed-By: Robert Haas, Fabien Coelho
Discussion: CAA-aLv6Dp_ZsV-44QA-2zgkqWKQq=GedBX2dRSrWpxqovXK=Pg@mail.gmail.com
Fixes: 428b1d6b29ca599c5700d4bc4f4ce4c5880369bf
Andres Freund [Sun, 24 Apr 2016 02:18:00 +0000 (19:18 -0700)]
Emit invalidations to standby for transactions without xid.
So far, when a transaction with pending invalidations, but without an
assigned xid, committed, we simply ignored those invalidation
messages. That's problematic, because those are actually sent for a
reason.
Known symptoms of this include that existing sessions on a hot-standby
replica sometimes fail to notice new concurrently built indexes and
visibility map updates.
The solution is to WAL log such invalidations in transactions without an
xid. We considered the alternative of force-assigning an xid, but that'd be
problematic for vacuum, which might be run on systems with few xids.
Important: This adds a new WAL record, but as the patch has to be
back-patched, we can't bump the WAL page magic. This means that standbys
have to be updated before primaries; otherwise
"PANIC: standby_redo: unknown op code 32" errors can be encountered.
Reported-By: Васильев Дмитрий, Masahiko Sawada
Discussion:
CAB-SwXY6oH=9twBkXJtgR4UC1NqT-vpYAtxCseME62ADwyK5OA@mail.gmail.com
CAD21AoDpZ6Xjg=gFrGPnSn4oTRRcwK1EBrWCq9OqOHuAcMMC=w@mail.gmail.com
Impose a full barrier in generic-xlc.h atomics functions.
pg_atomic_compare_exchange_*_impl() were providing only the semantics of
an acquire barrier. Buildfarm members hornet and mandrill revealed this
deficit beginning with commit 008608b9d51061b1f598c197477b3dc7be9c4a64.
While we have no report of symptoms in 9.5, we can't rule out the
possibility of certain compilers, hardware, or extension code relying on
these functions' specified barrier semantics. Back-patch to 9.5, where
commit b64d92f1a5602c55ee8b27a7ac474f03b7aee340 introduced atomics.
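As an illustration of full-barrier compare-and-exchange semantics (using the
GCC/Clang builtin for the sketch, not the xlc intrinsics that generic-xlc.h
actually wraps):

    #include <stdbool.h>
    #include <stdint.h>

    static inline bool
    cas_u32_full_barrier(volatile uint32_t *ptr, uint32_t *expected,
                         uint32_t newval)
    {
        /* __ATOMIC_SEQ_CST on both the success and failure paths gives
         * full-barrier semantics, not merely an acquire barrier. */
        return __atomic_compare_exchange_n(ptr, expected, newval, false,
                                           __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
    }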
Tom Lane [Tue, 26 Apr 2016 22:52:17 +0000 (18:52 -0400)]
Add a --brief option to git_changelog.
In commit c0b050192, Andres introduced the idea of including one-line
commit references in our major release notes. Teach git_changelog to
emit a (lightly adapted) version of that format, so that we don't
have to laboriously add it to the notes after the fact. The default
output isn't changed, since I anticipate still using that for minor
release notes.
Tom Lane [Tue, 26 Apr 2016 16:43:03 +0000 (12:43 -0400)]
Fix order of shutdown cleanup operations in PostgresNode.pm.
Previously, database clusters created by a TAP test were shut down by
DESTROY methods attached to the PostgresNode objects representing them.
The trouble with that is that if the objects survive into the final global
destruction phase (which they do), Perl executes the DESTROY methods in an
unspecified order. Thus, the order of shutdown of multiple clusters was
indeterminate, which might lead to not-very-reproducible errors getting
logged (eg from a slave whose master might or might not get killed first).
Worse, the File::Temp objects representing the temporary PGDATA directories
might get destroyed before the PostgresNode objects, resulting in attempts
to delete PGDATA directories that still have live servers in them. On
Windows, this would lead to directory deletion failures; on Unix, it
usually had no effects worse than erratic "could not open temporary
statistics file "pg_stat/global.tmp": No such file or directory" log
messages.
While none of this would affect the reported result of the TAP test, which
is already determined, it could be very confusing when one is trying to
understand from the logs what went wrong with a failed test.
To fix, do the postmaster shutdowns in an END block rather than at object
destruction time. The END block will execute at a well-defined (and
reasonable) time during script termination, and it will stop the
postmasters in order of PostgresNode object creation. (Perhaps we should
change that to be reverse order of creation, but the main point here is
that we now have control which we did not before.) Use "pg_ctl stop", not
an asynchronous kill(SIGQUIT), so that we wait for the postmasters to shut
down before proceeding with directory deletion.
Deletion of temporary directories still happens in an unspecified order
during global destruction, but I can see no reason to care about that
once the postmasters are stopped.
Tom Lane [Tue, 26 Apr 2016 15:24:15 +0000 (11:24 -0400)]
Yet more portability hacking for degree-based trig functions.
The true explanation for Peter Eisentraut's report of inexact asind results
seems to be that (a) he's compiling into x87 instruction set, which uses
wider-than-double float registers, plus (b) the library function asin() on
his platform returns a result that is wider than double and is not rounded
to double width. To fix, we have to force the function's result to be
rounded comparably to what happened to the scaling constant asin_0_5.
Experimentation suggests that storing it into a volatile local variable is
the least ugly way of making that happen. Although only asin() is known to
exhibit an observable inexact result, we'd better do this in all the places
where we're hoping to get an exact result by scaling.
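The trick looks roughly like this (a sketch; the real code is in float.c):

    #include <math.h>

    static double
    asin_rounded_to_double(double x)
    {
        /* Storing into a volatile local forces the result out of any wider
         * x87 register and rounds it to double width, matching how the
         * scaling constant asin_0_5 was rounded. */
        volatile double result = asin(x);

        return result;
    }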
Robert Haas [Tue, 26 Apr 2016 12:31:38 +0000 (08:31 -0400)]
Enable parallel query by default.
Change max_parallel_degree default from 0 to 2. It is possible that
this is not a good idea, or that we should go with 1 worker rather
than 2, but we won't find out without trying it. Along the way,
reword the documentation for max_parallel_degree a little bit to
hopefully make it more clear.
Tom Lane [Mon, 25 Apr 2016 19:21:04 +0000 (15:21 -0400)]
New method for preventing compile-time calculation of degree constants.
Commit 65abaab547a5758b tried to prevent the scaling constants used in
the degree-based trig functions from being precomputed at compile time,
because some compilers do that with functions that don't yield results
identical-to-the-last-bit to what you get at runtime. A report from
Peter Eisentraut suggests that some recent compilers are smart enough
to see through that trick, though. Instead, let's put the inputs to
these calculations into non-const global variables, which should be a
more reliable way of convincing the compiler that it can't assume that
they are compile-time constants. (If we really get desperate, we could
mark these variables "volatile", but I do not believe we should have to.)
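A sketch of the technique under discussion (variable and function names here
are illustrative, not necessarily those used in float.c):

    #include <math.h>

    /* Deliberately not const, so the compiler cannot fold computations
     * that use them at compile time. */
    static double degree_one_half = 0.5;
    static double degree_thirty = 30.0;

    static double asin_0_5;     /* computed once at runtime */

    static void
    init_degree_constants(void)
    {
        asin_0_5 = asin(degree_one_half);
    }

    static double
    asind_scaled(double x)
    {
        /* Both asin() calls happen at runtime with identical rounding. */
        return (asin(x) / asin_0_5) * degree_thirty;
    }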
Tom Lane [Mon, 25 Apr 2016 16:28:49 +0000 (12:28 -0400)]
Try harder to detect a port conflict in PostgresNode.pm.
Commit fab84c7787f25756 tried to get away without doing an actual bind(),
but buildfarm results show that that doesn't get the job done. So we must
really bind to the target port --- and at least on my Linux box, we need a
listen() as well, or conflicts won't be detected. We rely on SO_REUSEADDR
to prevent problems from starting a postmaster on the socket immediately
after we've bound to it in the test code. (There may be platforms where
that doesn't work too well. But fortunately, we only really care whether
this works on Windows, and there the default behavior should be OK.)
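PostgresNode.pm does this in Perl; the equivalent probe, sketched in C, looks
like the following (illustrative only, with SO_REUSEADDR set as described):

    #include <stdbool.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static bool
    port_is_free(int port)
    {
        int                 sock = socket(AF_INET, SOCK_STREAM, 0);
        int                 one = 1;
        struct sockaddr_in  addr;
        bool                ok;

        if (sock < 0)
            return false;

        /* Let a postmaster bind the port immediately after we release it. */
        setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(port);

        /* Both bind() and listen() must succeed for the port to count as free. */
        ok = bind(sock, (struct sockaddr *) &addr, sizeof(addr)) == 0 &&
             listen(sock, 1) == 0;

        close(sock);
        return ok;
    }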
Tom Lane [Sun, 24 Apr 2016 19:31:36 +0000 (15:31 -0400)]
Improve PostgresNode.pm's logic for detecting already-in-use ports.
Buildfarm members bowerbird and jacana have shown intermittent "could not
bind IPv4 socket" failures in the BinInstallCheck stage since mid-December,
shortly after commits 1caef31d9e550408 and 9821492ee417a591 changed the
logic for selecting which port to use in temporary installations. One
plausible explanation is that we are randomly selecting ports that are
already in use for some non-Postgres purpose. Although the code tried
to defend against already-in-use ports, it used pg_isready to probe
the port, which is quite unhelpful: if some non-Postgres server responds
at the given address, pg_isready will generally say "no response",
leading to exactly the wrong conclusion about whether the port is free.
Instead, let's use a simple TCP connect() call to see if anything answers
without making assumptions about what it is. Note that this means there's
no direct check for a conflicting Unix socket, but that should be okay
because there should be no other Unix sockets in use in the temporary
socket directory created for a test run.
This is only a partial solution for the TCP case, since if the port number
is in use for an outgoing connection rather than a listening socket, we'll
fail to detect that. We could try to bind() to the proposed port as a
means of detecting that case, but that would introduce its own failure
modes, since the system might consider the address to remain reserved for
some period of time after we drop the bound socket. Close study of the
errors returned by bowerbird and jacana suggests that what we're seeing
there may be conflicts with listening not outgoing sockets, so let's try
this and see if it improves matters. It's certainly better than what's
there now, in any case.
Michael Paquier, adjusted by me to work on non-Windows as well as Windows
Andres Freund [Sun, 24 Apr 2016 19:26:55 +0000 (12:26 -0700)]
Fix documentation & config inconsistencies around 428b1d6b2.
Several issues:
1) checkpoint_flush_after doc and code disagreed about the default
2) new GUCs were missing from postgresql.conf.sample
3) Outdated source-code comment about bgwriter_flush_after's default
4) Sub-optimal categories assigned to new GUCs
5) Docs suggested backend_flush_after is PGC_SIGHUP, but it's PGC_USERSET.
6) Spell out int as integer in the docs, as done elsewhere
Reported-By: Magnus Hagander, Fujii Masao
Discussion: CAHGQGwETyTG5VYQQ5C_srwxWX7RXvFcD3dKROhvAWWhoSBdmZw@mail.gmail.com
Tom Lane [Sat, 23 Apr 2016 20:53:15 +0000 (16:53 -0400)]
Rename strtoi() to strtoint().
NetBSD has seen fit to invent a libc function named strtoi(), which
conflicts with the long-established static functions of the same name in
datetime.c and ecpg's interval.c. While muttering darkly about intrusions
on application namespace, we'll rename our functions to avoid the conflict.
Back-patch to all supported branches, since this would affect attempts
to build any of them on recent NetBSD.
Tom Lane [Fri, 22 Apr 2016 15:54:23 +0000 (11:54 -0400)]
Convert contrib/seg's bool-returning SQL functions to V1 call convention.
It appears that we can no longer get away with using V0 call convention
for bool-returning functions in newer versions of MSVC. The compiler
seems to generate code that doesn't clear the higher-order bits of the
result register, causing the bool result Datum to often read as "true"
when "false" was intended. This is not very surprising, since the
function thinks it's returning a bool-width result but fmgr_oldstyle
assumes that V0 functions return "char *"; what's surprising is that
that hack worked for so long on so many platforms.
The only functions of this description in core+contrib are in contrib/seg,
which we'd intentionally left mostly in V0 style to serve as a warning
canary if V0 call convention breaks. We could imagine hacking things
so that they're still V0 (we'd have to redeclare the bool-returning
functions as returning some suitably wide integer type, like size_t,
at the C level). But on the whole it seems better to convert 'em to V1.
We can still leave the pointer- and int-returning functions in V0 style,
so that the test coverage isn't gone entirely.
Back-patch to 9.5, since our intention is to support VS2015 in 9.5
and later. There's no SQL-level change in the functions' behavior
so back-patching should be safe enough.
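For reference, a V1-convention bool-returning function has roughly this shape
(seg_same() is one of the converted contrib/seg functions; the body here is a
placeholder):

    #include "postgres.h"
    #include "fmgr.h"

    PG_FUNCTION_INFO_V1(seg_same);

    Datum
    seg_same(PG_FUNCTION_ARGS)
    {
        /* Under V1 the result goes through PG_RETURN_BOOL(), which builds a
         * properly sized Datum, instead of fmgr_oldstyle reinterpreting a
         * char * return value. */
        PG_RETURN_BOOL(true);   /* the real function compares its arguments */
    }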
Tom Lane [Fri, 22 Apr 2016 03:17:36 +0000 (23:17 -0400)]
Fix unexpected side-effects of operator_precedence_warning.
The implementation of that feature involves injecting nodes into the
raw parsetree where explicit parentheses appear. Various places in
parse_expr.c that test to see "is this child node of type Foo" need to
look through such nodes, else we'll get different behavior when
operator_precedence_warning is on than when it is off. Note that we only
need to handle this when testing untransformed child nodes, since the
AEXPR_PAREN nodes will be gone anyway after transformExprRecurse.
Per report from Scott Ribe and additional code-reading. Back-patch
to 9.5 where this feature was added.
Tom Lane [Fri, 22 Apr 2016 00:05:58 +0000 (20:05 -0400)]
Fix planner failure with full join in RHS of left join.
Given a left join containing a full join in its righthand side, with
the left join's joinclause referencing only one side of the full join
(in a non-strict fashion, so that the full join doesn't get simplified),
the planner could fail with "failed to build any N-way joins" or related
errors. This happened because the full join was seen as overlapping the
left join's RHS, and then recent changes within join_is_legal() caused
that function to conclude that the full join couldn't validly be formed.
Rather than try to rejigger join_is_legal() yet more to allow this,
I think it's better to fix initsplan.c so that the required join order
is explicit in the SpecialJoinInfo data structure. The previous coding
there essentially ignored full joins, relying on the fact that we don't
flatten them in the joinlist data structure to preserve their ordering.
That's sufficient to prevent a wrong plan from being formed, but as this
example shows, it's not sufficient to ensure that the right plan will
be formed. We need to work a bit harder to ensure that the right plan
looks sane according to the SpecialJoinInfos.
Per bug #14105 from Vojtech Rylko. This was apparently induced by
commit 8703059c6 (though now that I've seen it, I wonder whether there
are related cases that could have failed before that); so back-patch
to all active branches. Unfortunately, that patch also went into 9.0,
so this bug is a regression that won't be fixed in that branch.
Tom Lane [Thu, 21 Apr 2016 20:58:47 +0000 (16:58 -0400)]
Improve TranslateSocketError() to handle more Windows error codes.
The coverage was rather lean for cases that bind() or listen() might
return. Add entries for everything that there's a direct equivalent
for in the set of Unix errnos that elog.c has heard of.
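The added entries follow the existing pattern in TranslateSocketError(),
roughly like this fragment (the exact set of codes added is not reproduced
here):

    switch (WSAGetLastError())
    {
        case WSAEADDRINUSE:
            errno = EADDRINUSE;
            break;
        case WSAEADDRNOTAVAIL:
            errno = EADDRNOTAVAIL;
            break;
        case WSAEOPNOTSUPP:
            errno = EOPNOTSUPP;
            break;
        /* ... more mappings ... */
        default:
            errno = EINVAL;
            break;
    }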
Tom Lane [Thu, 21 Apr 2016 20:16:19 +0000 (16:16 -0400)]
Remove dead code in win32.h.
There's no longer a need for the MSVC-version-specific code stanza that
forcibly redefines errno code symbols, because since commit 73838b52 we're
unconditionally redefining them in the stanza before this one anyway.
Now it's merely confusing and ugly, so get rid of it; and improve the
comment that explains what's going on here.
Although this is just cosmetic, back-patch anyway since I'm intending
to back-patch some less-cosmetic changes in this same hunk of code.
Tom Lane [Thu, 21 Apr 2016 18:33:34 +0000 (14:33 -0400)]
PGDLLIMPORT-ify old_snapshot_threshold.
Revert commit 7cb1db1d9599f0a09d6920d2149d956ef6d88b0e, which represented
a misunderstanding of the problem (if snapmgr.h weren't already included
in bufmgr.h, things wouldn't compile anywhere). Instead install what
I think is the real fix.
Tom Lane [Thu, 21 Apr 2016 18:20:18 +0000 (14:20 -0400)]
Fix ruleutils.c's dumping of ScalarArrayOpExpr containing an EXPR_SUBLINK.
When we shoehorned "x op ANY (array)" into the SQL syntax, we created a
fundamental ambiguity as to the proper treatment of a sub-SELECT on the
righthand side: perhaps what's meant is to compare x against each row of
the sub-SELECT's result, or perhaps the sub-SELECT is meant as a scalar
sub-SELECT that delivers a single array value whose members should be
compared against x. The grammar resolves it as the former case whenever
the RHS is a select_with_parens, making the latter case hard to reach ---
but you can get at it, with tricks such as attaching a no-op cast to the
sub-SELECT. Parse analysis would throw away the no-op cast, leaving a
parsetree with an EXPR_SUBLINK SubLink directly under a ScalarArrayOpExpr.
ruleutils.c was not clued in on this fine point, and would naively emit
"x op ANY ((SELECT ...))", which would be parsed as the first alternative,
typically leading to errors like "operator does not exist: text = text[]"
during dump/reload of a view or rule containing such a construct. To fix,
emit a no-op cast when dumping such a parsetree. This might well be
exactly what the user wrote to get the construct accepted in the first
place; and even if she got there with some other dodge, it is a valid
representation of the parsetree.
Per report from Karl Czajkowski. He mentioned only a case involving
RLS policies, but actually the problem is very old, so back-patch to
all supported branches.
Robert Haas [Thu, 21 Apr 2016 18:02:15 +0000 (14:02 -0400)]
Prevent possible crash reading pg_stat_activity.
Also, avoid reading PGPROC's wait_event field twice, once for the wait
event and again for the wait_event_type, because the value might change
in the middle.
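The read-once pattern looks roughly like this fragment
(pgstat_get_wait_event() and pgstat_get_wait_event_type() are the real
accessors; the surrounding variable and field names are illustrative):

    uint32      wait_event_info = proc->wait_event_info;   /* read the field once */

    const char *wait_event_type = pgstat_get_wait_event_type(wait_event_info);
    const char *wait_event = pgstat_get_wait_event(wait_event_info);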
That commit increased all shared memory allocations to the next higher
multiple of PG_CACHE_LINE_SIZE, but it didn't ensure that allocation
started on a cache line boundary. It also failed to remove a couple
other pieces of now-useless code.
BUFFERALIGN() is perhaps obsolete at this point, and likely should be
removed at some point, too, but that seems like it can be left to a
future cleanup.
Mistakes all pointed out by Andres Freund. The patch is mine, with
a few extra assertions which I adopted from his version of this fix.
Robert Haas [Thu, 21 Apr 2016 14:46:09 +0000 (10:46 -0400)]
Allow queries submitted by postgres_fdw to be canceled.
This fixes a problem which is not new, but with the advent of direct
foreign table modification in 0bf3ae88af330496517722e391e7c975e6bad219,
it's somewhat more likely to be annoying than previously. So,
arrange for a local query cancelation to propagate to the remote side.
Michael Paquier, reviewed by Etsuro Fujita. Original report by
Thom Brown.
Kevin Grittner [Thu, 21 Apr 2016 13:40:08 +0000 (08:40 -0500)]
Inline initial comparisons in TestForOldSnapshot()
Even with old_snapshot_threshold = -1 (which disables the "snapshot
too old" feature), performance regressions were seen at moderate to
high concurrency. For example, a one-socket, four-core system
running 200 connections at saturation could see up to a 2.3%
regression, with larger regressions possible on NUMA machines.
By inlining the early (smaller, faster) tests in the
TestForOldSnapshot() function, the i7 case dropped to a 0.2%
regression, which could easily just be noise, and is clearly an
improvement. Further testing will show whether more is needed.
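The reworked header looks roughly like the sketch below: the cheap tests are
inlined, and only when they all pass do we call the out-of-line worker (names
follow bufmgr.h, but treat the details as approximate):

    static inline void
    TestForOldSnapshot(Snapshot snapshot, Relation relation, Page page)
    {
        /* Cheap, inlined checks come first; the slow path is out of line. */
        if (old_snapshot_threshold >= 0 &&
            snapshot != NULL &&
            (snapshot->satisfies == HeapTupleSatisfiesMVCC ||
             snapshot->satisfies == HeapTupleSatisfiesToast) &&
            !XLogRecPtrIsInvalid(snapshot->lsn) &&
            PageGetLSN(page) > snapshot->lsn)
            TestForOldSnapshot_impl(snapshot, relation, page);
    }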
Robert Haas [Thu, 21 Apr 2016 03:34:07 +0000 (23:34 -0400)]
postgres_fdw: Don't push down certain full joins.
If there's a filter condition on either side of a full outer join,
it is neither correct to attach it to the join's ON clause nor to
throw it into the toplevel WHERE clause. Just don't push down the
join in that case.
To maximize the number of cases where we can still push down full
joins, push inner join conditions into the ON clause at the first
opportunity rather than postponing them to the top-level WHERE
clause. This produces nicer SQL, anyway.
Tom Lane [Thu, 21 Apr 2016 03:48:13 +0000 (23:48 -0400)]
Honor PGCTLTIMEOUT environment variable for pg_regress' startup wait.
In commit 2ffa86962077c588 we made pg_ctl recognize an environment variable
PGCTLTIMEOUT to set the default timeout for starting and stopping the
postmaster. However, pg_regress uses pg_ctl only for the "stop" end of
that; it has bespoke code for starting the postmaster, and that code has
historically had a hard-wired 60-second timeout. Further buildfarm
experience says it'd be a good idea if that timeout were also controlled
by PGCTLTIMEOUT, so let's make it so. Like the previous patch, back-patch
to all active branches.
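In pg_regress terms the change amounts to something like this (a sketch;
function and variable names are illustrative):

    #include <stdlib.h>

    static int
    get_postmaster_start_timeout(void)
    {
        const char *env_wait = getenv("PGCTLTIMEOUT");
        int         wait_seconds = 60;      /* historical hard-wired default */

        if (env_wait != NULL && atoi(env_wait) > 0)
            wait_seconds = atoi(env_wait);

        return wait_seconds;
    }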
Magnus Hagander [Wed, 20 Apr 2016 18:40:04 +0000 (14:40 -0400)]
Update backup documentation for new APIs
This includes the rest of the documentation that was not included
in 7117685. A larger restructure would still be wanted, but with
this commit the documentation of the new features is complete.
Tom Lane [Wed, 20 Apr 2016 18:25:15 +0000 (14:25 -0400)]
Fix memory leak and other bugs in ginPlaceToPage() & subroutines.
Commit 36a35c550ac114ca turned the interface between ginPlaceToPage and
its subroutines in gindatapage.c and ginentrypage.c into a royal mess:
page-update critical sections were started in one place and finished in
another place not even in the same file, and the very same subroutine
might return having started a critical section or not. Subsequent patches
band-aided over some of the problems with this design by making things
even messier.
One user-visible resulting problem is memory leaks caused by the need for
the subroutines to allocate storage that would survive until ginPlaceToPage
calls XLogInsert (as reported by Julien Rouhaud). This would not typically
be noticeable during retail index updates. It could be visible in a GIN
index build, in the form of memory consumption swelling to several times
the commanded maintenance_work_mem.
Another rather nasty problem is that in the internal-page-splitting code
path, we would clear the child page's GIN_INCOMPLETE_SPLIT flag well before
entering the critical section that it's supposed to be cleared in; a
failure in between would leave the index in a corrupt state. There were
also assorted coding-rule violations with little immediate consequence but
possible long-term hazards, such as beginning an XLogInsert sequence before
entering a critical section, or calling elog(DEBUG) inside a critical
section.
To fix, redefine the API between ginPlaceToPage() and its subroutines
by splitting the subroutines into two parts. The "beginPlaceToPage"
subroutine does what can be done outside a critical section, including
full computation of the result pages into temporary storage when we're
going to split the target page. The "execPlaceToPage" subroutine is called
within a critical section established by ginPlaceToPage(), and it handles
the actual page update in the non-split code path. The critical section,
as well as the XLOG insertion call sequence, are both now always started
and finished in ginPlaceToPage(). Also, make ginPlaceToPage() create and
work in a short-lived memory context to eliminate the leakage problem.
(Since a short-lived memory context had been getting created in the most
common code path in the subroutines, this shouldn't cause any noticeable
performance penalty; we're just moving the overhead up one call level.)
In passing, fix a bunch of comments that had gone unmaintained throughout
all this klugery.
Kevin Grittner [Wed, 20 Apr 2016 13:31:19 +0000 (08:31 -0500)]
Revert no-op changes to BufferGetPage()
The reverted changes were intended to force a choice of whether any
newly-added BufferGetPage() calls needed to be accompanied by a
test of the snapshot age, to support the "snapshot too old"
feature. Such an accompanying test is needed in about 7% of the
cases, where the page is being used as part of a scan rather than
positioning for other purposes (such as DML or vacuuming). The
additional effort required for back-patching, and the doubt whether
the intended benefit would really be there, have indicated it is
best just to rely on developers to do the right thing based on
comments and existing usage, as we do with many other conventions.
This change should have little or no effect on generated executable
code.
Motivated by the back-patching pain of Tom Lane and Robert Haas
Tom Lane [Tue, 19 Apr 2016 20:47:21 +0000 (16:47 -0400)]
Improve regression tests for degree-based trigonometric functions.
Print the actual value of each function result that's expected to be exact,
rather than merely emitting a NULL if it's not right. Although we print
these with extra_float_digits = 3, we should not trust that the platform
will produce a result visibly different from the expected value if it's off
only in the last place; hence, also include comparisons against the exact
values as before. This is a bit bulkier and uglier than the previous
printout, but it will provide more information and be easier to interpret
if there's a test failure.
Tom Lane [Mon, 18 Apr 2016 22:05:56 +0000 (18:05 -0400)]
Make partition-lock-release coding more transparent in BufferAlloc().
Coverity complained that oldPartitionLock was possibly dereferenced after
having been set to NULL. That actually can't happen, because we'd only use
it if (oldFlags & BM_TAG_VALID) is true. But nonetheless Coverity is
justified in complaining, because at line 1275 we actually overwrite
oldFlags, and then still expect its BM_TAG_VALID bit to be a safe guide to
whether to release the oldPartitionLock. Thus, the code would be incorrect
if someone else had changed the buffer's BM_TAG_VALID flag meanwhile.
That should not happen, since we hold pin on the buffer throughout this
sequence, but it's starting to look like a rather shaky chain of logic.
And there's no need for such assumptions, because we can simply replace
the (oldFlags & BM_TAG_VALID) tests with (oldPartitionLock != NULL),
which has identical results and makes it plain to all comers that we don't
dereference a null pointer. A small side benefit is that the range of
liveness of oldFlags is greatly reduced, possibly allowing the compiler
to save a register.
This is just cleanup, not an actual bug fix, so there seems no need
for a back-patch.
Tom Lane [Mon, 18 Apr 2016 17:33:06 +0000 (13:33 -0400)]
Further reduce the number of semaphores used under --disable-spinlocks.
Per discussion, there doesn't seem to be much value in having
NUM_SPINLOCK_SEMAPHORES set to 1024: under any scenario where you are
running more than a few backends concurrently, you really had better have a
real spinlock implementation if you want tolerable performance. And 1024
semaphores is a sizable fraction of the system-wide SysV semaphore limit
on many platforms. Therefore, reduce this setting's default value to 128
to make it less likely to cause out-of-semaphores problems.