Robert Haas [Wed, 6 Dec 2017 13:49:30 +0000 (08:49 -0500)]
Report failure to start a background worker.
When a worker is flagged as BGW_NEVER_RESTART and we fail to start it,
or if it is not marked BGW_NEVER_RESTART but is terminated before
startup succeeds, what BgwHandleStatus should be reported? The
previous code really hadn't considered this possibility (as indicated
by the comments which ignore it completely) and would typically return
BGWH_NOT_YET_STARTED, but that's not a good answer, because then
there's no way for code using GetBackgroundWorkerPid() to tell the
difference between a worker that has not started but will start
later and a worker that has not started and will never be started.
So, when this case happens, return BGWH_STOPPED instead. Update the
comments to reflect this.
The preceding fix by itself is insufficient to fix the problem,
because the old code also didn't send a notification to the process
identified in bgw_notify_pid when startup failed. That might've
been technically correct under the theory that the status of the
worker was BGWH_NOT_YET_STARTED, because the status would indeed not
change when the worker failed to start, but now that we're more
usefully reporting BGWH_STOPPED, a notification is needed.
Without these fixes, code which starts background workers and then
uses the recommended APIs to wait for those background workers to
start would hang indefinitely if the postmaster failed to fork a
worker.
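For illustration only, a sketch of how calling code might use the recommended
APIs once this fix is in place (the extension library and entry-point names
here are placeholders, not anything from this commit):
    BackgroundWorker worker;
    BackgroundWorkerHandle *handle;
    BgwHandleStatus status;
    pid_t       pid;

    memset(&worker, 0, sizeof(worker));
    worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
    worker.bgw_start_time = BgWorkerStart_ConsistentState;
    worker.bgw_restart_time = BGW_NEVER_RESTART;
    snprintf(worker.bgw_name, BGW_MAXLEN, "example worker");
    snprintf(worker.bgw_library_name, BGW_MAXLEN, "my_extension");     /* placeholder */
    snprintf(worker.bgw_function_name, BGW_MAXLEN, "my_worker_main");  /* placeholder */
    worker.bgw_notify_pid = MyProcPid;

    if (!RegisterDynamicBackgroundWorker(&worker, &handle))
        ereport(ERROR,
                (errmsg("could not register background worker")));

    status = WaitForBackgroundWorkerStartup(handle, &pid);
    if (status == BGWH_STOPPED)
    {
        /* with this fix, a failed fork lands here instead of hanging */
        ereport(ERROR,
                (errmsg("background worker failed to start")));
    }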
Robert Haas [Tue, 5 Dec 2017 19:35:42 +0000 (14:35 -0500)]
Fix accumulation of parallel worker instrumentation.
When a Gather or Gather Merge node is started and stopped multiple
times, the old code wouldn't reset the shared state between executions,
potentially resulting in dramatically inflated instrumentation data
for nodes beneath it. (The per-worker instrumentation ended up OK,
I think, but the overall totals were inflated.)
Report by hubert depesz lubaczewski. Analysis and fix by Amit Kapila,
reviewed and tweaked a bit by me.
Tom Lane [Mon, 4 Dec 2017 22:02:52 +0000 (17:02 -0500)]
Clean up assorted messiness around AllocateDir() usage.
This patch fixes a couple of low-probability bugs that could lead to
reporting an irrelevant errno value (and hence possibly a wrong SQLSTATE)
concerning directory-open or file-open failures. It also fixes places
where we took shortcuts in reporting such errors, either by using elog
instead of ereport or by using ereport but forgetting to specify an
errcode. And it eliminates a lot of just plain redundant error-handling
code.
In service of all this, export fd.c's formerly-static function
ReadDirExtended, so that external callers can make use of the coding
pattern
    dir = AllocateDir(path);
    while ((de = ReadDirExtended(dir, path, LOG)) != NULL)
if they'd like to treat directory-open failures as mere LOG conditions
rather than errors. Also fix FreeDir to be a no-op if we reach it
with dir == NULL, as such a coding pattern would cause.
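For reference, a slightly fuller sketch of the intended usage (the loop body
here is illustrative only):
    DIR        *dir;
    struct dirent *de;

    dir = AllocateDir(path);
    while ((de = ReadDirExtended(dir, path, LOG)) != NULL)
    {
        if (strcmp(de->d_name, ".") == 0 ||
            strcmp(de->d_name, "..") == 0)
            continue;
        /* ... process path/de->d_name here ... */
    }
    FreeDir(dir);       /* harmless even if AllocateDir failed and dir is NULL */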
Then, remove code at many call sites that was throwing an error or log
message for AllocateDir failure, as ReadDir or ReadDirExtended can handle
that job just fine. Aside from being a net code savings, this gets rid of
a lot of not-quite-up-to-snuff reports, as mentioned above. (In some
places these changes result in replacing a custom error message such as
"could not open tablespace directory" with more generic wording "could not
open directory", but it was agreed that the custom wording buys little as
long as we report the directory name.) In some other call sites where we
can't just remove code, change the error reports to be fully
project-style-compliant.
Also reorder code in restoreTwoPhaseData that was acquiring a lock
between AllocateDir and ReadDir; in the unlikely but surely not
impossible case that LWLockAcquire changes errno, AllocateDir failures
would be misreported. There is no great value in opening the directory
before acquiring TwoPhaseStateLock, so just do it in the other order.
Also fix CheckXLogRemoved to guarantee that it preserves errno,
as quite a number of call sites are implicitly assuming. (Again,
it's unlikely but I think not impossible that errno could change
during a SpinLockAcquire. If so, this function was broken for its
own purposes as well as breaking callers.)
And change a few places that were using not-per-project-style messages,
such as "could not read directory" when "could not open directory" is
more correct.
Back-patch the exporting of ReadDirExtended, in case we have occasion
to back-patch some fix that makes use of it; it's not needed right now
but surely making it global is pretty harmless. Also back-patch the
restoreTwoPhaseData and CheckXLogRemoved fixes. The rest of this is
essentially cosmetic and need not get back-patched.
Michael Paquier, with a bit of additional work by me
Tom Lane [Mon, 4 Dec 2017 16:51:43 +0000 (11:51 -0500)]
Support boolean columns in functional-dependency statistics.
There's no good reason that the multicolumn stats stuff shouldn't work on
booleans. But it looked only for "Var = pseudoconstant" clauses, and it
will seldom find those for boolean Vars, since earlier phases of planning
will fold "boolvar = true" or "boolvar = false" to just "boolvar" or
"NOT boolvar" respectively. Improve dependencies_clauselist_selectivity()
to recognize such clauses as equivalent to equality restrictions.
This fixes a failure of the extended stats mechanism to apply in a case
reported by Vitaliy Garnashevich. It's not a complete solution to his
problem because the bitmap-scan costing code isn't consulting extended
stats where it should, but that's surely an independent issue.
In passing, improve some comments, get rid of a NumRelids() test that's
redundant with the preceding bms_membership() test, and fix
dependencies_clauselist_selectivity() so that estimatedclauses actually
is a pure output argument as stated by its API contract.
Noah Misch [Thu, 30 Nov 2017 08:57:22 +0000 (00:57 -0800)]
Fix non-GNU makefiles for AIX make.
Invoking the Makefile without an explicit target was building every
possible target instead of just the "all" target. Back-patch to 9.3
(all supported versions).
Robert Haas [Tue, 28 Nov 2017 17:05:52 +0000 (12:05 -0500)]
Fix ReinitializeParallelDSM to tolerate finding no error queues.
Commit d4663350646ca0c069a36d906155a0f7e3372eb7 changed things so
that shm_toc_lookup would fail with an error rather than silently
returning NULL in the hope that such failures would be reported
in a useful way rather than via a system crash. However, it
overlooked the fact that the lookup of PARALLEL_KEY_ERROR_QUEUE
in ReinitializeParallelDSM is expected to fail when no DSM segment
was created in the first place; in that case, we end up with a
backend-private memory segment that still contains an entry for
PARALLEL_KEY_FIXED but no others. Consequently a benign failure
to initialize parallelism can escalate into an elog(ERROR);
repair.
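The tolerant lookup boils down to something like this (a sketch; local names
are approximate, and it assumes the three-argument shm_toc_lookup() introduced
by the commit cited above):
    char       *error_queue_space;

    error_queue_space = shm_toc_lookup(pcxt->toc, PARALLEL_KEY_ERROR_QUEUE, true);
    if (error_queue_space != NULL)
    {
        /* reinitialize each worker's error queue, as before */
    }
    /* else no DSM segment was ever created, so there is nothing to reset */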
Robert Haas [Tue, 28 Nov 2017 16:39:16 +0000 (11:39 -0500)]
Teach bitmap heap scan to cope with absence of a DSA.
If we have a plan that uses parallelism but are unable to execute it
using parallelism, for example due to a lack of available DSM
segments, then the EState's es_query_dsa will be NULL. Parallel
bitmap heap scan needs to fall back to a non-parallel scan in such
cases.
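The fallback test is essentially just (sketch):
    EState     *estate = node->ss.ps.state;

    if (estate->es_query_dsa == NULL)
    {
        /* no DSA available: set up a private iterator, exactly as in a
         * non-parallel bitmap heap scan, instead of touching shared state */
    }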
Tom Lane [Tue, 28 Nov 2017 00:22:08 +0000 (19:22 -0500)]
Fix assorted syscache lookup sloppiness in partition-related code.
heap_drop_with_catalog and ATExecDetachPartition neglected to check for
SearchSysCache failures, as noted in bugs #14927 and #14928 from Pan Bian.
Such failures are pretty unlikely, since we should already have some sort
of lock on the rel at these points, but it's neither a good idea nor
per project style to omit a check for failure.
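For reference, the project-style check being added looks roughly like this
(illustrative, not the exact committed hunks):
    HeapTuple   tuple;

    tuple = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
    if (!HeapTupleIsValid(tuple))
        elog(ERROR, "cache lookup failed for relation %u", relid);
    /* ... use the tuple ... */
    ReleaseSysCache(tuple);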
Also, StorePartitionKey contained a syscache lookup that it never did
anything with, including never releasing the result. Presumably the
reason why we don't see refcount-leak complaints is that the lookup
always fails; but in any case it's pretty useless, so remove it.
All of these errors were evidently introduced by the relation
partitioning feature. Back-patch to v10 where that came in.
Tom Lane [Mon, 27 Nov 2017 22:53:56 +0000 (17:53 -0500)]
Fix creation of resjunk tlist entries for inherited mixed UPDATE/DELETE.
rewriteTargetListUD's processing is dependent on the relkind of the query's
target table. That was fine at the time it was made to act that way, even
for queries on inheritance trees, because all tables in an inheritance tree
would necessarily be plain tables. However, the 9.5 feature addition
allowing some members of an inheritance tree to be foreign tables broke the
assumption that rewriteTargetListUD's output tlist could be applied to all
child tables with nothing more than column-number mapping. This led to
visible failures if foreign child tables had row-level triggers, and would
also break in cases where child tables belonged to FDWs that used methods
other than CTID for row identification.
To fix, delay running rewriteTargetListUD until after the planner has
expanded inheritance, so that it is applied separately to the (already
mapped) tlist for each child table. We can conveniently call it from
preprocess_targetlist. Refactor associated code slightly to avoid the
need to heap_open the target relation multiple times during
preprocess_targetlist. (The APIs remain a bit ugly, particularly around
the point of which steps scribble on parse->targetList and which don't.
But avoiding such scribbling would require a change in FDW callback APIs,
which is more pain than it's worth.)
Also fix ExecModifyTable to ensure that "tupleid" is reset to NULL when
we transition from rows providing a CTID to rows that don't. (That's
really an independent bug, but it manifests in much the same cases.)
Add a regression test checking one manifestation of this problem, which
was that row-level triggers on a foreign child table did not work right.
Back-patch to 9.5 where the problem was introduced.
Etsuro Fujita, reviewed by Ildus Kurbangaliev and Ashutosh Bapat
Tom Lane [Sun, 26 Nov 2017 20:17:25 +0000 (15:17 -0500)]
Pad XLogReaderState's main_data buffer more aggressively.
Originally, we palloc'd this buffer just barely big enough to hold the
largest xlog record seen so far. It turns out that that can result in
valgrind complaints, because some compilers will emit code that assumes
it can safely fetch padding bytes at the end of a struct, and those
padding bytes were unallocated so far as aset.c was concerned. We can
fix that by MAXALIGN'ing the palloc request size, ensuring that it is big
enough to include any possible padding that might've been omitted from
the on-disk record.
An additional objection to the original coding is that it could result in
many repeated palloc cycles, in the worst case where we see a series of
gradually larger xlog records. We can ameliorate that cheaply by
imposing a minimum buffer size that's large enough for most xlog records.
BLCKSZ/2 was chosen after a bit of discussion.
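In sketch form, the sizing rule is (field names as in XLogReaderState;
"required" stands for the decoded record's main-data length):
    Size        newSize = MAXALIGN(Max(required, BLCKSZ / 2));

    if (state->main_data)
        pfree(state->main_data);
    state->main_data = palloc(newSize);
    state->main_data_bufsz = newSize;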
In passing, remove an obsolete comment in struct xl_heap_new_cid that the
combocid field is free due to alignment considerations. Perhaps that was
true at some point, but it's not now.
Joe Conway [Sun, 26 Nov 2017 17:50:00 +0000 (09:50 -0800)]
Make has_sequence_privilege support WITH GRANT OPTION
The various has_*_privilege() functions all support an optional
WITH GRANT OPTION added to the supported privilege types to test
whether the privilege is held with grant option. That is, all except
has_sequence_privilege() variations. Fix that.
Tom Lane [Sat, 25 Nov 2017 20:30:11 +0000 (15:30 -0500)]
Replace raw timezone source data with IANA's new compact format.
Traditionally IANA has distributed their timezone data in pure source
form, replete with extensive historical comments. As of release 2017c,
they've added a compact single-file format that omits comments and
abbreviates command keywords. This form is way shorter than the pure
source, even before considering its allegedly better compressibility.
Hence, let's distribute the data in that form rather than pure source.
I'm pushing this now, rather than at the next timezone database update,
so that it's easy to confirm that this data file produces compiled zic
output that's identical to what we were getting before.
Tom Lane [Sat, 25 Nov 2017 19:42:10 +0000 (14:42 -0500)]
Avoid formally-undefined use of memcpy() in hstoreUniquePairs().
hstoreUniquePairs() often called memcpy with equal source and destination
pointers. Although this is almost surely harmless in practice, it's
undefined according to the letter of the C standard. Some versions of
valgrind will complain about it, and some versions of libc as well
(cf. commit ad520ec4a). Tweak the code to avoid doing that.
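The tweak is just the usual guard against a self-copy, roughly (local names
approximate):
    if (res != ptr)
        memcpy(res, ptr, sizeof(Pairs));
memmove() would also be standard-conforming for overlapping (here, identical)
regions, but the guard avoids the call entirely in the common case.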
Noted by Tomas Vondra. Back-patch to all supported versions because
of the hazard of libc assertions.
Tom Lane [Sat, 25 Nov 2017 19:15:48 +0000 (14:15 -0500)]
Repair failure with SubPlans in multi-row VALUES lists.
When nodeValuesscan.c was written, it was impossible to have a SubPlan in
VALUES --- any sub-SELECT there would have to be uncorrelated and thereby
would produce an InitPlan instead. We therefore took a shortcut in the
logic that throws away a ValuesScan's per-row expression evaluation data
structures. This was broken by the introduction of LATERAL however; a
sub-SELECT containing a lateral reference produces a correlated SubPlan.
The cleanest fix for this would be to give up the optimization of
discarding the expression eval state. But that still seems pretty
unappetizing for long VALUES lists. It seems to work to just prevent
the subexpressions from hooking into the ValuesScan node's subPlan
list, so let's do that and see how well it works. (If this breaks,
due to additional connections between the subexpressions and the outer
query structures, we might consider compromises like throwing away data
only for VALUES rows not containing SubPlans.)
Per bug #14924 from Christian Duta. Back-patch to 9.3 where LATERAL
was introduced.
Tom Lane [Sat, 25 Nov 2017 16:48:09 +0000 (11:48 -0500)]
Improve planner's handling of set-returning functions in grouping columns.
Improve query_is_distinct_for() to accept SRFs in the targetlist when
we can prove distinctness from a DISTINCT clause. In that case the
de-duplication will surely happen after SRF expansion, so the proof
still works. Continue to punt in the case where we'd try to prove
distinctness from GROUP BY (or, in the future, source relations).
To do that, we'd have to determine whether the SRFs were in the
grouping columns or elsewhere in the tlist, and it still doesn't
seem worth the trouble. But this trivial change allows us to
recognize that "SELECT DISTINCT unnest(foo) FROM ..." produces
unique-ified output, which seems worth having.
Also, fix estimate_num_groups() to consider the possibility of SRFs in
the grouping columns. Its failure to do so was masked before v10 because
grouping_planner() scaled up plan rowcount estimates by the estimated SRF
multiplier after performing grouping. That doesn't happen anymore, which
is more correct, but it means we need an adjustment in the estimate for
the number of groups. Failure to do this leads to an underestimate for
the number of output rows of subqueries like "SELECT DISTINCT unnest(foo)"
compared to what 9.6 and earlier estimated, thus breaking plan choices
in some cases.
Per report from Dmitry Shalashov. Back-patch to v10 to avoid degraded
plan choices compared to previous releases.
Dean Rasheed [Fri, 24 Nov 2017 14:12:50 +0000 (14:12 +0000)]
RLS comment fixes.
The comments in get_policies_for_relation() say that CREATE POLICY
does not support defining restrictive policies. This is no longer
true, starting from PG10.
Dean Rasheed [Fri, 24 Nov 2017 12:00:37 +0000 (12:00 +0000)]
Doc: add a summary table to the CREATE POLICY docs.
This table summarizes which RLS policy expressions apply to each
command type, and whether they apply to the old or new tuples (or
both), which saves reading through a lot of text.
Rod Taylor, hacked on by me. Reviewed by Fabien Coelho.
Tom Lane [Fri, 24 Nov 2017 05:29:20 +0000 (00:29 -0500)]
Fix unstable regression test added by commits 59b71c6fe et al.
The query didn't really have a preferred index, leading to platform-
specific choices of which one to use. Adjust it to make sure tenk1_hundred
is always chosen.
Noah Misch [Fri, 24 Nov 2017 04:22:04 +0000 (20:22 -0800)]
Support linking with MinGW-built Perl.
This is necessary for ActivePerl 5.18 onwards and for Strawberry Perl.
It is not sufficient for 32-bit builds with newer Visual Studio; these
fail with error LNK2026. Back-patch to 9.3 (all supported versions).
Andres Freund [Fri, 24 Nov 2017 01:13:09 +0000 (17:13 -0800)]
Fix handling of NULLs returned by aggregate combine functions.
When strict aggregate combine functions, used in multi-stage/parallel
aggregation, returned NULL, we didn't check for that, invoking the
combine function with NULL the next round, despite it being strict.
The equivalent code invoking normal transition functions has a check
for that situation, which did not get copied in a7de3dc5c346. Fix the
bug by adding the equivalent check.
Based on a quick look I could not find any strict combine functions in
core actually returning NULL, and it doesn't seem very likely external
users have done so. So this isn't likely to have caused issues in
practice.
Reported-By: Andres Freund
Author: Andres Freund
Discussion: https://postgr.es/m/20171121033642.7xvmjqrl4jdaaat3@alap3.anarazel.de
Backpatch: 9.6, where parallel aggregation was introduced
Fujii Masao [Thu, 23 Nov 2017 07:46:42 +0000 (16:46 +0900)]
doc: mention wal_receiver_status_interval as GUC affecting logical rep worker.
The wal_receiver_timeout, wal_receiver_status_interval and
wal_retrieve_retry_interval configuration parameters all affect the logical
replication worker, but previously wal_receiver_status_interval was the
only one of them not documented as doing so.
Noah Misch [Thu, 23 Nov 2017 04:18:15 +0000 (20:18 -0800)]
Build src/test/isolation during "make" and "make install".
This hack closes a race condition in "make -j check-world" and "make -j
installcheck-world". Back-patch to v10, before which these parallel
invocations had worse problems.
Robert Haas [Tue, 21 Nov 2017 18:56:24 +0000 (13:56 -0500)]
Provide for forward compatibility with future minor protocol versions.
Previously, any attempt to request a 3.x protocol version other than
3.0 would lead to a hard connection failure, which made the minor
protocol version really no different from the major protocol version
and precluded gentle protocol version breaks. Instead, when the
client requests a 3.x protocol version where x is greater than 0, send
the new NegotiateProtocolVersion message to convey that we support
only 3.0. This makes it possible to introduce new minor protocol
versions without requiring a connection retry when the server is
older.
In addition, if the startup packet includes name/value pairs where
the name starts with "_pq_.", assume that those are protocol options,
not GUCs. Include those we don't support (i.e. all of them, at
present) in the NegotiateProtocolVersion message so that the client
knows they were not understood. This makes it possible for the
client to request previously-unsupported features without bumping
the protocol version at all; the client can tell from the server's
response whether the option was understood.
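Sketch of the server-side decision (simplified; the helper name and the
option-list variable are approximations of the new postmaster code):
    if (PG_PROTOCOL_MAJOR(proto) < PG_PROTOCOL_MAJOR(PG_PROTOCOL_EARLIEST) ||
        PG_PROTOCOL_MAJOR(proto) > PG_PROTOCOL_MAJOR(PG_PROTOCOL_LATEST))
        ereport(FATAL,
                (errcode(ERRCODE_PROTOCOL_VIOLATION),
                 errmsg("unsupported frontend protocol")));
    else if (PG_PROTOCOL_MINOR(proto) > PG_PROTOCOL_MINOR(PG_PROTOCOL_LATEST) ||
             unrecognized_protocol_options != NIL)
        SendNegotiateProtocolVersion(unrecognized_protocol_options);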
It will take some time before servers that support these new
facilities become common in the wild; to speed things up and make
things easier for a future 3.1 protocol version, back-patch to all
supported releases.
Tom Lane [Mon, 20 Nov 2017 22:57:46 +0000 (17:57 -0500)]
Add support for Motorola 88K to s_lock.h.
Apparently there are still people out there who care about this old
architecture. They probably care about dusty versions of Postgres
too, so back-patch to all supported branches.
David Carlier (from a patch being carried by OpenBSD packagers)
Tom Lane [Sat, 18 Nov 2017 21:46:29 +0000 (16:46 -0500)]
Fix compiler warning in rangetypes_spgist.c.
On gcc 7.2.0, comparing pointer to (Datum) 0 produces a warning.
Treat it as a simple pointer to avoid that; this is more consistent
with comparable code elsewhere, anyway.
Tom Lane [Fri, 17 Nov 2017 17:46:52 +0000 (12:46 -0500)]
Provide modern examples of how to auto-start Postgres on macOS.
The scripts in contrib/start-scripts/osx don't work at all on macOS
10.10 (Yosemite) or later, because they depend on SystemStarter which
Apple deprecated long ago and removed in 10.10. Add a new subdirectory
contrib/start-scripts/macos with scripts that use the newer launchd
infrastructure.
Since this problem is independent of which Postgres version you're using,
back-patch to all supported branches.
Robert Haas [Thu, 16 Nov 2017 19:19:27 +0000 (14:19 -0500)]
Fix broken cleanup interlock for GIN pending list.
The pending list must (for correctness) always be cleaned up by vacuum, and
should (for the avoidance of surprising behavior) always be cleaned up
by an explicit call to gin_clean_pending_list, but cleanup is optional
when inserting. The old logic got this backward: cleanup was forced
if (stats == NULL), but that's going to be *false* when vacuuming and
*true* for inserts.
Tom Lane [Thu, 16 Nov 2017 16:16:53 +0000 (11:16 -0500)]
Fix bogus logic for checking data dirs' versions within pg_upgrade.
Commit 9be95ef15 failed to cure all of the redundancy here: we were
actually calling get_major_server_version() three times for each
of the old and new data directories. While that's not enormously
expensive, it's still sloppy.
Tom Lane [Tue, 14 Nov 2017 22:49:49 +0000 (17:49 -0500)]
Prevent int128 from requiring more than MAXALIGN alignment.
Our initial work with int128 neglected alignment considerations, an
oversight that came back to bite us in bug #14897 from Vincent Lachenal.
It is unsurprising that int128 might have a 16-byte alignment requirement;
what's slightly more surprising is that even notoriously lax Intel chips
sometimes enforce that.
Raising MAXALIGN seems out of the question: the costs in wasted disk and
memory space would be significant, and there would also be an on-disk
compatibility break. Nor does it seem very practical to try to allow some
data structures to have more-than-MAXALIGN alignment requirement, as we'd
have to push knowledge of that throughout various code that copies data
structures around.
The only way out of the box is to make type int128 conform to the system's
alignment assumptions. Fortunately, gcc supports that via its
__attribute__(aligned()) pragma; and since we don't currently support
int128 on non-gcc-workalike compilers, we shouldn't be losing any platform
support this way.
Although we could have just done pg_attribute_aligned(MAXIMUM_ALIGNOF) and
called it a day, I did a little bit of extra work to make the code more
portable than that: it will also support int128 on compilers without
__attribute__(aligned()), if the native alignment of their 128-bit-int
type is no more than that of int64.
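The resulting declaration has roughly this shape (simplified; the real c.h
additionally handles compilers that lack the attribute but whose native
128-bit alignment is already small enough):
    typedef PG_INT128_TYPE int128
    #if defined(pg_attribute_aligned)
                pg_attribute_aligned(MAXIMUM_ALIGNOF)
    #endif
               ;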
Add a regression test case that exercises the one known instance of the
problem, in parallel aggregation over a bigint column.
Back-patch of commit 751804998. The code known to be affected only exists
in 9.6 and later, but we do have some stuff using int128 in 9.5, so patch
back to 9.5.
Tom Lane [Tue, 14 Nov 2017 22:22:42 +0000 (17:22 -0500)]
Rearrange c.h to create a "compiler characteristics" section.
Generalize section 1 to handle stuff that is principally about the
compiler (not libraries), such as attributes, and collect stuff there
that had been dropped into various other parts of c.h. Also, push
all the gettext macros into section 8, so that section 0 is really
just inclusions rather than inclusions and random other stuff.
The primary goal here is to get pg_attribute_aligned() defined before
section 3, so that we can use it with int128. But this seems like good
cleanup anyway.
This patch just moves macro definitions around, and shouldn't result
in any changes in generated code.
Noah Misch [Sun, 12 Nov 2017 22:31:00 +0000 (14:31 -0800)]
Install Windows crash dump handler before all else.
Apart from calling write_stderr() on failure, the handler depends on no
PostgreSQL facilities. We have experienced crashes before reaching the
former call site. Given such an early crash, this change cannot hurt
and may produce a helpful dump. Absent an early crash, this change has
no effect. Back-patch to 9.3 (all supported versions).
Noah Misch [Sun, 12 Nov 2017 21:03:15 +0000 (13:03 -0800)]
Don't call pgwin32_message_to_UTF16() without CurrentMemoryContext.
PostgreSQL running as a Windows service crashed upon calling
write_stderr() before MemoryContextInit(). This fix completes work
started in 5735efee15540765315aa8c1a230575e756037f7. Messages this
early contain only ASCII bytes; if we removed the CurrentMemoryContext
requirement, the ensuing conversions would have no effect. Back-patch
to 9.3 (all supported versions).
Noah Misch [Sat, 11 Nov 2017 22:35:22 +0000 (14:35 -0800)]
Add post-2010 ecpg tests to checktcp.
This suite had been a proper superset of the regular ecpg test suite,
but the three newest tests didn't reach it. To make this less likely to
recur, delete the extra schedule file and pass the TCP-specific test on
the command line. Back-patch to 9.3 (all supported versions).
Noah Misch [Sat, 11 Nov 2017 22:33:02 +0000 (14:33 -0800)]
Make connect/test1 independent of localhost IPv6.
Since commit 868898739a8da9ab74c105b8349b7b5c711f265a, it has assumed
"localhost" resolves to both ::1 and 127.0.0.1. We gain nothing from
that assumption, and it does not hold in a default installation of Red
Hat Enterprise Linux 5. Back-patch to 9.3 (all supported versions).
Noah Misch [Sat, 11 Nov 2017 19:10:53 +0000 (11:10 -0800)]
Ignore XML declaration in xpath_internal(), for UTF8 databases.
When a value contained an XML declaration naming some other encoding,
this function interpreted UTF8 bytes as the named encoding, yielding
mojibake. xml_parse() already has similar logic. This would be
necessary but not sufficient for non-UTF8 databases, so preserve
behavior there until the xpath facility can support such databases
comprehensively. Back-patch to 9.3 (all supported versions).
Tom Lane [Fri, 10 Nov 2017 17:30:01 +0000 (12:30 -0500)]
Tighten test in contrib/bloom/t/001_wal.pl.
Make bloom WAL test compare psql output text, not just result codes;
this was evidently the intent all along, but it was mis-coded.
In passing, make sure we will notice any failure in setup steps.
Alexander Korotkov, reviewed by Michael Paquier and Masahiko Sawada
Tom Lane [Thu, 9 Nov 2017 16:30:30 +0000 (11:30 -0500)]
Fix bogus logic for checking executables' versions within pg_upgrade.
Somebody messed up a refactoring here. As it stood, we'd check pg_ctl's
--version output twice for each cluster. Worse, the first check for the
new cluster's version happened before we'd done any validate_exec checks
there, breaking the check ordering the code intended.
Tom Lane [Thu, 9 Nov 2017 16:00:36 +0000 (11:00 -0500)]
Revert "Allow --with-bonjour to work with non-macOS implementations of Bonjour."
Upon further review, our Bonjour code doesn't actually work with the
Avahi not-too-compatible compatibility library. While you can get it
to work on non-macOS platforms if you link to Apple's own mDNSResponder
code, there don't seem to be many people who care about that. Leaving in
the AC_SEARCH_LIBS call seems more likely to encourage people to build
broken configurations than to do anything very useful.
Hence, remove the AC_SEARCH_LIBS call and put in a warning comment instead.
Tom Lane [Wed, 8 Nov 2017 21:50:13 +0000 (16:50 -0500)]
Fix two violations of the ResourceOwnerEnlarge/Remember protocol.
The point of having separate ResourceOwnerEnlargeFoo and
ResourceOwnerRememberFoo functions is so that resource allocation
can happen in between. Doing it in some other order is just wrong.
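For clarity, the required ordering is (sketch; acquire_resource() stands in
for whatever allocation is being protected):
    File        file;

    ResourceOwnerEnlargeFiles(CurrentResourceOwner);    /* may elog(ERROR); nothing leaks yet */
    file = acquire_resource();                          /* hypothetical: the actual open() etc. */
    ResourceOwnerRememberFile(CurrentResourceOwner, file);  /* guaranteed not to fail */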
OpenTemporaryFile() did open(), enlarge, remember, which would leak the
open file if the enlarge step ran out of memory. Because fd.c has its own
layer of resource-remembering, the consequences look like they'd be limited
to an intratransaction FD leak, but it's still not good.
IncrBufferRefCount() did enlarge, remember, incr-refcount, which would blow
up if the incr-refcount step ever failed. It was safe enough when written,
but since the introduction of PrivateRefCountHash, I think the assumption
that no error could happen there is pretty shaky.
The odds of real problems from either bug are probably small, but still,
back-patch to supported branches.
Thomas Munro and Tom Lane, per a comment from Andres Freund
Tom Lane [Tue, 7 Nov 2017 18:49:36 +0000 (13:49 -0500)]
Fix unportable usage of <ctype.h> functions.
isdigit(), isspace(), etc are likely to give surprising results if passed a
signed char. We should always cast the argument to unsigned char to avoid
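The safe idiom, for reference (str is an arbitrary input string):
    const char *ptr = str;

    while (*ptr && isspace((unsigned char) *ptr))   /* cast prevents negative plain-char values */
        ptr++;
    if (!isdigit((unsigned char) *ptr))
        ereport(ERROR,
                (errmsg("expected a digit")));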
that. Error in commit 63d6b97fd, found by buildfarm member gaur.
Back-patch to 9.3, like that commit.
Tom Lane [Mon, 6 Nov 2017 15:29:16 +0000 (10:29 -0500)]
Make json{b}_populate_recordset() use the right tuple descriptor.
json{b}_populate_recordset() used the tuple descriptor created from the
query-level AS clause without worrying about whether it matched the actual
input record type. If it didn't, that would usually result in a crash,
though disclosure of server memory contents seems possible as well, for a
skilled attacker capable of issuing crafted SQL commands. Instead, use
the query-supplied descriptor only when there is no input tuple to look at,
and otherwise get a tuple descriptor based on the input tuple's own type
marking. The core code will detect any type mismatch in the latter case.
Michael Paquier and Tom Lane, per a report from David Rowley.
Back-patch to 9.3 where this functionality was introduced.
Noah Misch [Mon, 6 Nov 2017 15:11:10 +0000 (07:11 -0800)]
start-scripts: switch to $PGUSER before opening $PGLOG.
By default, $PGUSER has permission to unlink $PGLOG. If $PGUSER
replaces $PGLOG with a symbolic link, the server will corrupt the
link-targeted file by appending log messages. Since these scripts open
$PGLOG as root, the attack works regardless of target file ownership.
"make install" does not install these scripts anywhere. Users having
manually installed them in the past should repeat that process to
acquire this fix. Most script users have $PGLOG writable to root only,
located in $PGDATA. Just before updating one of these scripts, such
users should rename $PGLOG to $PGLOG.old. The script will then recreate
$PGLOG with proper ownership.
Reviewed by Peter Eisentraut. Reported by Antoine Scemama.
Dean Rasheed [Mon, 6 Nov 2017 09:17:44 +0000 (09:17 +0000)]
Always require SELECT permission for ON CONFLICT DO UPDATE.
The update path of an INSERT ... ON CONFLICT DO UPDATE requires SELECT
permission on the columns of the arbiter index, but it failed to check
for that in the case of an arbiter specified by constraint name.
In addition, for a table with row level security enabled, it failed to
check updated rows against the table's SELECT policies when the update
path was taken (regardless of how the arbiter index was specified).
Backpatch to 9.5 where ON CONFLICT DO UPDATE and RLS were introduced.
Noah Misch [Mon, 6 Nov 2017 02:51:08 +0000 (18:51 -0800)]
Add a temp-install prerequisite to "check"-like targets not having one.
Makefile.global assigns this prerequisite to every target named "check",
but similar targets must mention it explicitly. Affected targets
failed, tested $PATH binaries, or tested a stale temporary installation.
The src/test/modules examples worked properly when called as "make -C
src/test/modules/$FOO check", but "make -j" allowed the test to start
before the temporary installation was in place. Back-patch to 9.5,
where commit dcae5faccab64776376d354decda0017c648bb53 introduced the
shared temp-install.
Tom Lane [Sun, 5 Nov 2017 18:47:56 +0000 (13:47 -0500)]
Release notes for 10.1, 9.6.6, 9.5.10, 9.4.15, 9.3.20, 9.2.24.
In the v10 branch, also back-patch the effects of 1ff01b390 and c29c57890
on these files, to reduce future maintenance issues. (I'd do it further
back, except that the 9.X branches differ anyway due to xlog-to-wal
link tag renaming.)
Noah Misch [Sun, 5 Nov 2017 17:25:52 +0000 (09:25 -0800)]
Ignore CatalogSnapshot when checking COPY FREEZE prerequisites.
This restores the ability, essentially lost in commit ffaa44cb559db332baeee7d25dedd74a61974203, to use COPY FREEZE under
REPEATABLE READ isolation. Back-patch to 9.4, like that commit.
Alvaro Herrera [Fri, 3 Nov 2017 19:36:32 +0000 (20:36 +0100)]
Fix thinkos in BRIN summarization
The previous commit contained a thinko that made a single-range
summarization request process from that range to the end of the table.
Fix by setting the correct end range point. Per buildfarm.
Don't reset additional columns on subscriber to NULL on UPDATE
When a publisher table has fewer columns than a subscriber, the update
of a row on the publisher should result in updating of only the columns
in common. The previous coding mistakenly reset the values of
additional columns on the subscriber to NULL because it failed to skip
updates of columns not found in the attribute map.
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Alvaro Herrera [Fri, 3 Nov 2017 16:23:13 +0000 (17:23 +0100)]
Fix BRIN summarization concurrent with extension
If a process is extending a table concurrently with some BRIN
summarization process, it is possible for the latter to miss pages added
by the former because the number of pages is computed ahead of time.
Fix by determining a fresh relation size after inserting the placeholder
tuple: any process that further extends the table concurrently will
update the placeholder tuple, while previous pages will be processed by
the heap scan.
Reported-by: Tomas Vondra
Reviewed-by: Tom Lane
Author: Álvaro Herrera
Discussion: https://postgr.es/m/083d996a-4a8a-0e13-800a-851dd09ad8cc@2ndquadrant.com
Backpatch-to: 9.5
Tom Lane [Thu, 2 Nov 2017 16:54:22 +0000 (12:54 -0400)]
Fix corner-case errors in brin_doupdate().
In some cases the BRIN code releases lock on an index page, and later
re-acquires lock and tries to check that the tuple it was working on is
still there. That check was a couple bricks shy of a load. It didn't
consider that the page might have turned into a "revmap" page. (The
samepage code path doesn't call brin_getinsertbuffer(), so it isn't
protected by the checks for revmap status there.) It also didn't check
whether the tuple offset was now off the end of the linepointer array.
Since commit 24992c6db the latter case is pretty common, but at least
in principle it could have occurred before that. The net result is
that concurrent updates of a BRIN index could fail with errors like
"invalid index offnum" or "inconsistent range map".
Per report from Tomas Vondra. Back-patch to 9.5, since this code is
substantially the same in all versions containing BRIN.
Alvaro Herrera [Thu, 2 Nov 2017 14:51:05 +0000 (15:51 +0100)]
Revert bogus fixes of HOT-freezing bug
It turns out we misdiagnosed what the real problem was. Revert the
previous changes, because they may have worse consequences going
forward. A better fix is forthcoming.
The simplistic test case is kept, though disabled.
pg_basebackup: Fix comparison handling of tablespace mappings on Windows
A candidate path needs to be canonicalized before being checked against
the mappings, because the mappings are also canonicalized. This is
especially relevant on Windows.
Reported-by: nb <nbedxp@gmail.com>
Author: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Ashutosh Sharma <ashu.coek88@gmail.com>
Tom Lane [Tue, 31 Oct 2017 17:40:23 +0000 (13:40 -0400)]
Fix underqualified cast-target type names in pg_dump and psql queries.
Queries running with some non-pg_catalog schema frontmost in their search
path need to be careful to schema-qualify type names that should be sought
in pg_catalog. Vitaly Burovoy reported an oversight of this sort in
pg_dump's dumpSequence, and grepping detected another one in psql's
describeOneTableDetails, both introduced by sequence-related changes in
v10. In pg_dump, we can fix things by removing the cast altogether, since
it doesn't really matter what data types are reported for these query
result columns. Likewise in psql, the query seemed to be working unduly
hard to get a result that's guaranteed to be exactly 'bigint'.
I also changed a couple of occurrences of "::char" similarly. These are
not bugs, since "char" is a typename keyword and not subject to search_path
rules, but it seems better to use uniform style.
Tom Lane [Mon, 30 Oct 2017 20:44:26 +0000 (16:44 -0400)]
Doc: call out UPDATE syntax change as a v10 compatibility issue.
The change made by commit 906bfcad7 means that if you're writing
a parenthesized column list in UPDATE ... SET, but that column list
is only one column, you now need to write ROW(expression) on the
righthand side, not just a parenthesized expression. This was an
intentional change for spec compatibility and potential future
expansion of the possibilities for the RHS, but I'd neglected to
document it as a compatibility issue, figuring that hardly anyone
would bother with parenthesized syntax for a single target column.
I was wrong, as shown by questions from Justin Pryzby, Adam Brusselback,
and others. Move the release note item into the compatibility section
and point out the behavior change for a single target column.
Alvaro Herrera [Mon, 30 Oct 2017 14:52:13 +0000 (15:52 +0100)]
Fix autovacuum work item error handling
In autovacuum's "work item" processing, a few strings were allocated in
the current transaction's memory context, which goes away during error
handling; if an error happened during execution of the work item, the
pfree() calls to clean up afterwards would try to release already-released
memory, possibly leading to a crash. In branch master, this was already
fixed by commit 335f3d04e4c8, so backpatch that to REL_10_STABLE to fix
the problem there too.
As a secondary problem, verify that the autovacuum worker is connected
to the right database for each work item; otherwise some items would be
discarded by workers in other databases.
Robert Haas [Fri, 27 Oct 2017 20:22:39 +0000 (22:22 +0200)]
Allow parallel query for prepared statements with generic plans.
This was always intended to work, but due to an oversight in
max_parallel_hazard_walker, it didn't. In testing, we missed the
fact that it was only working for custom plans, where the parameter
value has been substituted for the parameter itself early enough
that everything worked. In a generic plan, the Param node survives
and must be treated as parallel-safe. SerializeParamList provides
for the transmission of parameter values to workers.
Amit Kapila with help from Kuntal Ghosh. Some changes by me.
Robert Haas [Sun, 29 Oct 2017 07:28:40 +0000 (12:58 +0530)]
Fix problems with the "role" GUC and parallel query.
Without this fix, dropping a role can sometimes result in parallel
query failures in sessions that have used "SET ROLE" to assume the
dropped role, even if that setting isn't active any more.
Report by Pavan Deolasee. Patch by Amit Kapila, reviewed by me.
Tom Lane [Fri, 27 Oct 2017 22:16:25 +0000 (18:16 -0400)]
Dept of second thoughts: keep aliasp_item in sync with tlistitem.
Commit d5b760ecb wasn't quite right, on second thought: if the
caller didn't ask for column names then it would happily emit
more Vars than if the caller did ask for column names. This
is surely not a good idea. Advance the aliasp_item whether or
not we're preparing a colnames list.
Tom Lane [Fri, 27 Oct 2017 21:10:21 +0000 (17:10 -0400)]
Fix crash when columns have been added to the end of a view.
expandRTE() supposed that an RTE_SUBQUERY subquery must have exactly
as many non-junk tlist items as the RTE has column aliases for it.
This was true at the time the code was written, and is still true so
far as parse analysis is concerned --- but when the function is used
during planning, the subquery might have appeared through insertion
of a view that now has more columns than it did when the outer query
was parsed. This results in a core dump if, for instance, we have
to expand a whole-row Var that references the subquery.
To avoid crashing, we can either stop expanding the RTE when we run
out of aliases, or invent new aliases for the added columns. While
the latter might be more useful, the former is consistent with what
expandRTE() does for composite-returning functions in the RTE_FUNCTION
case, so it seems like we'd better do it that way.
Per bug #14876 from Samuel Horwitz. This has been busted since commit ff1ea2173 allowed views to acquire more columns, so back-patch to all
supported branches.
Tom Lane [Fri, 27 Oct 2017 16:18:56 +0000 (12:18 -0400)]
Rethink the dependencies recorded for FieldSelect/FieldStore nodes.
On closer investigation, commits f3ea3e3e8 et al were a few bricks
shy of a load. What we need is not so much to lock down the result
type of a FieldSelect, as to lock down the existence of the column
it's trying to extract. Otherwise, we can break it by dropping that
column. The dependency on the result type is then held indirectly
through the column, and doesn't need to be recorded explicitly.
Out of paranoia, I left in the code to record a dependency on the
result type, but it's used only if we can't identify the pg_class OID
for the column. That shouldn't ever happen right now, AFAICS, but
it seems possible that in future the input node could be marked as
being of type RECORD rather than some specific composite type.
Likewise for FieldStore.
Like the previous patch, back-patch to all supported branches.
Robert Haas [Fri, 27 Oct 2017 14:04:01 +0000 (16:04 +0200)]
Fix mistaken failure to allow parallelism in corner case.
If we try to run a parallel plan in serial mode because, for example,
it's going to be scanned via a cursor, but for some reason we're
already in parallel mode (for example because an outer query is
running in parallel), we'd incorrectly try to launch workers.
Fix by adding a flag to the EState, so that we can be certain that
ExecutePlan() and ExecGather()/ExecGatherMerge() will have the same
idea about whether we are executing serially or in parallel.
Report and fix by Amit Kapila with help from Kuntal Ghosh. A few
tweaks by me.
Tom Lane [Thu, 26 Oct 2017 16:17:40 +0000 (12:17 -0400)]
Make setrefs.c match by ressortgroupref even for plain Vars.
Previously, we skipped using search_indexed_tlist_for_sortgroupref()
if the tlist expression being sought in the child plan node was merely
a Var. This is purely an optimization, based on the theory that
search_indexed_tlist_for_var() is faster, and one copy of a Var should
be as good as another. However, the GROUPING SETS patch broke the
latter assumption: grouping columns containing the "same" Var can
sometimes have different outputs, as shown in the test case added here.
So do it the hard way whenever a ressortgroupref marking exists.
(If this seems like a bottleneck, we could imagine building a tlist index
data structure for ressortgroupref values, as we do for Vars. But I'll
let that idea go until there's some evidence it's worthwhile.)
Back-patch to 9.6. The problem also exists in 9.5 where GROUPING SETS
came in, but this patch is insufficient to resolve the problem in 9.5:
there is some obscure dependency on the upper-planner-pathification
work that happened in 9.6. Given that this is such a weird corner case,
and no end users have complained about it, it doesn't seem worth the work
to develop a fix for 9.5.
Patch by me, per a report from Heikki Linnakangas. (This does not fix
Heikki's original complaint, just the follow-on one.)
Andrew Dunstan [Thu, 26 Oct 2017 14:01:02 +0000 (10:01 -0400)]
Improve gendef.pl diagnostic on failure to open sym file
There have been numerous buildfarm failures but the diagnostic is
currently silent about the reason for failure to open the file. Let's
see if we can get to the bottom of it.
Tom Lane [Wed, 25 Oct 2017 23:32:24 +0000 (19:32 -0400)]
Fix libpq to not require user's home directory to exist.
Some people like to run libpq-using applications in environments where
there's no home directory. We've broken that scenario before (cf commits 5b4067798 and bd58d9d88), and commit ba005f193 broke it again, by making
it a hard error if we fail to get the home directory name while looking
for ~/.pgpass. The previous precedent is that if we can't get the home
directory name, we should just silently act as though the file we hoped
to find there doesn't exist. Rearrange the new code to honor that.
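In code terms the policy is simply (sketch):
    char        homedir[MAXPGPATH];

    if (!pqGetHomeDirectory(homedir, sizeof(homedir)))
        return NULL;            /* act as though ~/.pgpass simply doesn't exist */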
Looking around, the service-file code added by commit 41a4e4595 had the
same disease. Apparently, that escaped notice because it only runs when
a service name has been specified, which I guess the people who use this
scenario don't do. Nonetheless, it's wrong too, so fix that case as well.
Add a comment about this policy to pqGetHomeDirectory, in the probably
vain hope of forestalling the same error in future. And upgrade the
rather miserable commenting in parseServiceInfo, too.
In passing, also back off parseServiceInfo's assumption that only ENOENT
is an ignorable error from stat() when checking a service file. We would
need to ignore at least ENOTDIR as well (cf 5b4067798), and seeing that
the far-better-tested code for ~/.pgpass treats all stat() failures alike,
I think this code ought to as well.
Per bug #14872 from Dan Watson. Back-patch the .pgpass change to v10
where ba005f193 came in. The service-file bugs are far older, so
back-patch the other changes to all supported branches.