Alvaro Herrera [Wed, 14 Mar 2018 14:53:56 +0000 (11:53 -0300)]
Log when a BRIN autosummarization request fails
Autovacuum's 'workitem' request queue is of limited size, so requests
can fail if they arrive more quickly than autovacuum can process them.
Emit a log message when this happens, to provide better visibility of
the problem.
Backpatch to 10. While this represents an API change for
AutoVacuumRequestWork, that function is not yet prepared to deal with
external modules calling it, so there doesn't seem to be any risk (other
than log spam, that is).
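For context, these work items are queued on behalf of BRIN indexes built
with the autosummarize option; a minimal sketch (names illustrative, not
from the commit):

    CREATE TABLE events (created timestamptz, payload text);
    CREATE INDEX events_brin ON events USING brin (created)
        WITH (autosummarize = on);
    -- Each newly filled page range queues an autovacuum work item; if the
    -- queue is full, the request is dropped and (now) logged.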
Andres Freund [Fri, 9 Mar 2018 01:07:16 +0000 (17:07 -0800)]
Expand AND/OR regression tests around NULL handling.
Previously there were no tests verifying that NULL handling in AND/OR
was correct (i.e. that NULL rather than false is returned if
an expression doesn't return true).
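The semantics under test, in a nutshell (an illustrative query, not the
committed test file):

    SELECT (NULL::boolean AND false) AS a,  -- false: one false arm decides AND
           (NULL::boolean AND true)  AS b,  -- NULL: truth still unknown
           (NULL::boolean OR true)   AS c,  -- true: one true arm decides OR
           (NULL::boolean OR false)  AS d;  -- NULL: truth still unknown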
Robert Haas [Tue, 13 Mar 2018 20:34:08 +0000 (16:34 -0400)]
Let Parallel Append over simple UNION ALL have partial subpaths.
A simple UNION ALL gets flattened into an appendrel of subquery
RTEs, but up until now it's been impossible for the appendrel to use
the partial paths for the subqueries, so we could implement the
appendrel as a Parallel Append, but only one with non-partial paths
as children.
There are three separate obstacles to removing that limitation, and
this patch addresses all of them.
First, when planning a subquery, propagate any partial paths to the
final_rel so that they are potentially visible to outer query levels
(but not if they have initPlans attached, because that wouldn't be
safe). Second, after planning a subquery, propagate any partial paths
for the final_rel to the subquery RTE in the outer query level in the
same way we do for non-partial paths. Third, teach finalize_plan() to
account for the possibility that the fake parameter we use for rescan
signalling when the plan contains a Gather (Merge) node may be
propagated from an outer query level.
Patch by me, reviewed and tested by Amit Khandekar, Rajkumar
Raghuwanshi, and Ashutosh Bapat. Test cases based on examples by
Rajkumar Raghuwanshi.
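As a rough illustration (hypothetical tables, not the committed test
cases), a query of this shape can now get partial subpaths under a
Parallel Append:

    EXPLAIN (COSTS OFF)
    SELECT a FROM big1
    UNION ALL
    SELECT a FROM big2;
    -- Possible plan shape after this change, given suitably large tables:
    --   Gather
    --     ->  Parallel Append
    --           ->  Parallel Seq Scan on big1
    --           ->  Parallel Seq Scan on big2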
Tom Lane [Tue, 13 Mar 2018 17:24:27 +0000 (13:24 -0400)]
When updating reltuples after ANALYZE, just extrapolate from our sample.
The existing logic for updating pg_class.reltuples trusted the sampling
results only for the pages ANALYZE actually visited, preferring to
believe the previous tuple density estimate for all the unvisited pages.
While there's some rationale for doing that for VACUUM (first that
VACUUM is likely to visit a very nonrandom subset of pages, and second
that we know for sure that the unvisited pages did not change), there's
no such rationale for ANALYZE: by assumption, it's looked at an unbiased
random sample of the table's pages. Furthermore, in a very large table
ANALYZE will have examined only a tiny fraction of the table's pages,
meaning it cannot slew the overall density estimate very far at all.
In a table that is physically growing, this causes reltuples to increase
nearly proportionally to the change in relpages, regardless of what is
actually happening in the table. This has been observed to cause reltuples
to become so much larger than reality that it effectively shuts off
autovacuum, whose threshold for doing anything is a fraction of reltuples.
(Getting to the point where that would happen seems to require some
additional, not well understood, conditions. But it's undeniable that if
reltuples is seriously off in a large table, ANALYZE alone will not fix it
in any reasonable number of iterations, especially not if the table is
continuing to grow.)
Hence, restrict the use of vac_estimate_reltuples() to VACUUM alone,
and in ANALYZE, just extrapolate from the sample pages on the assumption
that they provide an accurate model of the whole table. If, by very bad
luck, they don't, at least another ANALYZE will fix it; in the old logic
a single bad estimate could cause problems indefinitely.
In HEAD, let's remove vac_estimate_reltuples' is_analyze argument
altogether; it was never used for anything and now it's totally pointless.
But keep it in the back branches, in case any third-party code is calling
this function.
Per bug #15005. Back-patch to all supported branches.
David Gould, reviewed by Alexander Kuzmenkov, cosmetic changes by me
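In effect the new ANALYZE computation is a straight extrapolation; a
sketch of the arithmetic, assuming the sample is representative (names
illustrative):

    -- new reltuples = relpages * (live tuples in sample / pages sampled)
    -- The stored estimates can be inspected with:
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname = 'mytable';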
Tom Lane [Tue, 13 Mar 2018 16:28:15 +0000 (12:28 -0400)]
Avoid holding AutovacuumScheduleLock while rechecking table statistics.
In databases with many tables, re-fetching the statistics takes some time,
so holding AutovacuumScheduleLock while doing so seriously decreases the
available concurrency for
multiple autovac workers. There's discussion afoot about more complete
fixes, but a simple and back-patchable amelioration is to claim the table
and release the lock before rechecking stats. If we find out there's no
longer a reason to process the table, re-taking the lock to un-claim the
table is cheap enough.
(This patch is quite old, but got lost amongst a discussion of more
aggressive fixes. It's not clear when or if such a fix will be
accepted, but in any case it'd be unlikely to get back-patched.
Let's do this now so we have some improvement for the back branches.)
In passing, make the normal un-claim step take AutovacuumScheduleLock
not AutovacuumLock, since that is what is documented to protect the
wi_tableoid field. This wasn't an actual bug in view of the fact that
readers of that field hold both locks, but it creates some concurrency
penalty against operations that need only AutovacuumLock.
Peter Eisentraut [Mon, 12 Mar 2018 16:17:58 +0000 (12:17 -0400)]
Change internal integer representation of Value node
A Value node would store an integer as a long. This causes needless
portability risks, as long can be of varying sizes. Change it to use
int instead. All code using this was already careful to only store
32-bit values anyway.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Fix CREATE TABLE / LIKE with bigint identity column
CREATE TABLE / LIKE with a bigint identity column would fail on
platforms where long is 32 bits. Copying the sequence values used
makeInteger(), which would truncate the 64-bit sequence data to 32 bits.
To fix, use makeFloat() instead, like the parser. (This does not
actually make use of floats, but stores the values as strings.)
Bug: #15096
Reviewed-by: Michael Paquier <michael@paquier.xyz>
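A minimal sketch of the failure (identity settings illustrative; hitting
the bug required a sequence value wider than 32 bits on a platform with
32-bit long):

    CREATE TABLE src (
        id bigint GENERATED BY DEFAULT AS IDENTITY (START WITH 300000000000)
    );
    -- Copying the sequence parameters went through makeInteger(), which
    -- truncated the 64-bit value on such platforms:
    CREATE TABLE dst (LIKE src INCLUDING IDENTITY);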
Alvaro Herrera [Mon, 12 Mar 2018 22:42:32 +0000 (19:42 -0300)]
Avoid having two PKs in a partition
If a table containing a primary key is attached as a partition to a
partitioned table which has a primary key with a different definition,
we would happily create a second one in the new partition. Oops. It
turns out that this is because an error check in DefineIndex is executed
only if you tell it that it's being run by ALTER TABLE, and the original
code here didn't. Change it so that it does.
Added a couple of test cases for this, also. A previously working test
started to fail in a different way than before patch because the new
check is called earlier; change the PK to plain UNIQUE so that the new
behavior isn't invoked, so that the test continues to verify what we
want it to verify.
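The scenario now rejected looks roughly like this (a sketch, not the
committed tests):

    CREATE TABLE parted (a int PRIMARY KEY) PARTITION BY LIST (a);
    CREATE TABLE part1 (a int NOT NULL, b int PRIMARY KEY);
    -- Previously this happily gave part1 a second primary key;
    -- now the DefineIndex check raises an error:
    ALTER TABLE parted ATTACH PARTITION part1 FOR VALUES IN (1);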
Tom Lane [Sun, 11 Mar 2018 22:10:42 +0000 (18:10 -0400)]
Fix improper uses of canonicalize_qual().
One of the things canonicalize_qual() does is to remove constant-NULL
subexpressions of top-level AND/OR clauses. It does that on the assumption
that what it's given is a top-level WHERE clause, so that NULL can be
treated like FALSE. Although this is documented down inside a subroutine
of canonicalize_qual(), it wasn't mentioned in the documentation of that
function itself, and some callers hadn't gotten that memo.
Notably, commit d007a9505 caused get_relation_constraints() to apply
canonicalize_qual() to CHECK constraints. That allowed constraint
exclusion to misoptimize situations in which a CHECK constraint had a
provably-NULL subclause, as seen in the regression test case added here,
in which a child table that should be scanned is not. (Although this
thinko is ancient, the test case doesn't fail before 9.2, for reasons
I've not bothered to track down in detail. There may be related cases
that do fail before that.)
More recently, commit f0e44751d added an independent bug by applying
canonicalize_qual() to index expressions, which is even sillier since
those might not even be boolean. If they are, though, I think this
could lead to making incorrect index entries for affected index
expressions in v10. I haven't attempted to prove that though.
To fix, add an "is_check" parameter to canonicalize_qual() to specify
whether it should assume WHERE or CHECK semantics, and make it perform
NULL-elimination accordingly. Adjust the callers to apply the right
semantics, or remove the call entirely in cases where it's not known
that the expression has one or the other semantics. I also removed
the call in some cases involving partition expressions, where it should
be a no-op because such expressions should be canonical already ...
and was a no-op, independently of whether it could in principle have
done something, because it was being handed the qual in implicit-AND
format which isn't what it expects. In HEAD, add an Assert to catch
that type of mistake in future.
This represents an API break for external callers of canonicalize_qual().
While that's intentional in HEAD to make such callers think about which
case applies to them, it seems like something we probably wouldn't be
thanked for in released branches. Hence, in released branches, the
extra parameter is added to a new function canonicalize_qual_ext(),
and canonicalize_qual() is a wrapper that retains its old behavior.
Patch by me with suggestions from Dean Rasheed. Back-patch to all
supported branches.
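The constraint-exclusion hazard looks like this (a sketch along the
lines of the added regression test):

    CREATE TABLE cparent (f1 int);
    CREATE TABLE cchild (CHECK (f1 = 1 OR f1 = NULL)) INHERITS (cparent);
    INSERT INTO cchild VALUES (2);  -- (false OR NULL) is NULL, so CHECK passes
    -- Simplifying the constraint to CHECK (f1 = 1) under WHERE semantics
    -- let constraint exclusion wrongly skip cchild here:
    SELECT * FROM cparent WHERE f1 = 2;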
Tom Lane [Sat, 10 Mar 2018 18:18:21 +0000 (13:18 -0500)]
In psql, restore old behavior of Query_for_list_of_functions.
Historically, tab completion for functions has offered the names of
aggregates as well. This is essential in at least one context, namely
GRANT/REVOKE, because there is no GRANT ON AGGREGATE syntax. There
are other cases where a command that nominally is for functions will
allow aggregates as well, though not all do.
Commit fd1a421fe changed this query to disallow aggregates, but that
doesn't seem to have been thought through very carefully. Change it
to allow aggregates (but still ignore procedures).
We might at some point tighten this up, but it'd require sorting through
all the uses of this query to see which ones should offer aggregate
names and which shouldn't. Given the lack of field complaints about
the historical laxity here, that's work I'm not eager to do right now.
Tom Lane [Fri, 9 Mar 2018 21:58:26 +0000 (16:58 -0500)]
Improve predtest.c's internal docs, and enhance its functionality a bit.
Commit b08df9cab left things rather poorly documented as far as the
exact semantics of "clause_is_check" mode went. Also, that mode did
not really work correctly for predicate_refuted_by; although given the
lack of specification as to what it should do, as well as the lack
of any actual use-case, that's perhaps not surprising.
Rename "clause_is_check" to "weak" proof mode, and provide specifications
for what it should do. I defined weak refutation as meaning "truth of A
implies non-truth of B", which makes it possible to use the mode in the
part of relation_excluded_by_constraints that checks for mutually
contradictory WHERE clauses. Fix up several places that did things wrong
for that definition. (As far as I can see, these errors would only lead
to failure-to-prove, not incorrect claims of proof, making them not
serious bugs even aside from the fact that v10 contains no use of this
mode. So there seems no need for back-patching.)
In addition, teach predicate_refuted_by_recurse that it can use
predicate_implied_by_recurse after all when processing a strong NOT-clause,
so long as it asks for the correct proof strength. This is an optimization
that could have been included in commit b08df9cab, but wasn't.
Also, simplify and generalize the logic that checks for whether nullness of
the argument of IS [NOT] NULL would force overall nullness of the predicate
or clause. (This results in a change in the partition_prune test's output,
as it is now able to prune an all-nulls partition that it did not recognize
before.)
In passing, in PartConstraintImpliedByRelConstraint, remove bogus
conversion of the constraint list to explicit-AND form and then right back
again; that accomplished nothing except forcing a useless extra level of
recursion inside predicate_implied_by.
Tom Lane [Fri, 9 Mar 2018 00:44:14 +0000 (19:44 -0500)]
Fix test_predtest's idea of what weak refutation means.
I'd initially supposed that predicate_refuted_by(..., true) ought to
say that "A refutes B" means "non-falsity of A implies non-truth of B".
But it seems better to define it as "truth of A implies non-truth of B".
This is more useful in the current system, slightly easier to prove,
and in closer correspondence to the existing code behavior.
With this change, test_predtest no longer claims that any existing
test cases show false proof reports, though there still are cases
where we could prove something and fail to.
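A concrete instance of that definition (illustrative): take A = "x IS
NULL" and B = "x = 1"; whenever A is true, B evaluates to NULL, i.e. not
true, so A weakly refutes B:

    SELECT x IS NULL AS a, (x = 1) AS b
    FROM (VALUES (NULL::int), (1), (2)) AS v(x);
    -- a is true only in the first row, where b is NULL (not true);
    -- hence truth of A implies non-truth of B.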
Robert Haas [Thu, 8 Mar 2018 19:25:31 +0000 (14:25 -0500)]
Correctly assess parallel-safety of tlists when SRFs are used.
Since commit 69f4b9c85f168ae006929eec44fc44d569e846b9, the existing
code was no longer assessing the parallel-safety of the real tlist
for each upper rel, but rather the first of possibly several tlists
created by split_pathtarget_at_srfs(). Repair.
Even though this is clearly wrong, it's not clear that it has any
user-visible consequences at the moment, so no back-patch for now. If
we discover later that it does have user-visible consequences, we
might need to back-patch this to v10.
Patch by me, per a report from Rajkumar Raghuwanshi.
Tom Lane [Thu, 8 Mar 2018 18:25:26 +0000 (13:25 -0500)]
Add test scaffolding for exercising optimizer's predicate-proof logic.
The predicate-proof code in predtest.c is a bit hard to test as-is:
you have to construct a query whose plan changes depending on the success
of a test, and in tests that have been around for awhile, it's always
possible that the plan shape is now being determined by some other factor.
Our existing regression tests aren't doing real well at providing full
code coverage of predtest.c, either. So, let's add a small test module
that allows directly inspecting the results of predicate_implied_by()
and predicate_refuted_by() for arbitrary boolean expressions.
I chose the set of tests committed here in order to get reasonably
complete code coverage of predtest.c just from running this test
module, and to cover some cases called out as being interesting in
the existing comments. We might want to add more later. But this
set already shows a few cases where perhaps things could be improved.
Indeed, this exercise proves that predicate_refuted_by() is buggy for
the case of clause_is_check = true, though fortunately we aren't using
that case anywhere yet. I'll look into doing something about that in
a separate commit. For now, just memorialize the current behavior.
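A sketch of exercising the module (assuming it is built and installed;
check the module itself for the exact output columns):

    CREATE EXTENSION test_predtest;
    -- The query's two boolean columns are the expressions under test; using
    -- the same expression twice sidesteps predicate-vs-clause ordering:
    SELECT * FROM test_predtest($$
      SELECT x, x
      FROM (VALUES (false), (true), (null::boolean)) AS v(x)
    $$);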
Tom Lane [Thu, 8 Mar 2018 16:33:27 +0000 (11:33 -0500)]
Revert "Temporarily instrument postgres_fdw test to look for statistics changes."
This reverts commit c2c537c56dc30ec3cdc12051f4ea5363aa66d73c.
It's now clear that whatever is going on there, it can't be blamed
on unexpected ANALYZE runs, because the statistics are the same
just before the failing query as they were at the start of the test.
Tom Lane [Thu, 8 Mar 2018 16:26:20 +0000 (11:26 -0500)]
In initdb, don't bother trying max_connections = 10.
The server won't actually start with that setting anymore, not since
we raised the default max_wal_senders to 10. Per discussion, we don't
wish to back down on that default, so instead raise the effective floor
for max_connections (to 20). It's still possible to configure a smaller
setting manually, but initdb won't set it that way.
Since that change happened in v10, back-patch to v10.
Tom Lane [Thu, 8 Mar 2018 16:25:26 +0000 (11:25 -0500)]
Fix cross-checking of ReservedBackends/max_wal_senders/MaxConnections.
We were independently checking ReservedBackends < MaxConnections and
max_wal_senders < MaxConnections, but because walsenders aren't allowed
to use superuser-reserved connections, that's really the wrong thing.
Correct behavior is to insist on ReservedBackends + max_wal_senders being
less than MaxConnections. Fix the code and associated documentation.
This has been wrong for a long time, but since the situation probably
hardly ever arises in the field (especially pre-v10, when the default
for max_wal_senders was zero), no back-patch.
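With default settings the corrected cross-check works out as follows:

    SHOW superuser_reserved_connections;  -- 3 by default
    SHOW max_wal_senders;                 -- 10 by default in v10 and later
    SHOW max_connections;                 -- 100 by default, comfortably > 13
    -- The server now insists that
    --   superuser_reserved_connections + max_wal_senders < max_connections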
Alvaro Herrera [Wed, 7 Mar 2018 14:47:35 +0000 (11:47 -0300)]
Add missing debug lines during bootstrap
Noticed while playing with changes that mess with the bootstrap
sequence; the operations patched here failed to emit anything, leading
the developer to think that the bug was in the previous operation that
did emit a message.
Commit 1804284042e659e7d16904e7bbb0ad546394b6a3 established that single-batch
parallel-aware hash joins could create one large shared hash table using the
combined work_mem budget of all participants. The costing accidentally
assumed that parallel-oblivious hash joins could also do that. The
documentation for initial_cost_hashjoin() also failed to mention the new
argument. Repair.
Alvaro Herrera [Tue, 6 Mar 2018 19:22:03 +0000 (16:22 -0300)]
Refrain from duplicating data in reorderbuffers
If a walsender exits leaving data in reorderbuffers, the next walsender
that tries to decode the same transaction would append its decoded data
to the same spill files without truncating them first, effectively
duplicating the data. Avoid that by removing any leftover reorderbuffer
spill files when a walsender starts.
Backpatch to 9.4; this bug has been there from the very beginning of
logical decoding.
Author: Craig Ringer, revised by me
Reviewed by: Álvaro Herrera, Petr Jelínek, Masahiko Sawada
Alvaro Herrera [Tue, 6 Mar 2018 16:17:13 +0000 (13:17 -0300)]
Fix bogus Name assignment in CreateStatistics
Apparently, it doesn't work to use a plain cstring as a Name datum: you
may end up with random bytes because of failing to zero the bytes
after the terminating \0, as indicated by valgrind. I introduced this
bug in 5564c1181548, so backpatch this fix to REL_10_STABLE, like that
commit.
While at it, fix a slightly misleading comment, pointed out by David
Rowley.
Andres Freund [Tue, 6 Mar 2018 01:49:59 +0000 (17:49 -0800)]
Fix parent node of WCO expressions in partitioned tables.
Since commit edd44738bc8814, WCO expressions of partitioned tables have
been initialized with the first subplan as parent. That's not correct, as
the correct context is the ModifyTableState node. That's also what is
used for RETURNING processing, initialized nearby.
This appears not to cause any visible problems for in-core code, but it
is problematic for an in-development patch.
Andres Freund [Tue, 6 Mar 2018 00:21:05 +0000 (16:21 -0800)]
Add parenthesized options syntax for ANALYZE.
This is analogous to the syntax allowed for VACUUM. This allows us to
avoid making new options reserved keywords and makes it easier to
allow arbitrary argument order. Oh, and it's consistent with the other
commands, too.
Author: Nathan Bossart
Reviewed-By: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/D3FC73E2-9B1A-4DB4-8180-55F57D116B4E@amazon.com
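For example (table name illustrative):

    -- Both forms are accepted; the parenthesized style keeps future
    -- options from having to become reserved keywords:
    ANALYZE VERBOSE accounts;
    ANALYZE (VERBOSE) accounts;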
Alvaro Herrera [Mon, 5 Mar 2018 22:37:19 +0000 (19:37 -0300)]
Clone extended stats in CREATE TABLE (LIKE INCLUDING ALL)
The LIKE INCLUDING ALL clause to CREATE TABLE intuitively indicates
cloning of extended statistics on the source table, but it failed to do
so. Patch it up so that it does. Also add an INCLUDING STATISTICS
option to the LIKE clause, so that the behavior can be requested or
excluded individually.
While at it, reorder the INCLUDING options, both in code and in docs,
into alphabetical order, which makes more sense than the
feature-implementation order used previously.
Backpatch this to Postgres 10, where extended statistics were
introduced, because this is seen as an oversight in a fresh feature
which is better to get consistent from the get-go instead of changing
only in pg11.
In pg11, comments on statistics objects are cloned too. In pg10 they
are not, because I (Álvaro) was too much of a coward to change the parse node as
required to support it. Also, in pg10 I chose not to renumber the
parser symbols for the various INCLUDING options in LIKE, for the same
reason. Any corresponding user-visible changes (docs) are backpatched,
though.
Reported-by: Stephen Froehlich
Author: David Rowley
Reviewed-by: Álvaro Herrera, Tomas Vondra
Discussion: https://postgr.es/m/CY1PR0601MB1927315B45667A1B679D0FD5E5EF0@CY1PR0601MB1927.namprd06.prod.outlook.com
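A quick sketch of the now-working behavior (object names illustrative):

    CREATE TABLE src (a int, b int);
    CREATE STATISTICS src_stats (dependencies) ON a, b FROM src;
    -- Both of these now clone the extended statistics object:
    CREATE TABLE copy1 (LIKE src INCLUDING STATISTICS);
    CREATE TABLE copy2 (LIKE src INCLUDING ALL);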
Tom Lane [Mon, 5 Mar 2018 21:20:06 +0000 (16:20 -0500)]
Temporarily instrument postgres_fdw test to look for statistics changes.
It seems fairly hard to explain recent buildfarm failures without the
theory that something is doing an ANALYZE behind our backs. Probe
for this directly to see if it's true.
In principle the outputs of these queries should be stable, since the table
in question is small enough that ANALYZE's sample will include all rows.
But even if that turns out to be wrong, we can put up with some failures
for a bit. I don't intend to leave this here indefinitely.
Tom Lane [Mon, 5 Mar 2018 20:37:23 +0000 (15:37 -0500)]
Add infrastructure to support server-version-dependent tab completion.
Up to now we've not worried about whether psql's tab completion queries
would work against prior server versions. But since we support older
server versions in describe.c, we really ought to do so here as well.
Failing to take care of this not only leads to loss of tab-completion
service when using an older server, but risks aborting a user's open
transaction when we send an incompatible query to the server.
The recent changes in pg_proc.prokind are one reason to take this more
seriously now than before, and the proposed patch for completion after
SELECT needs some such capability as well.
Hence, create some infrastructure to allow selection of one of several
versions of a query depending on server version. For cases where we
just need to select one of several query strings, introduce VersionedQuery
and COMPLETE_WITH_VERSIONED_QUERY(). Likewise extend the SchemaQuery
infrastructure to allow versioning of those.
I went ahead and fixed the prokind issues, to serve as an illustration
of how to use versioned SchemaQuery. To have some illustration of
VersionedQuery, change the support for completion of publication and
subscription names so that psql will not send sure-to-fail queries to
pre-v10 servers. There is much more that should be done to make tab
completion more friendly to older servers, but this commit is mainly
meant to get the infrastructure in place, so I stopped here.
Robert Haas [Mon, 5 Mar 2018 20:12:49 +0000 (15:12 -0500)]
shm_mq: Fix detach race condition.
Commit 34db06ef9a1d7f36391c64293bf1e0ce44a33915 adopted a lock-free
design for shm_mq.c, but it introduced a race condition that could
lose messages. When shm_mq_receive_bytes() detects that the other end
has detached, it must make sure that it has seen the final version of
mq_bytes_written, or it might miss a message sent before detaching.
Fujii Masao [Mon, 5 Mar 2018 17:08:18 +0000 (02:08 +0900)]
Fix pg_rewind to handle relation data files in tablespaces properly.
pg_rewind checks whether each file is a relation data file based on its
path. Previously this check logic had a bug that made pg_rewind fail to
recognize any relation data files in tablespaces, which in turn caused
an assertion failure in pg_rewind.
Tom Lane [Sun, 4 Mar 2018 01:31:35 +0000 (20:31 -0500)]
Fix assorted issues in convert_to_scalar().
If convert_to_scalar is passed a pair of datatypes it can't cope with,
its former behavior was just to elog(ERROR). While this is OK so far as
the core code is concerned, there's extension code that would like to use
scalarltsel/scalargtsel/etc as selectivity estimators for operators that
work on non-core datatypes, and this behavior is a show-stopper for that
use-case. If we simply allow convert_to_scalar to return FALSE instead of
outright failing, then the main logic of scalarltsel/scalargtsel will work
fine for any operator that behaves like a scalar inequality comparison.
The lack of conversion capability will mean that we can't estimate to
better than histogram-bin-width precision, since the code will effectively
assume that the comparison constant falls at the middle of its bin. But
that's still a lot better than nothing. (Someday we should provide a way
for extension code to supply a custom version of convert_to_scalar, but
today is not that day.)
While poking at this issue, we noted that the existing code for handling
type bytea in convert_to_scalar is several bricks shy of a load.
It assumes without checking that if the comparison value is type bytea,
the bounds values are too; in the worst case this could lead to a crash.
It also fails to detoast the input values, so that the comparison result is
complete garbage if any input is toasted out-of-line, compressed, or even
just short-header. I'm not sure how often such cases actually occur ---
the bounds values, at least, are probably safe since they are elements of
an array and hence can't be toasted. But that doesn't make this code OK.
Back-patch to all supported branches, partly because author requested that,
but mostly because of the bytea bugs. The change in API for the exposed
routine convert_network_to_scalar() is theoretically a back-patch hazard,
but it seems pretty unlikely that any third-party code is calling that
function directly.
Tom Lane [Sat, 3 Mar 2018 17:05:28 +0000 (12:05 -0500)]
Minor cleanup in genbki.pl.
Separate out the pg_attribute logic of genbki.pl into its own function.
Drop unnecessary "defined $catalog->{data}" check. This both narrows
and shortens the data writing loop of the script. There is no functional
change (the emitted files are the same as before).
Tom Lane [Sat, 3 Mar 2018 16:23:33 +0000 (11:23 -0500)]
Trivial adjustments in preparation for bootstrap data conversion.
Rationalize a couple of macro names:
* In catalog/pg_init_privs.h, rename Anum_pg_init_privs_privs to
Anum_pg_init_privs_initprivs to match the column's actual name.
* In ecpg, rename ZPBITOID to BITOID to match catalog/pg_type.h.
This reduces reader confusion, and will allow us to generate these
macros automatically in future.
In catalog/pg_tablespace.h, fix the ordering of related DATA and
#define lines to agree with how it's done elsewhere. This has no
impact today, but simplifies life for the bootstrap data conversion
scripts.
Prevent LDAP and SSL tests from running without support in build
Add checks in each test file that the build supports the feature;
otherwise, skip all the tests. Before, if someone were to (accidentally)
invoke these tests without build support, they would fail in confusing
ways.
based on patch from Michael Paquier <michael@paquier.xyz>
The SSL and LDAP test suites are not run by default, as they are not
secure for multi-user environments. This commit adds an extra make
variable to optionally enable them, for example:
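    # PG_TEST_EXTRA names the optional test suites to enable
    make check-world PG_TEST_EXTRA='ldap ssl'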
Tom Lane [Fri, 2 Mar 2018 22:40:48 +0000 (17:40 -0500)]
Fix VM buffer pin management in heap_lock_updated_tuple_rec().
Sloppy coding in this function could lead to leaking a VM buffer pin,
or to attempting to free the same pin twice. Repair. While at it,
reduce the code's tendency to free and reacquire the same page pin.
Back-patch to 9.6; before that, this routine did not concern itself
with VM pages.
Tom Lane [Fri, 2 Mar 2018 19:48:26 +0000 (14:48 -0500)]
Fix pgbench TAP test to work in VPATH builds.
Previously, it'd try to create log files under the source directory
not the build directory. This fell over if the source isn't writable
by the building user.
Add prokind column, replacing proisagg and proiswindow
The new column distinguishes normal functions, procedures, aggregates,
and window functions. This replaces the existing columns proisagg and
proiswindow, and replaces the convention that procedures are indicated
by prorettype == 0. Also change prorettype to be VOIDOID for procedures.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
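The new column can be queried directly; for instance (illustrative):

    -- prokind: 'f' = normal function, 'p' = procedure,
    --          'a' = aggregate, 'w' = window function
    SELECT proname, prokind
    FROM pg_proc
    WHERE proname IN ('sum', 'lower', 'row_number');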
Robert Haas [Fri, 2 Mar 2018 17:20:30 +0000 (12:20 -0500)]
shm_mq: Have the receiver set the sender's latch less frequently.
Instead of marking data from the ring buffer consumed and setting the
sender's latch for every message, do it only when the amount of data we
can consume is at least 1/4 of the size of the ring buffer, or when no
data remains in the ring buffer. This is dramatically faster in my
testing; apparently, the savings from sending signals less frequently
outweighs the benefit of letting the sender know about available buffer
space sooner.
Patch by me, reviewed by Andres Freund and tested by Rafia Sabih.
Robert Haas [Fri, 2 Mar 2018 17:16:59 +0000 (12:16 -0500)]
shm_mq: Reduce spinlock usage.
Previously, mq_bytes_read and mq_bytes_written were protected by the
spinlock, but that turns out to cause pretty serious spinlock
contention on queries which send many tuples through a Gather or
Gather Merge node. This patch changes things so that we instead
read and write those values using 8-byte atomics. Since mq_bytes_read
can only be changed by the receiver and mq_bytes_written can only be
changed by the sender, the only purpose of the spinlock is to prevent
reads and writes of these values from being torn on platforms where
8-byte memory access is not atomic, making the conversion fairly
straightforward.
Testing shows that this produces some slowdown if we're using emulated
64-bit atomics, but since they should be available on any platform
where performance is a primary concern, that seems OK. It's faster,
sometimes a lot faster, on platforms where such atomics are available.
Patch by me, reviewed by Andres Freund, who also suggested the
design. Also tested by Rafia Sabih.
Tom Lane [Fri, 2 Mar 2018 16:22:42 +0000 (11:22 -0500)]
Make gistvacuumcleanup() count the actual number of index tuples.
Previously, it just returned the heap tuple count, which might be only an
estimate, and would be completely the wrong thing if the index is partial.
Since this function scans every index page anyway to find free pages,
it's practically free to count the surviving index tuples. Let's do that
and return an accurate count.
This is easily visible as a wrong reltuples value for a partial GiST
index following VACUUM, so back-patch to all supported branches.
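A sketch that makes the fix visible (names and sizes illustrative):

    CREATE TABLE pts (p point);
    INSERT INTO pts SELECT point(i, i) FROM generate_series(1, 1000) i;
    CREATE INDEX pts_part_idx ON pts USING gist (p) WHERE p[0] > 900;
    VACUUM pts;
    -- reltuples now reflects the partial index's ~100 entries rather than
    -- the table's 1000 heap tuples:
    SELECT reltuples FROM pg_class WHERE relname = 'pts_part_idx';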
Magnus Hagander [Fri, 2 Mar 2018 11:40:49 +0000 (12:40 +0100)]
Fix msvc builds for ActivePerl > 5.24
From this version on, ActivePerl ships both a .lib and a .a file for the
perl library, a combination our code would misdetect as there being no
library available. Instead, we should pick the .lib version and use that.
Andres Freund [Fri, 2 Mar 2018 00:25:46 +0000 (16:25 -0800)]
Minor clean-up in dshash.{c,h}.
For consistency with other code that deals in numbers of buckets, the
macro BUCKETS_PER_PARTITION should produce a value of type size_t.
Also, fix a mention of an obsolete proposed name for dshash.c that
appeared in a comment.
Author: Thomas Munro, based on an observation from Amit Kapila
Discussion: https://postgr.es/m/CAA4eK1%2BBOp5aaW3aHEkg5Bptf8Ga_BkBnmA-%3DXcAXShs0yCiYQ%40mail.gmail.com
Andres Freund [Fri, 2 Mar 2018 00:21:52 +0000 (16:21 -0800)]
Remove volatile qualifiers from shm_mq.c.
Since commit 0709b7ee, spinlock primitives include a compiler barrier,
so it is no longer necessary to access either spinlocks or the memory
they protect through pointer-to-volatile. This follows earlier commits
e93b6298, d53e3d5f, 430008b5, 8f6bb851, and df4077cd.
Author: Thomas Munro
Discussion: https://postgr.es/m/CAEepm=204T37SxcHo4=xw5btho9jQ-=ZYYrVdcKyz82XYzMoqg@mail.gmail.com
Tom Lane [Thu, 1 Mar 2018 21:23:29 +0000 (16:23 -0500)]
Use ereport not elog for some corrupt-HOT-chain reports.
These errors have been seen in the field in corrupted-data situations.
It seems worthwhile to report them with ERRCODE_DATA_CORRUPTED, rather
than the generic ERRCODE_INTERNAL_ERROR, for the benefit of log monitoring
and tools like amcheck. However, use errmsg_internal so that the text
strings still aren't translated; it seems unlikely to be worth
translators' time to do so.
Back-patch to 9.3, like the predecessor commit d70cf811f that introduced
these elog calls originally (replacing Asserts).
Alvaro Herrera [Thu, 1 Mar 2018 21:07:46 +0000 (18:07 -0300)]
Relax overly strict sanity check for upgraded ancient databases
Commit 4800f16a7ad0 added some sanity checks to ensure we don't
accidentally corrupt data, but in one of them we failed to consider the
effects of a database upgraded from 9.2 or earlier, where a tuple
exclusively locked prior to the upgrade has a slightly different bit
pattern. Fix that by using the macro that we fixed in commit 74ebba84aeb6 for similar situations.
Reported-by: Alexandre Garcia
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CAPYLKR6yxV4=pfW0Gwij7aPNiiPx+3ib4USVYnbuQdUtmkMaEA@mail.gmail.com
Andres suspects that this bug may have wider ranging consequences, but I
couldn't find anything.
Tom Lane [Thu, 1 Mar 2018 20:35:03 +0000 (15:35 -0500)]
Fix IOS planning when only some index columns can return an attribute.
Since 9.5, it's possible that some but not all columns of an index
support returning the indexed value for index-only scans. If the
same indexed column appears in index columns that behave both ways,
check_index_only() supposed that it'd be OK to do an index-only scan
testing that column; but that fails if we have to recheck the indexed
condition on one of the columns that doesn't support this.
In principle we could make this work by remapping the recheck expressions
to pull the value from a column that does support returning the indexed
value. But such cases are so weird and rare that, at least for now,
it doesn't seem worth the trouble. Instead, just teach check_index_only
that a value is returnable only if all the index columns containing it
are returnable, rather than any of them.
Per report from David Pereiro Lagares. Back-patch to 9.5 where the
possibility of this situation appeared.
Tom Lane [Thu, 1 Mar 2018 17:03:29 +0000 (12:03 -0500)]
Remove out-of-date comment about formrdesc().
formrdesc's comment listed the specific catalogs it is called for,
but the list was out of date. Rather than jumping back onto that
maintenance treadmill, let's just remove the list. It tells the
reader nothing that can't be learned quickly and more reliably by
searching relcache.c for callers of formrdesc().
Tom Lane [Thu, 1 Mar 2018 16:37:46 +0000 (11:37 -0500)]
Fix format_type() to restore its old behavior.
Commit a26116c6c accidentally changed the behavior of the SQL format_type()
function while refactoring. For the reasons explained in that function's
comment, a NULL typemod argument should behave differently from a -1
argument. Since we've managed to break this, add a regression test
memorializing the intended behavior.
In passing, be consistent about the type of the "flags" parameter.
Noted by Rushabh Lathia, though I revised the patch some more.
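The distinction can be seen with a type whose bare name implies a
typemod (a sketch from memory, not the committed test):

    SELECT format_type('bpchar'::regtype, NULL);  -- 'character'
    SELECT format_type('bpchar'::regtype, -1);    -- 'bpchar'
    -- Bare 'character' is read back by the parser as character(1), so an
    -- explicit -1 typemod must be spelled differently to round-trip.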
Andres Freund [Thu, 1 Mar 2018 09:46:04 +0000 (01:46 -0800)]
doc: Add WaitForBackgroundWorkerShutdown() to bgw docs.
Commit 924bcf4f16d added WaitForBackgroundWorkerShutdown, but didn't
add it to the documentation. Fix that and two small spelling errors in
the WaitForBackgroundWorkerStartup paragraph.
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/C8738949-0350-4999-A1DA-26E209FF248D@yesql.se
Tom Lane [Thu, 1 Mar 2018 00:25:54 +0000 (19:25 -0500)]
Remove redundant IndexTupleDSize macro.
Use IndexTupleSize everywhere, instead. Also, remove IndexTupleSize's
internal typecast, as that's not really needed and might mask coding
errors. Change some pointer variable datatypes in the call sites
to compensate for that and make it clearer what we're assuming.
Tom Lane [Wed, 28 Feb 2018 23:33:45 +0000 (18:33 -0500)]
Rename base64 routines to avoid conflict with Solaris built-in functions.
Solaris 11.4 has built-in functions named b64_encode and b64_decode.
Rename ours to something else to avoid the conflict (fortunately,
ours are static so the impact is limited).
One could wish for less duplication of code in this area, but that
would be a larger patch and not very suitable for back-patching.
Since this is a portability fix, we want to put it into all supported
branches.
Report and initial patch by Rainer Orth, reviewed and adjusted a bit
by Michael Paquier
Tom Lane [Wed, 28 Feb 2018 21:57:37 +0000 (16:57 -0500)]
Remove restriction on SQL block length in isolationtester scanner.
specscanner.l had a fixed limit of 1024 bytes on the length of
individual SQL stanzas in an isolation test script. People are
starting to run into that, so fix it by making the buffer resizable.
Once we allow this in HEAD, it seems inevitable that somebody will
try to back-patch a test that exceeds the old limit, so back-patch
this change as a preventive measure.
Robert Haas [Wed, 28 Feb 2018 17:16:09 +0000 (12:16 -0500)]
For partitionwise join, match on partcollation, not parttypcoll.
The previous code considered two tables to have the same partition
scheme if the underlying columns had the same collation, but what we
actually need to compare is not the collations associated with the
column but the collation used for partitioning. Fix that.
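The difference matters when the partition key carries its own collation
(sketch; collation names illustrative):

    -- Both columns have collation "en_US" (parttypcoll), but partitioning
    -- happens under collation "C" (partcollation); the latter is what must
    -- match for partitionwise join:
    CREATE TABLE t1 (a text COLLATE "en_US") PARTITION BY RANGE (a COLLATE "C");
    CREATE TABLE t2 (a text COLLATE "en_US") PARTITION BY RANGE (a COLLATE "C");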
Robert Haas [Wed, 28 Feb 2018 15:56:06 +0000 (10:56 -0500)]
Fix assertion failure when Parallel Append is run serially.
Parallel-aware plan nodes must be prepared to run without parallelism
if it's not possible at execution time for whatever reason. Commit ab72716778128fb63d54ac256adf7fe6820a1185, which introduced Parallel
Append, overlooked this.
Rajkumar Raghuwanshi reported this problem, and I included his test
case in this patch. The code changes are by me.
Peter Eisentraut [Sat, 24 Feb 2018 00:52:30 +0000 (19:52 -0500)]
doc: Improve man build speed
Turn off the man.endnotes.are.numbered parameter, which we don't need
and which vastly slows the build when on. Also turn on
man.output.quietly, which also makes things a bit faster; it is less
useful as a progress indicator anyway now that the build is so fast.
Peter Eisentraut [Wed, 28 Feb 2018 13:22:51 +0000 (08:22 -0500)]
Fix warnings in man page build
The changes in the CREATE POLICY man page from commit 87c2a17fee784c7e1004ba3d3c5d8147da676783 triggered a stylesheet bug that
created some warning messages and incorrect output. This installs a
workaround.
Also improve the whitespace a bit so it looks better.
Tom Lane [Tue, 27 Feb 2018 21:46:52 +0000 (16:46 -0500)]
Fix up ecpg's configuration so it handles "long long int" in MSVC builds.
Although configure-based builds correctly define HAVE_LONG_LONG_INT when
appropriate (in both pg_config.h and ecpg_config.h), builds using the MSVC
scripts failed to do so. This currently has no impact on the backend,
since it uses that symbol nowhere; but it does prevent ecpg from
supporting "long long int". Fix that.
Also, adjust Solution.pm so that in the constructed ecpg_config.h file,
the "#if (_MSC_VER > 1200)" covers only the LONG_LONG_INT-related
#defines, not the whole file. AFAICS this was a thinko on somebody's
part: ENABLE_THREAD_SAFETY should always be defined in Windows builds,
and in branches using USE_INTEGER_DATETIMES, the setting of that shouldn't
depend on the compiler version either. If I'm wrong, I imagine the
buildfarm will say so.
Per bug #15080 from Jonathan Allen; issue diagnosed by Michael Meskes
and Andrew Gierth. Back-patch to all supported branches.
Tom Lane [Tue, 27 Feb 2018 20:56:51 +0000 (15:56 -0500)]
Use the correct tuplestore read pointer in a NamedTuplestoreScan.
Tom Kazimiers reported that transition tables don't work correctly when
they are scanned by more than one executor node. That's because commit 18ce3a4ab allocated separate read pointers for each executor node, as it
must, but failed to make them active at the appropriate times. Repair.
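The failure mode requires a transition table read by more than one plan
node, for example a self-join inside an AFTER STATEMENT trigger (sketch,
not the reporter's original case):

    CREATE TABLE t (a int);
    CREATE FUNCTION t_audit() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
      -- two NamedTuplestoreScan nodes over the same transition table:
      PERFORM 1 FROM newrows n1 JOIN newrows n2 ON n1.a = n2.a;
      RETURN NULL;
    END $$;
    CREATE TRIGGER t_ins AFTER INSERT ON t
      REFERENCING NEW TABLE AS newrows
      FOR EACH STATEMENT EXECUTE PROCEDURE t_audit();
    INSERT INTO t VALUES (1), (2);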