Tom Lane [Tue, 12 Apr 2016 23:52:21 +0000 (19:52 -0400)]
Provide errno-translation wrappers around bind() and listen() on Windows.
I've seen one too many "could not bind IPv4 socket: No error" log entries
from the Windows buildfarm members. Per previous discussion, this is
likely caused by the fact that we're doing nothing to translate
WSAGetLastError() to errno. Put in a wrapper layer to do that.
If this works as expected, it should get back-patched, but let's see what
happens in the buildfarm first.
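As a rough illustration of the approach (not the committed code; the wrapper names and the set of translated error codes are assumptions), a sketch could look like:

    #ifdef WIN32
    #include <winsock2.h>
    #include <errno.h>

    /* Map the most relevant Winsock errors onto errno values so that later
     * strerror()/%m reporting yields a real message instead of "No error". */
    static void
    translate_wsa_error_to_errno(void)
    {
        switch (WSAGetLastError())
        {
            case WSAEADDRINUSE:
                errno = EADDRINUSE;
                break;
            case WSAEACCES:
                errno = EACCES;
                break;
            case WSAEADDRNOTAVAIL:
                errno = EADDRNOTAVAIL;
                break;
            default:
                errno = EINVAL;     /* fallback when no close errno exists */
                break;
        }
    }

    int
    pgwin32_bind(SOCKET s, struct sockaddr *addr, int addrlen)
    {
        int         res = bind(s, addr, addrlen);

        if (res == SOCKET_ERROR)
            translate_wsa_error_to_errno();
        return res;
    }

    int
    pgwin32_listen(SOCKET s, int backlog)
    {
        int         res = listen(s, backlog);

        if (res == SOCKET_ERROR)
            translate_wsa_error_to_errno();
        return res;
    }
    #endif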
Robert Haas [Tue, 12 Apr 2016 20:24:55 +0000 (16:24 -0400)]
Fix costing for parallel aggregation.
The original patch kind of ignored the fact that we were doing something
different from a costing point of view, but nobody noticed. This patch
fixes that oversight.
Remove unused function GetOldestWALSendPointer from walsender code.
That unused function was introduced as a sample because synchronous
replication or replication monitoring tools might need it in the future.
Recently commit 989be08 added the function SyncRepGetOldestSyncRecPtr, which
provides almost the same functionality for the multiple synchronous standbys
feature. So it's time to remove that unused sample function.
This commit does that.
Tom Lane [Tue, 12 Apr 2016 19:23:14 +0000 (15:23 -0400)]
Redefine create_upper_paths_hook as being invoked once per upper relation.
Per discussion, this gives potential users of the hook more flexibility,
because they can build custom Paths that implement only one stage of
upper processing atop core-provided Paths for earlier stages.
Tom Lane [Tue, 12 Apr 2016 16:52:35 +0000 (12:52 -0400)]
Improve coding of column-name parsing in psql's new crosstabview.c.
Coverity complained about this code, not without reason because it was
rather messy. Adjust it to not scribble on the passed string; that adds
one malloc/free cycle per column name, which is going to be insignificant
in context. We can actually const-ify both the string argument and the
PGresult.
Kevin Grittner [Tue, 12 Apr 2016 16:48:02 +0000 (11:48 -0500)]
Avoid extra locks in GetSnapshotData if old_snapshot_threshold < 0
On a big NUMA machine with 1000 connections at saturation load
there was a performance regression due to spinlock contention while
acquiring values which were never used. Just fill with dummy
values if we're not going to use them.
This patch has not been benchmarked yet on a big NUMA machine, but
it seems like a good idea on general principle, and it seemed to
prevent an apparent 2.2% regression on a single-socket i7 box
running 200 connections at saturation load.
Tom Lane [Tue, 12 Apr 2016 15:42:06 +0000 (11:42 -0400)]
Improve API of GenericXLogRegister().
Rename this function to GenericXLogRegisterBuffer() to make it clearer
what it does, and leave room for other sorts of "register" actions in
future. Also, replace its "bool isNew" argument with an integer flags
argument, so as to allow adding more flags in future without an API
break.
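A hedged usage sketch of the revised API (the surrounding function and buffer names are illustrative only, assuming the 9.6-era generic_xlog.h declarations):

    #include "postgres.h"
    #include "access/generic_xlog.h"
    #include "storage/bufmgr.h"
    #include "utils/rel.h"

    static void
    example_generic_log(Relation index, Buffer existing_buf, Buffer new_buf)
    {
        GenericXLogState *state = GenericXLogStart(index);
        Page        page;

        /* existing page: no flags, so changes are logged as a delta */
        page = GenericXLogRegisterBuffer(state, existing_buf, 0);

        /* freshly initialized page: the flag replacing "bool isNew" requests
         * a full-page image */
        page = GenericXLogRegisterBuffer(state, new_buf, GENERIC_XLOG_FULL_IMAGE);

        /* ... modify the returned page images here ... */
        (void) page;

        GenericXLogFinish(state);
    }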
Tom Lane [Tue, 12 Apr 2016 15:13:52 +0000 (11:13 -0400)]
In generic WAL application and replay, ensure page "hole" is always zero.
The previous coding could allow the contents of the "hole" between pd_lower
and pd_upper to diverge during replay from what it had been when the update
was originally applied. This would pose a problem if checksums were in
use, and in any case would complicate forensic comparisons between master
and slave servers. So force the "hole" to contain zeroes, both at initial
application of a generically-logged action, and at replay.
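A minimal sketch of the idea (not the generic_xlog.c code itself; the helper name is hypothetical):

    #include <string.h>
    #include "postgres.h"
    #include "storage/bufpage.h"

    /* Zero the unused "hole" between pd_lower and pd_upper so the page image
     * is byte-for-byte deterministic both at apply time and at replay. */
    static void
    zero_page_hole(Page page)
    {
        PageHeader  ph = (PageHeader) page;

        if (ph->pd_lower < ph->pd_upper)
            memset((char *) page + ph->pd_lower, 0,
                   ph->pd_upper - ph->pd_lower);
    }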
Added to ensure that bloom index pages can be distinguished from other pages
by pg_filedump. Because there weren't any public/production versions before,
this doesn't pay attention to any compatibility issues.
Tom Lane [Tue, 12 Apr 2016 14:52:58 +0000 (10:52 -0400)]
Remove unnecessary definition of _WIN64 in libpq/win32.mak.
In commit b0e40d189325dc7a54d2546245e766f8c47a7c8d, I should have just
removed the /D switch defining WIN64. The reason the code worked before
is that all Windows64 compilers automatically predefine _WIN64. Perhaps
at one time we had code that depended on WIN64 being defined, but it's
long gone, and we should not encourage any reappearance. Per discussion
with Christian Ullrich.
Tom Lane [Tue, 12 Apr 2016 00:07:17 +0000 (20:07 -0400)]
Fix _SPI_execute_plan() for CREATE TABLE IF NOT EXISTS foo AS ...
When IF NOT EXISTS was added to CREATE TABLE AS, this logic didn't get
the memo, possibly resulting in an Assert failure. It looks like there
would have been no ill effects in a non-Assert build, though. Back-patch
to 9.5 where the IF NOT EXISTS option was added.
Tom Lane [Mon, 11 Apr 2016 23:37:04 +0000 (19:37 -0400)]
Fix two places that thought Windows64 is indicated by WIN64 macro.
Everyplace else thinks it's _WIN64, so make these places fall in line.
The pg_regress.c usage is not going to result in any change in behavior,
only suppressing (or not) a compiler warning about downcasting HANDLEs.
So there seems no need for back-patching there.
The libpq/win32.mak usage might represent an actual bug, if anyone were
using this script to build for Windows64, which perhaps nobody is.
Given the lack of field complaints, no back-patch here either.
pg_regress.c problem found by Christian Ullrich, the other by me.
Tom Lane [Mon, 11 Apr 2016 22:17:02 +0000 (18:17 -0400)]
Fix freshly-introduced PL/Python portability bug.
It turns out that those PyErr_Clear() calls I removed from plpy_elog.c
in 7e3bb080387f4143 et al were not quite as random as they appeared: they
mask a Python 2.3.x bug. (Specifically, it turns out that PyType_Ready()
can fail if the error indicator is set on entry, and PLy_traceback's fetch
of frame.f_code may be the first operation in a session that requires the
"frame" type to be readied. Ick.) Put back the clear call, but in a more
centralized place closer to what it's protecting, and this time with a
comment warning what it's really for.
Per buildfarm member prairiedog. Although prairiedog was only failing
on HEAD, it seems clearly possible for this to occur in older branches
as well, so back-patch to 9.2 the same as the previous patch.
Kevin Grittner [Mon, 11 Apr 2016 21:47:50 +0000 (16:47 -0500)]
Use static inline function for BufferGetPage()
I was initially concerned that some of the hundreds of
references to BufferGetPage() where the literal
BGP_NO_SNAPSHOT_TEST was passed might not optimize as well as a
macro, leading to some hard-to-find performance regressions in
corner cases. Inspection of disassembled code has shown identical
code at all inspected locations, and the size difference doesn't
amount to even one byte per such call. So make it readable.
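A generic before/after illustration of the macro-to-inline conversion (not the actual bufmgr.h text, which takes additional arguments):

    #include "postgres.h"
    #include "storage/bufmgr.h"
    #include "storage/bufpage.h"

    /* Before: a macro, which cannot type-check its argument and may evaluate
     * it more than once. */
    #define BufferGetPageMacro(buffer)  ((Page) BufferGetBlock(buffer))

    /* After: a static inline function with the same body; compilers emit
     * identical code, but the argument is type-checked and evaluated once. */
    static inline Page
    BufferGetPageInline(Buffer buffer)
    {
        return (Page) BufferGetBlock(buffer);
    }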
Kevin Grittner [Mon, 11 Apr 2016 20:43:52 +0000 (15:43 -0500)]
Make oldSnapshotControl a pointer to a volatile structure
It was incorrectly declared as a volatile pointer to a non-volatile
structure. Eliminate the OldSnapshotControl struct definition; it
is really not needed. Pointed out by Tom Lane.
While at it, add OldSnapshotControlData to pgindent's list of
structures.
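The distinction being corrected, in a generic illustration (the structure contents here are made up):

    typedef struct
    {
        int     threshold_timestamp;
        int     head_offset;
    } OldSnapshotControlData;

    /* Wrong: only the pointer variable is volatile; accesses through it to
     * the shared structure may still be cached or reordered. */
    OldSnapshotControlData *volatile wrong_decl;

    /* Right: the pointer is ordinary, but every access to the pointed-to
     * shared-memory structure is treated as volatile. */
    volatile OldSnapshotControlData *right_decl;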
Tom Lane [Mon, 11 Apr 2016 15:49:48 +0000 (11:49 -0400)]
Fix missing "volatile" in PLy_output().
Commit 5c3c3cd0a3046339 plastered "volatile" on a bunch of variables
in PLy_output(), but removed the one that actually mattered, ie the
one on "oldcontext". This allows some versions of clang to generate
code in which "oldcontext" has been trashed when control reaches the
PG_CATCH block. Per buildfarm member tick.
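A generic illustration of why the qualifier matters (plain setjmp/longjmp standing in for PG_TRY/PG_CATCH; this is not the plpython code):

    #include <setjmp.h>

    static jmp_buf env;

    static void
    illustrate_volatile(void)
    {
        /* Without "volatile", a local modified after setjmp() and read after
         * longjmp() has an indeterminate value; the compiler may keep it in a
         * register that the longjmp clobbers. */
        volatile int oldcontext = 0;

        if (setjmp(env) == 0)
        {
            oldcontext = 42;        /* modified between setjmp and longjmp */
            longjmp(env, 1);        /* analogous to an error being thrown */
        }
        else
        {
            /* Safe to read only because oldcontext is volatile. */
            (void) oldcontext;
        }
    }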
Use ereport(ERROR) instead of Assert() to emit syncrep_parser error.
The existing code would either Assert or generate an invalid
SyncRepConfig variable, neither of which is desirable. A regular
error should be thrown instead.
This commit silences a compiler warning in non-assertion-enabled builds.
Per report from Jeff Janes.
Suggested fix by Tom Lane.
It's not entirely clear to me whether PyString_AsString can return
null (looks like the answer might vary between Python 2 and 3).
But in any case, this code's attempt to cope with the possibility
was quite broken, because pstrdup() neither allows a null argument
nor ever returns a null.
Moreover, the code below this point assumes that "message" is a
palloc'd string, which would not be the case for a dgettext result.
Fix both problems by doing the pstrdup step separately.
Tom Lane [Mon, 11 Apr 2016 04:00:08 +0000 (00:00 -0400)]
pg_dump: add missing "destroyPQExpBuffer(query)" in dumpForeignServer().
Coverity complained about this resource leak (why now, I don't know,
since it's been like that a long time). Our general policy in pg_dump
is that PQExpBuffers are worth cleaning up, so do it here too. But
don't bother with a back-patch, because it seems unlikely that very
many databases contain enough FOREIGN SERVER objects to notice.
Tom Lane [Mon, 11 Apr 2016 03:52:34 +0000 (23:52 -0400)]
Add comment about intentional fallthrough in switch.
Coverity complained about an apparent missing "break" in a switch
added by bb140506df605fab. The human-readable comments are pretty
clear that this is intentional, but add a standard /* FALL THRU */
comment to make it clear to tools too.
Tom Lane [Mon, 11 Apr 2016 03:47:30 +0000 (23:47 -0400)]
Clean up foreign-key caching code in planner.
Coverity complained that the code added by 015e88942aa50f0d lacked an
error check for SearchSysCache1 failures, which it should have. But
the code was pretty duff in other ways too, including failure to think
about whether it could really cope with arrays of different lengths.
Tom Lane [Mon, 11 Apr 2016 03:15:55 +0000 (23:15 -0400)]
Fix access-to-already-freed-memory issue in plpython's error handling.
PLy_elog() could attempt to access strings that Python had already freed,
because the strings that PLy_get_spi_error_data() returns are simply
pointers into storage associated with the error "val" PyObject. That's
fine at the instant PLy_get_spi_error_data() returns them, but just after
that PLy_traceback() intentionally releases the only refcount on that
object, allowing it to be freed --- so that the strings we pass to
ereport() are dangling pointers.
In principle this could result in garbage output or a coredump. In
practice, I think the risk is pretty low, because there are no Python
operations between where we decrement that refcount and where we use the
strings (and copy them into PG storage), and thus no reason for Python
to recycle the storage. Still, it's clearly hazardous, and it leads to
Valgrind complaints when running under a Valgrind that hasn't been
lobotomized to ignore Python memory allocations.
The code was a mess anyway: we fetched the error data out of Python
(clearing Python's error indicator) with PyErr_Fetch, examined it, pushed
it back into Python with PyErr_Restore (re-setting the error indicator),
then immediately pulled it back out with another PyErr_Fetch. Just to
confuse matters even more, there were some gratuitous-and-yet-hazardous
PyErr_Clear calls in the "examine" step, and we didn't get around to doing
PyErr_NormalizeException until after the second PyErr_Fetch, making it even
less clear which object was being manipulated where and whether we still
had a refcount on it. (If PyErr_NormalizeException did substitute a
different "val" object, it's possible that the problem could manifest for
real, because then we'd be doing assorted Python stuff with no refcount
on the object we have string pointers into.)
So, rearrange all that into some semblance of sanity, and don't decrement
the refcount on the Python error objects until the end of PLy_elog().
In HEAD, I failed to resist the temptation to reformat some messy bits
from 5c3c3cd0a3046339 along the way.
Back-patch as far as 9.2, because the code is substantially the same
that far back. I believe that 9.1 has the bug as well; but the code
around it is rather different and I don't want to take a chance on
breaking something for what seems a low-probability problem.
Andres Freund [Mon, 11 Apr 2016 03:12:32 +0000 (20:12 -0700)]
Avoid the use of a separate spinlock to protect a LWLock's wait queue.
Previously we used a spinlock, in addition to the atomically manipulated
->state field, to protect the wait queue. But it's pretty simple to
instead perform the locking using a flag in state.
Due to 6150a1b0, BufferDescs on platforms (like PPC) with > 1-byte
spinlocks increased their size above 64 bytes. As 64 bytes is the size
we pad allocated BufferDescs to, this can increase false sharing,
causing performance problems in turn. Together with the previous commit
this reduces the size to <= 64 bytes on all common platforms.
Author: Andres Freund
Discussion: CAA4eK1+ZeB8PMwwktf+3bRS0Pt4Ux6Rs6Aom0uip8c6shJWmyg@mail.gmail.com 20160327121858.zrmrjegmji2ymnvr@alap3.anarazel.de
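A simplified sketch of the technique (using C11 atomics rather than PostgreSQL's pg_atomic_* API; the flag name and layout are illustrative, not lwlock.c's actual definitions):

    #include <stdatomic.h>
    #include <stdint.h>

    #define LW_FLAG_LIST_LOCKED  0x10000000u   /* hypothetical flag bit */

    /* Acquire the wait-list "lock" by atomically setting a flag bit in the
     * same state word that carries the lock's other state. */
    static void
    wait_list_lock(_Atomic uint32_t *state)
    {
        uint32_t    old;

        do
        {
            old = atomic_fetch_or(state, LW_FLAG_LIST_LOCKED);
        } while (old & LW_FLAG_LIST_LOCKED);    /* someone else held it; retry */
    }

    static void
    wait_list_unlock(_Atomic uint32_t *state)
    {
        atomic_fetch_and(state, ~LW_FLAG_LIST_LOCKED);
    }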
Andres Freund [Mon, 11 Apr 2016 03:12:32 +0000 (20:12 -0700)]
Allow Pin/UnpinBuffer to operate in a lockfree manner.
Pinning/Unpinning a buffer is a very frequent operation; especially in
read-mostly cache resident workloads. Benchmarking shows that in various
scenarios the spinlock protecting a buffer header's state becomes a
significant bottleneck. The problem can be reproduced with pgbench -S on
larger machines, but can be considerably worse for queries which touch
the same buffers over and over at a high frequency (e.g. nested loops
over a small inner table).
To allow atomic operations to be used, cram BufferDesc's flags,
usage_count, buf_hdr_lock, and refcount into a single 32-bit atomic
variable, allowing them to be manipulated together using 32-bit
compare-and-swap operations. This requires reducing MAX_BACKENDS to
2^18-1 (which could be lifted by using a 64-bit field, but that's not a
realistic configuration at the moment).
As not all operations can easily be implemented in a lockfree manner,
implement the previous buf_hdr_lock via a flag bit in the atomic
variable. That way we can continue to lock the header in places where
it's needed, but can get away without acquiring it in the more frequent
hot paths. There are some additional operations that could be done
without the lock but aren't in this patch; the most important places are
covered.
As bufmgr.c now essentially re-implements spinlocks, abstract the delay
logic from s_lock.c into something more generic. It already has two
users, and more are coming up; there's a follow-up patch for lwlock.c at
least.
This patch is based on a proof-of-concept written by me, which Alexander
Korotkov made into a fully working patch; the committed version is again
revised by me. Benchmarking and testing has, amongst others, been
provided by Dilip Kumar, Alexander Korotkov, Robert Haas.
On a large x86 system, improvements of a factor of 8 have been observed
for read-only pgbench with a high client count.
Author: Alexander Korotkov and Andres Freund
Discussion: 2400449.GjM57CE0Yg@dinodell
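A simplified sketch of the packing and the lock-free pin (C11 atomics instead of pg_atomic_*; constants and layout are illustrative, not bufmgr's actual definitions):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical layout: refcount in the low 18 bits (hence the new
     * MAX_BACKENDS limit of 2^18-1), usage count above it, flags on top. */
    #define BUF_REFCOUNT_ONE   1u
    #define BUF_HDR_LOCKED     0x80000000u      /* stands in for buf_hdr_lock */

    static bool
    try_pin_buffer_lockfree(_Atomic uint32_t *buf_state)
    {
        uint32_t    old = atomic_load(buf_state);

        while (!(old & BUF_HDR_LOCKED))
        {
            uint32_t    newval = old + BUF_REFCOUNT_ONE;   /* bump refcount */

            /* On success the pin is taken by a single compare-and-swap with
             * no lock; on failure "old" is refreshed and we retry. */
            if (atomic_compare_exchange_weak(buf_state, &old, newval))
                return true;
        }
        return false;   /* header bit set: caller falls back to locked path */
    }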
Tom Lane [Sun, 10 Apr 2016 17:12:24 +0000 (13:12 -0400)]
Improve contrib/bloom regression test using code coverage info.
Originally, this test created a 100000-row test table, which made it
run rather slowly compared to other contrib tests. Investigation with
gcov showed that we got no further improvement in code coverage after
the first 700 or so rows, making the large table 99% a waste of time.
Cut it back to 2000 rows to fix the runtime problem and still leave
some headroom for testing behaviors that may appear later.
A closer look at the gcov results showed that the main coverage
omissions in contrib/bloom occurred because the test never filled more
than one entry in the notFullPage array, which is unsurprising because
it exercised index cleanup only in the scenario of complete table
deletion, allowing every page in the index to become deleted rather
than not-full. Add testing that allows the not-full path to be
exercised as well.
Also, test the amvalidate function, because blvalidate.c had zero
coverage without that, and besides it's a good idea to check for
mistakes in the bloom opclass definitions.
Fix possible NULL dereference in ExecAlterObjectDependsStmt
I used the wrong variable here. Doesn't make a difference today because
the only plausible caller passes a non-NULL variable, but someday it
will be wrong, and even today's correctness is subtle: the caller that
does pass a NULL is never invoked because of object type constraints.
Surely not a condition to rely on.
Tom Lane [Sun, 10 Apr 2016 04:24:28 +0000 (00:24 -0400)]
Further minor improvement in generic_xlog.c: always say REGBUF_STANDARD.
Since we're requiring pages handled by generic_xlog.c to be standard
format, specify REGBUF_STANDARD when doing a full-page image, so that
xloginsert.c can compress out the "hole" between pd_lower and pd_upper.
Given the current API in which this path will be taken only for a newly
initialized page, the hole is likely to be particularly large in such
cases, so that this oversight could easily be performance-significant.
I don't notice any particular change in the runtime of contrib/bloom's
regression test, though.
Tom Lane [Sat, 9 Apr 2016 23:30:56 +0000 (19:30 -0400)]
Micro-optimize GenericXLogFinish().
Make the inner comparison loops of computeDelta() as tight as possible by
pulling considerations of valid and invalid ranges out of the inner loops,
and extending a match or non-match detection as far as possible before
deciding what to do next. To keep this tractable, give up the possibility
of merging fragments across the pd_lower to pd_upper gap. The fraction of
pages where that could happen (ie, there are 4 or fewer bytes in the gap,
*and* data changes immediately adjacent to it on both sides) is too small
to be worth spending cycles on.
Also, avoid two BLCKSZ-length memcpy()s by computing the delta before
moving data into the target buffer, instead of after. This doesn't save
nearly as many cycles as being tenser about computeDelta(), but it still
seems worth doing.
On my machine, this patch cuts a full 40% off the runtime of
contrib/bloom's regression test.
Tom Lane [Sat, 9 Apr 2016 20:39:30 +0000 (16:39 -0400)]
Get rid of GenericXLogUnregister().
This routine is unsafe as implemented, because it invalidates the page
image pointers returned by previous GenericXLogRegister() calls.
Rather than complicate the API or the implementation to avoid that,
let's just get rid of it; the use-case for having it seems much
too thin to justify a lot of work here.
While at it, do some wordsmithing on the SGML docs for generic WAL.
Tom Lane [Sat, 9 Apr 2016 19:02:19 +0000 (15:02 -0400)]
Code review/prettification for generic_xlog.c.
Improve commentary, use more specific names for the delta fields,
const-ify pointer arguments where possible, avoid assuming that
initializing only the first element of a local array will guarantee
that the remaining elements end up as we need them. (I think that
code in generic_redo actually worked, but only because InvalidBuffer
is zero; this is a particularly ugly way of depending on that ...)
\crosstabview is a completely different way to display results from a
query: instead of a vertical display of rows, the data values are placed
in a grid where the column and row headers come from the data itself,
similar to a spreadsheet.
The sort order of the horizontal header can be specified by using
another column in the query, while the vertical header takes its
ordering from the order in which its values appear in the query.
This only allows displaying a single value in each cell. If more than
one value corresponds to the same cell, an error is thrown. Merging of
values can be done in the query itself, if necessary. This may be
revisited in the future.
Author: Daniel Verité
Reviewed-by: Pavel Stehule, Dean Rasheed
Kevin Grittner [Fri, 8 Apr 2016 22:18:10 +0000 (17:18 -0500)]
Add snapshot_too_old to MSVC @contrib_excludes
The buildfarm showed failure for Windows MSVC builds due to this
omission. This might not be the only problem with the Makefile for
this feature, but hopefully this will get it past the immediate
problem.
Andres Freund [Fri, 8 Apr 2016 21:26:36 +0000 (14:26 -0700)]
Expose more out/readfuncs support functions.
Previously bcac23d exposed a subset of support functions, namely the
ones Kaigai found useful. In 20160304193704.elq773pyg5fyl3mi@alap3.anarazel.de I mentioned that
some functions needed to use the facility in an external project are
still missing.
To avoid having to add functions piecemeal, add all the functions which
are used to define READ_* and WRITE_* macros; users of the extensible
node functionality are likely to need these. Additionally expose
outDatum(), which doesn't have its own WRITE_ macro, as it needs
information from the embedding struct.
Stephen Frost [Fri, 8 Apr 2016 20:56:27 +0000 (16:56 -0400)]
Create default roles
This creates an initial set of default roles which administrators may
use to grant access to, historically, superuser-only functions. Using
these roles instead of granting superuser access reduces the number of
superuser roles required for a system. Documentation for each of the
default roles has been added to user-manag.sgml.
Bump catversion to 201604082, as we had a commit that bumped it to 201604081 and another that set it back to 201604071...
Kevin Grittner [Fri, 8 Apr 2016 19:36:30 +0000 (14:36 -0500)]
Add the "snapshot too old" feature
This feature is controlled by a new old_snapshot_threshold GUC. A
value of -1 disables the feature, and that is the default. The
value of 0 is just intended for testing. Above that, it is the
age in minutes a snapshot can reach before pruning and vacuum
are allowed to remove dead tuples which the snapshot would
otherwise protect. The xmin associated with a transaction ID does
still protect dead tuples. A connection which is using an "old"
snapshot does not get an error unless it accesses a page modified
recently enough that it might not be able to produce accurate
results.
This is similar to the Oracle feature, and we use the same SQLSTATE
and error message for compatibility.
Kevin Grittner [Fri, 8 Apr 2016 19:30:10 +0000 (14:30 -0500)]
Modify BufferGetPage() to prepare for "snapshot too old" feature
This patch is a no-op patch which is intended to reduce the chances
of failures of omission once the functional part of the "snapshot
too old" patch goes in. It adds parameters for snapshot, relation,
and an enum to specify whether the snapshot age check needs to be
done for the page at this point. This initial patch passes NULL
for the first two new parameters and BGP_NO_SNAPSHOT_TEST for the
third. The follow-on patch will change the places where the test
needs to be made.
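Based on this description, an unchanged call site would look roughly like the following (a sketch only; the interim signature is taken from this commit message, not the final API):

    #include "postgres.h"
    #include "storage/bufmgr.h"

    /* Passing NULL snapshot/relation and BGP_NO_SNAPSHOT_TEST preserves the
     * pre-patch behavior; the follow-on patch switches selected call sites
     * to request the age test. */
    static Page
    read_page_without_age_test(Buffer buf)
    {
        return BufferGetPage(buf, NULL, NULL, BGP_NO_SNAPSHOT_TEST);
    }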
Stephen Frost [Fri, 8 Apr 2016 19:00:44 +0000 (15:00 -0400)]
In dumpTable, re-instate the skipping logic
Pretty sure I removed this based on some incorrect thinking that it was
no longer possible to reach this point for a table which will not be
dumped, but that's clearly wrong.
Magnus Hagander [Fri, 8 Apr 2016 18:23:52 +0000 (20:23 +0200)]
Add authentication parameters compat_realm and upn_username for SSPI
These parameters are available for SSPI authentication only, to make
it possible for it to behave more like "normal gssapi" while
maintaining compatibility.
compat_realm is on by default, but can be turned off to make the
authentication use the full Kerberos realm instead of the NetBIOS name.
upn_username is off by default, and can be turned on to return the user's
Kerberos UPN rather than the SAM-compatible name (a user in Active
Directory can have both a legacy SAM-compatible username and a new
Kerberos one; normally they are the same, but not always).
Author: Christian Ullrich
Reviewed by: Robbie Harwood, Alvaro Herrera, me
Tom Lane [Fri, 8 Apr 2016 18:15:12 +0000 (14:15 -0400)]
Fix unstable regression test output.
Output order from the pg_indexes view might vary depending on the
phase of the moon, so add ORDER BY to ensure stable results of tests
added by commit 386e3d7609c49505e079c40c65919d99feb82505.
Per buildfarm.
OpenSSL has an unfortunate tendency to mix per-session state error
handling with per-thread error handling. This can cause problems when
programs that link to libpq with OpenSSL enabled have some other use of
OpenSSL; without care, one caller of OpenSSL may cause problems for the
other caller. Backend code might similarly be affected, for example
when a third party extension independently uses OpenSSL without taking
the appropriate precautions.
To fix, don't trust other users of OpenSSL to clear the per-thread error
queue. Instead, clear the entire per-thread queue ahead of certain I/O
operations when it appears that there might be trouble (these I/O
operations mostly need to call SSL_get_error() to check for success,
which relies on the queue being empty). This is slightly aggressive,
but it's pretty clear that the other callers have a very dubious claim
to ownership of the per-thread queue. Do this in both frontend and
backend code.
Finally, be more careful about clearing our own error queue, so as not
to cause these problems ourselves. It's possible that control previously
did not always reach SSLerrmessage(), where ERR_get_error() was supposed
to be called to clear the queue's earliest code. Make sure
ERR_get_error() is always called, so as to spare other users of OpenSSL
the possibility of similar problems caused by libpq (as opposed to
problems caused by a third-party OpenSSL library like PHP's OpenSSL
extension). Again, do this in both frontend and backend code.
See bug #12799 and https://bugs.php.net/bug.php?id=68276
Based on patches by Dave Vitek and Peter Eisentraut.
Tom Lane [Fri, 8 Apr 2016 17:51:54 +0000 (13:51 -0400)]
Add BSD authentication method.
Create a "bsd" auth method that works the same as "password" so far as
clients are concerned, but calls the BSD Authentication service to
check the password. This is currently only available on OpenBSD.
Robert Haas [Fri, 8 Apr 2016 17:44:50 +0000 (13:44 -0400)]
Add combine functions for various floating-point aggregates.
This allows parallel aggregation to use them. It may seem surprising
that we use float8_combine for both float4_accum and float8_accum
transition functions, but that's because those functions differ only
in the type of the non-transition-state argument.
Haribabu Kommi, reviewed by David Rowley and Tomas Vondra
As noticed by Tom Lane, changing the operation's number in commit bb140506df605fab58f48926ee1db1f80bdafb59 causes an on-disk format incompatibility.
Revert to the previous numbering, which is the reason a special array is
added to store the priorities of operations. This also reverts the sort
order of tsquery to the previous one.
Now indexes (but only B-tree for now) can contain "extra" column(s) which
don't participate in the index structure; they are just stored in leaf
tuples. This allows an index-only scan to use a single index instead
of two or more indexes.
Author: Anastasia Lubennikova with minor editorializing by me
Reviewers: David Rowley, Peter Geoghegan, Jeff Janes
Tom Lane [Fri, 8 Apr 2016 16:31:42 +0000 (12:31 -0400)]
Fix multiple bugs in tablespace symlink removal.
Don't try to examine S_ISLNK(st.st_mode) after a failed lstat().
It's undefined.
Also, if the lstat() reported ENOENT, we do not wish that to be a hard
error, but the code might nonetheless treat it as one (giving an entirely
misleading error message, too) depending on luck-of-the-draw as to what
S_ISLNK() returned.
Don't throw error for ENOENT from rmdir(), either. (We're not really
expecting ENOENT because we just stat'd the file successfully; but
if we're going to allow ENOENT in the symlink code path, surely the
directory code path should too.)
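A minimal sketch of the corrected pattern for the first two fixes (not the tablespace code itself; the helper name is hypothetical):

    #include <errno.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int
    remove_symlink_if_present(const char *path)
    {
        struct stat st;

        if (lstat(path, &st) < 0)
        {
            if (errno == ENOENT)
                return 0;           /* already gone: not a hard error */
            return -1;              /* report the real lstat() failure */
        }

        /* Only consult S_ISLNK() after a successful lstat(). */
        if (!S_ISLNK(st.st_mode))
            return -1;              /* wrong type of file; caller complains */

        if (unlink(path) < 0 && errno != ENOENT)
            return -1;

        return 0;
    }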
Generate an appropriate errcode for its-the-wrong-type-of-file complaints.
(ERRCODE_SYSTEM_ERROR doesn't seem appropriate, and failing to write
errcode() around it certainly doesn't work, and not writing an errcode
at all is not per project policy.)
Valgrind noticed the undefined S_ISLNK result; the other problems emerged
while reading the code in the area.
All of this appears to have been introduced in 8f15f74a44f68f9c.
Back-patch to 9.5 where that commit appeared.
Andres Freund [Fri, 8 Apr 2016 15:18:52 +0000 (08:18 -0700)]
Increase maximum number of clog buffers.
Benchmarking has shown that the current number of clog buffers limits
scalability. We've previously increased the number in 33aaa139, but
that's not sufficient with a large number of clients.
We've benchmarked the cost of increasing the limit by benchmarking
worst-case scenarios; testing showed that 128 buffers don't cause a
regression, even in contrived scenarios, whereas 256 do.
There are a number of more complex patches flying around to address
various clog scalability problems, but this is simple enough that we can
get it into 9.6; and is beneficial even after those patches have been
applied.
It is a bit unsatisfactory to increase this in small steps every few
releases, but a better solution seems to require a rewrite of slru.c;
not something done quickly.
Author: Amit Kapila and Andres Freund
Discussion: CAA4eK1+-=18HOrdqtLXqOMwZDbC_15WTyHiFruz7BvVArZPaAw@mail.gmail.com
Robert Haas [Fri, 8 Apr 2016 15:14:56 +0000 (11:14 -0400)]
Add a 'parallel_degree' reloption.
The code that estimates what parallel degree should be used for the
scan of a relation is currently rather stupid, so add a parallel_degree
reloption that can be used to override the planner's rather limited
judgement.
Julien Rouhaud, reviewed by David Rowley, James Sewell, Amit Kapila,
and me. Some further hacking by me.
The PAM_RHOST item is set to the remote IP address or host name and can
be used by PAM modules. A pg_hba.conf option is provided to choose
between IP address and resolved host name.
From: Grzegorz Sampolski <grzsmp@gmail.com>
Reviewed-by: Haribabu Kommi <kommi.haribabu@gmail.com>
Robert Haas [Fri, 8 Apr 2016 06:36:26 +0000 (02:36 -0400)]
Use quicksort, not replacement selection, for external sorting.
We still use replacement selection for the first run of the sort only
and only when the number of tuples is relatively small. Otherwise,
the first run, and subsequent runs in all cases, are produced using
quicksort. This tends to be faster except perhaps for very small
amounts of working memory.
Peter Geoghegan, reviewed by Tomas Vondra, Jeff Janes, Mithun Cy,
Greg Stark, and me.
Robert Haas [Fri, 8 Apr 2016 06:04:46 +0000 (02:04 -0400)]
Extend relations multiple blocks at a time to improve scalability.
Contention on the relation extension lock can become quite fierce when
multiple processes are inserting data into the same relation at the same
time at a high rate. Experimentation shows that extending the relation
multiple blocks at a time improves scalability.
Dilip Kumar, reviewed by Petr Jelinek, Amit Kapila, and me.
Simon Riggs [Fri, 8 Apr 2016 01:51:09 +0000 (02:51 +0100)]
Use Foreign Key relationships to infer multi-column join selectivity
In cases where joins use multiple columns, we currently assess each join
separately, causing gross mis-estimates for join cardinality.
This patch adds use of FK information for the first time into the
planner. When FKs are present and we have multi-column join information,
plan estimates will be drastically improved. Cases with multiple FKs
are handled, though partial matches are ignored currently.
Net effect is substantial performance improvements for joins in many
common cases. Additional planning time is isolated to cases that are
currently performing poorly, measured at 0.08 - 0.15 ms.
Please watch for planner performance regressions; circumstances seem
unlikely but the law of unintended consequences may apply somewhen.
Additional complex tests welcome to prove this before release.
Tests can be performed using SET enable_fkey_estimates = on | off
using scripts provided during Hackers discussions, message id: 552335D9.3090707@2ndquadrant.com
Authors: Tomas Vondra and David Rowley
Reviewed and tested by Simon Riggs, adding comments only
Stephen Frost [Thu, 7 Apr 2016 18:40:23 +0000 (14:40 -0400)]
GRANT rights to CURRENT_USER instead of adding roles
We shouldn't be adding roles during the regression tests, as that can
cause back-to-back installcheck runs to fail, and users running the
regression tests likely don't want those extra roles.
Tom Lane [Thu, 7 Apr 2016 17:11:30 +0000 (13:11 -0400)]
Refactor join_is_removable() to separate out distinctness-proving logic.
Extracted from pending unique-join patch, since this is a rather large
delta but it's simply moving code out into separately-accessible
subroutines.
I (tgl) did choose to add a bit more logic to rel_supports_distinctness,
so that it verifies that there's at least one potentially usable unique
index rather than just checking indexlist != NIL. Otherwise there's
no functional change here.
Kevin Grittner [Thu, 7 Apr 2016 16:12:35 +0000 (11:12 -0500)]
Detect SSI conflicts before reporting constraint violations
While prior to this patch the user-visible effect on the database
of any set of successfully committed serializable transactions was
always consistent with some one-at-a-time order of execution of
those transactions, the presence of declarative constraints could
allow errors to occur which were not possible in any such ordering,
and developers had no good workarounds to prevent user-facing
errors where they were not necessary or desired. This patch adds
a check for serialization failure ahead of duplicate key checking
so that if a developer explicitly (redundantly) checks for the
pre-existing value they will get the desired serialization failure
where the problem is caused by a concurrent serializable
transaction; otherwise they will get a duplicate key error.
While it would be better if the reads performed by the constraints
could count as part of the work of the transaction for
serialization failure checking, and we will hopefully get there
some day, this patch allows a clean and reliable way for developers
to work around the issue. In many cases existing code will already
be doing the right thing for this to "just work".
Author: Thomas Munro, with minor editing of docs by me
Reviewed-by: Marko Tiikkaja, Kevin Grittner
This patch introduces a new text search operator (<-> or <DISTANCE>) into tsquery.
The on-disk and binary in/out formats of tsquery are backward compatible.
It has two side effects:
- the sort order for tsquery changes, so users who have a btree index over
tsquery should reindex it
- there are fewer parentheses in tsquery output, so tsquery becomes more
readable
Authors: Teodor Sigaev, Oleg Bartunov, Dmitry Ivanov
Reviewers: Alexander Korotkov, Artur Zakirov
Commit c22650cd6450854e1a75064b698d7dcbb4a8821a sparked a discussion
about diverse interpretations of "token user" in error messages. Expel
old and new specimens of that phrase by making all GetTokenInformation()
callers report errors the way GetTokenUser() has been reporting them.
These error conditions almost can't happen, so users are unlikely to
observe this change.
Stephen Frost [Thu, 7 Apr 2016 01:45:32 +0000 (21:45 -0400)]
Use GRANT system to manage access to sensitive functions
Now that pg_dump will properly dump out any ACL changes made to
functions which exist in pg_catalog, switch to using the GRANT system
to manage access to those functions.
This means removing 'if (!superuser()) ereport()' checks from the
functions themselves and then REVOKEing EXECUTE right from 'public' for
these functions in system_views.sql.
Stephen Frost [Thu, 7 Apr 2016 01:45:32 +0000 (21:45 -0400)]
In pg_dump, include pg_catalog and extension ACLs, if changed
Now that all of the infrastructure exists, add in the ability to
dump out the ACLs of the objects inside of pg_catalog or the ACLs
for objects which are members of extensions, but only if they have
been changed from their original values.
The original values are tracked in pg_init_privs. When pg_dump'ing
9.6-and-above databases, we will dump out the ACLs for all objects
in pg_catalog and the ACLs for all extension members, where the ACL
has been changed from the original value which was set during either
initdb or CREATE EXTENSION.
This should not change dumps against pre-9.6 databases.
Stephen Frost [Thu, 7 Apr 2016 01:45:32 +0000 (21:45 -0400)]
In pg_dump, split "dump" into "dump" and "dump_contains"
Historically, the "dump" component of the namespace has been used
to decide if the objects inside of the namespace should be dumped
also. Given that "dump" is now a bitmask and may be partial, and
we may want to dump out all components of the namespace object but
only some of the components of objects contained in the namespace,
create a "dump_contains" bitmask which will represent what components
of the objects inside of a namespace should be dumped out.
No behavior change here, but in preparation for a change where we
will dump out just the ACLs of objects in pg_catalog, but we might
not dump out the ACL of the pg_catalog namespace itself (for instance,
when it hasn't been changed from the value set at initdb time).
Stephen Frost [Thu, 7 Apr 2016 01:45:32 +0000 (21:45 -0400)]
In pg_dump, use a bitmap to represent what to include
pg_dump has historically used a simple boolean 'dump' value to indicate
if a given object should be included in the dump or not. Instead, use
a bitmap which breaks down the components of an object into their
distinct pieces and use that bitmap to only include the components
requested.
This does not include any behavioral change, but is in preparation for
the change to dump out just ACLs for objects in pg_catalog.
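An illustrative sketch of the bitmap idea (component names and the struct are hypothetical, not necessarily pg_dump's):

    #include <stdbool.h>

    #define DUMP_COMPONENT_NONE        0x0000
    #define DUMP_COMPONENT_DEFINITION  0x0001
    #define DUMP_COMPONENT_DATA        0x0002
    #define DUMP_COMPONENT_COMMENT     0x0004
    #define DUMP_COMPONENT_ACL         0x0008
    #define DUMP_COMPONENT_ALL         0xFFFF

    typedef struct
    {
        unsigned int dump;      /* bitmask of components to include */
    } DumpableObjectSketch;

    /* For a pg_catalog object with a changed ACL, only the ACL component is
     * requested instead of an all-or-nothing boolean. */
    static bool
    acl_requested(const DumpableObjectSketch *obj)
    {
        return (obj->dump & DUMP_COMPONENT_ACL) != 0;
    }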
Stephen Frost [Thu, 7 Apr 2016 01:45:32 +0000 (21:45 -0400)]
Add new catalog called pg_init_privs
This new catalog holds the privileges which the system was
initialized with at initdb time, along with any permissions set
by extensions at CREATE EXTENSION time. This allows pg_dump
(and any other similar use-cases) to detect when the privileges
set on initdb-created or extension-created objects have been
changed from what they were set to at initdb/extension-creation
time and handle those changes appropriately.
Peter Eisentraut [Thu, 25 Feb 2016 23:56:37 +0000 (18:56 -0500)]
pg_dump: Add table qualifications to some tags
Some object types have names that are only unique for one table. But
for those we generally didn't put the table name into the dump TOC tag.
So it was impossible to identify these objects if the same name was used
for multiple tables. This affects policies, column defaults,
constraints, triggers, and rules.
Fix by adding the table name to the TOC tag, so that it now reads
"$schema $table $object".
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>