Tom Lane [Tue, 27 Sep 2016 15:38:33 +0000 (11:38 -0400)]
Improve contrib/cube's handling of zero-D cubes, infinities, and NaNs.
It's always been possible to create a zero-dimensional cube by converting
from a zero-length float8 array, but cube_in failed to accept the '()'
representation that cube_out produced for that case, resulting in a
dump/reload hazard. Make it accept the case. Also fix a couple of
other places that didn't behave sanely for zero-dimensional cubes:
cube_size would produce 1.0 when surely the answer should be 0.0,
and g_cube_distance risked a divide-by-zero failure.
Likewise, it's always been possible to create cubes containing float8
infinity or NaN coordinate values, but cube_in couldn't parse such input,
and cube_out produced platform-dependent spellings of the values. Convert
them to use float8in_internal and float8out_internal so that the behavior
will be the same as for float8, as we recently did for the core geometric
types (cf commit 50861cd68). As in that commit, I don't pretend that this
patch fixes all insane corner-case behaviors that may exist for NaNs, but
it's a step forward.
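For illustration, a few hedged examples of inputs that now behave sanely
(assuming contrib/cube is installed; exact output spellings not shown):
    SELECT '()'::cube;               -- zero-dimensional cube, previously rejected by cube_in
    SELECT cube_size('()'::cube);    -- now 0.0 rather than 1.0
    SELECT '(1, Infinity)'::cube;    -- infinity coordinates now parse the same way float8 does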
(This change allows removal of the separate cube_1.out and cube_3.out
expected-files, as the platform dependency that previously required them
is now gone: an underflowing coordinate value will now produce an error
not plus or minus zero.)
Make errors from cube_in follow project conventions as to spelling
("invalid input syntax for cube" not "bad cube representation")
and errcode (INVALID_TEXT_REPRESENTATION not SYNTAX_ERROR).
Also a few marginal code cleanups and comment improvements.
<sys/select.h> is required by POSIX.1-2001 to get the prototype of
select(2), but nearly no systems enforce that because older standards
let you get away with including some other headers. Recent OpenBSD
hacking has removed that frail touch of friendliness, however, which
broke some compiles; fix all the way back to 9.1 by adding the required
standard header. Only vacuumdb.c was reported to fail, but it seems easier
to fix the whole lot in one fell swoop.
Tom Lane [Tue, 27 Sep 2016 00:23:50 +0000 (20:23 -0400)]
Fix newly-introduced issues in pgbench.
The result of FD_ISSET() doesn't necessarily fit in a bool, though
assigning it to one might accidentally work depending on platform and which
socket FD number is being inquired of. Rewrite to test it with if(),
rather than making any specific assumption about the result width,
to match the way every other such call in PG is written.
Don't break out of the input_mask-filling loop after finding the first
client that we're waiting for results from; that would leave later waiting
clients' sockets out of the select() mask, breaking management of multiple
concurrent clients.
Also, if we choose not to call select(), be sure to clear out any bits
the mask-filling loop might have set, so that we don't accidentally call
doCustom for clients we don't know have input. Doing so would likely
be harmless, but it's a waste of cycles and doesn't seem to be intended.
Make this_usec wide enough. (Yeah, the value would usually fit in an
int, but then why are we using int64 everywhere else?)
Minor cosmetic adjustments, mostly comment improvements.
Problems introduced by commit 12788ae49. The first issue was discovered
by buildfarm testing, the others by code review.
Tom Lane [Mon, 26 Sep 2016 18:52:44 +0000 (14:52 -0400)]
Replace the built-in GIN array opclasses with a single polymorphic opclass.
We had thirty different GIN array opclasses sharing the same operators and
support functions. That still didn't cover all the built-in types, nor
did it cover arrays of extension-added types. What we want is a single
polymorphic opclass for "anyarray". There were two missing features needed
to make this possible:
1. We have to be able to declare the index storage type as ANYELEMENT
when the opclass is declared to index ANYARRAY. This just takes a few
more lines in index_create(). Although this currently seems of use only
for GIN, there's no reason to make index_create() restrict it to that.
2. We have to be able to identify the proper GIN compare function for
the index storage type. This patch proceeds by making the compare function
optional in GIN opclass definitions, and specifying that the default btree
comparison function for the index storage type will be looked up when the
opclass omits it. Again, that seems pretty generically useful.
Since the comparison function lookup is done in initGinState(), making
use of the second feature adds an additional cache lookup to GIN index
access setup. It seems unlikely that that would be very noticeable given
the other costs involved, but maybe at some point we should consider
making GinState data persist longer than it now does --- we could keep it
in the index relcache entry, perhaps.
Rather fortuitously, we don't seem to need to do anything to get this
change to play nice with dump/reload or pg_upgrade scenarios: the new
opclass definition is automatically selected to replace existing index
definitions, and the on-disk data remains compatible. Also, if a user has
created a custom opclass definition for a non-builtin type, this doesn't
break that, since CREATE INDEX will prefer an exact match to opcintype
over a match to ANYARRAY. However, if there's anyone out there with
handwritten DDL that explicitly specifies _bool_ops or one of the other
replaced opclass names, they'll need to adjust that.
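As a hedged illustration (hypothetical table and enum type), any array column
whose element type has a default btree opclass can now use the single opclass:
    CREATE TYPE color AS ENUM ('red', 'green', 'blue');
    CREATE TABLE paints (name text, colors color[]);
    CREATE INDEX paints_colors_idx ON paints USING gin (colors);
    -- containment queries such as this one can use the index:
    SELECT * FROM paints WHERE colors @> ARRAY['red'::color];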
Refactor script execution state machine in pgbench.
The doCustom() function had grown into quite a mess. Rewrite it, in a more
explicit state machine style, for readability.
This also fixes one minor bug: if a script consisted entirely of meta
commands, doCustom() never returned to the caller, so progress reports
with the -P option were not printed. I don't want to backpatch this
refactoring, and the bug is quite insignificant, so only commit this to
master, and leave the bug unfixed in back-branches.
Tom Lane [Sun, 25 Sep 2016 19:40:57 +0000 (15:40 -0400)]
Refer to OS X as "macOS", except for the port name which is still "darwin".
We weren't terribly consistent about whether to call Apple's OS "OS X"
or "Mac OS X", and the former is probably confusing to people who aren't
Apple users. Now that Apple has rebranded it "macOS", follow their lead
to establish a consistent naming pattern. Also, avoid the use of the
ancient project name "Darwin", except as the port code name which does not
seem desirable to change. (In short, this patch touches documentation and
comments, but no actual code.)
I didn't touch contrib/start-scripts/osx/, either. I suspect those are
obsolete and due for a rewrite, anyway.
I dithered about whether to apply this edit to old release notes, but
those were responsible for quite a lot of the inconsistencies, so I ended
up changing them too. Anyway, Apple's being ahistorical about this,
so why shouldn't we be?
Tom Lane [Fri, 23 Sep 2016 19:50:00 +0000 (15:50 -0400)]
Install TAP test infrastructure so it's available for extension testing.
When configured with --enable-tap-tests, "make install" will now install
the Perl support files for TAP testing where PGXS will find them.
This allows extensions to rely on $(prove_check) even when being built
out-of-tree. Back-patch to 9.4 where we first started to support TAP
testing, to reduce the number of cases extension makefiles need to
consider.
Tom Lane [Fri, 23 Sep 2016 18:22:07 +0000 (14:22 -0400)]
Doc: fix examples of # operators so they actually work.
These worked as-is until around 7.0, but fail in newer versions because
there are more operators named "#". Besides, it's a bit inconsistent that
only two of the examples on this page lack type names on their constants.
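For instance, spelled with explicit type names the operators resolve cleanly
(a sketch, not the literal documentation text):
    SELECT box '((0,0),(2,2))' # box '((1,1),(3,3))';    -- intersection of two boxes
    SELECT lseg '[(0,0),(2,2)]' # lseg '[(2,0),(0,2)]';  -- intersection point of two segments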
Tom Lane [Fri, 23 Sep 2016 17:49:26 +0000 (13:49 -0400)]
Fix incorrect logic for excluding range constructor functions in pg_dump.
Faulty AND/OR nesting in the WHERE clause of getFuncs' SQL query led to
dumping range constructor functions if they are part of an extension
and we're in binary-upgrade mode. Actually, we don't want to dump them
separately even then, since CREATE TYPE AS RANGE will create the range's
constructor functions regardless. Per report from Andrew Dunstan.
It looks like this mistake was introduced by me, in commit b985d4877, in
perhaps-overzealous refactoring to reduce code duplication. I'm suitably
embarrassed.
Tom Lane [Fri, 23 Sep 2016 14:09:52 +0000 (10:09 -0400)]
Don't trust CreateFileMapping() to clear the error code on success.
We must test GetLastError() even when CreateFileMapping() returns a
non-null handle. If that value were left over from some previous system
call, we might be fooled into thinking the segment already existed.
Experimentation on Windows 7 suggests that CreateFileMapping() clears
the error code on success, but it is not documented to do so, so let's
not rely on that happening in all Windows releases.
Tom Lane [Fri, 23 Sep 2016 13:54:11 +0000 (09:54 -0400)]
Avoid using PostmasterRandom() for DSM control segment ID.
Commits 470d886c3 et al intended to fix the problem that the postmaster
selected the same "random" DSM control segment ID on every start. But
using PostmasterRandom() for that destroys the intended property that the
delay between random_start_time and random_stop_time will be unpredictable.
(Said delay is probably already more predictable than we could wish, but
that doesn't mean that reducing it by a couple orders of magnitude is OK.)
Revert the previous patch and add a comment warning against misuse of
PostmasterRandom. Fix the original problem by calling srandom() early in
PostmasterMain, using a low-security seed that will later be overwritten
by PostmasterRandom.
Tom Lane [Thu, 22 Sep 2016 18:30:33 +0000 (14:30 -0400)]
Remove nearly-unused SizeOfIptrData macro.
Past refactorings have removed all but one reference to SizeOfIptrData
(and that one place was in a pretty noncritical spot). Since nobody's
complained, it seems probable that there are no supported compilers
that don't think sizeof(ItemPointerData) is 6. If there are, we're
wasting MAXALIGN per heap tuple anyway, so it's rather silly to worry
about whether we can shave space in places like WAL records.
Tom Lane [Thu, 22 Sep 2016 15:34:44 +0000 (11:34 -0400)]
Be sure to rewind the tuplestore read pointer in non-leader CTEScan nodes.
ExecInitCteScan supposed that it didn't have to do anything to the extra
tuplestore read pointer it gets from tuplestore_alloc_read_pointer.
However, it needs this read pointer to be positioned at the start of the
tuplestore, while tuplestore_alloc_read_pointer is actually defined as
cloning the current position of read pointer 0. In normal situations
that accidentally works because we initialize the whole plan tree at once,
before anything gets read. But it fails in an EvalPlanQual recheck, as
illustrated in bug #14328 from Dima Pavlov. To fix, just forcibly rewind
the pointer after tuplestore_alloc_read_pointer. The cost of doing so is
negligible unless the tuplestore is already in TSS_READFILE state, which
wouldn't happen in normal cases. We could consider altering tuplestore's
API to make that case cheaper, but that would make for a more invasive
back-patch and it doesn't seem worth it.
This has been broken probably for as long as we've had CTEs, so back-patch
to all supported branches.
Add tests for consistent support of connection strings in frontend
programs as well as proper handling of unusual characters in database
and user names. These tests were developed in response to the issues of
CVE-2016-5424.
To allow testing of names with spaces, change the pg_regress
command-line options --create-role and --dbname to split their arguments
by comma only, not space or comma as before. Only commas were actually
used by existing callers.
When waiting is selected for the promote action, look into pg_control
until the state changes, then use PQping-based waiting until the
server is reachable.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Peter Eisentraut [Tue, 26 Jul 2016 16:08:49 +0000 (12:08 -0400)]
Delay updating control file to "in production"
Move the updating of the control file to "in production" status until
the point where WAL writes are allowed. Before, there could be a
significant gap between the control file update and write transactions
actually being allowed. This makes it more reliable to use the control
status to verify the end of a promotion.
Peter Eisentraut [Tue, 26 Jul 2016 15:23:43 +0000 (11:23 -0400)]
pg_ctl: Detect current standby state from pg_control
pg_ctl used to determine whether a server was in standby mode by looking
for a recovery.conf file. With this change, it instead looks into
pg_control, which is potentially more accurate. There are also
occasional discussions about removing recovery.conf, so this removes one
dependency.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Peter Eisentraut [Tue, 26 Jul 2016 15:39:43 +0000 (11:39 -0400)]
Make command_like output more compact
Consistently print the test name, not the full command, which can be
quite lengthy and include temporary directory names and other
distracting details.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Print test parameters like "foo: 123", and results like "foo = 123".
The way "latency average" was printed was differently if it was calculated
from the overall run time or was measured on a per-transaction basis.
Also, the per-script weight is a test parameter, rather than a result, so
use the "weight: %f" style for that.
Backpatch to 9.6, since the inconsistency on "latency average" was
introduced there.
Fix pgbench's calculation of average latency, when -T is not used.
If the test duration was given in # of transactions (-t or no option),
rather than as a duration (-T), the latency average was always printed as 0.
It has been broken ever since the display of latency average was added,
in 9.4.
Peter Eisentraut [Tue, 20 Sep 2016 16:00:00 +0000 (12:00 -0400)]
Re-add translation markers that were lost
When win32security.c was moved from src/backend/port/win32/security.c,
the message writing function was changed from write_stderr to log_error,
but nls.mk was not updated. We could add log_error to GETTEXT_TRIGGERS,
but it's also used in src/common/exec.c in a different way and that
would create some confusion or a larger patch. For now, just put an
explicit translation marker onto the strings that were previously
translated.
Robert Haas [Tue, 20 Sep 2016 16:04:41 +0000 (12:04 -0400)]
Retry DSM control segment creation if Windows indicates access denied.
Otherwise, attempts to run multiple postmasters running on the same
machine may fail, because Windows sometimes returns ERROR_ACCESS_DENIED
rather than ERROR_ALREADY_EXISTS when there is an existing segment.
Hitting this bug is much more likely because of another defect not
fixed by this patch, namely that dsm_postmaster_startup() uses
random() which returns the same value every time. But that's not
a reason not to fix this.
Kyotaro Horiguchi and Amit Kapila, reviewed by Michael Paquier
Fix outdated comments: the GiST search queue is not an RBTree anymore.
The GiST search queue has been implemented as a pairing heap rather than a
Red-Black Tree since 9.5 (commit e7032610). I neglected these comments
in that commit.
Fix latency calculation when there are \sleep commands in the script.
We can't use txn_scheduled to hold the sleep-until time for \sleep, because
that interferes with calculation of the latency of the transaction as a whole.
Fix ecpg -? option on Windows, add -V alias for --version.
This makes the -? and -V options work consistently with other binaries.
--help and --version are now only recognized as the first option, i.e.
"ecpg --foobar --help" no longer prints the help, but that's consistent
with most of our other binaries, too.
Tom Lane [Fri, 16 Sep 2016 13:36:19 +0000 (09:36 -0400)]
Add debugging aid "bmsToString(Bitmapset *bms)".
This function has no direct callers at present, but it's convenient for
manual use in a debugger, rather than having to inspect memory and do
bit-counting in your head.
In passing, get rid of useless outBitmapset() wrapper around
_outBitmapset(); let's just export the function that does the work.
Likewise for outToken().
LibreSSL defines OPENSSL_VERSION_NUMBER to claim that it is version 2.0.0,
but it doesn't have the functions added in OpenSSL 1.1.0. Add autoconf
checks for the individual functions we need, and stop relying on
OPENSSL_VERSION_NUMBER.
Backport to 9.5 and 9.6, like the patch that broke this. In the
back-branches, there are still a few OPENSSL_VERSION_NUMBER checks left,
to check for OpenSSL 0.9.8 or 0.9.7. I left them as they were - LibreSSL
has all those functions, so they work as intended.
Tom Lane [Thu, 15 Sep 2016 15:23:25 +0000 (11:23 -0400)]
Make min_parallel_relation_size's default value platform-independent.
The documentation states that the default value is 8MB, but this was
only true at BLCKSZ = 8kB, because the default was hard-coded as 1024.
Make the code match the docs by computing the default as 8MB/BLCKSZ.
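For instance (a sketch), the reported default should now match the docs on any build:
    SHOW min_parallel_relation_size;    -- 8MB, regardless of BLCKSZ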
Oversight in commit 75be66464, noted pursuant to a gripe from Peter E.
Robert Haas [Thu, 15 Sep 2016 13:22:52 +0000 (09:22 -0400)]
pg_buffercache: Allow huge allocations.
Otherwise, users who have configured shared_buffers >= 256GB won't
be able to use this module. There probably aren't many of those, but
it doesn't hurt anything to fix it so that it works.
Backpatch to 9.4, where MemoryContextAllocHuge was introduced. The
same problem exists in older branches, but there's no easy way to
fix it there.
- Check for SSL_new in configure, now that SSL_library_init is a macro.
- Do not access struct members directly. This includes some new code in
pgcrypto, to use the resource owner mechanism to ensure that we don't
leak OpenSSL handles, now that we can't embed them in other structs
anymore.
- RAND_SSLeay() -> RAND_OpenSSL()
Changes that were needed to silence deprecation warnings, but were not
strictly necessary:
- RAND_pseudo_bytes() -> RAND_bytes().
- SSL_library_init() and OPENSSL_config() -> OPENSSL_init_ssl()
- ASN1_STRING_data() -> ASN1_STRING_get0_data()
- DH_generate_parameters() -> DH_generate_parameters_ex()
- Locking callbacks are not needed with OpenSSL 1.1.0 anymore. (Good
riddance!)
Also replace references to SSLEAY_VERSION_NUMBER with OPENSSL_VERSION_NUMBER,
for the sake of consistency. OPENSSL_VERSION_NUMBER has existed since time
immemorial.
Fix SSL test suite to work with OpenSSL 1.1.0. CA certificates must have
the "CA:true" basic constraint extension now, or OpenSSL will refuse them.
Regenerate the test certificates with that. The "openssl" binary, used to
generate the certificates, is also now more picky, and throws an error
if an X509 extension is specified in "req_extensions", but that section
is empty.
Backpatch to all supported branches, per popular demand. In back-branches,
we still support OpenSSL 0.9.7 and above. OpenSSL 0.9.6 should still work
too, but I didn't test it. In master, we only support 0.9.8 and above.
Patch by Andreas Karlsson, with additional changes by me.
The money type input function did not have any overflow checks at all.
There were some regression tests that purported to check for overflow,
but they actually checked for the overflow behavior of the int8 type
before casting to money. Remove those unnecessary checks and add some
that actually check the money input function.
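A hedged example of input that the money input function now rejects on its own
(exact error wording omitted):
    SELECT '123456789012345678.00'::money;     -- overflow detected by the input function
    SELECT '-123456789012345678.00'::money;    -- likewise at the negative extreme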
Tom Lane [Tue, 13 Sep 2016 21:17:48 +0000 (17:17 -0400)]
Be pickier about converting between Name and Datum.
We were misapplying NameGetDatum() to plain C strings in some places.
This worked, because it was just a pointer cast anyway, but it's a type
cheat in some sense. Use CStringGetDatum instead, and modify the
NameGetDatum macro so it won't compile if applied to something that's
not a pointer to NameData. This should result in no changes to
generated code, but it is logically cleaner.
Tom Lane [Tue, 13 Sep 2016 17:54:24 +0000 (13:54 -0400)]
Improve parser's and planner's handling of set-returning functions.
Teach the parser to reject misplaced set-returning functions during parse
analysis using p_expr_kind, in much the same way as we do for aggregates
and window functions (cf commit eaccfded9). While this isn't complete
(it misses nesting-based restrictions), it's much better than the previous
error reporting for such cases, and it allows elimination of assorted
ad-hoc expression_returns_set() error checks. We could add nesting checks
later if it seems important to catch all cases at parse time.
There is one case for which the parser will now throw an error although
previous versions allowed it: SRFs in the tlist of an UPDATE. That never
behaved sensibly (since it's ill-defined which generated row should be
used to perform the update) and it's hard to see why it should not be
treated as an error. It's a release-note-worthy change though.
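For instance (hypothetical table and column names), something like this is now
rejected during parse analysis rather than being silently accepted:
    UPDATE tab SET x = generate_series(1, 10);    -- SRF in an UPDATE target list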
Also, add a new Query field hasTargetSRFs reporting whether there are
any SRFs in the targetlist (including GROUP BY/ORDER BY expressions).
The parser can now set that basically for free during parse analysis,
and we can use it in a number of places to avoid expression_returns_set
searches. (There will be more such checks soon.) In some places, this
allows decontorting the logic since it's no longer expensive to check for
SRFs in the tlist --- so I made the checks parallel to the handling of
hasAggs/hasWindowFuncs wherever it seemed appropriate.
catversion bump because adding a Query field changes stored rules.
Robert Haas [Tue, 13 Sep 2016 13:21:35 +0000 (09:21 -0400)]
Have heapam.h include lockdefs.h rather than lock.h.
lockdefs.h was only split from lock.h relatively recently, and
represents a minimal subset of the old lock.h. heapam.h only needs
that smaller subset, so adjust it to include only that. This requires
some corresponding adjustments elsewhere.
Andres Freund [Tue, 13 Sep 2016 02:01:16 +0000 (19:01 -0700)]
Remove user_relns() SRF from regression tests.
The output of the function changes whenever previous (or, as in this
case, concurrent) tests leave a table in place. That causes unneeded
churn.
This should fix failures due to the tests added in bfe16d1a5, like on
lapwing, caused by the tsrf test running concurrently with misc. Those
could also have been addressed by using temp tables, but that test has
annoyed me before.
Andres Freund [Thu, 4 Aug 2016 01:29:42 +0000 (18:29 -0700)]
Add more tests for targetlist SRFs.
We're considering changing the implementation of targetlist SRFs
considerably, and a lot of the current behaviour isn't tested in our
regression tests. Thus it seems useful to increase coverage to avoid
accidental behaviour changes.
It's quite possible that some of the plans here will require adjustments
to avoid falling afoul of ordering differences (e.g. hashed group
bys). The buildfarm will tell us.
Reviewed-By: Tom Lane
Discussion: <20160827214829.zo2dfb5jaikii5nw@alap3.anarazel.de>
Tom Lane [Mon, 12 Sep 2016 23:19:24 +0000 (19:19 -0400)]
Docs: assorted minor cleanups.
Standardize on "user_name" for a field name in related examples in
ddl.sgml; before we had variously "user_name", "username", and "user".
The last is flat wrong because it conflicts with a reserved word.
Be consistent about entry capitalization in a table in func.sgml.
Kevin Grittner [Mon, 12 Sep 2016 14:22:57 +0000 (09:22 -0500)]
Fix recent commit for tab-completion of database template.
The details of commit 52803098ab26051c0c9802f3c19edffc90de8843 were
based on a misunderstanding of the role inheritance allowing use
of a database for a template. While the CREATEDB privilege is not
inherited, the privileges of database ownership are.
Pointed out by Vitaly Burovoy and Tom Lane.
Fix provided by Tom Lane, reviewed by Vitaly Burovoy.
Simon Riggs [Mon, 12 Sep 2016 07:57:14 +0000 (08:57 +0100)]
Identify walsenders in pg_stat_activity
Following commit 8299471c37fff0b, walsender procs are now visible in pg_stat_activity.
Set query to 'walsender' for walsender procs to allow them to be identified.
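A hedged example of identifying them after this change:
    SELECT pid, backend_start, query
      FROM pg_stat_activity
     WHERE query = 'walsender';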
Tom Lane [Sun, 11 Sep 2016 18:15:07 +0000 (14:15 -0400)]
Allow CREATE EXTENSION to follow extension update paths.
Previously, to update an extension you had to produce both a version-update
script and a new base installation script. It's become more and more
obvious that that's tedious, duplicative, and error-prone. This patch
attempts to improve matters by allowing the new base installation script
to be omitted. CREATE EXTENSION will install a requested version if it
can find a base script and a chain of update scripts that will get there.
As in the existing update logic, shorter chains are preferred if there's
more than one possibility, with an arbitrary tie-break rule for chains
of equal length.
Also adjust the pg_available_extension_versions view to show such versions
as installable.
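As a hedged illustration (hypothetical extension name and versions), an
extension that ships only a base script and an update script can now be
installed at the newer version directly:
    -- script files provided: foo--1.0.sql and foo--1.0--2.0.sql, no foo--2.0.sql
    CREATE EXTENSION foo VERSION '2.0';
    -- and the view now lists 2.0 as installable:
    SELECT name, version, installed
      FROM pg_available_extension_versions
     WHERE name = 'foo';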
While at it, refactor the code so that CASCADE processing works for
extensions requested during ApplyExtensionUpdates(). Without this,
addition of a new requirement in an updated extension would require
creating a new base script, even if there was no other reason to do that.
(It would be easy at this point to add a CASCADE option to ALTER EXTENSION
UPDATE, to allow the same thing to happen during a manually-commanded
version update, but I have not done that here.)
Tom Lane [Sun, 11 Sep 2016 16:46:55 +0000 (12:46 -0400)]
Fix and simplify MSVC build's handling of xml/xslt/uuid dependencies.
Solution.pm mistakenly believed that the xml option requires the xslt
option, when actually the dependency is the other way around; and it
believed that libxml requires libiconv, which is not necessarily so,
so we shouldn't enforce it here. Fix the option cross-checking logic.
Also, since AddProject already takes care of adding libxml and libxslt
include and library dependencies to every project, there's no need
for the custom code that did that in mkvcbuild. While at it, let's
handle the similar dependencies for uuid in a similar fashion.
Given the lack of field complaints about these overly strict build
dependency requirements, there seems no need for a back-patch.
Implement binary heap replace-top operation in a smarter way.
In external sort's merge phase, we maintain a binary heap holding the next
tuple from each input tape. On each step, the topmost tuple is returned,
and replaced with the next tuple from the same tape. We were doing the
replacement by deleting the top node in one operation, and inserting the
next tuple after that. However, you can do a "replace-top" operation more
efficiently, in one "sift-up". A deletion will always walk the heap from
top to bottom, but in a replacement, we can stop as soon as we find the
right place for the new tuple. This is particularly helpful if the tapes
are not in completely random order, so that the next tuple from a tape is
likely to land near the top of the heap.
Peter Geoghegan, reviewed by Claudio Freire, with some editing by me.
Tom Lane [Sat, 10 Sep 2016 21:54:23 +0000 (17:54 -0400)]
Improve unreachability recognition in elog() macro.
Some experimentation with an older version of gcc showed that it is able
to determine whether "if (elevel_ >= ERROR)" is compile-time constant
if elevel_ is declared "const", but otherwise not so much. We had
accounted for that in ereport() but were too miserly with braces to
make it so in elog(). I don't know how many currently-interesting
compilers have the same quirk, but in case it will save some code
space, let's make sure that elog() is on the same footing as ereport()
for this purpose.
Back-patch to 9.3 where we introduced pg_unreachable() calls into
elog/ereport.
Tom Lane [Sat, 10 Sep 2016 17:49:04 +0000 (13:49 -0400)]
Fix miserable coding in pg_stat_get_activity().
Commit dd1a3bccc replaced a test on whether a subroutine returned a
null pointer with a test on whether &pointer->backendStatus was null.
This accidentally failed to fail, at least on common compilers, because
backendStatus is the first field in the struct; but it was surely trouble
waiting to happen. Commit f91feba87 then messed things up further,
changing the logic to
    local_beentry = pgstat_fetch_stat_local_beentry(curr_backend);
    if (!local_beentry)
        continue;
    beentry = &local_beentry->backendStatus;
    if (!beentry)
    {
where the second "if" is now dead code, so that the intended behavior of
printing a row with "<backend information not available>" cannot occur.
I suspect this is all moot because pgstat_fetch_stat_local_beentry
will never actually return null in this function's usage, but it's still
very poor coding. Repair back to 9.4 where the original problem was
introduced.
Tom Lane [Fri, 9 Sep 2016 23:00:59 +0000 (19:00 -0400)]
Rewrite PageIndexDeleteNoCompact into a form that only deletes 1 tuple.
The full generality of deleting an arbitrary number of tuples is no longer
needed, so let's save some code and cycles by replacing the original coding
with an implementation based on PageIndexTupleDelete.
We can always get back the old code from git if we need it again for new
callers (though I don't care for its willingness to mess with line pointers
it wasn't told to mess with).
Tom Lane [Fri, 9 Sep 2016 22:17:07 +0000 (18:17 -0400)]
Convert PageAddItem into a macro to save a few cycles.
Nowadays this is just a backwards-compatibility wrapper around
PageAddItemExtended, so let's avoid the extra level of function call.
In addition, because pretty much all callers are passing constants
for the two bool arguments, compilers will be able to constant-fold
the conversion to a flags bitmask.
Tom Lane [Fri, 9 Sep 2016 22:02:24 +0000 (18:02 -0400)]
Invent PageIndexTupleOverwrite, and teach BRIN and GiST to use it.
PageIndexTupleOverwrite performs approximately the same function as
PageIndexTupleDelete (or PageIndexDeleteNoCompact) followed by PageAddItem
targeting the same item pointer offset. But in the case where the new
tuple is the same size as the old, it avoids shuffling other data around on
the page, because the new tuple is placed where the old one was rather than
being appended to the end of the page. This has been shown to provide a
substantial speedup for some GiST use-cases.
Also, this change allows some API simplifications: we can get rid of
the rather klugy and error-prone PAI_ALLOW_FAR_OFFSET flag for
PageAddItemExtended, since that was used only to cover a corner case
for BRIN that's better expressed by using PageIndexTupleOverwrite.
Note that this patch causes a rather subtle WAL incompatibility: the
physical page content change represented by certain WAL records is now
different than it was before, because while the tuples have the same
itempointer line numbers, the tuples themselves are in different places.
I have not bumped the WAL version number because I think it doesn't matter
unless you are trying to do bitwise comparisons of original and replayed
pages, and in any case we're early in a devel cycle and there will probably
be more WAL changes before v10 gets out the door.
There is probably room to make use of PageIndexTupleOverwrite in SP-GiST
and GIN too, but that is left for a future patch.
Andrey Borodin, reviewed by Anastasia Lubennikova, whacked around a bit
by me
Fix locking a tuple updated by an aborted (sub)transaction
When heap_lock_tuple decided to follow the update chain, it tried to
also lock any version of the tuple that was created by an update that
was subsequently rolled back. This is pointless, since for all intents
and purposes that tuple exists no more; and moreover it causes
misbehavior, as reported independently by Marko Tiikkaja and Marti
Raudsepp: some SELECT FOR UPDATE/SHARE queries may fail to return
the tuples, and assertion-enabled builds crash.
Fix by having heap_lock_updated_tuple test the xmin and return success
immediately if the tuple was created by an aborted transaction.
The condition where tuples become invisible occurs when an updated tuple
chain is followed by heap_lock_updated_tuple, which reports the problem
as HeapTupleSelfUpdated to its caller heap_lock_tuple, which in turn
propagates that code outwards possibly leading the calling code
(ExecLockRows) to believe that the tuple exists no longer.
Backpatch to 9.3. Only on 9.5 and newer does this lead to a visible
failure, because of commit 27846f02c176; before that, heap_lock_tuple
skips the whole dance when the tuple is already locked by the same
transaction, because of the ancient HeapTupleSatisfiesUpdate behavior.
Still, the buggy condition may also exist in more convoluted scenarios
involving concurrent transactions, so it seems safer to fix the bug in
the old branches too.
Tom Lane [Fri, 9 Sep 2016 16:20:58 +0000 (12:20 -0400)]
In PageIndexTupleDelete, don't assume stored item lengths are MAXALIGNed.
PageAddItem stores the item length as-is. It MAXALIGN's the amount of
space actually allocated for each tuple, but not the stored length.
PageRepairFragmentation, PageIndexMultiDelete, and PageIndexDeleteNoCompact
are all on board with this and MAXALIGN item lengths after fetching them.
But PageIndexTupleDelete expects the stored length to be a MAXALIGN
multiple already. This accidentally works for existing index AMs because
they all maxalign their tuple sizes internally; but we don't do that for
heap tuples, and it shouldn't be a requirement for index tuples either.
So, sync PageIndexTupleDelete with the rest of bufpage.c by having it
maxalign the item size after fetching.
Also add a check that pd_special is maxaligned, to ensure that the test
"(offset + size) > phdr->pd_special" is still doing the right thing.
(If offset and pd_special are aligned, it doesn't matter whether size is.)
Again, this is in sync with the rest of the routines here, except for
PageAddItem which doesn't test because it doesn't actually do anything
for which pd_special alignment matters.
This shouldn't have any immediate functional impact; it just adds the
flexibility to use PageIndexTupleDelete on index tuples with non-aligned
lengths.
Peter Eisentraut [Thu, 18 Aug 2016 16:00:00 +0000 (12:00 -0400)]
Make better use of existing enums in plpgsql
plpgsql.h defines a number of enums, but most of the code passes them
around as ints. Update structs and function prototypes to take enum
types instead. This clarifies the struct definitions in plpgsql.h in
particular.
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Tom Lane [Fri, 9 Sep 2016 13:20:34 +0000 (09:20 -0400)]
Avoid reporting "cache lookup failed" for some user-reachable cases.
We have a not-terribly-thoroughly-enforced-yet project policy that internal
errors with SQLSTATE XX000 (ie, plain elog) should not be triggerable from
SQL. record_in, domain_in, and PL validator functions all failed to meet
this standard, because they threw plain elog("cache lookup failed for XXX")
errors on bad OIDs, and those are all invokable from SQL.
For record_in, the best fix is to upgrade typcache.c (lookup_type_cache)
to throw a user-facing error for this case. That seems consistent because
it was more than halfway there already, having user-facing errors for shell
types and non-composite types. Having done that, tweak domain_in to rely
on the typcache to throw an appropriate error. (This costs little because
InitDomainConstraintRef would fetch the typcache entry anyway.)
For the PL validator functions, we already have a single choke point at
CheckFunctionValidatorAccess, so just fix its error to be user-facing.
Simon Riggs [Fri, 9 Sep 2016 10:55:12 +0000 (11:55 +0100)]
Fix corruption of 2PC recovery with subxacts
Reading 2PC state files during recovery was borked, causing corruption.
The effect is limited to servers with 2PC, subtransactions and
recovery/replication.
Stas Kelvich, reviewed by Michael Paquier and Pavan Deolasee
Andres Freund [Fri, 9 Sep 2016 00:02:43 +0000 (17:02 -0700)]
Improve scalability of md.c for large relations.
So far md.c used a linked list of segments. That proved to be a problem
when processing large relations, because every smgr.c/md.c level access
to a page incurred walking through a linked list of all preceding
segments, thus making page access O(#segments).
Replace the linked list of segments hanging off SMgrRelationData with an
array of opened segments. That allows O(1) access to individual
segments, if they've previously been opened.
Discussion: <20140331101001.GE13135@alap3.anarazel.de>
Reviewed-By: Peter Geoghegan, Tom Lane (in an older version)
Andres Freund [Thu, 8 Sep 2016 23:51:09 +0000 (16:51 -0700)]
Fix mdtruncate() to close fd.c handle of deleted segments.
mdtruncate() forgot to FileClose() a segment's mdfd_vfd when deleting
it. That led to a fd.c handle to a truncated file being kept open until
backend exit.
The issue appears to have been introduced way back in 1a5c450f3024ac5,
before that the handle was closed inside FileUnlink().
The impact of this bug is limited: only VACUUM and ON COMMIT TRUNCATE
for temporary tables truncate files in place (i.e. TRUNCATE itself is
not affected), and the relation has to be bigger than 1GB. The
consequences of a leaked fd.c handle aren't severe either.
Discussion: <20160908220748.oqh37ukwqqncbl3n@alap3.anarazel.de>
Backpatch: all supported releases
commit_ts and test_pg_dump were declaring targets before including the
PGXS stanza, which meant that the "all" target, customarily defined as
the first (and therefore default) target, was not the default anymore.
Fix that by moving those target definitions to after PGXS.
commit_ts was initially good, but I broke it in commit 9def031bd2;
test_pg_dump was born broken, probably copying from commit_ts' mistake.
In passing, fix a comment mistake in test_pg_dump/Makefile.
Tom Lane [Thu, 8 Sep 2016 17:12:01 +0000 (13:12 -0400)]
Allow pg_dump to dump non-extension members of an extension-owned schema.
Previously, if a schema was created by an extension, a normal pg_dump run
(not --binary-upgrade) would summarily skip every object in that schema.
In a case where an extension creates a schema and then users create other
objects within that schema, this does the wrong thing: we want pg_dump
to skip the schema but still create the non-extension-owned objects.
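A hedged sketch of the scenario (hypothetical extension, schema, and table names):
    -- extension "ext" created schema "extschema"; a user then adds a table of their own:
    CREATE TABLE extschema.user_data (id int, payload text);
    -- a plain pg_dump now emits extschema.user_data, while still leaving
    -- creation of the schema itself to CREATE EXTENSION ext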
There's no easy way to fix this pre-9.6, because in earlier versions the
"dump" status for a schema is just a bool and there's no way to distinguish
"dump me" from "dump my members". However, as of 9.6 we do have enough
state to represent that, so this is a simple correction of the logic in
selectDumpableNamespace.
In passing, make some cosmetic fixes in nearby code.
Tom Lane [Thu, 8 Sep 2016 14:48:03 +0000 (10:48 -0400)]
Don't print database's tablespace in pg_dump -C --no-tablespaces output.
If the database has a non-default tablespace, we emitted a TABLESPACE
clause in the CREATE DATABASE command emitted by -C, even if
--no-tablespaces was also specified. This seems wrong, and it's
inconsistent with what pg_dumpall does, so change it. Per bug #14315
from Danylo Hlynskyi.
Back-patch to 9.5. The bug is much older, but it'd be a more invasive
change before 9.5 because dumpDatabase() hasn't got an easy way to get
to the outputNoTablespaces flag. Doesn't seem worth the work given
the lack of previous complaints.
Simon Riggs [Thu, 8 Sep 2016 09:32:58 +0000 (10:32 +0100)]
Fix minor memory leak in Standby startup
StandbyRecoverPreparedTransactions() leaked the buffer
used for the two-phase state file. This was leaked once
at startup and at every shutdown checkpoint seen.
MSVC: Place gendef.pl temporary file in the target directory.
Until now, it used the current working directory. This makes it safe
for simultaneous invocations of gendef.pl, with different target
directories, to run from a single current working directory, such as
$(top_srcdir). The MSVC build system will soon rely on this.
Tom Lane [Wed, 7 Sep 2016 20:11:56 +0000 (16:11 -0400)]
Support renaming an existing value of an enum type.
Not much to be said about this patch: it does what it says on the tin.
In passing, rename AlterEnumStmt.skipIfExists to skipIfNewValExists
to clarify what it actually does. In the discussion of this patch
we considered supporting other similar options, such as IF EXISTS
on the type as a whole or IF NOT EXISTS on the target name. This
patch doesn't actually add any such feature, but it might happen later.
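A minimal example of the new syntax (hypothetical enum type):
    CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');
    ALTER TYPE mood RENAME VALUE 'sad' TO 'melancholy';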
Dagfinn Ilmari Mannsåker, reviewed by Emre Hasegeli
Tom Lane [Wed, 7 Sep 2016 17:36:08 +0000 (13:36 -0400)]
Doc: minor documentation improvements about extensions.
Document the formerly-undocumented behavior that schema and comment
control-file entries for an extension are honored only during initial
installation, whereas other properties are also honored during updates.
While at it, do some copy-editing on the recently-added docs for CREATE
EXTENSION ... CASCADE, use links for some formerly vague cross references,
and make a couple other minor improvements.
Back-patch to 9.6 where CASCADE was added. The other parts of this
could go further back, but they're probably not important enough to
bother.
Tom Lane [Wed, 7 Sep 2016 03:55:55 +0000 (23:55 -0400)]
Add a HINT for client-vs-server COPY failure cases.
Users often get confused between COPY and \copy and try to use client-side
paths with COPY. The server then cannot find the file (if remote), or sees
a permissions problem (if local), or some variant of that. Emit a hint
about this in the most common cases.
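A hedged illustration (hypothetical table and path): server-side COPY resolves
the file name on the server, so for a client-local file psql's \copy is wanted instead:
    COPY mytable FROM '/home/me/data.csv' WITH (FORMAT csv);   -- server-side; triggers the new hint if the file isn't there
    \copy mytable FROM '/home/me/data.csv' WITH (FORMAT csv)   -- psql client-side equivalent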
In future we might want to expand the set of errnos for which the hint
gets printed, but be conservative for now.
Craig Ringer, reviewed by Christoph Berg and Tom Lane
Add a location field to the DefElem struct, used to parse many utility
commands. Update various error messages to supply error position
information.
To propagate the error position information in a more systematic way,
create a ParseState in standard_ProcessUtility() and pass that to
interested functions implementing the utility commands. This seems
better than passing the query string and then reassembling a parse state
ad hoc, which violates the encapsulation of the ParseState type.
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Tom Lane [Tue, 6 Sep 2016 19:50:31 +0000 (15:50 -0400)]
Guard against possible memory allocation botch in batchmemtuples().
Negative availMemLessRefund would be problematic. It's not entirely
clear whether the case can be hit in the code as it stands, but this
seems like good future-proofing in any case. While we're at it,
insist that the value be not merely positive but not tiny, so as to
avoid doing a lot of repalloc work for little gain.
Tom Lane [Tue, 6 Sep 2016 18:53:31 +0000 (14:53 -0400)]
Teach appendShellString() to not quote strings containing "-".
Brain fade in commit a00c58314: I was thinking that a string starting with
"-" could be taken as a switch depending on command line syntax. That's
true, but having appendShellString() quote it will not help, so we may as
well not do so. Per complaint from Peter Eisentraut.