Robert Haas [Fri, 19 Jun 2015 15:28:30 +0000 (11:28 -0400)]
Fix corner case in autovacuum-forcing logic for multixact wraparound.
Since find_multixact_start() relies on SimpleLruDoesPhysicalPageExist(),
and that function looks only at the on-disk state, it's possible for it
to fail to find a page that exists in the in-memory SLRU that has not
been written yet. If that happens, SetOffsetVacuumLimit() will
erroneously decide to force emergency autovacuuming immediately.
We should probably fix find_multixact_start() to consider the data
cached in memory as well as the on-disk state, but that's no excuse
for SetOffsetVacuumLimit() to be stupid about the case where it can
no longer read the value after having previously succeeded in doing so.
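As an illustration only (not code from the commit; all names are hypothetical), a minimal standalone sketch of that fallback idea: if the lookup fails, keep using the last value that was read successfully instead of forcing emergency autovacuum.

#include <stdbool.h>

typedef unsigned int MultiXactOffset;

static bool            offset_known = false;
static MultiXactOffset last_known_offset = 0;

/* try_read() stands in for the lookup that may fail transiently. */
static bool
get_oldest_offset(bool (*try_read) (MultiXactOffset *), MultiXactOffset *result)
{
    MultiXactOffset current;

    if (try_read(&current))
    {
        last_known_offset = current;   /* remember the last successful read */
        offset_known = true;
    }
    if (!offset_known)
        return false;                  /* never succeeded; caller must cope */
    *result = last_known_offset;       /* otherwise fall back to the old value */
    return true;
}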
POSIX permits setlocale() calls to invalidate any previous setlocale()
return values. Commit 5f538ad004aa00cf0881f179f0cde789aad4f47e
neglected to account for that. In advance of fixing that bug, switch to
failing hard on affected configurations. This is a planned temporary
commit to assay buildfarm-represented configurations.
jsonb_set() and other clients of the setPathArray() utility function
could get spurious results when an array integer subscript is provided
that is not within the range of int.
To fix, ensure that the value returned by strtol() within setPathArray()
is within the range of int; when it isn't, assume an invalid input in
line with existing, similar cases. The path-orientated operators that
appeared in PostgreSQL 9.3 and 9.4 do not call setPathArray(), and
already independently take this precaution, so no change there.
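A standalone sketch of that precaution, with a hypothetical parse_array_subscript() helper (this is not the actual setPathArray() code): reject any subscript that strtol() cannot fit into an int.

#include <errno.h>
#include <limits.h>
#include <stdbool.h>
#include <stdlib.h>

static bool
parse_array_subscript(const char *str, int *result)
{
    char   *endptr;
    long    lindex;

    errno = 0;
    lindex = strtol(str, &endptr, 10);

    /* Treat garbage, overflow, or anything outside int's range as invalid. */
    if (endptr == str || *endptr != '\0' || errno == ERANGE ||
        lindex > INT_MAX || lindex < INT_MIN)
        return false;

    *result = (int) lindex;
    return true;
}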
Tom Lane [Fri, 12 Jun 2015 17:44:06 +0000 (13:44 -0400)]
Fix failure to cover scalar-vs-rowtype cases in exec_stmt_return().
In commit 9e3ad1aac52454569393a947c06be0d301749362 I modified plpgsql
to use exec_stmt_return's simple-variables fast path in more cases.
However, I overlooked that there are really two different return
conventions in use here, depending on whether estate->retistuple is true,
and the existing fast-path code had only bothered to handle one of them.
So trying to return a scalar in a function returning composite, or vice
versa, could lead to unexpected error messages (typically "cache lookup
failed for type 0") or to a null-pointer-dereference crash.
In the DTYPE_VAR case, we can just throw error if retistuple is true,
corresponding to what happens in the general-expression code path that was
being used previously. (Perhaps someday both of these code paths should
attempt a coercion, but today is not that day.)
In the REC and ROW cases, just hand the problem to exec_eval_datum()
when not retistuple. Also clean up the ROW coding slightly so it looks
more like exec_eval_datum().
The previous commit also caused exec_stmt_return_next() to be used in
more cases, but that code seems to be OK as-is.
Per off-list report from Serge Rielau. This bug is new in 9.5 so no need
to back-patch.
Tom Lane [Fri, 12 Jun 2015 15:54:03 +0000 (11:54 -0400)]
Improve error message and hint for ALTER COLUMN TYPE can't-cast failure.
We already tried to improve this once, but the "improved" text was rather
off-target if you had provided a USING clause. Also, it seems helpful
to provide the exact text of a suggested USING clause, so users can just
copy-and-paste it when needed. Per complaint from Keith Rarick and a
suggestion from Merlin Moncure.
Back-patch to 9.2 where the current wording was adopted.
Fujii Masao [Fri, 12 Jun 2015 14:11:51 +0000 (23:11 +0900)]
Make postmaster restart archiver soon after it dies, even during recovery.
After the archiver dies, postmaster tries to start a new one immediately.
But previously this could happen only while the server was running normally,
even when archiving was enabled during recovery (i.e., archive_mode was set
to always). So an archiver running during recovery could not be restarted
soon after it died. This is an oversight in commit ffd3774.
This commit changes reaper(), the postmaster's signal handler for cleaning up
after a child process dies, so that it tries to start a new archiver even
during recovery if necessary.
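A minimal standalone sketch of the intended behavior (hypothetical names, not the actual reaper() code): with archive_mode = always, a replacement archiver should be launched even while in recovery.

#include <stdbool.h>

typedef enum { ARCHIVE_OFF, ARCHIVE_ON, ARCHIVE_ALWAYS } ArchiveMode;

/* Should a replacement archiver be launched right now? */
static bool
archiver_should_run(ArchiveMode mode, bool in_recovery)
{
    if (mode == ARCHIVE_ALWAYS)
        return true;            /* archive during recovery as well */
    if (mode == ARCHIVE_ON)
        return !in_recovery;    /* only once normal running is reached */
    return false;
}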
Fujii Masao [Fri, 12 Jun 2015 03:32:48 +0000 (12:32 +0900)]
Clean up useless mention of RMGRDESCSOURCES in pg_rewind Makefile.
RMGRDESCSOURCES is defined and used only in pg_xlogdump Makefile,
but pg_rewind Makefile mentioned it as extra files to remove in "make clean".
This patch removes that useless mention from pg_rewind Makefile.
Andrew Dunstan [Thu, 11 Jun 2015 14:06:58 +0000 (10:06 -0400)]
Rename jsonb - text[] operator to #- to avoid ambiguity.
Following recent discussion on -hackers. The underlying function is
also renamed to jsonb_delete_path. The regression tests now don't need
ugly type casts to avoid the ambiguity, so they are also removed.
Tom Lane [Tue, 9 Jun 2015 17:37:08 +0000 (13:37 -0400)]
Report more information if pg_perm_setlocale() fails at startup.
We don't know why a few Windows users have seen this fail, but the
taciturnity of the error message certainly isn't helping debug it.
Let's at least find out which LC category isn't working.
Fujii Masao [Mon, 8 Jun 2015 18:03:24 +0000 (03:03 +0900)]
Refactor WAL segment copying code.
* Remove unused argument "dstfname" and related code from XLogFileCopy().
* Previously XLogFileCopy() returned a pstrdup'd string so that
InstallXLogFileSegment() could use it later. Since the pstrdup'd string was
never free'd, there was a risk of a memory leak. It was almost harmless
because the startup process exited just after calling XLogFileCopy().
This commit changes XLogFileCopy() so that it directly calls
InstallXLogFileSegment() and doesn't call pstrdup() at all, which fixes that
memory leak problem.
* Extend InstallXLogFileSegment() so that the caller can specify the log level.
This allows us to emit an error when InstallXLogFileSegment() fails at a disk
file access like link() or rename(). Previously the failure was always logged
at LOG level, and additionally had to be logged at ERROR when we wanted to
treat it as an error.
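A standalone sketch of that pattern (hypothetical names, plain C in place of PostgreSQL's ereport machinery): the caller decides how severe a failed link() is.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static bool
install_segment(const char *tmppath, const char *path, bool error_is_fatal)
{
    if (link(tmppath, path) < 0)
    {
        fprintf(stderr, "could not link \"%s\" to \"%s\": %s\n",
                tmppath, path, strerror(errno));
        if (error_is_fatal)
            exit(1);            /* caller asked for ERROR-like behavior */
        return false;           /* caller asked for LOG-like behavior */
    }
    return true;
}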
Andrew Dunstan [Mon, 8 Jun 2015 00:46:00 +0000 (20:46 -0400)]
Desupport jsonb subscript deletion on objects
Supporting deletion of JSON pairs within jsonb objects using an
array-style integer subscript allowed for surprising outcomes. This was
mostly due to the implementation-defined ordering of pairs within
objects for jsonb.
It also seems desirable to make jsonb integer subscript deletion
consistent with the 9.4 era general purpose integer subscripting
operator for jsonb (although that operator returns NULL when an object
is encountered, while we prefer here to throw an error).
Peter Geoghegan, following discussion on -hackers.
Tom Lane [Sun, 7 Jun 2015 19:32:09 +0000 (15:32 -0400)]
Use a safer method for determining whether relcache init file is stale.
When we invalidate the relcache entry for a system catalog or index, we
must also delete the relcache "init file" if the init file contains a copy
of that rel's entry. The old way of doing this relied on a specially
maintained list of the OIDs of relations present in the init file: we made
the list either when reading the file in, or when writing the file out.
The problem is that when writing the file out, we included only rels
present in our local relcache, which might have already suffered some
deletions due to relcache inval events. In such cases we correctly decided
not to overwrite the real init file with incomplete data --- but we still
used the incomplete initFileRelationIds list for the rest of the current
session. This could result in wrong decisions about whether the session's
own actions require deletion of the init file, potentially allowing an init
file created by some other concurrent session to be left around even though
it's been made stale.
Since we don't support changing the schema of a system catalog at runtime,
the only likely scenario in which this would cause a problem in the field
involves a "vacuum full" on a catalog concurrently with other activity, and
even then it's far from easy to provoke. Remarkably, this has been broken
since 2002 (in commit 786340441706ac1957a031f11ad1c2e5b6e18314), but we had
never seen a reproducible test case until recently. If it did happen in
the field, the symptoms would probably involve unexpected "cache lookup
failed" errors to begin with, then "could not open file" failures after the
next checkpoint, as all accesses to the affected catalog stopped working.
Recovery would require manually removing the stale "pg_internal.init" file.
To fix, get rid of the initFileRelationIds list, and instead consult
syscache.c's list of relations used in catalog caches to decide whether a
relation is included in the init file. This should be a tad more efficient
anyway, since we're replacing linear search of a list with ~100 entries
with a binary search. It's a bit ugly that the init file contents are now
so directly tied to the catalog caches, but in practice that won't make
much difference.
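A standalone sketch of the lookup strategy described above (hypothetical names): binary-search a sorted array of the OIDs used by the catalog caches instead of scanning a separately maintained list.

#include <stdbool.h>
#include <stdlib.h>

typedef unsigned int Oid;

static int
oid_cmp(const void *a, const void *b)
{
    Oid     oa = *(const Oid *) a;
    Oid     ob = *(const Oid *) b;

    return (oa > ob) - (oa < ob);
}

static bool
relation_in_init_file(Oid relid, const Oid *sorted_oids, size_t noids)
{
    return bsearch(&relid, sorted_oids, noids, sizeof(Oid), oid_cmp) != NULL;
}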
Tom Lane [Fri, 5 Jun 2015 21:04:07 +0000 (17:04 -0400)]
Get rid of a //-style comment.
Not sure how "//XXX" got into a committed patch in the first place,
as it's both content-free and against project style. pgindent made a
bit of a hash of it, too.
Going forward, we should have at least one buildfarm member using
"gcc -ansi" to catch such things, at least till such time as we
decide the project target language isn't C90 any more. I've turned
this option on on dromedary.
Tom Lane [Fri, 5 Jun 2015 17:22:27 +0000 (13:22 -0400)]
Fix incorrect order of database-locking operations in InitPostgres().
We should set MyProc->databaseId after acquiring the per-database lock,
not beforehand. The old way risked deadlock against processes trying to
copy or delete the target database, since they would first acquire the lock
and then wait for processes with matching databaseId to exit; that left a
window wherein an incoming process could set its databaseId and then block
on the lock, while the other process had the lock and waited in vain for
the incoming process to exit.
CountOtherDBBackends() would time out and fail after 5 seconds, so this
just resulted in an unexpected failure, not a permanent lockup, but it's
still annoying when it happens. A real-world example is that
short-duration connections to a template database should not cause CREATE
DATABASE to fail.
Doing it in the other order should be fine since the contract has always
been that processes searching the ProcArray for a database ID must hold the
relevant per-database lock while searching. Thus, this actually removes
the former race condition that required an assumption that storing to
MyProc->databaseId is atomic.
It's been like this for a long time, so back-patch to all active branches.
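As an illustration of the ordering only (pthreads stand in for the per-database lock; names are hypothetical, this is not the InitPostgres() code): advertise the database ID only after the lock is held.

#include <pthread.h>

static pthread_mutex_t per_database_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int my_database_id = 0;      /* 0 = not attached yet */

static void
attach_to_database(unsigned int dbid)
{
    pthread_mutex_lock(&per_database_lock);  /* take the lock first ... */
    my_database_id = dbid;                   /* ... then advertise the ID */
    pthread_mutex_unlock(&per_database_lock);
}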
Robert Haas [Fri, 5 Jun 2015 12:34:52 +0000 (08:34 -0400)]
Cope with possible failure of the oldest MultiXact to exist.
Recent commits, mainly b69bf30b9bfacafc733a9ba77c9587cf54d06c0c and 53bb309d2d5a9432d2602c93ed18e58bd2924e15, introduced mechanisms to
protect against wraparound of the MultiXact member space: the number
of multixacts that can exist at one time is limited to 2^32, but the
total number of members in those multixacts is also limited to 2^32,
and older code did not take care to enforce the second limit,
potentially allowing old data to be overwritten while it was still
needed.
Unfortunately, these new mechanisms failed to account for the fact
that the code paths in which they run might be executed during
recovery or while the cluster was in an inconsistent state. Also,
they failed to account for the fact that users who used pg_upgrade
to upgrade a PostgreSQL version between 9.3.0 and 9.3.4 might have
oldestMultiXid = 1 in the control file despite the true value
being larger.
To fix these problems, first, avoid unnecessarily examining the
members of MultiXacts when the cluster is not known to be consistent.
TruncateMultiXact has done this for a long time, and this patch does
not fix that. But the new calls used to prevent member wraparound
are not needed until we reach normal running, so avoid calling them
earlier. (SetMultiXactIdLimit is actually called before InRecovery
is set, so we can't rely on that; we invent our own multixact-specific
flag instead.)
Second, make failure to look up the members of a MultiXact a non-fatal
error. Instead, if we're unable to determine the member offset at
which wraparound would occur, postpone arming the member wraparound
defenses until we are able to do so. If we're unable to determine the
member offset that should force autovacuum, force it continuously
until we are able to do so. If we're unable to determine the member
offset at which we should truncate the members SLRU, log a message and
skip truncation.
An important consequence of these changes is that anyone who does have
a bogus oldestMultiXid = 1 value in pg_control will experience
immediate emergency autovacuuming when upgrading to a release that
contains this fix. The release notes should highlight this fact. If
a user has no pg_multixact/offsets/0000 file, but has oldestMultiXid = 1
in the control file, they may wish to vacuum any tables with
relminmxid = 1 prior to upgrading in order to avoid an immediate
emergency autovacuum after the upgrade. This must be done with a
PostgreSQL version 9.3.5 or newer and with vacuum_multixact_freeze_min_age
and vacuum_multixact_freeze_table_age set to 0.
This patch also adds an additional log message at each database server
startup, indicating either that protections against member wraparound
have been engaged, or that they have not. In the latter case, once
autovacuum has advanced oldestMultiXid to a sane value, the message
indicating that the guards have been engaged will appear at the next
checkpoint. A few additional messages have also been added at the DEBUG1
level so that the correct operation of this code can be properly audited.
Along the way, this patch fixes another, related bug in TruncateMultiXact
that has existed since PostgreSQL 9.3.0: when no MultiXacts exist at
all, the truncation code looks up NextMultiXactId, which doesn't exist
yet. This can lead to TruncateMultiXact removing every file in
pg_multixact/offsets instead of keeping one around, as it should.
This in turn will cause the database server to refuse to start
afterwards.
Patch by me. Review by Álvaro Herrera, Andres Freund, Noah Misch, and
Thomas Munro.
On second thought, the tables involved are not large enough that
autovacuum or autoanalyze would notice them; what seems far more
likely to be the culprit is the database-wide "vacuum analyze"
in the concurrent gist test. That thing has given us one headache
too many, so get rid of it in favor of targeted vacuuming of that
test's own tables only.
Tom Lane [Thu, 4 Jun 2015 18:39:52 +0000 (14:39 -0400)]
Tighten the per-operator testing done in brin regression test.
Verify that the number of matches is exactly what it should be, not just
that it not be zero. This should help us detect any environment-dependent
issues.
Also, verify that we're getting the expected type of scan plan (either
bitmap or seqscan as appropriate). Right now, this is failing on the
cidrcol test cases, as shown in the output file. I'll look into that
in a bit, but it seems good to commit this as-is temporarily to verify
that it behaves as expected on the buildfarm.
Tom Lane [Thu, 4 Jun 2015 17:50:32 +0000 (13:50 -0400)]
Fix brin "char" test to actually test what it meant to test.
Casting to char, without quotes, does not give the same results as casting
to "char". That meant we were not testing the brin "char" paths at all,
since we ended up with a text operator not a "char" operator.
Tom Lane [Thu, 4 Jun 2015 17:46:34 +0000 (13:46 -0400)]
Stabilize results of brin regression test.
This test used seqscans on tenk1, with LIMIT, to build test data.
That works most of the time, but if the synchronized-seqscan logic
kicks in, we get varying test data. This seems likely to explain
the erratic test failures on buildfarm member chipmunk, which uses
smaller-than-default shared_buffers. To fix, add ORDER BY clauses to
force the ordering to be what it was implicitly being assumed to be.
Peter Geoghegan had noticed this with respect to one of the trouble
spots, though not the ones actually causing the chipmunk issue.
Tom Lane [Thu, 4 Jun 2015 14:37:06 +0000 (10:37 -0400)]
Stabilize query plans in rowsecurity regression test.
Some recent buildfarm failures can be explained by supposing that
autovacuum or autoanalyze fired on the tables created by this test,
resulting in plan changes. Do a proactive VACUUM ANALYZE on the
test's principal tables to try to forestall such changes.
Fujii Masao [Thu, 4 Jun 2015 10:54:43 +0000 (19:54 +0900)]
Remove -i/--ignore-version option from pg_dump, pg_dumpall and pg_restore.
Commit c22ed3d523782c43836c163c16fa5a7bb3912826 turned
the -i/--ignore-version options into no-ops and marked them as deprecated.
Considering we shipped that in 8.4, it's time to remove all trace of
those switches, per discussion. We'd still have to wait a couple releases
before it'd be safe to use -i for something else, but it'd be a start.
Fujii Masao [Thu, 4 Jun 2015 04:22:49 +0000 (13:22 +0900)]
Fix some issues in pg_class.relminmxid and pg_database.datminmxid documentation.
- Correct the name of the directory that those catalog columns allow to be shrunk.
- Correct the name of the symbol that is used as the value of pg_class.relminmxid
when the relation is not a table.
- Fix "ID ID" typo.
Backpatch to 9.3 where those catalog columns were introduced.
Because of a bug in the DocBook XSL FO style sheet, an xref to a
varlistentry whose term includes an indexterm fails to build. One such
instance was introduced in commit 5086dfceba79ecd5d1eb28b8f4ed5221838ff3a6. Fix by adding the upstream
bug fix to our customization layer.
Tom Lane [Wed, 3 Jun 2015 22:02:39 +0000 (18:02 -0400)]
Fix some questionable edge-case behaviors in add_path() and friends.
add_path_precheck was doing exact comparisons of path costs, but it really
needs to do them fuzzily to be sure it won't reject paths that could
survive add_path's comparisons. (This can only matter if the initial cost
estimate is very close to the final one, but that turns out to often be
true.)
Also, it should ignore startup cost for this purpose if and only if
compare_path_costs_fuzzily would do so. The previous coding always ignored
startup cost for parameterized paths, which is wrong as of commit 3f59be836c555fa6; it could result in improper early rejection of paths that
we care about for SEMI/ANTI joins. It also always considered startup cost
for unparameterized paths, which is just as wrong though the only effect is
to waste planner cycles on paths that can't survive. Instead, it should
consider startup cost only when directed to by the consider_startup/
consider_param_startup relation flags.
Likewise, compare_path_costs_fuzzily should have symmetrical behavior
for parameterized and unparameterized paths. In this case, the best
answer seems to be that after establishing that total costs are fuzzily
equal, we should compare startup costs whether or not the consider_xxx
flags are on. That is what it's always done for unparameterized paths,
so let's make the behavior for parameterized paths match.
These issues were noted while developing the SEMI/ANTI join costing fix
of commit 3f59be836c555fa6, but we chose not to back-patch these fixes,
because they can cause changes in the planner's choices among
nearly-same-cost plans. (There is in fact one minor change in plan choice
within the core regression tests.) Destabilizing plan choices in back
branches without very clear improvements is frowned on, so we'll just fix
this in HEAD.
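A standalone sketch of the fuzzy-comparison idea (hypothetical names and parameters; the real logic lives in compare_path_costs_fuzzily): costs within a small relative factor of each other are treated as equal rather than compared exactly.

typedef enum
{
    COSTS_EQUAL,                /* within the fuzz factor: call it a tie */
    COSTS_BETTER1,              /* first cost is cheaper beyond the fuzz */
    COSTS_BETTER2               /* second cost is cheaper beyond the fuzz */
} CostComparison;

static CostComparison
compare_costs_fuzzily(double cost1, double cost2, double fuzz_factor)
{
    if (cost1 > cost2 * fuzz_factor)
        return COSTS_BETTER2;
    if (cost2 > cost1 * fuzz_factor)
        return COSTS_BETTER1;
    return COSTS_EQUAL;
}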
Tom Lane [Wed, 3 Jun 2015 15:58:47 +0000 (11:58 -0400)]
Fix planner's cost estimation for SEMI/ANTI joins with inner indexscans.
When the inner side of a nestloop SEMI or ANTI join is an indexscan that
uses all the join clauses as indexquals, it can be presumed that both
matched and unmatched outer rows will be processed very quickly: for
matched rows, we'll stop after fetching one row from the indexscan, while
for unmatched rows we'll have an indexscan that finds no matching index
entries, which should also be quick. The planner already knew about this,
but it was nonetheless charging for at least one full run of the inner
indexscan, as a consequence of concerns about the behavior of materialized
inner scans --- but those concerns don't apply in the fast case. If the
inner side has low cardinality (many matching rows) this could make an
indexscan plan look far more expensive than it actually is. To fix,
rearrange the work in initial_cost_nestloop/final_cost_nestloop so that we
don't add the inner scan cost until we've inspected the indexquals, and
then we can add either the full-run cost or just the first tuple's cost as
appropriate.
Experimentation with this fix uncovered another problem: add_path and
friends were coded to disregard cheap startup cost when considering
parameterized paths. That's usually okay (and desirable, because it thins
the path herd faster); but in this fast case for SEMI/ANTI joins, it could
result in throwing away the desired plain indexscan path in favor of a
bitmap scan path before we ever get to the join costing logic. In the
many-matching-rows cases of interest here, a bitmap scan will do a lot more
work than required, so this is a problem. To fix, add a per-relation flag
consider_param_startup that works like the existing consider_startup flag,
but applies to parameterized paths, and set it for relations that are the
inside of a SEMI or ANTI join.
To make this patch reasonably safe to back-patch, care has been taken to
avoid changing the planner's behavior except in the very narrow case of
SEMI/ANTI joins with inner indexscans. There are places in
compare_path_costs_fuzzily and add_path_precheck that are not terribly
consistent with the new approach, but changing them will affect planner
decisions at the margins in other cases, so we'll leave that for a
HEAD-only fix.
Back-patch to 9.3; before that, the consider_startup flag didn't exist,
meaning that the second aspect of the patch would be too invasive.
Per a complaint from Peter Holzer and analysis by Tomas Vondra.
Tom Lane [Mon, 1 Jun 2015 17:27:43 +0000 (13:27 -0400)]
Release notes for 9.4.3, 9.3.8, 9.2.12, 9.1.17, 9.0.21.
Also sneak entries for commits 97ff2a564 et al into the sections for
the previous releases in the relevant branches. Those fixes did go out
in the previous releases, but missed getting documented.
Andrew Dunstan [Mon, 1 Jun 2015 00:34:10 +0000 (20:34 -0400)]
Rename jsonb_replace to jsonb_set and allow it to add new values
The function is given a fourth parameter, which defaults to true. When
this parameter is true, if the last element of the path is missing
in the original json, jsonb_set creates it in the result and assigns it
the new value. If it is false then the function does nothing unless all
elements of the path are present, including the last.
Based on some original code from Dmitry Dolgov, heavily modified by me.
Tom Lane [Fri, 29 May 2015 21:02:58 +0000 (17:02 -0400)]
initdb -S should now have an explicit check that $PGDATA is valid.
The fsync code from the backend essentially assumes that somebody's already
validated PGDATA, at least to the extent of it being a readable directory.
That's safe enough for initdb's normal code path too, but "initdb -S"
doesn't have any other processing at all that touches the target directory.
To have reasonable error-case behavior, add a pg_check_dir call.
Per gripe from Peter E.
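A standalone sketch of the kind of validation added (this is not the pg_check_dir() implementation): make sure the target is at least a readable directory before attempting the fsync pass.

#include <dirent.h>
#include <stdbool.h>
#include <stdio.h>

static bool
dir_is_readable(const char *path)
{
    DIR        *dir = opendir(path);

    if (dir == NULL)
    {
        fprintf(stderr, "could not open directory \"%s\"\n", path);
        return false;           /* refuse to proceed with the fsync pass */
    }
    closedir(dir);
    return true;
}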
Tom Lane [Fri, 29 May 2015 19:11:36 +0000 (15:11 -0400)]
Remove special cases for ETXTBSY from new fsync'ing logic.
The argument that this is a sufficiently-expected case to be silently
ignored seems pretty thin. Andres had brought it up back when we were
still considering that most fsync failures should be hard errors, and it
probably would be legit not to fail hard for ETXTBSY --- but the same is
true for EROFS and other cases, which is why we gave up on hard failures.
ETXTBSY is surely not a normal case, so logging the failure seems fine
from here.
Tom Lane [Fri, 29 May 2015 17:26:21 +0000 (13:26 -0400)]
Check that all aliases of a built-in function have same leakproof property.
opr_sanity.sql has a test checking that relevant properties of built-in
functions match when the same C function is referenced by multiple pg_proc
entries. The test neglected to check proleakproof, though, and when
I added that condition it exposed that xideqint4 hadn't been updated to
match xideq. So fix that as well, and in consequence bump catversion.
This isn't very critical, so no need to worry about fixing back branches.
Tom Lane [Fri, 29 May 2015 17:05:16 +0000 (13:05 -0400)]
Adjust initdb to also not consider fsync'ing failures fatal.
Make initdb's version of this logic look as much like the backend's
as possible. This is much less critical than in the backend since not
so many people use "initdb -S", but we want the same corner-case error
handling in both cases.
Back-patch to 9.3 where initdb -S option was introduced. Before that,
initdb only had to deal with freshly-created data directories, wherein
no failures should be expected.
Tom Lane [Fri, 29 May 2015 15:57:33 +0000 (11:57 -0400)]
Revert exporting of internal GUC variable "data_directory".
This undoes a poorly-thought-out choice in commit 970a18687f9b3058, namely
to export guc.c's internal variable data_directory. The authoritative
variable so far as C code is concerned is DataDir; there is no reason for
anything except specific bits of GUC code to look at the GUC variable.
After yesterday's commits fixing the fsync-on-restart patch, the only
remaining misuse of data_directory was in AlterSystemSetConfigFile(),
which would be much better off just using a relative path anyhow: it's
less code and it doesn't break if the DBA moves the data directory of a
running system, which is a case we've taken some pains over in the past.
This is mostly cosmetic, so no need for a back-patch (and I'd be hesitant
to remove a global variable in stable branches anyway).
Tom Lane [Thu, 28 May 2015 21:33:03 +0000 (17:33 -0400)]
Fix fsync-at-startup code to not treat errors as fatal.
Commit 2ce439f3379aed857517c8ce207485655000fc8e introduced a rather serious
regression, namely that if its scan of the data directory came across any
un-fsync-able files, it would fail and thereby prevent database startup.
Worse yet, symlinks to such files also caused the problem, which meant that
crash restart was guaranteed to fail on certain common installations such
as older Debian.
After discussion, we agreed that (1) failure to start is worse than any
consequence of not fsync'ing is likely to be, therefore treat all errors
in this code as nonfatal; (2) we should not chase symlinks other than
those that are expected to exist, namely pg_xlog/ and tablespace links
under pg_tblspc/. The latter restriction avoids possibly fsync'ing a
much larger part of the filesystem than intended, if the user has left
random symlinks hanging about in the data directory.
This commit takes care of that and also does some code beautification,
mainly moving the relevant code into fd.c, which seems a much better place
for it than xlog.c, and making sure that the conditional compilation for
the pre_sync_fname pass has something to do with whether pg_flush_data
works.
I also relocated the call site in xlog.c down a few lines; it seems a
bit silly to be doing this before ValidateXLOGDirectoryStructure().
The similar logic in initdb.c ought to be made to match this, but that
change is noncritical and will be dealt with separately.
Back-patch to all active branches, like the prior commit.
Tom Lane [Thu, 28 May 2015 16:44:31 +0000 (12:44 -0400)]
Fix pg_rewind's handling of top-level symlinks.
The previous coding suffered a null-pointer dereference if it found any
symlink at the top level of $PGDATA. Fix that, and teach it to recurse
into a symlink for pg_xlog, but not anything else.
Tom Lane [Thu, 28 May 2015 16:17:22 +0000 (12:17 -0400)]
Fix assorted inconsistencies in our calls of readlink().
Ensure that we null-terminate the result string (one place in pg_rewind).
Be paranoid about out-of-range results from readlink() (should not happen,
but there is no good reason for some call sites to be careful about it and
others not). Consistently use the whole buffer, not sometimes one byte
less. Ensure we emit an appropriate errcode() in all cases. Spell the
error messages the same way.
The only serious bug here is the missing null-termination in pg_rewind,
which is new code, so no need for a back-patch.
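A standalone sketch of the readlink() hygiene described above (the wrapper name is hypothetical): check for errors and truncation, use the whole buffer, and always null-terminate the result.

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static bool
read_link_safely(const char *path, char *buf, size_t bufsize)
{
    ssize_t     len = readlink(path, buf, bufsize);

    if (len < 0)
    {
        fprintf(stderr, "could not read symbolic link \"%s\"\n", path);
        return false;
    }
    if ((size_t) len >= bufsize)
    {
        /* possibly truncated, and no room left for the terminator */
        fprintf(stderr, "symbolic link \"%s\" target is too long\n", path);
        return false;
    }
    buf[len] = '\0';            /* readlink() does not null-terminate */
    return true;
}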
Tom Lane [Wed, 27 May 2015 23:14:39 +0000 (19:14 -0400)]
Fix portability issue in isolationtester grammar.
specparse.y and specscanner.l used "string" as a token name. Now, bison
likes to define each token name as a macro for the token code it assigns,
which means those names are basically off-limits for any other use within
the grammar file or included headers. So names as generic as "string" are
dangerous. This is what was causing the recent failures on protosciurus:
some versions of Solaris' sys/kstat.h use "string" as a field name.
With late-model bison we don't see this problem because the token macros
aren't defined till later (that is why castoroides didn't show the problem
even though it's on the same machine). But protosciurus uses bison 1.875
which defines the token macros up front.
This land mine has been there from day one; we'd have found it sooner
except that protosciurus wasn't trying to run the isolation tests till
recently.
To fix, rename the token to "string_literal" which is hopefully less
likely to collide with names used by system headers. Back-patch to
all branches containing the isolation tests.
Tom Lane [Wed, 27 May 2015 02:14:59 +0000 (22:14 -0400)]
Remove configure check prohibiting threaded libpython on OpenBSD.
According to recent tests, this case now works fine, so there's no reason
to reject it anymore. (Even if there are still some OpenBSD platforms
in the wild where it doesn't work, removing the check won't break any case
that worked before.)
We can actually remove the entire test that discovers whether libpython
is threaded, since without the OpenBSD case there's no need to know that
at all.
Per report from Davin Potts. Back-patch to all active branches.
Tom Lane [Tue, 26 May 2015 18:10:46 +0000 (14:10 -0400)]
Suppress occasional failures in brin regression test.
brin.sql included a call of brin_summarize_new_values(), and expected
it to always report exactly 5 summarization events. This failed sometimes
during parallel regression tests, as a consequence of the database-wide
VACUUM in gist.sql getting there first. The most future-proof way
to avoid variation in the test results is to forget about using
brin_summarize_new_values() and just do a plain "VACUUM brintest",
which will exercise the same code anyway.
Having done that, there's no need for preventing autovacuum on brintest;
doing so just reduces the scope of test coverage, so let's not.
Andrew Dunstan [Tue, 26 May 2015 15:16:52 +0000 (11:16 -0400)]
Add all structured objects passed to pushJsonbValue piecewise.
Commit 9b74f32cdbff8b9be47fc69164eae552050509ff did this for objects of
type jbvBinary, but in trying further to simplify some of the new jsonb
code I discovered that objects of type jbvObject or jbvArray passed as
WJB_ELEM or WJB_VALUE also caused problems. These too are now added
component by component.
Tom Lane [Tue, 26 May 2015 01:56:19 +0000 (21:56 -0400)]
Fix valgrind's "unaddressable bytes" whining about BRIN code.
brin_form_tuple calculated an exact tuple size, then palloc'd and
filled just that much. Later, brin_doinsert or brin_doupdate would
MAXALIGN the tuple size and tell PageAddItem that that was the size
of the tuple to insert. If the original tuple size wasn't a multiple
of MAXALIGN, the net result would be that PageAddItem would memcpy
a few more bytes than the palloc request had been for.
AFAICS, this is totally harmless in the real world: the error is a
read overrun not a write overrun, and palloc would certainly have
rounded the request up to a MAXALIGN multiple internally, so there's
no chance of the memcpy fetching off the end of memory. Valgrind,
however, is picky to the byte level not the MAXALIGN level.
Fix it by pushing the MAXALIGN step back to brin_form_tuple. (The other
possible source of tuples in this code, brin_form_placeholder_tuple,
was already producing a MAXALIGN'd result.)
In passing, be a bit more paranoid about internal allocations in
brin_form_tuple.
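A standalone sketch of the fix's shape (hypothetical names; the real change is in brin_form_tuple): allocate and zero-fill the rounded-up length up front, so later code that rounds the length up never copies bytes that were never written.

#include <stdlib.h>
#include <string.h>

#define MAX_ALIGNOF      8
#define MAXALIGN_UP(len) (((size_t) (len) + (MAX_ALIGNOF - 1)) & ~((size_t) (MAX_ALIGNOF - 1)))

static void *
form_tuple(const void *data, size_t datalen, size_t *tuplen)
{
    size_t      len = MAXALIGN_UP(datalen);     /* round up before allocating */
    char       *tup = malloc(len);

    if (tup == NULL)
        return NULL;
    memset(tup, 0, len);        /* the padding bytes are defined, too */
    memcpy(tup, data, datalen);
    *tuplen = len;              /* callers may copy exactly this many bytes */
    return tup;
}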
Tom Lane [Mon, 25 May 2015 18:12:51 +0000 (14:12 -0400)]
Explain CHECK constraint handling in postgres_fdw's IMPORT FOREIGN SCHEMA.
The existing documentation could easily be misinterpreted, and it failed to
explain the inconsistent-evaluation hazard that deterred us from supporting
automatic importing of check constraints. Revise it.