Fix race condition in preparing a transaction for two-phase commit.
To lock a prepared transaction's shared memory entry, we used to mark it
with the XID of the backend. When the XID was no longer active according
to the proc array, the entry was implicitly considered not locked
anymore. However, when preparing a transaction, the backend's proc array
entry was cleared before transferring the locks (and some other state) to
the prepared transaction's dummy PGPROC entry, so there was a window where
another backend could finish the transaction before it was in fact fully
prepared.
To fix, rewrite the locking mechanism of global transaction entries. Instead
of an XID, just have a simple locked-or-not flag in each entry (we store the
locking backend's backend id rather than a simple boolean, but that's just
for debugging purposes). The backend is responsible for explicitly unlocking
the entry, and to make sure that that happens, install a callback to unlock
it on abort or process exit.
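For illustration, a minimal standalone sketch of the pattern described above
(an explicit lock flag plus a cleanup callback), with invented names rather
than the actual twophase.c data structures:

    #include <stdio.h>
    #include <stdlib.h>

    /* hypothetical stand-in for a global transaction entry */
    typedef struct GxactEntry
    {
        int locked_by;              /* backend id of the locker, or -1 */
    } GxactEntry;

    static GxactEntry entry = { -1 };
    static int my_backend_id = 42;  /* assumed id of this process */

    /* callback: make sure the entry is unlocked on abort or process exit */
    static void unlock_on_exit(void)
    {
        if (entry.locked_by == my_backend_id)
            entry.locked_by = -1;
    }

    int main(void)
    {
        atexit(unlock_on_exit);          /* stands in for the abort/exit hook */

        entry.locked_by = my_backend_id; /* lock while preparing */
        /* ... transfer locks and other state to the dummy entry ... */
        entry.locked_by = -1;            /* explicit unlock when done */

        printf("entry prepared and unlocked\n");
        return 0;
    }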
Tom Lane [Thu, 15 May 2014 01:13:54 +0000 (21:13 -0400)]
In initdb, ensure stdout/stderr buffering behavior is what we expect.
Since this program may print to either stdout or stderr, the relative
ordering of its messages depends on the buffering behavior of those files.
Force stdout to be line-buffered and stderr to be unbuffered, ensuring
that the behavior will match standard Unix interactive behavior, even
when stdout and stderr are rerouted to a file.
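As a hedged sketch of the standard C calls this refers to (the exact placement
inside initdb is not shown here):

    #include <stdio.h>

    int main(void)
    {
        /* line-buffer stdout and unbuffer stderr, so message ordering stays
         * stable even when both streams are redirected to the same file */
        setvbuf(stdout, NULL, _IOLBF, 0);
        setvbuf(stderr, NULL, _IONBF, 0);

        printf("this goes to stdout\n");
        fprintf(stderr, "this goes to stderr\n");
        return 0;
    }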
Per complaint from Tomas Vondra. The particular case he pointed out is
new in HEAD, but issues of the same sort could arise in any branch with
other error messages, so back-patch to all branches.
I'm unsure whether we might not want to do this in other client programs
as well. For the moment, just fix initdb.
Tom Lane [Wed, 14 May 2014 18:55:48 +0000 (14:55 -0400)]
Code review for recent changes in relcache.c.
rd_replidindex should be managed the same as rd_oidindex, and rd_keyattr
and rd_idattr should be managed like rd_indexattr. Omissions in this area
meant that the bitmapsets computed for rd_keyattr and rd_idattr would be
leaked during any relcache flush, resulting in a slow but permanent leak in
CacheMemoryContext. There was also a tiny probability of relcache entry
corruption if we ran out of memory at just the wrong point in
RelationGetIndexAttrBitmap. Otherwise, the fields were not zeroed where
expected, which would not bother the code any AFAICS but could greatly
confuse anyone examining the relcache entry while debugging.
Also, create an API function RelationGetReplicaIndex rather than letting
non-relcache code be intimate with the mechanisms underlying caching of
that value (we won't even mention the memory leak there).
Also, fix a relcache flush hazard identified by Andres Freund:
RelationGetIndexAttrBitmap must not assume that rd_replidindex stays valid
across index_open.
The aspects of this involving rd_keyattr date back to 9.3, so back-patch
those changes.
Tom Lane [Wed, 14 May 2014 15:51:10 +0000 (11:51 -0400)]
Make initdb throw error for bad locale values.
Historically we've printed a complaint for a bad locale setting, but then
fallen back to the environment default. Per discussion, this is not such
a great idea, because rectifying an erroneous locale choice post-initdb
(perhaps long after data has been loaded) could be enormously expensive.
Better to complain and give the user a chance to double-check things.
The behavior was particularly bad if the bad setting came from environment
variables rather than a bogus command-line switch: in that case not only
was there a fallback to C/SQL_ASCII, but the printed complaint was quite
unhelpful. It's hard to be entirely sure what variables setlocale looked
at, but we can at least give a hint where the problem might be.
When cache invalidations arrive while ri_LoadConstraintInfo() is busy
filling a new cache entry, InvalidateConstraintCacheCallBack() compares
the not-yet-initialized oidHashValue field with the to-be-invalidated
hash value. To fix, check whether the entry is already marked as invalid.
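A generic sketch of that kind of guard (invented names, not the actual
ri_triggers.c code): the callback must check the entry's valid flag before
trusting any other field.

    #include <stdbool.h>
    #include <stdint.h>

    /* hypothetical cache entry that is filled in incrementally */
    typedef struct CacheEntry
    {
        bool     valid;            /* true only once fully initialized */
        uint32_t oidHashValue;     /* undefined until valid is true */
    } CacheEntry;

    /* invalidation callback: ignore entries that are not (yet) valid */
    static void invalidate_callback(CacheEntry *entry, uint32_t hashValue)
    {
        if (!entry->valid)
            return;                      /* still being built; nothing to do */
        if (entry->oidHashValue == hashValue)
            entry->valid = false;        /* mark stale */
    }

    int main(void)
    {
        CacheEntry e = { false, 0 };     /* entry still being filled in */
        invalidate_callback(&e, 12345);  /* safely ignored */
        e.oidHashValue = 12345;
        e.valid = true;                  /* now fully initialized */
        invalidate_callback(&e, 12345);  /* marks it stale */
        return 0;
    }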
Initialize padding bytes in btree_gist varbit support.
The code expands a varbit gist leaf key to a node key by copying the bit
data twice in a varlen datum, as both the lower and upper key. The lower key
was expanded to INTALIGN size, but the padding bytes were not initialized.
That's a problem because when the lower/upper keys are compared, the padding
bytes are compared too when the values are otherwise equal. That could
lead to incorrect query results.
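A small standalone illustration of the hazard (not the btree_gist code
itself): comparing padded buffers with memcmp() only gives stable answers if
the padding bytes are zeroed.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* two 5-byte values padded out to 8-byte (INTALIGN-style) buffers */
        char *a = malloc(8);
        char *b = malloc(8);

        memcpy(a, "12345", 5);      /* padding bytes a[5..7] left as garbage */
        memcpy(b, "12345", 5);

        /* may claim the keys differ even though the values are equal */
        printf("%s\n", memcmp(a, b, 8) == 0 ? "equal" : "possibly different");

        /* zeroing the buffers first makes the comparison well defined */
        memset(a, 0, 8); memcpy(a, "12345", 5);
        memset(b, 0, 8); memcpy(b, "12345", 5);
        printf("%s\n", memcmp(a, b, 8) == 0 ? "equal" : "different");

        free(a);
        free(b);
        return 0;
    }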
REINDEX is advised for any btree_gist indexes on bit or bit varying data
type, to fix any garbage padding bytes on disk.
Per Valgrind, reported by Andres Freund. Backpatch to all supported
versions.
Tom Lane [Tue, 13 May 2014 00:21:16 +0000 (20:21 -0400)]
Be more wary in choice of timezone names to test make_timestamptz with.
America/Metlakatla hasn't been in the IANA database all that long, so
some installations might not have it. It does seem worthwhile to test
with a fractional-minute GMT offset, but we can get that from almost
any pre-1900 date; I chose Europe/Paris, whose LMT offset from Greenwich
should be pretty darn well established.
Also, assuming that Mars/Mons_Olympus will never be in the IANA database
seems less than future-proof, so let's use a more fanciful location for
the bad-zone-name check.
The leak is fairly small and rare, but a leak nevertheless.
Per Coverity report. Backpatch to 9.2, where pg_receivexlog was added.
pg_basebackup shares the code, but it always exits on error, so there is
no real leak.
Tom Lane [Sun, 11 May 2014 19:13:30 +0000 (15:13 -0400)]
Find postgresql.auto.conf in PGDATA even when postgresql.conf is elsewhere.
The original coding for ALTER SYSTEM made a fundamentally bogus assumption
that postgresql.auto.conf could be sought relative to the main config file
if we hadn't yet determined the value of data_directory. This fails for
common arrangements with the config file elsewhere, as reported by
Christoph Berg.
The simplest fix is to not try to read postgresql.auto.conf until after
SelectConfigFiles has chosen (and locked down) the data_directory setting.
Because of the logic in ProcessConfigFile for handling resetting of GUCs
that've been removed from the config file, we cannot easily read the main
and auto config files separately; so this patch adopts a brute force
approach of reading the main config file twice during postmaster startup.
That's a tad ugly, but the actual time cost is likely to be negligible,
and there's no time for a more invasive redesign before beta.
With this patch, any attempt to set data_directory via ALTER SYSTEM
will be silently ignored. It would probably be better to throw an
error, but that can be dealt with later. This bug, however, would
prevent any testing of ALTER SYSTEM by a significant fraction of the
userbase, so it seems important to get it fixed before beta.
Tom Lane [Sun, 11 May 2014 16:06:04 +0000 (12:06 -0400)]
Rename jsonb_hash_ops to jsonb_path_ops.
There's no longer much pressure to switch the default GIN opclass for
jsonb, but there was still some unhappiness with the name "jsonb_hash_ops",
since hashing is no longer a distinguishing property of that opclass,
and anyway it seems like a relatively minor detail. At the suggestion of
Heikki Linnakangas, we'll use "jsonb_path_ops" instead; that captures the
important characteristic that each index entry depends on the entire path
from the document root to the indexed value.
Also add a user-facing explanation of the implementation properties of
these two opclasses.
Tom Lane [Sat, 10 May 2014 22:56:52 +0000 (18:56 -0400)]
More work on the JSON/JSONB user documentation.
Document existence operator adequately; fix obsolete claim that no
Unicode-escape semantic checks happen on input (it's still true for
json, but not for jsonb); improve examples; assorted wordsmithing.
When returning rows from a bitmap, as done with partial match queries, we
would get stuck in an infinite loop if the bitmap contained a lossy page
reference.
This bug is new in master; it was introduced by the patch to allow skipping
items refuted by other entries in GIN scans.
Tom Lane [Fri, 9 May 2014 22:24:17 +0000 (18:24 -0400)]
Fix broken allocation logic in recently-rewritten jsonb_util.c.
reserveFromBuffer() failed to consider the possibility that it needs to
more-than-double the current buffer size. Beyond that, it seems likely
that we'd someday need to worry about integer overflow of the buffer
length variable. Rather than reinvent the logic that's already been
debugged in stringinfo.c, let's go back to using that logic. We can
still have the same targeted API, but we'll rely on stringinfo.c to
manage reallocation.
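The general enlargement pattern being reused here, as a hedged standalone
sketch (the real fix simply calls into stringinfo.c instead of reimplementing
this; names are invented):

    #include <stdlib.h>
    #include <string.h>

    typedef struct Buffer
    {
        char   *data;
        size_t  len;        /* bytes currently used */
        size_t  allocated;  /* bytes currently allocated */
    } Buffer;

    /* Make sure there is room for 'needed' more bytes.  Doubling once is not
     * enough in general: keep doubling until the request fits. */
    static void reserve(Buffer *buf, size_t needed)
    {
        size_t newsize;

        if (buf->len + needed <= buf->allocated)
            return;
        newsize = buf->allocated ? buf->allocated : 64;
        while (newsize < buf->len + needed)
            newsize *= 2;   /* real code must also guard against overflow */
        buf->data = realloc(buf->data, newsize);    /* error check omitted */
        buf->allocated = newsize;
    }

    int main(void)
    {
        Buffer b = { NULL, 0, 0 };

        reserve(&b, 1000);  /* needs more than one doubling from 64 bytes */
        memset(b.data, 'x', 1000);
        b.len = 1000;
        free(b.data);
        return 0;
    }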
Tom Lane [Fri, 9 May 2014 20:33:25 +0000 (16:33 -0400)]
Improve user-facing JSON documentation.
I started out with the intention of just fixing the info about the jsonb
operator classes, but soon found myself copy-editing most of the JSON
material. Hopefully it's more readable now.
Tom Lane [Fri, 9 May 2014 16:55:00 +0000 (12:55 -0400)]
Get rid of bogus dependency on typcategory in to_json() and friends.
These functions were relying on typcategory to identify arrays and
composites, which is not reliable and not the normal way to do it.
Using typcategory to identify boolean, numeric types, and json itself is
also pretty questionable, though the code in those cases didn't seem to be
at risk of anything worse than wrong output. Instead, use the standard
lsyscache functions to identify arrays and composites, and rely on a direct
check of the type OID for the other cases.
In HEAD, also be sure to look through domains so that a domain is treated
the same as its base type for conversions to JSON. However, this is a
small behavioral change; given the lack of field complaints, we won't
back-patch it.
In passing, refactor so that there's only one copy of the code that decides
which conversion strategy to apply, not multiple copies that could (and
have) gotten out of sync.
Robert Haas [Fri, 9 May 2014 14:44:04 +0000 (10:44 -0400)]
Code review for logical decoding patch.
Post-commit review identified a number of places where addition was
used instead of multiplication or memory wasn't zeroed where it should
have been. This commit also fixes one case where a structure member
was mis-initialized, and moves another memory allocation closer to
the place where the allocated storage is used for clarity.
Tom Lane [Fri, 9 May 2014 13:44:11 +0000 (09:44 -0400)]
Teach add_json() that jsonb is of TYPCATEGORY_JSON.
This code really needs to be refactored so that there aren't so many copies
that can diverge. Not to mention that this whole approach is probably
wrong. But for the moment I'll just stick my finger in the dike.
Per report from Michael Paquier.
Fix JSONB_MAX_ELEMS and JSONB_MAX_PAIRS macros to use JB_CMASK in the
calculation. JENTRY_POSMASK happens to have the same value at the moment,
but that's just coincidental.
Refactor jsonb iterator functions, for readability.
Get rid of the JENTRY_ISFIRST flag. Whenever we handle JEntrys, we have
access to the whole array and have enough context information to know
which entry is the first. This frees up one bit in the JEntry header for
future use. While we're at it, shuffle the JEntry bits so that boolean
true and false go together, for aesthetic reasons.
Bump catalog version as this changes the on-disk format slightly.
Tom Lane [Fri, 9 May 2014 12:41:26 +0000 (08:41 -0400)]
Improve key representation for GIN jsonb_ops, and fix existence-search bug.
Change the key representation so that values that would exceed 127 bytes
are hashed into short strings, and so that the original JSON datatype of
each value is recorded in the index. The hashing rule eliminates the major
objection to having this opclass be the default for jsonb, namely that it
could fail for plausible input data (due to GIN's restrictions on maximum
key length). Preserving datatype information doesn't really buy us much
right now, but it requires no extra space compared to the previous way,
and it might be useful later.
Also, change the consistency-checking functions to request recheck for
exists (jsonb ? text) and related operators. The original analysis that
this is an exactly checkable query was incorrect, since the index does
not preserve information about whether a key appears at top level in
the indexed JSON object. Add a test case demonstrating the problem.
Make some other, mostly cosmetic improvements to the code in jsonb_gin.c
as well.
catversion bump due to on-disk data format change in jsonb_ops indexes.
Move the functions around to group related functions together. Remove
binequal argument from lengthCompareJsonbStringValue, moving that
responsibility to lengthCompareJsonbPair. Fix typo in comment.
Tom Lane [Fri, 9 May 2014 02:34:51 +0000 (22:34 -0400)]
Fix missing dependencies in ecpg's test Makefiles.
Ensure that ecpg preprocessor output files are rebuilt when re-testing
after a change in the ecpg preprocessor itself, or a change in any of
several include files that get copied verbatim into the output files.
The lack of these dependencies was what created problems for Kevin Grittner
after the recent pgindent run. There's no way for --enable-depend to
discover these dependencies automatically, so we've gotta put them into
the Makefiles by hand.
While at it, reduce the amount of duplication in the ecpg invocations.
Tom Lane [Fri, 9 May 2014 01:45:02 +0000 (21:45 -0400)]
Document permissions needed for pg_database_size and pg_tablespace_size.
Back in 8.3, we installed permissions checks in these functions (see
commits 8bc225e7990a and cc26599b7206). But we forgot to document that
anywhere in the user-facing docs; it did get mentioned in the 8.3 release
notes, but nobody's looking at that any more. Per gripe from Suya Huang.
Tom Lane [Fri, 9 May 2014 01:11:47 +0000 (21:11 -0400)]
Increase the default value of effective_cache_size to 4GB.
Per discussion, the old value of 128MB is ridiculously small on modern
machines; in fact, it's not even any larger than the default value of
shared_buffers, which it certainly should be. Increase to 4GB, which
is unlikely to be any worse than the old default for anyone, and should
be noticeably better for most. Eventually we might have an autotuning
scheme for this setting, but the recent attempt crashed and burned,
so for now just do this.
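In postgresql.conf terms the new default corresponds to the following
(illustrative only; the value is simply the compiled-in default):

    # previously 128MB, the same as the default shared_buffers
    effective_cache_size = 4GB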
Tom Lane [Fri, 9 May 2014 00:49:38 +0000 (20:49 -0400)]
Revert "Auto-tune effective_cache size to be 4x shared buffers"
This reverts commit ee1e5662d8d8330726eaef7d3110cb7add24d058, as well as
a remarkably large number of followup commits, which were mostly concerned
with the fact that the implementation didn't work terribly well. It still
doesn't: we probably need some rather basic work in the GUC infrastructure
if we want to fully support GUCs whose default varies depending on the
value of another GUC. Meanwhile, it also emerged that there wasn't really
consensus in favor of the definition the patch tried to implement (ie,
effective_cache_size should default to 4 times shared_buffers). So whack
it all back to where it was. In a followup commit, I'll do what was
recently agreed to, which is to simply change the default to a higher
value.
Noah Misch [Thu, 8 May 2014 23:29:02 +0000 (19:29 -0400)]
Un-break ecpg test suite under --disable-integer-datetimes.
Commit 4318daecc959886d001a6e79c6ea853e8b1dfb4b broke it. The change in
sub-second precision at extreme dates is normal. The inconsistent
truncation vs. rounding is essentially a bug, albeit a longstanding one.
Back-patch to 8.4, like the causative commit.
Tom Lane [Thu, 8 May 2014 16:42:56 +0000 (12:42 -0400)]
Fix comment.
Previous commit was confused about the case we're handling: actually,
what the patch is dealing with is platforms that have optreset, *and*
have <getopt.h>, but the latter fails to declare the former. Because
we use a linking probe to set HAVE_INT_OPTRESET, we need to be sure we
have a declaration even if <getopt.h> doesn't think it exists.
Tom Lane [Thu, 8 May 2014 16:33:29 +0000 (12:33 -0400)]
Allow for platforms that have optreset but not <getopt.h>.
Reportedly, some versions of mingw are like that, and it seems plausible
in general that older platforms might be that way. However, we'd
determined experimentally that just doing "extern int" conflicts with
the way Cygwin declares these variables, so explicitly exclude Cygwin.
Michael Paquier, tweaked by me to hopefully not break Cygwin
Protect against torn pages when deleting GIN list pages.
To-be-deleted list pages contain no useful information, as they are being
deleted, but we must still protect the writes from being torn by a crash
after a partial write. To do that, re-initialize the pages on WAL replay.
Jeff Janes caught this with a test program to test partial writes.
Backpatch to all supported versions.
Tom Lane [Thu, 8 May 2014 01:38:36 +0000 (21:38 -0400)]
Avoid buffer bloat in libpq when server is consistently faster than client.
If the server sends a long stream of data, and the server + network are
consistently fast enough to force the recv() loop in pqReadData() to
iterate until libpq's input buffer is full, then upon processing the last
incomplete message in each bufferload we'd usually double the buffer size,
due to supposing that we didn't have enough room in the buffer to finish
collecting that message. After filling the newly-enlarged buffer, the
cycle repeats, eventually resulting in an out-of-memory situation (which
would be reported misleadingly as "lost synchronization with server").
Of course, we should not enlarge the buffer unless we still need room
after discarding already-processed messages.
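A generic sketch of the corrected policy (invented names, not libpq's actual
pqReadData/pqParseInput code): first discard what has already been parsed, and
grow the buffer only if the incomplete message still does not fit.

    #include <stdlib.h>
    #include <string.h>

    typedef struct InBuffer
    {
        char   *data;
        size_t  start;      /* offset of first unprocessed byte */
        size_t  end;        /* offset just past the last byte read */
        size_t  size;       /* allocated size */
    } InBuffer;

    /* Called when the current (incomplete) message, msg_len bytes in total,
     * does not yet fit in the buffer. */
    static void make_room(InBuffer *buf, size_t msg_len)
    {
        /* slide unprocessed data down over already-consumed messages */
        if (buf->start > 0)
        {
            memmove(buf->data, buf->data + buf->start, buf->end - buf->start);
            buf->end -= buf->start;
            buf->start = 0;
        }
        /* the incomplete message now begins at offset 0; enlarge only if it
         * still cannot fit (error handling omitted) */
        while (buf->size < msg_len)
        {
            buf->size *= 2;
            buf->data = realloc(buf->data, buf->size);
        }
    }

    int main(void)
    {
        InBuffer buf = { malloc(16), 0, 0, 16 };

        make_room(&buf, 100);
        free(buf.data);
        return 0;
    }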
This bug dates back quite a long time: pqParseInput3 has had the behavior
since perhaps 2003, getCopyDataMessage at least since commit 70066eb1a1ad
in 2008. Probably the reason it's not been isolated before is that in
common environments the recv() loop would always be faster than the server
(if on the same machine) or faster than the network (if not); or at least
it wouldn't be slower consistently enough to let the buffer ramp up to a
problematic size. The reported cases involve Windows, which perhaps has
different timing behavior than other platforms.
Per bug #7914 from Shin-ichi Morita, though this is different from his
proposed solution. Back-patch to all supported branches.
The main target of this cleanup is the convertJsonb() function, but I also
touched a lot of other things that I spotted in the process.
The new convertToJsonb() function uses an output buffer that's resized on
demand, so the code to estimate the size of the JsonbValue is removed.
The on-disk format was not changed, even though I refactored the structs
used to handle it. The term "superheader" is replaced with "container".
The jsonb_exists_any and jsonb_exists_all functions no longer sort the input
array. That was a premature optimization, the idea being that if there are
duplicates in the input array, you only need to check them once. Also,
sorting the array saves some effort in the binary search used to find a key
within an object. But there were drawbacks too: the sorting and
deduplicating obviously isn't free, and in the typical case there are no
duplicates to remove, and the gain in the binary search was minimal. Remove
all that, which makes the code simpler too.
This includes a bug-fix; the total length of the elements in a jsonb array
or object mustn't exceed 2^28. That is now checked.
Robert Haas [Wed, 7 May 2014 18:54:43 +0000 (14:54 -0400)]
Detach shared memory from bgworkers without shmem access.
Since the postmaster won't perform a crash-and-restart sequence
for background workers which don't request shared memory access,
we'd better make sure that they can't corrupt shared memory.
Tom Lane [Wed, 7 May 2014 18:25:11 +0000 (14:25 -0400)]
Fix failure to set ActiveSnapshot while rewinding a cursor.
ActiveSnapshot needs to be set when we call ExecutorRewind because some
plan node types may execute user-defined functions during their ReScan
calls (nodeLimit.c does so, at least). The wisdom of that is somewhat
debatable, perhaps, but for now the simplest fix is to make sure the
required context is valid. Failure to do this typically led to a
null-pointer-dereference core dump, though it's possible that in more
complex cases a function could be executed with the wrong snapshot
leading to very subtle misbehavior.
Per report from Leif Jensen. It's been broken for a long time, so
back-patch to all active branches.
Robert Haas [Wed, 7 May 2014 17:19:02 +0000 (13:19 -0400)]
Never crash-and-restart for bgworkers without shared memory access.
The motivation for a crash and restart cycle when a backend dies is
that it might have corrupted shared memory on the way down; and we
can't recover reliably except by reinitializing everything. But that
doesn't apply to processes that don't touch shared memory. Currently,
there's nothing to prevent a background worker that doesn't request
shared memory access from touching shared memory anyway, but that's a
separate bug.
Previous to this commit, the coding in postmaster.c was inconsistent:
an exit status other than 0 or 1 didn't provoke a crash-and-restart,
but failure to release the postmaster child slot did. This change
makes those cases consistent.
It was designed to test the longest possible interval output length,
so removing four zeros from the number of hours, as this patch does,
is not ideal. But the test still has some utility for its original
purpose, and there aren't a lot of other good options.
Noah Misch suggested a different approach where we test that the
output either matches what we expect from integer timestamps or what
we expect from floating-point timestamps. That seemed to obscure an
otherwise simple test, however.
Tom Lane [Wed, 7 May 2014 02:49:32 +0000 (22:49 -0400)]
hash_any returns Datum, not uint32 (and definitely not "int").
The coding in JsonbHashScalarValue might have accidentally failed to fail
given current representational choices, but the key word there would be
"accidental". Insert the appropriate datatype conversion macro. And
use the right conversion macro for hash_numeric's result, too.
In passing make the code a bit cleaner and less repetitive by factoring
out the xor step from the switch.
Jeff Davis [Sun, 4 May 2014 20:18:55 +0000 (13:18 -0700)]
Improve comment for tricky aspect of index-only scans.
Index-only scans avoid taking a lock on the VM buffer, which would
cause a lot of contention. To be correct, that requires some intricate
assumptions that weren't completely documented in the previous
comment.
There is currently nothing in the build system that enforces that things
stay valid, because that requires additional tools and will receive
separate consideration.
Simon Riggs [Tue, 6 May 2014 12:44:15 +0000 (13:44 +0100)]
pg_basebackup streaming: adjust version check msg
Commit d298b50a3b469c088bb40a4d36d38111b4cd574d by Heikki Linnakangas
requested that the version check message be updated at the next release, suggesting
that the appropriate text would be “9.3 or later”. The logic used for the check
means that the correct text for 9.4 is “9.3 or 9.4”, since the check would fail
for later releases.
Michael Meskes [Tue, 6 May 2014 11:04:30 +0000 (13:04 +0200)]
Fix handling of array of char pointers in ecpglib.
When an array of char * was used as the target of a FETCH statement returning more
than one row, ecpglib tried to store all the results in the first element. Instead it
should fill the array of char pointers at the right offsets, use the address rather
than the value of the C variable while reading the array, and treat such a variable
as char **, rather than char *, for pointer arithmetic.
Patch by Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>
Tom Lane [Mon, 5 May 2014 18:43:39 +0000 (14:43 -0400)]
Fix possible cache invalidation failure in ReceiveSharedInvalidMessages.
Commit fad153ec45299bd4d4f29dec8d9e04e2f1c08148 modified sinval.c to reduce
the number of calls into sinvaladt.c (which require taking a shared lock)
by keeping a local buffer of collected-but-not-yet-processed messages.
However, if processing of the last message in a batch resulted in a
recursive call to ReceiveSharedInvalidMessages, we could overwrite that
message with a new one while the outer invalidation function was still
working on it. This would be likely to lead to invalidation of the wrong
cache entry, allowing subsequent processing to use stale cache data.
The fix is just to make a local copy of each message while we're processing
it.
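A stripped-down illustration of that fix (hypothetical types, not the actual
sinval.c code): copy each message out of the reusable buffer before handing it
to a function that might recurse and refill that buffer.

    #include <stdio.h>

    typedef struct Message { int id; } Message;

    #define MAXMESSAGES 32
    static Message messagebuf[MAXMESSAGES];  /* refilled by recursive reads */

    /* may recurse into the batch-reading code and overwrite messagebuf */
    static void process_message(const Message *msg)
    {
        printf("processing message %d\n", msg->id);
    }

    static void process_batch(int n)
    {
        int i;

        for (i = 0; i < n; i++)
        {
            Message local = messagebuf[i];  /* copy before processing */

            process_message(&local);
        }
    }

    int main(void)
    {
        messagebuf[0].id = 1;
        messagebuf[1].id = 2;
        process_batch(2);
        return 0;
    }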
Spotted by Andres Freund. Back-patch to 8.4 where the bug was introduced.
Tom Lane [Mon, 5 May 2014 17:37:54 +0000 (13:37 -0400)]
Fix pg_type.typlen for newly-revived line type.
Commit 261c7d4b653bc3e44c31fd456d94f292caa50d8f removed the "m" field
from struct LINE, but neglected to make pg_type.h's idea of the type's
size match. This resulted in reading past the end of palloc'd LINE
values when inserting them into tuples etc. In principle that could
cause a SIGSEGV, though the odds of detectable problems seem low.
Bump catversion since this makes an incompatible on-disk format change.
Note that if the line type had been in use in the field, this would
break pg_upgrade'ability of databases containing line values; but
it seems unlikely that there are any (they'd have had to be compiled
with -DENABLE_LINE_TYPE).
Tom Lane [Mon, 5 May 2014 15:26:41 +0000 (11:26 -0400)]
Fix case of pg_dump -Fc to an unseekable file (such as a pipe).
This was accidentally broken in commits cfa1b4a711/5e8e794e3b.
It saves a line or so to call ftello unconditionally in _CloseArchive,
but we have to expect that it might fail if we're not in hasSeek mode.
Per report from Bernd Helmle.
In passing, improve _getFilePos to print an appropriate message if
ftello fails unexpectedly, rather than just a vague complaint about
"ftell mismatch".