Fix race condition between hot standby and restoring a full-page image.
There was a window in RestoreBackupBlock where a page would be zeroed out,
but not yet locked. If a backend pinned and locked the page in that window,
it saw the zeroed page instead of the old page or new page contents, which
could lead to missing rows in a result set, or errors.
To fix, replace RBM_ZERO with RBM_ZERO_AND_LOCK, which atomically pins,
zeroes, and locks the page, if it's not in the buffer cache already.
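As an illustration, a hedged sketch of the redo-side call under the new
mode (XLogReadBufferExtended is the real redo API; the surrounding
variables are placeholders):

    /* RBM_ZERO_AND_LOCK returns the buffer already exclusively locked,
     * so no backend can see the zeroed-but-unrestored page. */
    Buffer      buffer;

    buffer = XLogReadBufferExtended(rnode, forknum, blkno,
                                    RBM_ZERO_AND_LOCK);
    /* buffer is pinned, zeroed, and locked here; restore the image,
     * then mark dirty and release as usual. */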
In stable branches, the old RBM_ZERO constant is renamed to RBM_DO_NOT_USE,
to avoid breaking any 3rd party extensions that might use RBM_ZERO. More
importantly, this avoids renumbering the other enum values, which would
cause even bigger confusion in extensions that use ReadBufferExtended, but
haven't been recompiled.
Backpatch to all supported versions; this has been racy since hot standby
was introduced.
Tom Lane [Wed, 12 Nov 2014 20:58:37 +0000 (15:58 -0500)]
Explicitly support the case that a plancache's raw_parse_tree is NULL.
This only happens if a client issues a Parse message with an empty query
string, which is a bit odd; but since it is explicitly called out as legal
by our FE/BE protocol spec, we'd probably better continue to allow it.
Fix by adding tests everywhere that the raw_parse_tree field is passed to
functions that don't or shouldn't accept NULL. Also make it clear in the
relevant comments that NULL is an expected case.
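For illustration, a sketch of the caller-side test, assuming a
CachedPlanSource whose raw_parse_tree may be NULL (CreateCommandTag is
real; this exact call site is illustrative):

    if (plansource->raw_parse_tree == NULL)
        commandTag = NULL;      /* empty query: no command tag */
    else
        commandTag = CreateCommandTag(plansource->raw_parse_tree);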
This reverts commits a73c9dbab0165b3395dfe8a44a7dfd16166963c4 and 2e9650cbcff8c8fb0d9ef807c73a44f241822eee, which fixed specific crash
symptoms by hacking things at what now seems to be the wrong end, ie the
callee functions. Making the callees allow NULL is superficially more
robust, but it's not always true that there is a defensible thing for the
callee to do in such cases. The caller has more context and is better
able to decide what the empty-query case ought to do.
Per followup discussion of bug #11335. Back-patch to 9.2. The code
before that is sufficiently different that it would require development
of a separate patch, which doesn't seem worthwhile for what is believed
to be an essentially cosmetic change.
Andres Freund [Wed, 12 Nov 2014 17:52:49 +0000 (18:52 +0100)]
Fix several weaknesses in slot and logical replication on-disk serialization.
Heikki noticed in 544E23C0.8090605@vmware.com that slot.c and
snapbuild.c were missing the FIN_CRC32 call when computing/checking
checksums of on-disk files. That doesn't lower the error detection
capabilities of the checksum, but it is inconsistent with other usages.
In a followup mail Heikki also noticed that, contrary to a comment,
the 'version' and 'length' struct fields of a replication slot's on-disk
data were not covered by the checksum. That's not likely to lead to
actually missed corruption, as those fields are cross-checked with the
expected version and the actual file length. But it's wrong
nonetheless.
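For reference, the checksum pattern the fix enforces, as a sketch
(INIT_CRC32, COMP_CRC32 and FIN_CRC32 are the real pg_crc.h macros; the
buffer and length shown are placeholders):

    pg_crc32    checksum;

    INIT_CRC32(checksum);
    COMP_CRC32(checksum, data, len);  /* now also covers 'version'/'length' */
    FIN_CRC32(checksum);              /* the previously-missing final step */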
As fixing these issues makes existing on-disk files unreadable, bump
the expected versions of the on-disk files for both slots and logical
decoding historic catalog snapshots. This means that loading old
files will fail with
    ERROR: "replication slot file ... has unsupported version 1"
and
    ERROR: "snapbuild state file ... has unsupported version 1 instead of 2"
respectively. Given the low likelihood of anybody already using these
new features in a production setup, that seems acceptable.
Fixing these issues made me notice that there's no regression test
covering the loading of a historic snapshot from disk - so add one.
Backpatch to 9.4 where these features were introduced.
Andres Freund [Wed, 12 Nov 2014 17:52:49 +0000 (18:52 +0100)]
Add interrupt checks to contrib/pg_prewarm.
Previously the extension's pg_prewarm() function didn't check for
interrupts once it started "warming" data. Since individual calls can
take a long while, it's important for them to be interruptible.
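A minimal sketch of the fix, assuming the extension's per-block loop
(CHECK_FOR_INTERRUPTS() and ReadBufferExtended() are the standard backend
APIs; the loop variables are placeholders):

    for (block = first_block; block <= last_block; ++block)
    {
        CHECK_FOR_INTERRUPTS();     /* let cancel/terminate take effect */
        buf = ReadBufferExtended(rel, forkNumber, block, RBM_NORMAL, NULL);
        ReleaseBuffer(buf);
    }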
Noah Misch [Wed, 12 Nov 2014 12:33:17 +0000 (07:33 -0500)]
Use just one database connection in the "tablespace" test.
On Windows, DROP TABLESPACE has a race condition when run concurrently
with other processes having opened files in the tablespace. This led to
a rare failure on buildfarm member frogmouth. Back-patch to 9.4, where
the reconnection was introduced.
Tom Lane [Tue, 11 Nov 2014 22:00:11 +0000 (17:00 -0500)]
Fix dependency searching for case where column is visited before table.
When the recursive search in dependency.c visits a column and then later
visits the whole table containing the column, it needs to propagate the
drop-context flags for the table to the existing target-object entry for
the column. Otherwise we might refuse the DROP (if not CASCADE) on the
incorrect grounds that there was no automatic drop pathway to the column.
Remarkably, this has not been reported before, though it's possible at
least when an extension creates both a datatype and a table using that
datatype.
Rather than just marking the column as allowed to be dropped, it might
seem good to skip the DROP COLUMN step altogether, since the later DROP
of the table will surely get the job done. The problem with that is that
the datatype would then be dropped before the table (since the whole
situation occurred because we visited the datatype, and then recursed to
the dependent column, before visiting the table). That seems pretty risky,
and the case is rare enough that it doesn't seem worth expending a lot of
effort or risk to make the drops happen in a safe order. So we just play
dumb and delete the column separately according to the existing drop
ordering rules.
Per report from Petr Jelinek, though this is different from his proposed
patch.
Back-patch to 9.1, where extensions were introduced. There's currently
no evidence that such cases can arise before 9.1, and in any case we would
also need to back-patch cb5c2ba2d82688d29b5902d86b993a54355cad4d to 9.0
if we wanted to back-patch this.
Fujii Masao [Tue, 11 Nov 2014 12:08:21 +0000 (21:08 +0900)]
Add GUC and storage parameter to set the maximum size of GIN pending list.
Previously the maximum size of the GIN pending list was controlled only by
work_mem. But the reasonable value of work_mem and the reasonable size
of the list are basically not the same, so it was not appropriate to
control both of them with only one GUC, i.e., work_mem. This commit
introduces a new GUC, pending_list_cleanup_size, separate from work_mem,
to allow users to control just the size of the list.
Also this commit adds pending_list_cleanup_size as a new storage parameter
to allow users to specify the size of the list per index. This is useful,
for example, when users want to increase the size of the list only for
a GIN index that is updated heavily, and decrease it otherwise.
I missed an additional colon in the previous patch. Oops. To make that
mistake less likely in the future, add comments as placeholders for unused
inputs and outputs in inline assembly.
Alvaro Herrera [Mon, 10 Nov 2014 21:13:49 +0000 (18:13 -0300)]
BRIN: fix bug in xlog backup block counting
The code that generates the BRIN_XLOG_UPDATE record removes the buffer
reference when the page that's the target for the updated tuple is freshly
initialized. This is a pretty usual optimization, but it was breaking the
case where the revmap buffer, which is referenced in the same WAL
record, gets a backup block: the replay code was using backup
block index 1, which is not valid when the update target buffer gets
pruned; the revmap buffer gets assigned 0 instead. Make sure to use the
correct backup block index for the revmap when replaying.
Robert Haas [Mon, 10 Nov 2014 20:19:56 +0000 (15:19 -0500)]
Fix potential NULL-pointer dereference.
Commit 2781b4bea7db357be59f9a5fd73ca1eb12ff5a79 arranged to defer
the setup of after-trigger-related data structures, but
AfterTriggerPendingOnRel didn't get the memo.
Tom Lane [Mon, 10 Nov 2014 20:21:09 +0000 (15:21 -0500)]
Ensure that RowExprs and whole-row Vars produce the expected column names.
At one time it wasn't terribly important what column names were associated
with the fields of a composite Datum, but since the introduction of
operations like row_to_json(), it's important that looking up the rowtype
ID embedded in the Datum returns the column names that users would expect.
That did not work terribly well before this patch: you could get the column
names of the underlying table, or column aliases from any level of the
query, depending on minor details of the plan tree. You could even get
totally empty field names, which is disastrous for cases like row_to_json().
To fix this for whole-row Vars, look to the RTE referenced by the Var, and
make sure its column aliases are applied to the rowtype associated with
the result Datums. This is a tad scary because we might have to return
a transient RECORD type even though the Var is declared as having some
named rowtype. In principle it should be all right because the record
type will still be physically compatible with the named rowtype; but
I had to weaken one Assert in ExecEvalConvertRowtype, and there might be
third-party code containing similar assumptions.
Similarly, RowExprs have to be willing to override the column names coming
from a named composite result type and produce a RECORD when the column
aliases visible at the site of the RowExpr differ from the underlying
table's column names.
In passing, revert the decision made in commit 398f70ec070fe601 to add
an alias-list argument to ExecTypeFromExprList: better to provide that
functionality in a separate function. This also reverts most of the code
changes in d68581483564ec0f, which we don't need because we're no longer
depending on the tupdesc found in the child plan node's result slot to be
blessed.
Back-patch to 9.4, but not earlier, since this solution changes the results
in some cases that users might not have realized were buggy. We'll apply a
more restricted form of this patch in older branches.
Alvaro Herrera [Mon, 10 Nov 2014 18:56:08 +0000 (15:56 -0300)]
Further code and wording tweaks in BRIN
Besides a couple of typo fixes, per David Rowley, Thom Brown, and Amit
Langote, and mentions of BRIN in the general CREATE INDEX page again per
David, this includes silencing MSVC compiler warnings (thanks Microsoft)
and an additional variable initialization per Coverity scanner.
Alvaro Herrera [Sat, 8 Nov 2014 03:31:03 +0000 (00:31 -0300)]
Fix some coding issues in BRIN
Reported by David Rowley: variadic macros are a problem. Get rid of
them using a trick suggested by Tom Lane: add extra parentheses where
needed. In the future we might decide we don't need the calls at all
and remove them, but it seems appropriate to keep them while this code
is still new.
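The trick, as a hedged sketch (the macro and call shown illustrate the
pattern and are not necessarily the exact committed names):

    /* One ordinary macro argument carries the whole variadic list. */
    #define BRIN_elog(args)  elog args

    /* The caller supplies the extra parentheses: */
    BRIN_elog((DEBUG2, "summarized range starting at block %u", blkno));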
Also from David Rowley: brininsert() was trying to use a variable before
initializing it. Fix by moving the brin_form_tuple call (which
initializes the variable) to within the locked section.
Reported by Peter Eisentraut: can't use "new" as a struct member name,
because C++ compilers will choke on it, as reported by cpluspluscheck.
pg_basebackup: Adjust tests for long file name issues
Work around accidental test failures because the working directory path
is too long by creating a temporary directory in the (hopefully shorter)
system location, symlinking that to the working directory, and creating
the tablespaces using the shorter path.
Robert Haas [Fri, 7 Nov 2014 22:26:02 +0000 (17:26 -0500)]
Introduce custom path and scan providers.
This allows extension modules to define their own methods for
scanning a relation, and get the core code to use them. It's
unclear as yet how much use this capability will find, but we
won't find out if we never commit it.
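A hedged registration sketch (set_rel_pathlist_hook is a real planner
hook; the extension-side names are hypothetical):

    static set_rel_pathlist_hook_type prev_set_rel_pathlist = NULL;

    static void
    my_set_rel_pathlist(PlannerInfo *root, RelOptInfo *rel,
                        Index rti, RangeTblEntry *rte)
    {
        if (prev_set_rel_pathlist)
            prev_set_rel_pathlist(root, rel, rti, rte);
        /* build a CustomPath for rel here and add_path() it */
    }

    void
    _PG_init(void)
    {
        prev_set_rel_pathlist = set_rel_pathlist_hook;
        set_rel_pathlist_hook = my_set_rel_pathlist;
    }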
KaiGai Kohei, reviewed at various times and in various levels
of detail by Shigeru Hanada, Tom Lane, Andres Freund, Álvaro
Herrera, and myself.
Now that the backup blocks are appended to the WAL record in xloginsert.c,
XLogInsert doesn't see them anymore and cannot remove them from the version
reconstructed for xlog_outdesc. This makes running with wal_debug=on more
expensive, as we now make (unnecessary) temporary copies of the backup
blocks, but it doesn't seem worth convoluting the code to keep that
optimization.
Alvaro Herrera [Fri, 7 Nov 2014 20:05:26 +0000 (17:05 -0300)]
Fix serial schedule
Test misc depends on brin, but it was earlier in the serial schedule
file. I didn't notice this because I only run the parallel schedule,
but the buildfarm exposed my folly ...
Alvaro Herrera [Fri, 7 Nov 2014 19:38:14 +0000 (16:38 -0300)]
BRIN: Block Range Indexes
BRIN is a new index access method intended to accelerate scans of very
large tables, without the maintenance overhead of btrees or other
traditional indexes. They work by maintaining "summary" data about
block ranges. Bitmap index scans work by reading each summary tuple and
comparing them with the query quals; all pages in the range are returned
in a lossy TID bitmap if the quals are consistent with the values in the
summary tuple, otherwise not. Normal index scans are not supported
because these indexes do not store TIDs.
As new tuples are added into the index, the summary information is
updated (if the block range in which the tuple is added is already
summarized) or not; in the latter case, a subsequent pass of VACUUM or
the brin_summarize_new_values() function will create the summary
information.
For data types with natural 1-D sort orders, the summary info consists
of the maximum and the minimum values of each indexed column within each
page range. We call this type of operator class "Minmax", and we
supply a bunch of them for most data types with B-tree opclasses.
Since the BRIN code is generalized, other approaches are possible for
things such as arrays, geometric types, ranges, etc; even for things
such as enum types we could do something different than minmax with
better results. In this commit I only include minmax.
Catalog version bumped due to new builtin catalog entries.
There's more that could be done here, but this is a good step forwards.
Loosely based on ideas from Simon Riggs; code mostly by Álvaro Herrera,
with contribution by Heikki Linnakangas.
Patch reviewed by: Amit Kapila, Heikki Linnakangas, Robert Haas.
Testing help from Jeff Janes, Erik Rijkers, Emanuel Calvo.
PS:
The research leading to these results has received funding from the
European Union's Seventh Framework Programme (FP7/2007-2013) under
grant agreement n° 318633.
The code that generated a record to clear the F_TUPLES_DELETED flag hasn't
existed since we got rid of old-style VACUUM FULL. I kept the code that sets
the flag, although it's not used for anything anymore, because it might
still be interesting information for debugging purposes that some tuples
have been deleted from a page.
Likewise, the code to turn the root page from non-leaf to leaf page was
removed when we got rid of old-style VACUUM FULL. Remove the code to replay
that action, too.
Tom Lane [Fri, 7 Nov 2014 01:52:40 +0000 (20:52 -0500)]
Cope with more than 64K phrases in a thesaurus dictionary.
dict_thesaurus stored phrase IDs in uint16 fields, so it would get confused
and even crash if there were more than 64K entries in the configuration
file. It turns out to be basically free to widen the phrase IDs to uint32,
so let's just do so.
This was complained of some time ago by David Boutin (in bug #7793);
he later submitted an informal patch but it was never acted on.
We now have another complaint (bug #11901 from Luc Ouellette) so it's
time to make something happen.
This is basically Boutin's patch, but for future-proofing I also added a
defense against too many words per phrase. Note that we don't need any
explicit defense against overflow of the uint32 counters, since before that
happens we'd hit array allocation sizes that repalloc rejects.
Back-patch to all supported branches because of the crash risk.
Tom Lane [Thu, 6 Nov 2014 16:41:06 +0000 (11:41 -0500)]
Fix normalization of numeric values in JSONB GIN indexes.
The default JSONB GIN opclass (jsonb_ops) converts numeric data values
to strings for storage in the index. It must ensure that numeric values
that would compare equal (such as 12 and 12.00) produce identical strings,
else index searches would have behavior different from regular JSONB
comparisons. Unfortunately the function charged with doing this was
completely wrong: it could reduce distinct numeric values to the same
string, or reduce equivalent numeric values to different strings. The
former type of error would only lead to search inefficiency, but the
latter type of error would cause index entries that should be found by
a search to not be found.
Repairing this bug therefore means that it will be necessary for 9.4 beta
testers to reindex GIN jsonb_ops indexes, if they care about getting
correct results from index searches involving numeric data values within
the comparison JSONB object.
Fujii Masao [Thu, 6 Nov 2014 12:24:40 +0000 (21:24 +0900)]
Prevent the unnecessary creation of .ready file for the timeline history file.
Previously a .ready file was created for the timeline history file at the
end of an archive recovery even when WAL archiving was not enabled.
This creation is unnecessary and causes the .ready file to remain forever.
This commit changes archive recovery so that it creates a .ready file for
the timeline history file only when WAL archiving is enabled.
Move the backup-block logic from XLogInsert to a new file, xloginsert.c.
xlog.c is huge; this makes it a little bit smaller, which is nice. Functions
related to putting together a WAL record are in xloginsert.c, and the
lower-level code for managing WAL buffers and such stays in xlog.c.
Also move the definition of XLogRecord to a separate header file. This
causes churn in the #includes of all the files that write WAL records
and redo routines, but it avoids pulling xlog.h into most places.
Reviewed by Michael Paquier, Alvaro Herrera, Andres Freund and Amit Kapila.
Tom Lane [Thu, 6 Nov 2014 00:35:23 +0000 (19:35 -0500)]
Remove the last vestige of server-side autocommit.
Long ago we briefly had an "autocommit" GUC that turned server-side
autocommit on and off. That behavior was removed in 7.4 after concluding
that it broke far too much client-side logic, and making clients cope with
both behaviors was impractical. But the GUC variable was left behind, so
as not to break any client code that might be trying to read its value.
Enough time has now passed that we should remove the GUC completely.
Whatever vestigial backwards-compatibility benefit it had is outweighed by
the risk of confusion for newbies who assume it ought to do something,
as per a recent complaint from Wolfgang Wilhelm.
In passing, adjust what seemed to me a rather confusing documentation
reference to libpq's autocommit behavior. libpq as such knows nothing
about autocommit, so psql is probably what was meant.
Obviously, this should not be declared separately in every translation
unit. It needs to be PGDLLIMPORT as well, to avoid breaking
third-party code that uses any of the functions that the commit
mentioned above changed to macros.
Tom Lane [Wed, 5 Nov 2014 16:44:06 +0000 (11:44 -0500)]
Make CREATE TYPE print warnings if a datatype's I/O functions are volatile.
This is a followup to commit 43ac12c6e6e397fd9142ed908447eba32d3785b2,
which added regression tests checking that I/O functions of built-in
types are not marked volatile. Complaining in CREATE TYPE should push
developers of add-on types to fix any misdeclared functions in their
types. It's just a warning not an error, to avoid creating upgrade
problems for what might be just cosmetic mis-markings.
Aside from adding the warning code, fix a number of types that were
sloppily created in the regression tests.
Tom Lane [Wed, 5 Nov 2014 16:34:11 +0000 (11:34 -0500)]
Fix volatility markings of some contrib I/O functions.
In general, datatype I/O functions are supposed to be immutable or at
worst stable. Some contrib I/O functions were, through oversight, not
marked with any volatility property at all, which made them VOLATILE.
Since (most of) these functions actually behave immutably, the erroneous
marking isn't terribly harmful; but it can be user-visible in certain
circumstances, as per a recent bug report from Joe Van Dyk in which a
cast to text was disallowed in an expression index definition.
To fix, just adjust the declarations in the extension SQL scripts. If we
were being very fussy about this, we'd bump the extension version numbers,
but that seems like more trouble (for both developers and users) than the
problem is worth.
A fly in the ointment is that chkpass_in actually is volatile, because
of its use of random() to generate a fresh salt when presented with a
not-yet-encrypted password. This is bad because of the general assumption
that I/O functions aren't volatile: the consequence is that records or
arrays containing chkpass elements may have input behavior a bit different
from a bare chkpass column. But there seems no way to fix this without
breaking existing usage patterns for chkpass, and the consequences of the
inconsistency don't seem bad enough to justify that. So for the moment,
just document it in a comment.
Since we're not bumping version numbers, there seems no harm in
back-patching these fixes; at least future installations will get the
functions marked correctly.
Tom Lane [Tue, 4 Nov 2014 18:24:06 +0000 (13:24 -0500)]
Drop no-longer-needed buffers during ALTER DATABASE SET TABLESPACE.
The previous coding assumed that we could just let buffers for the
database's old tablespace age out of the buffer arena naturally.
The folly of that is exposed by bug #11867 from Marc Munro: the user could
later move the database back to its original tablespace, after which any
still-surviving buffers would match lookups again and appear to contain
valid data. But they'd be missing any changes applied while the database
was in the new tablespace.
This has been broken since ALTER SET TABLESPACE was introduced, so
back-patch to all supported branches.
The old algorithm was found not to be the usual CRC-32 algorithm used by
Ethernet et al. We were using a non-reflected lookup table with code meant
for a reflected lookup table. That's a strange combination that AFAICS does
not correspond to any bit-wise CRC calculation, which makes it difficult to
reason about its properties. Although it has worked well in practice, it
seems safer to use a well-known algorithm.
Since we're changing the algorithm anyway, we might as well choose a
different polynomial. The Castagnoli polynomial has better error-correcting
properties than the traditional CRC-32 polynomial, even if we had
implemented it correctly. Another reason for picking that is that some new
CPUs have hardware support for calculating CRC-32C, but not CRC-32, let
alone our strange variant of it. This patch doesn't add any support for such
hardware, but a future patch could now do that.
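For reference, a self-contained sketch of the well-known reflected,
table-driven CRC-32C formulation (Castagnoli polynomial, reflected form
0x82F63B78); this is illustrative code, not the committed implementation:

    #include <stdint.h>
    #include <stddef.h>

    static uint32_t crc_table[256];

    /* Build the reflected lookup table once. */
    static void
    crc32c_init(void)
    {
        for (uint32_t i = 0; i < 256; i++)
        {
            uint32_t c = i;
            for (int j = 0; j < 8; j++)
                c = (c & 1) ? (c >> 1) ^ 0x82F63B78 : (c >> 1);
            crc_table[i] = c;
        }
    }

    /* LSB-first update; start with crc = 0xFFFFFFFF, invert at the end. */
    static uint32_t
    crc32c_update(uint32_t crc, const unsigned char *p, size_t len)
    {
        while (len-- > 0)
            crc = crc_table[(crc ^ *p++) & 0xFF] ^ (crc >> 8);
        return crc;
    }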
The old algorithm is kept around for tsquery and pg_trgm, which use the
values in indexes that need to remain compatible so that pg_upgrade works.
While we're at it, share the old lookup table for CRC-32 calculation
between hstore, ltree and core. They all use the same table, so might as
well.
Noah Misch [Mon, 3 Nov 2014 02:43:25 +0000 (21:43 -0500)]
Make ECPG test programs depend on "ecpg$(X)", not "ecpg".
Cygwin builds require this of dependencies pertaining to pattern rules.
On Cygwin, stat("foo") in the absence of a file with that exact name can
locate foo.exe. While GNU make uses stat() for dependencies of ordinary
rules, it uses readdir() to assess dependencies of pattern rules.
Therefore, a pattern rule dependency should match any underlying file
name exactly. Back-patch to 9.4, where the dependency was introduced.
Robert Haas [Fri, 31 Oct 2014 16:02:40 +0000 (12:02 -0400)]
Support frontend-backend protocol communication using a shm_mq.
A background worker can use pq_redirect_to_shm_mq() to direct protocol
messages that would normally be sent to the frontend to a shm_mq, so that
another process may read them.
The receiving process may use pq_parse_errornotice() to parse an
ErrorResponse or NoticeResponse from the background worker and, if
it wishes, ThrowErrorData() to propagate the error (with or without
further modification).
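A hedged sketch of the worker side (shm_mq_create, shm_mq_attach and
shm_mq_set_sender are the existing shm_mq API; the argument list of
pq_redirect_to_shm_mq shown here is an assumption based on this
description):

    shm_mq     *mq = shm_mq_create(address, segsize);
    shm_mq_handle *mqh = shm_mq_attach(mq, seg, NULL);

    shm_mq_set_sender(mq, MyProc);
    pq_redirect_to_shm_mq(mq, mqh);  /* protocol output now goes to the queue */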
Robert Haas [Thu, 30 Oct 2014 18:55:23 +0000 (14:55 -0400)]
Extend dsm API with a new function dsm_unpin_mapping.
This reassociates a dynamic shared memory handle previously passed to
dsm_pin_mapping with the current resource owner, so that it will be
cleaned up at the end of the current query.
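A sketch of the pairing (both functions take a dsm_segment *):

    dsm_pin_mapping(seg);     /* mapping now survives resource-owner cleanup */
    /* ... later, when query-lifespan cleanup is wanted again ... */
    dsm_unpin_mapping(seg);   /* reassociate with the current resource owner */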
Patch by me. Review of the function name by Andres Freund, Amit
Kapila, Jim Nasby, Petr Jelinek, and Álvaro Herrera.
Tom Lane [Thu, 30 Oct 2014 17:03:22 +0000 (13:03 -0400)]
Test IsInTransactionChain, not IsTransactionBlock, in vac_update_relstats.
As noted by Noah Misch, my initial cut at fixing bug #11638 didn't cover
all cases where ANALYZE might be invoked in an unsafe context. We need to
test the result of IsInTransactionChain not IsTransactionBlock; which is
notationally a pain because IsInTransactionChain requires an isTopLevel
flag, which would have to be passed down through several levels of callers.
I chose to pass in_outer_xact (ie, the result of IsInTransactionChain)
rather than isTopLevel per se, as that seemed marginally more apropos
for the intermediate functions to know about.
Robert Haas [Thu, 30 Oct 2014 15:35:55 +0000 (11:35 -0400)]
"Pin", rather than "keep", dynamic shared memory mappings and segments.
Nobody seemed concerned about this naming when it originally went in,
but there's a pending patch that implements the opposite of
dsm_keep_mapping, and the term "unkeep" was judged unpalatable.
"unpin" has existing precedent in the PostgreSQL code base, and the
English language, so use this terminology instead.
Tom Lane [Wed, 29 Oct 2014 22:12:02 +0000 (18:12 -0400)]
Avoid corrupting tables when ANALYZE inside a transaction is rolled back.
VACUUM and ANALYZE update the target table's pg_class row in-place, that is
nontransactionally. This is OK, more or less, for the statistical columns,
which are mostly nontransactional anyhow. It's not so OK for the DDL hint
flags (relhasindex etc), which might get changed in response to
transactional changes that could still be rolled back. This isn't a
problem for VACUUM, since it can't be run inside a transaction block nor
in parallel with DDL on the table. However, we allow ANALYZE inside a
transaction block, so if the transaction had earlier removed the last
index, rule, or trigger from the table, and then we roll back the
transaction after ANALYZE, the table would be left in a corrupted state
with the hint flags not set though they should be.
To fix, suppress the hint-flag updates if we are InTransactionBlock().
This is safe enough because it's always OK to postpone hint maintenance
some more; the worst-case consequence is a few extra searches of pg_index
et al. There was discussion of instead using a transactional update,
but that would change the behavior in ways that are not all desirable:
in most scenarios we're better off keeping ANALYZE's statistical values
even if the ANALYZE itself rolls back. In any case we probably don't want
to change this behavior in back branches.
Per bug #11638 from Casey Shobe. This has been broken for a good long
time, so back-patch to all supported branches.
Tom Lane and Michael Paquier, initial diagnosis by Andres Freund
Robert Haas [Wed, 29 Oct 2014 16:35:19 +0000 (12:35 -0400)]
Avoid setup work for invalidation messages at start-of-(sub)xact.
Instead of initializing a new TransInvalidationInfo for every
transaction or subtransaction, we can just do it for those
transactions or subtransactions that actually need to queue
invalidation messages. That also avoids needing to free those
entries at the end of a transaction or subtransaction that does
not generate any invalidation messages, which is by far the
common case.
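A generic sketch of the deferral pattern (TransInvalidationInfo is the
real struct; the function shown is illustrative, not the committed code):

    static TransInvalidationInfo *transInvalInfo = NULL;

    static void
    PrepareInvalidationState(void)
    {
        if (transInvalInfo != NULL)
            return;             /* already set up for this transaction */
        transInvalInfo = (TransInvalidationInfo *)
            MemoryContextAllocZero(TopTransactionContext,
                                   sizeof(TransInvalidationInfo));
    }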
Patch by me. Review by Simon Riggs and Andres Freund.
If PQreset() was called repeatedly and the connection could not be
re-established, the error messages from the failed connection attempts
kept accumulating in the error string.
Fixes bug #11455 reported by Caleb Epstein. Backpatch to all supported
versions.
Tom Lane [Tue, 28 Oct 2014 22:36:02 +0000 (18:36 -0400)]
Remove obsolete commentary.
Since we got rid of non-MVCC catalog scans, the fourth reason given for
using a non-transactional update in index_update_stats() is obsolete.
The other three are still good, so we're not going to change the code,
but fix the comment.
Noah Misch [Mon, 27 Oct 2014 23:59:39 +0000 (19:59 -0400)]
MinGW: Include .dll extension in .def file LIBRARY commands.
Newer toolchains append the extension implicitly if missing, but
buildfarm member narwhal (gcc 3.4.2, ld 2.15.91 20040904) does not.
This affects most core libraries having an exports.txt file, namely
libpq and the ECPG support libraries. On Windows Server 2003, Windows
API functions that load and unload DLLs internally will mistakenly
unload a libpq whose DLL header reports "LIBPQ" instead of "LIBPQ.dll".
When, subsequently, control would return to libpq, the backend crashes.
Back-patch to 9.4, like commit 846e91e0223cf9f2821c3ad4dfffffbb929cb027.
Before that commit, we used a different linking technique that yielded
"libpq.dll" in the DLL header.
Commit 53566fc0940cf557416b13252df57350a4511ce4 worked around this by
eliminating a call to a function that loads and unloads DLLs internally.
That commit is no longer necessary for correctness, but the consistency
it brings with the MSVC build remains valuable.
1. The comparison for matching terms used only the CRC to decide if there's
a match. Two different terms with the same CRC gave a match.
2. It assumed that if the second operand has more terms than the first, it's
never a match. That assumption is bogus, because there can be duplicate
terms in either operand.
Rewrite the implementation in a way that doesn't have those bugs.
Tom Lane [Mon, 27 Oct 2014 00:59:21 +0000 (20:59 -0400)]
Avoid unportable strftime() behavior in pg_dump/pg_dumpall.
Commit ad5d46a4494b0b480a3af246bb4227d9bdadca37 thought that we could
get around the known portability issues of strftime's %Z specifier by
using %z instead. However, that idea seems to have been innocent of
any actual research, as it certainly missed the facts that
(1) %z is not portable to pre-C99 systems, and
(2) %z doesn't actually act differently from %Z on Windows anyway.
Per failures on buildfarm member hamerkop.
While at it, centralize the code defining what strftime format we
want to use in pg_dump; three copies of that string seems a bit much.
Tom Lane [Sun, 26 Oct 2014 23:17:55 +0000 (19:17 -0400)]
Fix undersized result buffer in pset_quoted_string().
The malloc request was 1 byte too small for the worst-case output.
This seems relatively unlikely to cause any problems in practice,
as the worst case only occurs if the input string contains no
characters other than single-quote or newline, and even then
malloc alignment padding would probably save the day. But it's
definitely a bug.
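The worst-case arithmetic, as a sketch of what the message implies (each
input byte may expand to two output bytes, '' for a quote or \n for a
newline, plus two enclosing quotes plus the terminating NUL):

    char *result = malloc(strlen(str) * 2 + 3);  /* 2n + 2 was one byte short */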
Tom Lane [Sun, 26 Oct 2014 20:12:22 +0000 (16:12 -0400)]
Improve planning of btree index scans using ScalarArrayOpExpr quals.
Since we taught btree to handle ScalarArrayOpExpr quals natively (commit 9e8da0f75731aaa7605cf4656c21ea09e84d2eb1), the planner has always included
ScalarArrayOpExpr quals in index conditions if possible. However, if the
qual is for a non-first index column, this could result in an inferior plan
because we can no longer take advantage of index ordering (cf. commit 807a40c551dd30c8dd5a0b3bd82f5bbb1e7fd285). It can be better to omit the
ScalarArrayOpExpr qual from the index condition and let it be done as a
filter, so that the output doesn't need to get sorted. Indeed, this is
true for the query introduced as a test case by the latter commit.
To fix, restructure get_index_paths and build_index_paths so that we
consider paths both with and without ScalarArrayOpExpr quals in non-first
index columns. Redesign the API of build_index_paths so that it reports
what it found, saving useless second or third calls.
Report and patch by Andrew Gierth (though rather heavily modified by me).
Back-patch to 9.2 where this code was introduced, since the issue can
result in significant performance regressions compared to plans produced
by 9.1 and earlier.
Peter Eisentraut [Sun, 26 Oct 2014 13:47:01 +0000 (09:47 -0400)]
Fix TAP tests with Perl 5.8
The prove program included in Perl 5.8 does not support the --ext
option, so don't use that and use wildcards on the command line instead.
Note that the tests will still all be skipped, because, for instance,
the version of Test::More is too old, but at least the regular
mechanisms for handling that will apply, instead of failing to call
prove altogether.
Work around Windows locale name with non-ASCII character.
Windows has a locale whose name contains a non-ASCII character:
"Norwegian (Bokmål)" (that's an 'a' with a ring on top). That causes
trouble; when passing it to setlocale(), it's not clear what encoding the
argument should be in. Another problem is that the locale name is stored
in the pg_database catalog table, and the encoding used there depends on what
server encoding happens to be in use when the database is created. For
example, if you issue the CREATE DATABASE when connected to a UTF-8
database, the locale name is stored in pg_database in UTF-8. As long as all
locale names are pure ASCII, that's not a problem.
To work around that, map the troublesome locale name to a pure-ASCII alias
of the same locale, "norwegian-bokmal".
Now, this doesn't change the existing values that are already in
pg_database and in postgresql.conf. Old clusters will need to be fixed
manually. Instructions for that need to be put in the release notes.
This fixes bug #11431 reported by Alon Siman-Tov. Backpatch to 9.2;
backpatching further would require more work than seems worth it.
Alvaro Herrera [Fri, 24 Oct 2014 10:14:09 +0000 (07:14 -0300)]
psql: complain if pg_dump custom-format is detected
Apparently, this is a very common mistake for users to make; it is
better to have it fail reasonably rather than throw a potentially large
number of errors. Since we have a magic string at the start of the
file, we can detect the case easily, and there's no other possible useful
behavior anyway.
Tom Lane [Thu, 23 Oct 2014 19:59:40 +0000 (15:59 -0400)]
In type_sanity, check I/O functions of built-in types are not volatile.
We have a project policy that I/O functions must not be volatile, as per
commit aab353a60b95aadc00f81da0c6d99bde696c4b75, but we weren't doing
anything to enforce that. In most usage the marking of the function
doesn't matter as long as its behavior is sane --- but I/O casts can
expose the marking as user-visible behavior, as per today's complaint
from Joe Van Dyk about contrib/ltree.
This test as such will only protect us against future errors in built-in
data types. To catch the same error in contrib or third-party types,
perhaps we should make CREATE TYPE complain? But that's a separate
issue from enforcing the policy for built-in types.
Tom Lane [Thu, 23 Oct 2014 17:11:28 +0000 (13:11 -0400)]
Improve ispell dictionary's defenses against bad affix files.
Don't crash if an ispell dictionary definition contains flags but not
any compound affixes. (This isn't a security issue since only superusers
can install affix files, but still it's a bad thing.)
Also, be more careful about detecting whether an affix-file FLAG command
is old-format (ispell) or new-format (myspell/hunspell). And change the
error message about mixed old-format and new-format commands into something
intelligible.
Per bug #11770 from Emre Hasegeli. Back-patch to all supported branches.
Robert Haas [Thu, 23 Oct 2014 16:33:02 +0000 (12:33 -0400)]
Perform less setup work for AFTER triggers at transaction start.
Testing reveals that the memory allocation we do at transaction start
has small but measurable overhead on simple transactions. To cut
down on that overhead, defer some of that work to the point when
AFTER triggers are first used, thus avoiding it altogether if they
never are.
Fujii Masao [Thu, 23 Oct 2014 13:33:56 +0000 (22:33 +0900)]
Remove the unused argument of PSQLexec().
This commit simply removes the second argument of PSQLexec that was
set to the same value everywhere. Comments and code blocks related
to this parameter are removed.
Noticed by Heikki Linnakangas, reviewed by Michael Paquier
Robert Haas [Thu, 23 Oct 2014 12:18:45 +0000 (08:18 -0400)]
Add a function to get the authenticated user ID.
Previously, this was not exposed outside of miscinit.c. It is needed
for the pending pg_background patch, and will also be needed for
parallelism. Without it, there's no way for a background worker to
re-create the exact authentication environment that was present in the
process that started it, which could lead to security exposures.
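A hedged usage sketch (GetAuthenticatedUserId() takes no arguments and
returns an Oid; the surrounding context is hypothetical):

    /* The authenticated identity established at connection time;
     * infrastructure code can capture this and hand it to a worker. */
    Oid     auth_uid = GetAuthenticatedUserId();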
Fujii Masao [Thu, 23 Oct 2014 07:21:27 +0000 (16:21 +0900)]
Prevent the already-archived WAL file from being archived again.
Previously the archive recovery always created a .ready file for
the last WAL file of the old timeline at the end of recovery, even when
it had been restored from the archive and had a .done file. That is, there
were cases where a WAL file had both .ready and .done files.
This caused the already-archived WAL file to be archived again.
This commit prevents the archive recovery from creating a .ready file
for the last WAL file if it has a .done file, in order to prevent it from
being archived again.
This bug was added when cascading replication feature was introduced,
i.e., the commit 5286105800c7d5902f98f32e11b209c471c0c69c.
So, back-patch to 9.2, where cascading replication was added.
Peter Eisentraut [Thu, 23 Oct 2014 01:41:43 +0000 (21:41 -0400)]
Minimize calls of pg_class_aclcheck to the minimum necessary
In a couple of code paths, pg_class_aclcheck is called in succession
with multiple different modes set. This patch combines those modes so that
a single call of the function suffices, reducing the overhead of
permission checking a bit.
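A sketch of the consolidation (pg_class_aclcheck, GetUserId and the ACL_*
masks are real; the specific call site is illustrative):

    /* Two successive checks become one call with OR'd modes. */
    if (pg_class_aclcheck(relid, GetUserId(),
                          ACL_USAGE | ACL_UPDATE) != ACLCHECK_OK)
        ereport(ERROR,
                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                 errmsg("permission denied")));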
Author: Michael Paquier <michael@otacoo.com>
Reviewed-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Tom Lane [Wed, 22 Oct 2014 22:41:44 +0000 (18:41 -0400)]
Ensure libpq reports a suitable error message on unexpected socket EOF.
The EOF-detection logic in pqReadData was a bit confused about who should
set up the error message in case the kernel gives us read-ready-but-no-data
rather than ECONNRESET or some other explicit error condition. Since the
whole point of this situation is that the lower-level functions don't know
there's anything wrong, pqReadData itself must set up the message. But
keep the assumption that if an errno was reported, a message was set up at
lower levels.
Per bug #11712 from Marko Tiikkaja. It's been like this for a very long
time, so back-patch to all supported branches.