The parsing of WAL filenames of segments larger than 255 was broken,
making pg_receivexlog unable to restart streaming after stopping it.
The bug was introduced by the changes in 9.3 to represent WAL segment number
as a 64-bit integer instead of two ints, log and seg. To fix, replace the
plain sscanf call with the XLogFromFileName macro, which does the conversion
from log+seg to a 64-bit integer correctly.
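A minimal sketch of the corrected decoding (the file name here is made up):

    TimeLineID  tli;
    XLogSegNo   segno;

    /* "%08X%08X%08X" = timeline, log, seg; the macro folds log and seg
     * into one 64-bit segment number, so values past 0xFF survive */
    XLogFromFileName("000000010000000100000020", &tli, &segno);
    /* tli = 1, segno = 1 * XLogSegmentsPerXLogId + 0x20 */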
Tom Lane [Sun, 3 Nov 2013 16:33:09 +0000 (11:33 -0500)]
Prevent memory leaks from accumulating across printtup() calls.
Historically, printtup() has assumed that it could prevent memory leakage
by pfree'ing the string result of each output function and manually
managing detoasting of toasted values. This amounts to assuming that
datatype output functions never leak any memory internally; an assumption
we've already decided to be bogus elsewhere, for example in COPY OUT.
range_out in particular is known to leak multiple kilobytes per call, as
noted in bug #8573 from Godfried Vanluffelen. While we could go in and fix
that leak, it wouldn't be very notationally convenient, and in any case
there have been, and undoubtedly will again be, leaks in other output
functions. So what seems like the best solution is to run the output
functions in a temporary memory context that can be reset after each row,
as we're doing in COPY OUT. Some quick experimentation suggests this is
actually a tad faster than the retail pfree's anyway.
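A rough sketch of the pattern, assuming a per-receiver temporary context
(names approximate):

    MemoryContext oldcontext;

    /* at startup: a context that is reset after every row */
    myState->tmpcontext = AllocSetContextCreate(CurrentMemoryContext,
                                                "printtup",
                                                ALLOCSET_DEFAULT_MINSIZE,
                                                ALLOCSET_DEFAULT_INITSIZE,
                                                ALLOCSET_DEFAULT_MAXSIZE);

    /* per row: run the output functions in the temporary context */
    oldcontext = MemoryContextSwitchTo(myState->tmpcontext);
    /* ... call type output functions and emit the row ... */
    MemoryContextSwitchTo(oldcontext);
    MemoryContextReset(myState->tmpcontext);   /* drop any leaks at once */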
This patch fixes all the variants of printtup, except for debugtup()
which is used in standalone mode. It doesn't seem worth worrying
about query-lifespan leaks in standalone mode, and fixing that case
would be a bit tedious since debugtup() doesn't currently have any
startup or shutdown functions.
While at it, remove manual detoast management from several other
output-function call sites that had copied it from printtup(). This
doesn't make a lot of difference right now, but in view of recent
discussions about supporting "non-flattened" Datums, we're going to
want that code gone eventually anyway.
Back-patch to 9.2 where range_out was introduced. We might eventually
decide to back-patch this further, but in the absence of known major
leaks in older output functions, I'll refrain for now.
Tom Lane [Sat, 2 Nov 2013 20:45:42 +0000 (16:45 -0400)]
Retry after buffer locking failure during SPGiST index creation.
The original coding thought this case was impossible, but it can happen
if the bgwriter or checkpointer processes decide to write out an index
page while creation is still proceeding, leading to a bogus "unexpected
spgdoinsert() failure" error. Problem reported by Jonathan S. Katz.
Tom Lane [Fri, 1 Nov 2013 20:09:52 +0000 (16:09 -0400)]
Ensure all files created for a single BufFile have the same resource owner.
Callers expect that they only have to set the right resource owner when
creating a BufFile, not during subsequent operations on it. While we could
insist this be fixed at the caller level, it seems more sensible for the
BufFile to take care of it. Without this, some temp files belonging to
a BufFile can go away too soon, eg at the end of a subtransaction,
leading to errors or crashes.
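In sketch form, the fix when a BufFile grows a new segment file (details
approximate):

    /* create every segment under the owner recorded when the BufFile
     * was made, not whatever CurrentResourceOwner happens to be now */
    ResourceOwner oldowner = CurrentResourceOwner;

    CurrentResourceOwner = file->resowner;
    pfile = OpenTemporaryFile(file->isInterXact);
    CurrentResourceOwner = oldowner;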
Reported and fixed by Andres Freund. Back-patch to all active branches.
Tom Lane [Fri, 1 Nov 2013 16:13:23 +0000 (12:13 -0400)]
Fix some odd behaviors when using a SQL-style simple GMT offset timezone.
Formerly, when using a SQL-spec timezone setting with a fixed GMT offset
(called a "brute force" timezone in the code), the session_timezone
variable was not updated to match the nominal timezone; rather, all code
was expected to ignore session_timezone if HasCTZSet was true. This is
obviously fragile, though a search of the code finds only
timeofday() failing to honor the rule. A bigger problem was that
DetermineTimeZoneOffset() supposed that if its pg_tz parameter was
pointer-equal to session_timezone, then HasCTZSet should override the
parameter. This would cause datetime input containing an explicit zone
name to be treated as referencing the brute-force zone instead, if the
zone name happened to match the session timezone that had prevailed
before installing the brute-force zone setting (as reported in bug #8572).
The same malady could affect AT TIME ZONE operators.
To fix, set up session_timezone so that it matches the brute-force zone
specification, which we can do using the POSIX timezone definition syntax
"<abbrev>offset", and get rid of the bogus lookaside check in
DetermineTimeZoneOffset(). Aside from fixing the erroneous behavior in
datetime parsing and AT TIME ZONE, this will cause the timeofday() function
to print its result in the user-requested time zone rather than some
previously-set zone. It might also affect results in third-party
extensions, if there are any that make use of session_timezone without
considering HasCTZSet, but in all cases the new behavior should be saner
than before.
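For example, a fixed UTC+05:30 session zone could be installed with a POSIX
spec along these lines (a hypothetical sketch, not the exact code; POSIX
counts offsets positive west of Greenwich, hence the sign flip):

    char        tzspec[64];

    /* "<abbrev>offset": abbreviation "+05:30", POSIX offset -05:30 */
    snprintf(tzspec, sizeof(tzspec), "<%s>%s", "+05:30", "-05:30");
    session_timezone = pg_tzset(tzspec);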
Tom Lane [Tue, 29 Oct 2013 00:49:28 +0000 (20:49 -0400)]
Prevent using strncpy with src == dest in TupleDescInitEntry.
The C and POSIX standards state that strncpy's behavior is undefined when
source and destination areas overlap. While it remains dubious whether any
implementations really misbehave when the pointers are exactly equal, some
platforms are now starting to force the issue by complaining when an
undefined call occurs. (In particular OS X 10.9 has been seen to dump core
here, though the exact set of circumstances needed to trigger that remains
elusive. Similar behavior can be expected, at least optionally, on Linux and
other platforms in the near future.) So tweak the code to explicitly do
nothing when nothing need be done.
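The gist of the tweak, sketched against TupleDescInitEntry (details
approximate):

    /* strncpy with src == dest is undefined; do nothing when
     * nothing need be done */
    if (attributeName != NameStr(att->attname))
        namestrcpy(&(att->attname), attributeName);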
Back-patch to all active branches. In HEAD, this also lets us get rid of
an exception in valgrind.supp.
Andrew Dunstan [Mon, 28 Oct 2013 15:45:50 +0000 (11:45 -0400)]
Work around NetBSD shell issue in pg_upgrade test script.
The NetBSD shell apparently returns non-zero from an unset command if
the variable is already unset. This matters when, as in pg_upgrade's
test.sh, we are working under 'set -e'. To protect against this, we
first set the PG variables to an empty string before unsetting them
completely.
Error found on buildfarm member coypu, solution from Rémi Zara.
Tom Lane [Mon, 28 Oct 2013 14:28:35 +0000 (10:28 -0400)]
Improve documentation about usage of FDW validator functions.
SGML documentation, as well as code comments, failed to note that an FDW's
validator will be applied to the options of foreign tables that use the FDW.
The absolute path to the config file was not pfreed. There are probably more
small leaks here and there in the config file reload code and assign hooks,
and in practice no one reloads the config files frequently enough for it to
be a problem, but this one is trivial enough that we might as well fix it.
Fix two bugs in setting the vm bit of empty pages.
Use a critical section when setting the all-visible flag on an empty page,
and WAL-logging it. log_newpage_buffer() contains an assertion that it
must be called inside a critical section, and it's the right thing to do
when modifying a buffer anyway.
Also, the page should be marked dirty before calling log_newpage_buffer(),
per the comment in log_newpage_buffer() and src/backend/access/transam/README.
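Sketched, the corrected sequence looks roughly like this (the 9.3-era
log_newpage_buffer() takes just the buffer):

    START_CRIT_SECTION();
    PageSetAllVisible(page);
    MarkBufferDirty(buffer);        /* dirty the buffer before WAL-logging */
    log_newpage_buffer(buffer);     /* asserts it runs in a critical section */
    END_CRIT_SECTION();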
Patch by Andres Freund, in response to my report. Backpatch to 9.2, like
the patch that introduced these bugs (a6370fd9).
1. In heap_hot_search_buffer(), the PredicateLockTuple() call is passed the
wrong offset number. heapTuple->t_self is set to the tid of the first
tuple in the chain that's visited, not the one actually being read (a fix
is sketched after this list).
2. CheckForSerializableConflictIn() uses the tuple's t_ctid field
instead of t_self to check for existing predicate locks on the tuple. If
the tuple was updated, but the updater rolled back, t_ctid points to the
aborted dead tuple.
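For the first bug, the idea is to point t_self at the chain member actually
being examined before taking the lock (a sketch; variable names assumed from
heap_hot_search_buffer):

    /* lock the tuple being read, not the head of the HOT chain */
    ItemPointerSet(&heapTuple->t_self, BufferGetBlockNumber(buffer), offnum);
    PredicateLockTuple(relation, heapTuple, snapshot);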
Kevin Grittner [Mon, 7 Oct 2013 19:26:54 +0000 (14:26 -0500)]
Eliminate xmin from hash tag for predicate locks on heap tuples.
If a tuple was frozen while its predicate locks mattered,
read-write dependencies could be missed, resulting in failure to
detect conflicts which could lead to anomalies in committed
serializable transactions.
This field was added to the tag when we still thought that it was
necessary to carry locks forward to a new version of an updated
row. That was later proven to be unnecessary, which allowed
simplification of the code, but elimination of xmin from the tag
was missed at the time.
Per report and analysis by Heikki Linnakangas.
Backpatch to 9.1.
Alvaro Herrera [Fri, 4 Oct 2013 13:32:48 +0000 (10:32 -0300)]
isolationtester: Allow tuples to be returned in more places
Previously, isolationtester would forbid returning tuples in
session-specific teardown (but not global teardown), as well as in
global setup. Allow these places to return tuples, too.
Alvaro Herrera [Tue, 1 Oct 2013 20:36:15 +0000 (17:36 -0300)]
Remove broken PGXS code for pg_xlogdump
With the PGXS boilerplate in place, pg_xlogdump currently fails with an
ominous error message that certain targets cannot be built because
certain files do not exist. Remove that and instead throw a quick error
message alerting the user to the actual problem, which should be easier
to diagnose than the status quo.
Fix snapshot leak if lo_open called on non-existent object.
lo_open registers the currently active snapshot, and checks if the
large object exists after that. Normally, snapshots registered by lo_open
are unregistered at end of transaction when the lo descriptor is closed, but
if we error out before the lo descriptor is added to the list of open
descriptors, it is leaked. Fix by moving the snapshot registration to after
checking if the large object exists.
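In sketch form, the reordering in inv_open() (names approximate):

    snapshot = GetActiveSnapshot();

    if (!myLargeObjectExists(lobjId, snapshot))
        ereport(ERROR,
                (errcode(ERRCODE_UNDEFINED_OBJECT),
                 errmsg("large object %u does not exist", lobjId)));

    /* only now pin the snapshot, so the error above cannot leak it */
    snapshot = RegisterSnapshot(snapshot);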
Reported by Pavel Stehule. Backpatch to 8.4. The snapshot registration
system was introduced in 8.4, so prior versions are not affected (and not
supported, anyway).
Andrew Dunstan [Sun, 29 Sep 2013 21:41:56 +0000 (17:41 -0400)]
Use a new hstore extension version for added json functions.
This should have been done when the json functionality was added to
hstore in 9.3.0. To handle this correctly, the upgrade script uses
conditional logic, via a plpgsql DO statement, to add the two
new functions and the new cast. If hstore_to_json_loose is detected as
already present and dependent on the hstore extension, nothing is done.
This will require that the database be loaded with plpgsql.
People who have installed the earlier and spurious 1.1 version of hstore
will need to do:

    ALTER EXTENSION hstore UPDATE;
Andrew Dunstan [Sun, 29 Sep 2013 21:28:16 +0000 (17:28 -0400)]
Backpatch pgxs vpath build and installation fixes.
This is a backpatch of commits d942f9d9, 82b01026, and 6697aa2bc, back
to release 9.1 where we introduced extensions which make heavy use of
the PGXS infrastructure.
Fix spurious warning after vacuuming a page on a table with no indexes.
There is a rare race condition, when a transaction that inserted a tuple
aborts while vacuum is processing the page containing the inserted tuple.
Vacuum prunes the page first, which normally removes any dead tuples, but
if the inserting transaction aborts right after that, the loop after
pruning will see a dead tuple and remove it instead. That's OK, but if the
page is on a table with no indexes, and the page becomes completely empty
after removing the dead tuple (or tuples) on it, it will be immediately
marked as all-visible. Again that's OK, but the sanity check in vacuum would
throw a warning because it thinks that the page contains dead tuples and
was nevertheless marked as all-visible, even though it just vacuumed away
the dead tuples and so it doesn't actually contain any.
Spotted this while reading the code. It's difficult to hit the race
condition otherwise, but can be done by putting a breakpoint after the
heap_page_prune() call.
Backpatch all the way to 8.4, where this code first appeared.
B-tree operators are not allowed to leak memory into the current memory
context. range_cmp() leaked detoasted copies of its arguments, which caused
a quick out-of-memory error when creating an index on a range column.
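The usual idiom, sketched for range_cmp() (the comparison logic itself is
elided):

    Datum
    range_cmp(PG_FUNCTION_ARGS)
    {
        RangeType  *r1 = PG_GETARG_RANGE(0);
        RangeType  *r2 = PG_GETARG_RANGE(1);
        int         cmp = 0;        /* ... real comparison elided ... */

        /* free detoasted copies instead of leaking them */
        PG_FREE_IF_COPY(r1, 0);
        PG_FREE_IF_COPY(r2, 1);
        PG_RETURN_INT32(cmp);
    }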
Use @libdir@ in both of regress/{input,output}/security_label.source
Though @libdir@ almost always matches @abs_builddir@ in this context,
the test could only fail if they differed. Back-patch to 9.1, where the
test was introduced.
Stephen Frost [Mon, 23 Sep 2013 12:33:41 +0000 (08:33 -0400)]
Fix SSL deadlock risk in libpq
In libpq, we set up and pass to OpenSSL callback routines to handle
locking. When we run out of SSL connections, we try to clean things
up by de-registering the hooks. Unfortunately, we had a few calls
into the OpenSSL library after these hooks were de-registered during
SSL cleanup which led to deadlocking. This moves the thread callback
cleanup to be after all SSL-cleanup related OpenSSL library calls.
I've been unable to reproduce the deadlock with this fix.
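The ordering, sketched with the relevant OpenSSL calls (details
approximate):

    /* last OpenSSL calls that may take locks ... */
    SSL_shutdown(conn->ssl);
    SSL_free(conn->ssl);
    conn->ssl = NULL;

    /* ... and only then drop our locking hooks */
    CRYPTO_set_locking_callback(NULL);
    CRYPTO_set_id_callback(NULL);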
In passing, also move the close_SSL call to be after unlocking our
ssl_config mutex when in a failure state. While it looks pretty
unlikely to be an issue, it could have resulted in deadlocks if we
ended up in this code path due to something other than SSL_new
failing. Thanks to Heikki for pointing this out.
Back-patch to all supported versions; note that the close_SSL issue
only goes back to 9.0, so that hunk isn't included in the 8.4 patch.
Initially found and reported by Vesa-Matti J Kari; many thanks to
both Heikki and Andres for their help running down the specific
issue and reviewing the patch.
When a timeline history file is fetched from the server, it is initially
created with a temporary file name, and then renamed into place. However,
the temporary file name was constructed using an uninitialized buffer.
Usually that meant that the file was created in the current directory
instead of the target, which usually goes unnoticed, but if the target is
on a different filesystem than the current dir, the rename() would fail.
Fix that.
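Sketched, the fix is to build the temporary name into an initialized buffer
rooted at the target directory (variable names assumed):

    char        tmppath[MAXPGPATH];

    /* build the temp name in the target dir, then rename into place */
    snprintf(tmppath, sizeof(tmppath), "%s/%s.tmp", basedir, histfname);
    /* ... write the history file to tmppath ... */
    if (rename(tmppath, path) < 0)
        fprintf(stderr, "could not rename \"%s\" to \"%s\": %s\n",
                tmppath, path, strerror(errno));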
The second issue is that pg_receivexlog would not take .partial files into
account when scanning the target directory for existing WAL files to
determine the starting point. If the timeline has switched in the server
several times in the last WAL segment, and pg_receivexlog is restarted, it
would choose too old a starting point. That's not a problem as long as the
old WAL segment
exists in the server and can be streamed over, but will cause a failure if
it's not.
Backpatch to 9.3, where this timeline handling code was written.
Analysed by Andrew Gierth, bug #8453, based on a bug report on IRC.
It seems to make more sense to use "cutoff multixact" terminology
throughout the backend code; "freeze" is associated with replacing an
Xid with FrozenTransactionId, which is not what we do for MultiXactIds.
Once the administrator has called for an immediate shutdown or a backend
crash has triggered a reinitialization, no mere SIGINT or SIGTERM should
change that course. Such derailment remains possible when the signal
arrives before quickdie() blocks signals. That being a narrow race
affecting most PostgreSQL signal handlers in some way, leave it for
another patch. Back-patch this to all supported versions.
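The heart of it, sketched from quickdie() (details approximate):

    static void
    quickdie(SIGNAL_ARGS)
    {
        /* block everything, including SIGQUIT itself, so a stray
         * SIGINT/SIGTERM cannot derail the immediate shutdown */
        sigaddset(&BlockSig, SIGQUIT);
        PG_SETMASK(&BlockSig);

        /* ... emit the crash message and _exit(2) ... */
    }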
Tom Lane [Tue, 3 Sep 2013 22:32:23 +0000 (18:32 -0400)]
Don't fail for bad GUCs in CREATE FUNCTION with check_function_bodies off.
The previous coding attempted to activate all the GUC settings specified
in SET clauses, so that the function validator could operate in the GUC
environment expected by the function body. However, this is problematic
when restoring a dump, since the SET clauses might refer to database
objects that don't exist yet. We already have the parameter
check_function_bodies that's meant to prevent forward references in
function definitions from breaking dumps, so let's change CREATE FUNCTION
to not install the SET values if check_function_bodies is off.
Authors of function validators were already advised not to make any
"context sensitive" checks when check_function_bodies is off, if indeed
they're checking anything at all in that mode. But extend the
documentation to point out the GUC issue in particular.
(Note that we still check the SET clauses to some extent; the behavior
with !check_function_bodies is now approximately equivalent to what ALTER
DATABASE/ROLE have been doing for awhile with context-dependent GUCs.)
This problem can be demonstrated in all active branches, so back-patch
all the way.
Tom Lane [Sun, 1 Sep 2013 23:43:02 +0000 (19:43 -0400)]
Update "Using EXPLAIN" documentation examples using current code.
It seems like a good idea to update these examples since some fairly
basic planner behaviors have changed in 9.3; notably that the startup cost
for an indexscan plan node is no longer invariably estimated at 0.00.
Tom Lane [Fri, 30 Aug 2013 23:15:21 +0000 (19:15 -0400)]
Reset the binary heap in MergeAppend rescans.
Failing to do so can cause queries to return wrong data, error out or crash.
This requires adding a new binaryheap_reset() method to binaryheap.c,
but that probably should have been there anyway.
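The new method amounts to emptying the heap in place, roughly:

    /* sketch of binaryheap_reset(): forget all nodes, keep the allocation */
    void
    binaryheap_reset(binaryheap *heap)
    {
        heap->bh_size = 0;
        heap->bh_has_heap_property = true;
    }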
Per bug #8410 from Terje Elde. Diagnosis and patch by Andres Freund.
Andrew Dunstan [Mon, 26 Aug 2013 18:58:14 +0000 (14:58 -0400)]
Unconditionally use the WSA equivalents of Socket error constants.
This change will only apply to mingw compilers, and has been found
necessary by late versions of the mingw-w64 compiler. It's the same as
what is done elsewhere for the Microsoft compilers.
Tom Lane [Sat, 24 Aug 2013 19:14:21 +0000 (15:14 -0400)]
Account better for planning cost when choosing whether to use custom plans.
The previous coding in plancache.c essentially used 10% of the estimated
runtime as its cost estimate for planning. This can be pretty bogus,
especially when the estimated runtime is very small, such as in a simple
expression plan created by plpgsql, or a simple INSERT ... VALUES.
While we don't have a really good handle on how planning time compares
to runtime, it seems reasonable to use an estimate based on the number of
relations referenced in the query, with a rather large multiplier. This
patch uses 1000 * cpu_operator_cost * (nrelations + 1), so that even a
trivial query will be charged 1000 * cpu_operator_cost for planning.
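In sketch form (variable names approximate):

    /* charge planning cost in proportion to the number of relations */
    Cost        planning_cost = 1000.0 * cpu_operator_cost * (nrelations + 1);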
This should address the problem reported by Marc Cousin and others that
9.2 and up prefer custom plans in cases where the planning time greatly
exceeds what can be saved.
Magnus Hagander [Sat, 24 Aug 2013 15:11:31 +0000 (17:11 +0200)]
Don't crash when pg_xlog is empty and pg_basebackup -x is used
The backup will not work in this case without a log archive (and avoiding
the need for one is the whole point of -x), so this patch just changes
pg_basebackup to throw an error instead of crashing when this happens.
Tom Lane [Fri, 23 Aug 2013 21:30:56 +0000 (17:30 -0400)]
In locate_grouping_columns(), don't expect an exact match of Var typmods.
It's possible that inlining of SQL functions (or perhaps other changes?)
has exposed typmod information not known at parse time. In such cases,
Vars generated by query_planner might have valid typmod values while the
original grouping columns only have typmod -1. This isn't a semantic
problem since the behavior of grouping only depends on type not typmod,
but it breaks locate_grouping_columns' use of tlist_member to locate the
matching entry in query_planner's result tlist.
We can fix this without an excessive amount of new code or complexity by
relying on the fact that locate_grouping_columns only gets called when
make_subplanTargetList has set need_tlist_eval == false, and that can only
happen if all the grouping columns are simple Vars. Therefore we only need
to search the sub_tlist for a matching Var, and we can reasonably define a
"match" as being a match of the Var identity fields
varno/varattno/varlevelsup. The code still Asserts that vartype matches,
but ignores vartypmod.
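The match test, sketched (variable names hypothetical):

    /* a grouping column matches a sub_tlist Var on identity alone */
    if (IsA(subvar, Var) &&
        subvar->varno == groupvar->varno &&
        subvar->varattno == groupvar->varattno &&
        subvar->varlevelsup == groupvar->varlevelsup)
    {
        Assert(subvar->vartype == groupvar->vartype);   /* typmod ignored */
        /* found the matching tlist entry */
    }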
Per bug #8393 from Evan Martin. The added regression test case is
basically the same as his example. This has been broken for a very long
time, so back-patch to all supported branches.
Tom Lane [Wed, 21 Aug 2013 17:38:20 +0000 (13:38 -0400)]
Fix hash table size estimation error in choose_hashed_distinct().
We should account for the per-group hashtable entry overhead when
considering whether to use a hash aggregate to implement DISTINCT. The
comparable logic in choose_hashed_grouping() gets this right, but I think
I omitted it here in the mistaken belief that there would be no overhead
if there were no aggregate functions to be evaluated. This can result in
more than 2X underestimate of the hash table size, if the tuples being
aggregated aren't very wide. Per report from Tomas Vondra.
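Sketched, the estimate gains the same per-entry overhead term that
choose_hashed_grouping() already includes (path_width is an assumed stand-in
for the path's tuple width):

    hashentrysize = MAXALIGN(path_width) + MAXALIGN(sizeof(MinimalTupleData));
    hashentrysize += hash_agg_entry_size(0);   /* overhead even with no aggs */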
This bug is of long standing, but per discussion we'll only back-patch into
9.3. Changing the estimation behavior in stable branches seems to carry too
much risk of destabilizing plan choices for already-tuned applications.
Alvaro Herrera [Mon, 19 Aug 2013 21:48:17 +0000 (17:48 -0400)]
Fix removal of files in pgstats directories
Instead of deleting all files in stats_temp_directory and the permanent
directory on a crash, only remove those files that match the pattern of
files we actually write in them, to avoid possibly clobbering existing
unrelated contents of the temporary directory. Per complaint from Jeff
Janes, and subsequent discussion, starting at message
CAMkU=1z9+7RsDODnT4=cDFBRBp8wYQbd_qsLcMtKEf-oFwuOdQ@mail.gmail.com
Also, fix a bug in the same routine to avoid removing files from the
permanent directory twice (instead of once from that directory and then
from the temporary directory), also per report from Jeff Janes, in
message
CAMkU=1wbk947=-pAosDMX5VC+sQw9W4ttq6RM9rXu=MjNeEQKA@mail.gmail.com
This keeps the usual trigger file name unchanged from 9.2, avoiding nasty
issues if you use a pre-9.3 pg_ctl binary with a 9.3 server or vice versa.
The fallback behavior of creating a full checkpoint before starting up is now
triggered by a file called "fallback_promote". That can be useful for
debugging purposes, but we don't expect any users to have to resort to that
and we might want to remove that in the future, which is why the fallback
mechanism is undocumented.
Tom Lane [Mon, 19 Aug 2013 17:19:28 +0000 (13:19 -0400)]
Fix qual-clause-misplacement issues with pulled-up LATERAL subqueries.
In an example such as
SELECT * FROM
i LEFT JOIN LATERAL (SELECT * FROM j WHERE i.n = j.n) j ON true;
it is safe to pull up the LATERAL subquery into its parent, but we must
then treat the "i.n = j.n" clause as a qual clause of the LEFT JOIN. The
previous coding in deconstruct_recurse mistakenly labeled the clause as
"is_pushed_down", resulting in wrong semantics if the clause were applied
at the join node, as per an example submitted awhile ago by Jeremy Evans.
To fix, postpone processing of such clauses until we return back up to
the appropriate recursion depth in deconstruct_recurse.
In addition, tighten the is-safe-to-pull-up checks in is_simple_subquery;
we previously missed the possibility that the LATERAL subquery might itself
contain an outer join that makes lateral references in lower quals unsafe.
A regression test case equivalent to Jeremy's example was already in my
commit of yesterday, but was giving the wrong results because of this
bug. This patch fixes the expected output for that, and also adds a
test case for the second problem.
Alvaro Herrera [Mon, 19 Aug 2013 16:33:07 +0000 (12:33 -0400)]
Fix pg_upgrade failure from servers older than 9.3
When upgrading from servers of versions 9.2 and older, and MultiXactIds
have been used in the old server beyond the first page (that is, 2048
multis or more in the default 8kB-page build), pg_upgrade would set the
next multixact offset to use beyond what has been allocated in the new
cluster. This would cause a failure the first time the new cluster
needs to use this value, because the pg_multixact/offsets/ file wouldn't
exist or wouldn't be large enough. To fix, ensure that the transient
server instances launched by pg_upgrade extend the file as necessary.
Per report from Jesse Denardo in
CANiVXAj4c88YqipsyFQPboqMudnjcNTdB3pqe8ReXqAFQ=HXyA@mail.gmail.com
Kevin Grittner [Sun, 18 Aug 2013 21:24:59 +0000 (16:24 -0500)]
Remove relcache entry invalidation in REFRESH MATERIALIZED VIEW.
This was added as part of the attempt to support unlogged matviews
along with a populated status. It got missed when unlogged
support was removed pre-commit.
Noticed by Noah Misch. Back-patched to 9.3 branch.
Tom Lane [Sun, 18 Aug 2013 00:22:41 +0000 (20:22 -0400)]
Fix planner problems with LATERAL references in PlaceHolderVars.
The planner largely failed to consider the possibility that a
PlaceHolderVar's expression might contain a lateral reference to a Var
coming from somewhere outside the PHV's syntactic scope. We had a previous
report of a problem in this area, which I tried to fix in a quick-hack way
in commit 4da6439bd8553059766011e2a42c6e39df08717f, but Antonin Houska
pointed out that there were still some problems, and investigation turned
up other issues. This patch largely reverts that commit in favor of a more
thoroughly thought-through solution. The new theory is that a PHV's
ph_eval_at level cannot be higher than its original syntactic level. If it
contains lateral references, those don't change the ph_eval_at level, but
rather they create a lateral-reference requirement for the ph_eval_at join
relation. The code in joinpath.c needs to handle that.
Another issue is that createplan.c wasn't handling nested PlaceHolderVars
properly.
In passing, push knowledge of lateral-reference checks for join clauses
into join_clause_is_movable_to. This is mainly so that FDWs don't need
to deal with it.
This patch doesn't fix the original join-qual-placement problem reported by
Jeremy Evans (and indeed, one of the new regression test cases shows the
wrong answer because of that). But the PlaceHolderVar problems need to be
fixed before that issue can be addressed, so committing this separately
seems reasonable.
Robert Haas [Fri, 16 Aug 2013 19:28:03 +0000 (15:28 -0400)]
Rename some bgworker functions as we've done in master.
Commit 2dee7998f93062e2ae7fcc9048ff170e528d1724 renames these
functions in master, for consistency; per discussion, backport
just the renaming portion of that commit to 9.3 to keep the
branches in sync.
Tom Lane [Wed, 14 Aug 2013 22:38:36 +0000 (18:38 -0400)]
Remove ph_may_need from PlaceHolderInfo, with attendant simplifications.
The planner logic that attempted to make a preliminary estimate of the
ph_needed levels for PlaceHolderVars seems to be completely broken by
lateral references. Fortunately, the potential join order optimization
that this code supported seems to be of relatively little value in
practice; so let's just get rid of it rather than trying to fix it.
Getting rid of this allows fairly substantial simplifications in
placeholder.c, too, so planning in such cases should be a bit faster.
Issue noted while pursuing bugs reported by Jeremy Evans and Antonin
Houska, though this doesn't in itself fix either of their reported cases.
What this does do is prevent an Assert crash in the kind of query
illustrated by the added regression test. (I'm not sure that the plan for
that query is stable enough across platforms to be usable as a regression
test output ... but we'll soon find out from the buildfarm.)
Back-patch to 9.3. The problem case can't arise without LATERAL, so
no need to touch older branches.