Bruce Momjian [Tue, 4 Sep 2012 02:15:09 +0000 (22:15 -0400)]
In pg_upgrade, pull the port number from postmaster.pid, like we do for
socket location. Also, prevent putting the socket in the current
directory for pre-9.1 servers in live check and non-live check mode,
because pre-9.1 pg_ctl -w can't handle it.
Andrew Dunstan [Mon, 3 Sep 2012 22:11:02 +0000 (18:11 -0400)]
Use correct path separator for Windows builtin commands.
pg_upgrade produces a platform-specific script to remove the old
directory, but on Windows it has not been making sure that the
paths it writes as arguments for rmdir and del use the backslash
path separator, which will cause these scripts to fail.
Tom Lane [Mon, 3 Sep 2012 19:03:08 +0000 (15:03 -0400)]
Back-patch recent pg_upgrade fixes into 9.2.
This syncs contrib/pg_upgrade in the 9.2 branch with HEAD, except for the
HEAD changes related to converting XLogRecPtr to 64-bit int. It includes
back-patching these commits:
Tom Lane [Mon, 3 Sep 2012 15:24:36 +0000 (11:24 -0400)]
Make psql's \d+ show reloptions for all relkinds.
Formerly it would only show them for relkinds 'r' and 'f' (plain tables
and foreign tables). However, as of 9.2, views can also have reloptions,
namely security_barrier. The relkind restriction seems pointless and
not at all future-proof, so just print reloptions whenever there are any.
In passing, make some cosmetic improvements to the code that pulls the
"tableinfo" fields out of the PGresult.
Noted and patched by Dean Rasheed, with adjustment for all relkinds by me.
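A minimal illustration (hypothetical names; the exact footer text may vary slightly by psql version):

    CREATE TABLE base_t (a int);
    CREATE VIEW v WITH (security_barrier = true) AS
        SELECT a FROM base_t WHERE a > 0;
    -- In psql, \d+ v should now include a reloptions footer, roughly:
    --     Options: security_barrier=true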
Andrew Dunstan [Sat, 1 Sep 2012 00:50:31 +0000 (20:50 -0400)]
Restore setting of _USE_32BIT_TIME_T in 32-bit MSVC builds.
This was removed in commit cd004067742ee16ee63e55abfb4acbd5f09fbaab; we're
not quite sure why. There have been reports of crashes when ActiveState
Perl is built with this flag while we are not, and restoring it certainly
seems like the right thing to do. There is still some uncertainty as to
why the mismatch sometimes causes failures and sometimes doesn't.
Original patch from Owais Khani, substantially reworked and
extended by Andrew Dunstan.
Tom Lane [Fri, 31 Aug 2012 18:18:02 +0000 (14:18 -0400)]
Make configure probe for mbstowcs_l as well as wcstombs_l.
We previously supposed that any given platform would supply both or neither
of these functions, so that one configure test would be sufficient. It now
appears that at least on AIX this is not the case ... which is likely an
AIX bug, but nonetheless we need to cope with it. So use separate tests.
Per bug #6758; thanks to Andrew Hastie for doing the followup testing
needed to confirm what was happening.
Backpatch to 9.1, where we began using these functions.
Tom Lane [Fri, 31 Aug 2012 03:47:46 +0000 (23:47 -0400)]
Back-patch recent fixes for gistchoose and gistRelocateBuildBuffersOnSplit.
This back-ports commits c8ba697a4bdb934f0c51424c654e8db6133ea255 and e5db11c5582b469c04a11f217a0f32c827da5dd7, which fix one definite and one
speculative bug in gistchoose, and make the code a lot more intelligible as
well. In 9.2 only, this also affects the largely-copied-and-pasted logic
in gistRelocateBuildBuffersOnSplit.
The impact of the bugs was that the functions might make poor decisions
as to which index tree branch to push a new entry down into, resulting in
GiST index bloat and poor performance. The fixes rectify these decisions
for future insertions, but a REINDEX would be needed to clean up any
existing index bloat.
Robert Haas [Thu, 30 Aug 2012 18:14:22 +0000 (14:14 -0400)]
Document how to prevent PostgreSQL itself from exhausting memory.
The existing documentation in the "Linux Memory Overcommit" section seemed to
assume that PostgreSQL itself could never be the problem, or at
least it didn't tell you what to do about it.
Per discussion with Craig Ringer and Kevin Grittner.
Tom Lane [Tue, 28 Aug 2012 00:17:16 +0000 (20:17 -0400)]
Add section IDs to subsections of syntax.sgml that lacked them.
This is so that these sections will have stable HTML tags that one can
link to, rather than things like "AEN1902". Perhaps we should mount a
campaign to do this everywhere, but I've found myself pointing at
syntax.sgml subsections often enough to be sure it's useful here.
Tom Lane [Mon, 27 Aug 2012 16:45:51 +0000 (12:45 -0400)]
Fix DROP INDEX CONCURRENTLY IF EXISTS.
This threw ERROR, not the expected NOTICE, if the index didn't exist.
The bug was actually visible in not-as-expected regression test output,
so somebody wasn't paying too close attention in commit 8cb53654dbdb4c386369eb988062d0bbb6de725e.
Per report from Brendan Byrd.
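For instance (hypothetical index name):

    -- Previously this raised an ERROR; it now emits the expected NOTICE
    -- and continues:
    DROP INDEX CONCURRENTLY IF EXISTS no_such_index;
    -- NOTICE:  index "no_such_index" does not exist, skipping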
Tom Lane [Fri, 24 Aug 2012 17:09:12 +0000 (13:09 -0400)]
Fix issues with checks for unsupported transaction states in Hot Standby.
The GUC check hooks for transaction_read_only and transaction_isolation
tried to check RecoveryInProgress(), so as to disallow setting read/write
mode or serializable isolation level (respectively) in hot standby
sessions. However, GUC check hooks can be called in many situations where
we're not connected to shared memory at all, resulting in a crash in
RecoveryInProgress(). Among other cases, this results in EXEC_BACKEND
builds crashing during child process start if default_transaction_isolation
is serializable, as reported by Heikki Linnakangas. Protect those calls
by silently allowing any setting when not inside a transaction, which is
okay anyway since these GUCs are always reset at start of transaction.
Also, add a check to GetSerializableTransactionSnapshot() to complain
if we are in hot standby. We need that check despite the one in
check_XactIsoLevel() because default_transaction_isolation could be
serializable. We don't want to complain any sooner than this in such
cases, since that would prevent running transactions at all in such a
state; but a transaction can be run, if SET TRANSACTION ISOLATION is done
before setting a snapshot. Per report some months ago from Robert Haas.
Back-patch to 9.1, since these problems were introduced by the SSI patch.
Kevin Grittner and Tom Lane, with ideas from Heikki Linnakangas
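A sketch of the surviving workaround on a hot standby (hypothetical table
name), assuming default_transaction_isolation = 'serializable' in
postgresql.conf:

    BEGIN;
    -- Lower the level before the transaction takes its first snapshot;
    -- GetSerializableTransactionSnapshot() would otherwise complain on a
    -- standby.
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT count(*) FROM some_table;
    COMMIT;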
Tom Lane [Thu, 23 Aug 2012 21:25:17 +0000 (17:25 -0400)]
Fix cascading privilege revoke to notice when privileges are still held.
If we revoke a grant option from some role X, but X still holds the option
via another grant, we should not recursively revoke the privilege from
role(s) Y that X had granted it to. This was supposedly fixed as one
aspect of commit 4b2dafcc0b1a579ef5daaa2728223006d1ff98e9, but I must not
have tested it, because in fact that code never worked: it forgot to shift
the grant-option bits back over when masking the bits being revoked.
Per bug #6728 from Daniel German. Back-patch to all active branches,
since this has been wrong since 8.0.
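A sketch of the scenario being fixed (role and table names are hypothetical;
run as a superuser or as the respective roles):

    -- alice owns tbl; roles a, x and y exist.
    GRANT SELECT ON tbl TO a WITH GRANT OPTION;     -- alice -> a
    GRANT SELECT ON tbl TO x WITH GRANT OPTION;     -- alice -> x
    SET ROLE a;
    GRANT SELECT ON tbl TO x WITH GRANT OPTION;     -- a -> x (a second source)
    RESET ROLE;
    SET ROLE x;
    GRANT SELECT ON tbl TO y;                       -- x -> y
    RESET ROLE;
    -- alice revokes her own grant option from x; x still holds the option
    -- via a, so y's SELECT privilege should survive:
    REVOKE GRANT OPTION FOR SELECT ON tbl FROM x CASCADE;
    SELECT has_table_privilege('y', 'tbl', 'SELECT');   -- expected: true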
Tom Lane [Thu, 23 Aug 2012 03:55:34 +0000 (23:55 -0400)]
Remove overly-pessimistic statement about constraint exclusion.
As of 9.2, constraint exclusion should work okay with prepared statements:
the planner will try custom plans with actual values of the parameters,
and observe that they are a lot cheaper than the generic plan, and thus
never fall back to using the generic plan. Noted by Tatsuhito Kasahara.
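For example (hypothetical inheritance-partitioned table), with
constraint_exclusion left at its default of 'partition':

    CREATE TABLE log (logdate date);
    CREATE TABLE log_2011 (CHECK (logdate <  DATE '2012-01-01')) INHERITS (log);
    CREATE TABLE log_2012 (CHECK (logdate >= DATE '2012-01-01')) INHERITS (log);

    PREPARE recent(date) AS SELECT count(*) FROM log WHERE logdate >= $1;
    -- 9.2 plans this with the actual parameter value, so constraint exclusion
    -- can skip log_2011 instead of falling back to a one-size-fits-all plan:
    EXECUTE recent(DATE '2012-06-01');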
Tom Lane [Wed, 22 Aug 2012 18:04:02 +0000 (14:04 -0400)]
Fix erroneous documentation of synchronous_commit = remote_write.
The docs claimed that this mode only waits for the standby to receive WAL
data, but actually it waits for the data to be written out to the standby's
OS; that is a pretty significant difference, because it means the data will
survive a crash of the walreceiver process.
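In configuration terms, the corrected description applies to this setting
(shown per-session for illustration):

    -- Waits until the standby's operating system has written the WAL data
    -- (not merely until the walreceiver has received it):
    SET synchronous_commit = remote_write;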
Tom Lane [Tue, 21 Aug 2012 19:18:36 +0000 (15:18 -0400)]
Fix dumping of security_barrier views with circular dependencies.
If a view has circular dependencies, pg_dump splits it into a CREATE TABLE
and a CREATE RULE command to break the dependency loop. However, if the
view has reloptions, those options cannot be applied in the CREATE TABLE
command, because views and tables have different allowed reloptions so
CREATE TABLE would reject them. Instead apply the reloptions after the
CREATE RULE, using ALTER VIEW SET.
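The dump output now ends up roughly in this shape (a simplified sketch with
hypothetical names; real circular views are more involved):

    CREATE TABLE v (a integer, b integer);       -- stand-in for the view
    CREATE RULE "_RETURN" AS ON SELECT TO v DO INSTEAD
        SELECT t.a, t.b FROM base_t t;           -- the view's real query
    ALTER VIEW v SET (security_barrier = true);  -- reloptions applied last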
Tom Lane [Mon, 20 Aug 2012 17:24:58 +0000 (13:24 -0400)]
Fix bugs in contrib/pg_trgm's LIKE pattern analysis code.
Extraction of trigrams did not process LIKE escape sequences properly,
leading to possible misidentification of trigrams near escapes, resulting
in incorrect index search results.
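As an illustration (hypothetical table and column), a pattern with an escaped
wildcard must yield trigrams taken from the literal text around the escape:

    CREATE EXTENSION pg_trgm;
    CREATE INDEX t_col_trgm ON t USING gin (col gin_trgm_ops);
    -- "\%" matches a literal percent sign; trigrams near it must be
    -- extracted correctly or the index scan can miss matching rows:
    SELECT * FROM t WHERE col LIKE '%100\%users%';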
Tom Lane [Fri, 17 Aug 2012 04:05:33 +0000 (00:05 -0400)]
Check LIBXML_VERSION instead of testing in configure script.
We had put a test for libxml2's xmlStructuredErrorContext variable in
configure, but of course that doesn't work on Windows builds. The next
best alternative seems to be to test the LIBXML_VERSION symbol provided
by xmlversion.h.
Per report from Talha Bin Rizwan, though this fixes it in a different way
than his proposed patch.
Tom Lane [Thu, 16 Aug 2012 17:04:03 +0000 (13:04 -0400)]
Allow create_index_paths() to consider multiple join bitmapscan paths.
In the initial cut at the "parameterized paths" feature, I'd simplified
create_index_paths() to the point where it would only generate a single
parameterized bitmap path per relation. Experimentation with an example
supplied by Josh Berkus convinces me that that's not good enough: we really
need to consider a bitmap path for each possible outer relation. Otherwise
we have regressions relative to pre-9.2 versions, in which the planner
picks a plain indexscan where it should have used a bitmap scan in queries
involving three or more tables. Indeed, after fixing this, several queries
in the regression tests show improved plans as a result of using bitmap not
plain indexscans.
Fix GiST buffering build bug, which caused "failed to re-find parent" errors.
We use a hash table to track the parents of inner pages, but when inserting
to a leaf page, the caller of gistbufferinginserttuples() must pass a
correct block number of the leaf's parent page. Before gistProcessItup()
descends to a child page, it checks if the downlink needs to be adjusted to
accommodate the new tuple, and updates the downlink if necessary. However,
updating the downlink might require splitting the page, which might move the
downlink to a page to the right. gistProcessItup() doesn't realize that, so
when it descends to the leaf page, it might pass an out-of-date parent block
number as a result. Fix that by having gistbufferinginserttuples() return
the block number that the tuple was actually inserted into.
Tom Lane [Wed, 15 Aug 2012 23:01:23 +0000 (19:01 -0400)]
Fix rescan logic in nodeCtescan.
The previous coding essentially assumed that nodes would be rescanned in
the same order they were initialized in; or at least that the "leader" of
a group of CTEscans would be rescanned before any others were required to
execute. Unfortunately, that isn't even a little bit true. It's possible
to devise queries in which the leader isn't rescanned until other CTEscans
on the same CTE have run to completion, or even in which the leader never
gets a rescan call at all.
The fix makes the leader specially responsible only for initial creation
and final destruction of the tuplestore; rescan resets are now a
symmetrically shared responsibility. This means that we might reset the
tuplestore multiple times when restarting a plan subtree containing
multiple CTEscans; but resetting an already-empty tuplestore is cheap
enough that that doesn't seem like a problem.
Per report from Adam Mackler; the new regression test cases are based on
his example query.
Back-patch to 8.4 where CTE scans were introduced.
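A hypothetical query shape with several scans of one CTE, where rescans can
reach a non-leader scan first (the committed regression tests are based on
Adam Mackler's actual query):

    WITH q(x) AS (SELECT generate_series(1, 3))
    SELECT (SELECT count(*) FROM q AS inner_q WHERE inner_q.x <= outer_q.x)
    FROM q AS outer_q;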
Tom Lane [Wed, 15 Aug 2012 15:27:00 +0000 (11:27 -0400)]
Disallow extensions from owning the schema they are assigned to.
This situation creates a dependency loop that confuses pg_dump and probably
other things. Moreover, since the mental model is that the extension
"contains" schemas it owns, but "is contained in" its extschema (even
though neither is strictly true), having both true at once is confusing for
people too. So prevent the situation from being set up.
Reported and patched by Thom Brown. Back-patch to 9.1 where extensions
were added.
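Roughly, this kind of setup is what gets rejected now (extension name is
hypothetical; the exact error path may differ):

    CREATE SCHEMA ext_home;
    CREATE EXTENSION myext SCHEMA ext_home;
    -- Making the extension own its own assigned schema would create the
    -- dependency loop, so this is now expected to fail:
    ALTER EXTENSION myext ADD SCHEMA ext_home;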
Tom Lane [Wed, 15 Aug 2012 04:07:15 +0000 (00:07 -0400)]
Resurrect the "last ditch" code path in join_search_one_level().
This essentially reverts commit e54b10a62db2991235fe800c629baef4531a6d67,
in which I'd decided that the "last ditch" join logic was useless. The
folly of that is now exposed by a report from Pavel Stehule: although the
function should always find at least one join in a self-contained join
problem, it can still fail to do so in a sub-problem created by artificial
from_collapse_limit or join_collapse_limit constraints. Adjust the
comments to describe this, and simplify the code a bit to match the new
coding of the earlier loop in the function.
I'm not terribly happy about this: I still subscribe to the opinion stated
in the previous commit message that the "last ditch" code can obscure logic
bugs elsewhere. But the alternative seems to be to complicate the earlier
tests for does-this-relation-have-a-join-clause to the point where they can
tell whether the join clauses link outside the current join sub-problem.
And that looks messy, slow, and possibly a source of bugs in itself.
In any case, now is not the time to be inserting experimental code into
9.2, so let's just go back to the time-tested solution.
Tom Lane [Tue, 14 Aug 2012 22:28:37 +0000 (18:28 -0400)]
Prevent access to external files/URLs via contrib/xml2's xslt_process().
libxslt offers the ability to read and write both files and URLs through
stylesheet commands, thus allowing unprivileged database users to both read
and write data with the privileges of the database server. Disable that
through proper use of libxslt's security options.
Also, remove xslt_process()'s ability to fetch documents and stylesheets
from external files/URLs. While this was a documented "feature", it was
long regarded as a terrible idea. The fix for CVE-2012-3489 broke that
capability, and rather than expend effort on trying to fix it, we're just
going to summarily remove it.
While the ability to write as well as read makes this security hole
considerably worse than CVE-2012-3489, the problem is mitigated by the fact
that xslt_process() is not available unless contrib/xml2 is installed,
and the longstanding warnings about security risks from that should have
discouraged prudent DBAs from installing it in security-exposed databases.
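With the change, both arguments have to be handed to xslt_process() as text
values; a file name or URL in either position is no longer fetched. A
minimal sketch:

    SELECT xslt_process(
        '<doc><name>example</name></doc>'::text,
        '<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
           <xsl:template match="/">
             <out><xsl:value-of select="/doc/name"/></out>
           </xsl:template>
         </xsl:stylesheet>'::text);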
Tom Lane [Tue, 14 Aug 2012 22:25:39 +0000 (18:25 -0400)]
Prevent access to external files/URLs via XML entity references.
xml_parse() would attempt to fetch external files or URLs as needed to
resolve DTD and entity references in an XML value, thus allowing
unprivileged database users to attempt to fetch data with the privileges
of the database server. While the external data wouldn't get returned
directly to the user, portions of it could be exposed in error messages
if the data didn't parse as valid XML; and in any case the mere ability
to check existence of a file might be useful to an attacker.
The ideal solution to this would still allow fetching of references that
are listed in the host system's XML catalogs, so that documents can be
validated according to installed DTDs. However, doing that with the
available libxml2 APIs appears complex and error-prone, so we're not going
to risk it in a security patch that necessarily hasn't gotten wide review.
So this patch merely shuts off all access, causing any external fetch to
silently expand to an empty string. A future patch may improve this.
In HEAD and 9.2, also suppress warnings about undefined entities, which
would otherwise occur as a result of not loading referenced DTDs. Previous
branches don't show such warnings anyway, due to different error handling
arrangements.
Credit to Noah Misch for first reporting the problem, and for much work
towards a solution, though this simplistic approach was not his preference.
Also thanks to Daniel Veillard for consultation.
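For example, an external entity reference like the following is no longer
fetched; it now silently expands to an empty string during parsing (the
file path is just an illustration):

    SELECT xmlparse(document
      '<?xml version="1.0"?>
       <!DOCTYPE foo [ <!ENTITY ext SYSTEM "file:///etc/passwd"> ]>
       <foo>&ext;</foo>');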
Tom Lane [Tue, 14 Aug 2012 14:54:30 +0000 (10:54 -0400)]
Update time zone data files to tzdata release 2012e.
DST law changes in Morocco; Tokelau has relocated to the other side of
the International Date Line; and apparently Olson had Tokelau's GMT
offset wrong by an hour even before that.
There are also a large number of non-significant changes in this update.
Upstream took the opportunity to remove trailing whitespace, and the
SCCS-style version numbers on the individual files are gone too.
Tom Lane [Sat, 11 Aug 2012 16:51:30 +0000 (12:51 -0400)]
Fix dependencies generated during ALTER TABLE ADD CONSTRAINT USING INDEX.
This command generated new pg_depend entries linking the index to the
constraint and the constraint to the table, which match the entries made
when a unique or primary key constraint is built de novo. However, it did
not bother to get rid of the entries linking the index directly to the
table. We had considered the issue when the ADD CONSTRAINT USING INDEX
patch was written, and concluded that we didn't need to get rid of the
extra entries. But this is wrong: ALTER COLUMN TYPE wasn't expecting such
redundant dependencies to exist, as reported by Hubert Depesz Lubaczewski.
On reflection it seems rather likely to break other things as well, since
there are many bits of code that crawl pg_depend for one purpose or
another, and most of them are pretty naive about what relationships they're
expecting to find. Fortunately it's not that hard to get rid of the extra
dependency entries, so let's do that.
Back-patch to 9.1, where ALTER TABLE ADD CONSTRAINT USING INDEX was added.
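A sketch of the sequence that was tripping ALTER COLUMN TYPE (hypothetical
names):

    CREATE TABLE t (id int NOT NULL, val text);
    CREATE UNIQUE INDEX t_id_idx ON t (id);
    ALTER TABLE t ADD CONSTRAINT t_pkey PRIMARY KEY USING INDEX t_id_idx;
    -- With the redundant index-to-table pg_depend entry no longer created,
    -- rewriting the column works as expected:
    ALTER TABLE t ALTER COLUMN id TYPE bigint;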
Simon Riggs [Wed, 8 Aug 2012 22:58:49 +0000 (23:58 +0100)]
Force archive_status of .done for xlogs created by dearchival/replication.
This prevents spurious attempts to archive xlog files after promotion of a
standby, a bug introduced by the cascading replication patch in 9.2.
Fujii Masao, simplified and extended to cover streaming by Simon Riggs
Simon Riggs [Wed, 8 Aug 2012 20:28:41 +0000 (21:28 +0100)]
Fix minor bug in XLogFileRead() that accidentally worked.
Cascading replication copied the incoming file into pg_xlog but didn't set
the path correctly, so the first attempt to open the file failed, causing
it to loop around and look for the file in pg_xlog. So the earlier coding
worked, but accidentally rather than by design.
Spotted by Fujii Masao, fix by Fujii Masao and Simon Riggs
Tom Lane [Wed, 8 Aug 2012 15:52:06 +0000 (11:52 -0400)]
Fix TwoPhaseGetDummyBackendId().
This was broken in commit ed0b409d22346b1b027a4c2099ca66984d94b6dd,
which revised the GlobalTransactionData struct to not include the
associated PGPROC as its first member, but overlooked one place where
a cast was used in reliance on that equivalence.
The most effective way of fixing this seems to be to create a new function
that looks up the GlobalTransactionData struct given the XID, and make
both TwoPhaseGetDummyBackendId and TwoPhaseGetDummyProc rely on that.
Bruce Momjian [Tue, 7 Aug 2012 17:10:44 +0000 (13:10 -0400)]
Fix pg_upgrade file share violation on Windows introduced by commit 4741e9afb93f0d769655b2d18c2b73b86f281010.
This was done by adding an optional second log file parameter to
exec_prog(), and closing and reopening the log file between system() calls.
Magnus Hagander [Mon, 6 Aug 2012 11:53:46 +0000 (13:53 +0200)]
Complain with proper error message if streaming stops prematurely
In particular, with a controlled shutdown of the master, pg_basebackup
with WAL streaming could terminate without an error message, even though
the backup is not consistent.
In passing, fix a few cases where walfile wasn't properly set to -1 after
closing.
Perform conversion from Python unicode to string/bytes object via UTF-8.
We used to convert the unicode object directly to a string in the server
encoding by calling Python's PyUnicode_AsEncodedString function. In other
words, we used Python's routines to do the encoding. However, that has a
few problems. First of all, it required keeping a mapping table of Python
encoding names and PostgreSQL encodings. But the real killer was that Python
doesn't support EUC_TW and MULE_INTERNAL encodings at all.
Instead, convert the Python unicode object to UTF-8, and use PostgreSQL's
encoding conversion functions to convert from UTF-8 to server encoding. We
were already doing the same in the other direction in PLyUnicode_FromString,
so this is more consistent, too.
Note: This makes SQL_ASCII behave more leniently. We used to map
SQL_ASCII to Python's 'ascii', which in Python means strict 7-bit ASCII
only, so you got an error if the Python string contained anything but pure
ASCII. You no longer get an error; you get the UTF-8 representation of the
string instead.
Backpatch to 9.0, where these conversions were introduced.
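A small PL/Python example (assuming the plpythonu language is installed);
the returned unicode object is now encoded to UTF-8 by Python and then
converted to the server encoding by PostgreSQL:

    CREATE FUNCTION uni() RETURNS text AS $$
    return u'\u00e9toile'   # "étoile"; works even for encodings Python lacks
    $$ LANGUAGE plpythonu;

    SELECT uni();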
Tom Lane [Fri, 3 Aug 2012 21:39:45 +0000 (17:39 -0400)]
Fix bugs with parsing signed hh:mm and hh:mm:ss fields in interval input.
DecodeInterval() failed to honor the "range" parameter (the special SQL
syntax for indicating which fields appear in the literal string) if the
time was signed. This seems inappropriate, so make it work like the
not-signed case. The inconsistency was introduced in my commit f867339c0148381eb1d01f93ab5c79f9d10211de, which as noted in its log message
was only really focused on making SQL-compliant literals work per spec.
Including a sign here is not per spec, but if we're going to allow it
then it's reasonable to expect it to work like the not-signed case.
Also, remove bogus setting of tmask, which caused subsequent processing to
think that what had been given was a timezone and not an hh:mm(:ss) field,
thus confusing checks for redundant fields. This seems to be an aboriginal
mistake in Lockhart's commit 2cf1642461536d0d8f3a1cf124ead0eac04eb760.
Add regression test cases to illustrate the changed behaviors.
Back-patch as far as 8.4, where support for spec-compliant interval
literals was added.
Range problem reported and diagnosed by Amit Kapila, tmask problem by me.
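For instance, the declared field range is now applied to signed values the
same way as to unsigned ones (outputs shown as comments are approximate):

    SELECT INTERVAL '1:20'  MINUTE TO SECOND;   -- 1 minute 20 seconds
    SELECT INTERVAL '-1:20' MINUTE TO SECOND;   -- now also minutes:seconds, negated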
As noted by Noah Misch, btree_xlog_delete_get_latestRemovedXid is
critically dependent on the assumption that it's examining a consistent
state of the database. This was undocumented though, so the
seemingly-unrelated check for no active HS sessions might be thought to be
merely an optional optimization. Improve comments, and add an explicit
check of reachedConsistency just to be sure.
This function returns InvalidTransactionId (thereby killing all HS
transactions) in several cases that are not nearly unlikely enough for my
taste. This commit doesn't attempt to fix those deficiencies, just
document them.
Back-patch to 9.2, not from any real functional need but just to keep the
branches more closely synced to simplify possible future back-patching.
Tom Lane [Fri, 3 Aug 2012 19:22:47 +0000 (15:22 -0400)]
In SPGiST replay, do conflict resolution before modifying the page.
In yesterday's commit 962e0cc71e839c58fb9125fa85511b8bbb8bdbee, I added the
ResolveRecoveryConflictWithSnapshot call in the wrong place. I correctly
put it before spgRedoVacuumRedirect itself would modify the index page ---
but not before RestoreBkpBlocks, so replay of a record with a full-page
image would modify the page before kicking off any conflicting HS
transactions. Oops.
Tom Lane [Thu, 2 Aug 2012 19:34:21 +0000 (15:34 -0400)]
Fix race conditions associated with SPGiST redirection tuples.
The correct test for whether a redirection tuple is removable is whether
the tuple's xid < RecentGlobalXmin, not OldestXmin; the previous coding
failed to protect index searches being done in concurrent transactions that
have no XID. This mirrors the recent fix in btree's page recycling logic
made in commit d3abbbebe52eb1e59e621c880ad57df9d40d13f2.
Also, WAL-log the newest XID of any removed redirection tuple on an index
page, and apply ResolveRecoveryConflictWithSnapshot during InHotStandby WAL
replay. This protects against concurrent Hot Standby transactions possibly
needing to see the redirection tuple(s).
Per my query of 2012-03-12 and subsequent discussion.
Tom Lane [Thu, 2 Aug 2012 17:10:36 +0000 (13:10 -0400)]
Replace libpq's "row processor" API with a "single row" mode.
After taking awhile to digest the row-processor feature that was added to
libpq in commit 92785dac2ee7026948962cd61c4cd84a2d052772, we've concluded
it is over-complicated and too hard to use. Leave the core infrastructure
changes in place (that is, there's still a row processor function inside
libpq), but remove the exposed API pieces, and instead provide a "single
row" mode switch that causes PQgetResult to return one row at a time in
separate PGresult objects.
This approach incurs more overhead than proper use of a row processor
callback would, since construction of a PGresult per row adds extra cycles.
However, it is far easier to use and harder to break. The single-row mode
still affords applications the primary benefit that the row processor API
was meant to provide, namely not having to accumulate large result sets in
memory before processing them. Preliminary testing suggests that we can
probably buy back most of the extra cycles by micro-optimizing construction
of the extra results, but that task will be left for another day.
Tom Lane [Tue, 31 Jul 2012 21:56:27 +0000 (17:56 -0400)]
Fix WITH attached to a nested set operation (UNION/INTERSECT/EXCEPT).
Parse analysis neglected to cover the case of a WITH clause attached to an
intermediate-level set operation; it only handled WITH at the top level
or WITH attached to a leaf-level SELECT. Per report from Adam Mackler.
In HEAD, I rearranged the order of SelectStmt's fields to put withClause
with the other fields that can appear on non-leaf SelectStmts. In back
branches, leave it alone to avoid a possible ABI break for third-party
code.
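A minimal query of the previously mishandled shape (values are arbitrary);
the WITH is attached to the inner, non-leaf UNION ALL rather than to the
top level or to a single SELECT:

    SELECT 1 AS x
    UNION ALL
    (WITH w AS (SELECT 2 AS x)
     SELECT x FROM w
     UNION ALL
     SELECT x FROM w);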
Tom Lane [Tue, 31 Jul 2012 18:36:58 +0000 (14:36 -0400)]
Fix syslogger so that log_truncate_on_rotation works in the first rotation.
In the original coding of the log rotation stuff, we did not bother to make
the truncation logic work for the very first rotation after postmaster
start (or after a syslogger crash and restart). It just always appended
in that case. It did not seem terribly important at the time, but we've
recently had two separate complaints from people who expected it to work
unsurprisingly. (Both users tend to restart the postmaster about as often
as a log rotation is configured to happen, which is maybe not typical use,
but still...) Since the initial log file is opened in the postmaster,
fixing this requires passing down some more state to the syslogger child
process.
It's always been like this, so back-patch to all supported branches.
The most user-visible part of this is to change the long options
--statusint and --noloop to --status-interval and --no-loop,
respectively, per discussion.
Also, consistently enclose file names in double quotes, per our
conventions; and consistently use the term "transaction log file" to
talk about WAL segments. (Someday we may need to go over this
terminology and make it consistent across the whole source code.)
Finally, reflow the code to better fit in 80 columns, and have pgindent
fix it up some more.
Fix memory and file descriptor leaks in pg_receivexlog/pg_basebackup
When the internal loop mode was added, freeing memory and closing
file descriptors before returning became important, and a few cases
in the code missed that.
This is a backpatch of commit 058a050e to the 9.2 branch, which seems to
have been neglected (in error, because the bugs it fixes were introduced
in commit 16282ae6 which is present in both master and 9.2).
Tom Lane [Fri, 27 Jul 2012 23:31:19 +0000 (19:31 -0400)]
Improve reporting of error situations in find_other_exec().
This function suppressed any stderr output from the called program, which
is unnecessary in the normal case and unhelpful in error cases. It also
gave a rather opaque message along the lines of "fgets failure: Success"
in case the called program failed to return anything on stdout. Since
we've seen multiple reports of people not understanding what's wrong when
pg_ctl reports this, improve the message.
Tom Lane [Thu, 26 Jul 2012 18:29:32 +0000 (14:29 -0400)]
Only allow autovacuum to be auto-canceled by a directly blocked process.
In the original coding of the autovacuum cancel feature, commit acac68b2bcae818bc8803b8cb8cbb17eee8d5e2b, an autovacuum process was
considered a target for cancellation if it was found to hard-block any
process examined in the deadlock search. This patch tightens the test so
that the autovacuum must directly hard-block the current process. This
should make the behavior more predictable in general, and in particular
it ensures that an autovacuum will not be canceled with less than
deadlock_timeout grace period. In the old coding, it was possible for an
autovacuum to be canceled almost instantly, given unfortunate timing of two
or more other processes' lock attempts.
This also justifies the logging methodology in the recent commit d7318d43d891bd63e82dcfc27948113ed7b1db80; without this restriction, that
patch isn't providing enough information to see the connection of the
canceling process to the autovacuum. Like that one, patch all the way
back.
Robert Haas [Thu, 26 Jul 2012 13:18:32 +0000 (09:18 -0400)]
Log a better message when canceling autovacuum.
The old message was at DEBUG2, so typically it didn't show up in the
log at all. As a result, in most cases where autovacuum was canceled,
the only information that was logged was the table being vacuumed,
with no indication as to what problem caused the cancel. Crank up
the level to LOG and add some more details to assist with debugging.
Back-patch all the way, per discussion on pgsql-hackers.
Tom Lane [Wed, 25 Jul 2012 21:40:43 +0000 (17:40 -0400)]
Fix longstanding crash-safety bug with newly-created-or-reset sequences.
If a crash occurred immediately after the first nextval() call for a serial
column, WAL replay would restore the sequence to a state in which it
appeared that no nextval() had been done, thus allowing the first sequence
value to be returned again by the next nextval() call; as reported in
bug #6748 from Xiangming Mei.
More generally, the problem would occur if an ALTER SEQUENCE was executed
on a freshly created or reset sequence. (The manifestation with serial
columns was introduced in 8.2 when we added an ALTER SEQUENCE OWNED BY step
to serial column creation.) The cause is that sequence creation attempted
to save one WAL entry by writing out a WAL record that made it appear that
the first nextval() had already happened (viz, with is_called = true),
while marking the sequence's in-database state with log_cnt = 1 to show
that the first nextval() need not emit a WAL record. However, ALTER
SEQUENCE would emit a new WAL entry reflecting the actual in-database state
(with is_called = false). Then, nextval would allocate the first sequence
value and set is_called = true, but it would trust the log_cnt value and
not emit any WAL record. A crash at this point would thus restore the
sequence to its post-ALTER state, causing the next nextval() call to return
the first sequence value again.
To fix, get rid of the idea of logging an is_called status different from
reality. This means that the first nextval-driven WAL record will happen
at the first nextval call not the second, but the marginal cost of that is
pretty negligible. In addition, make sure that ALTER SEQUENCE resets
log_cnt to zero in any case where it touches sequence parameters that
affect future nextval results. This will result in some user-visible
changes in the contents of a sequence's log_cnt column, as reflected in the
patch's regression test changes; but no application should be depending on
that anyway, since it was already true that log_cnt changes rather
unpredictably depending on checkpoint timing.
In addition, make some basically-cosmetic improvements to get rid of
sequence.c's undesirable intimacy with page layout details. It was always
really trying to WAL-log the contents of the sequence tuple, so we should
have it do that directly using a HeapTuple's t_data and t_len, rather than
backing into it with some magic assumptions about where the tuple would be
on the sequence's page.
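A sketch of the interaction (log_cnt values are illustrative and depend on
checkpoint timing):

    CREATE SEQUENCE s;
    SELECT nextval('s');               -- the first nextval now writes its own WAL record
    ALTER SEQUENCE s INCREMENT BY 10;  -- resets log_cnt, forcing fresh WAL on next nextval
    SELECT last_value, log_cnt, is_called FROM s;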