Tom Lane [Wed, 31 Jul 2013 15:31:26 +0000 (11:31 -0400)]
Fix regexp_matches() handling of zero-length matches.
We'd find the same match twice if it was of zero length and not immediately
adjacent to the previous match. replace_text_regexp() got similar cases
right, so adjust this search logic to match that. Note that even though
the regexp_split_to_xxx() functions share this code, they did not display
equivalent misbehavior, because the second match would be considered
degenerate and ignored.
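As an illustration of the affected behavior, here is a hedged sketch (an arbitrary string and pattern; the exact duplication depended on where the zero-length matches fell relative to the preceding match):
    -- with the 'g' flag, every match, including zero-length ones, should be
    -- reported exactly once; the bug could report a zero-length match twice
    -- when it did not immediately follow the previous match
    SELECT regexp_matches('ab12cd', '\d*', 'g');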
Currently we don't need to update the pg_tablespace catalog
after redefining the symbolic links to the tablespaces,
because the pg_tablespace.spclocation column was removed in
PostgreSQL 9.2.
Back-patch to 9.2, where pg_tablespace.spclocation was removed.
Refactoring as part of commit 8ceb24568054232696dddc1166a8563bc78c900a
had the unintended effect of making REINDEX TABLE and REINDEX DATABASE
no longer validate constraints enforced by the indexes in question;
REINDEX INDEX still did so. Indexes marked invalid remained so, and
constraint violations arising from data corruption went undetected.
Back-patch to 9.0, like the causative commit.
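For illustration, the affected forms look like this (hypothetical object names):
    REINDEX INDEX my_table_pkey;    -- still validated the constraint
    REINDEX TABLE my_table;         -- stopped validating constraints after the refactoring
    REINDEX DATABASE my_database;   -- likewise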
Tom Lane [Mon, 29 Jul 2013 14:42:41 +0000 (10:42 -0400)]
Fix contrib/cube and contrib/seg to build with bison 3.0.
These modules used the YYPARSE_PARAM macro, which has been deprecated
by the bison folk since 1.875, and which they finally removed in 3.0.
Adjust the code to use the replacement facility, %parse-param, which
is a much better solution anyway since it allows specification of the
type of the extra parser parameter. We can thus get rid of a lot of
unsightly casting.
Back-patch to all active branches, since somebody might try to build
a back branch with up-to-date tools.
Bruce Momjian [Sat, 27 Jul 2013 19:00:58 +0000 (15:00 -0400)]
pg_upgrade: fix -j race condition on Windows
pg_upgrade cannot write the command string to the log file and then call
system() to write to the same file without causing occasional file-share
errors on Windows. So instead, write the command string to the log file
after system(), in those cases.
Backpatch to 9.3.
Bruce Momjian [Fri, 26 Jul 2013 17:52:01 +0000 (13:52 -0400)]
pg_upgrade docs: don't use cluster for binary/lib
In a few cases, the pg_upgrade documentation said old/new cluster location
when it meant old/new Postgres install location, so fix those.
Per private email report.
Tom Lane [Thu, 25 Jul 2013 20:45:47 +0000 (16:45 -0400)]
Prevent leakage of SPI tuple tables during subtransaction abort.
plpgsql often just remembers SPI-result tuple tables in local variables,
and has no mechanism for freeing them if an ereport(ERROR) causes an escape
out of the execution function whose local variable it is. In the original
coding, that wasn't a problem because the tuple table would be cleaned up
when the function's SPI context went away during transaction abort.
However, once plpgsql grew the ability to trap exceptions, repeated
trapping of errors within a function could result in significant
intra-function-call memory leakage, as illustrated in bug #8279 from
Chad Wagner.
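A rough sketch of the leaking pattern, under the assumption that the trapped error escapes while an SPI result is live (hypothetical loop; exact reproduction depends on where the error is thrown):
    DO $$
    BEGIN
      FOR i IN 1..1000 LOOP
        BEGIN
          -- the query fails partway through; the error is trapped below, and
          -- before this fix any SPI tuple table left behind by the failed
          -- query was not reclaimed until the outer call ended
          PERFORM 1 / (g - 500) FROM generate_series(1, 1000) g;
        EXCEPTION WHEN division_by_zero THEN
          NULL;  -- swallow the error and keep looping
        END;
      END LOOP;
    END;
    $$;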
We could fix this locally in plpgsql with a bunch of PG_TRY/PG_CATCH
coding, but that would be tedious, probably slow, and prone to bugs of
omission; moreover it would do nothing for similar risks elsewhere.
What seems like a better plan is to make SPI itself responsible for
freeing tuple tables at subtransaction abort. This patch attacks the
problem that way, keeping a list of live tuple tables within each SPI
function context. Currently, such freeing is automatic for tuple tables
made within the failed subtransaction. We might later add a SPI call to
mark a tuple table as not to be freed this way, allowing callers to opt
out; but until someone exhibits a clear use-case for such behavior, it
doesn't seem worth bothering.
A very useful side-effect of this change is that SPI_freetuptable() can
now defend itself against bad calls, such as duplicate free requests;
this should make things more robust in many places. (In particular,
this reduces the risks involved if a third-party extension contains
now-redundant SPI_freetuptable() calls in error cleanup code.)
Even though the leakage problem is of long standing, it seems imprudent
to back-patch this into stable branches, since it does represent an API
semantics change for SPI users. We'll patch this in 9.3, but live with
the leakage in older branches.
Tom Lane [Thu, 25 Jul 2013 15:39:11 +0000 (11:39 -0400)]
Fix configure probe for sys/ucred.h.
The configure script's test for <sys/ucred.h> did not work on OpenBSD,
because on that platform <sys/param.h> has to be included first.
As a result, socket peer authentication was disabled on that platform.
Problem introduced in commit be4585b1c27ac5dbdd0d61740d18f7ad9a00e268.
Bruce Momjian [Thu, 25 Jul 2013 15:33:14 +0000 (11:33 -0400)]
pg_upgrade: adjust umask() calls
Since pg_upgrade -j on Windows uses threads, calling umask()
before/after opening a file via fopen_priv() is no longer possible, so
set umask() as we enter the thread-creating loop, and reset it on exit.
Also adjust internal fopen_priv() calls to just use fopen().
Backpatch to 9.3beta.
Bruce Momjian [Thu, 25 Jul 2013 02:01:14 +0000 (22:01 -0400)]
pg_upgrade: fix initialization of thread argument
Reorder initialization of the thread argument marker so that it happens
before reap_child() is called.
Backpatch to 9.3.
Tom Lane [Wed, 24 Jul 2013 21:42:03 +0000 (17:42 -0400)]
Improve ilist.h's support for deletion of slist elements during iteration.
Previously one had to use slist_delete(), implying an additional scan of
the list, making this infrastructure considerably less efficient than
traditional Lists when deletion of element(s) in a long list is needed.
Modify the slist_foreach_modify() macro to support deleting the current
element in O(1) time, by keeping a "prev" pointer in addition to "cur"
and "next". Although this makes iteration with this macro a bit slower,
no real harm is done, since in any scenario where you're not going to
delete the current list element you might as well just use slist_foreach
instead. Improve the comments about when to use each macro.
Back-patch to 9.3 so that we'll have consistent semantics in all branches
that provide ilist.h. Note this is an ABI break for callers of
slist_foreach_modify().
Bruce Momjian [Wed, 24 Jul 2013 17:15:47 +0000 (13:15 -0400)]
pg_upgrade: more Windows parallel/-j fixes
More fixes to handle Windows thread parameter passing.
Backpatch to 9.3 beta.
Patch originally from Andrew Dunstan
Bruce Momjian [Wed, 24 Jul 2013 14:00:37 +0000 (10:00 -0400)]
pg_upgrade: fix parallel/-j crash on Windows
This fixes the problem of passing the wrong function pointer when doing
parallel copy/link operations on Windows.
Backpatched to 9.3beta.
Found and patch supplied by Andrew Dunstan
Tom Lane [Wed, 24 Jul 2013 04:44:09 +0000 (00:44 -0400)]
Fix booltestsel() for case where we have NULL stats but not MCV stats.
In a boolean column that contains mostly nulls, ANALYZE might not find
enough non-null values to populate the most-common-values stats,
but it would still create a pg_statistic entry with stanullfrac set.
The logic in booltestsel() for this situation did the wrong thing for
"col IS NOT TRUE" and "col IS NOT FALSE" tests, forgetting that null
values would satisfy these tests (so that the true selectivity would
be close to one, not close to zero). Per bug #8274.
Fix by Andrew Gierth, some comment-smithing by me.
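For example, with a boolean column that is almost entirely NULL (hypothetical table; the fix changes the planner's estimate, not the query's result):
    CREATE TABLE flags (done boolean);
    INSERT INTO flags SELECT NULL FROM generate_series(1, 10000);
    INSERT INTO flags VALUES (true);
    ANALYZE flags;
    -- NULLs satisfy IS NOT TRUE, so nearly every row qualifies; the estimated
    -- selectivity should therefore be close to one, not close to zero
    EXPLAIN SELECT * FROM flags WHERE done IS NOT TRUE;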
Tom Lane [Tue, 23 Jul 2013 21:54:24 +0000 (17:54 -0400)]
Further hacking on ruleutils' new column-alias-assignment code.
After further thought about implicit coercions appearing in a joinaliasvars
list, I realized that they represent an additional reason why we might need
to reference the join output column directly instead of referencing an
underlying column. Consider SELECT x FROM t1 LEFT JOIN t2 USING (x) where
t1.x is of type date while t2.x is of type timestamptz. The merged output
variable is of type timestamptz, but it won't go to null when t2 does,
therefore neither t1.x nor t2.x is a valid substitute reference.
The code in get_variable() actually gets this case right, since it knows
it shouldn't look through a coercion, but we failed to ensure that the
unqualified output column name would be globally unique. To fix, modify
the code that trawls for a dangerous situation so that it actually scans
through an unnamed join's joinaliasvars list to see if there are any
non-simple-Var entries.
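A concrete form of the example above, as a hedged sketch (hypothetical table and view names):
    CREATE TABLE t1 (x date);
    CREATE TABLE t2 (x timestamptz);
    -- the merged USING column is of type timestamptz, produced by coercing
    -- t1.x; it does not go to null when t2 fails to match, so neither t1.x
    -- nor t2.x is a valid substitute when deparsing the view
    CREATE VIEW v AS SELECT x FROM t1 LEFT JOIN t2 USING (x);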
Tom Lane [Tue, 23 Jul 2013 20:23:04 +0000 (16:23 -0400)]
Change post-rewriter representation of dropped columns in joinaliasvars.
It's possible to drop a column from an input table of a JOIN clause in a
view, if that column is nowhere actually referenced in the view. But it
will still be there in the JOIN clause's joinaliasvars list. We used to
replace such entries with NULL Const nodes, which is handy for generation
of RowExpr expansion of a whole-row reference to the view. The trouble
with that is that it can't be distinguished from the situation after
subquery pull-up of a constant subquery output expression below the JOIN.
Instead, replace such joinaliasvars with null pointers (empty expression
trees), which can't be confused with pulled-up expressions. expandRTE()
still emits the old convention, though, for convenience of RowExpr
generation and to reduce the risk of breaking extension code.
In HEAD and 9.3, this patch also fixes a problem with some new code in
ruleutils.c that was failing to cope with implicitly-casted joinaliasvars
entries, as per recent report from Feike Steenbergen. That oversight was
because of an inadequate description of the data structure in parsenodes.h,
which I've now corrected. There were some pre-existing oversights of the
same ilk elsewhere, which I believe are now all fixed.
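A sketch of the situation being represented (hypothetical names):
    CREATE TABLE a (id int, junk text);
    CREATE TABLE b (id int);
    CREATE VIEW ab AS SELECT a.id FROM a JOIN b ON a.id = b.id;
    -- junk is not referenced by the view, so it can be dropped; its slot in
    -- the JOIN's joinaliasvars list then becomes a null pointer rather than
    -- a NULL Const, and cannot be mistaken for a pulled-up constant
    ALTER TABLE a DROP COLUMN junk;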
Tweak FOR UPDATE/SHARE error message wording (again)
In commit 0ac5ad5134 I changed some error messages from "FOR
UPDATE/SHARE" to a rather long gobbledygook which nobody liked. Then,
in commit cb9b66d31 I changed them again, but the alternative chosen
there was deemed suboptimal by Peter Eisentraut, who in message
1373937980.20441.8.camel@vanquo.pezone.net proposed an alternative
involving a dynamically-constructed string based on the actual locking
strength specified in the SQL command. This patch implements that
suggestion.
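For instance, queries that cannot accept row locking should now name the locking strength actually requested (hypothetical table; message text paraphrased from the expected form):
    -- reports something like "FOR UPDATE is not allowed with aggregate
    -- functions" rather than a generic FOR UPDATE/SHARE wording
    SELECT count(*) FROM accounts FOR UPDATE;
    -- likewise, this one should mention FOR SHARE
    SELECT count(*) FROM accounts FOR SHARE;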
Robert Haas [Mon, 22 Jul 2013 19:41:44 +0000 (15:41 -0400)]
Back-patch bgworker API changes to 9.3.
Commit 7f7485a0cde92aa4ba235a1ffe4dda0ca0b6cc9a made these changes
in master; per discussion, backport the API changes (but not the
functional changes), so that people don't get used to the 9.3 API
only to see it get broken in the next release. There are already
some people coding to the original 9.3 API, and this will cause
minor breakage, but there will be even more if we wait until next
year to roll out these changes.
Robert Haas [Mon, 22 Jul 2013 18:13:00 +0000 (14:13 -0400)]
Remove bgw_sighup and bgw_sigterm.
Per discussion on pgsql-hackers, these aren't really needed. Interim
versions of the background worker patch had the worker starting with
signals already unblocked, which would have made this necessary.
But the final version does not, so we don't really need it; and it
doesn't work well with the new facility for starting dynamic background
workers, so just rip it out.
Also per discussion on pgsql-hackers, back-patch this change to 9.3.
It's best to get the API break out of the way before we do an
official release of this facility, to avoid more pain for extension
authors later.
Tom Lane [Sat, 20 Jul 2013 16:44:37 +0000 (12:44 -0400)]
Fix error handling in PLy_spi_execute_fetch_result().
If an error is thrown out of the datatype I/O functions called by this
function, we need to do subtransaction cleanup, which the previous coding
entirely failed to do. Fortunately, both existing callers of this function
already have proper cleanup logic, so re-throwing the exception is enough.
Also, postpone creation of the resultset tupdesc until after the I/O
conversions are complete, so that we won't leak memory in TopMemoryContext
when such an error happens.
Peter Eisentraut [Sat, 20 Jul 2013 10:38:31 +0000 (06:38 -0400)]
Clean up new JSON API typedefs
The new JSON API uses a bit of an unusual typedef scheme, where for
example OkeysState is a pointer to okeysState. And that's not applied
consistently either. Change that to the more usual PostgreSQL style
where struct typedefs are upper case, and use pointers explicitly.
Fix HeapTupleSatisfiesVacuum on aborted updater xacts
By using only the HEAP_XMAX_IS_LOCKED_ONLY infomask-bit macro to verify
whether a multixact is not an updater, rather than the full
HeapTupleHeaderIsOnlyLocked, the code came to the wrong result for a
multixact containing an aborted update, and therefore returned the wrong
result code. This would cause predicate.c
to break completely (as in bug report #8273 from David Leverton), and
certain index builds would misbehave. As far as I can tell, other
callers of the bogus routine would make harmless mistakes or not be
affected by the difference at all; so this was a pretty narrow case.
Also, no other user of the HEAP_XMAX_IS_LOCKED_ONLY macro is as
careless; they all check specifically for the HEAP_XMAX_IS_MULTI case,
and they all verify whether the updater is InvalidXid before concluding
that it's a valid updater. So there doesn't seem to be any similar bug.
Tom Lane [Fri, 19 Jul 2013 01:22:43 +0000 (21:22 -0400)]
Fix regex match failures for backrefs combined with non-greedy quantifiers.
An ancient logic error in cfindloop() could cause the regex engine to fail
to find matches that begin later than the start of the string. This
function is only used when the regex pattern contains a back reference,
and so far as we can tell the error is only reachable if the pattern is
non-greedy (i.e. its first quantifier uses the ? modifier). Furthermore,
the actual match must begin after some potential match that satisfies the
DFA but then fails the back-reference's match test.
Reported and fixed by Jeevan Chalke, with cosmetic adjustments by me.
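A hedged illustration of the vulnerable pattern shape; whether a given pattern actually hit the old bug depended on the engine's internal match attempts:
    -- a back reference (\1) governed by a non-greedy quantifier (*?); the
    -- real match ('bcb') begins after the start of the string, while an
    -- earlier candidate satisfies the DFA but fails the backref check
    SELECT regexp_matches('abcb', '(\w)\w*?\1');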
Stephen Frost [Mon, 15 Jul 2013 14:42:27 +0000 (10:42 -0400)]
Correct off-by-one when reading from pipe
In pg_basebackup.c:reached_end_position(), we're reading from an
internal pipe with our own background process but we're possibly
reading more bytes than will actually fit into our buffer due to
an off-by-one error. As we're reading from an internal pipe
there's no real risk here, but it's good form to not depend on
such convenient arrangements.
Stephen Frost [Sun, 14 Jul 2013 21:25:47 +0000 (17:25 -0400)]
Be sure to close() file descriptor on error case
In receivelog.c:writeTimeLineHistoryFile(), we were not properly
closing the opened file descriptor in error cases. While this
wouldn't matter much if we were about to exit due to such an
error, that's not the case with pg_receivexlog as it can be a
long-running process and these errors are non-fatal.
This resource leak was found by the Coverity scanner.
Back-patch to 9.3 where this issue first appeared.
Stephen Frost [Sun, 14 Jul 2013 20:42:58 +0000 (16:42 -0400)]
Ensure 64bit arithmetic when calculating tapeSpace
In tuplesort.c:inittapes(), we calculate tapeSpace by first figuring
out how many 'tapes' we can use (maxTapes) and then multiplying the
result by the tape buffer overhead for each. Unfortunately, when
we are on a system with an 8-byte long, we allow work_mem to be
larger than 2GB and that allows maxTapes to be large enough that the
32bit arithmetic can overflow when multiplied against the buffer
overhead.
When this overflow happens, we end up adding the overflow to the
amount of space available, causing the amount of memory allocated to
be larger than work_mem.
Note that to reach this point, you have to set work_mem to at least
24GB and be sorting a set which is at least that size. Given that a
user who can set work_mem to 24GB could also set it even higher, if
they were looking to run the system out of memory, this isn't
considered a security issue.
This overflow risk was found by the Coverity scanner.
Back-patch to all supported branches, as this issue has existed
since before 8.4.
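The overflow could only arise with a very large work_mem, for example (hypothetical table; a correspondingly large data set is also required):
    SET work_mem = '24GB';
    -- an external sort at this scale computes maxTapes times the per-tape
    -- buffer overhead; with 32-bit arithmetic that product could overflow
    -- and inflate the memory budget beyond work_mem
    SELECT * FROM big_table ORDER BY some_column;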
Stephen Frost [Sun, 14 Jul 2013 19:31:23 +0000 (15:31 -0400)]
pg_receivexlog - Exit on failure to parse
In streamutil.c:GetConnection(), upgrade failure to parse the
connection string to an exit(1) instead of simply returning NULL.
Most callers already immediately exited, but pg_receivexlog would
loop on this case, continually trying to re-parse the connection
string (which can't be changed after pg_receivexlog has started).
GetConnection() was already expected to exit(1) in some cases
(eg: failure to allocate memory or if unable to determine the
integer_datetimes flag), so this change shouldn't surprise anyone.
Began looking at this due to the Coverity scanner complaining that
we were leaking err_msg in this case; it's no longer an issue since we
just exit(1) immediately.
Stephen Frost [Sun, 14 Jul 2013 18:35:26 +0000 (14:35 -0400)]
During parallel pg_dump, free commands from master
The command strings read by the child processes during parallel
pg_dump were not being freed after being handled.
This patch corrects this relatively minor memory leak.
Leak found by the Coverity scanner.
Back patch to 9.3 where parallel pg_dump was introduced.
Switch user ID to the object owner when populating a materialized view.
This makes superuser-issued REFRESH MATERIALIZED VIEW safe regardless of
the object's provenance. REINDEX is an earlier example of this pattern.
As a downside, functions called from materialized views must tolerate
running in a security-restricted operation. CREATE MATERIALIZED VIEW
need not change user ID. Nonetheless, avoid creation of materialized
views that will invariably fail REFRESH by making it, too, start a
security-restricted operation.
Back-patch to 9.3 so materialized views have this from the beginning.
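A sketch of the affected operations (hypothetical object names):
    CREATE TABLE orders (id int, total numeric);
    CREATE MATERIALIZED VIEW order_totals AS
        SELECT count(*) AS n, sum(total) AS total FROM orders;
    -- the refresh now runs as the view's owner inside a security-restricted
    -- operation, so a superuser-issued refresh is safe regardless of who
    -- defined the view
    REFRESH MATERIALIZED VIEW order_totals;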
Bruce Momjian [Thu, 11 Jul 2013 13:43:19 +0000 (09:43 -0400)]
pg_upgrade: document possible pg_hba.conf options
Previously, the pg_upgrade docs recommended using .pgpass with MD5
authentication to avoid being prompted for a password. It turns out that
pg_ctl never prompts for a password, so MD5 requires .pgpass --- document
that. Also recommend 'peer' authentication.
Backpatch to 9.1.
Tom Lane [Mon, 8 Jul 2013 02:37:28 +0000 (22:37 -0400)]
Fix planning of parameterized appendrel paths with expensive join quals.
The code in set_append_rel_pathlist() for building parameterized paths
for append relations (inheritance and UNION ALL combinations) supposed
that the cheapest regular path for a child relation would still be cheapest
when reparameterized. Which might not be the case, particularly if the
added join conditions are expensive to compute, as in a recent example from
Jeff Janes. Fix it to compare child path costs *after* reparameterizing.
We can short-circuit that if the cheapest pre-existing path is already
parameterized correctly, which seems likely to be true often enough to be
worth checking for.
Back-patch to 9.2 where parameterized paths were introduced.
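A rough sketch of the plan shape involved, assuming hypothetical tables and an artificially expensive join qual (whether parameterized child paths are actually chosen depends on indexes and costs):
    CREATE TABLE events (id int, payload text);
    CREATE TABLE events_2012 () INHERITS (events);
    CREATE TABLE events_2013 () INHERITS (events);
    CREATE INDEX ON events_2012 (id);
    CREATE INDEX ON events_2013 (id);
    CREATE TABLE ids (id int);
    -- each inheritance child is an appendrel member; child path costs must
    -- be compared after reparameterizing for the join, since the expensive
    -- qual can change which path is cheapest
    EXPLAIN SELECT *
    FROM ids JOIN events
      ON events.id = ids.id
     AND length(md5(events.payload || ids.id::text)) > 10;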
Tom Lane [Sat, 6 Jul 2013 15:16:53 +0000 (11:16 -0400)]
Rename a function to avoid naming conflict in parallel regression tests.
Commit 31a891857a128828d47d93c63e041f3b69cbab70 added some tests in
plpgsql.sql that used a function rather unthinkingly named "foo()".
However, rangefuncs.sql has some much older tests that create a function
of that name, and since these test scripts run in parallel, there is a
chance of failures if the timing is just right. Use another name to
avoid that. Per buildfarm (failure seen today on "hamerkop", but
probably it's happened before and not been noticed).
Tom Lane [Wed, 3 Jul 2013 16:26:33 +0000 (12:26 -0400)]
Fix handling of auto-updatable views on inherited tables.
An INSERT into such a view should work just like an INSERT into its base
table, ie the insertion should go directly into that table ... not be
duplicated into each child table, as was happening before, per bug #8275
from Rushabh Lathia. On the other hand, the current behavior for
UPDATE/DELETE seems reasonable: the update/delete traverses the child
tables, or not, depending on whether the view specifies ONLY or not.
Add some regression tests covering this area.
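For example (hypothetical tables):
    CREATE TABLE cities (name text, population int);
    CREATE TABLE capitals (state text) INHERITS (cities);
    CREATE VIEW all_cities AS SELECT name, population FROM cities;
    -- the insert goes directly into cities, just as an insert into the base
    -- table would; it is not duplicated into capitals
    INSERT INTO all_cities VALUES ('Springfield', 60000);
    -- update/delete through the view traverse the child table too, since
    -- the view's query does not say ONLY
    UPDATE all_cities SET population = population + 1;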
Specifically, permit attaching the enhanced error fields to an error in
RAISE and retrieving them from a caught error in GET STACKED
DIAGNOSTICS. RAISE enforces
nothing about the content of the fields; for its purposes, they are just
additional string fields. Consequently, clarify in the protocol and
libpq documentation that the usual relationships between error fields,
like a schema name appearing wherever a table name appears, are not
universal. This freedom has other applications; consider an FDW
propagating an error from an RDBMS that has no schema support.
Back-patch to 9.3, where core support for the error fields was
introduced. This prevents the confusion of having a release where libpq
exposes the fields and PL/pgSQL does not.
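A minimal sketch of the new capability (hypothetical field values; RAISE does not check that they name any real object):
    DO $$
    DECLARE
      v_table text;
    BEGIN
      BEGIN
        -- attach error-location fields to a raised error
        RAISE EXCEPTION 'custom failure'
          USING TABLE = 'orders', SCHEMA = 'sales';
      EXCEPTION WHEN OTHERS THEN
        -- retrieve the same field from the caught error
        GET STACKED DIAGNOSTICS v_table = TABLE_NAME;
        RAISE NOTICE 'failing table was: %', v_table;
      END;
    END;
    $$;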
Silence compiler warning in assertion-enabled builds.
With -Wtype-limits, gcc correctly points out that size_t can never be < 0.
Backpatch to 9.3 and 9.2. It's been like this forever, but in <= 9.1 you got
a lot of other warnings with -Wtype-limits anyway (at least with my version of
gcc).
Bruce Momjian [Mon, 1 Jul 2013 18:52:56 +0000 (14:52 -0400)]
pg_dump docs: use escaped double-quotes, for Windows
On Unix, you can embed double-quotes in single-quotes, and vice versa.
However, on Windows, you can only escape double-quotes in double-quotes,
so use that in the pg_dump -t/table example.
Backpatch to 9.3.
Report from Mike Toews
Tom Lane [Thu, 27 Jun 2013 17:54:55 +0000 (13:54 -0400)]
Mark index-constraint comments with correct dependency in pg_dump.
When there's a comment on an index that was created with UNIQUE or PRIMARY
KEY constraint syntax, we need to label the comment as depending on the
constraint not the index, since only the constraint object actually appears
in the dump. This incorrect dependency can lead to parallel pg_restore
trying to restore the comment before the index has been created, per bug
#8257 from Lloyd Albin.
This patch fixes pg_dump to produce the right dependency in dumps made
in the future. Usually we also try to hack pg_restore to work around
bogus dependencies, so that existing (wrong) dumps can still be restored in
parallel mode; but that doesn't seem practical here since there's no easy
way to relate the constraint dump entry to the comment after the fact.
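The affected objects look like this (hypothetical names):
    CREATE TABLE items (id int CONSTRAINT items_pkey PRIMARY KEY);
    -- the index items_pkey exists only because of the constraint; in a dump
    -- the comment must depend on the constraint entry, or a parallel restore
    -- may try to create the comment before the index exists
    COMMENT ON INDEX items_pkey IS 'index backing the items primary key';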
Tom Lane [Thu, 27 Jun 2013 16:36:44 +0000 (12:36 -0400)]
Expect EWOULDBLOCK from a non-blocking connect() call only on Windows.
On Unix-ish platforms, EWOULDBLOCK may be the same as EAGAIN, which is
*not* a success return, at least not on Linux. We need to treat it as a
failure to avoid giving a misleading error message. Per the Single Unix
Spec, only EINPROGRESS and EINTR returns indicate that the connection
attempt is in progress.
On Windows, on the other hand, EWOULDBLOCK (WSAEWOULDBLOCK) is the expected
case. We must accept EINPROGRESS as well because Cygwin will return that,
and it doesn't seem worth distinguishing Cygwin from native Windows here.
It's not very clear whether EINTR can occur on Windows, but let's leave
that part of the logic alone in the absence of concrete trouble reports.
Also, remove the test for errno == 0, effectively reverting commit
da9501bddb42222dc33c031b1db6ce2133bcee7b, which AFAICS was just a thinko;
or at best it might have been a workaround for a platform-specific bug,
which we can hope is gone now thirteen years later. In any case, since
libpq makes no effort to reset errno to zero before calling connect(),
it seems unlikely that that test has ever reliably done anything useful.
Tom Lane [Thu, 27 Jun 2013 04:23:37 +0000 (00:23 -0400)]
Tweak wording in sequence-function docs to avoid PDF build failures.
Adjust the wording in the first para of "Sequence Manipulation Functions"
so that neither of the link phrases in it break across line boundaries,
in either A4- or US-page-size PDF output. This fixes a reported build
failure for the 9.3beta2 A4 PDF docs, and future-proofs this particular
para against causing similar problems in future. (Perhaps somebody will
fix this issue in the SGML/TeX documentation tool chain someday, but I'm
not holding my breath.)
Back-patch to all supported branches, since the same problem could rise up
to bite us in future updates if anyone changes anything earlier than this
in func.sgml.
Alvaro Herrera [Tue, 25 Jun 2013 20:36:29 +0000 (16:36 -0400)]
Avoid inconsistent type declaration
Clang 3.3 correctly complains that a variable of type enum
MultiXactStatus cannot hold a value of -1, which makes sense. Change
the declared type of the variable to int instead, and apply casting as
necessary to avoid the warning.
Andrew Dunstan [Tue, 25 Jun 2013 17:46:10 +0000 (13:46 -0400)]
Properly dump dropped foreign table cols in binary-upgrade mode.
In binary upgrade mode, we need to recreate and then drop dropped
columns so that all the columns get the right attribute number. This is
true for foreign tables as well as for native tables. For foreign
tables we have been getting the first part right but not the second,
leading to bogus columns in the upgraded database. Fix this all the way
back to 9.1, where foreign tables were introduced.
Fujii Masao [Tue, 25 Jun 2013 17:18:26 +0000 (02:18 +0900)]
Support clean switchover.
In replication, when we shut down the master, walsender tries to send
all the outstanding WAL records to the standby, and then to exit. This
basically means that all the WAL records are fully synced between the
two servers after a clean shutdown of the master. So, after promoting
the standby to be the new master, we can restart the stopped master as
the new standby without needing a fresh backup from the new master.
But there was one problem so far: though walsender tries to send all
the outstanding WAL records, it doesn't wait for them to be replicated
to the standby. So walreceiver can detect the closure of the connection
and exit before it has received all the WAL records, and we cannot
guarantee that no WAL is missing on the standby after a clean shutdown
of the master. In that case, a backup from the new master is required
when restarting the stopped master as the new standby.
This patch fixes this problem. It just changes walsender so that it
waits for all the outstanding WAL records to be replicated to the
standby before closing the replication connection.
Per discussion, this is a fix that needs to be back-patched rather than
a new feature. So, back-patch to 9.1, where enough infrastructure for
this exists.
Peter Eisentraut [Sat, 22 Jun 2013 02:48:06 +0000 (22:48 -0400)]
doc: Fix date in EPUB manifest
If there is no <date> element, the publication date for the EPUB
manifest is taken from the copyright year. But something like
"1996-2013" is not a legal date specification. So the EPUB output
currently fails epubcheck.
Put in a separate <date> element with the current year. Put it in
legal.sgml, because copyright.pl already instructs that it be updated
manually, so it hopefully won't be missed.
Kevin Grittner [Wed, 19 Jun 2013 15:37:39 +0000 (10:37 -0500)]
Fix the create_index regression test for Danish collation.
In Danish collations, there are letter combinations which sort
higher than 'Z'. A test for values > 'WA' was picking up rows
where the value started with 'AA', causing the test to fail.
Backpatch to 9.2, where the failing test was added.
Per report from Svenne Krap and analysis by Jeff Janes
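The surprising comparison, assuming the da_DK collation is installed:
    -- in Danish, 'AA' is treated like 'Å' and sorts after 'Z', so a value
    -- beginning with 'AA' satisfies a > 'WA' test
    SELECT 'AAA' > 'WA' COLLATE "da_DK";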
Jeff Davis [Mon, 17 Jun 2013 15:02:12 +0000 (08:02 -0700)]
Add buffer_std flag to MarkBufferDirtyHint().
MarkBufferDirtyHint() writes WAL, and should know if it's got a
standard buffer or not. Currently, the only callers where buffer_std
is false are related to the FSM.
In passing, rename XLOG_HINT to XLOG_FPI, which is more descriptive.
Tom Lane [Sat, 15 Jun 2013 20:22:29 +0000 (16:22 -0400)]
Use WaitLatch, not pg_usleep, for delaying in pg_sleep().
This avoids platform-dependent behavior wherein pg_sleep() might fail to be
interrupted by statement timeout, query cancel, SIGTERM, etc. Also, since
there's no reason to wake up once a second any more, we can reduce the
power consumption of a sleeping backend a tad.
Back-patch to 9.3, since use of SA_RESTART for SIGALRM makes this a bigger
issue than it used to be.
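For example, a sleep should now be cancelled promptly by a statement timeout on every platform:
    SET statement_timeout = '1s';
    -- previously SIGALRM could fail to interrupt the underlying sleep on
    -- some platforms; with WaitLatch the timeout cancels this promptly
    SELECT pg_sleep(60);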
Tom Lane [Sat, 15 Jun 2013 19:39:51 +0000 (15:39 -0400)]
Use SA_RESTART for all signals, including SIGALRM.
The exclusion of SIGALRM dates back to Berkeley days, when Postgres used
SIGALRM in only one very short stretch of code. Nowadays, allowing it to
interrupt kernel calls doesn't seem like a very good idea, since its use
for statement_timeout means SIGALRM could occur anyplace in the code, and
there are far too many call sites where we aren't prepared to deal with
EINTR failures. When third-party code is taken into consideration, it
seems impossible that we ever could be fully EINTR-proof, so better to
use SA_RESTART always and deal with the implications of that. One such
implication is that we should not assume pg_usleep() will be terminated
early by a signal. Therefore, long sleeps should probably be replaced
by WaitLatch operations where practical.
Back-patch to 9.3 so we can get some beta testing on this change.
Tom Lane [Fri, 14 Jun 2013 18:26:43 +0000 (14:26 -0400)]
Avoid deadlocks during insertion into SP-GiST indexes.
SP-GiST's original scheme for avoiding deadlocks during concurrent index
insertions doesn't work, as per report from Hailong Li, and there isn't any
evident way to make it work completely. We could possibly lock individual
inner tuples instead of their whole pages, but preliminary experimentation
suggests that the performance penalty would be huge. Instead, if we fail
to get a buffer lock while descending the tree, just restart the tree
descent altogether. We keep the old tuple positioning rules, though, in
hopes of reducing the number of cases where this can happen.
Tom Lane [Fri, 14 Jun 2013 03:15:15 +0000 (23:15 -0400)]
Remove special-case treatment of LOG severity level in standalone mode.
elog.c has historically treated LOG messages as low-priority during
bootstrap and standalone operation. This has led to confusion and even
masked a bug, because the normal expectation of code authors is that
elog(LOG) will put something into the postmaster log, and that wasn't
happening during initdb. So get rid of the special-case rule and make
the priority order the same as it is in normal operation. To keep from
cluttering initdb's output and the behavior of a standalone backend,
tweak the severity level of three messages routinely issued by xlog.c
during startup and shutdown so that they won't appear in these cases.
Per my proposal back in December.
Tom Lane [Fri, 14 Jun 2013 02:35:56 +0000 (22:35 -0400)]
Refactor checksumming code to make it easier to use externally.
pg_filedump and other external utility programs are likely to want to be
able to check Postgres page checksums. To avoid messy duplication of code,
move the checksumming functionality into an exported header file, much as
we did awhile back for the CRC code.
In passing, get rid of an unportable assumption that a static char[] array
will be word-aligned, and do some other minor code beautification.
Peter Eisentraut [Fri, 14 Jun 2013 01:42:42 +0000 (21:42 -0400)]
PL/Python: Fix type mixup
Memory was allocated based on the sizeof a type that was not the type of
the pointer that the result was being assigned to. The types happen to
be of the same size, but it's still wrong.
Tom Lane [Thu, 13 Jun 2013 17:11:29 +0000 (13:11 -0400)]
Only install a portal's ResourceOwner if it actually has one.
In most scenarios a portal without a ResourceOwner is dead and not subject
to any further execution, but a portal for a cursor WITH HOLD remains in
existence with no ResourceOwner after the creating transaction is over.
In this situation, if we attempt to "execute" the portal directly to fetch
data from it, we were setting CurrentResourceOwner to NULL, leading to a
segfault if the datatype output code did anything that required a resource
owner (such as trying to fetch system catalog entries that weren't already
cached). The case appears to be impossible to provoke with stock libpq,
but psqlODBC at least is able to cause it when working with held cursors.
Simplest fix is to just skip the assignment to CurrentResourceOwner, so
that any resources used by the data output operations will be managed by
the transaction-level resource owner instead. For consistency I changed
all the places that install a portal's resowner as current, even though
some of them are probably not reachable with a held cursor's portal.
Per report from Joshua Berry (with thanks to Hiroshi Inoue for developing
a self-contained test case). Back-patch to all supported versions.
Noah Misch [Wed, 12 Jun 2013 23:51:12 +0000 (19:51 -0400)]
Avoid reading past datum end when parsing JSON.
Several loops in the JSON parser examined a byte in memory just before
checking whether its address was in-bounds, so they could read one byte
beyond the datum's allocation. A SIGSEGV is possible. New in 9.3, so
no back-patch.