Tom Lane [Mon, 25 Jul 2011 03:29:10 +0000 (23:29 -0400)]
Fix previous patch so it also works if not USE_SSL (mea culpa).
On balance, the need to cover this case changes my mind in favor of pushing
all error-message generation duties into the two fe-secure.c routines.
So do it that way.
Tom Lane [Sun, 24 Jul 2011 20:29:13 +0000 (16:29 -0400)]
Improve libpq's error reporting for SSL failures.
In many cases, pqsecure_read/pqsecure_write set up useful error messages,
which were then overwritten with useless ones by their callers. Fix this
by defining the responsibility to set an error message to be entirely that
of the lower-level function when using SSL.
Back-patch to 8.3; the code is too different in 8.2 to be worth the
trouble.
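For illustration, a minimal sketch of the revised division of labor, assuming a
USE_SSL build; the function body and message wording below are illustrative, not
the committed fe-secure.c code:

    /*
     * Sketch only: after this change the low-level routine reports the
     * specific SSL failure itself, and callers must not overwrite
     * conn->errorMessage with a generic message afterwards.
     */
    static ssize_t
    pqsecure_read_sketch(PGconn *conn, void *ptr, size_t len)
    {
        int     n = SSL_read(conn->ssl, ptr, len);

        if (n < 0)
            printfPQExpBuffer(&conn->errorMessage,
                              libpq_gettext("SSL error: %s\n"),
                              ERR_reason_error_string(ERR_get_error()));
        return n;
    }

    /* Caller side (e.g. pqReadData): just propagate the failure,
     * without calling printfPQExpBuffer again. */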
Tom Lane [Sun, 24 Jul 2011 19:17:56 +0000 (15:17 -0400)]
Use OpenSSL's SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER flag.
This disables an entirely unnecessary "sanity check" that causes failures
in nonblocking mode, because OpenSSL complains if we move or compact the
write buffer. The only actual requirement is that we not modify pending
data once we've attempted to send it, which we don't. Per testing and
research by Martin Pihlak, though this fix is a lot simpler than his patch.
I put the same change into the backend, although it's less clear whether
it's necessary there. We do use nonblock mode in some situations in
streaming replication, so it seems best to keep the same behavior in the
backend as in libpq.
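For reference, the flag is applied with OpenSSL's SSL_CTX_set_mode() (or
SSL_set_mode() on an individual connection); a minimal sketch:

    #include <openssl/ssl.h>

    /*
     * Permit the caller to move or compact its write buffer between
     * retries of a nonblocking SSL_write(); the pending data itself is
     * never modified, which is the only real requirement.
     */
    static SSL_CTX *
    make_ctx_sketch(void)
    {
        SSL_CTX    *ctx = SSL_CTX_new(SSLv23_method());

        if (ctx != NULL)
            SSL_CTX_set_mode(ctx, SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER);
        return ctx;
    }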
Peter Eisentraut [Sat, 23 Jul 2011 21:25:29 +0000 (00:25 +0300)]
Change EDITOR_LINENUMBER_SWITCH to an environment variable
Also change "switch" to "arg" because "switch" is a bit of a sloppy
term. So the environment variable is called
PSQL_EDITOR_LINENUMBER_ARG. Set "+" as hardcoded default value on
Unix (since "vi" is the hardcoded default editor), so many users won't
have to configure this at all. Move the documentation around a bit to
centralize the editor configuration under environment variables,
rather than repeating bits of it under every backslash command that
invokes an editor.
Tom Lane [Sat, 23 Jul 2011 20:59:49 +0000 (16:59 -0400)]
Rethink behavior of CREATE OR REPLACE during CREATE EXTENSION.
The original implementation simply did nothing when replacing an existing
object during CREATE EXTENSION. The folly of this was exposed by a report
from Marc Munro: if the existing object belongs to another extension, we
are left in an inconsistent state. We should insist that the object does
not belong to another extension, and then add it to the current extension
if not already a member.
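A rough sketch of that membership check; getExtensionOfObject() and
recordDependencyOnCurrentExtension() are existing backend functions, but the
signature, variable names, and control flow shown here are assumptions, not the
committed code:

    /* Sketch: CREATE EXTENSION is about to "replace" an existing object;
     * "object" stands for its ObjectAddress, classId/objectId for its keys */
    Oid         oldext = getExtensionOfObject(classId, objectId);

    if (OidIsValid(oldext) && oldext != CurrentExtensionObject)
        ereport(ERROR,
                (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                 errmsg("existing object belongs to another extension")));

    /* otherwise absorb it into the extension being created, if it
     * isn't already a member */
    if (!OidIsValid(oldext))
        recordDependencyOnCurrentExtension(&object, true);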
Robert Haas [Fri, 22 Jul 2011 20:15:43 +0000 (16:15 -0400)]
Unbreak unlogged tables.
I broke this in commit 5da79169d3e9f0fab47da03318c44075b3f824c5, which
was obviously insufficiently well tested. Add some regression tests
in the hope of making future slip-ups more likely to be noticed.
Tom Lane [Thu, 21 Jul 2011 16:24:14 +0000 (12:24 -0400)]
Fix PQsetvalue() to avoid possible crash when adding a new tuple.
PQsetvalue unnecessarily duplicated the logic in pqAddTuple, and didn't
duplicate it exactly either --- pqAddTuple does not care what is in the
tuple-pointer array positions beyond the last valid entry, whereas the
code in PQsetvalue assumed such positions would contain NULL. This led
to possible crashes if PQsetvalue was applied to a PGresult that had
previously been enlarged with pqAddTuple, for instance one built from a
server query. Fix by relying on pqAddTuple instead of duplicating logic,
and not assuming anything about the contents of res->tuples[res->ntups].
Back-patch to 8.4, where PQsetvalue was introduced.
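A self-contained client-side sketch of growing a PGresult one tuple at a time
with PQsetvalue(); the column description values are arbitrary:

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGresult   *res = PQmakeEmptyPGresult(NULL, PGRES_TUPLES_OK);
        PGresAttDesc att;

        memset(&att, 0, sizeof(att));
        att.name = "greeting";      /* arbitrary example column */
        att.format = 0;             /* text */
        att.typid = 25;             /* TEXTOID */
        att.typlen = -1;
        att.atttypmod = -1;
        PQsetResultAttrs(res, 1, &att);

        /* adding tuple 0, then tuple 1, grows the result one row at a time */
        PQsetvalue(res, 0, 0, "hello", 5);
        PQsetvalue(res, 1, 0, "world", 5);

        printf("%d rows: %s, %s\n", PQntuples(res),
               PQgetvalue(res, 0, 0), PQgetvalue(res, 1, 0));
        PQclear(res);
        return 0;
    }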
Tom Lane [Sat, 16 Jul 2011 18:21:24 +0000 (14:21 -0400)]
Replace errdetail("%s", ...) with errdetail_internal("%s", ...).
There may be some other places where we should use errdetail_internal,
but they'll have to be evaluated case-by-case. This commit just hits
a bunch of places where invoking gettext is obviously a waste of cycles.
Tom Lane [Sat, 16 Jul 2011 17:41:48 +0000 (13:41 -0400)]
Add an errdetail_internal() ereport auxiliary routine.
This function supports untranslated detail messages, in the same way that
errmsg_internal supports untranslated primary messages. We've needed this
for some time IMO, but discussion of some cases in the SSI code provided
the impetus to actually add it.
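Usage matches errdetail() except that the text is not passed through gettext; a
hedged example (the reason variable is a placeholder):

    /* Illustrative only: detail text assembled at runtime, so running it
     * through gettext would be wasted effort. */
    ereport(ERROR,
            (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
             errmsg("could not serialize access due to read/write dependencies among transactions"),
             errdetail_internal("%s", reason),
             errhint("The transaction might succeed if retried.")));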
Magnus Hagander [Sat, 16 Jul 2011 17:58:53 +0000 (19:58 +0200)]
Fix SSPI login when multiple roundtrips are required
This fixes SSPI login failures showing "The function
requested is not supported", often showing up when connecting
to localhost. The reason was not properly updating the SSPI
handle when multiple roundtrips were required to complete the
authentication sequence.
Report and analysis by Ahmed Shinwari, patch by Magnus Hagander
Fix two ancient bugs in GiST code to re-find a parent after page split:
First, when following a right-link, we incorrectly marked the current page
as the parent of the right sibling. In reality, the parent of the right page
is the same as the parent of the current page (or some page to the right of
it; gistFindCorrectParent() will sort that out).
Secondly, when we follow a right-link, we must prepend, not append, the right
page to our list of pages to visit. That's because we assume that once we
hit a leaf page in the list, all the rest are leaf pages too, and give up.
To hit these bugs, you need concurrent actions and several unlucky accidents.
Another backend must split the root page, while you're in process of
splitting a lower-level page. Furthermore, while you scan the internal nodes
to re-find the parent, another backend needs to again split some more internal
pages. Even then, the bugs don't necessarily manifest as user-visible errors
or index corruption.
While we're at it, make the error reporting a bit better if gistFindPath()
fails to re-find the parent. It used to be an assertion, but an elog() seems
more appropriate.
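In list terms, the second fix boils down to using lcons() instead of lappend()
when queueing the right sibling; a rough sketch with illustrative variable
names, not the committed gistFindPath() code:

    /*
     * Sketch: the scan assumes that once a leaf page is reached in the
     * queue, everything after it is a leaf too.  A right sibling found by
     * following a right-link is at the same level as the current page,
     * so it must go to the front of the queue, with the current page's
     * parent recorded as its parent.
     */
    ptr = (GISTInsertStack *) palloc0(sizeof(GISTInsertStack));
    ptr->blkno = rightlink;
    ptr->parent = top->parent;          /* not the current page itself */

    /* wrong: fifo = lappend(fifo, ptr);   would land after leaf pages */
    fifo = lcons(ptr, fifo);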
Tom Lane [Thu, 14 Jul 2011 21:30:57 +0000 (17:30 -0400)]
In planner, don't assume that empty parent tables aren't really empty.
There's a heuristic in estimate_rel_size() to clamp the minimum size
estimate for a table to 10 pages, unless we can see that vacuum or analyze
has been run (and set relpages to something nonzero, so this will always
happen for a table that's actually empty). However, it would be better
not to do this for inheritance parent tables, which very commonly are
really empty and can be expected to stay that way. Per discussion of a
recent pgsql-performance report from Anish Kejariwal. Also prevent it
from happening for indexes (although this is more in the nature of
documentation, since CREATE INDEX normally initializes relpages to
something nonzero anyway).
Back-patch to 9.0, because the ability to collect statistics across a
whole inheritance tree has improved the planner's estimates to the point
where this relatively small error makes a significant difference. In the
referenced report, merge or hash joins were incorrectly estimated as
cheaper than a nestloop with inner indexscan on the inherited table.
That was less likely before 9.0 because the lack of inherited stats would
have resulted in a default (and rather pessimistic) estimate of the cost
of a merge or hash join.
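A sketch of the adjusted heuristic in estimate_rel_size(), with the surrounding
code simplified:

    /*
     * Sketch: if the table has never been vacuumed or analyzed
     * (relpages == 0) and looks empty right now, assume 10 pages --
     * unless it is an inheritance parent, which is commonly empty for
     * good, or an index.
     */
    if (curpages < 10 &&
        rel->rd_rel->relpages == 0 &&
        !rel->rd_rel->relhassubclass &&
        rel->rd_rel->relkind != RELKIND_INDEX)
        curpages = 10;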
Tom Lane [Tue, 12 Jul 2011 22:24:53 +0000 (18:24 -0400)]
Avoid listing ungrouped Vars in the targetlist of Agg-underneath-Window.
Regular aggregate functions in combination with, or within the arguments
of, window functions are OK per spec; they have the semantics that the
aggregate output rows are computed and then we run the window functions
over that row set. (Thus, this combination is not really useful unless
there's a GROUP BY so that more than one aggregate output row is possible.)
The case without GROUP BY could fail, as recently reported by Jeff Davis,
because sloppy construction of the Agg node's targetlist resulted in extra
references to possibly-ungrouped Vars appearing outside the aggregate
function calls themselves. See the added regression test case for an
example.
Fixing this requires modifying the API of flatten_tlist and its underlying
function pull_var_clause. I chose to make pull_var_clause's API for
aggregates identical to what it was already doing for placeholders, since
the useful behaviors turn out to be the same (error, report node as-is, or
recurse into it). I also tightened the error checking in this area a bit:
if it was ever valid to see an uplevel Var, Aggref, or PlaceHolderVar here,
that was a long time ago, so complain instead of ignoring them.
Backpatch into 9.1. The failure exists in 8.4 and 9.0 as well, but seeing
that it only occurs in a basically-useless corner case, it doesn't seem
worth the risks of changing a function API in a minor release. There might
be third-party code using pull_var_clause.
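Callers now say explicitly what should happen to Aggrefs, mirroring the existing
PlaceHolderVar options; a sketch assuming the post-patch signature and enum
names:

    /* Error out if any ungrouped Aggref or PlaceHolderVar shows up: */
    vars = pull_var_clause((Node *) tlist,
                           PVC_REJECT_AGGREGATES,
                           PVC_REJECT_PLACEHOLDERS);

    /* ...or keep whole Aggrefs so they stay attached to the Agg node: */
    vars = pull_var_clause((Node *) tlist,
                           PVC_INCLUDE_AGGREGATES,
                           PVC_INCLUDE_PLACEHOLDERS);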
Tom Lane [Fri, 8 Jul 2011 21:03:06 +0000 (17:03 -0400)]
Fix another oversight in logging of changes in postgresql.conf settings.
We were using GetConfigOption to collect the old value of each setting,
overlooking the possibility that it didn't exist yet. This does happen
in the case of adding a new entry within a custom variable class, as
exhibited in bug #6097 from Maxim Boguk.
To fix, add a missing_ok parameter to GetConfigOption, but only in 9.1
and HEAD --- it seems possible that some third-party code is using that
function, so changing its API in a minor release would cause problems.
In 9.0, create a near-duplicate function instead.
Tom Lane [Thu, 7 Jul 2011 23:34:24 +0000 (19:34 -0400)]
Update examples for string-related functions.
In the example for decode(), show the bytea result in hex format,
since that's now the default. Use an E'' string in the example for
quote_literal(), so that it works regardless of the
standard_conforming_strings setting. On the functions-for-binary-strings
page, leave the examples as-is for readability, but add a note pointing out
that they are shown in escape format. Per comments from Thom Brown.
Also, improve the description for encode() and decode() a tad.
Backpatch to 9.0, where bytea_output was introduced.
There's a small window wherein a transaction is committed but not yet
on the finished list, and we shouldn't flag it as a potential conflict
in that case. We can also skip adding a doomed transaction to the list of
possible conflicts because we know it won't commit.
SSI has a race condition, where the order of commit sequence numbers of
transactions might not match the order the work done in those transactions
become visible to others. The logic in SSI, however, assumed that it does.
Fix that by having two sequence numbers for each serializable transaction,
one taken before a transaction becomes visible to others, and one after it.
This is easier than trying to make the transition totally atomic, which
would require holding ProcArrayLock and SerializableXactHashLock at the same
time. By using prepareSeqNo instead of commitSeqNo in a few places where
commit sequence numbers are compared, we can make those comparisons err on
the safe side when we don't know for sure which committed first.
Per analysis by Kevin Grittner and Dan Ports, but this approach to fix it
is different from the original patch.
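A rough structural sketch of the two sequence numbers; the struct is drastically
reduced and the variable names are illustrative:

    /* Drastically reduced sketch of the relevant SERIALIZABLEXACT fields */
    typedef struct SketchSxact
    {
        SerCommitSeqNo  prepareSeqNo;   /* taken before becoming visible */
        SerCommitSeqNo  commitSeqNo;    /* taken after becoming visible */
    } SketchSxact;

    /*
     * At the comparison sites touched by the patch, testing against
     * prepareSeqNo instead of commitSeqNo errs on the safe side when we
     * can't be sure which transaction really committed first:
     */
    possibly_first = (other->prepareSeqNo <= mine->commitSeqNo);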
Robert Haas [Thu, 7 Jul 2011 19:05:21 +0000 (15:05 -0400)]
Adjust OLDSERXID_MAX_PAGE based on BLCKSZ.
The value when BLCKSZ = 8192 is unchanged, but with larger-than-normal
block sizes we might need to crank things back a bit, as we'll have
more entries per page than normal in that case.
If there's a dangerous structure T0 ---> T1 ---> T2, and T2 commits first,
we need to abort something. If T2 commits before both conflicts appear,
then it should be caught by OnConflict_CheckForSerializationFailure. If
both conflicts appear before T2 commits, it should be caught by
PreCommit_CheckForSerializationFailure. But that is actually run when
T2 *prepares*. Fix that in OnConflict_CheckForSerializationFailure, by
treating a prepared T2 as if it committed already.
This is mostly a problem for prepared transactions, which are in prepared
state for some time, but also for regular transactions because they also go
through the prepared state in the SSI code for a short moment when they're
committed.
Tom Lane [Wed, 6 Jul 2011 18:53:16 +0000 (14:53 -0400)]
Remove assumptions that not-equals operators cannot be in any opclass.
get_op_btree_interpretation assumed this in order to save some duplication
of code, but it's not true in general anymore because we added <> support
to btree_gist. (We still assume it for btree opclasses, though.)
Also, essentially the same logic was baked into predtest.c. Get rid of
that duplication by generalizing get_op_btree_interpretation so that it
can be used by predtest.c.
Per bug report from Denis de Bernardy and investigation by Jeff Davis,
though I didn't use Jeff's patch exactly as-is.
Back-patch to 9.1; we do not support this usage before that.
Tom Lane [Tue, 5 Jul 2011 22:46:03 +0000 (18:46 -0400)]
Make the file_fdw validator check that a filename option has been provided.
This was already a runtime failure condition, but it's better to check
at validation time if possible. Lightly modified version of a patch
by Shigeru Hanada.
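A hedged sketch of the validation-time check, following the usual FDW validator
pattern (untransformRelOptions() over DefElem options); the details are
illustrative rather than the committed file_fdw code:

    Datum
    file_fdw_validator_sketch(PG_FUNCTION_ARGS)
    {
        List       *options_list = untransformRelOptions(PG_GETARG_DATUM(0));
        Oid         catalog = PG_GETARG_OID(1);
        char       *filename = NULL;
        ListCell   *cell;

        foreach(cell, options_list)
        {
            DefElem    *def = (DefElem *) lfirst(cell);

            if (strcmp(def->defname, "filename") == 0)
                filename = defGetString(def);
        }

        /* filename is required, so complain at CREATE time, not first use */
        if (catalog == ForeignTableRelationId && filename == NULL)
            ereport(ERROR,
                    (errcode(ERRCODE_FDW_DYNAMIC_PARAMETER_VALUE_NEEDED),
                     errmsg("filename is required for file_fdw foreign tables")));

        PG_RETURN_VOID();
    }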
Tom Lane [Tue, 5 Jul 2011 19:54:00 +0000 (15:54 -0400)]
Restructure foreign data wrapper chapter so it has more than one section.
As noted by Laurenz Albe, our SGML tools deal rather oddly with chapters
having just one <sect1>. Perhaps the tooling could be fixed, but really
the design of this chapter's introduction is pretty bogus anyhow. Split
it into a true introduction and a <sect1> about the FDW functions, so
that it reads better and dodges the lack-of-a-chapter-TOC problem.
Tom Lane [Tue, 5 Jul 2011 16:04:40 +0000 (12:04 -0400)]
Fix psql's counting of script file line numbers during COPY.
handleCopyIn incremented pset.lineno for each line of COPY data read from
a file. This is correct when reading from the current script file (i.e.,
we are doing COPY FROM STDIN followed by in-line data), but it's wrong if
the data is coming from some other file. Per bug #6083 from Steve Haslam.
Back-patch to all supported versions.
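A sketch of the corrected accounting in handleCopyIn(); variable names
approximate psql's internals:

    /*
     * Sketch: only lines read from the current script (COPY FROM STDIN
     * followed by in-line data) advance the script line counter; data
     * coming from some other file must leave pset.lineno alone.
     */
    if (copystream == pset.cur_cmd_source)
        pset.lineno++;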
Simon Riggs [Mon, 4 Jul 2011 08:30:50 +0000 (09:30 +0100)]
Reset ALTER TABLE lock levels to AccessExclusiveLock in all cases.
Locks on inheritance parents remain at a lower level, as they were before.
Remove entry from 9.1 release notes.
Tom Lane [Mon, 4 Jul 2011 02:12:20 +0000 (22:12 -0400)]
Fix omissions in documentation of the pg_roles view.
Somehow, column rolconfig got removed from the documentation of the
pg_roles view in the 9.0 cycle, although the column is actually still
there. In 9.1, we'd also forgotten to document the rolreplication column.
Spotted by Sakamoto Masahiko.
Robert Haas [Sun, 3 Jul 2011 21:34:47 +0000 (17:34 -0400)]
Fix bugs in relpersistence handling during table creation.
Unlike the relistemp field which it replaced, relpersistence must be
set correctly quite early during the table creation process, as we
rely on it quite early on for a number of purposes, including security
checks. Normally, this is set based on whether the user enters CREATE
TABLE, CREATE UNLOGGED TABLE, or CREATE TEMPORARY TABLE, but a
relation may also be made implicitly temporary by creating it in
pg_temp. This patch fixes the handling of that case, and also
disables creation of unlogged tables in temporary tablespace (such
tables indeed skip WAL-logging, but we reject an explicit
specification) and creation of relations in the temporary schemas of
other sessions (which is not very sensible, and didn't work right
anyway).
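For context, relpersistence is a single char column in pg_class; the values
below are the actual catalog constants, and the rule sketched in the comment is
a paraphrase of the fix, not the committed code:

    /* Actual pg_class.relpersistence values, shown for context */
    #define RELPERSISTENCE_PERMANENT    'p' /* plain table */
    #define RELPERSISTENCE_UNLOGGED     'u' /* CREATE UNLOGGED TABLE */
    #define RELPERSISTENCE_TEMP         't' /* CREATE TEMP TABLE, or anything
                                             * created in pg_temp */

    /*
     * Paraphrase of the rule being enforced: a relation whose schema
     * resolves to the session's pg_temp namespace is temporary no matter
     * what the user typed, UNLOGGED there is rejected, and creating
     * relations in another session's temp schema is rejected too.
     */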
Tom Lane [Sun, 3 Jul 2011 17:55:02 +0000 (13:55 -0400)]
Make distprep and *clean build targets recurse into all subdirectories.
Certain subdirectories do not get built if corresponding options are not
selected at configure time. However, "make distprep" should visit such
directories anyway, so that constructing derived files to be included in
the tarball happens without requiring all configure options to be given
in the tarball build script. Likewise, it's better if cleanup actions
unconditionally visit all directories (for example, this ensures proper
cleanup if someone has done a manual make in such a subdirectory).
To handle this, set up a convention that subdirectories that are
conditionally included in SUBDIRS should be added to ALWAYS_SUBDIRS
instead when they are excluded.
Back-patch to 9.1, so that plpython's spiexceptions.h will get provided
in 9.1 tarballs. There don't appear to be any instances where distprep
actions got missed in previous releases, and anyway this fix requires
gmake 3.80 so we don't want to apply it before 9.1.
Tom Lane [Wed, 29 Jun 2011 23:46:58 +0000 (19:46 -0400)]
Restore correct btree preprocessing of "indexedcol IS NULL" conditions.
Such a condition is unsatisfiable in combination with any other type of
btree-indexable condition (since we assume btree operators are always
strict). 8.3 and 8.4 had an explicit test for this, which I removed in
commit 29c4ad98293e3c5cb3fcdd413a3f4904efff8762, mistakenly thinking that
the case would be subsumed by the more general handling of IS (NOT) NULL
added in that patch. Put it back, and improve the comments about it, and
add a regression test case.
Per bug #6079 from Renat Nasyrov, and analysis by Dean Rasheed.
Move the PredicateLockRelation() call from nodeSeqscan.c to heapam.c. It's
more consistent that way, since all the other PredicateLock* calls are
made in various heapam.c and index AM functions. The call in nodeSeqscan.c
was unnecessarily aggressive anyway: there's no need to try to lock the
relation every time a tuple is fetched; it's enough to do it once.
This has the user-visible effect that if a seq scan is initialized in the
executor, but never executed, we now acquire the predicate lock on the heap
relation anyway. We could avoid that by taking the lock on the first
heap_getnext() call instead, but it doesn't seem worth the trouble given
that it feels more natural to do it in heap_beginscan().
Also, remove the retail PredicateLockTuple() calls from heap_getnext(). In
a seqscan, started with heap_beginscan(), we're holding a whole-relation
predicate lock on the heap so there's no need to lock the tuples
individually.
Simon Riggs [Mon, 27 Jun 2011 21:15:07 +0000 (22:15 +0100)]
Reduce impact of btree page reuse on Hot Standby by fixing off-by-1 error.
WAL records of type XLOG_BTREE_REUSE_PAGE were generated using a
latestRemovedXid one higher than actually needed because xid used was
page opaque->btpo.xact rather than an actually removed xid.
Noticed on an otherwise quiet system by Noah Misch.
Joe Conway [Sat, 25 Jun 2011 22:40:49 +0000 (15:40 -0700)]
Async dblink functions require a named connection, and therefore should
use DBLINK_GET_NAMED_CONN rather than DBLINK_GET_CONN.
Problem found by Peter Eisentraut and patch by Fujii Masao.
Tom Lane [Thu, 23 Jun 2011 03:03:11 +0000 (23:03 -0400)]
Undo overly enthusiastic de-const-ification.
s/const//g wasn't exactly what I was suggesting here ... parameter
declarations of the form "const structtype *param" are good and useful,
so put those occurrences back. Likewise, avoid casting away the const
in a "const void *" parameter.
Tom Lane [Wed, 22 Jun 2011 17:08:08 +0000 (13:08 -0400)]
Fix symlink for errcodes.h so it works in VPATH builds from tarballs.
backend/Makefile was treating errcodes.h as a header always generated
during build, but actually it's a header provided in tarballs. Hence,
must use the absolute-symlink recipe, not the relative-symlink one.
Per bug #6072 from Hartmut Raschick.
Remove pointless const qualifiers from function arguments in the SSI code.
As Tom Lane pointed out, "const Relation foo" doesn't guarantee that you
can't modify the data the "foo" pointer points to. It just means that you
can't change the pointer to point to something else within the function,
which is not very useful.
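The underlying C point, shown standalone with a stand-in type (Relation in the
backend is a typedef for a pointer to RelationData):

    #include <stdio.h>

    typedef struct RelDataSketch
    {
        int         refcount;
    } RelDataSketch;

    typedef RelDataSketch *RelSketch;   /* pointer typedef, like Relation */

    static void
    const_pointer_typedef(const RelSketch rel)
    {
        /* "const RelSketch" only makes the pointer itself const ... */
        rel->refcount++;                /* ... so this still compiles */
    }

    static void
    pointer_to_const(const RelDataSketch *rel)
    {
        /* rel->refcount++;     -- would be a compile error here */
        printf("%d\n", rel->refcount);
    }

    int
    main(void)
    {
        RelDataSketch r = {1};

        const_pointer_typedef(&r);
        pointer_to_const(&r);
        return 0;
    }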
Tom Lane [Tue, 21 Jun 2011 18:41:05 +0000 (14:41 -0400)]
Apply upstream fix for blowfish signed-character bug (CVE-2011-2483).
A password containing a character with the high bit set was misprocessed
on machines where char is signed (which is most). This could cause the
preceding one to three characters to fail to affect the hashed result,
thus weakening the password. The result was also unportable, and failed
to match some other blowfish implementations such as OpenBSD's.
Since the fix changes the output for such passwords, upstream chose
to provide a compatibility hack: password salts beginning with $2x$
(instead of the usual $2a$ for blowfish) are intentionally processed
"wrong" to give the same hash as before. Stored password hashes can
thus be modified if necessary to still match, though it'd be better
to change any affected passwords.
In passing, sync a couple other upstream changes that marginally improve
performance and/or tighten error checking.
Back-patch to all supported branches. Since this issue is already
public, no reason not to commit the fix ASAP.
Adjust the alternative expected output file for prepared_xacts test case,
used when max_prepared_transactions=0, for the recent changes in the test
case.
Fix bug in PreCommit_CheckForSerializationFailure. A transaction that has
already been marked as PREPARED cannot be killed. Kill the current
transaction instead.
One of the prepared_xacts regression tests actually hits this bug. I
removed the anomaly from the duplicate-gids test so that it fails in the
intended way, and added a new test to check serialization failures with
a prepared transaction.
Fix bug introduced by recent SSI patch to merge ROLLED_BACK and
MARKED_FOR_DEATH flags into one. We still need the ROLLED_BACK flag to
mark transactions that are in the process of being rolled back. To be
precise, ROLLED_BACK now means that a transaction has already been
discounted from the count of transactions with the oldest xmin, but not
yet removed from the list of active transactions.
Tom Lane [Mon, 20 Jun 2011 18:33:20 +0000 (14:33 -0400)]
Fix thinko in previous patch for optimizing EXISTS-within-EXISTS.
When recursing after an optimization in pull_up_sublinks_qual_recurse, the
available_rels value passed down must include only the relations that are
in the righthand side of the new SEMI or ANTI join; it's incorrect to pull
up a sub-select that refers to other relations, as seen in the added test
case. Per report from BangarRaju Vadapalli.
While at it, rethink the idea of recursing below a NOT EXISTS. That is
essentially the same situation as pulling up ANY/EXISTS sub-selects that
are in the ON clause of an outer join, and it has the same disadvantage:
we'd force the two joins to be evaluated according to the syntactic nesting
order, because the lower join will most likely not be able to commute with
the ANTI join. That could result in having to form a rather large join
product, whereas the handling of a correlated subselect is not quite that
dumb. So until we can handle those cases better, #ifdef NOT_USED that
case. (I think it's okay to pull up in the EXISTS/ANY cases, because SEMI
joins aren't so inflexible about ordering.)
Back-patch to 8.4, same as for previous patch in this area. Fortunately
that patch hadn't made it into any shipped releases yet.
Peter Eisentraut [Sun, 19 Jun 2011 20:27:56 +0000 (23:27 +0300)]
Produce HISTORY file consistently as ASCII
The release notes may contain non-ASCII characters (for contributor
names), which lynx converts to the encoding determined by the current
locale. To get output that is deterministic and easily readable by
everyone, we make lynx produce LATIN1 and then convert that to ASCII
with transliteration for the non-ASCII characters.
Tom Lane [Sun, 19 Jun 2011 18:00:55 +0000 (14:00 -0400)]
Fix thinko in previous patch to always update pg_class.reltuples/relpages.
I mis-simplified the test where ANALYZE decided if it could get away
without doing anything: under the new regime, that's never allowed. Per
bug #6068 from Jeff Janes. Back-patch to 8.4, just like previous patch.
Tom Lane [Fri, 17 Jun 2011 23:13:08 +0000 (19:13 -0400)]
Don't use "cp -i" in the example WAL archive_command.
This is a dangerous example to provide because on machines with GNU cp,
it will silently do the wrong thing and risk archive corruption. Worse,
during the 9.0 cycle somebody "improved" the discussion by removing the
warning that used to be there about that, and instead leaving the
impression that the command would work as desired on most Unixen.
It doesn't. Try to rectify the damage by providing an example that is safe
most everywhere, and then noting that you can try cp -i if you want but
you'd better test that.
In back-patching this to all supported branches, I also added an example
command for Windows, which wasn't provided before 9.0.
Tom Lane [Fri, 17 Jun 2011 22:19:09 +0000 (18:19 -0400)]
Obtain table locks as soon as practical during pg_dump.
For some reason, when we (I) added table lock acquisition to pg_dump,
we didn't think about making it happen as soon as possible after the
start of the transaction. What with subsequent additions, there was
actually quite a lot going on before we got around to that; which sort
of defeats the purpose. Rearrange the order of calls in dumpSchema()
to close the risk window as much as we easily can. Back-patch to all
supported branches.
Robert Haas [Fri, 17 Jun 2011 18:28:45 +0000 (14:28 -0400)]
Add overflow checks to int4 and int8 versions of generate_series().
The previous code went into an infinite loop after overflow. In fact,
an overflow is not really an error; it just means that the current
value is the last one we need to return. So, just arrange to stop
immediately when overflow is detected.
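The essence of the fix is to detect wraparound when stepping and treat it as
end of series; a standalone sketch of that check (not the committed
generate_series code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /*
     * Store current + step in *next and return true if it fits in int32;
     * return false (series exhausted) if it would overflow.
     */
    static bool
    step_int32(int32_t current, int32_t step, int32_t *next)
    {
        if ((step > 0 && current > INT32_MAX - step) ||
            (step < 0 && current < INT32_MIN - step))
            return false;       /* overflow: current was the last value */
        *next = current + step;
        return true;
    }

    int
    main(void)
    {
        int32_t     val = INT32_MAX - 1;
        int32_t     next;

        while (step_int32(val, 1, &next))   /* stops instead of looping forever */
            val = next;
        printf("last value: %d\n", val);
        return 0;
    }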
Robert Haas [Fri, 17 Jun 2011 17:34:39 +0000 (13:34 -0400)]
Fix crash in CREATE UNLOGGED TABLE.
The code that created the init fork neglected to make sure that the
relation was open at the smgr level before attempting to invoke smgr.
This didn't happen every time; only when the relcache entry was rebuilt
along the way.
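A sketch of the required ordering; RelationOpenSmgr() and smgrcreate() are real
backend calls, but the surrounding context is simplified:

    /*
     * Sketch: make sure the relation is open at the smgr level before
     * creating the init fork; the relcache entry (and rd_smgr with it)
     * may have been rebuilt and closed along the way.
     */
    RelationOpenSmgr(rel);
    smgrcreate(rel->rd_smgr, INIT_FORKNUM, false);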