Tom Lane [Wed, 30 May 2012 14:53:30 +0000 (10:53 -0400)]
Fix incorrect password transformation in contrib/pgcrypto's DES crypt().
Overly tight coding caused the password transformation loop to stop
examining input once it had processed a byte equal to 0x80. Thus, if the
given password string contained such a byte (which is possible though not
highly likely in UTF8, and perhaps also in other non-ASCII encodings), all
subsequent characters would not contribute to the hash, making the password
much weaker than it appears on the surface.
This would only affect cases where applications used DES crypt() to encode
passwords before storing them in the database. If a weak password has been
created in this fashion, the hash will stop matching after this update has
been applied, so it will be easy to tell if any passwords were unexpectedly
weak. Changing to a different password would be a good idea in such a case.
(Since DES has been considered inadequately secure for some time, changing
to a different encryption algorithm can also be recommended.)
This code, and the bug, are shared with at least PHP, FreeBSD, and OpenBSD.
Since the other projects have already published their fixes, there is no
point in trying to keep this commit private.
This bug has been assigned CVE-2012-2143, and credit for its discovery goes
to Rubin Xu and Joseph Bonneau.
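As an illustration of the failure mode, here is a simplified sketch of this style
of key-setup loop, with invented function names and without the block-by-block
processing of the real crypt() code. In the broken form, a 0x80 byte shifted left
and truncated to 8 bits yields zero, so the test fails and the key pointer never
advances again; the fixed form stores the shifted byte unconditionally and only
stops advancing at the terminating NUL.

    #include <stdint.h>
    #include <string.h>

    /* Broken form: the value of the assignment is tested, so a byte of 0x80
     * (which becomes 0x00 once "<< 1" is truncated to 8 bits) stops the key
     * pointer from ever advancing again; the rest of the password no longer
     * contributes to the hash. */
    static void
    des_setkey_broken(const unsigned char *key, uint8_t keybuf[8])
    {
        uint8_t *q = keybuf;

        memset(keybuf, 0, 8);
        while (q - keybuf < 8)
        {
            if ((*q++ = *key << 1))
                key++;
        }
    }

    /* Fixed form: always store the shifted byte, and keep advancing through
     * the password until the terminating NUL is reached. */
    static void
    des_setkey_fixed(const unsigned char *key, uint8_t keybuf[8])
    {
        uint8_t *q = keybuf;

        memset(keybuf, 0, 8);
        while (q - keybuf < 8)
        {
            *q++ = *key << 1;
            if (*key != '\0')
                key++;
        }
    }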
Change the way parent pages are tracked during buffered GiST build.
We used to mimic the way a stack is constructed when descending the tree
during normal GiST inserts, but that was quite complicated during a buffered
build. It was also wrong: in GiST, the left-to-right relationships on
different levels might not match each other, so that when you know the
parent of a child page, you won't necessarily find the parent of the page to
the right of the child page by following the rightlinks at the parent level.
This sometimes led to "could not re-find parent" errors while building a
GiST index.
We now use a simple hash table to track the parent of every internal page.
Whenever a page is split, and downlinks are moved from one page to another,
we update the hash table accordingly. This is also better for performance
than the old method, as we never need to move right to re-find the parent
page, which could take a significant amount of time for buffers that were
created much earlier in the index build.
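A rough sketch of that bookkeeping, assuming PostgreSQL's dynahash API
(hash_create/hash_search); the entry layout and function names are invented for
illustration and are not the actual gistbuild.c code.

    #include "postgres.h"
    #include "storage/block.h"
    #include "utils/hsearch.h"

    /* Hypothetical entry type: maps a child block to its current parent. */
    typedef struct ParentMapEntry
    {
        BlockNumber child;          /* hash key */
        BlockNumber parent;
    } ParentMapEntry;

    static HTAB *
    createParentMap(void)
    {
        HASHCTL     ctl;

        MemSet(&ctl, 0, sizeof(ctl));
        ctl.keysize = sizeof(BlockNumber);
        ctl.entrysize = sizeof(ParentMapEntry);
        ctl.hash = tag_hash;
        return hash_create("gist parent map (sketch)", 1024, &ctl,
                           HASH_ELEM | HASH_FUNCTION);
    }

    /* Record (or update) the parent of 'child'.  Called whenever a downlink
     * is created, or moved to a different page by a split. */
    static void
    rememberParent(HTAB *parentMap, BlockNumber child, BlockNumber parent)
    {
        bool        found;
        ParentMapEntry *entry;

        entry = (ParentMapEntry *) hash_search(parentMap, &child,
                                               HASH_ENTER, &found);
        entry->parent = parent;
    }

    /* Look up the parent of 'child'; no moving right is ever needed. */
    static BlockNumber
    lookupParent(HTAB *parentMap, BlockNumber child)
    {
        bool        found;
        ParentMapEntry *entry;

        entry = (ParentMapEntry *) hash_search(parentMap, &child,
                                               HASH_FIND, &found);
        if (!found)
            elog(ERROR, "could not find parent of block %u", child);
        return entry->parent;
    }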
Delete the temporary file used in buffered GiST build, after the build.
There were two bugs here: we forgot to call gistFreeBuildBuffers() at the
end of the build, and we passed interXact == true to BufFileCreateTemp,
so the file wasn't automatically cleaned up at end-of-transaction either.
Tom Lane [Wed, 30 May 2012 03:22:14 +0000 (23:22 -0400)]
Rewrite --section option to decouple it from --schema-only/--data-only.
The initial implementation of pg_dump's --section option supposed that the
existing --schema-only and --data-only options could be made equivalent to
--section settings. This is wrong, though, due to dubious but long since
set-in-stone decisions about where to dump SEQUENCE SET items, as seen in
a bug report from Martin Pitt. (And I'm not totally convinced there weren't
other bugs, either.) Undo that coupling and instead drive --section
filtering off current-section state tracked as we scan through the TOC
list to call _tocEntryRequired().
To make sure those decisions don't shift around and hopefully save a few
cycles, run _tocEntryRequired() only once per TOC entry and save the result
in a new TOC field. This required minor rejiggering of ACL handling but
also allows a far cleaner implementation of inhibit_data_for_failed_table.
Also, to ensure that pg_dump and pg_restore have the same behavior with
respect to the --section switches, add _tocEntryRequired() filtering to
WriteToc() and WriteDataChunks(), rather than trying to implement section
filtering in an entirely orthogonal way in dumpDumpableObject(). This
required adjusting the handling of the special ENCODING and STDSTRINGS
items, but they were pretty weird before anyway.
Fix integer overflow bug in GiST buffering build calculations.
The intermediate product maintenance_work_mem * 1024 doesn't fit in a signed
32-bit integer if maintenance_work_mem >= 2GB. Use double instead. And
while we're at it, write the calculations in an easier-to-understand form,
with the intermediate steps written out and commented.
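For illustration, a standalone sketch of the overflow and of the double-based
rewrite; the constant and variable names here are chosen for the example rather
than taken from the GiST build code.

    #include <stdio.h>

    #define BLCKSZ 8192             /* assumed default block size */

    int
    main(void)
    {
        /* maintenance_work_mem is measured in kilobytes; assume 2GB */
        int         maintenance_work_mem = 2 * 1024 * 1024;
        double      avail_mem;
        double      page_count;

        /*
         * The old expression, (maintenance_work_mem * 1024) / BLCKSZ,
         * multiplies two ints first: 2097152 * 1024 equals 2^31, which does
         * not fit in a signed 32-bit integer, so the product overflows
         * before the division ever happens.  Doing the arithmetic in double,
         * with the intermediate steps written out, avoids that.
         */
        avail_mem = (double) maintenance_work_mem * 1024.0;  /* in bytes */
        page_count = avail_mem / BLCKSZ;    /* pages that fit in that memory */

        printf("can buffer %.0f pages\n", page_count);
        return 0;
    }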
Tom Lane [Tue, 29 May 2012 03:57:06 +0000 (23:57 -0400)]
Teach AbortOutOfAnyTransaction to clean up partially-started transactions.
AbortOutOfAnyTransaction failed to do anything if the state it saw on
entry corresponded to failing partway through StartTransaction. I fixed
AbortCurrentTransaction to cope with that case way back in commit
60b2444cc3ba037630c9b940c3c9ef01b954b87b, but evidently overlooked that
AbortOutOfAnyTransaction should do likewise.
Back-patch to all supported branches. It's not clear that this omission
has any more-than-cosmetic consequences, but it's also not clear that it
doesn't, so back-patching seems the least risky choice.
Tom Lane [Tue, 29 May 2012 00:38:28 +0000 (20:38 -0400)]
Eliminate some more O(N^2) behaviors in pg_dump/pg_restore.
This patch fixes three places (which AFAICT is all of them) where runtime
was O(N^2) in the number of TOC entries, by using an index array to replace
linear searches of the TOC list. This performance issue is a bit less bad
than those recently fixed, because it depends on the number of items dumped,
not the number in the source database, so the problem can be dodged by
doing partial dumps.
The previous coding already had an instance of one of the two index arrays
needed, but it was only calculated in parallel-restore cases; now we need
it all the time. I also chose to move the arrays into the ArchiveHandle
data structure, to make this code a bit more ready for the day that we
try to sling multiple ArchiveHandles around in pg_dump or pg_restore.
Since we still need some server-side work before pg_dump can really cope
nicely with tens of thousands of tables, there's probably little point in
back-patching.
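The by-dump-ID half of the idea can be sketched like this, with invented
stand-in types rather than pg_dump's real TocEntry and ArchiveHandle: dump IDs
are small, dense integers, so a pointer array indexed by dump ID turns each
lookup into O(1).

    #include <stdlib.h>

    /* Invented stand-ins; pg_dump's real structures differ. */
    typedef int DumpId;

    typedef struct TocEntry
    {
        DumpId      dumpId;
        struct TocEntry *next;
    } TocEntry;

    /*
     * Build the index once, before the main loop over the TOC, instead of
     * walking the list for every entry we need to find.
     */
    static TocEntry **
    buildTocsByDumpId(TocEntry *tocHead, DumpId maxDumpId)
    {
        TocEntry  **tocsByDumpId = calloc(maxDumpId + 1, sizeof(TocEntry *));
        TocEntry   *te;

        for (te = tocHead; te != NULL; te = te->next)
            tocsByDumpId[te->dumpId] = te;
        return tocsByDumpId;
    }

    /* O(1) lookup replacing the old O(N) scan of the TOC list. */
    static TocEntry *
    getTocEntryByDumpId(TocEntry **tocsByDumpId, DumpId id)
    {
        return tocsByDumpId[id];
    }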
Peter Eisentraut [Sun, 27 May 2012 20:34:16 +0000 (23:34 +0300)]
Update SQL features list
Set E081 Basic Privileges to supported, since by the letter of it, we
support it, even though not all possible forms of USAGE privileges are
implemented.
Bruce Momjian [Sun, 27 May 2012 10:35:10 +0000 (06:35 -0400)]
Add C comment explaining why we can't exclude checking functions in the
pg_catalog schema, even though they are not explicitly dumped (they are
implicitly dumped, e.g., CREATE LANGUAGE plperl).
Magnus Hagander [Sun, 27 May 2012 08:54:31 +0000 (10:54 +0200)]
Fix handling of pg_stat_statements.stat temporary file
Write the file to a temporary name and then rename() it into the
permanent name, to ensure it can't end up half-written and corrupt
in case of a crash during shutdown.
Unlink the file after it has been read so it's removed from the data
directory and not included in base backups going to replication slaves.
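The write-then-rename pattern looks roughly like the following; the file names
and error handling are illustrative, not the actual pg_stat_statements code.

    #include <stdio.h>
    #include <unistd.h>

    static void
    save_stats_file(const char *data, size_t len)
    {
        const char *tmpname = "pg_stat_statements.tmp";   /* illustrative */
        const char *finalname = "pg_stat_statements.stat";
        FILE       *fp;

        fp = fopen(tmpname, "wb");
        if (fp == NULL)
            return;                 /* the real code reports the error */

        if (fwrite(data, 1, len, fp) != len)
        {
            fclose(fp);
            unlink(tmpname);        /* never leave a half-written temp file */
            return;
        }
        if (fclose(fp) != 0)
        {
            unlink(tmpname);
            return;
        }

        /*
         * rename() replaces the target atomically, so a crash during
         * shutdown leaves either the old complete file or the new complete
         * file in place, never a truncated mixture of the two.
         */
        if (rename(tmpname, finalname) != 0)
            unlink(tmpname);
    }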
Tom Lane [Sat, 26 May 2012 23:09:52 +0000 (19:09 -0400)]
Prevent synchronized scanning when systable_beginscan chooses a heapscan.
The only interesting-for-performance case wherein we force heapscan here
is when we're rebuilding the relcache init file, and the only such case
that is likely to be examining a catalog big enough to be syncscanned is
RelationBuildTupleDesc. But the early-exit optimization in that code gets
broken if we start the scan at a random place within the catalog, so that
allowing syncscan is actually a big deoptimization if pg_attribute is large
(at least for the normal case where the rows for core system catalogs have
never been changed since initdb). Hence, prevent syncscan here. Per my
testing pursuant to complaints from Jeff Frost and Greg Sabino Mullane,
though neither of them seems to have actually hit this specific problem.
Tom Lane [Fri, 25 May 2012 18:35:37 +0000 (14:35 -0400)]
Use binary search instead of brute-force scan in findNamespace().
The previous coding presented a significant bottleneck when dumping
databases containing many thousands of schemas, since the total time
spent searching would increase roughly as O(N^2) in the number of objects.
Noted by Jeff Janes, though I rewrote his proposed patch to use the
existing findObjectByOid infrastructure.
Since this is a longstanding performance bug, backpatch to all supported
versions.
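The shape of the change, sketched with invented types (the real code indexes
pg_dump's DumpableObject catalog, but the technique is the same): build a
pointer array sorted by OID once, then binary-search it for each lookup.

    #include <stdlib.h>

    typedef unsigned int Oid;

    /* Minimal stand-in for pg_dump's objects, keyed by OID. */
    typedef struct DumpableObject
    {
        Oid         oid;
        const char *name;
    } DumpableObject;

    static int
    oidCompare(const void *p1, const void *p2)
    {
        Oid         oid1 = (*(DumpableObject *const *) p1)->oid;
        Oid         oid2 = (*(DumpableObject *const *) p2)->oid;

        if (oid1 < oid2)
            return -1;
        if (oid1 > oid2)
            return 1;
        return 0;
    }

    /* Build the index once: an array of pointers sorted by OID. */
    static DumpableObject **
    buildIndexByOid(DumpableObject *objs, int numObjs)
    {
        DumpableObject **index = malloc(numObjs * sizeof(DumpableObject *));
        int         i;

        for (i = 0; i < numObjs; i++)
            index[i] = &objs[i];
        qsort(index, numObjs, sizeof(DumpableObject *), oidCompare);
        return index;
    }

    /* O(log N) lookup replacing the old O(N) scan of the namespace list. */
    static DumpableObject *
    findObjectByOid(Oid oid, DumpableObject **index, int numObjs)
    {
        int         low = 0,
                    high = numObjs - 1;

        while (low <= high)
        {
            int         mid = (low + high) / 2;

            if (index[mid]->oid == oid)
                return index[mid];
            if (index[mid]->oid < oid)
                low = mid + 1;
            else
                high = mid - 1;
        }
        return NULL;
    }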
Magnus Hagander [Fri, 25 May 2012 09:36:22 +0000 (11:36 +0200)]
Fix base backup streaming xlog from standby
When backing up from a standby server, the backup process
will not automatically switch xlog segments, so we must
accept a partially transferred xlog file in this case, but
rename it into position anyway.
In passing, merge the two callbacks for segment end and
stream stop into a single callback, since their implementations
were close to identical, and rename this callback to
reflect that it stops streaming rather than continuing it.
Tom Lane [Thu, 24 May 2012 17:56:16 +0000 (13:56 -0400)]
Fix array overrun in regex code.
zaptreesubs() was coded to unconditionally reset a capture subre's
corresponding pmatch[] entry. However, in regexes without backrefs, that
array is caller-supplied and might not have as many entries as the regex
has capturing parens. So check the array length and do nothing if there
is no corresponding entry, much as subset() does. Failure to check this
resulted in a stack clobber in the case reported by Marko Kreen.
This bug appears to have been latent in the regex library from the
beginning. It was not exposed because find() called dissect() not
cdissect(), and the dissect() code path didn't ever call zaptreesubs()
(formerly zapmem()). When I unified dissect() and cdissect() in commit
4dd78bf37aa29d04b3f358b08c4a2fa43cf828e7, the problem was exposed.
Now that I've seen this, I'm rather suspicious that we might need to
back-patch it; but will refrain for now, for lack of evidence that
the case can be hit in the previous coding.
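The essence of the fix can be sketched with stand-in types (these are not the
regex library's internal structures): only reset a pmatch[] slot if the caller
actually supplied one for that capture group.

    #include <stddef.h>

    typedef struct
    {
        long        rm_so;          /* start offset, or -1 */
        long        rm_eo;          /* end offset, or -1 */
    } regmatch_sketch;

    typedef struct
    {
        size_t      nmatch;         /* length of caller-supplied pmatch[] */
        regmatch_sketch *pmatch;
    } vars_sketch;

    /*
     * The caller's pmatch[] may have fewer slots than the regex has
     * capturing parens, so check the bound before writing.
     */
    static void
    zap_capture(vars_sketch *v, int subno)
    {
        if (subno > 0 && (size_t) subno < v->nmatch)
        {
            v->pmatch[subno].rm_so = -1;
            v->pmatch[subno].rm_eo = -1;
        }
    }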
Tom Lane [Tue, 22 May 2012 23:42:05 +0000 (19:42 -0400)]
Ensure that seqscans check for interrupts at least once per page.
If a seqscan encounters many consecutive pages containing only dead tuples,
it can remain in the loop in heapgettup for a long time, and there was no
CHECK_FOR_INTERRUPTS anywhere in that loop. This meant there were
real-world situations where a query would be effectively uncancelable for
long stretches. Add a check that occurs once per page, which should be
enough to provide reasonable response time without adding any measurable
overhead.
Report and patch by Merlin Moncure (though I tweaked it a bit).
Back-patch to all supported branches.
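Schematically, the fix amounts to putting the interrupt check at the top of
the per-page loop. This is a heavily simplified sketch, not heapgettup itself.

    #include "postgres.h"
    #include "miscadmin.h"
    #include "storage/block.h"

    static void
    scan_all_pages(BlockNumber nblocks)
    {
        BlockNumber blkno;

        for (blkno = 0; blkno < nblocks; blkno++)
        {
            /*
             * One check per page keeps the query cancelable even when many
             * consecutive pages contain only dead tuples, while adding no
             * measurable per-tuple overhead.
             */
            CHECK_FOR_INTERRUPTS();

            /* read the page and examine its tuples here */
        }
    }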
Robert Haas [Tue, 22 May 2012 15:19:33 +0000 (11:19 -0400)]
Fix error message for COMMENT/SECURITY LABEL ON COLUMN xxx IS 'yyy'
When the column name is an unqualified name, rather than table.column,
the error message complains about too many dotted names, which is
wrong. Report by Peter Eisentraut based on examination of the
sepgsql regression test output, but the problem also affects COMMENT.
New wording as suggested by Tom Lane.
Robert Haas [Tue, 22 May 2012 13:30:59 +0000 (09:30 -0400)]
Repair out-of-date information in src/backend/storage/buffer/README.
In commit d526575f893c1a4e05ebd307e80203536b213a6d, we changed things so
that buffer usage counts are incremented when the buffer is pinned, rather
than when it is unpinned, but the README file didn't get the memo.
Tom Lane [Tue, 22 May 2012 02:50:30 +0000 (22:50 -0400)]
Move postmaster's RemovePgTempFiles call to a less randomly chosen place.
There is no reason to do this as early as possible in postmaster startup,
and good reason not to do it until we have completely created the
postmaster's lock file, namely that it might contribute to pg_ctl thinking
that postmaster startup has timed out. (This would require a rather
unusual amount of time to be spent scanning temp file directories, but we
have at least one field report of it happening reproducibly.)
Back-patch to 9.1. Before that, pg_ctl didn't wait for additional info to
be added to the lock file, so it wasn't a problem.
Note that this is not a complete fix to the slow-start issue in 9.1,
because we still had identify_system_timezone being run during postmaster
start in 9.1. But that's at least a reasonably well-defined delay, with
an easy workaround if needed, whereas the temp-files scan is not so
predictable and cannot be avoided.
Tom Lane [Tue, 22 May 2012 02:10:40 +0000 (22:10 -0400)]
Update woefully-obsolete comment.
The accurate info about what's in a lock file has been in miscadmin.h
for some time, so let's just make this comment point there instead of
maintaining a duplicative copy.
Peter Eisentraut [Mon, 21 May 2012 17:08:50 +0000 (20:08 +0300)]
libpq: Add missing file to GETTEXT_FILES list
For the record, fe-print.c is also missing, but it's sort of
deprecated, and the string internationalization there has some issues,
and it doesn't seem worth fixing that. So let's leave that out.
Tom Lane [Mon, 21 May 2012 15:56:00 +0000 (11:56 -0400)]
Document that we deviate from ISO 8601 by not using 'T' on output.
Per discussion, we should explain that we follow RFC 3339 and not really
the letter of the ISO 8601 spec for timestamp output format. Mostly
Brendan Jurd's wording, though I tweaked it to clarify that we do take 'T'
on input. Minor additional copy-editing and markup-tweaking, too.
When we create a temporary copy of the old node buffer on the stack, we mustn't
leak that into any of the long-lived data structures. Before this patch,
when we called gistPopItupFromNodeBuffer(), it got added to the array of
"loaded buffers". After gistRelocateBuildBuffersOnSplit() exits, the
pointer added to the loaded buffers array points to garbage. Often that goes
unnoticed, because when we go through the array of loaded buffers to unload
them, buffers with a NULL pageBuffer are ignored, and that can happen by
accident even when the pointer points to garbage.
This patch fixes that by marking the temporary copy on the stack explicitly as
temporary, and by refraining from adding buffers marked as temporary to the
array of loaded buffers.
While we're at it, initialize nodeBuffer->pageBlocknum to InvalidBlockNumber
and improve comments a bit. This isn't strictly necessary, but makes
debugging easier.
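Conceptually the guard looks like this; the struct and function names are
invented for illustration and are not the actual gistbuildbuffers.c code.

    #include <stdbool.h>
    #include <stddef.h>

    /* Invented shapes for illustration; the real GISTNodeBuffer differs. */
    typedef struct NodeBufferSketch
    {
        unsigned    pageBlocknum;
        void       *pageBuffer;
        bool        isTemp;     /* true for the stack-local copy made
                                 * during a split */
    } NodeBufferSketch;

    /*
     * A temporary, stack-allocated copy must never end up in the long-lived
     * array of loaded buffers: once the split routine returns, that pointer
     * would reference garbage.
     */
    static void
    registerLoadedBuffer(NodeBufferSketch **loaded, int *nloaded,
                         NodeBufferSketch *buf)
    {
        if (buf->isTemp)
            return;
        loaded[(*nloaded)++] = buf;
    }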
Tom Lane [Tue, 15 May 2012 16:12:40 +0000 (12:12 -0400)]
Improve error message for ALTER COLUMN TYPE coercion failure.
Per recent discussion, the error message for this was actually a trifle
inaccurate, since it said "cannot be cast", which might be incorrect: often a
suitable cast exists but just can't be applied implicitly.
Adjust that wording, and add a HINT suggesting that a USING clause might
be needed.
Tom Lane [Tue, 15 May 2012 03:51:34 +0000 (23:51 -0400)]
In pgstat.c, use a timeout in WaitLatchOrSocket only on Windows.
We have no need for a timeout here really, but some broken products from
Redmond seem to lose FD_READ events occasionally, and waking up and
retrying the recv() is the only known way to work around that. Perhaps
somebody will be motivated to figure out a better answer here; but not I.
Tom Lane [Tue, 15 May 2012 03:06:48 +0000 (23:06 -0400)]
Put back AC_REQUIRE([AC_STRUCT_TM]).
The BSD-ish members of the buildfarm all seem to think removing this
was a bad idea. It looks to me like it resulted in omitting the system
header inclusion necessary to detect the fields of struct tm correctly.
Tom Lane [Mon, 14 May 2012 20:11:59 +0000 (16:11 -0400)]
Assert that WaitLatchOrSocket callers cannot wait only for writability.
Since we have chosen to report socket EOF and error conditions via the
WL_SOCKET_READABLE flag bit, it's unsafe to wait only for
WL_SOCKET_WRITEABLE; the caller would never be notified of the socket
condition, and in some of these implementations WaitLatchOrSocket would
busy-wait until something else happens. Add this restriction to the API
specification, and add Asserts to check that callers don't try to do that.
At some point we might want to consider adjusting the API to relax this
restriction, but until we have an actual use case for waiting on a
write-only socket, it seems premature to design a solution.
Tom Lane [Mon, 14 May 2012 19:03:14 +0000 (15:03 -0400)]
For testing purposes, reinsert a timeout in pgstat.c's wait call.
Test results from buildfarm members mastodon/narwhal (Windows Server 2003)
make it look like that platform just plain loses FD_READ events
occasionally, and the only reason our previous coding seemed to work was
that it timed out every couple of seconds and retried the whole operation.
Try to verify this by reinserting a finite timeout into the pgstat loop.
This isn't meant to be a permanent patch either, just to confirm or
disprove a theory.
Tom Lane [Mon, 14 May 2012 14:57:07 +0000 (10:57 -0400)]
Force pgwin32_recv into nonblock mode when called from pgstat.c.
This should get rid of the usage of pgwin32_waitforsinglesocket entirely,
and perhaps thereby remove the race condition that's evidently still
present on some versions of Windows. The previous arrangement was a bit
unsafe anyway, since waiting at the recv() would not allow pgstat to notice
postmaster death.
Update comments that became out-of-date with the PGXACT struct.
When the "hot" members of PGPROC were split off to separate PGXACT structs,
many PGPROC fields referred to in comments were moved to PGXACT, but the
comments were neglected in the commit. Mostly this is just a search/replace
of PGPROC with PGXACT, but the way the dummy PGPROC entries are created for
prepared transactions changed more, making some of the comments totally
bogus.
Tom Lane [Mon, 14 May 2012 01:11:31 +0000 (21:11 -0400)]
Add some temporary instrumentation to pgstat.c.
Log main-loop blocking events and the results of inquiry messages.
This is to get some clarity as to what's happening on those Windows
buildfarm members that still don't like the latch-ified stats collector.
This bulks up the postmaster log a tad, so I won't leave it in place for
long.
Tom Lane [Sun, 13 May 2012 22:06:52 +0000 (18:06 -0400)]
Fix DROP TABLESPACE to unlink symlink when directory is not there.
If the tablespace directory is missing entirely, we allow DROP TABLESPACE
to go through, on the grounds that it should be possible to clean up the
catalog entry in such a situation. However, we forgot that the pg_tblspc
symlink might still be there. We should try to remove the symlink too
(but not fail if it's no longer there), since not doing so can lead to
weird behavior subsequently, as per report from Michael Nolan.
There was some discussion of adding dependency links to prevent DROP
TABLESPACE when the catalogs still contain references to the tablespace.
That might be worth doing too, but it's an orthogonal question, and in
any case wouldn't be back-patchable.
Back-patch to 9.0, which is as far back as the logic looks like this.
We could possibly do something similar in 8.x, but given the lack of
reports I'm not sure it's worth the trouble, and anyway the case could
not arise in the form the logic is meant to cover (namely, a post-DROP
transaction rollback having resurrected the pg_tablespace entry after
some or all of the filesystem infrastructure is gone).
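The cleanup step amounts to a tolerant unlink, sketched here in plain
user-space C; the real server code reports the failure with ereport, and its
path handling is more involved.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /*
     * Remove the tablespace symlink if it is still there, but don't complain
     * if it is already gone.
     */
    static void
    remove_tablespace_symlink(const char *linkloc)
    {
        if (unlink(linkloc) < 0 && errno != ENOENT)
            fprintf(stderr, "could not remove symbolic link \"%s\": %s\n",
                    linkloc, strerror(errno));
    }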
Tom Lane [Sun, 13 May 2012 18:44:39 +0000 (14:44 -0400)]
Re-revert stats collector latch changes.
This reverts commit cb2f2873d6b81ad7f0a9733ba738bfac0746fb7b, restoring
the latch-ified stats collector logic. We'll soon see if this works any
better on the Windows buildfarm machines.
Tom Lane [Sun, 13 May 2012 18:35:40 +0000 (14:35 -0400)]
Attempt to fix some issues in our Windows socket code.
Make sure WaitLatchOrSocket regards FD_CLOSE as a read-ready condition.
We might want to tweak this further, but it was surely wrong as-is.
Make pgwin32_waitforsinglesocket detach its private event object from the
passed socket before returning. I suspect that failure to do so leads
to race conditions when other code (such as WaitLatchOrSocket) attaches
a different event object to the same socket. Moreover, the existing
coding meant that repeated calls to pgwin32_waitforsinglesocket would
perform ResetEvent on an event actively connected to a socket, which
is rumored to be an unsafe practice; the WSAEventSelect documentation
appears to recommend against this, though it does not say not to do it
in so many words.
Also, uniformly use the coding pattern "WSAEventSelect(s, NULL, 0)" to
detach events from sockets, rather than passing the event in the second
parameter. The WSAEventSelect documentation says that the second parameter
is ignored if the third is 0, so theoretically this should make no
difference. However, elsewhere on the same reference page the use of NULL
in this context is recommended, and I have found suggestions on the net
that some versions of Windows have bugs with a non-NULL second parameter
in this usage.
Some other mostly-cosmetic cleanup, such as using the right one of
WSAGetLastError and GetLastError for reporting errors from these functions.
Tom Lane [Sun, 13 May 2012 04:30:32 +0000 (00:30 -0400)]
Fix bogus declaration of local variable.
rc should be an int here, not a pgsocket. Fairly harmless as long as
pgsocket is an integer type, but nonetheless wrong. Error introduced
in commit 87091cb1f1ed914e2ddca424fa28f94fdf8461d2.
Tom Lane [Sat, 12 May 2012 23:21:54 +0000 (19:21 -0400)]
Avoid unnecessary process wakeups in the log collector.
syslogger was coded to wake up once per second whether there was anything
useful to do or not. As part of our campaign to reduce the server's idle
power consumption, change it to use a latch for waiting. Now, in the
absence of any data to log or any signals to service, it will only wake up
at the programmed logfile rotation times (if any).
Peter Eisentraut [Sat, 12 May 2012 20:29:07 +0000 (23:29 +0300)]
Remove unused AC_SUBST variables
These were apparently never used. The AC_SUBST was probably just
added in a copy-and-paste manner. (The shell variables continue to be
used inside configure. The change is just that we don't need them
outside of configure.)
Tom Lane [Sat, 12 May 2012 20:36:47 +0000 (16:36 -0400)]
Fix WaitLatchOrSocket to handle EOF on socket correctly.
When using poll(), EOF on a socket is reported with the POLLHUP flag rather
than POLLIN (at least on Linux). WaitLatchOrSocket failed to check this
bit, causing it to go into a busy-wait loop if EOF occurs.
We earlier fixed the same mistake in the test for the state of the
postmaster_alive socket, but missed it for the caller-supplied socket.
Fortunately, this error is new in 9.2, since 9.1 only had a select()-based
code path, not a poll()-based one.
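In outline, the poll()-path test becomes something like the following; the
flag names are from poll(2), and the surrounding WaitLatchOrSocket bookkeeping
is omitted.

    #include <poll.h>

    /*
     * Treat hangup/error conditions on the caller-supplied socket as
     * "readable", so an EOF wakes the caller instead of causing a busy-wait.
     */
    static int
    socket_became_readable(const struct pollfd *pfd)
    {
        return (pfd->revents & (POLLIN | POLLHUP | POLLERR | POLLNVAL)) != 0;
    }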
Tom Lane [Fri, 11 May 2012 22:33:39 +0000 (18:33 -0400)]
Update example of process titles shown by "ps".
This example was quite old: it lacked the WAL writer and autovac launcher
as well as the more recently added checkpointer. Linux "ps" seems to show
slightly different stuff now too.
Peter Eisentraut [Fri, 11 May 2012 20:01:15 +0000 (23:01 +0300)]
PL/Python: Adjust the regression tests for Python 3.3
The string representation of ImportError changed. Remove printing
that; it's not necessary for the test.
The order in which members of a dict are printed changed. But this
was always implementation-dependent, so we have just been lucky for a
long time. Do the printing the hard way to ensure sorted order.
Tom Lane [Fri, 11 May 2012 19:22:30 +0000 (15:22 -0400)]
Fix contrib/citext's upgrade script to handle array and domain cases.
We previously recognized that citext wouldn't get marked as collatable
during pg_upgrade from a pre-9.1 installation, and hacked its
create-from-unpackaged script to manually perform the necessary catalog
adjustments. However, we overlooked the fact that domains over citext,
as well as the citext[] array type, need the same adjustments. Extend
the script to handle those cases.
Also, the documentation suggested that this was only an issue in pg_upgrade
scenarios, which is quite wrong; loading any dump containing citext from a
pre-9.1 server will also result in the type being wrongly marked.
I approached the documentation problem by changing the 9.1.2 release note
paragraphs about this issue, which is admittedly historically inaccurate. But
it seems better than having the information scattered in multiple places, and
leaving incorrect info in the 9.1.2 notes would be bad anyway. We'll still
need to mention the issue again in the 9.1.4 notes, but perhaps they can
just reference the 9.1.2 notes for fix instructions.
Per report from Evan Carroll. Back-patch into 9.1.
On GiST page split, release the locks on child pages before recursing up.
When inserting the downlinks for a split GiST page, we used to hold the locks
on the child pages until the insertions into the parent - and, recursively,
into its parent if that had to be split too - were all completed. Change that
so that the locks on child pages are released after the insertion into the
immediate parent is done, before recursing further up the tree.
This reduces the number of lwlocks that are held simultaneously. Holding
many locks is bad for concurrency, and in extreme cases you can even hit
the limit of 100 simultaneously held lwlocks in a backend. If you're really
unlucky, you can hit the limit while in a critical section, which brings
down the whole system.
This fixes bug #6629 reported by Tom Forbes. Backpatch to 9.1. The page
splitting code was rewritten in 9.1, and the old code did not have this
problem.