Tom Lane [Sun, 10 Mar 2013 19:07:38 +0000 (15:07 -0400)]
Band-aid for regression test expected-results problem with timestamptz.
We probably need to tell the remote server to use specific timezone and
datestyle settings, and maybe other things. But for now let's just hack
the postgres_fdw regression test to not provoke failures when run in
non-EST5EDT environments. Per buildfarm.
Tom Lane [Sun, 10 Mar 2013 18:14:53 +0000 (14:14 -0400)]
Support writable foreign tables.
This patch adds the core-system infrastructure needed to support updates
on foreign tables, and extends contrib/postgres_fdw to allow updates
against remote Postgres servers. There's still a great deal of room for
improvement in optimization of remote updates, but at least there's basic
functionality there now.
KaiGai Kohei, reviewed by Alexander Korotkov and Laurenz Albe, and rather
heavily revised by Tom Lane.
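As a rough illustration (the foreign table name here is invented, not from the
patch), a postgres_fdw foreign table can now be the target of data-modifying
statements:
    INSERT INTO remote_accounts (id, balance) VALUES (42, 100.00);
    UPDATE remote_accounts SET balance = balance + 10 WHERE id = 42;
    DELETE FROM remote_accounts WHERE id = 42;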
Magnus Hagander [Sun, 10 Mar 2013 14:54:37 +0000 (15:54 +0100)]
Report pg_hba line number and contents when users fail to log in
Instead of just reporting which user failed to log in, log both the
line number in the active pg_hba.conf file (which may not match reality
in case the file has been edited and not reloaded) and the contents of
the matching line (which will always be correct), to make it easier
to debug incorrect pg_hba.conf files.
The message to the client remains unchanged and does not include this
information, to prevent leaking security sensitive information.
Tom Lane [Thu, 7 Mar 2013 16:51:03 +0000 (11:51 -0500)]
Fix infinite-loop risk in fixempties() stage of regex compilation.
The previous coding of this function could get into situations where it
would never terminate, because successive passes would re-add EMPTY arcs
that had been removed by the previous pass. Rewrite the function
completely using a new algorithm that is guaranteed to terminate, and
also seems to be usually faster than the old one. Per Tcl bugs 3604074
and 3606683.
Fix tli history file fetching, broken by the archive-after-crash-recovery patch.
If we were about to enter archive recovery after crash recovery, we scanned
the archive for the latest tli history file, and set the recovery target
timeline to that. However, when we actually tried to read the history file,
we would not fetch the file from the archive, because we were not in archive
recovery yet.
To fix, make readTimeLineHistory and existsTimeLineHistory always fetch
the file from archive if archive recovery is requested, even if we're not in
archive recovery yet.
Tom Lane [Thu, 7 Mar 2013 04:47:38 +0000 (23:47 -0500)]
Arrange to cache FdwRoutine structs in foreign tables' relcache entries.
This saves several catalog lookups per reference. It's not all that
exciting right now, because we'd managed to minimize the number of places
that need to fetch the data; but the upcoming writable-foreign-tables patch
needs this info in a lot more places.
Kevin Grittner [Wed, 6 Mar 2013 23:15:34 +0000 (17:15 -0600)]
WAL-log the extension of a new empty MV heap which is being populated.
This page with no tuples is used to distinguish an MV containing a
zero-row resultset of its backing query from an MV which has not
been populated by its backing query. Unless WAL-logged, recovery
and hot standby don't work correctly with what should be an empty
but scannable materialized view.
Fixes bugs reported by Fujii Masao in testing MVs on hot standby.
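To illustrate the distinction (object names invented), an unpopulated
materialized view differs from one whose backing query returned zero rows:
    CREATE MATERIALIZED VIEW mv_empty AS
        SELECT * FROM base_tbl WHERE false
        WITH NO DATA;
    SELECT * FROM mv_empty;             -- error: not yet populated
    REFRESH MATERIALIZED VIEW mv_empty;
    SELECT * FROM mv_empty;             -- succeeds, returning zero rows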
Andrew Dunstan [Wed, 6 Mar 2013 00:24:29 +0000 (19:24 -0500)]
Remove the pythonxx.def file's dependency on the DLL.
This confused Cygwin's make because of the colon in the path. The
DLL isn't likely to change under us so preserving the dependency
doesn't gain us much, and it's useful to be able to do a native
Windows build with the Cygwin mingw toolset.
Tom Lane [Tue, 5 Mar 2013 18:02:30 +0000 (13:02 -0500)]
Fix to_char() to use ASCII-only case-folding rules where appropriate.
formatting.c used locale-dependent case folding rules in some code paths
where the result isn't supposed to be locale-dependent, for example
to_char(timestamp, 'DAY'). Since the source data is always just ASCII
in these cases, that usually didn't matter ... but it does matter in
Turkish locales, which have unusual treatment of "i" and "I". To confuse
matters even more, the misbehavior was only visible in UTF8 encoding,
because in single-byte encodings we used pg_toupper/pg_tolower which
don't have locale-specific behavior for ASCII characters. Fix by providing
intentionally ASCII-only case-folding functions and using these where
appropriate. Per bug #7913 from Adnan Dursun. Back-patch to all active
branches, since it's been like this for a long time.
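A minimal example of the affected path (the timestamp chosen is arbitrary);
the 'DAY' output is plain ASCII English and should now come out the same in a
Turkish locale as anywhere else:
    SELECT to_char(timestamp '2013-03-05 00:00:00', 'DAY');   -- 'TUESDAY' (blank-padded)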
Tom Lane [Mon, 4 Mar 2013 20:13:31 +0000 (15:13 -0500)]
Fix overflow check in tm2timestamp (this time for sure).
I fixed this code back in commit 841b4a2d5, but didn't think carefully
enough about the behavior near zero, which meant it improperly rejected
1999-12-31 24:00:00. Per report from Magnus Hagander.
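The boundary case from the report, which should now be accepted again
(hour 24 is read as midnight of the following day):
    SELECT timestamp '1999-12-31 24:00:00';   -- 2000-01-01 00:00:00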
Tom Lane [Mon, 4 Mar 2013 00:32:12 +0000 (19:32 -0500)]
Fix map_sql_value_to_xml_value() to treat domains like their base types.
This was already the case for domains over arrays, but not for domains
over certain built-in types such as boolean. The special formatting
rules for those types should apply to domains over them as well.
Per discussion.
While this is a bug fix, it's also a behavioral change that seems likely
to trip up some applications. So no back-patch.
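A sketch of the kind of case affected (the domain name is made up): a domain
over boolean should now render in XML output the same way plain boolean does:
    CREATE DOMAIN flag AS boolean;
    SELECT xmlelement(name active, true::flag);
    -- now yields <active>true</active>, matching the plain boolean case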
Kevin Grittner [Mon, 4 Mar 2013 00:23:31 +0000 (18:23 -0600)]
Add materialized view relations.
A materialized view has a rule just like a view and a heap and
other physical properties like a table. The rule is only used to
populate the table; references in queries refer to the
materialized data.
This is a minimal implementation, but should still be useful in
many cases. Currently data is only populated "on demand" by the
CREATE MATERIALIZED VIEW and REFRESH MATERIALIZED VIEW statements.
It is expected that future releases will add incremental updates
with various timings, and that a more refined concept of defining
what is "fresh" data will be developed. At some point it may even
be possible to have queries use a materialized view in place of
references to underlying tables, but that requires the other
above-mentioned features to be working first.
Much of the documentation work by Robert Haas.
Review by Noah Misch, Thom Brown, Robert Haas, Marko Tiikkaja
Security review by KaiGai Kohei, with a decision on how best to
implement sepgsql still pending.
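A minimal usage sketch (table and view names invented):
    CREATE MATERIALIZED VIEW order_totals AS
        SELECT customer_id, sum(amount) AS total
        FROM orders
        GROUP BY customer_id;
    -- the data is brought up to date only on demand
    REFRESH MATERIALIZED VIEW order_totals;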
Tom Lane [Mon, 4 Mar 2013 00:05:47 +0000 (19:05 -0500)]
Get rid of any toast table when converting a table to a view.
Also make sure other fields of the view's pg_class entry are appropriate
for a view; it shouldn't have relfrozenxid set for instance.
This ancient omission isn't believed to have any serious consequences in
versions 8.4-9.2, so no backpatch. But let's fix it before it does bite
us in some serious way. It's just luck that the case doesn't cause
problems for autovacuum. (It did cause problems in 8.3, but that's out
of support.)
Tom Lane [Sun, 3 Mar 2013 22:39:58 +0000 (17:39 -0500)]
Fix SQL function execution to be safe with long-lived FmgrInfos.
fmgr_sql had been designed on the assumption that the FmgrInfo it's called
with has only query lifespan. This is demonstrably unsafe in connection
with range types, as shown in bug #7881 from Andrew Gierth. Fix things
so that we re-generate the function's cache data if the (sub)transaction
it was made in is no longer active.
Back-patch to 9.2. This might be needed further back, but it's not clear
whether the case can realistically arise without range types, so for now
I'll desist from back-patching further.
Tom Lane [Sat, 2 Mar 2013 02:33:34 +0000 (21:33 -0500)]
Eliminate memory leaks in plperl's spi_prepare() function.
Careless use of TopMemoryContext for I/O function data meant that repeated
use of spi_prepare and spi_freeplan would leak memory at the session level,
as per report from Christian Schröder. In addition, spi_prepare
leaked a lot of transient data within the current plperl function's SPI
Proc context, which would be a problem for repeated use of spi_prepare
within a single plperl function call; and it wasn't terribly careful
about releasing permanent allocations in event of an error, either.
In passing, clean up some copy-and-paste errors in query-lookup error messages.
Cannot use WL_SOCKET_WRITEABLE without WL_SOCKET_READABLE.
In copy-out mode, the frontend should not send any messages until the
backend has finished streaming and sent a CopyDone message. I'm not sure
if it would be legal for the client to send a new query before receiving the
CopyDone message from the backend, but trying to support that would require
bigger changes to the backend code structure.
Fixes an assertion failure reported by Fujii Masao.
Add support for piping COPY to/from an external program.
This includes backend "COPY TO/FROM PROGRAM '...'" syntax, and corresponding
psql \copy syntax. Like with reading/writing files, the backend version is
superuser-only, and in the psql version, the program is run on the client.
In passing, the psql \copy STDIN/STDOUT syntax is subtly changed: if
stdin/stdout is quoted, it's now interpreted as a filename. For example,
"\copy foo from 'stdin'" now reads from a file called 'stdin', not from
standard input. Before this, there was no way to specify a filename called
stdin, stdout, pstdin or pstdout.
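A hedged sketch of the new syntax (table and command are made up):
    -- server-side, superuser only; the program runs on the backend host
    COPY measurements FROM PROGRAM 'gunzip -c /tmp/measurements.csv.gz' WITH (FORMAT csv);
    -- psql equivalent; the program runs on the client
    \copy measurements from program 'gunzip -c measurements.csv.gz' with (format csv)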
This creates a new function in pgport, wait_result_to_str(), which can
be used to convert the exit status of a process, as returned by wait(3),
to a human-readable string.
Tom Lane [Wed, 27 Feb 2013 15:40:03 +0000 (10:40 -0500)]
Add missing error check in regexp parser.
parseqatom() failed to check for an error return (NULL result) from its
recursive call to parsebranch(), and in consequence could crash with a
null-pointer dereference after an error return. This bug has been there
since day one, but wasn't noticed before, probably because most error cases
in parsebranch() didn't actually lead to returning NULL. Add the missing
error check, and also tweak parsebranch() to exit in a less indirect
fashion after a call to parseqatom() fails.
Remove the check for COPY TO STDIN and COPY FROM STDOUT from ecpg.
The backend grammar treats STDIN and STDOUT as completely interchangeable, so
the above is accepted. Arguably that was a mistake in the backend grammar,
but it's not ecpg's business to second-guess that.
Only quote libpq connection string values that need quoting.
There's no harm in excessive quoting per se, but avoiding it makes the strings
nicer to read. The values can get quite unwieldy when they're first quoted
within single-quotes for inclusion in the connection string, and then all
the single-quotes are escaped when the connection string is passed as a
shell argument.
You could already pass a database name just by passing it as the last
option, without -d. This is an alias for that, like the -d/--dbname option
in psql and many other client applications. For consistency.
Andrew Dunstan [Mon, 25 Feb 2013 17:00:53 +0000 (12:00 -0500)]
Redo MSVC build implementation for pg_xlogdump.
The previous commit didn't work on MSVC editions earlier than
Visual Studio 2011, apparently. This works by copying files into the
contrib directory, and making provision to clean them up, which should
work on all editions.
Add -d option to pg_basebackup and pg_receivexlog, for connection string.
Without this, there's no way to pass arbitrary libpq connection parameters
to these applications. It's a bit strange that the option is called
-d/--dbname, when in fact you can *not* pass a database name in it, but it's
consistent with other client applications where a connection string is also
passed using -d.
Original patch by Amit Kapila, heavily modified by me.
Tom Lane [Fri, 22 Feb 2013 15:56:06 +0000 (10:56 -0500)]
Fix some planning oversights in postgres_fdw.
Include eval costs of local conditions in remote-estimate mode, and don't
assume the remote eval cost is zero in local-estimate mode. (The best
we can do with that at the moment is to assume a seqscan, which may well
be wildly pessimistic ... but zero won't do at all.)
To get a reasonable local estimate, we need to know the relpages count
for the remote rel, so improve the ANALYZE code to fetch that rather
than just setting the foreign table's relpages field to zero.
Tom Lane [Fri, 22 Feb 2013 12:30:21 +0000 (07:30 -0500)]
Change postgres_fdw to show casts as casts, not underlying function calls.
On reflection this method seems to be exposing an unreasonable amount of
implementation detail. It wouldn't matter when talking to a remote server
of the identical Postgres version, but it seems likely to make things worse
not better if the remote is a different version with different casting
infrastructure. Instead adopt ruleutils.c's policy of regurgitating the
cast as it was originally specified; including not showing it at all, if
it was implicit to start with. (We must do that because for some datatypes
explicit and implicit casts have different semantics.)
Tom Lane [Fri, 22 Feb 2013 11:36:09 +0000 (06:36 -0500)]
Get rid of postgres_fdw's assumption that remote type OIDs match ours.
The only place we depended on that was in sending numeric type OIDs in
PQexecParams; but we can replace that usage with explicitly casting
each Param symbol in the query string, so that the types are specified
to the remote by name not OID. This makes no immediate difference but
will be essential if we ever hope to support use of non-builtin types.
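To sketch the idea (query and types here are illustrative only, not taken from
the code), the remote query text now carries an explicit cast on each parameter
placeholder, so no numeric type OID has to be sent for it:
    SELECT id, payload FROM public.events WHERE created_at > $1::timestamptz;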
We must still initialize minRecoveryPoint if we start straight with archive
recovery, e.g. when recovering from a normal base backup taken with
pg_start/stop_backup. Otherwise we never consider the system consistent.
Tom Lane [Fri, 22 Feb 2013 11:03:46 +0000 (06:03 -0500)]
Adjust postgres_fdw's search path handling.
Set the remote session's search path to exactly "pg_catalog" at session
start, then schema-qualify only names that aren't in that schema. This
greatly reduces clutter in the generated SQL commands, as seen in the
regression test changes. Per discussion.
Also, rethink use of FirstNormalObjectId as the "built-in object" cutoff
--- FirstBootstrapObjectId is safer, since the former will accept
objects in information_schema for instance.
If recovery.conf is created after "pg_ctl stop -m i", do crash recovery.
If you create a base backup using an atomic filesystem snapshot, and try to
perform PITR starting from that base backup, or if you just kill a master
server and create recovery.conf to put it into standby mode, we don't know
how far we need to recover before reaching consistency. Normally in crash
recovery, we replay all the WAL present in pg_xlog, and assume that we're
consistent after that. And normally in archive recovery, minRecoveryPoint,
backupEndRequired, or backupEndPoint is set in the control file, indicating
how far we need to replay to reach consistency. But if the server was
previously up and running normally, and you kill -9 it or take an atomic
filesystem snapshot, none of those fields are set in the control file.
The solution is to perform crash recovery first, replaying all the WAL in
pg_xlog. After that's done, we assume that the system is consistent like in
normal crash recovery, and switch to archive recovery mode after that.
Per report from Kyotaro HORIGUCHI. In his scenario, recovery.conf was
created after "pg_ctl stop -m i". I'm not sure we need to support that exact
scenario, but we should support backing up using a filesystem snapshot,
which looks identical.
This issue goes back to at least 9.0, where hot standby was introduced and
we started to track when consistency is reached. In 9.1 and 9.2, we would
open up for hot standby too early, and queries could briefly see an
inconsistent state. But 9.2 made it more visible, as we started to PANIC if
we see a reference to a non-existing page during recovery, if we've already
reached consistency. This is a fairly big patch, so back-patch to 9.2 only,
where the issue is more visible. We can consider back-patching further after
this has received some more testing in 9.2 and master.
Alvaro Herrera [Fri, 22 Feb 2013 01:46:17 +0000 (22:46 -0300)]
Move relpath() to libpgcommon
This enables non-backend code, such as pg_xlogdump, to use it easily.
The previous location, in src/backend/catalog/catalog.c, made that
essentially impossible because that file depends on many backend-only
facilities; so this needs to live separately.
Tom Lane [Thu, 21 Feb 2013 10:26:23 +0000 (05:26 -0500)]
Add postgres_fdw contrib module.
There's still a lot of room for improvement, but it basically works,
and we need this to be present before we can do anything much with the
writable-foreign-tables patch. So let's commit it and get on with testing.
Shigeru Hanada, reviewed by KaiGai Kohei and Tom Lane
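A typical setup sketch (server, mapping, and table names are invented for
illustration):
    CREATE EXTENSION postgres_fdw;
    CREATE SERVER remote_pg FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'remote.example.com', dbname 'appdb');
    CREATE USER MAPPING FOR CURRENT_USER SERVER remote_pg
        OPTIONS (user 'app', password 'secret');
    CREATE FOREIGN TABLE remote_items (id int, name text)
        SERVER remote_pg;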
If a database name contained a '=' character, pg_dumpall failed. The problem
was in the way pg_dumpall passes the database name to pg_dump on the
command line. If it contained a '=' character, pg_dump would interpret it
as a libpq connection string instead of a plain database name.
To fix, pass the database name to pg_dump as a connection string,
"dbname=foo", with the database name escaped if necessary.
Alvaro Herrera [Mon, 18 Feb 2013 20:56:08 +0000 (17:56 -0300)]
Split pgstat file in smaller pieces
We now write one file per database and one global file, instead of
having the whole thing in a single huge file. This reduces the I/O that
must be done when partial data is required -- which is all the time,
because each process only needs information on its own database anyway.
Also, the autovacuum launcher does not need data about tables and
functions in each database; having the global stats for all DBs is
enough.
Catalog version bumped because we have a new subdir under PGDATA.
Author: Tomas Vondra. Some rework by Álvaro
Testing by Jeff Janes
Other discussion by Heikki Linnakangas, Tom Lane.
Peter Eisentraut [Mon, 18 Feb 2013 04:45:36 +0000 (23:45 -0500)]
Add ALTER ROLE ALL SET command
This generalizes the existing ALTER ROLE ... SET and ALTER DATABASE
... SET functionality to allow creating settings that apply to all users
in all databases.
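For instance (the setting chosen here is arbitrary), a default that applies to
all roles in all databases unless overridden by a more specific ALTER ROLE or
ALTER DATABASE setting:
    ALTER ROLE ALL SET log_statement = 'ddl';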
Better fix for "unarchived WAL files get deleted on crash recovery" bug.
Revert my earlier fix for the bug that unarchived WAL files get deleted on
crash recovery, commit c9cc7e05c6d82a9781883a016c70d95aa4923122. We create
a .done file for files streamed or restored from archive, so the WAL file
recycling logic used during normal operation works just as well during
archive recovery.
Simon Riggs [Wed, 8 Aug 2012 22:58:49 +0000 (23:58 +0100)]
Force archive_status of .done for xlogs created by dearchival/replication.
This is a forward-patch of commit 6f4b8a4f4f7a2d683ff79ab59d3693714b965e3d,
applied to 9.2 back in August. The plan was to do something else in master,
but it looks like it's not going to happen, so let's just apply the 9.2
solution to master as well.
Peter Eisentraut [Fri, 15 Feb 2013 02:40:05 +0000 (21:40 -0500)]
pgindent: Fix order in instructions
The previous order of steps didn't literally work, because git clean
-fdx would delete the downloaded typedefs.list. Also, pgindent needs to
be called with a path when one is at the top of the build tree.
Tom Lane [Fri, 15 Feb 2013 01:35:08 +0000 (20:35 -0500)]
Invent pre-commit/pre-prepare/pre-subcommit events for xact callbacks.
Currently it's only possible for loadable modules to get control during
post-commit cleanup of a transaction. That doesn't work too well if they
want to do something that could throw an error; for example, an FDW might
need to issue a remote commit, which could well fail. To improve matters,
extend the existing APIs for XactCallback and SubXactCallback functions
to provide new pre-commit events for this purpose.
The release notes will need to mention that existing callback functions
should be checked to make sure they don't do something unwanted when one
of the new event types occurs. In the examples within our source tree,
contrib/sepgsql was fine but plpgsql had been a bit too cute.
If users create tablespaces inside the old cluster directory, it is
impossible for the delete script to delete _only_ the old cluster files,
so don't create a script in that case, and issue a message to the user.
Tom Lane [Wed, 13 Feb 2013 21:20:01 +0000 (16:20 -0500)]
Fix CVE-2013-0255 properly.
Revert commit ab0f7b6089fd215f6ce6081e2e222c38d643a526 (in HEAD only)
in favor of the proper solution, which is to declare enum_recv() correctly
in the system catalogs. It should be declared to take type "internal"
not "cstring".
Also improve the type_sanity regression test, which should have caught
this typo, so that it actually would. Most of the relevant checks on
the signature of type I/O functions should not have been restricted to
basetypes/pseudotypes, as they should apply to any type's I/O functions.
Tom Lane [Wed, 13 Feb 2013 19:07:06 +0000 (14:07 -0500)]
Fix contrib/pg_trgm's similarity() function for trigram-free strings.
Cases such as similarity('', '') produced a NaN result due to computing
0/0. Per discussion, make it return zero instead.
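The degenerate case now behaves as follows (requires the pg_trgm extension):
    SELECT similarity('', '');   -- 0, previously NaN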
This appears to be the basic cause of bug #7867 from Michele Baravalle,
although it remains unclear why her installation doesn't think Cyrillic
letters are letters.
Since a backend adds itself to the global listener array during
Exec_ListenPreCommit, it's inappropriate for it to remove itself during
Exec_UnlistenCommit or Exec_UnlistenAllCommit --- that leads to failure
when committing a transaction that did UNLISTEN then LISTEN, since we end
up not registered though we should be. (This leads to missing later
notifications, or to Assert failures in assert-enabled builds.) Instead
deal with deregistering at the bottom of AtCommit_Notify, when we know the
final state of the listenChannels list.
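A minimal reproduction of the problematic pattern (channel name arbitrary):
    LISTEN mychan;
    BEGIN;
    UNLISTEN mychan;
    LISTEN mychan;
    COMMIT;
    -- after commit the session should still receive NOTIFY mychan;
    -- previously it could end up silently deregistered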
Also, simplify the representation of registration status by replacing the
transient backendHasExecutedInitialListen flag with an amRegisteredListener
flag.
Per report from Greg Sabino Mullane. Back-patch to 9.0, where the problem
was introduced during the LISTEN/NOTIFY rewrite.
Update visibility map in the second phase of vacuum.
There's a high chance that a page becomes all-visible when the second phase
of vacuum removes all the dead tuples on it, so it makes sense to check for
that. Otherwise the visibility map won't get updated until the next vacuum.
Alvaro Herrera [Tue, 12 Feb 2013 13:33:40 +0000 (10:33 -0300)]
Create libpgcommon, and move pg_malloc et al to it
libpgcommon is a new static library to allow sharing code among the
various frontend programs and backend; this lets us eliminate duplicate
implementations of common routines. We avoid libpgport, because that's
intended as a place for porting issues; per discussion, it seems better
to keep them separate.
The first use case, and the only implemented by this patch, is pg_malloc
and friends, which many frontend programs were already using.
At the same time, we can use this to provide palloc emulation functions
for the frontend; this way, some palloc-using files in the backend can
also be used by the frontend cleanly. To do this, we change palloc() in
the backend to be a function instead of a macro on top of
MemoryContextAlloc(). This was previously believed to cause loss of
performance, but this implementation has been tweaked by Tom and Andres
so that on modern compilers it provides a slight improvement over the
previous one.
This lets us clean up some places that were previously making do with
localized hacks.
Most of the pg_malloc/palloc changes in this patch were authored by
Andres Freund. Zoltán Böszörményi also independently provided a form of
that. libpgcommon infrastructure was authored by Álvaro.
The reason this wasn't supported before was that GiST indexes need an
increasing sequence to detect concurrent page-splits. In a regular WAL-
logged GiST index, the LSN of the page-split record is used for that
purpose, and in a temporary index, we can get away with a backend-local
counter. Neither of those methods works for an unlogged relation.
To provide such an increasing sequence of numbers, create a "fake LSN"
counter that is saved and restored across shutdowns. On recovery, unlogged
relations are blown away, so the counter doesn't need to survive that
either.
Jeevan Chalke, based on discussions with Robert Haas, Tom Lane and me.