Stephen Frost [Thu, 14 May 2015 19:41:39 +0000 (15:41 -0400)]
Make repeated 'make installcheck' runs work
In pg_audit, raise client_min_messages to warning and then reset the
role attributes, completely resetting the session without making the
regression tests depend on being run by any particular user.
Stephen Frost [Thu, 14 May 2015 19:16:27 +0000 (15:16 -0400)]
Improve pg_audit regression tests
Instead of creating a new superuser role, determine the current user
and use that role instead. Further, clean up and drop all objects
created by the regression test.
Tom Lane [Thu, 14 May 2015 16:08:40 +0000 (12:08 -0400)]
Support "expanded" objects, particularly arrays, for better performance.
This patch introduces the ability for complex datatypes to have an
in-memory representation that is different from their on-disk format.
On-disk formats are typically optimized for minimal size, and in any case
they can't contain pointers, so they are often not well-suited for
computation. Now a datatype can invent an "expanded" in-memory format
that is better suited for its operations, and then pass that around among
the C functions that operate on the datatype. There are also provisions
(rudimentary as yet) to allow an expanded object to be modified in-place
under suitable conditions, so that operations like assignment to an element
of an array need not involve copying the entire array.
The initial application for this feature is arrays, but it is not hard
to foresee using it for other container types like JSON, XML and hstore.
I have hopes that it will be useful to PostGIS as well.
In this initial implementation, a few heuristics have been hard-wired
into plpgsql to improve performance for arrays that are stored in
plpgsql variables. We would like to generalize those hacks so that
other datatypes can obtain similar improvements, but figuring out some
appropriate APIs is left as a task for future work. (The heuristics
themselves are probably not optimal yet, either, as they sometimes
force expansion of arrays that would be better left alone.)
Preliminary performance testing shows impressive speed gains for plpgsql
functions that do element-by-element access or update of large arrays.
There are other cases that get a little slower, as a result of added array
format conversions; but we can hope to improve anything that's annoyingly
bad. In any case most applications should see a net win.
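As an illustration, here is a minimal plpgsql sketch (a hypothetical
function, not from the patch) of the element-by-element access pattern
that benefits from the expanded format:

    CREATE FUNCTION sum_elems(arr float8[]) RETURNS float8 AS $$
    DECLARE
        s float8 := 0;
        i int;
    BEGIN
        FOR i IN array_lower(arr, 1) .. array_upper(arr, 1) LOOP
            s := s + arr[i];  -- per-element access of a plpgsql array variable
        END LOOP;
        RETURN s;
    END
    $$ LANGUAGE plpgsql;

With the expanded in-memory format, a loop like this can fetch elements
directly instead of repeatedly deconstructing the flat array value.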
Stephen Frost [Thu, 14 May 2015 15:55:36 +0000 (11:55 -0400)]
Further fixes for the buildfarm for pg_audit
Also, use a function to load the extension ahead of all other calls,
simulating loading via shared_preload_libraries, to make sure the
hooks are in place before logging starts.
Stephen Frost [Thu, 14 May 2015 14:57:12 +0000 (10:57 -0400)]
Fix buildfarm with regard to pg_audit
Remove the check that pg_audit be installed via
shared_preload_libraries, as that's not going to work when running the
regression tests in the buildfarm. That check was primarily a
nice-to-have and isn't required anyway.
Stephen Frost [Thu, 14 May 2015 14:36:16 +0000 (10:36 -0400)]
Add pg_audit, an auditing extension
This extension provides detailed logging classes, the ability to
control logging at a per-object level, and fully-qualified object
names for logged statements (DML and DDL) in independent fields of the
log output.
Authors: Ian Barwick, Abhijit Menon-Sen, David Steele
Reviews by: Robert Haas, Tatsuo Ishii, Sawada Masahiko, Fujii Masao,
Simon Riggs
Discussion with: Josh Berkus, Jaime Casanova, Peter Eisentraut,
David Fetter, Yeb Havinga, Alvaro Herrera, Petr Jelinek, Tom Lane,
MauMau, Bruce Momjian, Jim Nasby, Michael Paquier,
Fabrízio de Royes Mello, Neil Tiffin
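A minimal usage sketch; the GUC name below is an assumption for
illustration, not something this commit message confirms:

    CREATE EXTENSION pg_audit;
    SET pg_audit.log = 'ddl, write';  -- choose session logging classes (assumed GUC)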
Tom Lane [Wed, 13 May 2015 22:48:05 +0000 (18:48 -0400)]
Fix distclean/maintainer-clean targets to remove top-level tmp_install dir.
The top-level makefile removes tmp_install in its "clean" target, but the
distclean and maintainer-clean targets overlooked that (and they don't
simply invoke clean, because that would result in an extra tree traversal).
While at it, let's just make sure that removing GNUmakefile itself is the
very last step of the recipe.
Tom Lane [Wed, 13 May 2015 18:05:17 +0000 (14:05 -0400)]
Fix postgres_fdw to return the right ctid value in EvalPlanQual cases.
If a postgres_fdw foreign table is a non-locked source relation in an
UPDATE, DELETE, or SELECT FOR UPDATE/SHARE, and the query selects its
ctid column, the wrong value would be returned if an EvalPlanQual
recheck occurred. This happened because the foreign table's result row
was copied via the ROW_MARK_COPY code path, and EvalPlanQualFetchRowMarks
just unconditionally set the reconstructed tuple's t_self to "invalid".
To fix that, we can have EvalPlanQualFetchRowMarks copy the composite
datum's t_ctid field, and be sure to initialize that along with t_self
when postgres_fdw constructs a tuple to return.
If we just did that much then EvalPlanQualFetchRowMarks would start
returning "(0,0)" as ctid for all other ROW_MARK_COPY cases, which perhaps
does not matter much, but then again maybe it might. The cause of that is
that heap_form_tuple, which is the ultimate source of all composite datums,
simply leaves t_ctid as zeroes in newly constructed tuples. That seems
like a bad idea on general principles: a field that's really not been
initialized shouldn't appear to have a valid value. So let's eat the
trivial additional overhead of doing "ItemPointerSetInvalid(&(td->t_ctid))"
in heap_form_tuple.
This closes out our handling of Etsuro Fujita's report that tableoid and
ctid weren't correctly set in postgres_fdw EvalPlanQual cases. Along the
way we did a great deal of work to improve FDWs' ability to control row
locking behavior; which was not wasted effort by any means, but it didn't
end up being a fix for this problem because that feature would be too
expensive for postgres_fdw to use all the time.
Although the fix for the tableoid misbehavior was back-patched, I'm
hesitant to do so here; it seems far less likely that people would care
about remote ctid than tableoid, and even such a minor behavioral change
as this in heap_form_tuple is perhaps best not back-patched. So commit
to HEAD only, at least for the moment.
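A sketch of the affected scenario, using hypothetical tables (remote_t
is a postgres_fdw foreign table, local_t a plain table):

    SELECT r.ctid, r.*
      FROM remote_t r JOIN local_t l ON r.id = l.id
       FOR UPDATE OF l;  -- remote_t is a non-locked source relation

Under READ COMMITTED, a concurrent update of the matched local_t row
triggers an EvalPlanQual recheck, and r.ctid previously came back as
the "invalid" item pointer.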
Andrew Dunstan [Wed, 13 May 2015 17:52:08 +0000 (13:52 -0400)]
Fix jsonb replace and delete on scalars and empty structures
These operations now error out if attempted on scalars, and simply
return the input if attempted on empty arrays or objects. Along the way
we remove the unnecessary cloning of the input when it's known to be
unchanged. Regression tests covering these cases are added.
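A behavior sketch under the new rules:

    SELECT '"scalar"'::jsonb - 'a';   -- now errors: cannot delete from a scalar
    SELECT '{}'::jsonb - 'a';         -- empty object is returned unchanged
    SELECT '[]'::jsonb - 0;           -- empty array is returned unchanged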
Andres Freund [Wed, 13 May 2015 05:31:04 +0000 (07:31 +0200)]
Add pgstattuple_approx() to the pgstattuple extension.
The new function allows estimating bloat and other table-level
statistics in a faster, but approximate, way. It does so by using
information from the free space map for pages marked all-visible in the
visibility map. The rest of the table is actually read, and free
space/bloat there is measured accurately. In many cases this allows
getting bloat information much more quickly, with less I/O.
Author: Abhijit Menon-Sen
Reviewed-By: Andres Freund, Amit Kapila and Tomas Vondra
Discussion: 20140402214144.GA28681@kea.toroid.org
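A usage sketch (the table name is a placeholder):

    CREATE EXTENSION pgstattuple;
    SELECT * FROM pgstattuple_approx('some_table');  -- approximate bloat figures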
Peter Eisentraut [Wed, 13 May 2015 02:52:18 +0000 (22:52 -0400)]
PL/Python: Remove procedure cache invalidation
This was added to react to changes in the pg_transform catalog, but
building with CLOBBER_CACHE_ALWAYS showed that PL/Python was not
prepared for having its procedure cache cleared. Since this is a
marginal use case, and we don't do this for other catalogs anyway, we
can postpone this to another day.
Andres Freund [Tue, 12 May 2015 22:13:22 +0000 (00:13 +0200)]
Fix ON CONFLICT bugs that manifest when used in rules.
Specifically, the tlist and rti of the pseudo "excluded" relation weren't
properly treated by expression_tree_walker, which led to errors when
excluded was referenced inside a rule because the varnos were not
properly adjusted. Similar omissions in OffsetVarNodes and
expression_tree_mutator had less impact, but should obviously be fixed
nonetheless.
A couple of tests for ON CONFLICT UPDATE on relations bearing INSERT
rules have been added.
Andrew Dunstan [Tue, 12 May 2015 19:52:45 +0000 (15:52 -0400)]
Additional functions and operators for jsonb
jsonb_pretty(jsonb) produces nicely indented json output.
jsonb || jsonb concatenates two jsonb values.
jsonb - text removes a key and its associated value from the jsonb value
jsonb - int removes the designated array element
jsonb - text[] removes a key and associated value or array element at
the designated path
jsonb_replace(jsonb,text[],jsonb) replaces the array element designated
by the path or the value associated with the key designated by the path
with the given value.
Original work by Dmitry Dolgov, adapted and reworked for PostgreSQL core
by Andrew Dunstan, reviewed and tidied up by Petr Jelinek.
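Quick examples of the new operators and functions, following the
descriptions above:

    SELECT jsonb_pretty('{"a": {"b": 1}}'::jsonb);
    SELECT '{"a": 1}'::jsonb || '{"b": 2}'::jsonb;          -- {"a": 1, "b": 2}
    SELECT '{"a": 1, "b": 2}'::jsonb - 'a';                 -- {"b": 2}
    SELECT '["x", "y", "z"]'::jsonb - 1;                    -- ["x", "z"]
    SELECT '{"a": {"b": 1}}'::jsonb - '{a,b}'::text[];      -- {"a": {}}
    SELECT jsonb_replace('{"a": {"b": 1}}', '{a,b}', '2'::jsonb);  -- {"a": {"b": 2}}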
Tom Lane [Tue, 12 May 2015 18:10:10 +0000 (14:10 -0400)]
Add support for doing late row locking in FDWs.
Previously, FDWs could only do "early row locking", that is, lock a row as
soon as it's fetched, even though local restriction/join conditions might
discard the row later. This patch adds callbacks that allow FDWs to do
late locking in the same way that it's done for regular tables.
To make use of this feature, an FDW must support the "ctid" column as a
unique row identifier. Currently, since ctid has to be of type TID,
the feature is of limited use, though in principle it could be used by
postgres_fdw. We may eventually allow FDWs to specify another data type
for ctid, which would make it possible for more FDWs to use this feature.
This commit does not modify postgres_fdw to use late locking. We've
tested some prototype code for that, but it's not in committable shape,
and besides it's quite unclear whether it actually makes sense to do late
locking against a remote server. The extra round trips required are likely
to outweigh any benefit from improved concurrency.
Etsuro Fujita, reviewed by Ashutosh Bapat, and hacked up a lot by me
Stephen Frost [Tue, 12 May 2015 17:13:12 +0000 (13:13 -0400)]
pgbench: Don't fail during startup
In pgbench, report, but ignore, any errors returned when attempting to
vacuum/truncate the default tables during startup. If the tables are
needed, we'll error out soon enough anyway.
Per discussion with Tatsuo, David Rowley, Jim Nasby, Robert, Andres,
Fujii, Fabrízio de Royes Mello, Tomas Vondra, Michael Paquier, Peter,
based on a suggestion from Jeff Janes, patch from Robert, additional
message wording from Tom.
Andrew Dunstan [Tue, 12 May 2015 13:29:10 +0000 (09:29 -0400)]
Map basebackup tablespaces using a tablespace_map file
Windows can't reliably restore symbolic links from a tar format, so
instead during backup start we create a tablespace_map file, which is
used by the restoring postgres to create the correct links in pg_tblspc.
The backup protocol also now has an option to request this file to be
included in the backup stream, and this is used by pg_basebackup when
operating in tar mode.
This is done on all platforms, not just Windows.
This means that pg_basebackup will not work in tar mode against 9.4
and older servers, as this protocol option isn't implemented there.
Amit Kapila, reviewed by Dilip Kumar, with a little editing from me.
Alvaro Herrera [Mon, 11 May 2015 22:14:31 +0000 (19:14 -0300)]
Allow on-the-fly capture of DDL event details
This feature lets user code inspect and take action on DDL events.
Whenever a ddl_command_end event trigger is installed, DDL actions
executed are saved to a list which can be inspected during execution of
a function attached to ddl_command_end.
The set-returning function pg_event_trigger_ddl_commands can be used to
list actions so captured; it returns data about the type of command
executed, as well as the affected object. This is sufficient for many
uses of this feature. For the cases where it is not, we also provide a
"command" column of a new pseudo-type pg_ddl_command, which is a
pointer to a C structure that can be accessed by C code. The struct
contains all the info necessary to completely inspect and even
reconstruct the executed command.
There is no actual deparse code here; that's expected to come later.
What we have is enough infrastructure that the deparsing can be done in
an external extension. The intention is that we will add some deparsing
code in a later release, as an in-core extension.
A new test module is included. It's probably insufficient as is, but it
should be sufficient as a starting point for a more complete and
future-proof approach.
Authors: Álvaro Herrera, with some help from Andres Freund, Ian Barwick,
Abhijit Menon-Sen.
Reviews by Andres Freund, Robert Haas, Amit Kapila, Michael Paquier,
Craig Ringer, David Steele.
Additional input from Chris Browne, Dimitri Fontaine, Stephen Frost,
Petr Jelínek, Tom Lane, Jim Nasby, Steven Singer, Pavel Stěhule.
Based on original work by Dimitri Fontaine, though I didn't use his
code.
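A minimal sketch of inspecting captured commands from a
ddl_command_end trigger; the column names command_tag and
object_identity are assumptions about the function's output, not
confirmed by this commit message:

    CREATE FUNCTION report_ddl() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    DECLARE
        r record;
    BEGIN
        FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
            RAISE NOTICE 'captured %: %', r.command_tag, r.object_identity;
        END LOOP;
    END
    $$;
    CREATE EVENT TRIGGER capture_ddl ON ddl_command_end
        EXECUTE PROCEDURE report_ddl();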
Stephen Frost [Mon, 11 May 2015 19:44:12 +0000 (15:44 -0400)]
Allow LOCK TABLE .. ROW EXCLUSIVE MODE with INSERT
INSERT acquires RowExclusiveLock during normal operation and therefore
it makes sense to allow LOCK TABLE .. ROW EXCLUSIVE MODE to be executed
by users who have INSERT rights on a table (even if they don't have
UPDATE or DELETE).
Not back-patching this as it's a behavior change which, strictly
speaking, loosens security restrictions.
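A sketch with hypothetical names:

    CREATE TABLE audit_log (msg text);
    GRANT INSERT ON audit_log TO writer;          -- writer has INSERT only
    SET ROLE writer;
    BEGIN;
    LOCK TABLE audit_log IN ROW EXCLUSIVE MODE;   -- now allowed with INSERT rights
    INSERT INTO audit_log VALUES ('hello');
    COMMIT;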
Tom Lane [Mon, 11 May 2015 16:25:28 +0000 (12:25 -0400)]
Fix incorrect checking of deferred exclusion constraint after a HOT update.
If a row that potentially violates a deferred exclusion constraint is
HOT-updated later in the same transaction, the exclusion constraint would
be reported as violated when the check finally occurs, even if the row(s)
the new row originally conflicted with have since been removed. This
happened because the wrong TID was passed to check_exclusion_constraint(),
causing the live HOT-updated row to be seen as a conflicting row rather
than recognized as the row-under-test.
Per bug #13148 from Evan Martin. It's been broken since exclusion
constraints were invented, so back-patch to all supported branches.
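A sketch of the failure scenario, with a hypothetical table:

    CREATE TABLE reservations (
        during int4range,
        note   text,
        EXCLUDE USING gist (during WITH &&) DEFERRABLE INITIALLY DEFERRED
    );
    INSERT INTO reservations VALUES ('[1,10)', 'old');
    BEGIN;
    INSERT INTO reservations VALUES ('[5,15)', 'new');         -- conflicts, check deferred
    DELETE FROM reservations WHERE note = 'old';               -- conflict removed
    UPDATE reservations SET note = 'new2' WHERE note = 'new';  -- HOT update (non-indexed column)
    COMMIT;  -- previously could raise a spurious exclusion violation here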
Robert Haas [Mon, 11 May 2015 16:07:13 +0000 (12:07 -0400)]
Increase threshold for multixact member emergency autovac to 50%.
Analysis by Noah Misch shows that the 25% threshold set by commit 53bb309d2d5a9432d2602c93ed18e58bd2924e15 is lower than any other,
similar autovac threshold. While we don't know exactly what value
will be optimal for all users, it is better to err a little on the
high side than on the low side. A higher value increases the risk
that users might exhaust the available space and start seeing errors
before autovacuum can clean things up sufficiently, but a user who
hits that problem can compensate for it by reducing
autovacuum_multixact_freeze_max_age to a value dependent on their
average multixact size. On the flip side, if the emergency cap
imposed by that patch kicks in too early, the user will experience
excessive wraparound scanning and will be unable to mitigate that
problem by configuration. The new value will hopefully reduce the
risk of such bad experiences while still providing enough headroom
to avoid multixact member exhaustion for most users.
Along the way, adjust the documentation to reflect the effects of
commit 04e6d3b877e060d8445eb653b7ea26b1ee5cec6b, which taught
autovacuum to run for multixact wraparound even when autovacuum
is configured off.
Robert Haas [Mon, 11 May 2015 02:21:20 +0000 (22:21 -0400)]
Advance the stop point for multixact offset creation only at checkpoint.
Commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c advanced the stop point
at vacuum time, but this has subsequently been shown to be unsafe as a
result of analysis by myself and Thomas Munro and testing by Thomas
Munro. The crux of the problem is that the SLRU deletion logic may
get confused about what to remove if, at exactly the right time during
the checkpoint process, the head of the SLRU crosses what used to be
the tail.
This patch, by me, fixes the problem by advancing the stop point only
following a checkpoint. This has the additional advantage of making
the removal logic work during recovery more like the way it works during
normal running, which is probably good.
At least one of the calls to DetermineSafeOldestOffset which this patch
removes was already dead, because MultiXactAdvanceOldest is called only
during recovery and DetermineSafeOldestOffset was set up to do nothing
during recovery. That, however, is inconsistent with the principle that
recovery and normal running should work similarly, and was confusing to
boot.
Along the way, fix some comments that previous patches in this area
neglected to update. It's not clear to me whether there's any
concrete basis for the decision to use only half of the multixact ID
space, but it's neither necessary nor sufficient to prevent multixact
member wraparound, so the comments should not say otherwise.
Tom Lane [Sun, 10 May 2015 18:36:30 +0000 (14:36 -0400)]
Code review for foreign/custom join pushdown patch.
Commit e7cb7ee14555cc9c5773e2c102efd6371f6f2005 included some design
decisions that seem pretty questionable to me, and there was quite a lot
of stuff not to like about the documentation and comments. Clean up
as follows:
* Consider foreign joins only between foreign tables on the same server,
rather than between any two foreign tables with the same underlying FDW
handler function. In most if not all cases, the FDW would simply have had
to apply the same-server restriction itself (far more expensively, both for
lack of caching and because it would be repeated for each combination of
input sub-joins), or else risk nasty bugs. Anyone who's really intent on
doing something outside this restriction can always use the
set_join_pathlist_hook.
* Rename fdw_ps_tlist/custom_ps_tlist to fdw_scan_tlist/custom_scan_tlist
to better reflect what they're for, and allow these custom scan tlists
to be used even for base relations.
* Change make_foreignscan() API to include passing the fdw_scan_tlist
value, since the FDW is required to set that. Backwards compatibility
doesn't seem like an adequate reason to expect FDWs to set it in some
ad-hoc extra step, and anyway existing FDWs can just pass NIL.
* Change the API of path-generating subroutines of add_paths_to_joinrel,
and in particular that of GetForeignJoinPaths and set_join_pathlist_hook,
so that various less-used parameters are passed in a struct rather than
as separate parameter-list entries. The objective here is to reduce the
probability that future additions to those parameter lists will result in
source-level API breaks for users of these hooks. It's possible that this
is even a small win for the core code, since most CPU architectures can't
pass more than half a dozen parameters efficiently anyway. I kept root,
joinrel, outerrel, innerrel, and jointype as separate parameters to reduce
code churn in joinpath.c --- in particular, putting jointype into the
struct would have been problematic because of the subroutines' habit of
changing their local copies of that variable.
* Avoid ad-hocery in ExecAssignScanProjectionInfo. It was probably all
right for it to know about IndexOnlyScan, but if the list is to grow
we should refactor the knowledge out to the callers.
* Restore nodeForeignscan.c's previous use of the relcache to avoid
extra GetFdwRoutine lookups for base-relation scans.
* Lots of cleanup of documentation and missed comments. Re-order some
code additions into more logical places.
Andrew Dunstan [Sat, 9 May 2015 17:06:49 +0000 (13:06 -0400)]
Add new OID alias type regrole
The new type has the scope of the whole database cluster, so it doesn't
behave the same as the existing OID alias types, which have database
scope, with respect to object dependencies. To avoid confusion,
constants of the new type are prohibited from appearing where
dependencies are made involving it.
Also, add a note to the docs about possible MVCC violation and
optimization issues, which apply generally to all reg* types.
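A usage sketch (assumes a role named "postgres" exists):

    SELECT 'postgres'::regrole;        -- role name to OID alias
    SELECT 'postgres'::regrole::oid;   -- the underlying OID value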
Stephen Frost [Sat, 9 May 2015 15:13:37 +0000 (11:13 -0400)]
Improve ParseConfigFp comment wrt head/tail
The head_p and tail_p pointers passed to ParseConfigFp() are actually
input/output parameters, not strictly output parameters. This updates
the function comment to reflect that.
Stephen Frost [Fri, 8 May 2015 23:39:42 +0000 (19:39 -0400)]
Change default for include_realm to 1
The default behavior for GSS and SSPI authentication methods has long
been to strip the realm off of the principal, however, this is not a
secure approach in multi-realm environments and the use-case for the
parameter at all has been superseded by the regex-based mapping support
available in pg_ident.conf.
Change the default for include_realm to be '1', meaning that we do
NOT remove the realm from the principal by default. Any installations
which depend on the existing behavior will need to update their
configurations (ideally by leaving include_realm set to 1 and adding a
mapping in pg_ident.conf, but alternatively by explicitly setting
include_realm=0 prior to upgrading). Note that the mapping capability
exists in all currently supported versions of PostgreSQL and so this
change can be done today. Barring that, existing users can update their
configurations today to explicitly set include_realm=0 to ensure that
the prior behavior is maintained when they upgrade.
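A configuration sketch of the recommended setup; realm and map names
below are placeholders:

    # pg_hba.conf: keep the realm and map principals explicitly
    host  all  all  0.0.0.0/0  gss  include_realm=1  map=myrealm

    # pg_ident.conf: strip the expected realm via a regex mapping
    # MAPNAME   SYSTEM-USERNAME          PG-USERNAME
    myrealm     /^(.*)@EXAMPLE\.COM$     \1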
Stephen Frost [Fri, 8 May 2015 23:25:30 +0000 (19:25 -0400)]
Modify pg_stat_get_activity to build a tuplestore
This updates pg_stat_get_activity() to build a tuplestore for its
results instead of using the old-style multiple-call method. This
simplifies the function, though that wasn't the primary motivation for
the change, which is that we may turn it into a helper function which
can filter the results (or not) much more easily.
Stephen Frost [Fri, 8 May 2015 23:09:26 +0000 (19:09 -0400)]
Add pg_file_settings view and function
The function and view added here provide a way to look at all settings
in postgresql.conf, any #include'd files, and postgresql.auto.conf
(which is what backs the ALTER SYSTEM command).
The information returned includes the configuration file name, line
number in that file, sequence number indicating when the parameter is
loaded (useful to see if it is later masked by another definition of the
same parameter), parameter name, and what it is set to at that point.
This information is updated on reload of the server.
This is unfiltered, privileged information and therefore access is
restricted to superusers through the GRANT system.
Author: Sawada Masahiko, various improvements by me.
Reviewers: David Steele
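A query sketch; the column names here follow the description above but
are otherwise assumptions:

    SELECT sourcefile, sourceline, seqno, name, setting
      FROM pg_file_settings
     WHERE name = 'shared_buffers';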
Andres Freund [Fri, 8 May 2015 20:22:05 +0000 (22:22 +0200)]
Fix two problems in infer_arbiter_indexes().
The first is a pretty simple bug where a relcache entry is used after
the relation is closed. In this particular situation it does not appear
to have bad consequences unless compiled with RELCACHE_FORCE_RELEASE.
The second is that infer_arbiter_indexes() skipped indexes that aren't
yet valid according to indcheckxmin. That's not required here, because
uniqueness checks don't care about visibility according to an older
snapshot. While that's not really a bug, it makes things undesirably
non-deterministic. There is some hope that this explains a test failure
on buildfarm member jaguarundi.
At promotion, archive last segment from old timeline with .partial suffix.
Previously, we would archive the possibly-incomplete WAL segment with its
normal filename, but that causes trouble if the server owning that timeline
is still running, and tries to archive the same segment later. It's not nice
for the standby to trip up the master's archival like that. And it's pretty
confusing, anyway, to have an incomplete segment in the archive that's
indistinguishable from a normal, complete segment.
To avoid such confusion, add a .partial suffix to the file. Or to be more
precise, make a copy of the old segment under the .partial suffix, and
archive that instead of the original file. pg_receivexlog also uses the
.partial suffix for the same purpose, to tell apart incompletely streamed
files from complete ones.
There is no automatic mechanism to use the .partial files at recovery, so
they will go unused, unless the administrator manually copies to them to
the pg_xlog directory (and removes the .partial suffix). Recovery won't
normally need the WAL - when recovering to the new timeline, it will find
the same WAL on the first segment on the new timeline instead - but it
nevertheless feels better to archive the file with the .partial suffix, for
debugging purposes if nothing else.
To fix this, progressively ramp down the effective value (but not the
actual contents) of autovacuum_multixact_freeze_max_age as member space
utilization increases. This makes autovacuum more aggressive and also
reduces the threshold for a manual VACUUM to perform a full-table scan.
This patch leaves unsolved the problem of ensuring that emergency
autovacuums are triggered even when autovacuum=off. We'll need to fix
that via a separate patch.
Andres Freund [Fri, 8 May 2015 04:06:03 +0000 (06:06 +0200)]
Remove dependency on ordering in logical decoding upsert test.
Buildfarm member magpie sorted the output differently than intended by
Peter. "Resolve" the problem by simply not aggregating, it's not that
many lines.
Andres Freund [Fri, 8 May 2015 03:31:36 +0000 (05:31 +0200)]
Add support for INSERT ... ON CONFLICT DO NOTHING/UPDATE.
The newly added ON CONFLICT clause allows specifying an alternative to
raising a unique or exclusion constraint violation error when inserting.
ON CONFLICT refers to constraints that can either be specified using an
inference clause (by specifying the columns of a unique constraint) or
by naming a unique or exclusion constraint. DO NOTHING avoids the
constraint violation, without touching the pre-existing row. DO UPDATE
SET ... [WHERE ...] updates the pre-existing tuple, and has access to
both the tuple proposed for insertion and the existing tuple; the
optional WHERE clause can be used to prevent an update from being
executed. The UPDATE SET and WHERE clauses have access to the tuple
proposed for insertion using the "magic" EXCLUDED alias, and to the
pre-existing tuple using the table name or its alias.
This feature is often referred to as upsert.
This is implemented using a new infrastructure called "speculative
insertion". It is an optimistic variant of regular insertion that first
does a pre-check for existing tuples and then attempts an insert. If a
violating tuple was inserted concurrently, the speculatively inserted
tuple is deleted and a new attempt is made. If the pre-check finds a
matching tuple the alternative DO NOTHING or DO UPDATE action is taken.
If the insertion succeeds without detecting a conflict, the tuple is
deemed inserted.
To handle the possible ambiguity between the excluded alias and a table
named excluded, and for convenience with long relation names, INSERT
INTO now can alias its target table.
Bumps catversion as stored rules change.
Author: Peter Geoghegan, with significant contributions from Heikki
Linnakangas and Andres Freund. Testing infrastructure by Jeff Janes.
Reviewed-By: Heikki Linnakangas, Andres Freund, Robert Haas, Simon Riggs,
Dean Rasheed, Stephen Frost and many others.
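Examples of both forms, using a hypothetical counters table:

    CREATE TABLE counters (key text PRIMARY KEY, value int);

    -- upsert: insert, or add to the existing count on conflict
    INSERT INTO counters AS c (key, value) VALUES ('hits', 1)
    ON CONFLICT (key) DO UPDATE SET value = c.value + EXCLUDED.value;

    -- or skip conflicting rows entirely
    INSERT INTO counters (key, value) VALUES ('hits', 1)
    ON CONFLICT DO NOTHING;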
Andres Freund [Thu, 7 May 2015 22:20:46 +0000 (00:20 +0200)]
Represent columns requiring insert and update privileges independently.
Previously, relation range table entries used a single Bitmapset field
representing which columns required either UPDATE or INSERT privileges,
despite the fact that INSERT and UPDATE privileges are separately
cataloged, and may be independently held. As statements so far required
either insert or update privileges but never both, that was
sufficient. The required permission could be inferred from the top level
statement run.
The upcoming INSERT ... ON CONFLICT UPDATE feature needs to
independently check for both privileges in one statement though, so that
is not sufficient anymore.
Bumps catversion as stored rules change.
Author: Peter Geoghegan
Reviewed-By: Andres Freund
Alvaro Herrera [Thu, 7 May 2015 16:02:22 +0000 (13:02 -0300)]
Improve BRIN infra, minmax opclass and regression test
The minmax opclass was using the wrong support functions when
cross-datatype queries were run. Instead of trying to fix the
pg_amproc definitions (which apparently is not possible), use the
already correct pg_amop entries instead. This requires jumping through
more hoops (read: extra syscache lookups) to obtain the underlying
functions to execute, but it is necessary for correctness.
Author: Emre Hasegeli, tweaked by Álvaro
Review: Andreas Karlsson
Also change BrinOpcInfo to record each stored type's typecache entry
instead of just the OID. Turns out that the full type cache is
necessary in brin_deform_tuple: the original code used the indexed
type's byval and typlen properties to extract the stored tuple, which is
correct in Minmax; but in other implementations that want to store
something different, that's wrong. The realization that this is a bug
comes from Emre also, but I did not use his patch.
I also adopted Emre's regression test code (with smallish changes),
which is more complete.
Robert Haas [Thu, 7 May 2015 15:00:47 +0000 (11:00 -0400)]
Fix incorrect math in DetermineSafeOldestOffset.
The old formula didn't have enough parentheses, so it would do the wrong
thing, and it used / rather than % to find a remainder. The effect of
these oversights is that the stop point chosen by the logic introduced in
commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c might be rather
meaningless.
Thomas Munro, reviewed by Kevin Grittner, with a whitespace tweak by me.
Magnus Hagander [Thu, 7 May 2015 13:04:13 +0000 (15:04 +0200)]
Properly send SCM status updates when shutting down service on Windows
The Service Control Manager should be notified regularly during a shutdown
that takes a long time. Previously we increased the counter but forgot to
actually send the notification to the system. The loop counter was also
incorrectly initialized in the event that system startup took long enough
for it to increase, which could cause the shutdown process not to wait as
long as expected.
Tom Lane [Tue, 5 May 2015 19:50:53 +0000 (15:50 -0400)]
Fix incorrect declaration of citext's regexp_matches() functions.
These functions should return SETOF TEXT[], like the core functions they
are wrappers for; but they were incorrectly declared as returning just
TEXT[]. This mistake had two results: first, if there was no match you got
a scalar null result, whereas what you should get is an empty set (zero
rows). Second, the 'g' flag was effectively ignored, since you would get
only one result array even if there were multiple matches, as reported by
Jeff Certain.
While ignoring 'g' is a clear bug, the behavior for no matches might well
have been thought to be the intended behavior by people who hadn't compared
it carefully to the core regexp_matches() functions. So we should tread
carefully about introducing this change in the back branches. Still, it
clearly is a bug and so providing some fix is desirable.
After discussion, the conclusion was to introduce the change in a 1.1
version of the citext extension (as we would need to do anyway); 1.0 still
contains the incorrect behavior. 1.1 is the default and only available
version in HEAD, but it is optional in the back branches, where 1.0 remains
the default version. People wishing to adopt the fix in back branches will
need to explicitly do ALTER EXTENSION citext UPDATE TO '1.1'. (I also
provided a downgrade script in the back branches, so people could go back
to 1.0 if necessary.)
This should be called out as an incompatible change in the 9.5 release
notes, although we'll also document it in the next set of back-branch
release notes. The notes should mention that any views or rules that use
citext's regexp_matches() functions will need to be dropped before
upgrading to 1.1, and then recreated again afterwards.
Back-patch to 9.1. The bug goes all the way back to citext's introduction
in 8.4, but pre-9.1 there is no extension mechanism with which to manage
the change. Given the lack of previous complaints it seems unnecessary to
change this behavior in 9.0, anyway.
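A sketch of the behavioral difference after updating (hypothetical
input):

    ALTER EXTENSION citext UPDATE TO '1.1';
    SELECT regexp_matches('barbaz'::citext, 'b..', 'g');
    -- 1.1: a set of two rows, {bar} and {baz}; zero rows when nothing matches
    -- 1.0: a single array, ignoring 'g', and a scalar NULL on no match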
Robert Haas [Tue, 5 May 2015 12:30:28 +0000 (08:30 -0400)]
Fix some problems with patch to fsync the data directory.
pg_win32_is_junction() was a typo for pgwin32_is_junction(). open()
was used not only in a two-argument form, which breaks on Windows,
but also where BasicOpenFile() should have been used.
Tom Lane [Mon, 4 May 2015 19:38:57 +0000 (15:38 -0400)]
Improve procost estimates for some text search functions.
The text search functions that involve parsing raw text into lexemes are
remarkably CPU-intensive, so estimating them at the same cost as most other
built-in functions seems like a mistake; moreover, doing so turns out to
discourage the optimizer from using functional indexes on these functions.
After some debate, we've agreed to raise procost from 1 to 100 for
to_tsvector(), plainto_tsquery(), to_tsquery(), ts_headline(),
ts_match_tt(), and ts_match_tq(), which are all the text search functions
that parse raw text.
Also increase procost for the 2-argument form of ts_rewrite()
(tsquery_rewrite_query); while this function doesn't do text parsing,
it does execute a user-supplied SQL query, so its previous procost of 1 is
clearly a drastic underestimate. It seems reasonable to assign it the same
cost we assign to PL functions by default, so 100 is the number here too.
I did not bother bumping catversion for this change, since it does not
break catalog compatibility with the server executable nor result in
any regression test changes.
Per complaint from Andrew Gierth and subsequent discussion.
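On a back branch, a comparable adjustment could be applied manually; a
sketch, not part of this commit:

    ALTER FUNCTION to_tsvector(regconfig, text) COST 100;
    SELECT proname, procost FROM pg_proc WHERE proname = 'to_tsvector';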
Robert Haas [Mon, 4 May 2015 18:13:53 +0000 (14:13 -0400)]
Recursively fsync() the data directory after a crash.
Otherwise, if there's another crash, some writes from after the first
crash might make it to disk while writes from before the crash fail
to make it to disk. This could lead to data corruption.
Back-patch to all supported versions.
Abhijit Menon-Sen, reviewed by Andres Freund and slightly revised
by me.
Andrew Dunstan [Mon, 4 May 2015 16:38:58 +0000 (12:38 -0400)]
Fix two small bugs in json's populate_record_worker
The first bug is not releasing a tupdesc when doing an early return out
of the function. The second bug is a logic error in choosing when to do
an early return if given an empty jsonb object.
Bug reports from Pavel Stehule and Tom Lane respectively.
Tom Lane [Mon, 4 May 2015 03:44:52 +0000 (23:44 -0400)]
Second try at fixing warnings caused by commit 9b43d73b3f9bef27.
Commit ef3f9e642d2b2bba suppressed one cause of warnings here, but
recent clang on OS X is still unhappy because we're passing a "long"
to abs(). The fact that tm_gmtoff is declared as long is no doubt a
hangover from days when int might be only 16 bits; but Postgres has
never been able to run on such machines, so we can just cast it to int
with no worries. For consistency, also cast to int in the other
uses of tm_gmtoff in this stanza.
Note: this code is still broken on machines that don't follow C99
integer-division-truncates-towards-zero rules. Given the lack of
complaints about it, I don't feel a large desire to complicate things
enough to cope with the pre-C99 rules.
Tom Lane [Sun, 3 May 2015 15:30:24 +0000 (11:30 -0400)]
Fix overlooked relcache invalidation in ALTER TABLE ... ALTER CONSTRAINT.
When altering the deferredness state of a foreign key constraint, we
correctly updated the catalogs and then invalidated the relcache state for
the target relation ... but that's not the only relation with relevant
triggers. Must invalidate the other table as well, or the state change
fails to take effect promptly for operations triggered on the other table.
Per bug #13224 from Christian Ullrich.
In passing, reorganize regression test case for this feature so that it
isn't randomly injected into the middle of an unrelated test sequence.
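The kind of statement affected, with hypothetical names:

    ALTER TABLE orders
        ALTER CONSTRAINT orders_customer_fk DEFERRABLE INITIALLY DEFERRED;
    -- the referenced table's relcache entry must be invalidated too, or its
    -- triggers keep firing with the old deferredness state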
Andrew Dunstan [Sun, 3 May 2015 13:10:47 +0000 (09:10 -0400)]
Enable transforms modules to build and run with Mingw builds.
These modules were all missing essential Windows scaffolding, including
resources files and descriptions, and links to the relevant library
import files. This latter item means that the modules can't be built
with pgxs on Windows, as we don't install the import files. If we ever
decide to install them this restriction could probably be removed.
Also, as with plperl we need to make sure that perl's CORE directory is
last on the include list, as on Windows it appears to contain some
headers with names that clash with names of some headers we include.
Andrew Dunstan [Sun, 3 May 2015 12:17:04 +0000 (08:17 -0400)]
Fix python_includespec on Windows at configure time
By converting to using forward slashes at configure time we avoid
having to repeat the logic anywhere that this is needed, such as
in transforms modules for plpython.
Noah Misch [Sat, 2 May 2015 20:46:52 +0000 (16:46 -0400)]
Fix one more TAP test to use standard command-line argument ordering.
Commit c67a86f7da90c30b81f91957023fb752f06f0598 caught most of these,
but this negative test escaped notice. The test did pass, for the wrong
reason, under affected configurations.
Noah Misch [Sat, 2 May 2015 20:46:23 +0000 (16:46 -0400)]
Rename coerce_type() local variable.
coerce_type() has local variables named targetTypeId, baseTypeId, and
targetType. targetType has been the Type structure for baseTypeId, so
rename it to baseType.
Make hstore_plperl's build even more like plperl's
Combine the two places that set CPPFLAGS into one. Also, some settings
should be restricted to Windows only. More precisely, -Wno-comment is
a GCC-only option, but Windows in a makefile implies GCC at the moment.
Also, since -Wno-comment is more properly a preprocessor option, move it
to CPPFLAGS to simplify things a bit.
Move interpreter shared library detection to configure
For building PL/Perl, PL/Python, and PL/Tcl, we need a shared library of
libperl, libpython, and libtcl, respectively. Previously, this was
checked in the makefiles, skipping the PL build with a warning if no
shared library was available. Now this is checked in configure, with an
error if no shared library is available.
The previous situation arose because in the olden days, the configure
options --with-perl, --with-python, and --with-tcl controlled whether
frontend interfaces for those languages would be built. The procedural
languages were added later, and shared libraries were often not
available in the beginning. So it was decided to skip the builds of the
procedural languages in those cases. The frontend interfaces have since
been removed from the tree, and shared libraries are now available most
of the time, so that setup makes much less sense now.
Also, the new setup allows contrib modules and pgxs users to rely on the
respective PLs being available based on configure flags.
Andrew Dunstan [Fri, 1 May 2015 19:36:44 +0000 (15:36 -0400)]
Make hstore_plperl's build more like plperl's
This involves moving perl's CORE library to the end of the include list,
and adding other compilation settings that plperl uses. This won't
completely fix the breakage currently being seen by gcc builds on
Windows, but it will let the build get further, and should be wholly
benign, if not beneficial, on *nix.
Robert Haas [Fri, 1 May 2015 13:37:10 +0000 (09:37 -0400)]
Deparse named arguments to use the new => operator instead of :=
Tom Lane pointed out that this wasn't done, and asked whether that was
intentional. Subsequent discussion was in favor of making the change,
so here we go.
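For example, a view calling a function with named arguments now
deparses like this (a sketch):

    SELECT make_interval(days => 7);   -- previously deparsed as make_interval(days := 7)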