Joe Conway [Thu, 6 Apr 2017 21:28:19 +0000 (14:28 -0700)]
Silence uninitialized variable compiler warning in sepgsql
At -Og optimization gcc warns that variable tclass may be used
uninitialized when relkind == RELKIND_INDEX. Actually that can't
happen due to an early return, but quiet the compiler by initializing
tclass to 0.
In passing, use uint16_t consistently for the declaration of tclass.
Complaint and initial patch by Mike Palmiotto. Editorializing by me.
Probably not worth backpatching given that it is cosmetic, so apply
to development head only.
Joe Conway [Thu, 6 Apr 2017 21:21:25 +0000 (14:21 -0700)]
Silence compiler warning in sepgsql
<selinux/label.h> includes <stdbool.h>, which creates an incompatible
definition of "bool", drawing a compiler warning. We don't care if
<stdbool.h> redefines "true"/"false"; those are close enough.
Complaint and initial patch by Mike Palmiotto. Final approach per
Tom Lane's suggestion, as discussed on hackers. Backpatching to
all supported branches.
The original code was overly optimistic about the cost of scanning a
BRIN index, leading to BRIN indexes being selected when they'd be a
worse choice than some other index. This complete rewrite should be
more accurate.
Author: David Rowley, based on an earlier patch by Emre Hasegeli
Reviewed-by: Emre Hasegeli
Discussion: https://postgr.es/m/CAKJS1f9n-Wapop5Xz1dtGdpdqmzeGqQK4sV2MK-zZugfC14Xtw@mail.gmail.com
Andres Freund [Thu, 6 Apr 2017 20:44:48 +0000 (13:44 -0700)]
Add minimal test for EXPLAIN ANALYZE of parallel query.
This displays the number of workers launched, thus the test is
dependent on configuration to some degree. We'll see whether that
turns out to be a problem.
Fix logical replication between different encodings
When sending a tuple attribute, the previous coding erroneously sent the
length byte before encoding conversion, which would lead to protocol
failures on the receiving side if the length did not match the following
string.
To fix that, use pq_sendcountedtext() for sending tuple attributes,
which takes care of all of that internally. To match the API of
pq_sendcountedtext(), send even text values without a trailing zero byte
and have the receiving end put it in place instead. This matches how
the standard FE/BE protocol behaves.
Mark immutable functions in information schema as parallel safe
Also add opr_sanity check that all preloaded immutable functions are
parallel safe. (Per discussion, this does not necessarily have to be
true for all possible such functions, but deviations would be unlikely
enough that maintaining such a test is reasonable.)
Reported-by: David Rowley <david.rowley@2ndquadrant.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Peter Eisentraut [Tue, 30 Aug 2016 16:00:00 +0000 (12:00 -0400)]
pg_dump: Rename some typedefs to avoid name conflicts
In struct _archiveHandle, some of the fields have the same name as a
typedef. This is kind of confusing, so rename the types so they have
names distinct from the struct fields. In C++, the previous coding
changes the meaning of the typedef in the scope of the struct, causing
warnings and possibly other problems.
It was not used for what the comment claimed, at all. It was actually used
as the 'base' argument to strtol(), when reading the iteration count. We
don't need a constant for base-10, so remove it.
This is the SQL standard-conforming variant of PostgreSQL's serial
columns. It fixes a few usability issues that serial columns have:
- CREATE TABLE / LIKE copies default but refers to same sequence
- cannot add/drop serialness with ALTER TABLE
- dropping default does not drop sequence
- need to grant separate privileges to sequence
- other slight weirdnesses because serial is some kind of special macro
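For illustration, a minimal sketch of the standard-conforming syntax this adds (table and column names are hypothetical):
    CREATE TABLE orders (
        id    integer GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
        note  text
    );
    -- unlike serial, identity can be added or dropped later with ALTER TABLE:
    ALTER TABLE orders ALTER COLUMN id DROP IDENTITY IF EXISTS;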
Simon Riggs [Thu, 6 Apr 2017 12:31:52 +0000 (08:31 -0400)]
Avoid SnapshotResetXmin() during AtEOXact_Snapshot()
For normal commits and aborts we already reset PgXact->xmin,
so we can simply avoid running SnapshotResetXmin() twice.
During performance tests by Alexander Korotkov, diagnosis
by Andres Freund showed PgXact array as a bottleneck. After
manual analysis by me of the code paths that touch those
memory locations, I was able to identify extraneous code
in the main transaction commit path.
Avoiding touching highly contended shmem improves concurrent
performance slightly on all workloads, as confirmed by tests
run by Ashutosh Sharma and Alexander Korotkov.
Remove dead code and fix comments in fast-path function handling.
HandleFunctionRequest() is no longer responsible for reading the protocol
message from the client, since commit 2b3a8b20c2. Fix the outdated
comments.
HandleFunctionRequest() now always returns 0, because the code that used
to return EOF was moved in 2b3a8b20c2. Therefore, the caller no longer
needs to check the return value.
Reported by Andres Freund. Backpatch to all supported versions, even though
this doesn't have any user-visible effect, to make backporting future
patches in this area easier.
Tom Lane [Thu, 6 Apr 2017 03:51:27 +0000 (23:51 -0400)]
Fix integer-overflow problems in interval comparison.
When using integer timestamps, the interval-comparison functions tried
to compute the overall magnitude of an interval as an int64 number of
microseconds. As reported by Frazer McLean, this overflows for intervals
exceeding about 296000 years, which is bad since we nominally allow
intervals many times larger than that. That results in wrong comparison
results, and possibly in corrupted btree indexes for columns containing
such large interval values.
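A hypothetical example of the kind of comparison affected (the int64 overflow threshold is roughly 2^63 microseconds, about 296000 years):
    -- with int64 arithmetic the computed magnitudes could overflow and
    -- potentially give a wrong answer; with int128 this compares correctly
    SELECT interval '300000 years' > interval '299999 years 11 months';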
To fix, compute the magnitude as int128 instead. Although some compilers
have native support for int128 calculations, many don't, so create our
own support functions that can do 128-bit addition and multiplication
if the compiler support isn't there. These support functions are designed
with an eye to allowing the int128 code paths in numeric.c to be rewritten
for use on all platforms, although this patch doesn't do that, or even
provide all the int128 primitives that will be needed for it.
Back-patch as far as 9.4. Earlier releases did not guard against overflow
of interval values at all (commit 146604ec4 fixed that), so it seems not
very exciting to worry about overly-large intervals for them.
Before 9.6, we did not assume that unreferenced "static inline" functions
would not draw compiler warnings, so omit functions not directly referenced
by timestamp.c, the only present consumer of int128.h. (We could have
omitted these functions in HEAD too, but since they were written and
debugged on the way to the present patch, and they look likely to be needed
by numeric.c, let's keep them in HEAD.) I did not bother to try to prevent
such warnings in a --disable-integer-datetimes build, though.
Before 9.5, configure will never define HAVE_INT128, so the part of
int128.h that exploits a native int128 implementation is dead code in the
9.4 branch. I didn't bother to remove it, thinking that keeping the file
looking similar in different branches is more useful.
In HEAD only, add a simple test harness for int128.h in src/tools/.
In back branches, this does not change the float-timestamps code path.
That's not subject to the same kind of overflow risk, since it computes
the interval magnitude as float8. (No doubt, when this code was originally
written, overflow was disregarded for exactly that reason.) There is a
precision hazard instead :-(, but we'll avert our eyes from that question,
since no complaints have been reported and that code's deprecated anyway.
Simon Riggs [Wed, 5 Apr 2017 22:00:42 +0000 (18:00 -0400)]
Collect and use multi-column dependency stats
Follow on patch in the multi-variate statistics patch series.
CREATE STATISTICS s1 WITH (dependencies) ON (a, b) FROM t;
ANALYZE;
will collect dependency stats on (a, b) and then use the measured
dependency in subsequent query planning.
Commit 7b504eb282ca2f5104b5c00b4f05a3ef6bb1385b added
CREATE STATISTICS with n-distinct coefficients. These are now
specified using the mutually exclusive option WITH (ndistinct).
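For comparison, a sketch of the two mutually exclusive forms (statistics and table names are hypothetical):
    CREATE STATISTICS s1 WITH (dependencies) ON (a, b) FROM t;
    CREATE STATISTICS s2 WITH (ndistinct)    ON (a, b) FROM t;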
Author: Tomas Vondra, David Rowley
Reviewed-by: Kyotaro HORIGUCHI, Álvaro Herrera, Dean Rasheed, Robert Haas,
with many other comments and contributions
Discussion: https://postgr.es/m/56f40b20-c464-fad2-ff39-06b668fac47c@2ndquadrant.com
Robert Haas [Wed, 5 Apr 2017 18:17:23 +0000 (14:17 -0400)]
Fix pageinspect failures on hash indexes.
Make every page in a hash index which isn't all-zeroes have a valid
special space, so that tools like pageinspect don't error out.
Also, make pageinspect cope with all-zeroes pages, because
_hash_alloc_buckets can leave behind large numbers of those until
they're consumed by splits.
Ashutosh Sharma and Robert Haas, reviewed by Amit Kapila.
Original trouble report from Jeff Janes.
Kevin Grittner [Tue, 4 Apr 2017 23:36:39 +0000 (18:36 -0500)]
Follow-on cleanup for the transition table patch.
Commit 59702716 added transition table support to PL/pgsql so that
SQL queries in trigger functions could access those transient
tables. In order to provide the same level of support for PL/perl,
PL/python and PL/tcl, refactor the relevant code into a new
function SPI_register_trigger_data. Call the new function in the
trigger handler of all four PLs, and document it as a public SPI
function so that authors of out-of-tree PLs can do the same.
Also get rid of a second QueryEnvironment object that was
maintained by PL/pgsql. That was previously used to deal with
cursors, but the same approach wasn't appropriate for PLs that are
less tangled up with core code. Instead, have SPI_cursor_open
install the connection's current QueryEnvironment, as already
happens for SPI_execute_plan.
While in the docs, remove the note that transition tables were only
supported in C and PL/pgSQL triggers, and correct some omissions.
Thomas Munro with some work by Kevin Grittner (mostly docs)
Andres Freund [Tue, 4 Apr 2017 21:26:42 +0000 (14:26 -0700)]
Fix two valgrind issues in slab allocator.
During allocation VALGRIND_MAKE_MEM_DEFINED was called with a pointer
as size. That kind of works, but makes valgrind exceedingly slow for
workloads involving the slab allocator.
Secondly there was an access to memory marked as unreachable within
SlabCheck(). Fix that too.
Author: Tomas Vondra
Discussion: https://postgr.es/m/a6543b6d-6015-99b1-63ef-3ed55a76a730@2ndquadrant.com
Simon Riggs [Tue, 4 Apr 2017 19:56:56 +0000 (15:56 -0400)]
Speedup 2PC recovery by skipping two phase state files in normal path
2PC state information is now held in shmem at PREPARE and cleaned up at COMMIT PREPARED/ABORT PREPARED,
avoiding writing/fsyncing any state information to disk in the normal path, which greatly enhances replay speed.
Prepared transactions that live past one checkpoint redo horizon will still be written to disk, as they are now.
Similar conceptually to 978b2f65aa1262eb4ecbf8b3785cb1b9cf4db78e and building upon
the infrastructure created by that commit.
Authors, in equal measure: Stas Kelvich, Nikhil Sontakke and Michael Paquier
Discussion: https://postgr.es/m/CAMGcDxf8Bn9ZPBBJZba9wiyQq-Qk5uqq=VjoMnRnW5s+fKST3w@mail.gmail.com
When changing the type of a sequence, adjust the min/max values of the
sequence if it looks like the previous values were the default values.
Previously, it would leave the old values in place, requiring manual
adjustments even in the usual/default cases.
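A minimal sketch of the adjusted behavior (the sequence name is hypothetical):
    CREATE SEQUENCE seq AS integer;   -- MAXVALUE defaults to 2147483647
    ALTER SEQUENCE seq AS smallint;   -- MAXVALUE is now adjusted to 32767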
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Vitaly Burovoy <vitaly.burovoy@gmail.com>
Stephen Frost [Tue, 4 Apr 2017 12:42:09 +0000 (08:42 -0400)]
Remove --verbose from PROVE_FLAGS
Per discussion, the TAP tests are really more verbose than necessary, so
remove the --verbose flag from PROVE_FLAGS. Also add comments to let
folks know how they can enable it if they really wish to, as suggested
by Craig Ringer.
Author: Michael Paquier, additional comments by me.
Discussion: https://postgr.es/m/CAMsr%2BYGAzcMDOZ_BirnMCL6Sb%3DMUjP0FRE82YBDSbXcf6pm9Yg%40mail.gmail.com
Fix remote position tracking in logical replication
We need to set the origin remote position to end_lsn, not commit_lsn, as
commit_lsn is the start of commit record, and we use the origin remote
position as start position when restarting replication stream. If we'd
use commit_lsn, we could request data that we already received from the
remote server after a crash of a downstream server.
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Robert Haas [Tue, 4 Apr 2017 11:42:05 +0000 (07:42 -0400)]
Fix formula in _hash_spareindex.
This was correct in earlier versions of the patch that led to
commit ea69a0dead5128c421140dc53fac165ba4af8520, but somehow got
broken in the last version which I actually committed.
Mithun Cy, per an off-list report from Ashutosh Sharma
Robert Haas [Tue, 4 Apr 2017 03:46:33 +0000 (23:46 -0400)]
Expand hash indexes more gradually.
Since hash indexes typically have very few overflow pages, adding a
new splitpoint essentially doubles the on-disk size of the index,
which can lead to large and abrupt increases in disk usage (and
perhaps long delays on occasion). To mitigate this problem to some
degree, divide larger splitpoints into four equal phases. This means
that, for example, instead of growing from 4GB to 8GB all at once, a
hash index will now grow from 4GB to 5GB to 6GB to 7GB to 8GB, which
is perhaps still not as smooth as we'd like but certainly an
improvement.
This changes the on-disk format of the metapage, so bump HASH_VERSION
from 2 to 3. This will force a REINDEX of all existing hash indexes,
but that's probably a good idea anyway. First, hash indexes from
pre-10 versions of PostgreSQL could easily be corrupted, and we don't
want to confuse corruption carried over from an older release with any
corruption caused despite the new write-ahead logging in v10. Second,
it will let us remove some backward-compatibility code added by commit 293e24e507838733aba4748b514536af2d39d7f2.
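After upgrading, a hypothetical existing hash index would be rebuilt with:
    REINDEX INDEX idx_orders_hash;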
Mithun Cy, reviewed by Amit Kapila, Jesper Pedersen and me. Regression
test outputs updated by me.
Robert Haas [Tue, 4 Apr 2017 02:41:31 +0000 (22:41 -0400)]
Abstract logic to allow for multiple kinds of child rels.
Currently, the only type of child relation is an "other member rel",
which is the child of a baserel, but in the future joins and even
upper relations may have child rels. To facilitate that, introduce
macros that test for particular RelOptKind values, and use
them in various places where they help to clarify the sense of a test.
(For example, a test may allow RELOPT_OTHER_MEMBER_REL either because
it intends to allow child rels, or because it intends to allow simple
rels.)
Also, remove find_childrel_top_parent, which will not work for a
child rel that is not a baserel. Instead, add a new RelOptInfo
member top_parent_relids to track the same kind of information in a
more generic manner.
Ashutosh Bapat, slightly tweaked by me. Review and testing of the
patch set from which this was taken by Rajkumar Raghuwanshi and Rafia
Sabih.
Andrew Gierth [Mon, 3 Apr 2017 22:30:24 +0000 (23:30 +0100)]
Try and silence spurious Coverity warning.
gset_data (aka gd) in planner.c is always non-null if and only if
parse->groupingSets is non-null, but Coverity doesn't know that and
complains. Feed it an assertion to see if that keeps it happy.
Change the style of links generated by xrefs to section number only, as
it was with DSSSL, instead of number and title, as is the default of the
XSLT stylesheets.
Our documentation is mostly written expecting the old style, so keep
that for the time being, per discussion.
Tom Lane [Sun, 2 Apr 2017 23:19:16 +0000 (19:19 -0400)]
Remove reinvention of stringify macro.
We already have CppAsString2, there's no need for the MSVC support to
re-invent a macro to do that (and especially not to inject it in as
ugly a way as this).
Tom Lane [Sun, 2 Apr 2017 22:26:37 +0000 (18:26 -0400)]
Document psql's behavior of recalling the previously executed query.
Various psql slash commands that normally act on the current query buffer
will automatically recall and re-use the most recently executed SQL command
instead, if the current query buffer is empty. Although this behavior is
ancient (dating apparently to commit 77a472993), it was documented nowhere
in the psql reference page. For that matter, we'd never bothered to define
the concept of "current query buffer" explicitly. Fix that. Do some
wordsmithing on relevant command descriptions to improve clarity and
consistency.
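For example, in a psql session (a minimal sketch):
    SELECT count(*) FROM pg_class;
    -- with an empty query buffer, \g recalls and re-executes the query above
    \g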
Tom Lane [Sun, 2 Apr 2017 20:50:25 +0000 (16:50 -0400)]
Fix behavior of psql's \p to agree with \g, \w, etc.
In commit e984ef586 I (tgl) simplified the behavior of \p to just print
the current query buffer; but Daniel Vérité points out that this made it
inconsistent with the behavior of \g and \w. It should print the same
thing \g would execute. Fix that, and improve related comments.
Tom Lane [Sun, 2 Apr 2017 01:44:54 +0000 (21:44 -0400)]
Allow psql variable substitution to occur in backtick command strings.
Previously, text between backquotes in a psql metacommand's arguments
was always passed to the shell literally. That considerably hobbles
the usefulness of the feature for scripting, so we'd foreseen for a long
time that we'd someday want to allow substitution of psql variables into
the shell command. IMO the addition of \if metacommands has brought us to
that point, since \if can greatly benefit from some sort of client-side
expression evaluation capability, and psql itself is not going to grow any
such thing in time for v10. Hence, this patch. It allows :VARIABLE to be
replaced by the exact contents of the named variable, while :'VARIABLE'
is replaced by the variable's contents suitably quoted to become a single
shell-command argument. (The quoting rules for that are different from
those for SQL literals, so this is a bit of an abuse of the :'VARIABLE'
notation, but I doubt anyone will be confused.)
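A hypothetical psql snippet showing the quoted form inside backquotes:
    \set datafile 'my data.csv'
    -- :'datafile' becomes a single, shell-quoted argument, whereas
    -- :datafile would be substituted literally
    \set line_count `wc -l < :'datafile'`
    \echo :line_count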
As with other situations in psql, no substitution occurs if the word
following a colon is not a known variable name. That limits the risk of
compatibility problems for existing psql scripts; but the risk isn't zero,
so this needs to be called out in the v10 release notes.
Kevin Grittner [Sat, 1 Apr 2017 20:21:05 +0000 (15:21 -0500)]
Fix two undocumented parameters to functions from ENR patch.
On ProcessUtility, document the parameter, to match the others.
On CreateCachedPlan, drop the queryEnv parameter. It was not
referenced within the function, and had been added on the
assumption that some unknown future use of QueryEnvironment
might need it. We have avoided other "just in case" additions
of unused parameters, so drop it here.
When the BRIN summary tuple for a page range becomes too "wide" for the
values actually stored in the table (because the tuples that were
present originally are no longer present due to updates or deletes), it
can be useful to remove the outdated summary tuple, so that a future
summarization can install a tighter summary.
This commit introduces a SQL-callable interface to do so.
Previously, only VACUUM would cause a page range to get initially
summarized by BRIN indexes, which for some use cases means too long a
delay after the inserts occur. To avoid the delay, have brininsert request a
summarization run for the previous range as soon as the first tuple is
inserted into the first page of the next range. Autovacuum is in charge
of processing these requests, after doing all the regular vacuuming/
analyzing work on tables.
This doesn't impose any new tasks on autovacuum, because autovacuum was
already in charge of doing summarizations. The only actual effect is to
change the timing, i.e. that it occurs earlier. For this reason, we
don't go to any great lengths to record these requests very robustly; if
they are lost because of a server crash or restart, they will happen at
a later time anyway.
Most of the new code here is in autovacuum, which can now be told about
"work items" to process. This can be used for other things such as GIN
pending list cleaning, perhaps visibility map bit setting, both of which
are currently invoked during vacuum, but do not really depend on vacuum
taking place.
The requests are at the page range level, a granularity for which we did
not have SQL-level access; we only had index-level summarization
requests via brin_summarize_new_values(). It seems reasonable to add
SQL-level access to range-level summarization too, so add a function
brin_summarize_range() to do that.
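For example, to summarize a single page range by hand (index name and block number are hypothetical):
    SELECT brin_summarize_range('brin_idx_on_t', 0);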
Authors: Álvaro Herrera, based on sketch from Simon Riggs.
Reviewed-by: Thomas Munro.
Discussion: https://postgr.es/m/20170301045823.vneqdqkmsd4as4ds@alvherre.pgsql
Magnus Hagander [Sat, 1 Apr 2017 15:04:14 +0000 (17:04 +0200)]
Write "waiting for checkpoint" on regular progress row
When reporting progress, make the "waiting for checkpoint" text be
overwritten by the file-based progress once it's completed. This is more
consistent with how we report the rest of the progress.
Kevin Grittner [Sat, 1 Apr 2017 04:17:18 +0000 (23:17 -0500)]
Add infrastructure to support EphemeralNamedRelation references.
A QueryEnvironment concept is added, which allows new types of
objects to be passed into queries from parsing on through
execution. At this point, the only thing implemented is a
collection of EphemeralNamedRelation objects -- relations which
can be referenced by name in queries, but do not exist in the
catalogs. The only type of ENR implemented is NamedTuplestore, but
provision is made to add more types fairly easily.
An ENR can carry its own TupleDesc or reference a relation in the
catalogs by relid.
Although these features can be used without SPI, convenience
functions are added to SPI so that ENRs can easily be used by code
run through SPI.
The initial use of all this is going to be transition tables in
AFTER triggers, but that will be added to each PL as a separate
commit.
An incidental effect of this patch is to produce a more informative
error message if an attempt is made to modify the contents of a CTE
from a referencing DML statement. No tests previously covered that
possibility, so one is added.
Kevin Grittner and Thomas Munro
Reviewed by Heikki Linnakangas, David Fetter, and Thomas Munro
with valuable comments and suggestions from many others
Robert Haas [Sat, 1 Apr 2017 01:15:05 +0000 (21:15 -0400)]
Avoid GatherMerge crash when there are no workers.
It's unnecessary to return an actual slot when we have no tuple.
We can just return NULL, which avoids the risk of indexing into an
array that might not contain any elements.
Robert Haas [Sat, 1 Apr 2017 01:01:20 +0000 (21:01 -0400)]
Fix parallel query so it doesn't spoil row estimates above Gather.
Commit 45be99f8cd5d606086e0a458c9c72910ba8a613d removed GatherPath's
num_workers field, but this is entirely bogus. Normally, a path's
parallel_workers flag is supposed to indicate the number of workers
that it wants, and should be 0 for a non-partial path. In that
commit, I mistakenly thought that GatherPath could also use that field
to indicate the number of workers that it would try to start, but
that's disastrous, because then it can propagate up to higher nodes in
the plan tree, which will then get incorrect rowcounts because the
parallel_workers flag is involved in computing those values. Repair
by putting the separate field back.
Report by Tomas Vondra. Patch by me, reviewed by Amit Kapila.
Robert Haas [Sat, 1 Apr 2017 00:35:51 +0000 (20:35 -0400)]
Don't use bgw_main even to specify in-core bgworker entrypoints.
On EXEC_BACKEND builds, this can fail if ASLR is in use.
Backpatch to 9.5. On master, remove the bgw_main field
completely, since there is no situation in which it is safe for an
EXEC_BACKEND build. On 9.6 and 9.5, leave the field intact to avoid
breaking things for third-party code that doesn't care about working
under EXEC_BACKEND. Prior to 9.5, there are no in-core bgworker
entrypoints.
Tom Lane [Sat, 1 Apr 2017 00:24:12 +0000 (20:24 -0400)]
Fix unstable regression test result.
Commit e306df7f9 added a test case that depends on "the" being a
stop word, which it is not in non-English locales. Since the
point of the test is to check stopword behavior, fix by forcibly
selecting the 'english' configuration.
Tom Lane [Fri, 31 Mar 2017 22:11:25 +0000 (18:11 -0400)]
For foreign keys, check REFERENCES privilege only on the referenced table.
We were requiring that the user have REFERENCES permission on both the
referenced and referencing tables --- but this doesn't seem to have any
support in the SQL standard, which says only that you need REFERENCES
permission on the referenced table. And ALTER TABLE ADD FOREIGN KEY has
already checked that you own the referencing table, so the check could
only fail if a table owner has revoked his own REFERENCES permission.
Moreover, the symmetric interpretation of this permission is unintuitive
and confusing, as per complaint from Paul Jungwirth. So let's drop the
referencing-side check.
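A sketch of the resulting rule, with hypothetical table and role names:
    -- only REFERENCES on the referenced table is required now
    GRANT REFERENCES ON customers TO app_owner;
    -- app_owner, who owns orders, can then do:
    ALTER TABLE orders
        ADD FOREIGN KEY (customer_id) REFERENCES customers (id);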
In passing, do a bit of wordsmithing on the GRANT reference page so that
all the privilege types are described in similar fashion.
Robert Haas [Fri, 31 Mar 2017 20:47:38 +0000 (16:47 -0400)]
Revert "Allow ON CONFLICT .. DO NOTHING on a partitioned table."
This reverts commit 8355a011a0124bdf7ccbada206a967d427039553, which
turns out to have been a misguided effort. We can't really support
this in a partitioning hierarchy after all for exactly the reasons
stated in the documentation removed by that commit. It's still
possible to use ON CONFLICT .. DO NOTHING (or for that matter ON
CONFLICT .. DO UPDATE) on individual partitions if desired, but
to allow this on a partitioned table implies that we have some
way of evaluating uniqueness across the whole partitioning
hierarchy, which is false.
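For instance, this remains possible on an individual partition (hypothetical names and values):
    INSERT INTO measurements_y2017m04 VALUES (42, now(), 7.3)
        ON CONFLICT DO NOTHING;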
Shinoda Noriyoshi noticed that the old code was crashing (which we
could fix, though not in a nice way) and Amit Langote realized
that this was indicative of a fundamental problem with the commit
being reverted here.
Robert Haas [Fri, 31 Mar 2017 20:28:30 +0000 (16:28 -0400)]
Don't allocate storage for partitioned tables.
Also, don't allow setting reloptions on them, since that would have no
effect given the lack of storage. The patch does this by introducing
a new reloption kind for which there are currently no reloptions -- we
might have some in the future -- so it adjusts parseRelOptions to
handle that case correctly.
Bumped catversion. System catalogs that contained reloptions for
partitioned tables are no longer valid; plus, there are now fewer
physical files on disk, which is not technically a catalog change but
still a good reason to re-initdb.
Amit Langote, reviewed by Maksim Milyutin and Kyotaro Horiguchi and
revised a bit by me.
Simon Riggs [Thu, 30 Mar 2017 18:18:53 +0000 (14:18 -0400)]
Default monitoring roles
Three nologin roles with non-overlapping privs are created by default
* pg_read_all_settings - read all GUCs.
* pg_read_all_stats - pg_stat_*, pg_database_size(), pg_tablespace_size()
* pg_stat_scan_tables - may lock/scan tables
A top-level role, pg_monitor, includes all of the above by default, plus others.
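A hypothetical setup for a monitoring agent:
    CREATE ROLE metrics_agent LOGIN;
    GRANT pg_monitor TO metrics_agent;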
Author: Dave Page
Reviewed-by: Stephen Frost, Robert Haas, Peter Eisentraut, Simon Riggs
Tom Lane [Thu, 30 Mar 2017 16:59:11 +0000 (12:59 -0400)]
Support \if ... \elif ... \else ... \endif in psql scripting.
This patch adds nestable conditional blocks to psql. The control
structure feature per se is complete, but the boolean expressions
understood by \if and \elif are pretty primitive; basically, after
variable substitution and backtick expansion, the result has to be
"true" or "false" or one of the other standard spellings of a boolean
value. But that's enough for many purposes, since you can always
do the heavy lifting on the server side; and we can extend it later.
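A minimal sketch of a psql script using the new conditionals (object names are hypothetical):
    SELECT EXISTS (SELECT 1 FROM pg_tables
                   WHERE tablename = 'audit_log') AS has_audit \gset
    \if :has_audit
        \echo 'audit_log present'
    \else
        \echo 'audit_log missing'
    \endif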
Along the way, pay down some of the technical debt that had built up
around psql/command.c:
* Refactor exec_command() into a function per command, instead of
being a 1500-line monstrosity. This makes the file noticeably longer
because of repetitive function header/trailer overhead, but it seems
much more readable.
* Teach psql_get_variable() and psqlscanslash.l to suppress variable
substitution and backtick expansion on the basis of the conditional
stack state, thereby allowing removal of the OT_NO_EVAL kluge.
* Fix the no-doubt-once-expedient hack of sometimes silently substituting
mainloop.c's previous_buf for query_buf when calling HandleSlashCmds.
(It's a bit remarkable that commands like \r worked at all with that.)
Recall of a previous query is now done explicitly in the slash commands
where that should happen.
Corey Huinker, reviewed by Fabien Coelho, further hacking by me
Fujii Masao [Thu, 30 Mar 2017 16:31:15 +0000 (01:31 +0900)]
Simplify the example of VACUUM in documentation.
Previously a detailed activity report from VACUUM VERBOSE ANALYZE was
shown as an example of VACUUM in the docs. But it had been obsolete
for a long time. For example, commit feb4f44d296b88b7f0723f4a4f3945a371276e0b
updated the content of that activity report in 2003, but we had
forgotten to update the example.
So basically we need to update the example. But since no one cared
about the details of VACUUM output or complained about that mistake
for such a long time, per discussion on hackers, we decided to get rid
of the detailed activity report from the example and simplify it.
Back-patch to all supported versions.
Reported by Masahiko Sawada, patch by me.
Discussion: https://postgr.es/m/CAD21AoAGA2pB3p-CWmTkxBsbkZS1bcDGBLcYVcvcDxspG_XAfA@mail.gmail.com
Andres Freund [Wed, 29 Mar 2017 20:16:49 +0000 (13:16 -0700)]
Remove support for version-0 calling conventions.
The V0 convention is failure prone because we've so far assumed that a
function is V0 if PG_FUNCTION_INFO_V1 is missing, leading to crashes
if a function was coded against the V1 interface. V0 doesn't allow
proper NULL, SRF and toast handling. V0 doesn't offer features that
V1 doesn't.
Thus remove V0 support and obsolete fmgr README contents relating to
it.
Author: Andres Freund, with contributions by Peter Eisentraut & Craig Ringer
Reviewed-By: Peter Eisentraut, Craig Ringer
Discussion: https://postgr.es/m/20161208213441.k3mbno4twhg2qf7g@alap3.anarazel.de
Andres Freund [Wed, 29 Mar 2017 20:16:30 +0000 (13:16 -0700)]
Move contrib/seg to only use V1 calling conventions.
A later commit will remove V0 support.
Author: Andres Freund, with contributions by Craig Ringer
Reviewed-By: Peter Eisentraut, Craig Ringer
Discussion: https://postgr.es/m/20161208213441.k3mbno4twhg2qf7g@alap3.anarazel.de
Peter Eisentraut [Wed, 29 Mar 2017 19:17:14 +0000 (15:17 -0400)]
pg_dump: Remove query truncation in error messages
Remove the behavior that a query mentioned in an error message would be
truncated to 128 characters. The queries that pg_dump runs are often
longer than that, and this behavior makes analyzing failures harder
unnecessarily.
Alvaro Herrera [Wed, 29 Mar 2017 15:18:48 +0000 (12:18 -0300)]
Simplify check of modified attributes in heap_update
The old coding was getting more complicated as new things were added,
and it would be barely tolerable with upcoming WARM updates and other
future features such as indirect indexes. The new coding incurs a small
performance cost in synthetic benchmark cases, and is barely measurable
in normal cases. A much larger benefit is expected from WARM, which
could actually bolt its needs on top of the existing coding, but doing so
would be much uglier and more bug-prone than building on this new code.
Additional optimization can be applied on top of this, if need be.
Reviewed-by: Pavan Deolasee, Amit Kapila, Mithun CY
Discussion: https://postgr.es/m/20161228232018.4hc66ndrzpz4g4wn@alvherre.pgsql
https://postgr.es/m/CABOikdMJfz69dBNRTOZcB6s5A0tf8OMCyQVYQyR-WFFdoEwKMQ@mail.gmail.com
Robert Haas [Wed, 29 Mar 2017 14:59:27 +0000 (10:59 -0400)]
Mark more functions parallel-restricted.
Commit 61c2e1a95f94bb904953a6281ce17a18ac38ee6d allowed parallel
query to be used in more places, revealing via buildfarm member
mandrill that several functions intended to be called from triggers
were incorrectly marked parallel-safe rather than parallel-restricted.
Report by Tom Lane. Patch by Rafia Sabih. Reviewed by me.