Peter Eisentraut [Sat, 12 Aug 2017 01:04:04 +0000 (21:04 -0400)]
isn: Fix debug code
The ISN_DEBUG code did not compile. Fix that code, don't hide it behind
an #ifdef, make it run when building with asserts, and make it error out
instead of just logging if it fails.
Andres Freund [Wed, 13 Sep 2017 09:12:17 +0000 (02:12 -0700)]
Perform only one ReadControlFile() during startup.
Previously we read the control file in multiple places. But soon the
segment size will be configurable and stored in the control file, and
that needs to be available earlier than it currently is needed.
Instead of adding yet another place where it's read, refactor things
so there's a single processing of the control file during startup (in
EXEC_BACKEND that's every individual backend's startup).
Author: Andres Freund
Discussion: http://postgr.es/m/20170913092828.aozd3gvvmw67gmyc@alap3.anarazel.de
Robert Haas [Thu, 14 Sep 2017 19:41:08 +0000 (15:41 -0400)]
Expand partitioned table RTEs level by level, without flattening.
Flattening the partitioning hierarchy at this stage makes various
desirable optimizations difficult. The original use case for this
patch was partition-wise join, which wants to match up the partitions
in one partitioning hierarchy with those in another such hierarchy.
However, it now seems that it will also be useful in making partition
pruning work using the PartitionDesc rather than constraint exclusion,
because with a flattened expansion, we have no easy way to figure out
which PartitionDescs apply to which leaf tables in a multi-level
partition hierarchy.
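For context, a hedged sketch (all names invented) of the kind of multi-level
hierarchy involved; the intermediate partitioned table now gets its own
inheritance expansion instead of being flattened into the root's:
CREATE TABLE events (id int, region text, logdate date)
    PARTITION BY LIST (region);
CREATE TABLE events_emea PARTITION OF events FOR VALUES IN ('EMEA')
    PARTITION BY RANGE (logdate);
CREATE TABLE events_emea_2017 PARTITION OF events_emea
    FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');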
As it turns out, we end up creating both rte->inh and !rte->inh RTEs
for each intermediate partitioned table, just as we previously did for
the root table. This seems unnecessary since the partitioned tables
have no storage and are not scanned. We might want to go back and
rejigger things so that no partitioned tables (including the parent)
need !rte->inh RTEs, but that seems to require some adjustments not
related to the core purpose of this patch.
Ashutosh Bapat, reviewed by me and by Amit Langote. Some final
adjustments by me.
Peter Eisentraut [Wed, 16 Aug 2017 04:22:32 +0000 (00:22 -0400)]
Avoid use of bool in thread_test.c
It's not necessary for such a small program, and it causes unnecessary
extra work to get the correct definition of bool, more so if we are
going to introduce stdbool.h later.
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
Robert Haas [Thu, 14 Sep 2017 16:28:50 +0000 (12:28 -0400)]
Make RelationGetPartitionDispatchInfo expand depth-first.
With this change, the order of leaf partitions as returned by
RelationGetPartitionDispatchInfo should now be the same as the
order used by expand_inherited_rtentry. This will make it simpler
for future patches to match up the partition dispatch information
with the planner data structures. The new code is also, in my
opinion anyway, simpler and easier to understand.
Amit Langote, reviewed by Amit Khandekar. I also reviewed and
made a few cosmetic revisions.
Robert Haas [Thu, 14 Sep 2017 14:43:44 +0000 (10:43 -0400)]
Set partitioned_rels appropriately when UNION ALL is used.
In most cases, this omission won't matter, because the appropriate
locks will have been acquired during parse/plan or by AcquireExecutorLocks.
But it's a bug all the same.
Report by Ashutosh Bapat. Patch by me, reviewed by Amit Langote.
Andres Freund [Thu, 14 Sep 2017 08:53:10 +0000 (01:53 -0700)]
Properly check interrupts in execScan.c.
During the development of d47cfef711 the CFI()s in ExecScan() were
moved back and forth, ending up in the wrong place. Thus queries that
largely spend their time in ExecScan(), and have neither projection
nor a qual, can't be cancelled in a timely manner.
Reported-By: Jeff Janes
Author: Andres Freund
Discussion: https://postgr.es/m/CAMkU=1weDXp8eLLPt9SO1LEUsJYYK9cScaGhLKpuN+WbYo9b5g@mail.gmail.com
Backpatch: 10, as d47cfef711
Stephen Frost [Thu, 14 Sep 2017 00:02:09 +0000 (20:02 -0400)]
Fix ordering in pg_dump of GRANTs
The order in which GRANTs are output is important: a GRANT issued by a role
that itself received the privilege via WITH GRANT OPTION has to come
after the GRANT which included the WITH GRANT OPTION. This happens
naturally in the backend during normal operation as we only change
existing ACLs in-place, only add new ACLs to the end, and when removing
an ACL we remove any which depend on it also.
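For example (a hedged sketch with invented names), the GRANT made by alice
is only valid after the GRANT that gave alice the grant option, so the dump
must emit them in that order:
GRANT SELECT ON mytable TO alice WITH GRANT OPTION;
SET ROLE alice;
GRANT SELECT ON mytable TO bob;   -- depends on alice's WITH GRANT OPTION grant
RESET ROLE;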
Also, adjust the comments in acl.h to make this clear.
Unfortunately, the updates to pg_dump to handle initial privileges
involved pulling apart ACLs and then combining them back together and
could end up putting them back together in an invalid order, leading to
dumps which wouldn't restore.
Fix this by adjusting the queries used by pg_dump to ensure that the
ACLs are rebuilt in the same order in which they were originally.
Back-patch to 9.6 where the changes for initial privileges were done.
Tom Lane [Wed, 13 Sep 2017 16:27:01 +0000 (12:27 -0400)]
Adjust unstable regression test case.
Test queries added by commit 69835bc89 are giving unexpected results
on some smaller buildfarm critters. I think probably the seqscan
logic is kicking in to cause the scans to not start at the beginning
of the table. Add ORDER BY to make them be indexscans instead.
Tom Lane [Wed, 13 Sep 2017 15:12:39 +0000 (11:12 -0400)]
Distinguish selectivity of < from <= and > from >=.
Historically, the selectivity functions have simply not distinguished
< from <=, or > from >=, arguing that the fraction of the population that
satisfies the "=" aspect can be considered to be vanishingly small, if the
comparison value isn't any of the most-common-values for the variable.
(If it is, the code path that executes the operator against each MCV will
take care of things properly.) But that isn't really true unless we're
dealing with a continuum of variable values, and in practice we seldom are.
If "x = const" would estimate a nonzero number of rows for a given const
value, then it follows that we ought to estimate different numbers of rows
for "x < const" and "x <= const", even if the const is not one of the MCVs.
Handling this more honestly makes a significant difference in edge cases,
such as the estimate for a tight range (x BETWEEN y AND z where y and z
are close together).
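As a hedged illustration (table and column names invented), these predicates
should now receive different estimates even when the constant is not an MCV,
and a tight BETWEEN range benefits accordingly:
EXPLAIN SELECT * FROM t WHERE x <  42;
EXPLAIN SELECT * FROM t WHERE x <= 42;              -- now estimated higher than "<"
EXPLAIN SELECT * FROM t WHERE x BETWEEN 42 AND 43;  -- tight range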
Hence, split scalarltsel into scalarltsel/scalarlesel, and similarly
split scalargtsel into scalargtsel/scalargesel. Adjust <= and >=
operator definitions to reference the new selectivity functions.
Improve the core ineq_histogram_selectivity() function to make a
correction for equality. (Along the way, I learned quite a bit about
exactly why that function gives good answers, which I tried to memorialize
in improved comments.)
The corresponding join selectivity functions were, and remain, just stubs.
But I chose to split them similarly, to avoid confusion and to prevent the
need for doing this exercise again if someone ever makes them less stubby.
In passing, change ineq_histogram_selectivity's clamp for extreme
probability estimates so that it varies depending on the histogram
size, instead of being hardwired at 0.0001. With the default histogram
size of 100 entries, you still get the old clamp value, but bigger
histograms should allow us to put more faith in edge values.
Tom Lane, reviewed by Aleksander Alekseev and Kuntal Ghosh
The documentation claimed that one should send
"pg_same_as_startup_message" as the user name in the SCRAM messages, but
this did not match the actual implementation, so remove it.
Bruce Momjian [Wed, 13 Sep 2017 13:11:28 +0000 (09:11 -0400)]
docs: improve pg_upgrade standby instructions
This makes it clear that pg_upgrade standby upgrade instructions should
only be used in link mode, adds examples, and explains how rsync works
with links.
Reported-by: Andreas Joseph Krogh
Discussion: https://postgr.es/m/VisenaEmail.6c.c0e592c5af4ef0a2.15e785dcb61@tc7-visena
Peter Eisentraut [Wed, 13 Sep 2017 12:20:45 +0000 (08:20 -0400)]
Define LDAP_NO_ATTRS if necessary.
Commit 83aaac41c66959a3ebaec7daadc4885b5f98f561 introduced the use of
LDAP_NO_ATTRS to avoid requesting a dummy attribute when doing search+bind
LDAP authentication. It turns out that not all LDAP implementations define
that macro, but its value is fixed by the protocol so we can define it
ourselves if it's missing.
Author: Thomas Munro
Reported-By: Ashutosh Sharma
Discussion: https://postgr.es/m/CAE9k0Pm6FKCfPCiAr26-L_SMGOA7dT_k0%2B3pEbB8%2B-oT39xRpw%40mail.gmail.com
Tom Lane [Tue, 12 Sep 2017 23:27:48 +0000 (19:27 -0400)]
Add psql variables to track success/failure of SQL queries.
This patch adds ERROR, SQLSTATE, and ROW_COUNT, which are updated after
every query, as well as LAST_ERROR_MESSAGE and LAST_ERROR_SQLSTATE,
which are updated only when a query fails. The expected usage of these
is for scripting.
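A hedged scripting sketch (the table name is invented; the variables are the
ones added here):
UPDATE accounts SET balance = balance - 100 WHERE id = 42;
\if :ERROR
    \echo update failed with SQLSTATE :LAST_ERROR_SQLSTATE
    \echo :LAST_ERROR_MESSAGE
\else
    \echo rows updated: :ROW_COUNT
\endif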
Peter Eisentraut [Tue, 12 Sep 2017 13:46:14 +0000 (09:46 -0400)]
Allow custom search filters to be configured for LDAP auth
Before, only filters of the form "(<ldapsearchattribute>=<user>)"
could be used to search an LDAP server. Introduce ldapsearchfilter
so that more general filters can be configured using patterns, like
"(|(uid=$username)(mail=$username))" and "(&(uid=$username)
(objectClass=posixAccount))". Also allow search filters to be included
in an LDAP URL.
Author: Thomas Munro
Reviewed-By: Peter Eisentraut, Mark Cave-Ayland, Magnus Hagander
Discussion: https://postgr.es/m/CAEepm=0XTkYvMci0WRubZcf_1am8=gP=7oJErpsUfRYcKF2gwg@mail.gmail.com
Tom Lane [Tue, 12 Sep 2017 02:02:58 +0000 (22:02 -0400)]
Fix RecursiveCopy.pm to cope with disappearing files.
When copying from an active database tree, it's possible for files to be
deleted after we see them in a readdir() scan but before we can open them.
(Once we've got a file open, we don't expect any further errors from it
getting unlinked, though.) Tweak RecursiveCopy so it can cope with this
case, so as to avoid irreproducible test failures.
Back-patch to 9.6 where this code was added. In v10 and HEAD, also
remove unused "use RecursiveCopy" in one recovery test script.
Peter Eisentraut [Mon, 11 Sep 2017 20:30:50 +0000 (16:30 -0400)]
pg_receivewal: Add --endpos option
This is primarily useful for making tests of this utility more
deterministic, to avoid the complexity of starting pg_receivewal as a
daemon in TAP tests.
While this is less useful than the equivalent pg_recvlogical option,
users can also use it, for example, to stop WAL streaming at an
end-of-backup position, so as to save only a minimal amount of WAL.
Use this new option to stream WAL data in a deterministic way within a
new set of TAP tests.
Author: Michael Paquier <michael.paquier@gmail.com>
Andres Freund [Sun, 10 Sep 2017 23:20:41 +0000 (16:20 -0700)]
Constify numeric.c.
This allows the compiler/linker to move the static variables to a
read-only segment. Not all the signature changes are necessary, but
it seems better to apply const in a consistent manner.
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/20170910232154.asgml44ji2b7lv3d@alap3.anarazel.de
Tom Lane [Mon, 11 Sep 2017 20:24:34 +0000 (16:24 -0400)]
Prefer argument name over "$n" for the refname of a plpgsql argument.
If a function argument has a name, use that as the "refname" of the
PLpgSQL_datum representing the argument, instead of $n as before.
This allows better error messages in some cases.
Tom Lane [Sun, 10 Sep 2017 18:59:56 +0000 (14:59 -0400)]
Quick-hack fix for foreign key cascade vs triggers with transition tables.
AFTER triggers using transition tables crashed if they were fired due
to a foreign key ON CASCADE update. This is because ExecEndModifyTable
flushes the transition tables, on the assumption that any trigger that
could need them was already fired during ExecutorFinish. Normally
that's true, because we don't allow transition-table-using triggers
to be deferred. However, foreign key CASCADE updates force any
triggers on the referencing table to be deferred to the outer query
level, by means of the EXEC_FLAG_SKIP_TRIGGERS flag. I don't recall
all the details of why it's like that and am pretty loath to redesign
it right now. Instead, just teach ExecEndModifyTable to skip destroying
the TransitionCaptureState when that flag is set. This will allow the
transition table data to survive until end of the current subtransaction.
This isn't a terribly satisfactory solution, because (1) we might be
leaking the transition tables for much longer than really necessary,
and (2) as things stand, an AFTER STATEMENT trigger will fire once per
RI updating query, ie once per row updated or deleted in the referenced
table. I suspect that is not per SQL spec. But redesigning this is a
research project that we're certainly not going to get done for v10.
So let's go with this hackish answer for now.
In passing, tweak AfterTriggerSaveEvent to not save the transition_capture
pointer into the event record for a deferrable trigger. This is not
necessary to fix the current bug, but it avoids letting dangling pointers
to long-gone transition tables persist in the trigger event queue. That's
at least a safety feature. It might also allow merging shared trigger
states in more cases than before.
I added a regression test that demonstrates the crash on unpatched code,
and also exposes the behavior of firing the AFTER STATEMENT triggers
once per row update.
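A hedged sketch of that kind of setup (all names invented):
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child (parent_id int REFERENCES parent ON DELETE CASCADE);
CREATE FUNCTION report_child_deletes() RETURNS trigger AS $$
BEGIN
    RAISE NOTICE 'removed % child rows', (SELECT count(*) FROM removed);
    RETURN NULL;
END $$ LANGUAGE plpgsql;
CREATE TRIGGER child_after_delete
    AFTER DELETE ON child REFERENCING OLD TABLE AS removed
    FOR EACH STATEMENT EXECUTE PROCEDURE report_child_deletes();
DELETE FROM parent;   -- cascaded delete previously crashed; with the fix the
                      -- trigger fires (once per row deleted in parent, as above)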
Per bug #14808 from Philippe Beaudoin. Back-patch to v10.
Tom Lane [Sun, 10 Sep 2017 17:19:11 +0000 (13:19 -0400)]
Remove pre-order and post-order traversal logic for red-black trees.
This code isn't used, and there's no clear reason why anybody would ever
want to use it. These traversal mechanisms don't yield a visitation order
that is semantically meaningful for any external purpose, nor are they
any faster or simpler than the left-to-right or right-to-left traversals.
(In fact, some rough testing suggests they are slower :-(.) Moreover,
these mechanisms are impossible to test in any arm's-length fashion; doing
so requires knowledge of the red-black tree's internal implementation.
Hence, let's just jettison them.
The previous coding of get_qual_for_list() was careful to copy everything
it was using from the input data structure. The new version missed
making a copy of pass-by-ref datum values that it's inserting into Consts.
This is not optional, however, as revealed by buildfarm failures on
machines running -DRELCACHE_FORCE_RELEASE: we're copying from a relcache
entry that could go away before the required lifespan of our output
expression. I'm pretty sure -DCLOBBER_CACHE_ALWAYS machines won't like
this either, but none of them have reported in yet.
Tom Lane [Fri, 8 Sep 2017 23:04:32 +0000 (19:04 -0400)]
Fix uninitialized-variable bug.
map_partition_varattnos() failed to set its found_whole_row output
parameter if the given expression list was NIL. This seems to be
a pre-existing bug that chanced to be exposed by commit 6f6b99d13.
It might be unreachable in v10, but I have little faith in that
proposition, so back-patch.
Tom Lane [Fri, 8 Sep 2017 21:37:43 +0000 (17:37 -0400)]
Fix more portability issues in new pgbench TAP tests.
Not completely sure, but I think bowerbird is spitting up on attempting
to include ">" in a temporary file name. (Why in the world are we
writing this stuff into files at all? A hash would be a better answer.)
Robert Haas [Fri, 8 Sep 2017 21:28:04 +0000 (17:28 -0400)]
Allow a partitioned table to have a default partition.
Any tuples that don't route to any other partition will route to the
default partition.
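A hedged sketch of the new syntax (names invented):
CREATE TABLE orders (region text, amount numeric) PARTITION BY LIST (region);
CREATE TABLE orders_emea  PARTITION OF orders FOR VALUES IN ('EMEA');
CREATE TABLE orders_other PARTITION OF orders DEFAULT;
INSERT INTO orders VALUES ('APAC', 10);   -- routes to the default partition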
Jeevan Ladhe, Beena Emerson, Ashutosh Bapat, Rahila Syed, and Robert
Haas, with review and testing at various stages by (at least) Rushabh
Lathia, Keith Fiske, Amit Langote, Amul Sul, Rajkumar Raghuwanshi, Sven
Kunze, Kyotaro Horiguchi, Thom Brown, Rafia Sabih, and Dilip Kumar.
Tom Lane [Fri, 8 Sep 2017 15:28:02 +0000 (11:28 -0400)]
Fix assorted portability issues in new pgbench TAP tests.
* Our own version of getopt_long doesn't support abbreviation of
long options.
* It doesn't do automatic rearrangement of non-option arguments to the end,
either.
* Test was way too optimistic about the platform independence of
NaN and Infinity outputs. I rather imagine we might have to lose
those tests altogether, but for the moment just allow case variation
and fully spelled out Infinity.
Robert Haas [Fri, 8 Sep 2017 01:07:47 +0000 (21:07 -0400)]
Refactor get_partition_for_tuple a bit.
Pending patches for both default partitioning and hash partitioning
find the current coding pattern to be inconvenient. Change it so that
we switch on the partitioning method first and then do whatever is
needed.
Amul Sul, reviewed by Jeevan Ladhe, with a few adjustments by me.
Tom Lane [Thu, 7 Sep 2017 23:41:51 +0000 (19:41 -0400)]
Improve performance of get_actual_variable_range with recently-dead tuples.
In commit fccebe421, we hacked get_actual_variable_range() to scan the
index with SnapshotDirty, so that if there are many uncommitted tuples
at the end of the index range, it wouldn't laboriously scan through all
of them looking for a live value to return. However, that didn't fix it
for the case of many recently-dead tuples at the end of the index;
SnapshotDirty recognizes those as committed dead and so we're back to
the same problem.
To improve the situation, invent a "SnapshotNonVacuumable" snapshot type
and use that instead. The reason this helps is that, if the snapshot
rejects a given index entry, we know that the indexscan will mark that
index entry as killed. This means the next get_actual_variable_range()
scan will proceed past that entry without visiting the heap, making the
scan a lot faster. We may end up accepting a recently-dead tuple as
being the estimated extremal value, but that doesn't seem much worse than
the compromise we made before to accept not-yet-committed extremal values.
The cost of the scan is still proportional to the number of dead index
entries at the end of the range, so in the interval after a mass delete
but before VACUUM's cleaned up the mess, it's still possible for
get_actual_variable_range() to take a noticeable amount of time, if you've
got enough such dead entries. But the constant factor is much much better
than before, since all we need to do with each index entry is test its
"killed" bit.
We chose to back-patch commit fccebe421 at the time, but I'm hesitant to
do so here, because this form of the problem seems to affect many fewer
people. Also, even when it happens, it's less bad than the case fixed
by commit fccebe421 because we don't get the contention effects from
expensive TransactionIdIsInProgress tests.
Tom Lane [Thu, 7 Sep 2017 18:04:41 +0000 (14:04 -0400)]
Improve documentation about behavior of multi-statement Query messages.
We've long done our best to sweep this topic under the rug, but in view
of recent work it seems like it's time to explain it more precisely.
Here's an initial cut at doing that.
Reduce excessive dereferencing of function pointers
It is equivalent in ANSI C to write (*funcptr) () and funcptr(). These
two styles have been applied inconsistently. After discussion, we'll
use the more verbose style for plain function pointer variables, to make
it clear that it's a variable, and the shorter style when the function
pointer is in a struct (s.func() or s->func()), because then it's clear
that it's not a plain function name, and otherwise the excessive
punctuation makes some of those invocations hard to read.
Robert Haas [Thu, 7 Sep 2017 14:55:45 +0000 (10:55 -0400)]
Even if some partitions are foreign, allow tuple routing.
This doesn't allow routing tuples to the foreign partitions themselves,
but it permits tuples to be routed to regular partitions despite the
presence of foreign partitions in the same inheritance hierarchy.
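A hedged sketch, assuming a postgres_fdw foreign server named "remote"
already exists (all other names invented):
CREATE TABLE measurements (logdate date, val int) PARTITION BY RANGE (logdate);
CREATE TABLE measurements_2017 PARTITION OF measurements
    FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
CREATE FOREIGN TABLE measurements_2016 PARTITION OF measurements
    FOR VALUES FROM ('2016-01-01') TO ('2017-01-01') SERVER remote;
INSERT INTO measurements VALUES ('2017-06-01', 1);  -- routes to the local partition
-- A 2016 row inserted through the parent would still fail, since routing to
-- the foreign partition itself remains unsupported.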
Etsuro Fujita, reviewed by Amit Langote and by me.
Tom Lane [Thu, 7 Sep 2017 13:49:55 +0000 (09:49 -0400)]
Fix handling of savepoint commands within multi-statement Query strings.
Issuing a savepoint-related command in a Query message that contains
multiple SQL statements led to a FATAL exit with a complaint about
"unexpected state STARTED". This is a shortcoming of commit 4f896dac1,
which attempted to prevent such misbehaviors in multi-statement strings;
its quick hack of marking the individual statements as "not top-level"
does the wrong thing in this case, and isn't a very accurate description
of the situation anyway.
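For example (a hedged illustration), a string like the following, sent as a
single simple-protocol Query message (e.g. with "psql -c"), hit that FATAL
exit; with this patch it draws an ordinary error instead:
SELECT 1; SAVEPOINT s; SELECT 2;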
To fix, let's introduce into xact.c an explicit model of what happens for
multi-statement Query strings. This is an "implicit transaction block
in progress" state, which for many purposes works like the normal
TBLOCK_INPROGRESS state --- in particular, IsTransactionBlock returns true,
causing the desired result that PreventTransactionChain will throw error.
But in case of error abort it works like TBLOCK_STARTED, allowing the
transaction to be cancelled without need for an explicit ROLLBACK command.
Commit 4f896dac1 is reverted in toto, so that we go back to treating the
individual statements as "top level". We could have left it as-is, but
this allows sharpening the error message for PreventTransactionChain
calls inside functions.
Except for getting a normal error instead of a FATAL exit for savepoint
commands, this patch should result in no user-visible behavioral change
(other than that one error message rewording). There are some things
we might want to do in the line of changing the appearance or wording of
error and warning messages around this behavior, which would be much
simpler to do now that it's an explicitly modeled state. But I haven't
done them here.
Although this fixes a long-standing bug, no backpatch. The consequences
of the bug don't seem severe enough to justify the risk that this commit
itself creates some new issue.
Patch by me, but it owes something to previous investigation by
Takayuki Tsunakawa, who also reported the bug in the first place.
Also thanks to Michael Paquier for reviewing.
Tom Lane [Thu, 7 Sep 2017 12:50:01 +0000 (08:50 -0400)]
Further marginal hacking on generic atomic ops.
In the generic atomic ops that rely on a loop around a CAS primitive,
there's no need to force the initial read of the "old" value to be atomic.
In the typically-rare case that we get a torn value, that simply means
that the first CAS attempt will fail; but it will update "old" to the
atomically-read value, so the next attempt has a chance of succeeding.
It was already being done that way in pg_atomic_exchange_u64_impl(),
but let's duplicate the approach in the rest.
(Given the current coding of the pg_atomic_read functions, this change
is a no-op anyway on popular platforms; it only makes a difference where
pg_atomic_read_u64_impl() is implemented as a CAS.)
In passing, also remove unnecessary take-a-pointer-and-dereference-it
coding in the pg_atomic_read functions. That seems to have been based
on a misunderstanding of what the C standard requires. What actually
matters is that the pointer be declared as pointing to volatile, which
it is.
I don't believe this will change the assembly code at all on x86
platforms (even ignoring the likelihood that these implementations
get overridden by others); but it may help on less-mainstream CPUs.
Simon Riggs [Thu, 7 Sep 2017 11:56:34 +0000 (04:56 -0700)]
Exclude special values in recovery_target_time
recovery_target_time accepts timestamp input, though it
does not allow use of special values, e.g. "today".
Report a useful error message for these cases.
Reported-by: Piotr Stefaniak
Author: Simon Riggs
Discussion: https://postgr.es/m/CANP8+jJdKA+BkkYLWz9zAm16Y0s2ExBv0WfpAwXdTpPfWnA9Bg@mail.gmail.com
Simon Riggs [Wed, 6 Sep 2017 20:46:01 +0000 (13:46 -0700)]
Allow SET STATISTICS on expression indexes
Index columns are referenced by ordinal number rather than name, e.g.
CREATE INDEX coord_idx ON measured (x, y, (z + t));
ALTER INDEX coord_idx ALTER COLUMN 3 SET STATISTICS 1000;
Incompatibility note for release notes:
\d+ for indexes now also displays Stats Target
Authors: Alexander Korotkov, with contribution by Adrien NAYRAT
Review: Adrien NAYRAT, Simon Riggs
Wordsmith: Simon Riggs
Tom Lane [Wed, 6 Sep 2017 18:21:39 +0000 (14:21 -0400)]
Use more of gcc's __sync_fetch_and_xxx builtin functions for atomic ops.
In addition to __sync_fetch_and_add, gcc offers __sync_fetch_and_sub,
__sync_fetch_and_and, and __sync_fetch_and_or, which correspond directly
to primitive atomic ops that we want. Testing shows that in some cases
they generate better code than our generic implementations, so use them.
We've assumed that configure's test for __sync_val_compare_and_swap is
sufficient to allow assuming that __sync_fetch_and_add is available, so
make the same assumption for these functions. Should that prove to be
wrong, we can add more configure tests.
Yura Sokolov, reviewed by Jesper Pedersen and myself
Tom Lane [Wed, 6 Sep 2017 18:06:09 +0000 (14:06 -0400)]
Remove duplicate reads from the inner loops in generic atomic ops.
The pg_atomic_compare_exchange_xxx functions are defined to update
*expected to whatever they read from the target variable. Therefore,
there's no need to do additional explicit reads after we've initialized
the "old" variable. The actual benefit of this is somewhat debatable,
but it seems fairly unlikely to hurt anything, especially since we
will override the generic implementations in most performance-sensitive
cases.
Yura Sokolov, reviewed by Jesper Pedersen and myself
This is not required in SGML, but will be in XML, so this is a step to
prepare for the conversion to XML. (It is still not required to escape
>, but we did it here in some cases for symmetry.)
Add a command-line option to osx/onsgmls calls to warn about unescaped
occurrences in the future.
Author: Alexander Law <exclusion@gmail.com>
Author: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Tom Lane [Wed, 6 Sep 2017 14:41:05 +0000 (10:41 -0400)]
Clean up handling of dropped columns in NAMEDTUPLESTORE RTEs.
The NAMEDTUPLESTORE patch piggybacked on the infrastructure for
TABLEFUNC/VALUES/CTE RTEs, none of which can ever have dropped columns,
so the possibility was ignored most places. Fix that, including adding a
specification to parsenodes.h about what it's supposed to look like.
In passing, clean up assorted comments that hadn't been maintained
properly by said patch.
Per bug #14799 from Philippe Beaudoin. Back-patch to v10.
Tom Lane [Tue, 5 Sep 2017 22:17:47 +0000 (18:17 -0400)]
Add \gdesc psql command.
This command acts somewhat like \g, but instead of executing the query
buffer, it merely prints a description of the columns that the query
result would have. (Of course, this still requires parsing the query;
if parse analysis fails, you get an error anyway.) We accomplish this
using an unnamed prepared statement, which should be invisible to psql
users.
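A hedged usage sketch (the output layout is approximate):
SELECT relname, relkind FROM pg_class \gdesc
  Column  | Type
 ---------+--------
  relname | name
  relkind | "char"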
Support retaining data dirs on successful TAP tests
This moves the data directories from temporary directories with
randomized names to a static name, to make debugging easier.
The data directory will be retained if tests fail or the test
code dies/exits with failure, and is automatically removed on the next
make check.
If the environment variable PG_TEST_NOCLEAN is defined, the data
directories will be retained regardless of test or exit status.
Tom Lane [Tue, 5 Sep 2017 16:02:06 +0000 (12:02 -0400)]
In psql, use PSQL_PAGER in preference to PAGER, if it's set.
This allows the user's environment to set up a psql-specific choice
of pager, in much the same way that we provide PSQL_EDITOR to allow
a psql-specific override of the more widely known EDITOR variable.
Throttling for sending a base backup in walsender is broken for the case
where there is a lot of WAL traffic, because the latch used to put the
walsender to sleep is also signalled by regular WAL traffic (and each
signal causes an additional batch of data to be sent); the net effect is
that there is little or no actual throttling. This is undesirable, so
rewrite the sleep into a loop to achieve the desired effect.
Author: Jeff Janes, small tweaks by me
Reviewed-by: Antonin Houska
Discussion: https://postgr.es/m/CAMkU=1xH6mde-yL-Eo1TKBGNd0PB1-TMxvrNvqcAkN-qr2E9mw@mail.gmail.com
Tom Lane [Tue, 5 Sep 2017 14:51:36 +0000 (10:51 -0400)]
Add psql variables showing server version and psql version.
We already had a psql variable VERSION that shows the verbose form of
psql's own version. Add VERSION_NAME to show the short form (e.g.,
"11devel") and VERSION_NUM to show the numeric form (e.g., 110000).
Also add SERVER_VERSION_NAME and SERVER_VERSION_NUM to show the short and
numeric forms of the server's version. (We'd probably add SERVER_VERSION
with the verbose string if it were readily available; but adding another
network round trip to get it seems too expensive.)
The numeric forms, in particular, are expected to be useful for scripting
purposes, now that psql can do conditional tests.
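A hedged scripting sketch combining the numeric form with psql's conditional
support:
SELECT :SERVER_VERSION_NUM >= 100000 AS is_v10_or_newer \gset
\if :is_v10_or_newer
    \echo connected to server version :SERVER_VERSION_NAME
\else
    \echo server predates version 10
\endif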
Tom Lane [Tue, 5 Sep 2017 14:17:10 +0000 (10:17 -0400)]
Reformat psql's --help=variables output.
The previous format with variable names and descriptions in separate
columns was extremely constraining about the length of the descriptions.
We'd dealt with that in several inconsistent ways over the years,
including letting the lines run over 80 characters, breaking descriptions
into multiple lines, or shoving the description onto a separate line.
But it's been a long time since the output could realistically fit onto
a single screen vertically, so let's just rely even more heavily on the
pager to deal with the vertical distance, and split each entry into two
(or more) lines, in the format
variable-name
variable description goes here
Each variable name + description remains a single translatable string,
in hopes of reducing translator confusion; we're just changing the
embedded whitespace.
I failed to resist the temptation to copy-edit one or two of the
descriptions while at it.
Tom Lane [Mon, 4 Sep 2017 21:25:31 +0000 (17:25 -0400)]
Be more careful about newline-chomping in pgbench.
process_backslash_command would drop the last character of the input
command on the assumption that it was a newline. Given an input file not
terminated by a newline, this could result in dropping the last character
of the command. Fix that by doing an actual test that we're removing
a newline.
While at it, allow for Windows newlines (\r\n), and suppress multiple
newlines if any. I do not think either of those cases really occur,
since (a) we read script files in text mode and (b) the lexer stops
when it hits a newline. But it's cheap enough and it provides a
stronger guarantee about what the result string looks like.
This is just cosmetic, I think, since the possibly-overly-chomped
line was only used for display not for further processing. So
it doesn't seem necessary to back-patch.
Fabien Coelho, reviewed by Nikolay Shaplov, whacked around a bit by me
Tom Lane [Mon, 4 Sep 2017 20:24:08 +0000 (16:24 -0400)]
Fix some subtle problems in pgbench transaction stats counting.
With --latency-limit, transactions might get skipped even beyond the
transaction count limit specified by -t, throwing off the expected
number of transactions and thus the denominator for later stats.
Be sure to stop skipping transactions once -t is reached.
Also, include skipped transactions in the "cnt" fields; this requires
discounting them again in a couple of places, but most places are
better off with this definition. In particular this is needed to
get correct overall stats for the combination of -R/-L/-t options.
Merge some more processing into processXactStats() to simplify this.
In passing, add a check that --progress-timestamp is specified only
when --progress is.
We might consider back-patching this, but given that it only matters
for a combination of options, and given the lack of field complaints,
consensus seems to be not to bother.
Tom Lane [Mon, 4 Sep 2017 17:45:20 +0000 (13:45 -0400)]
Adjust pgbench to allow non-ASCII characters in variable names.
This puts it in sync with psql's notion of what is a valid variable name.
Like psql, we document that "non-Latin letters" are allowed, but actually
any non-ASCII character is accepted.
Tom Lane [Sun, 3 Sep 2017 15:12:29 +0000 (11:12 -0400)]
Suppress compiler warnings in dshash.c.
Some compilers complain, not unreasonably, about left-shifting an
int32 "1" and then assigning the result to an int64. In practice
I sure hope that this data structure never gets large enough that
an overflow would actually occur; but let's cast the constant to
the right type to avoid the hazard.
In passing, fix a typo in dshash.h.
Amit Kapila, adjusted as per comment from Thomas Munro.
Tom Lane [Sun, 3 Sep 2017 15:01:08 +0000 (11:01 -0400)]
Fix macro-redefinition warning on MSVC.
In commit 9d6b160d7, I tweaked pg_config.h.win32 to use
"#define HAVE_LONG_LONG_INT_64 1" rather than defining it as empty,
for consistency with what happens in an autoconf'd build.
But Solution.pm injects another definition of that macro into
ecpg_config.h, leading to justifiable (though harmless) compiler whining.
Make that one consistent too. Back-patch, like the previous patch.
Tom Lane [Fri, 1 Sep 2017 21:38:54 +0000 (17:38 -0400)]
Improve division of labor between execParallel.c and nodeGather[Merge].c.
Move the responsibility for creating/destroying TupleQueueReaders into
execParallel.c, to avoid duplicative coding in nodeGather.c and
nodeGatherMerge.c. Also, instead of having DestroyTupleQueueReader do
shm_mq_detach, do it in the caller (which is now only ExecParallelFinish).
This means execParallel.c does both the attaching and detaching of the
tuple-queue-reader shm_mqs, which seems less weird than the previous
arrangement.
These changes also eliminate a vestigial memory leak (of the pei->tqueue
array). It's now demonstrable that rescans of Gather or GatherMerge don't
leak memory.
Add the maxrss field to the getrusage output (log_*_stats). This was
previously omitted because of portability concerns, but we feel this
might not be a concern anymore.
based on patch by Justin Pryzby <pryzby@telsasoft.com>
Tom Lane [Fri, 1 Sep 2017 19:14:18 +0000 (15:14 -0400)]
Make [U]INT64CONST safe for use in #if conditions.
Instead of using a cast to force the constant to be the right width,
assume we can plaster on an L, UL, LL, or ULL suffix as appropriate.
The old approach to this is very hoary, dating from before we were
willing to require compilers to have working int64 types.
This fix makes the PG_INT64_MIN, PG_INT64_MAX, and PG_UINT64_MAX
constants safe to use in preprocessor conditions, where a cast
doesn't work. Other symbolic constants that might be defined using
[U]INT64CONST are likewise safer than before.
Also fix the SIZE_MAX macro to be similarly safe, if we are forced
to provide a definition for that. The test added in commit 2e70d6b5e
happens to do what we want even with the hack "(size_t) -1" definition,
but we could easily get burnt on other tests in future.
Back-patch to all supported branches, like the previous commits.
doc: Remove mentions of server-side CRL and CA file names
Commit a445cb92ef5b3a31313ebce30e18cc1d6e0bdecb removed the default file
names for server-side CRL and CA files, but left them in the docs with a
small note. This removes the note and the previous default names to
clarify, as well as changes mentions of the file names to make it
clearer that they are configurable.
Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>