Robert Haas [Tue, 11 Apr 2017 16:31:00 +0000 (12:31 -0400)]
Fix confusion of max_parallel_workers mechanism following crash.
Commit b460f5d6693103076dc554aa7cbb96e1e53074f9 failed to contemplate
the possibility that a parallel worker registered before a crash would
be unregistered only after the crash; if that happened, we'd end up
with parallel_terminate_count > parallel_register_count and the
system would refuse to launch any more parallel workers.
The easiest way to fix that seems to be to forget BGW_NEVER_RESTART
workers in ResetBackgroundWorkerCrashTimes() rather than leaving them
around to be cleaned up after the conclusion of the restart, so that
they go away before rather than after shared memory is reset.
To make sure that this fix is water-tight, don't allow parallel
workers to be anything other than BGW_NEVER_RESTART, so that after
recovering from a crash, 0 is guaranteed to be the correct starting
value for parallel_register_count. The core code wouldn't do this
anyway, but somebody might try to do it in extension code.
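As a sketch only (the library and function names are hypothetical, and this
is not code from the commit), extension code registering a parallel-class
worker must now look like:

    BackgroundWorker worker;
    BackgroundWorkerHandle *handle;

    memset(&worker, 0, sizeof(worker));
    worker.bgw_flags = BGWORKER_SHMEM_ACCESS | BGWORKER_CLASS_PARALLEL;
    worker.bgw_start_time = BgWorkerStart_ConsistentState;
    worker.bgw_restart_time = BGW_NEVER_RESTART;    /* anything else is now rejected */
    snprintf(worker.bgw_library_name, BGW_MAXLEN, "my_extension");
    snprintf(worker.bgw_function_name, BGW_MAXLEN, "my_worker_main");
    if (!RegisterDynamicBackgroundWorker(&worker, &handle))
        ereport(ERROR,
                (errmsg("could not register background worker")));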
Report by Thomas Vondra. Patch by me, reviewed by Kuntal Ghosh.
Robert Haas [Tue, 11 Apr 2017 16:03:12 +0000 (12:03 -0400)]
Fix failure when a shared tidbitmap has only one page.
Commit 98e6e89040a0534ca26914c66cae9dd49ef62ad9 made inadequate
provision for the case of a single-page shared tidbitmap. It
allocated space for a shared PagetableEntry, but failed to
initialize it.
Report by Thomas Munro. Patch by Dilip Kumar, with some comment
changes by me.
Tom Lane [Tue, 11 Apr 2017 15:58:59 +0000 (11:58 -0400)]
Handle restriction clause lists more uniformly in postgres_fdw.
Clauses in the lists retained by postgres_fdw during planning were
sometimes bare boolean clauses, sometimes RestrictInfos, and sometimes
a mixture of the two in the same list. The comment about that situation
didn't come close to telling the full truth, either. Aside from being
confusing, this had a couple of bad practical consequences:
* waste of planning cycles due to inability to cache per-clause selectivity
and cost estimates;
* sometimes, RestrictInfos would sneak into the fdw_private list of a
finished Plan node, causing failures if, for example, we tried to ship
the Plan tree to a parallel worker.
(It may well be that it's a bug in the parallel-query logic that we
would ever try to ship such a plan to a parallel worker, but in any
case this deserves to be cleaned up.)
To fix, rearrange so that clause lists in PgFdwRelationInfo are always
lists of RestrictInfos, and then strip the RestrictInfos at the last
minute when making a Plan node. In passing do a bit of refactoring and
comment cleanup in postgresGetForeignPlan and foreign_join_ok.
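A sketch of the resulting pattern (using the stock planner helper; the
variable names are illustrative):

    /* fpinfo->remote_conds now always holds RestrictInfos during planning;
     * strip them only while building the Plan, so that no RestrictInfo can
     * leak into fdw_private. */
    List       *remote_exprs;

    remote_exprs = extract_actual_clauses(fpinfo->remote_conds, false);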
Although the messiness here dates back at least to 9.6, there's no evidence
that it causes anything worse than wasted planning cycles in 9.6, so no
back-patch for now.
Add max_sync_workers_per_subscription to postgresql.conf.sample.
This commit also does
- add REPLICATION_SUBSCRIBERS into config_group
- mark max_logical_replication_workers and max_sync_workers_per_subscription
as REPLICATION_SUBSCRIBERS parameters
- move those parameters into "Subscribers" section in postgresql.conf.sample
Author: Masahiko Sawada, Petr Jelinek and me
Reported-by: Masahiko Sawada
Discussion: http://postgr.es/m/CAD21AoAonSCoa=v=87ZO3vhfUZA1k_E2XRNHTt=xioWGUa+0ug@mail.gmail.com
Bruce Momjian [Tue, 11 Apr 2017 14:47:40 +0000 (10:47 -0400)]
docs: Improve window function docs
Specifically, the behavior of general-purpose and statistical aggregates
as window functions was not clearly documented, and terms were
inconsistently used. Also add docs about the difference between
cume_dist and percent_rank, rather than just the formulas.
Magnus Hagander [Tue, 11 Apr 2017 13:21:25 +0000 (15:21 +0200)]
Remove symbol WIN32_ONLY_COMPILER
This used to mean "Visual C++, except in the parts where Borland C++
was supported, where it meant either of those". Now that we don't support
Borland C++ anymore, simplify by using _MSC_VER, which is the normal way
to detect Visual C++.
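In code, the check now reduces to the standard predefined macro:

    #ifdef _MSC_VER
    /* Visual C++ specific code */
    #endif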
Magnus Hagander [Tue, 11 Apr 2017 13:14:26 +0000 (15:14 +0200)]
Remove support for bcc and msvc standalone libpq builds
This removes the support for building just libpq using Borland C++ or
Visual C++. This has not worked properly for years, and given the number
of complaints it's clearly not worth the maintenance burden.
Building libpq using the standard MSVC build system is of course still
supported, along with mingw.
Tom Lane [Tue, 11 Apr 2017 12:59:40 +0000 (08:59 -0400)]
Fix pgbench's --progress-timestamp option to print Unix-epoch timestamps.
As a consequence of commit 1d63f7d2d, on platforms with CLOCK_MONOTONIC,
you got some random timescale or other instead of standard Unix timestamps
as expected. I'd attempted to fix pgbench for that change in commits
74baa1e3b and 67a875355, but missed this place. Fix in the same way as
those previous commits, ie, just eat the cost of an extra gettimeofday();
one extra syscall per progress report isn't worth sweating over. Per
report from Jeff Janes.
In passing, use snprintf not sprintf for this purpose. I don't think
there's any chance of actual buffer overrun, but it just looks safer.
Andrew Dunstan [Mon, 10 Apr 2017 23:53:47 +0000 (19:53 -0400)]
Run most pg_dump and pg_dumpall tests with --no-sync
Commit 96a7128b made pg_dump and pg_dumpall sync their output by
default. However, there's no great need for that in testing, and it
could impose a performance penalty, so we add the --no-sync flag to most
of the test cases.
Peter Eisentraut [Mon, 10 Apr 2017 19:08:14 +0000 (15:08 -0400)]
Use weaker locks when updating pg_subscription_rel
The previously used ShareRowExclusiveLock, while technically probably
more correct, led to deadlocks during seemingly unrelated operations and
thus a poor experience. Use RowExclusiveLock, like for most similar
catalog operations. In some rare cases, the user might see an error
from DDL commands.
Andres Freund [Mon, 10 Apr 2017 18:56:46 +0000 (11:56 -0700)]
Fix initialization of dsa.c free area counter.
The backend local copy of dsa_area_control->freed_segment_counter was
not properly initialized / maintained. This could, if unlucky, lead
to staying attached to a segment for too long.
Found via valgrind bleat on buildfarm animal skink.
Author: Thomas Munro
Discussion: https://postgr.es/m/20170407164935.obsf2jipjfos5zei@alap3.anarazel.de
Tom Lane [Mon, 10 Apr 2017 17:51:29 +0000 (13:51 -0400)]
Improve castNode notation by introducing list-extraction-specific variants.
This extends the castNode() notation introduced by commit 5bcab1114 to
provide, in one step, extraction of a list cell's pointer and coercion to
a concrete node type. For example, "lfirst_node(Foo, lc)" is the same
as "castNode(Foo, lfirst(lc))". Almost half of the uses of castNode
that have appeared so far include a list extraction call, so this is
pretty widely useful, and it saves a few more keystrokes compared to the
old way.
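For example, a loop over a list of RestrictInfos can now be written as:

    ListCell   *lc;

    foreach(lc, restrictlist)
    {
        RestrictInfo *rinfo = lfirst_node(RestrictInfo, lc);

        /* formerly: RestrictInfo *rinfo = castNode(RestrictInfo, lfirst(lc)); */
    }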
As with the previous patch, back-patch the addition of these macros to
pg_list.h, so that the notation will be available when back-patching.
Robert Haas [Mon, 10 Apr 2017 16:20:08 +0000 (12:20 -0400)]
Fix reporting of violations in ExecConstraints, again.
We decided in f1b4c771ea74f42447dccaed42ffcdcccf3aa694 to pass the
original slot to ExecConstraints(), but that breaks when there are
BEFORE ROW triggers involved. So we need to reverse-map the tuples
back to the original descriptor instead, as Amit originally proposed.
Amit Langote, reviewed by Ashutosh Bapat. One overlooked comment
fixed by me.
Tom Lane [Mon, 10 Apr 2017 14:26:54 +0000 (10:26 -0400)]
Move isolationtester's is-blocked query into C code for speed.
Commit 4deb41381 modified isolationtester's query to see whether a
session is blocked to also check for waits occurring in GetSafeSnapshot.
However, it did that in a way that enormously increased the query's
runtime under CLOBBER_CACHE_ALWAYS, causing the buildfarm members
that use that to run about four times slower than before, and in some
cases fail entirely. To fix, push the entire logic into a dedicated
backend function. This should actually reduce the CLOBBER_CACHE_ALWAYS
runtime from what it was previously, though I've not checked that.
In passing, expose a SQL function to check for safe-snapshot blockage,
comparable to pg_blocking_pids. This is more or less free given the
infrastructure built to solve the other problem, so we might as well.
Joe Conway [Sun, 9 Apr 2017 22:59:02 +0000 (15:59 -0700)]
Make sepgsql regression tests robust vs. collation differences
In commit 25542d77, regression test coverage was added to sepgsql
for partitioned tables. Unfortunately it was not robust in the face
of collation differences, per the buildfarm. Force "C" collation
in order to fix that.
Joe Conway [Sun, 9 Apr 2017 21:01:58 +0000 (14:01 -0700)]
Add partitioned table support to sepgsql
The new partitioned table capability added a new relkind, namely
RELKIND_PARTITIONED_TABLE. Update sepgsql to treat this new relkind
exactly the same way it does RELKIND_RELATION.
In addition, add regression test coverage for partitioned tables.
Issue raised by Stephen Frost and initial patch by Mike Palmiotto.
Review by Tom Lane and Robert Haas, and editorializing by me.
Tom Lane [Sat, 8 Apr 2017 20:38:03 +0000 (16:38 -0400)]
Clean up bugs in clause_selectivity() cleanup.
Commit ac2b09508 was not terribly carefully reviewed. Band-aid it to
not fail on non-RestrictInfo input, per report from Andreas Seltenreich.
Also make it do something more reasonable with variable-free clauses,
and improve nearby comments.
Tom Lane [Sat, 8 Apr 2017 02:20:03 +0000 (22:20 -0400)]
Optimize joins when the inner relation can be proven unique.
If there can certainly be no more than one matching inner row for a given
outer row, then the executor can move on to the next outer row as soon as
it's found one match; there's no need to continue scanning the inner
relation for this outer row. This saves useless scanning in nestloop
and hash joins. In merge joins, it offers the opportunity to skip
mark/restore processing, because we know we have not advanced past the
first possible match for the next outer row.
Of course, the devil is in the details: the proof of uniqueness must
depend only on joinquals (not otherquals), and if we want to skip
mergejoin mark/restore then it must depend only on merge clauses.
To avoid adding more planning overhead than absolutely necessary,
the present patch errs in the conservative direction: there are cases
where inner_unique or skip_mark_restore processing could be used, but
it will not do so because it's not sure that the uniqueness proof
depended only on "safe" clauses. This could be improved later.
David Rowley, reviewed and rather heavily editorialized on by me
When the 64bit atomics simulation is in use, we can't necessarily
guarantee the correct alignment of the atomics due to lack of compiler
support for doing so; that's fine from a safety perspective, because
everything is protected by a lock, but we asserted the alignment in
all cases. Weaken those assertions. Per complaint from Alvaro Herrera.
My #ifdefery for PG_HAVE_8BYTE_SINGLE_COPY_ATOMICITY wasn't
sufficient. Fix that. Per complaint from Alexander Korotkov.
This enables the same OpenSP warnings on osx calls that we get from
onsgmls (make check) and formerly from openjade.
Older tool chains apparently have some of these warnings on by
default (see comment at SPFLAGS assignment). So users of such tool
chains would complain about warnings or errors that users of newer tool
chains would not see, unless they used "make check".
Instead of allocating memory in brin_deform_tuple and brin_copy_tuple
over and over during a scan, allow reuse of previously allocated memory.
This is said to make for a measurable performance improvement.
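A hedged sketch of the resulting calling convention (the extra argument
and its position are assumptions based on this description):

    /* Passing back a previously returned BrinMemTuple lets the function
     * reuse that memory instead of palloc'ing a fresh copy each time;
     * NULL on the first call requests a new allocation. */
    dtup = brin_deform_tuple(bdesc, btup, dtup);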
Andres Freund [Fri, 7 Apr 2017 21:44:47 +0000 (14:44 -0700)]
Improve 64bit atomics support.
When adding atomics back in b64d92f1a, I added 64bit support as
optional; there wasn't yet a direct user in sight. That turned out to
be a bit short-sighted; it'd already have been useful a number of times.
Add a fallback implementation of 64bit atomics, just like the one we
have for 32bit atomics.
Additionally optimize reads/writes to 64bit on a number of platforms
where aligned writes of that size are atomic. This can now be tested
with PG_HAVE_8BYTE_SINGLE_COPY_ATOMICITY.
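A brief usage sketch of the 64-bit API that is now available on every
platform (the counter is purely illustrative):

    static pg_atomic_uint64 counter;

    pg_atomic_init_u64(&counter, 0);
    pg_atomic_fetch_add_u64(&counter, 1);      /* falls back to emulation if needed */
    uint64  current = pg_atomic_read_u64(&counter);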
Author: Andres Freund
Reviewed-By: Amit Kapila
Discussion: https://postgr.es/m/20160330230914.GH13305@awork2.anarazel.de
The WAL-writing piece was forgetting to set the pages-per-range value.
Also, fix the declared type of struct member heapBlk, which I mistakenly
set as OffsetNumber rather than BlockNumber.
Problem was introduced by commit c655899ba9ae (April 1st). Any system
that tries to replay the new WAL record written before this fix is
likely to die on replay and require pg_resetwal.
Reported by Tom Lane.
Discussion: https://postgr.es/m/20191.1491524824@sss.pgh.pa.us
Tom Lane [Fri, 7 Apr 2017 16:18:38 +0000 (12:18 -0400)]
Fix planner error (or assert trap) with nested set operations.
As reported by Sean Johnston in bug #14614, since 9.6 the planner can fail
due to trying to look up the referent of a Var with varno 0. This happens
because we generate such Vars in generate_append_tlist, for lack of any
better way to describe the output of a SetOp node. In typical situations
nothing really cares about that, but given nested set-operation queries
we will call estimate_num_groups on the output of the subquery, and that
wants to know what a Var actually refers to. That logic used to look at
subquery->targetList, but in commit 3fc6e2d7f I'd switched it to look at
subroot->processed_tlist, ie the actual output of the subquery plan not the
parser's idea of the result. It seemed like a good idea at the time :-(.
As a band-aid fix, change it back.
Really we ought to have an honest way of naming the outputs of SetOp steps,
which suggests that it'd be a good idea for the parser to emit an RTE
corresponding to each one. But that's a task for another day, and it
certainly wouldn't yield a back-patchable fix.
Use SASLprep to normalize passwords for SCRAM authentication.
An important step of SASLprep normalization is to convert the string to
Unicode normalization form NFKC. Unicode normalization requires a fairly
large table of character decompositions, which is generated from data
published by the Unicode consortium. The script to generate the table is
put in src/common/unicode, along with test code for the normalization.
A pre-generated version of the tables is included in src/include/common,
so you don't need the code in src/common/unicode to build PostgreSQL, only
if you wish to modify the normalization tables.
The SASLprep implementation depends on the UTF-8 functions from
src/backend/utils/mb/wchar.c. So to use it, you must also compile and link
that. That doesn't change anything for the current users of these
functions, the backend and libpq, as they both already link with wchar.o.
It would be good to move those functions into a separate file in
src/common, but I'll leave that for another day.
No documentation changes included, because there are no details on the
SCRAM mechanism in the docs anyway. An overview on that in the protocol
specification would probably be good, even though SCRAM is documented in
detail in RFC5802. I'll write that as a separate patch. An important thing
to mention there is that we apply SASLprep even on invalid UTF-8 strings,
to support other encodings.
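A sketch of how a caller might invoke the new normalization entry point;
the function and result-code names here are assumptions about this
patch's API:

    char       *prepared = NULL;

    switch (pg_saslprep(password, &prepared))
    {
        case SASLPREP_SUCCESS:
            password = prepared;    /* use the NFKC-normalized form */
            break;
        case SASLPREP_INVALID_UTF8:
            /* not valid UTF-8: per the note above, keep the raw string */
            break;
        default:
            /* out of memory or prohibited character: report an error */
            break;
    }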
Andrew Dunstan [Fri, 7 Apr 2017 02:11:21 +0000 (22:11 -0400)]
Make json_populate_record and friends operate recursively
With this change array fields are populated from json(b) arrays, and
composite fields are populated from json(b) objects.
Along the way, some significant code refactoring is done to remove
redundancy in the way the populate_record[_set] and to_record[_set]
functions operate, and some significant efficiency gains are made by
caching tuple descriptors.
All documentation is now built using XSLT. Remove all references to
Jade, DSSSL, also JadeTex and some other outdated tooling.
For chunked HTML builds, this changes nothing, but removes the
transitional "oldhtml" target. The single-page HTML build is ported
over to XSLT. For PDF builds, this removes the JadeTex builds and moves
the FOP builds in their place.
Tom Lane [Fri, 7 Apr 2017 01:10:09 +0000 (21:10 -0400)]
Clean up after insufficiently-researched optimization of tuple conversions.
tupconvert.c's functions formerly considered that an explicit tuple
conversion was necessary if the input and output tupdescs contained
different type OIDs. The point of that was to make sure that a composite
datum resulting from the conversion would contain the destination rowtype
OID in its composite-datum header. However, commit 3838074f8 entirely
misunderstood what that check was for, thinking that it had something to do
with presence or absence of an OID column within the tuple. Removal of the
check broke the no-op conversion path in ExecEvalConvertRowtype, as
reported by Ashutosh Bapat.
It turns out that of the dozen or so call sites for tupconvert.c functions,
ExecEvalConvertRowtype is the only one that cares about the composite-datum
header fields in the output tuple. In all the rest, we'd much rather avoid
an unnecessary conversion whenever the tuples are physically compatible.
Moreover, the comments in tupconvert.c only promise physical compatibility
not a metadata match. So, let's accept the removal of the guarantee about
the output tuple's rowtype marking, recognizing that this is an API change
that could conceivably break third-party callers of tupconvert.c. (So,
let's remember to mention it in the v10 release notes.)
However, commit 3838074f8 did have a bit of a point here, in that two
tuples mustn't be considered physically compatible if one has HEAP_HASOID
set and the other doesn't. (Some of the callers of tupconvert.c might not
really care about that, but we can't assume it in general.) The previous
check accidentally covered that issue, because no RECORD types ever have
OIDs, while if two tupdescs have the same named composite type OID then,
a fortiori, they have the same tdhasoid setting. If we're removing the
type OID match check then we'd better include tdhasoid match as part of
the physical compatibility check.
Without that hack in tupconvert.c, we need ExecEvalConvertRowtype to take
responsibility for inserting the correct rowtype OID label whenever
tupconvert.c decides it need not do anything. This is easily done with
heap_copy_tuple_as_datum, which will be considerably faster than a tuple
disassembly and reassembly anyway; so from a performance standpoint this
change is a win all around compared to what happened in earlier branches.
It just means a couple more lines of code in ExecEvalConvertRowtype.
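Sketched with assumed variable names, the no-op path becomes roughly:

    if (map == NULL)
    {
        /* Tuples are physically compatible: skip the field-by-field
         * conversion, but stamp the destination rowtype OID by copying
         * the tuple as a composite datum. */
        result = heap_copy_tuple_as_datum(tuple, outdesc);
    }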
Andres Freund [Thu, 6 Apr 2017 21:48:59 +0000 (14:48 -0700)]
Allow avoiding tuple copy within tuplesort_gettupleslot().
Add a "copy" argument to make it optional to receive a copy of caller
tuple that is safe to use following a subsequent manipulating of
tuplesort's state. This is a performance optimization. Most existing
tuplesort_gettupleslot() callers are made to opt out of copying.
Existing callers that happen to rely on the validity of tuple memory
beyond subsequent manipulations of the tuplesort request their own
copy.
This brings tuplesort_gettupleslot() in line with
tuplestore_gettupleslot(). In the future, a "copy"
tuplesort_getdatum() argument may be added that similarly allows
callers to opt out of receiving their own copy of the tuple.
In passing, clarify assumptions that callers of other tuplesort fetch
routines may make about tuple memory validity, per gripe from Tom
Lane.
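A hedged sketch of a caller opting out of the copy (the exact parameter
list immediately after this commit is an assumption):

    /* copy = false: the returned slot contents are only valid until the
     * tuplesort's state is next manipulated, which suffices for a simple
     * once-through read. */
    while (tuplesort_gettupleslot(state, true /* forward */,
                                  false /* copy */, slot))
    {
        /* process the slot before touching the tuplesort again */
    }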
Author: Peter Geoghegan
Discussion: CAM3SWZQWZZ_N=DmmL7tKy_OUjGH_5mN=N=A6h7kHyyDvEhg2DA@mail.gmail.com
Joe Conway [Thu, 6 Apr 2017 21:28:19 +0000 (14:28 -0700)]
Silence uninitialized variable compiler warning in sepgsql
At -Og optimization gcc warns that variable tclass may be used
uninitialized when relkind == RELKIND_INDEX. Actually that can't
happen due to an early return, but quiet the compiler by initializing
tclass to 0.
In passing, use uint16_t consistently for the declaration of tclass.
Complaint and initial patch by Mike Palmiotto. Editorializing by me.
Probably not worth backpatching given that it is cosmetic, so apply
to development head only.
Joe Conway [Thu, 6 Apr 2017 21:21:25 +0000 (14:21 -0700)]
Silence compiler warning in sepgsql
<selinux/label.h> includes <stdbool.h>, which creates an incompatible
definition of bool. We don't care if <stdbool.h> redefines "true"/"false";
those are close enough.
Complaint and initial patch by Mike Palmiotto. Final approach per
Tom Lane's suggestion, as discussed on hackers. Backpatching to
all supported branches.
The original code was overly optimistic about the cost of scanning a
BRIN index, leading to BRIN indexes being selected when they'd be a
worse choice than some other index. This complete rewrite should be
more accurate.
Author: David Rowley, based on an earlier patch by Emre Hasegeli
Reviewed-by: Emre Hasegeli
Discussion: https://postgr.es/m/CAKJS1f9n-Wapop5Xz1dtGdpdqmzeGqQK4sV2MK-zZugfC14Xtw@mail.gmail.com
Andres Freund [Thu, 6 Apr 2017 20:44:48 +0000 (13:44 -0700)]
Add minimal test for EXPLAIN ANALYZE of parallel query.
This displays the number of workers launched, thus the test is
dependent on configuration to some degree. We'll see whether that
turns out to be a problem.
Fix logical replication between different encodings
When sending a tuple attribute, the previous coding erroneously sent the
length byte before encoding conversion, which would lead to protocol
failures on the receiving side if the length did not match the following
string.
To fix that, use pq_sendcountedtext() for sending tuple attributes,
which takes care of all of that internally. To match the API of
pq_sendcountedtext(), send even text values without a trailing zero byte
and have the receiving end put it in place instead. This matches how
the standard FE/BE protocol behaves.
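A simplified sketch of the fixed send path:

    /* pq_sendcountedtext() performs the encoding conversion itself and
     * computes the length word from the converted bytes, so the two can
     * no longer disagree; "false" means the length does not count itself. */
    pq_sendcountedtext(out, outputstr, strlen(outputstr), false);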
Mark immutable functions in information schema as parallel safe
Also add opr_sanity check that all preloaded immutable functions are
parallel safe. (Per discussion, this does not necessarily have to be
true for all possible such functions, but deviations would be unlikely
enough that maintaining such a test is reasonable.)
Reported-by: David Rowley <david.rowley@2ndquadrant.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Peter Eisentraut [Tue, 30 Aug 2016 16:00:00 +0000 (12:00 -0400)]
pg_dump: Rename some typedefs to avoid name conflicts
In struct _archiveHandle, some of the fields have the same name as a
typedef. This is kind of confusing, so rename the types so they have
names distinct from the struct fields. In C++, the previous coding
changes the meaning of the typedef in the scope of the struct, causing
warnings and possibly other problems.
It was not used for what the comment claimed, at all. It was actually used
as the 'base' argument to strtol() when reading the iteration count. We
don't need a constant for base-10, so remove it.
This is the SQL standard-conforming variant of PostgreSQL's serial
columns. It fixes a few usability issues that serial columns have:
- CREATE TABLE / LIKE copies default but refers to same sequence
- cannot add/drop serialness with ALTER TABLE
- dropping default does not drop sequence
- need to grant separate privileges to sequence
- other slight weirdnesses because serial is some kind of special macro
Simon Riggs [Thu, 6 Apr 2017 12:31:52 +0000 (08:31 -0400)]
Avoid SnapshotResetXmin() during AtEOXact_Snapshot()
For normal commits and aborts we already reset PgXact->xmin,
so we can simply avoid running SnapshotResetXmin() twice.
During performance tests by Alexander Korotkov, diagnosis
by Andres Freund showed the PgXact array as a bottleneck. After
manual analysis by me of the code paths that touch those
memory locations, I was able to identify extraneous code
in the main transaction commit path.
Avoiding touching highly contended shmem improves concurrent
performance slightly on all workloads, confirmed by tests
run by Ashutosh Sharma and Alexander Korotkov.
Remove dead code and fix comments in fast-path function handling.
HandleFunctionRequest() is no longer responsible for reading the protocol
message from the client, since commit 2b3a8b20c2. Fix the outdated
comments.
HandleFunctionRequest() now always returns 0, because the code that used
to return EOF was moved in 2b3a8b20c2. Therefore, the caller no longer
needs to check the return value.
Reported by Andres Freund. Backpatch to all supported versions, even though
this doesn't have any user-visible effect, to make backporting future
patches in this area easier.
Tom Lane [Thu, 6 Apr 2017 03:51:27 +0000 (23:51 -0400)]
Fix integer-overflow problems in interval comparison.
When using integer timestamps, the interval-comparison functions tried
to compute the overall magnitude of an interval as an int64 number of
microseconds. As reported by Frazer McLean, this overflows for intervals
exceeding about 296000 years, which is bad since we nominally allow
intervals many times larger than that. That results in wrong comparison
results, and possibly in corrupted btree indexes for columns containing
such large interval values.
To fix, compute the magnitude as int128 instead. Although some compilers
have native support for int128 calculations, many don't, so create our
own support functions that can do 128-bit addition and multiplication
if the compiler support isn't there. These support functions are designed
with an eye to allowing the int128 code paths in numeric.c to be rewritten
for use on all platforms, although this patch doesn't do that, or even
provide all the int128 primitives that will be needed for it.
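A minimal illustration of the fallback idea (the type and function names
below are stand-ins, not the actual int128.h API):

    typedef struct
    {
        uint64      lo;             /* unsigned low half */
        int64       hi;             /* signed high half */
    } INT128_sim;

    static void
    int128_sim_add_uint64(INT128_sim *x, uint64 v)
    {
        uint64      oldlo = x->lo;

        x->lo += v;
        if (x->lo < oldlo)          /* unsigned wraparound means carry out */
            x->hi++;
    }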
Back-patch as far as 9.4. Earlier releases did not guard against overflow
of interval values at all (commit 146604ec4 fixed that), so it seems not
very exciting to worry about overly-large intervals for them.
Before 9.6, we did not assume that unreferenced "static inline" functions
would not draw compiler warnings, so omit functions not directly referenced
by timestamp.c, the only present consumer of int128.h. (We could have
omitted these functions in HEAD too, but since they were written and
debugged on the way to the present patch, and they look likely to be needed
by numeric.c, let's keep them in HEAD.) I did not bother to try to prevent
such warnings in a --disable-integer-datetimes build, though.
Before 9.5, configure will never define HAVE_INT128, so the part of
int128.h that exploits a native int128 implementation is dead code in the
9.4 branch. I didn't bother to remove it, thinking that keeping the file
looking similar in different branches is more useful.
In HEAD only, add a simple test harness for int128.h in src/tools/.
In back branches, this does not change the float-timestamps code path.
That's not subject to the same kind of overflow risk, since it computes
the interval magnitude as float8. (No doubt, when this code was originally
written, overflow was disregarded for exactly that reason.) There is a
precision hazard instead :-(, but we'll avert our eyes from that question,
since no complaints have been reported and that code's deprecated anyway.
Simon Riggs [Wed, 5 Apr 2017 22:00:42 +0000 (18:00 -0400)]
Collect and use multi-column dependency stats
Follow on patch in the multi-variate statistics patch series.
CREATE STATISTICS s1 WITH (dependencies) ON (a, b) FROM t;
ANALYZE;
will collect dependency stats on (a, b) and then use the measured
dependency in subsequent query planning.
Commit 7b504eb282ca2f5104b5c00b4f05a3ef6bb1385b added
CREATE STATISTICS with n-distinct coefficients. These are now
specified using the mutually exclusive option WITH (ndistinct).
Author: Tomas Vondra, David Rowley
Reviewed-by: Kyotaro HORIGUCHI, Álvaro Herrera, Dean Rasheed, Robert Haas,
and many other comments and contributions
Discussion: https://postgr.es/m/56f40b20-c464-fad2-ff39-06b668fac47c@2ndquadrant.com
Robert Haas [Wed, 5 Apr 2017 18:17:23 +0000 (14:17 -0400)]
Fix pageinspect failures on hash indexes.
Make every page in a hash index which isn't all-zeroes have a valid
special space, so that tools like pageinspect don't error out.
Also, make pageinspect cope with all-zeroes pages, because
_hash_alloc_buckets can leave behind large numbers of those until
they're consumed by splits.
Ashutosh Sharma and Robert Haas, reviewed by Amit Kapila.
Original trouble report from Jeff Janes.
Kevin Grittner [Tue, 4 Apr 2017 23:36:39 +0000 (18:36 -0500)]
Follow-on cleanup for the transition table patch.
Commit 59702716 added transition table support to PL/pgsql so that
SQL queries in trigger functions could access those transient
tables. In order to provide the same level of support for PL/perl,
PL/python and PL/tcl, refactor the relevant code into a new
function SPI_register_trigger_data. Call the new function in the
trigger handler of all four PLs, and document it as a public SPI
function so that authors of out-of-tree PLs can do the same.
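In essence, each PL's trigger handler now does the following (error
handling condensed to a sketch):

    int     rc = SPI_register_trigger_data(trigdata);

    if (rc != SPI_OK_TD_REGISTER)
        elog(ERROR, "SPI_register_trigger_data failed: %s",
             SPI_result_code_string(rc));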
Also get rid of a second QueryEnvironment object that was
maintained by PL/pgsql. That was previously used to deal with
cursors, but the same approach wasn't appropriate for PLs that are
less tangled up with core code. Instead, have SPI_cursor_open
install the connection's current QueryEnvironment, as already
happens for SPI_execute_plan.
While in the docs, remove the note that transition tables were only
supported in C and PL/pgSQL triggers, and correct some omissions.
Thomas Munro with some work by Kevin Grittner (mostly docs)
Andres Freund [Tue, 4 Apr 2017 21:26:42 +0000 (14:26 -0700)]
Fix two valgrind issues in slab allocator.
During allocation, VALGRIND_MAKE_MEM_DEFINED was called with a pointer
as the size. That kind of works, but makes valgrind exceedingly slow for
workloads involving the slab allocator.
Secondly, there was an access to memory marked as unreachable within
SlabCheck(). Fix that too.
Author: Tomas Vondra
Discussion: https://postgr.es/m/a6543b6d-6015-99b1-63ef-3ed55a76a730@2ndquadrant.com
Simon Riggs [Tue, 4 Apr 2017 19:56:56 +0000 (15:56 -0400)]
Speedup 2PC recovery by skipping two phase state files in normal path
2PC state info is held in shmem at PREPARE and cleaned up at COMMIT
PREPARED/ABORT PREPARED, avoiding writing/fsyncing any state information
to disk in the normal path and greatly enhancing replay speed.
Prepared transactions that live past one checkpoint redo horizon will
still be written to disk, as before.
Similar conceptually to 978b2f65aa1262eb4ecbf8b3785cb1b9cf4db78e and building upon
the infrastructure created by that commit.
Authors, in equal measure: Stas Kelvich, Nikhil Sontakke and Michael Paquier
Discussion: https://postgr.es/m/CAMGcDxf8Bn9ZPBBJZba9wiyQq-Qk5uqq=VjoMnRnW5s+fKST3w@mail.gmail.com
When changing the type of a sequence, adjust the min/max values of the
sequence if it looks like the previous values were the default values.
Previously, it would leave the old values in place, requiring manual
adjustments even in the usual/default cases.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Vitaly Burovoy <vitaly.burovoy@gmail.com>
Stephen Frost [Tue, 4 Apr 2017 12:42:09 +0000 (08:42 -0400)]
Remove --verbose from PROVE_FLAGS
Per discussion, the TAP tests are really more verbose than necessary, so
remove the --verbose flag from PROVE_FLAGS. Also add comments to let
folks know how they can enable it if they really wish to, as suggested
by Craig Ringer.
Author: Michael Paquier, additional comments by me.
Discussion: https://postgr.es/m/CAMsr%2BYGAzcMDOZ_BirnMCL6Sb%3DMUjP0FRE82YBDSbXcf6pm9Yg%40mail.gmail.com
Fix remote position tracking in logical replication
We need to set the origin remote position to end_lsn, not commit_lsn, as
commit_lsn is the start of the commit record, and we use the origin remote
position as start position when restarting replication stream. If we'd
use commit_lsn, we could request data that we already received from the
remote server after a crash of a downstream server.
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>