Tom Lane [Thu, 29 Mar 2018 16:44:19 +0000 (12:44 -0400)]
Remove unnecessary BufferGetPage() calls in fsm_vacuum_page().
Just noticed that these were quite redundant, since we're holding the
page address in a local variable anyway, and we hold a pin on the buffer
throughout.
Tom Lane [Thu, 29 Mar 2018 16:22:37 +0000 (12:22 -0400)]
Remove UpdateFreeSpaceMap(), use FreeSpaceMapVacuumRange() instead.
FreeSpaceMapVacuumRange has the same effect, is more efficient if many
pages are involved, and makes fewer assumptions about how it's used.
Notably, Claudio Freire pointed out that UpdateFreeSpaceMap could fail
if the specified freespace value isn't the maximum possible. This isn't
a problem for the single existing user, but the function represents an
attractive nuisance IMO, because it's named as though it were a
general-purpose update function and its limitations are undocumented.
In any case we don't need multiple ways to get the same result.
In passing, do some code review and cleanup in RelationAddExtraBlocks.
In particular, I see no excuse for it to omit the PageIsNew safety check
that's done in the mainline extension path in RelationGetBufferForTuple.
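For illustration, a minimal sketch of the kind of safety check meant here (not the
committed code; the helper and its name are made up for this example):

    #include "postgres.h"
    #include "storage/bufmgr.h"
    #include "storage/bufpage.h"
    #include "utils/rel.h"

    /*
     * Illustrative only: verify that a block just obtained through relation
     * extension is still uninitialized before doing anything with it.
     */
    static void
    check_extended_page(Relation relation, Buffer buffer)
    {
        Page        page = BufferGetPage(buffer);

        if (!PageIsNew(page))
            elog(ERROR, "page %u of relation \"%s\" should be empty but is not",
                 BufferGetBlockNumber(buffer),
                 RelationGetRelationName(relation));
    }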
Tom Lane [Thu, 29 Mar 2018 15:29:54 +0000 (11:29 -0400)]
While vacuuming a large table, update upper-level FSM data every so often.
VACUUM updates leaf-level FSM entries immediately after cleaning the
corresponding heap blocks. fsmpage.c updates the intra-page search trees
on the leaf-level FSM pages when this happens, but it does not touch the
upper-level FSM pages, so that the released space might not actually be
findable by searchers. Previously, updating the upper-level pages happened
only at the conclusion of the VACUUM run, in a single FreeSpaceMapVacuum()
call. This is bad because the VACUUM might get canceled before ever
reaching that point, so that from the point of view of searchers no space
has been freed at all, leading to table bloat.
We can improve matters by updating the upper pages immediately after each
cycle of index-cleaning and heap-cleaning, processing just the FSM pages
corresponding to the range of heap blocks we have now fully cleaned.
This adds a small amount of extra work, since the FSM pages leading down
to each range boundary will be touched twice, but it's pretty negligible
compared to everything else going on in a large VACUUM.
If there are no indexes, VACUUM doesn't work in cycles but just cleans
each heap page on first visit. In that case we just arbitrarily update
upper FSM pages after each 8GB of heap. That maintains the goal of not
letting all this work slide until the very end, and it doesn't seem worth
expending extra complexity on a case that so seldom occurs in practice.
In either case, the FSM is fully up to date before any attempt is made
to truncate the relation, so that the most likely scenario for VACUUM
cancellation no longer results in out-of-date upper FSM pages. When
we do successfully truncate, adjusting the FSM to reflect that is now
fully handled within FreeSpaceMapTruncateRel.
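For illustration, a rough sketch of the per-range update described above, using the
FreeSpaceMapVacuumRange() API (not the committed vacuumlazy.c code; the helper,
macro, and variable names are made up):

    #include "postgres.h"
    #include "storage/freespace.h"

    /* 8GB of heap, expressed as a number of blocks (illustrative only) */
    #define FSM_UPDATE_EVERY_PAGES \
        ((BlockNumber) (((uint64) 8 * 1024 * 1024 * 1024) / BLCKSZ))

    static BlockNumber next_fsm_block_to_vacuum = 0;

    /*
     * After fully cleaning heap blocks up to (but not including) blkno,
     * propagate the leaf-level FSM updates into the upper FSM pages for
     * just that range, so the freed space stays findable even if this
     * VACUUM is canceled later.
     */
    static void
    maybe_vacuum_fsm_range(Relation onerel, BlockNumber blkno)
    {
        if (blkno - next_fsm_block_to_vacuum >= FSM_UPDATE_EVERY_PAGES)
        {
            FreeSpaceMapVacuumRange(onerel, next_fsm_block_to_vacuum, blkno);
            next_fsm_block_to_vacuum = blkno;
        }
    }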
Claudio Freire, reviewed by Masahiko Sawada and Jing Wang, some additional
tweaks by me
Teodor Sigaev [Thu, 29 Mar 2018 13:33:56 +0000 (16:33 +0300)]
Add casts from jsonb
Add explicit casts from scalar jsonb to all numeric and bool types. It would be
better to have a cast from scalar jsonb to text too, but there is already a cast
from jsonb to text that just gives the text representation of the json. There is
no way to have two different casts for the same pair of types.
Bump catalog version
Author: Anastasia Lubennikova, with editing by Nikita Glukhov and me
Review by: Aleksander Alekseev, Nikita Glukhov, Darafei Praliaskouski
Discussion: https://www.postgresql.org/message-id/flat/0154d35a-24ae-f063-5273-9ffcdf1c7f2e@postgrespro.ru
Peter Eisentraut [Wed, 28 Mar 2018 22:57:10 +0000 (18:57 -0400)]
Allow committing inside cursor loop
Previously, committing or aborting inside a cursor loop was prohibited
because that would close and remove the cursor. To allow that,
automatically convert such cursors to holdable cursors so they survive
commits or rollbacks. Portals now have a new state "auto-held", which
means they have been converted automatically from pinned. An auto-held
portal is kept on transaction commit or rollback, but is still removed
when returning to the main loop on error.
This supports all languages that have cursor loop constructs: PL/pgSQL,
PL/Python, PL/Perl.
Andres Freund [Wed, 28 Mar 2018 21:22:42 +0000 (14:22 -0700)]
Add documentation for the JIT feature.
As promised in earlier commits, this adds documentation about the new
build options, the new GUCs, about the planner logic when JIT is used,
and the benefits of JIT in general.
Also adds a more implementation oriented README.
I'm sure we're going to want to expand this further, but I think this
is a reasonable start.
Author: Andres Freund, with contributions by Thomas Munro
Reviewed-By: Thomas Munro
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
Andres Freund [Wed, 28 Mar 2018 20:26:51 +0000 (13:26 -0700)]
Add EXPLAIN support for JIT.
This just shows a few details about JITing, e.g. how many functions
have been JITed, and how long that took. To avoid noise in regression
tests with functions sometimes being JITed in --with-llvm builds,
disable display when COSTS OFF is specified.
Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
Andres Freund [Wed, 28 Mar 2018 20:19:08 +0000 (13:19 -0700)]
Add inlining support to LLVM JIT provider.
This provides infrastructure to allow JITed code to inline code
implemented in C. Such code can be, e.g., postgres internal functions or
extension code.
This already speeds up long-running queries by allowing the LLVM
optimizer to optimize across function boundaries. The optimization
currently doesn't reach its full potential because LLVM cannot fully
optimize away the FunctionCallInfoData argument, since it is allocated
on the heap rather than the stack. Fixing that is beyond what's
realistic for v11.
To be able to do that, use clang to convert C code to LLVM bitcode,
and have LLVM build a summary for it. That bitcode can then be used
to inline functions at runtime. For that the bitcode needs to be
installed. Postgres bitcode goes into $pkglibdir/bitcode/postgres, and
extensions go into equivalent directories. PGXS has been modified so
that this happens automatically if postgres has been compiled with LLVM
support.
Currently this isn't the fastest inlining implementation: modules are
reloaded from disk during inlining. That works around an apparent
LLVM bug that triggers a spurious error in assertion-enabled LLVM
builds. Once that is resolved we can remove the superfluous
reads from disk.
Docs will follow in a later commit containing docs for the whole JIT
feature.
Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
Andres Freund [Wed, 28 Mar 2018 19:45:32 +0000 (12:45 -0700)]
Use isinf builtin for clang, for performance.
When compiling with clang, glibc's definition of isinf() ends up
leading to an external libc function call. That's because there was a
bug in the builtin in an old gcc version, and clang claims
compatibility with that older version. This causes clang to be
measurably slower than gcc for floating-point-heavy workloads.
To fix, simply redirect isinf() to the builtin when compiling with
clang and clang confirms it has __builtin_isinf().
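For illustration, the shape of such a redirect (a sketch of the idea, not the
committed hunk):

    /*
     * Illustrative only: when compiling with clang and the builtin is
     * available, avoid glibc's out-of-line isinf() call.
     */
    #if defined(__clang__) && __has_builtin(__builtin_isinf)
    #undef isinf
    #define isinf(x) __builtin_isinf(x)
    #endif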
Fujii Masao [Wed, 28 Mar 2018 19:56:52 +0000 (04:56 +0900)]
Make pg_rewind skip files and directories that are removed during server start.
The target cluster that was rewound needs to perform recovery from
the checkpoint created at failover, which leads it to remove or recreate
some files and directories that may have been copied from the source
cluster. So pg_rewind can skip synchronizing such files and directories,
which reduces the amount of data transferred during a rewind
without changing the usefulness of the operation.
Author: Michael Paquier
Reviewed-by: Anastasia Lubennikova, Stephen Frost and me
Discussion: https://postgr.es/m/20180205071022.GA17337@paquier.xyz
Fujii Masao [Wed, 28 Mar 2018 19:00:21 +0000 (04:00 +0900)]
Fix handling of files that the source server removes while pg_rewind is running.
After processing the filemap to build the list of chunks that will be
fetched from the source to rewind the target server, it is possible that
a file which was previously processed is removed from the source. A
simple example of such an occurrence is a WAL segment which gets recycled
on the source in the meantime. When the filemap is processed, files not
categorized as relation files are first truncated to prepare for the
full copy of them which is going to be taken from the source, divided
into a set of chunks. However, for a recycled WAL segment, this would
result in a segment which has a zero-byte size. With such an empty file,
post-rewind recovery thinks that records are present when they actually
are not, because of the truncation which happened when processing the
filemap, resulting in data loss.
In order to fix the problem, make sure that files which are found to
have been removed on the source while chunks of them are being received
are deleted on the target server as well, for consistency.
Peter Eisentraut [Sat, 24 Mar 2018 14:05:06 +0000 (10:05 -0400)]
PL/pgSQL: Nested CALL with transactions
So far, a nested CALL or DO in PL/pgSQL would not establish a context
where transaction control statements were allowed. This fixes that by
handling CALL and DO specially in PL/pgSQL, passing the atomic/nonatomic
execution context through and doing the required management around
transaction boundaries.
Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>
Tom Lane [Wed, 28 Mar 2018 17:26:43 +0000 (13:26 -0400)]
Fix actual and potential double-frees around tuplesort usage.
tuplesort_gettupleslot() passed back tuples allocated in the tuplesort's
own memory context, even when the caller was responsible to free them.
This created a double-free hazard, because some callers might destroy
the tuplesort object (via tuplesort_end) before trying to clean up the
last returned tuple. To avoid this, change the API to specify that the
tuple is allocated in the caller's memory context. v10 and HEAD already
did things that way, but in 9.5 and 9.6 this is a live bug that can
demonstrably cause crashes with some grouping-set usages.
In 9.5 and 9.6, this requires doing an extra tuple copy in some cases,
which is unfortunate. But the amount of refactoring needed to avoid it
seems excessive for a back-patched change, especially since the cases
where an extra copy happens are less performance-critical.
Likewise change tuplesort_getdatum() to return pass-by-reference Datums
in the caller's context not the tuplesort's context. There seem to be
no live bugs among its callers, but clearly the same sort of situation
could happen in future.
For other tuplesort fetch routines, continue to allocate the memory in
the tuplesort's context. This is a little inconsistent with what we now
do for tuplesort_gettupleslot() and tuplesort_getdatum(), but that's
preferable to adding new copy overhead in the back branches where it's
clearly unnecessary. These other fetch routines provide the weakest
possible guarantees about tuple memory lifespan from v10 on, anyway,
so this actually seems more consistent overall.
Adjust relevant comments to reflect these API redefinitions.
Arguably, we should change the pre-9.5 branches as well, but since
there are no known failure cases there, it seems not worth the risk.
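For illustration, the usage pattern that the redefined API makes safe (a sketch
assuming the v10/HEAD-style tuplesort_gettupleslot() signature; not code taken
from the patch):

    /* the slot lives in the caller's context, independent of the tuplesort */
    TupleTableSlot *slot = MakeSingleTupleTableSlot(tupdesc);

    while (tuplesort_gettupleslot(sortstate, true, true, slot, NULL))
    {
        /* ... process the tuple ... */
    }

    /*
     * Safe in either order now: the last returned tuple is allocated in the
     * caller's memory context, not the tuplesort's.
     */
    tuplesort_end(sortstate);
    ExecDropSingleTupleTableSlot(slot);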
Peter Geoghegan, per report from Bernd Helmle. Reviewed by Kyotaro
Horiguchi; thanks also to Andreas Seltenreich for extracting a
self-contained test case.
Simon Riggs [Wed, 28 Mar 2018 16:42:50 +0000 (17:42 +0100)]
Store 2PC GID in commit/abort WAL recs for logical decoding
Store GID of 2PC in commit/abort WAL records when wal_level = logical.
This allows logical decoding to send the SAME gid to subscribers
across restarts of logical replication.
Track replica origin replay progress for 2PC.
(Edited from patch 0003 in the logical decoding 2PC series.)
Authors: Nikhil Sontakke, Stas Kelvich
Reviewed-by: Simon Riggs, Andres Freund
Peter Eisentraut [Wed, 28 Mar 2018 12:32:43 +0000 (08:32 -0400)]
Transforms for jsonb to PL/Python
Add a new contrib module jsonb_plpython that provides a transform between
jsonb and PL/Python. jsonb values are converted to appropriate Python
types such as dicts and lists, and vice versa.
Author: Anthony Bykov <a.bykov@postgrespro.ru>
Reviewed-by: Aleksander Alekseev <a.alekseev@postgrespro.ru>
Reviewed-by: Nikita Glukhov <n.gluhov@postgrespro.ru>
Simon Riggs [Wed, 28 Mar 2018 04:21:00 +0000 (05:21 +0100)]
Use pg_stat_get_xact* functions within xacts
Resolve build farm failures from c203d6cf81b4d7e43,
diagnosed by Tom Lane.
The output of pg_stat_get_xact_tuples_hot_updated() and friends
is not guaranteed to show anything after the transaction completes.
Data is flushed slowly to the stats collector, so using them can
give rise to timing issues.
Andres Freund [Wed, 28 Mar 2018 04:03:10 +0000 (21:03 -0700)]
Quick adaptation of JIT tuple deforming to the fast default patch.
Instead of using memset to set tts_isnull, call the new
slot_getmissingattrs().
Also fix a bug (= instead of >=) in the code generation. Normally = is
correct, but when repeatedly deforming fields not in a
tuple (e.g. deform up to natts + 1 and then natts + 2) >= is needed.
Andrew Dunstan [Wed, 28 Mar 2018 00:13:52 +0000 (10:43 +1030)]
Fast ALTER TABLE ADD COLUMN with a non-NULL default
Currently adding a column to a table with a non-NULL default results in
a rewrite of the table. For large tables this can be both expensive and
disruptive. This patch removes the need for the rewrite as long as the
default value is not volatile. The default expression is evaluated at
the time of the ALTER TABLE and the result stored in a new column
(attmissingval) in pg_attribute, and a new column (atthasmissing) is set
to true. Any existing row when fetched will be supplied with the
attmissingval. New rows will have the supplied value or the default and
so will never need the attmissingval.
Any time the table is rewritten all the atthasmissing and attmissingval
settings for the attributes are cleared, as they are no longer needed.
The most visible code change from this is in heap_attisnull, which
acquires a third TupleDesc argument, allowing it to detect a missing
value if there is one. In many cases where it is known that there will
not be any (e.g. catalog relations) NULL can be passed for this
argument.
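For illustration, how calls now look after the signature change (a sketch, not
taken from the patch; the surrounding error handling is made up):

    /* user table: pass the tuple descriptor so missing-value defaults are seen */
    if (heap_attisnull(tuple, attnum, RelationGetDescr(rel)))
        ereport(ERROR,
                (errcode(ERRCODE_NOT_NULL_VIOLATION),
                 errmsg("column %d is null", attnum)));

    /* catalog tuple: no missing values can exist, so NULL is fine here */
    if (heap_attisnull(catalog_tuple, Anum_pg_class_relacl, NULL))
    {
        /* ... treat the ACL as the default ... */
    }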
Andrew Dunstan, heavily modified from an original patch from Serge
Rielau.
Reviewed by Tom Lane, Andres Freund, Tomas Vondra and David Rowley.
Tom Lane [Tue, 27 Mar 2018 22:15:39 +0000 (18:15 -0400)]
Update pgindent's typedefs blacklist, and make it easier to adjust.
It seems that all buildfarm members are now using the <stdbool.h> code
path, so that none of them report "bool" as a typedef. We still need it
to be treated that way, so adjust pgindent to force that whether or not
it's in the given list.
Also, the recent introduction of LLVM infrastructure has caused the
appearance of some typedef names that we definitely *don't* want
treated as typedefs, such as "string" and "abs". Extend the existing
blacklist to include these. (Additions based on comparing v10's
typedefs list to what the buildfarm is currently emitting.)
Rearrange the code so that the lists of whitelisted/blacklisted
names are a bit easier to find and modify.
Tom Lane [Tue, 27 Mar 2018 20:46:47 +0000 (16:46 -0400)]
Allow memory contexts to have both fixed and variable ident strings.
Originally, we treated memory context names as potentially variable in
all cases, and therefore always copied them into the context header.
Commit 9fa6f00b1 rethought this a little bit and invented a distinction
between fixed and variable names, skipping the copy step for the former.
But we can make things both simpler and more useful by instead allowing
there to be two parts to a context's identification, a fixed "name" and
an optional, variable "ident". The name supplied in the context create
call is now required to be a compile-time-constant string in all cases,
as it is never copied but just pointed to. The "ident" string, if
wanted, is supplied later. This is needed because typically we want
the ident to be stored inside the context so that it's cleaned up
automatically on context deletion; that means it has to be copied into
the context before we can set the pointer.
The cost of this approach is basically just an additional pointer field
in struct MemoryContextData, which isn't much overhead, and is bought
back entirely in the AllocSet case by not needing a headerSize field
anymore, since we no longer have to cope with variable header length.
In addition, we can simplify the internal interfaces for memory context
creation still further, saving a few cycles there. And it's no longer
true that a custom identifier disqualifies a context from participating
in aset.c's freelist scheme, so possibly there's some win on that end.
All the places that were using non-compile-time-constant context names
are adjusted to put the variable info into the "ident" instead. This
allows more effective identification of those contexts in many cases;
for example, subsidiary contexts of relcache entries are now identified
by both type (e.g. "index info") and relname, where before you got only
one or the other. Contexts associated with PL function cache entries
are now identified more fully and uniformly, too.
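For illustration, a sketch of the two-part identification for one such relcache
subsidiary context (assuming a MemoryContextSetIdentifier() call as described;
not the exact committed code):

    MemoryContext indexcxt;

    /* fixed, compile-time-constant name: never copied, just pointed to */
    indexcxt = AllocSetContextCreate(CacheMemoryContext,
                                     "index info",
                                     ALLOCSET_SMALL_SIZES);

    /* variable part: copied into the context so it is freed along with it */
    MemoryContextSetIdentifier(indexcxt,
                               MemoryContextStrdup(indexcxt,
                                                   RelationGetRelationName(relation)));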
I also arranged for plancache contexts to use the query source string
as their identifier. This is basically free for CachedPlanSources, as
they contained a copy of that string already. We pay an extra pstrdup
to do it for CachedPlans. That could perhaps be avoided, but it would
make things more fragile (since the CachedPlanSource is sometimes
destroyed first). I suspect future improvements in error reporting will
require CachedPlans to have a copy of that string anyway, so it's not
clear that it's worth moving mountains to avoid it now.
This also changes the APIs for context statistics routines so that the
context-specific routines no longer assume that output goes straight
to stderr, nor do they know all details of the output format. This
is useful immediately to reduce code duplication, and it also allows
for external code to do something with stats output that's different
from printing to stderr.
The reason for pushing this now rather than waiting for v12 is that
it rethinks some of the API changes made by commit 9fa6f00b1. Seems
better for extension authors to endure just one round of API changes
not two.
Simon Riggs [Tue, 27 Mar 2018 18:57:02 +0000 (19:57 +0100)]
Allow HOT updates for some expression indexes
If the value of an index expression is unchanged after UPDATE,
allow HOT updates where previously we disallowed them, giving
a significant performance boost in those cases.
Particularly useful for indexes such as JSON->>field where the
JSON value changes but the indexed value does not.
Submitted as "surjective indexes" patch, now enabled by use
of new "recheck_on_update" parameter.
Author: Konstantin Knizhnik
Reviewer: Simon Riggs, with much wordsmithing and some cleanup
Peter Eisentraut [Tue, 27 Mar 2018 16:31:34 +0000 (12:31 -0400)]
libpq: PQhost to return active connected host or hostaddr
Previously, PQhost didn't return the connected host details when the
connection type was CHT_HOST_ADDRESS (i.e., via hostaddr). Instead, it
returned the complete host connection parameter (which could contain
multiple hosts) or the default host details, which was confusing and
arguably incorrect.
Change this to return the actually connected host or hostaddr
irrespective of the connection type. When hostaddr but no host was
specified, hostaddr is now returned. Never return the original host
connection parameter, and document that PQhost cannot be relied on
before the connection is established.
PQport is similarly changed to always return the active connection port
and never the original connection parameter.
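For illustration, client code relying on the clarified behavior (a sketch; the
connection string is made up):

    #include <stdio.h>
    #include "libpq-fe.h"

    int
    main(void)
    {
        PGconn *conn = PQconnectdb("host=db1.example.com,db2.example.com port=5432");

        /* PQhost/PQport are only meaningful once the connection is established */
        if (PQstatus(conn) == CONNECTION_OK)
            printf("connected to %s:%s\n", PQhost(conn), PQport(conn));

        PQfinish(conn);
        return 0;
    }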
Author: Hari Babu <kommi.haribabu@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Teodor Sigaev [Tue, 27 Mar 2018 12:43:19 +0000 (15:43 +0300)]
Add predicate locking for GiST
Add page-level predicate locking. Thanks to GiST's code organization, the patch
seems close to trivial: add a check before changing a page, and take a predicate
lock before scanning a page. Choosing the right place for the check is not simple,
though: it should not be called during index build, it should support insertion of
new downlinks, and so on.
Author: Shubham Barai, with editing by me and Alexander Korotkov
Reviewed by: Alexander Korotkov, Andrey Borodin, me
Discussion: https://www.postgresql.org/message-id/flat/CALxAEPtdcANpw5ePU3LvnTP8HCENFw6wygupQAyNBgD-sG3h0g@mail.gmail.com
Andres Freund [Mon, 26 Mar 2018 21:59:37 +0000 (14:59 -0700)]
Make new regression tests independent of max_parallel_workers_per_gather.
The tests in e2f1eb0ee30d1 ("Implement partition-wise
grouping/aggregation.") weren't independent of the server's
max_parallel_workers_per_gather setting. I (Andres) find it useful to
locally run with that disabled, and the aforementioned patch broke
this.
Andres Freund [Mon, 26 Mar 2018 19:57:19 +0000 (12:57 -0700)]
JIT tuple deforming in LLVM JIT provider.
Performing JIT compilation for deforming gains performance benefits
over unJITed deforming from compile-time knowledge of the tuple
descriptor. Fixed column widths, NOT NULLness, etc can be taken
advantage of.
Right now the JITed deforming is only used when deforming tuples as
part of expression evaluation (and obviously only if the descriptor is
known). It's likely to be beneficial in other cases, too.
By default tuple deforming is JITed whenever an expression is JIT
compiled. There's a separate boolean GUC controlling it, but that's
expected to be primarily useful for development and benchmarking.
Docs will follow in a later commit containing docs for the whole JIT
feature.
Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
Teodor Sigaev [Mon, 26 Mar 2018 15:26:27 +0000 (18:26 +0300)]
Set random seed for pgbench.
Setting the random seed can increase the reproducibility of tests in some
cases. The patch provides three seed sources: time (the default), a strong
random generator (if available), and an unsigned constant. The seed can be
set from the command line or from an environment variable.
Alvaro Herrera [Mon, 26 Mar 2018 15:00:25 +0000 (12:00 -0300)]
Fix thinko in comment
The listed numbers disagreed with the ones being used in the symbols;
but instead of just fixing the numbers in the comment, use the symbolic
names, which seems clearer.
This has been wrong all along, so apply back to 9.5 where BRIN was
introduced.
Reported-by: Tomas Vondra
Discussion: https://postgr.es/m/5ff514f2-8b1e-6366-b11c-8e2ed442562d@2ndquadrant.com
Alvaro Herrera [Mon, 26 Mar 2018 13:43:54 +0000 (10:43 -0300)]
Handle INSERT .. ON CONFLICT with partitioned tables
Commit eb7ed3f30634 enabled unique constraints on partitioned tables,
but one thing that was not working properly is INSERT/ON CONFLICT.
This commit introduces a new node that keeps state related to the ON CONFLICT
clause per partition, and fills it when that partition is about to be
used for tuple routing.
Andrew Dunstan [Mon, 26 Mar 2018 12:09:24 +0000 (22:39 +1030)]
Optimize btree insertions for common case of increasing values
Remember the last page of an index insert if it's the rightmost leaf
page. If the next entry belongs on and can fit in the remembered page,
insert the new entry there as long as we can get a lock on the page.
Otherwise, fall back on the more expensive method of searching for
the right place to insert the entry.
This provides a performance improvement for the common case where an
index entry is for a monotonically increasing or nearly monotonically
increasing value, such as an identity field or a current timestamp.
Pavan Deolasee
Reviewed by Claudio Freire, Simon Riggs and Peter Geoghegan
Tom Lane [Sun, 25 Mar 2018 20:15:15 +0000 (16:15 -0400)]
Doc: add example of type resolution in nested UNIONs.
Section 10.5 didn't say explicitly that multiple UNIONs are resolved
pairwise. Since the resolution algorithm is described as taking any
number of inputs, readers might well think that a query like
"select x union select y union select z" would be resolved by
considering x, y, and z in one resolution step. But that's not what
happens (and I think that behavior is per SQL spec). Add an example
clarifying this point.
Tom Lane [Sun, 25 Mar 2018 19:15:32 +0000 (15:15 -0400)]
Fix unsafe extraction of the OID part of a relation filename.
Commit 8694cc96b did this randomly differently from other callers of
parse_filename_for_nontemp_relation(). Perhaps unsurprisingly,
the randomly different way is wrong; it fails to ensure the
extracted string is null-terminated. Per buildfarm member skink.
Tom Lane [Sun, 25 Mar 2018 18:54:16 +0000 (14:54 -0400)]
Remove useless if-test.
Coverity complained that this check is pointless, and it's right.
There is no case where we'd call ExecutorStart with a null plannedstmt,
and if we did, it'd have crashed before here. Thinko in commit cc415a56d.
Tom Lane [Sun, 25 Mar 2018 04:09:26 +0000 (00:09 -0400)]
Stabilize regression test result.
If random() returns a result sufficiently close to zero, float8out
switches to scientific notation, breaking this test case's expectation
that the output should look like '0.xxxxxxxxx'. Casting to numeric
should fix that. Per buildfarm member pogona.
Peter Eisentraut [Sun, 25 Mar 2018 01:14:20 +0000 (21:14 -0400)]
Add long options to pg_resetwal and pg_controldata
We were running out of good single-letter options for some upcoming
pg_resetwal functionality, so add long options to create more
possibilities. Add to pg_controldata as well for symmetry.
Based on a patch by Nathan Bossart <bossartn@amazon.com>
Peter Eisentraut [Sat, 24 Mar 2018 19:38:57 +0000 (15:38 -0400)]
Improve pg_resetwal documentation
Clarify that the -l option takes a file name, not an "address", and that
that might be different from the LSN if nondefault WAL segment sizes are
used.
Noah Misch [Sat, 24 Mar 2018 03:31:03 +0000 (20:31 -0700)]
Don't qualify type pg_catalog.text in extend-extensions-example.
Extension scripts begin execution with pg_catalog at the front of the
search path, so type names reliably refer to pg_catalog. Remove these
superfluous qualifications. Earlier <programlisting> of this <sect1>
already omitted them. Back-patch to 9.3 (all supported versions).
Peter Eisentraut [Fri, 23 Mar 2018 20:31:49 +0000 (16:31 -0400)]
Further fix interaction of Perl and stdbool.h
In the case that PostgreSQL uses stdbool.h but Perl doesn't, we need to
prevent Perl from defining bool, to prevent compiler warnings about
redefinition.
Tom Lane [Fri, 23 Mar 2018 17:45:37 +0000 (13:45 -0400)]
Fix make rules that generate multiple output files.
For years, our makefiles have correctly observed that "there is no correct
way to write a rule that generates two files". However, what we did is to
provide empty rules that "generate" the secondary output files from the
primary one, and that's not right either. Depending on the details of
the creating process, the primary file might end up timestamped later than
one or more secondary files, causing subsequent make runs to consider the
secondary file(s) out of date. That's harmless in a plain build, since
make will just re-execute the empty rule and nothing happens. But it's
fatal in a VPATH build, since make will expect the secondary file to be
rebuilt in the build directory. This would manifest as "file not found"
failures during VPATH builds from tarballs, if we were ever unlucky enough
to ship a tarball with apparently out-of-date secondary files. (It's not
clear whether that has ever actually happened, but it definitely could.)
To ensure that secondary output files have timestamps >= their primary's,
change our makefile convention to be that we provide a "touch $@" action
not an empty rule. Also, make sure that this rule actually gets invoked
during a distprep run, else the hazard remains.
It's been like this a long time, so back-patch to all supported branches.
In HEAD, I skipped the changes in src/backend/catalog/Makefile, because
those rules are due to get replaced soon in the bootstrap data format
patch, and there seems no need to create a merge issue for that patch.
If for some reason we fail to land that patch in v11, we'll need to
back-fill the changes in that one makefile from v10.
Teodor Sigaev [Fri, 23 Mar 2018 16:14:12 +0000 (19:14 +0300)]
Exclude unlogged tables from base backups
Exclude unlogged tables from base backups entirely, except for the init fork,
which marks the creation of the unlogged table. The next step would be to also
exclude temporary tables from backups, but that's a story for a separate patch.
Author: David Steele
Review by: Adam Brightwell, Masahiko Sawada
Discussion: https://www.postgresql.org/message-id/flat/04791bab-cb04-ba43-e9c0-664a4c1ffb2c@pgmasters.net
Peter Eisentraut [Fri, 23 Mar 2018 14:31:10 +0000 (10:31 -0400)]
Fix interaction of Perl and stdbool.h
Revert the PL/Perl-specific change in 9a95a77d9d5d3003d2d67121f2731b6e5fc37336. We must not prevent Perl from
using stdbool.h when it has been built to do so, even if it uses an
incompatible size. Otherwise, we would be imposing our bool on Perl,
which will lead to crashes because of the size mismatch.
Instead, we undef bool after including the Perl headers, as we did
previously, but now only if we are not using stdbool.h ourselves.
Record that choice in c.h as USE_STDBOOL. This will also make it easier
to apply that coding pattern elsewhere if necessary.
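For illustration, the resulting coding pattern (a sketch of the idea, not the
exact committed hunk):

    #include "EXTERN.h"
    #include "perl.h"

    /*
     * If postgres itself is not using stdbool.h (USE_STDBOOL unset), undo
     * whatever bool Perl's headers may have defined; otherwise leave Perl's
     * stdbool-based bool alone.
     */
    #ifndef USE_STDBOOL
    #ifdef bool
    #undef bool
    #endif
    #endif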
Peter Eisentraut [Fri, 23 Mar 2018 14:10:49 +0000 (10:10 -0400)]
pg_resetwal: Prevent division-by-zero errors
Handle the case where the pg_control file specifies a WAL segment size
of 0 bytes. This would previously have led to a division by zero error.
Change this to assume the whole file is corrupt and fall back to
guessing everything.
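For illustration, the shape of the guard (a sketch; variable names follow
pg_resetwal's style but this is not the committed code):

    /*
     * A WAL segment size of zero can only mean a corrupt pg_control file;
     * don't divide by it, just fall back to guessing all values.
     */
    if (ControlFile.xlog_seg_size == 0)
    {
        guessed = true;
        /* ... proceed as if pg_control could not be read at all ... */
    }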
Alvaro Herrera [Fri, 23 Mar 2018 13:48:22 +0000 (10:48 -0300)]
Allow FOR EACH ROW triggers on partitioned tables
Previously, FOR EACH ROW triggers were not allowed in partitioned
tables. Now we allow AFTER triggers on them, and on trigger creation we
cascade to create an identical trigger in each partition. We also clone
the triggers to each partition that is created or attached later.
This means that deferred unique keys are allowed on partitioned tables,
too.
Author: Álvaro Herrera
Reviewed-by: Peter Eisentraut, Simon Riggs, Amit Langote, Robert Haas, Thomas Munro
Discussion: https://postgr.es/m/20171229225319.ajltgss2ojkfd3kp@alvherre.pgsql
Andres Freund [Fri, 23 Mar 2018 05:15:51 +0000 (22:15 -0700)]
Adapt expression JIT to stdbool.h introduction.
The LLVM JIT provider uses clang to synchronize types between normal C
code and runtime generated code. Clang represents stdbool.h style
booleans in return values & parameters differently from booleans
stored in variables.
Thus the expression compilation code from 2a0faed9d needs to be
adapted to 9a95a77d9. Instead of hardcoding i8 as the type for
booleans (which already was wrong on some edge case platforms!), use
postgres' notion of a boolean as used for storage and for parameters.
Peter Eisentraut [Fri, 23 Mar 2018 01:59:28 +0000 (21:59 -0400)]
Remove stdbool workaround in sepgsql
Since we now use stdbool.h in c.h, this workaround breaks the build and
is no longer necessary, so remove it. (Technically, there could be
platforms with a 4-byte bool in stdbool.h, in which case we would not
include stdbool.h in c.h, and so the old problem that caused this
workaround would reappear. But this combination is not known to happen
on the range of platforms where sepgsql can be built.)
Peter Eisentraut [Fri, 23 Mar 2018 00:42:25 +0000 (20:42 -0400)]
Use stdbool.h if suitable
Using the standard bool type provided by C allows some recent compilers
and debuggers to give better diagnostics. Also, some extension code and
third-party headers are increasingly pulling in stdbool.h, so it's
probably saner if everyone uses the same definition.
But PostgreSQL code is not prepared to handle bool of a size other than
1, so we keep our own old definition if we encounter a stdbool.h with a
bool of a different size. (Among current build farm members, this only
applies to old macOS versions on PowerPC.)
To check that the bool in use is of the right size, add a static
assertion about the size of GinTernaryValue vs. bool. This is currently the
only place that assumes that bool and char are of the same size.
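For illustration, the kind of assertion meant here (a sketch using the existing
StaticAssertStmt() macro; the placement and function are illustrative):

    #include "postgres.h"
    #include "access/gin.h"

    static void
    check_bool_size(void)
    {
        StaticAssertStmt(sizeof(GinTernaryValue) == sizeof(bool),
                         "sizes of GinTernaryValue and bool are not equal");
    }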
Andres Freund [Tue, 20 Mar 2018 09:20:46 +0000 (02:20 -0700)]
Add expression compilation support to LLVM JIT provider.
In addition to the interpretation of expressions (which backs
evaluation of WHERE clauses, target list projection, aggregate
transition values, etc.), support compiling expressions to native code,
using the infrastructure added in earlier commits.
To avoid duplicating a lot of code, only support emitting code for
cases that are likely to be performance critical. For expression steps
not deemed performance critical, use the existing interpreter.
The generated code isn't great - some architectural changes are
required to address that. But this already yields a significant
speedup for some analytics queries, particularly with WHERE clauses
filtering a lot, or computing multiple aggregates.
Author: Andres Freund
Tested-By: Thomas Munro
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
Disable JITing for VALUES() nodes.
VALUES() nodes are only ever executed once. This is primarily helpful
for debugging, when forcing JITing even for cheap queries.
Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
Andres Freund [Wed, 24 Jan 2018 07:20:02 +0000 (23:20 -0800)]
Add FIELDNO_* macro designating offset into structs required for JIT.
For any interesting JIT target, fields inside structs need to be
accessed. b96d550e contains infrastructure for syncing the definition
of types between postgres C code and runtime code generation with
LLVM. But that doesn't sync the number or names of fields inside
structs, just the types (including padding etc).
One option would be to hardcode the offset numbers in the JIT code,
but that'd be hard to keep in sync. Instead add macros indicating the
field offset to the fields that need to be accessed. Not pretty, but
manageable.
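For illustration, the convention with a made-up struct (the real macros annotate
structs such as TupleTableSlot):

    #include "postgres.h"

    typedef struct ExampleSlot
    {
        int         nvalid;         /* number of valid columns */
    #define FIELDNO_EXAMPLESLOT_VALUES 1
        Datum      *values;         /* per-column values */
    #define FIELDNO_EXAMPLESLOT_ISNULL 2
        bool       *isnull;         /* per-column null flags */
    } ExampleSlot;

    /*
     * JIT code builds its struct-field accesses (e.g. LLVM getelementptr
     * indices) from FIELDNO_EXAMPLESLOT_ISNULL rather than from a hardcoded
     * number, so the constant lives next to the field it describes and is
     * easier to keep in sync.
     */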
Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
Tom Lane [Thu, 22 Mar 2018 21:33:10 +0000 (17:33 -0400)]
Improve style guideline compliance of assorted error-report messages.
Per the project style guide, details and hints should have leading
capitalization and end with a period. On the other hand, errcontext should
not be capitalized and should not end with a period. To support well
formatted error contexts in dblink, extend dblink_res_error() to take a
format+arguments rather than a hardcoded string.
Robert Haas [Thu, 22 Mar 2018 20:09:28 +0000 (16:09 -0400)]
Consider Parallel Append of partial paths for UNION [ALL].
Without this patch, we can implement a UNION or UNION ALL as an
Append where Gather appears beneath one or more of the Append
branches, but this lets us put the Gather node on top, with
a partial path for each relation underneath.
There is considerably more work that could be done to improve
planning in this area, but that will probably need to wait
for a future release.
Patch by me, reviewed and tested by Ashutosh Bapat and Rajkumar
Raghuwanshi.
Tom Lane [Thu, 22 Mar 2018 19:47:29 +0000 (15:47 -0400)]
Sync up our various ways of estimating pg_class.reltuples.
VACUUM thought that reltuples represents the total number of tuples in
the relation, while ANALYZE counted only live tuples. This can cause
"flapping" in the value when background vacuums and analyzes happen
separately. The planner's use of reltuples essentially assumes that
it's the count of live (visible) tuples, so let's standardize on having
it mean live tuples.
Another issue is that the definition of "live tuple" isn't totally clear;
what should be done with INSERT_IN_PROGRESS or DELETE_IN_PROGRESS tuples?
ANALYZE's choices in this regard are made on the assumption that if the
originating transaction commits at all, it will happen after ANALYZE
finishes, so we should ignore the effects of the in-progress transaction
--- unless it is our own transaction, and then we should count it.
Let's propagate this definition into VACUUM, too.
Likewise propagate this definition into CREATE INDEX, and into
contrib/pgstattuple's pgstattuple_approx() function.
Tomas Vondra, reviewed by Haribabu Kommi, some corrections by me
Andres Freund [Thu, 22 Mar 2018 18:45:07 +0000 (11:45 -0700)]
Basic planner and executor integration for JIT.
This adds a simple cost-based plan-time decision about whether JIT
should be performed. jit_above_cost and jit_optimize_above_cost are
compared with the total cost of a plan, and if the cost is above them,
JIT compilation and optimization, respectively, are performed.
For that PlannedStmt and EState have a jitFlags (es_jit_flags) field
that stores information about what JIT operations should be performed.
EState now also has a new es_jit field, which can store a
JitContext. When there are no errors the context is released in
standard_ExecutorEnd().
It is likely that the default values for jit_[optimize_]above_cost
will need to be adapted further, but in my tests these values seem to
work reasonably well.
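For illustration, the shape of that decision (a sketch; the PGJIT_* flag names and
GUC variables are assumptions based on the commit text, not the verified final
code):

    result->jitFlags = PGJIT_NONE;
    if (jit_enabled && jit_above_cost >= 0 &&
        top_plan->total_cost > jit_above_cost)
    {
        result->jitFlags |= PGJIT_PERFORM;

        /* only spend time on full optimization for expensive plans */
        if (jit_optimize_above_cost >= 0 &&
            top_plan->total_cost > jit_optimize_above_cost)
            result->jitFlags |= PGJIT_OPT3;
    }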
Author: Andres Freund, with feedback by Peter Eisentraut
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
Andres Freund [Thu, 22 Mar 2018 18:05:22 +0000 (11:05 -0700)]
Support for optimizing and emitting code in LLVM JIT provider.
This commit introduces the ability to actually generate code using
LLVM. In particular, this adds:
- Ability to emit code both in heavily optimized and largely
unoptimized fashion
- Batching facility to allow functions to be defined in small
increments, but optimized and emitted in executable form in larger
batches (for performance and memory efficiency)
- Type and function declaration synchronization between runtime
generated code and normal postgres code. This is critical to be able
to access struct fields etc.
- Developer oriented jit_dump_bitcode GUC, for inspecting / debugging
the generated code.
- per JitContext statistics of number of functions, time spent
generating code, optimizing, and emitting it. This will later be
employed for EXPLAIN support.
This commit doesn't yet contain any code actually generating
functions. That'll follow in later commits.
Documentation for GUCs added, and for JIT in general, will be added in
later commits.
Author: Andres Freund, with contributions by Pierre Ducroquet
Testing-By: Thomas Munro, Peter Eisentraut
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
Tom Lane [Thu, 22 Mar 2018 17:23:47 +0000 (13:23 -0400)]
Fix tuple counting in SP-GiST index build.
Count the number of tuples in the index honestly, instead of assuming
that it's the same as the number of tuples in the heap. (It might be
different if the index is partial.)
Tom Lane [Thu, 22 Mar 2018 17:13:58 +0000 (13:13 -0400)]
Fix errors in contrib/bloom index build.
Count the number of tuples in the index honestly, instead of assuming
that it's the same as the number of tuples in the heap. (It might be
different if the index is partial.)
Fix counting of tuples in current index page, too. This error would
have led to failing to write out the final page of the index if it
contained exactly one tuple, so that the last tuple of the relation
would not get indexed.
Robert Haas [Thu, 22 Mar 2018 16:49:48 +0000 (12:49 -0400)]
Implement partition-wise grouping/aggregation.
If the partition keys of input relation are part of the GROUP BY
clause, all the rows belonging to a given group come from a single
partition. This allows aggregation/grouping over a partitioned
relation to be broken down into aggregation/grouping on each
partition. This should be no worse, and often better, than the normal
approach.
If the GROUP BY clause does not contain all the partition keys, we can
still perform partial aggregation for each partition and then finalize
aggregation after appending the partial results. This is less certain
to be a win, but it's still useful.
Jeevan Chalke, Ashutosh Bapat, Robert Haas. The larger patch series
of which this patch is a part was also reviewed and tested by Antonin
Houska, Rajkumar Raghuwanshi, David Rowley, Dilip Kumar, Konstantin
Knizhnik, Pascal Legrand, and Rafia Sabih.
Teodor Sigaev [Thu, 22 Mar 2018 14:42:03 +0000 (17:42 +0300)]
Add \if support to pgbench
The patch adds \if to pgbench, as was done for psql. The implementation shares
the conditional-stack code with psql, so that code is moved to the fe_utils
directory.
Author: Fabien COELHO, with minor editing by me
Review by: Vik Fearing, Fedor Sigaev
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.20.1711252200190.28523@lancre
Dean Rasheed [Thu, 22 Mar 2018 09:37:36 +0000 (09:37 +0000)]
Improve ANALYZE's strategy for finding MCVs.
Previously, a value was included in the MCV list if its frequency was
25% larger than the estimated average frequency of all nonnull values
in the table. For uniform distributions, that can lead to values
being included in the MCV list and significantly overestimated on the
basis of relatively few (sometimes just 2) instances being seen in the
sample. For non-uniform distributions, it can lead to too few values
being included in the MCV list, since the overall average frequency
may be dominated by a small number of very common values, while the
remaining values may still have a large spread of frequencies, causing
both substantial overestimation and underestimation of the remaining
values. Furthermore, increasing the statistics target may have little
effect because the overall average frequency will remain relatively
unchanged.
Instead, populate the MCV list with the largest set of common values
that are statistically significantly more common than the average
frequency of the remaining values. This takes into account the
variance of the sample counts, which depends on the counts themselves
and on the proportion of the table that was sampled. As a result, it
constrains the relative standard error of estimates based on the
frequencies of values in the list, reducing the chances of too many
values being included. At the same time, it allows more values to be
included, since the MCVs need only be more common than the remaining
non-MCVs, rather than the overall average. Thus it tends to produce
fewer MCVs than the previous code for uniform distributions, and more
for non-uniform distributions, reducing estimation errors in both
cases. In addition, the algorithm responds better to increasing the
statistics target, allowing more values to be included in the MCV list
when more of the table is sampled.
Jeff Janes, substantially modified by me. Reviewed by John Naylor and
Tomas Vondra.
Andres Freund [Thu, 22 Mar 2018 02:28:28 +0000 (19:28 -0700)]
Basic JIT provider and error handling infrastructure.
This commit introduces:
1) JIT provider abstraction, which allows JIT functionality to be
implemented in separate shared libraries. That's desirable because
it allows installing JIT support as a separate package, and because
it allows experimentation with different forms of JITing.
2) JITContexts, which can be used (via functions introduced in follow-up
commits) to emit JITed functions and have them be cleaned up
on error.
3) The outline of a LLVM JIT provider, which will be fleshed out in
subsequent commits.
Documentation for GUCs added, and for JIT in general, will be added in
later commits.
Author: Andres Freund, with architectural input from Jeff Davis
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de