Tom Lane [Thu, 17 Mar 2016 03:18:07 +0000 (23:18 -0400)]
Fix "pg_bench -C -M prepared".
This didn't work because when we dropped and re-established a database
connection, we did not bother to reset session-specific state such as
the statements-are-prepared flags.
The st->prepared[] array certainly needs to be flushed, and I cleared a
couple of other fields as well that couldn't possibly retain meaningful
state for a new connection.
In passing, fix some bogus comments and strange field order choices.
Tom Lane [Thu, 17 Mar 2016 00:57:45 +0000 (20:57 -0400)]
Fix j2day() to behave sanely for negative Julian dates.
Somebody had apparently once figured that casting to unsigned int would
produce the right output for negative inputs, but that would only be
true if 2^32 were a multiple of 7, which of course it ain't. We need
to use a signed division and then correct the sign of the remainder.
AFAICT, the only case where this would arise currently is when doing
ISO-week calculations for dates in 4714BC, where we'd compute a
negative Julian date representing 4714-01-04BC and then do some
arithmetic with it. Since we don't even really document support for
such dates, this is not of much consequence. But we may as well
get it right.
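To illustrate the arithmetic (a SQL sketch, not the committed C code;
PostgreSQL's % operator takes the sign of the dividend, as in C):
SELECT -5 % 7;              -- -5: the remainder keeps the dividend's sign
SELECT ((-5 % 7) + 7) % 7;  -- 2: signed division plus sign correction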
Tom Lane [Wed, 16 Mar 2016 23:09:04 +0000 (19:09 -0400)]
Be more careful about out-of-range dates and timestamps.
Tighten the semantics of boundary-case timestamptz so that we allow
timestamps >= '4714-11-24 00:00+00 BC' and < 'ENDYEAR-01-01 00:00+00 AD'
exactly, no more and no less, but it is allowed to enter timestamps
within that range using non-GMT timezone offsets (which could make the
nominal date 4714-11-23 BC or ENDYEAR-01-01 AD). This eliminates
dump/reload failure conditions for timestamps near the endpoints.
To do this, separate checking of the inputs for date2j() from the
final range check, and allow the Julian date code to handle a range
slightly wider than the nominal range of the datatypes.
Also add a bunch of checks to detect out-of-range dates and timestamps
that formerly could be returned by operations such as date-plus-integer.
All C-level functions that return date, timestamp, or timestamptz should
now be proof against returning a value that doesn't pass IS_VALID_DATE()
or IS_VALID_TIMESTAMP().
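As a hedged illustration of the new checks, using the documented upper
bound of the date type:
SELECT date '5874897-12-31' + 1;  -- now raises "date out of range"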
Vitaly Burovoy, reviewed by Anastasia Lubennikova, and substantially
whacked around by me
Vinayak Pokale provided a patch for a copy-and-paste error in a
comment. I noticed that I'd used the word "automatically" nearby where
I meant to talk about things being "atomic". Rahila Syed spotted a
misplaced counter update. Fix all that stuff.
Teodor Sigaev [Wed, 16 Mar 2016 14:44:58 +0000 (17:44 +0300)]
GUC variable pg_trgm.similarity_threshold instead of set_limit()
Use GUC variable pg_trgm.similarity_threshold instead of
set_limit()/show_limit(), which were introduced back when modules
could not define their own GUC variables.
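For example:
SET pg_trgm.similarity_threshold = 0.5;  -- new GUC interface
SELECT set_limit(0.5);                   -- old interface, still available
SELECT show_limit();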
INSERT ... ON CONFLICT's precheck may have to wait on the outcome of
another insertion, which may or may not itself be a speculative
insertion. This wait is not necessarily associated with an exclusion
constraint, but was always reported that way in log messages if the wait
happened to involve a tuple that had no speculative token.
Initially discovered through use of ON CONFLICT DO NOTHING, where
spurious references to exclusion constraints in log messages were more
likely.
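A minimal sketch of the scenario (table name hypothetical); with two
sessions racing to insert the same key, the waiter's log message formerly
blamed an exclusion constraint even though none exists:
CREATE TABLE t (id int PRIMARY KEY);
-- run concurrently in two sessions:
INSERT INTO t VALUES (1) ON CONFLICT DO NOTHING;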
Patch by Peter Geoghegan.
Reviewed by Julien Rouhaud.
Back-patch to 9.5 where INSERT ... ON CONFLICT was added.
Robert Haas [Tue, 15 Mar 2016 20:51:56 +0000 (16:51 -0400)]
postgres_fdw: make_tuple_from_result_row should set cur_attno for ctid.
There's no reason for this function to do this for every other
attribute number and omit it for CTID, especially since
conversion_error_callback has code to handle that case. This seems
to be an oversight in commit e690b9515072fd7767fdeca5c54166f6a77733bc.
Robert Haas [Tue, 15 Mar 2016 17:31:18 +0000 (13:31 -0400)]
Add simple VACUUM progress reporting.
There's a lot more that could be done here yet - in particular, this
reports only very coarse-grained information about the index vacuuming
phase - but even as it stands, the new pg_stat_progress_vacuum can
tell you quite a bit about what a long-running vacuum is actually
doing.
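For example, while a vacuum is running (a sketch; column names as in the
released view):
SELECT pid, relid::regclass, phase,
       heap_blks_scanned, heap_blks_total
FROM pg_stat_progress_vacuum;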
Amit Langote and Robert Haas, based on earlier work by Vinayak Pokale
and Rahila Syed.
Tom Lane [Tue, 15 Mar 2016 17:19:57 +0000 (13:19 -0400)]
Cope if platform declares mbstowcs_l(), but not locale_t, in <xlocale.h>.
Previously, we included <xlocale.h> only if necessary to get the definition
of type locale_t. According to notes in PGAC_TYPE_LOCALE_T, this is
important because on some versions of glibc that file supplies an
incompatible declaration of locale_t. (This info may be obsolete, because
on my RHEL6 box that seems to be the *only* definition of locale_t; but
there may still be glibc versions in the wild for which it's a live concern.)
It turns out though that on FreeBSD and maybe other BSDen, you can get
locale_t from stdlib.h or locale.h but mbstowcs_l() and friends only from
<xlocale.h>. This was leaving us compiling calls to mbstowcs_l() and
friends with no visible prototype, which causes a warning and could
possibly cause actual trouble, since it's not declared to return int.
Hence, adjust the configure checks so that we'll include <xlocale.h>
either if it's necessary to get type locale_t or if it's necessary to
get a declaration of mbstowcs_l().
Report and patch by Aleksander Alekseev, somewhat whacked around by me.
Back-patch to all supported branches, since we have been using
mbstowcs_l() since 9.1.
Tom Lane [Tue, 15 Mar 2016 00:04:44 +0000 (20:04 -0400)]
Add a GetForeignUpperPaths callback function for FDWs.
This is basically like the just-added create_upper_paths_hook, but
control is funneled only to the FDW responsible for all the baserels
of the current query; so providing such a callback is much less likely
to add useless overhead than using the hook function is.
The documentation is a bit sketchy. We'll likely want to improve it,
and/or adjust the call conventions, when we get some experience with
actually using this callback. Hopefully somebody will find time to
experiment with it before 9.6 feature freeze.
Robert Haas [Mon, 14 Mar 2016 23:48:46 +0000 (19:48 -0400)]
Fix EXPLAIN ANALYZE SELECT INTO not to choose a parallel plan.
We don't support any parallel write operations at present, so choosing
a parallel plan causes us to error out. Also, add a new regression
test that uses EXPLAIN ANALYZE SELECT INTO; if we'd had this previously,
force_parallel_mode testing would have caught this issue.
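A sketch of the statement shape involved (table name hypothetical);
before this fix, enabling force_parallel_mode could make it choose a
parallel plan and error out:
SET force_parallel_mode = on;
EXPLAIN ANALYZE SELECT 1 AS x INTO tmp_tab;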
Tom Lane [Mon, 14 Mar 2016 23:23:29 +0000 (19:23 -0400)]
Provide a planner hook at a suitable place for creating upper-rel Paths.
In the initial revision of the upper-planner pathification work, the only
available way for an FDW or custom-scan provider to inject Paths
representing post-scan-join processing was to insert them during scan-level
GetForeignPaths or similar processing. While that's not impossible, it'd
require quite a lot of duplicative processing to look forward and see if
the extension would be capable of implementing the whole query. To improve
matters for custom-scan providers, provide a hook function at the point
where the core code is about to start filling in upperrel Paths. At this
point Paths are available for the whole scan/join tree, which should reduce
the amount of redundant effort considerably.
(An alternative design that was suggested was to provide a separate hook
for each post-scan-join processing step, but that seems messy and not
clearly more useful.)
Following our time-honored tradition, there's no documentation for this
hook outside the source code.
As-is, this hook is only meant for custom scan providers, which we can't
assume very much about. A follow-on patch will implement an FDW callback
to let FDWs do the same thing in a somewhat more structured fashion.
Tom Lane [Mon, 14 Mar 2016 21:31:28 +0000 (17:31 -0400)]
Allow callers of create_foreignscan_path to specify nondefault PathTarget.
Although the default choice of rel->reltarget should typically be
sufficient for scan or join paths, it's not at all sufficient for the
purposes PathTargets were invented for; in particular not for
upper-relation Paths. So break API compatibility by adding a PathTarget
argument to create_foreignscan_path(). To ease updating of existing
code, accept a NULL value of the argument as selecting rel->reltarget.
Tom Lane [Mon, 14 Mar 2016 20:59:59 +0000 (16:59 -0400)]
Rethink representation of PathTargets.
In commit 19a541143a09c067 I did not make PathTarget a subtype of Node,
and embedded a RelOptInfo's reltarget directly into it rather than having
a separately-allocated Node. In hindsight that was misguided
micro-optimization, enabled by the fact that at that point we didn't have
any Paths with custom PathTargets. Now that PathTarget processing has
been fleshed out some more, it's easier to see that it's better to have
PathTarget as an independent Node type, even if it does cost us one more
palloc to create a RelOptInfo. So change it while we still can.
This commit just changes the representation, without doing anything more
interesting than that.
Tom Lane [Mon, 14 Mar 2016 18:38:36 +0000 (14:38 -0400)]
Improve conversions from uint64 to Perl types.
Perl's integers are pointer-sized, so can hold more than INT_MAX on LP64
platforms, and come in both signed (IV) and unsigned (UV). Floating
point values (NV) may also be larger than double.
Since Perl 5.19.4, array indices are SSize_t instead of I32, so allow up
to SSize_t's maximum on those versions. The limit is imposed not just by
av_extend's argument type but by all the array-handling code, so remove
the speculative comment.
Tom Lane [Mon, 14 Mar 2016 18:22:16 +0000 (14:22 -0400)]
Use repalloc_huge() to enlarge a SPITupleTable's tuple pointer array.
Commit 23a27b039d94ba35 widened the rows-stored counters to uint64, but
that's academic unless we allow the tuple pointer array to exceed 1GB.
(It might be a good idea to provide some other limit on how much storage
a SPITupleTable can eat. On the other hand, there are plenty of other
ways to drive a backend into swap hell.)
Robert Haas [Mon, 14 Mar 2016 17:48:35 +0000 (13:48 -0400)]
Improve check for overly-long extensible node name.
The old code is bad for two reasons. First, it has an off-by-one
error. Second, it won't help if you aren't running with assertions
enabled. Per discussion, we want a check here in that case too.
Author: KaiGai Kohei, adjusted by me
Reviewed-by: Petr Jelinek
Discussion: 56E0D547.1030101@2ndquadrant.com
Tom Lane [Mon, 14 Mar 2016 14:41:29 +0000 (10:41 -0400)]
Teach the configure script to validate its --with-pgport argument.
Previously, configure would take any string, including an empty string,
leading to obscure compile failures in guc.c. It seems worth expending
a few lines of code to ensure that the argument is a decimal number
between 1 and 65535.
Report and patch by Jim Nasby; reviews by Alex Shulgin, Peter Eisentraut,
Ivan Kartyshov
Tom Lane [Sun, 13 Mar 2016 21:14:49 +0000 (17:14 -0400)]
Mop-up for setting minimum Tcl version to 8.4.
Commit e2609323e set the minimum Tcl version we support to 8.4, but
I forgot to adjust the documentation to say the same. Some nosing
around for other consequences found that the configure script could
be simplified slightly as well.
Tom Lane [Sun, 13 Mar 2016 20:44:10 +0000 (16:44 -0400)]
Fix memory leak in repeated GIN index searches.
Commit d88976cfa1302e8d removed this code from ginFreeScanKeys():
- if (entry->list)
- pfree(entry->list);
evidently in the belief that that ItemPointer array is allocated in the
keyCtx and so would be reclaimed by the following MemoryContextReset.
Unfortunately, it isn't and it won't. It'd likely be a good idea for
that to become so, but as a simple and back-patchable fix in the
meantime, restore this code to ginFreeScanKeys().
Also, add a similar pfree to where startScanEntry() is about to zero out
entry->list. I am not sure if there are any code paths where this
change prevents a leak today, but it seems like cheap future-proofing.
In passing, make the initial allocation of so->entries[] use palloc
not palloc0. The code doesn't depend on unused entries being zero;
if it did, the array-enlargement code in ginFillScanEntry() would be
wrong. So using palloc0 initially can only serve to confuse readers
about what the invariant is.
Per report from Felipe de Jesús Molina Bravo, via Jaime Casanova in
<CAJGNTeMR1ndMU2Thpr8GPDUfiHTV7idELJRFusA5UXUGY1y-eA@mail.gmail.com>
Tom Lane [Sun, 13 Mar 2016 05:21:07 +0000 (00:21 -0500)]
Report memory context stats upon out-of-memory in repalloc[_huge].
This longstanding functionality evidently got lost in commit
3d6d1b585524aab6. Noted while studying an OOM report from Jaime
Casanova. Backpatch to 9.5 where the bug was introduced.
Tom Lane [Sat, 12 Mar 2016 23:16:24 +0000 (18:16 -0500)]
Get rid of scribbling on a const variable in psql's print.c.
Commit a2dabf0e1dda93c8 had the bright idea that it could modify a "const"
global variable if it merely cast away const from a pointer. This does
not work on platforms where the compiler puts "const" variables into
read-only storage. Depressingly, we evidently have no such platforms in
our buildfarm ... an oversight I have now remedied. (The one platform
that is known to catch this is recent OS X with -fno-common.)
Per report from Chris Ruprecht. Back-patch to 9.5 where the bogus
code was introduced.
Tom Lane [Sat, 12 Mar 2016 21:05:10 +0000 (16:05 -0500)]
Widen query numbers-of-tuples-processed counters to uint64.
This patch widens SPI_processed, EState's es_processed field, PortalData's
portalPos field, FuncCallContext's call_cntr and max_calls fields,
ExecutorRun's count argument, PortalRunFetch's result, and the max number
of rows in a SPITupleTable to uint64, and deals with (I hope) all the
ensuing fallout. Some of these values were declared uint32 before, and
others "long".
I also removed PortalData's posOverflow field, since that logic seems
pretty useless given that portalPos is now always 64 bits.
The user-visible results are that command tags for SELECT etc will
correctly report tuple counts larger than 4G, as will plpgsql's
GET DIAGNOSTICS ... ROW_COUNT command. Queries processing more tuples
than that are still not exactly the norm, but they're becoming more
common.
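For instance, in plpgsql (a minimal sketch; the count is now 64-bit):
DO $$
DECLARE n bigint;
BEGIN
  PERFORM 1 FROM generate_series(1, 100000);
  GET DIAGNOSTICS n = ROW_COUNT;
  RAISE NOTICE 'rows: %', n;
END $$;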
Most values associated with FETCH/MOVE distances, such as PortalRun's count
argument and the count argument of most SPI functions that have one, remain
declared as "long". It's not clear whether it would be worth promoting
those to int64; but it would definitely be a large dollop of additional
API churn on top of this, and it would only help 32-bit platforms which
seem relatively less likely to see any benefit.
Andreas Scherbaum, reviewed by Christian Ullrich, additional hacking by me
Andres Freund [Sat, 12 Mar 2016 20:16:48 +0000 (12:16 -0800)]
Include portability/mem.h into fd.c for MAP_FAILED.
Buildfarm members gaur and pademelon are old enough not to know about
MAP_FAILED, which is used in 428b1d6. Include portability/mem.h to fix,
as already done in a bunch of other places.
Tom Lane [Sat, 12 Mar 2016 17:12:59 +0000 (12:12 -0500)]
Re-export a few of createplan.c's make_xxx() functions.
CitusDB is using these and doesn't wish to redesign its code right now.
I am not on board with this being a good idea, or a good precedent,
but I lack the energy to fight about it.
Robert Haas [Fri, 11 Mar 2016 17:28:22 +0000 (12:28 -0500)]
pg_upgrade: Convert old visibility map format to new format.
Commit a892234f830e832110f63fc0a2afce2fb21d1584 added a second bit per
page to the visibility map, but pg_upgrade has been unaware of it up
until now. Therefore, a pg_upgrade from an earlier major release of
PostgreSQL to any commit preceding this one and following the one
mentioned above would result in invalid visibility map contents on the
new cluster, very possibly leading to data corruption. This plugs
that hole.
Masahiko Sawada, reviewed by Jeff Janes, Bruce Momjian, Simon Riggs,
Michael Paquier, Andres Freund, me, and others.
Tom Lane [Fri, 11 Mar 2016 17:27:41 +0000 (12:27 -0500)]
When appropriate, postpone SELECT output expressions till after ORDER BY.
It is frequently useful for volatile, set-returning, or expensive functions
in a SELECT's targetlist to be postponed till after ORDER BY and LIMIT are
done. Otherwise, the functions might be executed for every row of the
table despite the presence of LIMIT, and/or be executed in an unexpected
order. For example, in
SELECT x, nextval('seq') FROM tab ORDER BY x LIMIT 10;
it's probably desirable that the nextval() values are ordered the same
as x, and that nextval() is not run more than 10 times.
In the past, Postgres was inconsistent in this area: you would get the
desirable behavior if the ordering were performed via an indexscan, but
not if it had to be done by an explicit sort step. Getting the desired
behavior reliably required contortions like
SELECT x, nextval('seq')
FROM (SELECT x FROM tab ORDER BY x) ss LIMIT 10;
This patch conditionally postpones evaluation of pure-output target
expressions (that is, those that are not used as DISTINCT, ORDER BY, or
GROUP BY columns) so that they effectively occur after sorting, even if an
explicit sort step is necessary. Volatile expressions and set-returning
expressions are always postponed, so as to provide consistent semantics.
Expensive expressions (costing more than 10 times typical operator cost,
which by default would include any user-defined function) are postponed
if there is a LIMIT or if there are expressions that must be postponed.
We could be more aggressive and postpone any nontrivial expression, but
there are costs associated with doing so: it requires an extra Result plan
node which adds some overhead, and postponement changes the volume of data
going through the sort step, perhaps for the worse. Since we tend not to
have very good estimates of the output width of nontrivial expressions,
it's hard to have much confidence in our ability to predict whether
postponement would increase or decrease the cost of the sort; therefore
this patch doesn't attempt to make decisions conditionally on that.
Between these factors and a general desire not to change query behavior
when there's not a demonstrable benefit, it seems best to be conservative
about applying postponement. We might tweak the decision rules in the
future, though.
Teodor Sigaev [Fri, 11 Mar 2016 16:47:50 +0000 (19:47 +0300)]
Fix merging of affix sets for numeric affixes
Some dictionaries have duplicated base words with different affix sets;
we merge those sets into one. But previously the merge was implemented
as string concatenation, which is wrong for the numeric representation
of affixes, because that representation uses a comma to separate
affixes.
Teodor Sigaev [Fri, 11 Mar 2016 16:22:36 +0000 (19:22 +0300)]
Tsvector editing functions
Adds several tsvector editing functions: convert tsvector to/from text
array, set weight for given lexemes, delete lexeme(s), unnest, filter
lexemes with given weights.
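A few hedged examples, using the function names as committed:
SELECT tsvector_to_array('fat:2 cat:3'::tsvector);  -- {cat,fat}
SELECT ts_delete('fat:2 cat:3'::tsvector, 'fat');   -- 'cat':3
SELECT * FROM unnest('fat:2 cat:3'::tsvector);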
Author: Stas Kelvich, with some editing by me
Reviewers: Tomas Vondra, Teodor Sigaev
Tom Lane [Fri, 11 Mar 2016 15:24:33 +0000 (10:24 -0500)]
Minor additional refactoring of planner.c's PathTarget handling.
Teach make_group_input_target() and make_window_input_target() to work
entirely with the PathTarget representation of tlists, rather than
constructing a tlist and immediately deconstructing it into PathTarget
format. In itself this only saves a few palloc's; the bigger picture is
that it opens the door for sharing cost_qual_eval work across all of
planner.c's constructions of PathTargets. I'll come back to that later.
In support of this, flesh out tlist.c's infrastructure for PathTargets
a bit more.
Magnus Hagander [Fri, 11 Mar 2016 14:08:34 +0000 (15:08 +0100)]
Allow setting sample ratio for auto_explain
New configuration parameter auto_explain.sample_ratio makes it
possible to log just a fraction of the queries meeting the configured
threshold, to reduce the amount of logging.
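A hedged usage sketch, with the parameter name as introduced here:
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;
SET auto_explain.sample_ratio = 0.1;  -- log roughly 10% of qualifying queries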
Author: Craig Ringer and Julien Rouhaud
Review: Petr Jelinek
Magnus Hagander [Fri, 11 Mar 2016 10:08:01 +0000 (11:08 +0100)]
Refactor receivelog.c parameters
Much cruft had accumulated over time, with a large number of parameters
passed down through deeply nested functions. With this refactoring,
introduce a StreamCtl structure that holds the parameters, and pass around
a pointer to this structure instead. This makes it much easier to add or
remove fields that are needed deeper down in the implementation without
having to modify every function header in the file.
Patch by me after much nagging from Andres
Reviewed by Craig Ringer and Daniel Gustafsson
Simon Riggs [Fri, 11 Mar 2016 09:53:06 +0000 (09:53 +0000)]
Allow emit_log_hook to see original message text
emit_log_hook could only see the translated text, making it harder to identify
which message was being sent. Pass the original text to allow the exact
message to be identified, whichever language is used for logging.
Robert Haas [Fri, 11 Mar 2016 02:37:22 +0000 (21:37 -0500)]
Simplify GetLockNameFromTagType.
The old code is wrong, because it returns a pointer to an automatic
variable. And it's also more clever than we really need to be
considering that the case it's worrying about should never happen.
Andres Freund [Fri, 19 Feb 2016 20:17:51 +0000 (12:17 -0800)]
Checkpoint sorting and balancing.
Up to now, checkpoint writes were issued in the order the buffers appear
in the BufferDescriptors array. That's nearly random in a lot of cases,
which performs badly on rotating media, and even on SSDs it causes
slowdowns. To avoid that, sort the writes before issuing them; we
currently sort by tablespace, relfilenode, fork and block number.
One of the major reasons this previously wasn't done was fear of
imbalance between tablespaces. To address that, balance the writes
across tablespaces.
The other prime concern was that the relatively large allocation to sort
the buffers in might fail, preventing checkpoints from happening. Thus
pre-allocate the required memory in shared memory, at server startup.
This particularly makes it more efficient to have checkpoint flushing
enabled, because that'll often result in a lot of writes that can be
coalesced into one flush.
Discussion: alpine.DEB.2.10.1506011320000.28433@sto
Author: Fabien Coelho and Andres Freund
Andres Freund [Fri, 19 Feb 2016 20:13:05 +0000 (12:13 -0800)]
Allow triggering kernel writeback after a configurable number of writes.
Currently writes to the main data files of postgres all go through the
OS page cache. This means that some operating systems can end up
collecting a large number of dirty buffers in their respective page
caches. When these dirty buffers are flushed to storage rapidly, be it
because of fsync(), timeouts, or dirty ratios, latency for other reads
and writes can increase massively. This is the primary reason for
regular massive stalls observed in real world scenarios and artificial
benchmarks; on rotating disks stalls on the order of hundreds of seconds
have been observed.
On linux it is possible to control this by reducing the global dirty
limits significantly, reducing the above problem. But global
configuration is rather problematic because it'll affect other
applications; also, PostgreSQL itself doesn't want this behavior in all
cases, e.g. for temporary files it's undesirable.
Several operating systems allow some control over the kernel page
cache. Linux has sync_file_range(2), several posix systems have msync(2)
and posix_fadvise(2). sync_file_range(2) is preferable because it
requires no special setup, whereas msync() requires the to-be-flushed
range to be mmap'ed. For the purpose of flushing dirty data
posix_fadvise(2) is the worst alternative, as flushing dirty data is
just a side-effect of POSIX_FADV_DONTNEED, which also removes the pages
from the page cache. Thus the feature is enabled by default only on
linux, but can be enabled on all systems that have any of the above
APIs.
While desirable and likely possible this patch does not contain an
implementation for windows.
With the infrastructure added, writes made via checkpointer, bgwriter
and normal user backends can be flushed after a configurable number of
writes. Each of these sources of writes is controlled by a separate GUC
(checkpointer_flush_after, bgwriter_flush_after and backend_flush_after
respectively); they're separate because the number of writes after which
flushing pays off differs for each, and because the performance
considerations of controlled flushing differ for each of them.
A later patch will add checkpoint sorting - after that, flushes from the
checkpoint will almost always be desirable. Bgwriter flushes are mostly
going to be random, which is slow on lots of storage hardware.
Flushing in backends works well if the storage and bgwriter can keep up,
but if not it can have negative consequences. This patch is likely to
have negative performance consequences without checkpoint sorting, but
unfortunately so does sorting without flush control.
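A hedged sketch of adjusting these (GUC names as in this commit; values
illustrative, and the 0-disables behavior is an assumption):
ALTER SYSTEM SET bgwriter_flush_after = 64;
ALTER SYSTEM SET backend_flush_after = 0;  -- assumed to disable flush control
SELECT pg_reload_conf();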
Discussion: alpine.DEB.2.10.1506011320000.28433@sto
Author: Fabien Coelho and Andres Freund
Tom Lane [Thu, 10 Mar 2016 21:23:40 +0000 (16:23 -0500)]
Give pull_var_clause() reject/recurse/return behavior for WindowFuncs too.
All along, this function should have treated WindowFuncs in a manner
similar to Aggrefs, ie with an option whether or not to recurse into them.
By not considering the case, it was always recursing, which is OK for most
callers (although I suspect that the case in prepare_sort_from_pathkeys
might represent a bug). But now we need return-without-recursing behavior
as well. There are also more than a few callers that should never see a
WindowFunc, and now we'll get some error checking on that.
Robert Haas [Thu, 10 Mar 2016 21:12:10 +0000 (16:12 -0500)]
Don't vacuum all-frozen pages.
Commit a892234f830e832110f63fc0a2afce2fb21d1584 gave us enough
infrastructure to avoid vacuuming pages where every tuple on the
page is already frozen. So, replace the notion of a scan_all or
whole-table vacuum with the less onerous notion of an "aggressive"
vacuum, which will scan pages that are all-visible but still skip those
that are all-frozen.
This should greatly reduce the cost of anti-wraparound vacuuming
on large clusters where the majority of data is never touched
between one cycle and the next, because we'll no longer have to
read all of those pages only to find out that we don't need to
do anything with them.
Tom Lane [Thu, 10 Mar 2016 20:52:58 +0000 (15:52 -0500)]
Refactor pull_var_clause's API to make it less tedious to extend.
In commit 1d97c19a0f748e94 and later c1d9579dd8bf3c92, we extended
pull_var_clause's API by adding enum-type arguments. That's sort of a pain
to maintain, though, because it means every time we add a new behavior we
must touch every last one of the call sites, even if there's a reasonable
default behavior that most of them could use. Let's switch over to using a
bitmask of flags, instead; that seems more maintainable and might save a
nanosecond or two as well. This commit changes no behavior in itself,
though I'm going to follow it up with one that does add a new behavior.
In passing, remove flatten_tlist(), which has not been used since 9.1
and would otherwise need the same API changes.
Removing these enums means that optimizer/tlist.h no longer needs to
depend on optimizer/var.h. Changing that caused a number of C files to
need addition of #include "optimizer/var.h" (probably we can thank old
runs of pgrminclude for that); but on balance it seems like a good change
anyway.
Robert Haas [Thu, 10 Mar 2016 17:44:09 +0000 (12:44 -0500)]
Provide much better wait information in pg_stat_activity.
When a process is waiting for a heavyweight lock, we will now indicate
the type of heavyweight lock for which it is waiting. Also, you can
now see when a process is waiting for a lightweight lock - in which
case we will indicate the individual lock name or the tranche, as
appropriate - or for a buffer pin.
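For example, using the columns added by this commit:
SELECT pid, wait_event_type, wait_event, state, query
FROM pg_stat_activity
WHERE wait_event IS NOT NULL;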
Amit Kapila, Ildus Kurbangaliev, reviewed by me. Lots of helpful
discussion and suggestions from many others, including Alexander
Korotkov and Vladimir Borodin.
Alvaro Herrera [Thu, 10 Mar 2016 16:15:08 +0000 (13:15 -0300)]
Document BRIN a bit more thoroughly
The chapter "Interfacing Extensions To Indexes" and CREATE OPERATOR
CLASS reference page were missed when BRIN was added. We document
all our other index access methods there, so make sure BRIN complies.
Author: Álvaro Herrera
Reported-By: Julien Rouhaud, Tom Lane
Reviewed-By: Emre Hasegeli
Discussion: https://www.postgresql.org/message-id/56CF604E.9000303%40dalibo.com
Backpatch: 9.5, where BRIN was introduced
Magnus Hagander [Thu, 10 Mar 2016 12:48:58 +0000 (13:48 +0100)]
Avoid crash on old Windows with AVX2-capable CPU for VS2013 builds
The Visual Studio 2013 CRT generates invalid code in 64-bit builds that
are later run on a CPU that supports AVX2 instructions under a version of
Windows before 7SP1/2008R2SP1.
Detect this combination, and in those cases turn off the generation of
FMA3, per recommendation from the Visual Studio team.
The bug is actually in the CRT shipping with Visual Studio 2013, but
Microsoft have stated they're only fixing it in newer major versions.
The fix is therefore conditioned specifically on being built with this
version of Visual Studio, and not previous or later versions.
Tom Lane [Thu, 10 Mar 2016 04:29:05 +0000 (23:29 -0500)]
Remove a couple of useless pstrdup() calls.
There's no point in pstrdup'ing the result of TextDatumGetCString,
since that's necessarily already a freshly-palloc'd C string.
These particular calls are unlikely to be of any consequence
performance-wise, but still they're a bad precedent that can confuse
future patch authors.
Andres Freund [Thu, 10 Mar 2016 02:53:53 +0000 (18:53 -0800)]
Avoid unlikely data-loss scenarios due to rename() without fsync.
Renaming a file using rename(2) is not guaranteed to be durable in the
face of crashes. Use the previously added durable_rename()/durable_link_or_rename()
in various places where we previously just renamed files.
Most of the changed call sites are arguably not critical, but it seems
better to err on the side of too much durability. The most prominent
known case where the previously missing fsyncs could cause data loss is
crashes at the end of a checkpoint. After the actual checkpoint has been
performed, old WAL files are recycled. When they're filled, their
contents are fdatasynced, but we did not fsync the containing
directory. An OS/hardware crash in an unfortunate moment could then end
up leaving that file with its old name, but new content; WAL replay
would thus not replay it.
Reported-By: Tomas Vondra
Author: Michael Paquier, Tomas Vondra, Andres Freund
Discussion: 56583BDD.9060302@2ndquadrant.com
Backpatch: All supported branches
Andres Freund [Thu, 10 Mar 2016 02:53:53 +0000 (18:53 -0800)]
Introduce durable_rename() and durable_link_or_rename().
Renaming a file using rename(2) is not guaranteed to be durable in the
face of crashes, especially on filesystems like xfs and ext4 when mounted
with data=writeback. To be certain that a rename() atomically replaces
the previous file contents in the face of crashes and different
filesystems, one has to fsync the old filename, rename the file, fsync
the new filename, fsync the containing directory. This sequence is not
generally adhered to currently; which exposes us to data loss risks. To
avoid having to repeat this arduous sequence, introduce
durable_rename(), which wraps all that.
Also add durable_link_or_rename(). Several places use link() (with a
fallback to rename()) to rename a file, trying to avoid replacing the
target file out of paranoia. Some of those rename sequences need to be
durable as well. There seems little reason to extend several copies of
the same logic, so centralize the link() callers.
This commit does not yet make use of the new functions; they're used in
a followup commit.
Author: Michael Paquier, Andres Freund
Discussion: 56583BDD.9060302@2ndquadrant.com
Backpatch: All supported branches
Peter Eisentraut [Mon, 29 Feb 2016 23:48:34 +0000 (18:48 -0500)]
doc: Reorganize pg_resetxlog reference page
The pg_resetxlog reference page didn't have a proper options list, only
running text listing the options and some explanations of them. This
might have worked when there were only a few options, but the list has
grown over the releases, and now it's hard to find an option and its
associated explanation. So write out the options list as on other
reference pages.
Alvaro Herrera [Wed, 9 Mar 2016 22:54:03 +0000 (19:54 -0300)]
PostgresNode: add backup_fs_hot and backup_fs_cold
These simple methods rely on RecursiveCopy to create a filesystem-level
backup of a server. They aren't currently used anywhere yet, but will be
useful for future tests.
Author: Craig Ringer
Reviewed-By: Michael Paquier, Salvador Fandino, Álvaro Herrera
Commitfest-URL: https://commitfest.postgresql.org/9/569/
Alvaro Herrera [Wed, 9 Mar 2016 21:00:31 +0000 (18:00 -0300)]
Add filter capability to RecursiveCopy::copypath
This allows skipping copying certain files and subdirectories in tests.
This is useful in some circumstances such as copying a data directory;
future tests want this feature.
Tom Lane [Wed, 9 Mar 2016 19:51:01 +0000 (14:51 -0500)]
Fix incorrect handling of NULL index entries in indexed ROW() comparisons.
An index search using a row comparison such as ROW(a, b) > ROW('x', 'y')
would stop upon reaching a NULL entry in the "b" column, ignoring the
fact that there might be non-NULL "b" values associated with later values
of "a". This happens because _bt_mark_scankey_required() marks the
subsidiary scankey for "b" as required, which is just wrong: it's for
a column after the one with the first inequality key (namely "a"), and
thus can't be considered a required match.
This bit of brain fade dates back to the very beginnings of our support
for indexed ROW() comparisons, in 2006. Kind of astonishing that no one
came across it before Glen Takahashi, in bug #14010.
Back-patch to all supported versions.
Note: the given test case doesn't actually fail in unpatched 9.1, evidently
because the fix for bug #6278 (i.e., stopping at nulls in either scan
direction) is required to make it fail. I'm sure I could devise a case
that fails in 9.1 as well, perhaps with something involving making a cursor
back up; but it doesn't seem worth the trouble.
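A hedged sketch of the affected pattern (table and data hypothetical):
CREATE TABLE tab (a text, b text);
CREATE INDEX ON tab (a, b);
-- Formerly the index scan could stop at a NULL "b" entry, missing
-- non-NULL "b" values associated with later values of "a":
SELECT * FROM tab WHERE ROW(a, b) > ROW('x', 'y');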
Alvaro Herrera [Wed, 9 Mar 2016 17:31:07 +0000 (14:31 -0300)]
pgcrypto: support changing S2K iteration count
pgcrypto already supports key-stretching during symmetric encryption,
including the salted-and-iterated method; but the number of iterations
was not configurable. This commit implements a new s2k-count parameter
to pgp_sym_encrypt() which permits selecting a larger number of
iterations.
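A hedged example of the new parameter (s2k-count applies with s2k-mode=3):
SELECT pgp_sym_encrypt('secret message', 'passphrase',
                       's2k-mode=3, s2k-count=65011712');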
Robert Haas [Wed, 9 Mar 2016 17:08:58 +0000 (12:08 -0500)]
Add a generic command progress reporting facility.
Using this facility, any utility command can report the target relation
upon which it is operating, if there is one, and up to 10 64-bit
counters; the intent of this is that users should be able to figure out
what a utility command is doing without having to resort to ugly hacks
like attaching strace to a backend.
As a demonstration, this adds very crude reporting to lazy vacuum; we
just report the target relation and nothing else. A forthcoming patch
will make VACUUM report a bunch of additional data that will make this
much more interesting. But this gets the basic framework in place.
Vinayak Pokale, Rahila Syed, Amit Langote, Robert Haas, reviewed by
Kyotaro Horiguchi, Jim Nasby, Thom Brown, Masahiko Sawada, Fujii Masao,
and Masanori Oyama.
Tom Lane [Wed, 9 Mar 2016 15:56:36 +0000 (10:56 -0500)]
Fix incorrect tlist generation in create_gather_plan().
This function is written as though Gather doesn't project; but it does.
Even if it did not project, though, we must use build_path_tlist to ensure
that the output columns receive correct sortgroupref labeling.
Robert Haas [Wed, 9 Mar 2016 15:51:49 +0000 (10:51 -0500)]
postgres_fdw: Consider foreign joining and foreign sorting together.
Commit ccd8f97922944566d26c7d90eb67ab7848ee9905 gave us the ability to
request that the remote side sort the data, and, later, commit
e4106b2528727c4b48639c0e12bf2f70a766b910 gave us the ability to
request that the remote side perform the join for us rather than doing
it locally. But we could not do both things at the same time: a
remote SQL query that had an ORDER BY clause would never be a join.
This commit adds that capability.
Tom Lane [Wed, 9 Mar 2016 06:12:16 +0000 (01:12 -0500)]
Improve handling of pathtargets in planner.c.
Refactor so that the internal APIs in planner.c deal in PathTargets not
targetlists, and establish a more regular structure for deriving the
targets needed for successive steps.
There is more that could be done here; calculating the eval costs of each
successive target independently is both inefficient and wrong in detail,
since we won't actually recompute values available from the input node's
tlist. But it's no worse than what happened before the pathification
rewrite. In any case this seems like a good starting point for considering
how to handle Konstantin Knizhnik's function-evaluation-postponement patch.
Andres Freund [Wed, 9 Mar 2016 01:34:09 +0000 (17:34 -0800)]
Add valgrind suppressions for python code.
Python's allocator does some low-level tricks for efficiency;
unfortunately they trigger valgrind errors. Those tricks can be disabled,
making instrumentation easier, but few people testing postgres will have
such a build of python. So add broad suppressions of the resulting
errors.
See also https://svn.python.org/projects/python/trunk/Misc/README.valgrind
This possibly will suppress valid errors, but without it it's basically
impossible to use valgrind with plpython code.
Author: Andres Freund
Backpatch: 9.4, where we started to maintain valgrind suppressions
Tom Lane [Wed, 9 Mar 2016 03:32:03 +0000 (22:32 -0500)]
Improve handling of group-column indexes in GroupingSetsPath.
Instead of having planner.c compute a groupColIdx array and store it in
GroupingSetsPaths, make create_groupingsets_plan() find the grouping
columns by searching in the child plan node's tlist. Although that's
probably a bit slower for create_groupingsets_plan(), it's more like
the way every other plan node type does this, and it provides positive
confirmation that we know which child output columns we're supposed to be
grouping on. (Indeed, looking at this now, I'm not at all sure that it
wasn't broken before, because create_groupingsets_plan() isn't demanding
an exact tlist match from its child node.) Also, this allows substantial
simplification in planner.c, because it no longer needs to compute the
groupColIdx array at all; no other cases were using it.
I'd intended to put off this refactoring until later (like 9.7), but
in view of the likely bug fix and the need to rationalize planner.c's
tlist handling so we can do something sane with Konstantin Knizhnik's
function-evaluation-postponement patch, I think it can't wait.
Peter Eisentraut [Sat, 20 Feb 2016 04:07:46 +0000 (23:07 -0500)]
psql: Fix some strange code in SQL help creation
Struct QL_HELP used to be defined as static in the sql_help.h header
file, which is included in sql_help.c and help.c, thus creating two
separate instances of the struct. This causes a warning from GCC 6,
because the struct is not used in sql_help.c.
Instead, declare the struct as extern in the header file and define it
in sql_help.c. This also allows making a bunch of functions static
because they are no longer needed outside of sql_help.c.
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
Andres Freund [Tue, 8 Mar 2016 22:59:29 +0000 (14:59 -0800)]
ltree: Zero padding bytes when allocating memory for externally visible data.
ltree/ltree_gist/ltxtquery's headers store data at MAXALIGN alignment,
requiring some padding bytes. So far we left these uninitialized. Zero
those by using palloc0.
Author: Andres Freund
Reported-By: Andres Freund / valgrind / buildfarm animal skink
Backpatch: 9.1-
Tom Lane [Tue, 8 Mar 2016 21:50:32 +0000 (16:50 -0500)]
Fix minor thinko in pathification code.
I passed the wrong "root" struct to create_pathtarget in build_minmax_path.
Since the subroot is a clone of the outer root, this would not cause any
serious problems, but it would waste some cycles because
set_pathtarget_cost_width would not have access to Var width estimates
set up while running query_planner on the subroot.
Andres Freund [Tue, 8 Mar 2016 21:33:24 +0000 (13:33 -0800)]
plperl: Correctly handle empty arrays in plperl_ref_from_pg_array.
plperl_ref_from_pg_array() didn't consider the case that postgres arrays
can have 0 dimensions (when they're empty) and accessed the first
dimension without a check. Fix that by special-casing the empty-array
case.
Author: Alex Hunsaker
Reported-By: Andres Freund / valgrind / buildfarm animal skink
Discussion: 20160308063240.usnzg6bsbjrne667@alap3.anarazel.de
Backpatch: 9.1-
Tom Lane [Tue, 8 Mar 2016 21:28:27 +0000 (16:28 -0500)]
Finish refactoring make_foo() functions in createplan.c.
This patch removes some redundant cost calculations that I left for later
cleanup in commit 3fc6e2d7f5b652b4. There's now a uniform policy that the
make_foo() convenience functions don't do any cost calculations. Most of
their callers copy costs from the source Path node, and for those that
don't, the calculation in the make_foo() function wasn't necessarily right
anyhow. (make_result() was particularly a mess, as it was serving multiple
callers using cost calcs designed for only the first one or two that had
ever existed.) Aside from saving a few cycles, this ensures that what
EXPLAIN prints matches the costs we used for planning purposes. It does
not change any planner decisions, since the decisions are already made.