Andres Freund [Thu, 25 Sep 2014 21:49:05 +0000 (23:49 +0200)]
Add a basic atomic ops API abstracting away platform/architecture details.
Several upcoming performance/scalability improvements require atomic
operations. This new API avoids the need to splatter compiler and
architecture dependent code over all the locations employing atomic
ops.
For several of the potential usages it'd be problematic to maintain
both an atomics-based implementation and one using spinlocks or
similar. In all likelihood one of the implementations would not get
tested regularly under concurrency. To avoid that scenario the new API
provides an automatic fallback of atomic operations to spinlocks. All
properties of atomic operations are maintained. This fallback -
obviously - isn't as fast as just using atomic ops, but it's not bad
either. For one of the future users the atomics-on-top-of-spinlocks
implementation was actually slightly faster than the old purely
spinlock-based implementation. That's important because it reduces the
fear of regressing older platforms when improving the scalability for
new ones.
The API, loosely modeled after the C11 atomics support, currently
provides 'atomic flags' and 32 bit unsigned integers. If the platform
efficiently supports atomic 64 bit unsigned integers those are also
provided.
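As a rough illustration, here is a minimal sketch of a shared 32-bit
counter using this API. Only the pg_atomic_* calls follow the naming
introduced by the patch; the SharedCounter struct and helper functions
are invented for this example:

    #include "postgres.h"
    #include "port/atomics.h"

    /* A counter that lives in shared memory and is updated lock-free. */
    typedef struct SharedCounter
    {
        pg_atomic_uint32 nrequests;
    } SharedCounter;

    static void
    counter_init(SharedCounter *counter)
    {
        pg_atomic_init_u32(&counter->nrequests, 0);
    }

    static uint32
    counter_bump(SharedCounter *counter)
    {
        /* atomically add one; returns the value before the addition */
        return pg_atomic_fetch_add_u32(&counter->nrequests, 1);
    }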
To implement atomics support for a platform/architecture/compiler
combination, 32-bit compare-and-exchange needs to be implemented. If
available and more efficient, native support for flags, 32-bit atomic
addition, and the corresponding 64-bit operations may also be
provided. Additional useful atomic operations are implemented
generically on top of these.
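A sketch of how such a generic operation can be layered on
compare-and-exchange alone; the function below is a simplified
stand-in for illustration, not the actual generic implementation from
the atomics headers:

    /*
     * Fetch-and-add built purely on 32-bit compare-and-exchange, for
     * platforms that provide nothing beyond CAS.  Returns the value the
     * variable held before the addition.
     */
    static inline uint32
    sketch_fetch_add_u32(volatile pg_atomic_uint32 *ptr, int32 add)
    {
        uint32 old = pg_atomic_read_u32(ptr);

        /*
         * On failure pg_atomic_compare_exchange_u32() refreshes 'old' with
         * the current value, so simply retry until no concurrent update
         * interferes.
         */
        while (!pg_atomic_compare_exchange_u32(ptr, &old, old + add))
            ;
        return old;
    }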
The implementations for various versions of gcc, msvc and sun studio have
been tested. Additional stub implementations for
* Intel icc
* HP-UX aCC
* IBM xlc
are included but have never been tested. These will likely require
fixes based on buildfarm and user feedback.
As atomic operations also require barriers for some operations the
existing barrier support has been moved into the atomics code.
Author: Andres Freund, with contributions from Oskari Saarenmaa
Reviewed-By: Amit Kapila, Robert Haas, Heikki Linnakangas and Álvaro Herrera
Discussion: CA+TgmoYBW+ux5-8Ja=Mcyuy8=VXAnVRHp3Kess6Pn3DMXAPAEA@mail.gmail.com, 20131015123303.GH5300@awork2.anarazel.de, 20131028205522.GI20248@awork2.anarazel.de
Andrew Dunstan [Thu, 25 Sep 2014 19:08:42 +0000 (15:08 -0400)]
Remove ill-conceived ban on zero length json object keys.
We removed a similar ban on this in json_object recently, but the ban in
datum_to_json was left, which generated spurious errors in other json
generators, notably json_build_object.
Along the way, add an assertion that datum_to_json is not passed a null
key. All current callers comply with this rule, but the assertion will
catch any possible future misbehaviour.
Robert Haas [Thu, 25 Sep 2014 14:43:24 +0000 (10:43 -0400)]
Change locking regimen around buffer replacement.
Previously, we used an lwlock that was held from the time we began
seeking a candidate buffer until the time when we found and pinned
one, which is disastrous for concurrency. Instead, use a spinlock
which is held just long enough to pop the freelist or advance the
clock sweep hand, and then released. If we need to advance the clock
sweep further, we reacquire the spinlock once per buffer.
This represents a significant increase in atomic operations around
buffer eviction, but it still wins on many workloads. On others, it
may result in no gain, or even cause a regression, unless the number
of buffer mapping locks is also increased. However, that seems like
material for a separate commit. We may also need to consider other
methods of mitigating contention on this spinlock, such as splitting
it into multiple locks or jumping the clock sweep hand more than one
buffer at a time, but those, too, seem like separate improvements.
Patch by me, inspired by a much larger patch from Amit Kapila.
Reviewed by Andres Freund.
Refactor space allocation for base64 encoding/decoding in pgcrypto.
Instead of trying to accurately calculate the space needed, use a StringInfo
that's enlarged as needed. This is just moving things around currently - the
old code was not wrong - but this is in preparation for a patch that adds
support for extra armor headers, which would make the space calculation
more complicated.
Andres Freund [Thu, 25 Sep 2014 13:22:26 +0000 (15:22 +0200)]
Fix VPATH builds of the replication parser from git for some !gcc compilers.
Some compilers don't automatically search the current directory for
included files. 9cc2c182fc2 fixed that for builds from tarballs by
adding an include to the source directory. But that doesn't work when
the scanner is generated in the VPATH directory. Use the same search
path as the other parsers in the tree.
One compiler that definitely was affected is Solaris' Sun cc.
Backpatch to 9.1, which introduced the use of an actual parser for
replication commands.
Add -D option to specify data directory to pg_controldata and pg_resetxlog.
It was confusing that to other commands, like initdb and postgres, you would
pass the data directory with "-D datadir", but pg_controldata and
pg_resetxlog would take just plain path, without the "-D". With this patch,
pg_controldata and pg_resetxlog also accept "-D datadir".
Stephen Frost [Wed, 24 Sep 2014 21:45:11 +0000 (17:45 -0400)]
Copy-editing of row security
Address a few typos in the row security update, pointed out
off-list by Adam Brightwell. Also include 'ALL' in the list
of commands supported, for completeness.
Stephen Frost [Wed, 24 Sep 2014 20:32:22 +0000 (16:32 -0400)]
Code review for row security.
Buildfarm member tick identified an issue where the policies in the
relcache for a relation were being replaced underneath a running
query, leading to segfaults while processing the policies to be added
to a query. Similar to how TupleDesc RuleLocks are handled, add an
equalRSDesc() function to check if the policies have actually changed
and, if not, swap back the rsdesc field (using the original instead of
the temporarily built one; the whole structure is swapped and then
specific fields swapped back). This now passes a CLOBBER_CACHE_ALWAYS
run for me and should resolve the buildfarm error.
In addition to addressing this, add a new chapter in Data Definition
under Privileges which explains row security and provides examples of
its usage, change \d to always list policies (even if row security is
disabled - but note that it is disabled, or enabled with no policies),
rework check_role_for_policy (it really didn't need the entire policy,
but it did need to be using has_privs_of_role()), and change the field
in pg_class to relrowsecurity from relhasrowsecurity, based on
Heikki's suggestion. Also from Heikki, only issue SET ROW_SECURITY in
pg_restore when talking to a 9.5+ server, list Bypass RLS in \du, and
document --enable-row-security options for pg_dump and pg_restore.
Lastly, fix a number of minor whitespace and typo issues from Heikki and
Dimitri, add a missing #include per Peter E, fix a few minor
variable-assigned-but-not-used and resource-leak issues from Coverity,
and add tab completion for the role attribute bypassrls as well.
Tom Lane [Wed, 24 Sep 2014 19:59:34 +0000 (15:59 -0400)]
Fix bogus variable-mangling in security_barrier_replace_vars().
This function created new Vars with varno different from varnoold, which
is a condition that should never prevail before setrefs.c does the final
variable-renumbering pass. The created Vars could not be seen as equal()
to normal Vars, which among other things broke equivalence-class processing
for them. The consequences of this were indeed visible in the regression
tests, in the form of failure to propagate constants as one would expect.
I stumbled across it while poking at bug #11457 --- after intentionally
disabling join equivalence processing, the security-barrier regression
tests started falling over with fun errors like "could not find pathkey
item to sort", because of failure to match the corrupted Vars to normal
ones.
Tom Lane [Wed, 24 Sep 2014 00:25:31 +0000 (20:25 -0400)]
Fix incorrect search for "x?" style matches in creviterdissect().
When the number of allowed iterations is limited (either a "?" quantifier
or a bound expression), the last sub-match has to reach to the end of the
target string. The previous coding here first tried the shortest possible
match (one character, usually) and then gave up and back-tracked if that
didn't work, typically leading to failure to match overall, as shown in
bug #11478 from Christoph Berg. The minimum change to fix that would be to
not decrement k before "goto backtrack"; but that would be a pretty stupid
solution, because we'd laboriously try each possible sub-match length
before finally discovering that only ending at the end can work. Instead,
force the sub-match endpoint limit up to the end for even the first
shortest() call if we cannot have any more sub-matches after this one.
Bug introduced in my rewrite that added the iterdissect logic, commit 173e29aa5deefd9e71c183583ba37805c8102a72. The shortest-first search code
was too closely modeled on the longest-first code, which hasn't got this
issue since it tries a match reaching to the end to start with anyway.
Back-patch to all affected branches.
Stephen Frost [Tue, 23 Sep 2014 01:51:25 +0000 (21:51 -0400)]
Add unicode_*_linestyle to \? variables
In a2dabf0 we added the ability to have single or double unicode
linestyle for the border, column, or header. Unfortunately, the
\? variables output was not updated for these new psql variables.
Stephen Frost [Tue, 23 Sep 2014 00:12:51 +0000 (20:12 -0400)]
Process withCheckOption exprs in setrefs.c
While withCheckOption exprs had been handled in many cases by
happenstance, they need to be handled during set_plan_references and
more specifically down in set_plan_refs for ModifyTable plan nodes.
This is to ensure that the opfuncid's are set for operators referenced
in the withCheckOption exprs.
Identified as an issue by Thom Brown
Patch by Dean Rasheed
Back-patch to 9.4, where withCheckOption was introduced.
Andres Freund [Mon, 22 Sep 2014 21:35:08 +0000 (23:35 +0200)]
Remove most volatile qualifiers from xlog.c
For the reasons outlined in df4077cda2e, also remove volatile qualifiers
from xlog.c. Some of these uses of volatile had been added after
noticing problems back when spinlocks didn't imply compiler
barriers. So they are a good test - and indeed, removing the volatiles
breaks things if done without the spinlock barriers in place.
Several uses of volatile remain where they are explicitly used to
access shared memory without locks. These locations are ok with
slightly out of date data, but removing the volatile might lead to the
variables never being reread from memory. These uses could also be
replaced by barriers, but that's a separate change of doubtful value.
Robert Haas [Mon, 22 Sep 2014 20:42:14 +0000 (16:42 -0400)]
Remove volatile qualifiers from lwlock.c.
Now that spinlocks (hopefully!) act as compiler barriers, as of commit 0709b7ee72e4bc71ad07b7120acd117265ab51d0, this should be safe. This
serves as a demonstration of the new coding style, and may be optimized
better on some machines as well.
Andres Freund [Mon, 22 Sep 2014 14:48:14 +0000 (16:48 +0200)]
Improve code around the recently added rm_identify rmgr callback.
There are four weaknesses in 728f152e07f998d2cb4fe5f24ec8da2c3bda98f2:
* append_init() in heapdesc.c was ugly and required that rm_identify
return values be valid only until the next call. Instead just add a
couple more switch() cases for the INIT_PAGE cases. Now the returned
value will always be valid.
* a couple rm_identify() callbacks missed masking xl_info with
~XLR_INFO_MASK.
* pg_xlogdump didn't map a NULL rm_identify to UNKNOWN or a similar
string.
* append_init() was called when id=NULL - which should never actually
happen. But it's better to be careful.
Tom Lane [Fri, 19 Sep 2014 17:18:56 +0000 (13:18 -0400)]
Fix failure of contrib/auto_explain to print per-node timing information.
This has been broken since commit af7914c6627bcf0b0ca614e9ce95d3f8056602bf,
which added the EXPLAIN (TIMING) option. Although that commit included
updates to auto_explain, they evidently weren't tested very carefully,
because the code failed to print node timings even when it should, due to
failure to set es.timing in the ExplainState struct. Reported off-list by
Neelakanth Nadgir of Salesforce.
In passing, clean up the documentation for auto_explain's options a
little bit, including re-ordering them into what seems to me a more
logical order.
Robert Haas [Fri, 19 Sep 2014 16:39:00 +0000 (12:39 -0400)]
Add a fast pre-check for equality of equal-length strings.
Testing reveals that doing a memcmp() before the strcoll() costs
practically nothing, at least on the systems we tested, and it speeds
up sorts containing many equal strings significantly.
Peter Geoghegan. Review by myself and Heikki Linnakangas. Comments
rewritten by me.
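A hedged sketch of the idea; the function and helper names below are
illustrative, not the actual comparator code in the tree:

    #include <string.h>

    /* locale-aware comparison, e.g. built on strcoll(); declaration only */
    extern int locale_compare(const char *a, size_t lena,
                              const char *b, size_t lenb);

    static int
    compare_with_precheck(const char *a, size_t lena,
                          const char *b, size_t lenb)
    {
        /*
         * Equal-length strings that are byte-for-byte identical must compare
         * equal under any collation, so a cheap memcmp() can skip the
         * expensive locale-aware path entirely.
         */
        if (lena == lenb && memcmp(a, b, lena) == 0)
            return 0;
        return locale_compare(a, lena, b, lenb);
    }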
Stephen Frost [Fri, 19 Sep 2014 15:18:35 +0000 (11:18 -0400)]
Row-Level Security Policies (RLS)
Building on the updatable security-barrier views work, add the
ability to define policies on tables to limit the set of rows
which are returned from a query and which are allowed to be added
to a table. Expressions defined by the policy for filtering are
added to the security barrier quals of the query, while expressions
defined to check records being added to a table are added to the
with-check options of the query.
New top-level commands are CREATE/ALTER/DROP POLICY and are
controlled by the table owner. Row Security is able to be enabled
and disabled by the owner on a per-table basis using
ALTER TABLE .. ENABLE/DISABLE ROW SECURITY.
Per discussion, ROW SECURITY is disabled on tables by default and
must be enabled for policies on the table to be used. If no
policies exist on a table with ROW SECURITY enabled, a default-deny
policy is used and no records will be visible.
By default, row security is applied at all times except for the
table owner and the superuser. A new GUC, row_security, is added
which can be set to ON, OFF, or FORCE. When set to FORCE, row
security will be applied even for the table owner and superusers.
When set to OFF, row security will be disabled when allowed and an
error will be thrown if the user does not have rights to bypass row
security.
Per discussion, pg_dump sets row_security = OFF by default to ensure
that exports and backups will have all data in the table or will
error if there are insufficient privileges to bypass row security.
A new option has been added to pg_dump, --enable-row-security, to
ask pg_dump to export with row security enabled.
A new role capability, BYPASSRLS, which can only be set by the
superuser, is added to allow other users to be able to bypass row
security using row_security = OFF.
Many thanks to the various individuals who have helped with the
design, particularly Robert Haas for his feedback.
Authors include Craig Ringer, KaiGai Kohei, Adam Brightwell, Dean
Rasheed, with additional changes and rework by me.
Reviewers have included all of the above, Greg Smith,
Jeff McCormick, and Robert Haas.
Andres Freund [Fri, 19 Sep 2014 15:04:00 +0000 (17:04 +0200)]
Mark x86's memory barrier inline assembly as clobbering the cpu flags.
x86's memory barrier assembly was marked as clobbering "memory" but
not "cc" even though 'addl' sets various flags. As it turns out gcc on
x86 implicitly assumes "cc" on every inline assembler statement, so
it's not a bug. But as that's poorly documented and might get copied
to architectures or compilers where that's not the case, it seems
better to be precise.
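For reference, the barrier in question is an inline-assembly statement
roughly of the following shape (simplified and renamed here; the real
macro lives in the barrier/atomics headers), now listing both clobbers:

    /*
     * Full memory barrier on x86-64: a locked add of zero to the stack top.
     * "memory" keeps the compiler from moving loads/stores across the
     * barrier; "cc" records that addl clobbers the condition-code flags.
     */
    #define sketch_memory_barrier() \
        __asm__ __volatile__ ("lock; addl $0,0(%%rsp)" : : : "memory", "cc")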
Andres Freund [Fri, 19 Sep 2014 14:33:16 +0000 (16:33 +0200)]
Add the capability to display summary statistics to pg_xlogdump.
The new --stats/--stats=record options to pg_xlogdump display
per-rmgr/per-record statistics about the parsed WAL. This is useful for
understanding what the WAL primarily consists of, and allows targeted
optimizations at the application, configuration, and core-code level.
It is likely that we will want to fine-tune the statistics further,
but the feature already is quite helpful.
Author: Abhijit Menon-Sen, slightly editorialized by me
Reviewed-By: Andres Freund, Dilip Kumar and Furuya Osamu
Discussion: 20140604104716.GA3989@toroid.org
Andres Freund [Fri, 19 Sep 2014 13:17:12 +0000 (15:17 +0200)]
Add rmgr callback to name xlog record types for display purposes.
This is primarily useful for the upcoming pg_xlogdump --stats feature,
but also allows to remove some duplicated code in the rmgr_desc
routines.
Due to the separation and harmonization, the output of displayed
records changes somewhat. But since this isn't end-user oriented
content, that's ok.
It's potentially desirable to further change pg_xlogdump's display of
records. It previously wasn't possible to show the record type
separately from the description, forcing it to be in the last
column. But that's better done in a separate commit.
Author: Abhijit Menon-Sen, slightly editorialized by me
Reviewed-By: Álvaro Herrera, Andres Freund, and Heikki Linnakangas
Discussion: 20140604104716.GA3989@toroid.org
Andres Freund [Thu, 18 Sep 2014 07:59:10 +0000 (09:59 +0200)]
Fix configure check for %z printf support after INT64_MODIFIER changes.
The PGAC_FUNC_SNPRINTF_SIZE_T_SUPPORT test was broken by ce486056ecd28050. Among other things, that commit made the UINT64_FORMAT macro be
defined in c.h instead of directly by configure.
This led to the replacement printf being used on all platforms for a
while. Which seems to work, because this was only used due to
different profiles ;)
Peter Eisentraut [Wed, 17 Sep 2014 04:54:12 +0000 (00:54 -0400)]
Fix TAP checks when current directory name contains spaces
Add some quotes in the makefile snippet that creates the temporary
installation, so that it can handle spaces in the directory name and
possibly some other oddities.
Fix the return type of GIN triConsistent support functions to "char".
They were marked to return a boolean, but they actually return a
GinTernaryValue, which is more like a "char". It makes no practical
difference, as the triConsistent functions cannot be called directly from
SQL because they have "internal" arguments, but this nevertheless seems
more correct.
Also fix the GinTernaryValue name in the documentation. I renamed the enum
earlier, but neglected the docs.
Alexander Korotkov. This is new in 9.4, so backpatch there.
Follow the RFCs more closely in libpq server certificate hostname check.
The RFCs say that the CN must not be checked if a subjectAltName extension
of type dNSName is present. IOW, if subjectAltName extension is present,
but there are no dNSNames, we can still check the CN.
Tom Lane [Sun, 14 Sep 2014 01:01:49 +0000 (21:01 -0400)]
Invent PGC_SU_BACKEND and mark log_connections/log_disconnections that way.
This new GUC context option allows GUC parameters to have the combined
properties of PGC_BACKEND and PGC_SUSET, ie, they don't change after
session start and non-superusers can't change them. This is a more
appropriate choice for log_connections and log_disconnections than their
previous context of PGC_BACKEND, because we don't want non-superusers
to be able to affect whether their sessions get logged.
Note: the behavior for log_connections is still a bit odd, in that when
a superuser attempts to set it from PGOPTIONS, the setting takes effect
but it's too late to enable or suppress connection startup logging.
It's debatable whether that's worth fixing, and in any case there is
a reasonable argument for PGC_SU_BACKEND to exist.
In passing, re-pgindent the files touched by this commit.
Fujii Masao, reviewed by Joe Conway and Amit Kapila
Peter Eisentraut [Sun, 14 Sep 2014 00:14:17 +0000 (20:14 -0400)]
Run missing documentation tools through "missing"
Instead of just erroring out when a tool is missing, wrap the call with
the "missing" script that we are already using for bison, flex, and
perl, so that the users get a useful error message.
Robert Haas [Fri, 12 Sep 2014 20:11:58 +0000 (16:11 -0400)]
Change NTUP_PER_BUCKET to 1 to improve hash join lookup speed.
Since this makes the bucket headers use ~10x as much memory, properly
account for that memory when we figure out whether everything fits
in work_mem. This might result in some cases that previously used
only a single batch getting split into multiple batches, but it's
unclear as yet whether we need defenses against that case, and if so,
what the shape of those defenses should be.
It's worth noting that even in these edge cases, users should still be
no worse off than they would have been last week, because commit 45f6240a8fa9d35548eb2ef23dba2c11540aa02a saved a big pile of memory
on exactly the same workloads.
Tomas Vondra, reviewed and somewhat revised by me.
Add GUC to enable logging of replication commands.
Previously replication commands like IDENTIFY_SYSTEM were not logged
even when log_statement is set to all. Some users who want to audit
all types of statements were not satisfied with this situation. To
address the problem, this commit adds a new GUC, log_replication_commands.
If it's enabled, all replication commands are logged in the server log.
There are many ways we could have enabled that logging. For example,
we could have extended log_statement so that replication commands are
logged when it's set to all. But per discussion in the community, we
reached the consensus to add a separate GUC for that.
Reviewed by Ian Barwick, Robert Haas and Heikki Linnakangas.
Stephen Frost [Fri, 12 Sep 2014 16:04:37 +0000 (12:04 -0400)]
Add unicode_{column|header|border}_style to psql
With the unicode linestyle, this adds support to control if the
column, header, or border style should be single or double line
unicode characters. The default remains 'single'.
In passing, clean up the border documentation and address some
minor formatting/spelling issues.
Pavel Stehule, with some additional changes by me.
Stephen Frost [Fri, 12 Sep 2014 15:11:53 +0000 (11:11 -0400)]
Handle border = 3 in expanded mode
In psql, expanded mode was not being displayed correctly when using
the normal ascii or unicode linestyles and border set to '3'. Now,
per the documentation, border '3' is really only sensible for HTML
and LaTeX formats; however, that's no excuse for ascii/unicode to
break in that case, and provisions had been made for psql to cleanly
handle this case (and it did, in non-expanded mode).
This was broken when ascii/unicode was initially added a good five
years ago because print_aligned_vertical_line wasn't passed in the
border setting being used by print_aligned_vertical but instead was
given the whole printTableContent. There really isn't a good reason
for vertical_line to have the entire printTableContent structure, so
just pass in the printTextFormat and border setting (similar to how
this is handled in horizontal_line).
Support Subject Alternative Names in SSL server certificates.
This patch makes libpq check the server's hostname against DNS names listed
in the X509 subjectAltName extension field in the server certificate. This
allows the same certificate to be used for multiple domain names. If there
are no SANs in the certificate, the Common Name field is used, like before
this patch. If both are given, the Common Name is ignored. That is a bit
surprising, but that's the behavior mandated by the relevant RFCs, and it's
also what the common web browsers do.
This also adds a libpq_ngettext helper macro to allow plural messages to be
translated in libpq. Apparently this happened to be the first plural message
in libpq, so it was not needed before.
The code that tried to split a page at 75/25 ratio, when appending to the
end of an index, was buggy in two ways. First, there was a silly typo that
caused it to just fill the left page as full as possible. But the logic as
it was intended wasn't correct either, and would actually have given a ratio
closer to 60/40 than 75/25.
Gaetano Mendola spotted the typo. Backpatch to 9.4, where this code was added.
Tom Lane [Fri, 12 Sep 2014 03:30:51 +0000 (23:30 -0400)]
Fix power_var_int() for large integer exponents.
The code for raising a NUMERIC value to an integer power wasn't very
careful about large powers. It got an outright wrong answer for an
exponent of INT_MIN, due to failure to consider overflow of the Abs(exp)
operation; which is fixable by using an unsigned rather than signed
exponent value after that point. Also, even though the number of
iterations of the power-computation loop is pretty limited, it's easy for
the repeated squarings to result in ridiculously enormous intermediate
values, which can take unreasonable amounts of time/memory to process,
or even overflow the internal "weight" field and so produce a wrong answer.
We can forestall misbehaviors of that sort by bailing out as soon as the
weight value exceeds what will fit in int16, since then the final answer
must overflow (if exp > 0) or underflow (if exp < 0) the packed numeric
format.
Per off-list report from Pavel Stehule. Back-patch to all supported
branches.
Peter Eisentraut [Fri, 12 Sep 2014 01:08:59 +0000 (21:08 -0400)]
Fix vacuumdb --analyze-in-stages --all order
When running vacuumdb --analyze-in-stages --all, it needs to run the
first stage across all databases before the second one, instead of
running all stages in a database before processing the next one.
Also respect the --quiet option with --analyze-in-stages.
Stephen Frost [Fri, 12 Sep 2014 01:23:51 +0000 (21:23 -0400)]
Add 'ignore_nulls' option to row_to_json
Provide an option to skip NULL values in a row when generating a JSON
object from that row with row_to_json. This can reduce the size of the
JSON object in cases where columns are NULL without really reducing the
information in the JSON object.
This also makes row_to_json into a single function with default values,
rather than having multiple functions. In passing, change array_to_json
to also be a single function with default values (we don't add an
'ignore_nulls' option yet - it's not clear that there is a sensible
use-case there, and it hasn't been asked for in any case).
Simplify calculation of Poisson distributed delays in pgbench --rate mode.
The previous coding first generated a uniform random value between 0.0 and
1.0, then converted that to an integer between 1 and 10000, and divided that
again by 10000. Those conversions are unnecessary; we can use the double
value that pg_erand48() returns directly. While we're at it, put the logic
into a helper function, getPoissonRand().
The largest delay generated by the old coding was about 9.2 times the
average, because of the way the uniformly distributed value used for the
calculation was truncated to 1/10000 granularity. The new coding doesn't
have such clamping. With my laptop's DBL_MIN value, the maximum delay with
the new coding is about 700x the average. That seems acceptable - any
reasonable pgbench session should last long enough to average that out.
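A standalone sketch of the underlying transform; the real helper is
the new getPoissonRand(), which draws from pg_erand48(), for which
rand() stands in here:

    #include <math.h>
    #include <stdlib.h>

    /*
     * Return an exponentially distributed delay - the inter-arrival time of
     * a Poisson process - with the given average, using inverse-transform
     * sampling of a uniform value.
     */
    static double
    poisson_delay(double average)
    {
        /* uniform in [0, 1); 1.0 - u is therefore in (0, 1] */
        double u = (double) rand() / ((double) RAND_MAX + 1.0);

        return -log(1.0 - u) * average;
    }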
Change the way latency is calculated with pgbench --rate option.
The reported latency values now include the "schedule lag" time, that is,
the time between the transaction's scheduled start time and the time it
actually started. This relates better to a model where requests arrive at a
certain rate, and we are interested in the response time to the end user or
application, rather than the response time of the database itself.
Also, when --rate is used, include the schedule lag time in the log output.
The --rate option is new in 9.4, so backpatch to 9.4. It seems better to
make this change in 9.4, while we're still in the beta period, than ship a
9.4 version that calculates the values differently than 9.5.
Peter Eisentraut [Thu, 11 Sep 2014 00:05:56 +0000 (20:05 -0400)]
Support older versions of "prove"
Apparently, older versions of "prove" (couldn't identify the exact
version from the changelog) don't look into the t/ directory for tests
by default, so specify it explicitly.
Pack tuples in a hash join batch densely, to save memory.
Instead of palloc'ing each HashJoinTuple individually, allocate 32kB chunks
and pack the tuples densely in the chunks. This avoids the AllocChunk
header overhead, and the space wasted by standard allocator's habit of
rounding sizes up to the nearest power of two.
This doesn't contain any planner changes, because the planner's estimate of
memory usage ignores the palloc overhead. Now that the overhead is smaller,
the planner's estimates are in fact more accurate.
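A hedged sketch of the dense-packing idea; the chunk layout and names
below are invented for illustration, whereas the real code allocates
its chunks inside the hash table's memory context and handles
alignment and allocation failure:

    #include <stdlib.h>

    #define CHUNK_SIZE (32 * 1024)          /* 32kB, as in the commit message */

    typedef struct TupleChunk
    {
        struct TupleChunk *next;            /* previously filled chunks */
        size_t      used;                   /* bytes consumed in data[] */
        char        data[CHUNK_SIZE];       /* tuples packed back to back */
    } TupleChunk;

    /*
     * Carve 'size' bytes out of the current chunk, starting a new chunk when
     * the request doesn't fit.  No per-tuple allocator header is paid and
     * there is no rounding up to a power of two.  (Alignment and error
     * handling are omitted for brevity.)
     */
    static void *
    chunk_alloc(TupleChunk **head, size_t size)
    {
        TupleChunk *chunk = *head;

        if (chunk == NULL || chunk->used + size > CHUNK_SIZE)
        {
            chunk = malloc(sizeof(TupleChunk));
            chunk->next = *head;
            chunk->used = 0;
            *head = chunk;
        }
        chunk->used += size;
        return chunk->data + chunk->used - size;
    }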
Andres Freund [Wed, 10 Sep 2014 15:21:50 +0000 (17:21 +0200)]
Add support for optional_argument to our own getopt_long() implementation.
07c8651dd91d5a currently causes compilation errors on msvc (and
probably some other) compilers because our getopt_long()
implementation doesn't have support for optional_argument.
Thus implement optional_argument in our fallback implementation. It's
quite possibly also useful in other cases.
Arguably this needs a configure check for optional_argument, but it
has existed pretty much since getopt_long() was introduced and thus
doesn't seem worth the configure runtime.
Normally I would not push a patch this fast, but this allows msvc to
build again and has low risk as only optional_argument behaviour has
changed.
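A small, self-contained sketch of what optional_argument enables; the
option name mirrors the --stats[=record] usage mentioned above, but
the program itself is invented for illustration:

    #include <getopt.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(int argc, char **argv)
    {
        static const struct option long_options[] = {
            /*
             * optional_argument: both "--stats" and "--stats=record" parse;
             * note that the value must be attached with '=', a following
             * separate word is not consumed as the argument.
             */
            {"stats", optional_argument, NULL, 's'},
            {NULL, 0, NULL, 0}
        };
        int c;

        while ((c = getopt_long(argc, argv, "", long_options, NULL)) != -1)
        {
            if (c == 's')
            {
                if (optarg != NULL && strcmp(optarg, "record") == 0)
                    printf("per-record stats requested\n");
                else
                    printf("per-rmgr stats requested\n");
            }
        }
        return 0;
    }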
Tom Lane [Tue, 9 Sep 2014 22:35:14 +0000 (18:35 -0400)]
Preserve AND/OR flatness while extracting restriction OR clauses.
The code I added in commit f343a880d5555faf1dad0286c5632047c8f599ad was
careless about preserving AND/OR flatness: it could create a structure with
an OR node directly underneath another one. That breaks an assumption
that's fairly important for planning efficiency, not to mention triggering
various Asserts (as reported by Benjamin Smith). Add a trifle more logic
to handle the case properly.
Andres Freund [Tue, 9 Sep 2014 20:19:14 +0000 (22:19 +0200)]
Add new psql help topics, accessible to both --help and \?.
Add --help=<topic> for the commandline, and \? <topic> as a backslash
command, to show more help than the invocations without parameters
do. "commands", "variables" and "options" currently exist as help
topics describing, respectively, backslash commands, psql variables,
and commandline switches. Without parameters the help commands show
their previous topic.
Some further wordsmithing or extending of the added help content might
be needed; but there seems little benefit delaying the overall feature
further.
Author: Pavel Stehule, editorialized by many
Reviewed-By: Andres Freund, Petr Jelinek, Fujii Masao, MauMau, Abhijit
Menon-Sen and Erik Rijkers.
Robert Haas [Tue, 9 Sep 2014 21:45:20 +0000 (17:45 -0400)]
Change the spinlock primitives to function as compiler barriers.
Previously, they functioned as barriers against CPU reordering but not
compiler reordering, an odd API that required extensive use of volatile
everywhere that spinlocks are used. That's error-prone and has negative
implications for performance, so change it.
In theory, this makes it safe to remove many of the uses of volatile
that we currently have in our code base, but we may find that there are
some bugs in this effort when we do. In the long run, though, this
should make for much more maintainable code.
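A hedged sketch of the coding style this change enables; SharedThing
and its fields are invented for the example:

    #include "postgres.h"
    #include "storage/spin.h"

    typedef struct SharedThing
    {
        slock_t     mutex;                  /* protects counter */
        int         counter;
    } SharedThing;

    /*
     * New style: because SpinLockAcquire/SpinLockRelease now act as compiler
     * barriers, the protected field can be accessed through a plain pointer.
     * Under the old rules this function would have had to go through a
     * "volatile SharedThing *" to keep the compiler from reordering or
     * caching the access across the lock calls.
     */
    static void
    bump_counter(SharedThing *thing)
    {
        SpinLockAcquire(&thing->mutex);
        thing->counter++;
        SpinLockRelease(&thing->mutex);
    }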
Tom Lane [Tue, 9 Sep 2014 19:34:10 +0000 (15:34 -0400)]
Add width_bucket(anyelement, anyarray).
This provides a convenient method of classifying input values into buckets
that are not necessarily equal-width. It works on any sortable data type.
The choice of function name is a bit debatable, perhaps, but showing that
there's a relationship to the SQL standard's width_bucket() function seems
more attractive than the other proposals.
The xml type previously rejected "content" that is empty or consists
only of spaces. But the SQL/XML standard allows that, so change that behavior.
The accepted values for XML "documents" are not changed.
Stephen Frost [Tue, 9 Sep 2014 14:52:10 +0000 (10:52 -0400)]
Move ALTER ... ALL IN to ProcessUtilitySlow
Now that ALTER TABLE .. ALL IN TABLESPACE has replaced the previous
ALTER TABLESPACE approach, it makes sense to move the calls down in
to ProcessUtilitySlow where the rest of ALTER TABLE is handled.
This also means that event triggers will support ALTER TABLE .. ALL
(which was the impetus for the original change, though it has other
good qualities also).
Andres Freund [Mon, 8 Sep 2014 22:47:32 +0000 (00:47 +0200)]
Fix spinlock implementation for some !solaris sparc platforms.
Some Sparc CPUs can be run in various coherence models, ranging from
RMO (relaxed) over PSO (partial) to TSO (total). Solaris has always
run CPUs in TSO mode while in userland, but linux didn't use to and
the various *BSDs still don't. Unfortunately the sparc TAS/S_UNLOCK
code was only correct under TSO. Fix that by adding the necessary memory
barrier instructions. On sparcv8+, which should be all relevant CPUs,
these are treated as NOPs if the current consistency model doesn't
require the barriers.
Will be backpatched to all released branches once a few buildfarm
cycles haven't shown any problems. As I have no access to sparc
hardware, this was written blind.
Tom Lane [Mon, 8 Sep 2014 20:09:45 +0000 (16:09 -0400)]
Fix psql \s to work with recent libedit, and add pager support.
psql's \s (print command history) doesn't work at all with recent libedit
versions when printing to the terminal, because libedit tries to do an
fchmod() on the target file which will fail if the target is /dev/tty.
(We'd already noted this in the context of the target being /dev/null.)
Even before that, it didn't work pleasantly, because libedit likes to
encode the command history file (to ensure successful reloading), which
renders it nigh unreadable, not to mention significantly different-looking
depending on exactly which libedit version you have. So let's forget using
write_history() for this purpose, and instead print the data ourselves,
using logic similar to that used to iterate over the history for newline
encoding/decoding purposes.
While we're at it, insert the ability to use the pager when \s is printing
to the terminal. This has been an acknowledged shortcoming of \s for many
years, so while you could argue it's not exactly a back-patchable bug fix
it still seems like a good improvement. Anyone who's seriously annoyed
at this can use "\s /dev/tty" or local equivalent to get the old behavior.
Experimentation with this showed that the history iteration logic was
actually rather broken when used with libedit. It turns out that with
libedit you have to use previous_history() not next_history() to advance
to more recent history entries. The easiest and most robust fix for this
seems to be to make a run-time test to verify which function to call.
We had not noticed this because libedit doesn't really need the newline
encoding logic: its own encoding ensures that command entries containing
newlines are reloaded correctly (unlike libreadline). So the effective
behavior with recent libedits was that only the oldest history entry got
newline-encoded or newline-decoded. However, because of yet other bugs in
history_set_pos(), some old versions of libedit allowed the existing loop
logic to reach entries besides the oldest, which means there may be libedit
~/.psql_history files out there containing encoded newlines in more than
just the oldest entry. To ensure we can reload such files, it seems
appropriate to back-patch this fix, even though that will result in some
incompatibility with older psql versions (ie, multiline history entries
written by a psql with this fix will look corrupted to a psql without it,
if its libedit is reasonably up to date).
Tom Lane [Mon, 8 Sep 2014 02:40:41 +0000 (22:40 -0400)]
Documentation fix: sum(float4) returns float4, not float8.
The old claim is from my commit d06ebdb8d3425185d7e641d15e45908658a0177d of
2000-07-17, but it seems to have been a plain old thinko; sum(float4) has
been distinct from sum(float8) since Berkeley days. Noted by KaiGai Kohei.
While at it, mention the existence of sum(money), which is also of
embarrassingly ancient vintage.
The link to the NIST web page about DES standards leads nowhere, and
according to archive.org has been forwarded to an unrelated page for
many years. Therefore, just remove that link. More up-to-date
information can be found via Wikipedia, for example.
Allow \watch to display query execution time if \timing is enabled.
Previously \watch could not display the query execution time even
when \timing was enabled because it used PSQLexec instead of
SendQuery and that function didn't support \timing. This patch
introduces PSQLexecWatch and changes \watch so as to use it, instead.
PSQLexecWatch is the function to run the query, print its results and
display how long it took (only when \timing is enabled).
This patch also changes --echo-hidden so that it doesn't print
the query that \watch executes. Since \watch cannot execute
backslash command queries, they should not be printed even
when --echo-hidden is set.
Patch by me, review by Heikki Linnakangas and Michael Paquier
Check number of parameters in RAISE statement at compile time.
The number of % parameter markers in a RAISE statement should match the number
of parameters given. We used to check that at execution time, but we have
all the information needed at compile time, so let's check it at compile
time instead. It's generally better to find mistakes earlier.
Refactor per-page logic common to all redo routines to a new function.
Every redo routine uses the same idiom to determine what to do to a page:
check if there's a backup block for it, and if not, read the buffer if the
block exists, and check its LSN. Refactor that into a common function,
XLogReadBufferForRedo, making all the redo routines shorter and more
readable.
This has no user-visible effect, and makes no changes to the WAL format.
Reviewed by Andres Freund, Alvaro Herrera, Michael Paquier.