Fujii Masao [Fri, 2 Apr 2021 15:07:00 +0000 (00:07 +0900)]
pg_checksums: Fix progress reporting.
pg_checksums uses two counters, the total size and the current size,
to calculate the progress. Previously the progress that
pg_checksums reported could not reach 100% at the end.
The cause of this issue was that, within each file, only the sizes of
pages other than new ones were counted toward the current size,
while the full size of each file was counted toward the total size.
That is, the combined size of all new pages remained as a gap
between the total size and the current size.
This commit fixes this issue by making pg_checksums count
the sizes of all pages, including new ones, in each file toward
the current size.
Back-patch to v12 where progress reporting was added to pg_checksums.
Reported-by: Shinya Kato
Author: Shinya Kato
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/TYAPR01MB289656B1ACA0A5E7CAD07BE3C47A9@TYAPR01MB2896.jpnprd01.prod.outlook.com
Tom Lane [Fri, 2 Apr 2021 14:43:54 +0000 (10:43 -0400)]
Strip file names reported in error messages on Windows, too.
Commit dd136052b established a policy that error message FILE items
should include only the base name of the reporting source file, for
uniformity and succinctness. We now observe that some Windows compilers
use backslashes in __FILE__ strings, so truncate at backslashes as well.
This is expected to fix some platform variation in the results of the
new libpq_pipeline test module.
Discussion: https://postgr.es/m/3650140.1617372290@sss.pgh.pa.us
Andrew Dunstan [Fri, 2 Apr 2021 14:29:58 +0000 (10:29 -0400)]
Fix typo in 6d7a6feac4
Per gripe from Daniel Gustafsson
Fujii Masao [Fri, 2 Apr 2021 10:45:42 +0000 (19:45 +0900)]
postgres_fdw: Add option to control whether to keep connections open.
This commit adds a new option keep_connections that controls
whether postgres_fdw keeps the connections to the foreign server open
so that the subsequent queries can re-use them. This option can only be
specified for a foreign server. The default is on. If set to off,
all connections to the foreign server will be discarded
at the end of transaction. Closed connections will be re-established
when they are necessary by future queries using a foreign table.
This option is useful, for example, when users want to prevent
the connections from eating up the foreign server's connection
capacity.
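For illustration, a minimal sketch (the server name "loopback_srv" is
illustrative; keep_connections is the option added by this commit):

    ALTER SERVER loopback_srv OPTIONS (ADD keep_connections 'off');
    -- connections made for queries through loopback_srv are now
    -- discarded at transaction end instead of being kept for re-use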
Author: Bharath Rupireddy
Reviewed-by: Alexey Kondratov, Vignesh C, Fujii Masao
Discussion: https://postgr.es/m/CALj2ACVvrp5=AVp2PupEm+nAC8S4buqR3fJMmaCoc7ftT0aD2A@mail.gmail.com
Peter Eisentraut [Fri, 2 Apr 2021 09:01:49 +0000 (11:01 +0200)]
Add support for NullIfExpr in eval_const_expressions
Author: Hou Zhijie <houzj.fnst@cn.fujitsu.com>
Discussion: https://www.postgresql.org/message-id/flat/7ea5ce773bbc4eea9ff1a381acd3b102@G08CNEXMBPEKD05.g08.fujitsu.local
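As a hedged illustration of the effect (the exact plan text is an
assumption, but NULLIF over constant arguments can now be folded to a
constant during planning):

    EXPLAIN (VERBOSE, COSTS OFF) SELECT NULLIF(1, 2);
    -- with const-folding, the output column is simply the constant 1,
    -- with no run-time NULLIF evaluation left in the plan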
Fujii Masao [Fri, 2 Apr 2021 08:27:31 +0000 (17:27 +0900)]
Fix pgstat_report_replslot() to use proper data types for its arguments.
The caller of pgstat_report_replslot() passes int64 values to the function.
Also the function stores those values in PgStat_Counter (i.e., int64) fields
of PgStat_MsgReplSlot struct. But previously the function used "int" as
the data types of some arguments for those values, which could lead to
the overflow of values.
To avoid this risk, this commit fixes pgstat_report_replslot() to use
PgStat_Counter for those arguments. Since the values are statistics
counters, PgStat_Counter, the data type dedicated to counters, is the
natural choice rather than plain int64.
Reported-by: Vignesh C
Author: Vignesh C
Reviewed-by: Jeevan Ladhe, Fujii Masao
Discussion: https://postgr.es/m/CALDaNm080OpG=ZwOb0i8EyChH5SyHAMFWJCKaKTXmrfvJLbgaA@mail.gmail.com
Michael Paquier [Fri, 2 Apr 2021 07:37:00 +0000 (16:37 +0900)]
doc: Clarify how to generate backup files with non-exclusive backups
The current instructions describing how to write the backup_label and
tablespace_map files are confusing. For example, opening a file in text
mode on Windows and copy-pasting the file's contents would result in a
failure at recovery because of the extra CRLF characters generated. The
documentation was not stating that clearly, and per discussion this is
not considered as a supported scenario.
This commit extends the documentation a bit to mention that it may be
required to open the file in binary mode before writing its data.
Reported-by: Wang Shenhao
Author: David Steele
Reviewed-by: Andrew Dunstan, Magnus Hagander
Discussion: https://postgr.es/m/8373f61426074f2cb6be92e02f838389@G08CNEXMBPEKD06.g08.fujitsu.local
Backpatch-through: 9.6
Fujii Masao [Fri, 2 Apr 2021 07:26:36 +0000 (16:26 +0900)]
Fix typos in comments.
Author: Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoA1YL7t0nzVSEySx6zOaE7xO3r0jyu8hkitGL2_XbaMxQ@mail.gmail.com
David Rowley [Fri, 2 Apr 2021 02:25:38 +0000 (15:25 +1300)]
Attempt to fix unstable Result Cache regression tests
force_parallel_mode = regress is causing a few more problems than I
thought. It seems that both the leader and the single worker can
contribute to the execution. I had mistakenly thought that only the worker
process would do any work. Since it's not deterministic which of
the two processes will get a chance to work on the plan, it seems
better just to disable force_parallel_mode for these tests. At least doing
this seems better than changing to EXPLAIN only rather than EXPLAIN
ANALYZE.
Additionally, I overlooked the fact that the sub-plan below a Result
Cache will execute a varying number of times depending on cache
eviction. 32-bit machines will use less memory and
evict fewer tuples from the cache. That results in the subnode being
executed fewer times on 32-bit machines. Let's just blank out the number
of loops in each node.
Bruce Momjian [Fri, 2 Apr 2021 01:17:24 +0000 (21:17 -0400)]
doc: mention that intervening major releases can be skipped
Also mention that you should read the intervening major releases notes.
This change was also applied to the website.
Discussion: https://postgr.es/m/20210330144949.GA8259@momjian.us
Backpatch-through: 9.6
David Rowley [Fri, 2 Apr 2021 01:10:56 +0000 (14:10 +1300)]
Add Result Cache executor node (take 2)
Here we add a new executor node type named "Result Cache". The planner
can include this node type in the plan to have the executor cache the
results from the inner side of parameterized nested loop joins. This
allows caching of tuples for sets of parameters so that in the event that
the node sees the same parameter values again, it can just return the
cached tuples instead of rescanning the inner side of the join all over
again. Internally, result cache uses a hash table in order to quickly
find tuples that have been previously cached.
For certain data sets, this can significantly improve the performance of
joins. The best cases for using this new node type are for join problems
where a large portion of the tuples from the inner side of the join have
no join partner on the outer side of the join. In such cases, hash join
would have to hash values that are never looked up, thus bloating the hash
table and possibly causing it to multi-batch. Merge joins would have to
skip over all of the unmatched rows. If we use a nested loop join with a
result cache, then we only cache tuples that have at least one join
partner on the outer side of the join. The benefits of using a
parameterized nested loop with a result cache increase when there are
fewer distinct values being looked up and the number of lookups of each
value is large. Also, hash probes to look up the cache can be much faster
than the hash probe in a hash join as it's common that the result cache's
hash table is much smaller than the hash join's due to result cache only
caching useful tuples rather than all tuples from the inner side of the
join. This variation in hash probe performance is more significant when
the hash join's hash table no longer fits into the CPU's L3 cache, but the
result cache's hash table does. The apparent "random" access of hash
buckets with each hash probe can cause a poor L3 cache hit ratio for large
hash tables. Smaller hash tables generally perform better.
The hash table used for the cache limits itself to not exceeding work_mem
* hash_mem_multiplier in size. We maintain a dlist of keys for this cache
and when we're adding new tuples and realize we've exceeded the memory
budget, we evict cache entries starting with the least recently used ones
until we have enough memory to add the new tuples to the cache.
For parameterized nested loop joins, we now consider using one of these
result cache nodes in between the nested loop node and its inner node. We
determine when this might be useful based on cost, which is primarily
driven off of what the expected cache hit ratio will be. Estimating the
cache hit ratio relies on having good distinct estimates on the nested
loop's parameters.
For now, the planner will only consider using a result cache for
parameterized nested loop joins. This works for both normal joins and
also for LATERAL type joins to subqueries. It is possible to use this new
node for other uses in the future. For example, to cache results from
correlated subqueries. However, that's not done here due to some
difficulties obtaining a distinct estimation on the outer plan to
calculate the estimated cache hit ratio. Currently we plan the inner plan
before planning the outer plan so there is no good way to know if a result
cache would be useful or not since we can't estimate the number of times
the subplan will be called until the outer plan is generated.
The functionality being added here is newly introducing a dependency on
the return value of estimate_num_groups() during the join search.
Previously, during the join search, we only ever needed to perform
selectivity estimations. With this commit, we need to use
estimate_num_groups() in order to estimate what the hit ratio on the
result cache will be. In simple terms, if we expect 10 distinct values
and we expect 1000 outer rows, then we'll estimate the hit ratio to be
99%. Since cache hits are very cheap compared to scanning the underlying
nodes on the inner side of the nested loop join, this will
significantly reduce the planner's cost for the join. However, it's
fairly easy to see here that things will go bad when estimate_num_groups()
incorrectly returns a value that's significantly lower than the actual
number of distinct values. If this happens then that may cause us to make
use of a nested loop join with a result cache instead of some other join
type, such as a merge or hash join. Our distinct estimations have been
known to be a source of trouble in the past, so the extra reliance on them
here could cause the planner to choose slower plans than it did prior
to having this feature. Distinct values are also fairly hard to
estimate accurately when several tables have been joined already or when a
WHERE clause filters out a set of values that are correlated to the
expressions we're estimating the number of distinct values for.
For now, the costing we perform during query planning for result caches
does put quite a bit of faith in the distinct estimations being accurate.
When these are accurate then we should generally see faster execution
times for plans containing a result cache. However, in the real world, we
may find that we need to either change the costings to put less trust in
the distinct estimations being accurate or perhaps even disable this
feature by default. There's always an element of risk when we teach the
query planner to do new tricks that it decides to use that new trick at
the wrong time and causes a regression. Users may opt to get the old
behavior by turning the feature off using the enable_resultcache GUC.
Currently, this is enabled by default. It remains to be seen if we'll
maintain that setting for the release.
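For example (a sketch; enable_resultcache is the GUC added by this
commit, while the table and column names are illustrative):

    SET enable_resultcache TO off;  -- revert to pre-existing plan shapes
    -- with it on (the default), a plan such as the following may show a
    -- Result Cache node between the Nested Loop and its inner side
    EXPLAIN (COSTS OFF)
    SELECT * FROM big_outer o JOIN small_inner i ON i.key = o.key;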
Additionally, the name "Result Cache" is the best name I could think of
for this new node at the time I started writing the patch. Nobody seems
to strongly dislike the name. A few people did suggest other names but no
other name seemed to dominate in the brief discussion that there was about
names. Let's allow the beta period to see if the current name pleases
enough people. If there's some consensus on a better name, then we can
change it before the release. Please see the 2nd discussion link below
for the discussion on the "Result Cache" name.
Author: David Rowley
Reviewed-by: Andy Fan, Justin Pryzby, Zhihong Yu, Hou Zhijie
Tested-By: Konstantin Knizhnik
Discussion: https://postgr.es/m/CAApHDvrPcQyQdWERGYWx8J%2B2DLUNgXu%2BfOSbQ1UscxrunyXyrQ%40mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvq=yQXr5kqhRviT2RhNKwToaWr9JAN5t+5_PzhuRJ3wvg@mail.gmail.com
Michael Paquier [Fri, 2 Apr 2021 00:44:42 +0000 (09:44 +0900)]
Improve stability of test with vacuum_truncate in reloptions.sql
This test has been using a simple VACUUM with pg_relation_size() to
check if a relation gets physically truncated or not, but forgot the
fact that some concurrent activity, like checkpoint buffer writes, could
cause some pages to be skipped. The second test enabling
vacuum_truncate could fail, seeing a non-empty relation. The first test
would not have failed, but could finish by testing a behavior different
than the one aimed for. Both tests gain a FREEZE option, to make the
vacuums more aggressive and prevent page skips.
This is similar to the issues fixed in c2dc1a7.
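The stabilized pattern looks roughly like this (table name
illustrative):

    VACUUM (FREEZE) reloptions_test;  -- FREEZE makes the vacuum aggressive,
                                      -- preventing concurrent-activity page skips
    SELECT pg_relation_size('reloptions_test') = 0 AS truncated;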
Author: Arseny Sher
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/87tuotr2hh.fsf@ars-thinkpad
backpatch-through: 12
Tom Lane [Thu, 1 Apr 2021 21:55:14 +0000 (17:55 -0400)]
Rethink handling of pass-by-value leaf datums in SP-GiST.
The existing convention in SP-GiST is that any pass-by-value datatype
is stored in Datum representation, i.e. it's of width sizeof(Datum)
even when typlen is less than that. This is okay, or at least it's
too late to change it, for prefix datums and node-label datums in inner
(upper) tuples. But it's problematic for leaf datums, because we'd
prefer those to be stored in Postgres' standard on-disk representation
so that we can easily extend leaf tuples to carry additional "included"
columns.
I believe, however, that we can get away with just up and changing that.
This would be an unacceptable on-disk-format break, but there are two
big mitigating factors:
1. It seems quite unlikely that there are any SP-GiST opclasses out
there that use pass-by-value leaf datatypes. Certainly none of the
ones in core do, nor has codesearch.debian.net heard of any. Given
what SP-GiST is good for, it's hard to conceive of a use-case where
the leaf-level values would be both small and fixed-width. (As an
example, if you wanted to index text values with the leaf level being
just a byte, then every text string would have to be represented
with one level of inner tuple per preceding byte, which would be
horrendously space-inefficient and slow to access. You always want
to use as few inner-tuple levels as possible, leaving as much as
possible in the leaf values.)
2. Even granting that you have such an index, this change only
breaks things on big-endian machines. On little-endian, the high
order bytes of the Datum format will now just appear to be alignment
padding space.
So, change the code to store pass-by-value leaf datums in their
usual on-disk form. Inner-tuple datums are not touched.
This is extracted from a larger patch that intends to add support for
"included" columns. I'm committing it separately for visibility in
our commit logs.
Pavel Borisov and Tom Lane, reviewed by Andrey Borodin
Discussion: https://postgr.es/m/CALT9ZEFi-vMp4faht9f9Junb1nO3NOSjhpxTmbm1UGLMsLqiEQ@mail.gmail.com
Stephen Frost [Thu, 1 Apr 2021 19:32:06 +0000 (15:32 -0400)]
Rename Default Roles to Predefined Roles
The term 'default roles' wasn't quite apt as these roles aren't able to
be modified or removed after installation, so rename them to be
'Predefined Roles' instead, adding an entry into the newly added
Obsolete Appendix to help users of current releases find the new
documentation.
Bruce Momjian and Stephen Frost
Discussion: https://postgr.es/m/157742545062.1149.11052653770497832538%40wrigleys.postgresql.org
and https://www.postgresql.org/message-id/20201120211304.GG16415@tamriel.snowman.net
Alvaro Herrera [Thu, 1 Apr 2021 19:25:46 +0000 (16:25 -0300)]
Fix setvbuf()-induced crash in libpq_pipeline
Windows doesn't like setvbuf(..., _IOLBF) and crashes if you use it,
which has been causing the libpq_pipeline failures all along ... and our
own port.h has known about it for a long time: it offers PG_IOLBF that's
defined to _IONBF on that platform. Follow its advice.
While at it, get rid of a bogus bitshift that used a constant of the
wrong size. Decorate the constant as LL to fix. While at it, remove a
pointless addition that only confused matters.
All as diagnosed by Tom Lane.
Discussion: https://postgr.es/m/3458958.1617302154@sss.pgh.pa.us
Robert Haas [Thu, 1 Apr 2021 17:36:28 +0000 (13:36 -0400)]
amcheck: Fix verify_heapam's tuple visibility checking rules.
We now follow the order of checks from HeapTupleSatisfies* more
closely to avoid coming to erroneous conclusions.
Mark Dilger and Robert Haas
Discussion: http://postgr.es/m/CA+Tgmob6sii0yTvULYJ0Vq4w6ZBmj7zUhddL3b+SKDi9z9NA7Q@mail.gmail.com
Tom Lane [Thu, 1 Apr 2021 17:34:16 +0000 (13:34 -0400)]
Fix pg_restore's misdesigned code for detecting archive file format.
Despite the clear comments pointing out that the duplicative code
segments in ReadHead() and _discoverArchiveFormat() needed to be
in sync, they were not: the latter did not bother to apply any of
the sanity checks in the former. We'd missed noticing this partly
because none of those checks would fail in scenarios we customarily
test, and partly because the oversight would be masked if both
segments execute, which they would in cases other than needing to
autodetect the format of a non-seekable stdin source. However,
in a case meeting all these requirements --- for example, trying
to read a newer-than-supported archive format from non-seekable
stdin --- pg_restore missed applying the version check and would
likely dump core or otherwise misbehave.
The whole thing is silly anyway, because there seems little reason
to duplicate the logic beyond the one-line verification that the
file starts with "PGDMP". There seems to have been an undocumented
assumption that multiple major formats (major enough to require
separate reader modules) would nonetheless share the first half-dozen
fields of the custom-format header. This seems unlikely, so let's
fix it by just nuking the duplicate logic in _discoverArchiveFormat().
Also get rid of the pointless attempt to seek back to the start of
the file after successful autodetection. That wastes cycles, and
it means we have four behaviors to verify, not two.
Per bug #16951 from Sergey Koposov. This has been broken for
decades, so back-patch to all supported versions.
Discussion: https://postgr.es/m/16951-a4dd68cf0de23048@postgresql.org
Peter Eisentraut [Thu, 1 Apr 2021 14:12:53 +0000 (16:12 +0200)]
Fix internal extract(timezone_minute) formulas
Through various refactorings over time, the extract(timezone_minute
from time with time zone) and extract(timezone_minute from timestamp
with time zone) implementations ended up with two different but
equally nonsensical formulas by using SECS_PER_MINUTE and
MINS_PER_HOUR interchangeably. Since those two are of course both the
same number, the formulas do work, but for readability, fix them to be
semantically correct.
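For reference, a quick check of the (unchanged) behavior; with a
nonzero-minute offset the result is the minute component of the stored
UTC offset:

    SELECT extract(timezone_minute FROM time with time zone '12:00:00+05:30');
    -- 30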
Alvaro Herrera [Thu, 1 Apr 2021 13:04:45 +0000 (10:04 -0300)]
libpq_pipeline: Must strdup(optarg) to avoid crash
I forgot to strdup() when processing argv[]. Apparently many platforms
hide this mistake from users, but in those that don't you may get a
program crash. Repair.
Per buildfarm member drongo, which is the only one in all the buildfarm
manifesting a problem here.
While at it, move "numrows" processing out of the line of special cases,
and make it getopt's -r instead. (A similar thing could be done to
'conninfo', but current use of the program doesn't warrant spending time
on that -- nowhere else do we use conninfo in so simplistic a manner.)
Discussion: https://postgr.es/m/20210401124850.GA19247@alvherre.pgsql
Heikki Linnakangas [Thu, 1 Apr 2021 09:23:40 +0000 (12:23 +0300)]
Do COPY FROM encoding conversion/verification in larger chunks.
This gives a small performance gain, by reducing the number of calls
to the conversion/verification function, and letting it work with
larger inputs. Also, reorganizing the input pipeline makes it easier
to parallelize the input parsing: after the input has been converted
to the database encoding, the next stage of finding the newlines can
be done in parallel, because there cannot be any newline chars
"embedded" in multi-byte characters in the encodings that we support
as server encodings.
This changes behavior in one corner case: if client and server
encodings are the same single-byte encoding (e.g. latin1), previously
the input would not be checked for zero bytes ('\0'). Any fields
containing zero bytes would be truncated at the zero. But if encoding
conversion was needed, the conversion routine would throw an error on
the zero. After this commit, the input is always checked for zeros.
Reviewed-by: John Naylor
Discussion: https://www.postgresql.org/message-id/e7861509-3960-538a-9025-b75a61188e01%40iki.fi
Heikki Linnakangas [Thu, 1 Apr 2021 08:45:22 +0000 (11:45 +0300)]
Add 'noError' argument to encoding conversion functions.
With the 'noError' argument, you can try to convert a buffer without
knowing the character boundaries beforehand. The functions now need to
return the number of input bytes successfully converted.
This is a backwards-incompatible change if you have created a custom
encoding conversion with CREATE CONVERSION. This adds a check to
pg_upgrade for that, refusing the upgrade if there are any user-defined
encoding conversions. Custom conversions are very rare; there are no
commonly used extensions that I know of that use that feature. No other
objects can depend on conversions, so if you do have one, you can fairly
easily drop it before upgrading, and recreate it after the upgrade with
an updated version.
Add regression tests for built-in encoding conversions. This doesn't cover
every conversion, but it covers all the internal functions in conv.c that
are used to implement the conversions.
Reviewed-by: John Naylor
Discussion: https://www.postgresql.org/message-id/e7861509-3960-538a-9025-b75a61188e01%40iki.fi
Peter Eisentraut [Thu, 1 Apr 2021 07:52:03 +0000 (09:52 +0200)]
Make extract(timetz) tests a bit more interesting
Use a time zone offset with nonzero minutes to make the
timezone_minute test meaningful.
Michael Paquier [Thu, 1 Apr 2021 06:28:37 +0000 (15:28 +0900)]
doc: Clarify use of ACCESS EXCLUSIVE lock in various sections
Some sections of the documentation used "exclusive lock" to describe
that an ACCESS EXCLUSIVE lock is taken during a given operation. This
can be confusing to the reader, as ACCESS SHARE is still allowed while
an EXCLUSIVE lock is held, but that would not be the case with what is
actually described in those parts of the documentation.
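To spell out the distinction (standard lock-conflict behavior; table
name illustrative):

    BEGIN;
    LOCK TABLE t IN EXCLUSIVE MODE;
    -- concurrent plain SELECTs (ACCESS SHARE) on t are still allowed;
    -- LOCK TABLE t IN ACCESS EXCLUSIVE MODE would block even those
    COMMIT;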
Author: Greg Rychlewski
Discussion: https://postgr.es/m/CAKemG7VptD=7fNWckFMsMVZL_zzvgDO6v2yVmQ+ZiBfc_06kCQ@mail.gmail.com
Backpatch-through: 9.6
Amit Kapila [Thu, 1 Apr 2021 02:27:34 +0000 (07:57 +0530)]
Ensure to send a prepare after we detect concurrent abort during decoding.
It is possible that while decoding a prepared transaction, it gets aborted
concurrently via a ROLLBACK PREPARED command. In that case, we were
skipping all the changes and directly sending Rollback Prepared when we
found the same in WAL. However, the downstream has no idea of the GID of
such a transaction. So, ensure that a prepare is sent even when a
concurrent abort is detected.
Author: Ajin Cherian
Reviewed-by: Markus Wanner, Amit Kapila
Discussion: https://postgr.es/m/f82133c6-6055-b400-7922-97dae9f2b50b@enterprisedb.com
Michael Paquier [Thu, 1 Apr 2021 00:48:17 +0000 (09:48 +0900)]
Move some client-specific routines from SSLServer to PostgresNode
test_connect_ok() and test_connect_fails() have always been part of the
SSL tests, and check if a connection to the backend should work or not,
and there are sanity checks done on specific error patterns dropped by
libpq if the connection fails.
This was fundamentally wrong in two respects. First, SSLServer.pm works
mostly on setting up and changing the SSL configuration of a
PostgresNode, and has really nothing to do with the client. Second,
the situation became worse in light of b34ca595, where the SSL tests
would finish by using a psql command that may not come from the same
installation as the node set up.
This commit moves those client routines into PostgresNode, making it
easier to refactor SSLServer to become more SSL-implementation aware.
This can also be reused by the ldap, kerberos and authentication test
suites for connection checks, and a follow-up patch should extend those
interfaces to match with backend log patterns.
Author: Michael Paquier
Reviewed-by: Andrew Dunstan, Daniel Gustafsson, Álvaro Herrera
Discussion: https://postgr.es/m/YGLKNBf9zyh6+WSt@paquier.xyz
David Rowley [Thu, 1 Apr 2021 00:33:23 +0000 (13:33 +1300)]
Revert b6002a796
This removes "Add Result Cache executor node". It seems that something
weird is going on with the tracking of cache hits and misses as
highlighted by many buildfarm animals. It's not yet clear what the
problem is as other parts of the plan indicate that the cache did work
correctly, it's just the hits and misses that were being reported as 0.
This is especially a bad time to have the buildfarm so broken, so
reverting before too many more animals go red.
Discussion: https://postgr.es/m/CAApHDvq_hydhfovm4=izgWs+C5HqEeRScjMbOgbpC-jRAeK3Yw@mail.gmail.com
David Rowley [Wed, 31 Mar 2021 23:32:22 +0000 (12:32 +1300)]
Add Result Cache executor node
Here we add a new executor node type named "Result Cache". The planner
can include this node type in the plan to have the executor cache the
results from the inner side of parameterized nested loop joins. This
allows caching of tuples for sets of parameters so that in the event that
the node sees the same parameter values again, it can just return the
cached tuples instead of rescanning the inner side of the join all over
again. Internally, result cache uses a hash table in order to quickly
find tuples that have been previously cached.
For certain data sets, this can significantly improve the performance of
joins. The best cases for using this new node type are for join problems
where a large portion of the tuples from the inner side of the join have
no join partner on the outer side of the join. In such cases, hash join
would have to hash values that are never looked up, thus bloating the hash
table and possibly causing it to multi-batch. Merge joins would have to
skip over all of the unmatched rows. If we use a nested loop join with a
result cache, then we only cache tuples that have at least one join
partner on the outer side of the join. The benefits of using a
parameterized nested loop with a result cache increase when there are
fewer distinct values being looked up and the number of lookups of each
value is large. Also, hash probes to look up the cache can be much faster
than the hash probe in a hash join as it's common that the result cache's
hash table is much smaller than the hash join's due to result cache only
caching useful tuples rather than all tuples from the inner side of the
join. This variation in hash probe performance is more significant when
the hash join's hash table no longer fits into the CPU's L3 cache, but the
result cache's hash table does. The apparent "random" access of hash
buckets with each hash probe can cause a poor L3 cache hit ratio for large
hash tables. Smaller hash tables generally perform better.
The hash table used for the cache limits itself to not exceeding work_mem
* hash_mem_multiplier in size. We maintain a dlist of keys for this cache
and when we're adding new tuples and realize we've exceeded the memory
budget, we evict cache entries starting with the least recently used ones
until we have enough memory to add the new tuples to the cache.
For parameterized nested loop joins, we now consider using one of these
result cache nodes in between the nested loop node and its inner node. We
determine when this might be useful based on cost, which is primarily
driven off of what the expected cache hit ratio will be. Estimating the
cache hit ratio relies on having good distinct estimates on the nested
loop's parameters.
For now, the planner will only consider using a result cache for
parameterized nested loop joins. This works for both normal joins and
also for LATERAL type joins to subqueries. It is possible to use this new
node for other uses in the future. For example, to cache results from
correlated subqueries. However, that's not done here due to some
difficulties obtaining a distinct estimation on the outer plan to
calculate the estimated cache hit ratio. Currently we plan the inner plan
before planning the outer plan so there is no good way to know if a result
cache would be useful or not since we can't estimate the number of times
the subplan will be called until the outer plan is generated.
The functionality being added here is newly introducing a dependency on
the return value of estimate_num_groups() during the join search.
Previously, during the join search, we only ever needed to perform
selectivity estimations. With this commit, we need to use
estimate_num_groups() in order to estimate what the hit ratio on the
result cache will be. In simple terms, if we expect 10 distinct values
and we expect 1000 outer rows, then we'll estimate the hit ratio to be
99%. Since cache hits are very cheap compared to scanning the underlying
nodes on the inner side of the nested loop join, this will
significantly reduce the planner's cost for the join. However, it's
fairly easy to see here that things will go bad when estimate_num_groups()
incorrectly returns a value that's significantly lower than the actual
number of distinct values. If this happens then that may cause us to make
use of a nested loop join with a result cache instead of some other join
type, such as a merge or hash join. Our distinct estimations have been
known to be a source of trouble in the past, so the extra reliance on them
here could cause the planner to choose slower plans than it did prior
to having this feature. Distinct values are also fairly hard to
estimate accurately when several tables have been joined already or when a
WHERE clause filters out a set of values that are correlated to the
expressions we're estimating the number of distinct values for.
For now, the costing we perform during query planning for result caches
does put quite a bit of faith in the distinct estimations being accurate.
When these are accurate then we should generally see faster execution
times for plans containing a result cache. However, in the real world, we
may find that we need to either change the costings to put less trust in
the distinct estimations being accurate or perhaps even disable this
feature by default. There's always an element of risk when we teach the
query planner to do new tricks that it decides to use that new trick at
the wrong time and causes a regression. Users may opt to get the old
behavior by turning the feature off using the enable_resultcache GUC.
Currently, this is enabled by default. It remains to be seen if we'll
maintain that setting for the release.
Additionally, the name "Result Cache" is the best name I could think of
for this new node at the time I started writing the patch. Nobody seems
to strongly dislike the name. A few people did suggest other names but no
other name seemed to dominate in the brief discussion that there was about
names. Let's allow the beta period to see if the current name pleases
enough people. If there's some consensus on a better name, then we can
change it before the release. Please see the 2nd discussion link below
for the discussion on the "Result Cache" name.
Author: David Rowley
Reviewed-by: Andy Fan, Justin Pryzby, Zhihong Yu
Tested-By: Konstantin Knizhnik
Discussion: https://postgr.es/m/CAApHDvrPcQyQdWERGYWx8J%2B2DLUNgXu%2BfOSbQ1UscxrunyXyrQ%40mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvq=yQXr5kqhRviT2RhNKwToaWr9JAN5t+5_PzhuRJ3wvg@mail.gmail.com
Alvaro Herrera [Wed, 31 Mar 2021 23:11:51 +0000 (20:11 -0300)]
Remove setvbuf() call from PQtrace()
It's misplaced there -- it's not libpq's output stream to tweak in that
way. In particular, POSIX says that it has to be called before any
other operation on the file, so if the stream was previously used by the
calling application, bad things may happen.
Put setvbuf() in libpq_pipeline for good measure.
Also, reduce fopen(..., "w+") to just fopen(..., "w") in
libpq_pipeline.c. It's not clear that this fixes anything, but we don't
use w+ anywhere.
Per complaints from Tom Lane.
Discussion: https://postgr.es/m/3337422.1617229905@sss.pgh.pa.us
Alvaro Herrera [Wed, 31 Mar 2021 22:16:58 +0000 (19:16 -0300)]
Initialize conn->Pfdebug to NULL when creating a connection
Failing to do this can cause a crash, and I suspect is what has happened
with a buildfarm member reporting mysterious failures.
This is an ancient bug, but I'm not backpatching since evidently nobody
cares about PQtrace in older releases.
Discussion: https://postgr.es/m/3333908.1617227066@sss.pgh.pa.us
Alvaro Herrera [Wed, 31 Mar 2021 21:45:17 +0000 (18:45 -0300)]
Disable force_parallel_mode in libpq_pipeline
Some buildfarm animals with force_parallel_mode=regress were failing
this test because the error is reported in a parallel worker quicker
than the rows that succeed.
Take the opportunity to move the SET of lc_messages out of the traced
section, because it's not very interesting.
Diagnosed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/3304521.1617221724@sss.pgh.pa.us
Tom Lane [Wed, 31 Mar 2021 21:14:16 +0000 (17:14 -0400)]
Fix unportable use of isprint().
We must cast the arguments of <ctype.h> functions to unsigned
char to avoid problems where char is signed.
Speaking of which, considering that this *is* a <ctype.h> function,
it's rather remarkable that we aren't seeing more complaints about
not having included that header.
Per buildfarm.
Tom Lane [Wed, 31 Mar 2021 21:00:30 +0000 (17:00 -0400)]
Fix portability and safety issues in pqTraceFormatTimestamp.
Remove confusion between time_t and pg_time_t; neither
gettimeofday() nor localtime() deal in the latter.
libpq indeed has no business using <pgtime.h> at all.
Use snprintf not sprintf, to ensure we can't overrun the
supplied buffer. (Unlikely, but let's be safe.)
Per buildfarm.
Tom Lane [Wed, 31 Mar 2021 20:50:45 +0000 (16:50 -0400)]
Silence compiler warning in non-assert builds.
Per buildfarm.
Tom Lane [Wed, 31 Mar 2021 20:45:17 +0000 (16:45 -0400)]
Don't prematurely cram a value into a short int.
Since a4d75c86b, some buildfarm members have been warning that
Assert(attnum <= MaxAttrNumber);
is useless if attnum is an AttrNumber. I'm not certain how plausible
it is that the value coming out of the bitmap could actually exceed
MaxAttrNumber, but we seem to have thought that that was possible back
in 7300a6995. Revert the intermediate variable to int so that we have
the same overflow protection as before.
Stephen Frost [Wed, 31 Mar 2021 20:23:25 +0000 (16:23 -0400)]
Add a docs section for obsoleted and renamed functions and settings
The new appendix groups information on renamed or removed settings,
commands, etc into an out-of-the-way part of the docs.
The original id elements are retained in each subsection to ensure that
the same filenames are produced for HTML docs. This prevents /current/
links on the web from breaking, and allows users of the web docs
to follow links from old version pages to info on the changes in the
new version. Prior to this change, a link to /current/ for renamed
sections like the recovery.conf docs would just 404. Similarly if
someone searched for recovery.conf they would find the pg11 docs,
but there would be no /12/ or /current/ link, so they couldn't easily
find out that it was removed in pg12 or how to adapt.
Index entries are also added so that there's a breadcrumb trail for
users to follow when they know the old name, but not what we changed it
to. So a user who is trying to find out how to set standby_mode in
PostgreSQL 12+, or where pg_resetxlog went, now has more chance of
finding that information.
Craig Ringer and Stephen Frost
Reviewed-by: Euler Taveira
Discussion: https://postgr.es/m/CAGRY4nzPNOyYQ_1-pWYToUVqQ0ThqP5jdURnJMZPm539fdizOg%40mail.gmail.com
Backpatch-through: 10
Tom Lane [Wed, 31 Mar 2021 19:30:04 +0000 (15:30 -0400)]
Suppress compiler warning in libpq_pipeline.c.
Some compilers seem to be concerned about the possibility that
recv_step is not any of the defined enum values. Silence
warnings about uninitialized cmdtag in a different way than
I did in 9fb9691a8.
Tom Lane [Wed, 31 Mar 2021 19:25:53 +0000 (15:25 -0400)]
Improve style of some replication-related error messages.
Put the remote end's error message into the primary error string,
instead of relegating it to errdetail(). Although this could end up
being awkward if the remote sends us a really long error message,
it seems more in keeping with our message style guidelines, and more
helpful in situations where the errdetail could get dropped.
Peter Smith
Discussion: https://postgr.es/m/CAHut+Ps-Qv2yQceCwobQDP0aJOkfDzRFrOaR6+2Op2K=WHGeWg@mail.gmail.com
Alvaro Herrera [Wed, 31 Mar 2021 18:13:42 +0000 (15:13 -0300)]
Fix some libpq_pipeline test problems
Test pipeline_abort was not checking that it got the rows it expected in
one mode; make it do so. This doesn't fix the actual problem (no idea
what that is, yet) but at least it should make it more obvious rather
than being visible only as a difference in the trace output.
While at it, fix other infelicities in the test:
* I reversed the order of result vs. expected in like().
* The output traces from -t are being put in the log dir, which means
the buildfarm script uselessly captures them. Put them in a separate
dir tmp_check/traces instead, to avoid cluttering the buildfarm results.
* Test pipelined_insert was using too large a row count. Reduce that a
tad and add a filler column to make each insert a little bulkier, while
still keeping enough that a buffer is filled and we have to switch mode.
Joe Conway [Wed, 31 Mar 2021 17:55:25 +0000 (13:55 -0400)]
Fix has_column_privilege function corner case
According to the comments, when an invalid or dropped column oid is passed
to has_column_privilege(), the intention has always been to return NULL.
However, when the caller had table level privilege the invalid/missing
column was never discovered, because table permissions were checked first.
Fix that by introducing extended versions of pg_attribute_acl(check|mask)
and pg_class_acl(check|mask) which take a new argument, is_missing. When
is_missing is NULL, the old behavior is preserved. But when is_missing is
passed by the caller, no ERROR is thrown for dropped or missing
columns/relations, and is_missing is flipped to true. This in turn allows
has_column_privilege to check for column privileges first, providing the
desired semantics.
Not backpatched since it is a user visible behavioral change with no previous
complaints, and the fix is a bit on the invasive side.
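A sketch of the now-consistent behavior (table name and attribute
number are illustrative):

    -- even when the caller has table-level privilege, an invalid or
    -- dropped column attnum now yields NULL instead of being masked
    SELECT has_column_privilege('some_table', 999::int2, 'SELECT');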
Author: Joe Conway
Reviewed-By: Tom Lane
Reported by: Ian Barwick
Discussion: https://postgr.es/m/flat/9b5f4311-157b-4164-7fe7-077b4fe8ed84%40joeconway.com
Tom Lane [Wed, 31 Mar 2021 15:52:34 +0000 (11:52 -0400)]
Rework planning and execution of UPDATE and DELETE.
This patch makes two closely related sets of changes:
1. For UPDATE, the subplan of the ModifyTable node now only delivers
the new values of the changed columns (i.e., the expressions computed
in the query's SET clause) plus row identity information such as CTID.
ModifyTable must re-fetch the original tuple to merge in the old
values of any unchanged columns. The core advantage of this is that
the changed columns are uniform across all tables of an inherited or
partitioned target relation, whereas the other columns might not be.
A secondary advantage, when the UPDATE involves joins, is that less
data needs to pass through the plan tree. The disadvantage of course
is an extra fetch of each tuple to be updated. However, that seems to
be very nearly free in context; even worst-case tests don't show it to
add more than a couple percent to the total query cost. At some point
it might be interesting to combine the re-fetch with the tuple access
that ModifyTable must do anyway to mark the old tuple dead; but that
would require a good deal of refactoring and it seems it wouldn't buy
all that much, so this patch doesn't attempt it.
2. For inherited UPDATE/DELETE, instead of generating a separate
subplan for each target relation, we now generate a single subplan
that is just exactly like a SELECT's plan, then stick ModifyTable
on top of that. To let ModifyTable know which target relation a
given incoming row refers to, a tableoid junk column is added to
the row identity information. This gets rid of the horrid hack
that was inheritance_planner(), eliminating O(N^2) planning cost
and memory consumption in cases where there were many unprunable
target relations.
Point 2 of course requires point 1, so that there is a uniform
definition of the non-junk columns to be returned by the subplan.
We can't insist on uniform definition of the row identity junk
columns however, if we want to keep the ability to have both
plain and foreign tables in a partitioning hierarchy. Since
it wouldn't scale very far to have every child table have its
own row identity column, this patch includes provisions to merge
similar row identity columns into one column of the subplan result.
In particular, we can merge the whole-row Vars typically used as
row identity by FDWs into one column by pretending they are type
RECORD. (It's still okay for the actual composite Datums to be
labeled with the table's rowtype OID, though.)
There is more that can be done to file down residual inefficiencies
in this patch, but it seems to be committable now.
FDW authors should note several API changes:
* The argument list for AddForeignUpdateTargets() has changed, and so
has the method it must use for adding junk columns to the query. Call
add_row_identity_var() instead of manipulating the parse tree directly.
You might want to reconsider exactly what you're adding, too.
* PlanDirectModify() must now work a little harder to find the
ForeignScan plan node; if the foreign table is part of a partitioning
hierarchy then the ForeignScan might not be the direct child of
ModifyTable. See postgres_fdw for sample code.
* To check whether a relation is a target relation, it's no
longer sufficient to compare its relid to root->parse->resultRelation.
Instead, check it against all_result_relids or leaf_result_relids,
as appropriate.
Amit Langote and Tom Lane
Discussion: https://postgr.es/m/CA+HiwqHpHdqdDn48yCEhynnniahH78rwcrv1rEX65-fsZGBOLQ@mail.gmail.com
Peter Eisentraut [Wed, 31 Mar 2021 15:09:24 +0000 (17:09 +0200)]
Allow an alias to be attached to a JOIN ... USING
This allows something like
SELECT ... FROM t1 JOIN t2 USING (a, b, c) AS x
where x has the columns a, b, c and unlike a regular alias it does not
hide the range variables of the tables being joined t1 and t2.
Per SQL:2016 feature F404 "Range variable for common column names".
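Making the semantics concrete (columns d and e are illustrative
non-USING columns):

    -- x exposes the common columns a, b, c; t1 and t2 remain visible,
    -- so their other columns are still accessible
    SELECT x.a, t1.d, t2.e
    FROM t1 JOIN t2 USING (a, b, c) AS x;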
Reviewed-by: Vik Fearing <vik.fearing@2ndquadrant.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/454638cf-d563-ab76-a585-2564428062af@2ndquadrant.com
Etsuro Fujita [Wed, 31 Mar 2021 09:45:00 +0000 (18:45 +0900)]
Add support for asynchronous execution.
This implements asynchronous execution, which runs multiple parts of a
non-parallel-aware Append concurrently rather than serially to improve
performance when possible. Currently, the only node type that can be
run concurrently is a ForeignScan that is an immediate child of such an
Append. In the case where such ForeignScans access data on different
remote servers, this runs those ForeignScans concurrently, overlapping
the remote operations so that they execute simultaneously; this
improves performance especially when the operations are time-consuming
ones such as remote joins and remote aggregations.
We may extend this to other node types such as joins or aggregates over
ForeignScans in the future.
This also adds the support for postgres_fdw, which is enabled by the
table-level/server-level option "async_capable". The default is false.
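A minimal sketch of enabling it (the server name is illustrative;
async_capable is the option added by this commit):

    ALTER SERVER remote_srv OPTIONS (ADD async_capable 'true');
    -- ForeignScans on remote_srv under a non-parallel-aware Append
    -- may now be executed concurrently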
Robert Haas, Kyotaro Horiguchi, Thomas Munro, and myself. This commit
is mostly based on the patch proposed by Robert Haas, but also uses
stuff from the patch proposed by Kyotaro Horiguchi and from the patch
proposed by Thomas Munro. Reviewed by Kyotaro Horiguchi, Konstantin
Knizhnik, Andrey Lepikhov, Movead Li, Thomas Munro, Justin Pryzby, and
others.
Discussion: https://postgr.es/m/CA%2BTgmoaXQEt4tZ03FtQhnzeDEMzBck%2BLrni0UWHVVgOTnA6C1w%40mail.gmail.com
Discussion: https://postgr.es/m/CA%2BhUKGLBRyu0rHrDCMC4%3DRn3252gogyp1SjOgG8SEKKZv%3DFwfQ%40mail.gmail.com
Discussion: https://postgr.es/m/20200228.170650.667613673625155850.horikyota.ntt%40gmail.com
Peter Eisentraut [Wed, 31 Mar 2021 08:52:37 +0000 (10:52 +0200)]
Add p_names field to ParseNamespaceItem
ParseNamespaceItem had a wired-in assumption that p_rte->eref
describes the table and column aliases exposed by the nsitem. This
relaxes this by creating a separate p_names field in an nsitem. This
is mainly preparation for a patch for JOIN USING aliases, but it saves
one indirection in common code paths, so it's possibly a win on its
own.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/785329.1616455091@sss.pgh.pa.us
Peter Eisentraut [Wed, 31 Mar 2021 07:15:51 +0000 (09:15 +0200)]
Add errhint_plural() function and make use of it
Similar to existing errmsg_plural() and errdetail_plural(). Some
errhint() calls hadn't received the proper plural treatment yet.
Peter Eisentraut [Wed, 31 Mar 2021 06:20:35 +0000 (08:20 +0200)]
doc: Remove Cyrillic from unistr example
Not supported by PDF build right now, so let's do without it.
Amit Kapila [Wed, 31 Mar 2021 05:06:39 +0000 (10:36 +0530)]
Remove extra semicolon in postgres_fdw tests.
Author: Suraj Kharage
Reviewed-by: Bharath Rupireddy, Vignesh C
Discussion: https://postgr.es/m/CAF1DzPWRfxUeH-wShz7P_pK5Tx6M_nEK+TkS8gn5ngvg07Q5=g@mail.gmail.com
Amit Kapila [Wed, 31 Mar 2021 02:47:50 +0000 (08:17 +0530)]
Doc: Use consistent terminology for tablesync slots.
At some places in the docs, we refer to them as tablesync slots and at other
places as table synchronization slots. For consistency, we refer to them as
table synchronization slots at all places.
Author: Peter Smith
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CAHut+PvzYNKCeZ=kKBDkh3dw-r=2D3fk=nNc9SXSW=CZGk69xg@mail.gmail.com
Noah Misch [Wed, 31 Mar 2021 01:53:44 +0000 (18:53 -0700)]
Accept slightly-filled pages for tuples larger than fillfactor.
We always inserted a larger-than-fillfactor tuple into a newly-extended
page, even when existing pages were empty or contained nothing but an
unused line pointer. This was unnecessary relation extension. Start
tolerating page usage up to 1/8 the maximum space that could be taken up
by line pointers. This is somewhat arbitrary, but it should allow more
cases to reuse pages. This has no effect on tables with fillfactor=100
(the default).
John Naylor and Floris van Nee. Reviewed by Matthias van de Meent.
Reported by Floris van Nee.
Discussion: https://postgr.es/m/6e263217180649339720afe2176c50aa@opammb0562.comp.optiver.com
Michael Paquier [Wed, 31 Mar 2021 00:35:58 +0000 (09:35 +0900)]
Fix comment in parsenodes.h
CreateStmt->inhRelations is a list of RangeVars, but a comment was
incorrect about that.
Author: Julien Rouhaud
Discussion: https://postgr.es/m/20210330123015.yzekhz5sweqbgxdr@nol
Michael Paquier [Wed, 31 Mar 2021 00:12:34 +0000 (09:12 +0900)]
Add support for --extension in pg_dump
When specified, only extensions matching the given pattern are included
in dumps. Similarly to --table and --schema, when --strict-names is
used, a perfect match is required. Also, like the two other options,
this new option offers no guarantee that dependent objects have been
dumped, so a restore may fail on a clean database.
Tests are added in test_pg_dump/, checking a set of positive and
negative cases, with or without an extension's contents added to the
generated dump.
Author: Guillaume Lelarge
Reviewed-by: David Fetter, Tom Lane, Michael Paquier, Asif Rehman,
Julien Rouhaud
Discussion: https://postgr.es/m/CAECtzeXOt4cnMU5+XMZzxBPJ_wu76pNy6HZKPRBL-j7yj1E4+g@mail.gmail.com
Tom Lane [Wed, 31 Mar 2021 00:01:27 +0000 (20:01 -0400)]
Remove small inefficiency in ExecARDeleteTriggers/ExecARUpdateTriggers.
Whilst poking at nodeModifyTable.c, I chanced to notice that while
its calls to ExecBR*Triggers and ExecIR*Triggers are protected by
tests to see if there are any relevant triggers to fire, its calls
to ExecAR*Triggers are not; the latter functions do the equivalent
tests themselves. This seems possibly reasonable given the more
complex conditions involved, but what's less reasonable is that
the ExecAR* functions aren't careful to do no work when there is
no work to be done. ExecARInsertTriggers gets this right, but the
other two will both force creation of a slot that the query may
have no use for. ExecARUpdateTriggers additionally performed a
usually-useless ExecClearTuple() on that slot. This is probably
all pretty microscopic in real workloads, but a cycle shaved is a
cycle earned.
Bruce Momjian [Tue, 30 Mar 2021 23:46:22 +0000 (19:46 -0400)]
adjust dblink regression expected output for commit 5da9868ed9
Seems the -1/singular output is used in the dblink regression tests.
Reported-by: Álvaro Herrera
Discussion: https://postgr.es/m/20210330231506.GA10666@alvherre.pgsql
Alvaro Herrera [Tue, 30 Mar 2021 23:33:04 +0000 (20:33 -0300)]
libpq_pipeline: add PQtrace() support and tests
The libpq_pipeline program recently introduced by commit acb7e4eb6b1c
is well equipped to test the PQtrace() functionality, so let's make it
do that.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20210327192812.GA25115@alvherre.pgsql
Alvaro Herrera [Tue, 30 Mar 2021 23:12:34 +0000 (20:12 -0300)]
Improve PQtrace() output format
Transform the PQtrace output format from its ancient (and mostly
useless) byte-level output format to a logical-message-level output,
making it much more usable. This implementation allows the printing
code to be written (as it indeed was) by looking at the protocol
documentation, which gives more confidence that the three (docs, trace
code and actual code) actually match.
Author: 岩田 彩 (Aya Iwata) <iwata.aya@fujitsu.com>
Reviewed-by: 綱川 貴之 (Takayuki Tsunakawa) <tsunakawa.takay@fujitsu.com>
Reviewed-by: Kirk Jamison <k.jamison@fujitsu.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: 黒田 隼人 (Hayato Kuroda) <kuroda.hayato@fujitsu.com>
Reviewed-by: "Nagaura, Ryohei" <nagaura.ryohei@jp.fujitsu.com>
Reviewed-by: Ryo Matsumura <matsumura.ryo@fujitsu.com>
Reviewed-by: Greg Nancarrow <gregn4422@gmail.com>
Reviewed-by: Jim Doty <jdoty@pivotal.io>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/71E660EB361DF14299875B198D4CE5423DE3FBA4@g01jpexmbkw25
Bruce Momjian [Tue, 30 Mar 2021 22:34:27 +0000 (18:34 -0400)]
In messages, use singular nouns for -1, like we do for +1.
This outputs "-1 year", not "-1 years".
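For example (assuming the default "postgres" interval output style):

    SELECT interval '-1 year';
    -- now prints "-1 year" rather than "-1 years"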
Reported-by: neverov.max@gmail.com
Bug: 16939
Discussion: https://postgr.es/m/16939-cceeb03fb72736ee@postgresql.org
Peter Eisentraut [Tue, 30 Mar 2021 20:05:18 +0000 (22:05 +0200)]
Add tests for date_part of epoch near upper bound of timestamp range
This exercises a special case in the implementations of
date_part('epoch', timestamp[tz]) that was previously not tested.
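One such probe might look like this (the boundary value is
illustrative; the documented timestamp range tops out in AD 294276):

    SELECT date_part('epoch', timestamp '294276-12-31 23:59:59');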
Stephen Frost [Tue, 30 Mar 2021 16:52:56 +0000 (12:52 -0400)]
Use a WaitLatch for vacuum/autovacuum sleeping
Instead of using pg_usleep() in vacuum_delay_point(), use a WaitLatch.
This has the advantage that we will realize if the postmaster has been
killed since the last time we decided to sleep while vacuuming.
Reviewed-by: Thomas Munro
Discussion: https://postgr.es/m/CAFh8B=kcdk8k-Y21RfXPu5dX=bgPqJ8TC3p_qxR_ygdBS=JN5w@mail.gmail.com
Tom Lane [Tue, 30 Mar 2021 14:57:57 +0000 (10:57 -0400)]
Further tweaking of pg_dump's handling of default_toast_compression.
As committed in bbe0a81db, pg_dump from a pre-v14 server effectively
acts as though you'd said --no-toast-compression. I think the right
thing is for it to act as though default_toast_compression is set to
"pglz", instead, so that the tables' toast compression behavior is
preserved. You can always get the other behavior, if you want that,
by giving the switch.
Discussion: https://postgr.es/m/1112852.1616609702@sss.pgh.pa.us
David Rowley [Tue, 30 Mar 2021 07:52:46 +0000 (20:52 +1300)]
Allow estimate_num_groups() to pass back further details about the estimation
Here we add a new output parameter to estimate_num_groups() to allow it to
inform the caller of additional, possibly useful information about the
estimation.
The new output parameter is a struct that currently contains just a single
field with a set of flags. This was done rather than having the flags as
an output parameter to allow future fields to be added without having to
change the signature of the function at a later date when we want to pass
back further information that might not be suitable to store in the flags
field.
It seems reasonable that one day in the future that the planner would want
to know more about the estimation. For example, how many individual sets
of statistics was the estimation generated from? The planner may want to
take that into account if we ever want to consider risks as well as costs
when generating plans.
For now, there's only 1 flag we set in the flags field. This is to
indicate if the estimation fell back on using the hard-coded constants in
any part of the estimation. Callers may like to change their behavior if
this is set, and this gives them the ability to do so. Callers may pass
the flag pointer as NULL if they have no interest in obtaining any
additional information about the estimate.
We're not adding any actual usages of these flags here. Some follow-up
commits will make use of this feature. Additionally, we're also not
making any changes to add support for clauselist_selectivity() and
clauselist_selectivity_ext(). However, if this is required in the future
then the same struct being added here should be fine to use as a new
output argument for those functions too.
Author: David Rowley
Discussion: https://postgr.es/m/CAApHDvqQqpk=1W-G_ds7A9CsXX3BggWj_7okinzkLVhDubQzjA@mail.gmail.com
David Rowley [Tue, 30 Mar 2021 07:28:09 +0000 (20:28 +1300)]
Fix compiler warning in unistr function
Some compilers are not aware that elog/ereport ERROR does not return.
David Rowley [Tue, 30 Mar 2021 06:56:50 +0000 (19:56 +1300)]
Allow users of simplehash.h to perform direct deletions
Previously simplehash.h only exposed a method to perform a hash table
delete using the hash table key. This meant that the delete function had
to perform a hash lookup in order to find the entry to delete. Here we
add a new function so that users of simplehash.h can perform a hash delete
directly using the entry pointer, thus saving the hash lookup.
An upcoming patch that uses simplehash.h already has performed the hash
lookup so already has the entry pointer. This change will allow the
code in that patch to perform the hash delete without the code in
simplehash.h having to perform an additional hash lookup.
Author: David Rowley
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CAApHDvqFLXXge153WmPsjke5VGOSt7Ez0yD0c7eBXLfmWxs3Kw@mail.gmail.com
Peter Eisentraut [Tue, 30 Mar 2021 06:46:34 +0000 (08:46 +0200)]
Add upper boundary tests for timestamp and timestamptz types
The existing regression tests only tested the lower boundary of the
range supported by the timestamp and timestamptz types because "The
upper boundary differs between integer and float timestamps, so no
check". Since this is obsolete, add similar tests for the upper
boundary.
Amit Kapila [Tue, 30 Mar 2021 05:04:43 +0000 (10:34 +0530)]
Add an xid argument to the filter_prepare callback for output plugins.
Along with gid, this provides a different way to identify the transaction.
Users that rely on the xid in some way when preparing transactions can use
it to filter prepared transactions. The later COMMIT PREPARED and
ROLLBACK PREPARED commands carry both identifiers, giving an output plugin the
choice of what to use.
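A hedged sketch of a plugin callback using the new argument (signature
per this commit; is_skippable_xid() is a hypothetical helper):
static bool
my_filter_prepare(LogicalDecodingContext *ctx, TransactionId xid,
                  const char *gid)
{
    /* return true to filter out (skip two-phase decoding of) the
     * transaction, here keyed on xid rather than gid */
    return is_skippable_xid(xid);
}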
Author: Markus Wanner
Reviewed-by: Vignesh C, Amit Kapila
Discussion: https://postgr.es/m/ee280000-7355-c4dc-e47b-2436e7be959c@enterprisedb.com
Etsuro Fujita [Tue, 30 Mar 2021 04:00:00 +0000 (13:00 +0900)]
Update obsolete comment.
Back-patch to all supported branches.
Author: Etsuro Fujita
Discussion: https://postgr.es/m/CAPmGK17DwzaSf%2BB71dhL2apXdtG-OmD6u2AL9Cq2ZmAR0%2BzapQ%40mail.gmail.com
Alvaro Herrera [Mon, 29 Mar 2021 21:34:39 +0000 (18:34 -0300)]
psql: call clearerr() just before printing
We were never doing clearerr() on the output stream, which results in a
message being printed after each result once an EOF is seen:
could not print result table: Success
This message was added by commit b03436994bcc (in the pg13 era); before
that, the error indicator would never be examined. So backpatch only
that far back, even though the actual bug (to wit: the fact that the
error indicator is never cleared) is older.
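A hedged sketch of the pattern (psql's actual variable names may differ):
clearerr(fout);                      /* forget any stale EOF/error flag */
printTable(&cont, fout, false, flog);
if (ferror(fout))
    pg_log_error("could not print result table: %m");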
David Rowley [Mon, 29 Mar 2021 21:17:09 +0000 (10:17 +1300)]
Adjust design of per-worker parallel seqscan data struct
The design of the data structures which allow storage of the per-worker
memory during parallel seq scans was not ideal. The work done in
56788d215 required an additional data structure to allow workers to
remember the range of pages that had been allocated to them for
processing during a parallel seqscan. That commit added a void pointer
field to TableScanDescData to allow heapam to store the per-worker
allocation information. However, putting the field there made very little
sense given that we have AM specific structs for that, e.g.
HeapScanDescData.
Here we remove the void pointer field from TableScanDescData and add a
dedicated field for this purpose to HeapScanDescData.
Previously we also allocated memory for this parallel per-worker data for
all scans, regardless of whether the scan was parallel. This was just a
wasted allocation for non-parallel scans, so here we make the allocation
conditional on the scan being parallel.
Also, add a previously missing pfree() to free the per-worker data in
heap_endscan().
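A hedged sketch of the conditional allocation (field and type names per
this commit and 56788d215):
/* in heap_beginscan(): only parallel scans need per-worker state */
if (scan->rs_base.rs_parallel != NULL)
    scan->rs_parallelworkerdata =
        palloc(sizeof(ParallelBlockTableScanWorkerData));
else
    scan->rs_parallelworkerdata = NULL;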
Reported-by: Andres Freund
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/20210317023101.anvejcfotwka6gaa@alap3.anarazel.de
Andrew Dunstan [Mon, 29 Mar 2021 19:31:22 +0000 (15:31 -0400)]
Allow matching the DN of a client certificate for authentication
Currently we only recognize the Common Name (CN) of a certificate's
subject to be matched against the user name. Thus certificates with
subjects '/OU=eng/CN=fred' and '/OU=sales/CN=fred' will have the same
connection rights. This patch provides an option to match the whole
Distinguished Name (DN) instead of just the CN. On any hba line using
client certificate identity, there is an option 'clientname' which can
have values of 'DN' or 'CN'. The default is 'CN', the current procedure.
The DN is matched against the RFC2253 formatted DN, which looks like
'CN=fred,OU=eng'.
This facility is probably best used in conjunction with an ident map.
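A hedged pg_hba.conf sketch (the map name is hypothetical):
hostssl all all 0.0.0.0/0 cert clientname=DN map=dn_map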
Discussion: https://postgr.es/m/92e70110-9273-d93c-5913-0bccb6562740@dunslane.net
Reviewed-By: Michael Paquier, Daniel Gustafsson, Jacob Champion
Peter Eisentraut [Mon, 29 Mar 2021 15:53:30 +0000 (17:53 +0200)]
Clean up date_part tests a bit
Some tests for timestamp and timestamptz were in the date.sql test
file. Move them to their appropriate files, or drop test cases that
were already present there.
Peter Eisentraut [Sun, 28 Mar 2021 06:16:15 +0000 (08:16 +0200)]
Add unistr function
This allows decoding a string with Unicode escape sequences. It is
similar to Unicode escape strings, but offers some more flexibility.
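For example, both the 4-digit and 6-digit escape forms decode to the
same character here:
SELECT unistr('d\0061t\+000061');  -- yields 'data'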
Author: Pavel Stehule <pavel.stehule@gmail.com>
Reviewed-by: Asif Rehman <asifr.rehman@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAFj8pRA5GnKT+gDVwbVRH2ep451H_myBt+NTz8RkYUARE9+qOQ@mail.gmail.com
Peter Eisentraut [Mon, 29 Mar 2021 06:40:39 +0000 (08:40 +0200)]
Reset standard_conforming_strings in strings test
After some tests relating to standard_conforming_strings behavior, the
value was not reset to the default value. Therefore, the rest of the
tests in that file ran with the nondefault setting, which affected the
results of some tests. For clarity, reset the value and run the rest
of the tests with the default setting again.
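A hedged sketch of the reset pattern:
SET standard_conforming_strings TO off;
-- ... tests of nonstandard string-literal behavior ...
RESET standard_conforming_strings;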
Peter Geoghegan [Mon, 29 Mar 2021 03:10:02 +0000 (20:10 -0700)]
PageAddItemExtended(): Add LP_UNUSED assertion.
Assert that LP_UNUSED items have no storage. If it's worth having
defensive code in non-assert builds then it's worth having an assertion
as well.
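A hedged sketch of such an assertion, using the existing
ItemIdHasStorage() macro:
Assert(!ItemIdHasStorage(itemId));   /* LP_UNUSED implies lp_len == 0 */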
David Rowley [Mon, 29 Mar 2021 01:47:05 +0000 (14:47 +1300)]
Cache if PathTarget and RestrictInfos contain volatile functions
Here we aim to reduce duplicate work done by contain_volatile_functions()
by caching whether PathTargets and RestrictInfos contain any volatile
functions the first time contain_volatile_functions() is called for them.
Any future calls for these nodes just use the cached value rather than
going to the trouble of recursively checking the sub-node all over again.
Thanks to Tom Lane for the idea.
Any locations in the code which make changes to a PathTarget or
RestrictInfo which could change the outcome of the volatility check must
change the cached value back to VOLATILITY_UNKNOWN again.
contain_volatile_functions() is the only code in charge of setting the
cache value to either VOLATILITY_VOLATILE or VOLATILITY_NOVOLATILE.
Some existing code does benefit from this additional caching; however,
this change is mainly aimed at an upcoming patch that must check for
volatility during the join search. Repeated volatility checks in that
case can become very expensive when the join search contains more than a
few relations.
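A hedged sketch of the caching pattern (field and enum names per this
commit):
if (IsA(node, RestrictInfo))
{
    RestrictInfo *rinfo = (RestrictInfo *) node;
    if (rinfo->has_volatile == VOLATILITY_UNKNOWN)
        rinfo->has_volatile =
            contain_volatile_functions((Node *) rinfo->clause)
            ? VOLATILITY_VOLATILE : VOLATILITY_NOVOLATILE;
    return rinfo->has_volatile == VOLATILITY_VOLATILE;
}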
Author: David Rowley
Discussion: https://postgr.es/m/3795226.1614059027@sss.pgh.pa.us
Stephen Frost [Sun, 28 Mar 2021 15:27:59 +0000 (11:27 -0400)]
doc: Define TLS as an acronym
Commit c6763156589 added an acronym reference for "TLS" but the definition
was never added.
Author: Daniel Gustafsson
Reviewed-by: Michael Paquier
Backpatch-through: 9.6
Discussion: https://postgr.es/m/27109504-82DB-41A8-8E63-C0498314F5B0@yesql.se
Tomas Vondra [Sat, 27 Mar 2021 17:26:52 +0000 (18:26 +0100)]
Stabilize stats_ext test with other collations
The tests used string concatenation to test statistics on expressions,
but that made the tests locale-dependent, e.g. because the ordering of
'11' and '1X' depends on the collation. This affected both the estimated
and actual row counts, breaking some of the tests.
Fixed by replacing the string concatenation with an upper() function call,
so that the text values contain only digits.
Discussion: https://postgr.es/m/b650920b-2767-fbc3-c87a-cb8b5d693cbf%40enterprisedb.com
Peter Eisentraut [Sat, 27 Mar 2021 09:17:12 +0000 (10:17 +0100)]
Improve consistency of SQL code capitalization
Tomas Vondra [Fri, 26 Mar 2021 22:22:01 +0000 (23:22 +0100)]
Extended statistics on expressions
Allow defining extended statistics on expressions, not just on
simple column references. With this commit, expressions are supported
by all existing extended statistics kinds, improving the same types of
estimates. A simple example may look like this:
CREATE TABLE t (a int);
CREATE STATISTICS s ON mod(a,10), mod(a,20) FROM t;
ANALYZE t;
The collected statistics are useful e.g. to estimate queries with those
expressions in WHERE or GROUP BY clauses:
SELECT * FROM t WHERE mod(a,10) = 0 AND mod(a,20) = 0;
SELECT 1 FROM t GROUP BY mod(a,10), mod(a,20);
This introduces a new internal statistics kind 'e' (expressions) which is
built automatically when the statistics object definition includes any
expressions. This represents single-expression statistics, as if there
was an expression index (but without the index maintenance overhead).
The statistics are stored in pg_statistic_ext_data as an array of
composite types, which is possible thanks to commit 79f6a942bd.
CREATE STATISTICS allows building statistics on a single expression, in
which case it's not possible to specify statistics kinds.
A new system view pg_stats_ext_exprs can be used to display expression
statistics, similarly to pg_stats and pg_stats_ext views.
ALTER TABLE ... ALTER COLUMN ... TYPE now treats extended statistics the
same way it treats indexes, i.e. it drops and recreates the statistics.
all statistics are reset, and we no longer try to preserve at least the
functional dependencies. This should not be a major issue in practice,
as the functional dependencies actually rely on per-column statistics,
which were always reset anyway.
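A hedged example of inspecting the new view (column names assumed to
mirror pg_stats):
SELECT statistics_name, expr, n_distinct
FROM pg_stats_ext_exprs WHERE tablename = 't';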
Author: Tomas Vondra
Reviewed-by: Justin Pryzby, Dean Rasheed, Zhihong Yu
Discussion: https://postgr.es/m/ad7891d2-e90c-b446-9fe2-7419143847d7%40enterprisedb.com
Tomas Vondra [Fri, 26 Mar 2021 22:00:41 +0000 (23:00 +0100)]
Reduce duration of stats_ext regression tests
The regression tests of extended statistics were taking a fair amount of
time, due to using fairly large data sets with a couple thousand rows.
So far this was fine, but with tests for statistics on expressions the
duration would get a bit excessive. So reduce the size of some of the
tests that will be used to test expressions, to keep the duration under
control. Done in a separate commit before adding the statistics on
expressions, to make it clear which estimates are expected to change.
Author: Tomas Vondra
Discussion: https://postgr.es/m/ad7891d2-e90c-b446-9fe2-7419143847d7%40enterprisedb.com
Tomas Vondra [Fri, 26 Mar 2021 21:34:53 +0000 (22:34 +0100)]
Fix ndistinct estimates with system attributes
When estimating the number of groups using extended statistics, the code
was discarding information about system attributes. This led to the
strange situation that
SELECT 1 FROM t GROUP BY ctid;
could have produced a higher estimate (equal to pg_class.reltuples) than
SELECT 1 FROM t GROUP BY a, b, ctid;
with extended statistics on (a,b). Fixed by retaining information about
the system attribute.
Backpatch all the way to 10, where extended statistics were introduced.
Author: Tomas Vondra
Backpatch-through: 10
Noah Misch [Fri, 26 Mar 2021 17:42:17 +0000 (10:42 -0700)]
Add "pg_database_owner" default role.
Membership consists, implicitly, of the current database owner. Expect
use in template databases. Once pg_database_owner has rights within a
template, each owner of a database instantiated from that template will
exercise those rights.
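A hedged usage sketch, granting rights inside a template database so
that each future database owner inherits them:
-- in template1 (or another template database):
GRANT ALL ON SCHEMA public TO pg_database_owner;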
Reviewed by John Naylor.
Discussion: https://postgr.es/m/20201228043148.GA1053024@rfd.leadboat.com
Noah Misch [Fri, 26 Mar 2021 17:42:16 +0000 (10:42 -0700)]
Merge similar algorithms into roles_is_member_of().
The next commit would have complicated two or three algorithms, so take
this opportunity to consolidate. No functional changes.
Reviewed by John Naylor.
Discussion: https://postgr.es/m/20201228043148.GA1053024@rfd.leadboat.com
Tomas Vondra [Fri, 26 Mar 2021 15:33:19 +0000 (16:33 +0100)]
Fix alignment in BRIN minmax-multi deserialization
The deserialization failed to ensure correct alignment, as it assumed it
could simply point into the serialized value. The serialization, however,
ignores alignment and copies just the significant bytes in order to make
the result as small as possible. This caused failures on systems that
are sensitive to misaligned addresses, like sparc, or with address
sanitizer enabled.
Fixed by copying the serialized data to ensure proper alignment. While
at it, fix an issue with serialization on big endian machines, using the
same store_att_byval/fetch_att trick as extended statistics.
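A hedged sketch of the copy-for-alignment idiom (variable names
illustrative; palloc() returns MAXALIGNed memory):
char *copy = palloc(len);
memcpy(copy, packed_bytes, len);   /* safe to cast and dereference now */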
Discussion: https://postgr.es/m/0c8c3304-d3dd-5e29-d5ac-b50589a23c8c%40enterprisedb.com
Tomas Vondra [Fri, 26 Mar 2021 12:54:29 +0000 (13:54 +0100)]
BRIN minmax-multi indexes
Adds BRIN opclasses similar to the existing minmax, except that instead
of summarizing the page range into a single [min,max] range, the summary
consists of multiple ranges and/or points, allowing gaps. This allows
more efficient handling of data with poor correlation to physical
location within the table and/or outlier values, for which the regular
minmax opclasses tend to work poorly.
It's possible to specify the number of values kept for each page range,
either as a single point or an interval boundary.
CREATE TABLE t (a int);
CREATE INDEX ON t
USING brin (a int4_minmax_multi_ops(values_per_range=16));
When building the summary, the values are combined into intervals with
the goal of minimizing the "covering" (sum of interval lengths), using a
support procedure computing distance between two values.
Bump catversion, due to various catalog changes.
Author: Tomas Vondra <tomas.vondra@postgresql.org>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: Sokolov Yura <y.sokolov@postgrespro.ru>
Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://postgr.es/m/c1138ead-7668-f0e1-0638-c3be3237e812@2ndquadrant.com
Discussion: https://postgr.es/m/5d78b774-7e9c-c94e-12cf-fef51cc89b1a%402ndquadrant.com
Tomas Vondra [Fri, 26 Mar 2021 12:35:29 +0000 (13:35 +0100)]
BRIN bloom indexes
Adds BRIN opclasses using a Bloom filter to summarize the range. Indexes
using the new opclasses allow only equality queries (similar to hash
indexes), but that works fine for data like UUID, MAC addresses etc. for
which range queries are not very common. This also means the indexes
work for data that is not well correlated to physical location within
the table, or perhaps even entirely random (which is a common issue with
existing BRIN minmax opclasses).
It's possible to specify opclass parameters with the usual Bloom filter
parameters, i.e. the desired false-positive rate and the expected number
of distinct values per page range.
CREATE TABLE t (a int);
CREATE INDEX ON t
USING brin (a int4_bloom_ops(false_positive_rate = 0.05,
n_distinct_per_range = 100));
The opclasses do not operate on the indexed values directly, but compute
a 32-bit hash first, and the Bloom filter is built on the hash value.
Collisions should not be a huge issue though, as the number of distinct
values in a page range is usually fairly small.
Bump catversion, due to various catalog changes.
Author: Tomas Vondra <tomas.vondra@postgresql.org>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: Sokolov Yura <y.sokolov@postgrespro.ru>
Reviewed-by: Nico Williams <nico@cryptonector.com>
Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://postgr.es/m/c1138ead-7668-f0e1-0638-c3be3237e812@2ndquadrant.com
Discussion: https://postgr.es/m/5d78b774-7e9c-c94e-12cf-fef51cc89b1a%402ndquadrant.com
Tomas Vondra [Fri, 26 Mar 2021 12:17:56 +0000 (13:17 +0100)]
Support the old signature of BRIN consistent function
Commit a1c649d889 changed the signature of the BRIN consistent function
by adding a new required parameter. Treating the parameter as optional,
which would have made the change backward compatible, was rejected with
the justification that there are few out-of-core extensions, so it's not
worth making the code more complex, and it's better to deal with
that in the extension.
But after further thought, that would be rather problematic, because
pg_upgrade simply dumps catalog contents and the same version of an
extension needs to work on both PostgreSQL versions. Supporting both
variants of the consistent function (with 3 or 4 arguments) makes that
possible.
The signature is not the only thing that changed, as commit 72ccf55cb9
moved handling of IS [NOT] NULL keys from the support procedures. But
this change is backward compatible - handling the keys in the extension is
unnecessary, but harmless. The consistent function will do a bit of
unnecessary work, but it should be very cheap.
This also undoes most of the changes to the existing opclasses (minmax
and inclusion), making them use the old signature again. This should
make backpatching simpler.
Catversion bump, because of changes in pg_amproc.
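A hedged sketch of the arity-based dispatch (argument marshalling
illustrative):
if (consistentFn->fn_nargs == 4)
    matches = FunctionCall4Coll(consistentFn, collation,
                                PointerGetDatum(bdesc),
                                PointerGetDatum(bval),
                                PointerGetDatum(keys),
                                Int32GetDatum(nkeys));
else
    matches = FunctionCall3Coll(consistentFn, collation,
                                PointerGetDatum(bdesc),
                                PointerGetDatum(bval),
                                PointerGetDatum(keys[0]));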
Author: Tomas Vondra <tomas.vondra@postgresql.org>
Author: Nikita Glukhov <n.gluhov@postgrespro.ru>
Reviewed-by: Mark Dilger <hornschnorter@gmail.com>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: Masahiko Sawada <masahiko.sawada@enterprisedb.com>
Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://postgr.es/m/c1138ead-7668-f0e1-0638-c3be3237e812@2ndquadrant.com
Tomas Vondra [Fri, 26 Mar 2021 12:04:13 +0000 (13:04 +0100)]
Remove unnecessary pg_amproc BRIN minmax entries
The BRIN minmax opclasses included amproc entries with mismatching left
and right types, but those happen to be unnecessary. The opclasses only
need cross-type operators, not cross-type support procedures. Discovered
when trying to define equivalent BRIN operator families in an extension.
Catversion bump, because of pg_amproc changes.
Author: Tomas Vondra
Reviewed-by: Alvaro Herrera
Discussion: https://postgr.es/m/78c357ab-3395-8433-e7b3-b2cfcc9fdc23%40enterprisedb.com
Robert Haas [Thu, 25 Mar 2021 23:55:32 +0000 (19:55 -0400)]
Fix interaction of TOAST compression with expression indexes.
Before, trying to compress a value for insertion into an expression
index would crash.
Dilip Kumar, with some editing by me. Report by Jaime Casanova.
Discussion: http://postgr.es/m/CAJKUy5gcs0zGOp6JXU2mMVdthYhuQpFk=S3V8DOKT=LZC1L36Q@mail.gmail.com
Alvaro Herrera [Thu, 25 Mar 2021 21:00:28 +0000 (18:00 -0300)]
ALTER TABLE ... DETACH PARTITION ... CONCURRENTLY
Allow a partition to be detached from its partitioned table without
blocking concurrent queries, by running in two transactions and only
requiring ShareUpdateExclusive on the partitioned table.
Because it runs in two transactions, it cannot be used in a transaction
block. This is the main reason to use dedicated syntax: so that users
can choose to use the original mode if they need it. But also, it
doesn't work when a default partition exists (because an exclusive lock
would still need to be obtained on it, in order to change its partition
constraint.)
In case the second transaction is cancelled or a crash occurs, there's
ALTER TABLE .. DETACH PARTITION .. FINALIZE, which executes the final
steps.
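A hedged usage sketch (table names hypothetical):
ALTER TABLE measurements DETACH PARTITION measurements_2020 CONCURRENTLY;
-- after a cancel or crash during the second transaction:
ALTER TABLE measurements DETACH PARTITION measurements_2020 FINALIZE;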
The main trick to make this work is the addition of column
pg_inherits.inhdetachpending, initially false; it can only be set true in
the first part of this command. Once that is committed, concurrent
transactions that use a PartitionDirectory will include or ignore
partitions so marked: in the optimizer they are ignored if the row is
marked committed for the snapshot; in the executor they are always
included. As a
result, and because of the way PartitionDirectory caches partition
descriptors, queries that were planned before the detach will see the
rows in the detached partition and queries that are planned after the
detach, won't.
A CHECK constraint is created that duplicates the partition constraint.
This is probably not strictly necessary, and some users will prefer to
remove it afterwards, but if the partition is re-attached to a
partitioned table, the constraint needn't be rechecked.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20200803234854.GA24158@alvherre.pgsql
Alvaro Herrera [Thu, 25 Mar 2021 19:30:22 +0000 (16:30 -0300)]
Document lock obtained during partition detach
On partition detach, we acquire a SHARE lock on all tables that
reference the partitioned table that we're detaching a partition from,
but failed to document this fact. My oversight in commit f56f8f8da6af.
Repair. Backpatch to 12.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20210325180244.GA12738@alvherre.pgsql
Alvaro Herrera [Thu, 25 Mar 2021 19:07:15 +0000 (16:07 -0300)]
Add comments for AlteredTableInfo->rel
The prior commit which introduced it was pretty squalid in terms of
code documentation, so add some comments.
Alvaro Herrera [Thu, 25 Mar 2021 18:56:11 +0000 (15:56 -0300)]
Let ALTER TABLE Phase 2 routines manage the relation pointer
Struct AlteredTableInfo gains a new Relation member, to be used only
by Phase 2 (ATRewriteCatalogs); this allows ATExecCmd() subroutines to
open and close the relation internally.
A future commit will use this facility to implement an ALTER TABLE
subcommand that closes and reopens the relation across transaction
boundaries.
(It is possible to keep the relation open past phase 2 to be used by
phase 3 instead of having to reopen it at that point, but there are some
minor complications with that; it's not clear that there is much to be
won from doing that, though.)
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20200803234854.GA24158@alvherre.pgsql
Alvaro Herrera [Mon, 22 Feb 2021 20:21:22 +0000 (17:21 -0300)]
Rework HeapTupleHeader macros to reuse itemptr.h
The original definitions pointlessly disregarded existing ItemPointer
macros that do the same thing.
Reported-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/20210222201557.GA32655@alvherre.pgsql
Alvaro Herrera [Thu, 25 Mar 2021 13:47:38 +0000 (10:47 -0300)]
Remove StoreSingleInheritance reimplementation
I introduced this duplicate code in commit 8b08f7d4820f for no good
reason. Remove it, and backpatch to 11 where it was introduced.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Peter Eisentraut [Thu, 25 Mar 2021 09:17:52 +0000 (10:17 +0100)]
Trim some extra whitespace in parser file
Peter Eisentraut [Thu, 25 Mar 2021 09:06:32 +0000 (10:06 +0100)]
Rename a parse node to be more general
A WHERE clause will be used for row filtering in logical replication.
We already have a similar node: 'WHERE (condition here)'. Let's
rename the node to a generic name and use it for row filtering too.
Author: Euler Taveira <euler.taveira@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/CAHE3wggb715X+mK_DitLXF25B=jE6xyNCH4YOwM860JR7HarGQ@mail.gmail.com
Michael Paquier [Thu, 25 Mar 2021 07:08:03 +0000 (16:08 +0900)]
Sanitize the term "combo CID" in code comments
Combo CIDs were referred to in the code comments using different terms
across various places of the code, so unify the term used with
what is currently in use in some of the READMEs.
Author: "Hou, Zhijie"
Discussion: https://postgr.es/m/1d42865c91404f46af4562532fdbea31@G08CNEXMBPEKD05.g08.fujitsu.local
Fujii Masao [Thu, 25 Mar 2021 02:23:30 +0000 (11:23 +0900)]
Fix bug in WAL replay of COMMIT_TS_SETTS record.
Previously the WAL replay of a COMMIT_TS_SETTS record called
TransactionTreeSetCommitTsData() with the argument write_xlog=true,
which generated and wrote a new COMMIT_TS_SETTS record.
This is not acceptable because it happens during recovery.
This commit fixes the WAL replay of the COMMIT_TS_SETTS record
so that it calls TransactionTreeSetCommitTsData() with write_xlog=false
and doesn't generate new WAL during recovery.
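A hedged sketch of the redo-path change (record field names
hypothetical):
xl_commit_ts_set *setts = (xl_commit_ts_set *) XLogRecGetData(record);
TransactionTreeSetCommitTsData(setts->mainxid, nsubxids, subxids,
                               setts->timestamp, setts->nodeid,
                               false);   /* write_xlog: was true */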
Back-patch to all supported branches.
Reported-by: lx zou <zoulx1982@163.com>
Author: Fujii Masao
Reviewed-by: Alvaro Herrera
Discussion: https://postgr.es/m/16931-620d0f2fdc6108f1@postgresql.org
Fujii Masao [Thu, 25 Mar 2021 01:41:28 +0000 (10:41 +0900)]
Improve connection denied error message during recovery.
Previously when an archive recovery or a standby was starting and
reached the consistent recovery state but hot_standby was configured
to off, the error message when a client connected was "the database
system is starting up", which was needlessly confusing and not really
all that accurate either.
This commit improves the connection denied error message during
recovery, as follows, so that the users immediately know that their
servers are configured to deny those connections.
- If hot_standby is disabled, the error message "the database system
is not accepting connections" and the detail message "Hot standby
mode is disabled." are output when clients connect while an archive
recovery or a standby is running.
- If hot_standby is enabled, the error message "the database system
is not yet accepting connections" and the detail message
"Consistent recovery state has not been yet reached." are output
when clients connect until the consistent recovery state is reached
and postmaster starts accepting read only connections.
This commit doesn't change the connection denied error message of
"the database system is starting up" during normal server startup and
crash recovery, because it's still suitable for those situations.
Author: James Coleman
Reviewed-by: Alvaro Herrera, Andres Freund, David Zhang, Tom Lane, Fujii Masao
Discussion: https://postgr.es/m/CAAaqYe8h5ES_B=F_zDT+Nj9XU7YEwNhKhHA2RE4CFhAQ93hfig@mail.gmail.com
Andrew Dunstan [Wed, 24 Mar 2021 22:52:25 +0000 (18:52 -0400)]
Allow for installation-aware instances of PostgresNode
Currently instances of PostgresNode find their Postgres executables in
the PATH of the caller. This modification allows for instances that know
the installation path they are supposed to use, and the module adjusts
the environment of methods that call Postgres executables appropriately.
This facility is activated by passing the installation path to the
constructor:
my $node = PostgresNode->get_new_node('mynode',
installation_path => '/path/to/installation');
This makes a number of things substantially easier, including
. testing third party modules
. testing different versions of postgres together
. testing different builds of postgres together
Discussion: https://postgr.es/m/a94c74f9-6b71-1957-7973-a734ea3cbef1@dunslane.net
Reviewed-By: Alvaro Herrera, Michael Paquier, Dagfinn Ilmari Mannsåker
Michael Meskes [Wed, 24 Mar 2021 21:06:31 +0000 (22:06 +0100)]
Need to step forward in the loop to get to an end.
Michael Meskes [Wed, 24 Mar 2021 19:48:20 +0000 (20:48 +0100)]
Add DECLARE STATEMENT command to ECPG
This command declares a SQL identifier for a SQL statement to be used in other
embedded SQL statements. The identifier is linked to a connection.
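A hedged embedded-SQL sketch (connection and statement names
hypothetical):
EXEC SQL AT con1 DECLARE stmt STATEMENT;
EXEC SQL PREPARE stmt FROM :query;   /* routed to con1 via the declaration */
EXEC SQL EXECUTE stmt;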
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Shawn Wang <shawn.wang.pg@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/TY2PR01MB24438A52DB04E71D0E501452F5630@TY2PR01MB2443.jpnprd01.prod.outlook.com