Robert Haas [Tue, 13 Apr 2021 18:56:12 +0000 (14:56 -0400)]
docs: Update TOAST storage docs for configurable compression.
Mention that there are multiple TOAST compression methods and that the
compression method used is stored in a TOAST pointer along with the
other information that was stored there previously. Add a reference to
the documentation for default_toast_compression, where the supported
methods are listed, instead of duplicating that here.
I haven't tried to preserve the text claiming that pglz is "fairly
simple and very fast." I have no view on the veracity of the former
claim, but LZ4 seems to be faster (and to compress better) so it
seems better not to muddy the waters by talking about compression
speed as a strong point of PGLZ.
Patch by me, reviewed by Justin Pryzby.
Discussion: http://postgr.es/m/CA+Tgmoaw_YBwQhOS_hhEPPwFhfAnu+VCLs18EfGr9gQw1z4H-w@mail.gmail.com
Tom Lane [Tue, 13 Apr 2021 19:10:18 +0000 (15:10 -0400)]
Fix some inappropriately-disallowed uses of ALTER ROLE/DATABASE SET.
Most GUC check hooks that inspect database state have special checks
that prevent them from throwing hard errors for state-dependent issues
when source == PGC_S_TEST. This allows, for example,
"ALTER DATABASE d SET default_text_search_config = foo" when the "foo"
configuration hasn't been created yet. Without this, we have problems
during dump/reload or pg_upgrade, because pg_dump has no idea about
possible dependencies of GUC values and can't ensure a safe restore
ordering.
However, check_role() and check_session_authorization() hadn't gotten
the memo about that, and would throw hard errors anyway. It's not
entirely clear what the use case for "ALTER ROLE x SET role = y" is,
but we've now heard two independent complaints about that bollixing
an upgrade, so apparently some people are doing it.
Hence, fix these two functions to act more like other check hooks
with similar needs. (But I did not change their insistence on
being inside a transaction, as it's still not apparent that setting
either GUC from the configuration file would be wise.)
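An illustrative sketch of the now-permitted usage (role, database, and
configuration names are hypothetical):
-- Accepted at SET time even if the target can't be validated yet;
-- the values are checked when the setting is actually applied.
ALTER ROLE alice SET role = 'bob';
ALTER ROLE alice SET session_authorization = 'bob';
ALTER DATABASE mydb SET default_text_search_config = 'public.mycfg';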
Also fix check_temp_buffers, which had a different form of the disease
of making state-dependent checks without any exception for PGC_S_TEST.
A cursory survey of other GUC check hooks did not find any more issues
of this ilk. (There are a lot of interdependencies among
PGC_POSTMASTER and PGC_SIGHUP GUCs, which may be a bad idea, but
they're not relevant to the immediate concern because they can't be
set via ALTER ROLE/DATABASE.)
Per reports from Charlie Hornsby and Nathan Bossart. Back-patch
to all supported branches.
Discussion: https://postgr.es/m/HE1P189MB0523B31598B0C772C908088DB7709@HE1P189MB0523.EURP189.PROD.OUTLOOK.COM
Discussion: https://postgr.es/m/20160711223641.1426.86096@wrigleys.postgresql.org
Tom Lane [Tue, 13 Apr 2021 17:37:07 +0000 (13:37 -0400)]
Redesign the caching done by get_cached_rowtype().
Previously, get_cached_rowtype() cached a pointer to a reference-counted
tuple descriptor from the typcache, relying on the ExprContextCallback
mechanism to release the tupdesc refcount when the expression tree
using the tupdesc was destroyed. This worked fine when it was designed,
but the introduction of within-DO-block COMMITs broke it. The refcount
is logged in a transaction-lifespan resource owner, but plpgsql won't
destroy simple expressions made within the DO block (before its first
commit) until the DO block is exited. That results in a warning about
a leaked tupdesc refcount when the COMMIT destroys the original resource
owner, and then an error about the active resource owner not holding a
matching refcount when the expression is destroyed.
To fix, get rid of the need to have a shutdown callback at all, by
instead caching a pointer to the relevant typcache entry. Those
survive for the life of the backend, so we needn't worry about the
pointer becoming stale. (For registered RECORD types, we can still
cache a pointer to the tupdesc, knowing that it won't change for the
life of the backend.) This mechanism has been in use in plpgsql
and expandedrecord.c since commit 4b93f5799, and seems to work well.
This change requires modifying the ExprEvalStep structs used by the
relevant expression step types, which is slightly worrisome for
back-patching. However, there seems no good reason for extensions
to be familiar with the details of these particular sub-structs.
Per report from Rohit Bhogate. Back-patch to v11 where within-DO-block
COMMITs became a thing.
Discussion: https://postgr.es/m/CAAV6ZkQRCVBh8qAY+SZiHnz+U+FqAGBBDaDTjF2yiKa2nJSLKg@mail.gmail.com
Tom Lane [Tue, 13 Apr 2021 16:17:24 +0000 (12:17 -0400)]
Avoid improbable PANIC during heap_update.
heap_update needs to clear any existing "all visible" flag on
the old tuple's page (and on the new page too, if different).
Per coding rules, to do this it must acquire pin on the appropriate
visibility-map page while not holding exclusive buffer lock;
which creates a race condition since someone else could set the
flag whenever we're not holding the buffer lock. The code is
supposed to handle that by re-checking the flag after acquiring
buffer lock and retrying if it became set. However, one code
path through heap_update itself, as well as one in its subroutine
RelationGetBufferForTuple, failed to do this. The end result,
in the unlikely event that a concurrent VACUUM did set the flag
while we're transiently not holding lock, is a non-recurring
"PANIC: wrong buffer passed to visibilitymap_clear" failure.
This has been seen a few times in the buildfarm since recent VACUUM
changes that added code paths that could set the all-visible flag
while holding only exclusive buffer lock. Previously, the flag
was (usually?) set only after doing LockBufferForCleanup, which
would insist on buffer pin count zero, thus preventing the flag
from becoming set partway through heap_update. However, it's
clear that it's heap_update not VACUUM that's at fault here.
What's less clear is whether there is any hazard from these bugs
in released branches. heap_update is certainly violating API
expectations, but if there is no code path that can set all-visible
without a cleanup lock then it's only a latent bug. That's not
100% certain though, besides which we should worry about extensions
or future back-patch fixes that could introduce such code paths.
I chose to back-patch to v12. Fixing RelationGetBufferForTuple
before that would require also back-patching portions of older
fixes (notably 0d1fe9f74), which is more code churn than seems
prudent to fix a hypothetical issue.
Discussion: https://postgr.es/m/2247102.1618008027@sss.pgh.pa.us
Fujii Masao [Tue, 13 Apr 2021 05:21:51 +0000 (14:21 +0900)]
doc: Fix typo in logicaldecoding.sgml.
Introduced in commit 0aa8a01d04.
Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+Ps8rkVHKWyjg09Fb1PaVG5-EhoFTFJ9OZMF4HPzDSXfew@mail.gmail.com
Noah Misch [Tue, 13 Apr 2021 02:24:41 +0000 (19:24 -0700)]
Use "-I." in directories holding Bison parsers, for Oracle compilers.
With the Oracle Developer Studio 12.6 compiler, #line directives alter
the current source file location for purposes of #include "..."
directives. Hence, a VPATH build failed with 'cannot find include file:
"specscanner.c"'. With two exceptions, parser-containing directories
already add "-I. -I$(srcdir)"; eliminate the exceptions. Back-patch to
9.6 (all supported versions).
Noah Misch [Tue, 13 Apr 2021 02:24:21 +0000 (19:24 -0700)]
Port regress-python3-mangle.mk to Solaris "sed".
It doesn't support "\(foo\)*" like a POSIX "sed" implementation does;
see the Autoconf manual. Back-patch to 9.6 (all supported versions).
Thomas Munro [Tue, 13 Apr 2021 00:34:25 +0000 (12:34 +1200)]
Fix potential SSI hazard in heap_update().
Commit 6f38d4dac38 failed to heed a warning about the stability of the
value pointed to by "otid". The caller is allowed to pass in a pointer to
newtup->t_self, which will be updated during the execution of the
function. Instead, the SSI check should use the value we copy into
oldtup.t_self near the top of the function.
Not a live bug, because newtup->t_self doesn't really get updated until
a bit later, but it was confusing and broke the rule established by the
comment.
Back-patch to 13.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2689164.1618160085%40sss.pgh.pa.us
Michael Paquier [Tue, 13 Apr 2021 00:42:01 +0000 (09:42 +0900)]
Remove duplicated --no-sync switches in new tests of test_pg_dump
These got introduced in 6568cef.
Reported-by: Noah Misch
Discussion: https://postgr.es/m/20210404220802.GA728316@rfd.leadboat.com
Tom Lane [Mon, 12 Apr 2021 22:58:20 +0000 (18:58 -0400)]
Remove no-longer-relevant test case.
collate.icu.utf8.sql was exercising the recording of a collation
dependency for an enum comparison expression, but such an expression
should never have had any collation dependency in the first place.
After I fixed that in commit c402b02b9, the test started failing.
We don't need to test that scenario anymore, so just remove the
now-useless test steps.
(This test case is new in HEAD, so no need to back-patch.)
Discussion: https://postgr.es/m/3044030.1618261159@sss.pgh.pa.us
Discussion: https://postgr.es/m/HK0PR01MB22744393C474D503E16C8509F4709@HK0PR01MB2274.apcprd01.prod.exchangelabs.com
Tom Lane [Mon, 12 Apr 2021 18:37:22 +0000 (14:37 -0400)]
Fix old bug with coercing the result of a COLLATE expression.
There are hacks in parse_coerce.c to push down a requested coercion
to below any CollateExpr that may appear. However, we did that even
if the requested data type is non-collatable, leading to an invalid
expression tree in which CollateExpr is applied to a non-collatable
type. The fix is just to drop the CollateExpr altogether, reasoning
that it's useless.
This bug is ten years old, dating to the original addition of
COLLATE support. The lack of field complaints suggests that there
aren't a lot of user-visible consequences. We noticed the problem
because it would trigger an assertion in DefineVirtualRelation if
the invalid structure appears as an output column of a view; however,
in a non-assert build, you don't see a crash, just a (subtly incorrect)
complaint about applying collation to a non-collatable type. I found
that by putting the incorrect structure further down in a view, I could
make a view definition that would fail dump/reload, per the added
regression test case. But CollateExpr doesn't do anything at run-time,
so this likely doesn't lead to any really exciting consequences.
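A hypothetical sketch of the kind of expression involved (not
necessarily the committed regression test):
-- Coercing a COLLATE expression to a non-collatable type; the
-- CollateExpr is now dropped instead of left in the tree.
CREATE VIEW v AS
SELECT (('4' COLLATE "C")::integer) + 1 AS c;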
Per report from Yulin Pei. Back-patch to all supported branches.
Discussion: https://postgr.es/m/HK0PR01MB22744393C474D503E16C8509F4709@HK0PR01MB2274.apcprd01.prod.exchangelabs.com
Peter Eisentraut [Mon, 12 Apr 2021 18:29:24 +0000 (20:29 +0200)]
pg_upgrade: Print OID using %u instead of %d
This could write wrong output into the cluster deletion script if a
database OID exceeds the signed 32-bit range.
Peter Eisentraut [Mon, 12 Apr 2021 17:04:33 +0000 (19:04 +0200)]
pg_amcheck: Add basic NLS support
Peter Eisentraut [Mon, 12 Apr 2021 13:44:38 +0000 (15:44 +0200)]
Fix file references in nls.mk
broken by 37d2ff38031262a1778bc76a9c55fff7afbcf275
Fujii Masao [Mon, 12 Apr 2021 12:34:23 +0000 (21:34 +0900)]
Support tab-complete for TRUNCATE on foreign tables.
Commit 8ff1c94649 extended the TRUNCATE command so that it can also
truncate foreign tables, but it forgot to update tab-completion
accordingly. That is, previously tab-completion for TRUNCATE displayed
only the names of regular tables.
This commit improves tab-completion for TRUNCATE so that it also
displays the names of foreign tables.
Author: Fujii Masao
Reviewed-by: Bharath Rupireddy
Discussion: https://postgr.es/m/551ed8c1-f531-818b-664a-2cecdab99cd8@oss.nttdata.com
Michael Paquier [Mon, 12 Apr 2021 04:53:17 +0000 (13:53 +0900)]
Move log_autovacuum_min_duration into its correct sections
This GUC has already been classified as LOGGING_WHAT, but its location
in postgresql.conf.sample and the documentation did not reflect that, so
fix those inconsistencies.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20210404012546.GK6592@telsasoft.com
Amit Kapila [Mon, 12 Apr 2021 03:57:10 +0000 (09:27 +0530)]
doc: Update information of new messages for logical replication.
Updated documentation for new messages added for streaming of in-progress
transactions, as well as changes made to the existing messages. It also
updates the information of protocol versions supported for logical
replication.
Author: Ajin Cherian
Reviewed-by: Amit Kapila, Peter Smith, Euler Taveira
Discussion: https://postgr.es/m/CAFPTHDYHN9m=MZZct-B=BYg_TETvv+kXvL9RD2DpaBS5pGxGYg@mail.gmail.com
Michael Paquier [Mon, 12 Apr 2021 02:30:50 +0000 (11:30 +0900)]
Fix out-of-bound memory access for interval -> char conversion
Using Roman numerals (via "RM" or "rm") for a conversion to calculate a
number of months has never considered the case of negative numbers,
where a conversion could easily cause out-of-bound memory accesses. The
conversions in themselves were not completely consistent either, as
specifying 12 would result in NULL, but it should mean XII.
This commit reworks the conversion calculation to have a more
consistent behavior:
- If the number of months and years is 0, return NULL.
- If the number of months is positive, return the exact month number.
- If the number of months is negative, do a backward calculation, with
-1 meaning December, -2 November, etc.
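A sketch of the reworked behavior (results inferred from the rules
above, ignoring any blank padding):
SELECT to_char(interval '3 months', 'RM');   -- III
SELECT to_char(interval '-1 months', 'RM');  -- XII (backward: -1 is December)
SELECT to_char(interval '0 months', 'RM');   -- NULL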
Reported-by: Theodor Arsenij Larionov-Trichkin
Author: Julien Rouhaud
Discussion: https://postgr.es/m/16953-f255a18f8c51f1d5@postgresql.org
backpatch-through: 9.6
Tom Lane [Sun, 11 Apr 2021 21:02:04 +0000 (17:02 -0400)]
Silence some Coverity warnings and improve code consistency.
Coverity complained about possible overflow in expressions like
intresult = tm->tm_sec * 1000000 + fsec;
on the grounds that the multiplication would happen in 32-bit
arithmetic before widening to the int64 result. I think these
are all false positives because of the limited possible range of
tm_sec; but nonetheless it seems silly to spell it like that when
nearby lines have the identical computation written with a 64-bit
constant.
... or more accurately, with an LL constant, which is not project
style. Make all of these use INT64CONST(), as we do elsewhere.
This is all new code from a2da77cdb, so no need for back-patch.
Tom Lane [Sun, 11 Apr 2021 17:22:56 +0000 (13:22 -0400)]
Add macro PGWARNING, and make PGERROR available on all platforms.
We'd previously noted the need for coping with Windows headers
that provide some other definition of macro "ERROR" than elog.h
does. It turns out that R also wants to define ERROR, and
WARNING too. PL/R has been working around this in a hacky way
that broke when we recently changed the numeric value of ERROR.
To let them have a more future-proof solution, provide an
alternate macro PGWARNING for WARNING, and make PGERROR visible
always, not only when #ifdef WIN32.
Discussion: https://postgr.es/m/CADK3HHK6iMChd1yoOqssxBn5Z14Zar8Ztr3G-N_fuG7F8YTP3w@mail.gmail.com
Tom Lane [Sun, 11 Apr 2021 15:46:32 +0000 (11:46 -0400)]
Fix uninitialized variable from commit a4d75c86b.
The path for *exprs != NIL would misbehave, and likely crash,
since pull_varattnos expects its last argument to be valid
at call.
Found by Coverity --- we have no coverage of this path in
the regression tests.
Fujii Masao [Sun, 11 Apr 2021 15:05:58 +0000 (00:05 +0900)]
Avoid unnecessary table open/close in TRUNCATE command.
ExecuteTruncate() filters out the duplicate tables specified
in the TRUNCATE command, for example in the case where "TRUNCATE foo, foo"
is executed. Such duplicate tables obviously don't need to be opened
and closed because they are skipped. But previously it always opened
the tables before checking whether they were duplicates, and then
closed them again if they were. That is, duplicated tables were opened
and closed unnecessarily.
This commit changes ExecuteTruncate() so that it opens a table only
after confirming that it is not a duplicate, avoiding the unnecessary
open/close.
No back-patch, because the unnecessary table open/close is not a bug,
though it exists in older versions.
Author: Bharath Rupireddy
Reviewed-by: Amul Sul, Fujii Masao
Discussion: https://postgr.es/m/CALj2ACUdBO_sXJTa08OZ0YT0qk7F_gAmRa9hT4dxRcgPS4nsZA@mail.gmail.com
Fujii Masao [Sun, 11 Apr 2021 15:00:18 +0000 (00:00 +0900)]
Remove COMMIT_TS_SETTS record.
Commit 438fc4a39c prevented WAL replay from writing a COMMIT_TS_SETTS
record. With that change, no code in PostgreSQL core generates a
COMMIT_TS_SETTS record. We can also assume that no extensions use the
record, because we have received no complaints so far about the issue
that commit 438fc4a39c fixed. Therefore this commit removes the
COMMIT_TS_SETTS record and its related code. Even without this record,
the timestamp required for the commit timestamp feature can be
acquired from the COMMIT record.
Bump WAL page magic.
Reported-by: lx zou <zoulx1982@163.com>
Author: Fujii Masao
Reviewed-by: Alvaro Herrera
Discussion: https://postgr.es/m/16931-620d0f2fdc6108f1@postgresql.org
Noah Misch [Sat, 10 Apr 2021 19:01:41 +0000 (12:01 -0700)]
Standardize pg_authid oid_symbol values.
Commit c9c41c7a337d3e2deb0b2a193e9ecfb865d8f52b used two different
naming patterns. Standardize on the majority pattern, which was the
only pattern in the last reviewed version of that commit.
Peter Eisentraut [Sat, 10 Apr 2021 17:33:46 +0000 (19:33 +0200)]
Improve behavior of date_bin with origin in the future
Previously, when the origin was after the input, the result was the
timestamp at the end of the bin, rather than the beginning as
expected. This change puts the result consistently at the beginning
of the bin.
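A sketch with hypothetical values, assuming the corrected behavior:
SELECT date_bin('15 minutes',
                timestamp '2021-01-01 00:07:30',
                timestamp '2022-01-01 00:00:00');
-- expected: 2021-01-01 00:00:00, the beginning of the bin (not 00:15:00)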
Author: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/CAFBsxsGjLDxQofRfH+d4KSAXxPf3MMevUG7s6EDfdBOvHLDLjw@mail.gmail.com
Tom Lane [Sat, 10 Apr 2021 17:16:08 +0000 (13:16 -0400)]
Fix failure of xlogprefetch.h to include all prerequisite headers.
Per cpluspluscheck.
Tom Lane [Sat, 10 Apr 2021 16:08:28 +0000 (12:08 -0400)]
Doc: update documentation of check_function_bodies.
Adjust docs and description string to note that check_function_bodies
applies to procedures too. (In hindsight it should have been named
check_routine_bodies, but it seems too late for that now.)
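A sketch of the documented behavior (the procedure body is
deliberately bogus):
SET check_function_bodies = off;
CREATE PROCEDURE broken() LANGUAGE plpgsql AS $$ BEGIN not_valid; END $$;
-- accepted without body validation; the error surfaces at call time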
Daniel Westermann
Discussion: https://postgr.es/m/GV0P278MB04834A9EB9A74B036DC7CE49D2739@GV0P278MB0483.CHEP278.PROD.OUTLOOK.COM
David Rowley [Sat, 10 Apr 2021 07:19:45 +0000 (19:19 +1200)]
Improve slightly misleading comments in nodeFuncs.c
There were some comments in nodeFuncs.c that, depending on your
interpretation of the word "result", could lead you to believe that the
comments were badly copied and pasted from somewhere else. If you thought
of "result" as the return value of the function that the comment is
written in, then you'd be misled. However, if you'd correctly
interpreted "result" to mean the result type of the given node type,
you'd not have seen any issues.
Here we do a small cleanup to try to prevent any future
misinterpretations. Per wording suggestion from Tom Lane.
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvp+Bw=2Qiu5=uXMKfC7gd0+B=4JvexVgGJU=am2g9a1CA@mail.gmail.com
Peter Eisentraut [Fri, 9 Apr 2021 21:23:45 +0000 (23:23 +0200)]
doc: Fix man page whitespace issues
Whitespace between tags is significant, and in some cases it creates
extra vertical space in man pages. The fix is to remove some newlines
in the markup.
Alvaro Herrera [Fri, 9 Apr 2021 21:13:18 +0000 (17:13 -0400)]
Suppress length of Notice/Error msgs in PQtrace regress mode
A (relatively minor) annoyance of ErrorResponse/NoticeResponse messages
as printed by PQtrace() is that their length might vary when we move
error messages from one source file to another, one function to another,
or even when the number of digits in their source line numbers changes.
To avoid having to adjust expected files for some tests, make the
regress mode of PQtrace() suppress the length word of NoticeResponse and
ErrorResponse messages.
Discussion: https://postgr.es/m/20210402023010.GA13563@alvherre.pgsql
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Thomas Munro [Fri, 9 Apr 2021 20:41:07 +0000 (08:41 +1200)]
Make new GUC short descriptions more consistent.
Reported-by: Daniel Westermann (DWE) <daniel.westermann@dbi-services.com>
Discussion: https://postgr.es/m/GV0P278MB0483490FEAC879DCA5ED583DD2739%40GV0P278MB0483.CHEP278.PROD.OUTLOOK.COM
Thomas Munro [Fri, 9 Apr 2021 20:09:30 +0000 (08:09 +1200)]
Doc: Review for "Optionally prefetch referenced data in recovery."
Typos, corrections and language improvements in the docs, and a few in
code comments too.
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20210409033703.GP6592%40telsasoft.com
Peter Eisentraut [Fri, 9 Apr 2021 19:55:08 +0000 (21:55 +0200)]
doc: Additional documentation for date_bin
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Author: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/CAFBsxsEEm1nuhZmfVQxvu_i3nDDEuvNJ_WMrDo9whFD_jusp-A@mail.gmail.com
Alvaro Herrera [Fri, 9 Apr 2021 17:38:07 +0000 (13:38 -0400)]
Document ANALYZE storage parameters for partitioned tables
Commit 0827e8af70f4 added parameters for autovacuum to support
partitioned tables, but didn't add any docs. Add them.
Discussion: https://postgr.es/m/20210408213051.GL6592@telsasoft.com
Alvaro Herrera [Fri, 9 Apr 2021 15:29:08 +0000 (11:29 -0400)]
Set pg_class.reltuples for partitioned tables
When commit 0827e8af70f4 added auto-analyze support for partitioned
tables, it included code to obtain reltuples for the partitioned table
as a number of catalog accesses to read pg_class.reltuples for each
partition. That's not only very inefficient, but also problematic
because autovacuum doesn't hold any locks on any of those tables -- and
doesn't want to. Replace that code with a read of pg_class.reltuples
for the partitioned table, and make sure ANALYZE and TRUNCATE properly
maintain that value.
I found no code that would be affected by the change of relpages from
zero to non-zero for partitioned tables, and no other code that should
be maintaining it, but if there is, hopefully it'll be an easy fix.
Per buildfarm.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Discussion: https://postgr.es/m/1823909.1617862590@sss.pgh.pa.us
Magnus Hagander [Fri, 9 Apr 2021 10:40:14 +0000 (12:40 +0200)]
Fix typo
Author: Daniel Westermann
Backpatch-through: 9.6
Discussion: https://postgr.es/m/GV0P278MB0483A7AA85BAFCC06D90F453D2739@GV0P278MB0483.CHEP278.PROD.OUTLOOK.COM
Michael Paquier [Fri, 9 Apr 2021 04:53:07 +0000 (13:53 +0900)]
Fix typos and grammar in documentation and code comments
Comment fixes are applied on HEAD, and documentation improvements are
applied on back-branches where needed.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20210408164008.GJ6592@telsasoft.com
Backpatch-through: 9.6
Peter Geoghegan [Thu, 8 Apr 2021 19:54:31 +0000 (12:54 -0700)]
Silence another _bt_check_unique compiler warning.
Per complaint from Tom Lane
Discussion: https://postgr.es/m/1922884.1617909599@sss.pgh.pa.us
Tom Lane [Thu, 8 Apr 2021 19:38:26 +0000 (15:38 -0400)]
Add support for tab-completion of type arguments in \df, \do.
Oversight in commit a3027e1e7.
Tom Lane [Thu, 8 Apr 2021 19:14:26 +0000 (15:14 -0400)]
Suppress uninitialized-variable warning.
Several buildfarm critters that don't usually produce such
warnings are complaining about e717a9a18. I think it's
actually safe, but move initialization to silence the warning.
Bruce Momjian [Thu, 8 Apr 2021 15:16:01 +0000 (11:16 -0400)]
Fixes for query_id feature
Ignore parallel workers in pg_stat_statements
Oversight in 4f0b0966c8 which exposed queryid in parallel workers.
Counters are aggregated by the main backend process so parallel workers
would report duplicated activity, and could also report activity for the
wrong entry as they are only aware of the top level queryid.
Fix thinko in pg_stat_get_activity when retrieving the queryid.
Remove unnecessary call to pgstat_report_queryid().
Reported-by: Amit Kapila, Andres Freund, Thomas Munro
Discussion: https://postgr.es/m/20210408051735.lfbdzun5zdlax5gd@alap3.anarazel.de
Discussion: https://postgr.es/m/p634GTSOqnDW86Owrn6qDAVosC5dJjXjp7BMfc5Gz1Q@mail.gmail.com
Author: Julien Rouhaud
Magnus Hagander [Thu, 8 Apr 2021 13:15:17 +0000 (15:15 +0200)]
Merge v1.10 of pg_stat_statements into v1.9
v1.9 is already new in this version of PostgreSQL, so turn it into just
one change.
Author: Julien Rouhaud
Discussion: https://postgr.es/m/20210408120505.7zinijtdexbyghvb@nol
Thomas Munro [Thu, 8 Apr 2021 12:36:59 +0000 (00:36 +1200)]
Remove duplicate typedef.
Thinko in commit 323cbe7c, per complaint from BF animal locust's older
GCC compiler.
Fujii Masao [Thu, 8 Apr 2021 11:56:08 +0000 (20:56 +0900)]
Allow TRUNCATE command to truncate foreign tables.
This commit introduces a new foreign data wrapper API for TRUNCATE.
It extends the TRUNCATE command so that it accepts foreign tables as
targets to truncate and invokes that API. It also extends postgres_fdw
so that it can issue a TRUNCATE command to foreign servers, by adding
a new routine for that TRUNCATE API.
The information about options specified in the TRUNCATE command, e.g.
ONLY, CASCADE, etc., is passed to the FDW via the API, as is the list
of foreign tables to truncate. The FDW truncates the foreign data
sources that the passed foreign tables specify, based on that
information. For example, postgres_fdw constructs a TRUNCATE command
using them and issues it to the foreign server.
For performance, the TRUNCATE command invokes the FDW routine for
TRUNCATE once per foreign server that the foreign tables to truncate
belong to.
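A usage sketch (table names are hypothetical; ft, ft1 and ft2 are
postgres_fdw foreign tables):
TRUNCATE ft;               -- postgres_fdw runs TRUNCATE on the remote server
TRUNCATE ft1, ft2 CASCADE; -- options such as CASCADE are passed to the FDW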
Author: Kazutaka Onishi, Kohei KaiGai, slightly modified by Fujii Masao
Reviewed-by: Bharath Rupireddy, Michael Paquier, Zhihong Yu, Alvaro Herrera, Stephen Frost, Ashutosh Bapat, Amit Langote, Daniel Gustafsson, Ibrar Ahmed, Fujii Masao
Discussion: https://postgr.es/m/CAOP8fzb_gkReLput7OvOK+8NHgw-RKqNv59vem7=524krQTcWA@mail.gmail.com
Discussion: https://postgr.es/m/CAJuF6cMWDDqU-vn_knZgma+2GMaout68YUgn1uyDnexRhqqM5Q@mail.gmail.com
David Rowley [Thu, 8 Apr 2021 11:51:22 +0000 (23:51 +1200)]
Speedup ScalarArrayOpExpr evaluation
ScalarArrayOpExprs with "useOr=true" and a set of Consts on the righthand
side have traditionally been evaluated by using a linear search over the
array. When these arrays contain large numbers of elements then this
linear search could become a significant part of execution time.
Here we add a new method of evaluating ScalarArrayOpExpr expressions to
allow them to be evaluated by first building a hash table containing each
element, then on subsequent evaluations, we just probe that hash table to
determine if there is a match.
The planner is in charge of determining when this optimization is possible
and it enables it by setting hashfuncid in the ScalarArrayOpExpr. The
executor will only perform the hash table evaluation when the hashfuncid
is set.
This means that not all cases are optimized. For example CHECK constraints
containing an IN clause won't go through the planner, so won't get the
hashfuncid set. We could maybe do something about that at some later
date. The reason we're not doing it now is from fear that we may slow
down cases where the expression is evaluated only once. Those cases can
be common, for example, a single row INSERT to a table with a CHECK
constraint containing an IN clause.
In the planner, we enable this when there are suitable hash functions for
the ScalarArrayOpExpr's operator and only when there is at least
MIN_ARRAY_SIZE_FOR_HASHED_SAOP elements in the array. The threshold is
currently set to 9.
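A sketch of a query shape that can now take the hashed path (names are
hypothetical; the Const array must have at least 9 elements):
SELECT * FROM tbl
WHERE col IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10);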
Author: James Coleman, David Rowley
Reviewed-by: David Rowley, Tomas Vondra, Heikki Linnakangas
Discussion: https://postgr.es/m/CAAaqYe8x62+=wn0zvNKCj55tPpg-JBHzhZFFc6ANovdqFw7-dA@mail.gmail.com
Thomas Munro [Thu, 8 Apr 2021 11:03:43 +0000 (23:03 +1200)]
Optionally prefetch referenced data in recovery.
Introduce a new GUC recovery_prefetch, disabled by default. When
enabled, look ahead in the WAL and try to initiate asynchronous reading
of referenced data blocks that are not yet cached in our buffer pool.
For now, this is done with posix_fadvise(), which has several caveats.
Better mechanisms will follow in later work on the I/O subsystem.
The GUC maintenance_io_concurrency is used to limit the number of
concurrent I/Os we allow ourselves to initiate, based on pessimistic
heuristics used to infer that I/Os have begun and completed.
The GUC wal_decode_buffer_size is used to limit the maximum distance we
are prepared to read ahead in the WAL to find uncached blocks.
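A configuration sketch for the GUCs mentioned above (values are for
illustration only):
ALTER SYSTEM SET recovery_prefetch = on;
ALTER SYSTEM SET wal_decode_buffer_size = '512kB';
ALTER SYSTEM SET maintenance_io_concurrency = 10;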
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com> (parts)
Reviewed-by: Andres Freund <andres@anarazel.de> (parts)
Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com> (parts)
Tested-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>
Tested-by: Jakub Wartak <Jakub.Wartak@tomtom.com>
Tested-by: Dmitry Dolgov <9erthalion6@gmail.com>
Tested-by: Sait Talha Nisanci <Sait.Nisanci@microsoft.com>
Discussion: https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com
Thomas Munro [Thu, 8 Apr 2021 11:03:34 +0000 (23:03 +1200)]
Add circular WAL decoding buffer.
Teach xlogreader.c to decode its output into a circular buffer, to
support optimizations based on looking ahead.
* XLogReadRecord() works as before, consuming records one by one, and
allowing them to be examined via the traditional XLogRecGetXXX()
macros.
* An alternative new interface XLogNextRecord() is added that returns
pointers to DecodedXLogRecord structs that can be examined directly.
* XLogReadAhead() provides a second cursor that lets you see
further ahead, as long as data is available and there is enough space
in the decoding buffer. This returns DecodedXLogRecord pointers to the
caller, but also adds them to a queue of records that will later be
consumed by XLogNextRecord()/XLogReadRecord().
The buffer's size is controlled with wal_decode_buffer_size. The buffer
could potentially be placed into shared memory, for future projects.
Large records that don't fit in the circular buffer are called
"oversized" and allocated separately with palloc().
Discussion: https://postgr.es/m/CA+hUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq=AovOddfHpA@mail.gmail.com
Thomas Munro [Thu, 8 Apr 2021 11:03:23 +0000 (23:03 +1200)]
Remove read_page callback from XLogReader.
Previously, the XLogReader module would fetch new input data using a
callback function. Redesign the interface so that it tells the caller
to insert more data with a special return value instead. This API suits
later patches for prefetching, encryption and maybe other future
projects that would otherwise require continually extending the callback
interface.
As incidental cleanup work, move global variables readOff, readLen and
readSegNo inside XlogReaderState.
Author: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
Author: Heikki Linnakangas <hlinnaka@iki.fi> (parts of earlier version)
Reviewed-by: Antonin Houska <ah@cybertec.at>
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Takashi Menjo <takashi.menjo@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20190418.210257.43726183.horiguchi.kyotaro%40lab.ntt.co.jp
David Rowley [Thu, 8 Apr 2021 10:35:48 +0000 (22:35 +1200)]
Cleanup partition pruning step generation
There was some code in gen_prune_steps_from_opexps that needlessly
checked a list was not empty when it clearly had to contain at least one
item. This prompted a further cleanup operation in partprune.c.
Additionally, the previous code could end up adding additional needless
INTERSECT steps. However, those do not appear to be able to cause any
misbehavior.
gen_prune_steps_from_opexps is now no longer in charge of generating
combine pruning steps. Instead, gen_partprune_steps_internal, which
already does some combine step creation has been given the sole
responsibility of generating all combine steps. This means that when
we recursively call gen_partprune_steps_internal, since it now always adds
a combine step when it produces multiple steps, we can just pay attention
to the final step returned.
In passing, do quite a bit of work on the comments to try to more clearly
explain the role of both gen_partprune_steps_internal and
gen_prune_steps_from_opexps. This is fairly complex code so some extra
effort to give any new readers an overview of how things work seems like
a good idea.
Author: Amit Langote
Reported-by: Andy Fan
Reviewed-by: Kyotaro Horiguchi, Andy Fan, Ryan Lambert, David Rowley
Discussion: https://postgr.es/m/CAKU4AWqWoVii+bRTeBQmeVW+PznkdO8DfbwqNsu9Gj4ubt9A6w@mail.gmail.com
Peter Eisentraut [Thu, 8 Apr 2021 10:20:11 +0000 (12:20 +0200)]
Add ORDER BY to some regression test queries
Apparently, an unrelated patch introduced some variation on the build
farm.
Reported-by: Magnus Hagander <magnus@hagander.net>
Magnus Hagander [Thu, 8 Apr 2021 09:32:14 +0000 (11:32 +0200)]
Add functions to wait for backend termination
This adds a function, pg_wait_for_backend_termination(), and a new
timeout argument to pg_terminate_backend(), which will wait for the
backend to actually terminate (with or without signaling it to do so
depending on which function is called). The default behaviour of
pg_terminate_backend() remains timeout=0, which does not wait.
For pg_wait_for_backend_termination() the default wait is 5 seconds.
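A usage sketch (pid 12345 is hypothetical; timeouts are in
milliseconds):
SELECT pg_terminate_backend(12345, 10000);     -- signal, then wait up to 10s
SELECT pg_wait_for_backend_termination(12345); -- no signal; default 5s wait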
Author: Bharath Rupireddy
Reviewed-By: Fujii Masao, David Johnston, Muhammad Usama,
Hou Zhijie, Magnus Hagander
Discussion: https://postgr.es/m/CALj2ACUBpunmyhYZw-kXCYs5NM+h6oG_7Df_Tn4mLmmUQifkqA@mail.gmail.com
Peter Eisentraut [Thu, 8 Apr 2021 08:51:26 +0000 (10:51 +0200)]
doc: Prefer explicit JOIN syntax over old implicit syntax in tutorial
Update src/tutorial/basics.source to match.
Author: Jürgen Purtz <juergen@purtz.de>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: "David G. Johnston" <david.g.johnston@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/158996922318.7035.10603922579567326239@wrigleys.postgresql.org
Magnus Hagander [Thu, 8 Apr 2021 08:23:10 +0000 (10:23 +0200)]
Track identical top vs nested queries independently in pg_stat_statements
Changing pg_stat_statements.track between 'all' and 'top' would control
if pg_stat_statements tracked just top level statements or also
statements inside functions, but when tracking all it would not
differentiate between the two. Being able to differentiate these is
useful both to track where the actual query is coming from and to see
whether executions differ between the two.
To do this, add a boolean to the hash key indicating if the statement
was top level or not.
Experience from the pg_stat_kcache module shows that in at least some
"reasonable workloads" only <5% of the queries show up both top level
and nested. Based on this admittedly small dataset, this patch does not
try to de-duplicate those query *texts*, and will just store one copy
for the top level and one for the nested.
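A sketch of inspecting the new key field (assuming it is exposed as a
"toplevel" column in the pg_stat_statements view):
SELECT toplevel, calls, query
FROM pg_stat_statements
ORDER BY query, toplevel;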
Author: Julien Rouhaud
Reviewed-By: Magnus Hagander, Masahiro Ikeda
Discussion: https://postgr.es/m/20201202040516.GA43757@nol
Peter Eisentraut [Thu, 8 Apr 2021 06:28:03 +0000 (08:28 +0200)]
Update Unicode data to CLDR 39
Thomas Munro [Thu, 8 Apr 2021 05:48:37 +0000 (17:48 +1200)]
Provide ReadRecentBuffer() to re-pin buffers by ID.
If you know the ID of a buffer that recently held a block that you would
like to pin, this function can be used to check if it's still there. It
can be used to avoid a second lookup in the buffer mapping table after
PrefetchBuffer() reports a cache hit.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA+hUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq=AovOddfHpA@mail.gmail.com
Alvaro Herrera [Thu, 8 Apr 2021 05:19:36 +0000 (01:19 -0400)]
autovacuum: handle analyze for partitioned tables
Previously, autovacuum would completely ignore partitioned tables, which
is not good regarding analyze -- failing to analyze those tables means
poor plans may be chosen. Make autovacuum aware of those tables by
propagating "changes since analyze" counts from the leaf partitions up
the partitioning hierarchy.
This also introduces necessary reloptions support for partitioned tables
(autovacuum_enabled, autovacuum_analyze_scale_factor,
autovacuum_analyze_threshold). It's unclear how best to document this
aspect.
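A sketch of setting the new reloptions (table name and values are
hypothetical):
ALTER TABLE parted SET (autovacuum_enabled = true,
                        autovacuum_analyze_scale_factor = 0.05,
                        autovacuum_analyze_threshold = 100);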
Author: Yuzuko Hosoya <yuzukohosoya@gmail.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/CAKkQ508_PwVgwJyBY=0Lmkz90j8CmWNPUxgHvCUwGhMrouz6UA@mail.gmail.com
Andres Freund [Thu, 8 Apr 2021 05:00:01 +0000 (22:00 -0700)]
Cope with NULL query string in ExecInitParallelPlan().
It's far from clear that this is the right approach - but a good
portion of the buildfarm has been red for a few hours, on the last day
of the CF. And this fixes at least the obvious crash. So let's go with
that for now.
Discussion: https://postgr.es/m/20210407225806.majgznh4lk34hjvu%40alap3.anarazel.de
Amit Kapila [Thu, 8 Apr 2021 04:54:00 +0000 (10:24 +0530)]
Fix typo in jsonfuncs.c.
Author: Tatsuro Yamada
Discussion: https://postgr.es/m/7c166a60-2808-6b89-9524-feefc6233748@nttcom.co.jp_1
Alvaro Herrera [Thu, 8 Apr 2021 04:46:14 +0000 (00:46 -0400)]
Repair find_inheritance_children with no active snapshot
When working on a scan with only a catalog snapshot, we may not have an
ActiveSnapshot set. If we were to come across a detached partition,
that would cause a crash. Fix by only ignoring detached partitions when
there's an active snapshot.
Tom Lane [Thu, 8 Apr 2021 03:02:16 +0000 (23:02 -0400)]
Allow psql's \df and \do commands to specify argument types.
When dealing with overloaded function or operator names, having
to look through a long list of matches is tedious. Let's extend
these commands to allow specification of (input) argument types
to let such results be trimmed down. Each additional argument
is treated the same as the pattern argument of \dT and matched
against the appropriate argument's type name.
While at it, fix \dT (and these new options) to recognize the
usual notation of "foo[]" for "the array type over foo", and
to handle the special abbreviations allowed by the backend
grammar, such as "int" for "integer".
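A usage sketch in psql (names are examples only):
\df generate_series int      -- "int" is accepted for "integer"
\do + integer integer        -- trims "+" down to the integer overloads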
Greg Sabino Mullane, revised rather significantly by me
Discussion: https://postgr.es/m/CAKAnmmLF9Hhu02N+s7uAyLc5J1xZReg72HQUoiKhNiJV3_jACQ@mail.gmail.com
Bruce Momjian [Thu, 8 Apr 2021 02:30:30 +0000 (22:30 -0400)]
Add csvlog output for the new query_id value
This also adjusts the printf format for query id used by log_line_prefix
(%Q).
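A sketch of enabling the new escape (the prefix value is illustrative):
ALTER SYSTEM SET log_line_prefix = '%m [%p] %Q ';
SELECT pg_reload_conf();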
Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20210408005402.GG24239@momjian.us
Author: Julien Rouhaud, Bruce Momjian
Peter Geoghegan [Wed, 7 Apr 2021 23:14:54 +0000 (16:14 -0700)]
Teach VACUUM to bypass unnecessary index vacuuming.
VACUUM has never needed to call ambulkdelete() for each index in cases
where there are precisely zero TIDs in its dead_tuples array by the end
of its first pass over the heap (also its only pass over the heap in
this scenario). Index vacuuming is simply not required when this
happens. Index cleanup will still go ahead, but in practice most calls
to amvacuumcleanup() are usually no-ops when there were zero preceding
ambulkdelete() calls. In short, VACUUM has generally managed to avoid
index scans when there were clearly no index tuples to delete from
indexes. But cases with _close to_ no index tuples to delete were
another matter -- a round of ambulkdelete() calls took place (one per
index), each of which performed a full index scan.
VACUUM now behaves just as if there were zero index tuples to delete in
cases where there are in fact "virtually zero" such tuples. That is, it
can now bypass index vacuuming and heap vacuuming as an optimization
(though not index cleanup). Whether or not VACUUM bypasses indexes is
determined dynamically, based on the just-observed number of heap pages
in the table that have one or more LP_DEAD items (LP_DEAD items in heap
pages have a 1:1 correspondence with index tuples that still need to be
deleted from each index in the worst case).
We only skip index vacuuming when 2% or less of the table's pages have
one or more LP_DEAD items -- bypassing index vacuuming as an
optimization must not noticeably impede setting bits in the visibility
map. As a further condition, the dead_tuples array (i.e. VACUUM's array
of LP_DEAD item TIDs) must not exceed 32MB at the point that the first
pass over the heap finishes, which is also when the decision to bypass
is made. (The VACUUM must also have been able to fit all TIDs in its
maintenance_work_mem-bound dead_tuples space, though with a default
maintenance_work_mem setting it can't matter.)
This avoids surprising jumps in the duration and overhead of routine
vacuuming with workloads where successive VACUUM operations consistently
have almost zero dead index tuples. The number of LP_DEAD items may
well accumulate over multiple VACUUM operations, before finally the
threshold is crossed and VACUUM performs conventional index vacuuming.
Even then, the optimization will have avoided a great deal of largely
unnecessary index vacuuming.
In the future we may teach VACUUM to skip index vacuuming on a per-index
basis, using a much more sophisticated approach. For now we only
consider the extreme cases, where we can be quite confident that index
vacuuming just isn't worth it using simple heuristics.
Also log information about how many heap pages have one or more LP_DEAD
items when autovacuum logging is enabled.
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAD21AoD0SkE11fMw4jD4RENAwBMcw1wasVnwpJVw3tVqPOQgAw@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WzmkebqPd4MVGuPTOS9bMFvp9MDs5cRTCOsv1rQJ3jCbXw@mail.gmail.com
Bruce Momjian [Wed, 7 Apr 2021 22:14:36 +0000 (18:14 -0400)]
Fix regression test failure caused by commit 4f0b0966c8
The query originally used was too simple, causing explain_filter() to
be unable to remove JIT output text.
Reported-by: Tom Lane
Author: Julien Rouhaud
Michael Paquier [Wed, 7 Apr 2021 21:55:00 +0000 (06:55 +0900)]
Fix some failures with connection tests on Windows hosts
The truncation of the log file, which this set of tests relies on to
make sure that a connection attempt matches its expected backend log
pattern, fails, as reported by buildfarm member fairywren. Instead of a
truncation, do a rotation of the log file and restart the node. This
will ensure that the connection attempt data is unique for each test.
Discussion: https://postgr.es/m/YG05nCI8x8B+Ad3G@paquier.xyz
Peter Eisentraut [Wed, 7 Apr 2021 19:30:08 +0000 (21:30 +0200)]
SQL-standard function body
This adds support for writing CREATE FUNCTION and CREATE PROCEDURE
statements for language SQL with a function body that conforms to the
SQL standard and is portable to other implementations.
Instead of the PostgreSQL-specific AS $$ string literal $$ syntax,
this allows writing out the SQL statements making up the body
unquoted, either as a single statement:
CREATE FUNCTION add(a integer, b integer) RETURNS integer
LANGUAGE SQL
RETURN a + b;
or as a block
CREATE PROCEDURE insert_data(a integer, b integer)
LANGUAGE SQL
BEGIN ATOMIC
INSERT INTO tbl VALUES (a);
INSERT INTO tbl VALUES (b);
END;
The function body is parsed at function definition time and stored as
expression nodes in a new pg_proc column prosqlbody. So at run time,
no further parsing is required.
However, this form does not support polymorphic arguments, because
there is no more parse analysis done at call time.
Dependencies between the function and the objects it uses are fully
tracked.
A new RETURN statement is introduced. This can only be used inside
function bodies. Internally, it is treated much like a SELECT
statement.
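A call-time sketch using the add() example above:
SELECT add(1, 2);  -- returns 3; the stored body is not re-parsed here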
psql needs some new intelligence to keep track of function body
boundaries so that it doesn't send off statements when it sees
semicolons that are inside a function body.
Tested-by: Jaime Casanova <jcasanov@systemguards.com.ec>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/1c11f1eb-f00c-43b7-799d-2d44132c02d7@2ndquadrant.com
Peter Geoghegan [Wed, 7 Apr 2021 19:37:45 +0000 (12:37 -0700)]
Add wraparound failsafe to VACUUM.
Add a failsafe mechanism that is triggered by VACUUM when it notices
that the table's relfrozenxid and/or relminmxid are dangerously far in
the past. VACUUM checks the age of the table dynamically, at regular
intervals.
When the failsafe triggers, VACUUM takes extraordinary measures to
finish as quickly as possible so that relfrozenxid and/or relminmxid can
be advanced. VACUUM will stop applying any cost-based delay that may be
in effect. VACUUM will also bypass any further index vacuuming and heap
vacuuming -- it only completes whatever remaining pruning and freezing
is required. Bypassing index/heap vacuuming is enabled by commit 8523492d, which made it possible to dynamically trigger the mechanism
already used within VACUUM when it is run with INDEX_CLEANUP off.
It is expected that the failsafe will almost always trigger within an
autovacuum to prevent wraparound, long after the autovacuum began.
However, the failsafe mechanism can trigger in any VACUUM operation.
Even in a non-aggressive VACUUM, where we're likely to not advance
relfrozenxid, it still seems like a good idea to finish off remaining
pruning and freezing. An aggressive/anti-wraparound VACUUM will be
launched immediately afterwards. Note that the anti-wraparound VACUUM
that follows will itself trigger the failsafe, usually before it even
begins its first (and only) pass over the heap.
The failsafe is controlled by two new GUCs: vacuum_failsafe_age, and
vacuum_multixact_failsafe_age. There are no equivalent reloptions,
since that isn't expected to be useful. The GUCs have rather high
defaults (both default to 1.6 billion), and are expected to generally
only be used to make the failsafe trigger sooner/more frequently.
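A configuration sketch (values are for illustration; both GUCs default
to 1.6 billion):
ALTER SYSTEM SET vacuum_failsafe_age = 1200000000;
ALTER SYSTEM SET vacuum_multixact_failsafe_age = 1200000000;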
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAD21AoD0SkE11fMw4jD4RENAwBMcw1wasVnwpJVw3tVqPOQgAw@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WzmgH3ySGYeC-m-eOBsa2=sDwa292-CFghV4rESYo39FsQ@mail.gmail.com
Bruce Momjian [Wed, 7 Apr 2021 18:03:56 +0000 (14:03 -0400)]
Make use of in-core query id added by commit 5fd9dfa5f5
Use the in-core query id computation for pg_stat_activity,
log_line_prefix, and EXPLAIN VERBOSE.
Similar to other fields in pg_stat_activity, only the queryid of the
top-level statement is exposed, and if the backend's status isn't
active then the queryid of the last executed statement is displayed.
Add a %Q placeholder to include the queryid in log_line_prefix, which
will also only expose top level statements.
For EXPLAIN VERBOSE, if a query identifier has been computed, either by
enabling compute_query_id or using a third-party module, display it.
Bump catalog version.
Discussion: https://postgr.es/m/20210407125726.tkvjdbw76hxnpwfi@nol
Author: Julien Rouhaud
Reviewed-by: Alvaro Herrera, Nitin Jadhav, Zhihong Yu
Robert Haas [Wed, 7 Apr 2021 17:28:35 +0000 (13:28 -0400)]
amcheck: fix multiple problems with TOAST pointer validation
First, don't perform database access while holding a buffer lock.
When checking a heap, we can validate that TOAST pointers are sane by
performing a scan on the TOAST index and looking up the chunks that
correspond to each value ID that appears in a TOAST pointer in the main
table. But, to do that while holding a buffer lock at least risks
causing other backends to wait uninterruptibly, and probably can cause
undetected and uninterruptible deadlocks. So, instead, make a list of
checks to perform while holding the lock, and then perform the checks
after releasing it.
Second, adjust things so that we don't try to follow TOAST pointers
for tuples that are already eligible to be pruned. The TOAST tuples
become eligible for pruning at the same time that the main tuple does,
so trying to check them may lead to spurious reports of corruption,
as observed in the buildfarm. The necessary infrastructure to decide
whether or not the tuple being checked is prunable was added by
commit 3b6c1259f9ca8e21860aaf24ec6735a8e5598ea0, but it wasn't
actually used for its intended purpose prior to this patch.
Mark Dilger, adjusted by me to avoid a memory leak.
Discussion: http://postgr.es/m/AC5479E4-6321-473D-AC92-5EC36299FBC2@enterprisedb.com
Bruce Momjian [Wed, 7 Apr 2021 17:06:47 +0000 (13:06 -0400)]
Move pg_stat_statements query jumbling to core.
Add compute_query_id GUC to control whether a query identifier should be
computed by the core (off by default). It's therefore now possible to
disable core queryid computation and use pg_stat_statements with a
different algorithm to compute the query identifier by using a
third-party module.
To ensure that a single source of query identifiers can be used and is
well defined, modules that calculate a query identifier should throw an
error if compute_query_id is set to compute a query id and a query
identifier was already calculated.
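A usage sketch of the new GUC:
SET compute_query_id = on;
-- the query_id column below is per the related pg_stat_activity change
SELECT query_id, query FROM pg_stat_activity WHERE pid = pg_backend_pid();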
Discussion: https://postgr.es/m/20210407125726.tkvjdbw76hxnpwfi@nol
Author: Julien Rouhaud
Reviewed-by: Alvaro Herrera, Nitin Jadhav, Zhihong Yu
Tom Lane [Wed, 7 Apr 2021 16:50:17 +0000 (12:50 -0400)]
Remove channel binding requirement from clientcert=verify-full test.
This fails on older OpenSSL versions that lack channel binding
support. Since that feature is not essential to this test case,
just remove it, instead of complicating matters. Per buildfarm.
Jacob Champion
Discussion: https://postgr.es/m/fa8dbbb58c20b1d1adf0082769f80d5466eaf485.camel@vmware.com
Tom Lane [Wed, 7 Apr 2021 16:21:54 +0000 (12:21 -0400)]
Comment cleanup for a1115fa07.
Amit Langote
Discussion: https://postgr.es/m/CA+HiwqEcawatEaUh1uTbZMEZTJeLzbroRTz9_X9Z5CFjTWJkhw@mail.gmail.com
Robert Haas [Wed, 7 Apr 2021 16:11:44 +0000 (12:11 -0400)]
amcheck: Remove duplicate XID/MXID bounds checks.
Commit 3b6c1259f9ca8e21860aaf24ec6735a8e5598ea0 resulted in the same
xmin and xmax bounds checking being performed in both check_tuple()
and check_tuple_visibility(). Remove the duplication.
While at it, adjust some code comments that were overlooked in that
commit.
Mark Dilger
Discussion: http://postgr.es/m/AC5479E4-6321-473D-AC92-5EC36299FBC2@enterprisedb.com
Peter Geoghegan [Wed, 7 Apr 2021 15:47:15 +0000 (08:47 -0700)]
Truncate line pointer array during VACUUM.
Teach VACUUM to truncate the line pointer array of each heap page when a
contiguous group of LP_UNUSED line pointers appear at the end of the
array -- these unused and unreferenced items are excluded. This process
occurs during VACUUM's second pass over the heap, right after LP_DEAD
line pointers on the page (those encountered/pruned during the first
pass) are marked LP_UNUSED.
Truncation avoids line pointer bloat with certain workloads,
particularly those involving continual range DELETEs and bulk INSERTs
against the same table.
Also harden heapam code to check for an out-of-range page offset number
in places where we weren't already doing so.
Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-By: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAEze2WjgaQc55Y5f5CQd3L=eS5CZcff2Obxp=O6pto8-f0hC4w@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-Wzn6a64PJM1Ggzm=uvx2otsopJMhFQj_g1rAj4GWr3ZSzw@mail.gmail.com
Tom Lane [Wed, 7 Apr 2021 15:22:22 +0000 (11:22 -0400)]
Tighten up allowed names for custom GUC parameters.
Formerly we were pretty lax about what a custom GUC's name could
be; so long as it had at least one dot in it, we'd take it.
However, corner cases such as dashes or equal signs in the name
would cause various bits of functionality to misbehave. Rather
than trying to make the world perfectly safe for that, let's
just require that custom names look like "identifier.identifier",
where "identifier" means something that scan.l would accept
without double quotes.
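A sketch of the new rule (the extension prefix is hypothetical):
SET myext.fancy_knob = 'on';        -- accepted: identifier.identifier
SET "my-ext"."fancy knob" = 'on';   -- now rejected: not plain identifiers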
Along the way, this patch refactors things slightly in guc.c
so that find_option() is responsible for reporting GUC-not-found
cases, allowing removal of duplicative code from its callers.
Per report from Hubert Depesz Lubaczewski. No back-patch,
since the consequences of the problem don't seem to warrant
changing behavior in stable branches.
Discussion: https://postgr.es/m/951335.1612910077@sss.pgh.pa.us
Tomas Vondra [Wed, 7 Apr 2021 13:58:35 +0000 (15:58 +0200)]
Don't add non-existent pages to bitmap from BRIN
The code in bringetbitmap() simply added the whole matching page range
to the TID bitmap, as determined by pages_per_range, even if some of the
pages were beyond the end of the heap. The query then might fail with
an error like this:
ERROR: could not open file "base/20176/20228.2" (target block 262144): previous segment is only 131021 blocks
In this case, the relation has 262093 pages (131072 and 131021 pages),
but we're trying to access block 262144, i.e. the first block of the 3rd
segment. At that point _mdfd_getseg() notices the preceding segment is
incomplete, and fails.
Hitting this in practice is rather unlikely, because:
* Most indexes use power-of-two ranges, so segments and page ranges
align perfectly (segment end is also a page range end).
* The table size has to be just right, with the last segment being
almost full - less than one page range from full segment, so that the
last page range actually crosses the segment boundary.
* Prefetch has to be enabled. The regular page access checks that
pages are not beyond heap end, but prefetch does not. On older
releases (before 12) the execution stops after hitting the first
non-existent page, so the prefetch distance has to be sufficient
to reach the first page in the next segment to trigger the issue.
Since 12 it's enough to just have prefetch enabled, the prefetch
distance does not matter.
Fixed by not adding non-existent pages to the TID bitmap. Backpatch
all the way back to 9.6 (BRIN indexes were introduced in 9.5, but that
release is EOL).
Backpatch-through: 9.6
Peter Eisentraut [Wed, 7 Apr 2021 13:11:41 +0000 (15:11 +0200)]
libpq: Set Server Name Indication (SNI) for SSL connections
By default, have libpq set the TLS extension "Server Name Indication" (SNI).
This allows an SNI-aware SSL proxy to route connections. (This
requires a proxy that is aware of the PostgreSQL protocol, not just
any SSL proxy.)
In the future, this could also allow the server to use different SSL
certificates for different host specifications. (That would require
new server functionality. This would be the client-side functionality
for that.)
Since SNI makes the host name appear in cleartext in the network
traffic, this might be undesirable in some cases. Therefore, also add
a libpq connection option "sslsni" to turn it off.
Discussion: https://www.postgresql.org/message-id/flat/7289d5eb-62a5-a732-c3b9-438cee2cb709%40enterprisedb.com
Magnus Hagander [Wed, 7 Apr 2021 12:21:19 +0000 (14:21 +0200)]
Refactor hba_authname
The previous implementation (from 9afffcb833) had an unnecessary check
on the boundaries of the enum, which triggered compile warnings. To clean
it up, move the pre-existing static assert to a central location and
call that.
Reported-By: Erik Rijkers
Reviewed-By: Michael Paquier
Discussion: https://postgr.es/m/1056399262.13159.1617793249020@webmailclassic.xs4all.nl
Peter Eisentraut [Wed, 7 Apr 2021 11:52:26 +0000 (13:52 +0200)]
doc: Improve wording
Discussion: https://www.postgresql.org/message-id/flat/161626776179.652.11944895442156126506%40wrigleys.postgresql.org
Heikki Linnakangas [Wed, 7 Apr 2021 11:33:21 +0000 (14:33 +0300)]
Revert "Add sortsupport for gist_btree opclasses, for faster index builds."
This reverts commit 9f984ba6d23dc6eecebf479ab1d3f2e550a4e9be.
It was making the buildfarm unhappy; apparently, setting client_min_messages
in a regression test produces different output if log_statement='all'.
Another issue is that I now suspect the bit sortsupport function was in
fact not correct to call byteacmp(). Revert to investigate both of those
issues.
Heikki Linnakangas [Wed, 7 Apr 2021 10:22:05 +0000 (13:22 +0300)]
Add sortsupport for gist_btree opclasses, for faster index builds.
Commit 16fa9b2b30 introduced a faster way to build GiST indexes, by
sorting all the data. This commit adds the sortsupport functions needed
to make use of that feature for btree_gist.
Author: Andrey Borodin
Discussion: https://www.postgresql.org/message-id/2F3F7265-0D22-44DB-AD71-8554C743D943@yandex-team.ru
Peter Eisentraut [Wed, 7 Apr 2021 05:49:27 +0000 (07:49 +0200)]
Fix use of cursor sensitivity terminology
Documentation and comments in code and tests have been using the terms
sensitive/insensitive cursor incorrectly relative to the SQL standard.
(Cursor sensitivity is only relevant for changes made in the same
transaction as the cursor, not for concurrent changes in other
sessions.) Moreover, some of the behavior of PostgreSQL is incorrect
according to the SQL standard, confusing the issue further. (WHERE
CURRENT OF changes are not visible in insensitive cursors, but they
should be.)
This change corrects the terminology and removes the claim that
sensitive cursors are supported. It also adds a test case that checks
the insensitive behavior in a "correct" way, using a change command
not using WHERE CURRENT OF. Finally, it adds the ASENSITIVE cursor
option to select the default asensitive behavior, per SQL standard.
There are no changes to cursor behavior in this patch.
Discussion: https://www.postgresql.org/message-id/flat/96ee8b30-9889-9e1b-b053-90e10c050e85%40enterprisedb.com
Peter Eisentraut [Wed, 7 Apr 2021 05:37:30 +0000 (07:37 +0200)]
Message improvement
The previous wording contained a superfluous comma. Adjust phrasing
for grammatical correctness and clarity.
Michael Paquier [Wed, 7 Apr 2021 05:35:26 +0000 (14:35 +0900)]
Remove redundant memset(0) calls for page init of some index AMs
Bloom, GIN, GiST and SP-GiST rely on PageInit() to initialize the
contents of a page, and this routine fills an entire page with zeros
for a size of BLCKSZ, including the special space. Those index AMs
have been using an extra memset() call to zero the special page space,
or even the whole page, which is not necessary as PageInit() already
does this work, so let's remove them. GiST was not doing this extra
call, but had carried a commented-out call that did so since 6236991.
While at it, remove one MAXALIGN() for SP-GiST as PageInit() takes care
of that. This makes the whole page initialization logic more consistent
across all index AMs.
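A standalone analogue of the redundant pattern (initialize_page() is
a stand-in for PostgreSQL's PageInit(), which zeroes the whole
BLCKSZ-sized buffer before laying out the header):

    #include <stdlib.h>
    #include <string.h>

    #define BLCKSZ 8192

    /* stand-in for PageInit(): zeroes the entire page, special space
     * included, before setting up the header fields */
    static void
    initialize_page(char *page, size_t special_size)
    {
        memset(page, 0, BLCKSZ);
        (void) special_size;    /* header setup elided */
    }

    int
    main(void)
    {
        char *page = malloc(BLCKSZ);

        initialize_page(page, 16);
        /* The calls removed by this commit were equivalent to:
         *     memset(page + BLCKSZ - 16, 0, 16);
         * i.e. zeroing memory that is already zero. */
        free(page);
        return 0;
    }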
Author: Bharath Rupireddy
Reviewed-by: Vignesh C, Mahendra Singh Thalor
Discussion: https://postgr.es/m/CALj2ACViOo2qyaPT7krWm4LRyRTw9kOXt+g6PfNmYuGA=YHj9A@mail.gmail.com
Michael Paquier [Wed, 7 Apr 2021 01:16:39 +0000 (10:16 +0900)]
Add some information about authenticated identity via log_connections
The "authenticated identity" is the string used by an authentication
method to identify a particular user. In many common cases, this is the
same as the PostgreSQL username, but for some third-party authentication
methods, the identifier in use may be shortened or otherwise translated
(e.g. through pg_ident user mappings) before the server stores it.
To help administrators see who has actually interacted with the system,
this commit adds the capability to store the original identity when
authentication succeeds within the backend's Port, and generates a log
entry when log_connections is enabled. The log entries generated look
something like this (where a local user named "foouser" is connecting to
the database as the database user called "admin"):
LOG: connection received: host=[local]
LOG: connection authenticated: identity="foouser" method=peer (/data/pg_hba.conf:88)
LOG: connection authorized: user=admin database=postgres application_name=psql
Port->authn_id is set according to the authentication method:
bsd: the PostgreSQL username (aka the local username)
cert: the client's Subject DN
gss: the user principal
ident: the remote username
ldap: the final bind DN
pam: the PostgreSQL username (aka PAM username)
password (and all pw-challenge methods): the PostgreSQL username
peer: the peer's pw_name
radius: the PostgreSQL username (aka the RADIUS username)
sspi: either the down-level (SAM-compatible) logon name, if
compat_realm=1, or the User Principal Name if compat_realm=0
The trust auth method does not set an authenticated identity. Neither
does clientcert=verify-full.
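A standalone sketch of the idea (simplified types, not the backend's
actual Port handling): the verified identity is recorded separately
from the role the session runs as, and logged at authentication time.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct port
    {
        const char *user_name;  /* requested role, e.g. "admin" */
        char       *authn_id;   /* what the auth method verified */
    };

    static void
    set_authn_id(struct port *p, const char *id, const char *method)
    {
        p->authn_id = strdup(id);
        printf("LOG:  connection authenticated: identity=\"%s\" method=%s\n",
               id, method);
    }

    int
    main(void)
    {
        struct port p = {"admin", NULL};

        set_authn_id(&p, "foouser", "peer");  /* e.g. the peer's pw_name */
        free(p.authn_id);
        return 0;
    }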
Port->authn_id could be used for other purposes, like a superuser-only
extra column in pg_stat_activity, but this is left as future work.
PostgresNode::connect_{ok,fails}() have been modified to let tests check
the backend log files for required or prohibited patterns, using the
new log_like and log_unlike parameters. This uses a method based on a
truncation of the existing server log file, like issues_sql_like().
Tests are added to the ldap, kerberos, authentication and SSL test
suites.
Author: Jacob Champion
Reviewed-by: Stephen Frost, Magnus Hagander, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/c55788dd1773c521c862e8e0dddb367df51222be.camel@vmware.com
Fujii Masao [Tue, 6 Apr 2021 22:42:36 +0000 (07:42 +0900)]
Fix test added by commit 9de9294b0c.
The buildfarm members "drongo" and "fairywren" reported that
the regression test (024_archive_recovery.pl) added by commit 9de9294b0c
failed. The cause of this failure is that the test calls $node->init()
without "allows_streaming => 1", which means no pg_hba.conf entry is
added for a TCP/IP connection from pg_basebackup.
This commit fixes the issue by specifying "allows_streaming => 1"
when calling $node->init().
Author: Fujii Masao
Discussion: https://postgr.es/m/3cc3909d-f779-7a74-c201-f1f7f62c7497@oss.nttdata.com
Tom Lane [Tue, 6 Apr 2021 22:13:05 +0000 (18:13 -0400)]
Postpone some more stuff out of ExecInitModifyTable.
Delay creation of the projections for INSERT and UPDATE tuples
until they're needed. This saves a pretty fair amount of work
when only some of the partitions are actually touched.
The logic associated with identifying junk columns in UPDATE/DELETE
is moved to another loop, allowing removal of one loop over the
target relations; the moved logic itself is otherwise unchanged.
Extracted from a larger patch, which seemed to me to be too messy
to push in one commit.
Amit Langote, reviewed at different times by Heikki Linnakangas and
myself
Discussion: https://postgr.es/m/CA+HiwqG7ZruBmmih3wPsBZ4s0H2EhywrnXEduckY5Hr3fWzPWA@mail.gmail.com
David Rowley [Tue, 6 Apr 2021 21:51:33 +0000 (09:51 +1200)]
Fix compiler warning for MSVC in libpq_pipeline.c
DEBUG was already defined by the MSVC toolchain for "Debug" builds. On
these systems the unconditional #define DEBUG was causing a 'DEBUG': macro
redefinition warning.
Here we rename DEBUG to DEBUG_OUTPUT and also get rid of the #define which
defined this constant. This appears to have been left in the code by
mistake.
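The warning comes from redefining a macro the toolchain already
provides; a sketch of the safe pattern (names here are illustrative):

    #include <stdio.h>

    /* Renamed from DEBUG: MSVC's "Debug" configuration predefines
     * DEBUG, so redefining it unconditionally warns with C4005. */
    #ifndef DEBUG_OUTPUT
    #define DEBUG_OUTPUT 0
    #endif

    int
    main(void)
    {
        if (DEBUG_OUTPUT)
            fprintf(stderr, "tracing enabled\n");
        return 0;
    }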
Discussion: https://postgr.es/m/CAApHDvqTTgDm38s4HRj03nhzhzQ1oMOj-RXFUB1pE6Bj07jyuQ@mail.gmail.com
Tom Lane [Tue, 6 Apr 2021 19:56:55 +0000 (15:56 -0400)]
Postpone some stuff out of ExecInitModifyTable.
Arrange to do some things on-demand, rather than immediately during
executor startup, because there's a fair chance of never having to do
them at all:
* Don't open result relations' indexes until needed.
* Don't initialize partition tuple routing, nor the child-to-root
tuple conversion map, until needed.
This wins in UPDATEs on partitioned tables when only some of the
partitions will actually receive updates; with larger partition
counts the savings is quite noticeable. Also, we can remove some
sketchy heuristics in ExecInitModifyTable about whether to set up
tuple routing.
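A generic standalone sketch of the deferral pattern (hypothetical
names, much simplified from the executor code): per-partition state is
built on first use, so untouched partitions cost nothing at startup.

    #include <stdio.h>
    #include <stdlib.h>

    struct result_rel
    {
        void *routing_map;      /* NULL until actually needed */
    };

    static void *
    get_routing_map(struct result_rel *rr)
    {
        if (rr->routing_map == NULL)    /* first touch pays the cost */
            rr->routing_map = calloc(1, 64);
        return rr->routing_map;
    }

    int
    main(void)
    {
        static struct result_rel rels[1000];

        get_routing_map(&rels[42]);  /* only this partition initializes */
        printf("999 partitions never paid startup cost\n");
        return 0;
    }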
Also, remove execPartition.c's private hash table tracking which
partitions were already opened by the ModifyTable node. Instead
use the hash added to ModifyTable itself by commit 86dc90056.
To allow lazy computation of the conversion maps, we now set
ri_RootResultRelInfo in all child ResultRelInfos. We formerly set it
only in some, not terribly well-defined, cases. This has user-visible
side effects in that now more error messages refer to the root
relation instead of some partition (and provide error data in the
root's column order, too). It looks to me like this is a strict
improvement in consistency, so I don't have a problem with the
output changes visible in this commit.
Extracted from a larger patch, which seemed to me to be too messy
to push in one commit.
Amit Langote, reviewed at different times by Heikki Linnakangas and
myself
Discussion: https://postgr.es/m/CA+HiwqG7ZruBmmih3wPsBZ4s0H2EhywrnXEduckY5Hr3fWzPWA@mail.gmail.com
Fujii Masao [Tue, 6 Apr 2021 17:32:10 +0000 (02:32 +0900)]
postgres_fdw: Allow partitions specified in LIMIT TO to be imported.
Commit f49bcd4ef3 disallowed postgres_fdw from importing table partitions.
Because all data can be accessed through the partitioned table which
is the root of the partitioning hierarchy, importing only the
partitioned table should allow access to all the data without creating
extra objects. This is a reasonable default when importing a whole
schema. But there may be cases where users want to explicitly import
one of a partitioned table's partitions.
For that use case, this commit allows postgres_fdw to import tables or
foreign tables which are partitions of some other table only when they
are explicitly specified in the LIMIT TO clause. It doesn't change
the behavior that any partitions not specified in LIMIT TO are
automatically excluded in an IMPORT FOREIGN SCHEMA command.
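A small libpq sketch of the new capability (server, schema, and
partition names are placeholders):

    /* compile with: cc import_part.c -lpq */
    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn *conn = PQconnectdb("dbname=postgres");

        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        /* A partition is importable when named explicitly in LIMIT TO;
         * without LIMIT TO it would still be excluded. */
        PGresult *res = PQexec(conn,
            "IMPORT FOREIGN SCHEMA public LIMIT TO (measurement_2021) "
            "FROM SERVER remote_server INTO local_schema");

        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "%s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 0;
    }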
Author: Matthias van de Meent
Reviewed-by: Bernd Helmle, Amit Langote, Michael Paquier, Fujii Masao
Discussion: https://postgr.es/m/CAEze2Whwg4i=mzApMe+PXxCEfgoZmHGqdqQFW7J4bmj_5p6t1A@mail.gmail.com
Andres Freund [Tue, 6 Apr 2021 16:24:50 +0000 (09:24 -0700)]
Increment xactCompletionCount during subtransaction abort.
Snapshot caching, introduced in 623a9ba79b, did not increment
xactCompletionCount during subtransaction abort. That could lead to an older
snapshot being reused. That is, at least as far as I can see, not a
correctness issue (for MVCC snapshots there's no difference between "in
progress" and "aborted"). The only difference between the old and new
snapshots would be a newer ->xmax.
While HeapTupleSatisfiesMVCC makes the same visibility determination,
reusing the old snapshot leads HeapTupleSatisfiesMVCC to not set
HEAP_XMIN_INVALID, which subsequently causes the kill_prior_tuple
optimization to not kick in (via HeapTupleIsSurelyDead() returning
false). The performance effect of doing the same index lookups over
and over again is how the issue was discovered...
Fix the issue by incrementing xactCompletionCount in
XidCacheRemoveRunningXids. It already acquires ProcArrayLock exclusively,
making that an easy proposition.
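A standalone analogue of the fix's shape (a pthread mutex stands in
for ProcArrayLock; names simplified): every path that ends a
(sub)transaction must bump the shared counter, or cached snapshots can
be reused when they shouldn't be.

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    static pthread_mutex_t proc_array_lock = PTHREAD_MUTEX_INITIALIZER;
    static uint64_t xact_completion_count = 0;

    static void
    xid_cache_remove_running_xids(void)
    {
        pthread_mutex_lock(&proc_array_lock); /* held exclusively anyway */
        /* ... remove the aborted subxids from the XID cache ... */
        xact_completion_count++;              /* the missing increment */
        pthread_mutex_unlock(&proc_array_lock);
    }

    int
    main(void)
    {
        xid_cache_remove_running_xids();
        printf("completion count: %llu\n",
               (unsigned long long) xact_completion_count);
        return 0;
    }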
Add a test to ensure that kill_prior_tuple prevents index growth when it
involves aborted subtransaction of the current transaction.
Author: Andres Freund
Discussion: https://postgr.es/m/20210406043521.lopeo7bbigad3n6t@alap3.anarazel.de
Discussion: https://postgr.es/m/20210317055718.v6qs3ltzrformqoa%40alap3.anarazel.de
Peter Geoghegan [Tue, 6 Apr 2021 15:49:22 +0000 (08:49 -0700)]
Remove tupgone special case from vacuumlazy.c.
Retry the call to heap_prune_page() in rare cases where there is
disagreement between the heap_prune_page() call and the call to
HeapTupleSatisfiesVacuum() that immediately follows. Disagreement is
possible when a concurrently-aborted transaction makes a tuple DEAD
during the tiny window between each step. This was the only case where
a tuple considered DEAD by VACUUM still had storage following pruning.
VACUUM's definition of dead tuples is now uniformly simple and
unambiguous: dead tuples from each page are always LP_DEAD line pointers
that were encountered just after we performed pruning (and just before
we considered freezing remaining items with tuple storage).
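A standalone sketch of the retry (the two stand-in functions simulate
one disagreement; in the server these are pruning and the follow-up
HeapTupleSatisfiesVacuum() check):

    #include <stdbool.h>
    #include <stdio.h>

    static int attempts = 0;

    static void
    prune_page(void)
    {
        attempts++;
    }

    /* true if a concurrent abort made a tuple DEAD after pruning ran */
    static bool
    found_dead_with_storage(void)
    {
        return attempts < 2;    /* simulate one unlucky race */
    }

    static void
    scan_page(void)
    {
    retry:
        prune_page();
        if (found_dead_with_storage())
            goto retry;         /* rare; no tupgone special case needed */
        printf("page pruned after %d attempt(s)\n", attempts);
    }

    int
    main(void)
    {
        scan_page();
        return 0;
    }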
Eliminating the tupgone=true special case enables INDEX_CLEANUP=off
style skipping of index vacuuming that takes place based on flexible,
dynamic criteria. The INDEX_CLEANUP=off case had to know about skipping
indexes up-front before now, due to a subtle interaction with the
special case (see commit dd695979) -- this was a special case unto
itself. Now there are no special cases. And so now it won't matter
when or how we decide to skip index vacuuming: it won't affect how
pruning behaves, and it won't be affected by any of the implementation
details of pruning or freezing.
Also remove XLOG_HEAP2_CLEANUP_INFO records. These are no longer
necessary because we now rely entirely on heap pruning taking care of
recovery conflicts. There is no longer any need to generate recovery
conflicts for DEAD tuples that pruning just missed. This also means
that heap vacuuming now uses exactly the same strategy for recovery
conflicts as index vacuuming always has: REDO routines never need to
process a latestRemovedXid from the WAL record, since earlier REDO of
the WAL record from pruning is sufficient in all cases. The generic
XLOG_HEAP2_CLEAN record type is now split into two new record types to
reflect this new division (these are called XLOG_HEAP2_PRUNE and
XLOG_HEAP2_VACUUM).
Also stop acquiring a super-exclusive lock for heap pages when they're
vacuumed during VACUUM's second heap pass. A regular exclusive lock is
enough. This is correct because heap page vacuuming is now strictly a
matter of setting the LP_DEAD line pointers to LP_UNUSED. No other
backend can have a pointer to a tuple located in a pinned buffer that
can be invalidated by a concurrent heap page vacuum operation.
Heap vacuuming can now be thought of as conceptually similar to index
vacuuming and conceptually dissimilar to heap pruning. Heap pruning now
has sole responsibility for anything involving the logical contents of
the database (e.g., managing transaction status information, recovery
conflicts, considering what to do with HOT chains). Index vacuuming and
heap vacuuming are now only concerned with recycling garbage items from
physical data structures that back the logical database.
Bump XLOG_PAGE_MAGIC due to pruning and heap page vacuum WAL record
changes.
Credit for the idea of retrying pruning a page to avoid the tupgone case
goes to Andres Freund.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CAH2-WznneCXTzuFmcwx_EyRQgfsfJAAsu+CsqRFmFXCAar=nJw@mail.gmail.com
Tom Lane [Tue, 6 Apr 2021 15:23:48 +0000 (11:23 -0400)]
Fix missing #include in nodeResultCache.h.
Per cpluspluscheck.
Peter Eisentraut [Tue, 6 Apr 2021 14:58:10 +0000 (16:58 +0200)]
psql: Show all query results by default
Previously, psql printed only the last result if a command string
returned multiple result sets. Now it prints all of them. The
previous behavior can be obtained by setting the psql variable
SHOW_ALL_RESULTS to off.
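Roughly speaking, the change amounts to draining every PGresult
instead of keeping only the last one; a minimal libpq sketch of that
loop (placeholder conninfo):

    /* compile with: cc all_results.c -lpq */
    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn   *conn = PQconnectdb("dbname=postgres");
        PGresult *res;

        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        PQsendQuery(conn, "SELECT 1; SELECT 2;");
        /* one PGresult per statement; consume them all */
        while ((res = PQgetResult(conn)) != NULL)
        {
            if (PQresultStatus(res) == PGRES_TUPLES_OK)
                printf("result: %s\n", PQgetvalue(res, 0, 0));
            PQclear(res);
        }
        PQfinish(conn);
        return 0;
    }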
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Reviewed-by: "Iwata, Aya" <iwata.aya@jp.fujitsu.com>
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
Tomas Vondra [Tue, 6 Apr 2021 14:12:37 +0000 (16:12 +0200)]
Fix handling of clauses incompatible with extended statistics
Handling of incompatible clauses while applying extended statistics was
a bit confused - while handling a mix of compatible and incompatible
clauses it sometimes incorrectly treated the incompatible clauses as
compatible, resulting in a crash.
Fixed by reworking the code applying the selected statistics object to
make it easier to understand, and adding a proper compatibility check.
Reported-by: David Rowley
Discussion: https://postgr.es/m/CAApHDvpYT10-nkSp8xXe-nbO3jmoaRyRFHbzh-RWMfAJynqgpQ%40mail.gmail.com
Peter Geoghegan [Tue, 6 Apr 2021 14:49:39 +0000 (07:49 -0700)]
Refactor lazy_scan_heap() loop.
Add a lazy_scan_heap() subsidiary function that handles heap pruning and
tuple freezing: lazy_scan_prune(). This is a great deal cleaner. The
code that remains in lazy_scan_heap()'s per-block loop can now be
thought of as code that either comes before or after the call to
lazy_scan_prune(), which is now the clear focal point. This division is
enforced by the way in which we now manage state. lazy_scan_prune()
outputs state (using its own struct) that describes what to do with the
page following pruning and freezing (e.g., visibility map maintenance,
recording free space in the FSM). It doesn't get passed any special
instructional state from the preamble code, though.
Also cleanly separate the logic used by a VACUUM with INDEX_CLEANUP=off
from the logic used by single-heap-pass VACUUMs. The former case is now
structured as the omission of index and heap vacuuming by a two pass
VACUUM. The latter case goes back to being used only when the table
happens to have no indexes (just as it was before commit a96c41fe).
This structure is much more natural, since the whole point of
INDEX_CLEANUP=off is to skip the index and heap vacuuming that would
otherwise take place. The single-heap-pass case doesn't skip any useful
work, though -- it just does heap pruning and heap vacuuming together
when the table happens to have no indexes.
Both of these changes are preparation for an upcoming patch that
generalizes the mechanism used by INDEX_CLEANUP=off. The later patch
will allow VACUUM to give up on index and heap vacuuming dynamically, as
problems emerge (e.g., with wraparound), so that an affected VACUUM
operation can finish up as soon as possible.
Also fix a very old bug in single-pass VACUUM VERBOSE output. We were
reporting the number of tuples deleted via pruning as a direct
substitute for reporting the number of LP_DEAD items removed in a
function that deals with the second pass over the heap. But that
doesn't work at all -- they're two different things.
To fix, start tracking the total number of LP_DEAD items encountered
during pruning, and use that in the report instead. A single pass
VACUUM will always vacuum away whatever LP_DEAD items a heap page has
immediately after it is pruned, so the total number of LP_DEAD items
encountered during pruning equals the total number vacuumed-away.
(They are _not_ equal in the INDEX_CLEANUP=off case, but that's okay
because skipping index vacuuming is now a totally orthogonal concept to
one-pass VACUUM.)
Also stop reporting the count of LP_UNUSED items in VACUUM VERBOSE
output. This makes the output of VACUUM VERBOSE more consistent with
log_autovacuum's output (because it never showed information about
LP_UNUSED items). VACUUM VERBOSE reported LP_UNUSED items left behind
by the last VACUUM, and LP_UNUSED items created via pruning HOT chains
during the current VACUUM (it never included LP_UNUSED items left behind
by the current VACUUM's second pass over the heap). This makes it
useless as an indicator of line pointer bloat, which must have been the
original intention. (Like the first VACUUM VERBOSE issue, this issue was
arguably an oversight in commit 282d2a03, which added the heap-only
tuple optimization.)
Finally, stop reporting empty_pages in VACUUM VERBOSE output, and start
reporting pages_removed instead. This also makes the output of VACUUM
VERBOSE more consistent with log_autovacuum's output (which does not
show empty_pages, but does show pages_removed). An empty page isn't
meaningfully different from a page that is almost empty, or a page that is
empty but for only a small number of remaining LP_UNUSED items.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CAH2-WznneCXTzuFmcwx_EyRQgfsfJAAsu+CsqRFmFXCAar=nJw@mail.gmail.com
Tom Lane [Tue, 6 Apr 2021 14:34:37 +0000 (10:34 -0400)]
Clean up treatment of missing default and CHECK-constraint records.
Andrew Gierth reported that it's possible to crash the backend if no
pg_attrdef record is found to match an attribute that has atthasdef set.
AttrDefaultFetch warns about this situation, but then leaves behind
a relation tupdesc that has null "adbin" pointer(s), which most places
don't guard against.
We considered promoting the warning to an error, but throwing errors
during relcache load is pretty drastic: it effectively locks one out
of using the relation at all. What seems better is to leave the
load-time behavior as a warning, but then throw an error in any code
path that wants to use a default and can't find it. This confines
the error to a subset of INSERT/UPDATE operations on the table, and
in particular will at least allow a pg_dump to succeed.
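A standalone sketch of the policy (simplified types, hypothetical
names): complain softly while loading the metadata, fail hard only
when the missing default is actually needed.

    #include <stdio.h>
    #include <stdlib.h>

    struct attr
    {
        const char *name;
        int         has_default;
        const char *default_expr;  /* NULL if pg_attrdef row is missing */
    };

    /* load time: warn, keep the relation usable (pg_dump still works) */
    static void
    load_attr(const struct attr *a)
    {
        if (a->has_default && a->default_expr == NULL)
            fprintf(stderr, "WARNING: missing default for \"%s\"\n",
                    a->name);
    }

    /* use time (INSERT/UPDATE): only now is the absence an error */
    static const char *
    get_default(const struct attr *a)
    {
        if (a->has_default && a->default_expr == NULL)
        {
            fprintf(stderr, "ERROR: default for \"%s\" not found\n",
                    a->name);
            exit(1);
        }
        return a->default_expr;
    }

    int
    main(void)
    {
        struct attr a = {"f1", 1, NULL};

        load_attr(&a);
        get_default(&a);
        return 0;
    }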
Also, we should fix AttrDefaultFetch to not leave any null pointers
in the tupdesc, because that just creates an untested bug hazard.
While at it, apply the same philosophy of "warn at load, throw error
only upon use of the known-missing info" to CHECK constraints.
CheckConstraintFetch is very nearly the same logic as AttrDefaultFetch,
but for reasons lost in the mists of time, it was throwing ERROR for
the same cases that AttrDefaultFetch treats as WARNING. Make the two
functions more nearly alike.
In passing, get rid of potentially-O(N^2) loops in equalTupleDesc
by making AttrDefaultFetch sort the entries after fetching them,
so that equalTupleDesc can assume that entries in two equal tupdescs
must be in matching order. (CheckConstraintFetch already was sorting
CHECK constraints, but equalTupleDesc hadn't been told about it.)
There's some argument for back-patching this, but with such a small
number of field reports, I'm content to fix it in HEAD.
Discussion: https://postgr.es/m/87pmzaq4gx.fsf@news-spur.riddles.org.uk
Fujii Masao [Tue, 6 Apr 2021 13:56:51 +0000 (22:56 +0900)]
Stop archive recovery if WAL generated with wal_level=minimal is found.
Previously if hot standby was enabled, archive recovery exited with
an error when it found WAL generated with wal_level=minimal.
But if hot standby was disabled, it just reported a warning and
continued in that case, which could lead to data loss or errors
during normal operation. Users could easily miss the warning and
not notice this serious situation until they encountered the actual
errors.
To improve this situation, this commit changes archive recovery
so that it exits with a FATAL error when it finds WAL generated with
wal_level=minimal, regardless of the hot standby setting. This enables
users to notice the serious situation sooner.
The FATAL error is thrown if archive recovery starts from a base
backup taken before wal_level is changed to minimal. When archive
recovery exits with the error, if users have a base backup taken
after setting wal_level to higher than minimal, they can recover
the database by starting archive recovery from that newer backup.
But note that if such a backup doesn't exist, there is no easy way to
complete archive recovery, which may make the database server
unstartable and users may lose the whole database. The commit adds
a note about this risk to the documentation.
Even in the case of an unstartable database server, users could
previously avoid the error during archive recovery just by disabling
hot standby, forcibly start up the server, and salvage data from it.
But note that this commit removes that workaround entirely.
Author: Takamichi Osumi
Reviewed-by: Laurenz Albe, Kyotaro Horiguchi, David Steele, Fujii Masao
Discussion: https://postgr.es/m/OSBPR01MB4888CBE1DA08818FD2D90ED8EDF90@OSBPR01MB4888.jpnprd01.prod.outlook.com
Heikki Linnakangas [Tue, 6 Apr 2021 11:53:56 +0000 (14:53 +0300)]
Mark test_enc_conversion() as STRICT.
Reported-by: Jaime Casanova, using SQLsmith
Discussion: https://www.postgresql.org/message-id/20210402235337.GA4082@ahch-to
Dean Rasheed [Tue, 6 Apr 2021 10:50:42 +0000 (11:50 +0100)]
pgbench: Function to generate random permutations.
This adds a new function, permute(), that generates pseudorandom
permutations of arbitrary sizes. This can be used to randomly shuffle
a set of values to remove unwanted correlations. For example, the
output of a non-uniform random distribution can be permuted so that
the most common values aren't collocated, allowing more realistic
tests to be performed.
Formerly, hash() was recommended for this purpose, but that suffers
from collisions that might alter the distribution, so permute() is now
recommended instead.
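For illustration only - this is not pgbench's committed algorithm - a
seeded bijection on [0, size) can be built by cycle-walking any keyed
bijection over the next power of two:

    #include <stdint.h>
    #include <stdio.h>

    /* keyed bijection on [0, mask], mask = 2^k - 1: XOR and odd-affine
     * steps are each invertible mod 2^k */
    static uint64_t
    mix(uint64_t x, uint64_t seed, uint64_t mask)
    {
        for (int r = 0; r < 4; r++)
        {
            x = (x ^ seed ^ ((uint64_t) r * 0x9e3779b97f4a7c15ULL)) & mask;
            x = (x * 0x2545f4914f6cdd1dULL + 1) & mask;
        }
        return x;
    }

    static uint64_t
    permute(uint64_t i, uint64_t size, uint64_t seed)
    {
        uint64_t mask = 1;

        /* smallest 2^k - 1 covering size - 1 (size must be >= 1) */
        while (mask < size - 1)
            mask = (mask << 1) | 1;
        do
            i = mix(i, seed, mask);
        while (i >= size);      /* cycle-walk back into range */
        return i;
    }

    int
    main(void)
    {
        for (uint64_t i = 0; i < 10; i++)
            printf("%llu -> %llu\n", (unsigned long long) i,
                   (unsigned long long) permute(i, 10, 42));
        return 0;
    }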
Fabien Coelho and Hironobu Suzuki, with additional hacking by me.
Reviewed by Thomas Munro, Alvaro Herrera and Muhammad Usama.
Discussion: https://postgr.es/m/alpine.DEB.2.21.1807280944370.5142@lancre
Etsuro Fujita [Tue, 6 Apr 2021 10:15:00 +0000 (19:15 +0900)]
Adjust input value to WaitEventSetWait() in ExecAppendAsyncEventWait().
Adjust the number of events given to WaitEventSetWait() so that it
doesn't exceed the maximum number of events in the WaitEventSet given
to that function (set->nevents_space) in hopes of making the buildfarm
green.
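The shape of the adjustment, as a trivial standalone sketch (names
illustrative):

    #include <stdio.h>

    /* never request more events than the WaitEventSet has room for */
    static int
    clamp_event_count(int requested, int nevents_space)
    {
        return (requested < nevents_space) ? requested : nevents_space;
    }

    int
    main(void)
    {
        printf("waiting for %d events\n", clamp_event_count(10, 4));
        return 0;
    }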
Per valgrind failure report from Tom Lane and the buildfarm.
Author: Etsuro Fujita
Discussion: https://postgr.es/m/3411577.1617289776%40sss.pgh.pa.us