Peter Eisentraut [Tue, 24 Dec 2024 22:42:41 +0000 (23:42 +0100)]
jsonpath scanner: reentrant scanner
Use the flex %option reentrant to make the generated scanner
reentrant and thread-safe. Note: The parser was already pure.
Simplify flex scan buffer management: Instead of constructing the
buffer from pieces and then using yy_scan_buffer(), we can just use
yy_scan_string(), which does the same thing internally. (Actually, we
use yy_scan_bytes() here because we already have the length.)
Use flex yyextra to handle context information, instead of global
variables. This complements the other changes to make the scanner
reentrant.
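As a rough illustration (not the actual jsonpath code; names such as
MyExtra and scan_input are made up), a scanner generated with %option
reentrant is driven roughly like this, with yyextra carrying the per-scan
context and yy_scan_bytes() setting up the buffer:

    /* prototypes come from the flex-generated header */
    typedef struct MyExtra { const char *errmsg; } MyExtra;   /* illustrative context */

    static void
    scan_input(const char *str, int len)
    {
        yyscan_t    scanner;
        MyExtra     extra = {0};

        yylex_init_extra(&extra, &scanner);   /* all scanner state lives in "scanner" */
        yy_scan_bytes(str, len, scanner);     /* replaces manual yy_scan_buffer() setup */
        while (yylex(scanner) != 0)           /* consume tokens one at a time */
            ;
        yylex_destroy(scanner);               /* frees the buffer and scanner state */
    }

In the tree the functions carry a prefix rather than the bare yy* names,
but the call pattern is the same.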
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/flat/eb6faeac-2a8a-4b69-9189-c33c520e5b7b@eisentraut.org
Peter Geoghegan [Tue, 24 Dec 2024 19:06:16 +0000 (14:06 -0500)]
Fix nbtree symbol name comment reference.
Oversight in commit 5bf748b86b.
Peter Eisentraut [Mon, 2 Dec 2024 09:35:37 +0000 (10:35 +0100)]
syncrep parser: pure parser and reentrant scanner
Use the flex %option reentrant and the bison option %pure-parser to
make the generated scanner and parser pure, reentrant, and
thread-safe.
Make the generated scanner use palloc() etc. instead of malloc() etc.
Previously, we only used palloc() for the buffer, but flex would still
use malloc() for its internal structures. Now, all the memory is
under palloc() control.
Simplify flex scan buffer management: Instead of constructing the
buffer from pieces and then using yy_scan_buffer(), we can just use
yy_scan_string(), which does the same thing internally.
The previous code was necessary because we allocated the buffer with
palloc() and the rest of the state was handled by malloc(). But this
is no longer the case; everything is under palloc() now.
Use flex yyextra to handle context information, instead of global
variables. This complements the other changes to make the scanner
reentrant.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/flat/eb6faeac-2a8a-4b69-9189-c33c520e5b7b@eisentraut.org
Peter Eisentraut [Mon, 2 Dec 2024 09:35:37 +0000 (10:35 +0100)]
replication parser: pure parser and reentrant scanner
Use the flex %option reentrant and the bison option %pure-parser to
make the generated scanner and parser pure, reentrant, and
thread-safe.
Make the generated scanner use palloc() etc. instead of malloc() etc.
Previously, we only used palloc() for the buffer, but flex would still
use malloc() for its internal structures. As a result, there could be
some small memory leaks in case of uncaught errors. Now, all the
memory is under palloc() control, so there are no more such issues.
Simplify flex scan buffer management: Instead of constructing the
buffer from pieces and then using yy_scan_buffer(), we can just use
yy_scan_string(), which does the same thing internally.
The previous code was necessary because we allocated the buffer with
palloc() and the rest of the state was handled by malloc(). But this
is no longer the case; everything is under palloc() now.
Use flex yyextra to handle context information, instead of global
variables. This complements the other changes to make the scanner
reentrant.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Co-authored-by: Andreas Karlsson <andreas@proxel.se>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/flat/eb6faeac-2a8a-4b69-9189-c33c520e5b7b@eisentraut.org
Peter Eisentraut [Tue, 24 Dec 2024 13:02:42 +0000 (14:02 +0100)]
Remove pgrminclude and associated scripts
Per git log, the last time someone tried to do something with
pgrminclude was around 2011. And it's always had a tendency of
causing trouble when it was active. Also, pgcompinclude is redundant
with headerscheck.
Discussion: https://www.postgresql.org/message-id/flat/2d4dc7b2-cb2e-49b1-b8ca-ba5f7024f05b%40eisentraut.org
Peter Eisentraut [Tue, 24 Dec 2024 10:48:08 +0000 (11:48 +0100)]
Remove pgrminclude annotations
Per git log, the last time someone tried to do something with
pgrminclude was around 2011. Many (not all) of the "pgrminclude
ignore" annotations are of a newer date but seem to have just been
copied around during refactorings and file moves and don't seem to
reflect an actual need anymore.
There have been some parallel experiments with include-what-you-use
(IWYU) annotations, but these don't seem to correspond very strongly
to pgrminclude annotations, so there is no value in keeping the
existing ones even for that kind of thing.
So, wipe them all away. We can always add new ones in the future
based on actual needs.
Discussion: https://www.postgresql.org/message-id/flat/2d4dc7b2-cb2e-49b1-b8ca-ba5f7024f05b%40eisentraut.org
David Rowley [Tue, 24 Dec 2024 01:54:24 +0000 (14:54 +1300)]
Fix race condition in TupleDescCompactAttr assert code
5983a4cff added CompactAttribute as an abbreviated alternative to
FormData_pg_attribute to allow more cache-friendly processing in tasks
related to TupleDescs. That commit contained some assert-only code to
check that the CompactAttribute had been populated correctly, however,
the method used to do that checking caused the TupleDesc's
CompactAttribute to be zeroed before it was repopulated and compared to
the snapshot taken before the memset call. This caused issues as the type
cache caches TupleDescs in shared memory which can be used by multiple
backend processes at the same time. There was a window of time between
the zero and repopulation of the CompactAttribute where another process
would mistakenly think that the CompactAttribute is invalid due to the
memset.
To fix this, instead of taking a snapshot of the CompactAttribute and
calling populate_compact_attribute() and comparing the snapshot to the
freshly populated TupleDesc's CompactAttribute, refactor things so we
can just populate a temporary CompactAttribute on the stack. This way
we don't touch the TupleDesc's memory.
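A minimal sketch of the shape of that fix (function and variable names are
approximate, not the committed code):

    #ifdef USE_ASSERT_CHECKING
    static void
    verify_compact_attribute(TupleDesc tupdesc, int attnum)
    {
        CompactAttribute expected;

        /* build the expected value into a stack variable ... */
        populate_compact_attribute_into(TupleDescAttr(tupdesc, attnum), &expected);
        /* ... and only read the (possibly shared) TupleDesc for the comparison */
        Assert(memcmp(&expected, TupleDescCompactAttr(tupdesc, attnum),
                      sizeof(CompactAttribute)) == 0);
    }
    #endif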
Reported-by: Alexander Lakhin, SQLsmith
Discussion: https://postgr.es/m/ca3a256a-5d12-42db-aabe-a75a030d9fb9@gmail.com
Tom Lane [Mon, 23 Dec 2024 21:46:07 +0000 (16:46 -0500)]
Try to avoid semaphore-related test failures on NetBSD/OpenBSD.
These two platforms have a remarkably tight default limit on the
number of SysV semaphores in the system: SEMMNS is only 60
out-of-the-box. Unless manual action is taken to raise that,
we'll only be able to allocate 3 sets of 16 usable semaphores
each, leading to initdb setting max_connections to just 20.
That's problematic because the core regression tests expect
to be able to launch 20 concurrent sessions, leaving us with
no headroom. This seems to be the cause of intermittent
buildfarm failures on some machines.
While there's no getting around the fact that you'd better raise
SEMMNS for production use on these platforms, it does seem desirable
for "make check" to pass reliably without that. We can make that
happen, at least for awhile longer, with two small changes:
* Change sysv_sema.c's SEMAS_PER_SET to 19, so that we can eat up
all of the available semas not just most of them.
* Change initdb to make the smallest max_connections value it will
consider be 25 not 20.
As of HEAD this will leave us with four free semaphores (using the
default values for other relevant parameters such as max_wal_senders).
So we won't need to consider this again until we've invented five
more background processes. Maybe by then we can switch both these
platforms to some other semaphore API.
For the moment, do this only in master; there've not been field
complaints that might justify a back-patch.
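For the record, the arithmetic behind those numbers, assuming (as I read
sysv_sema.c) that each SysV set carries one extra bookkeeping semaphore on
top of SEMAS_PER_SET; treat this as illustrative:

    #include <stdio.h>

    int
    main(void)
    {
        const int semmns = 60;          /* NetBSD/OpenBSD default system-wide limit */
        const int semas_per_set = 19;   /* was 16 before this commit */
        int sets = semmns / (semas_per_set + 1);    /* 60 / 20 = 3 sets */
        int usable = sets * semas_per_set;          /* 3 * 19 = 57; the old values gave 3 * 16 = 48 */

        printf("%d sets, %d usable semaphores\n", sets, usable);
        return 0;
    }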
Discussion: https://postgr.es/m/db2773a2-aca0-43d0-99c1-060efcd9954e@gmail.com
Peter Geoghegan [Mon, 23 Dec 2024 20:46:00 +0000 (15:46 -0500)]
Reset btpo_cycleid in nbtree VACUUM's REDO routine.
Reset btpo_cycleid to 0 in btree_xlog_vacuum for consistency with
_bt_delitems_vacuum (the corresponding original execution code). This
makes things neater.
There might be some performance benefit to being consistent like this.
When btvacuumpage doesn't call _bt_delitems_vacuum, it can still
proactively reset btpo_cycleid to 0 via a separate hint-like update
mechanism (it does so whenever it sees that it isn't already set to 0).
And so it's possible that being consistent about resetting btpo_cycleid
like this will save work later on, after standby promotion: subsequent
VACUUMs won't need to clear btpo_cycleid using the hint-like update
mechanism as often as they otherwise would.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/CAH2-Wz=+LDFxn9NZyEsCo8ifcyKt6+n-VLyygySEHgMz+oynqw@mail.gmail.com
Tom Lane [Mon, 23 Dec 2024 20:14:30 +0000 (15:14 -0500)]
postgres_fdw: re-issue cancel requests a few times if necessary.
Despite the best efforts of commit 0e5c82380, we're still seeing
occasional failures of postgres_fdw's query_cancel test in the
buildfarm. Investigation suggests that its 100ms timeout is
still not enough to reliably ensure that the remote side starts
the query before receiving the cancel request --- and if it
hasn't, it will just discard the request because it's idle.
We discussed allowing a cancel request to kill the next-received
query, but that would have wide and perhaps unpleasant side-effects.
What seems safer is to make postgres_fdw do what a human user would
likely do, which is issue another cancel request if the first one
didn't seem to do anything. We'll keep the same overall 30 second
grace period before concluding things are broken, but issue additional
cancel requests after 1 second, then 2 more seconds, then 4, then 8.
(The next one in series is 16 seconds, but we'll hit the 30 second
timeout before that.)
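Roughly, the retry schedule looks like this (illustrative only, not the
postgres_fdw code; the two callbacks stand in for "send a cancel request"
and "has the remote query stopped"):

    #include <stdbool.h>
    #include <unistd.h>

    static bool
    cancel_with_retries(void (*send_cancel)(void), bool (*query_done)(void))
    {
        int     elapsed = 0;
        int     wait = 1;               /* seconds: 1, 2, 4, 8, ... */

        send_cancel();
        while (!query_done() && elapsed + wait <= 30)
        {
            sleep(wait);
            elapsed += wait;
            send_cancel();              /* re-issue, as a human user would */
            wait *= 2;
        }
        return query_done();            /* the real code keeps waiting out the full 30 s */
    }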
Having done that, revert the timeout in query_cancel.sql to 10 ms.
That will still be enough on most machines, most of the time, for
the remote query to start; but now we're intentionally risking the
race condition occurring sometimes in the buildfarm, so that the
repeat-cancel code path will get some testing.
As before, back-patch to v17. We might eventually contemplate
back-patching this further, and/or adding similar logic to dblink.
But given the lack of field complaints to date, this feels like
mostly an exercise in test case stabilization, so v17 is enough.
Discussion: https://postgr.es/m/colnv3lzzmc53iu5qoawynr6qq7etn47lmggqr65ddtpjliq5d@glkveb4m6nop
Heikki Linnakangas [Mon, 23 Dec 2024 10:42:55 +0000 (12:42 +0200)]
Don't allow GetTransactionSnapshot() in logical decoding
A historic snapshot should only be used for catalog access, not
general queries. We never call GetTransactionSnapshot() during logical
decoding, which is good because it wouldn't be very sensible, so the
code to deal with that was unreachable and untested. Turn it into an
error, to avoid doing that in the future either.
Discussion: https://www.postgresql.org/message-id/a868fe78-ddb4-4b0a-9b96-873d91d93cfd@iki.fi
Heikki Linnakangas [Mon, 23 Dec 2024 10:42:39 +0000 (12:42 +0200)]
Remove unnecessary GetTransactionSnapshot() calls
In get_database_list() and get_subscription_list(), the
GetTransactionSnapshot() call is not required because the catalog
table scans use the catalog snapshot, which is held until the end of
the scan. See table_beginscan_catalog(), which calls
RegisterSnapshot(GetCatalogSnapshot(relid)).
In InitPostgres, it's a little less obvious that it's not required,
but still true I believe. All the catalog lookups in InitPostgres()
also use the catalog snapshot, and the looked up values are copied
while still holding the snapshot.
Furthermore, as the removed FIXME comments said, calling
GetTransactionSnapshot() didn't really prevent MyProc->xmin from being
reset anyway.
Discussion: https://www.postgresql.org/message-id/7c56f180-b9e1-481e-8c1d-efa63de3ecbb@iki.fi
David Rowley [Mon, 23 Dec 2024 06:41:49 +0000 (19:41 +1300)]
Fix incorrect source filename references
Jian He reported the src/include/utility/tcop.h one and the remainder
were found by using a script to look for src/* and check that we have a
filename or directory of that name.
In passing, fix some out-of-date comments.
Reported-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CACJufxGoE3H-7VgO02=PrR4SNuVWDVbfTyUnwO0HvS-Lxurnog@mail.gmail.com
Michael Paquier [Mon, 23 Dec 2024 05:46:49 +0000 (14:46 +0900)]
Fix some comments related to library unloading
Library unloading has never been supported with its code removed in
ab02d702ef08, and there were some comments still mentioning that it was
a possible operation.
ChangAo has noticed the incorrect references in dfmgr.c, while I have
noticed the other ones while scanning the rest of the tree for similar
mistakes.
Author: ChangAo Chen, Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/tencent_1D09840A1632D406A610C8C4E2491D74DB0A@qq.com
Heikki Linnakangas [Sat, 21 Dec 2024 21:42:39 +0000 (23:42 +0200)]
Update TransactionXmin when MyProc->xmin is updated
GetSnapshotData() set TransactionXmin = MyProc->xmin, but when
SnapshotResetXmin() advanced MyProc->xmin, it did not advance
TransactionXmin correspondingly. That meant that TransactionXmin could
be older than MyProc->xmin, and XIDs between TransactionXmin and
the real MyProc->xmin could be vacuumed away. One known consequence is
in pg_subtrans lookups: we might try to look up the status of an XID
that was already truncated away.
Back-patch to all supported versions.
Reviewed-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/d27a046d-a1e4-47d1-a95c-fbabe41debb4@iki.fi
David Rowley [Fri, 20 Dec 2024 20:43:26 +0000 (09:43 +1300)]
Optimize alignment calculations in tuple form/deform
Here we convert CompactAttribute.attalign from a char, which is directly
derived from pg_attribute.attalign into a uint8, which stores the number
of bytes to align the column's value by in the tuple.
This allows tuple deformation and tuple size calculations to move away
from using the inefficient att_align_nominal() macro, which manually
checks each TYPALIGN_* char to translate that into the alignment bytes
for the given type. Effectively, this commit changes those to TYPEALIGN
calls, which are branchless and only perform some simple arithmetic with
some bit-twiddling.
The removed branches were often mispredicted by CPUs, especially so in
real-world tables which often contain a mishmash of different types
with different alignment requirements.
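For context, the branchless arithmetic is essentially what the TYPEALIGN
macro does (simplified here): round a length up to the next multiple of a
power-of-two alignment using only addition and masking:

    #include <stdint.h>

    static inline uintptr_t
    type_align(uintptr_t alignval, uintptr_t len)
    {
        /* alignval must be a power of two (1, 2, 4 or 8) */
        return (len + (alignval - 1)) & ~(alignval - 1);
    }
    /* e.g. type_align(4, 10) == 12, type_align(8, 16) == 16 */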
Author: David Rowley
Reviewed-by: Andres Freund, Victor Yegorov
Discussion: https://postgr.es/m/CAApHDvrBztXP3yx=NKNmo3xwFAFhEdyPnvrDg3=M0RhDs+4vYw@mail.gmail.com
Heikki Linnakangas [Thu, 19 Dec 2024 16:02:11 +0000 (18:02 +0200)]
Mark CatalogSnapshotData static
Like CurrentSnapshotData, it should not be accessed directly outside
snapmgr.c.
Heikki Linnakangas [Fri, 20 Dec 2024 17:36:33 +0000 (19:36 +0200)]
Fix variable reference in comment
This used to say "nsubxcnt isn't decreased when subtransactions
abort", but there's no variable called nsubxcnt. Commit
8548ddc61b
changed it to "subxcnt", among other typo fixes, but that was wrong
too: the comment actually talks about txn->nsubtxns. That's the field
that's incremented but never decremented and is used for the
allocation earlier in the function.
Melanie Plageman [Fri, 20 Dec 2024 14:41:41 +0000 (09:41 -0500)]
Fix overflow danger in SampleHeapTupleVisible(), take 2
28328ec87b45725 addressed one overflow danger in
SampleHeapTupleVisible() but introduced another, albeit a less likely
one. Modify the binary search code to remove this danger.
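A sketch of an overflow-safe formulation (illustrative only, not the
committed code): keeping the upper bound exclusive avoids both "end = n - 1"
when n is 0 and "high = mid - 1" when mid is 0, the classic wraparound traps
with unsigned counters:

    #include <stdbool.h>
    #include <stdint.h>

    static bool
    find_offset(const uint32_t *offsets, uint32_t n, uint32_t target)
    {
        uint32_t low = 0;
        uint32_t high = n;              /* exclusive bound; safe even when n == 0 */

        while (low < high)
        {
            uint32_t mid = low + (high - low) / 2;  /* no low + high overflow */

            if (offsets[mid] == target)
                return true;
            if (offsets[mid] < target)
                low = mid + 1;
            else
                high = mid;             /* never mid - 1, so no wraparound */
        }
        return false;
    }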
Reported-by: Richard Guo
Reviewed-by: Richard Guo, Ranier Vilela
Discussion: https://postgr.es/m/CAMbWs4_bE%2BNscChbKWzw6HZOipCUyXfA5133qvoXQ654D3B2gQ%40mail.gmail.com
Thomas Munro [Fri, 20 Dec 2024 08:53:25 +0000 (21:53 +1300)]
Fix corruption when relation truncation fails.
RelationTruncate() does three things, while holding an
AccessExclusiveLock and preventing checkpoints:
1. Logs the truncation.
2. Drops buffers, even if they're dirty.
3. Truncates some number of files.
Step 2 could previously be canceled if it had to wait for I/O, and step
3 could and still can fail in file APIs. All orderings of these
operations have data corruption hazards if interrupted, so we can't give
up until the whole operation is done. When dirty pages were discarded
but the corresponding blocks were left on disk due to ERROR, old page
versions could come back from disk, reviving deleted data (see
pgsql-bugs #18146 and several like it). When primary and standby were
allowed to disagree on relation size, standbys could panic (see
pgsql-bugs #18426) or revive data unknown to visibility management on
the primary (theorized).
Changes:
* WAL is now unconditionally flushed first
* smgrtruncate() is now called in a critical section, preventing
interrupts and causing PANIC on file API failure
* smgrtruncate() has a new parameter for existing fork sizes,
because it can't call smgrnblocks() itself inside a critical section
The changes apply to RelationTruncate(), smgr_redo() and
pg_truncate_visibility_map(). That last is also brought up to date with
other evolutions of the truncation protocol.
The VACUUM FileTruncate() failure mode had been discussed in older
reports than the ones referenced below, with independent analysis from
many people, but earlier theories on how to fix it were too complicated
to back-patch. The more recently invented cancellation bug was
diagnosed by Alexander Lakhin. Other corruption scenarios were spotted
by me while iterating on this patch and earlier commit 75818b3a.
Back-patch to all supported releases.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reported-by: rootcause000@gmail.com
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/18146-04e908c662113ad5%40postgresql.org
Discussion: https://postgr.es/m/18426-2d18da6586f152d6%40postgresql.org
David Rowley [Fri, 20 Dec 2024 10:22:37 +0000 (23:22 +1300)]
Remove pg_attribute.attcacheoff column
The column is no longer needed as the offset is now cached in the
CompactAttribute struct per commit 5983a4cff.
Author: David Rowley
Reviewed-by: Andres Freund, Victor Yegorov
Discussion: https://postgr.es/m/CAApHDvrBztXP3yx=NKNmo3xwFAFhEdyPnvrDg3=M0RhDs+4vYw@mail.gmail.com
Michael Paquier [Fri, 20 Dec 2024 10:00:18 +0000 (19:00 +0900)]
Relax regression test for fsync check of backend-level stats
One test added in 9aea73fc61d4 did not take into account that the
backend may have some fsync even after a checkpoint. Let's relax it to
be more flexible.
Per report from buildfarm member grassquit, via Alexander Lakhin.
Author: Bertrand Drouvot
Discussion: https://postgr.es/m/6143ab0a-9e88-4790-8d9d-50ba45657761@gmail.com
David Rowley [Fri, 20 Dec 2024 09:31:26 +0000 (22:31 +1300)]
Introduce CompactAttribute array in TupleDesc, take 2
The new compact_attrs array stores a few select fields from
FormData_pg_attribute in a more compact way, using only 16 bytes per
column instead of the 104 bytes that FormData_pg_attribute uses. Using
CompactAttribute allows performance-critical operations such as tuple
deformation to be performed without looking at the FormData_pg_attribute
element in TupleDesc which means fewer cacheline accesses.
For some workloads, tuple deformation can be the most CPU intensive part
of processing the query. Some testing with 16 columns on a table
where the first column is variable length showed around a 10% increase in
transactions per second for an OLAP type query performing aggregation on
the 16th column. However, in certain cases, the increases were much
higher, up to ~25% on one AMD Zen4 machine.
This also makes pg_attribute.attcacheoff redundant. A follow-on commit
will remove it, thus shrinking the FormData_pg_attribute struct by 4
bytes.
Author: David Rowley
Reviewed-by: Andres Freund, Victor Yegorov
Discussion: https://postgr.es/m/CAApHDvrBztXP3yx=NKNmo3xwFAFhEdyPnvrDg3=M0RhDs+4vYw@mail.gmail.com
Melanie Plageman [Thu, 19 Dec 2024 23:52:19 +0000 (18:52 -0500)]
Remove final mention of FREEZE_PAGE from comments
b7493e1ab35 removed leftover mentions of XLOG_HEAP2_FREEZE_PAGE records
from comments but neglected to remove one mention of FREEZE_PAGE.
Reported off-list by Alexander Lakhin
Tom Lane [Thu, 19 Dec 2024 23:07:00 +0000 (18:07 -0500)]
Get rid of old version of BuildTupleHashTable().
It was reasonable to preserve the old API of BuildTupleHashTable()
in the back branches, but in HEAD we should actively discourage use
of that version. There are no remaining callers in core, so just
get rid of it. Then rename BuildTupleHashTableExt() back to
BuildTupleHashTable().
While at it, fix up the miserably-poorly-maintained header comment
for BuildTupleHashTable[Ext]. It looks like more than one patch in
this area has had the opinion that updating comments is beneath them.
Discussion: https://postgr.es/m/538343.1734646986@sss.pgh.pa.us
Tom Lane [Thu, 19 Dec 2024 22:07:14 +0000 (17:07 -0500)]
Use ExecGetCommonSlotOps infrastructure in more places.
Append, MergeAppend, and RecursiveUnion can all use the support
functions added in commit 276279295. The first two can report a
fixed result slot type if all their children return the same fixed
slot type. That does nothing for the append step itself, but might
allow optimizations in the parent plan node. RecursiveUnion can
optimize tuple hash table operations in the same way as SetOp now
does.
Patch by me; thanks to Richard Guo and David Rowley for review.
Discussion: https://postgr.es/m/1850138.1731549611@sss.pgh.pa.us
Tom Lane [Thu, 19 Dec 2024 22:02:25 +0000 (17:02 -0500)]
Improve planner's handling of SetOp plans.
Remove the code for inserting flag columns in the inputs of a SetOp.
That was the only reason why there would be resjunk columns in a
set-operations plan tree, so we can get rid of some code that
supported that, too.
Get rid of choose_hashed_setop() in favor of building Paths for
the hashed and sorted alternatives, and letting them fight it out
within add_path().
Remove set_operation_ordered_results_useful(), which was giving wrong
answers due to examining the wrong ancestor node: we need to examine
the immediate SetOperationStmt parent not the topmost node. Instead
make each caller of recurse_set_operations() pass down the relevant
parent node. (This thinko seems to have led only to wasted planning
cycles and possibly-inferior plans, not wrong query answers. Perhaps
we should back-patch it, but I'm not doing so right now.)
Teach generate_nonunion_paths() to consider pre-sorted inputs for
sorted SetOps, rather than always generating a Sort node.
Patch by me; thanks to Richard Guo and David Rowley for review.
Discussion: https://postgr.es/m/1850138.1731549611@sss.pgh.pa.us
Tom Lane [Thu, 19 Dec 2024 21:23:45 +0000 (16:23 -0500)]
Convert SetOp to read its inputs as outerPlan and innerPlan.
The original design for set operations involved appending the two
input relations into one and adding a flag column that allows
distinguishing which side each row came from. Then the SetOp node
pries them apart again based on the flag. This is bizarre. The
only apparent reason to do it is that when sorting, we'd only need
one Sort node not two. But since sorting is at least O(N log N),
sorting all the data is actually worse than sorting each side
separately --- plus, we have no chance of taking advantage of
presorted input. On top of that, adding the flag column frequently
requires an additional projection step that adds cycles, and then
the Append node isn't free either. Let's get rid of all of that
and make the SetOp node have two separate children, using the
existing outerPlan/innerPlan infrastructure.
This initial patch re-implements nodeSetop.c and does a bare minimum
of work on the planner side to generate correctly-shaped plans.
In particular, I've tried not to change the cost estimates here,
so that the visible changes in the regression test results will only
involve removal of useless projection steps and not any changes in
whether to use sorted vs hashed mode.
For SORTED mode, we combine successive identical tuples from each
input into groups, and then merge-join the groups. The tuple
comparisons now use SortSupport instead of simple equality, but
the group-formation part should involve roughly the same number of
tuple comparisons as before. The cross-comparisons between left and
right groups probably add to that, but I'm not sure how to quantify how
many more comparisons we might need.
For HASHED mode, nodeSetop's logic is almost the same as before,
just refactored into two separate loops instead of one loop that
has an assumption that it will see all the left-hand inputs first.
In both modes, I added early-exit logic to not bother reading the
right-hand relation if the left-hand input is empty, since neither
INTERSECT nor EXCEPT modes can produce any output if the left input
is empty. This could have been done before in the hashed mode, but
not in sorted mode. Sorted mode can also stop as soon as it exhausts
the left input; any remaining right-hand tuples cannot have matches.
Also, this patch adds some infrastructure for detecting whether
child plan nodes all output the same type of tuple table slot.
If they do, the hash table logic can use slightly more efficient
code based on assuming that that's the input slot type it will see.
We'll make use of that infrastructure in other plan node types later.
Patch by me; thanks to Richard Guo and David Rowley for review.
Discussion: https://postgr.es/m/1850138.1731549611@sss.pgh.pa.us
Melanie Plageman [Thu, 19 Dec 2024 16:55:18 +0000 (11:55 -0500)]
Remove extra prefetch iterator setup for Bitmap Table Scan
1a0da347a7ac98db replaced Bitmap Table Scan's separate private and
shared bitmap iterators with a unified iterator. It accidentally set up
the prefetch iterator twice for non-parallel bitmap table scans. Remove
the extra set up call to tbm_begin_iterate().
Melanie Plageman [Thu, 19 Dec 2024 16:55:03 +0000 (11:55 -0500)]
Fix bitmap table scan crash on iterator release
1a0da347a7ac98db replaced Bitmap Table Scan's individual private and
shared iterators with a unified iterator. It neglected, however, to
check if the iterator had already been cleaned up before doing so on
rescan. Add this check both on rescan and end scan to be safe.
Reported-by: Richard Guo
Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs48nrhcLY1kcd-u9oD%2B6yiS631F_8Fx8ZGsO-BYDwH%2Bbyw%40mail.gmail.com
Peter Geoghegan [Thu, 19 Dec 2024 16:08:55 +0000 (11:08 -0500)]
Avoid nbtree index scan SAOP scanBehind confusion.
Consistently reset so->scanBehind at the beginning of nbtree array
advancement, even during sktrig_required=false calls (calls where array
advancement is triggered by an unsatisfied non-required array scan key).
Otherwise, it's possible for queries to fail to return all relevant
tuples to the scan given a low-order required scan key that was
previously deemed "satisfied" by a truncated high key attribute value.
This only happened at the point where a later non-required array scan
key needed to be "advanced" once on the next leaf page (that is, once
the right sibling of the truncated high key page was reached).
The underlying issue was that later code within _bt_advance_array_keys
assumed that the so->scanBehind flag must have been set using the
current page's high key (not the previous page's high key). Any later
successful recheck call to _bt_check_compare would therefore spuriously
be prevented from making _bt_advance_array_keys return true, based on
the faulty belief that the truncated attribute must be from the scan's
current tuple (i.e. the non-pivot tuple at the start of the next page).
_bt_advance_array_keys would return false for the tuple, ultimately
resulting in _bt_checkkeys failing to return a matching tuple.
Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp
execution.
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-WzkJKncfqyAUTeuB5GgRhT1vhsWO2q11dbZNqKmvjopP_g@mail.gmail.com
Backpatch: 17-, where commit 5bf748b8 first appears.
Peter Eisentraut [Thu, 19 Dec 2024 14:37:44 +0000 (15:37 +0100)]
bootstrap: pure parser and reentrant scanner
Use the flex %option reentrant and the bison option %pure-parser to
make the generated scanner and parser pure, reentrant, and
thread-safe.
Make the generated scanner use palloc() etc. instead of malloc() etc.
For the bootstrap scanner and parser, reentrancy and memory management
aren't that important, but we make this change here anyway so that all
the scanners and parsers in the backend use a similar set of options
and APIs.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/flat/eb6faeac-2a8a-4b69-9189-c33c520e5b7b@eisentraut.org
Peter Eisentraut [Thu, 19 Dec 2024 12:00:31 +0000 (13:00 +0100)]
Small whitespace improvement
Author: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/flat/eb6faeac-2a8a-4b69-9189-c33c520e5b7b@eisentraut.org
Peter Eisentraut [Thu, 19 Dec 2024 10:21:06 +0000 (11:21 +0100)]
Prevent redeclaration of typedef yyscan_t
Fix for 1f0de66ea2a: We need to prevent redeclaration of typedef
yyscan_t. (This will work with C11 but not currently with C99.) The
generated scanner files provide their own typedef, but we also need to
provide one for the interfaces that we expose. So we need to add some
preprocessor guards to avoid a redefinition. (This is how the
generated scanner files do it internally as well.) This way
everything now works independent of the order in which things are
included.
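The guard is of the same shape the generated scanners use internally
(shown here as an illustration; the exact placement in the tree may differ):

    #ifndef YY_TYPEDEF_YY_SCANNER_T
    #define YY_TYPEDEF_YY_SCANNER_T
    typedef void *yyscan_t;
    #endif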
Discussion: https://www.postgresql.org/message-id/flat/eb6faeac-2a8a-4b69-9189-c33c520e5b7b@eisentraut.org
Michael Paquier [Thu, 19 Dec 2024 04:19:22 +0000 (13:19 +0900)]
Add backend-level statistics to pgstats
This adds a new variable-numbered statistics kind in pgstats, where the
object ID key of the stats entries is based on the proc number of the
backends. This acts as an upper-bound for the number of stats entries
that can exist at once. The entries are created when a backend starts
after authentication succeeds, and are removed when the backend exits,
making the stats entry exist for as long as their backend is up and
running. These are not written to the pgstats file at shutdown (note
that write_to_file is disabled, as a safety measure).
Currently, these stats include only information about the I/O generated
by a backend, using the same layer as pg_stat_io, except that it is now
possible to know how much activity is happening in each backend rather
than an overall aggregate of all the activity. A function called
pg_stat_get_backend_io() is added to access this data depending on the
PID of a backend. The existing structure could be expanded in the
future to add more information about other statistics related to
backends, depending on requirements or ideas.
Auxiliary processes are not included in this set of statistics. These
are less interesting to have than normal backends as they have dedicated
entries in pg_stat_io, and stats kinds of their own.
This commit also includes pg_stat_reset_backend_stats(), a function able
to reset all the stats associated with a single backend.
Bump catalog version and PGSTAT_FILE_FORMAT_ID.
Author: Bertrand Drouvot
Reviewed-by: Álvaro Herrera, Kyotaro Horiguchi, Michael Paquier, Nazir Bilal Yavuz
Discussion: https://postgr.es/m/ZtXR+CtkEVVE/LHF@ip-10-97-1-34.eu-west-3.compute.internal
Michael Paquier [Thu, 19 Dec 2024 01:16:02 +0000 (10:16 +0900)]
Extract logic filling pg_stat_get_io()'s tuplestore into its own routine
This commit adds pg_stat_io_build_tuples(), a helper routine for
pg_stat_get_io(), that fills its result tuplestore based on the contents
of PgStat_BktypeIO. This will be used in a follow-up commit that uses
the same structures as pg_stat_io for reporting, including the same
object types and contexts, but for a different statistics kind.
Author: Bertrand Drouvot, Michael Paquier
Discussion: https://postgr.es/m/ZtXR+CtkEVVE/LHF@ip-10-97-1-34.eu-west-3.compute.internal
David Rowley [Thu, 19 Dec 2024 00:57:21 +0000 (13:57 +1300)]
Optimize grouping equality checks with virtual slots
8f4ee9626 fixed an old Assert failure that could happen when the slot
type used to look up the hash table for BuildTupleHashTableExt() users
wasn't a TTSOpsMinimalTuple slot. The fix for that in the back branches
had to be to pass the TupleTableSlotOps as NULL, however in master,
since we have the inputOps parameter as was added by d96d1d515, we can
pass that down instead.
At least one caller uses a fixed slot that's always TTSOpsVirtual, so
passing down inputOps for these cases allows ExecBuildGroupingEqual() to
skip adding the EEOP_INNER_FETCHSOME ExprEvalStep.
This should increase the performance of hashed subplans very slightly.
Author: Tom Lane, David Rowley
Discussion: https://postgr.es/m/2543667.1734483723@sss.pgh.pa.us
David Rowley [Thu, 19 Dec 2024 00:11:39 +0000 (13:11 +1300)]
Fix Assert failure in WITH RECURSIVE UNION queries
If the non-recursive part of a recursive CTE ended up using
TTSOpsBufferHeapTuple as the table slot type, then a duplicate value
could cause an Assert failure in CheckOpSlotCompatibility() when
checking the hash table for the duplicate value. The expected slot type
for the deform step was TTSOpsMinimalTuple so the Assert failed when the
TTSOpsBufferHeapTuple slot was used.
This is a long-standing bug which we likely didn't notice because it
seems much more likely that the non-recursive term would have required
projection and used a TTSOpsVirtual slot, which CheckOpSlotCompatibility
is ok with.
There doesn't seem to be any harm done here other than the Assert
failure. Both TTSOpsMinimalTuple and TTSOpsBufferHeapTuple slot types
require tuple deformation, so the EEOP_*_FETCHSOME ExprState step would
have properly existed in the ExprState.
The solution is to pass NULL for the ExecBuildGroupingEqual's 'lops'
parameter. This means the ExprState's EEOP_*_FETCHSOME step won't
expect a fixed slot type. This makes CheckOpSlotCompatibility() happy as
no checking is performed when the ExprEvalStep is not expecting a fixed
slot type.
Reported-by: Richard Guo
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAMbWs4-8U9q2LAtf8+ghV11zeUReA3AmrYkxzBEv0vKnDxwkKA@mail.gmail.com
Backpatch-through: 13, all supported versions
Melanie Plageman [Wed, 18 Dec 2024 23:47:21 +0000 (18:47 -0500)]
Remove leftover mentions of XLOG_HEAP2_FREEZE_PAGE records
f83d709760d merged the separate XLOG_HEAP2_FREEZE_PAGE records into a
new combined prune, freeze, and vacuum record with opcode
XLOG_HEAP2_PRUNE_VACUUM_SCAN. Remove the last few references to
XLOG_HEAP2_FREEZE_PAGE records which were accidentally left behind.
Reported-by: Tomas Vondra
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/CA%2BTgmoY1tYff-1CEn8kYt5FsOrynTbtr%3DUZw%3D7mTC1Hv1HpeBQ%40mail.gmail.com
Melanie Plageman [Wed, 18 Dec 2024 23:43:39 +0000 (18:43 -0500)]
Bitmap Table Scans use unified TBMIterator
With the repurposing of TBMIterator as an interface for both parallel
and serial iteration through TIDBitmaps in commit 7f9d4187e7bab10329cc,
bitmap table scans may now use it.
Modify bitmap table scan code to use the TBMIterator. This requires
moving around a bit of code, so a few variables are initialized
elsewhere.
Author: Melanie Plageman
Reviewed-by: Tomas Vondra
Discussion: https://postgr.es/m/c736f6aa-8b35-4e20-9621-62c7c82e2168%40vondra.me
Melanie Plageman [Wed, 18 Dec 2024 23:19:28 +0000 (18:19 -0500)]
Add common interface for TBMIterators
Add and use TBMPrivateIterator, which replaces the current TBMIterator
for serial use cases, and repurpose TBMIterator to be a unified
interface for both the serial ("private") and parallel ("shared") TID
Bitmap iterator interfaces. This encapsulation simplifies call sites for
callers supporting both parallel and serial TID Bitmap access.
TBMIterator is not yet used in this commit.
Author: Melanie Plageman
Reviewed-by: Tomas Vondra, Heikki Linnakangas
Discussion: https://postgr.es/m/063e4eb4-32d9-439e-a0b1-75565a9835a8%40iki.fi
Melanie Plageman [Wed, 18 Dec 2024 23:16:43 +0000 (18:16 -0500)]
Fix overflow danger in SampleHeapTupleVisible()
68d9662be1c4b70 made HeapScanDesc->rs_ntuples unsigned but neglected to
change how it was being used in SampleHeapTupleVisible().
Return early if rs_ntuples is 0 to avoid overflowing and incorrectly
executing the loop code in SampleHeapTupleVisible().
Reported-by: Ranier Vilela
Discussion: https://postgr.es/m/CAEudQAot_xQoZyPZjpj1aBUPrPykY5mOPHGyvfe%3Djz%2BWowdA3A%40mail.gmail.com
Melanie Plageman [Wed, 18 Dec 2024 16:47:38 +0000 (11:47 -0500)]
Make rs_cindex and rs_ntuples unsigned
HeapScanDescData.rs_cindex and rs_ntuples can't be less than 0. All scan
types using the heap scan descriptor expect these values to be >= 0.
Make that expectation clear by making rs_cindex and rs_ntuples unsigned.
Also remove the test in heapam_scan_bitmap_next_tuple() that checks if
rs_cindex < 0. This was never true, but now that rs_cindex is unsigned,
it makes even less sense.
While we are at it, initialize both rs_cindex and rs_ntuples to 0 in
initscan().
Author: Melanie Plageman
Reviewed-by: Dilip Kumar
Discussion: https://postgr.es/m/CAAKRu_ZxF8cDCM_BFi_L-t%3DRjdCZYP1usd1Gd45mjHfZxm0nZw%40mail.gmail.com
Peter Eisentraut [Wed, 18 Dec 2024 07:47:53 +0000 (08:47 +0100)]
seg: pure parser and reentrant scanner
Use the flex %option reentrant and the bison option %pure-parser to
make the generated scanner and parser pure, reentrant, and
thread-safe.
Make the generated scanner use palloc() etc. instead of malloc() etc.
Previously, we only used palloc() for the buffer, but flex would still
use malloc() for its internal structures. As a result, there could be
some small memory leaks in case of uncaught errors. (We do catch
normal syntax errors as soft errors.) Now, all the memory is under
palloc() control, so there are no more such issues.
Simplify flex scan buffer management: Instead of constructing the
buffer from pieces and then using yy_scan_buffer(), we can just use
yy_scan_string(), which does the same thing internally.
The previous code was necessary because we allocated the buffer with
palloc() and the rest of the state was handled by malloc(). But this
is no longer the case; everything is under palloc() now.
(We could even get rid of the yylex_destroy() call and just let the
memory context cleanup handle everything. But for now, we preserve
the existing behavior.)
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/flat/eb6faeac-2a8a-4b69-9189-c33c520e5b7b@eisentraut.org
Peter Eisentraut [Wed, 18 Dec 2024 07:47:34 +0000 (08:47 +0100)]
cube: pure parser and reentrant scanner
Use the flex %option reentrant and the bison option %pure-parser to
make the generated scanner and parser pure, reentrant, and
thread-safe.
Make the generated scanner use palloc() etc. instead of malloc() etc.
Previously, we only used palloc() for the buffer, but flex would still
use malloc() for its internal structures. As a result, there could be
some small memory leaks in case of uncaught errors. (We do catch
normal syntax errors as soft errors.) Now, all the memory is under
palloc() control, so there are no more such issues.
Simplify flex scan buffer management: Instead of constructing the
buffer from pieces and then using yy_scan_buffer(), we can just use
yy_scan_string(), which does the same thing internally. (Actually, we
use yy_scan_bytes() here because we already have the length.)
The previous code was necessary because we allocated the buffer with
palloc() and the rest of the state was handled by malloc(). But this
is no longer the case; everything is under palloc() now.
(We could even get rid of the yylex_destroy() call and just let the
memory context cleanup handle everything. But for now, we preserve
the existing behavior.)
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/flat/eb6faeac-2a8a-4b69-9189-c33c520e5b7b@eisentraut.org
Michael Paquier [Wed, 18 Dec 2024 06:16:12 +0000 (15:16 +0900)]
psql: Add more information about service name
This commit adds support for the following items in psql, which can show a
service name when available:
- Variable SERVICE.
- Substitution %s in PROMPT{1,2,3}.
This relies on 4b99fed7541e, that has made the service name available in
PGconn for libpq.
Author: Michael Banck
Reviewed-by: Greg Sabino Mullane
Discussion: https://postgr.es/m/6723c612.050a0220.1567f4.b94a@mx.google.com
Michael Paquier [Wed, 18 Dec 2024 05:53:42 +0000 (14:53 +0900)]
libpq: Add service name to PGconn and PQservice()
This commit adds one field to PGconn for the database service name (if
any), with PQservice() as routine to retrieve it. Like the other
routines of this area, NULL is returned as result if the connection is
NULL.
A follow-up patch will make use of this feature to be able to display
the service name in the psql prompt.
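A minimal caller-side sketch (assuming the accessor behaves like the other
PQdb()/PQuser()-style accessors; "mydb" is a made-up pg_service.conf entry):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn *conn = PQconnectdb("service=mydb");

        if (PQstatus(conn) == CONNECTION_OK)
            printf("connected via service: %s\n", PQservice(conn));
        PQfinish(conn);
        return 0;
    }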
Author: Michael Banck
Reviewed-by: Greg Sabino Mullane
Discussion: https://postgr.es/m/6723c612.050a0220.1567f4.b94a@mx.google.com
Tom Lane [Wed, 18 Dec 2024 03:31:26 +0000 (22:31 -0500)]
Fix memory leak in pg_restore with zstd-compressed data.
EndCompressorZstd() neglected to free everything. This was
most visible with a lot of large objects in the dump.
Per report from Tomasz Szypowski. Back-patch to v16
where this code came in.
Discussion: https://postgr.es/m/DU0PR04MB94193D038A128EF989F922D199042@DU0PR04MB9419.eurprd04.prod.outlook.com
David Rowley [Tue, 17 Dec 2024 23:05:55 +0000 (12:05 +1300)]
Fix incorrect slot type in BuildTupleHashTableExt
0f5738202 adjusted the execGrouping.c code so it made use of ExprStates to
generate hash values. That commit made a wrong assumption that the slot
type to pass to ExecBuildHash32FromAttrs() is always &TTSOpsMinimalTuple.
That's not the case as the slot type depends on the slot type passed to
LookupTupleHashEntry(), which for nodeRecursiveunion.c, could be any of
the current slot types.
Here we fix this by adding a new parameter to BuildTupleHashTableExt()
to allow the slot type to be passed in. In the case of nodeSubplan.c
and nodeAgg.c the slot type is always &TTSOpsVirtual, so for both of
those cases, it's beneficial to pass the known slot type as that allows
ExecBuildHash32FromAttrs() to skip adding the tuple deform step to the
resulting ExprState. Another possible fix would have been to have
ExecBuildHash32FromAttrs() set "fetch.kind" to NULL so that
ExecComputeSlotInfo() always determines the EEOP_INNER_FETCHSOME is
required, however, that option isn't favorable as slows down aggregation
and hashed subplan evaluation due to the extra (needless) deform step.
Thanks to Nathan Bossart for bisecting to find the offending commit
based on Paul's report.
Reported-by: Paul Ramsey <pramsey@cleverelephant.ca>
Discussion: https://postgr.es/m/99F064C1-B3EB-4BE7-97D2-D2A0AA487A71@cleverelephant.ca
Nathan Bossart [Tue, 17 Dec 2024 21:24:45 +0000 (15:24 -0600)]
Accommodate very large dshash tables.
If a dshash table grows very large (e.g., the dshash table for
cumulative statistics when there are millions of tables), resizing
it may fail with an error like:
ERROR: invalid DSA memory alloc request size 1073741824
To fix, permit dshash resizing to allocate more than 1 GB by
providing the DSA_ALLOC_HUGE flag.
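The substance of the fix is the allocation flag; a sketch (illustrative,
with the bucket-array details simplified):

    /* dsa_allocate() rejects requests over 1 GB; dsa_allocate_extended()
     * with DSA_ALLOC_HUGE (here also DSA_ALLOC_ZERO) allows them. */
    static dsa_pointer
    allocate_bucket_array(dsa_area *area, size_t nbuckets)
    {
        return dsa_allocate_extended(area,
                                     nbuckets * sizeof(dsa_pointer),
                                     DSA_ALLOC_HUGE | DSA_ALLOC_ZERO);
    }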
Reported-by: Andreas Scherbaum
Author: Matthias van de Meent
Reviewed-by: Cédric Villemain, Michael Paquier, Andres Freund
Discussion: https://postgr.es/m/80a12d59-0d5e-4c54-866c-e69cd6536471%40pgug.de
Backpatch-through: 13
Tom Lane [Tue, 17 Dec 2024 20:52:05 +0000 (15:52 -0500)]
Skip useless calculation of join RTE column names during EXPLAIN.
There's no need for set_simple_column_names() to compute unique
column names for join RTEs, because a finished plan tree will
not contain any join alias Vars that we could need names for.
Its other, internal callers will not pass it any join RTEs
anyway, so the upshot is we can just skip join RTEs here.
Aside from getting rid of a klugy against-its-documentation use of
set_relation_column_names, this can speed up EXPLAIN substantially
when considering many-join queries, because the upper join RTEs
tend to have a lot of columns.
Sami Imseih, with cosmetic changes by me
Discussion: https://postgr.es/m/CAA5RZ0th3q-0p1pri58z9grG8r8azmEBa8o1rtkwhLmJg_cH+g@mail.gmail.com
Melanie Plageman [Tue, 17 Dec 2024 19:13:27 +0000 (14:13 -0500)]
Count pages set all-visible and all-frozen in VM during vacuum
Heap vacuum already counts and logs pages with newly frozen tuples. Now
count and log the number of pages newly set all-visible and all-frozen
in the visibility map.
Pages that are all-visible but not all-frozen are debt for future
aggressive vacuums. The counts of newly all-visible and all-frozen pages
give us insight into the rate at which this debt is being accrued and
paid down.
Author: Melanie Plageman
Reviewed-by: Masahiko Sawada, Alastair Turner, Nitin Jadhav, Andres Freund, Bilal Yavuz, Tomas Vondra
Discussion: https://postgr.es/m/flat/CAAKRu_ZQe26xdvAqo4weHLR%3DivQ8J4xrSfDDD8uXnh-O-6P6Lg%40mail.gmail.com#6d8d2b4219394f774889509bf3bdc13d,
https://postgr.es/m/ctdjzroezaxmiyah3gwbwm67defsrwj2b5fpfs4ku6msfpxeia%40mwjyqlhwr2wu
Melanie Plageman [Tue, 17 Dec 2024 19:13:18 +0000 (14:13 -0500)]
Make visibilitymap_set() return previous state of vmbits
It can be useful to know the state of a relation page's VM bits before
visibilitymap_set(). visibilitymap_set() has the old value on hand, so
returning it is simple. This commit does not use visibilitymap_set()'s
new return value.
Author: Melanie Plageman
Reviewed-by: Masahiko Sawada, Andres Freund, Nitin Jadhav, Bilal Yavuz
Discussion: https://postgr.es/m/flat/CAAKRu_ZQe26xdvAqo4weHLR%3DivQ8J4xrSfDDD8uXnh-O-6P6Lg%40mail.gmail.com#6d8d2b4219394f774889509bf3bdc13d,
https://postgr.es/m/ctdjzroezaxmiyah3gwbwm67defsrwj2b5fpfs4ku6msfpxeia%40mwjyqlhwr2wu
Melanie Plageman [Tue, 17 Dec 2024 19:13:00 +0000 (14:13 -0500)]
Rename LVRelState->frozen_pages
Rename frozen_pages to new_frozen_tuple_pages in LVRelState, the struct
used for tracking state during vacuuming of a heap relation.
frozen_pages sounds like it tracks pages set all-frozen. That is a
misnomer. It only includes pages with at least one newly frozen tuple.
It also includes pages that are not all-frozen.
Author: Melanie Plageman
Reviewed-by: Andres Freund, Masahiko Sawada, Nitin Jadhav, Bilal Yavuz
Discussion: https://postgr.es/m/ctdjzroezaxmiyah3gwbwm67defsrwj2b5fpfs4ku6msfpxeia%40mwjyqlhwr2wu
Tom Lane [Tue, 17 Dec 2024 17:23:26 +0000 (12:23 -0500)]
Set max_safe_fds whenever we create shared memory and semaphores.
Formerly we skipped this in bootstrap/check mode and in single-user
mode. That's bad in check mode because it may allow accepting a
value of max_connections that doesn't actually work: on platforms
where semaphores consume file descriptors, there may not be enough
free FDs left over to satisfy fd.c, causing postmaster start to
fail. It's also not great in single-user mode, because fd.c will
operate with just the minimum allowable value of max_safe_fds,
resulting in excess file open/close overhead if anything moderately
complicated is done in single-user mode. (There may be some penalty
for bootstrap mode too, though probably not much.)
Discussion: https://postgr.es/m/2081982.1734393311@sss.pgh.pa.us
Tom Lane [Tue, 17 Dec 2024 17:08:39 +0000 (12:08 -0500)]
Set the stack_base_ptr in main(), not in random other places.
Previously we did this in PostmasterMain() and InitPostmasterChild(),
which meant that stack depth checking was disabled in non-postmaster
server processes, for instance in single-user mode. That seems like
a fairly bad idea, since there's no a-priori restriction on the
complexity of queries we will run in single-user mode. Moreover, this
led to not having quite the same stack depth limit in all processes,
which likely has no real-world effect but it offends my inner neatnik.
Setting the depth in main() guarantees that check_stack_depth() is
armed and has a consistent interpretation of stack depth in all forms
of server processes.
While at it, move the code associated with checking the stack depth
out of tcop/postgres.c (which was never a great home for it) into
a new file src/backend/utils/misc/stack_depth.c.
Discussion: https://postgr.es/m/2081982.1734393311@sss.pgh.pa.us
Tomas Vondra [Tue, 17 Dec 2024 15:47:23 +0000 (16:47 +0100)]
Detect version mismatch in brin_page_items
Commit dae761a87ed modified brin_page_items() to return the new "empty"
flag for each BRIN range. But the new output parameter was added in the
middle, which may cause crashes when using the new binary with old
function definition.
The ideal solution would be to introduce API versioning similar to what
pg_stat_statements does, but it's too late for that as PG17 was already
released (so we can't introduce a new extension version). We could do
something similar in brin_page_items() by checking the number of output
columns (and ignoring the new flag), but it doesn't seem very nice.
Instead, simply error out and suggest updating the extension to the
latest version. pageinspect is a superuser-only extension, and there's
not much reason to run an older version. Moreover, there's a precedent
for this approach in 691e8b2e18.
Reported by Ľuboslav Špilák, investigation and patch by me. Backpatch to
17, same as dae761a87ed.
Reported-by: Ľuboslav Špilák
Reviewed-by: Michael Paquier, Hayato Kuroda, Peter Geoghegan
Backpatch-through: 17
Discussion: https://postgr.es/m/VI1PR02MB63331C3D90E2104FD12399D38A5D2@VI1PR02MB6333.eurprd02.prod.outlook.com
Discussion: https://postgr.es/m/flat/3385a58f-5484-49d0-b790-9a198a0bf236@vondra.me
Tomas Vondra [Tue, 17 Dec 2024 14:40:07 +0000 (15:40 +0100)]
Update comments about index parallel builds
Commit b43757171470 allowed parallel builds for BRIN, but left behind
two comments claiming only btree indexes support parallel builds.
Reported by Egor Rogov, along with similar issues in SGML docs.
Backpatch to 17, where parallel builds for BRIN were introduced.
Reported-by: Egor Rogov
Backpatch-through: 17
Discussion: https://postgr.es/m/114e2d5d-125e-07d8-94aa-5ad175fb7443@postgrespro.ru
Peter Eisentraut [Tue, 17 Dec 2024 13:04:55 +0000 (14:04 +0100)]
Remove ts_locale.c's lowerstr()
lowerstr() and lowerstr_with_len() in ts_locale.c do the same thing as
str_tolower() that the rest of the system uses, except that the former
don't use the common locale provider framework but instead use the
global libc locale settings.
This patch replaces uses of lowerstr*() with str_tolower(...,
DEFAULT_COLLATION_OID). For instances that use a libc locale
globally, this will result in exactly the same behavior. For
instances that use other locale providers, you now get consistent
behavior and are no longer dependent on the libc locale settings (for
this case; there are others).
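The replacement pattern, as a sketch (the str_tolower() signature is
assumed here to be buffer, byte length, collation OID, returning a palloc'd
string):

    static char *
    fold_word(const char *word)
    {
        return str_tolower(word, strlen(word), DEFAULT_COLLATION_OID);
    }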
Most uses of these functions are for processing dictionary and
configuration files. In those cases, using the default collation
seems appropriate. At least we don't have a more specific collation
available. But the code in contrib/pg_trgm should really depend on
the collation of the columns being processed. This is not done here,
this can be done in a separate patch.
(You can probably construct some edge cases where this change would
create some locale-related upgrade incompatibility, for example if
before you used a combination of ICU and a differently-behaving libc
locale. We can document this in the release notes, but I don't think
there is anything more we can do about this.)
Reviewed-by: Jeff Davis <pgsql@j-davis.com>
Discussion: https://www.postgresql.org/message-id/flat/653f3b84-fc87-45a7-9a0c-bfb4fcab3e7d%40eisentraut.org
Peter Eisentraut [Tue, 17 Dec 2024 11:48:58 +0000 (12:48 +0100)]
Remove ts_locale.c's t_isdigit(), t_isspace(), t_isprint()
These do the same thing as the standard isdigit(), isspace(), and
isprint() but with multibyte and encoding support. But all the
callers are only interested in analyzing single-byte ASCII characters.
So this extra layer is overkill and we can replace the uses with the
standard functions.
All the t_is*() functions in ts_locale.c are under scrutiny because
they don't use the common locale provider framework but instead use
the global libc locale settings. For the functions being touched by
this patch, we don't need all that anyway, as mentioned above, so the
simplest solution is to just remove them. The few remaining t_is*()
functions will need a different treatment in a separate patch.
pg_trgm has some compile-time options with macros such as
KEEPONLYALNUM. These are not documented, and the non-default variant
is not supported by any test cases. As part of this undertaking, I'm
removing the non-default variant, as it is in the way of cleanup. So
in this case, the not-KEEPONLYALNUM code path is gone.
Reviewed-by: Jeff Davis <pgsql@j-davis.com>
Discussion: https://www.postgresql.org/message-id/flat/653f3b84-fc87-45a7-9a0c-bfb4fcab3e7d%40eisentraut.org
Richard Guo [Tue, 17 Dec 2024 10:53:01 +0000 (19:53 +0900)]
Avoid unnecessary wrapping for more complex expressions
When pulling up a subquery that is under an outer join, if the
subquery's target list contains a strict expression that uses a
subquery variable, it's okay to pull up the expression without
wrapping it in a PlaceHolderVar: if the subquery variable is forced to
NULL by the outer join, the expression result will come out as NULL
too.
If the strict expression does not contain any subquery variables, the
current code always wraps it in a PlaceHolderVar. While this is not
incorrect, the analysis could be tighter: if the strict expression
contains any variables of rels that are under the same lowest nulling
outer join as the subquery, we can also avoid wrapping it. This is
safe because if the subquery variable is forced to NULL by the outer
join, the variables of rels that are under the same lowest nulling
outer join will also be forced to NULL, resulting in the expression
evaluating to NULL as well. Therefore, it's not necessary to force
the expression to be evaluated below the outer join. It could be
beneficial to get rid of such PHVs because they could imply lateral
dependencies, which force us to resort to nestloop joins.
This patch checks if the lateral references in the strict expression
contain any variables of rels under the same lowest nulling outer join
as the subquery, and avoids wrapping the expression in that case.
This is fundamentally a generalization of the optimizations for bare
Vars and PHVs introduced in commit f64ec81a8.
No backpatch as this could result in plan changes.
Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4_ENtfRdLaM_bXAxiKRYO7DmwDBDG4_2=VTDi0mJP-jAw@mail.gmail.com
Amit Kapila [Tue, 17 Dec 2024 09:38:29 +0000 (15:08 +0530)]
Doc: Fix the wrong link on pg_createsubscriber page.
Commit 84db9a0eb1 has added the incorrect link to
'initial data synchronization'. It was a subsection of Row Filter and
didn't provide the required information.
Author: Peter Smith
Reviewed-by: Vignesh C, Pavel Luzanov
Backpatch-through: 17, where it was introduced
Discussion: https://postgr.es/m/CAHut+PtnA4DB_pcv4TDr4NjUSM1=P2N_cuZx5DX09k7LVmaqUA@mail.gmail.com
Michael Paquier [Tue, 17 Dec 2024 05:32:35 +0000 (14:32 +0900)]
Tweak some comments related to variable-numbered stats in pgstat.c
These comments referred to database objects, but depending on the stats
kind dealt with this may not be true.
Issues found while reviewing a different patch in this area.
Discussion: https://postgr.es/m/ZtXR+CtkEVVE/LHF@ip-10-97-1-34.eu-west-3.compute.internal
Michael Paquier [Tue, 17 Dec 2024 00:44:06 +0000 (09:44 +0900)]
Print out error position for some more DDLs
The following commands gain some information about the error position in
the query, should they fail when looking at the type used:
- CREATE TYPE (LIKE)
- CREATE TABLE OF
Both are related to typenameType() where the type name lookup is done.
These calls gain the ParseState that already exists in these paths.
Author: Kirill Reshke, Jian He
Reviewed-by: Álvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/CALdSSPhqfvKbDwqJaY=yEePi_aq61GmMpW88i6ZH7CMG_2Z4Cg@mail.gmail.com
Michael Paquier [Tue, 17 Dec 2024 00:23:49 +0000 (09:23 +0900)]
pg_combinebackup: Fix PITR comparison test in 002_compare_backups
The test was creating both the dumps to compare from the same database
on the same node, so it would never detect any mismatches when comparing
the logical dumps of the two servers.
Fixing this issue has revealed that there is a difference in the dumps:
the tablespace paths are different. This commit uses compare_text()
with a custom comparison function to erase the difference (slightly
tweaked to be able to work with WIN32 and non-WIN32 paths). This way,
the non-relevant parts of the tablespace path are ignored in the check,
while the basic structure of the query string is still compared.
Author: Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/87h67653ns.fsf@wibble.ilmari.org
Backpatch-through: 17
Tomas Vondra [Mon, 16 Dec 2024 18:08:52 +0000 (19:08 +0100)]
doc: Mention BRIN indexes support parallel builds
Two places in the documentation suggest B-tree is the only index access
method allowing parallel builds. Commit
b4375717 added parallel builds
for BRIN too, but failed to update the docs. So fix that, and backpatch
to 17, where parallel BRIN builds were introduced.
Author: Egor Rogov
Backpatch-through: 17
Discussion: https://postgr.es/m/
114e2d5d-125e-07d8-94aa-
5ad175fb7443@postgrespro.ru
Tomas Vondra [Mon, 16 Dec 2024 17:12:29 +0000 (18:12 +0100)]
psql: Tab completion for JOIN ... USING column list
For JOIN ... USING, offer attribute names for the first member of the
column list.
Author: Andreas Karlsson
Reviewed-By: Tomas Vondra
Discussion: https://postgr.es/m/
3a7e27bc-d6ed-4cb0-9b21-
f21143fc1b37@proxel.se
Tomas Vondra [Mon, 16 Dec 2024 17:08:30 +0000 (18:08 +0100)]
psql: Tab completion for JOIN ... ON/USING
Offer ON/USING clauses for join types that require join conditions (i.e.
anything except for NATURAL/CROSS joins).
Author: Andreas Karlsson
Reviewed-By: Tomas Vondra
Discussion: https://postgr.es/m/
3a7e27bc-d6ed-4cb0-9b21-
f21143fc1b37@proxel.se
Tomas Vondra [Mon, 16 Dec 2024 16:55:00 +0000 (17:55 +0100)]
psql: Tab completion for LATERAL joins
When listing selectable objects after a JOIN, offer also LATERAL.
Author: Andreas Karlsson
Reviewed-By: Tomas Vondra
Discussion: https://postgr.es/m/
3a7e27bc-d6ed-4cb0-9b21-
f21143fc1b37@proxel.se
Jeff Davis [Mon, 16 Dec 2024 17:35:18 +0000 (09:35 -0800)]
Refactor string case conversion into provider-specific files.
Create API entry points pg_strlower(), etc., that work with any
provider and give the caller control over the destination
buffer. Then, move provider-specific logic into pg_locale_builtin.c,
pg_locale_icu.c, and pg_locale_libc.c as appropriate.
Discussion: https://postgr.es/m/
7aa46d77b377428058403723440862d12a8a129a.camel@j-davis.com
Tomas Vondra [Mon, 16 Dec 2024 15:46:56 +0000 (16:46 +0100)]
psql: Tab completion for CREATE MATERIALIZED VIEW ... USING
The tab completion didn't offer USING for CREATE MATERIALIZED VIEW, so
add it, and offer a list of access methods, followed by SELECT.
Author: Kirill Reshke
Reviewed-By: Karina Litskevich
Discussion: https://postgr.es/m/CALdSSPhVELkvutquqrDB=Ujfq_Pjz=6jn-kzh+291KPNViLTfw@mail.gmail.com
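The completed form now reachable through tab completion looks like this (hypothetical view name; heap is simply the default table access method):
    CREATE MATERIALIZED VIEW mv USING heap AS SELECT 1 AS x;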
Tomas Vondra [Mon, 16 Dec 2024 15:38:35 +0000 (16:38 +0100)]
psql: Tab completion for CREATE TEMP TABLE ... USING
The USING keyword was offered only for persistent tables, not for
temporary ones. So improve that.
Author: Kirill Reshke
Reviewed-By: Karina Litskevich
Discussion: https://postgr.es/m/CALdSSPhVELkvutquqrDB=Ujfq_Pjz=6jn-kzh+291KPNViLTfw@mail.gmail.com
Tomas Vondra [Mon, 16 Dec 2024 15:20:04 +0000 (16:20 +0100)]
psql: Tab completion for ALTER TYPE ... CASCADE/RESTRICT
Updates tab completion for ALTER TYPE to offer CASCADE/RESTRICT for a
number of actions on attributes:
ALTER TYPE ... ADD/DROP/RENAME ATTRIBUTE ... [CASCADE|RESTRICT]
ALTER TYPE ... TYPE ... [CASCADE|RESTRICT]
Author: Kirill Reshke
Reviewed-By: Karina Litskevich
Discussion: https://postgr.es/m/CALdSSPhVELkvutquqrDB=Ujfq_Pjz=6jn-kzh+291KPNViLTfw@mail.gmail.com
Tomas Vondra [Mon, 16 Dec 2024 14:53:36 +0000 (15:53 +0100)]
psql: Tab completion for ALTER TYPE ... ADD ATTRIBUTE
Improve psql tab completion for ALTER TYPE ... ADD ATTRIBUTE to offer a
list of existing data types (until now no options were offered).
Author: Kirill Reshke
Reviewed-By: Karina Litskevich
Discussion: https://postgr.es/m/CALdSSPhVELkvutquqrDB=Ujfq_Pjz=6jn-kzh+291KPNViLTfw@mail.gmail.com
Heikki Linnakangas [Mon, 16 Dec 2024 13:56:38 +0000 (15:56 +0200)]
Make 009_twophase.pl test pass with recovery_min_apply_delay set
The test failed if you ran the regression tests with TEMP_CONFIG with
recovery_min_apply_delay = '500ms'. Fix the race condition by waiting
for the transaction to be applied in the replica, like in a few other
tests.
The failing test was introduced in commit
cbfbda7841. Backpatch to all
supported versions like that commit (except v12, which is no longer
supported).
Reported-by: Alexander Lakhin
Discussion: https://www.postgresql.org/message-id/
09e2a70a-a6c2-4b5c-aeae-
040a7449c9f2@gmail.com
Michael Paquier [Mon, 16 Dec 2024 05:52:11 +0000 (14:52 +0900)]
Print out error position for CREATE DOMAIN
This is simply done by pushing down the ParseState available in
ProcessUtility() to DefineDomain(), giving more information about the
position of an error when running a CREATE DOMAIN query.
Most of the queries impacted by this change have been added previously
in
0172b4c9449e.
Author: Kirill Reshke, Jian He
Reviewed-by: Álvaro Herrera, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/CALdSSPhqfvKbDwqJaY=yEePi_aq61GmMpW88i6ZH7CMG_2Z4Cg@mail.gmail.com
Michael Paquier [Mon, 16 Dec 2024 02:23:38 +0000 (11:23 +0900)]
Add some tests for encoding conversion in COPY TO/FROM
This adds a couple of tests to trigger encoding conversion when the
input and server encodings do not match in COPY FROM/TO, or when
need_transcoding is set to true in the COPY state data. These tests rely on UTF8 <-> LATIN1
for the valid cases as LATIN1 accepts any bytes, and UTF8 <-> EUC_JP for
some of the invalid cases where a character cannot be understood,
causing a conversion failure.
Both ENCODING and client_encoding are covered. Test suggested by Andres
Freund.
Author: Sutou Kouhei
Discussion: https://postgr.es/m/
20240206222445.hzq22pb2nye7rm67@awork3.anarazel.de
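For illustration (not the actual regression tests), the kind of statements such tests exercise, relying on client_encoding and the ENCODING option of COPY:
    CREATE TABLE enc_copy (t text);
    -- conversion between client and server encodings on input
    SET client_encoding = 'LATIN1';
    COPY enc_copy FROM stdin;
    \.
    RESET client_encoding;
    -- explicit target encoding on output
    COPY enc_copy TO stdout (ENCODING 'LATIN1');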
Tom Lane [Sun, 15 Dec 2024 20:50:07 +0000 (15:50 -0500)]
Declare a couple of variables inside not outside a PG_TRY block.
I went through the buildfarm's reports of "warning: variable 'foo'
might be clobbered by 'longjmp' or 'vfork' [-Wclobbered]". As usual,
none of them are live problems according to my understanding of the
effects of setjmp/longjmp, to wit that local variables might revert
to their values as of PG_TRY entry, due to being kept in registers.
But I did happen to notice that XmlTableGetValue's "cstr" variable
doesn't need to be declared outside the PG_TRY block at all (thus
giving further proof that the -Wclobbered warning has little
connection to real problems). We might as well move it inside,
and "cur" too, in hopes of eliminating one of the bogus warnings.
Tom Lane [Sun, 15 Dec 2024 19:14:14 +0000 (14:14 -0500)]
pgbench: fix misprocessing of some nested \if constructs.
An \if command appearing within a false (not-to-be-executed) \if
branch was incorrectly treated the same as \elif. This could allow
statements within the inner \if to be executed when they should
not be. Also the missing inner \if stack entry would result in an
assertion failure (in assert-enabled builds) when the final \endif
is reached.
Report and patch by Michail Nikolaev. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/CANtu0oiA1ke=SP6tauhNqkUdv5QFsJtS1p=aOOf_iU+EhyKkjQ@mail.gmail.com
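A minimal pgbench script sketch (not from the report) of the broken shape: the inner \if sits inside a false branch, and before the fix it was handled like \elif, so its statements could run and the missing stack entry could trip an assertion at the final \endif.
    \set flag 0
    \if :flag = 1
        \if :flag = 1
            SELECT 'must not run';
        \endif
        SELECT 'must not run either';
    \endif
    SELECT 1;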
Fujii Masao [Sun, 15 Dec 2024 02:18:18 +0000 (11:18 +0900)]
doc: Clarify old WAL files are kept until they are summarized.
The documentation in wal.sgml explains that old WAL files cannot be
removed or recycled until they are archived (when WAL archiving is used)
or replicated (when using replication slots). However, it did not mention
that, similarly, old WAL files are also kept until they are summarized
if WAL summarization is enabled. This commit adds that clarification
to the documentation.
Back-patch to v17 where WAL summarization was added.
Author: Fujii Masao
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/
fd0eb0a5-f43b-4e06-b450-
cbca011b6cff@oss.nttdata.com
Tom Lane [Sat, 14 Dec 2024 21:07:18 +0000 (16:07 -0500)]
contrib/earthdistance: Use SQL-standard function bodies.
The @extschema:name@ feature added by
72a5b1fc8 allows us to
make earthdistance's references to the cube extension fully
search-path-secure, so long as all those references are
resolved at extension installation time not runtime.
To do that, we must convert earthdistance's SQL functions to
the new SQL-standard style; but we wanted to do that anyway.
The functions can be updated in our customary style by running
CREATE OR REPLACE FUNCTION in an extension update script.
However, there are still problems in the "CREATE DOMAIN earth"
command: its references to cube functions could be captured
by hostile objects in earthdistance's installation schema,
if that's not where the cube extension is. Worse, the reference
to the cube type itself as the domain's base could be captured,
and that's not something we could fix after-the-fact in the
update script.
What I've done about that is to change the "CREATE DOMAIN earth"
command in the base script earthdistance--1.1.sql. Ordinarily,
changing a released extension script is forbidden; but I think
it's okay here since the results of successful (non-trojaned)
script execution will be identical to before.
A good deal of care is still needed to make the extension's scripts
proof against search-path-based attacks. We have to make sure that
all the function and operator invocations have exact argument-type
matches, to forestall attacks based on supplying a better match.
Fortunately earthdistance isn't very big, so I've just gone through
it and inspected each call to be sure of that. The only actual code
changes needed were to spell all floating-point constants in the style
'-1'::float8, rather than depending on runtime type conversions and/or
negations. (I'm not sure that the shortcuts previously used were
attackable, but removing run-time effort is a good thing anyway.)
I believe that this fixes earthdistance enough that we could
mark it trusted and remove the warnings about it that were
added by
7eeb1d986; but I've not done that here.
The primary reason for dealing with this now is that we've
received reports of pg_upgrade failing for databases that use
earthdistance functions in contexts like generated columns.
That's a consequence of
2af07e2f7 having restricted the search_path
used while evaluating such expressions. The only way to fix that
is to make the earthdistance functions independent of run-time
search_path. This patch is very much nicer than the alternative of
attaching "SET search_path" clauses to earthdistance's functions:
it is more secure and doesn't create a run-time penalty. Therefore,
I've chosen to back-patch this to v16 where @extschema:name@
was added. It won't help unless users update to 16.7 and issue
"ALTER EXTENSION earthdistance UPDATE" before upgrading to 17,
but at least there's now a way to deal with the problem without
manual intervention in the dump/restore process.
Tom Lane and Ronan Dunklau
Discussion: https://postgr.es/m/
3316564.aeNJFYEL58@aivenlaptop
Discussion: https://postgr.es/m/
6a6439f1-8039-44e2-8fb9-
59028f7f2014@mailbox.org
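Not taken from the earthdistance scripts, just a generic sketch of the difference: an old-style string body is parsed at call time under the caller's search_path, while a SQL-standard body is parsed at creation time, so its references are resolved once, at extension installation.
    -- Old style: body is a string, parsed at run time.
    CREATE FUNCTION double_old(float8) RETURNS float8
        AS 'SELECT $1 * 2' LANGUAGE SQL IMMUTABLE;
    -- SQL-standard style: body parsed now, object references bound now.
    CREATE OR REPLACE FUNCTION double_new(x float8) RETURNS float8
        LANGUAGE SQL IMMUTABLE
        RETURN x * 2;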
Álvaro Herrera [Sat, 14 Dec 2024 11:55:00 +0000 (12:55 +0100)]
Refactor some SQL/JSON error messages
Turn type names into "%s" specifiers to 1) avoid getting them translated
and 2) reduce the total number of messages.
Thomas Munro [Sat, 14 Dec 2024 11:36:30 +0000 (00:36 +1300)]
Fix warnings about declaration of environ on MinGW.
POSIX says that the global variable environ shouldn't be declared in a
header, and that you have to declare it yourself. MinGW declares it in
<stdlib.h> with some macrology that messes up our declarations. Visual
Studio doesn't warn (there are clues that it may also declare it, but if
so, apparently compatibly). Suppress our declarations, on MinGW only.
This clears the last warnings on CI's optional MinGW task, and hopefully
on build farm animal fairywren too.
Like
1319997d, no back-patch for now as it's not known to be breaking
anything, and my humble goal is just to keep the MinGW build clean going
forward.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (earlier version)
Discussion: https://postgr.es/m/CA%2BhUKGJLMh%2B6W5E4M_jSFb43gnrA_-Q6-%2BBf3HkBXyGfRFcBsQ%40mail.gmail.com
Thomas Munro [Sat, 14 Dec 2024 07:59:58 +0000 (20:59 +1300)]
Remove EXTENSION_DONT_CHECK_SIZE from md.c.
Commits
7bb3102c and
3eb77eba removed the only user of the
EXTENSION_DONT_CHECK_SIZE flag, which had previously been required to
checkpoint truncated relations. Since
7bb3102c, segments have been
opened directly for synchronization without calling _mdfd_getseg(), so
it doesn't need a mode that tolerates non-final short segments. Remove
the redundant flag and associated comments.
Reported-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/nyj4k7yur5t27rtygvx2i2lrlp6rqfvvhoiiwx4fznynksf2et%404hj2sp42alpe
John Naylor [Sat, 14 Dec 2024 02:52:08 +0000 (09:52 +0700)]
Fix typo
Ryo Kanbayashi
Discussion: https://postgr.es/m/CANOn0ExEQiPVrzkjULkENVac_n4Lknm6dxsU69MSncQap0kJVA%40mail.gmail.com
Tom Lane [Fri, 13 Dec 2024 19:21:36 +0000 (14:21 -0500)]
Fix possible crash in pg_dump with identity sequences.
If an owned sequence is considered interesting, force its owning
table to be marked interesting too. This ensures, in particular,
that we'll fetch the owning table's column names so we have the
data needed for ALTER TABLE ... ADD GENERATED. Previously there were
edge cases where pg_dump could get SIGSEGV due to not having filled in
the column names. (The known case is where the owning table has been
made part of an extension while its identity sequence is not a member;
but there may be others.)
Also, if it's an identity sequence, force its dumped-components mask
to exactly match the owning table: dump definition only if we're
dumping the table's definition, dump data only if we're dumping the
table's data, etc. This generalizes the code introduced in commit
b965f2617 that set the sequence's dump mask to NONE if the owning
table's mask is NONE. That's insufficient to prevent failures,
because for example the table's mask might only request dumping ACLs,
which would lead us to still emit ALTER TABLE ADD GENERATED even
though we didn't create the table. It seems better to treat an
identity sequence as though it were an inseparable aspect of the
table, matching the treatment used in the backend's dependency logic.
Perhaps this policy needs additional refinement, but let's wait to
see some field use-cases before changing it further.
While here, add a comment in pg_dump.h warning against writing tests
like "if (dobj->dump == DUMP_COMPONENT_NONE)", which was a bug in this
case. There is one other example in getPublicationNamespaces, which
if it's not a bug is at least remarkably unclear and under-documented.
Changing that requires a separate discussion, however.
Per report from Artur Zakirov. Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAKNkYnwXFBf136=u9UqUxFUVagevLQJ=zGd5BsLhCsatDvQsKQ@mail.gmail.com
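A sketch (hypothetical extension and table names) of the kind of setup that could crash pg_dump before this fix: the owning table becomes an extension member while its identity sequence does not.
    CREATE TABLE tbl (id int GENERATED ALWAYS AS IDENTITY, val text);
    -- imagine tbl is then absorbed into an extension while its identity
    -- sequence is left out, e.g. something like:
    --   ALTER EXTENSION some_ext ADD TABLE tbl;
    -- pg_dump could then treat the sequence as interesting without having
    -- fetched the table's column names, and crash when trying to emit
    -- ALTER TABLE ... ADD GENERATED for it.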
Álvaro Herrera [Fri, 13 Dec 2024 06:41:36 +0000 (07:41 +0100)]
Rewrite maybe_reread_subscription() comment
One sentence was grammatically wrong, but also too terse. Expand on it.
Álvaro Herrera [Fri, 13 Dec 2024 06:38:49 +0000 (07:38 +0100)]
Dump not-null constraints on inherited columns correctly
With not-null constraints defined in child tables for columns that are
coming from their parent tables, we were printing ALTER TABLE SET NOT
NULL commands that were missing the constraint name, so the original
constraint name was being lost, which is bogus. Fix by instead adding
a table-constraint declaration with the correct constraint name in the
CREATE TABLE.
Oversight in commit
14e87ffa5c54.
We could have fixed it by changing the ALTER TABLE SET NOT NULL to ALTER
TABLE ADD CONSTRAINT, but I'm not sure that's any better. A potential
problem here might be that if sent to a non-Postgres server, the new
pg_dump output would fail because the "CONSTRAINT foo NOT NULL colname"
syntax isn't SQL-conforming. However, Postgres' implementation of
inheritance is already non-SQL-conforming, so that'd likely fail anyway.
This problem was only noticed by Ashutosh's proposed test framework for
pg_dump, https://postgr.es/m/CAExHW5uF5V=Cjecx3_Z=7xfh4rg2Wf61PT+hfquzjBqouRzQJQ@mail.gmail.com
Author: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reported-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/CAExHW5tbdgAKDfqjDJ-7Fk6PJtHg8D4zUF6FQ4H2Pq8zK38Nyw@mail.gmail.com
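A rough sketch (hypothetical tables, not actual pg_dump output) of the shape now emitted: the child keeps its named not-null constraint as a table constraint inside CREATE TABLE, instead of a nameless ALTER TABLE ... SET NOT NULL afterwards.
    CREATE TABLE parent (a int);
    CREATE TABLE child (
        CONSTRAINT child_a_not_null NOT NULL a
    ) INHERITS (parent);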
Nathan Bossart [Thu, 12 Dec 2024 21:52:04 +0000 (15:52 -0600)]
Revert "Don't truncate database and user names in startup packets."
This reverts commit
562bee0fc13dc95710b8db6a48edad2f3d052f2e.
We received a report from the field about this change in behavior,
so it seems best to revert this commit and to add proper
multibyte-aware truncation as a follow-up exercise.
Fixes bug #18711.
Reported-by: Adam Rauch
Reviewed-by: Tom Lane, Bertrand Drouvot, Bruce Momjian, Thomas Munro
Discussion: https://postgr.es/m/18711-
7503ee3e449d2c47%40postgresql.org
Backpatch-through: 17
Michael Paquier [Thu, 12 Dec 2024 07:59:22 +0000 (16:59 +0900)]
Adjust some comments about structure properties in pg_stat.h
One comment of PgStat_TableCounts mentioned that its pending stats use
memcmp() to check for the all-zero case if there is any activity. This
is not true since
07e9e28b56, as pg_memory_is_all_zeros() is used.
PgStat_FunctionCounts incorrectly documented that it relied on memcpy().
This has never been correct, and not relevant because function
statistics do not have an all-zero check for pending stats.
Checkpoint and bgwriter statistics have been always relying on memcmp()
or pg_memory_is_all_zeros() (since
07e9e28b56 for the latter), and never
mentioned the dependency on event counters for their all-zero checks.
Let's document these properties, like the table statistics.
Author: Bertrand Drouvot
Discussion: https://postgr.es/m/Z1hNLvcPgVLPxCoc@ip-10-97-1-34.eu-west-3.compute.internal
David Rowley [Thu, 12 Dec 2024 02:28:38 +0000 (15:28 +1300)]
Detect redundant GROUP BY columns using UNIQUE indexes
d4c3a156c added support so that, when the GROUP BY contained all of the
columns belonging to a relation's PRIMARY KEY, all other columns
belonging to that relation would be removed from the GROUP BY clause.
That's possible because all other columns are functionally dependent on
the PRIMARY KEY and those columns alone ensure the groups are distinct.
Here we expand on that optimization and allow it to work for any unique
indexes on the table rather than just the PRIMARY KEY index. This
normally requires that all columns in the index are defined with NOT NULL,
however, we can relax that requirement when the index is defined with
NULLS NOT DISTINCT.
When there are multiple suitable indexes to allow columns to be removed,
we prefer the index with the least number of columns as this allows us
to remove the highest number of GROUP BY columns. One day, we may want to
revisit that decision as it may make more sense to use the narrower set of
columns in terms of the width of the data types and stored/queried data.
This also adjusts the code to make use of RelOptInfo.indexlist rather
than looking up the catalog tables.
In passing, add another short-circuit path to allow bailing out earlier
in cases where it's certainly not possible to remove redundant GROUP BY
columns. This early exit is now cheaper to do than when this code was
originally written as
00b41463c made it cheaper to check for empty
Bitmapsets.
Patch originally by Zhang Mingli and later worked on by jian he, but after
I (David) worked on it, there was very little of the original left.
Author: Zhang Mingli, jian he, David Rowley
Reviewed-by: jian he, Andrei Lepikhov
Discussion: https://postgr.es/m/
327990c8-b9b2-4b0c-bffb-
462249f82de0%40Spark
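Not from the commit, just a sketch of the new case: a unique index over NOT NULL columns now lets the planner drop the functionally dependent columns, where previously only the PRIMARY KEY qualified.
    CREATE TABLE t (a int NOT NULL, b int, c int);
    CREATE UNIQUE INDEX t_a_key ON t (a);   -- unique + NOT NULL, no PK needed
    -- b is functionally dependent on a, so it can be removed from the
    -- GROUP BY; the query is effectively grouped by a alone.
    EXPLAIN (COSTS OFF)
    SELECT a, b, count(c) FROM t GROUP BY a, b;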
Richard Guo [Thu, 12 Dec 2024 02:21:51 +0000 (11:21 +0900)]
Improve the test case from
5668a857d
In commit
5668a857d, we fixed an issue with incorrect results in right
semi joins and introduced a test case to verify the fix. The test
case involves SubPlans and InitPlans, which may not be immediately
apparent in relation to the issue we addressed.
This patch simplifies the test case with a more straightforward query.
Per discussion with Melanie Plageman.
Author: Richard Guo
Discussion: https://postgr.es/m/CAAKRu_a-Cip2XCXp13fmxq+T9BhLAVApHTyjr94awL2mbXHC-Q@mail.gmail.com
Michael Paquier [Thu, 12 Dec 2024 02:16:45 +0000 (11:16 +0900)]
Add some regression tests for missing DDL patterns
The following commands gain increased coverage for some of the errors
they can trigger:
- ALTER TABLE .. ALTER COLUMN
- CREATE DOMAIN
- CREATE TYPE (LIKE)
This has come up while discussing the possibility to add more
information about the location of the error in such queries, and it
is useful on its own as there was no coverage until now for the
patterns added in this commit.
Author: Jian He, Kirill Reshke
Reviewed-By: Álvaro Herrera, Michael Paquier
Discussion: https://postgr.es/m/CALdSSPhqfvKbDwqJaY=yEePi_aq61GmMpW88i6ZH7CMG_2Z4Cg@mail.gmail.com
David Rowley [Thu, 12 Dec 2024 01:22:15 +0000 (14:22 +1300)]
Defer remove_useless_groupby_columns() work until query_planner()
Traditionally, remove_useless_groupby_columns() was called during
grouping_planner() directly after the call to preprocess_groupclause().
While in many ways it made sense to populate the field and remove the
functionally dependent columns from processed_groupClause at the same
time, doing so had the disadvantage that
remove_useless_groupby_columns() was being called before the RelOptInfos
were populated for the relations mentioned in the query. Not having
RelOptInfos available meant we needed to manually query the catalog tables
to get the required details about the primary key constraint for the
table.
Here we move the remove_useless_groupby_columns() call to
query_planner() and put it directly after the RelOptInfos are populated.
This is fine to do as processed_groupClause still isn't final at this
point as it can still be modified inside standard_qp_callback() by
make_pathkeys_for_sortclauses_extended().
This commit is just a refactor and simply moves
remove_useless_groupby_columns() into initsplan.c. A planned follow-up
commit will adjust that function so it uses RelOptInfo instead of doing
catalog lookups and also teach it how to use unique indexes as proofs to
expand the cases where we can remove functionally dependent columns from
the GROUP BY.
Reviewed-by: Andrei Lepikhov, jian he
Discussion: https://postgr.es/m/CAApHDvqLezKwoEBBQd0dp4Y9MDkFBDbny0f3SzEeqOFoU7Z5+A@mail.gmail.com
Masahiko Sawada [Wed, 11 Dec 2024 23:54:41 +0000 (15:54 -0800)]
Add UUID version 7 generation function.
This commit introduces the uuidv7() SQL function, which generates UUID
version 7 as specified in RFC 9562. UUIDv7 combines a Unix timestamp
in milliseconds and random bits, offering both uniqueness and
sortability.
In our implementation, the 12-bit sub-millisecond timestamp fraction
is stored immediately after the timestamp, in the space referred to as
"rand_a" in the RFC. This ensures additional monotonicity within a
millisecond. The rand_a bits also function as a counter. We select a
sub-millisecond timestamp so that it monotonically increases for
generated UUIDs within the same backend, even when the system clock
goes backward or when generating UUIDs at very high
frequency. Therefore, the monotonicity of generated UUIDs is ensured
within the same backend.
This commit also expands the uuid_extract_timestamp() function to
support UUID version 7.
Additionally, an alias uuidv4() is added for the existing
gen_random_uuid() SQL function to maintain consistency.
Bump catalog version.
Author: Andrey Borodin
Reviewed-by: Sergey Prokhorenko, Przemysław Sztoch, Nikolay Samokhvalov
Reviewed-by: Peter Eisentraut, Jelte Fennema-Nio, Aleksander Alekseev
Reviewed-by: Masahiko Sawada, Lukas Fittl, Michael Paquier, Japin Li
Reviewed-by: Marcos Pegoraro, Junwang Zhao, Stepan Neretin
Reviewed-by: Daniel Vérité
Discussion: https://postgr.es/m/CAAhFRxitJv%3DyoGnXUgeLB_O%2BM7J2BJAmb5jqAT9gZ3bij3uLDA%40mail.gmail.com
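Illustrative usage of the functions this commit adds or extends:
    SELECT uuidv7();                           -- new UUID version 7 generator
    SELECT uuidv4();                           -- alias for gen_random_uuid()
    SELECT uuid_extract_timestamp(uuidv7());   -- now understands version 7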
David Rowley [Wed, 11 Dec 2024 20:50:00 +0000 (09:50 +1300)]
Fix further fallout from EXPLAIN ANALYZE BUFFERS change
c2a4078eb adjusted EXPLAIN ANALYZE to default the BUFFERS option to ON. This
(hopefully) fixes the last remaining issue with regression test failures
with -D RELCACHE_FORCE_RELEASE -D CATCACHE_FORCE_RELEASE builds, where
the planner accesses more buffers due to the cold caches.
Discussion: https://postgr.es/m/CAApHDvqLdzgz77JsE-yTki3w9UiKQ-uTMLRctazcu+99-ips3g@mail.gmail.com
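For context (illustrative, not from the commit): with BUFFERS defaulting to ON under ANALYZE, regression tests that need stable output disable it explicitly, e.g.:
    EXPLAIN (ANALYZE, BUFFERS OFF, COSTS OFF, TIMING OFF, SUMMARY OFF)
    SELECT 1;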
Nathan Bossart [Wed, 11 Dec 2024 20:19:14 +0000 (14:19 -0600)]
Use pg_memory_is_all_zeros() in pgstatfuncs.c.
There are a few places in this file that use memset() and memcmp()
to determine whether a section of memory is all zeros. This commit
modifies them to use pg_memory_is_all_zeros() instead. These
aren't expected to be hot code paths, but this may optimize them a
bit. Plus, this allows us to remove some variables that were only
needed for the memset() and memcmp().
Author: Bertrand Drouvot
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/Z1hNubHfvMxlW6eu%40ip-10-97-1-34.eu-west-3.compute.internal
Masahiko Sawada [Wed, 11 Dec 2024 18:35:57 +0000 (10:35 -0800)]
Unmark gen_random_uuid() function leakproof.
Functions without arguments don't need to be marked
leakproof. This commit unmarks gen_random_uuid() as leakproof, for
consistency with upcoming UUID generation functions. Also, this commit
adds a regression test to prevent reintroducing such cases.
Bump catalog version.
Reported-by: Peter Eisentraut
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CAD21AoBE1ePPWY1NQEgk3DkqjYzLPZwYTzCySHm0e%2B9a69PfZw%40mail.gmail.com
Daniel Gustafsson [Wed, 11 Dec 2024 11:48:22 +0000 (12:48 +0100)]
Fix a memory leak in dumping functions with TRANSFORMs
The generation of the dump clause for functions with TRANSFORM
calls would leak the memory holding the result of the Oid
array parsing. Fix by freeing it.
While in there, switch to using pg_malloc instead of palloc in
order to be consistent with the rest of the file.
Author: Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/
baf1ae4511288e5b421f41e79a3df1a0@postgrespro.ru
David Rowley [Wed, 11 Dec 2024 10:16:44 +0000 (23:16 +1300)]
Add missing BUFFERS OFF in regression tests, take 2
Similar to
9fa1aaa65, but running with -D RELCACHE_FORCE_RELEASE and
-D CATCACHE_FORCE_RELEASE revealed some additional places that
needed BUFFERS OFF.
Discussion: https://postgr.es/m/CANNMO++W7MM8T0KyXN3ZheXXt-uLVM3aEtZd+WNfZ=obxffUiA@mail.gmail.com