Peter Geoghegan [Thu, 6 Aug 2020 22:25:49 +0000 (15:25 -0700)]
amcheck: Sanitize metapage's allequalimage field.
This will be helpful if it ever proves necessary to revoke an opclass's
support for deduplication.
Backpatch: 13-, where nbtree deduplication was introduced.
David Rowley [Thu, 6 Aug 2020 22:22:18 +0000 (10:22 +1200)]
Fix bogus EXPLAIN output for Hash Aggregate
9bdb300de modified the EXPLAIN output for Hash Aggregate to show details
from parallel workers. However, it neglected to consider that a given
parallel worker may not have assisted with the given Hash Aggregate. This
can occur when workers fail to start or during Parallel Append with
enable_partitionwise_join enabled when only a single worker is working on
a non-parallel aware sub-plan. It could also happen if a worker simply
wasn't fast enough to get any work done before other processes went and
finished all the work.
The bogus output came from the fact that ExplainOpenWorker() skipped
showing any details for non-initialized workers but show_hashagg_info()
did show details from the worker. This meant that the worker properties
that were shown were not properly attributed to the worker that they
belong to.
In passing, we also now don't show Hash Aggregate properties for the
leader process when it did not contribute any work to the Hash Aggregate.
This can occur either during Parallel Append when only a parallel worker
worked on a given sub plan or with parallel_leader_participation set to
off. This aims to make the behavior of Hash Aggregate's EXPLAIN output
more similar to Sort's.
Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20200805012105.GZ28072%40telsasoft.com
Backpatch-through: 13, where the original breakage was introduced
Robert Haas [Thu, 6 Aug 2020 18:13:03 +0000 (14:13 -0400)]
Register llvm_shutdown using on_proc_exit, not before_shmem_exit.
This seems more correct, because other before_shmem_exit calls may
expect the infrastructure that is needed to run queries and access the
database to be working, and also because this cleanup has nothing to
do with shared memory.
There are no known user-visible consequences to this, though, apart
from what was previously fixed by commit
303640199d0436c5e7acdf50b837a027b5726594 and back-patched as commit
bcbc27251d35336a6442761f59638138a772b839 and commit
f7013683d9bb663a6a917421b1374306a32f165b, so for now, no back-patch.
Bharath Rupireddy
Discussion: http://postgr.es/m/CALj2ACWk7j4F2v2fxxYfrroOF=AdFNPr1WsV+AGtHAFQOqm_pw@mail.gmail.com
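A minimal sketch of the registration change described above, assuming a hypothetical llvm_session_initialize() caller; on_proc_exit() and the (int code, Datum arg) callback shape are the usual backend conventions, and the callback body is purely illustrative.

    static void
    llvm_shutdown(int code, Datum arg)
    {
        /* release JIT resources; nothing here touches shared memory */
    }

    static void
    llvm_session_initialize(void)
    {
        /* previously: before_shmem_exit(llvm_shutdown, 0); */
        on_proc_exit(llvm_shutdown, 0);
    }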
Bruce Momjian [Wed, 5 Aug 2020 21:12:10 +0000 (17:12 -0400)]
doc: clarify "state" table reference in tutorial
Reported-by: Vyacheslav Shablistyy
Discussion: https://postgr.es/m/159586122762.680.1361378513036616007@wrigleys.postgresql.org
Backpatch-through: 9.5
Tom Lane [Wed, 5 Aug 2020 19:38:55 +0000 (15:38 -0400)]
Fix matching of sub-partitions when a partitioned plan is stale.
Since we no longer require AccessExclusiveLock to add a partition,
the executor may see that a partitioned table has more partitions
than the planner saw. ExecCreatePartitionPruneState's code for
matching up the partition lists in such cases was faulty, and would
misbehave if the planner had successfully pruned any partitions from
the query. (Thus, trouble would occur only if a partition addition
happens concurrently with a query that uses both static and dynamic
partition pruning.) This led to an Assert failure in debug builds,
and probably to crashes or query misbehavior in production builds.
To repair the bug, just explicitly skip zeroes in the plan's
relid_map[] list. I also made some cosmetic changes to make the code
more readable (IMO anyway). Also, convert the cross-checking Assert
to a regular test-and-elog, since it's now apparent that this logic
is more fragile than one would like.
Currently, there's no way to repeatably exercise this code, except
with manual use of a debugger to stop the backend between planning
and execution. Hence, no test case in this patch. We oughta do
something about that testability gap, but that's for another day.
Amit Langote and Tom Lane, per report from Justin Pryzby. Oversight
in commit
898e5e329; backpatch to v12 where that appeared.
Discussion: https://postgr.es/m/20200802181131.GA27754@telsasoft.com
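A rough sketch of the repaired matching logic, assuming hypothetical field names (partdesc->oids, pinfo->relid_map, subplan_map); zero entries in relid_map stand for partitions the planner already pruned and are simply skipped.

    static void
    match_partitions_to_plan(PartitionDesc partdesc,
                             PartitionedRelPruneInfo *pinfo,
                             int *subplan_map)
    {
        int     pp_idx = 0;

        for (int pd_idx = 0; pd_idx < partdesc->nparts; pd_idx++)
        {
            /* Skip plan entries for partitions pruned by the planner. */
            while (pp_idx < pinfo->nparts && pinfo->relid_map[pp_idx] == 0)
                pp_idx++;

            if (pp_idx < pinfo->nparts &&
                pinfo->relid_map[pp_idx] == partdesc->oids[pd_idx])
                subplan_map[pd_idx] = pinfo->subplan_map[pp_idx++];
            else
                subplan_map[pd_idx] = -1;   /* partition unknown to the plan */
        }
    }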
Alexander Korotkov [Tue, 4 Aug 2020 23:15:34 +0000 (02:15 +0300)]
Remove btree page items after page unlink
Currently, page unlink leaves the remaining items "as is", but replay of the
corresponding WAL record re-initializes the page, leaving it with no items.
For the sake of consistency, this commit makes the primary delete all the
items during page unlink as well.
Thanks to this change, we no longer need to mask the contents of deleted
btree pages for WAL consistency checking.
Discussion: https://postgr.es/m/CAPpHfdt_OTyQpXaPJcWzV2N-LNeNJseNB-K_A66qG%3DL518VTFw%40mail.gmail.com
Author: Alexander Korotkov
Reviewed-by: Peter Geoghegan
Tom Lane [Tue, 4 Aug 2020 19:20:31 +0000 (15:20 -0400)]
Increase hard-wired timeout values in ecpg regression tests.
A couple of test cases had connect_timeout=14, a value that seems
to have been plucked from a hat. While it's more than sufficient
for normal cases, slow/overloaded buildfarm machines can get a timeout
failure here, as per recent report from "sungazer". Increase to 180
seconds, which is in line with our typical timeouts elsewhere in
the regression tests.
Back-patch to 9.6; the code looks different in 9.5, and this doesn't
seem to be quite worth the effort to adapt to that.
Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2020-08-04%2007%3A12%3A22
Michael Paquier [Tue, 4 Aug 2020 05:36:01 +0000 (14:36 +0900)]
Make new SSL TAP test for channel_binding more robust
The test would fail in an environment including a certificate file in
~/.postgresql/.
bdd6e9b fixed a similar failure, and
d6e612f introduced
the same problem again with a new test.
Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20200804.120033.31225582282178001.horikyota.ntt@gmail.com
Backpatch-through: 13
Peter Geoghegan [Mon, 3 Aug 2020 22:54:38 +0000 (15:54 -0700)]
Fix replica backward scan race condition.
It was possible for the logic used by backward scans (which must reason
about concurrent page splits/deletions in its own peculiar way) to
become confused when running on a replica. Concurrent replay of a WAL
record that describes the second phase of page deletion could cause
_bt_walk_left() to get confused. btree_xlog_unlink_page() simply failed
to adhere to the same locking protocol that we use on the primary, which
is obviously wrong once you consider these two disparate functions
together. This bug is present in all stable branches.
More concretely, the problem was that nothing stopped _bt_walk_left()
from observing inconsistencies between the deletion's target page and
its original sibling pages when running on a replica. This is true even
though the second phase of page deletion is supposed to work as a single
atomic action. Queries running on replicas raised "could not find left
sibling of block %u in index %s" can't-happen errors when they went back
to their scan's "original" page and observed that the page has not been
marked deleted (even though it really was concurrently deleted).
There is no evidence that this actually happened in the real world. The
issue came to light during unrelated feature development work. Note
that _bt_walk_left() is the only code that cares about the difference
between a half-dead page and a fully deleted page that isn't also
exclusively used by nbtree VACUUM (unless you include contrib/amcheck
code). It seems very likely that backward scans are the only thing that
could become confused by the inconsistency. Even amcheck's complex
bt_right_page_check_scankey() dance was unaffected.
To fix, teach btree_xlog_unlink_page() to lock the left sibling, target,
and right sibling pages in that order before releasing any locks (just
like _bt_unlink_halfdead_page()). This is the simplest possible
approach. There doesn't seem to be any opportunity to be more clever
about lock acquisition in the REDO routine, and it hardly seems worth
the trouble in any case.
This fix might enable contrib/amcheck verification of leaf page sibling
links with only an AccessShareLock on the relation. An amcheck patch
from Andrey Borodin was rejected back in January because it clashed with
btree_xlog_unlink_page()'s lax approach to locking pages. It now seems
likely that the real problem was with btree_xlog_unlink_page(), not the
patch.
This is a low severity, low likelihood bug, so no backpatch.
Author: Michail Nikolaev
Diagnosed-By: Michail Nikolaev
Discussion: https://postgr.es/m/CANtu0ohkR-evAWbpzJu54V8eCOtqjJyYp3PQ_SGoBTRGXWhWRw@mail.gmail.com
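A hedged outline of the REDO-side ordering, using the standard xlogutils helpers but purely illustrative block IDs; the point is only that all three pages are locked in the primary's order and released after every update is done.

    Buffer      leftbuf = InvalidBuffer;
    Buffer      target;
    Buffer      rightbuf = InvalidBuffer;

    /* Lock pages in the same order as _bt_unlink_halfdead_page(). */
    (void) XLogReadBufferForRedo(record, 1, &leftbuf);     /* left sibling */
    target = XLogInitBufferForRedo(record, 0);             /* target page  */
    (void) XLogReadBufferForRedo(record, 2, &rightbuf);    /* right sibling */

    /* ... update sibling links and mark the target page deleted ... */

    if (BufferIsValid(rightbuf))
        UnlockReleaseBuffer(rightbuf);
    UnlockReleaseBuffer(target);
    if (BufferIsValid(leftbuf))
        UnlockReleaseBuffer(leftbuf);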
Peter Geoghegan [Mon, 3 Aug 2020 20:04:42 +0000 (13:04 -0700)]
Add nbtree page deletion assertion.
Add a documenting assertion that's similar to the nearby assertion added
by commit
cd8c73a3. This conveys that the entire call to _bt_pagedel()
does no work if it isn't possible to get a descent stack for the initial
scanblkno page.
Tom Lane [Mon, 3 Aug 2020 18:02:35 +0000 (14:02 -0400)]
Remove unnecessary "DISTINCT" in psql's queries for \dAc and \dAf.
A moment's examination of these queries is sufficient to see that
they do not produce duplicate rows, unless perhaps there's
catalog corruption. Using DISTINCT anyway is inefficient and
confusing; moreover it sets a poor example for anyone who
refers to psql -E output to see how to query the catalogs.
Tom Lane [Mon, 3 Aug 2020 17:11:16 +0000 (13:11 -0400)]
Doc: fix obsolete info about allowed range of TZ offsets in timetz.
We've allowed UTC offsets up to +/- 15:59 since commit
cd0ff9c0f, but
that commit forgot to fix the documentation about timetz.
Per bug #16571 from osdba.
Discussion: https://postgr.es/m/16571-eb7501598de78c8a@postgresql.org
Tom Lane [Mon, 3 Aug 2020 13:46:12 +0000 (09:46 -0400)]
Fix behavior of ecpg's "EXEC SQL elif name".
This ought to work much like C's "#elif defined(name)"; but the code
implemented it in a way equivalent to endif followed by ifdef, so that
it didn't matter whether any previous branch of the IF construct had
succeeded. Fix that; add some test cases covering elif and nested IFs;
and improve the documentation, which also seemed a bit confused.
AFAICS the code has been like this since the feature was added in 1999
(commit
b57b0e044). So while it's surely wrong, there might be code
out there relying on the current behavior. Hence, don't back-patch
into stable branches. It seems all right to fix it in v13 though.
Per report from Ashutosh Sharma. Reviewed by Ashutosh Sharma and
Michael Meskes.
Discussion: https://postgr.es/m/CAE9k0P=dQk9X0cU2tN49S7a9tv733-e1pVdpB1P-pWJ5PdTktg@mail.gmail.com
Michael Paquier [Mon, 3 Aug 2020 04:38:48 +0000 (13:38 +0900)]
Add %P to log_line_prefix for parallel group leader
This is useful for monitoring purposes with log parsing. Similarly to
pg_stat_activity, the leader's PID is shown only for active parallel
workers, minimizing the log footprint for the leaders as the equivalent
shared memory field is set as long as a backend is alive.
Author: Justin Pryzby
Reviewed-by: Álvaro Herrera, Michael Paquier, Julien Rouhaud, Tom Lane
Discussion: https://postgr.es/m/20200315111831.GA21492@telsasoft.com
Thomas Munro [Mon, 3 Aug 2020 00:39:15 +0000 (12:39 +1200)]
Fix rare failure in LDAP tests.
Instead of writing a query to psql's stdin, use -c. This avoids a
failure where psql exits before we write, seen a few times on the build
farm. Thanks to Tom Lane for the suggestion.
Back-patch to 11, where the LDAP tests arrived.
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/CA%2BhUKGLFmW%2BHQYPeKiwSp5sdFFHtFViCpw4Mh6yAgEx74r5-Cw%40mail.gmail.com
Thomas Munro [Mon, 3 Aug 2020 00:17:41 +0000 (12:17 +1200)]
Correct comment in simplehash.h.
Post-commit review for commit
84c0e4b9.
Author: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/CAApHDvptBx_%2BUPAzY0uXzopbvPVGKPeZ6Hoy8rnPcWz20Cr0Bw%40mail.gmail.com
Tom Lane [Sun, 2 Aug 2020 21:00:26 +0000 (17:00 -0400)]
Fix minor issues in psql's new \dAc and related commands.
The type-name pattern in \dAc and \dAf was matched only to the actual
pg_type.typname string, which is fairly user-unfriendly in cases where
that is not what's shown to the user by format_type (compare "_int4"
and "integer[]"). Make this code match what \dT does, i.e. match the
pattern against either typname or format_type() output. Also fix its
broken handling of schema-name restrictions. (IOW, make these
processSQLNamePattern calls match \dT's.) While here, adjust
whitespace to make the query a little prettier in -E output, too.
Also improve some inaccuracies and shaky grammar in the related
documentation.
Noted while working on a patch for intarray's opclasses; I wondered
why I couldn't get a match to "integer*" for the input type name.
David Rowley [Sun, 2 Aug 2020 02:24:46 +0000 (14:24 +1200)]
Use int64 instead of long in incremental sort code
64-bit Windows has 4-byte long values, which are not suitable for tracking
disk space usage in the incremental sort code. Let's just make all these
fields int64s.
Author: James Coleman
Discussion: https://postgr.es/m/CAApHDvpky%2BUhof8mryPf5i%3D6e6fib2dxHqBrhp0Qhu0NeBhLJw%40mail.gmail.com
Backpatch-through: 13, where the incremental sort code was added
Noah Misch [Sat, 1 Aug 2020 22:31:01 +0000 (15:31 -0700)]
Change XID and mxact limits to warn at 40M and stop at 3M.
We have edge-case bugs when assigning values in the last few dozen pages
before the wrap limit. We may introduce similar bugs in the future. At
default BLCKSZ, this makes such bugs unreachable outside of single-user
mode. Also, when VACUUM began to consume mxacts, multiStopLimit did not
change to compensate.
pg_upgrade may fail on a cluster that was already printing "must be
vacuumed" warnings. Follow the warning's instructions to clear the
warning, then run pg_upgrade again. One can still peacefully consume
98% of XIDs or mxacts, so DBAs need not change routine VACUUM settings.
Discussion: https://postgr.es/m/20200621083513.GA3074645@rfd.leadboat.com
Tom Lane [Sat, 1 Aug 2020 21:12:47 +0000 (17:12 -0400)]
Invent "amadjustmembers" AM method for validating opclass members.
This allows AM-specific knowledge to be applied during creation of
pg_amop and pg_amproc entries. Specifically, the AM knows better than
core code which entries to consider as required or optional. Giving
the latter entries the appropriate sort of dependency allows them to
be dropped without taking out the whole opclass or opfamily; which
is something we'd like to have to correct obsolescent entries in
extensions.
This callback also opens the door to performing AM-specific validity
checks during opclass creation, rather than hoping that an opclass
developer will remember to test with "amvalidate". For the most part
I've not actually added any such checks yet; that can happen in a
follow-on patch. (Note that we shouldn't remove any tests from
"amvalidate", as those are still needed to cross-check manually
constructed entries in the initdb data. So adding tests to
"amadjustmembers" will be somewhat duplicative, but it seems like
a good idea anyway.)
Patch by me, reviewed by Alexander Korotkov, Hamid Akhtar, and
Anastasia Lubennikova.
Discussion: https://postgr.es/m/4578.1565195302@sss.pgh.pa.us
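A small sketch of what the new callback looks like from an AM's point of view, following the commit's description; the typedef shape and the btadjustmembers name for nbtree are approximations, so treat the details as illustrative.

    /* Optional per-AM hook, invoked while opclass/opfamily members are being
     * assembled, so the AM can classify each member as required (hard
     * dependency) or optional (soft dependency). */
    typedef void (*amadjustmembers_function) (Oid opfamilyoid,
                                              Oid opclassoid,
                                              List *operators,
                                              List *functions);

    Datum
    bthandler(PG_FUNCTION_ARGS)
    {
        IndexAmRoutine *amroutine = makeNode(IndexAmRoutine);

        /* ... existing support function assignments ... */
        amroutine->amadjustmembers = btadjustmembers;

        PG_RETURN_POINTER(amroutine);
    }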
Thomas Munro [Sat, 1 Aug 2020 11:39:36 +0000 (23:39 +1200)]
Use pg_pread() and pg_pwrite() in slru.c.
This avoids lseek() system calls at every SLRU I/O, as was
done for relation files in commit
c24dcd0c.
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKG%2Biqke4uTRFj8D8uEUUgj%2BRokPSp%2BCWM6YYzaaamG9Wvg%40mail.gmail.com
Discussion: https://postgr.es/m/CA%2BhUKGJ%2BoHhnvqjn3%3DHro7xu-YDR8FPr0FL6LF35kHRX%3D_bUzg%40mail.gmail.com
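A simplified sketch of the I/O pattern change, with illustrative slru.c names; pg_pread()/pg_pwrite() take an explicit offset, so the per-page lseek() disappears.

    static bool
    SlruReadPhysicalPage(SlruShared shared, int slotno, int fd, int rpageno)
    {
        off_t   offset = (off_t) rpageno * BLCKSZ;

        /* previously: lseek(fd, offset, SEEK_SET) followed by read() */
        if (pg_pread(fd, shared->page_buffer[slotno], BLCKSZ, offset) != BLCKSZ)
            return false;           /* caller reports the error */
        return true;
    }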
Michael Paquier [Sat, 1 Aug 2020 02:49:13 +0000 (11:49 +0900)]
Minimize slot creation for multi-inserts of pg_shdepend
When doing multiple insertions in pg_shdepend for the copy of
dependencies from a template database in CREATE DATABASE, the same
number of slots would have been created and used all the time. As the
number of items to insert is not known in advance, most of the slots end
up created for nothing. This improves the slot handling so that slot
creation only happens when needed, minimizing the overhead of the
operation.
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20200731024148.GB3317@paquier.xyz
Thomas Munro [Sat, 1 Aug 2020 00:16:15 +0000 (12:16 +1200)]
Improve programmer docs for simplehash and dynahash.
When reading the code it's not obvious when one should prefer dynahash
over simplehash and vice-versa, so, for programmer-friendliness, add
comments to inform that decision.
Show sample simplehash method signatures.
Author: James Coleman <jtc331@gmail.com>
Discussion: https://postgr.es/m/CAAaqYe_dOF39gAJ8rL-a3YO3Qo96MHMRQ2whFjK5ZcU6YvMQSA%40mail.gmail.com
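For illustration, a hedged sketch of the kind of template instantiation the new comments describe; the knob names follow simplehash.h's usual conventions and murmurhash32() is one plausible hash helper, but check the committed docs for the authoritative list.

    typedef struct MyEntry
    {
        uint32      key;
        int         value;
        char        status;         /* simplehash tracks entry state here */
    } MyEntry;

    #define SH_PREFIX       myhash
    #define SH_ELEMENT_TYPE MyEntry
    #define SH_KEY_TYPE     uint32
    #define SH_KEY          key
    #define SH_HASH_KEY(tb, key)    murmurhash32(key)
    #define SH_EQUAL(tb, a, b)      ((a) == (b))
    #define SH_SCOPE        static inline
    #define SH_DECLARE
    #define SH_DEFINE
    #include "lib/simplehash.h"

    /* The template then emits myhash_create(), myhash_insert(),
     * myhash_lookup(), myhash_delete(), myhash_iterate(), and friends. */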
Peter Geoghegan [Fri, 31 Jul 2020 22:34:28 +0000 (15:34 -0700)]
Restore lost amcheck TOAST test coverage.
Commit
eba77534 fixed an amcheck false positive bug involving
inconsistencies in TOAST input state between table and index. A test
case was added that verified that such an inconsistency didn't result in
a spurious corruption related error.
Test coverage from the test was accidentally lost by commit
501e41dd,
which propagated ALTER TABLE ... SET STORAGE attstorage state to
indexes. This broke the test because the test specifically relied on
attstorage not being propagated. This artificially forced there to be
index tuples whose datums were equivalent to the datums in the heap
without the datums actually being bitwise equal.
Fix this by updating pg_attribute directly instead. Commit
501e41dd
made similar changes to a test_decoding TOAST-related test case which
made the same assumption, but overlooked the amcheck test case.
Backpatch: 11-, just like commit
eba77534 (and commit
501e41dd).
Tom Lane [Fri, 31 Jul 2020 21:11:28 +0000 (17:11 -0400)]
Fix oversight in ALTER TYPE: typmodin/typmodout must propagate to arrays.
If a base type supports typmods, its array type does too, with the
same interpretation. Hence changes in pg_type.typmodin/typmodout
must be propagated to the array type.
While here, improve AlterTypeRecurse to not recurse to domains if
there is nothing we'd need to change.
Oversight in
fe30e7ebf. Back-patch to v13 where that came in.
Tom Lane [Fri, 31 Jul 2020 15:43:12 +0000 (11:43 -0400)]
Fix recently-introduced performance problem in ts_headline().
The new hlCover() algorithm that I introduced in commit
c9b0c678d
turns out to potentially take O(N^2) or worse time on long documents,
if there are many occurrences of individual query words but few or no
substrings that actually satisfy the query. (One way to hit this
behavior is with a "common_word & rare_word" type of query.) This
seems unavoidable given the original goal of checking every substring
of the document, so we have to back off that idea. Fortunately, it
seems unlikely that anyone would really want headlines spanning all of
a long document, so we can avoid the worse-than-linear behavior by
imposing a maximum length of substring that we'll consider.
For now, just hard-wire that maximum length as a multiple of max_words
times max_fragments. Perhaps at some point somebody will argue for
exposing it as a ts_headline parameter, but I'm hesitant to make such
a feature addition in a back-patched bug fix.
I also noted that the hlFirstIndex() function I'd added in that
commit was unnecessarily stupid: it really only needs to check whether
a HeadlineWordEntry's item pointer is null or not. This wouldn't make
all that much difference in typical cases with queries having just
a few terms, but a cycle shaved is a cycle earned.
In addition, add a CHECK_FOR_INTERRUPTS call in TS_execute_recurse.
This ensures that hlCover's loop is cancellable if it manages to take
a long time, and it may protect some other TS_execute callers as well.
Back-patch to 9.6 as the previous commit was. I also chose to add the
CHECK_FOR_INTERRUPTS call to 9.5. The old hlCover() algorithm seems
to avoid the O(N^2) behavior, at least on the test case I tried, but
nonetheless it's not very quick on a long document.
Per report from Stephen Frost.
Discussion: https://postgr.es/m/20200724160535.GW12375@tamriel.snowman.net
Thomas Munro [Fri, 31 Jul 2020 07:08:09 +0000 (19:08 +1200)]
Fix compiler warning from Clang.
Per build farm.
Discussion: https://postgr.es/m/20200731062626.GD3317%40paquier.xyz
Thomas Munro [Fri, 31 Jul 2020 05:27:09 +0000 (17:27 +1200)]
Preallocate some DSM space at startup.
Create an optional region in the main shared memory segment that can be
used to acquire and release "fast" DSM segments, and can benefit from
huge pages allocated at cluster startup time, if configured. Fall back
to the existing mechanisms when that space is full. The size is
controlled by a new GUC min_dynamic_shared_memory, defaulting to 0.
Main region DSM segments initially contain whatever garbage the memory
held last time they were used, rather than zeroes. That change revealed
that DSA areas failed to initialize themselves correctly in memory that
wasn't zeroed first, so fix that problem.
Discussion: https://postgr.es/m/CA%2BhUKGLAE2QBv-WgGp%2BD9P_J-%3Dyne3zof9nfMaqq1h3EGHFXYQ%40mail.gmail.com
Michael Paquier [Fri, 31 Jul 2020 05:17:29 +0000 (14:17 +0900)]
Fix comment in instrument.h
local_blks_dirtied tracks the number of local blocks dirtied, not shared
ones.
Author: Kirk Jamison
Discussion: https://postgr.es/m/OSBPR01MB2341760686DC056DE89D2AB9EF710@OSBPR01MB2341.jpnprd01.prod.outlook.com
Thomas Munro [Fri, 31 Jul 2020 02:15:18 +0000 (14:15 +1200)]
Cache smgrnblocks() results in recovery.
Avoid repeatedly calling lseek(SEEK_END) during recovery by caching
the size of each fork. For now, we can't use the same technique in
other processes, because we lack a shared invalidation mechanism.
Do this by generalizing the pre-existing caching used by FSM and VM
to support all forks.
Discussion: https://postgr.es/m/CAEepm%3D3SSw-Ty1DFcK%3D1rU-K6GSzYzfdD4d%2BZwapdN7dTa6%3DnQ%40mail.gmail.com
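A condensed sketch of the caching idea, with field names that only approximate the committed ones: keep a per-fork size in the SMgrRelation, trust it during recovery, and refresh it whenever we do have to ask the kernel.

    BlockNumber
    smgrnblocks(SMgrRelation reln, ForkNumber forknum)
    {
        BlockNumber result;

        /* During recovery only this process extends the relation,
         * so a cached value cannot go stale underneath us. */
        if (InRecovery &&
            reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)
            return reln->smgr_cached_nblocks[forknum];

        result = smgrsw[reln->smgr_which].smgr_nblocks(reln, forknum);

        if (InRecovery)
            reln->smgr_cached_nblocks[forknum] = result;

        return result;
    }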
Michael Paquier [Fri, 31 Jul 2020 01:54:26 +0000 (10:54 +0900)]
Use multi-inserts for pg_attribute and pg_shdepend
For pg_attribute, this allows inserting a full set of attributes at once
for a relation (roughly 15% of WAL reduction in extreme cases). For
pg_shdepend, this reduces the work done when creating new shared
dependencies from a database template. The number of slots used for the
insertion is capped at 64kB of data inserted for both, depending on the
number of items to insert and the length of the rows involved.
More can be done for other catalogs, like pg_depend. This part requires
a different approach as the number of slots to use depends also on the
number of entries discarded as pinned dependencies. This is also
related to the rework of dependency handling for ALTER TABLE and CREATE
TABLE, mainly.
Author: Daniel Gustafsson
Reviewed-by: Andres Freund, Michael Paquier
Discussion: https://postgr.es/m/20190213182737.mxn6hkdxwrzgxk35@alap3.anarazel.de
Tatsuo Ishii [Thu, 30 Jul 2020 22:18:41 +0000 (07:18 +0900)]
Doc: fix high availability solutions comparison.
In "High Availability, Load Balancing, and Replication" chapter,
certain descriptions of Pgpool-II were not correct at this point. It
does not need conflict resolution. Also "Multiple-Server Parallel
Query Execution" is not supported anymore.
Discussion: https://postgr.es/m/20200726.230128.53842489850344110.t-ishii%40sraoss.co.jp
Author: Tatsuo Ishii
Reviewed-by: Bruce Momjian
Backpatch-through: 9.5
Jeff Davis [Thu, 30 Jul 2020 15:44:58 +0000 (08:44 -0700)]
Use pg_bitutils for HyperLogLog.
Using pg_leftmost_one_pos32() yields substantial performance benefits.
Backpatching to version 13 because HLL is used for HashAgg
improvements in
9878b643, which was also backpatched to 13.
Reviewed-by: Peter Geoghegan
Discussion: https://postgr.es/m/CAH2-WzkGvDKVDo+0YvfvZ+1CE=iCi88DCOGFF3i1hTGGaxcKPw@mail.gmail.com
Backpatch-through: 13
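The gist of the change, sketched under the assumption that rho() is the hot spot: the bit-position helper from pg_bitutils.h replaces a shift-one-bit-at-a-time loop. Names mirror lib/hyperloglog.c only loosely.

    /* rho(x, b): position of the leftmost 1-bit of x, capped at b + 1. */
    static inline uint8
    rho(uint32 x, uint8 b)
    {
        uint8       j;

        if (x == 0)
            j = b + 1;                              /* no bit set at all */
        else
            j = 32 - pg_leftmost_one_pos32(x);      /* leading zeros + 1 */

        return Min(j, (uint8) (b + 1));
    }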
Michael Paquier [Thu, 30 Jul 2020 07:57:37 +0000 (16:57 +0900)]
Include partitioned tables for tab completion of VACUUM in psql
The relkinds that support indexing are the same as the ones supporting
VACUUM, so the code gets refactored a bit with the completion query used
for CLUSTER, but there is no change for CLUSTER in this commit.
Author: Justin Pryzby
Reviewed-by: Fujii Masao, Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/20200728170408.GI20393@telsasoft.com
Michael Paquier [Thu, 30 Jul 2020 06:48:44 +0000 (15:48 +0900)]
doc: Mention index references in pg_inherits
Partitioned indexes are also registered in pg_inherits, but the
description of this catalog did not reflect that.
Author: Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/87k0ynj35y.fsf@wibble.ilmari.org
Backpatch-through: 11
Thomas Munro [Thu, 30 Jul 2020 05:25:48 +0000 (17:25 +1200)]
Introduce a WaitEventSet for the stats collector.
This avoids some epoll/kqueue system calls for every wait.
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKGJAC4Oqao%3DqforhNey20J8CiG2R%3DoBPqvfR0vOJrFysGw%40mail.gmail.com
Thomas Munro [Thu, 30 Jul 2020 05:23:32 +0000 (17:23 +1200)]
Use WaitLatch() for condition variables.
Previously, condition_variable.c created a long lived WaitEventSet to
avoid extra system calls. WaitLatch() now uses something similar
internally, so there is no point in wasting an extra kernel descriptor.
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKGJAC4Oqao%3DqforhNey20J8CiG2R%3DoBPqvfR0vOJrFysGw%40mail.gmail.com
Thomas Munro [Thu, 30 Jul 2020 05:08:11 +0000 (17:08 +1200)]
Use a long lived WaitEventSet for WaitLatch().
Create LatchWaitSet at backend startup time, and use it to implement
WaitLatch(). This avoids repeated epoll/kqueue setup and teardown
system calls.
Reorder SubPostmasterMain() slightly so that we restore the postmaster
pipe and Windows signal emulation before we reach InitPostmasterChild(),
to make this work in EXEC_BACKEND builds.
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKGJAC4Oqao%3DqforhNey20J8CiG2R%3DoBPqvfR0vOJrFysGw%40mail.gmail.com
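A rough sketch of the long-lived set, assuming an initialization hook name; the WaitEventSet calls themselves (CreateWaitEventSet, AddWaitEventToSet) are the existing latch API.

    static WaitEventSet *LatchWaitSet = NULL;

    void
    InitializeLatchWaitSet(void)
    {
        /* Built once per backend; reused by every WaitLatch() call. */
        LatchWaitSet = CreateWaitEventSet(TopMemoryContext, 2);
        AddWaitEventToSet(LatchWaitSet, WL_LATCH_SET, PGINVALID_SOCKET,
                          MyLatch, NULL);
        AddWaitEventToSet(LatchWaitSet, WL_EXIT_ON_PM_DEATH, PGINVALID_SOCKET,
                          NULL, NULL);
    }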
Peter Geoghegan [Wed, 29 Jul 2020 21:14:58 +0000 (14:14 -0700)]
Add hash_mem_multiplier GUC.
Add a GUC that acts as a multiplier on work_mem. It gets applied when
sizing executor node hash tables that were previously size constrained
using work_mem alone.
The new GUC can be used to preferentially give hash-based nodes more
memory than the generic work_mem limit. It is intended to enable admin
tuning of the executor's memory usage. Overall system throughput and
system responsiveness can be improved by giving hash-based executor
nodes more memory (especially over sort-based alternatives, which are
often much less sensitive to being memory constrained).
The default value for hash_mem_multiplier is 1.0, which is also the
minimum valid value. This means that hash-based nodes continue to apply
work_mem in the traditional way by default.
hash_mem_multiplier is generally useful. However, it is being added now
due to concerns about hash aggregate performance stability for users
that upgrade to Postgres 13 (which added disk-based hash aggregation in
commit
1f39bce0). While the old hash aggregate behavior risked
out-of-memory errors, it is nevertheless likely that many users actually
benefited. Hash agg's previous indifference to work_mem during query
execution was not just faster; it also accidentally made aggregation
resilient to grouping estimate problems (at least in cases where this
didn't create destabilizing memory pressure).
hash_mem_multiplier can provide a certain kind of continuity with the
behavior of Postgres 12 hash aggregates in cases where the planner
incorrectly estimates that all groups (plus related allocations) will
fit in work_mem/hash_mem. This seems necessary because hash-based
aggregation is usually much slower when only a small fraction of all
groups can fit. Even when it isn't possible to totally avoid hash
aggregates that spill, giving hash aggregation more memory will reliably
improve performance (the same cannot be said for external sort
operations, which appear to be almost unaffected by memory availability
provided it's at least possible to get a single merge pass).
The PostgreSQL 13 release notes should advise users that increasing
hash_mem_multiplier can help with performance regressions associated
with hash aggregation. That can be taken care of by a later commit.
Author: Peter Geoghegan
Reviewed-By: Álvaro Herrera, Jeff Davis
Discussion: https://postgr.es/m/20200625203629.7m6yvut7eqblgmfo@alap3.anarazel.de
Discussion: https://postgr.es/m/CAH2-WzmD%2Bi1pG6rc1%2BCjc4V6EaFJ_qSuKCCHVnH%3DoruqD-zqow%40mail.gmail.com
Backpatch: 13-, where disk-based hash aggregation was introduced.
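A hedged sketch of how the multiplier feeds into sizing decisions; the helper name follows the commit's framing of a derived "hash_mem" limit, with the exact clamping left approximate.

    /* Returns the working-memory limit, in kilobytes, for hash-based nodes. */
    int
    get_hash_mem(void)
    {
        double      hash_mem = (double) work_mem * hash_mem_multiplier;

        /* Keep the result within the range work_mem itself allows. */
        if (hash_mem > MAX_KILOBYTES)
            hash_mem = MAX_KILOBYTES;

        return (int) hash_mem;
    }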
Fujii Masao [Wed, 29 Jul 2020 14:21:55 +0000 (23:21 +0900)]
pg_stat_statements: track number of rows processed by some utility commands.
This commit makes pg_stat_statements track the total number
of rows retrieved or affected by CREATE TABLE AS, SELECT INTO,
CREATE MATERIALIZED VIEW and FETCH commands.
Suggested-by: Pascal Legrand
Author: Fujii Masao
Reviewed-by: Asif Rehman
Discussion: https://postgr.es/m/1584293755198-0.post@n3.nabble.com
Fujii Masao [Wed, 29 Jul 2020 12:24:26 +0000 (21:24 +0900)]
Remove non-fast promotion.
When fast promotion was supported in 9.3, non-fast promotion became an
undocumented feature that is basically not available to ordinary users.
However, we decided not to remove non-fast promotion at that moment, leaving
it around for a release or two for debugging purposes or as an emergency
method in case fast promotion had issues, and then to remove it later.
Several versions have been released since that decision and there is no
longer any reason to keep supporting non-fast promotion.
Therefore this commit removes non-fast promotion.
Author: Fujii Masao
Reviewed-by: Hamid Akhtar, Kyotaro Horiguchi
Discussion: https://postgr.es/m/76066434-648f-f567-437b-54853b43398f@oss.nttdata.com
Jeff Davis [Wed, 29 Jul 2020 06:15:47 +0000 (23:15 -0700)]
HashAgg: use better cardinality estimate for recursive spilling.
Use HyperLogLog to estimate the group cardinality in a spilled
partition. This estimate is used to choose the number of partitions if
we recurse.
The previous behavior was to use the number of tuples in a spilled
partition as the estimate for the number of groups, which led to
overpartitioning. That could cause the number of batches to be much
higher than expected (with each batch being very small), which made it
harder to interpret EXPLAIN ANALYZE results.
Reviewed-by: Peter Geoghegan
Discussion: https://postgr.es/m/a856635f9284bc36f7a77d02f47bbb6aaf7b59b3.camel@j-davis.com
Backpatch-through: 13
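An abridged sketch of the estimation flow, using the lib/hyperloglog.h entry points but illustrative spill-side names (HashAggSpill, hll_card, choose_num_partitions are stand-ins): feed each spilled group's hash into an HLL state, then size the next recursion level from its estimate rather than from raw tuple counts.

    static void
    hashagg_spill_track_group(HashAggSpill *spill, uint32 hash)
    {
        /* one addHyperLogLog() call per tuple written to the spill partition */
        addHyperLogLog(&spill->hll_card, hash);
    }

    static int
    hashagg_spill_num_partitions(HashAggSpill *spill)
    {
        double      input_groups = estimateHyperLogLog(&spill->hll_card);

        /* previously: input_groups = number of spilled tuples (overestimate) */
        return choose_num_partitions(input_groups);
    }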
Michael Paquier [Wed, 29 Jul 2020 05:44:32 +0000 (14:44 +0900)]
Fix incorrect print format in json.c
Oid is unsigned, so %u needs to be used and not %d. The code path
involved here is not normally reachable, so no backpatch is done.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20200728015523.GA27308@telsasoft.com
Thomas Munro [Wed, 29 Jul 2020 04:46:58 +0000 (16:46 +1200)]
Move syncscan.c to src/backend/access/common.
Since the tableam.c code needs to make use of the syncscan.c routines
itself, and since other block-oriented AMs might also want to use it one
day, it didn't make sense for it to live under src/backend/access/heap.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKGLCnG%3DNEAByg6bk%2BCT9JZD97Y%3DAxKhh27Su9FeGWOKvDg%40mail.gmail.com
Peter Geoghegan [Wed, 29 Jul 2020 00:59:16 +0000 (17:59 -0700)]
Rename another "hash_mem" local variable.
Missed by my commit
564ce621.
Backpatch: 13-, where disk-based hash aggregation was introduced.
Peter Geoghegan [Wed, 29 Jul 2020 00:14:07 +0000 (17:14 -0700)]
Correct obsolete UNION hash aggs comment.
Oversight in commit
1f39bce0, which added disk-based hash aggregation.
Backpatch: 13-, where disk-based hash aggregation was introduced.
Peter Geoghegan [Tue, 28 Jul 2020 23:59:01 +0000 (16:59 -0700)]
Doc: Remove obsolete CREATE AGGREGATE note.
The planner is in fact willing to use hash aggregation when work_mem is
not set high enough for everything to fit in memory. This has been the
case since commit
1f39bce0, which added disk-based hash aggregation.
There are a few remaining cases in which hash aggregation is avoided as
a matter of policy when the planner surmises that spilling will be
necessary. For example, callers of choose_hashed_setop() still
conservatively avoid hash aggregation when spilling is anticipated.
That doesn't seem like a good enough reason to mention hash aggregation
in this context.
Backpatch: 13-, where disk-based hash aggregation was introduced.
David Rowley [Tue, 28 Jul 2020 23:42:21 +0000 (11:42 +1200)]
Make EXPLAIN ANALYZE of HashAgg more similar to Hash Join
There were various unnecessary differences between Hash Agg's EXPLAIN
ANALYZE output and Hash Join's. Here we modify the Hash Agg output so
that it's better aligned to Hash Join's.
The following changes have been made:
1. Start batches counter at 1 instead of 0.
2. Always display the "Batches" property, even when we didn't spill to
disk.
3. Use the text "Batches" instead of "HashAgg Batches" for text format.
4. Use the text "Memory Usage" instead of "Peak Memory Usage" for text
format.
5. Include "Batches" before "Memory Usage" in both text and non-text
formats.
In passing also modify the "Planned Partitions" property so that we show
it regardless of whether the value is 0 for non-text EXPLAIN formats.
This was pointed out by Justin Pryzby and probably should have been part
of
40efbf870.
Reviewed-by: Justin Pryzby, Jeff Davis
Discussion: https://postgr.es/m/CAApHDvrshRnA6C0VFnu7Fb9TVvgGo80PUMm5+2DiaS1gEkPvtw@mail.gmail.com
Backpatch-through: 13, where HashAgg batching was introduced
David Rowley [Tue, 28 Jul 2020 10:52:03 +0000 (22:52 +1200)]
Doc: Improve documentation for pg_jit_available()
Per complaint from Scott Ribe. Based on wording suggestion from Tom Lane.
Discussion: https://postgr.es/m/1956E806-1468-4417-9A9D-235AE1D5FE1A@elevated-dev.com
Backpatch-through: 11, where pg_jit_available() was added
Amit Kapila [Tue, 28 Jul 2020 02:36:44 +0000 (08:06 +0530)]
Extend the logical decoding output plugin API with stream methods.
This adds seven methods to the output plugin API, adding support for
streaming changes of large in-progress transactions.
* stream_start
* stream_stop
* stream_abort
* stream_commit
* stream_change
* stream_message
* stream_truncate
Most of this is a simple extension of the existing methods, with
the semantic difference that the transaction (or subtransaction)
is incomplete and may be aborted later (which is something the
regular API does not really need to deal with).
This also extends the 'test_decoding' plugin, implementing these
new stream methods.
The stream_start/stream_stop methods are used to demarcate a chunk of changes
streamed for a particular toplevel transaction.
This commit simply adds these new APIs and the upcoming patch to "allow
the streaming mode in ReorderBuffer" will use these APIs.
Author: Tomas Vondra, Dilip Kumar, Amit Kapila
Reviewed-by: Amit Kapila
Tested-by: Neha Sharma and Mahendra Singh Thalor
Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
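A brief sketch of how an output plugin would opt in, with hypothetical my_* handlers; the stream_*_cb member names follow the seven methods listed above and may differ in detail from the committed header.

    void
    _PG_output_plugin_init(OutputPluginCallbacks *cb)
    {
        cb->startup_cb = my_startup;
        cb->change_cb = my_change;
        cb->commit_cb = my_commit;
        cb->shutdown_cb = my_shutdown;

        /* new streaming callbacks for large in-progress transactions */
        cb->stream_start_cb = my_stream_start;
        cb->stream_stop_cb = my_stream_stop;
        cb->stream_abort_cb = my_stream_abort;
        cb->stream_commit_cb = my_stream_commit;
        cb->stream_change_cb = my_stream_change;
        cb->stream_message_cb = my_stream_message;
        cb->stream_truncate_cb = my_stream_truncate;
    }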
Etsuro Fujita [Tue, 28 Jul 2020 02:00:00 +0000 (11:00 +0900)]
Fix some issues with step generation in partition pruning.
In the case of range partitioning, get_steps_using_prefix() assumes that
the passed-in prefix list contains at least one clause for each of the
partition keys earlier than one specified in the passed-in
step_lastkeyno, but the caller (ie, gen_prune_steps_from_opexps())
didn't take it into account, which led to a server crash or incorrect
results when the list contained no clauses for such partition keys, as
reported in bug #16500 and #16501 from Kobayashi Hisanori. Update the
caller to call that function only when the list created there contains
at least one clause for each of the earlier partition keys in the case
of range partitioning.
While at it, fix some other issues:
* The list to pass to get_steps_using_prefix() is allowed to contain
multiple clauses for the same partition key, as described in the
comment for that function, but that function actually assumed that the
list contained just a single clause for each of middle partition keys,
which led to an assertion failure when the list contained multiple
clauses for such partition keys. Update that function to match the
comment.
* In the case of hash partitioning, partition keys are allowed to be
NULL, in which case the list to pass to get_steps_using_prefix()
contains no clauses for NULL partition keys, but that function treats
that case as like the case of range partitioning, which led to the
assertion failure. Update the assertion test to take into account
NULL partition keys in the case of hash partitioning.
* Fix a typo in a comment in get_steps_using_prefix_recurse().
* gen_partprune_steps() failed to detect self-contradiction from
strict-qual clauses and an IS NULL clause for the same partition key
in some cases, producing incorrect partition-pruning steps, which led
to incorrect results of partition pruning, but didn't cause any
user-visible problems fortunately, as the self-contradiction is
detected later in the query planning. Update that function to detect
the self-contradiction.
Per bug #16500 and #16501 from Kobayashi Hisanori. Patch by me, initial
diagnosis for the reported issue and review by Dmitry Dolgov.
Back-patch to v11, where partition pruning was introduced.
Discussion: https://postgr.es/m/16500-d1613f2a78e1e090%40postgresql.org
Discussion: https://postgr.es/m/16501-5234a9a0394f6754%40postgresql.org
Peter Geoghegan [Tue, 28 Jul 2020 00:53:19 +0000 (17:53 -0700)]
Remove hashagg_avoid_disk_plan GUC.
Note: This GUC was originally named enable_hashagg_disk when it appeared
in commit
1f39bce0, which added disk-based hash aggregation. It was
subsequently renamed in commit
92c58fd9.
Author: Peter Geoghegan
Reviewed-By: Jeff Davis, Álvaro Herrera
Discussion: https://postgr.es/m/9d9d1e1252a52ea1bad84ea40dbebfd54e672a0f.camel%40j-davis.com
Backpatch: 13-, where disk-based hash aggregation was introduced.
Michael Paquier [Mon, 27 Jul 2020 06:58:32 +0000 (15:58 +0900)]
Fix corner case with 16kB-long decompression in pgcrypto, take 2
A compressed stream may end with an empty packet. In this case
decompression finishes before reading the empty packet and the
remaining stream packet causes a failure in reading the following
data. This commit makes sure to consume such extra data, avoiding a
failure when decompressing the data. This corner case was reproducible
easily with a data length of 16kB, and existed since
e94dd6a. A cheap
regression test is added to cover this case based on a random,
incompressible string.
The first attempt at this patch allowed us to find an older failure
within the compression logic of pgcrypto, fixed by
b9b6105. This
involved SLES 15 with z390 where a custom flavor of libz gets used.
Bonus thanks to Mark Wong for providing access to the specific
environment.
Reported-by: Frank Gagnepain
Author: Kyotaro Horiguchi, Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/16476-692ef7b84e5fb893@postgresql.org
Backpatch-through: 9.5
Michael Paquier [Mon, 27 Jul 2020 01:28:06 +0000 (10:28 +0900)]
Fix handling of structure for bytea data type in ECPG
Some code paths dedicated to bytea used the structure for varchar. This
did not lead to any actual bugs, as bytea and varchar have the same
definition, but it could become a trap if one of these definitions
changes for a new feature or a bug fix.
Issue introduced by
050710b.
Author: Shenhao Wang
Reviewed-by: Vignesh C, Michael Paquier
Discussion: https://postgr.es/m/07ac7dee1efc44f99d7f53a074420177@G08CNEXMBPEKD06.g08.fujitsu.local
Backpatch-through: 12
Jeff Davis [Sun, 26 Jul 2020 21:55:52 +0000 (14:55 -0700)]
Fix LookupTupleHashEntryHash() pipeline-stall issue.
Refactor hash lookups in nodeAgg.c to improve performance.
Author: Andres Freund and Jeff Davis
Discussion: https://postgr.es/m/20200612213715.op4ye4q7gktqvpuo%40alap3.anarazel.de
Backpatch-through: 13
David Rowley [Sun, 26 Jul 2020 09:02:45 +0000 (21:02 +1200)]
Allocate consecutive blocks during parallel seqscans
Previously we would allocate blocks to parallel workers during a parallel
sequential scan 1 block at a time. Since other workers were likely to
request a block before a worker returns for another block number to work
on, this could lead to non-sequential I/O patterns in each worker which
could cause the operating system's readahead to perform poorly or not at
all.
Here we change things so that we allocate consecutive "chunks" of blocks
to workers and have them work on those until they're done, at which time
we allocate another chunk for the worker. The size of these chunks is
based on the size of the relation.
Initial patch here was by Thomas Munro which showed some good improvements
just having a fixed chunk size of 64 blocks with a simple ramp-down near
the end of the scan. The revisions of the patch to make the chunk size
based on the relation size and the adjusted ramp-down in powers of two was
done by me, along with quite extensive benchmarking to determine the
optimal chunk sizes.
For the most part, benchmarks have shown significant performance
improvements for large parallel sequential scans on Linux, FreeBSD and
Windows using SSDs. It's less clear how this affects the performance of
cloud providers. Tests done so far are unable to obtain stable enough
performance to provide meaningful benchmark results. It is possible that
this could cause some performance regressions on more obscure filesystems,
so we may need to later provide users with some ability to get something
closer to the old behavior. For now, let's leave that until we see that
it's really required.
Author: Thomas Munro, David Rowley
Reviewed-by: Ranier Vilela, Soumyadeep Chakraborty, Robert Haas
Reviewed-by: Amit Kapila, Kirk Jamison
Discussion: https://postgr.es/m/CA+hUKGJ_EErDv41YycXcbMbCBkztA34+z1ts9VQH+ACRuvpxig@mail.gmail.com
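A compressed sketch of the chunked allocation, loosely following the shape of table_block_parallelscan_nextpage(); the field names are approximations and the ramp-down near the end of the scan is omitted.

    static BlockNumber
    parallel_scan_next_block(ParallelBlockTableScanDesc pbscan,
                             ParallelBlockTableScanWorker pbscanwork)
    {
        uint64      nallocated;

        if (pbscanwork->phsw_chunk_remaining > 0)
        {
            /* Continue handing out blocks from this worker's current chunk. */
            nallocated = ++pbscanwork->phsw_nallocated;
            pbscanwork->phsw_chunk_remaining--;
        }
        else
        {
            /* Atomically claim the next contiguous chunk of blocks. */
            nallocated = pbscanwork->phsw_nallocated =
                pg_atomic_fetch_add_u64(&pbscan->phs_nallocated,
                                        pbscanwork->phsw_chunk_size);
            pbscanwork->phsw_chunk_remaining = pbscanwork->phsw_chunk_size - 1;
        }

        if (nallocated >= pbscan->phs_nblocks)
            return InvalidBlockNumber;      /* the scan is complete */

        return (nallocated + pbscan->phs_startblock) % pbscan->phs_nblocks;
    }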
Michael Paquier [Sun, 26 Jul 2020 07:32:11 +0000 (16:32 +0900)]
Tweak behavior of pg_stat_activity.leader_pid
The initial implementation of leader_pid in pg_stat_activity added by
b025f32 took the approach of strictly printing what a PGPROC entry
includes. In short, if a backend has been involved in parallel query at
least once, leader_pid would remain set as long as the backend is alive.
For a parallel group leader, this means that the field would always be
set after it participated at least once in parallel query, and after
more discussions this could be confusing if using for example a
connection pooler.
This commit changes the data printed so that leader_pid is always NULL
for a parallel group leader, showing a non-NULL value only for the
parallel workers, and only as long as a parallel query is running, since
workers are shut down once the query has completed.
This does not change the definition of any catalog, so no catalog bump
is needed. Per discussion with Justin Pryzby, Álvaro Herrera, Julien
Rouhaud and me.
Discussion: https://postgr.es/m/20200721035145.GB17300@paquier.xyz
Backpatch-through: 13
Noah Misch [Sat, 25 Jul 2020 21:50:59 +0000 (14:50 -0700)]
Remove optimization for RAND_poll() failing.
The loop to generate seed data will exit on RAND_status(), so we don't
need to handle the case of RAND_poll() failing separately. Failures
here are rare, so this is a code cleanup, essentially.
Daniel Gustafsson, reviewed by David Steele and Michael Paquier.
Discussion: https://postgr.es/m/9B038FA5-23E8-40D0-B932-D515E1D8F66A@yesql.se
Noah Misch [Sat, 25 Jul 2020 21:50:59 +0000 (14:50 -0700)]
Use RAND_poll() for seeding randomness after fork().
OpenSSL deprecated RAND_cleanup(), and OpenSSL 1.1.0 made it into a
no-op. Replace it with RAND_poll(), per an OpenSSL community
recommendation. While this has no user-visible consequences under
OpenSSL defaults, it might help under non-default settings.
Daniel Gustafsson, reviewed by David Steele and Michael Paquier.
Discussion: https://postgr.es/m/9B038FA5-23E8-40D0-B932-D515E1D8F66A@yesql.se
Tom Lane [Sat, 25 Jul 2020 20:34:35 +0000 (16:34 -0400)]
Improve performance of binary COPY FROM through better buffering.
At least on Linux and macOS, fread() turns out to have far higher
per-call overhead than one could wish. Reading 64KB of data at a time
and then parceling it out with our own memcpy logic makes binary COPY
from a file significantly faster --- around 30% in simple testing for
cases with narrow text columns (on Linux ... even more on macOS).
In binary COPY from frontend, there's no per-call fread(), and this
patch introduces an extra layer of memcpy'ing, but it still manages
to eke out a small win. Apparently, the control-logic overhead in
CopyGetData() is enough to be worth avoiding for small fetches.
Bharath Rupireddy and Amit Langote, reviewed by Vignesh C,
cosmetic tweaks by me
Discussion: https://postgr.es/m/CALj2ACU5Bz06HWLwqSzNMN=Gupoj6Rcn_QVC+k070V4em9wu=A@mail.gmail.com
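A self-contained sketch of the buffering idea (not the actual copy.c code): pull 64KB at a time with fread() and satisfy the parser's small reads with memcpy() from the local buffer.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define RAW_BUF_SIZE 65536

    static char raw_buf[RAW_BUF_SIZE];
    static int  raw_buf_index = 0;      /* next unread byte in raw_buf */
    static int  raw_buf_len = 0;        /* number of valid bytes in raw_buf */

    /* Read nbytes from fp into dest, refilling raw_buf in large chunks. */
    static bool
    buffered_read(FILE *fp, void *dest, int nbytes)
    {
        char   *out = dest;

        while (nbytes > 0)
        {
            int     avail = raw_buf_len - raw_buf_index;

            if (avail == 0)
            {
                raw_buf_len = (int) fread(raw_buf, 1, RAW_BUF_SIZE, fp);
                raw_buf_index = 0;
                if (raw_buf_len == 0)
                    return false;       /* unexpected EOF or read error */
                continue;
            }
            if (avail > nbytes)
                avail = nbytes;
            memcpy(out, raw_buf + raw_buf_index, avail);
            raw_buf_index += avail;
            out += avail;
            nbytes -= avail;
        }
        return true;
    }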
Tom Lane [Sat, 25 Jul 2020 16:54:58 +0000 (12:54 -0400)]
Mark built-in coercion functions as leakproof where possible.
Making these leakproof seems helpful since (for example) if you have a
function f(int8) that is leakproof, you don't want it to effectively
become non-leakproof when you apply it to an int4 or int2 column.
But that's what happens today, since the implicit up-coercion will
not be leakproof.
Most of the coercion functions that visibly can't throw errors are
functions that convert numeric datatypes to other, wider ones.
Notable is that float4_numeric and float8_numeric can be marked
leakproof; before commit
a57d312a7 they could not have been.
I also marked the functions that coerce strings to "name" as leakproof;
that's okay today because they truncate silently, but if we ever
reconsidered that behavior then they could no longer be leakproof.
I desisted from marking rtrim1() as leakproof; it appears so right now,
but the code seems a little too complex and perhaps subject to change,
since it's shared with other SQL functions.
Discussion: https://postgr.es/m/459322.1595607431@sss.pgh.pa.us
Amit Kapila [Sat, 25 Jul 2020 04:50:39 +0000 (10:20 +0530)]
Fix buffer usage stats for nodes above Gather Merge.
Commit
85c9d347 addressed a similar problem for Gather and Gather
Merge nodes but forgot to account for nodes above parallel nodes. This
still works for nodes above Gather node because we shut down the workers
for Gather node as soon as there are no more tuples. We can do a similar
thing for Gather Merge as well but it seems better to account for stats
during node shutdown after completing the execution.
Reported-by: Stéphane Lorek, Jehan-Guillaume de Rorthais
Author: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>
Reviewed-by: Amit Kapila
Backpatch-through: 10, where it was introduced
Discussion: https://postgr.es/m/20200718160206.584532a2@firost
Tom Lane [Fri, 24 Jul 2020 19:43:56 +0000 (15:43 -0400)]
Replace TS_execute's TS_EXEC_CALC_NOT flag with TS_EXEC_SKIP_NOT.
It's fairly silly that ignoring NOT subexpressions is TS_execute's
default behavior. It's wrong on its face and it encourages errors
of omission. Moreover, the only two remaining callers that aren't
specifying CALC_NOT are in ts_headline calculations, and it's very
arguable that those are bugs: if you've specified "!foo" in your
query, why would you want to get a headline that includes "foo"?
Hence, rip that out and change the default behavior to be to calculate
NOT accurately. As a concession to the slim chance that there is still
somebody somewhere who needs the incorrect behavior, provide a new
SKIP_NOT flag to explicitly request that.
Back-patch into v13, mainly because it seems better to change this
at the same time as the previous commit's rejiggering of TS_execute
related APIs. Any outside callers affected by this change are
probably also affected by that one.
Discussion: https://postgr.es/m/CALT9ZEE-aLotzBg-pOp2GFTesGWVYzXA3=mZKzRDa_OKnLF7Mg@mail.gmail.com
Tom Lane [Fri, 24 Jul 2020 19:26:51 +0000 (15:26 -0400)]
Fix assorted bugs by changing TS_execute's callback API to ternary logic.
Text search sometimes failed to find valid matches, for instance
'!crew:A'::tsquery might fail to locate 'crew:1B'::tsvector during
an index search. The root of the issue is that TS_execute's callback
functions were not changed to use ternary (yes/no/maybe) reporting
when we made the search logic itself do so. It's somewhat annoying
to break that API, but on the other hand we now see that any code
using plain boolean logic is almost certainly broken since the
addition of phrase search. There seem to be very few outside callers
of this code anyway, so we'll just break them intentionally to get
them to adapt.
This allows removal of tsginidx.c's private re-implementation of
TS_execute, since that's now entirely duplicative. It's also no
longer necessary to avoid use of CALC_NOT in tsgistidx.c, since
the underlying callbacks can now do something reasonable.
Back-patch into v13. We can't change this in stable branches,
but it seems not quite too late to fix it in v13.
Tom Lane and Pavel Borisov
Discussion: https://postgr.es/m/CALT9ZEE-aLotzBg-pOp2GFTesGWVYzXA3=mZKzRDa_OKnLF7Mg@mail.gmail.com
Peter Eisentraut [Fri, 24 Jul 2020 08:34:16 +0000 (10:34 +0200)]
Rename configure.in to configure.ac
The new name has been preferred by Autoconf for a long time. Future
versions of Autoconf will warn about the old name.
Discussion: https://www.postgresql.org/message-id/flat/e796c185-5ece-8569-248f-dd3799701be1%402ndquadrant.com
Tom Lane [Thu, 23 Jul 2020 21:19:37 +0000 (17:19 -0400)]
Fix ancient violation of zlib's API spec.
contrib/pgcrypto mishandled the case where deflate() does not consume
all of the offered input on the first try. It reset the next_in pointer
to the start of the input instead of leaving it alone, causing the wrong
data to be fed to the next deflate() call.
This has been broken since pgcrypto was committed. The reason for the
lack of complaints seems to be that it's fairly hard to get stock zlib
to not consume all the input, so long as the output buffer is big enough
(which it normally would be in pgcrypto's usage; AFAICT the input is
always going to be packetized into packets no larger than ZIP_OUT_BUF).
However, IBM's zlibNX implementation for AIX evidently will do it
in some cases.
I did not add a test case for this, because I couldn't find one that
would fail with stock zlib. When we put back the test case for
bug #16476, that will cover the zlibNX situation well enough.
While here, write deflate()'s second argument as Z_NO_FLUSH per its
API spec, instead of hard-wiring the value zero.
Per buildfarm results and subsequent investigation.
Discussion: https://postgr.es/m/16476-692ef7b84e5fb893@postgresql.org
Peter Eisentraut [Thu, 23 Jul 2020 15:13:00 +0000 (17:13 +0200)]
doc: Document that ssl_ciphers does not affect TLS 1.3
TLS 1.3 uses a different way of specifying ciphers and a different
OpenSSL API. PostgreSQL currently does not support setting those
ciphers. For now, just document this. In the future, support for
this might be added somehow.
Reviewed-by: Jonathan S. Katz <jkatz@postgresql.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Thomas Munro [Thu, 23 Jul 2020 09:10:49 +0000 (21:10 +1200)]
Fix error message.
Remove extra space. Back-patch to all releases, like commit
7897e3bb.
Author: Lu, Chenyang <lucy.fnst@cn.fujitsu.com>
Discussion: https://postgr.es/m/795d03c6129844d3803e7eea48f5af0d%40G08CNEXMBPEKD04.g08.fujitsu.local
Amit Kapila [Thu, 23 Jul 2020 02:49:07 +0000 (08:19 +0530)]
WAL Log invalidations at command end with wal_level=logical.
When wal_level=logical, write invalidations at command end into WAL so
that decoding can use this information.
This patch is required to allow the streaming of in-progress transactions
in logical decoding. The actual work to allow streaming will be committed
as a separate patch.
We still add the invalidations to the cache and write them to WAL at
commit time in RecordTransactionCommit(). This uses the existing
XLOG_INVALIDATIONS xlog record type, from the RM_STANDBY_ID resource
manager (see LogStandbyInvalidations for details).
So existing code relying on those invalidations (e.g. redo) does not need
to be changed.
The invalidations written at command end uses a new xlog record type
XLOG_XACT_INVALIDATIONS, from RM_XACT_ID resource manager. See
LogLogicalInvalidations for details.
These new xlog records are ignored by existing redo procedures, which
still rely on the invalidations written to commit records.
The invalidations are decoded and accumulated in top-transaction, and then
executed during replay. This obviates the need to decode the
invalidations as part of a commit record.
Bump XLOG_PAGE_MAGIC, since this introduces XLOG_XACT_INVALIDATIONS.
Author: Dilip Kumar, Tomas Vondra, Amit Kapila
Reviewed-by: Amit Kapila
Tested-by: Neha Sharma and Mahendra Singh Thalor
Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
Michael Paquier [Wed, 22 Jul 2020 23:29:08 +0000 (08:29 +0900)]
Revert "Fix corner case with PGP decompression in pgcrypto"
This reverts commit
9e10898, after finding out that buildfarm members
running SLES 15 on z390 complain on the compression and decompression
logic of the new test: pipistrelles, barbthroat and steamerduck.
Those hosts are visibly using hardware-specific changes to improve zlib
performance, requiring more investigation.
Thanks to Tom Lane for the discussion.
Discussion: https://postgr.es/m/20200722093749.GA2564@paquier.xyz
Backpatch-through: 9.5
Tom Lane [Wed, 22 Jul 2020 23:19:44 +0000 (19:19 -0400)]
Support infinity and -infinity in the numeric data type.
Add infinities that behave the same as they do in the floating-point
data types. Aside from any intrinsic usefulness these may have,
this closes an important gap in our ability to convert floating
values to numeric and/or replace float-based APIs with numeric.
The new values are represented by bit patterns that were formerly
not used (although old code probably would take them for NaNs).
So there shouldn't be any pg_upgrade hazard.
Patch by me, reviewed by Dean Rasheed and Andrew Gierth
Discussion: https://postgr.es/m/606717.1591924582@sss.pgh.pa.us
Michael Paquier [Wed, 22 Jul 2020 05:52:23 +0000 (14:52 +0900)]
Fix corner case with PGP decompression in pgcrypto
A compressed stream may end with an empty packet, and PGP decompression
finished before reading this empty packet in the remaining stream. This
caused a failure in pgcrypto, handling this case as corrupted data.
This commit makes sure to consume such extra data, avoiding a failure
when decompressing the entire stream. This corner case was reproducible
with a data length of 16kB, and existed since its introduction in
e94dd6a. A cheap regression test is added to cover this case.
Thanks to Jeff Janes for the extra investigation.
Reported-by: Frank Gagnepain
Author: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/16476-692ef7b84e5fb893@postgresql.org
Backpatch-through: 9.5
Thomas Munro [Wed, 22 Jul 2020 04:38:20 +0000 (16:38 +1200)]
Fix conversion table generator scripts.
convutils.pm used implicit conversion of undefined value to integer
zero. Some of the conversion scripts are susceptible to regexp greediness.
Fix, avoiding whitespace changes in the output. Also update ICU URLs
that moved.
No need to back-patch, because the output of these scripts is also in
the source tree so we shouldn't need to rerun them on back-branches.
Author: Kyotaro Horiguchi <horikyoga.ntt@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKGJ7SEGLbj%3D%3DTQCcyKRA9aqj8%2B6L%3DexSq1y25TA%3DWxLziQ%40mail.gmail.com
Michael Paquier [Wed, 22 Jul 2020 01:16:21 +0000 (10:16 +0900)]
Fix comment in sha2.h
An incorrect reference to SHA-1 was present.
Author: Daniel Gustafsson
Discussion: https://postgr.es/m/FE26C953-FA87-4BB9-9105-AA1F8705B0D0@yesql.se
Tom Lane [Tue, 21 Jul 2020 23:40:44 +0000 (19:40 -0400)]
neqjoinsel must now pass through collation to eqjoinsel.
Since commit
044c99bc5, eqjoinsel passes the passed-in collation
to any operators it invokes. However, neqjoinsel failed to pass
on whatever collation it got, so that if we invoked a
collation-dependent operator via that code path, we'd get "could not
determine which collation to use for string comparison" or the like.
Per report from Justin Pryzby. Back-patch to v12, like the previous
commit.
Discussion: https://postgr.es/m/20200721191606.GL5748@telsasoft.com
Peter Geoghegan [Tue, 21 Jul 2020 22:50:58 +0000 (15:50 -0700)]
Add nbtree Valgrind buffer lock checks.
Holding just a buffer pin (with no buffer lock) on an nbtree buffer/page
provides very weak guarantees, especially compared to heapam, where it's
often safe to read a page while only holding a buffer pin. This commit
has Valgrind enforce the following rule: it is never okay to access an
nbtree buffer without holding both a pin and a lock on the buffer.
A draft version of this patch detected questionable code that was
cleaned up by commits fa7ff642 and 7154aa16. The code in question used
to access an nbtree buffer page's special/opaque area with no buffer
lock (only a buffer pin). This practice (which isn't obviously unsafe)
is hereby formally disallowed in nbtree. There doesn't seem to be any
reason to allow it, and banning it keeps things simple for Valgrind.
The new checks are implemented by adding custom nbtree client requests
(located in LockBuffer() wrapper functions); these requests are
"superimposed" on top of the generic bufmgr.c Valgrind client requests
added by commit 1e0dfd16. No custom resource management cleanup code is
needed to undo the effects of marking buffers as non-accessible under
this scheme.
Author: Peter Geoghegan
Reviewed-By: Anastasia Lubennikova, Georgios Kokolatos
Discussion: https://postgr.es/m/CAH2-WzkLgyN3zBvRZ1pkNJThC=xi_0gpWRUb_45eexLH1+k2_Q@mail.gmail.com
Tom Lane [Tue, 21 Jul 2020 19:19:46 +0000 (15:19 -0400)]
Weaken type-OID-matching checks in array_recv and record_recv.
Rather than always insisting on an exact match of the type OID in the
data to the element type or column type we expect, complain only when
both OIDs fall within the manually-assigned range. This acknowledges
the reality that user-defined types don't have stable OIDs, while
still preserving some of the mistake-detection value of the old test.
(It's not entirely clear whether to error if one OID is manually
assigned and the other isn't. But perhaps that case could arise in
cross-version cases where a former extension type has been imported
into core, so I let it pass.)
This change allows us to remove the prohibition on binary transfer
of user-defined arrays and composites in the recently-landed support
for binary logical replication (commit 9de77b545). We can just
unconditionally drop that check, since if the client has asked for
binary transfer it must be >= v14 and must have this change.
Discussion: https://postgr.es/m/CADK3HH+R3xMn=8t3Ct+uD+qJ1KD=Hbif5NFMJ+d5DkoCzp6Vgw@mail.gmail.com
Alvaro Herrera [Tue, 21 Jul 2020 17:11:23 +0000 (13:11 -0400)]
Glossary: Add term "base backup"
Author: Jürgen Purtz <juergen@purtz.de>
Discussion: https://postgr.es/m/95f90a5d-7692-701d-2c0c-0c88eb5cea7d@purtz.de
Alvaro Herrera [Tue, 21 Jul 2020 17:09:42 +0000 (13:09 -0400)]
Minor glossary tweaks
Add "(process)" qualifier to two terms, remove self-reference in one
term.
Author: Jürgen Purtz <juergen@purtz.de>
Discussion: https://postgr.es/m/95f90a5d-7692-701d-2c0c-0c88eb5cea7d@purtz.de
Tom Lane [Tue, 21 Jul 2020 17:03:48 +0000 (13:03 -0400)]
Be more careful about marking catalog columns NOT NULL by default.
The bug fixed in commit 72eab84a5 would not have occurred if initdb
had a less surprising rule about which columns should be marked
NOT NULL by default. Let's make that rule be strictly that the
column must be fixed-width and its predecessors must be fixed-width
and NOT NULL, removing the hacky and unsafe exceptions for oidvector
and int2vector.
Since we do still want all existing oidvector and int2vector columns
to be marked NOT NULL, we have to put BKI_FORCE_NOT_NULL labels on
them. But making this less magic and more documented seems like a
good idea, even if it's a shade more verbose.
I didn't bump catversion since the initial catalog contents are
not actually changed by this patch. Note however that the
contents of postgres.bki do change, and feeding an old copy of
that to a new backend will produce wrong results.
Discussion: https://postgr.es/m/204760.1595181800@sss.pgh.pa.us
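As an illustration (a query of my own, not part of the commit), the affected
catalog columns should still report attnotnull = true afterwards, now because
of explicit BKI_FORCE_NOT_NULL labels rather than an initdb special case:
    SELECT attname, format_type(atttypid, atttypmod) AS type, attnotnull
    FROM pg_attribute
    WHERE attrelid = 'pg_index'::regclass
      AND atttypid IN ('int2vector'::regtype, 'oidvector'::regtype);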
Tom Lane [Tue, 21 Jul 2020 16:38:08 +0000 (12:38 -0400)]
Assert that we don't insert nulls into attnotnull catalog columns.
The executor checks for this error, and so does the bootstrap catalog
loader, but we never checked for it in retail catalog manipulations.
The folly of that has now been exposed, so let's add assertions
checking it. Checking in CatalogTupleInsert[WithInfo] and
CatalogTupleUpdate[WithInfo] should be enough to cover this.
Back-patch to v10; the aforesaid functions didn't exist before that,
and it didn't seem worth adapting the patch to the oldest branches.
But given the risk of JIT crashes, I think we certainly need this
as far back as v11.
Pre-v13, we have to explicitly exclude pg_subscription.subslotname
and pg_subscription_rel.srsublsn from the checks, since they are
mismarked. (Even if we change our mind about applying BKI_FORCE_NULL
in the branch tips, it doesn't seem wise to have assertions that
would fire in existing databases.)
Discussion: https://postgr.es/m/298837.1595196283@sss.pgh.pa.us
Michael Paquier [Tue, 21 Jul 2020 03:05:07 +0000 (12:05 +0900)]
Rework tab completion of COPY and \copy in psql
This corrects and simplifies $subject in a number of ways:
- Remove from the completion the pre-9.0 grammar still supported for
compatibility purposes. This simplifies the code and makes it easier to
extend with new patterns.
- Add completion for the options of FORMAT within a WITH clause.
- Complete WHERE and WITH clauses correctly depending on whether TO or
FROM is used, WHERE being available only with COPY FROM.
Author: Vignesh C, Michael Paquier
Reviewed-by: Ahsan Hadi
Discussion: https://postgr.es/m/CALDaNm3zWr=OmxeNqOqfT=uZTSdam_j-gkX94CL8eTNfgUtf6A@mail.gmail.com
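For reference, the kinds of statements the completion now targets (table and
file names are made up):
    COPY mytable FROM '/tmp/data.csv' WITH (FORMAT csv, HEADER true) WHERE id > 0;
    COPY mytable TO '/tmp/data.csv' WITH (FORMAT csv);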
Tom Lane [Tue, 21 Jul 2020 02:03:18 +0000 (22:03 -0400)]
Fix some corner cases for window ranges with infinite offsets.
Many situations where the offset is infinity were not handled sanely.
We should generally allow the val versus base +/- offset comparison to
proceed according to the normal rules of IEEE arithmetic; however, we
must do something special for the corner cases where base +/- offset
would produce NaN due to subtracting two like-signed infinities.
That corresponds to asking which values infinitely precede +inf or
infinitely follow -inf, which should certainly be true of any finite
value or of the opposite-signed infinity. After some discussion it
seems that the best decision is to make it true of the same-signed
infinity as well, ie, just return constant TRUE if the calculation
would produce a NaN.
(We could write this with a bit less code by subtracting anyway,
and then checking for a NaN result. However, I prefer this
formulation because it'll be easier to transpose into numeric.c.)
Although this seems like clearly a bug fix with respect to finite
values, it is less obviously correct for infinite values. Between
that and the fact that the whole issue only arises for very strange
window specifications (e.g. RANGE BETWEEN 'inf' PRECEDING AND 'inf'
PRECEDING), I'll desist from back-patching.
Noted by Dean Rasheed.
Discussion: https://postgr.es/m/3393130.1594925893@sss.pgh.pa.us
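A hedged example of the kind of window specification involved (made-up data,
shown only to illustrate the shape of the frame clause, not its exact output):
    SELECT x,
           count(*) OVER (ORDER BY x
                          RANGE BETWEEN 'infinity' PRECEDING
                                    AND 'infinity' PRECEDING)
    FROM (VALUES (1.0::float8), (2.0), ('infinity')) AS t(x);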
Tom Lane [Mon, 20 Jul 2020 23:44:41 +0000 (19:44 -0400)]
Make floating-point "NaN / 0" return NaN instead of raising an error.
This is more consistent with the IEEE 754 spec and our treatment of
NaNs elsewhere; in particular, the case has always acted that way in
"numeric" arithmetic.
Noted by Dean Rasheed.
Discussion: https://postgr.es/m/3421746.1594927785@sss.pgh.pa.us
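Illustrative behavior after this change (assumed from the description above):
    SELECT 'NaN'::float8 / 0;    -- now NaN, previously a division-by-zero error
    SELECT 'NaN'::numeric / 0;   -- numeric has always returned NaN here
    SELECT 1::float8 / 0;        -- still an error: division by zero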
Peter Geoghegan [Mon, 20 Jul 2020 23:03:38 +0000 (16:03 -0700)]
Assert that buffer is pinned in LockBuffer().
Strengthen the LockBuffer() assertion that verifies BufferIsValid() by
making it verify BufferIsPinned() instead. Do the same in nearby
related functions.
There is probably not much chance that anybody will try to lock a buffer
that is not already pinned, but we might as well make sure of that.
Tom Lane [Mon, 20 Jul 2020 18:55:56 +0000 (14:55 -0400)]
Correctly mark pg_subscription_rel.srsublsn as nullable.
The code has always set this column to NULL when it's not valid,
but the catalog header's description failed to reflect that,
as did the SGML docs, as did some of the code. To prevent future
coding errors of the same ilk, let's hide the field from C code
as though it were variable-length (which, in a sense, it is).
As with commit 72eab84a5, we can only fix this cleanly in HEAD
and v13; the problem extends further back but we'll need some
klugery in the released branches.
Discussion: https://postgr.es/m/367660.1595202498@sss.pgh.pa.us
Tom Lane [Mon, 20 Jul 2020 17:40:16 +0000 (13:40 -0400)]
Fix construction of updated-columns bitmap in logical replication.
Commit b9c130a1f failed to apply the publisher-to-subscriber column
mapping while checking which columns were updated. Perhaps less
significantly, it didn't exclude dropped columns either. This could
result in an incorrect updated-columns bitmap and thus wrong decisions
about whether to fire column-specific triggers on the subscriber while
applying updates. In HEAD (since commit 9de77b545), it could also
result in accesses off the end of the colstatus array, as detected by
buildfarm member skink. Fix the logic, and adjust 003_constraints.pl
so that the problem is exposed in unpatched code.
In HEAD, also add some assertions to check that we don't access off
the ends of these newly variable-sized arrays.
Back-patch to v10, as b9c130a1f was.
Discussion: https://postgr.es/m/CAH2-Wz=79hKQ4++c5A060RYbjTHgiYTHz=fw6mptCtgghH2gJA@mail.gmail.com
Alexander Korotkov [Mon, 20 Jul 2020 10:59:50 +0000 (13:59 +0300)]
Update btree_gist extension for parallel query
All functions provided by this extension are PARALLEL SAFE.
Discussion: https://postgr.es/m/AM5PR0901MB1587E47B1ACF23C6089DFCA3FD9B0%40AM5PR0901MB1587.eurprd09.prod.outlook.com
Author: Steven Winfield
Fujii Masao [Mon, 20 Jul 2020 04:30:18 +0000 (13:30 +0900)]
Rename wal_keep_segments to wal_keep_size.
max_slot_wal_keep_size, which was added in v13, and wal_keep_segments are
the GUC parameters that specify how much WAL to retain for standby
servers. While max_slot_wal_keep_size accepts a size in bytes of WAL,
wal_keep_segments accepts a number of WAL files. This difference in units
between two similar parameters could be confusing to users.
To alleviate this situation, this commit renames wal_keep_segments to
wal_keep_size, and makes users specify a WAL size in it instead of a
number of WAL files.
Renaming max_slot_wal_keep_size to max_slot_wal_keep_segments was also
raised in the discussion, but we have been moving away from measuring in
segments; for example, checkpoint_segments was replaced by max_wal_size.
So we decided to rename wal_keep_segments to wal_keep_size instead.
Back-patch to v13 where max_slot_wal_keep_size was added.
Author: Fujii Masao
Reviewed-by: Álvaro Herrera, Kyotaro Horiguchi, David Steele
Discussion: https://postgr.es/m/574b4ea3-e0f9-b175-ead2-ebea7faea855@oss.nttdata.com
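Illustrative use of the renamed parameter (values are examples only; the
retention target is now expressed as a size rather than a segment count):
    ALTER SYSTEM SET wal_keep_size = '1GB';   -- roughly the old wal_keep_segments = 64
    SELECT pg_reload_conf();
    SHOW wal_keep_size;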
Amit Kapila [Mon, 20 Jul 2020 03:18:26 +0000 (08:48 +0530)]
Immediately WAL-log subtransaction and top-level XID association.
The logical decoding infrastructure needs to know which top-level
transaction a subxact belongs to in order to decode all the changes.
Until now that information might be delayed until commit, due to the
caching (PGPROC_MAX_CACHED_SUBXIDS), preventing features that require
incremental decoding.
So we now also write the assignment info into WAL immediately, as part
of the next WAL record (to minimize overhead), but only when
wal_level=logical. We cannot remove the existing XLOG_XACT_ASSIGNMENT
WAL record, as that is required to avoid overflow of the hot standby
snapshot.
Bump XLOG_PAGE_MAGIC, since this introduces XLR_BLOCK_ID_TOPLEVEL_XID.
Author: Tomas Vondra, Dilip Kumar, Amit Kapila
Reviewed-by: Amit Kapila
Tested-by: Neha Sharma and Mahendra Singh Thalor
Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
Fujii Masao [Mon, 20 Jul 2020 02:55:50 +0000 (11:55 +0900)]
Add generic_plans and custom_plans fields into pg_prepared_statements.
There was no easy way to find how many times generic and custom plans
have been executed for a prepared statement. This commit exposes those
counts in the pg_prepared_statements view.
Author: Atsushi Torikoshi, Kyotaro Horiguchi
Reviewed-by: Tatsuro Yamada, Masahiro Ikeda, Fujii Masao
Discussion: https://postgr.es/m/CACZ0uYHZ4M=NZpofH6JuPHeX=__5xcDELF8hT8_2T+R55w4RQw@mail.gmail.com
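Example of inspecting the new columns (the prepared statement is made up):
    PREPARE q (int) AS SELECT count(*) FROM pg_class WHERE relpages > $1;
    EXECUTE q(0);
    SELECT name, generic_plans, custom_plans FROM pg_prepared_statements;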
Amit Kapila [Mon, 20 Jul 2020 02:15:26 +0000 (07:45 +0530)]
Fix minor typo in nodeIncrementalSort.c.
Author: Vignesh C
Reviewed-by: James Coleman
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/CALDaNm0WjZqRvdeL59ZfYH0o4mLbKQ23jm-bnjXcFzgpANx55g@mail.gmail.com
Peter Geoghegan [Sun, 19 Jul 2020 23:12:51 +0000 (16:12 -0700)]
Avoid harmless Valgrind no-buffer-pin errors.
Valgrind builds with assertions enabled sometimes perform a
theoretically unsafe page access inside an assertion in
heapam_tuple_lock(). This happened when the eval-plan-qual isolation
test ran one of the permutations added by commit a2418f9e238.
Avoid complaints from Valgrind by moving the assertion ever so slightly.
This is minor cleanup for commit 1e0dfd16, which added Valgrind buffer
access instrumentation.
No backpatch, since this only happens within an assertion, and seems
very unlikely to cause any real problems even with assert-enabled
builds.
Peter Geoghegan [Sun, 19 Jul 2020 16:46:44 +0000 (09:46 -0700)]
Mark buffers as defined to Valgrind consistently.
Make PinBuffer() mark buffers as defined to Valgrind unconditionally,
including when the buffer header spinlock must be acquired. Failure to
handle that case could lead to false positive reports from Valgrind.
This theoretically creates a risk that we'll mark buffers defined even
when external callers don't end up with a buffer pin. That seems
perfectly acceptable, though, since in general we make no guarantees
about buffers that are unsafe to access being reliably marked as unsafe.
Oversight in commit 1e0dfd16, which added Valgrind buffer access
instrumentation.
Tom Lane [Sun, 19 Jul 2020 16:37:23 +0000 (12:37 -0400)]
Correctly mark pg_subscription.subslotname as nullable.
Due to the layout of this catalog, subslotname has to be explicitly
marked BKI_FORCE_NULL, else initdb will default to the assumption
that it's non-nullable. Since, in fact, CREATE/ALTER SUBSCRIPTION
will store null values there, the existing marking is just wrong,
and has been since this catalog was invented.
We haven't noticed because not much in the system actually depends
on attnotnull being truthful. However, JIT'ed tuple deconstruction
does depend on that in some cases, allowing crashes or wrong answers
in queries that inspect pg_subscription. Commit 9de77b545 quite
accidentally exposed this on the buildfarm members that force JIT
activation.
Back-patch to v13. The problem goes further back, but we cannot
force initdb in released branches, so some klugier solution will
be needed there. Before working on that, push this simple fix
to try to get the buildfarm back to green.
Discussion: https://postgr.es/m/4118109.1595096139@sss.pgh.pa.us
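A hedged illustration of the nullable case (subscription name, publication,
and connection string are placeholders): a subscription created without a
replication slot stores NULL in subslotname, which is what the JIT'ed
deconstruction path can trip over:
    CREATE SUBSCRIPTION nosub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION mypub
        WITH (connect = false, slot_name = NONE);
    SELECT subname, subslotname FROM pg_subscription;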
Peter Eisentraut [Sun, 19 Jul 2020 10:14:42 +0000 (12:14 +0200)]
Define OPENSSL_API_COMPAT
This avoids deprecation warnings from newer OpenSSL versions (3.0.0 in
particular).
Discussion: https://www.postgresql.org/message-id/flat/FEF81714-D479-4512-839B-C769D2605F8A%40yesql.se
Tom Lane [Sat, 18 Jul 2020 18:58:18 +0000 (14:58 -0400)]
Fix replication/worker_internal.h to compile without other headers.
This header hasn't changed recently, so the fact that it now fails
headerscheck/cpluspluscheck testing must be due to changes in what
it includes. Probably f21916791 is to blame, but I didn't try to
verify that.
Discussion: https://postgr.es/m/3699703.1595016554@sss.pgh.pa.us
Tom Lane [Sat, 18 Jul 2020 16:44:51 +0000 (12:44 -0400)]
Allow logical replication to transfer data in binary format.
This patch adds a "binary" option to CREATE/ALTER SUBSCRIPTION.
When that's set, the publisher will send data using the data type's
typsend function if any, rather than typoutput. This is generally
faster, if slightly less robust.
As committed, we won't try to transfer user-defined array or composite
types in binary, for fear that type OIDs won't match at the subscriber.
This might be changed later, but it seems like fit material for a
follow-on patch.
Dave Cramer, reviewed by Daniel Gustafsson, Petr Jelinek, and others;
adjusted some by me
Discussion: https://postgr.es/m/CADK3HH+R3xMn=8t3Ct+uD+qJ1KD=Hbif5NFMJ+d5DkoCzp6Vgw@mail.gmail.com
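Example usage of the new option (names and connection string are
placeholders):
    CREATE SUBSCRIPTION binsub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION mypub
        WITH (binary = true);
    ALTER SUBSCRIPTION binsub SET (binary = false);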
Michael Paquier [Sat, 18 Jul 2020 13:43:35 +0000 (22:43 +0900)]
doc: Refresh more URLs in the docs
This updates some URLs that are redirections, mostly to an equivalent
using https. One URL referring to generalized partial indexes was
outdated.
Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20200717.121308.1369606287593685396.horikyota.ntt@gmail.com
Backpatch-through: 9.5
Amit Kapila [Sat, 18 Jul 2020 04:27:23 +0000 (09:57 +0530)]
Adjust minor comment in reorderbuffer.c.
Author: Dave Cramer
Reviewed-by: David G. Johnston
Discussion: https://postgr.es/m/CADK3HHL8do4Fp1bsymgNasx375njV3AR7zY3UgYwzbL_Dx-n2Q@mail.gmail.com