Testing by Amit Kapila, Andres Freund, and me, with and without
other patches that also aim to improve scalability, seems to indicate
that this change is a significant win over the current value of 16 and
over smaller increases such as 64. It's not clear how high we can push
this value before it starts to have negative side-effects elsewhere,
but going as far as 128 looks OK.
/*
* We use this structure to keep track of locked LWLocks for release
- * during error recovery. The maximum size could be determined at runtime
- * if necessary, but it seems unlikely that more than a few locks could
- * ever be held simultaneously.
+ * during error recovery. Normally, only a few will be held at once, but
+ * occasionally the number can be much higher; for example, the pg_buffercache
+ * extension locks all buffer partitions simultaneously.
*/
-#define MAX_SIMUL_LWLOCKS 100
+#define MAX_SIMUL_LWLOCKS 200
static int num_held_lwlocks = 0;
static LWLock *held_lwlocks[MAX_SIMUL_LWLOCKS];
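
To see why the lwlock.c cap has to grow along with the partition count,
here is a minimal standalone sketch (not PostgreSQL's actual code;
hold_lock and partition_locks are made-up stand-ins for LWLockAcquire and
the buffer mapping locks): a pg_buffercache-style scan records one held
lock per partition, so 128 partitions would have overflowed the old cap
of 100.

    #include <stdio.h>
    #include <stdlib.h>

    #define NUM_BUFFER_PARTITIONS 128
    #define MAX_SIMUL_LWLOCKS 200

    /* Stand-in for an LWLock; the real structure lives in lwlock.h. */
    typedef struct LWLock { int id; } LWLock;

    static int num_held_lwlocks = 0;
    static LWLock *held_lwlocks[MAX_SIMUL_LWLOCKS];

    /* Simplified acquire: record the lock so error recovery could release it. */
    static void
    hold_lock(LWLock *lock)
    {
        if (num_held_lwlocks >= MAX_SIMUL_LWLOCKS)
        {
            fprintf(stderr, "too many LWLocks taken\n");
            exit(1);
        }
        held_lwlocks[num_held_lwlocks++] = lock;
    }

    int
    main(void)
    {
        static LWLock partition_locks[NUM_BUFFER_PARTITIONS];

        /*
         * A pg_buffercache-style scan holds every buffer mapping partition
         * lock at once; with 128 partitions the old cap of 100 would be
         * exceeded, hence the bump to 200.
         */
        for (int i = 0; i < NUM_BUFFER_PARTITIONS; i++)
            hold_lock(&partition_locks[i]);

        printf("holding %d of at most %d LWLocks\n",
               num_held_lwlocks, MAX_SIMUL_LWLOCKS);
        return 0;
    }
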
/* Number of partitions of the shared buffer mapping hashtable */
-#define NUM_BUFFER_PARTITIONS 16
+#define NUM_BUFFER_PARTITIONS 128
/* Number of partitions the shared lock tables are divided into */
#define LOG2_NUM_LOCK_PARTITIONS 4
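
For context on what the partition count controls: each buffer lookup
hashes its buffer tag and takes the mapping partition lock selected by
hashcode modulo NUM_BUFFER_PARTITIONS, so more partitions spread
concurrent lookups across more locks. Below is a toy, self-contained
sketch of that routing; toy_buffer_hash and hash_to_partition are
illustrative stand-ins, not the real code, which hashes the full
BufferTag and maps it with the BufTableHashPartition macro.

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_BUFFER_PARTITIONS 128

    /*
     * Toy stand-in for a buffer tag hash; PostgreSQL actually hashes the
     * whole BufferTag (relation, fork, block number).
     */
    static uint32_t
    toy_buffer_hash(uint32_t relnode, uint32_t blocknum)
    {
        uint32_t h = relnode * 2654435761u;   /* multiplicative mixing */
        h ^= blocknum + 0x9e3779b9u + (h << 6) + (h >> 2);
        return h;
    }

    /* Hash code modulo the partition count picks the partition lock. */
    static int
    hash_to_partition(uint32_t hashcode)
    {
        return hashcode % NUM_BUFFER_PARTITIONS;
    }

    int
    main(void)
    {
        /*
         * Lookups for different blocks land on different partition locks,
         * so concurrent backends rarely contend on the same lock.
         */
        for (uint32_t blk = 0; blk < 8; blk++)
            printf("block %u -> partition %d\n",
                   (unsigned) blk,
                   hash_to_partition(toy_buffer_hash(16384, blk)));
        return 0;
    }

With only 16 partitions, heavy concurrent buffer lookups funnel through a
small set of locks; raising the count to 128 reduces the chance that two
backends need the same partition lock at the same time.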