This runtime-computed GUC shows the number of huge pages required
for the server's main shared memory area, taking advantage of the
work done in
0c39c29 and
0bd305e. This is useful for users to estimate the number of huge pages
required for a server, as the estimate can be obtained without having to
start the server and potentially allocate a large chunk of shared memory.
The number of huge pages is calculated based on the existing GUC
huge_page_size if set, or from the system's default as found in
/proc/meminfo on Linux. There is nothing new here, as this commit reuses
the existing calculation methods and just exposes this information
directly to the user. The routine that calculates the huge page size is
refactored to limit the number of files using platform-specific flags.
This new GUC's name was the most popular choice in the discussion. This
is only supported on Linux.
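As a worked example, using the figures from the documentation update
below: a main shared memory area of 6490428kB with the default 2MB huge
page size gives 6490428 / 2048, or approximately 3169.2, so the new
parameter would report 3170 huge pages.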
I have tested the change on Linux, Windows and macOS; the new parameter
is not supported on the latter two and reports -1 there. On Linux, the
number of pages is calculated correctly based on the existing GUC
huge_page_size or the system's default.
Thanks to Andres Freund, Robert Haas, Kyotaro Horiguchi, Tom Lane,
Justin Pryzby (and anybody forgotten here) for the discussion.
Author: Nathan Bossart
Discussion: https://postgr.es/m/F2772387-CE0F-46BF-B5F1-CC55516EB885@amazon.com
</listitem>
</varlistentry>
+ <varlistentry id="guc-shared-memory-size-in-huge-pages" xreflabel="shared_memory_size_in_huge_pages">
+ <term><varname>shared_memory_size_in_huge_pages</varname> (<type>integer</type>)
+ <indexterm>
+ <primary><varname>shared_memory_size_in_huge_pages</varname> configuration parameter</primary>
+ </indexterm>
+ </term>
+ <listitem>
+ <para>
+ Reports the number of huge pages that are needed for the main shared
+ memory area based on the specified <xref linkend="guc-huge-page-size"/>.
+ If huge pages are not supported, this will be <literal>-1</literal>.
+ </para>
+ <para>
+ This setting is supported only on <productname>Linux</productname>. It
+ is always set to <literal>-1</literal> on other platforms. For more
+ details about using huge pages on <productname>Linux</productname>, see
+ <xref linkend="linux-huge-pages"/>.
+ </para>
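+ <para>
+ For example, assuming the server is shut down, the number of required
+ huge pages can be shown with a <command>postgres</command> command like:
+<programlisting>
+$ <userinput>postgres -D $PGDATA -C shared_memory_size_in_huge_pages</userinput>
+</programlisting>
+ </para>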
+ </listitem>
+ </varlistentry>
+
<varlistentry id="guc-ssl-library" xreflabel="ssl_library">
<term><varname>ssl_library</varname> (<type>string</type>)
<indexterm>
<para>
This can be used on a running server for most parameters. However,
the server must be shut down for some runtime-computed parameters
- (e.g., <xref linkend="guc-shared-memory-size"/> and
+ (e.g., <xref linkend="guc-shared-memory-size"/>,
+ <xref linkend="guc-shared-memory-size-in-huge-pages"/>, and
<xref linkend="guc-wal-segment-size"/>).
</para>
with <varname>CONFIG_HUGETLBFS=y</varname> and
<varname>CONFIG_HUGETLB_PAGE=y</varname>. You will also have to configure
the operating system to provide enough huge pages of the desired size.
- To estimate the number of huge pages needed, start
- <productname>PostgreSQL</productname> without huge pages enabled and check
- the postmaster's anonymous shared memory segment size, as well as the
- system's default and supported huge page sizes, using the
- <filename>/proc</filename> and <filename>/sys</filename> file systems.
+ To determine the number of huge pages needed, use the
+ <command>postgres</command> command to see the value of
+ <xref linkend="guc-shared-memory-size-in-huge-pages"/>. Note that the
+ server must be shut down to view this runtime-computed parameter.
This might look like:
<programlisting>
-$ <userinput>head -1 $PGDATA/postmaster.pid</userinput>
-4170
-$ <userinput>pmap 4170 | awk '/rw-s/ && /zero/ {print $2}'</userinput>
-6490428K
+$ <userinput>postgres -D $PGDATA -C shared_memory_size_in_huge_pages</userinput>
+3170
$ <userinput>grep ^Hugepagesize /proc/meminfo</userinput>
Hugepagesize: 2048 kB
$ <userinput>ls /sys/kernel/mm/hugepages</userinput>
</programlisting>
In this example the default is 2MB, but you can also explicitly request
- either 2MB or 1GB with <xref linkend="guc-huge-page-size"/>.
+ either 2MB or 1GB with <xref linkend="guc-huge-page-size"/> to adapt
+ the number of pages calculated by
+ <varname>shared_memory_size_in_huge_pages</varname>.
- Assuming <literal>2MB</literal> huge pages,
- <literal>6490428</literal> / <literal>2048</literal> gives approximately
- <literal>3169.154</literal>, so in this example we need at
- least <literal>3170</literal> huge pages. A larger setting would be
- appropriate if other programs on the machine also need huge pages.
+ While we need at least <literal>3170</literal> huge pages in this example,
+ a larger setting would be appropriate if other programs on the machine
+ also need huge pages.
We can set this with:
<programlisting>
# <userinput>sysctl -w vm.nr_hugepages=3170</userinput>
return shmStat.shm_nattch == 0 ? SHMSTATE_UNATTACHED : SHMSTATE_ATTACHED;
}
-#ifdef MAP_HUGETLB
-
/*
* Identify the huge page size to use, and compute the related mmap flags.
*
* hugepage sizes, we might want to think about more invasive strategies,
* such as increasing shared_buffers to absorb the extra space.
*
- * Returns the (real, assumed or config provided) page size into *hugepagesize,
- * and the hugepage-related mmap flags to use into *mmap_flags.
+ * Returns the (real, assumed or config provided) page size into
+ * *hugepagesize, and the hugepage-related mmap flags to use into
+ * *mmap_flags if requested by the caller. If huge pages are not supported,
+ * *hugepagesize and *mmap_flags are set to 0.
*/
-static void
+void
GetHugePageSize(Size *hugepagesize, int *mmap_flags)
{
+#ifdef MAP_HUGETLB
+
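+ /* compute results into locals first; either output pointer may be NULL */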
Size default_hugepagesize = 0;
+ Size hugepagesize_local = 0;
+ int mmap_flags_local = 0;
/*
* System-dependent code to find out the default huge page size.
if (huge_page_size != 0)
{
/* If huge page size is requested explicitly, use that. */
- *hugepagesize = (Size) huge_page_size * 1024;
+ hugepagesize_local = (Size) huge_page_size * 1024;
}
else if (default_hugepagesize != 0)
{
/* Otherwise use the system default, if we have it. */
- *hugepagesize = default_hugepagesize;
+ hugepagesize_local = default_hugepagesize;
}
else
{
* writing, there are no reports of any non-Linux systems being picky
* about that.
*/
- *hugepagesize = 2 * 1024 * 1024;
+ hugepagesize_local = 2 * 1024 * 1024;
}
- *mmap_flags = MAP_HUGETLB;
+ mmap_flags_local = MAP_HUGETLB;
/*
* On recent enough Linux, also include the explicit page size, if
* necessary.
*/
#if defined(MAP_HUGE_MASK) && defined(MAP_HUGE_SHIFT)
- if (*hugepagesize != default_hugepagesize)
+ if (hugepagesize_local != default_hugepagesize)
{
- int shift = pg_ceil_log2_64(*hugepagesize);
+ int shift = pg_ceil_log2_64(hugepagesize_local);
- *mmap_flags |= (shift & MAP_HUGE_MASK) << MAP_HUGE_SHIFT;
+ mmap_flags_local |= (shift & MAP_HUGE_MASK) << MAP_HUGE_SHIFT;
}
#endif
-}
+
+ /* assign the results found */
+ if (mmap_flags)
+ *mmap_flags = mmap_flags_local;
+ if (hugepagesize)
+ *hugepagesize = hugepagesize_local;
+
+#else
+
+ if (hugepagesize)
+ *hugepagesize = 0;
+ if (mmap_flags)
+ *mmap_flags = 0;
#endif /* MAP_HUGETLB */
+}
/*
* Creates an anonymous mmap()ed shared memory segment.
return true;
}
+
+/*
+ * This function is provided for consistency with sysv_shmem.c and does not
+ * provide any useful information for Windows. To obtain the large page size,
+ * use GetLargePageMinimum() instead.
+ */
+void
+GetHugePageSize(Size *hugepagesize, int *mmap_flags)
+{
+ if (hugepagesize)
+ *hugepagesize = 0;
+ if (mmap_flags)
+ *mmap_flags = 0;
+}
char buf[64];
Size size_b;
Size size_mb;
+ Size hp_size;
/*
* Calculate the shared memory size and round up to the nearest megabyte.
size_mb = add_size(size_b, (1024 * 1024) - 1) / (1024 * 1024);
sprintf(buf, "%zu", size_mb);
SetConfigOption("shared_memory_size", buf, PGC_INTERNAL, PGC_S_OVERRIDE);
+
+ /*
+ * Calculate the number of huge pages required.
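+ * One page is added to the result of the integer division to make sure
+ * the whole area fits.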
+ */
+ GetHugePageSize(&hp_size, NULL);
+ if (hp_size != 0)
+ {
+ Size hp_required;
+
+ hp_required = add_size(size_b / hp_size, 1);
+ sprintf(buf, "%zu", hp_required);
+ SetConfigOption("shared_memory_size_in_huge_pages", buf, PGC_INTERNAL, PGC_S_OVERRIDE);
+ }
}
static int block_size;
static int segment_size;
static int shared_memory_size_mb;
+static int shared_memory_size_in_huge_pages;
static int wal_block_size;
static bool data_checksums;
static bool integer_datetimes;
NULL, NULL, NULL
},
+ {
+ {"shared_memory_size_in_huge_pages", PGC_INTERNAL, PRESET_OPTIONS,
+ gettext_noop("Shows the number of huge pages needed for the main shared memory area."),
+ gettext_noop("-1 indicates that the value could not be determined."),
+ GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE | GUC_RUNTIME_COMPUTED
+ },
+ &shared_memory_size_in_huge_pages,
+ -1, -1, INT_MAX,
+ NULL, NULL, NULL
+ },
+
{
{"temp_buffers", PGC_USERSET, RESOURCES_MEM,
gettext_noop("Sets the maximum number of temporary buffers used by each session."),
PGShmemHeader **shim);
extern bool PGSharedMemoryIsInUse(unsigned long id1, unsigned long id2);
extern void PGSharedMemoryDetach(void);
+extern void GetHugePageSize(Size *hugepagesize, int *mmap_flags);
#endif /* PG_SHMEM_H */