read_stream: Fix overflow hazard with large shared buffers
author	Andres Freund <andres@anarazel.de>
	Mon, 7 Apr 2025 12:47:30 +0000 (08:47 -0400)
committer	Andres Freund <andres@anarazel.de>
	Mon, 7 Apr 2025 13:45:00 +0000 (09:45 -0400)
If the limit returned by GetAdditionalPinLimit() is large, the buffer_limit
variable in read_stream_start_pending_read() can overflow. While the code is
careful to limit buffer_limit to PG_INT16_MAX, we subsequently add the number
of forwarded buffers, which can push the value past what fits in an int16.

The overflow can lead to assertion failures, crashes or wrong query results
when using large shared buffers.

It seems easier to avoid this if we make the buffer_limit variable an int,
instead of an int16.  Do so, and clamp buffer_limit after adding the number of
forwarded buffers.

It's possible we might want to address this and related issues by switching
from int16 to int more widely, but since the consequences of this bug can be
confusing, it seems better to fix it now.

This bug was introduced in ed0b87caaca.

Discussion: https://postgr.es/m/ewvz3cbtlhrwqk7h6ca6cctiqh7r64ol3pzb3iyjycn2r5nxk5@tnhw3a5zatlr

src/backend/storage/aio/read_stream.c

index 36c54fb695b0533ec6608819cd4ff0debc95cc68..0e7f5557f5cb93b5c5ac2411167d17f3ba4decc4 100644 (file)
@@ -237,7 +237,7 @@ read_stream_start_pending_read(ReadStream *stream)
    int16       io_index;
    int16       overflow;
    int16       buffer_index;
-   int16       buffer_limit;
+   int         buffer_limit;
 
    /* This should only be called with a pending read. */
    Assert(stream->pending_read_nblocks > 0);
@@ -294,7 +294,10 @@ read_stream_start_pending_read(ReadStream *stream)
    else
        buffer_limit = Min(GetAdditionalPinLimit(), PG_INT16_MAX);
    Assert(stream->forwarded_buffers <= stream->pending_read_nblocks);
+
    buffer_limit += stream->forwarded_buffers;
+   buffer_limit = Min(buffer_limit, PG_INT16_MAX);
+
    if (buffer_limit == 0 && stream->pinned_buffers == 0)
        buffer_limit = 1;       /* guarantee progress */