LCOV - code coverage report
Current view: top level - src/backend/access/heap - visibilitymap.c (source / functions)
Test: PostgreSQL 19devel
Date: 2025-09-16 20:18:04
Coverage: Lines: 124 of 132 hit (93.9 %), Functions: 9 of 9 hit (100.0 %)

          Line data    Source code
       1             : /*-------------------------------------------------------------------------
       2             :  *
       3             :  * visibilitymap.c
       4             :  *    bitmap for tracking visibility of heap tuples
       5             :  *
       6             :  * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
       7             :  * Portions Copyright (c) 1994, Regents of the University of California
       8             :  *
       9             :  *
      10             :  * IDENTIFICATION
      11             :  *    src/backend/access/heap/visibilitymap.c
      12             :  *
      13             :  * INTERFACE ROUTINES
      14             :  *      visibilitymap_clear  - clear bits for one page in the visibility map
      15             :  *      visibilitymap_pin    - pin a map page for setting a bit
      16             :  *      visibilitymap_pin_ok - check whether correct map page is already pinned
      17             :  *      visibilitymap_set    - set a bit in a previously pinned page
      18             :  *      visibilitymap_get_status - get status of bits
      19             :  *      visibilitymap_count  - count number of bits set in visibility map
      20             :  *      visibilitymap_prepare_truncate -
      21             :  *          prepare for truncation of the visibility map
      22             :  *
      23             :  * NOTES
      24             :  *
      25             :  * The visibility map is a bitmap with two bits (all-visible and all-frozen)
      26             :  * per heap page. A set all-visible bit means that all tuples on the page are
      27             :  * known visible to all transactions, and therefore the page doesn't need to
      28             :  * be vacuumed. A set all-frozen bit means that all tuples on the page are
      29             :  * completely frozen, and therefore the page doesn't need to be vacuumed even
      30             :  * if a whole-table-scanning vacuum is required (e.g. an anti-wraparound vacuum).
      31             :  * The all-frozen bit must be set only when the page is already all-visible.
      32             :  *
      33             :  * The map is conservative in the sense that we make sure that whenever a bit
      34             :  * is set, we know the condition is true, but if a bit is not set, it might or
      35             :  * might not be true.
      36             :  *
      37             :  * Clearing visibility map bits is not separately WAL-logged.  The callers
      38             :  * must make sure that whenever a bit is cleared, the bit is cleared on WAL
      39             :  * replay of the updating operation as well.
      40             :  *
      41             :  * When we *set* a visibility map bit during VACUUM, we must write WAL.  This may
      42             :  * seem counterintuitive, since the bit is basically a hint: if it is clear,
      43             :  * it may still be the case that every tuple on the page is visible to all
      44             :  * transactions; we just don't know that for certain.  The difficulty is that
      45             :  * there are two bits which are typically set together: the PD_ALL_VISIBLE bit
      46             :  * on the page itself, and the visibility map bit.  If a crash occurs after the
      47             :  * visibility map page makes it to disk and before the updated heap page makes
      48             :  * it to disk, redo must set the bit on the heap page.  Otherwise, the next
      49             :  * insert, update, or delete on the heap page will fail to realize that the
      50             :  * visibility map bit must be cleared, possibly causing index-only scans to
      51             :  * return wrong answers.
      52             :  *
      53             :  * VACUUM will normally skip pages for which the visibility map bit is set;
      54             :  * such pages can't contain any dead tuples and therefore don't need vacuuming.
      55             :  *
      56             :  * LOCKING
      57             :  *
      58             :  * In heapam.c, whenever a page is modified so that not all tuples on the
      59             :  * page are visible to everyone anymore, the corresponding bit in the
      60             :  * visibility map is cleared. In order to be crash-safe, we need to do this
      61             :  * while still holding a lock on the heap page and in the same critical
      62             :  * section that logs the page modification. However, we don't want to hold
      63             :  * the buffer lock over any I/O that may be required to read in the visibility
      64             :  * map page.  To avoid this, we examine the heap page before locking it;
      65             :  * if the page-level PD_ALL_VISIBLE bit is set, we pin the visibility map
      66             :  * page.  Then, we lock the buffer.  But this creates a race condition: there
      67             :  * is a possibility that in the time it takes to lock the buffer, the
      68             :  * PD_ALL_VISIBLE bit gets set.  If that happens, we have to unlock the
      69             :  * buffer, pin the visibility map page, and relock the buffer.  This shouldn't
      70             :  * happen often, because only VACUUM currently sets visibility map bits,
      71             :  * and the race will only occur if VACUUM processes a given page at almost
      72             :  * exactly the same time that someone tries to further modify it.
      73             :  *
      74             :  * To set a bit, you need to hold a lock on the heap page. That prevents
      75             :  * the race condition where VACUUM sees that all tuples on the page are
      76             :  * visible to everyone, but another backend modifies the page before VACUUM
      77             :  * sets the bit in the visibility map.
      78             :  *
      79             :  * When a bit is set, the LSN of the visibility map page is updated to make
      80             :  * sure that the visibility map update doesn't get written to disk before the
      81             :  * WAL record of the changes that made it possible to set the bit is flushed.
      82             :  * But when a bit is cleared, we don't have to do that because it's always
      83             :  * safe to clear a bit in the map from a correctness point of view.
      84             :  *
      85             :  *-------------------------------------------------------------------------
      86             :  */
      87             : #include "postgres.h"
      88             : 
      89             : #include "access/heapam_xlog.h"
      90             : #include "access/visibilitymap.h"
      91             : #include "access/xloginsert.h"
      92             : #include "access/xlogutils.h"
      93             : #include "miscadmin.h"
      94             : #include "port/pg_bitutils.h"
      95             : #include "storage/bufmgr.h"
      96             : #include "storage/smgr.h"
      97             : #include "utils/inval.h"
      98             : #include "utils/rel.h"
      99             : 
     100             : 
      101             : /* #define TRACE_VISIBILITYMAP */
     102             : 
     103             : /*
      104             :  * Size of the bitmap on each visibility map page, in bytes. There are no
     105             :  * extra headers, so the whole page minus the standard page header is
     106             :  * used for the bitmap.
     107             :  */
     108             : #define MAPSIZE (BLCKSZ - MAXALIGN(SizeOfPageHeaderData))
     109             : 
     110             : /* Number of heap blocks we can represent in one byte */
     111             : #define HEAPBLOCKS_PER_BYTE (BITS_PER_BYTE / BITS_PER_HEAPBLOCK)
     112             : 
     113             : /* Number of heap blocks we can represent in one visibility map page. */
     114             : #define HEAPBLOCKS_PER_PAGE (MAPSIZE * HEAPBLOCKS_PER_BYTE)
     115             : 
     116             : /* Mapping from heap block number to the right bit in the visibility map */
     117             : #define HEAPBLK_TO_MAPBLOCK(x) ((x) / HEAPBLOCKS_PER_PAGE)
     118             : #define HEAPBLK_TO_MAPBYTE(x) (((x) % HEAPBLOCKS_PER_PAGE) / HEAPBLOCKS_PER_BYTE)
     119             : #define HEAPBLK_TO_OFFSET(x) (((x) % HEAPBLOCKS_PER_BYTE) * BITS_PER_HEAPBLOCK)
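
/*
 * A worked example (assuming the default 8 kB BLCKSZ and a MAXALIGN'd
 * 24-byte page header, so MAPSIZE = 8168): each map byte covers 4 heap
 * blocks and each map page covers 8168 * 4 = 32672 heap blocks.  For
 * heapBlk = 100000:
 *
 *   HEAPBLK_TO_MAPBLOCK(100000) = 100000 / 32672       = 3
 *   HEAPBLK_TO_MAPBYTE(100000)  = (100000 % 32672) / 4 = 496
 *   HEAPBLK_TO_OFFSET(100000)   = (100000 % 4) * 2     = 0
 *
 * i.e. the two bits for heap block 100000 are the low-order bit pair of
 * byte 496 on the fourth map page.
 */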
     120             : 
     121             : /* Masks for counting subsets of bits in the visibility map. */
     122             : #define VISIBLE_MASK8   (0x55)  /* The lower bit of each bit pair */
     123             : #define FROZEN_MASK8    (0xaa)  /* The upper bit of each bit pair */
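
/*
 * For example, a map byte of 0x07 (binary 00000111) marks its first heap
 * block all-visible and all-frozen, and its second all-visible only.
 * Counting with pg_popcount_masked(), as visibilitymap_count() does
 * below, gives popcount(0x07 & VISIBLE_MASK8) = popcount(0x05) = 2
 * all-visible blocks and popcount(0x07 & FROZEN_MASK8) = popcount(0x02)
 * = 1 all-frozen block.
 */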
     124             : 
     125             : /* prototypes for internal routines */
     126             : static Buffer vm_readbuf(Relation rel, BlockNumber blkno, bool extend);
     127             : static Buffer vm_extend(Relation rel, BlockNumber vm_nblocks);
     128             : 
     129             : 
     130             : /*
     131             :  *  visibilitymap_clear - clear specified bits for one page in visibility map
     132             :  *
     133             :  * You must pass a buffer containing the correct map page to this function.
     134             :  * Call visibilitymap_pin first to pin the right one. This function doesn't do
     135             :  * any I/O.  Returns true if any bits have been cleared and false otherwise.
     136             :  */
     137             : bool
     138       36628 : visibilitymap_clear(Relation rel, BlockNumber heapBlk, Buffer vmbuf, uint8 flags)
     139             : {
     140       36628 :     BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk);
     141       36628 :     int         mapByte = HEAPBLK_TO_MAPBYTE(heapBlk);
     142       36628 :     int         mapOffset = HEAPBLK_TO_OFFSET(heapBlk);
     143       36628 :     uint8       mask = flags << mapOffset;
     144             :     char       *map;
     145       36628 :     bool        cleared = false;
     146             : 
     147             :     /* Must never clear all_visible bit while leaving all_frozen bit set */
     148             :     Assert(flags & VISIBILITYMAP_VALID_BITS);
     149             :     Assert(flags != VISIBILITYMAP_ALL_VISIBLE);
     150             : 
     151             : #ifdef TRACE_VISIBILITYMAP
     152             :     elog(DEBUG1, "vm_clear %s %d", RelationGetRelationName(rel), heapBlk);
     153             : #endif
     154             : 
     155       36628 :     if (!BufferIsValid(vmbuf) || BufferGetBlockNumber(vmbuf) != mapBlock)
     156           0 :         elog(ERROR, "wrong buffer passed to visibilitymap_clear");
     157             : 
     158       36628 :     LockBuffer(vmbuf, BUFFER_LOCK_EXCLUSIVE);
     159       36628 :     map = PageGetContents(BufferGetPage(vmbuf));
     160             : 
     161       36628 :     if (map[mapByte] & mask)
     162             :     {
     163       32518 :         map[mapByte] &= ~mask;
     164             : 
     165       32518 :         MarkBufferDirty(vmbuf);
     166       32518 :         cleared = true;
     167             :     }
     168             : 
     169       36628 :     LockBuffer(vmbuf, BUFFER_LOCK_UNLOCK);
     170             : 
     171       36628 :     return cleared;
     172             : }
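
/*
 * A minimal sketch (hypothetical helper, not part of this file) of the
 * pin-before-lock protocol described in the LOCKING notes above, roughly
 * as a heapam.c-style caller might obtain the map page pin it needs
 * before later clearing bits with visibilitymap_clear():
 */
static void
example_pin_vm_and_lock(Relation rel, BlockNumber heapBlk,
                        Buffer heapBuf, Buffer *vmbuf)
{
    /* Examine the heap page before locking it; pin the map page if needed. */
    if (PageIsAllVisible(BufferGetPage(heapBuf)))
        visibilitymap_pin(rel, heapBlk, vmbuf);

    LockBuffer(heapBuf, BUFFER_LOCK_EXCLUSIVE);

    /*
     * The race: PD_ALL_VISIBLE may have become set while we were waiting
     * for the lock.  If we lack the right map page pin, unlock, pin, and
     * relock before modifying the page.
     */
    while (PageIsAllVisible(BufferGetPage(heapBuf)) &&
           !visibilitymap_pin_ok(heapBlk, *vmbuf))
    {
        LockBuffer(heapBuf, BUFFER_LOCK_UNLOCK);
        visibilitymap_pin(rel, heapBlk, vmbuf);
        LockBuffer(heapBuf, BUFFER_LOCK_EXCLUSIVE);
    }
}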
     173             : 
     174             : /*
     175             :  *  visibilitymap_pin - pin a map page for setting a bit
     176             :  *
     177             :  * Setting a bit in the visibility map is a two-phase operation. First, call
     178             :  * visibilitymap_pin, to pin the visibility map page containing the bit for
     179             :  * the heap page. Because that can require I/O to read the map page, you
     180             :  * shouldn't hold a lock on the heap page while doing that. Then, call
     181             :  * visibilitymap_set to actually set the bit.
     182             :  *
     183             :  * On entry, *vmbuf should be InvalidBuffer or a valid buffer returned by
     184             :  * an earlier call to visibilitymap_pin or visibilitymap_get_status on the same
     185             :  * relation. On return, *vmbuf is a valid buffer with the map page containing
     186             :  * the bit for heapBlk.
     187             :  *
     188             :  * If the page doesn't exist in the map file yet, it is extended.
     189             :  */
     190             : void
     191     1392524 : visibilitymap_pin(Relation rel, BlockNumber heapBlk, Buffer *vmbuf)
     192             : {
     193     1392524 :     BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk);
     194             : 
     195             :     /* Reuse the old pinned buffer if possible */
     196     1392524 :     if (BufferIsValid(*vmbuf))
     197             :     {
     198     1263376 :         if (BufferGetBlockNumber(*vmbuf) == mapBlock)
     199     1263376 :             return;
     200             : 
     201           0 :         ReleaseBuffer(*vmbuf);
     202             :     }
     203      129148 :     *vmbuf = vm_readbuf(rel, mapBlock, true);
     204             : }
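
/*
 * A minimal sketch (hypothetical helper, not part of this file) of the
 * two-phase operation described above, roughly as a VACUUM-like caller
 * might use it; it assumes the caller has already established that every
 * tuple on the page is visible to all transactions:
 */
static void
example_mark_all_visible(Relation rel, Buffer heapBuf, BlockNumber heapBlk,
                         TransactionId cutoff_xid)
{
    Buffer      vmbuf = InvalidBuffer;

    /* Phase 1: pin the map page; this may do I/O, so no heap lock held yet. */
    visibilitymap_pin(rel, heapBlk, &vmbuf);

    /* Phase 2: under the heap page lock, set PD_ALL_VISIBLE, then the map bit. */
    LockBuffer(heapBuf, BUFFER_LOCK_EXCLUSIVE);
    PageSetAllVisible(BufferGetPage(heapBuf));
    MarkBufferDirty(heapBuf);
    visibilitymap_set(rel, heapBlk, heapBuf, InvalidXLogRecPtr,
                      vmbuf, cutoff_xid, VISIBILITYMAP_ALL_VISIBLE);
    LockBuffer(heapBuf, BUFFER_LOCK_UNLOCK);

    ReleaseBuffer(vmbuf);
}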
     205             : 
     206             : /*
     207             :  *  visibilitymap_pin_ok - do we already have the correct page pinned?
     208             :  *
     209             :  * On entry, vmbuf should be InvalidBuffer or a valid buffer returned by
     210             :  * an earlier call to visibilitymap_pin or visibilitymap_get_status on the same
     211             :  * relation.  The return value indicates whether the buffer covers the
     212             :  * given heapBlk.
     213             :  */
     214             : bool
     215       28800 : visibilitymap_pin_ok(BlockNumber heapBlk, Buffer vmbuf)
     216             : {
     217       28800 :     BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk);
     218             : 
     219       28800 :     return BufferIsValid(vmbuf) && BufferGetBlockNumber(vmbuf) == mapBlock;
     220             : }
     221             : 
     222             : /*
     223             :  *  visibilitymap_set - set bit(s) on a previously pinned page
     224             :  *
     225             :  * recptr is the LSN of the XLOG record we're replaying, if we're in recovery,
     226             :  * or InvalidXLogRecPtr in normal running.  The VM page LSN is advanced to the
     227             :  * one provided; in normal running, we generate a new XLOG record and set the
     228             :  * page LSN to that value (though the heap page's LSN may *not* be updated;
     229             :  * see below).  cutoff_xid is the largest xmin on the page being marked
     230             :  * all-visible; it is needed for Hot Standby, and can be InvalidTransactionId
     231             :  * if the page contains no tuples.  It can also be set to InvalidTransactionId
     232             :  * when a page that is already all-visible is being marked all-frozen.
     233             :  *
     234             :  * Caller is expected to set the heap page's PD_ALL_VISIBLE bit before calling
     235             :  * this function. Except in recovery, caller should also pass the heap
     236             :  * buffer. When checksums are enabled and we're not in recovery, we must add
     237             :  * the heap buffer to the WAL chain to protect it from being torn.
     238             :  *
     239             :  * You must pass a buffer containing the correct map page to this function.
     240             :  * Call visibilitymap_pin first to pin the right one. This function doesn't do
     241             :  * any I/O.
     242             :  *
     243             :  * Returns the state of the page's VM bits before setting flags.
     244             :  */
     245             : uint8
     246      113760 : visibilitymap_set(Relation rel, BlockNumber heapBlk, Buffer heapBuf,
     247             :                   XLogRecPtr recptr, Buffer vmBuf, TransactionId cutoff_xid,
     248             :                   uint8 flags)
     249             : {
     250      113760 :     BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk);
     251      113760 :     uint32      mapByte = HEAPBLK_TO_MAPBYTE(heapBlk);
     252      113760 :     uint8       mapOffset = HEAPBLK_TO_OFFSET(heapBlk);
     253             :     Page        page;
     254             :     uint8      *map;
     255             :     uint8       status;
     256             : 
     257             : #ifdef TRACE_VISIBILITYMAP
     258             :     elog(DEBUG1, "vm_set flags 0x%02X for %s %d",
     259             :          flags, RelationGetRelationName(rel), heapBlk);
     260             : #endif
     261             : 
     262             :     Assert(InRecovery || XLogRecPtrIsInvalid(recptr));
     263             :     Assert(InRecovery || PageIsAllVisible(BufferGetPage(heapBuf)));
     264             :     Assert((flags & VISIBILITYMAP_VALID_BITS) == flags);
     265             : 
     266             :     /* Must never set all_frozen bit without also setting all_visible bit */
     267             :     Assert(flags != VISIBILITYMAP_ALL_FROZEN);
     268             : 
     269             :     /* Check that we have the right heap page pinned, if present */
     270      113760 :     if (BufferIsValid(heapBuf) && BufferGetBlockNumber(heapBuf) != heapBlk)
     271           0 :         elog(ERROR, "wrong heap buffer passed to visibilitymap_set");
     272             : 
     273             :     Assert(!BufferIsValid(heapBuf) || BufferIsExclusiveLocked(heapBuf));
     274             : 
     275             :     /* Check that we have the right VM page pinned */
     276      113760 :     if (!BufferIsValid(vmBuf) || BufferGetBlockNumber(vmBuf) != mapBlock)
     277           0 :         elog(ERROR, "wrong VM buffer passed to visibilitymap_set");
     278             : 
     279      113760 :     page = BufferGetPage(vmBuf);
     280      113760 :     map = (uint8 *) PageGetContents(page);
     281      113760 :     LockBuffer(vmBuf, BUFFER_LOCK_EXCLUSIVE);
     282             : 
     283      113760 :     status = (map[mapByte] >> mapOffset) & VISIBILITYMAP_VALID_BITS;
     284      113760 :     if (flags != status)
     285             :     {
     286      113760 :         START_CRIT_SECTION();
     287             : 
     288      113760 :         map[mapByte] |= (flags << mapOffset);
     289      113760 :         MarkBufferDirty(vmBuf);
     290             : 
     291      113760 :         if (RelationNeedsWAL(rel))
     292             :         {
     293      105648 :             if (XLogRecPtrIsInvalid(recptr))
     294             :             {
     295             :                 Assert(!InRecovery);
     296       89990 :                 recptr = log_heap_visible(rel, heapBuf, vmBuf, cutoff_xid, flags);
     297             : 
     298             :                 /*
     299             :                  * If data checksums are enabled (or wal_log_hints=on), we
     300             :                  * need to protect the heap page from being torn.
     301             :                  *
     302             :                  * If not, then we must *not* update the heap page's LSN. In
     303             :                  * this case, the FPI for the heap page was omitted from the
     304             :                  * WAL record inserted above, so it would be incorrect to
     305             :                  * update the heap page's LSN.
     306             :                  */
     307       89990 :                 if (XLogHintBitIsNeeded())
     308             :                 {
     309       83204 :                     Page        heapPage = BufferGetPage(heapBuf);
     310             : 
     311       83204 :                     PageSetLSN(heapPage, recptr);
     312             :                 }
     313             :             }
     314      105648 :             PageSetLSN(page, recptr);
     315             :         }
     316             : 
     317      113760 :         END_CRIT_SECTION();
     318             :     }
     319             : 
     320      113760 :     LockBuffer(vmBuf, BUFFER_LOCK_UNLOCK);
     321      113760 :     return status;
     322             : }
     323             : 
     324             : /*
     325             :  *  visibilitymap_get_status - get status of bits
     326             :  *
      327             :  * Are all tuples on heapBlk visible to all transactions, or all frozen, according
     328             :  * to the visibility map?
     329             :  *
     330             :  * On entry, *vmbuf should be InvalidBuffer or a valid buffer returned by an
     331             :  * earlier call to visibilitymap_pin or visibilitymap_get_status on the same
     332             :  * relation. On return, *vmbuf is a valid buffer with the map page containing
     333             :  * the bit for heapBlk, or InvalidBuffer. The caller is responsible for
     334             :  * releasing *vmbuf after it's done testing and setting bits.
     335             :  *
     336             :  * NOTE: This function is typically called without a lock on the heap page,
     337             :  * so somebody else could change the bit just after we look at it.  In fact,
     338             :  * since we don't lock the visibility map page either, it's even possible that
     339             :  * someone else could have changed the bit just before we look at it, but yet
     340             :  * we might see the old value.  It is the caller's responsibility to deal with
     341             :  * all concurrency issues!
     342             :  */
     343             : uint8
     344     7881982 : visibilitymap_get_status(Relation rel, BlockNumber heapBlk, Buffer *vmbuf)
     345             : {
     346     7881982 :     BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk);
     347     7881982 :     uint32      mapByte = HEAPBLK_TO_MAPBYTE(heapBlk);
     348     7881982 :     uint8       mapOffset = HEAPBLK_TO_OFFSET(heapBlk);
     349             :     char       *map;
     350             :     uint8       result;
     351             : 
     352             : #ifdef TRACE_VISIBILITYMAP
     353             :     elog(DEBUG1, "vm_get_status %s %d", RelationGetRelationName(rel), heapBlk);
     354             : #endif
     355             : 
     356             :     /* Reuse the old pinned buffer if possible */
     357     7881982 :     if (BufferIsValid(*vmbuf))
     358             :     {
     359     6141422 :         if (BufferGetBlockNumber(*vmbuf) != mapBlock)
     360             :         {
     361           0 :             ReleaseBuffer(*vmbuf);
     362           0 :             *vmbuf = InvalidBuffer;
     363             :         }
     364             :     }
     365             : 
     366     7881982 :     if (!BufferIsValid(*vmbuf))
     367             :     {
     368     1740560 :         *vmbuf = vm_readbuf(rel, mapBlock, false);
     369     1740560 :         if (!BufferIsValid(*vmbuf))
     370     1532422 :             return (uint8) 0;
     371             :     }
     372             : 
     373     6349560 :     map = PageGetContents(BufferGetPage(*vmbuf));
     374             : 
     375             :     /*
     376             :      * A single byte read is atomic.  There could be memory-ordering effects
     377             :      * here, but for performance reasons we make it the caller's job to worry
     378             :      * about that.
     379             :      */
     380     6349560 :     result = ((map[mapByte] >> mapOffset) & VISIBILITYMAP_VALID_BITS);
     381     6349560 :     return result;
     382             : }
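
/*
 * A minimal sketch (hypothetical helper) of an index-only-scan-style
 * check built on visibilitymap_get_status(); per the NOTE above the
 * answer is only a hint, and the caller keeps *vmbuf pinned across
 * calls and releases it when done:
 */
static bool
example_page_all_visible(Relation rel, BlockNumber heapBlk, Buffer *vmbuf)
{
    return (visibilitymap_get_status(rel, heapBlk, vmbuf) &
            VISIBILITYMAP_ALL_VISIBLE) != 0;
}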
     383             : 
     384             : /*
     385             :  *  visibilitymap_count  - count number of bits set in visibility map
     386             :  *
     387             :  * Note: we ignore the possibility of race conditions when the table is being
     388             :  * extended concurrently with the call.  New pages added to the table aren't
     389             :  * going to be marked all-visible or all-frozen, so they won't affect the result.
     390             :  */
     391             : void
     392      257426 : visibilitymap_count(Relation rel, BlockNumber *all_visible, BlockNumber *all_frozen)
     393             : {
     394             :     BlockNumber mapBlock;
     395      257426 :     BlockNumber nvisible = 0;
     396      257426 :     BlockNumber nfrozen = 0;
     397             : 
     398             :     /* all_visible must be specified */
     399             :     Assert(all_visible);
     400             : 
     401      257426 :     for (mapBlock = 0;; mapBlock++)
     402       97354 :     {
     403             :         Buffer      mapBuffer;
     404             :         uint64     *map;
     405             : 
     406             :         /*
     407             :          * Read till we fall off the end of the map.  We assume that any extra
     408             :          * bytes in the last page are zeroed, so we don't bother excluding
     409             :          * them from the count.
     410             :          */
     411      354780 :         mapBuffer = vm_readbuf(rel, mapBlock, false);
     412      354780 :         if (!BufferIsValid(mapBuffer))
     413      257426 :             break;
     414             : 
     415             :         /*
     416             :          * We choose not to lock the page, since the result is going to be
     417             :          * immediately stale anyway if anyone is concurrently setting or
     418             :          * clearing bits, and we only really need an approximate value.
     419             :          */
     420       97354 :         map = (uint64 *) PageGetContents(BufferGetPage(mapBuffer));
     421             : 
     422       97354 :         nvisible += pg_popcount_masked((const char *) map, MAPSIZE, VISIBLE_MASK8);
     423       97354 :         if (all_frozen)
     424       97354 :             nfrozen += pg_popcount_masked((const char *) map, MAPSIZE, FROZEN_MASK8);
     425             : 
     426       97354 :         ReleaseBuffer(mapBuffer);
     427             :     }
     428             : 
     429      257426 :     *all_visible = nvisible;
     430      257426 :     if (all_frozen)
     431      257426 :         *all_frozen = nfrozen;
     432      257426 : }
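
/*
 * A minimal sketch (hypothetical helper) of what the masked popcounts
 * above compute, one map byte at a time:
 */
static void
example_count_page(const uint8 *map,
                   BlockNumber *nvisible, BlockNumber *nfrozen)
{
    for (int i = 0; i < MAPSIZE; i++)
    {
        /* lower bit of each pair = all-visible, upper bit = all-frozen */
        *nvisible += pg_popcount32(map[i] & VISIBLE_MASK8);
        *nfrozen += pg_popcount32(map[i] & FROZEN_MASK8);
    }
}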
     433             : 
     434             : /*
     435             :  *  visibilitymap_prepare_truncate -
     436             :  *          prepare for truncation of the visibility map
     437             :  *
     438             :  * nheapblocks is the new size of the heap.
     439             :  *
     440             :  * Return the number of blocks of new visibility map.
     441             :  * If it's InvalidBlockNumber, there is nothing to truncate;
     442             :  * otherwise the caller is responsible for calling smgrtruncate()
     443             :  * to truncate the visibility map pages.
     444             :  */
     445             : BlockNumber
     446         330 : visibilitymap_prepare_truncate(Relation rel, BlockNumber nheapblocks)
     447             : {
     448             :     BlockNumber newnblocks;
     449             : 
     450             :     /* last remaining block, byte, and bit */
     451         330 :     BlockNumber truncBlock = HEAPBLK_TO_MAPBLOCK(nheapblocks);
     452         330 :     uint32      truncByte = HEAPBLK_TO_MAPBYTE(nheapblocks);
     453         330 :     uint8       truncOffset = HEAPBLK_TO_OFFSET(nheapblocks);
     454             : 
     455             : #ifdef TRACE_VISIBILITYMAP
     456             :     elog(DEBUG1, "vm_truncate %s %d", RelationGetRelationName(rel), nheapblocks);
     457             : #endif
     458             : 
     459             :     /*
     460             :      * If no visibility map has been created yet for this relation, there's
     461             :      * nothing to truncate.
     462             :      */
     463         330 :     if (!smgrexists(RelationGetSmgr(rel), VISIBILITYMAP_FORKNUM))
     464           0 :         return InvalidBlockNumber;
     465             : 
     466             :     /*
     467             :      * Unless the new size is exactly at a visibility map page boundary, the
     468             :      * tail bits in the last remaining map page, representing truncated heap
     469             :      * blocks, need to be cleared. This is not only tidy, but also necessary
     470             :      * because we don't get a chance to clear the bits if the heap is extended
     471             :      * again.
     472             :      */
     473         330 :     if (truncByte != 0 || truncOffset != 0)
     474         200 :     {
     475             :         Buffer      mapBuffer;
     476             :         Page        page;
     477             :         char       *map;
     478             : 
     479         200 :         newnblocks = truncBlock + 1;
     480             : 
     481         200 :         mapBuffer = vm_readbuf(rel, truncBlock, false);
     482         200 :         if (!BufferIsValid(mapBuffer))
     483             :         {
     484             :             /* nothing to do, the file was already smaller */
     485           0 :             return InvalidBlockNumber;
     486             :         }
     487             : 
     488         200 :         page = BufferGetPage(mapBuffer);
     489         200 :         map = PageGetContents(page);
     490             : 
     491         200 :         LockBuffer(mapBuffer, BUFFER_LOCK_EXCLUSIVE);
     492             : 
     493             :         /* NO EREPORT(ERROR) from here till changes are logged */
     494         200 :         START_CRIT_SECTION();
     495             : 
     496             :         /* Clear out the unwanted bytes. */
     497         200 :         MemSet(&map[truncByte + 1], 0, MAPSIZE - (truncByte + 1));
     498             : 
     499             :         /*----
     500             :          * Mask out the unwanted bits of the last remaining byte.
     501             :          *
     502             :          * ((1 << 0) - 1) = 00000000
     503             :          * ((1 << 1) - 1) = 00000001
     504             :          * ...
     505             :          * ((1 << 6) - 1) = 00111111
     506             :          * ((1 << 7) - 1) = 01111111
     507             :          *----
     508             :          */
     509         200 :         map[truncByte] &= (1 << truncOffset) - 1;
     510             : 
     511             :         /*
      512             :          * Truncation of a relation is WAL-logged at a higher level, and we
      513             :          * will be called at WAL replay. But if checksums are enabled, we
      514             :          * still need to write a WAL record to protect against a torn page, if the
     515             :          * page is flushed to disk before the truncation WAL record. We cannot
     516             :          * use MarkBufferDirtyHint here, because that will not dirty the page
     517             :          * during recovery.
     518             :          */
     519         200 :         MarkBufferDirty(mapBuffer);
     520         200 :         if (!InRecovery && RelationNeedsWAL(rel) && XLogHintBitIsNeeded())
     521         158 :             log_newpage_buffer(mapBuffer, false);
     522             : 
     523         200 :         END_CRIT_SECTION();
     524             : 
     525         200 :         UnlockReleaseBuffer(mapBuffer);
     526             :     }
     527             :     else
     528         130 :         newnblocks = truncBlock;
     529             : 
     530         330 :     if (smgrnblocks(RelationGetSmgr(rel), VISIBILITYMAP_FORKNUM) <= newnblocks)
     531             :     {
     532             :         /* nothing to do, the file was already smaller than requested size */
     533         200 :         return InvalidBlockNumber;
     534             :     }
     535             : 
     536         130 :     return newnblocks;
     537             : }
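
/*
 * A worked example (assuming 8 kB pages, so 32672 heap blocks per map
 * page): truncating the heap to nheapblocks = 100 gives truncBlock = 0,
 * truncByte = 25 and truncOffset = 0.  Bytes 26..MAPSIZE-1 of map page 0
 * are zeroed by the MemSet, byte 25 (covering the truncated heap blocks
 * 100..103) is masked with (1 << 0) - 1 = 0, and the function returns
 * newnblocks = 1, so the caller truncates the map to a single page
 * (unless the map file was already that small).
 */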
     538             : 
     539             : /*
     540             :  * Read a visibility map page.
     541             :  *
      542             :  * If the page doesn't exist, InvalidBuffer is returned, unless 'extend' is
      543             :  * true, in which case the visibility map file is extended first.
     544             :  */
     545             : static Buffer
     546     2224688 : vm_readbuf(Relation rel, BlockNumber blkno, bool extend)
     547             : {
     548             :     Buffer      buf;
     549             :     SMgrRelation reln;
     550             : 
     551             :     /*
     552             :      * Caution: re-using this smgr pointer could fail if the relcache entry
     553             :      * gets closed.  It's safe as long as we only do smgr-level operations
     554             :      * between here and the last use of the pointer.
     555             :      */
     556     2224688 :     reln = RelationGetSmgr(rel);
     557             : 
     558             :     /*
     559             :      * If we haven't cached the size of the visibility map fork yet, check it
     560             :      * first.
     561             :      */
     562     2224688 :     if (reln->smgr_cached_nblocks[VISIBILITYMAP_FORKNUM] == InvalidBlockNumber)
     563             :     {
     564      280612 :         if (smgrexists(reln, VISIBILITYMAP_FORKNUM))
     565      116852 :             smgrnblocks(reln, VISIBILITYMAP_FORKNUM);
     566             :         else
     567      163760 :             reln->smgr_cached_nblocks[VISIBILITYMAP_FORKNUM] = 0;
     568             :     }
     569             : 
     570             :     /*
     571             :      * For reading we use ZERO_ON_ERROR mode, and initialize the page if
     572             :      * necessary. It's always safe to clear bits, so it's better to clear
     573             :      * corrupt pages than error out.
     574             :      *
     575             :      * We use the same path below to initialize pages when extending the
     576             :      * relation, as a concurrent extension can end up with vm_extend()
     577             :      * returning an already-initialized page.
     578             :      */
     579     2224688 :     if (blkno >= reln->smgr_cached_nblocks[VISIBILITYMAP_FORKNUM])
     580             :     {
     581     1795824 :         if (extend)
     582        5976 :             buf = vm_extend(rel, blkno + 1);
     583             :         else
     584     1789848 :             return InvalidBuffer;
     585             :     }
     586             :     else
     587      428864 :         buf = ReadBufferExtended(rel, VISIBILITYMAP_FORKNUM, blkno,
     588             :                                  RBM_ZERO_ON_ERROR, NULL);
     589             : 
     590             :     /*
     591             :      * Initializing the page when needed is trickier than it looks, because of
     592             :      * the possibility of multiple backends doing this concurrently, and our
     593             :      * desire to not uselessly take the buffer lock in the normal path where
     594             :      * the page is OK.  We must take the lock to initialize the page, so
     595             :      * recheck page newness after we have the lock, in case someone else
     596             :      * already did it.  Also, because we initially check PageIsNew with no
     597             :      * lock, it's possible to fall through and return the buffer while someone
     598             :      * else is still initializing the page (i.e., we might see pd_upper as set
     599             :      * but other page header fields are still zeroes).  This is harmless for
     600             :      * callers that will take a buffer lock themselves, but some callers
     601             :      * inspect the page without any lock at all.  The latter is OK only so
     602             :      * long as it doesn't depend on the page header having correct contents.
     603             :      * Current usage is safe because PageGetContents() does not require that.
     604             :      */
     605      434840 :     if (PageIsNew(BufferGetPage(buf)))
     606             :     {
     607        6104 :         LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
     608        6104 :         if (PageIsNew(BufferGetPage(buf)))
     609        6104 :             PageInit(BufferGetPage(buf), BLCKSZ, 0);
     610        6104 :         LockBuffer(buf, BUFFER_LOCK_UNLOCK);
     611             :     }
     612      434840 :     return buf;
     613             : }
     614             : 
     615             : /*
     616             :  * Ensure that the visibility map fork is at least vm_nblocks long, extending
     617             :  * it if necessary with zeroed pages.
     618             :  */
     619             : static Buffer
     620        5976 : vm_extend(Relation rel, BlockNumber vm_nblocks)
     621             : {
     622             :     Buffer      buf;
     623             : 
     624        5976 :     buf = ExtendBufferedRelTo(BMR_REL(rel), VISIBILITYMAP_FORKNUM, NULL,
     625             :                               EB_CREATE_FORK_IF_NEEDED |
     626             :                               EB_CLEAR_SIZE_CACHE,
     627             :                               vm_nblocks,
     628             :                               RBM_ZERO_ON_ERROR);
     629             : 
     630             :     /*
     631             :      * Send a shared-inval message to force other backends to close any smgr
     632             :      * references they may have for this rel, which we are about to change.
     633             :      * This is a useful optimization because it means that backends don't have
     634             :      * to keep checking for creation or extension of the file, which happens
     635             :      * infrequently.
     636             :      */
     637        5976 :     CacheInvalidateSmgr(RelationGetSmgr(rel)->smgr_rlocator);
     638             : 
     639        5976 :     return buf;
     640             : }

Generated by: LCOV version 1.16