LCOV - code coverage report
Current view: top level - src/backend/access/heap - rewriteheap.c (source / functions)
Test: PostgreSQL 13devel          Date: 2019-11-13 23:06:49
                 Hit    Total    Coverage
Lines:           271      328      82.6 %
Functions:        11       12      91.7 %

          Line data    Source code
       1             : /*-------------------------------------------------------------------------
       2             :  *
       3             :  * rewriteheap.c
       4             :  *    Support functions to rewrite tables.
       5             :  *
       6             :  * These functions provide a facility to completely rewrite a heap, while
       7             :  * preserving visibility information and update chains.
       8             :  *
       9             :  * INTERFACE
      10             :  *
      11             :  * The caller is responsible for creating the new heap, all catalog
      12             :  * changes, supplying the tuples to be written to the new heap, and
      13             :  * rebuilding indexes.  The caller must hold AccessExclusiveLock on the
      14             :  * target table, because we assume no one else is writing into it.
      15             :  *
      16             :  * To use the facility:
      17             :  *
      18             :  * begin_heap_rewrite
      19             :  * while (fetch next tuple)
      20             :  * {
      21             :  *     if (tuple is dead)
      22             :  *         rewrite_heap_dead_tuple
      23             :  *     else
      24             :  *     {
      25             :  *         // do any transformations here if required
      26             :  *         rewrite_heap_tuple
      27             :  *     }
      28             :  * }
      29             :  * end_heap_rewrite
      30             :  *
      31             :  * The contents of the new relation shouldn't be relied on until after
      32             :  * end_heap_rewrite is called.
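             :  *
             :  * A minimal caller sketch (hypothetical: fetch_next_tuple and
             :  * tuple_is_dead stand in for the caller's scan and visibility logic,
             :  * e.g. a check based on HeapTupleSatisfiesVacuum).  Since
             :  * rewrite_heap_tuple scribbles on its new_tuple argument, the sketch
             :  * passes a temporary copy:
             :  *
             :  *     RewriteState rwstate;
             :  *     HeapTuple    tuple;
             :  *
             :  *     rwstate = begin_heap_rewrite(old_heap, new_heap, oldest_xmin,
             :  *                                  freeze_xid, cutoff_multi, use_wal);
             :  *     while ((tuple = fetch_next_tuple()) != NULL)
             :  *     {
             :  *         if (tuple_is_dead(tuple))
             :  *             rewrite_heap_dead_tuple(rwstate, tuple);
             :  *         else
             :  *         {
             :  *             HeapTuple   copy = heap_copytuple(tuple);
             :  *
             :  *             rewrite_heap_tuple(rwstate, tuple, copy);
             :  *             heap_freetuple(copy);
             :  *         }
             :  *     }
             :  *     end_heap_rewrite(rwstate);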
      33             :  *
      34             :  *
      35             :  * IMPLEMENTATION
      36             :  *
      37             :  * This would be a fairly trivial affair, except that we need to maintain
      38             :  * the ctid chains that link versions of an updated tuple together.
      39             :  * Since the newly stored tuples will have tids different from the original
      40             :  * ones, if we just copied t_ctid fields to the new table the links would
      41             :  * be wrong.  When we are required to copy a (presumably recently-dead or
      42             :  * delete-in-progress) tuple whose ctid doesn't point to itself, we have
      43             :  * to substitute the correct ctid instead.
      44             :  *
      45             :  * For each ctid reference from A -> B, we might encounter either A first
      46             :  * or B first.  (Note that a tuple in the middle of a chain is both A and B
      47             :  * of different pairs.)
      48             :  *
      49             :  * If we encounter A first, we'll store the tuple in the unresolved_tups
      50             :  * hash table. When we later encounter B, we remove A from the hash table,
      51             :  * fix the ctid to point to the new location of B, and insert both A and B
      52             :  * to the new heap.
      53             :  *
      54             :  * If we encounter B first, we can insert B to the new heap right away.
      55             :  * We then add an entry to the old_new_tid_map hash table showing B's
      56             :  * original tid (in the old heap) and new tid (in the new heap).
      57             :  * When we later encounter A, we get the new location of B from the table,
      58             :  * and can write A immediately with the correct ctid.
      59             :  *
      60             :  * Entries in the hash tables can be removed as soon as the later tuple
      61             :  * is encountered.  That helps to keep the memory usage down.  At the end,
      62             :  * both tables are usually empty; we should have encountered both A and B
      63             :  * of each pair.  However, it's possible for A to be RECENTLY_DEAD and B
      64             :  * entirely DEAD according to HeapTupleSatisfiesVacuum, because the test
      65             :  * for deadness using OldestXmin is not exact.  In such a case we might
      66             :  * encounter B first, and skip it, and find A later.  Then A would be added
      67             :  * to unresolved_tups, and stay there until end of the rewrite.  Since
      68             :  * this case is very unusual, we don't worry about the memory usage.
      69             :  *
      70             :  * Using in-memory hash tables means that we use some memory for each live
      71             :  * update chain in the table, from the time we find one end of the
      72             :  * reference until we find the other end.  That shouldn't be a problem in
      73             :  * practice, but if you do something like an UPDATE without a where-clause
      74             :  * on a large table, and then run CLUSTER in the same transaction, you
      75             :  * could run out of memory.  It doesn't seem worthwhile to add support for
      76             :  * spill-to-disk, as there shouldn't be that many RECENTLY_DEAD tuples in a
      77             :  * table under normal circumstances.  Furthermore, in the typical scenario
      78             :  * of CLUSTERing on an unchanging key column, we'll see all the versions
      79             :  * of a given tuple together anyway, and so the peak memory usage is only
      80             :  * proportional to the number of RECENTLY_DEAD versions of a single row, not
      81             :  * in the whole table.  Note that if we do fail halfway through a CLUSTER,
      82             :  * the total in the whole table.  Note that if we do fail halfway through a CLUSTER,
      83             :  *
      84             :  * We can't use the normal heap_insert function to insert into the new
      85             :  * heap, because heap_insert overwrites the visibility information.
      86             :  * We use a special-purpose raw_heap_insert function instead, which
      87             :  * is optimized for bulk inserting a lot of tuples, knowing that we have
      88             :  * exclusive access to the heap.  raw_heap_insert builds new pages in
      89             :  * local storage.  When a page is full, or at the end of the process,
      90             :  * we insert it to WAL as a single record and then write it to disk
      91             :  * directly through smgr.  Note, however, that any data sent to the new
      92             :  * heap's TOAST table will go through the normal bufmgr.
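             :  *
             :  * In outline, that flush sequence (a simplified excerpt of
             :  * raw_heap_insert and end_heap_rewrite below; error handling omitted):
             :  *
             :  *     if (state->rs_use_wal)
             :  *         log_newpage(&state->rs_new_rel->rd_node, MAIN_FORKNUM,
             :  *                     state->rs_blockno, state->rs_buffer, true);
             :  *     RelationOpenSmgr(state->rs_new_rel);
             :  *     PageSetChecksumInplace(state->rs_buffer, state->rs_blockno);
             :  *     smgrextend(state->rs_new_rel->rd_smgr, MAIN_FORKNUM,
             :  *                state->rs_blockno, (char *) state->rs_buffer, true);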
      93             :  *
      94             :  *
      95             :  * Portions Copyright (c) 1996-2019, PostgreSQL Global Development Group
      96             :  * Portions Copyright (c) 1994-5, Regents of the University of California
      97             :  *
      98             :  * IDENTIFICATION
      99             :  *    src/backend/access/heap/rewriteheap.c
     100             :  *
     101             :  *-------------------------------------------------------------------------
     102             :  */
     103             : #include "postgres.h"
     104             : 
     105             : #include <sys/stat.h>
     106             : #include <unistd.h>
     107             : 
     108             : #include "access/heapam.h"
     109             : #include "access/heapam_xlog.h"
     110             : #include "access/heaptoast.h"
     111             : #include "access/rewriteheap.h"
     112             : #include "access/transam.h"
     113             : #include "access/xact.h"
     114             : #include "access/xloginsert.h"
     115             : #include "catalog/catalog.h"
     116             : #include "lib/ilist.h"
     117             : #include "miscadmin.h"
     118             : #include "pgstat.h"
     119             : #include "replication/logical.h"
     120             : #include "replication/slot.h"
     121             : #include "storage/bufmgr.h"
     122             : #include "storage/fd.h"
     123             : #include "storage/procarray.h"
     124             : #include "storage/smgr.h"
     125             : #include "utils/memutils.h"
     126             : #include "utils/rel.h"
     127             : 
     128             : /*
     129             :  * State associated with a rewrite operation. This is opaque to the user
     130             :  * of the rewrite facility.
     131             :  */
     132             : typedef struct RewriteStateData
     133             : {
     134             :     Relation    rs_old_rel;     /* source heap */
     135             :     Relation    rs_new_rel;     /* destination heap */
     136             :     Page        rs_buffer;      /* page currently being built */
     137             :     BlockNumber rs_blockno;     /* block where page will go */
     138             :     bool        rs_buffer_valid;    /* T if any tuples in buffer */
     139             :     bool        rs_use_wal;     /* must we WAL-log inserts? */
     140             :     bool        rs_logical_rewrite; /* do we need to do logical rewriting */
     141             :     TransactionId rs_oldest_xmin;   /* oldest xmin used by caller to determine
     142             :                                      * tuple visibility */
     143             :     TransactionId rs_freeze_xid;    /* Xid that will be used as freeze cutoff
     144             :                                      * point */
     145             :     TransactionId rs_logical_xmin;  /* Xid that will be used as cutoff point
     146             :                                      * for logical rewrites */
     147             :     MultiXactId rs_cutoff_multi;    /* MultiXactId that will be used as cutoff
     148             :                                      * point for multixacts */
     149             :     MemoryContext rs_cxt;       /* for hash tables and entries and tuples in
     150             :                                  * them */
     151             :     XLogRecPtr  rs_begin_lsn;   /* XLogInsertLsn when starting the rewrite */
     152             :     HTAB       *rs_unresolved_tups; /* unmatched A tuples */
     153             :     HTAB       *rs_old_new_tid_map; /* unmatched B tuples */
     154             :     HTAB       *rs_logical_mappings;    /* logical remapping files */
      155             :     uint32      rs_num_rewrite_mappings;    /* # of in-memory mappings */
     156             : }           RewriteStateData;
     157             : 
     158             : /*
     159             :  * The lookup keys for the hash tables are tuple TID and xmin (we must check
     160             :  * both to avoid false matches from dead tuples).  Beware that there is
     161             :  * probably some padding space in this struct; it must be zeroed out for
     162             :  * correct hashtable operation.
     163             :  */
     164             : typedef struct
     165             : {
     166             :     TransactionId xmin;         /* tuple xmin */
     167             :     ItemPointerData tid;        /* tuple location in old heap */
     168             : } TidHashKey;
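             : 
             : /*
             :  * Illustration of the zeroing requirement (this mirrors the lookup
             :  * sites below; which xmin/tid go into the key varies by call site).
             :  * HASH_BLOBS hashes the key's raw bytes, padding included, so:
             :  *
             :  *     TidHashKey  hashkey;
             :  *
             :  *     memset(&hashkey, 0, sizeof(hashkey));
             :  *     hashkey.xmin = HeapTupleHeaderGetXmin(tuple->t_data);
             :  *     hashkey.tid = tuple->t_self;
             :  *     entry = hash_search(table, &hashkey, HASH_FIND, NULL);
             :  */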
     169             : 
     170             : /*
     171             :  * Entry structures for the hash tables
     172             :  */
     173             : typedef struct
     174             : {
     175             :     TidHashKey  key;            /* expected xmin/old location of B tuple */
     176             :     ItemPointerData old_tid;    /* A's location in the old heap */
     177             :     HeapTuple   tuple;          /* A's tuple contents */
     178             : } UnresolvedTupData;
     179             : 
     180             : typedef UnresolvedTupData *UnresolvedTup;
     181             : 
     182             : typedef struct
     183             : {
     184             :     TidHashKey  key;            /* actual xmin/old location of B tuple */
     185             :     ItemPointerData new_tid;    /* where we put it in the new heap */
     186             : } OldToNewMappingData;
     187             : 
     188             : typedef OldToNewMappingData *OldToNewMapping;
     189             : 
     190             : /*
      191             :  * In-memory data for an xid that might need logical remapping entries
     192             :  * to be logged.
     193             :  */
     194             : typedef struct RewriteMappingFile
     195             : {
     196             :     TransactionId xid;          /* xid that might need to see the row */
     197             :     int         vfd;            /* fd of mappings file */
      198             :     off_t       off;            /* how far we have written so far */
     199             :     uint32      num_mappings;   /* number of in-memory mappings */
     200             :     dlist_head  mappings;       /* list of in-memory mappings */
     201             :     char        path[MAXPGPATH];    /* path, for error messages */
     202             : } RewriteMappingFile;
     203             : 
     204             : /*
      205             :  * A single in-memory logical rewrite mapping, hanging off
     206             :  * RewriteMappingFile->mappings.
     207             :  */
     208             : typedef struct RewriteMappingDataEntry
     209             : {
     210             :     LogicalRewriteMappingData map;  /* map between old and new location of the
     211             :                                      * tuple */
     212             :     dlist_node  node;
     213             : } RewriteMappingDataEntry;
     214             : 
     215             : 
     216             : /* prototypes for internal functions */
     217             : static void raw_heap_insert(RewriteState state, HeapTuple tup);
     218             : 
     219             : /* internal logical remapping prototypes */
     220             : static void logical_begin_heap_rewrite(RewriteState state);
     221             : static void logical_rewrite_heap_tuple(RewriteState state, ItemPointerData old_tid, HeapTuple new_tuple);
     222             : static void logical_end_heap_rewrite(RewriteState state);
     223             : 
     224             : 
     225             : /*
     226             :  * Begin a rewrite of a table
     227             :  *
      228             :  * old_heap     old, locked heap relation to read tuples from
     229             :  * new_heap     new, locked heap relation to insert tuples to
     230             :  * oldest_xmin  xid used by the caller to determine which tuples are dead
     231             :  * freeze_xid   xid before which tuples will be frozen
     232             :  * cutoff_multi multixact before which multis will be removed
     233             :  * use_wal      should the inserts to the new heap be WAL-logged?
     234             :  *
     235             :  * Returns an opaque RewriteState, allocated in current memory context,
     236             :  * to be used in subsequent calls to the other functions.
     237             :  */
     238             : RewriteState
     239         308 : begin_heap_rewrite(Relation old_heap, Relation new_heap, TransactionId oldest_xmin,
     240             :                    TransactionId freeze_xid, MultiXactId cutoff_multi,
     241             :                    bool use_wal)
     242             : {
     243             :     RewriteState state;
     244             :     MemoryContext rw_cxt;
     245             :     MemoryContext old_cxt;
     246             :     HASHCTL     hash_ctl;
     247             : 
     248             :     /*
     249             :      * To ease cleanup, make a separate context that will contain the
     250             :      * RewriteState struct itself plus all subsidiary data.
     251             :      */
     252         308 :     rw_cxt = AllocSetContextCreate(CurrentMemoryContext,
     253             :                                    "Table rewrite",
     254             :                                    ALLOCSET_DEFAULT_SIZES);
     255         308 :     old_cxt = MemoryContextSwitchTo(rw_cxt);
     256             : 
     257             :     /* Create and fill in the state struct */
     258         308 :     state = palloc0(sizeof(RewriteStateData));
     259             : 
     260         308 :     state->rs_old_rel = old_heap;
     261         308 :     state->rs_new_rel = new_heap;
     262         308 :     state->rs_buffer = (Page) palloc(BLCKSZ);
     263             :     /* new_heap needn't be empty, just locked */
     264         308 :     state->rs_blockno = RelationGetNumberOfBlocks(new_heap);
     265         308 :     state->rs_buffer_valid = false;
     266         308 :     state->rs_use_wal = use_wal;
     267         308 :     state->rs_oldest_xmin = oldest_xmin;
     268         308 :     state->rs_freeze_xid = freeze_xid;
     269         308 :     state->rs_cutoff_multi = cutoff_multi;
     270         308 :     state->rs_cxt = rw_cxt;
     271             : 
     272             :     /* Initialize hash tables used to track update chains */
     273         308 :     memset(&hash_ctl, 0, sizeof(hash_ctl));
     274         308 :     hash_ctl.keysize = sizeof(TidHashKey);
     275         308 :     hash_ctl.entrysize = sizeof(UnresolvedTupData);
     276         308 :     hash_ctl.hcxt = state->rs_cxt;
     277             : 
     278         308 :     state->rs_unresolved_tups =
     279         308 :         hash_create("Rewrite / Unresolved ctids",
     280             :                     128,        /* arbitrary initial size */
     281             :                     &hash_ctl,
     282             :                     HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
     283             : 
     284         308 :     hash_ctl.entrysize = sizeof(OldToNewMappingData);
     285             : 
     286         308 :     state->rs_old_new_tid_map =
     287         308 :         hash_create("Rewrite / Old to new tid map",
     288             :                     128,        /* arbitrary initial size */
     289             :                     &hash_ctl,
     290             :                     HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
     291             : 
     292         308 :     MemoryContextSwitchTo(old_cxt);
     293             : 
     294         308 :     logical_begin_heap_rewrite(state);
     295             : 
     296         308 :     return state;
     297             : }
     298             : 
     299             : /*
     300             :  * End a rewrite.
     301             :  *
     302             :  * state and any other resources are freed.
     303             :  */
     304             : void
     305         308 : end_heap_rewrite(RewriteState state)
     306             : {
     307             :     HASH_SEQ_STATUS seq_status;
     308             :     UnresolvedTup unresolved;
     309             : 
     310             :     /*
     311             :      * Write any remaining tuples in the UnresolvedTups table. If we have any
     312             :      * left, they should in fact be dead, but let's err on the safe side.
     313             :      */
     314         308 :     hash_seq_init(&seq_status, state->rs_unresolved_tups);
     315             : 
     316         616 :     while ((unresolved = hash_seq_search(&seq_status)) != NULL)
     317             :     {
     318           0 :         ItemPointerSetInvalid(&unresolved->tuple->t_data->t_ctid);
     319           0 :         raw_heap_insert(state, unresolved->tuple);
     320             :     }
     321             : 
     322             :     /* Write the last page, if any */
     323         308 :     if (state->rs_buffer_valid)
     324             :     {
     325         236 :         if (state->rs_use_wal)
     326         150 :             log_newpage(&state->rs_new_rel->rd_node,
     327             :                         MAIN_FORKNUM,
     328             :                         state->rs_blockno,
     329             :                         state->rs_buffer,
     330             :                         true);
     331         236 :         RelationOpenSmgr(state->rs_new_rel);
     332             : 
     333         236 :         PageSetChecksumInplace(state->rs_buffer, state->rs_blockno);
     334             : 
     335         236 :         smgrextend(state->rs_new_rel->rd_smgr, MAIN_FORKNUM, state->rs_blockno,
     336         236 :                    (char *) state->rs_buffer, true);
     337             :     }
     338             : 
     339             :     /*
     340             :      * If the rel is WAL-logged, must fsync before commit.  We use heap_sync
     341             :      * to ensure that the toast table gets fsync'd too.
     342             :      *
     343             :      * It's obvious that we must do this when not WAL-logging. It's less
     344             :      * obvious that we have to do it even if we did WAL-log the pages. The
     345             :      * reason is the same as in storage.c's RelationCopyStorage(): we're
     346             :      * writing data that's not in shared buffers, and so a CHECKPOINT
     347             :      * occurring during the rewriteheap operation won't have fsync'd data we
     348             :      * wrote before the checkpoint.
     349             :      */
     350         308 :     if (RelationNeedsWAL(state->rs_new_rel))
     351         304 :         heap_sync(state->rs_new_rel);
     352             : 
     353         308 :     logical_end_heap_rewrite(state);
     354             : 
     355             :     /* Deleting the context frees everything */
     356         308 :     MemoryContextDelete(state->rs_cxt);
     357         308 : }
     358             : 
     359             : /*
     360             :  * Add a tuple to the new heap.
     361             :  *
     362             :  * Visibility information is copied from the original tuple, except that
     363             :  * we "freeze" very-old tuples.  Note that since we scribble on new_tuple,
      364             :  * it had better be temp storage, not a pointer to the original tuple.
     365             :  *
     366             :  * state        opaque state as returned by begin_heap_rewrite
     367             :  * old_tuple    original tuple in the old heap
     368             :  * new_tuple    new, rewritten tuple to be inserted to new heap
     369             :  */
     370             : void
     371      215992 : rewrite_heap_tuple(RewriteState state,
     372             :                    HeapTuple old_tuple, HeapTuple new_tuple)
     373             : {
     374             :     MemoryContext old_cxt;
     375             :     ItemPointerData old_tid;
     376             :     TidHashKey  hashkey;
     377             :     bool        found;
     378             :     bool        free_new;
     379             : 
     380      215992 :     old_cxt = MemoryContextSwitchTo(state->rs_cxt);
     381             : 
     382             :     /*
     383             :      * Copy the original tuple's visibility information into new_tuple.
     384             :      *
     385             :      * XXX we might later need to copy some t_infomask2 bits, too? Right now,
     386             :      * we intentionally clear the HOT status bits.
     387             :      */
     388      215992 :     memcpy(&new_tuple->t_data->t_choice.t_heap,
     389      215992 :            &old_tuple->t_data->t_choice.t_heap,
     390             :            sizeof(HeapTupleFields));
     391             : 
     392      215992 :     new_tuple->t_data->t_infomask &= ~HEAP_XACT_MASK;
     393      215992 :     new_tuple->t_data->t_infomask2 &= ~HEAP2_XACT_MASK;
     394      431984 :     new_tuple->t_data->t_infomask |=
     395      215992 :         old_tuple->t_data->t_infomask & HEAP_XACT_MASK;
     396             : 
     397             :     /*
     398             :      * While we have our hands on the tuple, we may as well freeze any
     399             :      * eligible xmin or xmax, so that future VACUUM effort can be saved.
     400             :      */
     401      647976 :     heap_freeze_tuple(new_tuple->t_data,
     402      215992 :                       state->rs_old_rel->rd_rel->relfrozenxid,
     403      215992 :                       state->rs_old_rel->rd_rel->relminmxid,
     404             :                       state->rs_freeze_xid,
     405             :                       state->rs_cutoff_multi);
     406             : 
     407             :     /*
     408             :      * Invalid ctid means that ctid should point to the tuple itself. We'll
     409             :      * override it later if the tuple is part of an update chain.
     410             :      */
     411      215992 :     ItemPointerSetInvalid(&new_tuple->t_data->t_ctid);
     412             : 
     413             :     /*
     414             :      * If the tuple has been updated, check the old-to-new mapping hash table.
     415             :      */
     416      292738 :     if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
     417      153492 :           HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
     418      153492 :         !HeapTupleHeaderIndicatesMovedPartitions(old_tuple->t_data) &&
     419       76746 :         !(ItemPointerEquals(&(old_tuple->t_self),
     420       76746 :                             &(old_tuple->t_data->t_ctid))))
     421             :     {
     422             :         OldToNewMapping mapping;
     423             : 
     424        1148 :         memset(&hashkey, 0, sizeof(hashkey));
     425        1148 :         hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
     426        1148 :         hashkey.tid = old_tuple->t_data->t_ctid;
     427             : 
     428        1148 :         mapping = (OldToNewMapping)
     429        1148 :             hash_search(state->rs_old_new_tid_map, &hashkey,
     430             :                         HASH_FIND, NULL);
     431             : 
     432        1148 :         if (mapping != NULL)
     433             :         {
     434             :             /*
     435             :              * We've already copied the tuple that t_ctid points to, so we can
     436             :              * set the ctid of this tuple to point to the new location, and
     437             :              * insert it right away.
     438             :              */
     439         388 :             new_tuple->t_data->t_ctid = mapping->new_tid;
     440             : 
     441             :             /* We don't need the mapping entry anymore */
     442         388 :             hash_search(state->rs_old_new_tid_map, &hashkey,
     443             :                         HASH_REMOVE, &found);
     444             :             Assert(found);
     445             :         }
     446             :         else
     447             :         {
     448             :             /*
     449             :              * We haven't seen the tuple t_ctid points to yet. Stash this
     450             :              * tuple into unresolved_tups to be written later.
     451             :              */
     452             :             UnresolvedTup unresolved;
     453             : 
     454         760 :             unresolved = hash_search(state->rs_unresolved_tups, &hashkey,
     455             :                                      HASH_ENTER, &found);
     456             :             Assert(!found);
     457             : 
     458         760 :             unresolved->old_tid = old_tuple->t_self;
     459         760 :             unresolved->tuple = heap_copytuple(new_tuple);
     460             : 
     461             :             /*
     462             :              * We can't do anything more now, since we don't know where the
     463             :              * tuple will be written.
     464             :              */
     465         760 :             MemoryContextSwitchTo(old_cxt);
     466         760 :             return;
     467             :         }
     468             :     }
     469             : 
     470             :     /*
     471             :      * Now we will write the tuple, and then check to see if it is the B tuple
     472             :      * in any new or known pair.  When we resolve a known pair, we will be
     473             :      * able to write that pair's A tuple, and then we have to check if it
     474             :      * resolves some other pair.  Hence, we need a loop here.
     475             :      */
     476      215232 :     old_tid = old_tuple->t_self;
     477      215232 :     free_new = false;
     478             : 
     479             :     for (;;)
     480         760 :     {
     481             :         ItemPointerData new_tid;
     482             : 
     483             :         /* Insert the tuple and find out where it's put in new_heap */
     484      215992 :         raw_heap_insert(state, new_tuple);
     485      215992 :         new_tid = new_tuple->t_self;
     486             : 
     487      215992 :         logical_rewrite_heap_tuple(state, old_tid, new_tuple);
     488             : 
     489             :         /*
     490             :          * If the tuple is the updated version of a row, and the prior version
     491             :          * wouldn't be DEAD yet, then we need to either resolve the prior
     492             :          * version (if it's waiting in rs_unresolved_tups), or make an entry
     493             :          * in rs_old_new_tid_map (so we can resolve it when we do see it). The
     494             :          * previous tuple's xmax would equal this one's xmin, so it's
     495             :          * RECENTLY_DEAD if and only if the xmin is not before OldestXmin.
     496             :          */
     497      222072 :         if ((new_tuple->t_data->t_infomask & HEAP_UPDATED) &&
     498        6080 :             !TransactionIdPrecedes(HeapTupleHeaderGetXmin(new_tuple->t_data),
     499             :                                    state->rs_oldest_xmin))
     500             :         {
     501             :             /*
     502             :              * Okay, this is B in an update pair.  See if we've seen A.
     503             :              */
     504             :             UnresolvedTup unresolved;
     505             : 
     506        1148 :             memset(&hashkey, 0, sizeof(hashkey));
     507        1148 :             hashkey.xmin = HeapTupleHeaderGetXmin(new_tuple->t_data);
     508        1148 :             hashkey.tid = old_tid;
     509             : 
     510        1148 :             unresolved = hash_search(state->rs_unresolved_tups, &hashkey,
     511             :                                      HASH_FIND, NULL);
     512             : 
     513        1148 :             if (unresolved != NULL)
     514             :             {
     515             :                 /*
     516             :                  * We have seen and memorized the previous tuple already. Now
     517             :                  * that we know where we inserted the tuple its t_ctid points
     518             :                  * to, fix its t_ctid and insert it to the new heap.
     519             :                  */
     520         760 :                 if (free_new)
     521         242 :                     heap_freetuple(new_tuple);
     522         760 :                 new_tuple = unresolved->tuple;
     523         760 :                 free_new = true;
     524         760 :                 old_tid = unresolved->old_tid;
     525         760 :                 new_tuple->t_data->t_ctid = new_tid;
     526             : 
     527             :                 /*
     528             :                  * We don't need the hash entry anymore, but don't free its
     529             :                  * tuple just yet.
     530             :                  */
     531         760 :                 hash_search(state->rs_unresolved_tups, &hashkey,
     532             :                             HASH_REMOVE, &found);
     533             :                 Assert(found);
     534             : 
     535             :                 /* loop back to insert the previous tuple in the chain */
     536         760 :                 continue;
     537             :             }
     538             :             else
     539             :             {
     540             :                 /*
     541             :                  * Remember the new tid of this tuple. We'll use it to set the
     542             :                  * ctid when we find the previous tuple in the chain.
     543             :                  */
     544             :                 OldToNewMapping mapping;
     545             : 
     546         388 :                 mapping = hash_search(state->rs_old_new_tid_map, &hashkey,
     547             :                                       HASH_ENTER, &found);
     548             :                 Assert(!found);
     549             : 
     550         388 :                 mapping->new_tid = new_tid;
     551             :             }
     552             :         }
     553             : 
     554             :         /* Done with this (chain of) tuples, for now */
     555      215232 :         if (free_new)
     556         518 :             heap_freetuple(new_tuple);
     557      215232 :         break;
     558             :     }
     559             : 
     560      215232 :     MemoryContextSwitchTo(old_cxt);
     561             : }
     562             : 
     563             : /*
     564             :  * Register a dead tuple with an ongoing rewrite. Dead tuples are not
     565             :  * copied to the new table, but we still make note of them so that we
     566             :  * can release some resources earlier.
     567             :  *
     568             :  * Returns true if a tuple was removed from the unresolved_tups table.
      569             :  * This indicates that the tuple, previously thought to be "recently dead",
      570             :  * is now known to be really dead and won't be written to the output.
     571             :  */
     572             : bool
     573        9954 : rewrite_heap_dead_tuple(RewriteState state, HeapTuple old_tuple)
     574             : {
     575             :     /*
     576             :      * If we have already seen an earlier tuple in the update chain that
     577             :      * points to this tuple, let's forget about that earlier tuple. It's in
     578             :      * fact dead as well, our simple xmax < OldestXmin test in
      579             :      * fact dead as well; our simple xmax < OldestXmin test in
     580             :      * when xmin of a tuple is greater than xmax, which sounds
     581             :      * counter-intuitive but is perfectly valid.
     582             :      *
     583             :      * We don't bother to try to detect the situation the other way round,
     584             :      * when we encounter the dead tuple first and then the recently dead one
     585             :      * that points to it. If that happens, we'll have some unmatched entries
     586             :      * in the UnresolvedTups hash table at the end. That can happen anyway,
     587             :      * because a vacuum might have removed the dead tuple in the chain before
     588             :      * us.
     589             :      */
     590             :     UnresolvedTup unresolved;
     591             :     TidHashKey  hashkey;
     592             :     bool        found;
     593             : 
     594        9954 :     memset(&hashkey, 0, sizeof(hashkey));
     595        9954 :     hashkey.xmin = HeapTupleHeaderGetXmin(old_tuple->t_data);
     596        9954 :     hashkey.tid = old_tuple->t_self;
     597             : 
     598        9954 :     unresolved = hash_search(state->rs_unresolved_tups, &hashkey,
     599             :                              HASH_FIND, NULL);
     600             : 
     601        9954 :     if (unresolved != NULL)
     602             :     {
     603             :         /* Need to free the contained tuple as well as the hashtable entry */
     604           0 :         heap_freetuple(unresolved->tuple);
     605           0 :         hash_search(state->rs_unresolved_tups, &hashkey,
     606             :                     HASH_REMOVE, &found);
     607             :         Assert(found);
     608           0 :         return true;
     609             :     }
     610             : 
     611        9954 :     return false;
     612             : }
     613             : 
     614             : /*
     615             :  * Insert a tuple to the new relation.  This has to track heap_insert
     616             :  * and its subsidiary functions!
     617             :  *
      618             :  * t_self of the tuple is set to its new TID. If t_ctid of the
     619             :  * tuple is invalid on entry, it's replaced with the new TID as well (in
     620             :  * the inserted data only, not in the caller's copy).
     621             :  */
     622             : static void
     623      215992 : raw_heap_insert(RewriteState state, HeapTuple tup)
     624             : {
     625      215992 :     Page        page = state->rs_buffer;
     626             :     Size        pageFreeSpace,
     627             :                 saveFreeSpace;
     628             :     Size        len;
     629             :     OffsetNumber newoff;
     630             :     HeapTuple   heaptup;
     631             : 
     632             :     /*
     633             :      * If the new tuple is too big for storage or contains already toasted
     634             :      * out-of-line attributes from some other relation, invoke the toaster.
     635             :      *
     636             :      * Note: below this point, heaptup is the data we actually intend to store
     637             :      * into the relation; tup is the caller's original untoasted data.
     638             :      */
     639      215992 :     if (state->rs_new_rel->rd_rel->relkind == RELKIND_TOASTVALUE)
     640             :     {
     641             :         /* toast table entries should never be recursively toasted */
     642             :         Assert(!HeapTupleHasExternal(tup));
     643           0 :         heaptup = tup;
     644             :     }
     645      215992 :     else if (HeapTupleHasExternal(tup) || tup->t_len > TOAST_TUPLE_THRESHOLD)
     646         426 :     {
     647         426 :         int         options = HEAP_INSERT_SKIP_FSM;
     648             : 
     649         426 :         if (!state->rs_use_wal)
     650         164 :             options |= HEAP_INSERT_SKIP_WAL;
     651             : 
     652             :         /*
     653             :          * While rewriting the heap for VACUUM FULL / CLUSTER, make sure data
     654             :          * for the TOAST table are not logically decoded.  The main heap is
     655             :          * WAL-logged as XLOG FPI records, which are not logically decoded.
     656             :          */
     657         426 :         options |= HEAP_INSERT_NO_LOGICAL;
     658             : 
     659         426 :         heaptup = heap_toast_insert_or_update(state->rs_new_rel, tup, NULL,
     660             :                                               options);
     661             :     }
     662             :     else
     663      215566 :         heaptup = tup;
     664             : 
     665      215992 :     len = MAXALIGN(heaptup->t_len); /* be conservative */
     666             : 
     667             :     /*
     668             :      * If we're gonna fail for oversize tuple, do it right away
     669             :      */
     670      215992 :     if (len > MaxHeapTupleSize)
     671           0 :         ereport(ERROR,
     672             :                 (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
     673             :                  errmsg("row is too big: size %zu, maximum size %zu",
     674             :                         len, MaxHeapTupleSize)));
     675             : 
     676             :     /* Compute desired extra freespace due to fillfactor option */
     677      215992 :     saveFreeSpace = RelationGetTargetPageFreeSpace(state->rs_new_rel,
     678             :                                                    HEAP_DEFAULT_FILLFACTOR);
     679             : 
     680             :     /* Now we can check to see if there's enough free space already. */
     681      215992 :     if (state->rs_buffer_valid)
     682             :     {
     683      215756 :         pageFreeSpace = PageGetHeapFreeSpace(page);
     684             : 
     685      215756 :         if (len + saveFreeSpace > pageFreeSpace)
     686             :         {
     687             :             /* Doesn't fit, so write out the existing page */
     688             : 
     689             :             /* XLOG stuff */
     690        3212 :             if (state->rs_use_wal)
     691        2610 :                 log_newpage(&state->rs_new_rel->rd_node,
     692             :                             MAIN_FORKNUM,
     693             :                             state->rs_blockno,
     694             :                             page,
     695             :                             true);
     696             : 
     697             :             /*
     698             :              * Now write the page. We say skipFsync = true because there's no
     699             :              * need for smgr to schedule an fsync for this write; we'll do it
     700             :              * ourselves in end_heap_rewrite.
     701             :              */
     702        3212 :             RelationOpenSmgr(state->rs_new_rel);
     703             : 
     704        3212 :             PageSetChecksumInplace(page, state->rs_blockno);
     705             : 
     706        3212 :             smgrextend(state->rs_new_rel->rd_smgr, MAIN_FORKNUM,
     707             :                        state->rs_blockno, (char *) page, true);
     708             : 
     709        3212 :             state->rs_blockno++;
     710        3212 :             state->rs_buffer_valid = false;
     711             :         }
     712             :     }
     713             : 
     714      215992 :     if (!state->rs_buffer_valid)
     715             :     {
     716             :         /* Initialize a new empty page */
     717        3448 :         PageInit(page, BLCKSZ, 0);
     718        3448 :         state->rs_buffer_valid = true;
     719             :     }
     720             : 
     721             :     /* And now we can insert the tuple into the page */
     722      215992 :     newoff = PageAddItem(page, (Item) heaptup->t_data, heaptup->t_len,
     723             :                          InvalidOffsetNumber, false, true);
     724      215992 :     if (newoff == InvalidOffsetNumber)
     725           0 :         elog(ERROR, "failed to add tuple");
     726             : 
     727             :     /* Update caller's t_self to the actual position where it was stored */
     728      215992 :     ItemPointerSet(&(tup->t_self), state->rs_blockno, newoff);
     729             : 
     730             :     /*
     731             :      * Insert the correct position into CTID of the stored tuple, too, if the
     732             :      * caller didn't supply a valid CTID.
     733             :      */
     734      215992 :     if (!ItemPointerIsValid(&tup->t_data->t_ctid))
     735             :     {
     736             :         ItemId      newitemid;
     737             :         HeapTupleHeader onpage_tup;
     738             : 
     739      214844 :         newitemid = PageGetItemId(page, newoff);
     740      214844 :         onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
     741             : 
     742      214844 :         onpage_tup->t_ctid = tup->t_self;
     743             :     }
     744             : 
     745             :     /* If heaptup is a private copy, release it. */
     746      215992 :     if (heaptup != tup)
     747         426 :         heap_freetuple(heaptup);
     748      215992 : }
     749             : 
     750             : /* ------------------------------------------------------------------------
     751             :  * Logical rewrite support
     752             :  *
     753             :  * When doing logical decoding - which relies on using cmin/cmax of catalog
     754             :  * tuples, via xl_heap_new_cid records - heap rewrites have to log enough
      755             :  * information to allow the decoding backend to update its internal mapping
     756             :  * of (relfilenode,ctid) => (cmin, cmax) to be correct for the rewritten heap.
     757             :  *
     758             :  * For that, every time we find a tuple that's been modified in a catalog
     759             :  * relation within the xmin horizon of any decoding slot, we log a mapping
     760             :  * from the old to the new location.
     761             :  *
      762             :  * To deal with rewrites that abort, the filename of a mapping file contains
      763             :  * the xid of the transaction performing the rewrite, which can then be
      764             :  * checked before the file is read in.
     765             :  *
      766             :  * For efficiency we don't immediately spill every single mapping for a
     767             :  * row to disk but only do so in batches when we've collected several of them
     768             :  * in memory or when end_heap_rewrite() has been called.
     769             :  *
      770             :  * Crash-Safety: This module deviates from the usual patterns of doing WAL
      771             :  * since it cannot rely on checkpoint flushing out all buffers and thus
      772             :  * waiting for exclusive locks on buffers. Usually the XLogInsert() covering
      773             :  * buffer modifications is performed while the buffer(s) that are being
      774             :  * modified are exclusively locked, guaranteeing that the WAL record and the
      775             :  * modified heap page end up on the same side of the checkpoint. But since the
      776             :  * mapping files we log aren't in shared_buffers, that interlock doesn't work.
     777             :  *
     778             :  * Instead we simply write the mapping files out to disk, *before* the
     779             :  * XLogInsert() is performed. That guarantees that either the XLogInsert() is
     780             :  * inserted after the checkpoint's redo pointer or that the checkpoint (via
     781             :  * CheckPointLogicalRewriteHeap()) has flushed the (partial) mapping file to
     782             :  * disk. That leaves the tail end that has not yet been flushed open to
     783             :  * corruption, which is solved by including the current offset in the
     784             :  * xl_heap_rewrite_mapping records and truncating the mapping file to it
     785             :  * during replay. Every time a rewrite is finished all generated mapping files
     786             :  * are synced to disk.
     787             :  *
     788             :  * Note that if we were only concerned about crash safety we wouldn't have to
     789             :  * deal with WAL logging at all - an fsync() at the end of a rewrite would be
     790             :  * sufficient for crash safety. Any mapping that hasn't been safely flushed to
      791             :  * disk has to belong to an aborted (explicitly or via a crash) transaction
      792             :  * and is ignored by virtue of the xid in its name being subject to a
      793             :  * TransactionIdDidCommit() check. But we want to support having standbys via
     794             :  * physical replication, both for availability and to do logical decoding
     795             :  * there.
     796             :  * ------------------------------------------------------------------------
     797             :  */
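             : 
             : /*
             :  * In outline, the deviation described above (a simplified excerpt of
             :  * logical_heap_rewrite_flush_mappings below): the mapping data is
             :  * written to the file *before* the covering WAL record is inserted.
             :  *
             :  *     written = FileWrite(src->vfd, waldata_start, len, src->off,
             :  *                         WAIT_EVENT_LOGICAL_REWRITE_WRITE);
             :  *     ...
             :  *     src->off += len;
             :  *     XLogBeginInsert();
             :  *     XLogRegisterData((char *) (&xlrec), sizeof(xlrec));
             :  *     XLogRegisterData(waldata_start, len);
             :  *     XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_REWRITE);
             :  */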
     798             : 
     799             : /*
     800             :  * Do preparations for logging logical mappings during a rewrite if
     801             :  * necessary. If we detect that we don't need to log anything we'll prevent
     802             :  * any further action by the various logical rewrite functions.
     803             :  */
     804             : static void
     805         308 : logical_begin_heap_rewrite(RewriteState state)
     806             : {
     807             :     HASHCTL     hash_ctl;
     808             :     TransactionId logical_xmin;
     809             : 
     810             :     /*
     811             :      * We only need to persist these mappings if the rewritten table can be
      812             :      * accessed during logical decoding; if not, we can skip doing any
     813             :      * additional work.
     814             :      */
     815         308 :     state->rs_logical_rewrite =
     816         308 :         RelationIsAccessibleInLogicalDecoding(state->rs_old_rel);
     817             : 
     818         308 :     if (!state->rs_logical_rewrite)
     819         536 :         return;
     820             : 
     821          40 :     ProcArrayGetReplicationSlotXmin(NULL, &logical_xmin);
     822             : 
     823             :     /*
      824             :      * If there are no logical slots in progress, we don't need to do anything;
     825             :      * there cannot be any remappings for relevant rows yet. The relation's
     826             :      * lock protects us against races.
     827             :      */
     828          40 :     if (logical_xmin == InvalidTransactionId)
     829             :     {
     830           0 :         state->rs_logical_rewrite = false;
     831           0 :         return;
     832             :     }
     833             : 
     834          40 :     state->rs_logical_xmin = logical_xmin;
     835          40 :     state->rs_begin_lsn = GetXLogInsertRecPtr();
     836          40 :     state->rs_num_rewrite_mappings = 0;
     837             : 
     838          40 :     memset(&hash_ctl, 0, sizeof(hash_ctl));
     839          40 :     hash_ctl.keysize = sizeof(TransactionId);
     840          40 :     hash_ctl.entrysize = sizeof(RewriteMappingFile);
     841          40 :     hash_ctl.hcxt = state->rs_cxt;
     842             : 
     843          40 :     state->rs_logical_mappings =
     844          40 :         hash_create("Logical rewrite mapping",
     845             :                     128,        /* arbitrary initial size */
     846             :                     &hash_ctl,
     847             :                     HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
     848             : }
     849             : 
     850             : /*
     851             :  * Flush all logical in-memory mappings to disk, but don't fsync them yet.
     852             :  */
     853             : static void
     854          18 : logical_heap_rewrite_flush_mappings(RewriteState state)
     855             : {
     856             :     HASH_SEQ_STATUS seq_status;
     857             :     RewriteMappingFile *src;
     858             :     dlist_mutable_iter iter;
     859             : 
     860             :     Assert(state->rs_logical_rewrite);
     861             : 
      862             :     /* if we haven't collected any mappings yet, there is nothing to flush */
     863          18 :     if (state->rs_num_rewrite_mappings == 0)
     864           0 :         return;
     865             : 
     866          18 :     elog(DEBUG1, "flushing %u logical rewrite mapping entries",
     867             :          state->rs_num_rewrite_mappings);
     868             : 
     869          18 :     hash_seq_init(&seq_status, state->rs_logical_mappings);
     870         214 :     while ((src = (RewriteMappingFile *) hash_seq_search(&seq_status)) != NULL)
     871             :     {
     872             :         char       *waldata;
     873             :         char       *waldata_start;
     874             :         xl_heap_rewrite_mapping xlrec;
     875             :         Oid         dboid;
     876             :         uint32      len;
     877             :         int         written;
     878             : 
     879             :         /* this file hasn't got any new mappings */
     880         178 :         if (src->num_mappings == 0)
     881           0 :             continue;
     882             : 
     883         178 :         if (state->rs_old_rel->rd_rel->relisshared)
     884           0 :             dboid = InvalidOid;
     885             :         else
     886         178 :             dboid = MyDatabaseId;
     887             : 
     888         178 :         xlrec.num_mappings = src->num_mappings;
     889         178 :         xlrec.mapped_rel = RelationGetRelid(state->rs_old_rel);
     890         178 :         xlrec.mapped_xid = src->xid;
     891         178 :         xlrec.mapped_db = dboid;
     892         178 :         xlrec.offset = src->off;
     893         178 :         xlrec.start_lsn = state->rs_begin_lsn;
     894             : 
     895             :         /* write all mappings consecutively */
     896         178 :         len = src->num_mappings * sizeof(LogicalRewriteMappingData);
     897         178 :         waldata_start = waldata = palloc(len);
     898             : 
     899             :         /*
      900             :          * collect data we need to write out, but don't modify on-disk data yet
     901             :          */
     902        1502 :         dlist_foreach_modify(iter, &src->mappings)
     903             :         {
     904             :             RewriteMappingDataEntry *pmap;
     905             : 
     906        1324 :             pmap = dlist_container(RewriteMappingDataEntry, node, iter.cur);
     907             : 
     908        1324 :             memcpy(waldata, &pmap->map, sizeof(pmap->map));
     909        1324 :             waldata += sizeof(pmap->map);
     910             : 
     911             :             /* remove from the list and free */
     912        1324 :             dlist_delete(&pmap->node);
     913        1324 :             pfree(pmap);
     914             : 
     915             :             /* update bookkeeping */
     916        1324 :             state->rs_num_rewrite_mappings--;
     917        1324 :             src->num_mappings--;
     918             :         }
     919             : 
     920             :         Assert(src->num_mappings == 0);
     921             :         Assert(waldata == waldata_start + len);
     922             : 
     923             :         /*
      924             :          * Note that we deviate from the usual WAL coding practices here;
     925             :          * check the above "Logical rewrite support" comment for reasoning.
     926             :          */
     927         178 :         written = FileWrite(src->vfd, waldata_start, len, src->off,
     928             :                             WAIT_EVENT_LOGICAL_REWRITE_WRITE);
     929         178 :         if (written != len)
     930           0 :             ereport(ERROR,
     931             :                     (errcode_for_file_access(),
     932             :                      errmsg("could not write to file \"%s\", wrote %d of %d: %m", src->path,
     933             :                             written, len)));
     934         178 :         src->off += len;
     935             : 
     936         178 :         XLogBeginInsert();
     937         178 :         XLogRegisterData((char *) (&xlrec), sizeof(xlrec));
     938         178 :         XLogRegisterData(waldata_start, len);
     939             : 
     940             :         /* write xlog record */
     941         178 :         XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_REWRITE);
     942             : 
     943         178 :         pfree(waldata_start);
     944             :     }
     945             :     Assert(state->rs_num_rewrite_mappings == 0);
     946             : }
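
The flush routine above drains every pending in-memory mapping for one xid into a single contiguous buffer and issues one positional write at the file's tracked append offset, before registering the same payload in a WAL record. Below is a minimal standalone sketch of that serialize-then-append pattern; mapping_t, mapping_file_t, the two-field payload, and the demo file name are simplified stand-ins, not the backend's dlist of LogicalRewriteMappingData entries:

    /*
     * Serialize-then-append sketch.  mapping_t and mapping_file_t are
     * simplified stand-ins, not the backend's types.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    typedef struct mapping_t
    {
        unsigned    old_tid;        /* stand-in for the old item pointer */
        unsigned    new_tid;        /* stand-in for the new item pointer */
        struct mapping_t *next;
    } mapping_t;

    typedef struct
    {
        int         fd;             /* open mapping file */
        off_t       off;            /* next append position, tracked by us */
        mapping_t  *head;           /* pending in-memory entries */
        int         num_mappings;
    } mapping_file_t;

    static void
    flush_mappings(mapping_file_t *src)
    {
        size_t      len = src->num_mappings * 2 * sizeof(unsigned);
        char       *buf = malloc(len);
        char       *p = buf;

        /* collect the data to write out, freeing entries as we go */
        while (src->head != NULL)
        {
            mapping_t  *m = src->head;

            memcpy(p, &m->old_tid, sizeof(unsigned));
            p += sizeof(unsigned);
            memcpy(p, &m->new_tid, sizeof(unsigned));
            p += sizeof(unsigned);
            src->head = m->next;
            free(m);
            src->num_mappings--;
        }

        /* one positional write at the tracked offset, then advance it */
        if (pwrite(src->fd, buf, len, src->off) != (ssize_t) len)
        {
            perror("pwrite");
            exit(1);
        }
        src->off += len;
        free(buf);
    }

    int
    main(void)
    {
        mapping_file_t f = {0};
        mapping_t  *m = malloc(sizeof(mapping_t));

        m->old_tid = 1;
        m->new_tid = 42;
        m->next = NULL;
        f.head = m;
        f.num_mappings = 1;
        f.fd = open("map-demo", O_CREAT | O_WRONLY, 0600);
        flush_mappings(&f);
        close(f.fd);
        return 0;
    }
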
     947             : 
     948             : /*
     949             :  * Logical remapping part of end_heap_rewrite().
     950             :  */
     951             : static void
     952         308 : logical_end_heap_rewrite(RewriteState state)
     953             : {
     954             :     HASH_SEQ_STATUS seq_status;
     955             :     RewriteMappingFile *src;
     956             : 
     957             :     /* done, no logical rewrite in progress */
     958         308 :     if (!state->rs_logical_rewrite)
     959         268 :         return;
     960             : 
     961             :     /* writeout remaining in-memory entries */
     962          40 :     if (state->rs_num_rewrite_mappings > 0)
     963          18 :         logical_heap_rewrite_flush_mappings(state);
     964             : 
     965             :     /* Iterate over all mappings we have written and fsync the files. */
     966          40 :     hash_seq_init(&seq_status, state->rs_logical_mappings);
     967         258 :     while ((src = (RewriteMappingFile *) hash_seq_search(&seq_status)) != NULL)
     968             :     {
     969         178 :         if (FileSync(src->vfd, WAIT_EVENT_LOGICAL_REWRITE_SYNC) != 0)
     970           0 :             ereport(data_sync_elevel(ERROR),
     971             :                     (errcode_for_file_access(),
     972             :                      errmsg("could not fsync file \"%s\": %m", src->path)));
     973         178 :         FileClose(src->vfd);
     974             :     }
     975             :     /* memory context cleanup will deal with the rest */
     976             : }
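
logical_end_heap_rewrite() completes the durability story: any leftover in-memory entries are flushed, and then every mapping file is fsynced before the rewrite may finish, so a crash afterwards cannot lose mappings that decoding will rely on. A standalone sketch of that flush-then-fsync ordering follows; the path list is a hypothetical stand-in for the backend's hash table of RewriteMappingFile entries, and the O_RDWR open echoes the portability remark in CheckPointLogicalRewriteHeap() below:

    /* Flush-then-fsync sketch; the path list is a hypothetical stand-in. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void
    sync_all(const char *paths[], int n)
    {
        for (int i = 0; i < n; i++)
        {
            /* O_RDWR because some platforms cannot fsync read-only fds */
            int         fd = open(paths[i], O_CREAT | O_RDWR, 0600);

            if (fd < 0 || fsync(fd) != 0)
            {
                /* a failed fsync means the data may not be durable: bail */
                perror(paths[i]);
                exit(1);
            }
            close(fd);
        }
    }

    int
    main(void)
    {
        const char *paths[] = {"map-demo"};

        sync_all(paths, 1);
        return 0;
    }
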
     977             : 
     978             : /*
     979             :  * Log a single (old->new) mapping for 'xid'.
     980             :  */
     981             : static void
     982        1324 : logical_rewrite_log_mapping(RewriteState state, TransactionId xid,
     983             :                             LogicalRewriteMappingData *map)
     984             : {
     985             :     RewriteMappingFile *src;
     986             :     RewriteMappingDataEntry *pmap;
     987             :     Oid         relid;
     988             :     bool        found;
     989             : 
     990        1324 :     relid = RelationGetRelid(state->rs_old_rel);
     991             : 
     992             :     /* look for existing mappings for this 'mapped' xid */
     993        1324 :     src = hash_search(state->rs_logical_mappings, &xid,
     994             :                       HASH_ENTER, &found);
     995             : 
     996             :     /*
     997             :      * We haven't had to map anything for this xid yet, so create the
     998             :      * per-xid data structures.
     999             :      */
    1000        1324 :     if (!found)
    1001             :     {
    1002             :         char        path[MAXPGPATH];
    1003             :         Oid         dboid;
    1004             : 
    1005         178 :         if (state->rs_old_rel->rd_rel->relisshared)
    1006           0 :             dboid = InvalidOid;
    1007             :         else
    1008         178 :             dboid = MyDatabaseId;
    1009             : 
    1010         534 :         snprintf(path, MAXPGPATH,
    1011             :                  "pg_logical/mappings/" LOGICAL_REWRITE_FORMAT,
    1012             :                  dboid, relid,
    1013         178 :                  (uint32) (state->rs_begin_lsn >> 32),
    1014         178 :                  (uint32) state->rs_begin_lsn,
    1015             :                  xid, GetCurrentTransactionId());
    1016             : 
    1017         178 :         dlist_init(&src->mappings);
    1018         178 :         src->num_mappings = 0;
    1019         178 :         src->off = 0;
    1020         178 :         memcpy(src->path, path, sizeof(path));
    1021         178 :         src->vfd = PathNameOpenFile(path,
    1022             :                                     O_CREAT | O_EXCL | O_WRONLY | PG_BINARY);
    1023         178 :         if (src->vfd < 0)
    1024           0 :             ereport(ERROR,
    1025             :                     (errcode_for_file_access(),
    1026             :                      errmsg("could not create file \"%s\": %m", path)));
    1027             :     }
    1028             : 
    1029        1324 :     pmap = MemoryContextAlloc(state->rs_cxt,
    1030             :                               sizeof(RewriteMappingDataEntry));
    1031        1324 :     memcpy(&pmap->map, map, sizeof(LogicalRewriteMappingData));
    1032        1324 :     dlist_push_tail(&src->mappings, &pmap->node);
    1033        1324 :     src->num_mappings++;
    1034        1324 :     state->rs_num_rewrite_mappings++;
    1035             : 
    1036             :     /*
    1037             :      * Write out the buffer whenever we have accumulated too many in-memory
    1038             :      * entries across all mapping files.
    1039             :      */
    1040        1324 :     if (state->rs_num_rewrite_mappings >= 1000 /* arbitrary number */ )
    1041           0 :         logical_heap_rewrite_flush_mappings(state);
    1042        1324 : }
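
Each mapped xid gets its own file under pg_logical/mappings, named so that both the writer here and the checkpointer below can round-trip the identifying fields through snprintf()/sscanf(). A standalone sketch of that round trip, assuming LOGICAL_REWRITE_FORMAT expands to "map-%x-%x-%X_%X-%x-%x" (database oid, relation oid, the begin LSN as two 32-bit halves, the mapped xid, the creating xid); the sample values are invented:

    /* Round-trip of the mapping file name; values are invented. */
    #include <stdio.h>

    /* assumed to match rewriteheap.h */
    #define LOGICAL_REWRITE_FORMAT "map-%x-%x-%X_%X-%x-%x"

    int
    main(void)
    {
        char        path[64];
        unsigned    dboid = 16384;
        unsigned    relid = 24576;
        unsigned long long begin_lsn = 0x16B3748ULL;
        unsigned    xid = 1001;
        unsigned    create_xid = 1005;
        unsigned    hi,
                    lo;

        snprintf(path, sizeof(path), LOGICAL_REWRITE_FORMAT,
                 dboid, relid,
                 (unsigned) (begin_lsn >> 32), (unsigned) begin_lsn,
                 xid, create_xid);
        printf("file name: %s\n", path);

        /* the checkpointer parses the same six fields back out */
        if (sscanf(path, LOGICAL_REWRITE_FORMAT,
                   &dboid, &relid, &hi, &lo, &xid, &create_xid) != 6)
            return 1;
        printf("begin lsn: %X/%X\n", hi, lo);
        return 0;
    }
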
    1043             : 
    1044             : /*
    1045             :  * Perform logical remapping for a tuple that's mapped from old_tid to
    1046             :  * new_tuple->t_self by rewrite_heap_tuple(), if the tuple requires it.
    1047             :  */
    1048             : static void
    1049      215992 : logical_rewrite_heap_tuple(RewriteState state, ItemPointerData old_tid,
    1050             :                            HeapTuple new_tuple)
    1051             : {
    1052      215992 :     ItemPointerData new_tid = new_tuple->t_self;
    1053      215992 :     TransactionId cutoff = state->rs_logical_xmin;
    1054             :     TransactionId xmin;
    1055             :     TransactionId xmax;
    1056      215992 :     bool        do_log_xmin = false;
    1057      215992 :     bool        do_log_xmax = false;
    1058             :     LogicalRewriteMappingData map;
    1059             : 
    1060             :     /* no logical rewrite in progress, we don't need to log anything */
    1061      215992 :     if (!state->rs_logical_rewrite)
    1062      386872 :         return;
    1063             : 
    1064       43818 :     xmin = HeapTupleHeaderGetXmin(new_tuple->t_data);
    1065             :     /* use *GetUpdateXid to correctly deal with multixacts */
    1066       43818 :     xmax = HeapTupleHeaderGetUpdateXid(new_tuple->t_data);
    1067             : 
    1068             :     /*
    1069             :      * Log the mapping iff the tuple has been created recently.
    1070             :      */
    1071       43818 :     if (TransactionIdIsNormal(xmin) && !TransactionIdPrecedes(xmin, cutoff))
    1072         976 :         do_log_xmin = true;
    1073             : 
    1074       43818 :     if (!TransactionIdIsNormal(xmax))
    1075             :     {
    1076             :         /*
    1077             :          * no xmax is set, so there is no deletion to log; this check is
    1078             :          * sufficient
    1079             :          */
    1080             :     }
    1081         906 :     else if (HEAP_XMAX_IS_LOCKED_ONLY(new_tuple->t_data->t_infomask))
    1082             :     {
    1083             :         /* only locked, we don't care */
    1084             :     }
    1085         906 :     else if (!TransactionIdPrecedes(xmax, cutoff))
    1086             :     {
    1087             :         /* tuple has been deleted recently, log */
    1088         906 :         do_log_xmax = true;
    1089             :     }
    1090             : 
    1091             :     /* if neither needs to be logged, we're done */
    1092       43818 :     if (!do_log_xmin && !do_log_xmax)
    1093       42524 :         return;
    1094             : 
    1095             :     /* fill out mapping information */
    1096        1294 :     map.old_node = state->rs_old_rel->rd_node;
    1097        1294 :     map.old_tid = old_tid;
    1098        1294 :     map.new_node = state->rs_new_rel->rd_node;
    1099        1294 :     map.new_tid = new_tid;
    1100             : 
    1101             :     /* ---
    1102             :      * Now persist the mapping for the individual xids that are affected. We
    1103             :      * need to log for both xmin and xmax if they aren't the same transaction
    1104             :      * since the mapping files are per "affected" xid.
    1105             :      * We don't expend much effort on detecting whether xmin and xmax really
    1106             :      * belong to the same transaction; we just compare the xids directly,
    1107             :      * disregarding subtransactions. Logging too much is relatively harmless,
    1108             :      * and we could never perform the check fully anyway, since subtransaction
    1109             :      * data is thrown away during restarts.
    1110             :      * ---
    1111             :      */
    1112        1294 :     if (do_log_xmin)
    1113         976 :         logical_rewrite_log_mapping(state, xmin, &map);
    1114             :     /* separately log mapping for xmax unless it'd be redundant */
    1115        1294 :     if (do_log_xmax && !TransactionIdEquals(xmin, xmax))
    1116         348 :         logical_rewrite_log_mapping(state, xmax, &map);
    1117             : }
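
The xmin/xmax tests above hinge on TransactionIdPrecedes(), which orders 32-bit xids modulo wraparound by looking at the sign of their difference. A standalone model of that comparison and of the "created recently" decision (simplified: the real code also special-cases non-normal xids, locked-only xmax values, and multixacts):

    /* Wraparound-aware xid comparison, modeled on TransactionIdPrecedes(). */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t TransactionId;

    /* true if id1 logically precedes id2, comparing modulo 2^32 */
    static bool
    xid_precedes(TransactionId id1, TransactionId id2)
    {
        int32_t     diff = (int32_t) (id1 - id2);

        return diff < 0;
    }

    int
    main(void)
    {
        TransactionId cutoff = 1000;

        /* xmin at or after the cutoff: created recently, log the mapping */
        printf("xmin 1500: log? %d\n", !xid_precedes(1500, cutoff));    /* 1 */
        printf("xmin  900: log? %d\n", !xid_precedes(900, cutoff));     /* 0 */
        /* modulo arithmetic: 4294967290 precedes 5 despite being larger */
        printf("precedes(4294967290, 5)? %d\n",
               xid_precedes(4294967290u, 5));                           /* 1 */
        return 0;
    }
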
    1118             : 
    1119             : /*
    1120             :  * Replay XLOG_HEAP2_REWRITE records
    1121             :  */
    1122             : void
    1123           0 : heap_xlog_logical_rewrite(XLogReaderState *r)
    1124             : {
    1125             :     char        path[MAXPGPATH];
    1126             :     int         fd;
    1127             :     xl_heap_rewrite_mapping *xlrec;
    1128             :     uint32      len;
    1129             :     char       *data;
    1130             : 
    1131           0 :     xlrec = (xl_heap_rewrite_mapping *) XLogRecGetData(r);
    1132             : 
    1133           0 :     snprintf(path, MAXPGPATH,
    1134             :              "pg_logical/mappings/" LOGICAL_REWRITE_FORMAT,
    1135             :              xlrec->mapped_db, xlrec->mapped_rel,
    1136           0 :              (uint32) (xlrec->start_lsn >> 32),
    1137           0 :              (uint32) xlrec->start_lsn,
    1138           0 :              xlrec->mapped_xid, XLogRecGetXid(r));
    1139             : 
    1140           0 :     fd = OpenTransientFile(path,
    1141             :                            O_CREAT | O_WRONLY | PG_BINARY);
    1142           0 :     if (fd < 0)
    1143           0 :         ereport(ERROR,
    1144             :                 (errcode_for_file_access(),
    1145             :                  errmsg("could not create file \"%s\": %m", path)));
    1146             : 
    1147             :     /*
    1148             :      * Truncate all data that's not guaranteed to have been safely fsynced (by
    1149             :      * previous record or by the last checkpoint).
    1150             :      */
    1151           0 :     pgstat_report_wait_start(WAIT_EVENT_LOGICAL_REWRITE_TRUNCATE);
    1152           0 :     if (ftruncate(fd, xlrec->offset) != 0)
    1153           0 :         ereport(ERROR,
    1154             :                 (errcode_for_file_access(),
    1155             :                  errmsg("could not truncate file \"%s\" to %u: %m",
    1156             :                         path, (uint32) xlrec->offset)));
    1157           0 :     pgstat_report_wait_end();
    1158             : 
    1159             :     /* now seek to the position we want to write our data to */
    1160           0 :     if (lseek(fd, xlrec->offset, SEEK_SET) != xlrec->offset)
    1161           0 :         ereport(ERROR,
    1162             :                 (errcode_for_file_access(),
    1163             :                  errmsg("could not seek in file \"%s\": %m",
    1164             :                         path)));
    1165             : 
    1166           0 :     data = XLogRecGetData(r) + sizeof(*xlrec);
    1167             : 
    1168           0 :     len = xlrec->num_mappings * sizeof(LogicalRewriteMappingData);
    1169             : 
    1170             :     /* write out tail end of mapping file (again) */
    1171           0 :     errno = 0;
    1172           0 :     pgstat_report_wait_start(WAIT_EVENT_LOGICAL_REWRITE_MAPPING_WRITE);
    1173           0 :     if (write(fd, data, len) != len)
    1174             :     {
    1175             :         /* if write didn't set errno, assume problem is no disk space */
    1176           0 :         if (errno == 0)
    1177           0 :             errno = ENOSPC;
    1178           0 :         ereport(ERROR,
    1179             :                 (errcode_for_file_access(),
    1180             :                  errmsg("could not write to file \"%s\": %m", path)));
    1181             :     }
    1182           0 :     pgstat_report_wait_end();
    1183             : 
    1184             :     /*
    1185             :      * Now fsync all previously written data. We could improve on this by
    1186             :      * syncing only after the last write to each file, but the required
    1187             :      * bookkeeping doesn't seem worth the trouble.
    1188             :      */
    1189           0 :     pgstat_report_wait_start(WAIT_EVENT_LOGICAL_REWRITE_MAPPING_SYNC);
    1190           0 :     if (pg_fsync(fd) != 0)
    1191           0 :         ereport(data_sync_elevel(ERROR),
    1192             :                 (errcode_for_file_access(),
    1193             :                  errmsg("could not fsync file \"%s\": %m", path)));
    1194           0 :     pgstat_report_wait_end();
    1195             : 
    1196           0 :     if (CloseTransientFile(fd) != 0)
    1197           0 :         ereport(ERROR,
    1198             :                 (errcode_for_file_access(),
    1199             :                  errmsg("could not close file \"%s\": %m", path)));
    1200           0 : }
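
Replay is made idempotent by the truncate-write-fsync sequence above: everything before xlrec->offset is known to be durable, so the record can simply cut the file back to that offset and rewrite the tail, no matter how many times it is replayed. A standalone sketch of the pattern; the file name and payload are illustrative only:

    /* Idempotent truncate-write-fsync replay sketch; names are illustrative. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static void
    replay_tail(const char *path, off_t offset, const char *data, size_t len)
    {
        int         fd = open(path, O_CREAT | O_WRONLY, 0600);

        if (fd < 0)
        {
            perror("open");
            exit(1);
        }

        /* drop anything not known to be durable, then rewrite that tail */
        if (ftruncate(fd, offset) != 0 ||
            pwrite(fd, data, len, offset) != (ssize_t) len ||
            fsync(fd) != 0)
        {
            perror(path);
            exit(1);
        }
        close(fd);
    }

    int
    main(void)
    {
        const char  payload[] = "tail-of-mapping-file";

        /* replaying the same "record" twice leaves an identical file */
        replay_tail("map-demo", 0, payload, sizeof(payload) - 1);
        replay_tail("map-demo", 0, payload, sizeof(payload) - 1);
        return 0;
    }
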
    1201             : 
    1202             : /* ---
    1203             :  * Perform a checkpoint for logical rewrite mappings
    1204             :  *
    1205             :  * This serves two purposes:
    1206             :  * 1) Remove all mappings no longer needed, based on the logical restart LSN
    1207             :  * 2) Flush all remaining mappings to disk, so that replay after a checkpoint
    1208             :  *    only has to deal with the parts of a mapping that have been written out
    1209             :  *    after the checkpoint started.
    1210             :  * ---
    1211             :  */
    1212             : void
    1213        2832 : CheckPointLogicalRewriteHeap(void)
    1214             : {
    1215             :     XLogRecPtr  cutoff;
    1216             :     XLogRecPtr  redo;
    1217             :     DIR        *mappings_dir;
    1218             :     struct dirent *mapping_de;
    1219             :     char        path[MAXPGPATH + 20];
    1220             : 
    1221             :     /*
    1222             :      * We start off with the last redo pointer as a minimum. No new decoding
    1223             :      * slot will start before that, so it is a safe upper bound for removal.
    1224             :      */
    1225        2832 :     redo = GetRedoRecPtr();
    1226             : 
    1227             :     /* now check for the restart ptrs from existing slots */
    1228        2832 :     cutoff = ReplicationSlotsComputeLogicalRestartLSN();
    1229             : 
    1230             :     /* don't allow the cutoff to exceed the redo pointer */
    1231        2832 :     if (cutoff != InvalidXLogRecPtr && redo < cutoff)
    1232           0 :         cutoff = redo;
    1233             : 
    1234        2832 :     mappings_dir = AllocateDir("pg_logical/mappings");
    1235       11684 :     while ((mapping_de = ReadDir(mappings_dir, "pg_logical/mappings")) != NULL)
    1236             :     {
    1237             :         struct stat statbuf;
    1238             :         Oid         dboid;
    1239             :         Oid         relid;
    1240             :         XLogRecPtr  lsn;
    1241             :         TransactionId rewrite_xid;
    1242             :         TransactionId create_xid;
    1243             :         uint32      hi,
    1244             :                     lo;
    1245             : 
    1246        9208 :         if (strcmp(mapping_de->d_name, ".") == 0 ||
    1247        3188 :             strcmp(mapping_de->d_name, "..") == 0)
    1248       11328 :             continue;
    1249             : 
    1250         356 :         snprintf(path, sizeof(path), "pg_logical/mappings/%s", mapping_de->d_name);
    1251         356 :         if (lstat(path, &statbuf) == 0 && !S_ISREG(statbuf.st_mode))
    1252           0 :             continue;
    1253             : 
    1254             :         /* Skip over files that cannot be ours. */
    1255         356 :         if (strncmp(mapping_de->d_name, "map-", 4) != 0)
    1256           0 :             continue;
    1257             : 
    1258         356 :         if (sscanf(mapping_de->d_name, LOGICAL_REWRITE_FORMAT,
    1259             :                    &dboid, &relid, &hi, &lo, &rewrite_xid, &create_xid) != 6)
    1260           0 :             elog(ERROR, "could not parse filename \"%s\"", mapping_de->d_name);
    1261             : 
    1262         356 :         lsn = ((uint64) hi) << 32 | lo;
    1263             : 
    1264         356 :         if (lsn < cutoff || cutoff == InvalidXLogRecPtr)
    1265             :         {
    1266         178 :             elog(DEBUG1, "removing logical rewrite file \"%s\"", path);
    1267         356 :             if (unlink(path) < 0)
    1268           0 :                 ereport(ERROR,
    1269             :                         (errcode_for_file_access(),
    1270             :                          errmsg("could not remove file \"%s\": %m", path)));
    1271             :         }
    1272             :         else
    1273             :         {
    1274             :             /* on some operating systems fsyncing a file requires O_RDWR */
    1275         178 :             int         fd = OpenTransientFile(path, O_RDWR | PG_BINARY);
    1276             : 
    1277             :             /*
    1278             :              * The file cannot vanish due to concurrency since this function
    1279             :              * is the only one removing logical mappings and it's run while
    1280             :              * CheckpointLock is held exclusively.
    1281             :              */
    1282         178 :             if (fd < 0)
    1283           0 :                 ereport(ERROR,
    1284             :                         (errcode_for_file_access(),
    1285             :                          errmsg("could not open file \"%s\": %m", path)));
    1286             : 
    1287             :             /*
    1288             :              * We could try to avoid fsyncing files that either haven't
    1289             :              * changed or have only been created since the checkpoint's start,
    1290             :              * but it's currently not deemed worth the effort.
    1291             :              */
    1292         178 :             pgstat_report_wait_start(WAIT_EVENT_LOGICAL_REWRITE_CHECKPOINT_SYNC);
    1293         178 :             if (pg_fsync(fd) != 0)
    1294           0 :                 ereport(data_sync_elevel(ERROR),
    1295             :                         (errcode_for_file_access(),
    1296             :                          errmsg("could not fsync file \"%s\": %m", path)));
    1297         178 :             pgstat_report_wait_end();
    1298             : 
    1299         178 :             if (CloseTransientFile(fd) != 0)
    1300           0 :                 ereport(ERROR,
    1301             :                         (errcode_for_file_access(),
    1302             :                          errmsg("could not close file \"%s\": %m", path)));
    1303             :         }
    1304             :     }
    1305        2832 :     FreeDir(mappings_dir);
    1306        2832 : }
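
The keep-or-remove decision in the loop above reduces to rebuilding the 64-bit LSN from the two 32-bit halves parsed out of the file name and comparing it against the cutoff: files whose LSN precedes the cutoff are unlinked, everything else is fsynced. A standalone sketch with invented values:

    /* Keep-or-remove decision on a reconstructed LSN; cutoff is invented. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define InvalidXLogRecPtr UINT64_C(0)

    static bool
    should_remove(uint32_t hi, uint32_t lo, uint64_t cutoff)
    {
        uint64_t    lsn = ((uint64_t) hi) << 32 | lo;

        /* an invalid cutoff means no slot needs any mapping: remove all */
        return lsn < cutoff || cutoff == InvalidXLogRecPtr;
    }

    int
    main(void)
    {
        uint64_t    cutoff = (UINT64_C(1) << 32) | 0x6B3748;    /* 1/6B3748 */

        printf("%d\n", should_remove(0, 0xFFFF, cutoff));           /* 1: remove */
        printf("%d\n", should_remove(2, 0, cutoff));                /* 0: fsync */
        printf("%d\n", should_remove(2, 0, InvalidXLogRecPtr));     /* 1 */
        return 0;
    }
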

Generated by: LCOV version 1.13