/*-------------------------------------------------------------------------
 *
 * inval.c
 *	  POSTGRES cache invalidation dispatcher code.
 *
 * This is subtle stuff, so pay attention:
 *
 * When a tuple is updated or deleted, our standard visibility rules
 * consider that it is *still valid* so long as we are in the same command,
 * ie, until the next CommandCounterIncrement() or transaction commit.
 * (See access/heap/heapam_visibility.c, and note that system catalogs are
 * generally scanned under the most current snapshot available, rather than
 * the transaction snapshot.)  At the command boundary, the old tuple stops
 * being valid and the new version, if any, becomes valid.  Therefore,
 * we cannot simply flush a tuple from the system caches during heap_update()
 * or heap_delete().  The tuple is still good at that point; what's more,
 * even if we did flush it, it might be reloaded into the caches by a later
 * request in the same command.  So the correct behavior is to keep a list
 * of outdated (updated/deleted) tuples and then do the required cache
 * flushes at the next command boundary.  We must also keep track of
 * inserted tuples so that we can flush "negative" cache entries that match
 * the new tuples; again, that mustn't happen until end of command.
 *
 * Once we have finished the command, we still need to remember inserted
 * tuples (including new versions of updated tuples), so that we can flush
 * them from the caches if we abort the transaction.  Similarly, we'd better
 * be able to flush "negative" cache entries that may have been loaded in
 * place of deleted tuples, so we still need the deleted ones too.
 *
 * If we successfully complete the transaction, we have to broadcast all
 * these invalidation events to other backends (via the SI message queue)
 * so that they can flush obsolete entries from their caches.  Note we have
 * to record the transaction commit before sending SI messages, otherwise
 * the other backends won't see our updated tuples as good.
 *
 * When a subtransaction aborts, we can process and discard any events
 * it has queued.  When a subtransaction commits, we just add its events
 * to the pending lists of the parent transaction.
 *
 * In short, we need to remember until xact end every insert or delete
 * of a tuple that might be in the system caches.  Updates are treated as
 * two events, delete + insert, for simplicity.  (If the update doesn't
 * change the tuple hash value, catcache.c optimizes this into one event.)
 *
 * We do not need to register EVERY tuple operation in this way, just those
 * on tuples in relations that have associated catcaches.  We do, however,
 * have to register every operation on every tuple that *could* be in a
 * catcache, whether or not it currently is in our cache.  Also, if the
 * tuple is in a relation that has multiple catcaches, we need to register
 * an invalidation message for each such catcache.  catcache.c's
 * PrepareToInvalidateCacheTuple() routine provides the knowledge of which
 * catcaches may need invalidation for a given tuple.
 *
 * Also, whenever we see an operation on a pg_class, pg_attribute, or
 * pg_index tuple, we register a relcache flush operation for the relation
 * described by that tuple (as specified in CacheInvalidateHeapTuple()).
 * Likewise for pg_constraint tuples for foreign keys on relations.
 *
 * We keep the relcache flush requests in lists separate from the catcache
 * tuple flush requests.  This allows us to issue all the pending catcache
 * flushes before we issue relcache flushes, which saves us from loading
 * a catcache tuple during relcache load only to flush it again right away.
 * Also, we avoid queuing multiple relcache flush requests for the same
 * relation, since a relcache flush is relatively expensive to do.
 * (XXX is it worth testing likewise for duplicate catcache flush entries?
 * Probably not.)
 *
 * Many subsystems own higher-level caches that depend on relcache and/or
 * catcache, and they register callbacks here to invalidate their caches.
 * While building a higher-level cache entry, a backend may receive a
 * callback for the being-built entry or one of its dependencies.  This
 * implies the new higher-level entry would be born stale, and it might
 * remain stale for the life of the backend.  Many caches do not prevent
 * that.  They rely on DDL for can't-miss catalog changes taking
 * AccessExclusiveLock on suitable objects.  (For a change made with less
 * locking, backends might never read the change.)  The relation cache,
 * however, needs to reflect changes from CREATE INDEX CONCURRENTLY no later
 * than the beginning of the next transaction.  Hence, when a relevant
 * invalidation callback arrives during a build, relcache.c reattempts that
 * build.  Caches with similar needs could do likewise.
 *
 * If a relcache flush is issued for a system relation that we preload
 * from the relcache init file, we must also delete the init file so that
 * it will be rebuilt during the next backend restart.  The actual work of
 * manipulating the init file is in relcache.c, but we keep track of the
 * need for it here.
 *
 * Currently, inval messages are sent without regard for the possibility
 * that the object described by the catalog tuple might be a session-local
 * object such as a temporary table.  This is because (1) this code has
 * no practical way to tell the difference, and (2) it is not certain that
 * other backends don't have catalog cache or even relcache entries for
 * such tables, anyway; there is nothing that prevents that.  It might be
 * worth trying to avoid sending such inval traffic in the future, if those
 * problems can be overcome cheaply.
 *
 * When making a nontransactional change to a cacheable object, we must
 * likewise send the invalidation immediately, before ending the change's
 * critical section.  This includes inplace heap updates, relmap, and smgr.
 *
 * When wal_level=logical, write invalidations into WAL at each command end to
 * support the decoding of the in-progress transactions.  See
 * CommandEndInvalidationMessages.
 *
 * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * IDENTIFICATION
 *	  src/backend/utils/cache/inval.c
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"

#include <limits.h>

#include "access/htup_details.h"
#include "access/xact.h"
#include "access/xloginsert.h"
#include "catalog/catalog.h"
#include "catalog/pg_constraint.h"
#include "miscadmin.h"
#include "storage/sinval.h"
#include "storage/smgr.h"
#include "utils/catcache.h"
#include "utils/injection_point.h"
#include "utils/inval.h"
#include "utils/memdebug.h"
#include "utils/memutils.h"
#include "utils/rel.h"
#include "utils/relmapper.h"
#include "utils/snapmgr.h"
#include "utils/syscache.h"

/*
 * Pending requests are stored as ready-to-send SharedInvalidationMessages.
 * We keep the messages themselves in arrays in TopTransactionContext (there
 * are separate arrays for catcache and relcache messages).  For transactional
 * messages, control information is kept in a chain of TransInvalidationInfo
 * structs, also allocated in TopTransactionContext.  (We could keep a
 * subtransaction's TransInvalidationInfo in its CurTransactionContext; but
 * that's more wasteful not less so, since in very many scenarios it'd be the
 * only allocation in the subtransaction's CurTransactionContext.)  For
 * inplace update messages, control information appears in an
 * InvalidationInfo, allocated in CurrentMemoryContext.
 *
 * We can store the message arrays densely, and yet avoid moving data around
 * within an array, because within any one subtransaction we need only
 * distinguish between messages emitted by prior commands and those emitted
 * by the current command.  Once a command completes and we've done local
 * processing on its messages, we can fold those into the prior-commands
 * messages just by changing array indexes in the TransInvalidationInfo
 * struct.  Similarly, we need to distinguish messages of prior
 * subtransactions from those of the current subtransaction only until the
 * subtransaction completes, after which we adjust the array indexes in the
 * parent's TransInvalidationInfo to include the subtransaction's messages.
 * Inplace invalidations don't need a concept of command or subtransaction
 * boundaries, since we send them during the WAL insertion critical section.
 *
 * The ordering of the individual messages within a command's or
 * subtransaction's output is not considered significant, although this
 * implementation happens to preserve the order in which they were queued.
 * (Previous versions of this code did not preserve it.)
 *
 * For notational convenience, control information is kept in two-element
 * arrays, the first for catcache messages and the second for relcache
 * messages.
 */
#define CatCacheMsgs 0
#define RelCacheMsgs 1

/* Pointers to main arrays in TopTransactionContext */
typedef struct InvalMessageArray
{
	SharedInvalidationMessage *msgs;	/* palloc'd array (can be expanded) */
	int			maxmsgs;		/* current allocated size of array */
} InvalMessageArray;

static InvalMessageArray InvalMessageArrays[2];

/* Control information for one logical group of messages */
typedef struct InvalidationMsgsGroup
{
	int			firstmsg[2];	/* first index in relevant array */
	int			nextmsg[2];		/* last+1 index */
} InvalidationMsgsGroup;

/* Macros to help preserve InvalidationMsgsGroup abstraction */
#define SetSubGroupToFollow(targetgroup, priorgroup, subgroup) \
	do { \
		(targetgroup)->firstmsg[subgroup] = \
			(targetgroup)->nextmsg[subgroup] = \
			(priorgroup)->nextmsg[subgroup]; \
	} while (0)

#define SetGroupToFollow(targetgroup, priorgroup) \
	do { \
		SetSubGroupToFollow(targetgroup, priorgroup, CatCacheMsgs); \
		SetSubGroupToFollow(targetgroup, priorgroup, RelCacheMsgs); \
	} while (0)

#define NumMessagesInSubGroup(group, subgroup) \
	((group)->nextmsg[subgroup] - (group)->firstmsg[subgroup])

#define NumMessagesInGroup(group) \
	(NumMessagesInSubGroup(group, CatCacheMsgs) + \
	 NumMessagesInSubGroup(group, RelCacheMsgs))


/*----------------
 * Transactional invalidation messages are divided into two groups:
 *	1) events so far in current command, not yet reflected to caches.
 *	2) events in previous commands of current transaction; these have
 *	   been reflected to local caches, and must be either broadcast to
 *	   other backends or rolled back from local cache when we commit
 *	   or abort the transaction.
 * Actually, we need such groups for each level of nested transaction,
 * so that we can discard events from an aborted subtransaction.  When
 * a subtransaction commits, we append its events to the parent's groups.
 *
 * The relcache-file-invalidated flag can just be a simple boolean,
 * since we only act on it at transaction commit; we don't care which
 * command of the transaction set it.
 *----------------
 */

/* fields common to both transactional and inplace invalidation */
typedef struct InvalidationInfo
{
	/* Events emitted by current command */
	InvalidationMsgsGroup CurrentCmdInvalidMsgs;

	/* init file must be invalidated? */
	bool		RelcacheInitFileInval;
} InvalidationInfo;

/* subclass adding fields specific to transactional invalidation */
typedef struct TransInvalidationInfo
{
	/* Base class */
	struct InvalidationInfo ii;

	/* Events emitted by previous commands of this (sub)transaction */
	InvalidationMsgsGroup PriorCmdInvalidMsgs;

	/* Back link to parent transaction's info */
	struct TransInvalidationInfo *parent;

	/* Subtransaction nesting depth */
	int			my_level;
} TransInvalidationInfo;

static TransInvalidationInfo *transInvalInfo = NULL;

static InvalidationInfo *inplaceInvalInfo = NULL;

/* GUC storage */
int			debug_discard_caches = 0;

/*
 * Dynamically-registered callback functions.  Current implementation
 * assumes there won't be enough of these to justify a dynamically resizable
 * array; it'd be easy to improve that if needed.
 *
 * To avoid searching in CallSyscacheCallbacks, all callbacks for a given
 * syscache are linked into a list pointed to by syscache_callback_links[id].
 * The link values are syscache_callback_list[] index plus 1, or 0 for none.
 */

#define MAX_SYSCACHE_CALLBACKS 64
#define MAX_RELCACHE_CALLBACKS 10

static struct SYSCACHECALLBACK
{
	int16		id;				/* cache number */
	int16		link;			/* next callback index+1 for same cache */
	SyscacheCallbackFunction function;
	Datum		arg;
}			syscache_callback_list[MAX_SYSCACHE_CALLBACKS];

static int16 syscache_callback_links[SysCacheSize];

static int	syscache_callback_count = 0;

static struct RELCACHECALLBACK
{
	RelcacheCallbackFunction function;
	Datum		arg;
}			relcache_callback_list[MAX_RELCACHE_CALLBACKS];

static int	relcache_callback_count = 0;

/* ----------------------------------------------------------------
 *				Invalidation subgroup support functions
 * ----------------------------------------------------------------
 */

/*
 * AddInvalidationMessage
 *		Add an invalidation message to a (sub)group.
 *
 * The group must be the last active one, since we assume we can add to the
 * end of the relevant InvalMessageArray.
 *
 * subgroup must be CatCacheMsgs or RelCacheMsgs.
 */
static void
AddInvalidationMessage(InvalidationMsgsGroup *group, int subgroup,
					   const SharedInvalidationMessage *msg)
{
	InvalMessageArray *ima = &InvalMessageArrays[subgroup];
	int			nextindex = group->nextmsg[subgroup];

	if (nextindex >= ima->maxmsgs)
	{
		if (ima->msgs == NULL)
		{
			/* Create new storage array in TopTransactionContext */
			int			reqsize = 32;	/* arbitrary */

			ima->msgs = (SharedInvalidationMessage *)
				MemoryContextAlloc(TopTransactionContext,
								   reqsize * sizeof(SharedInvalidationMessage));
			ima->maxmsgs = reqsize;
			Assert(nextindex == 0);
		}
		else
		{
			/* Enlarge storage array */
			int			reqsize = 2 * ima->maxmsgs;

			ima->msgs = (SharedInvalidationMessage *)
				repalloc(ima->msgs,
						 reqsize * sizeof(SharedInvalidationMessage));
			ima->maxmsgs = reqsize;
		}
	}
	/* Okay, add message to current group */
	ima->msgs[nextindex] = *msg;
	group->nextmsg[subgroup]++;
}

/*
 * Append one subgroup of invalidation messages to another, resetting
 * the source subgroup to empty.
 */
static void
AppendInvalidationMessageSubGroup(InvalidationMsgsGroup *dest,
								  InvalidationMsgsGroup *src,
								  int subgroup)
{
	/* Messages must be adjacent in main array */
	Assert(dest->nextmsg[subgroup] == src->firstmsg[subgroup]);

	/* ... which makes this easy: */
	dest->nextmsg[subgroup] = src->nextmsg[subgroup];

	/*
	 * This is handy for some callers and irrelevant for others.  But we do it
	 * always, reasoning that it's bad to leave different groups pointing at
	 * the same fragment of the message array.
	 */
	SetSubGroupToFollow(src, dest, subgroup);
}

/*
 * Process a subgroup of invalidation messages.
 *
 * This is a macro that executes the given code fragment for each message in
 * a message subgroup.  The fragment should refer to the message as *msg.
 */
#define ProcessMessageSubGroup(group, subgroup, codeFragment) \
	do { \
		int		_msgindex = (group)->firstmsg[subgroup]; \
		int		_endmsg = (group)->nextmsg[subgroup]; \
		for (; _msgindex < _endmsg; _msgindex++) \
		{ \
			SharedInvalidationMessage *msg = \
				&InvalMessageArrays[subgroup].msgs[_msgindex]; \
			codeFragment; \
		} \
	} while (0)

/*
 * Process a subgroup of invalidation messages as an array.
 *
 * As above, but the code fragment can handle an array of messages.
 * The fragment should refer to the messages as msgs[], with n entries.
 */
#define ProcessMessageSubGroupMulti(group, subgroup, codeFragment) \
	do { \
		int		n = NumMessagesInSubGroup(group, subgroup); \
		if (n > 0) { \
			SharedInvalidationMessage *msgs = \
				&InvalMessageArrays[subgroup].msgs[(group)->firstmsg[subgroup]]; \
			codeFragment; \
		} \
	} while (0)


/* ----------------------------------------------------------------
 *				Invalidation group support functions
 *
 * These routines understand about the division of a logical invalidation
 * group into separate physical arrays for catcache and relcache entries.
 * ----------------------------------------------------------------
 */

/*
 * Add a catcache inval entry
 */
static void
AddCatcacheInvalidationMessage(InvalidationMsgsGroup *group,
							   int id, uint32 hashValue, Oid dbId)
{
	SharedInvalidationMessage msg;

	Assert(id < CHAR_MAX);
	msg.cc.id = (int8) id;
	msg.cc.dbId = dbId;
	msg.cc.hashValue = hashValue;

	/*
	 * Mark the padding bytes in SharedInvalidationMessage structs as
	 * defined.  Otherwise the sinvaladt.c ringbuffer, which is accessed by
	 * multiple processes, will cause spurious valgrind warnings about
	 * undefined memory being used.  That's because valgrind remembers the
	 * undefined bytes from the last local process's store, not realizing
	 * that another process has written since, filling the previously
	 * uninitialized bytes.
	 */
	VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));

	AddInvalidationMessage(group, CatCacheMsgs, &msg);
}

/*
 * Add a whole-catalog inval entry
 */
static void
AddCatalogInvalidationMessage(InvalidationMsgsGroup *group,
							  Oid dbId, Oid catId)
{
	SharedInvalidationMessage msg;

	msg.cat.id = SHAREDINVALCATALOG_ID;
	msg.cat.dbId = dbId;
	msg.cat.catId = catId;
	/* check AddCatcacheInvalidationMessage() for an explanation */
	VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));

	AddInvalidationMessage(group, CatCacheMsgs, &msg);
}

/*
 * Add a relcache inval entry
 */
static void
AddRelcacheInvalidationMessage(InvalidationMsgsGroup *group,
							   Oid dbId, Oid relId)
{
	SharedInvalidationMessage msg;

	/*
	 * Don't add a duplicate item.  We assume dbId need not be checked because
	 * it will never change.  InvalidOid for relId means all relations so we
	 * don't need to add individual ones when it is present.
	 */
	ProcessMessageSubGroup(group, RelCacheMsgs,
						   if (msg->rc.id == SHAREDINVALRELCACHE_ID &&
							   (msg->rc.relId == relId ||
								msg->rc.relId == InvalidOid))
						   return);

	/* OK, add the item */
	msg.rc.id = SHAREDINVALRELCACHE_ID;
	msg.rc.dbId = dbId;
	msg.rc.relId = relId;
	/* check AddCatcacheInvalidationMessage() for an explanation */
	VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));

	AddInvalidationMessage(group, RelCacheMsgs, &msg);
}

/*
 * Add a snapshot inval entry
 *
 * We put these into the relcache subgroup for simplicity.
 */
static void
AddSnapshotInvalidationMessage(InvalidationMsgsGroup *group,
							   Oid dbId, Oid relId)
{
	SharedInvalidationMessage msg;

	/* Don't add a duplicate item */
	/* We assume dbId need not be checked because it will never change */
	ProcessMessageSubGroup(group, RelCacheMsgs,
						   if (msg->sn.id == SHAREDINVALSNAPSHOT_ID &&
							   msg->sn.relId == relId)
						   return);

	/* OK, add the item */
	msg.sn.id = SHAREDINVALSNAPSHOT_ID;
	msg.sn.dbId = dbId;
	msg.sn.relId = relId;
	/* check AddCatcacheInvalidationMessage() for an explanation */
	VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));

	AddInvalidationMessage(group, RelCacheMsgs, &msg);
}

/*
 * Append one group of invalidation messages to another, resetting
 * the source group to empty.
 */
static void
AppendInvalidationMessages(InvalidationMsgsGroup *dest,
						   InvalidationMsgsGroup *src)
{
	AppendInvalidationMessageSubGroup(dest, src, CatCacheMsgs);
	AppendInvalidationMessageSubGroup(dest, src, RelCacheMsgs);
}

/*
 * Execute the given function for all the messages in an invalidation group.
 * The group is not altered.
 *
 * catcache entries are processed first, for reasons mentioned above.
 */
static void
ProcessInvalidationMessages(InvalidationMsgsGroup *group,
							void (*func) (SharedInvalidationMessage *msg))
{
	ProcessMessageSubGroup(group, CatCacheMsgs, func(msg));
	ProcessMessageSubGroup(group, RelCacheMsgs, func(msg));
}

/*
 * As above, but the function is able to process an array of messages
 * rather than just one at a time.
 */
static void
ProcessInvalidationMessagesMulti(InvalidationMsgsGroup *group,
								 void (*func) (const SharedInvalidationMessage *msgs, int n))
{
	ProcessMessageSubGroupMulti(group, CatCacheMsgs, func(msgs, n));
	ProcessMessageSubGroupMulti(group, RelCacheMsgs, func(msgs, n));
}

/* ----------------------------------------------------------------
 *					  private support functions
 * ----------------------------------------------------------------
 */

/*
 * RegisterCatcacheInvalidation
 *
 * Register an invalidation event for a catcache tuple entry.
 */
static void
RegisterCatcacheInvalidation(int cacheId,
							 uint32 hashValue,
							 Oid dbId,
							 void *context)
{
	InvalidationInfo *info = (InvalidationInfo *) context;

	AddCatcacheInvalidationMessage(&info->CurrentCmdInvalidMsgs,
								   cacheId, hashValue, dbId);
}

/*
 * RegisterCatalogInvalidation
 *
 * Register an invalidation event for all catcache entries from a catalog.
 */
static void
RegisterCatalogInvalidation(InvalidationInfo *info, Oid dbId, Oid catId)
{
	AddCatalogInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, catId);
}

/*
 * RegisterRelcacheInvalidation
 *
 * As above, but register a relcache invalidation event.
 */
static void
RegisterRelcacheInvalidation(InvalidationInfo *info, Oid dbId, Oid relId)
{
	AddRelcacheInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, relId);

	/*
	 * Most of the time, relcache invalidation is associated with system
	 * catalog updates, but there are a few cases where it isn't.  Quick hack
	 * to ensure that the next CommandCounterIncrement() will think that we
	 * need to do CommandEndInvalidationMessages().
	 */
	(void) GetCurrentCommandId(true);

	/*
	 * If the relation being invalidated is one of those cached in a relcache
	 * init file, mark that we need to zap that file at commit.  For
	 * simplicity invalidations for a specific database always invalidate the
	 * shared file as well.  Also zap when we are invalidating whole relcache.
	 */
	if (relId == InvalidOid || RelationIdIsInInitFile(relId))
		info->RelcacheInitFileInval = true;
}

/*
 * RegisterSnapshotInvalidation
 *
 * Register an invalidation event for MVCC scans against a given catalog.
 * Only needed for catalogs that don't have catcaches.
 */
static void
RegisterSnapshotInvalidation(InvalidationInfo *info, Oid dbId, Oid relId)
{
	AddSnapshotInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, relId);
}

/*
 * PrepareInvalidationState
 *		Initialize inval data for the current (sub)transaction.
 */
static InvalidationInfo *
PrepareInvalidationState(void)
{
	TransInvalidationInfo *myInfo;

	Assert(IsTransactionState());
	/* Can't queue transactional message while collecting inplace messages. */
	Assert(inplaceInvalInfo == NULL);

	if (transInvalInfo != NULL &&
		transInvalInfo->my_level == GetCurrentTransactionNestLevel())
		return (InvalidationInfo *) transInvalInfo;

	myInfo = (TransInvalidationInfo *)
		MemoryContextAllocZero(TopTransactionContext,
							   sizeof(TransInvalidationInfo));
	myInfo->parent = transInvalInfo;
	myInfo->my_level = GetCurrentTransactionNestLevel();

	/* Now, do we have a previous stack entry? */
	if (transInvalInfo != NULL)
	{
		/* Yes; this one should be for a deeper nesting level. */
		Assert(myInfo->my_level > transInvalInfo->my_level);

		/*
		 * The parent (sub)transaction must not have any current (i.e.,
		 * not-yet-locally-processed) messages.  If it did, we'd have a
		 * semantic problem: the new subtransaction presumably ought not be
		 * able to see those events yet, but since the CommandCounter is
		 * linear, that can't work once the subtransaction advances the
		 * counter.  This is a convenient place to check for that, as well as
		 * being important to keep management of the message arrays simple.
		 */
		if (NumMessagesInGroup(&transInvalInfo->ii.CurrentCmdInvalidMsgs) != 0)
			elog(ERROR, "cannot start a subtransaction when there are unprocessed inval messages");

		/*
		 * MemoryContextAllocZero set firstmsg = nextmsg = 0 in each group,
		 * which is fine for the first (sub)transaction, but otherwise we need
		 * to update them to follow whatever is already in the arrays.
		 */
		SetGroupToFollow(&myInfo->PriorCmdInvalidMsgs,
						 &transInvalInfo->ii.CurrentCmdInvalidMsgs);
		SetGroupToFollow(&myInfo->ii.CurrentCmdInvalidMsgs,
						 &myInfo->PriorCmdInvalidMsgs);
	}
	else
	{
		/*
		 * Here, we need only clear any array pointers left over from a prior
		 * transaction.
		 */
		InvalMessageArrays[CatCacheMsgs].msgs = NULL;
		InvalMessageArrays[CatCacheMsgs].maxmsgs = 0;
		InvalMessageArrays[RelCacheMsgs].msgs = NULL;
		InvalMessageArrays[RelCacheMsgs].maxmsgs = 0;
	}

	transInvalInfo = myInfo;
	return (InvalidationInfo *) myInfo;
}

/*
 * PrepareInplaceInvalidationState
 *		Initialize inval data for an inplace update.
 *
 * See previous function for more background.
 */
static InvalidationInfo *
PrepareInplaceInvalidationState(void)
{
	InvalidationInfo *myInfo;

	Assert(IsTransactionState());
	/* limit of one inplace update under assembly */
	Assert(inplaceInvalInfo == NULL);

	/* gone after WAL insertion CritSection ends, so use current context */
	myInfo = (InvalidationInfo *) palloc0(sizeof(InvalidationInfo));

	/* Stash our messages past end of the transactional messages, if any. */
	if (transInvalInfo != NULL)
		SetGroupToFollow(&myInfo->CurrentCmdInvalidMsgs,
						 &transInvalInfo->ii.CurrentCmdInvalidMsgs);
	else
	{
		InvalMessageArrays[CatCacheMsgs].msgs = NULL;
		InvalMessageArrays[CatCacheMsgs].maxmsgs = 0;
		InvalMessageArrays[RelCacheMsgs].msgs = NULL;
		InvalMessageArrays[RelCacheMsgs].maxmsgs = 0;
	}

	inplaceInvalInfo = myInfo;
	return myInfo;
}

/* ----------------------------------------------------------------
 *					   public functions
 * ----------------------------------------------------------------
 */

void
InvalidateSystemCachesExtended(bool debug_discard)
{
	int			i;

	InvalidateCatalogSnapshot();
	ResetCatalogCachesExt(debug_discard);
	RelationCacheInvalidate(debug_discard); /* gets smgr and relmap too */

	for (i = 0; i < syscache_callback_count; i++)
	{
		struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i;

		ccitem->function(ccitem->arg, ccitem->id, 0);
	}

	for (i = 0; i < relcache_callback_count; i++)
	{
		struct RELCACHECALLBACK *ccitem = relcache_callback_list + i;

		ccitem->function(ccitem->arg, InvalidOid);
	}
}

/*
 * LocalExecuteInvalidationMessage
 *
 * Process a single invalidation message (which could be of any type).
 * Only the local caches are flushed; this does not transmit the message
 * to other backends.
 */
void
LocalExecuteInvalidationMessage(SharedInvalidationMessage *msg)
{
	if (msg->id >= 0)
	{
		if (msg->cc.dbId == MyDatabaseId || msg->cc.dbId == InvalidOid)
		{
			InvalidateCatalogSnapshot();

			SysCacheInvalidate(msg->cc.id, msg->cc.hashValue);

			CallSyscacheCallbacks(msg->cc.id, msg->cc.hashValue);
		}
	}
	else if (msg->id == SHAREDINVALCATALOG_ID)
	{
		if (msg->cat.dbId == MyDatabaseId || msg->cat.dbId == InvalidOid)
		{
			InvalidateCatalogSnapshot();

			CatalogCacheFlushCatalog(msg->cat.catId);

			/* CatalogCacheFlushCatalog calls CallSyscacheCallbacks as needed */
		}
	}
	else if (msg->id == SHAREDINVALRELCACHE_ID)
	{
		if (msg->rc.dbId == MyDatabaseId || msg->rc.dbId == InvalidOid)
		{
			int			i;

			if (msg->rc.relId == InvalidOid)
				RelationCacheInvalidate(false);
			else
				RelationCacheInvalidateEntry(msg->rc.relId);

			for (i = 0; i < relcache_callback_count; i++)
			{
				struct RELCACHECALLBACK *ccitem = relcache_callback_list + i;

				ccitem->function(ccitem->arg, msg->rc.relId);
			}
		}
	}
	else if (msg->id == SHAREDINVALSMGR_ID)
	{
		/*
		 * We could have smgr entries for relations of other databases, so no
		 * short-circuit test is possible here.
		 */
		RelFileLocatorBackend rlocator;

		rlocator.locator = msg->sm.rlocator;
		rlocator.backend = (msg->sm.backend_hi << 16) | (int) msg->sm.backend_lo;
		smgrreleaserellocator(rlocator);
	}
	else if (msg->id == SHAREDINVALRELMAP_ID)
	{
		/* We only care about our own database and shared catalogs */
		if (msg->rm.dbId == InvalidOid)
			RelationMapInvalidate(true);
		else if (msg->rm.dbId == MyDatabaseId)
			RelationMapInvalidate(false);
	}
	else if (msg->id == SHAREDINVALSNAPSHOT_ID)
	{
		/* We only care about our own database and shared catalogs */
		if (msg->sn.dbId == InvalidOid)
			InvalidateCatalogSnapshot();
		else if (msg->sn.dbId == MyDatabaseId)
			InvalidateCatalogSnapshot();
	}
	else
		elog(FATAL, "unrecognized SI message ID: %d", msg->id);
}
837 :
838 : /*
839 : * InvalidateSystemCaches
840 : *
841 : * This blows away all tuples in the system catalog caches and
842 : * all the cached relation descriptors and smgr cache entries.
843 : * Relation descriptors that have positive refcounts are then rebuilt.
844 : *
845 : * We call this when we see a shared-inval-queue overflow signal,
846 : * since that tells us we've lost some shared-inval messages and hence
847 : * don't know what needs to be invalidated.
848 : */
849 : void
850 4094 : InvalidateSystemCaches(void)
851 : {
852 4094 : InvalidateSystemCachesExtended(false);
853 4094 : }
854 :
855 : /*
856 : * AcceptInvalidationMessages
857 : * Read and process invalidation messages from the shared invalidation
858 : * message queue.
859 : *
860 : * Note:
861 : * This should be called as the first step in processing a transaction.
862 : */
863 : void
864 33208632 : AcceptInvalidationMessages(void)
865 : {
866 33208632 : ReceiveSharedInvalidMessages(LocalExecuteInvalidationMessage,
867 : InvalidateSystemCaches);
868 :
869 : /*----------
870 : * Test code to force cache flushes anytime a flush could happen.
871 : *
872 : * This helps detect intermittent faults caused by code that reads a cache
873 : * entry and then performs an action that could invalidate the entry, but
874 : * rarely actually does so. This can spot issues that would otherwise
875 : * only arise with badly timed concurrent DDL, for example.
876 : *
877 : * The default debug_discard_caches = 0 does no forced cache flushes.
878 : *
879 : * If used with CLOBBER_FREED_MEMORY,
880 : * debug_discard_caches = 1 (formerly known as CLOBBER_CACHE_ALWAYS)
881 : * provides a fairly thorough test that the system contains no cache-flush
882 : * hazards. However, it also makes the system unbelievably slow --- the
883 : * regression tests take about 100 times longer than normal.
884 : *
885 : * If you're a glutton for punishment, try
886 : * debug_discard_caches = 3 (formerly known as CLOBBER_CACHE_RECURSIVELY).
887 : * This slows things by at least a factor of 10000, so I wouldn't suggest
888 : * trying to run the entire regression tests that way. It's useful to try
889 : * a few simple tests, to make sure that cache reload isn't subject to
890 : * internal cache-flush hazards, but after you've done a few thousand
891 : * recursive reloads it's unlikely you'll learn more.
892 : *----------
893 : */
894 : #ifdef DISCARD_CACHES_ENABLED
895 : {
896 : static int recursion_depth = 0;
897 :
898 : if (recursion_depth < debug_discard_caches)
899 : {
900 : recursion_depth++;
901 : InvalidateSystemCachesExtended(true);
902 : recursion_depth--;
903 : }
904 : }
905 : #endif
906 33208632 : }
907 :
908 : /*
909 : * PostPrepare_Inval
910 : * Clean up after successful PREPARE.
911 : *
912 : * Here, we want to act as though the transaction aborted, so that we will
913 : * undo any syscache changes it made, thereby bringing us into sync with the
914 : * outside world, which doesn't believe the transaction committed yet.
915 : *
916 : * If the prepared transaction is later aborted, there is nothing more to
917 : * do; if it commits, we will receive the consequent inval messages just
918 : * like everyone else.
919 : */
920 : void
921 752 : PostPrepare_Inval(void)
922 : {
923 752 : AtEOXact_Inval(false);
924 752 : }
925 :
926 : /*
927 : * xactGetCommittedInvalidationMessages() is called by
928 : * RecordTransactionCommit() to collect invalidation messages to add to the
929 : * commit record. This applies only to commit message types, never to
930 : * abort records. Must always run before AtEOXact_Inval(), since that
931 : * removes the data we need to see.
932 : *
933 : * Remember that this runs before we have officially committed, so we
934 : * must not do anything here to change what might occur *if* we should
935 : * fail between here and the actual commit.
936 : *
937 : * see also xact_redo_commit() and xact_desc_commit()
938 : */
939 : int
940 378276 : xactGetCommittedInvalidationMessages(SharedInvalidationMessage **msgs,
941 : bool *RelcacheInitFileInval)
942 : {
943 : SharedInvalidationMessage *msgarray;
944 : int nummsgs;
945 : int nmsgs;
946 :
947 : /* Quick exit if we haven't done anything with invalidation messages. */
948 378276 : if (transInvalInfo == NULL)
949 : {
950 224164 : *RelcacheInitFileInval = false;
951 224164 : *msgs = NULL;
952 224164 : return 0;
953 : }
954 :
955 : /* Must be at top of stack */
956 : Assert(transInvalInfo->my_level == 1 && transInvalInfo->parent == NULL);
957 :
958 : /*
959 : * Relcache init file invalidation requires processing both before and
960 : * after we send the SI messages. However, we need not do anything unless
961 : * we committed.
962 : */
963 154112 : *RelcacheInitFileInval = transInvalInfo->ii.RelcacheInitFileInval;
964 :
965 : /*
966 : * Collect all the pending messages into a single contiguous array of
967 : * invalidation messages, to simplify what needs to happen while building
968 : * the commit WAL message. Maintain the order that they would be
969 : * processed in by AtEOXact_Inval(), to ensure emulated behaviour in redo
970 : * is as similar as possible to original. We want the same bugs, if any,
971 : * not new ones.
972 : */
973 154112 : nummsgs = NumMessagesInGroup(&transInvalInfo->PriorCmdInvalidMsgs) +
974 154112 : NumMessagesInGroup(&transInvalInfo->ii.CurrentCmdInvalidMsgs);
975 :
976 154112 : *msgs = msgarray = (SharedInvalidationMessage *)
977 154112 : MemoryContextAlloc(CurTransactionContext,
978 : nummsgs * sizeof(SharedInvalidationMessage));
979 :
980 154112 : nmsgs = 0;
981 154112 : ProcessMessageSubGroupMulti(&transInvalInfo->PriorCmdInvalidMsgs,
982 : CatCacheMsgs,
983 : (memcpy(msgarray + nmsgs,
984 : msgs,
985 : n * sizeof(SharedInvalidationMessage)),
986 : nmsgs += n));
987 154112 : ProcessMessageSubGroupMulti(&transInvalInfo->ii.CurrentCmdInvalidMsgs,
988 : CatCacheMsgs,
989 : (memcpy(msgarray + nmsgs,
990 : msgs,
991 : n * sizeof(SharedInvalidationMessage)),
992 : nmsgs += n));
993 154112 : ProcessMessageSubGroupMulti(&transInvalInfo->PriorCmdInvalidMsgs,
994 : RelCacheMsgs,
995 : (memcpy(msgarray + nmsgs,
996 : msgs,
997 : n * sizeof(SharedInvalidationMessage)),
998 : nmsgs += n));
999 154112 : ProcessMessageSubGroupMulti(&transInvalInfo->ii.CurrentCmdInvalidMsgs,
1000 : RelCacheMsgs,
1001 : (memcpy(msgarray + nmsgs,
1002 : msgs,
1003 : n * sizeof(SharedInvalidationMessage)),
1004 : nmsgs += n));
1005 : Assert(nmsgs == nummsgs);
1006 :
1007 154112 : return nmsgs;
1008 : }
1009 :
1010 : /*
1011 : * inplaceGetInvalidationMessages() is called by the inplace update to collect
1012 : * invalidation messages to add to its WAL record. As with
1013 : * xactGetCommittedInvalidationMessages(), the caller might still fail.
1014 : */
1015 : int
1016 92014 : inplaceGetInvalidationMessages(SharedInvalidationMessage **msgs,
1017 : bool *RelcacheInitFileInval)
1018 : {
1019 : SharedInvalidationMessage *msgarray;
1020 : int nummsgs;
1021 : int nmsgs;
1022 :
1023 : /* Quick exit if we haven't done anything with invalidation messages. */
1024 92014 : if (inplaceInvalInfo == NULL)
1025 : {
1026 26640 : *RelcacheInitFileInval = false;
1027 26640 : *msgs = NULL;
1028 26640 : return 0;
1029 : }
1030 :
1031 65374 : *RelcacheInitFileInval = inplaceInvalInfo->RelcacheInitFileInval;
1032 65374 : nummsgs = NumMessagesInGroup(&inplaceInvalInfo->CurrentCmdInvalidMsgs);
1033 65374 : *msgs = msgarray = (SharedInvalidationMessage *)
1034 65374 : palloc(nummsgs * sizeof(SharedInvalidationMessage));
1035 :
1036 65374 : nmsgs = 0;
1037 65374 : ProcessMessageSubGroupMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
1038 : CatCacheMsgs,
1039 : (memcpy(msgarray + nmsgs,
1040 : msgs,
1041 : n * sizeof(SharedInvalidationMessage)),
1042 : nmsgs += n));
1043 65374 : ProcessMessageSubGroupMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
1044 : RelCacheMsgs,
1045 : (memcpy(msgarray + nmsgs,
1046 : msgs,
1047 : n * sizeof(SharedInvalidationMessage)),
1048 : nmsgs += n));
1049 : Assert(nmsgs == nummsgs);
1050 :
1051 65374 : return nmsgs;
1052 : }
1053 :
1054 : /*
1055 : * ProcessCommittedInvalidationMessages is executed by xact_redo_commit() or
1056 : * standby_redo() to process invalidation messages. Currently that happens
1057 : * only at end-of-xact.
1058 : *
1059 : * Relcache init file invalidation requires processing both
1060 : * before and after we send the SI messages. See AtEOXact_Inval()
1061 : */
1062 : void
1063 53568 : ProcessCommittedInvalidationMessages(SharedInvalidationMessage *msgs,
1064 : int nmsgs, bool RelcacheInitFileInval,
1065 : Oid dbid, Oid tsid)
1066 : {
1067 53568 : if (nmsgs <= 0)
1068 10204 : return;
1069 :
1070 43364 : elog(DEBUG4, "replaying commit with %d messages%s", nmsgs,
1071 : (RelcacheInitFileInval ? " and relcache file invalidation" : ""));
1072 :
1073 43364 : if (RelcacheInitFileInval)
1074 : {
1075 630 : elog(DEBUG4, "removing relcache init files for database %u", dbid);
1076 :
1077 : /*
1078 : * RelationCacheInitFilePreInvalidate, when the invalidation message
1079 : * is for a specific database, requires DatabasePath to be set, but we
1080 : * should not use SetDatabasePath during recovery, since it is
1081 : * intended to be used only once by normal backends. Hence, a quick
1082 : * hack: set DatabasePath directly then unset after use.
1083 : */
1084 630 : if (OidIsValid(dbid))
1085 630 : DatabasePath = GetDatabasePath(dbid, tsid);
1086 :
1087 630 : RelationCacheInitFilePreInvalidate();
1088 :
1089 630 : if (OidIsValid(dbid))
1090 : {
1091 630 : pfree(DatabasePath);
1092 630 : DatabasePath = NULL;
1093 : }
1094 : }
1095 :
1096 43364 : SendSharedInvalidMessages(msgs, nmsgs);
1097 :
1098 43364 : if (RelcacheInitFileInval)
1099 630 : RelationCacheInitFilePostInvalidate();
1100 : }
1101 :
1102 : /*
1103 : * AtEOXact_Inval
1104 : * Process queued-up invalidation messages at end of main transaction.
1105 : *
1106 : * If isCommit, we must send out the messages in our PriorCmdInvalidMsgs list
1107 : * to the shared invalidation message queue. Note that these will be read
1108 : * not only by other backends, but also by our own backend at the next
1109 : * transaction start (via AcceptInvalidationMessages). This means that
1110 : * we can skip immediate local processing of anything that's still in
1111 : * CurrentCmdInvalidMsgs, and just send that list out too.
1112 : *
1113 : * If not isCommit, we are aborting, and must locally process the messages
1114 : * in PriorCmdInvalidMsgs. No messages need be sent to other backends,
1115 : * since they'll not have seen our changed tuples anyway. We can forget
1116 : * about CurrentCmdInvalidMsgs too, since those changes haven't touched
1117 : * the caches yet.
1118 : *
1119 : * In any case, reset our state to empty. We need not physically
1120 : * free memory here, since TopTransactionContext is about to be emptied
1121 : * anyway.
1122 : *
1123 : * Note:
1124 : * This should be called as the last step in processing a transaction.
1125 : */
1126 : void
1127 803290 : AtEOXact_Inval(bool isCommit)
1128 : {
1129 803290 : inplaceInvalInfo = NULL;
1130 :
1131 : /* Quick exit if no transactional messages */
1132 803290 : if (transInvalInfo == NULL)
1133 562894 : return;
1134 :
1135 : /* Must be at top of stack */
1136 : Assert(transInvalInfo->my_level == 1 && transInvalInfo->parent == NULL);
1137 :
1138 240396 : INJECTION_POINT("AtEOXact_Inval-with-transInvalInfo");
1139 :
1140 240396 : if (isCommit)
1141 : {
1142 : /*
1143 : * Relcache init file invalidation requires processing both before and
1144 : * after we send the SI messages. However, we need not do anything
1145 : * unless we committed.
1146 : */
1147 235880 : if (transInvalInfo->ii.RelcacheInitFileInval)
1148 34708 : RelationCacheInitFilePreInvalidate();
1149 :
1150 235880 : AppendInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1151 235880 : &transInvalInfo->ii.CurrentCmdInvalidMsgs);
1152 :
1153 235880 : ProcessInvalidationMessagesMulti(&transInvalInfo->PriorCmdInvalidMsgs,
1154 : SendSharedInvalidMessages);
1155 :
1156 235880 : if (transInvalInfo->ii.RelcacheInitFileInval)
1157 34708 : RelationCacheInitFilePostInvalidate();
1158 : }
1159 : else
1160 : {
1161 4516 : ProcessInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1162 : LocalExecuteInvalidationMessage);
1163 : }
1164 :
1165 : /* Need not free anything explicitly */
1166 240396 : transInvalInfo = NULL;
1167 : }
1168 :
1169 : /*
1170 : * PreInplace_Inval
1171 : * Process queued-up invalidation before inplace update critical section.
1172 : *
1173 : * Tasks belong here if they are safe even if the inplace update does not
1174 : * complete. Currently, this just unlinks a cache file, which can fail. The
1175 : * sum of this and AtInplace_Inval() mirrors AtEOXact_Inval(isCommit=true).
1176 : */
1177 : void
1178 154422 : PreInplace_Inval(void)
1179 : {
1180 : Assert(CritSectionCount == 0);
1181 :
1182 154422 : if (inplaceInvalInfo && inplaceInvalInfo->RelcacheInitFileInval)
1183 32486 : RelationCacheInitFilePreInvalidate();
1184 154422 : }
1185 :
1186 : /*
1187 : * AtInplace_Inval
1188 : * Process queued-up invalidations after inplace update buffer mutation.
1189 : */
1190 : void
1191 154422 : AtInplace_Inval(void)
1192 : {
1193 : Assert(CritSectionCount > 0);
1194 :
1195 154422 : if (inplaceInvalInfo == NULL)
1196 26640 : return;
1197 :
1198 127782 : ProcessInvalidationMessagesMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
1199 : SendSharedInvalidMessages);
1200 :
1201 127782 : if (inplaceInvalInfo->RelcacheInitFileInval)
1202 32486 : RelationCacheInitFilePostInvalidate();
1203 :
1204 127782 : inplaceInvalInfo = NULL;
1205 : }
1206 :
1207 : /*
1208 : * ForgetInplace_Inval
1209 : * Alternative to PreInplace_Inval()+AtInplace_Inval(): discard queued-up
1210 : * invalidations. This lets inplace update enumerate invalidations
1211 : * optimistically, before locking the buffer.
1212 : */
1213 : void
1214 112810 : ForgetInplace_Inval(void)
1215 : {
1216 112810 : inplaceInvalInfo = NULL;
1217 112810 : }
1218 :
1219 : /*
1220 : * AtEOSubXact_Inval
1221 : * Process queued-up invalidation messages at end of subtransaction.
1222 : *
1223 : * If isCommit, process CurrentCmdInvalidMsgs if any (there probably aren't),
1224 : * and then attach both CurrentCmdInvalidMsgs and PriorCmdInvalidMsgs to the
1225 : * parent's PriorCmdInvalidMsgs list.
1226 : *
1227 : * If not isCommit, we are aborting, and must locally process the messages
1228 : * in PriorCmdInvalidMsgs. No messages need be sent to other backends.
1229 : * We can forget about CurrentCmdInvalidMsgs too, since those changes haven't
1230 : * touched the caches yet.
1231 : *
1232 : * In any case, pop the transaction stack. We need not physically free memory
1233 : * here, since CurTransactionContext is about to be emptied anyway
1234 : * (if aborting). Beware of the possibility of aborting the same nesting
1235 : * level twice, though.
1236 : */
1237 : void
1238 20026 : AtEOSubXact_Inval(bool isCommit)
1239 : {
1240 : int my_level;
1241 : TransInvalidationInfo *myInfo;
1242 :
1243 : /*
1244 : * Successful inplace update must clear this, but we clear it on abort.
1245 : * Inplace updates allocate this in CurrentMemoryContext, which has
1246 : * lifespan <= subtransaction lifespan. Hence, don't free it explicitly.
1247 : */
1248 20026 : if (isCommit)
1249 : Assert(inplaceInvalInfo == NULL);
1250 : else
1251 9298 : inplaceInvalInfo = NULL;
1252 :
1253 : /* Quick exit if no transactional messages. */
1254 20026 : myInfo = transInvalInfo;
1255 20026 : if (myInfo == NULL)
1256 18394 : return;
1257 :
1258 : /* Also bail out quickly if messages are not for this level. */
1259 1632 : my_level = GetCurrentTransactionNestLevel();
1260 1632 : if (myInfo->my_level != my_level)
1261 : {
1262 : Assert(myInfo->my_level < my_level);
1263 1352 : return;
1264 : }
1265 :
1266 280 : if (isCommit)
1267 : {
1268 : /* If CurrentCmdInvalidMsgs still has anything, fix it */
1269 98 : CommandEndInvalidationMessages();
1270 :
1271 : /*
1272 : * We create invalidation stack entries lazily, so the parent might
1273 : * not have one. Instead of creating one, moving all the data over,
1274 : * and then freeing our own, we can just adjust the level of our own
1275 : * entry.
1276 : */
1277 98 : if (myInfo->parent == NULL || myInfo->parent->my_level < my_level - 1)
1278 : {
1279 74 : myInfo->my_level--;
1280 74 : return;
1281 : }
1282 :
1283 : /*
1284 : * Pass up my inval messages to parent. Notice that we stick them in
1285 : * PriorCmdInvalidMsgs, not CurrentCmdInvalidMsgs, since they've
1286 : * already been locally processed. (This would trigger the Assert in
1287 : * AppendInvalidationMessageSubGroup if the parent's
1288 : * CurrentCmdInvalidMsgs isn't empty; but we already checked that in
1289 : * PrepareInvalidationState.)
1290 : */
1291 24 : AppendInvalidationMessages(&myInfo->parent->PriorCmdInvalidMsgs,
1292 : &myInfo->PriorCmdInvalidMsgs);
1293 :
1294 : /* Must readjust parent's CurrentCmdInvalidMsgs indexes now */
1295 24 : SetGroupToFollow(&myInfo->parent->ii.CurrentCmdInvalidMsgs,
1296 : &myInfo->parent->PriorCmdInvalidMsgs);
1297 :
1298 : /* Pending relcache inval becomes parent's problem too */
1299 24 : if (myInfo->ii.RelcacheInitFileInval)
1300 0 : myInfo->parent->ii.RelcacheInitFileInval = true;
1301 :
1302 : /* Pop the transaction state stack */
1303 24 : transInvalInfo = myInfo->parent;
1304 :
1305 : /* Need not free anything else explicitly */
1306 24 : pfree(myInfo);
1307 : }
1308 : else
1309 : {
1310 182 : ProcessInvalidationMessages(&myInfo->PriorCmdInvalidMsgs,
1311 : LocalExecuteInvalidationMessage);
1312 :
1313 : /* Pop the transaction state stack */
1314 182 : transInvalInfo = myInfo->parent;
1315 :
1316 : /* Need not free anything else explicitly */
1317 182 : pfree(myInfo);
1318 : }
1319 : }
1320 :
1321 : /*
1322 : * CommandEndInvalidationMessages
1323 : * Process queued-up invalidation messages at end of one command
1324 : * in a transaction.
1325 : *
1326 : * Here, we send no messages to the shared queue, since we don't know yet if
1327 : * we will commit. We do need to locally process the CurrentCmdInvalidMsgs
1328 : * list, so as to flush our caches of any entries we have outdated in the
1329 : * current command. We then move the current-cmd list over to become part
1330 : * of the prior-cmds list.
1331 : *
1332 : * Note:
1333 : * This should be called during CommandCounterIncrement(),
1334 : * after we have advanced the command ID.
1335 : */
1336 : void
1337 1089664 : CommandEndInvalidationMessages(void)
1338 : {
1339 : /*
1340 : * You might think this shouldn't be called outside any transaction, but
1341 : * bootstrap does it, and also ABORT issued when not in a transaction. So
1342 : * just quietly return if no state to work on.
1343 : */
1344 1089664 : if (transInvalInfo == NULL)
1345 365702 : return;
1346 :
1347 723962 : ProcessInvalidationMessages(&transInvalInfo->ii.CurrentCmdInvalidMsgs,
1348 : LocalExecuteInvalidationMessage);
1349 :
1350 : /* WAL Log per-command invalidation messages for wal_level=logical */
1351 723956 : if (XLogLogicalInfoActive())
1352 8450 : LogLogicalInvalidations();
1353 :
1354 723956 : AppendInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1355 723956 : &transInvalInfo->ii.CurrentCmdInvalidMsgs);
1356 : }
1357 :
1358 :
1359 : /*
1360 : * CacheInvalidateHeapTupleCommon
1361 : * Common logic for end-of-command and inplace variants.
1362 : */
1363 : static void
1364 21448860 : CacheInvalidateHeapTupleCommon(Relation relation,
1365 : HeapTuple tuple,
1366 : HeapTuple newtuple,
1367 : InvalidationInfo *(*prepare_callback) (void))
1368 : {
1369 : InvalidationInfo *info;
1370 : Oid tupleRelId;
1371 : Oid databaseId;
1372 : Oid relationId;
1373 :
1374 : /* Do nothing during bootstrap */
1375 21448860 : if (IsBootstrapProcessingMode())
1376 1175040 : return;
1377 :
1378 : /*
1379 : * We only need to worry about invalidation for tuples that are in system
1380 : * catalogs; user-relation tuples are never in catcaches and can't affect
1381 : * the relcache either.
1382 : */
1383 20273820 : if (!IsCatalogRelation(relation))
1384 16244994 : return;
1385 :
1386 : /*
1387 : * IsCatalogRelation() will return true for TOAST tables of system
1388 : * catalogs, but we don't care about those, either.
1389 : */
1390 4028826 : if (IsToastRelation(relation))
1391 30820 : return;
1392 :
1393 : /* Allocate any required resources. */
1394 3998006 : info = prepare_callback();
1395 :
1396 : /*
1397 : * First let the catcache do its thing
1398 : */
1399 3998006 : tupleRelId = RelationGetRelid(relation);
1400 3998006 : if (RelationInvalidatesSnapshotsOnly(tupleRelId))
1401 : {
1402 1005950 : databaseId = IsSharedRelation(tupleRelId) ? InvalidOid : MyDatabaseId;
1403 1005950 : RegisterSnapshotInvalidation(info, databaseId, tupleRelId);
1404 : }
1405 : else
1406 2992056 : PrepareToInvalidateCacheTuple(relation, tuple, newtuple,
1407 : RegisterCatcacheInvalidation,
1408 : (void *) info);
1409 :
1410 : /*
1411 : * Now, is this tuple one of the primary definers of a relcache entry? See
1412 : * comments in file header for deeper explanation.
1413 : *
1414 : * Note we ignore newtuple here; we assume an update cannot move a tuple
1415 : * from being part of one relcache entry to being part of another.
1416 : */
1417 3998006 : if (tupleRelId == RelationRelationId)
1418 : {
1419 654216 : Form_pg_class classtup = (Form_pg_class) GETSTRUCT(tuple);
1420 :
1421 654216 : relationId = classtup->oid;
1422 654216 : if (classtup->relisshared)
1423 38526 : databaseId = InvalidOid;
1424 : else
1425 615690 : databaseId = MyDatabaseId;
1426 : }
1427 3343790 : else if (tupleRelId == AttributeRelationId)
1428 : {
1429 1076094 : Form_pg_attribute atttup = (Form_pg_attribute) GETSTRUCT(tuple);
1430 :
1431 1076094 : relationId = atttup->attrelid;
1432 :
1433 : /*
1434 : * KLUGE ALERT: we always send the relcache event with MyDatabaseId,
1435 : * even if the rel in question is shared (which we can't easily tell).
1436 : * This essentially means that only backends in this same database
1437 : * will react to the relcache flush request. This is in fact
1438 : * appropriate, since only those backends could see our pg_attribute
1439 : * change anyway. It looks a bit ugly though. (In practice, shared
1440 : * relations can't have schema changes after bootstrap, so we should
1441 : * never come here for a shared rel anyway.)
1442 : */
1443 1076094 : databaseId = MyDatabaseId;
1444 : }
1445 2267696 : else if (tupleRelId == IndexRelationId)
1446 : {
1447 63038 : Form_pg_index indextup = (Form_pg_index) GETSTRUCT(tuple);
1448 :
1449 : /*
1450 : * When a pg_index row is updated, we should send out a relcache inval
1451 : * for the index relation. As above, we don't know the shared status
1452 : * of the index, but in practice it doesn't matter since indexes of
1453 : * shared catalogs can't have such updates.
1454 : */
1455 63038 : relationId = indextup->indexrelid;
1456 63038 : databaseId = MyDatabaseId;
1457 : }
1458 2204658 : else if (tupleRelId == ConstraintRelationId)
1459 : {
1460 79888 : Form_pg_constraint constrtup = (Form_pg_constraint) GETSTRUCT(tuple);
1461 :
1462 : /*
1463 : * Foreign keys are part of relcache entries, too, so send out an
1464 : * inval for the table that the FK applies to.
1465 : */
1466 79888 : if (constrtup->contype == CONSTRAINT_FOREIGN &&
1467 8076 : OidIsValid(constrtup->conrelid))
1468 : {
1469 8076 : relationId = constrtup->conrelid;
1470 8076 : databaseId = MyDatabaseId;
1471 : }
1472 : else
1473 71812 : return;
1474 : }
1475 : else
1476 2124770 : return;
1477 :
1478 : /*
1479 : * Yes. We need to register a relcache invalidation event.
1480 : */
1481 1801424 : RegisterRelcacheInvalidation(info, databaseId, relationId);
1482 : }
1483 :
1484 : /*
1485 : * CacheInvalidateHeapTuple
1486 : * Register the given tuple for invalidation at end of command
1487 : * (ie, current command is creating or outdating this tuple) and end of
1488 : * transaction. Also, detect whether a relcache invalidation is implied.
1489 : *
1490 : * For an insert or delete, tuple is the target tuple and newtuple is NULL.
1491 : * For an update, we are called just once, with tuple being the old tuple
1492 : * version and newtuple the new version. This allows avoidance of duplicate
1493 : * effort during an update.
1494 : */
1495 : void
1496 21181628 : CacheInvalidateHeapTuple(Relation relation,
1497 : HeapTuple tuple,
1498 : HeapTuple newtuple)
1499 : {
1500 21181628 : CacheInvalidateHeapTupleCommon(relation, tuple, newtuple,
1501 : PrepareInvalidationState);
1502 21181628 : }
1503 :
1504 : /*
1505 : * CacheInvalidateHeapTupleInplace
1506 : * Register the given tuple for nontransactional invalidation pertaining
1507 : * to an inplace update. Also, detect whether a relcache invalidation is
1508 : * implied.
1509 : *
1510 : * Like CacheInvalidateHeapTuple(), but for inplace updates.
1511 : */
1512 : void
1513 267232 : CacheInvalidateHeapTupleInplace(Relation relation,
1514 : HeapTuple tuple,
1515 : HeapTuple newtuple)
1516 : {
1517 267232 : CacheInvalidateHeapTupleCommon(relation, tuple, newtuple,
1518 : PrepareInplaceInvalidationState);
1519 267232 : }
1520 :
1521 : /*
1522 : * CacheInvalidateCatalog
1523 : * Register invalidation of the whole content of a system catalog.
1524 : *
1525 : * This is normally used in VACUUM FULL/CLUSTER, where we haven't so much
1526 : * changed any tuples as moved them around. Some uses of catcache entries
1527 : * expect their TIDs to be correct, so we have to blow away the entries.
1528 : *
1529 : * Note: we expect caller to verify that the rel actually is a system
1530 : * catalog. If it isn't, no great harm is done, just a wasted sinval message.
1531 : */
1532 : void
1533 222 : CacheInvalidateCatalog(Oid catalogId)
1534 : {
1535 : Oid databaseId;
1536 :
1537 222 : if (IsSharedRelation(catalogId))
1538 36 : databaseId = InvalidOid;
1539 : else
1540 186 : databaseId = MyDatabaseId;
1541 :
1542 222 : RegisterCatalogInvalidation(PrepareInvalidationState(),
1543 : databaseId, catalogId);
1544 222 : }
1545 :
1546 : /*
1547 : * CacheInvalidateRelcache
1548 : * Register invalidation of the specified relation's relcache entry
1549 : * at end of command.
1550 : *
1551 : * This is used in places that need to force relcache rebuild but aren't
1552 : * changing any of the tuples recognized as contributors to the relcache
1553 : * entry by CacheInvalidateHeapTuple. (An example is dropping an index.)
1554 : */
1555 : void
1556 123660 : CacheInvalidateRelcache(Relation relation)
1557 : {
1558 : Oid databaseId;
1559 : Oid relationId;
1560 :
1561 123660 : relationId = RelationGetRelid(relation);
1562 123660 : if (relation->rd_rel->relisshared)
1563 5210 : databaseId = InvalidOid;
1564 : else
1565 118450 : databaseId = MyDatabaseId;
1566 :
1567 123660 : RegisterRelcacheInvalidation(PrepareInvalidationState(),
1568 : databaseId, relationId);
1569 123660 : }
1570 :
1571 : /*
1572 : * CacheInvalidateRelcacheAll
1573 : * Register invalidation of the whole relcache at the end of command.
1574 : *
1575 : * This is used by ALTER PUBLICATION, since changes in publications may
1576 : * affect a large number of tables.
1577 : */
1578 : void
1579 148 : CacheInvalidateRelcacheAll(void)
1580 : {
1581 148 : RegisterRelcacheInvalidation(PrepareInvalidationState(),
1582 : InvalidOid, InvalidOid);
1583 148 : }
1584 :
1585 : /*
1586 : * CacheInvalidateRelcacheByTuple
1587 : * As above, but relation is identified by passing its pg_class tuple.
1588 : */
1589 : void
1590 69606 : CacheInvalidateRelcacheByTuple(HeapTuple classTuple)
1591 : {
1592 69606 : Form_pg_class classtup = (Form_pg_class) GETSTRUCT(classTuple);
1593 : Oid databaseId;
1594 : Oid relationId;
1595 :
1596 69606 : relationId = classtup->oid;
1597 69606 : if (classtup->relisshared)
1598 1890 : databaseId = InvalidOid;
1599 : else
1600 67716 : databaseId = MyDatabaseId;
1601 69606 : RegisterRelcacheInvalidation(PrepareInvalidationState(),
1602 : databaseId, relationId);
1603 69606 : }
1604 :
1605 : /*
1606 : * CacheInvalidateRelcacheByRelid
1607 : * As above, but relation is identified by passing its OID.
1608 : * This is the least efficient of the three options; use one of
1609 : * the above routines if you have a Relation or pg_class tuple.
1610 : */
1611 : void
1612 27378 : CacheInvalidateRelcacheByRelid(Oid relid)
1613 : {
1614 : HeapTuple tup;
1615 :
1616 27378 : tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
1617 27378 : if (!HeapTupleIsValid(tup))
1618 0 : elog(ERROR, "cache lookup failed for relation %u", relid);
1619 27378 : CacheInvalidateRelcacheByTuple(tup);
1620 27378 : ReleaseSysCache(tup);
1621 27378 : }
1622 :
1623 :
1624 : /*
1625 : * CacheInvalidateSmgr
1626 : * Register invalidation of smgr references to a physical relation.
1627 : *
1628 : * Sending this type of invalidation msg forces other backends to close open
1629 : * smgr entries for the rel. This should be done to flush dangling open-file
1630 : * references when the physical rel is being dropped or truncated. Because
1631 : * these are nontransactional (i.e., not-rollback-able) operations, we just
1632 : * send the inval message immediately without any queuing.
1633 : *
1634 : * Note: in most cases there will have been a relcache flush issued against
1635 : * the rel at the logical level. We need a separate smgr-level flush because
1636 : * it is possible for backends to have open smgr entries for rels they don't
1637 : * have a relcache entry for, e.g. because the only thing they ever did with
1638 : * the rel is write out dirty shared buffers.
1639 : *
1640 : * Note: because these messages are nontransactional, they won't be captured
1641 : * in commit/abort WAL entries. Instead, calls to CacheInvalidateSmgr()
1642 : * should happen in low-level smgr.c routines, which are executed while
1643 : * replaying WAL as well as when creating it.
1644 : *
1645 : * Note: In order to avoid bloating SharedInvalidationMessage, we store only
1646 : * three bytes of the ProcNumber using what would otherwise be padding space.
1647 : * Thus, the maximum possible ProcNumber is 2^23-1.
1648 : */
1649 : void
1650 95878 : CacheInvalidateSmgr(RelFileLocatorBackend rlocator)
1651 : {
1652 : SharedInvalidationMessage msg;
1653 :
1654 95878 : msg.sm.id = SHAREDINVALSMGR_ID;
1655 95878 : msg.sm.backend_hi = rlocator.backend >> 16;
1656 95878 : msg.sm.backend_lo = rlocator.backend & 0xffff;
1657 95878 : msg.sm.rlocator = rlocator.locator;
1658 : /* check AddCatcacheInvalidationMessage() for an explanation */
1659 : VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
1660 :
1661 95878 : SendSharedInvalidMessages(&msg, 1);
1662 95878 : }
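The backend_hi/backend_lo packing above can be sketched in isolation. This is a minimal stand-alone model, not the real message struct: `PackedProc` is a hypothetical stand-in for the relevant `SharedInvalSmgrMsg` fields, chosen only to show why a signed high byte lets the invalid-ProcNumber sentinel (-1) round-trip while capping valid values at 2^23-1.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical stand-in for the smgr inval message fields: a 32-bit
 * ProcNumber is squeezed into three bytes of otherwise-padding space,
 * as a signed high byte plus an unsigned low word.
 */
typedef struct
{
	int8_t		backend_hi;		/* top byte, signed so -1 survives */
	uint16_t	backend_lo;		/* low 16 bits */
} PackedProc;

static PackedProc
pack_proc(int32_t backend)
{
	PackedProc	p;

	p.backend_hi = (int8_t) (backend >> 16);
	p.backend_lo = (uint16_t) (backend & 0xffff);
	return p;
}

static int32_t
unpack_proc(PackedProc p)
{
	/* sign-extend the high byte so negative sentinels round-trip */
	return ((int32_t) p.backend_hi << 16) | p.backend_lo;
}
```

Any value in [-2^23, 2^23-1] survives a pack/unpack round trip, which is why the comment above caps the maximum possible ProcNumber at 2^23-1.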
1663 :
1664 : /*
1665 : * CacheInvalidateRelmap
1666 : * Register invalidation of the relation mapping for a database,
1667 : * or for the shared catalogs if databaseId is zero.
1668 : *
1669 : * Sending this type of invalidation msg forces other backends to re-read
1670 : * the indicated relation mapping file. It is also necessary to send a
1671 : * relcache inval for the specific relations whose mapping has been altered,
1672 : * else the relcache won't get updated with the new filenode data.
1673 : *
1674 : * Note: because these messages are nontransactional, they won't be captured
1675 : * in commit/abort WAL entries. Instead, calls to CacheInvalidateRelmap()
1676 : * should happen in low-level relmapper.c routines, which are executed while
1677 : * replaying WAL as well as when creating it.
1678 : */
1679 : void
1680 402 : CacheInvalidateRelmap(Oid databaseId)
1681 : {
1682 : SharedInvalidationMessage msg;
1683 :
1684 402 : msg.rm.id = SHAREDINVALRELMAP_ID;
1685 402 : msg.rm.dbId = databaseId;
1686 : /* check AddCatcacheInvalidationMessage() for an explanation */
1687 : VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
1688 :
1689 402 : SendSharedInvalidMessages(&msg, 1);
1690 402 : }
1691 :
1692 :
1693 : /*
1694 : * CacheRegisterSyscacheCallback
1695 : * Register the specified function to be called for all future
1696 : * invalidation events in the specified cache. The cache ID and the
1697 : * hash value of the tuple being invalidated will be passed to the
1698 : * function.
1699 : *
1700 : * NOTE: Hash value zero will be passed if a cache reset request is received.
1701 : * In this case the called routines should flush all cached state.
1702 : * Yes, there's a possibility of a false match to zero, but it doesn't seem
1703 : * worth troubling over, especially since most of the current callees just
1704 : * flush all cached state anyway.
1705 : */
1706 : void
1707 504460 : CacheRegisterSyscacheCallback(int cacheid,
1708 : SyscacheCallbackFunction func,
1709 : Datum arg)
1710 : {
1711 504460 : if (cacheid < 0 || cacheid >= SysCacheSize)
1712 0 : elog(FATAL, "invalid cache ID: %d", cacheid);
1713 504460 : if (syscache_callback_count >= MAX_SYSCACHE_CALLBACKS)
1714 0 : elog(FATAL, "out of syscache_callback_list slots");
1715 :
1716 504460 : if (syscache_callback_links[cacheid] == 0)
1717 : {
1718 : /* first callback for this cache */
1719 356656 : syscache_callback_links[cacheid] = syscache_callback_count + 1;
1720 : }
1721 : else
1722 : {
1723 : /* add to end of chain, so that older callbacks are called first */
1724 147804 : int i = syscache_callback_links[cacheid] - 1;
1725 :
1726 176754 : while (syscache_callback_list[i].link > 0)
1727 28950 : i = syscache_callback_list[i].link - 1;
1728 147804 : syscache_callback_list[i].link = syscache_callback_count + 1;
1729 : }
1730 :
1731 504460 : syscache_callback_list[syscache_callback_count].id = cacheid;
1732 504460 : syscache_callback_list[syscache_callback_count].link = 0;
1733 504460 : syscache_callback_list[syscache_callback_count].function = func;
1734 504460 : syscache_callback_list[syscache_callback_count].arg = arg;
1735 :
1736 504460 : ++syscache_callback_count;
1737 504460 : }
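The 1-based link-chain scheme above (a per-cache head in `syscache_callback_links[]`, a `link` field threading entries of `syscache_callback_list[]`, with 0 as the terminator) can be modeled in a few lines. The struct and field names below are illustrative stand-ins, not the real PostgreSQL types, and `tag` stands in for the (function, arg) pair; the registration and traversal logic mirrors CacheRegisterSyscacheCallback and CallSyscacheCallbacks.

```c
#include <assert.h>

#define MAX_CB	8
#define NCACHES 4

/* miniature model of the per-cache callback chain */
struct cb
{
	int			cacheid;
	int			link;			/* 1-based index of next entry, 0 = end */
	int			tag;			/* stand-in for (function, arg) */
};

static struct cb list[MAX_CB];
static int	links[NCACHES];		/* 1-based head per cache ID, 0 = none */
static int	count;

static void
register_cb(int cacheid, int tag)
{
	if (links[cacheid] == 0)
		links[cacheid] = count + 1;		/* first callback for this cache */
	else
	{
		int			i = links[cacheid] - 1;

		/* append at the end, so older callbacks are called first */
		while (list[i].link > 0)
			i = list[i].link - 1;
		list[i].link = count + 1;
	}
	list[count].cacheid = cacheid;
	list[count].link = 0;
	list[count].tag = tag;
	count++;
}

/* walk one cache's chain, recording tags in call order; returns count */
static int
call_cbs(int cacheid, int *out)
{
	int			n = 0;
	int			i = links[cacheid] - 1;

	while (i >= 0)
	{
		out[n++] = list[i].tag;
		i = list[i].link - 1;
	}
	return n;
}
```

The 1-based convention lets a zeroed array mean "no callbacks registered" without an explicit initialization pass, and appending at the tail preserves registration order across interleaved registrations for different cache IDs.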
1738 :
1739 : /*
1740 : * CacheRegisterRelcacheCallback
1741 : * Register the specified function to be called for all future
1742 : * relcache invalidation events. The OID of the relation being
1743 : * invalidated will be passed to the function.
1744 : *
1745 : * NOTE: InvalidOid will be passed if a cache reset request is received.
1746 : * In this case the called routines should flush all cached state.
1747 : */
1748 : void
1749 39400 : CacheRegisterRelcacheCallback(RelcacheCallbackFunction func,
1750 : Datum arg)
1751 : {
1752 39400 : if (relcache_callback_count >= MAX_RELCACHE_CALLBACKS)
1753 0 : elog(FATAL, "out of relcache_callback_list slots");
1754 :
1755 39400 : relcache_callback_list[relcache_callback_count].function = func;
1756 39400 : relcache_callback_list[relcache_callback_count].arg = arg;
1757 :
1758 39400 : ++relcache_callback_count;
1759 39400 : }
1760 :
1761 : /*
1762 : * CallSyscacheCallbacks
1763 : *
1764 : * This is exported so that CatalogCacheFlushCatalog can call it, saving
1765 : * this module from knowing which catcache IDs correspond to which catalogs.
1766 : */
1767 : void
1768 19980292 : CallSyscacheCallbacks(int cacheid, uint32 hashvalue)
1769 : {
1770 : int i;
1771 :
1772 19980292 : if (cacheid < 0 || cacheid >= SysCacheSize)
1773 0 : elog(ERROR, "invalid cache ID: %d", cacheid);
1774 :
1775 19980292 : i = syscache_callback_links[cacheid] - 1;
1776 22790288 : while (i >= 0)
1777 : {
1778 2809996 : struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i;
1779 :
1780 : Assert(ccitem->id == cacheid);
1781 2809996 : ccitem->function(ccitem->arg, cacheid, hashvalue);
1782 2809996 : i = ccitem->link - 1;
1783 : }
1784 19980292 : }
1785 :
1786 : /*
1787 : * LogLogicalInvalidations
1788 : *
1789 : * Emit WAL for invalidations caused by the current command.
1790 : *
1791 : * This is currently only used for logging invalidations at the command end
1792 : * or at commit time if any invalidations are pending.
1793 : */
1794 : void
1795 31712 : LogLogicalInvalidations(void)
1796 : {
1797 : xl_xact_invals xlrec;
1798 : InvalidationMsgsGroup *group;
1799 : int nmsgs;
1800 :
1801 : /* Quick exit if we haven't done anything with invalidation messages. */
1802 31712 : if (transInvalInfo == NULL)
1803 19942 : return;
1804 :
1805 11770 : group = &transInvalInfo->ii.CurrentCmdInvalidMsgs;
1806 11770 : nmsgs = NumMessagesInGroup(group);
1807 :
1808 11770 : if (nmsgs > 0)
1809 : {
1810 : /* prepare record */
1811 9460 : memset(&xlrec, 0, MinSizeOfXactInvals);
1812 9460 : xlrec.nmsgs = nmsgs;
1813 :
1814 : /* perform insertion */
1815 9460 : XLogBeginInsert();
1816 9460 : XLogRegisterData(&xlrec, MinSizeOfXactInvals);
1817 9460 : ProcessMessageSubGroupMulti(group, CatCacheMsgs,
1818 : XLogRegisterData(msgs,
1819 : n * sizeof(SharedInvalidationMessage)));
1820 9460 : ProcessMessageSubGroupMulti(group, RelCacheMsgs,
1821 : XLogRegisterData(msgs,
1822 : n * sizeof(SharedInvalidationMessage)));
1823 9460 : XLogInsert(RM_XACT_ID, XLOG_XACT_INVALIDATIONS);
1824 : }
1825 : }
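The record assembled above is a fixed-size header followed by the catcache-message subgroup and then the relcache-message subgroup, back to back. The sketch below only illustrates that layout arithmetic; the struct sizes are assumptions for the sketch, not the real values of `MinSizeOfXactInvals` or `sizeof(SharedInvalidationMessage)`.

```c
#include <assert.h>
#include <stddef.h>

/* illustrative stand-ins, not the real PostgreSQL definitions */
typedef struct
{
	char		bytes[16];		/* one invalidation message */
} SIMsg;

typedef struct
{
	int			nmsgs;			/* fixed record header */
} XactInvals;

/*
 * Size of the logical-invalidation record payload: header, then the
 * catcache subgroup, then the relcache subgroup, with no gaps between.
 */
static size_t
invals_record_size(int ncat, int nrel)
{
	return sizeof(XactInvals) + (size_t) (ncat + nrel) * sizeof(SIMsg);
}
```

Because both subgroups are registered as flat arrays of the same message type, replay only needs `nmsgs` from the header to recover all messages, without a per-subgroup length.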
|