/*-------------------------------------------------------------------------
 *
 * predicate.c
 *	  POSTGRES predicate locking
 *	  to support full serializable transaction isolation
 *
 * The approach taken is to implement Serializable Snapshot Isolation (SSI)
 * as initially described in this paper:
 *
 *	Michael J. Cahill, Uwe Röhm, and Alan D. Fekete. 2008.
 *	Serializable isolation for snapshot databases.
 *	In SIGMOD '08: Proceedings of the 2008 ACM SIGMOD
 *	international conference on Management of data,
 *	pages 729-738, New York, NY, USA. ACM.
 *	http://doi.acm.org/10.1145/1376616.1376690
 *
 * and further elaborated in Cahill's doctoral thesis:
 *
 *	Michael James Cahill. 2009.
 *	Serializable Isolation for Snapshot Databases.
 *	Sydney Digital Theses.
 *	University of Sydney, School of Information Technologies.
 *	http://hdl.handle.net/2123/5353
 * Predicate locks for Serializable Snapshot Isolation (SSI) are SIREAD
 * locks, which are so different from normal locks that a distinct set of
 * structures is required to handle them.  They are needed to detect
 * rw-conflicts when the read happens before the write.  (When the write
 * occurs first, the reading transaction can check for a conflict by
 * examining the MVCC data.)
 *
 * (1)	Besides tuples actually read, they must cover ranges of tuples
 *		which would have been read based on the predicate.  This will
 *		require modelling the predicates through locks against database
 *		objects such as pages, index ranges, or entire tables.
 *
 * (2)	They must be kept in RAM for quick access.  Because of this, it
 *		isn't possible to always maintain tuple-level granularity -- when
 *		the space allocated to store these approaches exhaustion, a
 *		request for a lock may need to scan for situations where a single
 *		transaction holds many fine-grained locks which can be coalesced
 *		into a single coarser-grained lock.
 *
 * (3)	They never block anything; they are more like flags than locks
 *		in that regard; although they refer to database objects and are
 *		used to identify rw-conflicts with normal write locks.
 *
 * (4)	While they are associated with a transaction, they must survive
 *		a successful COMMIT of that transaction, and remain until all
 *		overlapping transactions complete.  This even means that they
 *		must survive termination of the transaction's process.  If a
 *		top level transaction is rolled back, however, it is immediately
 *		flagged so that it can be ignored, and its SIREAD locks can be
 *		released any time after that.
 *
 * (5)	The only transactions which create SIREAD locks or check for
 *		conflicts with them are serializable transactions.
 *
 * (6)	When a write lock for a top level transaction is found to cover
 *		an existing SIREAD lock for the same transaction, the SIREAD lock
 *		can be deleted.
 *
 * (7)	A write from a serializable transaction must ensure that an xact
 *		record exists for the transaction, with the same lifespan (until
 *		all concurrent transactions complete or the transaction is rolled
 *		back) so that rw-dependencies to that transaction can be
 *		detected.
 * We use an optimization for read-only transactions.  Under certain
 * circumstances, a read-only transaction's snapshot can be shown to
 * never have conflicts with other transactions.  This is referred to
 * as a "safe" snapshot (and one known not to be is "unsafe").
 * However, it can't be determined whether a snapshot is safe until
 * all concurrent read/write transactions complete.
 *
 * Once a read-only transaction is known to have a safe snapshot, it
 * can release its predicate locks and exempt itself from further
 * predicate lock tracking.  READ ONLY DEFERRABLE transactions run only
 * on safe snapshots, waiting as necessary for one to be available.
 * Lightweight locks to manage access to the predicate locking shared
 * memory objects must be taken in this order, and should be released in
 * reverse order:
 *
 *	SerializableFinishedListLock
 *		- Protects the list of transactions which have completed but which
 *			may yet matter because they overlap still-active transactions.
 *
 *	SerializablePredicateLockListLock
 *		- Protects the linked list of locks held by a transaction.  Note
 *			that the locks themselves are also covered by the partition
 *			locks of their respective lock targets; this lock only affects
 *			the linked list connecting the locks related to a transaction.
 *		- All transactions share this single lock (with no partitioning).
 *		- There is never a need for a process other than the one running
 *			an active transaction to walk the list of locks held by that
 *			transaction.
 *		- It is relatively infrequent that another process needs to
 *			modify the list for a transaction, but it does happen for such
 *			things as index page splits for pages with predicate locks and
 *			freeing of predicate locked pages by a vacuum process.  When
 *			removing a lock in such cases, the lock itself contains the
 *			pointers needed to remove it from the list.  When adding a
 *			lock in such cases, the lock can be added using the anchor in
 *			the transaction structure.  Neither requires walking the list.
 *		- Cleaning up the list for a terminated transaction is sometimes
 *			not done on a retail basis, in which case no lock is required.
 *		- Due to the above, a process accessing its active transaction's
 *			list always uses a shared lock, regardless of whether it is
 *			walking or maintaining the list.  This improves concurrency
 *			for the common access patterns.
 *		- A process which needs to alter the list of a transaction other
 *			than its own active transaction must acquire an exclusive
 *			lock.
 *
 *	FirstPredicateLockMgrLock based partition locks
 *		- The same lock protects a target, all locks on that target, and
 *			the linked list of locks on the target.
 *		- When more than one is needed, acquire in ascending order.
 *
 *	SerializableXactHashLock
 *		- Protects both PredXact and SerializableXidHash.
 * Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * IDENTIFICATION
 *	  src/backend/storage/lmgr/predicate.c
 *
 *-------------------------------------------------------------------------
 */
/*
 * INTERFACE ROUTINES
 *
 * housekeeping for setting up shared memory predicate lock structures
 *		InitPredicateLocks(void)
 *		PredicateLockShmemSize(void)
 *
 * predicate lock reporting
 *		GetPredicateLockStatusData(void)
 *		PageIsPredicateLocked(Relation relation, BlockNumber blkno)
 *
 * predicate lock maintenance
 *		GetSerializableTransactionSnapshot(Snapshot snapshot)
 *		SetSerializableTransactionSnapshot(Snapshot snapshot,
 *										   VirtualTransactionId *sourcevxid)
 *		RegisterPredicateLockingXid(void)
 *		PredicateLockRelation(Relation relation, Snapshot snapshot)
 *		PredicateLockPage(Relation relation, BlockNumber blkno,
 *						  Snapshot snapshot)
 *		PredicateLockTuple(Relation relation, HeapTuple tuple,
 *						   Snapshot snapshot)
 *		PredicateLockPageSplit(Relation relation, BlockNumber oldblkno,
 *							   BlockNumber newblkno)
 *		PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
 *								 BlockNumber newblkno)
 *		TransferPredicateLocksToHeapRelation(Relation relation)
 *		ReleasePredicateLocks(bool isCommit)
 *
 * conflict detection (may also trigger rollback)
 *		CheckForSerializableConflictOut(bool visible, Relation relation,
 *										HeapTupleData *tup, Buffer buffer,
 *										Snapshot snapshot)
 *		CheckForSerializableConflictIn(Relation relation, HeapTupleData *tup,
 *									   Buffer buffer)
 *		CheckTableForSerializableConflictIn(Relation relation)
 *
 * final rollback checking
 *		PreCommit_CheckForSerializationFailure(void)
 *
 * two-phase commit support
 *		AtPrepare_PredicateLocks(void);
 *		PostPrepare_PredicateLocks(TransactionId xid);
 *		PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit);
 *		predicatelock_twophase_recover(TransactionId xid, uint16 info,
 *									   void *recdata, uint32 len);
 */

#include "postgres.h"

#include "access/htup_details.h"
#include "access/slru.h"
#include "access/subtrans.h"
#include "access/transam.h"
#include "access/twophase.h"
#include "access/twophase_rmgr.h"
#include "access/xact.h"
#include "access/xlog.h"
#include "miscadmin.h"
#include "storage/bufmgr.h"
#include "storage/predicate.h"
#include "storage/predicate_internals.h"
#include "storage/proc.h"
#include "storage/procarray.h"
#include "utils/rel.h"
#include "utils/snapmgr.h"
#include "utils/tqual.h"

/* Uncomment the next line to test the graceful degradation code. */
/* #define TEST_OLDSERXID */

/*
 * Test the most selective fields first, for performance.
 *
 * a is covered by b if all of the following hold:
 *	1) a.database = b.database
 *	2) a.relation = b.relation
 *	3) b.offset is invalid (b is page-granularity or higher)
 *	4) either of the following:
 *		4a) a.offset is valid (a is tuple-granularity) and a.page = b.page
 *	 or 4b) a.offset is invalid and b.page is invalid (a is
 *			page-granularity and b is relation-granularity)
 */
#define TargetTagIsCoveredBy(covered_target, covering_target)			\
	((GET_PREDICATELOCKTARGETTAG_RELATION(covered_target) == /* (2) */	\
	  GET_PREDICATELOCKTARGETTAG_RELATION(covering_target))				\
	 && (GET_PREDICATELOCKTARGETTAG_OFFSET(covering_target) ==			\
		 InvalidOffsetNumber)							 /* (3) */		\
	 && (((GET_PREDICATELOCKTARGETTAG_OFFSET(covered_target) !=			\
		   InvalidOffsetNumber)							 /* (4a) */		\
		  && (GET_PREDICATELOCKTARGETTAG_PAGE(covering_target) ==		\
			  GET_PREDICATELOCKTARGETTAG_PAGE(covered_target)))			\
		 || ((GET_PREDICATELOCKTARGETTAG_PAGE(covering_target) ==		\
			  InvalidBlockNumber)						 /* (4b) */		\
			 && (GET_PREDICATELOCKTARGETTAG_PAGE(covered_target)		\
				 != InvalidBlockNumber)))								\
	 && (GET_PREDICATELOCKTARGETTAG_DB(covered_target) == /* (1) */		\
		 GET_PREDICATELOCKTARGETTAG_DB(covering_target)))

/*
 * The predicate locking target and lock shared hash tables are partitioned to
 * reduce contention.  To determine which partition a given target belongs to,
 * compute the tag's hash code with PredicateLockTargetTagHashCode(), then
 * apply one of these macros.
 * NB: NUM_PREDICATELOCK_PARTITIONS must be a power of 2!
 */
#define PredicateLockHashPartition(hashcode) \
	((hashcode) % NUM_PREDICATELOCK_PARTITIONS)
#define PredicateLockHashPartitionLock(hashcode) \
	(&MainLWLockArray[PREDICATELOCK_MANAGER_LWLOCK_OFFSET + \
					  PredicateLockHashPartition(hashcode)].lock)
#define PredicateLockHashPartitionLockByIndex(i) \
	(&MainLWLockArray[PREDICATELOCK_MANAGER_LWLOCK_OFFSET + (i)].lock)

#define NPREDICATELOCKTARGETENTS() \
	mul_size(max_predicate_locks_per_xact, add_size(MaxBackends, max_prepared_xacts))

#define SxactIsOnFinishedList(sxact) (!SHMQueueIsDetached(&((sxact)->finishedLink)))

/*
 * Note that a sxact is marked "prepared" once it has passed
 * PreCommit_CheckForSerializationFailure, even if it isn't using
 * 2PC.  This is the point at which it can no longer be aborted.
 *
 * The PREPARED flag remains set after commit, so SxactIsCommitted
 * implies SxactIsPrepared.
 */
#define SxactIsCommitted(sxact) (((sxact)->flags & SXACT_FLAG_COMMITTED) != 0)
#define SxactIsPrepared(sxact) (((sxact)->flags & SXACT_FLAG_PREPARED) != 0)
#define SxactIsRolledBack(sxact) (((sxact)->flags & SXACT_FLAG_ROLLED_BACK) != 0)
#define SxactIsDoomed(sxact) (((sxact)->flags & SXACT_FLAG_DOOMED) != 0)
#define SxactIsReadOnly(sxact) (((sxact)->flags & SXACT_FLAG_READ_ONLY) != 0)
#define SxactHasSummaryConflictIn(sxact) (((sxact)->flags & SXACT_FLAG_SUMMARY_CONFLICT_IN) != 0)
#define SxactHasSummaryConflictOut(sxact) (((sxact)->flags & SXACT_FLAG_SUMMARY_CONFLICT_OUT) != 0)

/*
 * The following macro actually means that the specified transaction has a
 * conflict out *to a transaction which committed ahead of it*.  It's hard
 * to get that into a name of a reasonable length.
 */
#define SxactHasConflictOut(sxact) (((sxact)->flags & SXACT_FLAG_CONFLICT_OUT) != 0)
#define SxactIsDeferrableWaiting(sxact) (((sxact)->flags & SXACT_FLAG_DEFERRABLE_WAITING) != 0)
#define SxactIsROSafe(sxact) (((sxact)->flags & SXACT_FLAG_RO_SAFE) != 0)
#define SxactIsROUnsafe(sxact) (((sxact)->flags & SXACT_FLAG_RO_UNSAFE) != 0)

/*
 * Compute the hash code associated with a PREDICATELOCKTARGETTAG.
 *
 * To avoid unnecessary recomputations of the hash code, we try to do this
 * just once per function, and then pass it around as needed.  Aside from
 * passing the hashcode to hash_search_with_hash_value(), we can extract
 * the lock partition number from the hashcode.
 */
#define PredicateLockTargetTagHashCode(predicatelocktargettag) \
	get_hash_value(PredicateLockTargetHash, predicatelocktargettag)

/*
 * Given a predicate lock tag, and the hash for its target,
 * compute the lock hash.
 *
 * To make the hash code also depend on the transaction, we xor the sxid
 * struct's address into the hash code, left-shifted so that the
 * partition-number bits don't change.  Since this is only a hash, we
 * don't care if we lose high-order bits of the address; use an
 * intermediate variable to suppress cast-pointer-to-int warnings.
 */
#define PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash) \
	((targethash) ^ ((uint32) PointerGetDatum((predicatelocktag)->myXact)) \
	 << LOG2_NUM_PREDICATELOCK_PARTITIONS)

/*
 * The SLRU buffer area through which we access the old xids.
 */
static SlruCtlData OldSerXidSlruCtlData;

#define OldSerXidSlruCtl			(&OldSerXidSlruCtlData)

#define OLDSERXID_PAGESIZE			BLCKSZ
#define OLDSERXID_ENTRYSIZE			sizeof(SerCommitSeqNo)
#define OLDSERXID_ENTRIESPERPAGE	(OLDSERXID_PAGESIZE / OLDSERXID_ENTRYSIZE)

/*
 * Set maximum pages based on the lesser of the number needed to track all
 * transactions and the maximum that SLRU supports.
 */
#define OLDSERXID_MAX_PAGE			Min(SLRU_PAGES_PER_SEGMENT * 0x10000 - 1, \
										(MaxTransactionId) / OLDSERXID_ENTRIESPERPAGE)

#define OldSerXidNextPage(page) (((page) >= OLDSERXID_MAX_PAGE) ? 0 : (page) + 1)

#define OldSerXidValue(slotno, xid) (*((SerCommitSeqNo *) \
	(OldSerXidSlruCtl->shared->page_buffer[slotno] + \
	 ((((uint32) (xid)) % OLDSERXID_ENTRIESPERPAGE) * OLDSERXID_ENTRYSIZE))))

#define OldSerXidPage(xid)	((((uint32) (xid)) / OLDSERXID_ENTRIESPERPAGE) % (OLDSERXID_MAX_PAGE + 1))
#define OldSerXidSegment(page)	((page) / SLRU_PAGES_PER_SEGMENT)

typedef struct OldSerXidControlData
{
	int			headPage;		/* newest initialized page */
	TransactionId headXid;		/* newest valid Xid in the SLRU */
	TransactionId tailXid;		/* oldest xmin we might be interested in */
	bool		warningIssued;	/* have we issued SLRU wrap-around warning? */
} OldSerXidControlData;

typedef struct OldSerXidControlData *OldSerXidControl;

static OldSerXidControl oldSerXidControl;

/*
 * When the oldest committed transaction on the "finished" list is moved to
 * SLRU, its predicate locks will be moved to this "dummy" transaction,
 * collapsing duplicate targets.  When a duplicate is found, the later
 * commitSeqNo is used.
 */
static SERIALIZABLEXACT *OldCommittedSxact;

/*
 * These configuration variables are used to set the predicate lock table size
 * and to control promotion of predicate locks to coarser granularity in an
 * attempt to degrade performance (mostly as false positive serialization
 * failures) gracefully in the face of memory pressure.
 */
int			max_predicate_locks_per_xact;	/* set by guc.c */
int			max_predicate_locks_per_relation;	/* set by guc.c */
int			max_predicate_locks_per_page;	/* set by guc.c */

/*
 * This provides a list of objects in order to track transactions
 * participating in predicate locking.  Entries in the list are fixed size,
 * and reside in shared memory.  The memory address of an entry must remain
 * fixed during its lifetime.  The list will be protected from concurrent
 * update externally; no provision is made in this code to manage that.  The
 * number of entries in the list, and the size allowed for each entry is
 * fixed upon creation.
 */
static PredXactList PredXact;

/*
 * This provides a pool of RWConflict data elements to use in conflict lists
 * between transactions.
 */
static RWConflictPoolHeader RWConflictPool;

/*
 * The predicate locking hash tables are in shared memory.
 * Each backend keeps pointers to them.
 */
static HTAB *SerializableXidHash;
static HTAB *PredicateLockTargetHash;
static HTAB *PredicateLockHash;
static SHM_QUEUE *FinishedSerializableTransactions;

/*
 * Tag for a dummy entry in PredicateLockTargetHash.  By temporarily removing
 * this entry, you can ensure that there's enough scratch space available for
 * inserting one entry in the hash table.  This is an otherwise-invalid tag.
 */
static const PREDICATELOCKTARGETTAG ScratchTargetTag = {0, 0, 0, 0};
static uint32 ScratchTargetTagHash;
static LWLock *ScratchPartitionLock;

/*
 * The local hash table used to determine when to combine multiple fine-
 * grained locks into a single coarser-grained lock.
 */
static HTAB *LocalPredicateLockHash = NULL;

/*
 * Keep a pointer to the currently-running serializable transaction (if any)
 * for quick reference.  Also, remember if we have written anything that could
 * cause a rw-conflict.
 */
static SERIALIZABLEXACT *MySerializableXact = InvalidSerializableXact;
static bool MyXactDidWrite = false;

/* local functions */

static SERIALIZABLEXACT *CreatePredXact(void);
static void ReleasePredXact(SERIALIZABLEXACT *sxact);
static SERIALIZABLEXACT *FirstPredXact(void);
static SERIALIZABLEXACT *NextPredXact(SERIALIZABLEXACT *sxact);

static bool RWConflictExists(const SERIALIZABLEXACT *reader, const SERIALIZABLEXACT *writer);
static void SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer);
static void SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact, SERIALIZABLEXACT *activeXact);
static void ReleaseRWConflict(RWConflict conflict);
static void FlagSxactUnsafe(SERIALIZABLEXACT *sxact);

static bool OldSerXidPagePrecedesLogically(int p, int q);
static void OldSerXidInit(void);
static void OldSerXidAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo);
static SerCommitSeqNo OldSerXidGetMinConflictCommitSeqNo(TransactionId xid);
static void OldSerXidSetActiveSerXmin(TransactionId xid);

static uint32 predicatelock_hash(const void *key, Size keysize);
static void SummarizeOldestCommittedSxact(void);
static Snapshot GetSafeSnapshot(Snapshot snapshot);
static Snapshot GetSerializableTransactionSnapshotInt(Snapshot snapshot,
									  VirtualTransactionId *sourcevxid,
									  int sourcepid);
static bool PredicateLockExists(const PREDICATELOCKTARGETTAG *targettag);
static bool GetParentPredicateLockTag(const PREDICATELOCKTARGETTAG *tag,
						  PREDICATELOCKTARGETTAG *parent);
static bool CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag);
static void RemoveScratchTarget(bool lockheld);
static void RestoreScratchTarget(bool lockheld);
static void RemoveTargetIfNoLongerUsed(PREDICATELOCKTARGET *target,
						   uint32 targettaghash);
static void DeleteChildTargetLocks(const PREDICATELOCKTARGETTAG *newtargettag);
static int	MaxPredicateChildLocks(const PREDICATELOCKTARGETTAG *tag);
static bool CheckAndPromotePredicateLockRequest(const PREDICATELOCKTARGETTAG *reqtag);
static void DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag);
static void CreatePredicateLock(const PREDICATELOCKTARGETTAG *targettag,
					uint32 targettaghash,
					SERIALIZABLEXACT *sxact);
static void DeleteLockTarget(PREDICATELOCKTARGET *target, uint32 targettaghash);
static bool TransferPredicateLocksToNewTarget(PREDICATELOCKTARGETTAG oldtargettag,
								  PREDICATELOCKTARGETTAG newtargettag,
								  bool removeOld);
static void PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag);
static void DropAllPredicateLocksFromTable(Relation relation,
							   bool transfer);
static void SetNewSxactGlobalXmin(void);
static void ClearOldPredicateLocks(void);
static void ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial,
						   bool summarize);
static bool XidIsConcurrent(TransactionId xid);
static void CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag);
static void FlagRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer);
static void OnConflict_CheckForSerializationFailure(const SERIALIZABLEXACT *reader,
										SERIALIZABLEXACT *writer);

/*------------------------------------------------------------------------*/

/*
 * Does this relation participate in predicate locking?  Temporary and system
 * relations are exempt, as are materialized views.
 */
static inline bool
PredicateLockingNeededForRelation(Relation relation)
{
	return !(relation->rd_id < FirstBootstrapObjectId ||
			 RelationUsesLocalBuffers(relation) ||
			 relation->rd_rel->relkind == RELKIND_MATVIEW);
}

/*
 * When a public interface method is called for a read, this is the test to
 * see if we should do a quick return.
 *
 * Note: this function has side-effects!  If this transaction has been flagged
 * as RO-safe since the last call, we release all predicate locks and reset
 * MySerializableXact.  That makes subsequent calls to return quickly.
 *
 * This is marked as 'inline' to eliminate the function call overhead in the
 * common case that serialization is not needed.
 */
static inline bool
SerializationNeededForRead(Relation relation, Snapshot snapshot)
{
	/* Nothing to do if this is not a serializable transaction */
	if (MySerializableXact == InvalidSerializableXact)
		return false;

	/*
	 * Don't acquire locks or conflict when scanning with a special snapshot.
	 * This excludes things like CLUSTER and REINDEX.  They use the wholesale
	 * functions TransferPredicateLocksToHeapRelation() and
	 * CheckTableForSerializableConflictIn() to participate in serialization,
	 * but the scans involved don't need serialization.
	 */
	if (!IsMVCCSnapshot(snapshot))
		return false;

	/*
	 * Check if we have just become "RO-safe". If we have, immediately release
	 * all locks as they're not needed anymore.  This also resets
	 * MySerializableXact, so that subsequent calls to this function can exit
	 * quickly.
	 *
	 * A transaction is flagged as RO_SAFE if all concurrent R/W transactions
	 * commit without having conflicts out to an earlier snapshot, thus
	 * ensuring that no conflicts are possible for this transaction.
	 */
	if (SxactIsROSafe(MySerializableXact))
	{
		ReleasePredicateLocks(false);
		return false;
	}

	/* Check if the relation doesn't participate in predicate locking */
	if (!PredicateLockingNeededForRelation(relation))
		return false;

	return true;				/* no excuse to skip predicate locking */
}

/*
 * Like SerializationNeededForRead(), but called on writes.
 * The logic is the same, but there is no snapshot and we can't be RO-safe.
 */
static inline bool
SerializationNeededForWrite(Relation relation)
{
	/* Nothing to do if this is not a serializable transaction */
	if (MySerializableXact == InvalidSerializableXact)
		return false;

	/* Check if the relation doesn't participate in predicate locking */
	if (!PredicateLockingNeededForRelation(relation))
		return false;

	return true;				/* no excuse to skip predicate locking */
}

/*------------------------------------------------------------------------*/

/*
 * These functions are a simple implementation of a list for this specific
 * type of struct.  If there is ever a generalized shared memory list, we
 * should probably switch to that.
 */
static SERIALIZABLEXACT *
CreatePredXact(void)
{
	PredXactListElement ptle;

	ptle = (PredXactListElement)
		SHMQueueNext(&PredXact->availableList,
					 &PredXact->availableList,
					 offsetof(PredXactListElementData, link));
	if (!ptle)
		return NULL;

	SHMQueueDelete(&ptle->link);
	SHMQueueInsertBefore(&PredXact->activeList, &ptle->link);
	return &ptle->sxact;
}

static void
ReleasePredXact(SERIALIZABLEXACT *sxact)
{
	PredXactListElement ptle;

	Assert(ShmemAddrIsValid(sxact));

	ptle = (PredXactListElement)
		(((char *) sxact)
		 - offsetof(PredXactListElementData, sxact)
		 + offsetof(PredXactListElementData, link));
	SHMQueueDelete(&ptle->link);
	SHMQueueInsertBefore(&PredXact->availableList, &ptle->link);
}

static SERIALIZABLEXACT *
FirstPredXact(void)
{
	PredXactListElement ptle;

	ptle = (PredXactListElement)
		SHMQueueNext(&PredXact->activeList,
					 &PredXact->activeList,
					 offsetof(PredXactListElementData, link));
	if (!ptle)
		return NULL;

	return &ptle->sxact;
}

static SERIALIZABLEXACT *
NextPredXact(SERIALIZABLEXACT *sxact)
{
	PredXactListElement ptle;

	Assert(ShmemAddrIsValid(sxact));

	ptle = (PredXactListElement)
		(((char *) sxact)
		 - offsetof(PredXactListElementData, sxact)
		 + offsetof(PredXactListElementData, link));
	ptle = (PredXactListElement)
		SHMQueueNext(&PredXact->activeList,
					 &ptle->link,
					 offsetof(PredXactListElementData, link));
	if (!ptle)
		return NULL;

	return &ptle->sxact;
}

/*------------------------------------------------------------------------*/

/*
 * These functions manage primitive access to the RWConflict pool and lists.
 */
static bool
RWConflictExists(const SERIALIZABLEXACT *reader, const SERIALIZABLEXACT *writer)
{
	RWConflict	conflict;

	Assert(reader != writer);

	/* Check the ends of the purported conflict first. */
	if (SxactIsDoomed(reader)
		|| SxactIsDoomed(writer)
		|| SHMQueueEmpty(&reader->outConflicts)
		|| SHMQueueEmpty(&writer->inConflicts))
		return false;

	/* A conflict is possible; walk the list to find out. */
	conflict = (RWConflict)
		SHMQueueNext(&reader->outConflicts,
					 &reader->outConflicts,
					 offsetof(RWConflictData, outLink));
	while (conflict)
	{
		if (conflict->sxactIn == writer)
			return true;
		conflict = (RWConflict)
			SHMQueueNext(&reader->outConflicts,
						 &conflict->outLink,
						 offsetof(RWConflictData, outLink));
	}

	/* No conflict found. */
	return false;
}

static void
SetRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
{
	RWConflict	conflict;

	Assert(reader != writer);
	Assert(!RWConflictExists(reader, writer));

	conflict = (RWConflict)
		SHMQueueNext(&RWConflictPool->availableList,
					 &RWConflictPool->availableList,
					 offsetof(RWConflictData, outLink));
	if (!conflict)
		ereport(ERROR,
				(errcode(ERRCODE_OUT_OF_MEMORY),
				 errmsg("not enough elements in RWConflictPool to record a read/write conflict"),
				 errhint("You might need to run fewer transactions at a time or increase max_connections.")));

	SHMQueueDelete(&conflict->outLink);

	conflict->sxactOut = reader;
	conflict->sxactIn = writer;
	SHMQueueInsertBefore(&reader->outConflicts, &conflict->outLink);
	SHMQueueInsertBefore(&writer->inConflicts, &conflict->inLink);
}

static void
SetPossibleUnsafeConflict(SERIALIZABLEXACT *roXact,
						  SERIALIZABLEXACT *activeXact)
{
	RWConflict	conflict;

	Assert(roXact != activeXact);
	Assert(SxactIsReadOnly(roXact));
	Assert(!SxactIsReadOnly(activeXact));

	conflict = (RWConflict)
		SHMQueueNext(&RWConflictPool->availableList,
					 &RWConflictPool->availableList,
					 offsetof(RWConflictData, outLink));
	if (!conflict)
		ereport(ERROR,
				(errcode(ERRCODE_OUT_OF_MEMORY),
				 errmsg("not enough elements in RWConflictPool to record a potential read/write conflict"),
				 errhint("You might need to run fewer transactions at a time or increase max_connections.")));

	SHMQueueDelete(&conflict->outLink);

	conflict->sxactOut = activeXact;
	conflict->sxactIn = roXact;
	SHMQueueInsertBefore(&activeXact->possibleUnsafeConflicts,
						 &conflict->outLink);
	SHMQueueInsertBefore(&roXact->possibleUnsafeConflicts,
						 &conflict->inLink);
}

static void
ReleaseRWConflict(RWConflict conflict)
{
	SHMQueueDelete(&conflict->inLink);
	SHMQueueDelete(&conflict->outLink);
	SHMQueueInsertBefore(&RWConflictPool->availableList, &conflict->outLink);
}

static void
FlagSxactUnsafe(SERIALIZABLEXACT *sxact)
{
	RWConflict	conflict,
				nextConflict;

	Assert(SxactIsReadOnly(sxact));
	Assert(!SxactIsROSafe(sxact));

	sxact->flags |= SXACT_FLAG_RO_UNSAFE;

	/*
	 * We know this isn't a safe snapshot, so we can stop looking for other
	 * potential conflicts.
	 */
	conflict = (RWConflict)
		SHMQueueNext(&sxact->possibleUnsafeConflicts,
					 &sxact->possibleUnsafeConflicts,
					 offsetof(RWConflictData, inLink));
	while (conflict)
	{
		nextConflict = (RWConflict)
			SHMQueueNext(&sxact->possibleUnsafeConflicts,
						 &conflict->inLink,
						 offsetof(RWConflictData, inLink));

		Assert(!SxactIsReadOnly(conflict->sxactOut));
		Assert(sxact == conflict->sxactIn);

		ReleaseRWConflict(conflict);

		conflict = nextConflict;
	}
}

/*------------------------------------------------------------------------*/

/*
 * We will work on the page range of 0..OLDSERXID_MAX_PAGE.
 * Compares using wraparound logic, as is required by slru.c.
 */
static bool
OldSerXidPagePrecedesLogically(int p, int q)
{
	int			diff;

	/*
	 * We have to compare modulo (OLDSERXID_MAX_PAGE+1)/2.  Both inputs should
	 * be in the range 0..OLDSERXID_MAX_PAGE.
	 */
	Assert(p >= 0 && p <= OLDSERXID_MAX_PAGE);
	Assert(q >= 0 && q <= OLDSERXID_MAX_PAGE);

	diff = p - q;
	if (diff >= ((OLDSERXID_MAX_PAGE + 1) / 2))
		diff -= OLDSERXID_MAX_PAGE + 1;
	else if (diff < -((int) (OLDSERXID_MAX_PAGE + 1) / 2))
		diff += OLDSERXID_MAX_PAGE + 1;
	return diff < 0;
}

/*
 * Initialize for the tracking of old serializable committed xids.
 */
static void
OldSerXidInit(void)
{
	bool		found;

	/*
	 * Set up SLRU management of the pg_serial data.
	 */
	OldSerXidSlruCtl->PagePrecedes = OldSerXidPagePrecedesLogically;
	SimpleLruInit(OldSerXidSlruCtl, "oldserxid",
				  NUM_OLDSERXID_BUFFERS, 0, OldSerXidLock, "pg_serial",
				  LWTRANCHE_OLDSERXID_BUFFERS);
	/* Override default assumption that writes should be fsync'd */
	OldSerXidSlruCtl->do_fsync = false;

	/*
	 * Create or attach to the OldSerXidControl structure.
	 */
	oldSerXidControl = (OldSerXidControl)
		ShmemInitStruct("OldSerXidControlData", sizeof(OldSerXidControlData), &found);

	if (!found)
	{
		/*
		 * Set control information to reflect empty SLRU.
		 */
		oldSerXidControl->headPage = -1;
		oldSerXidControl->headXid = InvalidTransactionId;
		oldSerXidControl->tailXid = InvalidTransactionId;
		oldSerXidControl->warningIssued = false;
	}
}

/*
 * Record a committed read write serializable xid and the minimum
 * commitSeqNo of any transactions to which this xid had a rw-conflict out.
 * An invalid seqNo means that there were no conflicts out from xid.
 */
static void
OldSerXidAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo)
{
	TransactionId tailXid;
	int			targetPage;
	int			slotno;
	int			firstZeroPage;
	bool		isNewPage;

	Assert(TransactionIdIsValid(xid));

	targetPage = OldSerXidPage(xid);

	LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);

	/*
	 * If no serializable transactions are active, there shouldn't be anything
	 * to push out to the SLRU.  Hitting this assert would mean there's
	 * something wrong with the earlier cleanup logic.
	 */
	tailXid = oldSerXidControl->tailXid;
	Assert(TransactionIdIsValid(tailXid));

	/*
	 * If the SLRU is currently unused, zero out the whole active region from
	 * tailXid to headXid before taking it into use.  Otherwise zero out only
	 * any new pages that enter the tailXid-headXid range as we advance
	 * headXid.
	 */
	if (oldSerXidControl->headPage < 0)
	{
		firstZeroPage = OldSerXidPage(tailXid);
		isNewPage = true;
	}
	else
	{
		firstZeroPage = OldSerXidNextPage(oldSerXidControl->headPage);
		isNewPage = OldSerXidPagePrecedesLogically(oldSerXidControl->headPage,
												   targetPage);
	}

	if (!TransactionIdIsValid(oldSerXidControl->headXid)
		|| TransactionIdFollows(xid, oldSerXidControl->headXid))
		oldSerXidControl->headXid = xid;
	if (isNewPage)
		oldSerXidControl->headPage = targetPage;

	/*
	 * Give a warning if we're about to run out of SLRU pages.
	 *
	 * slru.c has a maximum of 64k segments, with 32 (SLRU_PAGES_PER_SEGMENT)
	 * pages each.  We need to store a 64-bit integer for each Xid, and with
	 * default 8k block size, 65536*32 pages is only enough to cover 2^30
	 * XIDs.  If we're about to hit that limit and wrap around, warn the user.
	 *
	 * To avoid spamming the user, we only give one warning when we've used 1
	 * billion XIDs, and stay silent until the situation is fixed and the
	 * number of XIDs used falls below 800 million again.
	 *
	 * XXX: We have no safeguard to actually *prevent* the wrap-around,
	 * though.  All you get is a warning.
	 */
	if (oldSerXidControl->warningIssued)
	{
		TransactionId lowWatermark;

		lowWatermark = tailXid + 800000000;
		if (lowWatermark < FirstNormalTransactionId)
			lowWatermark = FirstNormalTransactionId;
		if (TransactionIdPrecedes(xid, lowWatermark))
			oldSerXidControl->warningIssued = false;
	}
	else
	{
		TransactionId highWatermark;

		highWatermark = tailXid + 1000000000;
		if (highWatermark < FirstNormalTransactionId)
			highWatermark = FirstNormalTransactionId;
		if (TransactionIdFollows(xid, highWatermark))
		{
			oldSerXidControl->warningIssued = true;
			ereport(WARNING,
					(errmsg("memory for serializable conflict tracking is nearly exhausted"),
					 errhint("There might be an idle transaction or a forgotten prepared transaction causing this.")));
		}
	}

	if (isNewPage)
	{
		/* Initialize intervening pages. */
		while (firstZeroPage != targetPage)
		{
			(void) SimpleLruZeroPage(OldSerXidSlruCtl, firstZeroPage);
			firstZeroPage = OldSerXidNextPage(firstZeroPage);
		}
		slotno = SimpleLruZeroPage(OldSerXidSlruCtl, targetPage);
	}
	else
		slotno = SimpleLruReadPage(OldSerXidSlruCtl, targetPage, true, xid);

	OldSerXidValue(slotno, xid) = minConflictCommitSeqNo;
	OldSerXidSlruCtl->shared->page_dirty[slotno] = true;

	LWLockRelease(OldSerXidLock);
}

/*
 * Get the minimum commitSeqNo for any conflict out for the given xid.  For
 * a transaction which exists but has no conflict out, InvalidSerCommitSeqNo
 * will be returned.
 */
static SerCommitSeqNo
OldSerXidGetMinConflictCommitSeqNo(TransactionId xid)
{
	TransactionId headXid;
	TransactionId tailXid;
	SerCommitSeqNo val;
	int			slotno;

	Assert(TransactionIdIsValid(xid));

	LWLockAcquire(OldSerXidLock, LW_SHARED);
	headXid = oldSerXidControl->headXid;
	tailXid = oldSerXidControl->tailXid;
	LWLockRelease(OldSerXidLock);

	if (!TransactionIdIsValid(headXid))
		return InvalidSerCommitSeqNo;

	Assert(TransactionIdIsValid(tailXid));

	if (TransactionIdPrecedes(xid, tailXid)
		|| TransactionIdFollows(xid, headXid))
		return InvalidSerCommitSeqNo;

	/*
	 * The following function must be called without holding OldSerXidLock,
	 * but will return with that lock held, which must then be released.
	 */
	slotno = SimpleLruReadPage_ReadOnly(OldSerXidSlruCtl,
										OldSerXidPage(xid), xid);
	val = OldSerXidValue(slotno, xid);
	LWLockRelease(OldSerXidLock);
	return val;
}
/*
 * Call this whenever there is a new xmin for active serializable
 * transactions.  We don't need to keep information on transactions which
 * precede that.  InvalidTransactionId means none active, so everything in
 * the SLRU can be discarded.
 */
static void
OldSerXidSetActiveSerXmin(TransactionId xid)
{
	LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);

	/*
	 * When no sxacts are active, nothing overlaps, so set the xid values to
	 * invalid to show that there are no valid entries.  Don't clear headPage,
	 * though.  A new xmin might still land on that page, and we don't want to
	 * repeatedly zero out the same page.
	 */
	if (!TransactionIdIsValid(xid))
	{
		oldSerXidControl->tailXid = InvalidTransactionId;
		oldSerXidControl->headXid = InvalidTransactionId;
		LWLockRelease(OldSerXidLock);
		return;
	}

	/*
	 * When we're recovering prepared transactions, the global xmin might move
	 * backwards depending on the order they're recovered.  Normally that's
	 * not OK, but during recovery no serializable transactions will commit,
	 * so the SLRU is empty and we can get away with it.
	 */
	if (RecoveryInProgress())
	{
		Assert(oldSerXidControl->headPage < 0);
		if (!TransactionIdIsValid(oldSerXidControl->tailXid)
			|| TransactionIdPrecedes(xid, oldSerXidControl->tailXid))
		{
			oldSerXidControl->tailXid = xid;
		}
		LWLockRelease(OldSerXidLock);
		return;
	}

	Assert(!TransactionIdIsValid(oldSerXidControl->tailXid)
		   || TransactionIdFollows(xid, oldSerXidControl->tailXid));

	oldSerXidControl->tailXid = xid;

	LWLockRelease(OldSerXidLock);
}
/*
 * Perform a checkpoint --- either during shutdown, or on-the-fly
 *
 * We don't have any data that needs to survive a restart, but this is a
 * convenient place to truncate the SLRU.
 */
void
CheckPointPredicate(void)
{
	int			tailPage;

	LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);

	/* Exit quickly if the SLRU is currently not in use. */
	if (oldSerXidControl->headPage < 0)
	{
		LWLockRelease(OldSerXidLock);
		return;
	}

	if (TransactionIdIsValid(oldSerXidControl->tailXid))
	{
		/* We can truncate the SLRU up to the page containing tailXid */
		tailPage = OldSerXidPage(oldSerXidControl->tailXid);
	}
	else
	{
		/*
		 * The SLRU is no longer needed.  Truncate to head before we set head
		 * invalid.
		 *
		 * XXX: It's possible that the SLRU is not needed again until XID
		 * wrap-around has happened, so that the segment containing headPage
		 * that we leave behind will appear to be new again.  In that case it
		 * won't be removed until XID horizon advances enough to make it
		 * current again.
		 */
		tailPage = oldSerXidControl->headPage;
		oldSerXidControl->headPage = -1;
	}

	LWLockRelease(OldSerXidLock);

	/* Truncate away pages that are no longer required */
	SimpleLruTruncate(OldSerXidSlruCtl, tailPage);

	/*
	 * Flush dirty SLRU pages to disk
	 *
	 * This is not actually necessary from a correctness point of view.  We do
	 * it merely as a debugging aid.
	 *
	 * We're doing this after the truncation to avoid writing pages right
	 * before deleting the file in which they sit, which would be completely
	 * pointless.
	 */
	SimpleLruFlush(OldSerXidSlruCtl, true);
}
/*------------------------------------------------------------------------*/

/*
 * InitPredicateLocks -- Initialize the predicate locking data structures.
 *
 * This is called from CreateSharedMemoryAndSemaphores(), which see for
 * more comments.  In the normal postmaster case, the shared hash tables
 * are created here.  Backends inherit the pointers
 * to the shared tables via fork().  In the EXEC_BACKEND case, each
 * backend re-executes this code to obtain pointers to the already existing
 * shared hash tables.
 */
void
InitPredicateLocks(void)
{
	HASHCTL		info;
	long		max_table_size;
	Size		requestSize;
	bool		found;

	/*
	 * Compute size of predicate lock target hashtable.  Note these
	 * calculations must agree with PredicateLockShmemSize!
	 */
	max_table_size = NPREDICATELOCKTARGETENTS();

	/*
	 * Allocate hash table for PREDICATELOCKTARGET structs.  This stores
	 * per-predicate-lock-target information.
	 */
	MemSet(&info, 0, sizeof(info));
	info.keysize = sizeof(PREDICATELOCKTARGETTAG);
	info.entrysize = sizeof(PREDICATELOCKTARGET);
	info.num_partitions = NUM_PREDICATELOCK_PARTITIONS;

	PredicateLockTargetHash = ShmemInitHash("PREDICATELOCKTARGET hash",
											max_table_size,
											max_table_size,
											&info,
											HASH_ELEM | HASH_BLOBS |
											HASH_PARTITION | HASH_FIXED_SIZE);

	/* Assume an average of 2 xacts per target */
	max_table_size *= 2;

	/*
	 * Reserve a dummy entry in the hash table; we use it to make sure there's
	 * always one entry available when we need to split or combine a page,
	 * because running out of space there could mean aborting a
	 * non-serializable transaction.
	 */
	hash_search(PredicateLockTargetHash, &ScratchTargetTag, HASH_ENTER, NULL);

	/*
	 * Allocate hash table for PREDICATELOCK structs.  This stores per
	 * xact-lock-of-a-target information.
	 */
	MemSet(&info, 0, sizeof(info));
	info.keysize = sizeof(PREDICATELOCKTAG);
	info.entrysize = sizeof(PREDICATELOCK);
	info.hash = predicatelock_hash;
	info.num_partitions = NUM_PREDICATELOCK_PARTITIONS;

	PredicateLockHash = ShmemInitHash("PREDICATELOCK hash",
									  max_table_size,
									  max_table_size,
									  &info,
									  HASH_ELEM | HASH_FUNCTION |
									  HASH_PARTITION | HASH_FIXED_SIZE);

	/*
	 * Compute size for serializable transaction hashtable.  Note these
	 * calculations must agree with PredicateLockShmemSize!
	 */
	max_table_size = (MaxBackends + max_prepared_xacts);

	/*
	 * Allocate a list to hold information on transactions participating in
	 * predicate locking.
	 *
	 * Assume an average of 10 predicate locking transactions per backend.
	 * This allows aggressive cleanup while detail is present before data must
	 * be summarized for storage in SLRU and the "dummy" transaction.
	 */
	max_table_size *= 10;

	PredXact = ShmemInitStruct("PredXactList",
							   PredXactListDataSize,
							   &found);
	if (!found)
	{
		int			i;

		SHMQueueInit(&PredXact->availableList);
		SHMQueueInit(&PredXact->activeList);
		PredXact->SxactGlobalXmin = InvalidTransactionId;
		PredXact->SxactGlobalXminCount = 0;
		PredXact->WritableSxactCount = 0;
		PredXact->LastSxactCommitSeqNo = FirstNormalSerCommitSeqNo - 1;
		PredXact->CanPartialClearThrough = 0;
		PredXact->HavePartialClearedThrough = 0;
		requestSize = mul_size((Size) max_table_size,
							   PredXactListElementDataSize);
		PredXact->element = ShmemAlloc(requestSize);
		/* Add all elements to available list, clean. */
		memset(PredXact->element, 0, requestSize);
		for (i = 0; i < max_table_size; i++)
		{
			SHMQueueInsertBefore(&(PredXact->availableList),
								 &(PredXact->element[i].link));
		}
		PredXact->OldCommittedSxact = CreatePredXact();
		SetInvalidVirtualTransactionId(PredXact->OldCommittedSxact->vxid);
		PredXact->OldCommittedSxact->prepareSeqNo = 0;
		PredXact->OldCommittedSxact->commitSeqNo = 0;
		PredXact->OldCommittedSxact->SeqNo.lastCommitBeforeSnapshot = 0;
		SHMQueueInit(&PredXact->OldCommittedSxact->outConflicts);
		SHMQueueInit(&PredXact->OldCommittedSxact->inConflicts);
		SHMQueueInit(&PredXact->OldCommittedSxact->predicateLocks);
		SHMQueueInit(&PredXact->OldCommittedSxact->finishedLink);
		SHMQueueInit(&PredXact->OldCommittedSxact->possibleUnsafeConflicts);
		PredXact->OldCommittedSxact->topXid = InvalidTransactionId;
		PredXact->OldCommittedSxact->finishedBefore = InvalidTransactionId;
		PredXact->OldCommittedSxact->xmin = InvalidTransactionId;
		PredXact->OldCommittedSxact->flags = SXACT_FLAG_COMMITTED;
		PredXact->OldCommittedSxact->pid = 0;
	}
	/* This never changes, so let's keep a local copy. */
	OldCommittedSxact = PredXact->OldCommittedSxact;

	/*
	 * Allocate hash table for SERIALIZABLEXID structs.  This stores per-xid
	 * information for serializable transactions which have accessed data.
	 */
	MemSet(&info, 0, sizeof(info));
	info.keysize = sizeof(SERIALIZABLEXIDTAG);
	info.entrysize = sizeof(SERIALIZABLEXID);

	SerializableXidHash = ShmemInitHash("SERIALIZABLEXID hash",
										max_table_size,
										max_table_size,
										&info,
										HASH_ELEM | HASH_BLOBS |
										HASH_FIXED_SIZE);

	/*
	 * Allocate space for tracking rw-conflicts in lists attached to the
	 * transactions.
	 *
	 * Assume an average of 5 conflicts per transaction.  Calculations suggest
	 * that this will prevent resource exhaustion in even the most pessimal
	 * loads up to max_connections = 200 with all 200 connections pounding the
	 * database with serializable transactions.  Beyond that, there may be
	 * occasional transactions canceled when trying to flag conflicts.  That's
	 * probably OK.
	 */
	max_table_size *= 5;

	RWConflictPool = ShmemInitStruct("RWConflictPool",
									 RWConflictPoolHeaderDataSize,
									 &found);
	if (!found)
	{
		int			i;

		SHMQueueInit(&RWConflictPool->availableList);
		requestSize = mul_size((Size) max_table_size,
							   RWConflictDataSize);
		RWConflictPool->element = ShmemAlloc(requestSize);
		/* Add all elements to available list, clean. */
		memset(RWConflictPool->element, 0, requestSize);
		for (i = 0; i < max_table_size; i++)
		{
			SHMQueueInsertBefore(&(RWConflictPool->availableList),
								 &(RWConflictPool->element[i].outLink));
		}
	}

	/*
	 * Create or attach to the header for the list of finished serializable
	 * transactions.
	 */
	FinishedSerializableTransactions = (SHM_QUEUE *)
		ShmemInitStruct("FinishedSerializableTransactions",
						sizeof(SHM_QUEUE),
						&found);
	if (!found)
		SHMQueueInit(FinishedSerializableTransactions);

	/*
	 * Initialize the SLRU storage for old committed serializable
	 * transactions.
	 */
	OldSerXidInit();

	/* Pre-calculate the hash and partition lock of the scratch entry */
	ScratchTargetTagHash = PredicateLockTargetTagHashCode(&ScratchTargetTag);
	ScratchPartitionLock = PredicateLockHashPartitionLock(ScratchTargetTagHash);
}
/*
 * Estimate shared-memory space used for predicate lock table
 */
Size
PredicateLockShmemSize(void)
{
	Size		size = 0;
	long		max_table_size;

	/* predicate lock target hash table */
	max_table_size = NPREDICATELOCKTARGETENTS();
	size = add_size(size, hash_estimate_size(max_table_size,
											 sizeof(PREDICATELOCKTARGET)));

	/* predicate lock hash table */
	max_table_size *= 2;
	size = add_size(size, hash_estimate_size(max_table_size,
											 sizeof(PREDICATELOCK)));

	/*
	 * Since NPREDICATELOCKTARGETENTS is only an estimate, add 10% safety
	 * margin.
	 */
	size = add_size(size, size / 10);

	/* transaction list */
	max_table_size = MaxBackends + max_prepared_xacts;
	max_table_size *= 10;
	size = add_size(size, PredXactListDataSize);
	size = add_size(size, mul_size((Size) max_table_size,
								   PredXactListElementDataSize));

	/* transaction xid table */
	size = add_size(size, hash_estimate_size(max_table_size,
											 sizeof(SERIALIZABLEXID)));

	/* rw-conflict pool */
	max_table_size *= 5;
	size = add_size(size, RWConflictPoolHeaderDataSize);
	size = add_size(size, mul_size((Size) max_table_size,
								   RWConflictDataSize));

	/* Head for list of finished serializable transactions. */
	size = add_size(size, sizeof(SHM_QUEUE));

	/* Shared memory structures for SLRU tracking of old committed xids. */
	size = add_size(size, sizeof(OldSerXidControlData));
	size = add_size(size, SimpleLruShmemSize(NUM_OLDSERXID_BUFFERS, 0));

	return size;
}
/*
 * Compute the hash code associated with a PREDICATELOCKTAG.
 *
 * Because we want to use just one set of partition locks for both the
 * PREDICATELOCKTARGET and PREDICATELOCK hash tables, we have to make sure
 * that PREDICATELOCKs fall into the same partition number as their
 * associated PREDICATELOCKTARGETs.  dynahash.c expects the partition number
 * to be the low-order bits of the hash code, and therefore a
 * PREDICATELOCKTAG's hash code must have the same low-order bits as the
 * associated PREDICATELOCKTARGETTAG's hash code.  We achieve this with this
 * specialized hash function.
 */
static uint32
predicatelock_hash(const void *key, Size keysize)
{
	const PREDICATELOCKTAG *predicatelocktag = (const PREDICATELOCKTAG *) key;
	uint32		targethash;

	Assert(keysize == sizeof(PREDICATELOCKTAG));

	/* Look into the associated target object, and compute its hash code */
	targethash = PredicateLockTargetTagHashCode(&predicatelocktag->myTarget->tag);

	return PredicateLockHashCodeFromTargetHashCode(predicatelocktag, targethash);
}
/*
 * GetPredicateLockStatusData
 *		Return a table containing the internal state of the predicate
 *		lock manager for use in pg_lock_status.
 *
 * Like GetLockStatusData, this function tries to hold the partition LWLocks
 * for as short a time as possible by returning two arrays that simply
 * contain the PREDICATELOCKTARGETTAG and SERIALIZABLEXACT for each lock
 * table entry.  Multiple copies of the same PREDICATELOCKTARGETTAG and
 * SERIALIZABLEXACT will likely appear.
 */
PredicateLockData *
GetPredicateLockStatusData(void)
{
	PredicateLockData *data;
	int			i;
	int			els,
				el;
	HASH_SEQ_STATUS seqstat;
	PREDICATELOCK *predlock;

	data = (PredicateLockData *) palloc(sizeof(PredicateLockData));

	/*
	 * To ensure consistency, take simultaneous locks on all partition locks
	 * in ascending order, then SerializableXactHashLock.
	 */
	for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
		LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_SHARED);
	LWLockAcquire(SerializableXactHashLock, LW_SHARED);

	/* Get number of locks and allocate appropriately-sized arrays. */
	els = hash_get_num_entries(PredicateLockHash);
	data->nelements = els;
	data->locktags = (PREDICATELOCKTARGETTAG *)
		palloc(sizeof(PREDICATELOCKTARGETTAG) * els);
	data->xacts = (SERIALIZABLEXACT *)
		palloc(sizeof(SERIALIZABLEXACT) * els);

	/* Scan through PredicateLockHash and copy contents */
	hash_seq_init(&seqstat, PredicateLockHash);

	el = 0;

	while ((predlock = (PREDICATELOCK *) hash_seq_search(&seqstat)))
	{
		data->locktags[el] = predlock->tag.myTarget->tag;
		data->xacts[el] = *predlock->tag.myXact;
		el++;
	}

	Assert(el == els);

	/* Release locks in reverse order */
	LWLockRelease(SerializableXactHashLock);
	for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
		LWLockRelease(PredicateLockHashPartitionLockByIndex(i));

	return data;
}
/*
 * Free up shared memory structures by pushing the oldest sxact (the one at
 * the front of the FinishedSerializableTransactions list) into summary form.
 * Each call will free exactly one SERIALIZABLEXACT structure and may also
 * free one or more of these structures: SERIALIZABLEXID, PREDICATELOCK,
 * PREDICATELOCKTARGET, RWConflictData.
 */
static void
SummarizeOldestCommittedSxact(void)
{
	SERIALIZABLEXACT *sxact;

	LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);

	/*
	 * This function is only called if there are no sxact slots available.
	 * Some of them must belong to old, already-finished transactions, so
	 * there should be something in FinishedSerializableTransactions list that
	 * we can summarize.  However, there's a race condition: while we were not
	 * holding any locks, a transaction might have ended and cleaned up all
	 * the finished sxact entries already, freeing up their sxact slots.  In
	 * that case, we have nothing to do here.  The caller will find one of the
	 * slots released by the other backend when it retries.
	 */
	if (SHMQueueEmpty(FinishedSerializableTransactions))
	{
		LWLockRelease(SerializableFinishedListLock);
		return;
	}

	/*
	 * Grab the first sxact off the finished list -- this will be the earliest
	 * commit.  Remove it from the list.
	 */
	sxact = (SERIALIZABLEXACT *)
		SHMQueueNext(FinishedSerializableTransactions,
					 FinishedSerializableTransactions,
					 offsetof(SERIALIZABLEXACT, finishedLink));
	SHMQueueDelete(&(sxact->finishedLink));

	/* Add to SLRU summary information. */
	if (TransactionIdIsValid(sxact->topXid) && !SxactIsReadOnly(sxact))
		OldSerXidAdd(sxact->topXid, SxactHasConflictOut(sxact)
					 ? sxact->SeqNo.earliestOutConflictCommit : InvalidSerCommitSeqNo);

	/* Summarize and release the detail. */
	ReleaseOneSerializableXact(sxact, false, true);

	LWLockRelease(SerializableFinishedListLock);
}
/*
 * GetSafeSnapshot
 *		Obtain and register a snapshot for a READ ONLY DEFERRABLE
 *		transaction.  Ensures that the snapshot is "safe", i.e. a
 *		read-only transaction running on it can execute serializably
 *		without further checks.  This requires waiting for concurrent
 *		transactions to complete, and retrying with a new snapshot if
 *		one of them could possibly create a conflict.
 *
 * As with GetSerializableTransactionSnapshot (which this is a subroutine
 * for), the passed-in Snapshot pointer should reference a static data
 * area that can safely be passed to GetSnapshotData.
 */
static Snapshot
GetSafeSnapshot(Snapshot origSnapshot)
{
	Snapshot	snapshot;

	Assert(XactReadOnly && XactDeferrable);

	while (true)
	{
		/*
		 * GetSerializableTransactionSnapshotInt is going to call
		 * GetSnapshotData, so we need to provide it the static snapshot area
		 * our caller passed to us.  The pointer returned is actually the same
		 * one passed to it, but we avoid assuming that here.
		 */
		snapshot = GetSerializableTransactionSnapshotInt(origSnapshot,
														 NULL, InvalidPid);

		if (MySerializableXact == InvalidSerializableXact)
			return snapshot;	/* no concurrent r/w xacts; it's safe */

		LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);

		/*
		 * Wait for concurrent transactions to finish.  Stop early if one of
		 * them marked us as conflicted.
		 */
		MySerializableXact->flags |= SXACT_FLAG_DEFERRABLE_WAITING;
		while (!(SHMQueueEmpty(&MySerializableXact->possibleUnsafeConflicts) ||
				 SxactIsROUnsafe(MySerializableXact)))
		{
			LWLockRelease(SerializableXactHashLock);
			ProcWaitForSignal(WAIT_EVENT_SAFE_SNAPSHOT);
			LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
		}
		MySerializableXact->flags &= ~SXACT_FLAG_DEFERRABLE_WAITING;

		if (!SxactIsROUnsafe(MySerializableXact))
		{
			LWLockRelease(SerializableXactHashLock);
			break;				/* success */
		}

		LWLockRelease(SerializableXactHashLock);

		/* else, need to retry... */
		ereport(DEBUG2,
				(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
				 errmsg("deferrable snapshot was unsafe; trying a new one")));
		ReleasePredicateLocks(false);
	}

	/*
	 * Now we have a safe snapshot, so we don't need to do any further checks.
	 */
	Assert(SxactIsROSafe(MySerializableXact));
	ReleasePredicateLocks(false);

	return snapshot;
}
/*
 * GetSafeSnapshotBlockingPids
 *		If the specified process is currently blocked in GetSafeSnapshot,
 *		write the process IDs of all processes that it is blocked by
 *		into the caller-supplied buffer output[].  The list is truncated at
 *		output_size, and the number of PIDs written into the buffer is
 *		returned.  Returns zero if the given PID is not currently blocked
 *		in GetSafeSnapshot.
 */
int
GetSafeSnapshotBlockingPids(int blocked_pid, int *output, int output_size)
{
	int			num_written = 0;
	SERIALIZABLEXACT *sxact;

	LWLockAcquire(SerializableXactHashLock, LW_SHARED);

	/* Find blocked_pid's SERIALIZABLEXACT by linear search. */
	for (sxact = FirstPredXact(); sxact != NULL; sxact = NextPredXact(sxact))
	{
		if (sxact->pid == blocked_pid)
			break;
	}

	/* Did we find it, and is it currently waiting in GetSafeSnapshot? */
	if (sxact != NULL && SxactIsDeferrableWaiting(sxact))
	{
		RWConflict	possibleUnsafeConflict;

		/* Traverse the list of possible unsafe conflicts collecting PIDs. */
		possibleUnsafeConflict = (RWConflict)
			SHMQueueNext(&sxact->possibleUnsafeConflicts,
						 &sxact->possibleUnsafeConflicts,
						 offsetof(RWConflictData, inLink));

		while (possibleUnsafeConflict != NULL && num_written < output_size)
		{
			output[num_written++] = possibleUnsafeConflict->sxactOut->pid;
			possibleUnsafeConflict = (RWConflict)
				SHMQueueNext(&sxact->possibleUnsafeConflicts,
							 &possibleUnsafeConflict->inLink,
							 offsetof(RWConflictData, inLink));
		}
	}

	LWLockRelease(SerializableXactHashLock);

	return num_written;
}
/*
 * Acquire a snapshot that can be used for the current transaction.
 *
 * Make sure we have a SERIALIZABLEXACT reference in MySerializableXact.
 * It should be current for this process and be contained in PredXact.
 *
 * The passed-in Snapshot pointer should reference a static data area that
 * can safely be passed to GetSnapshotData.  The return value is actually
 * always this same pointer; no new snapshot data structure is allocated
 * within this function.
 */
Snapshot
GetSerializableTransactionSnapshot(Snapshot snapshot)
{
	Assert(IsolationIsSerializable());

	/*
	 * Can't use serializable mode while recovery is still active, as it is,
	 * for example, on a hot standby.  We could get here despite the check in
	 * check_XactIsoLevel() if default_transaction_isolation is set to
	 * serializable, so phrase the hint accordingly.
	 */
	if (RecoveryInProgress())
		ereport(ERROR,
				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
				 errmsg("cannot use serializable mode in a hot standby"),
				 errdetail("\"default_transaction_isolation\" is set to \"serializable\"."),
				 errhint("You can use \"SET default_transaction_isolation = 'repeatable read'\" to change the default.")));

	/*
	 * A special optimization is available for SERIALIZABLE READ ONLY
	 * DEFERRABLE transactions -- we can wait for a suitable snapshot and
	 * thereby avoid all SSI overhead once it's running.
	 */
	if (XactReadOnly && XactDeferrable)
		return GetSafeSnapshot(snapshot);

	return GetSerializableTransactionSnapshotInt(snapshot,
												 NULL, InvalidPid);
}
/*
 * Import a snapshot to be used for the current transaction.
 *
 * This is nearly the same as GetSerializableTransactionSnapshot, except that
 * we don't take a new snapshot, but rather use the data we're handed.
 *
 * The caller must have verified that the snapshot came from a serializable
 * transaction; and if we're read-write, the source transaction must not be
 * read-only.
 */
void
SetSerializableTransactionSnapshot(Snapshot snapshot,
								   VirtualTransactionId *sourcevxid,
								   int sourcepid)
{
	Assert(IsolationIsSerializable());

	/*
	 * We do not allow SERIALIZABLE READ ONLY DEFERRABLE transactions to
	 * import snapshots, since there's no way to wait for a safe snapshot when
	 * we're using the snap we're told to.  (XXX instead of throwing an error,
	 * we could just ignore the XactDeferrable flag?)
	 */
	if (XactReadOnly && XactDeferrable)
		ereport(ERROR,
				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
				 errmsg("a snapshot-importing transaction must not be READ ONLY DEFERRABLE")));

	(void) GetSerializableTransactionSnapshotInt(snapshot, sourcevxid,
												 sourcepid);
}
/*
 * Guts of GetSerializableTransactionSnapshot
 *
 * If sourcevxid is valid, this is actually an import operation and we should
 * skip calling GetSnapshotData, because the snapshot contents are already
 * loaded up.  HOWEVER: to avoid race conditions, we must check that the
 * source xact is still running after we acquire SerializableXactHashLock.
 * We do that by calling ProcArrayInstallImportedXmin.
 */
static Snapshot
GetSerializableTransactionSnapshotInt(Snapshot snapshot,
									  VirtualTransactionId *sourcevxid,
									  int sourcepid)
{
	PGPROC	   *proc;
	VirtualTransactionId vxid;
	SERIALIZABLEXACT *sxact,
			   *othersxact;
	HASHCTL		hash_ctl;

	/* We only do this for serializable transactions.  Once. */
	Assert(MySerializableXact == InvalidSerializableXact);

	Assert(!RecoveryInProgress());

	/*
	 * Since all parts of a serializable transaction must use the same
	 * snapshot, it is too late to establish one after a parallel operation
	 * has begun.
	 */
	if (IsInParallelMode())
		elog(ERROR, "cannot establish serializable snapshot during a parallel operation");

	proc = MyProc;
	Assert(proc != NULL);
	GET_VXID_FROM_PGPROC(vxid, *proc);

	/*
	 * First we get the sxact structure, which may involve looping and access
	 * to the "finished" list to free a structure for use.
	 *
	 * We must hold SerializableXactHashLock when taking/checking the snapshot
	 * to avoid race conditions, for much the same reasons that
	 * GetSnapshotData takes the ProcArrayLock.  Since we might have to
	 * release SerializableXactHashLock to call SummarizeOldestCommittedSxact,
	 * this means we have to create the sxact first, which is a bit annoying
	 * (in particular, an elog(ERROR) in procarray.c would cause us to leak
	 * the sxact).  Consider refactoring to avoid this.
	 */
#ifdef TEST_OLDSERXID
	SummarizeOldestCommittedSxact();
#endif
	LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
	do
	{
		sxact = CreatePredXact();
		/* If null, push out committed sxact to SLRU summary & retry. */
		if (!sxact)
		{
			LWLockRelease(SerializableXactHashLock);
			SummarizeOldestCommittedSxact();
			LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
		}
	} while (!sxact);

	/* Get the snapshot, or check that it's safe to use */
	if (!sourcevxid)
		snapshot = GetSnapshotData(snapshot);
	else if (!ProcArrayInstallImportedXmin(snapshot->xmin, sourcevxid))
	{
		ReleasePredXact(sxact);
		LWLockRelease(SerializableXactHashLock);
		ereport(ERROR,
				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
				 errmsg("could not import the requested snapshot"),
				 errdetail("The source process with pid %d is not running anymore.",
						   sourcepid)));
	}

	/*
	 * If there are no serializable transactions which are not read-only, we
	 * can "opt out" of predicate locking and conflict checking for a
	 * read-only transaction.
	 *
	 * The reason this is safe is that a read-only transaction can only become
	 * part of a dangerous structure if it overlaps a writable transaction
	 * which in turn overlaps a writable transaction which committed before
	 * the read-only transaction started.  A new writable transaction can
	 * overlap this one, but it can't meet the other condition of overlapping
	 * a transaction which committed before this one started.
	 */
	if (XactReadOnly && PredXact->WritableSxactCount == 0)
	{
		ReleasePredXact(sxact);
		LWLockRelease(SerializableXactHashLock);
		return snapshot;
	}

	/* Maintain serializable global xmin info. */
	if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
	{
		Assert(PredXact->SxactGlobalXminCount == 0);
		PredXact->SxactGlobalXmin = snapshot->xmin;
		PredXact->SxactGlobalXminCount = 1;
		OldSerXidSetActiveSerXmin(snapshot->xmin);
	}
	else if (TransactionIdEquals(snapshot->xmin, PredXact->SxactGlobalXmin))
	{
		Assert(PredXact->SxactGlobalXminCount > 0);
		PredXact->SxactGlobalXminCount++;
	}
	else
	{
		Assert(TransactionIdFollows(snapshot->xmin, PredXact->SxactGlobalXmin));
	}

	/* Initialize the structure. */
	sxact->vxid = vxid;
	sxact->SeqNo.lastCommitBeforeSnapshot = PredXact->LastSxactCommitSeqNo;
	sxact->prepareSeqNo = InvalidSerCommitSeqNo;
	sxact->commitSeqNo = InvalidSerCommitSeqNo;
	SHMQueueInit(&(sxact->outConflicts));
	SHMQueueInit(&(sxact->inConflicts));
	SHMQueueInit(&(sxact->possibleUnsafeConflicts));
	sxact->topXid = GetTopTransactionIdIfAny();
	sxact->finishedBefore = InvalidTransactionId;
	sxact->xmin = snapshot->xmin;
	sxact->pid = MyProcPid;
	SHMQueueInit(&(sxact->predicateLocks));
	SHMQueueElemInit(&(sxact->finishedLink));
	sxact->flags = 0;
	if (XactReadOnly)
	{
		sxact->flags |= SXACT_FLAG_READ_ONLY;

		/*
		 * Register all concurrent r/w transactions as possible conflicts; if
		 * all of them commit without any outgoing conflicts to earlier
		 * transactions then this snapshot can be deemed safe (and we can run
		 * without tracking predicate locks).
		 */
		for (othersxact = FirstPredXact();
			 othersxact != NULL;
			 othersxact = NextPredXact(othersxact))
		{
			if (!SxactIsCommitted(othersxact)
				&& !SxactIsDoomed(othersxact)
				&& !SxactIsReadOnly(othersxact))
			{
				SetPossibleUnsafeConflict(sxact, othersxact);
			}
		}
	}
	else
	{
		++(PredXact->WritableSxactCount);
		Assert(PredXact->WritableSxactCount <=
			   (MaxBackends + max_prepared_xacts));
	}

	MySerializableXact = sxact;
	MyXactDidWrite = false;		/* haven't written anything yet */

	LWLockRelease(SerializableXactHashLock);

	/* Initialize the backend-local hash table of parent locks */
	Assert(LocalPredicateLockHash == NULL);
	MemSet(&hash_ctl, 0, sizeof(hash_ctl));
	hash_ctl.keysize = sizeof(PREDICATELOCKTARGETTAG);
	hash_ctl.entrysize = sizeof(LOCALPREDICATELOCK);
	LocalPredicateLockHash = hash_create("Local predicate lock",
										 max_predicate_locks_per_xact,
										 &hash_ctl,
										 HASH_ELEM | HASH_BLOBS);

	return snapshot;
}
/*
 * Register the top level XID in SerializableXidHash.
 * Also store it for easy reference in MySerializableXact.
 */
void
RegisterPredicateLockingXid(TransactionId xid)
{
	SERIALIZABLEXIDTAG sxidtag;
	SERIALIZABLEXID *sxid;
	bool		found;

	/*
	 * If we're not tracking predicate lock data for this transaction, we
	 * should ignore the request and return quickly.
	 */
	if (MySerializableXact == InvalidSerializableXact)
		return;

	/* We should have a valid XID and be at the top level. */
	Assert(TransactionIdIsValid(xid));

	LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);

	/* This should only be done once per transaction. */
	Assert(MySerializableXact->topXid == InvalidTransactionId);

	MySerializableXact->topXid = xid;

	sxidtag.xid = xid;
	sxid = (SERIALIZABLEXID *) hash_search(SerializableXidHash,
										   &sxidtag,
										   HASH_ENTER, &found);
	Assert(!found);

	/* Initialize the structure. */
	sxid->myXact = MySerializableXact;
	LWLockRelease(SerializableXactHashLock);
}
/*
 * Check whether there are any predicate locks held by any transaction
 * for the page at the given block number.
 *
 * Note that the transaction may be completed but not yet subject to
 * cleanup due to overlapping serializable transactions.  This must
 * return valid information regardless of transaction isolation level.
 *
 * Also note that this doesn't check for a conflicting relation lock,
 * just a lock specifically on the given page.
 *
 * One use is to support proper behavior during GiST index vacuum.
 */
bool
PageIsPredicateLocked(Relation relation, BlockNumber blkno)
{
	PREDICATELOCKTARGETTAG targettag;
	uint32		targettaghash;
	LWLock	   *partitionLock;
	PREDICATELOCKTARGET *target;

	SET_PREDICATELOCKTARGETTAG_PAGE(targettag,
									relation->rd_node.dbNode,
									relation->rd_id,
									blkno);

	targettaghash = PredicateLockTargetTagHashCode(&targettag);
	partitionLock = PredicateLockHashPartitionLock(targettaghash);
	LWLockAcquire(partitionLock, LW_SHARED);
	target = (PREDICATELOCKTARGET *)
		hash_search_with_hash_value(PredicateLockTargetHash,
									&targettag, targettaghash,
									HASH_FIND, NULL);
	LWLockRelease(partitionLock);

	return (target != NULL);
}
1940 * Check whether a particular lock is held by this transaction.
1942 * Important note: this function may return false even if the lock is
1943 * being held, because it uses the local lock table which is not
1944 * updated if another transaction modifies our lock list (e.g. to
1945 * split an index page). It can also return true when a coarser
1946 * granularity lock that covers this target is being held. Be careful
1947 to only use this function in circumstances where such errors are acceptable.
1951 PredicateLockExists(const PREDICATELOCKTARGETTAG *targettag)
1953 LOCALPREDICATELOCK *lock;
1955 /* check local hash table */
1956 lock = (LOCALPREDICATELOCK *) hash_search(LocalPredicateLockHash,
1964 * Found entry in the table, but still need to check whether it's actually
1965 * held -- it could just be a parent of some held lock.
1971 * Return the parent lock tag in the lock hierarchy: the next coarser
1972 * lock that covers the provided tag.
1974 * Returns true and sets *parent to the parent tag if one exists,
1975 * returns false if none exists.
1978 GetParentPredicateLockTag(const PREDICATELOCKTARGETTAG *tag,
1979 PREDICATELOCKTARGETTAG *parent)
1981 switch (GET_PREDICATELOCKTARGETTAG_TYPE(*tag))
1983 case PREDLOCKTAG_RELATION:
1984 /* relation locks have no parent lock */
1987 case PREDLOCKTAG_PAGE:
1988 /* parent lock is relation lock */
1989 SET_PREDICATELOCKTARGETTAG_RELATION(*parent,
1990 GET_PREDICATELOCKTARGETTAG_DB(*tag),
1991 GET_PREDICATELOCKTARGETTAG_RELATION(*tag));
1995 case PREDLOCKTAG_TUPLE:
1996 /* parent lock is page lock */
1997 SET_PREDICATELOCKTARGETTAG_PAGE(*parent,
1998 GET_PREDICATELOCKTARGETTAG_DB(*tag),
1999 GET_PREDICATELOCKTARGETTAG_RELATION(*tag),
2000 GET_PREDICATELOCKTARGETTAG_PAGE(*tag));
2010 * Check whether the lock we are considering is already covered by a
2011 * coarser lock for our transaction.
2013 * Like PredicateLockExists, this function might return a false
2014 * negative, but it will never return a false positive.
2017 CoarserLockCovers(const PREDICATELOCKTARGETTAG *newtargettag)
2019 PREDICATELOCKTARGETTAG targettag,
2022 targettag = *newtargettag;
2024 /* check parents iteratively until no more */
2025 while (GetParentPredicateLockTag(&targettag, &parenttag))
2027 targettag = parenttag;
2028 if (PredicateLockExists(&targettag))
2032 /* no more parents to check; lock is not covered */
2037 * Remove the dummy entry from the predicate lock target hash, to free up some
2038 * scratch space. The caller must be holding SerializablePredicateLockListLock,
2039 and must restore the entry with RestoreScratchTarget() before releasing the lock.
2042 * If lockheld is true, the caller is already holding the partition lock
2043 * of the partition containing the scratch entry.
2046 RemoveScratchTarget(bool lockheld)
2050 Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2053 LWLockAcquire(ScratchPartitionLock, LW_EXCLUSIVE);
2054 hash_search_with_hash_value(PredicateLockTargetHash,
2056 ScratchTargetTagHash,
2057 HASH_REMOVE, &found);
2060 LWLockRelease(ScratchPartitionLock);
2064 * Re-insert the dummy entry in predicate lock target hash.
2067 RestoreScratchTarget(bool lockheld)
2071 Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2074 LWLockAcquire(ScratchPartitionLock, LW_EXCLUSIVE);
2075 hash_search_with_hash_value(PredicateLockTargetHash,
2077 ScratchTargetTagHash,
2078 HASH_ENTER, &found);
2081 LWLockRelease(ScratchPartitionLock);
2085 * Check whether the list of related predicate locks is empty for a
2086 * predicate lock target, and remove the target if it is.
2089 RemoveTargetIfNoLongerUsed(PREDICATELOCKTARGET *target, uint32 targettaghash)
2091 PREDICATELOCKTARGET *rmtarget PG_USED_FOR_ASSERTS_ONLY;
2093 Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2095 /* Can't remove it until no locks at this target. */
2096 if (!SHMQueueEmpty(&target->predicateLocks))
2099 /* Actually remove the target. */
2100 rmtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2104 Assert(rmtarget == target);
2108 * Delete child target locks owned by this process.
2109 This implementation assumes that the usage of each target tag field
2110 * is uniform. No need to make this hard if we don't have to.
2112 * We aren't acquiring lightweight locks for the predicate lock or lock
2113 * target structures associated with this transaction unless we're going
2114 to modify them, because no other process is permitted to modify our locks.
2118 DeleteChildTargetLocks(const PREDICATELOCKTARGETTAG *newtargettag)
2120 SERIALIZABLEXACT *sxact;
2121 PREDICATELOCK *predlock;
2123 LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
2124 sxact = MySerializableXact;
2125 predlock = (PREDICATELOCK *)
2126 SHMQueueNext(&(sxact->predicateLocks),
2127 &(sxact->predicateLocks),
2128 offsetof(PREDICATELOCK, xactLink));
2131 SHM_QUEUE *predlocksxactlink;
2132 PREDICATELOCK *nextpredlock;
2133 PREDICATELOCKTAG oldlocktag;
2134 PREDICATELOCKTARGET *oldtarget;
2135 PREDICATELOCKTARGETTAG oldtargettag;
2137 predlocksxactlink = &(predlock->xactLink);
2138 nextpredlock = (PREDICATELOCK *)
2139 SHMQueueNext(&(sxact->predicateLocks),
2141 offsetof(PREDICATELOCK, xactLink));
2143 oldlocktag = predlock->tag;
2144 Assert(oldlocktag.myXact == sxact);
2145 oldtarget = oldlocktag.myTarget;
2146 oldtargettag = oldtarget->tag;
2148 if (TargetTagIsCoveredBy(oldtargettag, *newtargettag))
2150 uint32 oldtargettaghash;
2151 LWLock *partitionLock;
2152 PREDICATELOCK *rmpredlock PG_USED_FOR_ASSERTS_ONLY;
2154 oldtargettaghash = PredicateLockTargetTagHashCode(&oldtargettag);
2155 partitionLock = PredicateLockHashPartitionLock(oldtargettaghash);
2157 LWLockAcquire(partitionLock, LW_EXCLUSIVE);
2159 SHMQueueDelete(predlocksxactlink);
2160 SHMQueueDelete(&(predlock->targetLink));
2161 rmpredlock = hash_search_with_hash_value
2164 PredicateLockHashCodeFromTargetHashCode(&oldlocktag,
2167 Assert(rmpredlock == predlock);
2169 RemoveTargetIfNoLongerUsed(oldtarget, oldtargettaghash);
2171 LWLockRelease(partitionLock);
2173 DecrementParentLocks(&oldtargettag);
2176 predlock = nextpredlock;
2178 LWLockRelease(SerializablePredicateLockListLock);
2182 * Returns the promotion limit for a given predicate lock target. This is the
2183 * max number of descendant locks allowed before promoting to the specified
2184 * tag. Note that the limit includes non-direct descendants (e.g., both tuples
2185 * and pages for a relation lock).
2187 * Currently the default limit is 2 for a page lock, and half of the value of
2188 * max_pred_locks_per_transaction - 1 for a relation lock, to match behavior
2189 * of earlier releases when upgrading.
2191 * TODO SSI: We should probably add additional GUCs to allow a maximum ratio
2192 * of page and tuple locks based on the pages in a relation, and the maximum
2193 * ratio of tuple locks to tuples in a page. This would provide more
2194 * generally "balanced" allocation of locks to where they are most useful,
2195 * while still allowing the absolute numbers to prevent one relation from
2196 * tying up all predicate lock resources.
2199 MaxPredicateChildLocks(const PREDICATELOCKTARGETTAG *tag)
2201 switch (GET_PREDICATELOCKTARGETTAG_TYPE(*tag))
2203 case PREDLOCKTAG_RELATION:
2204 return max_predicate_locks_per_relation < 0
2205 ? (max_predicate_locks_per_xact
2206 / (-max_predicate_locks_per_relation)) - 1
2207 : max_predicate_locks_per_relation;
2209 case PREDLOCKTAG_PAGE:
2210 return max_predicate_locks_per_page;
2212 case PREDLOCKTAG_TUPLE:
2215 * not reachable: nothing is finer-granularity than a tuple, so we
2216 * should never try to promote to it.
2228 * For all ancestors of a newly-acquired predicate lock, increment
2229 * their child count in the parent hash table. If any of them have
2230 * more descendants than their promotion threshold, acquire the
2231 * coarsest such lock.
2233 * Returns true if a parent lock was acquired and false otherwise.
2236 CheckAndPromotePredicateLockRequest(const PREDICATELOCKTARGETTAG *reqtag)
2238 PREDICATELOCKTARGETTAG targettag,
2241 LOCALPREDICATELOCK *parentlock;
2247 targettag = *reqtag;
2249 /* check parents iteratively */
2250 while (GetParentPredicateLockTag(&targettag, &nexttag))
2252 targettag = nexttag;
2253 parentlock = (LOCALPREDICATELOCK *) hash_search(LocalPredicateLockHash,
2259 parentlock->held = false;
2260 parentlock->childLocks = 1;
2263 parentlock->childLocks++;
2265 if (parentlock->childLocks >
2266 MaxPredicateChildLocks(&targettag))
2269 * We should promote to this parent lock. Continue to check its
2270 * ancestors, however, both to get their child counts right and to
2271 check whether we should just go ahead and promote to one of them.
2274 promotiontag = targettag;
2281 /* acquire coarsest ancestor eligible for promotion */
2282 PredicateLockAcquire(&promotiontag);
2290 * When releasing a lock, decrement the child count on all ancestor
2293 * This is called only when releasing a lock via
2294 * DeleteChildTargetLocks (i.e. when a lock becomes redundant because
2295 * we've acquired its parent, possibly due to promotion) or when a new
2296 * MVCC write lock makes the predicate lock unnecessary. There's no
2297 * point in calling it when locks are released at transaction end, as
2298 * this information is no longer needed.
2301 DecrementParentLocks(const PREDICATELOCKTARGETTAG *targettag)
2303 PREDICATELOCKTARGETTAG parenttag,
2306 parenttag = *targettag;
2308 while (GetParentPredicateLockTag(&parenttag, &nexttag))
2310 uint32 targettaghash;
2311 LOCALPREDICATELOCK *parentlock,
2312 *rmlock PG_USED_FOR_ASSERTS_ONLY;
2314 parenttag = nexttag;
2315 targettaghash = PredicateLockTargetTagHashCode(&parenttag);
2316 parentlock = (LOCALPREDICATELOCK *)
2317 hash_search_with_hash_value(LocalPredicateLockHash,
2318 &parenttag, targettaghash,
2322 * There's a small chance the parent lock doesn't exist in the lock
2323 * table. This can happen if we prematurely removed it because an
2324 * index split caused the child refcount to be off.
2326 if (parentlock == NULL)
2329 parentlock->childLocks--;
2332 * Under similar circumstances the parent lock's refcount might be
2333 * zero. This only happens if we're holding that lock (otherwise we
2334 * would have removed the entry).
2336 if (parentlock->childLocks < 0)
2338 Assert(parentlock->held);
2339 parentlock->childLocks = 0;
2342 if ((parentlock->childLocks == 0) && (!parentlock->held))
2344 rmlock = (LOCALPREDICATELOCK *)
2345 hash_search_with_hash_value(LocalPredicateLockHash,
2346 &parenttag, targettaghash,
2348 Assert(rmlock == parentlock);
2354 * Indicate that a predicate lock on the given target is held by the
2355 * specified transaction. Has no effect if the lock is already held.
2357 * This updates the lock table and the sxact's lock list, and creates
2358 * the lock target if necessary, but does *not* do anything related to
2359 * granularity promotion or the local lock table. See
2360 * PredicateLockAcquire for that.
2363 CreatePredicateLock(const PREDICATELOCKTARGETTAG *targettag,
2364 uint32 targettaghash,
2365 SERIALIZABLEXACT *sxact)
2367 PREDICATELOCKTARGET *target;
2368 PREDICATELOCKTAG locktag;
2369 PREDICATELOCK *lock;
2370 LWLock *partitionLock;
2373 partitionLock = PredicateLockHashPartitionLock(targettaghash);
2375 LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
2376 LWLockAcquire(partitionLock, LW_EXCLUSIVE);
2378 /* Make sure that the target is represented. */
2379 target = (PREDICATELOCKTARGET *)
2380 hash_search_with_hash_value(PredicateLockTargetHash,
2381 targettag, targettaghash,
2382 HASH_ENTER_NULL, &found);
2385 (errcode(ERRCODE_OUT_OF_MEMORY),
2386 errmsg("out of shared memory"),
2387 errhint("You might need to increase max_pred_locks_per_transaction.")));
2389 SHMQueueInit(&(target->predicateLocks));
2391 /* We've got the sxact and target, make sure they're joined. */
2392 locktag.myTarget = target;
2393 locktag.myXact = sxact;
2394 lock = (PREDICATELOCK *)
2395 hash_search_with_hash_value(PredicateLockHash, &locktag,
2396 PredicateLockHashCodeFromTargetHashCode(&locktag, targettaghash),
2397 HASH_ENTER_NULL, &found);
2400 (errcode(ERRCODE_OUT_OF_MEMORY),
2401 errmsg("out of shared memory"),
2402 errhint("You might need to increase max_pred_locks_per_transaction.")));
2406 SHMQueueInsertBefore(&(target->predicateLocks), &(lock->targetLink));
2407 SHMQueueInsertBefore(&(sxact->predicateLocks),
2409 lock->commitSeqNo = InvalidSerCommitSeqNo;
2412 LWLockRelease(partitionLock);
2413 LWLockRelease(SerializablePredicateLockListLock);
2417 * Acquire a predicate lock on the specified target for the current
2418 * connection if not already held. This updates the local lock table
2419 * and uses it to implement granularity promotion. It will consolidate
2420 * multiple locks into a coarser lock if warranted, and will release
2421 * any finer-grained locks covered by the new one.
2424 PredicateLockAcquire(const PREDICATELOCKTARGETTAG *targettag)
2426 uint32 targettaghash;
2428 LOCALPREDICATELOCK *locallock;
2430 /* Do we have the lock already, or a covering lock? */
2431 if (PredicateLockExists(targettag))
2434 if (CoarserLockCovers(targettag))
2437 /* the same hash and LW lock apply to the lock target and the local lock. */
2438 targettaghash = PredicateLockTargetTagHashCode(targettag);
2440 /* Acquire lock in local table */
2441 locallock = (LOCALPREDICATELOCK *)
2442 hash_search_with_hash_value(LocalPredicateLockHash,
2443 targettag, targettaghash,
2444 HASH_ENTER, &found);
2445 locallock->held = true;
2447 locallock->childLocks = 0;
2449 /* Actually create the lock */
2450 CreatePredicateLock(targettag, targettaghash, MySerializableXact);
2453 * Lock has been acquired. Check whether it should be promoted to a
2454 * coarser granularity, or whether there are finer-granularity locks to
2457 if (CheckAndPromotePredicateLockRequest(targettag))
2460 * Lock request was promoted to a coarser-granularity lock, and that
2461 * lock was acquired. It will delete this lock and any of its
2462 * children, so we're done.
2467 /* Clean up any finer-granularity locks */
2468 if (GET_PREDICATELOCKTARGETTAG_TYPE(*targettag) != PREDLOCKTAG_TUPLE)
2469 DeleteChildTargetLocks(targettag);
2475 * PredicateLockRelation
2477 * Gets a predicate lock at the relation level.
2478 * Skip if not in full serializable transaction isolation level.
2479 * Skip if this is a temporary table.
2480 * Clear any finer-grained predicate locks this session has on the relation.
2483 PredicateLockRelation(Relation relation, Snapshot snapshot)
2485 PREDICATELOCKTARGETTAG tag;
2487 if (!SerializationNeededForRead(relation, snapshot))
2490 SET_PREDICATELOCKTARGETTAG_RELATION(tag,
2491 relation->rd_node.dbNode,
2493 PredicateLockAcquire(&tag);
2499 * Gets a predicate lock at the page level.
2500 * Skip if not in full serializable transaction isolation level.
2501 * Skip if this is a temporary table.
2502 * Skip if a coarser predicate lock already covers this page.
2503 * Clear any finer-grained predicate locks this session has on the relation.
2506 PredicateLockPage(Relation relation, BlockNumber blkno, Snapshot snapshot)
2508 PREDICATELOCKTARGETTAG tag;
2510 if (!SerializationNeededForRead(relation, snapshot))
2513 SET_PREDICATELOCKTARGETTAG_PAGE(tag,
2514 relation->rd_node.dbNode,
2517 PredicateLockAcquire(&tag);
2521 * PredicateLockTuple
2523 * Gets a predicate lock at the tuple level.
2524 * Skip if not in full serializable transaction isolation level.
2525 * Skip if this is a temporary table.
2528 PredicateLockTuple(Relation relation, HeapTuple tuple, Snapshot snapshot)
2530 PREDICATELOCKTARGETTAG tag;
2532 TransactionId targetxmin;
2534 if (!SerializationNeededForRead(relation, snapshot))
2538 * If it's a heap tuple, return if this xact wrote it.
2540 if (relation->rd_index == NULL)
2542 TransactionId myxid;
2544 targetxmin = HeapTupleHeaderGetXmin(tuple->t_data);
2546 myxid = GetTopTransactionIdIfAny();
2547 if (TransactionIdIsValid(myxid))
2549 if (TransactionIdFollowsOrEquals(targetxmin, TransactionXmin))
2551 TransactionId xid = SubTransGetTopmostTransaction(targetxmin);
2553 if (TransactionIdEquals(xid, myxid))
2555 /* We wrote it; we already have a write lock. */
2563 * Do quick-but-not-definitive test for a relation lock first. This will
2564 * never cause a return when the relation is *not* locked, but will
2565 occasionally let the check continue when there really *is* a relation lock.
2568 SET_PREDICATELOCKTARGETTAG_RELATION(tag,
2569 relation->rd_node.dbNode,
2571 if (PredicateLockExists(&tag))
2574 tid = &(tuple->t_self);
2575 SET_PREDICATELOCKTARGETTAG_TUPLE(tag,
2576 relation->rd_node.dbNode,
2578 ItemPointerGetBlockNumber(tid),
2579 ItemPointerGetOffsetNumber(tid));
2580 PredicateLockAcquire(&tag);
2587 * Remove a predicate lock target along with any locks held for it.
2589 * Caller must hold SerializablePredicateLockListLock and the
2590 * appropriate hash partition lock for the target.
2593 DeleteLockTarget(PREDICATELOCKTARGET *target, uint32 targettaghash)
2595 PREDICATELOCK *predlock;
2596 SHM_QUEUE *predlocktargetlink;
2597 PREDICATELOCK *nextpredlock;
2600 Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2601 Assert(LWLockHeldByMe(PredicateLockHashPartitionLock(targettaghash)));
2603 predlock = (PREDICATELOCK *)
2604 SHMQueueNext(&(target->predicateLocks),
2605 &(target->predicateLocks),
2606 offsetof(PREDICATELOCK, targetLink));
2607 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2610 predlocktargetlink = &(predlock->targetLink);
2611 nextpredlock = (PREDICATELOCK *)
2612 SHMQueueNext(&(target->predicateLocks),
2614 offsetof(PREDICATELOCK, targetLink));
2616 SHMQueueDelete(&(predlock->xactLink));
2617 SHMQueueDelete(&(predlock->targetLink));
2619 hash_search_with_hash_value
2622 PredicateLockHashCodeFromTargetHashCode(&predlock->tag,
2624 HASH_REMOVE, &found);
2627 predlock = nextpredlock;
2629 LWLockRelease(SerializableXactHashLock);
2631 /* Remove the target itself, if possible. */
2632 RemoveTargetIfNoLongerUsed(target, targettaghash);
2637 * TransferPredicateLocksToNewTarget
2639 * Move or copy all the predicate locks for a lock target, for use by
2640 * index page splits/combines and other things that create or replace
2641 lock targets. If 'removeOld' is true, the old locks and the target are removed.
2644 * Returns true on success, or false if we ran out of shared memory to
2645 * allocate the new target or locks. Guaranteed to always succeed if
2646 * removeOld is set (by using the scratch entry in PredicateLockTargetHash
2647 * for scratch space).
2649 * Warning: the "removeOld" option should be used only with care,
2650 * because this function does not (indeed, can not) update other
2651 * backends' LocalPredicateLockHash. If we are only adding new
2652 * entries, this is not a problem: the local lock table is used only
2653 * as a hint, so missing entries for locks that are held are
2654 * OK. Having entries for locks that are no longer held, as can happen
2655 * when using "removeOld", is not in general OK. We can only use it
2656 * safely when replacing a lock with a coarser-granularity lock that
2657 * covers it, or if we are absolutely certain that no one will need to
2658 * refer to that lock in the future.
2660 * Caller must hold SerializablePredicateLockListLock.
2663 TransferPredicateLocksToNewTarget(PREDICATELOCKTARGETTAG oldtargettag,
2664 PREDICATELOCKTARGETTAG newtargettag,
2667 uint32 oldtargettaghash;
2668 LWLock *oldpartitionLock;
2669 PREDICATELOCKTARGET *oldtarget;
2670 uint32 newtargettaghash;
2671 LWLock *newpartitionLock;
2673 bool outOfShmem = false;
2675 Assert(LWLockHeldByMe(SerializablePredicateLockListLock));
2677 oldtargettaghash = PredicateLockTargetTagHashCode(&oldtargettag);
2678 newtargettaghash = PredicateLockTargetTagHashCode(&newtargettag);
2679 oldpartitionLock = PredicateLockHashPartitionLock(oldtargettaghash);
2680 newpartitionLock = PredicateLockHashPartitionLock(newtargettaghash);
2685 * Remove the dummy entry to give us scratch space, so we know we'll
2686 * be able to create the new lock target.
2688 RemoveScratchTarget(false);
2692 * We must get the partition locks in ascending sequence to avoid
2693 * deadlocks. If old and new partitions are the same, we must request the
2696 if (oldpartitionLock < newpartitionLock)
2698 LWLockAcquire(oldpartitionLock,
2699 (removeOld ? LW_EXCLUSIVE : LW_SHARED));
2700 LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2702 else if (oldpartitionLock > newpartitionLock)
2704 LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2705 LWLockAcquire(oldpartitionLock,
2706 (removeOld ? LW_EXCLUSIVE : LW_SHARED));
2709 LWLockAcquire(newpartitionLock, LW_EXCLUSIVE);
2712 * Look for the old target. If not found, that's OK; no predicate locks
2713 * are affected, so we can just clean up and return. If it does exist,
2714 * walk its list of predicate locks and move or copy them to the new
2717 oldtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2724 PREDICATELOCKTARGET *newtarget;
2725 PREDICATELOCK *oldpredlock;
2726 PREDICATELOCKTAG newpredlocktag;
2728 newtarget = hash_search_with_hash_value(PredicateLockTargetHash,
2731 HASH_ENTER_NULL, &found);
2735 /* Failed to allocate due to insufficient shmem */
2740 /* If we created a new entry, initialize it */
2742 SHMQueueInit(&(newtarget->predicateLocks));
2744 newpredlocktag.myTarget = newtarget;
2747 * Loop through all the locks on the old target, replacing them with
2748 * locks on the new target.
2750 oldpredlock = (PREDICATELOCK *)
2751 SHMQueueNext(&(oldtarget->predicateLocks),
2752 &(oldtarget->predicateLocks),
2753 offsetof(PREDICATELOCK, targetLink));
2754 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2757 SHM_QUEUE *predlocktargetlink;
2758 PREDICATELOCK *nextpredlock;
2759 PREDICATELOCK *newpredlock;
2760 SerCommitSeqNo oldCommitSeqNo = oldpredlock->commitSeqNo;
2762 predlocktargetlink = &(oldpredlock->targetLink);
2763 nextpredlock = (PREDICATELOCK *)
2764 SHMQueueNext(&(oldtarget->predicateLocks),
2766 offsetof(PREDICATELOCK, targetLink));
2767 newpredlocktag.myXact = oldpredlock->tag.myXact;
2771 SHMQueueDelete(&(oldpredlock->xactLink));
2772 SHMQueueDelete(&(oldpredlock->targetLink));
2774 hash_search_with_hash_value
2777 PredicateLockHashCodeFromTargetHashCode(&oldpredlock->tag,
2779 HASH_REMOVE, &found);
2783 newpredlock = (PREDICATELOCK *)
2784 hash_search_with_hash_value(PredicateLockHash,
2786 PredicateLockHashCodeFromTargetHashCode(&newpredlocktag,
2792 /* Out of shared memory. Undo what we've done so far. */
2793 LWLockRelease(SerializableXactHashLock);
2794 DeleteLockTarget(newtarget, newtargettaghash);
2800 SHMQueueInsertBefore(&(newtarget->predicateLocks),
2801 &(newpredlock->targetLink));
2802 SHMQueueInsertBefore(&(newpredlocktag.myXact->predicateLocks),
2803 &(newpredlock->xactLink));
2804 newpredlock->commitSeqNo = oldCommitSeqNo;
2808 if (newpredlock->commitSeqNo < oldCommitSeqNo)
2809 newpredlock->commitSeqNo = oldCommitSeqNo;
2812 Assert(newpredlock->commitSeqNo != 0);
2813 Assert((newpredlock->commitSeqNo == InvalidSerCommitSeqNo)
2814 || (newpredlock->tag.myXact == OldCommittedSxact));
2816 oldpredlock = nextpredlock;
2818 LWLockRelease(SerializableXactHashLock);
2822 Assert(SHMQueueEmpty(&oldtarget->predicateLocks));
2823 RemoveTargetIfNoLongerUsed(oldtarget, oldtargettaghash);
2829 /* Release partition locks in reverse order of acquisition. */
2830 if (oldpartitionLock < newpartitionLock)
2832 LWLockRelease(newpartitionLock);
2833 LWLockRelease(oldpartitionLock);
2835 else if (oldpartitionLock > newpartitionLock)
2837 LWLockRelease(oldpartitionLock);
2838 LWLockRelease(newpartitionLock);
2841 LWLockRelease(newpartitionLock);
2845 /* We shouldn't run out of memory if we're moving locks */
2846 Assert(!outOfShmem);
2848 /* Put the scratch entry back */
2849 RestoreScratchTarget(false);
2856 * Drop all predicate locks of any granularity from the specified relation,
2857 * which can be a heap relation or an index relation. If 'transfer' is true,
2858 * acquire a relation lock on the heap for any transactions with any lock(s)
2859 * on the specified relation.
2861 * This requires grabbing a lot of LW locks and scanning the entire lock
2862 * target table for matches. That makes this more expensive than most
2863 * predicate lock management functions, but it will only be called for DDL
2864 * type commands that are expensive anyway, and there are fast returns when
2865 * no serializable transactions are active or the relation is temporary.
2867 * We don't use the TransferPredicateLocksToNewTarget function because it
2868 * acquires its own locks on the partitions of the two targets involved,
2869 * and we'll already be holding all partition locks.
2871 * We can't throw an error from here, because the call could be from a
2872 * transaction which is not serializable.
2874 * NOTE: This is currently only called with transfer set to true, but that may
2875 * change. If we decide to clean up the locks from a table on commit of a
2876 * transaction which executed DROP TABLE, the false condition will be useful.
2879 DropAllPredicateLocksFromTable(Relation relation, bool transfer)
2881 HASH_SEQ_STATUS seqstat;
2882 PREDICATELOCKTARGET *oldtarget;
2883 PREDICATELOCKTARGET *heaptarget;
2890 uint32 heaptargettaghash;
2893 * Bail out quickly if there are no serializable transactions running.
2894 * It's safe to check this without taking locks because the caller is
2895 * holding an ACCESS EXCLUSIVE lock on the relation. No new locks which
2896 * would matter here can be acquired while that is held.
2898 if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
2901 if (!PredicateLockingNeededForRelation(relation))
2904 dbId = relation->rd_node.dbNode;
2905 relId = relation->rd_id;
2906 if (relation->rd_index == NULL)
2914 heapId = relation->rd_index->indrelid;
2916 Assert(heapId != InvalidOid);
2917 Assert(transfer || !isIndex); /* index OID only makes sense with transfer */
2920 /* Retrieve first time needed, then keep. */
2921 heaptargettaghash = 0;
2924 /* Acquire locks on all lock partitions */
2925 LWLockAcquire(SerializablePredicateLockListLock, LW_EXCLUSIVE);
2926 for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
2927 LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_EXCLUSIVE);
2928 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
2931 * Remove the dummy entry to give us scratch space, so we know we'll be
2932 * able to create the new lock target.
2935 RemoveScratchTarget(true);
2937 /* Scan through target map */
2938 hash_seq_init(&seqstat, PredicateLockTargetHash);
2940 while ((oldtarget = (PREDICATELOCKTARGET *) hash_seq_search(&seqstat)))
2942 PREDICATELOCK *oldpredlock;
2945 * Check whether this is a target which needs attention.
2947 if (GET_PREDICATELOCKTARGETTAG_RELATION(oldtarget->tag) != relId)
2948 continue; /* wrong relation id */
2949 if (GET_PREDICATELOCKTARGETTAG_DB(oldtarget->tag) != dbId)
2950 continue; /* wrong database id */
2951 if (transfer && !isIndex
2952 && GET_PREDICATELOCKTARGETTAG_TYPE(oldtarget->tag) == PREDLOCKTAG_RELATION)
2953 continue; /* already the right lock */
2956 * If we made it here, we have work to do. We make sure the heap
2957 * relation lock exists, then we walk the list of predicate locks for
2958 * the old target we found, moving all locks to the heap relation lock
2959 -- unless they already hold that lock.
2963 * First make sure we have the heap relation target. We only need to
2966 if (transfer && heaptarget == NULL)
2968 PREDICATELOCKTARGETTAG heaptargettag;
2970 SET_PREDICATELOCKTARGETTAG_RELATION(heaptargettag, dbId, heapId);
2971 heaptargettaghash = PredicateLockTargetTagHashCode(&heaptargettag);
2972 heaptarget = hash_search_with_hash_value(PredicateLockTargetHash,
2975 HASH_ENTER, &found);
2977 SHMQueueInit(&heaptarget->predicateLocks);
2981 * Loop through all the locks on the old target, replacing them with
2982 * locks on the new target.
2984 oldpredlock = (PREDICATELOCK *)
2985 SHMQueueNext(&(oldtarget->predicateLocks),
2986 &(oldtarget->predicateLocks),
2987 offsetof(PREDICATELOCK, targetLink));
2990 PREDICATELOCK *nextpredlock;
2991 PREDICATELOCK *newpredlock;
2992 SerCommitSeqNo oldCommitSeqNo;
2993 SERIALIZABLEXACT *oldXact;
2995 nextpredlock = (PREDICATELOCK *)
2996 SHMQueueNext(&(oldtarget->predicateLocks),
2997 &(oldpredlock->targetLink),
2998 offsetof(PREDICATELOCK, targetLink));
3001 * Remove the old lock first. This avoids the chance of running
3002 * out of lock structure entries for the hash table.
3004 oldCommitSeqNo = oldpredlock->commitSeqNo;
3005 oldXact = oldpredlock->tag.myXact;
3007 SHMQueueDelete(&(oldpredlock->xactLink));
3010 * No need for retail delete from oldtarget list, we're removing
3011 * the whole target anyway.
3013 hash_search(PredicateLockHash,
3015 HASH_REMOVE, &found);
3020 PREDICATELOCKTAG newpredlocktag;
3022 newpredlocktag.myTarget = heaptarget;
3023 newpredlocktag.myXact = oldXact;
3024 newpredlock = (PREDICATELOCK *)
3025 hash_search_with_hash_value(PredicateLockHash,
3027 PredicateLockHashCodeFromTargetHashCode(&newpredlocktag,
3033 SHMQueueInsertBefore(&(heaptarget->predicateLocks),
3034 &(newpredlock->targetLink));
3035 SHMQueueInsertBefore(&(newpredlocktag.myXact->predicateLocks),
3036 &(newpredlock->xactLink));
3037 newpredlock->commitSeqNo = oldCommitSeqNo;
3041 if (newpredlock->commitSeqNo < oldCommitSeqNo)
3042 newpredlock->commitSeqNo = oldCommitSeqNo;
3045 Assert(newpredlock->commitSeqNo != 0);
3046 Assert((newpredlock->commitSeqNo == InvalidSerCommitSeqNo)
3047 || (newpredlock->tag.myXact == OldCommittedSxact));
3050 oldpredlock = nextpredlock;
3053 hash_search(PredicateLockTargetHash, &oldtarget->tag, HASH_REMOVE,
3058 /* Put the scratch entry back */
3060 RestoreScratchTarget(true);
3062 /* Release locks in reverse order */
3063 LWLockRelease(SerializableXactHashLock);
3064 for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
3065 LWLockRelease(PredicateLockHashPartitionLockByIndex(i));
3066 LWLockRelease(SerializablePredicateLockListLock);
3070 * TransferPredicateLocksToHeapRelation
3071 * For all transactions, transfer all predicate locks for the given
3072 * relation to a single relation lock on the heap.
3075 TransferPredicateLocksToHeapRelation(Relation relation)
3077 DropAllPredicateLocksFromTable(relation, true);
3082 * PredicateLockPageSplit
3084 * Copies any predicate locks for the old page to the new page.
3085 * Skip if this is a temporary table or toast table.
3087 * NOTE: A page split (or overflow) affects all serializable transactions,
3088 * even if it occurs in the context of another transaction isolation level.
3090 * NOTE: This currently leaves the local copy of the locks without
3091 * information on the new lock which is in shared memory. This could cause
3092 * problems if enough page splits occur on locked pages without the processes
3093 * which hold the locks getting in and noticing.
3096 PredicateLockPageSplit(Relation relation, BlockNumber oldblkno,
3097 BlockNumber newblkno)
3099 PREDICATELOCKTARGETTAG oldtargettag;
3100 PREDICATELOCKTARGETTAG newtargettag;
3104 * Bail out quickly if there are no serializable transactions running.
3106 * It's safe to do this check without taking any additional locks. Even if
3107 * a serializable transaction starts concurrently, we know it can't take
3108 * any SIREAD locks on the page being split because the caller is holding
3109 * the associated buffer page lock. Memory reordering isn't an issue; the
3110 * memory barrier in the LWLock acquisition guarantees that this read
3111 * occurs while the buffer page lock is held.
3113 if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
3116 if (!PredicateLockingNeededForRelation(relation))
3119 Assert(oldblkno != newblkno);
3120 Assert(BlockNumberIsValid(oldblkno));
3121 Assert(BlockNumberIsValid(newblkno));
3123 SET_PREDICATELOCKTARGETTAG_PAGE(oldtargettag,
3124 relation->rd_node.dbNode,
3127 SET_PREDICATELOCKTARGETTAG_PAGE(newtargettag,
3128 relation->rd_node.dbNode,
3132 LWLockAcquire(SerializablePredicateLockListLock, LW_EXCLUSIVE);
3135 * Try copying the locks over to the new page's tag, creating it if needed.
3138 success = TransferPredicateLocksToNewTarget(oldtargettag,
3145 * No more predicate lock entries are available. Failure isn't an
3146 * option here, so promote the page lock to a relation lock.
3149 /* Get the parent relation lock's lock tag */
3150 success = GetParentPredicateLockTag(&oldtargettag,
3155 * Move the locks to the parent. This shouldn't fail.
3157 * Note that here we are removing locks held by other backends,
3158 * leading to a possible inconsistency in their local lock hash table.
3159 * This is OK because we're replacing it with a lock that covers the
3162 success = TransferPredicateLocksToNewTarget(oldtargettag,
3168 LWLockRelease(SerializablePredicateLockListLock);
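The fallback in PredicateLockPageSplit above (copy page locks, and on shared-memory exhaustion promote to a relation-level lock, which must not fail) can be sketched in miniature. Everything here is a hypothetical stand-in: `Tag`, `free_entries`, and `transfer_locks` are simplifications, not the real lock machinery.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the real predicate-lock structures. */
typedef enum { TAG_PAGE, TAG_RELATION } TagKind;
typedef struct { TagKind kind; int rel; int blkno; } Tag;

static int free_entries = 1;    /* simulated shared-memory headroom */

/* Pretend lock transfer: fails when no entries remain, unless the caller
 * insists (promotion to the coarser relation lock is assumed to reuse
 * existing entries, so it never needs new space in this sketch). */
static bool transfer_locks(Tag from, Tag to, bool must_succeed)
{
    (void) from; (void) to;
    if (free_entries == 0)
        return must_succeed;
    free_entries--;
    return true;
}

/* Mirrors the split path: try a page-to-page copy; if shared memory is
 * exhausted, promote to the parent relation lock. */
static Tag page_split(Tag oldpage, Tag newpage)
{
    if (transfer_locks(oldpage, newpage, false))
        return newpage;

    Tag parent = { TAG_RELATION, oldpage.rel, -1 };
    bool ok = transfer_locks(oldpage, parent, true);
    assert(ok);                 /* failure isn't an option here */
    return parent;
}
```

The relation lock is coarser and may cause false-positive conflicts, but it always covers the pages involved, which is why the second transfer is allowed to assume success.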
3172 * PredicateLockPageCombine
3174 * Combines predicate locks for two existing pages.
3175 * Skip if this is a temporary table or toast table.
3177 * NOTE: A page combine affects all serializable transactions, even if it
3178 * occurs in the context of another transaction isolation level.
3181 PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
3182 BlockNumber newblkno)
3185 * Page combines differ from page splits in that we ought to be able to
3186 * remove the locks on the old page after transferring them to the new
3187 * page, instead of duplicating them. However, because we can't edit other
3188 * backends' local lock tables, removing the old lock would leave them
3189 * with an entry in their LocalPredicateLockHash for a lock they're not
3190 * holding, which isn't acceptable. So we wind up having to do the same
3191 * work as a page split, acquiring a lock on the new page and keeping the
3192 * old page locked too. That can lead to some false positives, but should
3193 * be rare in practice.
3195 PredicateLockPageSplit(relation, oldblkno, newblkno);
3199 * Walk the list of in-progress serializable transactions and find the new
3203 SetNewSxactGlobalXmin(void)
3205 SERIALIZABLEXACT *sxact;
3207 Assert(LWLockHeldByMe(SerializableXactHashLock));
3209 PredXact->SxactGlobalXmin = InvalidTransactionId;
3210 PredXact->SxactGlobalXminCount = 0;
3212 for (sxact = FirstPredXact(); sxact != NULL; sxact = NextPredXact(sxact))
3214 if (!SxactIsRolledBack(sxact)
3215 && !SxactIsCommitted(sxact)
3216 && sxact != OldCommittedSxact)
3218 Assert(sxact->xmin != InvalidTransactionId);
3219 if (!TransactionIdIsValid(PredXact->SxactGlobalXmin)
3220 || TransactionIdPrecedes(sxact->xmin,
3221 PredXact->SxactGlobalXmin))
3223 PredXact->SxactGlobalXmin = sxact->xmin;
3224 PredXact->SxactGlobalXminCount = 1;
3226 else if (TransactionIdEquals(sxact->xmin,
3227 PredXact->SxactGlobalXmin))
3228 PredXact->SxactGlobalXminCount++;
3232 OldSerXidSetActiveSerXmin(PredXact->SxactGlobalXmin);
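The scan in SetNewSxactGlobalXmin reduces to: among transactions that are neither committed nor rolled back, find the smallest xmin and count how many share it. A minimal sketch, with plain integer comparison standing in for the circular-XID TransactionIdPrecedes test and a flat array standing in for the PredXact list:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;
#define InvalidTransactionId 0u

/* Illustrative per-transaction state, not the real SERIALIZABLEXACT. */
typedef struct { TransactionId xmin; bool committed; bool rolled_back; } Sxact;

/* Recompute the oldest xmin among active serializable transactions and
 * how many of them share that value. */
static void set_new_global_xmin(const Sxact *xacts, int n,
                                TransactionId *global_xmin, int *count)
{
    *global_xmin = InvalidTransactionId;
    *count = 0;
    for (int i = 0; i < n; i++)
    {
        if (xacts[i].committed || xacts[i].rolled_back)
            continue;   /* finished transactions no longer hold back cleanup */
        if (*global_xmin == InvalidTransactionId || xacts[i].xmin < *global_xmin)
        {
            *global_xmin = xacts[i].xmin;
            *count = 1;
        }
        else if (xacts[i].xmin == *global_xmin)
            (*count)++;
    }
}
```

Keeping the count lets the caller detect cheaply, at transaction end, when the last holder of the oldest xmin has finished and a full recomputation plus cleanup is worthwhile.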
3236 * ReleasePredicateLocks
3238 * Releases predicate locks based on completion of the current transaction,
3239 * whether committed or rolled back. It can also be called for a read only
3240 * transaction when it becomes impossible for the transaction to become
3241 * part of a dangerous structure.
3243 * We do nothing unless this is a serializable transaction.
3245 * This method must ensure that shared memory hash tables are cleaned
3246 * up in some relatively timely fashion.
3248 * If this transaction is committing and is holding any predicate locks,
3249 * it must be added to a list of completed serializable transactions still
3253 ReleasePredicateLocks(bool isCommit)
3256 RWConflict conflict,
3258 possibleUnsafeConflict;
3259 SERIALIZABLEXACT *roXact;
3262 * We can't trust XactReadOnly here, because a transaction which started
3263 * as READ WRITE can show as READ ONLY later, e.g., within
3264 * subtransactions. We want to flag a transaction as READ ONLY if it
3265 * commits without writing so that de facto READ ONLY transactions get the
3266 * benefit of some RO optimizations, so we will use this local variable to
3267 * get some cleanup logic right which is based on whether the transaction
3268 * was declared READ ONLY at the top level.
3270 bool topLevelIsDeclaredReadOnly;
3272 if (MySerializableXact == InvalidSerializableXact)
3274 Assert(LocalPredicateLockHash == NULL);
3278 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3280 Assert(!isCommit || SxactIsPrepared(MySerializableXact));
3281 Assert(!isCommit || !SxactIsDoomed(MySerializableXact));
3282 Assert(!SxactIsCommitted(MySerializableXact));
3283 Assert(!SxactIsRolledBack(MySerializableXact));
3285 /* may not be serializable during COMMIT/ROLLBACK PREPARED */
3286 Assert(MySerializableXact->pid == 0 || IsolationIsSerializable());
3288 /* We'd better not already be on the cleanup list. */
3289 Assert(!SxactIsOnFinishedList(MySerializableXact));
3291 topLevelIsDeclaredReadOnly = SxactIsReadOnly(MySerializableXact);
3294 * We don't hold XidGenLock lock here, assuming that TransactionId is
3297 * If this value is changing, we don't care that much whether we get the
3298 * old or new value -- it is just used to determine how far
3299 * GlobalSerializableXmin must advance before this transaction can be
3300 * fully cleaned up. The worst that could happen is we wait for one more
3301 * transaction to complete before freeing some RAM; correctness of visible
3302 * behavior is not affected.
3304 MySerializableXact->finishedBefore = ShmemVariableCache->nextXid;
3307 * If it's not a commit, it's a rollback, and we can clear our locks
3312 MySerializableXact->flags |= SXACT_FLAG_COMMITTED;
3313 MySerializableXact->commitSeqNo = ++(PredXact->LastSxactCommitSeqNo);
3314 /* Recognize implicit read-only transaction (commit without write). */
3315 if (!MyXactDidWrite)
3316 MySerializableXact->flags |= SXACT_FLAG_READ_ONLY;
3321 * The DOOMED flag indicates that we intend to roll back this
3322 * transaction and so it should not cause serialization failures for
3323 * other transactions that conflict with it. Note that this flag might
3324 * already be set, if another backend marked this transaction for
3327 * The ROLLED_BACK flag further indicates that ReleasePredicateLocks
3328 * has been called, and so the SerializableXact is eligible for
3329 * cleanup. This means it should not be considered when calculating SxactGlobalXmin.
3332 MySerializableXact->flags |= SXACT_FLAG_DOOMED;
3333 MySerializableXact->flags |= SXACT_FLAG_ROLLED_BACK;
3336 * If the transaction was previously prepared, but is now failing due
3337 * to a ROLLBACK PREPARED or (hopefully very rare) error after the
3338 * prepare, clear the prepared flag. This simplifies conflict checking.
3341 MySerializableXact->flags &= ~SXACT_FLAG_PREPARED;
3344 if (!topLevelIsDeclaredReadOnly)
3346 Assert(PredXact->WritableSxactCount > 0);
3347 if (--(PredXact->WritableSxactCount) == 0)
3350 * Release predicate locks and rw-conflicts in for all committed
3351 * transactions. There are no longer any transactions which might
3352 * conflict with the locks and no chance for new transactions to
3353 * overlap. Similarly, existing conflicts in can't cause pivots,
3354 * and any conflicts in which could have completed a dangerous
3355 * structure would already have caused a rollback, so any
3356 * remaining ones must be benign.
3358 PredXact->CanPartialClearThrough = PredXact->LastSxactCommitSeqNo;
3364 * Read-only transactions: clear the list of transactions that might
3365 * make us unsafe. Note that we use 'inLink' for the iteration as
3366 * opposed to 'outLink' for the r/w xacts.
3368 possibleUnsafeConflict = (RWConflict)
3369 SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3370 &MySerializableXact->possibleUnsafeConflicts,
3371 offsetof(RWConflictData, inLink));
3372 while (possibleUnsafeConflict)
3374 nextConflict = (RWConflict)
3375 SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3376 &possibleUnsafeConflict->inLink,
3377 offsetof(RWConflictData, inLink));
3379 Assert(!SxactIsReadOnly(possibleUnsafeConflict->sxactOut));
3380 Assert(MySerializableXact == possibleUnsafeConflict->sxactIn);
3382 ReleaseRWConflict(possibleUnsafeConflict);
3384 possibleUnsafeConflict = nextConflict;
3388 /* Check for conflict out to old committed transactions. */
3390 && !SxactIsReadOnly(MySerializableXact)
3391 && SxactHasSummaryConflictOut(MySerializableXact))
3394 * we don't know which old committed transaction we conflicted with,
3395 * so be conservative and use FirstNormalSerCommitSeqNo here
3397 MySerializableXact->SeqNo.earliestOutConflictCommit =
3398 FirstNormalSerCommitSeqNo;
3399 MySerializableXact->flags |= SXACT_FLAG_CONFLICT_OUT;
3403 * Release all outConflicts to committed transactions. If we're rolling
3404 * back, clear them all. Set SXACT_FLAG_CONFLICT_OUT if any point to
3405 * previously committed transactions.
3407 conflict = (RWConflict)
3408 SHMQueueNext(&MySerializableXact->outConflicts,
3409 &MySerializableXact->outConflicts,
3410 offsetof(RWConflictData, outLink));
3413 nextConflict = (RWConflict)
3414 SHMQueueNext(&MySerializableXact->outConflicts,
3416 offsetof(RWConflictData, outLink));
3419 && !SxactIsReadOnly(MySerializableXact)
3420 && SxactIsCommitted(conflict->sxactIn))
3422 if ((MySerializableXact->flags & SXACT_FLAG_CONFLICT_OUT) == 0
3423 || conflict->sxactIn->prepareSeqNo < MySerializableXact->SeqNo.earliestOutConflictCommit)
3424 MySerializableXact->SeqNo.earliestOutConflictCommit = conflict->sxactIn->prepareSeqNo;
3425 MySerializableXact->flags |= SXACT_FLAG_CONFLICT_OUT;
3429 || SxactIsCommitted(conflict->sxactIn)
3430 || (conflict->sxactIn->SeqNo.lastCommitBeforeSnapshot >= PredXact->LastSxactCommitSeqNo))
3431 ReleaseRWConflict(conflict);
3433 conflict = nextConflict;
3437 * Release all inConflicts from committed and read-only transactions. If
3438 * we're rolling back, clear them all.
3440 conflict = (RWConflict)
3441 SHMQueueNext(&MySerializableXact->inConflicts,
3442 &MySerializableXact->inConflicts,
3443 offsetof(RWConflictData, inLink));
3446 nextConflict = (RWConflict)
3447 SHMQueueNext(&MySerializableXact->inConflicts,
3449 offsetof(RWConflictData, inLink));
3452 || SxactIsCommitted(conflict->sxactOut)
3453 || SxactIsReadOnly(conflict->sxactOut))
3454 ReleaseRWConflict(conflict);
3456 conflict = nextConflict;
3459 if (!topLevelIsDeclaredReadOnly)
3462 * Remove ourselves from the list of possible conflicts for concurrent
3463 * READ ONLY transactions, flagging them as unsafe if we have a
3464 * conflict out. If any are waiting DEFERRABLE transactions, wake them
3465 * up if they are known safe or known unsafe.
3467 possibleUnsafeConflict = (RWConflict)
3468 SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3469 &MySerializableXact->possibleUnsafeConflicts,
3470 offsetof(RWConflictData, outLink));
3471 while (possibleUnsafeConflict)
3473 nextConflict = (RWConflict)
3474 SHMQueueNext(&MySerializableXact->possibleUnsafeConflicts,
3475 &possibleUnsafeConflict->outLink,
3476 offsetof(RWConflictData, outLink));
3478 roXact = possibleUnsafeConflict->sxactIn;
3479 Assert(MySerializableXact == possibleUnsafeConflict->sxactOut);
3480 Assert(SxactIsReadOnly(roXact));
3482 /* Mark conflicted if necessary. */
3485 && SxactHasConflictOut(MySerializableXact)
3486 && (MySerializableXact->SeqNo.earliestOutConflictCommit
3487 <= roXact->SeqNo.lastCommitBeforeSnapshot))
3490 * This releases possibleUnsafeConflict (as well as all other
3491 * possible conflicts for roXact)
3493 FlagSxactUnsafe(roXact);
3497 ReleaseRWConflict(possibleUnsafeConflict);
3500 * If we were the last possible conflict, flag it safe. The
3501 * transaction can now safely release its predicate locks (but
3502 * that transaction's backend has to do that itself).
3504 if (SHMQueueEmpty(&roXact->possibleUnsafeConflicts))
3505 roXact->flags |= SXACT_FLAG_RO_SAFE;
3509 * Wake up the process for a waiting DEFERRABLE transaction if we
3510 * now know it's either safe or conflicted.
3512 if (SxactIsDeferrableWaiting(roXact) &&
3513 (SxactIsROUnsafe(roXact) || SxactIsROSafe(roXact)))
3514 ProcSendSignal(roXact->pid);
3516 possibleUnsafeConflict = nextConflict;
3521 * Check whether it's time to clean up old transactions. This can only be
3522 * done when the last serializable transaction with the oldest xmin among
3523 * serializable transactions completes. We then find the "new oldest"
3524 * xmin and purge any transactions which finished before this transaction
3527 needToClear = false;
3528 if (TransactionIdEquals(MySerializableXact->xmin, PredXact->SxactGlobalXmin))
3530 Assert(PredXact->SxactGlobalXminCount > 0);
3531 if (--(PredXact->SxactGlobalXminCount) == 0)
3533 SetNewSxactGlobalXmin();
3538 LWLockRelease(SerializableXactHashLock);
3540 LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
3542 /* Add this to the list of transactions to check for later cleanup. */
3544 SHMQueueInsertBefore(FinishedSerializableTransactions,
3545 &MySerializableXact->finishedLink);
3548 ReleaseOneSerializableXact(MySerializableXact, false, false);
3550 LWLockRelease(SerializableFinishedListLock);
3553 ClearOldPredicateLocks();
3555 MySerializableXact = InvalidSerializableXact;
3556 MyXactDidWrite = false;
3558 /* Delete per-transaction lock table */
3559 if (LocalPredicateLockHash != NULL)
3561 hash_destroy(LocalPredicateLockHash);
3562 LocalPredicateLockHash = NULL;
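The read-only safety bookkeeping in ReleasePredicateLocks can be reduced to a small state machine per read-only transaction: each finishing writer either proves the reader unsafe, or shrinks the set of possible conflicts until the reader is provably safe. This sketch uses invented names and omits the real commit-sequence-number comparison against lastCommitBeforeSnapshot:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative state a read-only xact tracks, not the real SXACT fields. */
typedef struct { int possible_conflicts; bool safe; bool unsafe; } ROXact;

/* Called as each overlapping writer finishes. A qualifying conflict out
 * marks the reader unsafe; otherwise, once the last possible conflict is
 * gone, the reader is provably safe and may drop its SIREAD locks. */
static void writer_finished(ROXact *ro, bool writer_had_conflict_out)
{
    if (ro->safe || ro->unsafe)
        return;                     /* already decided */
    if (writer_had_conflict_out)
    {
        ro->unsafe = true;          /* remaining possibilities are moot */
        ro->possible_conflicts = 0;
        return;
    }
    if (--ro->possible_conflicts == 0)
        ro->safe = true;            /* no writer left that could harm us */
}
```

Once either flag is set, a waiting DEFERRABLE transaction can be woken, which is exactly what the ProcSendSignal call above does.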
3567 * Clear old predicate locks, belonging to committed transactions that are no
3568 * longer interesting to any in-progress transaction.
3571 ClearOldPredicateLocks(void)
3573 SERIALIZABLEXACT *finishedSxact;
3574 PREDICATELOCK *predlock;
3577 * Loop through finished transactions. They are in commit order, so we can
3578 * stop as soon as we find one that's still interesting.
3580 LWLockAcquire(SerializableFinishedListLock, LW_EXCLUSIVE);
3581 finishedSxact = (SERIALIZABLEXACT *)
3582 SHMQueueNext(FinishedSerializableTransactions,
3583 FinishedSerializableTransactions,
3584 offsetof(SERIALIZABLEXACT, finishedLink));
3585 LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3586 while (finishedSxact)
3588 SERIALIZABLEXACT *nextSxact;
3590 nextSxact = (SERIALIZABLEXACT *)
3591 SHMQueueNext(FinishedSerializableTransactions,
3592 &(finishedSxact->finishedLink),
3593 offsetof(SERIALIZABLEXACT, finishedLink));
3594 if (!TransactionIdIsValid(PredXact->SxactGlobalXmin)
3595 || TransactionIdPrecedesOrEquals(finishedSxact->finishedBefore,
3596 PredXact->SxactGlobalXmin))
3599 * This transaction committed before any in-progress transaction
3600 * took its snapshot. It's no longer interesting.
3602 LWLockRelease(SerializableXactHashLock);
3603 SHMQueueDelete(&(finishedSxact->finishedLink));
3604 ReleaseOneSerializableXact(finishedSxact, false, false);
3605 LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3607 else if (finishedSxact->commitSeqNo > PredXact->HavePartialClearedThrough
3608 && finishedSxact->commitSeqNo <= PredXact->CanPartialClearThrough)
3611 * Any active transactions that took their snapshot before this
3612 * transaction committed are read-only, so we can clear part of its state.
3615 LWLockRelease(SerializableXactHashLock);
3617 if (SxactIsReadOnly(finishedSxact))
3619 /* A read-only transaction can be removed entirely */
3620 SHMQueueDelete(&(finishedSxact->finishedLink));
3621 ReleaseOneSerializableXact(finishedSxact, false, false);
3626 * A read-write transaction can only be partially cleared. We
3627 * need to keep the SERIALIZABLEXACT but can release the
3628 * SIREAD locks and conflicts in.
3630 ReleaseOneSerializableXact(finishedSxact, true, false);
3633 PredXact->HavePartialClearedThrough = finishedSxact->commitSeqNo;
3634 LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3638 /* Still interesting. */
3641 finishedSxact = nextSxact;
3643 LWLockRelease(SerializableXactHashLock);
3646 * Loop through predicate locks on dummy transaction for summarized data.
3648 LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
3649 predlock = (PREDICATELOCK *)
3650 SHMQueueNext(&OldCommittedSxact->predicateLocks,
3651 &OldCommittedSxact->predicateLocks,
3652 offsetof(PREDICATELOCK, xactLink));
3655 PREDICATELOCK *nextpredlock;
3656 bool canDoPartialCleanup;
3658 nextpredlock = (PREDICATELOCK *)
3659 SHMQueueNext(&OldCommittedSxact->predicateLocks,
3660 &predlock->xactLink,
3661 offsetof(PREDICATELOCK, xactLink));
3663 LWLockAcquire(SerializableXactHashLock, LW_SHARED);
3664 Assert(predlock->commitSeqNo != 0);
3665 Assert(predlock->commitSeqNo != InvalidSerCommitSeqNo);
3666 canDoPartialCleanup = (predlock->commitSeqNo <= PredXact->CanPartialClearThrough);
3667 LWLockRelease(SerializableXactHashLock);
3670 * If this lock originally belonged to an old enough transaction, we can release it.
3673 if (canDoPartialCleanup)
3675 PREDICATELOCKTAG tag;
3676 PREDICATELOCKTARGET *target;
3677 PREDICATELOCKTARGETTAG targettag;
3678 uint32 targettaghash;
3679 LWLock *partitionLock;
3681 tag = predlock->tag;
3682 target = tag.myTarget;
3683 targettag = target->tag;
3684 targettaghash = PredicateLockTargetTagHashCode(&targettag);
3685 partitionLock = PredicateLockHashPartitionLock(targettaghash);
3687 LWLockAcquire(partitionLock, LW_EXCLUSIVE);
3689 SHMQueueDelete(&(predlock->targetLink));
3690 SHMQueueDelete(&(predlock->xactLink));
3692 hash_search_with_hash_value(PredicateLockHash, &tag,
3693 PredicateLockHashCodeFromTargetHashCode(&tag,
3696 RemoveTargetIfNoLongerUsed(target, targettaghash);
3698 LWLockRelease(partitionLock);
3701 predlock = nextpredlock;
3704 LWLockRelease(SerializablePredicateLockListLock);
3705 LWLockRelease(SerializableFinishedListLock);
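The first loop in ClearOldPredicateLocks relies on the finished list being in commit order: it can release a prefix and stop at the first still-interesting transaction. A simplified sketch (plain `>` instead of TransactionIdPrecedesOrEquals, and without the partial-clear middle case):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;
#define InvalidTransactionId 0u

/* The finished list is in commit order, so cleanup can stop at the first
 * transaction whose locks some in-progress snapshot might still need. */
static int releasable_prefix(const TransactionId *finished_before, int n,
                             TransactionId global_xmin)
{
    int cnt = 0;
    for (int i = 0; i < n; i++)
    {
        /* With no serializable xacts running (invalid xmin), release all. */
        if (global_xmin != InvalidTransactionId &&
            finished_before[i] > global_xmin)
            break;          /* still interesting; so is everything after it */
        cnt++;
    }
    return cnt;
}
```

This is why the function is cheap in the common case: one early-terminating walk rather than a full scan of every finished transaction.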
3709 * This is the normal way to delete anything from any of the predicate
3710 * locking hash tables. Given a transaction which we know can be deleted:
3711 * delete all predicate locks held by that transaction and any predicate
3712 * lock targets which are now unreferenced by a lock; delete all conflicts
3713 * for the transaction; delete all xid values for the transaction; then
3714 * delete the transaction.
3716 * When the partial flag is set, we can release all predicate locks and
3717 * in-conflict information -- we've established that there are no longer
3718 * any overlapping read write transactions for which this transaction could
3719 * matter -- but keep the transaction entry itself and any outConflicts.
3721 * When the summarize flag is set, we've run short of room for sxact data
3722 * and must summarize to the SLRU. Predicate locks are transferred to a
3723 * dummy "old" transaction, with duplicate locks on a single target
3724 * collapsing to a single lock with the "latest" commitSeqNo from among
3725 * the conflicting locks.
3728 ReleaseOneSerializableXact(SERIALIZABLEXACT *sxact, bool partial,
3731 PREDICATELOCK *predlock;
3732 SERIALIZABLEXIDTAG sxidtag;
3733 RWConflict conflict,
3736 Assert(sxact != NULL);
3737 Assert(SxactIsRolledBack(sxact) || SxactIsCommitted(sxact));
3738 Assert(partial || !SxactIsOnFinishedList(sxact));
3739 Assert(LWLockHeldByMe(SerializableFinishedListLock));
3742 * First release all the predicate locks held by this xact (or transfer
3743 * them to OldCommittedSxact if summarize is true)
3745 LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
3746 predlock = (PREDICATELOCK *)
3747 SHMQueueNext(&(sxact->predicateLocks),
3748 &(sxact->predicateLocks),
3749 offsetof(PREDICATELOCK, xactLink));
3752 PREDICATELOCK *nextpredlock;
3753 PREDICATELOCKTAG tag;
3754 SHM_QUEUE *targetLink;
3755 PREDICATELOCKTARGET *target;
3756 PREDICATELOCKTARGETTAG targettag;
3757 uint32 targettaghash;
3758 LWLock *partitionLock;
3760 nextpredlock = (PREDICATELOCK *)
3761 SHMQueueNext(&(sxact->predicateLocks),
3762 &(predlock->xactLink),
3763 offsetof(PREDICATELOCK, xactLink));
3765 tag = predlock->tag;
3766 targetLink = &(predlock->targetLink);
3767 target = tag.myTarget;
3768 targettag = target->tag;
3769 targettaghash = PredicateLockTargetTagHashCode(&targettag);
3770 partitionLock = PredicateLockHashPartitionLock(targettaghash);
3772 LWLockAcquire(partitionLock, LW_EXCLUSIVE);
3774 SHMQueueDelete(targetLink);
3776 hash_search_with_hash_value(PredicateLockHash, &tag,
3777 PredicateLockHashCodeFromTargetHashCode(&tag,
3784 /* Fold into dummy transaction list. */
3785 tag.myXact = OldCommittedSxact;
3786 predlock = hash_search_with_hash_value(PredicateLockHash, &tag,
3787 PredicateLockHashCodeFromTargetHashCode(&tag,
3789 HASH_ENTER_NULL, &found);
3792 (errcode(ERRCODE_OUT_OF_MEMORY),
3793 errmsg("out of shared memory"),
3794 errhint("You might need to increase max_pred_locks_per_transaction.")));
3797 Assert(predlock->commitSeqNo != 0);
3798 Assert(predlock->commitSeqNo != InvalidSerCommitSeqNo);
3799 if (predlock->commitSeqNo < sxact->commitSeqNo)
3800 predlock->commitSeqNo = sxact->commitSeqNo;
3804 SHMQueueInsertBefore(&(target->predicateLocks),
3805 &(predlock->targetLink));
3806 SHMQueueInsertBefore(&(OldCommittedSxact->predicateLocks),
3807 &(predlock->xactLink));
3808 predlock->commitSeqNo = sxact->commitSeqNo;
3812 RemoveTargetIfNoLongerUsed(target, targettaghash);
3814 LWLockRelease(partitionLock);
3816 predlock = nextpredlock;
3820 * Rather than retail removal, just re-init the head after we've run through the list.
3823 SHMQueueInit(&sxact->predicateLocks);
3825 LWLockRelease(SerializablePredicateLockListLock);
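The "re-init the head" trick above works because SHM_QUEUE is a circular intrusive list: after every element has been processed and unlinked from its other list, resetting the head to point at itself empties this list in O(1). A minimal stand-alone sketch of that list shape (not the real SHM_QUEUE implementation):

```c
#include <assert.h>

/* Minimal circular intrusive queue in the spirit of SHM_QUEUE. */
typedef struct ShmQueue { struct ShmQueue *prev, *next; } ShmQueue;

static void queue_init(ShmQueue *q) { q->prev = q->next = q; }

static void queue_insert_before(ShmQueue *head, ShmQueue *elem)
{
    elem->prev = head->prev;
    elem->next = head;
    head->prev->next = elem;
    head->prev = elem;
}

static int queue_length(const ShmQueue *head)
{
    int n = 0;
    for (const ShmQueue *p = head->next; p != head; p = p->next)
        n++;
    return n;
}
```

Re-initializing the head leaves the old elements' links dangling, which is safe only because, as here, those elements are being freed or re-homed in the same operation.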
3827 sxidtag.xid = sxact->topXid;
3828 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
3830 /* Release all outConflicts (unless 'partial' is true) */
3833 conflict = (RWConflict)
3834 SHMQueueNext(&sxact->outConflicts,
3835 &sxact->outConflicts,
3836 offsetof(RWConflictData, outLink));
3839 nextConflict = (RWConflict)
3840 SHMQueueNext(&sxact->outConflicts,
3842 offsetof(RWConflictData, outLink));
3844 conflict->sxactIn->flags |= SXACT_FLAG_SUMMARY_CONFLICT_IN;
3845 ReleaseRWConflict(conflict);
3846 conflict = nextConflict;
3850 /* Release all inConflicts. */
3851 conflict = (RWConflict)
3852 SHMQueueNext(&sxact->inConflicts,
3853 &sxact->inConflicts,
3854 offsetof(RWConflictData, inLink));
3857 nextConflict = (RWConflict)
3858 SHMQueueNext(&sxact->inConflicts,
3860 offsetof(RWConflictData, inLink));
3862 conflict->sxactOut->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
3863 ReleaseRWConflict(conflict);
3864 conflict = nextConflict;
3867 /* Finally, get rid of the xid and the record of the transaction itself. */
3870 if (sxidtag.xid != InvalidTransactionId)
3871 hash_search(SerializableXidHash, &sxidtag, HASH_REMOVE, NULL);
3872 ReleasePredXact(sxact);
3875 LWLockRelease(SerializableXactHashLock);
3879 * Tests whether the given top level transaction is concurrent with
3880 * (overlaps) our current transaction.
3882 * We need to identify the top level transaction for SSI, anyway, so pass
3883 * that to this function to save the overhead of checking the snapshot's
3887 XidIsConcurrent(TransactionId xid)
3892 Assert(TransactionIdIsValid(xid));
3893 Assert(!TransactionIdEquals(xid, GetTopTransactionIdIfAny()));
3895 snap = GetTransactionSnapshot();
3897 if (TransactionIdPrecedes(xid, snap->xmin))
3900 if (TransactionIdFollowsOrEquals(xid, snap->xmax))
3903 for (i = 0; i < snap->xcnt; i++)
3905 if (xid == snap->xip[i])
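XidIsConcurrent's three checks map directly onto the snapshot's shape: before xmin means finished, at or after xmax means started later, and in between it is concurrent only if listed in the in-progress array. A self-contained sketch, with plain comparisons standing in for the circular-XID precedes/follows tests:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Simplified snapshot layout (illustrative, not the real struct). */
typedef struct
{
    TransactionId xmin;         /* every xid < xmin had completed */
    TransactionId xmax;         /* every xid >= xmax had not yet started */
    const TransactionId *xip;   /* in-progress xids within [xmin, xmax) */
    int xcnt;
} Snapshot;

static bool xid_is_concurrent(TransactionId xid, const Snapshot *snap)
{
    if (xid < snap->xmin)
        return false;           /* finished before our snapshot was taken */
    if (xid >= snap->xmax)
        return true;            /* started after our snapshot was taken */
    for (int i = 0; i < snap->xcnt; i++)
        if (xid == snap->xip[i])
            return true;        /* was in progress at snapshot time */
    return false;               /* in the gap but already finished */
}
```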
3913 * CheckForSerializableConflictOut
3914 * We are reading a tuple which has been modified. If it is visible to
3915 * us but has been deleted, that indicates a rw-conflict out. If it's
3916 * not visible and was created by a concurrent (overlapping)
3917 * serializable transaction, that is also a rw-conflict out.
3919 * We will determine the top level xid of the writing transaction with which
3920 * we may be in conflict, and check for overlap with our own transaction.
3921 * If the transactions overlap (i.e., they cannot see each other's writes),
3922 * then we have a conflict out.
3924 * This function should be called just about anywhere in heapam.c where a
3925 * tuple has been read. The caller must hold at least a shared lock on the
3926 * buffer, because this function might set hint bits on the tuple. There is
3927 * currently no known reason to call this function from an index AM.
3930 CheckForSerializableConflictOut(bool visible, Relation relation,
3931 HeapTuple tuple, Buffer buffer,
3935 SERIALIZABLEXIDTAG sxidtag;
3936 SERIALIZABLEXID *sxid;
3937 SERIALIZABLEXACT *sxact;
3938 HTSV_Result htsvResult;
3940 if (!SerializationNeededForRead(relation, snapshot))
3943 /* Check if someone else has already decided that we need to die */
3944 if (SxactIsDoomed(MySerializableXact))
3947 (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
3948 errmsg("could not serialize access due to read/write dependencies among transactions"),
3949 errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict out checking."),
3950 errhint("The transaction might succeed if retried.")));
3954 * Check to see whether the tuple has been written to by a concurrent
3955 * transaction, either to create it not visible to us, or to delete it
3956 * while it is visible to us. The "visible" bool indicates whether the
3957 * tuple is visible to us, while HeapTupleSatisfiesVacuum checks what else
3958 * is going on with it.
3960 htsvResult = HeapTupleSatisfiesVacuum(tuple, TransactionXmin, buffer);
3963 case HEAPTUPLE_LIVE:
3966 xid = HeapTupleHeaderGetXmin(tuple->t_data);
3968 case HEAPTUPLE_RECENTLY_DEAD:
3971 xid = HeapTupleHeaderGetUpdateXid(tuple->t_data);
3973 case HEAPTUPLE_DELETE_IN_PROGRESS:
3974 xid = HeapTupleHeaderGetUpdateXid(tuple->t_data);
3976 case HEAPTUPLE_INSERT_IN_PROGRESS:
3977 xid = HeapTupleHeaderGetXmin(tuple->t_data);
3979 case HEAPTUPLE_DEAD:
3984 * The only way to get to this default clause is if a new value is
3985 * added to the enum type without adding it to this switch
3986 * statement. That's a bug, so elog.
3988 elog(ERROR, "unrecognized return value from HeapTupleSatisfiesVacuum: %u", htsvResult);
3991 * In spite of having all enum values covered and calling elog on
3992 * this default, some compilers think this is a code path which
3993 * allows xid to be used below without initialization. Silence that warning.
3996 xid = InvalidTransactionId;
3998 Assert(TransactionIdIsValid(xid));
3999 Assert(TransactionIdFollowsOrEquals(xid, TransactionXmin));
4002 * Find top level xid. Bail out if xid is too early to be a conflict, or
4003 * if it's our own xid.
4005 if (TransactionIdEquals(xid, GetTopTransactionIdIfAny()))
4007 xid = SubTransGetTopmostTransaction(xid);
4008 if (TransactionIdPrecedes(xid, TransactionXmin))
4010 if (TransactionIdEquals(xid, GetTopTransactionIdIfAny()))
4014 * Find sxact or summarized info for the top level xid.
4017 LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
4018 sxid = (SERIALIZABLEXID *)
4019 hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
4023 * Transaction not found in "normal" SSI structures. Check whether it
4024 * got pushed out to SLRU storage for "old committed" transactions.
4026 SerCommitSeqNo conflictCommitSeqNo;
4028 conflictCommitSeqNo = OldSerXidGetMinConflictCommitSeqNo(xid);
4029 if (conflictCommitSeqNo != 0)
4031 if (conflictCommitSeqNo != InvalidSerCommitSeqNo
4032 && (!SxactIsReadOnly(MySerializableXact)
4033 || conflictCommitSeqNo
4034 <= MySerializableXact->SeqNo.lastCommitBeforeSnapshot))
4036 (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4037 errmsg("could not serialize access due to read/write dependencies among transactions"),
4038 errdetail_internal("Reason code: Canceled on conflict out to old pivot %u.", xid),
4039 errhint("The transaction might succeed if retried.")));
4041 if (SxactHasSummaryConflictIn(MySerializableXact)
4042 || !SHMQueueEmpty(&MySerializableXact->inConflicts))
4044 (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4045 errmsg("could not serialize access due to read/write dependencies among transactions"),
4046 errdetail_internal("Reason code: Canceled on identification as a pivot, with conflict out to old committed transaction %u.", xid),
4047 errhint("The transaction might succeed if retried.")));
4049 MySerializableXact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
4052 /* It's not serializable or otherwise not important. */
4053 LWLockRelease(SerializableXactHashLock);
4056 sxact = sxid->myXact;
4057 Assert(TransactionIdEquals(sxact->topXid, xid));
4058 if (sxact == MySerializableXact || SxactIsDoomed(sxact))
4060 /* Can't conflict with ourself or a transaction that will roll back. */
4061 LWLockRelease(SerializableXactHashLock);
4066 * We have a conflict out to a transaction which has a conflict out to a
4067 * summarized transaction. That summarized transaction must have
4068 * committed first, and we can't tell when it committed in relation to our
4069 * snapshot acquisition, so something needs to be canceled.
4071 if (SxactHasSummaryConflictOut(sxact))
4073 if (!SxactIsPrepared(sxact))
4075 sxact->flags |= SXACT_FLAG_DOOMED;
4076 LWLockRelease(SerializableXactHashLock);
4081 LWLockRelease(SerializableXactHashLock);
4083 (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
4084 errmsg("could not serialize access due to read/write dependencies among transactions"),
4085 errdetail_internal("Reason code: Canceled on conflict out to old pivot."),
4086 errhint("The transaction might succeed if retried.")));
4091 * If this is a read-only transaction and the writing transaction has
4092 * committed, and it doesn't have a rw-conflict to a transaction which
4093 * committed before it, no conflict.
4095 if (SxactIsReadOnly(MySerializableXact)
4096 && SxactIsCommitted(sxact)
4097 && !SxactHasSummaryConflictOut(sxact)
4098 && (!SxactHasConflictOut(sxact)
4099 || MySerializableXact->SeqNo.lastCommitBeforeSnapshot < sxact->SeqNo.earliestOutConflictCommit))
4101 /* Read-only transaction will appear to run first. No conflict. */
4102 LWLockRelease(SerializableXactHashLock);
4106 if (!XidIsConcurrent(xid))
4108 /* This write was already in our snapshot; no conflict. */
4109 LWLockRelease(SerializableXactHashLock);
4113 if (RWConflictExists(MySerializableXact, sxact))
4115 /* We don't want duplicate conflict records in the list. */
4116 LWLockRelease(SerializableXactHashLock);
4121 * Flag the conflict. But first, if this conflict creates a dangerous
4122 * structure, ereport an error.
4124 FlagRWConflict(MySerializableXact, sxact);
4125 LWLockRelease(SerializableXactHashLock);
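The FlagRWConflict call above records the rw-dependency edge; what ultimately matters for SSI is whether some transaction accumulates both an incoming and an outgoing edge, making it a pivot, the pattern present in every serialization anomaly. A toy sketch of just that condition (the real check in FlagRWConflict's helper also weighs commit order, prepared state, and summarized conflicts):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative per-transaction conflict flags, not the real SXACT flags. */
typedef struct { bool conflict_in; bool conflict_out; } Xact;

/* Record a rw-dependency from reader to writer and report whether either
 * side has become a pivot: a transaction with both an incoming and an
 * outgoing rw edge. */
static bool flag_rw_conflict(Xact *reader, Xact *writer)
{
    reader->conflict_out = true;
    writer->conflict_in = true;
    return (reader->conflict_in && reader->conflict_out) ||
           (writer->conflict_in && writer->conflict_out);
}
```

When the pivot condition fires, SSI aborts one of the involved transactions with a serialization failure rather than letting the dangerous structure complete.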
/*
 * Check a particular target for rw-dependency conflict in.  A subroutine of
 * CheckForSerializableConflictIn().
 */
static void
CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag)
{
	uint32		targettaghash;
	LWLock	   *partitionLock;
	PREDICATELOCKTARGET *target;
	PREDICATELOCK *predlock;
	PREDICATELOCK *mypredlock = NULL;
	PREDICATELOCKTAG mypredlocktag;

	Assert(MySerializableXact != InvalidSerializableXact);

	/*
	 * The same hash and LW lock apply to the lock target and the lock itself.
	 */
	targettaghash = PredicateLockTargetTagHashCode(targettag);
	partitionLock = PredicateLockHashPartitionLock(targettaghash);
	LWLockAcquire(partitionLock, LW_SHARED);
	target = (PREDICATELOCKTARGET *)
		hash_search_with_hash_value(PredicateLockTargetHash,
									targettag, targettaghash,
									HASH_FIND, NULL);
	if (!target)
	{
		/* Nothing has this target locked; we're done here. */
		LWLockRelease(partitionLock);
		return;
	}

	/*
	 * Each lock for an overlapping transaction represents a conflict: a
	 * rw-dependency in to this transaction.
	 */
	predlock = (PREDICATELOCK *)
		SHMQueueNext(&(target->predicateLocks),
					 &(target->predicateLocks),
					 offsetof(PREDICATELOCK, targetLink));
	LWLockAcquire(SerializableXactHashLock, LW_SHARED);
	while (predlock)
	{
		SHM_QUEUE  *predlocktargetlink;
		PREDICATELOCK *nextpredlock;
		SERIALIZABLEXACT *sxact;

		predlocktargetlink = &(predlock->targetLink);
		nextpredlock = (PREDICATELOCK *)
			SHMQueueNext(&(target->predicateLocks),
						 predlocktargetlink,
						 offsetof(PREDICATELOCK, targetLink));

		sxact = predlock->tag.myXact;
		if (sxact == MySerializableXact)
		{
			/*
			 * If we're getting a write lock on a tuple, we don't need a
			 * predicate (SIREAD) lock on the same tuple.  We can safely
			 * remove our SIREAD lock, but we'll defer doing so until after
			 * the loop because that requires upgrading to an exclusive
			 * partition lock.
			 *
			 * We can't use this optimization within a subtransaction because
			 * the subtransaction could roll back, and we would be left
			 * without any lock at the top level.
			 */
			if (!IsSubTransaction()
				&& GET_PREDICATELOCKTARGETTAG_OFFSET(*targettag))
			{
				mypredlock = predlock;
				mypredlocktag = predlock->tag;
			}
		}
		else if (!SxactIsDoomed(sxact)
				 && (!SxactIsCommitted(sxact)
					 || TransactionIdPrecedes(GetTransactionSnapshot()->xmin,
											  sxact->finishedBefore))
				 && !RWConflictExists(sxact, MySerializableXact))
		{
			LWLockRelease(SerializableXactHashLock);
			LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);

			/*
			 * Re-check after getting exclusive lock because the other
			 * transaction may have flagged a conflict.
			 */
			if (!SxactIsDoomed(sxact)
				&& (!SxactIsCommitted(sxact)
					|| TransactionIdPrecedes(GetTransactionSnapshot()->xmin,
											 sxact->finishedBefore))
				&& !RWConflictExists(sxact, MySerializableXact))
			{
				FlagRWConflict(sxact, MySerializableXact);
			}

			LWLockRelease(SerializableXactHashLock);
			LWLockAcquire(SerializableXactHashLock, LW_SHARED);
		}

		predlock = nextpredlock;
	}
	LWLockRelease(SerializableXactHashLock);
	LWLockRelease(partitionLock);

	/*
	 * If we found one of our own SIREAD locks to remove, remove it now.
	 *
	 * At this point our transaction already has a RowExclusiveLock on the
	 * relation, so we are OK to drop the predicate lock on the tuple, if
	 * found, without fearing that another write against the tuple will occur
	 * before the MVCC information makes it to the buffer.
	 */
	if (mypredlock != NULL)
	{
		uint32		predlockhashcode;
		PREDICATELOCK *rmpredlock;

		LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);
		LWLockAcquire(partitionLock, LW_EXCLUSIVE);
		LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);

		/*
		 * Remove the predicate lock from shared memory, if it wasn't removed
		 * while the locks were released.  One way that could happen is from
		 * autovacuum cleaning up an index.
		 */
		predlockhashcode = PredicateLockHashCodeFromTargetHashCode
			(&mypredlocktag, targettaghash);
		rmpredlock = (PREDICATELOCK *)
			hash_search_with_hash_value(PredicateLockHash,
										&mypredlocktag,
										predlockhashcode,
										HASH_FIND, NULL);
		if (rmpredlock != NULL)
		{
			Assert(rmpredlock == mypredlock);

			SHMQueueDelete(&(mypredlock->targetLink));
			SHMQueueDelete(&(mypredlock->xactLink));

			rmpredlock = (PREDICATELOCK *)
				hash_search_with_hash_value(PredicateLockHash,
											&mypredlocktag,
											predlockhashcode,
											HASH_REMOVE, NULL);
			Assert(rmpredlock == mypredlock);

			RemoveTargetIfNoLongerUsed(target, targettaghash);
		}

		LWLockRelease(SerializableXactHashLock);
		LWLockRelease(partitionLock);
		LWLockRelease(SerializablePredicateLockListLock);

		if (rmpredlock != NULL)
		{
			/*
			 * Remove entry in local lock table if it exists.  It's OK if it
			 * doesn't exist; that means the lock was transferred to a new
			 * target by a different backend.
			 */
			hash_search_with_hash_value(LocalPredicateLockHash,
										targettag, targettaghash,
										HASH_REMOVE, NULL);

			DecrementParentLocks(targettag);
		}
	}
}
/*
 * CheckForSerializableConflictIn
 *		We are writing the given tuple.  If that indicates a rw-conflict
 *		in from another serializable transaction, take appropriate action.
 *
 * Skip checking for any granularity for which a parameter is missing.
 *
 * A tuple update or delete is in conflict if we have a predicate lock
 * against the relation or page in which the tuple exists, or against the
 * tuple itself.
 */
void
CheckForSerializableConflictIn(Relation relation, HeapTuple tuple,
							   Buffer buffer)
{
	PREDICATELOCKTARGETTAG targettag;

	if (!SerializationNeededForWrite(relation))
		return;

	/* Check if someone else has already decided that we need to die */
	if (SxactIsDoomed(MySerializableXact))
		ereport(ERROR,
				(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
				 errmsg("could not serialize access due to read/write dependencies among transactions"),
				 errdetail_internal("Reason code: Canceled on identification as a pivot, during conflict in checking."),
				 errhint("The transaction might succeed if retried.")));

	/*
	 * We're doing a write which might cause rw-conflicts now or later.
	 * Memorize that fact.
	 */
	MyXactDidWrite = true;

	/*
	 * It is important that we check for locks from the finest granularity to
	 * the coarsest granularity, so that granularity promotion doesn't cause
	 * us to miss a lock.  The new (coarser) lock will be acquired before the
	 * old (finer) locks are released.
	 *
	 * It is not possible to take and hold a lock across the checks for all
	 * granularities because each target could be in a separate partition.
	 */
	if (tuple != NULL)
	{
		SET_PREDICATELOCKTARGETTAG_TUPLE(targettag,
										 relation->rd_node.dbNode,
										 relation->rd_id,
										 ItemPointerGetBlockNumber(&(tuple->t_self)),
										 ItemPointerGetOffsetNumber(&(tuple->t_self)));
		CheckTargetForConflictsIn(&targettag);
	}

	if (BufferIsValid(buffer))
	{
		SET_PREDICATELOCKTARGETTAG_PAGE(targettag,
										relation->rd_node.dbNode,
										relation->rd_id,
										BufferGetBlockNumber(buffer));
		CheckTargetForConflictsIn(&targettag);
	}

	SET_PREDICATELOCKTARGETTAG_RELATION(targettag,
										relation->rd_node.dbNode,
										relation->rd_id);
	CheckTargetForConflictsIn(&targettag);
}
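/*
 * For example (a hypothetical race, sketched here to illustrate the ordering
 * argument above): suppose another backend holds a SIREAD lock on a tuple in
 * block 5 of this relation and is concurrently promoting it to a lock on the
 * whole page.  Because the promoter acquires the coarser page lock before
 * releasing the finer tuple lock, a writer that checks tuple, then page,
 * then relation is guaranteed to see at least one of the two locks; checking
 * coarsest-first could miss both.
 */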
/*
 * CheckTableForSerializableConflictIn
 *		The entire table is going through a DDL-style logical mass delete
 *		like TRUNCATE or DROP TABLE.  If that causes a rw-conflict in from
 *		another serializable transaction, take appropriate action.
 *
 * While these operations do not operate entirely within the bounds of
 * snapshot isolation, they can occur inside a serializable transaction, and
 * will logically occur after any reads which saw rows which were destroyed
 * by these operations, so we do what we can to serialize properly under
 * SSI.
 *
 * The relation passed in must be a heap relation.  Any predicate lock of any
 * granularity on the heap will cause a rw-conflict in to this transaction.
 * Predicate locks on indexes do not matter because they only exist to guard
 * against conflicting inserts into the index, and this is a mass *delete*.
 * When a table is truncated or dropped, the index will also be truncated
 * or dropped, and we'll deal with locks on the index when that happens.
 *
 * Dropping or truncating a table also needs to drop any existing predicate
 * locks on heap tuples or pages, because they're about to go away.  This
 * should be done before altering the predicate locks because the transaction
 * could be rolled back because of a conflict, in which case the lock changes
 * are not needed.  (At the moment, we don't actually bother to drop the
 * existing locks on a dropped or truncated table.  That might lead to some
 * false positives, but it doesn't seem worth the trouble.)
 */
void
CheckTableForSerializableConflictIn(Relation relation)
{
	HASH_SEQ_STATUS seqstat;
	PREDICATELOCKTARGET *target;
	Oid			dbId;
	Oid			heapId;
	int			i;

	/*
	 * Bail out quickly if there are no serializable transactions running.
	 * It's safe to check this without taking locks because the caller is
	 * holding an ACCESS EXCLUSIVE lock on the relation.  No new locks which
	 * would matter here can be acquired while that is held.
	 */
	if (!TransactionIdIsValid(PredXact->SxactGlobalXmin))
		return;

	if (!SerializationNeededForWrite(relation))
		return;

	/*
	 * We're doing a write which might cause rw-conflicts now or later.
	 * Memorize that fact.
	 */
	MyXactDidWrite = true;

	Assert(relation->rd_index == NULL); /* not an index relation */

	dbId = relation->rd_node.dbNode;
	heapId = relation->rd_id;

	LWLockAcquire(SerializablePredicateLockListLock, LW_EXCLUSIVE);
	for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
		LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_SHARED);
	LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);

	/* Scan through target list */
	hash_seq_init(&seqstat, PredicateLockTargetHash);

	while ((target = (PREDICATELOCKTARGET *) hash_seq_search(&seqstat)))
	{
		PREDICATELOCK *predlock;

		/*
		 * Check whether this is a target which needs attention.
		 */
		if (GET_PREDICATELOCKTARGETTAG_RELATION(target->tag) != heapId)
			continue;			/* wrong relation id */
		if (GET_PREDICATELOCKTARGETTAG_DB(target->tag) != dbId)
			continue;			/* wrong database id */

		/*
		 * Loop through locks for this target and flag conflicts.
		 */
		predlock = (PREDICATELOCK *)
			SHMQueueNext(&(target->predicateLocks),
						 &(target->predicateLocks),
						 offsetof(PREDICATELOCK, targetLink));
		while (predlock)
		{
			PREDICATELOCK *nextpredlock;

			nextpredlock = (PREDICATELOCK *)
				SHMQueueNext(&(target->predicateLocks),
							 &(predlock->targetLink),
							 offsetof(PREDICATELOCK, targetLink));

			if (predlock->tag.myXact != MySerializableXact
				&& !RWConflictExists(predlock->tag.myXact, MySerializableXact))
			{
				FlagRWConflict(predlock->tag.myXact, MySerializableXact);
			}

			predlock = nextpredlock;
		}
	}

	/* Release locks in reverse order */
	LWLockRelease(SerializableXactHashLock);
	for (i = NUM_PREDICATELOCK_PARTITIONS - 1; i >= 0; i--)
		LWLockRelease(PredicateLockHashPartitionLockByIndex(i));
	LWLockRelease(SerializablePredicateLockListLock);
}
/*
 * Flag a rw-dependency between two serializable transactions.
 *
 * The caller is responsible for ensuring that we have a LW lock on
 * the transaction hash table.
 */
static void
FlagRWConflict(SERIALIZABLEXACT *reader, SERIALIZABLEXACT *writer)
{
	Assert(reader != writer);

	/* First, see if this conflict causes failure. */
	OnConflict_CheckForSerializationFailure(reader, writer);

	/* Actually do the conflict flagging. */
	if (reader == OldCommittedSxact)
		writer->flags |= SXACT_FLAG_SUMMARY_CONFLICT_IN;
	else if (writer == OldCommittedSxact)
		reader->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;
	else
		SetRWConflict(reader, writer);
}
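/*
 * For example, one schedule (illustrative names, not from the code) that
 * creates the dangerous structure checked for by
 * OnConflict_CheckForSerializationFailure() below, with transactions T1,
 * T2, T3 touching rows x and y:
 *
 *		T1 reads y, T2 writes y		=> T1 --rw--> T2	(T1 is Tin)
 *		T2 reads x, T3 writes x		=> T2 --rw--> T3	(T2 is the pivot)
 *		T3 commits before T1 and T2	=> T3 is Tout, committing first
 *
 * The structure is necessary but not sufficient for an anomaly, so this
 * check can cancel a transaction whose schedule was actually safe; that
 * false-positive risk is the price of cheap detection.
 */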
/*----------------------------------------------------------------------------
 * We are about to add a RW-edge to the dependency graph - check that we don't
 * introduce a dangerous structure by doing so, and abort one of the
 * transactions if so.
 *
 * A serialization failure can only occur if there is a dangerous structure
 * in the dependency graph:
 *
 *		Tin ------> Tpivot ------> Tout
 *			  rw			 rw
 *
 * Furthermore, Tout must commit first.
 *
 * One more optimization is that if Tin is declared READ ONLY (or commits
 * without writing), we can only have a problem if Tout committed before Tin
 * acquired its snapshot.
 *----------------------------------------------------------------------------
 */
static void
OnConflict_CheckForSerializationFailure(const SERIALIZABLEXACT *reader,
										SERIALIZABLEXACT *writer)
{
	bool		failure;
	RWConflict	conflict;

	Assert(LWLockHeldByMe(SerializableXactHashLock));

	failure = false;

	/*------------------------------------------------------------------------
	 * Check for already-committed writer with rw-conflict out flagged
	 * (conflict-flag on W means that T2 committed before W):
	 *
	 *		R ------> W ------> T2
	 *			rw		  rw
	 *
	 * That is a dangerous structure, so we must abort.  (Since the writer
	 * has already committed, we must be the reader.)
	 *------------------------------------------------------------------------
	 */
	if (SxactIsCommitted(writer)
		&& (SxactHasConflictOut(writer) || SxactHasSummaryConflictOut(writer)))
		failure = true;

	/*------------------------------------------------------------------------
	 * Check whether the writer has become a pivot with an out-conflict
	 * committed transaction (T2), and T2 committed first:
	 *
	 *		R ------> W ------> T2
	 *			rw		  rw
	 *
	 * Because T2 must've committed first, there is no anomaly if:
	 * - the reader committed before T2
	 * - the writer committed before T2
	 * - the reader is a READ ONLY transaction and the reader was concurrent
	 *	 with T2 (= reader acquired its snapshot before T2 committed)
	 *
	 * We also handle the case that T2 is prepared but not yet committed
	 * here.  In that case T2 has already checked for conflicts, so if it
	 * commits first, making the above conflict real, it's too late for it
	 * to abort.
	 *------------------------------------------------------------------------
	 */
	if (!failure)
	{
		if (SxactHasSummaryConflictOut(writer))
		{
			failure = true;
			conflict = NULL;
		}
		else
			conflict = (RWConflict)
				SHMQueueNext(&writer->outConflicts,
							 &writer->outConflicts,
							 offsetof(RWConflictData, outLink));
		while (conflict)
		{
			SERIALIZABLEXACT *t2 = conflict->sxactIn;

			if (SxactIsPrepared(t2)
				&& (!SxactIsCommitted(reader)
					|| t2->prepareSeqNo <= reader->commitSeqNo)
				&& (!SxactIsCommitted(writer)
					|| t2->prepareSeqNo <= writer->commitSeqNo)
				&& (!SxactIsReadOnly(reader)
					|| t2->prepareSeqNo <= reader->SeqNo.lastCommitBeforeSnapshot))
			{
				failure = true;
				break;
			}
			conflict = (RWConflict)
				SHMQueueNext(&writer->outConflicts,
							 &conflict->outLink,
							 offsetof(RWConflictData, outLink));
		}
	}

	/*------------------------------------------------------------------------
	 * Check whether the reader has become a pivot with a writer
	 * that's committed (or prepared):
	 *
	 *		T0 ------> R ------> W
	 *			 rw		   rw
	 *
	 * Because W must've committed first for an anomaly to occur, there is no
	 * anomaly if:
	 * - T0 committed before the writer
	 * - T0 is READ ONLY, and overlaps the writer
	 *------------------------------------------------------------------------
	 */
	if (!failure && SxactIsPrepared(writer) && !SxactIsReadOnly(reader))
	{
		if (SxactHasSummaryConflictIn(reader))
		{
			failure = true;
			conflict = NULL;
		}
		else
			conflict = (RWConflict)
				SHMQueueNext(&reader->inConflicts,
							 &reader->inConflicts,
							 offsetof(RWConflictData, inLink));
		while (conflict)
		{
			SERIALIZABLEXACT *t0 = conflict->sxactOut;

			if (!SxactIsDoomed(t0)
				&& (!SxactIsCommitted(t0)
					|| t0->commitSeqNo >= writer->prepareSeqNo)
				&& (!SxactIsReadOnly(t0)
					|| t0->SeqNo.lastCommitBeforeSnapshot >= writer->prepareSeqNo))
			{
				failure = true;
				break;
			}
			conflict = (RWConflict)
				SHMQueueNext(&reader->inConflicts,
							 &conflict->inLink,
							 offsetof(RWConflictData, inLink));
		}
	}

	if (failure)
	{
		/*
		 * We have to kill a transaction to avoid a possible anomaly from
		 * occurring.  If the writer is us, we can just ereport() to cause a
		 * transaction abort.  Otherwise we flag the writer for termination,
		 * causing it to abort when it tries to commit.  However, if the
		 * writer is a prepared transaction, it is already too late to abort
		 * it, so we have to kill the reader instead.
		 */
		if (MySerializableXact == writer)
		{
			LWLockRelease(SerializableXactHashLock);
			ereport(ERROR,
					(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
					 errmsg("could not serialize access due to read/write dependencies among transactions"),
					 errdetail_internal("Reason code: Canceled on identification as a pivot, during write."),
					 errhint("The transaction might succeed if retried.")));
		}
		else if (SxactIsPrepared(writer))
		{
			LWLockRelease(SerializableXactHashLock);

			/* if we're not the writer, we have to be the reader */
			Assert(MySerializableXact == reader);
			ereport(ERROR,
					(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
					 errmsg("could not serialize access due to read/write dependencies among transactions"),
					 errdetail_internal("Reason code: Canceled on conflict out to pivot %u, during read.", writer->topXid),
					 errhint("The transaction might succeed if retried.")));
		}
		writer->flags |= SXACT_FLAG_DOOMED;
	}
}
/*
 * PreCommit_CheckForSerializationFailure
 *		Check for dangerous structures in a serializable transaction
 *		at commit.
 *
 * We're checking for a dangerous structure as each conflict is recorded.
 * The only way we could have a problem at commit is if this is the "out"
 * side of a pivot, and neither the "in" side nor the pivot has yet
 * committed.
 *
 * If a dangerous structure is found, the pivot (the near conflict) is
 * marked for death, because rolling back another transaction might mean
 * that we flail without ever making progress.  This transaction is
 * committing writes, so letting it commit ensures progress.  If we
 * canceled the far conflict, it might immediately fail again on retry.
 */
void
PreCommit_CheckForSerializationFailure(void)
{
	RWConflict	nearConflict;

	if (MySerializableXact == InvalidSerializableXact)
		return;

	Assert(IsolationIsSerializable());

	LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);

	/* Check if someone else has already decided that we need to die */
	if (SxactIsDoomed(MySerializableXact))
	{
		LWLockRelease(SerializableXactHashLock);
		ereport(ERROR,
				(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
				 errmsg("could not serialize access due to read/write dependencies among transactions"),
				 errdetail_internal("Reason code: Canceled on identification as a pivot, during commit attempt."),
				 errhint("The transaction might succeed if retried.")));
	}

	nearConflict = (RWConflict)
		SHMQueueNext(&MySerializableXact->inConflicts,
					 &MySerializableXact->inConflicts,
					 offsetof(RWConflictData, inLink));
	while (nearConflict)
	{
		if (!SxactIsCommitted(nearConflict->sxactOut)
			&& !SxactIsDoomed(nearConflict->sxactOut))
		{
			RWConflict	farConflict;

			farConflict = (RWConflict)
				SHMQueueNext(&nearConflict->sxactOut->inConflicts,
							 &nearConflict->sxactOut->inConflicts,
							 offsetof(RWConflictData, inLink));
			while (farConflict)
			{
				if (farConflict->sxactOut == MySerializableXact
					|| (!SxactIsCommitted(farConflict->sxactOut)
						&& !SxactIsReadOnly(farConflict->sxactOut)
						&& !SxactIsDoomed(farConflict->sxactOut)))
				{
					/*
					 * Normally, we kill the pivot transaction to make sure we
					 * make progress if the failing transaction is retried.
					 * However, we can't kill it if it's already prepared, so
					 * in that case we commit suicide instead.
					 */
					if (SxactIsPrepared(nearConflict->sxactOut))
					{
						LWLockRelease(SerializableXactHashLock);
						ereport(ERROR,
								(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
								 errmsg("could not serialize access due to read/write dependencies among transactions"),
								 errdetail_internal("Reason code: Canceled on commit attempt with conflict in from prepared pivot."),
								 errhint("The transaction might succeed if retried.")));
					}
					nearConflict->sxactOut->flags |= SXACT_FLAG_DOOMED;
					break;
				}
				farConflict = (RWConflict)
					SHMQueueNext(&nearConflict->sxactOut->inConflicts,
								 &farConflict->inLink,
								 offsetof(RWConflictData, inLink));
			}
		}

		nearConflict = (RWConflict)
			SHMQueueNext(&MySerializableXact->inConflicts,
						 &nearConflict->inLink,
						 offsetof(RWConflictData, inLink));
	}

	MySerializableXact->prepareSeqNo = ++(PredXact->LastSxactCommitSeqNo);
	MySerializableXact->flags |= SXACT_FLAG_PREPARED;

	LWLockRelease(SerializableXactHashLock);
}
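/*
 * For example (illustrative transaction names): if TA read a row that TB
 * later updated (TA --rw--> TB), and TB read a row that this transaction
 * updated (TB --rw--> us), then our committing first would make TB a pivot
 * in a dangerous structure with us as Tout.  The loop above dooms TB (the
 * near conflict) rather than canceling our own commit or TA's, since
 * letting the committing writer proceed is what guarantees progress.
 */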
/*------------------------------------------------------------------------*/

/*
 * Two-phase commit support
 */

/*
 * AtPrepare_PredicateLocks
 *		Do the preparatory work for a PREPARE: make 2PC state file
 *		records for all predicate locks currently held.
 */
void
AtPrepare_PredicateLocks(void)
{
	PREDICATELOCK *predlock;
	SERIALIZABLEXACT *sxact;
	TwoPhasePredicateRecord record;
	TwoPhasePredicateXactRecord *xactRecord;
	TwoPhasePredicateLockRecord *lockRecord;

	sxact = MySerializableXact;
	xactRecord = &(record.data.xactRecord);
	lockRecord = &(record.data.lockRecord);

	if (MySerializableXact == InvalidSerializableXact)
		return;

	/* Generate an xact record for our SERIALIZABLEXACT */
	record.type = TWOPHASEPREDICATERECORD_XACT;
	xactRecord->xmin = MySerializableXact->xmin;
	xactRecord->flags = MySerializableXact->flags;

	/*
	 * Note that we don't include our lists of conflicts in and out in the
	 * statefile, because new conflicts can be added even after the
	 * transaction prepares.  We'll just make a conservative assumption
	 * during recovery instead.
	 */

	RegisterTwoPhaseRecord(TWOPHASE_RM_PREDICATELOCK_ID, 0,
						   &record, sizeof(record));

	/*
	 * Generate a lock record for each lock.
	 *
	 * To do this, we need to walk the predicate lock list in our sxact rather
	 * than using the local predicate lock table because the latter is not
	 * guaranteed to be accurate.
	 */
	LWLockAcquire(SerializablePredicateLockListLock, LW_SHARED);

	predlock = (PREDICATELOCK *)
		SHMQueueNext(&(sxact->predicateLocks),
					 &(sxact->predicateLocks),
					 offsetof(PREDICATELOCK, xactLink));

	while (predlock != NULL)
	{
		record.type = TWOPHASEPREDICATERECORD_LOCK;
		lockRecord->target = predlock->tag.myTarget->tag;

		RegisterTwoPhaseRecord(TWOPHASE_RM_PREDICATELOCK_ID, 0,
							   &record, sizeof(record));

		predlock = (PREDICATELOCK *)
			SHMQueueNext(&(sxact->predicateLocks),
						 &(predlock->xactLink),
						 offsetof(PREDICATELOCK, xactLink));
	}

	LWLockRelease(SerializablePredicateLockListLock);
}
/*
 * PostPrepare_PredicateLocks
 *		Clean up after successful PREPARE.  Unlike the non-predicate
 *		lock manager, we do not need to transfer locks to a dummy
 *		PGPROC because our SERIALIZABLEXACT will stay around
 *		anyway.  We only need to clean up our local state.
 */
void
PostPrepare_PredicateLocks(TransactionId xid)
{
	if (MySerializableXact == InvalidSerializableXact)
		return;

	Assert(SxactIsPrepared(MySerializableXact));

	MySerializableXact->pid = 0;

	hash_destroy(LocalPredicateLockHash);
	LocalPredicateLockHash = NULL;

	MySerializableXact = InvalidSerializableXact;
	MyXactDidWrite = false;
}
/*
 * PredicateLockTwoPhaseFinish
 *		Release a prepared transaction's predicate locks once it
 *		commits or aborts.
 */
void
PredicateLockTwoPhaseFinish(TransactionId xid, bool isCommit)
{
	SERIALIZABLEXID *sxid;
	SERIALIZABLEXIDTAG sxidtag;

	sxidtag.xid = xid;

	LWLockAcquire(SerializableXactHashLock, LW_SHARED);
	sxid = (SERIALIZABLEXID *)
		hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
	LWLockRelease(SerializableXactHashLock);

	/* xid will not be found if it wasn't a serializable transaction */
	if (sxid == NULL)
		return;

	/* Release its locks */
	MySerializableXact = sxid->myXact;
	MyXactDidWrite = true;		/* conservatively assume that we wrote
								 * something */
	ReleasePredicateLocks(isCommit);
}
/*
 * Re-acquire a predicate lock belonging to a transaction that was prepared.
 */
void
predicatelock_twophase_recover(TransactionId xid, uint16 info,
							   void *recdata, uint32 len)
{
	TwoPhasePredicateRecord *record;

	Assert(len == sizeof(TwoPhasePredicateRecord));

	record = (TwoPhasePredicateRecord *) recdata;

	Assert((record->type == TWOPHASEPREDICATERECORD_XACT) ||
		   (record->type == TWOPHASEPREDICATERECORD_LOCK));

	if (record->type == TWOPHASEPREDICATERECORD_XACT)
	{
		/* Per-transaction record.  Set up a SERIALIZABLEXACT. */
		TwoPhasePredicateXactRecord *xactRecord;
		SERIALIZABLEXACT *sxact;
		SERIALIZABLEXID *sxid;
		SERIALIZABLEXIDTAG sxidtag;
		bool		found;

		xactRecord = (TwoPhasePredicateXactRecord *) &record->data.xactRecord;

		LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
		sxact = CreatePredXact();
		if (!sxact)
			ereport(ERROR,
					(errcode(ERRCODE_OUT_OF_MEMORY),
					 errmsg("out of shared memory")));

		/* vxid for a prepared xact is InvalidBackendId/xid; no pid */
		sxact->vxid.backendId = InvalidBackendId;
		sxact->vxid.localTransactionId = (LocalTransactionId) xid;
		sxact->pid = 0;

		/* a prepared xact hasn't committed yet */
		sxact->prepareSeqNo = RecoverySerCommitSeqNo;
		sxact->commitSeqNo = InvalidSerCommitSeqNo;
		sxact->finishedBefore = InvalidTransactionId;

		sxact->SeqNo.lastCommitBeforeSnapshot = RecoverySerCommitSeqNo;

		/*
		 * Don't need to track this; no transactions running at the time the
		 * recovered xact started are still active, except possibly other
		 * prepared xacts and we don't care whether those are RO_SAFE or not.
		 */
		SHMQueueInit(&(sxact->possibleUnsafeConflicts));

		SHMQueueInit(&(sxact->predicateLocks));
		SHMQueueElemInit(&(sxact->finishedLink));

		sxact->topXid = xid;
		sxact->xmin = xactRecord->xmin;
		sxact->flags = xactRecord->flags;
		Assert(SxactIsPrepared(sxact));
		if (!SxactIsReadOnly(sxact))
		{
			++(PredXact->WritableSxactCount);
			Assert(PredXact->WritableSxactCount <=
				   (MaxBackends + max_prepared_xacts));
		}

		/*
		 * We don't know whether the transaction had any conflicts or not, so
		 * we'll conservatively assume that it had both a conflict in and a
		 * conflict out, and represent that with the summary conflict flags.
		 */
		SHMQueueInit(&(sxact->outConflicts));
		SHMQueueInit(&(sxact->inConflicts));
		sxact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_IN;
		sxact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT;

		/* Register the transaction's xid */
		sxidtag.xid = xid;
		sxid = (SERIALIZABLEXID *) hash_search(SerializableXidHash,
											   &sxidtag,
											   HASH_ENTER, &found);
		Assert(sxid != NULL);
		Assert(!found);
		sxid->myXact = (SERIALIZABLEXACT *) sxact;

		/*
		 * Update global xmin.  Note that this is a special case compared to
		 * registering a normal transaction, because the global xmin might go
		 * backwards.  That's OK, because until recovery is over we're not
		 * going to complete any transactions or create any non-prepared
		 * transactions, so there's no danger of throwing away information
		 * that is still needed.
		 */
		if ((!TransactionIdIsValid(PredXact->SxactGlobalXmin)) ||
			(TransactionIdFollows(PredXact->SxactGlobalXmin, sxact->xmin)))
		{
			PredXact->SxactGlobalXmin = sxact->xmin;
			PredXact->SxactGlobalXminCount = 1;
			OldSerXidSetActiveSerXmin(sxact->xmin);
		}
		else if (TransactionIdEquals(sxact->xmin, PredXact->SxactGlobalXmin))
		{
			Assert(PredXact->SxactGlobalXminCount > 0);
			PredXact->SxactGlobalXminCount++;
		}

		LWLockRelease(SerializableXactHashLock);
	}
	else if (record->type == TWOPHASEPREDICATERECORD_LOCK)
	{
		/* Lock record.  Recreate the PREDICATELOCK. */
		TwoPhasePredicateLockRecord *lockRecord;
		SERIALIZABLEXID *sxid;
		SERIALIZABLEXACT *sxact;
		SERIALIZABLEXIDTAG sxidtag;
		uint32		targettaghash;

		lockRecord = (TwoPhasePredicateLockRecord *) &record->data.lockRecord;
		targettaghash = PredicateLockTargetTagHashCode(&lockRecord->target);

		LWLockAcquire(SerializableXactHashLock, LW_SHARED);
		sxidtag.xid = xid;
		sxid = (SERIALIZABLEXID *)
			hash_search(SerializableXidHash, &sxidtag, HASH_FIND, NULL);
		LWLockRelease(SerializableXactHashLock);

		Assert(sxid != NULL);
		sxact = sxid->myXact;
		Assert(sxact != InvalidSerializableXact);

		CreatePredicateLock(&lockRecord->target, targettaghash, sxact);
	}
}