
Software transactional memory

From Wikipedia, the free encyclopedia

In computer science, software transactional memory (STM) is a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing. It is an alternative to lock-based synchronization. STM is a strategy implemented in software, rather than as a hardware component. A transaction in this context occurs when a piece of code executes a series of reads and writes to shared memory. These reads and writes logically occur at a single instant in time; intermediate states are not visible to other (successful) transactions. The idea of providing hardware support for transactions originated in a 1986 paper by Tom Knight.[1] The idea was popularized by Maurice Herlihy and J. Eliot B. Moss.[2] In 1995, Nir Shavit and Dan Touitou extended this idea to software-only transactional memory (STM).[3] Since 2005, STM has been the focus of intense research[4] and support for practical implementations is growing.

Performance

Unlike the locking techniques used in most modern multithreaded applications, STM is often very optimistic: a thread completes modifications to shared memory without regard for what other threads might be doing, recording every read and write that it is performing in a log. Instead of placing the onus on the writer to make sure it does not adversely affect other operations in progress, it is placed on the reader, who after completing an entire transaction verifies that other threads have not concurrently made changes to memory that it accessed in the past. This final operation, in which the changes of a transaction are validated and, if validation is successful, made permanent, is called a commit. A transaction may also abort at any time, causing all of its prior changes to be rolled back or undone. If a transaction cannot be committed due to conflicting changes, it is typically aborted and re-executed from the beginning until it succeeds.
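The log-validate-commit cycle described above can be sketched as follows. This is a minimal, single-threaded-demo illustration, not any real library's API; the names `TVar`, `Transaction`, and `atomically` are our own (the latter two borrowed loosely from Haskell's terminology), and a real implementation would need locking around the commit step.

```python
class TVar:
    """A transactional variable: a value plus a version number."""
    def __init__(self, value):
        self.value = value
        self.version = 0

class Conflict(Exception):
    """Raised when commit-time validation fails."""
    pass

class Transaction:
    def __init__(self):
        self.read_log = {}    # TVar -> version observed at first read
        self.write_log = {}   # TVar -> pending (buffered) new value

    def read(self, tvar):
        if tvar in self.write_log:            # read-your-own-writes
            return self.write_log[tvar]
        self.read_log.setdefault(tvar, tvar.version)
        return tvar.value

    def write(self, tvar, value):
        self.write_log[tvar] = value          # buffered, not yet visible

    def commit(self):
        # Validate: every TVar read must be unchanged since we read it.
        for tvar, seen_version in self.read_log.items():
            if tvar.version != seen_version:
                raise Conflict
        # Validation succeeded: publish the buffered writes.
        # (A real implementation would lock the write set here.)
        for tvar, value in self.write_log.items():
            tvar.value = value
            tvar.version += 1

def atomically(action):
    """Run `action`, retrying from scratch until it commits."""
    while True:
        tx = Transaction()
        result = action(tx)
        try:
            tx.commit()
            return result
        except Conflict:
            continue   # abort: discard the log and re-execute

# Example: transfer between two accounts.
a, b = TVar(100), TVar(0)
def transfer(tx):
    tx.write(a, tx.read(a) - 30)
    tx.write(b, tx.read(b) + 30)
atomically(transfer)
```

Note that intermediate states never escape: other transactions see either both balances before the transfer or both after, never the buffered writes in between.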

The benefit of this optimistic approach is increased concurrency: no thread needs to wait for access to a resource, and different threads can safely and simultaneously modify disjoint parts of a data structure that would normally be protected under the same lock.

However, in practice, STM systems also suffer a performance hit compared to fine-grained lock-based systems on small numbers of processors (1 to 4 depending on the application). This is due primarily to the overhead associated with maintaining the log and the time spent committing transactions. Even in this case performance is typically no worse than twice as slow.[5] Advocates of STM believe this penalty is justified by the conceptual benefits of STM.

Theoretically, the worst case space and time complexity of n concurrent transactions is O(n). Actual needs depend on implementation details (one can make transactions fail early enough to avoid overhead), but there will also be cases, albeit rare, where lock-based algorithms have better time complexity than software transactional memory.

Conceptual advantages and disadvantages

In addition to its performance benefits, STM greatly simplifies conceptual understanding of multithreaded programs and helps make programs more maintainable by working in harmony with existing high-level abstractions such as objects and modules. Lock-based programming has a number of well-known problems that frequently arise in practice:

  • Locking requires thinking about overlapping operations and partial operations in distantly separated and seemingly unrelated sections of code, a task which is very difficult and error-prone.
  • Locking requires programmers to adopt a locking policy to prevent deadlock, livelock, and other failures to make progress. Such policies are often informally enforced and fallible, and when these issues arise they are insidiously difficult to reproduce and debug.
  • Locking can lead to priority inversion, a phenomenon where a high-priority thread is forced to wait for a low-priority thread holding exclusive access to a resource that it needs.

In contrast, the concept of a memory transaction is much simpler, because each transaction can be viewed in isolation as a single-threaded computation. Deadlock and livelock are either prevented entirely or handled by an external transaction manager; the programmer need hardly worry about them. Priority inversion can still be an issue, but high-priority transactions can abort conflicting lower-priority transactions that have not already committed.

However, the need to retry and abort transactions limits their behavior. Any operation performed within a transaction must be idempotent, since a transaction might be retried. Additionally, if an operation has side effects that must be undone if the transaction is aborted, then a corresponding rollback operation must be included. This makes many input/output (I/O) operations difficult or impossible to perform within transactions. Such limits are typically overcome in practice by creating buffers that queue up the irreversible operations and perform them after the transaction succeeds. In Haskell, this limit is enforced at compile time by the data type system.
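The buffering idiom mentioned above can be sketched like this. The `on_commit` queue and all names here are our own invention for illustration, not a real STM API; the point is only that irreversible effects are deferred until after validation succeeds, so an abort simply discards them.

```python
# Record of effects actually performed, for demonstration.
performed_effects = []

class Transaction:
    def __init__(self):
        self.pending_effects = []   # irreversible actions, deferred

    def on_commit(self, action):
        """Queue an irreversible action instead of running it now."""
        self.pending_effects.append(action)

    def abort(self):
        self.pending_effects.clear()   # nothing to undo: nothing ran

    def commit(self):
        # ... validate reads and publish buffered writes here ...
        for action in self.pending_effects:   # only now perform the I/O
            action()

tx = Transaction()
tx.on_commit(lambda: performed_effects.append("send notification"))
# At this point nothing has happened; an abort would drop the queue.
tx.commit()
```

After `commit()`, `performed_effects` contains the queued action's result; had the transaction aborted instead, the notification would never have been sent and no rollback would be needed.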

Composable operations

In 2005, Tim Harris, Simon Marlow, Simon Peyton Jones, and Maurice Herlihy described an STM system built on Concurrent Haskell that enables arbitrary atomic operations to be composed into larger atomic operations, a useful concept impossible with lock-based programming. To quote the authors:

Perhaps the most fundamental objection [...] is that lock-based programs do not compose: correct fragments may fail when combined. For example, consider a hash table with thread-safe insert and delete operations. Now suppose that we want to delete one item A from table t1, and insert it into table t2; but the intermediate state (in which neither table contains the item) must not be visible to other threads. Unless the implementor of the hash table anticipates this need, there is simply no way to satisfy this requirement. [...] In short, operations that are individually correct (insert, delete) cannot be composed into larger correct operations.
—Tim Harris et al., "Composable Memory Transactions", Section 2: Background, p. 2[6]

With STM, this problem is simple to solve: simply wrapping two operations in a transaction makes the combined operation atomic. The only sticking point is that it is unclear to the caller, who is unaware of the implementation details of the component methods, when it should attempt to re-execute the transaction if it fails. In response, the authors proposed a retry command which uses the transaction log generated by the failed transaction to determine which memory cells it read, and automatically retries the transaction when one of these cells is modified, based on the logic that the transaction will not behave differently until at least one such value is changed.

The authors also proposed a mechanism for composition of alternatives, the orElse function. It runs one transaction and, if that transaction does a retry, runs a second one. If both retry, the combined transaction blocks until a value read by either attempt is modified, and then both are tried again. This facility, comparable to features such as the Portable Operating System Interface (POSIX) networking select() call, allows the caller to wait on any one of a number of events simultaneously. It also simplifies programming interfaces, for example by providing a simple mechanism to convert between blocking and nonblocking operations.

This scheme has been implemented in the Glasgow Haskell Compiler.

Proposed language support

The conceptual simplicity of STMs enables them to be exposed to the programmer using relatively simple language syntax. Tim Harris and Keir Fraser's "Language Support for Lightweight Transactions" proposed the idea of using the classical conditional critical region (CCR) to represent transactions. In its simplest form, this is just an "atomic block", a block of code which logically occurs at a single instant:

// Insert a node into a doubly linked list atomically
atomic {
    newNode->prev = node;
    newNode->next = node->next;
    node->next->prev = newNode;
    node->next = newNode;
}

When the end of the block is reached, the transaction is committed if possible, or else aborted and retried. (This is simply a conceptual example, not correct code. For example, it behaves incorrectly if node is deleted from the list during the transaction.)

CCRs also permit a guard condition, which enables a transaction to wait until it has work to do:

atomic (queueSize > 0) {
    remove item from queue and use it
}

If the condition is not satisfied, the transaction manager will wait until another transaction has made a commit that affects the condition before retrying. This loose coupling between producers and consumers enhances modularity compared to explicit signaling between threads. "Composable Memory Transactions"[6] took this a step farther with its retry command (discussed above), which can, at any time, abort the transaction and wait until some value previously read by the transaction is modified before retrying. For example:

atomic {
    if (queueSize > 0) {
        remove item from queue and use it
    } else {
        retry
    }
}

This ability to retry dynamically late in the transaction simplifies the programming model and opens up new possibilities.

One issue is how exceptions behave when they propagate outside of transactions. In "Composable Memory Transactions",[6] the authors decided that this should abort the transaction, since exceptions normally indicate unexpected errors in Concurrent Haskell, but that the exception could retain information allocated by and read during the transaction for diagnostic purposes. They stress that other design decisions may be reasonable in other settings.

Transactional locking

STM can be implemented as a lock-free algorithm or it can use locking.[7] There are two types of locking schemes: in encounter-time locking (Ennals, Saha, and Harris), memory writes are done by first temporarily acquiring a lock for a given location, writing the value directly, and logging it in the undo log. Commit-time locking locks memory locations only during the commit phase.

A commit-time scheme named "Transactional Locking II", implemented by Dice, Shalev, and Shavit, uses a global version clock. Every transaction starts by reading the current value of the clock and storing it as the read-version. Then, on every read or write, the version of the particular memory location is compared to the read-version; if it is greater, the transaction is aborted. This guarantees that the code is executed on a consistent snapshot of memory. During commit, all write locations are locked, and the version numbers of all read and write locations are re-checked. Finally, the global version clock is incremented, and the new write values from the log are written back to memory and stamped with the new clock version.
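The steps above (sample the clock, check versions on every access, lock the write set, re-validate, stamp with a new clock value) can be sketched roughly as follows. This is an illustrative simplification with invented names, not the published algorithm, which includes further details (such as detecting locations locked by other transactions during reads) omitted here.

```python
import threading

global_clock = 0              # the global version clock
clock_lock = threading.Lock()

class Location:
    """A shared memory location with a version stamp and a write lock."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.lock = threading.Lock()

class Aborted(Exception):
    pass

class TL2Transaction:
    def __init__(self):
        self.read_version = global_clock   # sample the clock at start
        self.read_set = set()
        self.write_set = {}                # Location -> buffered new value

    def read(self, loc):
        if loc in self.write_set:
            return self.write_set[loc]
        if loc.version > self.read_version:   # written since we began
            raise Aborted
        self.read_set.add(loc)
        return loc.value

    def write(self, loc, value):
        if loc.version > self.read_version:
            raise Aborted
        self.write_set[loc] = value           # buffered until commit

    def commit(self):
        global global_clock
        locked = sorted(self.write_set, key=id)  # fixed order avoids deadlock
        for loc in locked:
            loc.lock.acquire()
        try:
            # Re-validate the read set while holding the write locks.
            for loc in self.read_set:
                if loc.version > self.read_version:
                    raise Aborted
            with clock_lock:
                global_clock += 1
                write_version = global_clock
            # Write back and stamp with the new clock version.
            for loc, value in self.write_set.items():
                loc.value = value
                loc.version = write_version
        finally:
            for loc in locked:
                loc.lock.release()

# Example: atomically increment a location.
x = Location(0)
tx = TL2Transaction()
tx.write(x, tx.read(x) + 1)
tx.commit()
```

An aborted transaction would be discarded and re-run with a fresh read-version, at which point the version checks succeed against the now-current snapshot.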

Implementation issues

One problem with implementing software transactional memory with optimistic reading is that it is possible for an incomplete transaction to read inconsistent state (that is, to read a mixture of old and new values written by another transaction). Such a transaction is doomed to abort if it ever tries to commit, so this does not violate the consistency condition enforced by the transactional system, but it is possible for this "temporary" inconsistent state to cause a transaction to trigger a fatal exceptional condition such as a segmentation fault or even enter an endless loop, as in the following contrived example from Figure 4 of "Language Support for Lightweight Transactions":

// Transaction A
atomic {
    if (x != y)
        while (true) { }
}

// Transaction B
atomic {
    x++;
    y++;
}

Provided x = y initially, neither transaction above alters this invariant, but it is possible that transaction A will read x after transaction B updates it but read y before transaction B updates it, causing it to enter an infinite loop. The usual strategy for dealing with this is to intercept any fatal exceptions and abort any transaction that is not valid.

One way to deal with these issues is to detect transactions that execute illegal operations or fail to terminate and abort them cleanly; another approach is the transactional locking scheme.

Practical implementations

References

  1. ^ Tom Knight. An architecture for mostly functional languages. Proceedings of the 1986 ACM conference on Lisp and functional programming.
  2. ^ Maurice Herlihy and J. Eliot B. Moss. Transactional memory: architectural support for lock-free data structures. Proceedings of the 20th annual international symposium on Computer architecture (ISCA '93). Volume 21, Issue 2, May 1993.
  3. ^ Nir Shavit and Dan Touitou. Software transactional memory. Distributed Computing. Volume 10, Number 2. February 1997.
  4. ^ ""software transactional memory" - Google Scholar". Retrieved 10 November 2013.
  5. ^ Peyton Jones, Simon. "Programming in the Age of Concurrency: Software Transactional Memory". Microsoft Developers Network: Channel 9. Retrieved 2007-06-09.
  6. ^ a b c Harris, T.; Marlow, S.; Peyton Jones, S.; Herlihy, M. (2005). "Composable memory transactions" (PDF). Proceedings of the tenth ACM SIGPLAN symposium on Principles and practice of parallel programming - PPoPP '05. p. 48. doi:10.1145/1065944.1065952. ISBN 1595930809. S2CID 53245159.
  7. ^ Concurrency control § Methods.
  8. ^ "Glasgow Haskell Compiler (GHC) Commentary: Software Transactional Memory (STM)". Haskell.org: GitLab.
  9. ^ "Software Transactional Memory in C++: Pure Functional Approach (tutorial)". GitHub.
  10. ^ "Refs and Transactions". Clojure.org.
  11. ^ "Poor man's STM in Node.js". Clojure.org.
  12. ^ "talhof8/kashmir". GitHub.
  13. ^ "Rust Package Registry". Crates.io.
  14. ^ "Introduction to Software Transactional Memory ZIO". Zio.dev.
  15. ^ "Kcas — STM based on lock-free MCAS". GitHub.