Extending Relativistic Programming to Multiple Writers

Bibliographic Details
Main Author: Howard, Philip William
Format: Others
Published: PDXScholar 2012
Subjects: Concurrency; Relativistic programming; Data structures; Synchronization; Multicore; Multiprocessors -- Programming; Systems programming (Computer science); Parallel programming (Computer science)
Online Access:https://pdxscholar.library.pdx.edu/open_access_etds/114
https://pdxscholar.library.pdx.edu/cgi/viewcontent.cgi?article=1113&context=open_access_etds
id ndltd-pdx.edu-oai-pdxscholar.library.pdx.edu-open_access_etds-1113
record_format oai_dc
collection NDLTD
format Others
sources NDLTD
topic Concurrency
Relativistic programming
Data structures
Synchronization
Multicore
Multiprocessors -- Programming
Systems programming (Computer science)
Parallel programming (Computer science)
description For software to take advantage of modern multicore processors, it must be safely concurrent and it must scale. Many techniques that allow safe concurrency do so at the expense of scalability. Coarse-grained locking allows multiple threads to access common data safely, but not at the same time. Non-Blocking Synchronization and Transactional Memory techniques optimistically allow concurrency, but only for disjoint accesses and only at a high performance cost. Relativistic programming is a technique that allows low-overhead readers and joint-access parallelism between readers and writers. Most of the work on relativistic programming has assumed a single writer at a time (or, in partitionable data structures, a single writer per partition), and single-writer solutions cannot scale on the write side. This dissertation extends prior work on relativistic programming in the following ways:
1) It analyses the ordering requirements of lock-based and relativistic programs in order to clarify the differences in their correctness and performance characteristics, and to define precisely the behavior required of the relativistic programming primitives.
2) It shows how relativistic programming can be used to construct efficient, scalable algorithms for complex data structures whose update operations involve multiple writes to multiple nodes.
3) It shows how disjoint-access parallelism can be supported for relativistic writers, using Software Transactional Memory, while still allowing low-overhead, linearly scalable relativistic reads.
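
To make the read/write asymmetry described above concrete, below is a minimal sketch of a relativistic (RCU-style) linked-list lookup and removal. It is written against the userspace RCU library (liburcu); the list layout, the function names, and the writer-side mutex are illustrative assumptions, not code from the dissertation. The mutex models the single-writer-at-a-time constraint that the dissertation works to relax.

/*
 * Illustrative sketch only: relativistic (RCU-style) linked-list lookup and
 * removal using the userspace RCU library (liburcu).  Readers run without
 * locks, concurrently with the writer; writers are serialized by a mutex,
 * modeling the single-writer-at-a-time assumption of earlier work.
 *
 * Build (assuming liburcu is installed):  cc list.c -lurcu -lpthread
 * Each thread must call rcu_register_thread() before its first read-side
 * critical section.
 */
#include <pthread.h>
#include <stdlib.h>
#include <urcu.h>   /* rcu_read_lock(), rcu_dereference(), synchronize_rcu(), ... */

struct node {
    int key;
    struct node *next;
};

static struct node *head;   /* head of a singly linked list */
static pthread_mutex_t writer_lock = PTHREAD_MUTEX_INITIALIZER;

/* Reader: no locks and no atomic read-modify-write operations, so the
 * read-side cost is low and reads scale with the number of reading threads. */
int list_contains(int key)
{
    struct node *n;
    int found = 0;

    rcu_read_lock();                            /* begin read-side section */
    for (n = rcu_dereference(head); n != NULL; n = rcu_dereference(n->next)) {
        if (n->key == key) {
            found = 1;
            break;
        }
    }
    rcu_read_unlock();                          /* end read-side section */
    return found;
}

/* Writer: unlink the node, wait for pre-existing readers, then reclaim it.
 * The mutex serializes writers; readers never block on it. */
void list_remove(int key)
{
    struct node **pp;
    struct node *victim = NULL;

    pthread_mutex_lock(&writer_lock);
    for (pp = &head; *pp != NULL; pp = &(*pp)->next) {
        if ((*pp)->key == key) {
            victim = *pp;
            rcu_assign_pointer(*pp, victim->next);  /* publish new list view */
            break;
        }
    }
    pthread_mutex_unlock(&writer_lock);

    if (victim != NULL) {
        synchronize_rcu();                      /* wait for readers that may
                                                   still reference the node */
        free(victim);                           /* now safe to reclaim */
    }
}

The read side uses no locks and no atomic read-modify-write operations, which is what makes relativistic reads low-overhead and linearly scalable; the writer-side lock is the scalability limit the dissertation addresses by supporting multiple concurrent writers, including disjoint-access parallelism via Software Transactional Memory.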
author Howard, Philip William
title Extending Relativistic Programming to Multiple Writers
publisher PDXScholar
publishDate 2012
url https://pdxscholar.library.pdx.edu/open_access_etds/114
https://pdxscholar.library.pdx.edu/cgi/viewcontent.cgi?article=1113&context=open_access_etds