Verification and Enforcement of Safe Schedules for Concurrent Programs

Bibliographic Details
Main Author: Metzler, Patrick
Format: Others
Language: en
Published: 2020
Online Access: https://tuprints.ulb.tu-darmstadt.de/13432/13/Dissertation-Metzler.pdf
Metzler, Patrick <http://tuprints.ulb.tu-darmstadt.de/view/person/Metzler=3APatrick=3A=3A.html> (2020): Verification and Enforcement of Safe Schedules for Concurrent Programs. (Publisher's Version) Darmstadt, Technische Universität, DOI: 10.25534/tuprints-00013432 <https://doi.org/10.25534/tuprints-00013432>, [Ph.D. Thesis]
Description
Summary: Automated software verification can prove the correctness of a program with respect to a given specification and can be valuable support in the difficult task of ensuring the quality of large software systems. However, the automated verification of concurrent software is particularly challenging due to the vast complexity caused by non-deterministic scheduling. This thesis is concerned with techniques that reduce the complexity of concurrent programs in order to ease the verification task. We approach this problem from two orthogonal directions: state space reduction and reduction of non-determinism in executions of concurrent programs.

Following the former direction, we present an algorithm for dynamic partial-order reduction, a state space reduction technique that avoids the verification of redundant executions. Our algorithm, EPOR, eagerly creates schedules for program fragments. In comparison to other dynamic partial-order reduction algorithms, it avoids redundant race and dependency checks. Our experiments show that EPOR runs considerably faster than a state-of-the-art algorithm, which in several cases makes it possible to analyze programs with a higher number of threads within a given timeout.

In the latter direction, we present a formal framework for using incomplete verification results to extract safe schedulers. As incomplete verification results do not need to prove the correctness of all possible executions of a program, their complexity can be significantly lower than that of complete verification results; hence, they can be obtained faster. We constrain the scheduling of programs but not their inputs in order to preserve their full functionality. In our framework, executions under the scheduling constraints of an incomplete verification result are safe, deadlock-free, and fair. We instantiate our framework with the Impact model checking algorithm and find in our evaluation that it can be used to model check programs that are intractable for monolithic model checkers, to synthesize synchronization via assume statements, and to guarantee fair executions.

In order to safely execute a program within the set of executions covered by an incomplete verification result, scheduling needs to be constrained. We discuss how to extract and encode schedules from incomplete verification results, for both finite and infinite executions, and how to enforce scheduling constraints efficiently, both by reducing the time needed to look up permission to execute the next event and by executing independent events concurrently (applying partial-order reduction). A drawback of enforcing scheduling constraints is a potential overhead in execution time; however, in several cases, constrained executions turned out to be even faster than unconstrained executions. Our experimental results show that iteratively relaxing a schedule can significantly reduce this overhead. Hence, the incurred execution time overhead can be adjusted to find a sweet spot with respect to the effort for creating schedules (i.e., the duration of verification). Interestingly, we found cases in which a much earlier reduction of the execution time overhead is obtained by choosing favorable scheduling constraints, which suggests that execution time performance does not depend simply on the number of scheduling constraints but to a large extent also on their structure.
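
The summary refers to race and dependency checks in dynamic partial-order reduction. As background only, the following Java sketch shows the standard independence criterion such checks rely on: two events of different threads conflict, and must be explored in both orders, only if they access the same location and at least one access is a write. This is a generic textbook illustration, not the EPOR algorithm or its data structures; the Event record is a hypothetical simplification of a trace entry.

```java
/**
 * Sketch of the dependency check underlying partial-order reduction:
 * two events may be reordered (are independent) unless they conflict.
 * Event is a hypothetical, simplified stand-in for a real trace entry.
 */
record Event(int thread, String location, boolean isWrite) {}

final class Dependency {
    /** True if the two events conflict and must be ordered both ways. */
    static boolean dependent(Event a, Event b) {
        return a.thread() != b.thread()
                && a.location().equals(b.location())
                && (a.isWrite() || b.isWrite());
    }

    public static void main(String[] args) {
        Event w0 = new Event(0, "x", true);   // thread 0 writes x
        Event r1 = new Event(1, "x", false);  // thread 1 reads x
        Event r2 = new Event(1, "y", false);  // thread 1 reads y

        System.out.println(dependent(w0, r1)); // true: write/read on x conflict
        System.out.println(dependent(w0, r2)); // false: independent events
    }
}
```

Independent events, such as the write to x and the read of y above, only need to be explored in a single order, which is what allows partial-order reduction to prune redundant interleavings.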
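
The summary also describes enforcing scheduling constraints by looking up permission before executing the next event. The following Java sketch illustrates that idea in its simplest form, assuming a hypothetical representation of a schedule as a fixed sequence of thread ids; it is not the enforcement mechanism implemented in the thesis, which additionally relaxes schedules and lets independent events run concurrently.

```java
import java.util.List;

/** Minimal sketch: each thread looks up permission before its next shared event. */
final class ScheduleEnforcer {
    private final List<Integer> schedule; // thread id permitted at each step (assumed encoding)
    private int step = 0;                 // next position in the schedule

    ScheduleEnforcer(List<Integer> schedule) { this.schedule = schedule; }

    /** Block until the schedule permits the next event of thread `id`. */
    synchronized void awaitTurn(int id) throws InterruptedException {
        while (step < schedule.size() && schedule.get(step) != id) {
            wait();
        }
    }

    /** Record that the permitted event has executed and wake waiting threads. */
    synchronized void eventDone() {
        step++;
        notifyAll();
    }
}

public class EnforcedExecution {
    static int counter = 0; // shared state whose accesses are the scheduled events

    public static void main(String[] args) throws InterruptedException {
        // Hypothetical schedule: the two threads strictly alternate their increments.
        ScheduleEnforcer enforcer = new ScheduleEnforcer(List.of(0, 1, 0, 1));

        Thread[] threads = new Thread[2];
        for (int id = 0; id < 2; id++) {
            final int myId = id;
            threads[id] = new Thread(() -> {
                try {
                    for (int i = 0; i < 2; i++) {
                        enforcer.awaitTurn(myId); // permission lookup
                        counter++;                // the scheduled event
                        enforcer.eventDone();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            threads[id].start();
        }
        for (Thread t : threads) t.join();
        System.out.println("counter = " + counter); // always 4 under this schedule
    }
}
```

Under the alternating schedule the increments are serialized, so the program always prints counter = 4; relaxing the schedule would correspond to dropping or coarsening entries so that independent events no longer wait for each other, which is where the execution time overhead discussed above can be reduced.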