Distributed process cooperation in time warp

Bibliographic Details
Main Author: Choe, Myongsu, 1959-
Other Authors: Tropper, Carl (advisor)
Format: Others
Language: en
Published: McGill University 1999
Subjects:
Online Access: http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=36565
Description
Summary: Optimistic simulation (or Time Warp) is one of the two major techniques employed in parallel (distributed) discrete event simulation. In contrast to the conservative approach, Time Warp offers great potential to speed up a simulation because it does not rely on blocking to satisfy the causality constraint. By using rollback to correct causality errors and to avoid deadlock, it provides a great deal of parallelism and modeling power. However, it has adverse effects such as memory over-consumption and futile event processing resulting from uncontrolled rollbacks.

In this thesis, we propose approaches to the inherent problem of load imbalance in optimistic simulations, especially on a distributed-memory MIMD system, thereby seeking stability and simulation efficiency at the same time. As promising candidates for achieving these goals, we focus on an efficient GVT computation and on balancing loads between processors, either by regulating bursty outgoing message flows or by migrating load between processors.

First, we suggest a variant of Mattern's GVT algorithm which uses a scalar counter and partly distributed control to reduce the number of control messages required for GVT computation. We compare it with the algorithms of Bellenot, Samadi, and Mattern on simulations of large switching networks: the shuffle ring network, the Manhattan street network, and a PCS network.

We then propose three different load balancing schemes: a flow control algorithm based on stochastic learning automata, a dynamic load balancing scheme, and an integration of the two. The purpose is to balance loads (measured by a space-time product) between processors. The three control algorithms and an uncontrolled simulation are compared on simulations of shuffle ring networks and a PCS network in terms of several measures: simulation elapsed time, number of rollbacks and anti-messages, rollback distances, goodput, and standard deviation of space-time products.
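
The abstract's reference to a token-based GVT computation that carries a single scalar counter can be illustrated with a small sketch. The Python code below is not the thesis's algorithm (whose details are not given in this record); it is a minimal, hypothetical illustration of a Mattern-style control token that accumulates a running minimum over local virtual times and send timestamps while one scalar counter tracks application messages still in transit. All names (Process, Token, estimate_gvt) are invented for this sketch, and the processes are treated as a static snapshot rather than a running simulation.

    # Minimal sketch of a Mattern-style token-ring GVT estimate (hypothetical,
    # not the thesis's algorithm). A control token visits every logical process,
    # taking the minimum of their local virtual times and pending send
    # timestamps, and summing (sent - received) into a single scalar counter.
    from dataclasses import dataclass

    @dataclass
    class Process:
        lvt: float                          # local virtual time of this logical process
        sent: int = 0                       # application messages sent so far
        received: int = 0                   # application messages received so far
        min_sent_ts: float = float("inf")   # smallest timestamp on messages possibly in transit

    @dataclass
    class Token:
        min_time: float = float("inf")      # running minimum over LVTs and send timestamps
        in_transit: int = 0                 # scalar counter: sum of (sent - received)

    def token_round(processes, token):
        # One pass of the control token around the ring of processes.
        for p in processes:
            token.min_time = min(token.min_time, p.lvt, p.min_sent_ts)
            token.in_transit += p.sent - p.received
        return token

    def estimate_gvt(processes, max_rounds=10):
        # Circulate the token; once no messages are unaccounted for, the
        # accumulated minimum is a conservative GVT estimate. In a real system
        # the process states change between rounds; here they are fixed.
        for _ in range(max_rounds):
            token = token_round(processes, Token())
            if token.in_transit == 0:
                return token.min_time
        return None                          # messages still outstanding; no estimate

    if __name__ == "__main__":
        ring = [
            Process(lvt=120.0, sent=5, received=4, min_sent_ts=115.0),
            Process(lvt=98.0, sent=3, received=4, min_sent_ts=101.0),
            Process(lvt=143.0, sent=2, received=2, min_sent_ts=150.0),
        ]
        print("GVT estimate:", estimate_gvt(ring))   # prints 98.0

In this toy example the net in-transit count over the ring is zero, so a single token round suffices and the estimate is simply the smallest local virtual time; the point of using a scalar counter, as the abstract suggests, is that transient messages can be accounted for without per-process vectors or extra control messages.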