Summary:
Storage systems used for supercomputers and high performance computing (HPC) centers exhibit load imbalance and resource contention. This is mainly due to two factors: the bursty nature of the I/O of scientific applications, and the complex, distributed I/O path that lacks centralized arbitration and control. For example, the Lustre parallel storage system, which forms the backend storage for many HPC centers, comprises numerous components connected in custom network topologies and serves the varying demands of a large number of users and applications. Consequently, some storage servers can become more loaded than others, creating bottlenecks and reducing overall application I/O performance. Existing solutions focus on per-application load balancing and are thus not effective, because they lack a global view of the system.
In this thesis, we adopt a data-driven, quantitative approach to load balance the I/O servers at extreme scale. To this end, we design a global mapper on the Lustre Metadata Server (MDS), which gathers runtime statistics from key storage components on the I/O path and applies Markov chain modeling and a dynamic maximum flow algorithm to decide where data should be placed in a load-balanced fashion. Evaluation using a realistic system simulator shows that our approach achieves better load balancing, which in turn helps deliver higher end-to-end application performance.
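To make the placement step concrete, the sketch below casts load-aware data placement as a maximum flow problem, in the spirit of the approach summarized above. It is an illustrative approximation only, not the thesis's actual implementation: the names place_chunks, chunk_sizes, and ost_headroom are hypothetical, and in practice the per-server headroom would be estimated from the gathered runtime statistics and Markov chain load predictions rather than supplied directly.

# Illustrative sketch only: load-aware placement cast as a maximum-flow
# problem. Helper names (place_chunks, chunk_sizes, ost_headroom) are
# hypothetical; per-OST headroom is assumed to come from runtime statistics
# and Markov chain load predictions rather than being given directly.
import networkx as nx

def place_chunks(chunk_sizes, ost_headroom):
    """Assign file chunks to OSTs without exceeding any OST's headroom.

    chunk_sizes:  {chunk_id: size to write (e.g., MB)}
    ost_headroom: {ost_id: load the OST can still absorb}
    Returns {chunk_id: [(ost_id, amount), ...]}.
    """
    G = nx.DiGraph()
    for cid, size in chunk_sizes.items():
        # The source supplies each chunk's demand; any OST is a candidate target.
        G.add_edge("src", ("chunk", cid), capacity=size)
        for oid in ost_headroom:
            G.add_edge(("chunk", cid), ("ost", oid), capacity=size)
    for oid, headroom in ost_headroom.items():
        # Edges into the sink enforce the per-server load limit.
        G.add_edge(("ost", oid), "sink", capacity=headroom)

    _, flow = nx.maximum_flow(G, "src", "sink")

    # Read the placement off the flow on chunk -> OST edges.
    return {
        cid: [(oid, amt) for (_, oid), amt in flow[("chunk", cid)].items() if amt > 0]
        for cid in chunk_sizes
    }

# Example: three chunks spread over two OSTs whose headroom reflects their
# unequal current load.
print(place_chunks({"c0": 100, "c1": 100, "c2": 50}, {"ost0": 120, "ost1": 200}))

A dynamic variant of this idea would re-run or incrementally update the flow computation as the collected load statistics change.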