Summary: This thesis explores the possibility of applying the existing Multi-Area Thévenin Equivalent (MATE) algorithm to the power flow problem. Various theoretical considerations and difficulties in handling link connections in power flow are discussed. The current-equation power flow program is examined in the hope of aiding link decoupling by exploiting the current equation's inherently symmetric links. However, implementation and testing of the current-equation program produced results contrary to recently published material on current-equation programs.
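For context, one common current-equation (current-injection) formulation is sketched below; the notation is an illustrative assumption, not copied from the thesis. Writing the nodal current mismatch at bus k as

\[
\Delta I_k = \left(\frac{S_k^{\mathrm{sp}}}{V_k}\right)^{\!*} - \sum_{m} Y_{km} V_m,
\qquad
\frac{\partial \Delta I_k}{\partial V_m} = -Y_{km} \quad (m \neq k),
\]

the off-diagonal Jacobian blocks reduce to constant admittance terms, so a link between two subsystems enters both of them symmetrically through the same Y_{km}; this is the structural symmetry that link decoupling hopes to exploit.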
In an attempt to make MATE viable for power flow, one of MATE's bottlenecks, the link matrix, was examined. It was found that the problem could be alleviated by a multi-level approach, which allows the link computation to be distributed across subsystems and levels. An existing multi-level MATE algorithm had already been proposed, but it was implemented for only two levels. This thesis proposes a massively parallel algorithm for an arbitrary number of levels; distributing the link matrix permits mass parallelization of the system matrix into very small subsystems. A FLOP analysis of the proposed multi-level MATE algorithm reveals that the majority of the computation is spent on independent small matrix multiplications.
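In outline, for a two-subsystem partition (standard MATE-style notation, assumed here for illustration), the system takes a block-bordered form whose Schur complement is the link matrix:

\[
\begin{bmatrix}
A_1 & 0 & p_1 \\
0 & A_2 & p_2 \\
p_1^{T} & p_2^{T} & -z_0
\end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \\ i_l \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ 0 \end{bmatrix}
\;\Longrightarrow\;
\Bigl( z_0 + \sum_{k} p_k^{T} A_k^{-1} p_k \Bigr)\, i_l
= \sum_{k} p_k^{T} A_k^{-1} b_k .
\]

Each contribution p_k^{T} A_k^{-1} p_k can be formed independently per subsystem, which is what allows a multi-level scheme to spread the link computation across subsystems and levels rather than concentrating it in one dense solve.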
Upon inspection of the strengths of the proposed multi-level MATE algorithm, it appears that the algorithm would benefit from a parallel computing platform such as the modern GPU. Today's GPUs contain hundreds to thousands of scalar processors, providing roughly an order of magnitude more computational power than multi-core CPUs; this has garnered the GPU much attention in many scientific disciplines. To test the feasibility of the MATE algorithm on the GPU, the algorithm's most common operation, small matrix multiplication, was implemented. The test case was arranged to simulate the conditions of a 15,000-node system being factorized. The routine is intended to serve as the algorithm's BLAS, since existing GPU linear algebra libraries are not designed to handle very small matrices. The routine was found to achieve a significant fraction of the GPU's peak FLOPS.
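As a minimal sketch of the kind of routine described, the CUDA kernel below multiplies a batch of small matrices with one thread block per product; the matrix size N, the one-thread-per-output-element mapping, and all identifiers are illustrative assumptions, not the thesis's actual implementation.

#include <cstdio>
#include <cuda_runtime.h>

// Illustrative block size; real subsystem sizes come from the partition.
#define N 8

// Each thread block computes one product C = A * B from the batch;
// both operands fit in shared memory and each thread produces one
// element of C.
__global__ void batchedSmallGemm(const float* A, const float* B,
                                 float* C, int batch) {
    int m = blockIdx.x;            // which matrix pair this block handles
    if (m >= batch) return;
    int row = threadIdx.y;
    int col = threadIdx.x;

    __shared__ float As[N][N];
    __shared__ float Bs[N][N];

    // Stage both operands in shared memory, one element per thread.
    As[row][col] = A[m * N * N + row * N + col];
    Bs[row][col] = B[m * N * N + row * N + col];
    __syncthreads();

    // Each thread accumulates one element of the product.
    float acc = 0.0f;
    for (int k = 0; k < N; ++k)
        acc += As[row][k] * Bs[k][col];
    C[m * N * N + row * N + col] = acc;
}

int main() {
    const int batch = 1024;
    size_t bytes = (size_t)batch * N * N * sizeof(float);
    float *A, *B, *C;
    cudaMallocManaged(&A, bytes);
    cudaMallocManaged(&B, bytes);
    cudaMallocManaged(&C, bytes);
    for (int i = 0; i < batch * N * N; ++i) { A[i] = 1.0f; B[i] = 0.5f; }

    dim3 threads(N, N);            // one thread per element of C
    batchedSmallGemm<<<batch, threads>>>(A, B, C, batch);
    cudaDeviceSynchronize();
    printf("C[0] = %f (expect %f)\n", C[0], 0.5f * N);

    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}

Mapping one whole product to a thread block keeps every operand resident in shared memory and avoids the tiling and launch overheads that general-purpose GPU GEMM libraries incur on matrices this small, which is the gap such a custom routine is meant to fill.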