APOP: An automatic pattern- and object-based code parallelization framework for clusters
In today's cluster computing environment, MPI (Message Passing Interface) is the dominant programming paradigm due to its stability. However, MPI programmers are challenged by the nontrivial effort of creating, dispatching, and synchronizing parallel tasks. Consequently, extensive studies have been done on automatic parallelization. Most current parallel compilers use a control-centric approach that faces challenges such as non- or sub-optimal data communication among processors and an inability to fully exploit the processing power of the increasingly popular Multi-Core Multi-Processor (MCMP) clusters. In this dissertation, we tackle the above problems by developing a novel framework called APOP (Automatic Pattern- and Object-based Parallelization). APOP performs parallelization using a data-centric approach and represents parallel tasks as objects. Synchronization among parallel tasks is enforced according to their inter-dependency, leading to a data-driven execution model. The parallel tasks are executed as threads on top of a runtime environment called ODDRE (Object-based Data-Driven Runtime Environment), exploiting thread-level parallelism. In APOP, past parallelization experiences are accumulated as templates and are used to guide future code parallelization through pattern matching. To evaluate the feasibility and advantages of APOP, we have designed and implemented a proof-of-concept parallelizer called PJava and compared the performance of its generated code to that of handcrafted JOPI (a Java dialect of MPI) code and MPI-C code for LU factorization and matrix multiplication. The experimental results show that the PJava-generated code achieves better performance than the handcrafted JOPI code and tracks the performance pattern of the MPI-C code. In this research, we have also conducted an extensive study of parallelism granularity, in which we used a curve-fitting approach to choose the appropriate task size.
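The data-driven execution model described above — parallel tasks represented as objects, each blocked until the tasks it depends on have produced their data, and all executed as threads — can be sketched in plain Java. This is an illustrative sketch only, not the actual APOP/ODDRE API: the class names, the use of `CountDownLatch`, and the diamond-shaped task graph are assumptions made for demonstration.

```java
import java.util.*;
import java.util.concurrent.*;

public class DataDrivenDemo {
    // A parallel task as an object: it runs only after all of its
    // dependencies have signaled completion (data-driven execution).
    static class Task implements Runnable {
        final String name;
        final List<Task> deps;
        final List<String> log;                       // records completion order
        final CountDownLatch done = new CountDownLatch(1);

        Task(String name, List<String> log, Task... deps) {
            this.name = name;
            this.log = log;
            this.deps = Arrays.asList(deps);
        }

        public void run() {
            try {
                for (Task d : deps) d.done.await();   // wait for every producer
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            log.add(name);                            // stand-in for the task's computation
            done.countDown();                         // unblock this task's consumers
        }
    }

    // Runs a small diamond-shaped task graph, A -> {B, C} -> D,
    // and returns the task names in completion order.
    static List<String> execute() throws InterruptedException {
        List<String> log = Collections.synchronizedList(new ArrayList<>());
        Task a = new Task("A", log);
        Task b = new Task("B", log, a);
        Task c = new Task("C", log, a);
        Task d = new Task("D", log, b, c);
        ExecutorService pool = Executors.newFixedThreadPool(4); // thread-level parallelism
        for (Task t : List.of(d, c, b, a)) pool.submit(t);      // submission order is irrelevant
        d.done.await();                                          // graph finishes when D finishes
        pool.shutdown();
        return log;
    }

    public static void main(String[] args) throws InterruptedException {
        // A always completes first and D last; B and C may finish in either order.
        System.out.println(execute());
    }
}
```

Note that the scheduler never inspects control flow: each task fires as soon as its input data is ready, which is the essence of the data-driven model the dissertation builds on.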
Liu, Xuli, "APOP: An automatic pattern- and object-based code parallelization framework for clusters" (2007). ETD collection for University of Nebraska - Lincoln. AAI3252445.