Associate Professor of Electrical and Computer Engineering
Ph.D. - 1992, Georgia Tech
Computer Science
M.S. - Georgia Tech
Information and Computer Science
B.S. - Georgia Tech
Information and Computer Science
Contact Information
Office: 300-D Riggs Hall
Office Phone: 864.656.1224
Fax: 864.656.5910
Email: walt@clemson.edu
Research Interests
Parallel I/O systems
In recent years, the growing disparity between I/O performance and processor performance has led to I/O bottlenecks in many applications, especially those that use large data sets. A popular approach to alleviating this kind of bottleneck is the use of parallel file systems: system software that performs two primary functions, distributing file data among multiple storage nodes in a parallel computer and coordinating concurrent access to files by the tasks of a parallel application. The goal of the Parallel Virtual File System (PVFS) project is to explore the design, implementation, and uses of parallel file systems. PVFS serves both as a platform for parallel I/O research and as a production file system for the cluster computing community. The project is conducted jointly by the Parallel Architecture Research Laboratory (PARL) at Clemson University and the Mathematics and Computer Science Division at Argonne National Laboratory, along with several partner institutions, and has been funded by NASA and NSF. Current research focuses on very large High End Computing systems with hundreds of thousands of compute processors and tens of thousands of I/O nodes, with particular attention to small unaligned accesses and metadata operations. As part of this research we are developing detailed simulation models of parallel file system protocols and experimenting with features such as caches, distributed directories, and intelligent servers. Promising results are implemented for production use in PVFS by the Clemson PVFS development team, a joint effort with CCIT.
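
The data-distribution idea behind a parallel file system can be made concrete with a small sketch. The C fragment below is illustrative only and is not taken from the PVFS sources; the server count and strip size are hypothetical parameters. It shows the basic round-robin striping calculation: mapping a logical file offset to the I/O server that stores it and to an offset within that server's local file.

    /* Illustrative sketch (not the actual PVFS code): round-robin striping,
     * the basic mechanism a parallel file system uses to distribute file
     * data across multiple I/O servers.  NUM_SERVERS and STRIP_SIZE are
     * hypothetical values chosen for the example. */
    #include <stdio.h>
    #include <stdint.h>

    #define NUM_SERVERS 4          /* hypothetical number of I/O servers */
    #define STRIP_SIZE  65536      /* hypothetical strip size in bytes   */

    /* Map a logical file offset to the server that stores it and the
     * offset within that server's local file. */
    static void map_offset(uint64_t logical, int *server, uint64_t *local)
    {
        uint64_t strip = logical / STRIP_SIZE;        /* which strip        */
        *server = (int)(strip % NUM_SERVERS);         /* round-robin server */
        *local  = (strip / NUM_SERVERS) * STRIP_SIZE  /* full strips stored */
                  + logical % STRIP_SIZE;             /* plus the remainder */
    }

    int main(void)
    {
        uint64_t offsets[] = { 0, 70000, 300000 };
        for (int i = 0; i < 3; i++) {
            int server; uint64_t local;
            map_offset(offsets[i], &server, &local);
            printf("logical %8llu -> server %d, local offset %llu\n",
                   (unsigned long long)offsets[i], server,
                   (unsigned long long)local);
        }
        return 0;
    }

Real parallel file systems layer much more on top of this mapping (consistency, metadata, fault handling), but the sketch shows why a single large request naturally fans out to many servers at once.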
Parallel computing environments
Many of the problems in developing efficient parallel codes can be traced to the computational model used. The traditional “von Neumann” model unfortunately captures little of the information needed to effect important optimizations. This is especially true of file I/O, where details about usage patterns can greatly influence performance; cache consistency models, for example, have a large impact on performance and are driven by application behavior. The most popular approach to developing parallel systems is to keep the programming model as close as possible to the traditional one, so that codes can be readily ported. An alternative approach uses newer models that give system software more flexibility in carrying out the desired computation, although such models may make it harder for some programmers to migrate their applications. PARL conducts research into tools and techniques that simplify the development of parallel codes using these slightly different models of computation. The tools enable a variety of optimizations that increase the performance and functionality of the system without requiring application programmers to implement them. The CECAAD and Coven projects were early examples of this work, exploring factors such as load balancing, granularity, and checkpointing. Current efforts center on the River model, which schedules the data that flows through a computation so that I/O and message passing can be optimized.
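
As a rough illustration of this data-flow style of programming, the C sketch below streams fixed-size blocks through a small pipeline of stages. The stage names, block size, and driver loop are hypothetical and do not represent the River interface; the point is only that when the flow of data is made explicit, the runtime rather than the application decides when each block moves, and is therefore free to overlap computation with I/O and message passing.

    /* Conceptual sketch only: a toy streaming pipeline in the spirit of a
     * data-flow model.  The stages and block size are hypothetical; this
     * is not the River API. */
    #include <stdio.h>

    #define BLOCK   4       /* hypothetical block size (elements) */
    #define NBLOCKS 3       /* hypothetical number of blocks      */

    typedef void (*stage_fn)(double *buf, int n);

    static void scale(double *buf, int n)   /* stage 1: scale values  */
    { for (int i = 0; i < n; i++) buf[i] *= 2.0; }

    static void shift(double *buf, int n)   /* stage 2: add an offset */
    { for (int i = 0; i < n; i++) buf[i] += 1.0; }

    int main(void)
    {
        stage_fn pipeline[] = { scale, shift };
        int nstages = 2;

        /* The "scheduler": stream one block at a time through every stage.
         * Because the data flow is explicit, a runtime could reorder or
         * overlap blocks with I/O and message passing. */
        for (int b = 0; b < NBLOCKS; b++) {
            double buf[BLOCK];
            for (int i = 0; i < BLOCK; i++)      /* stand-in for a read  */
                buf[i] = b * BLOCK + i;
            for (int s = 0; s < nstages; s++)
                pipeline[s](buf, BLOCK);
            for (int i = 0; i < BLOCK; i++)      /* stand-in for a write */
                printf("%.1f ", buf[i]);
            printf("\n");
        }
        return 0;
    }

In a conventional program the same computation would be written as one monolithic loop over the whole data set, leaving the system no opportunity to schedule the data movement itself.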
Dr. Ligon also has interests in fields related to high performance computing, including reconfigurable architectures, grid scheduling and co-allocation, compilers, operating systems, and high performance network protocols.