Austin, Texas -- November 15th, 2015
- University of Chicago & Argonne National Laboratory
Application Skeletons: Constructing and Using Abstract Many Task Applications in eScience (slides)
Abstract: Computer scientists who work on tools and systems to support many-task eScience applications usually use actual applications to prove that their systems will benefit science and engineering (e.g., improve application performance). Accessing and building these applications and the necessary data sets can be difficult because of policy or technical issues, and it can be hard to modify the characteristics of the applications to understand corner cases in the system design. In this talk, based on an FGCS paper in press, "Application skeletons: Construction and use in eScience" (DOI: 10.1016/j.future.2015.10.001), we present the Application Skeleton, a simple yet powerful tool for building synthetic many-task applications that abstract and represent real applications, with runtime and I/O close to those of the real applications. This allows computer scientists to focus on the system they are building; they can work with the simpler skeleton applications and be confident that their work will also apply to the real applications. In addition, skeleton applications support simple, reproducible system experiments, since each is represented by a compact set of parameters. Our Application Skeleton tool (available as open source at https://github.com/applicationskeleton/Skeleton) can currently create easy-to-access, easy-to-build, and easy-to-run bag-of-tasks, (iterative) map-reduce, and (iterative) multi-stage workflow applications. The tasks can be serial, parallel, or a mix of the two. The parameters that represent the tasks can be discovered either through manual profiling of the applications or through an automated method. We select three representative applications (Montage, BLAST, CyberShake Postprocessing), then describe and generate skeleton applications for each. We show that the skeleton applications have performance identical (or close) to that of the real applications.
We then show examples of using skeleton applications to verify system optimizations such as data caching, I/O tuning, and task scheduling, as well as system resilience mechanisms, in some cases modifying the skeleton applications to emphasize a particular characteristic. These examples demonstrate that skeleton applications simplify the process of designing, implementing, and testing such optimizations.
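To give a flavor of the idea, the following is a minimal Python sketch of how a bag-of-tasks skeleton might be described by a compact parameter set and then emulated. The names and structure here are illustrative assumptions for this announcement, not the actual API of the Skeleton tool.

```python
import time

def make_bag_of_tasks(num_tasks, runtime_s, input_bytes, output_bytes):
    """Build a compact parameter set describing a bag-of-tasks skeleton.

    Each task is characterized only by its runtime and I/O volumes,
    which is the kind of abstraction a skeleton application relies on.
    (Illustrative only; not the Skeleton tool's real interface.)
    """
    return [
        {"id": i,
         "runtime_s": runtime_s,
         "input_bytes": input_bytes,
         "output_bytes": output_bytes}
        for i in range(num_tasks)
    ]

def run_task(task):
    """Emulate one task: 'read' input, compute for runtime_s, 'write' output."""
    _ = bytes(task["input_bytes"])    # stand-in for reading the input data
    time.sleep(task["runtime_s"])     # emulated compute time
    return bytes(task["output_bytes"])  # stand-in for the written output

tasks = make_bag_of_tasks(num_tasks=4, runtime_s=0.01,
                          input_bytes=1024, output_bytes=512)
outputs = [run_task(t) for t in tasks]
print(len(outputs), len(outputs[0]))  # 4 tasks, each producing 512 bytes
```

Because the whole "application" is captured by four numbers, an experiment (e.g., testing a scheduler or a caching layer) can be repeated exactly by reusing the same parameters, which is the reproducibility benefit the abstract describes.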
Daniel S. Katz is a Senior Fellow in the Computation Institute (CI) at the University of Chicago and Argonne National Laboratory and is currently a Program Director in the Division of Advanced Cyberinfrastructure (formerly the Office of Cyberinfrastructure) at the National Science Foundation. He was previously Open Grid Forum Area Co-director for Applications and TeraGrid GIG Director of Science. He is also an adjunct faculty member at the Center for Computation & Technology (CCT), Louisiana State University (LSU), where he was previously CCT Director for Cyberinfrastructure Development from 2006 to 2009, and Adjunct Associate Professor in the Department of Electrical and Computer Engineering from 2006 to 2013. He was at JPL from 1996 to 2006, in a variety of roles, including: Principal Member of the Information Systems and Computer Science Staff, Supervisor of the Parallel Applications Technologies group, Area Program Manager of High End Computing in the Space Mission Information Technology Office, Applications Project Element Manager for the Remote Exploration and Experimentation (REE) Project, and Team Leader for MOD Tool (a tool for the integrated design of microwave and millimeter-wave instruments). From 1993 to 1996 he was employed by Cray Research (and later by Silicon Graphics) as a Computational Scientist on-site at JPL and Caltech, specializing in parallel implementation of computational electromagnetic algorithms. Dan's interest is in the development and use of advanced cyberinfrastructure to solve challenging problems at multiple scales. His technical research interests are in applications, algorithms, fault tolerance, and programming in parallel and distributed computing, including HPC, Grid, Cloud, etc. He is also interested in policy issues, including citation and credit mechanisms and practices associated with software and data, organization and community practices for collaboration, and career paths for computing researchers. 
He received his B.S., M.S., and Ph.D. degrees in Electrical Engineering from Northwestern University, Evanston, Illinois, in 1988, 1990, and 1994, respectively. His work is documented in numerous book chapters, journal and conference publications, and NASA Tech Briefs. He is a senior member of the IEEE and ACM, designed and maintained (until 2001) the original website for the IEEE Antennas and Propagation Society, and serves on the IEEE Technical Committee on Parallel Processing's Executive Committee and the steering committees for the IEEE Grid, Cluster, and e-Science conference series.