5th Workshop on Many-Task Computing on Grids and Supercomputers (MTAGS) 2012
Co-located with Supercomputing/SC 2012 -- Salt Lake City, November 12th, 2012
Biggest Impact Award
- Location: 155-C
- Date: November 12th, 2012
- Time: 1:30 PM
Dr. Alexandru Iosup
Assistant Professor
Delft University of Technology, The Netherlands
Talk title: IaaS Cloud Benchmarking: Approaches, Challenges, and Experience
Abstract: Over the past five years, Infrastructure-as-a-Service (IaaS) clouds have grown into a major branch of ICT, offering on-demand lease of storage, computation, and network resources. One of the major impediments to the selection, and even the use, of (commercial) IaaS clouds is the lack of benchmarking results, that is, of trustworthy quantitative information that allows (potential) cloud users to compare and reason about IaaS clouds.
In this talk we discuss empirical approaches to quantitative evaluation, which we find to be a necessary if bumpy road toward cloud benchmarking. Both industry and academia have used empirical approaches for years, but the limited success achieved so far for IaaS clouds and similar systems (e.g., grids) is perhaps indicative of the size and complexity of the challenges. We present the lessons learned in developing the SkyMark framework for cloud performance evaluation, and the results of our SkyMark-based investigation of three research questions: What is the performance of production IaaS cloud services? How variable is the performance of widely used production cloud services? And what is the impact on performance of user-level middleware, such as the provisioning and allocation policies that interact with IaaS clouds? We discuss the impact of our findings on large-scale, many-task, and many-user applications; notably, we consider not only cloud performance, but also cloud operation and behavior.
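A minimal sketch, in Python, of the kind of measurement behind the second question above: time a cloud operation repeatedly and report the mean latency and the coefficient of variation, a scale-free variability metric. This is not SkyMark; the cloud_operation stub below is hypothetical, where a real study would time calls to a production IaaS API, such as VM provisioning or storage requests.

    import random
    import statistics
    import time

    def cloud_operation():
        # Hypothetical stand-in for a real cloud call (e.g., start a VM,
        # PUT an object); simulated with a randomized delay so the script
        # is self-contained.
        time.sleep(random.uniform(0.01, 0.05))

    def measure(n_samples=30):
        # Time the operation repeatedly to capture run-to-run variability.
        latencies = []
        for _ in range(n_samples):
            start = time.perf_counter()
            cloud_operation()
            latencies.append(time.perf_counter() - start)
        mean = statistics.mean(latencies)
        stdev = statistics.stdev(latencies)
        # Coefficient of variation: standard deviation relative to the mean.
        return mean, stdev, stdev / mean

    if __name__ == "__main__":
        mean, stdev, cov = measure()
        print(f"mean={mean*1000:.1f} ms  stdev={stdev*1000:.1f} ms  CoV={cov:.2f}")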
In contrast to previous attempts, our research combines empirical and other approaches, for example modeling and simulation, for deeper analysis; is based on a combination of short-term and multi-year measurements, for better longevity of results; and relies on large, comprehensive studies of several real clouds, for broader coverage. The presentation may also offer useful insights for related fields, for example the experimental evaluation of any large-scale distributed system.
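In the same spirit, a toy sketch of how simulation can complement measurement when reasoning about user-level provisioning policies; the VM startup delay and task runtimes below are assumed values for illustration, not results from the talk.

    import random

    VM_STARTUP = 60.0  # assumed VM provisioning latency, in seconds

    def makespan_on_demand(tasks):
        # Policy A: provision a fresh VM for every task, so each task
        # pays the startup cost before it can run.
        return sum(VM_STARTUP + t for t in tasks)

    def makespan_warm_pool(tasks):
        # Policy B: provision once and reuse the VM for the whole
        # task stream, paying the startup cost a single time.
        return VM_STARTUP + sum(tasks)

    if __name__ == "__main__":
        random.seed(42)
        tasks = [random.uniform(5, 30) for _ in range(100)]  # synthetic workload
        print(f"on-demand: {makespan_on_demand(tasks):.0f} s")
        print(f"warm pool: {makespan_warm_pool(tasks):.0f} s")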
Last but not least, we present a roadmap toward cloud benchmarking and how we plan to make progress on it with other members of the RG Cloud Group of the Standard Performance Evaluation Corporation (SPEC) [ http://research.spec.org/working-groups/rg-cloud-working-group.html ].
Bio: Dr. Alexandru Iosup received his Ph.D. in Computer Science in 2009 from the Delft University of Technology (TU Delft), the Netherlands. He is currently an Assistant Professor with the Parallel and Distributed Systems Group at TU Delft. He was a visiting scholar at U. Dortmund, U. Wisconsin-Madison, U. Innsbruck, and U. California-Berkeley in 2004, 2006, 2008, and 2010, respectively. In 2011 he received a Dutch NWO/STW Veni grant (the Dutch equivalent of the US NSF CAREER). His research interests lie in distributed computing; keywords: cloud computing, grid computing, peer-to-peer systems, scientific computing, massively multiplayer online games, scheduling, scalability, reliability, performance evaluation, and workload characterization. Dr. Iosup is the author of over 50 scientific publications and has received several awards and distinctions, including best paper awards at IEEE CCGrid 2010, Euro-Par 2009, and IEEE P2P 2006. He is a co-founder of the Grid Workloads Archive, the Peer-to-Peer Trace Archive, and the Failure Trace Archive, which provide open access to workload and resource operation traces from large-scale distributed computing environments. He is currently working on cloud resource management for e-Science and consumer workloads.