This is the companion web site for the preprint titled “Patterns in the Chaos – a Structured Large-Scale Analysis of Variations in the Performance of Public IaaS Providers”. Please see the e-print of the submission on arXiv. Here, you will find additional information, the raw data we collected, the benchmarks we used to collect this data, and further analysis that we did not have space for in the publication. Should we later discover errors in our paper, we will also collect errata here.
Benchmarking the performance of public cloud providers is a common research topic. Previous research has extensively evaluated the performance of different cloud platforms for different use cases, and under different constraints and experiment setups. In this paper, we present a principled, large-scale literature review to collect and codify existing research on the predictability of performance in public Infrastructure-as-a-Service (IaaS) clouds. We formulate 15 hypotheses relating to the nature of performance variations in IaaS systems, to the factors influencing those variations, and to how different instance types can be compared. In a second step, we conduct extensive real-world experimentation on Amazon EC2 and Google Compute Engine to empirically validate these hypotheses. At the time of our research, performance in EC2 was substantially less predictable than in GCE. Further, we show that hardware heterogeneity is in practice less prevalent than anticipated by earlier research, while multi-tenancy has a dramatic impact on performance and predictability.
For the paper, we examined performance predictability in Google Compute Engine, Amazon Elastic Compute Cloud (EC2), Microsoft Azure, and IBM SoftLayer.