

Department of Informatics s.e.a.l

TOIT16 Online Appendix

“Patterns in the Chaos” Online Appendix

This is the companion web site for the preprint titled “Patterns in the Chaos – a Structured Large-Scale Analysis of Variations in the Performance of Public IaaS Providers”. Please see the e-print of the submission on arXiv. Here, you will find additional information, the raw data we collected, the benchmarks we used to collect this data, and further analysis that did not fit into the publication. Should we later find errors in our paper, we will also collect errata here.


Benchmarking the performance of public cloud providers is a common research topic. Previous research has already extensively evaluated the performance of different cloud platforms for different use cases, and under different constraints and experiment setups. In this paper, we present a principled, large-scale literature review to collect and codify existing research on the predictability of performance in public Infrastructure-as-a-Service (IaaS) clouds. We formulate 15 hypotheses relating to the nature of performance variations in IaaS systems, to the factors that influence performance variations, and to how different instance types can be compared. In a second step, we conduct extensive real-life experimentation on Amazon EC2 and Google Compute Engine to empirically validate these hypotheses. At the time of our research, performance in EC2 was substantially less predictable than in GCE. Further, we show that hardware heterogeneity is in practice less prevalent than anticipated by earlier research, while multi-tenancy has a dramatic impact on performance and predictability.

Evaluated Cloud Providers

For the paper, we looked at performance predictability in Google Compute Engine, Amazon Elastic Compute Cloud (EC2), Microsoft Azure, and IBM Softlayer.

Collected Data

Full Data Set (ZIP, 640 KB)

R Script used for Data Cleaning (R, 4 KB)


To generate the data sets, we used our tool Cloud Workbench (CWB) to install and run the benchmarks in the cloud instances we wanted to evaluate. CWB is freely available as open source software (however, the installation procedure still needs work – send us a mail if you run into trouble). CWB uses Chef to provision the actual benchmark code. Below, we provide all benchmarks we used for this paper as ZIP archives of Chef recipes.

Benchmarks (ZIP, 68 KB)

The recipes depend on a special Chef cookbook that installs the CWB client library, so you may need to adapt the recipes to install them successfully outside of CWB. If you are only interested in how we actually execute the benchmarks, you can inspect the files in files/default in each ZIP archive. The configuration values used in these files are inherited from attributes/default.rb.
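To illustrate how such a recipe is typically structured, here is a minimal, hypothetical Chef recipe sketch (not one of the actual recipes from the ZIP archives): a benchmark driver script is shipped from files/default via the standard cookbook_file resource, and its parameters come from node attributes defined in attributes/default.rb.

```ruby
# Hypothetical sketch of a CWB-style benchmark recipe; names and paths
# are illustrative, not taken from the actual archives.
#
# attributes/default.rb (assumed) would contain, e.g.:
#   default['my_benchmark']['iterations'] = 10

# Copy the benchmark driver script from files/default/ onto the instance.
cookbook_file '/opt/cwb/run_benchmark.sh' do
  source 'run_benchmark.sh'
  mode '0755'
end

# Execute the benchmark with the iteration count configured
# in the cookbook attributes.
execute 'run-benchmark' do
  command "/opt/cwb/run_benchmark.sh #{node['my_benchmark']['iterations']}"
end
```

Outside of CWB, converging a recipe along these lines with chef-solo (or chef-client in local mode), after supplying a suitable attribute file, should reproduce the benchmark setup.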

List of Codified Studies

The hypotheses formulated in the paper are based on a principled study of existing research in cloud benchmarking. Below, we give full references for all 56 studies that we considered in our work, in alphabetical order of the last name of the first author.

  1. Dinesh Agarwal and Sushil K. Prasad. 2012. AzureBench: Benchmarking the Storage Services of the Azure Cloud Platform. In Proceedings of the 2012 IEEE 26th International Parallel and Distributed Processing Symposium Workshops & PhD Forum (IPDPSW ’12). IEEE Computer Society, Washington, DC, USA, 1048-1057. DOI=10.1109/IPDPSW.2012.128
  2. Akioka, S.; Muraoka, Y., HPC Benchmarks on Amazon EC2. In Proceedings of the IEEE 24th International Conference on Advanced Information Networking and Applications Workshops (WAINA), pp. 1029-1034, 20-23 April 2010. DOI=10.1109/WAINA.2010.166
  3. Gültekin Ataş and Vehbi Cagri Gungor. 2014. Performance evaluation of cloud computing platforms using statistical methods. Comput. Electr. Eng. 40, 5 (July 2014), 1636-1649. DOI=10.1016/j.compeleceng.2014.03.017
  4. Azevedo, E.; Dias, C.; Dantas, R.; Sadok, D.; Fernandes, S.; Simoes, R.; Kamienski, C., Profiling core operations for elasticity in cloud environments. In Proceedings of the 2012 IEEE Latin America Conference on Cloud Computing and Communications (LATINCLOUD), pp. 43-48, 26-27 Nov. 2012. DOI=10.1109/LatinCloud.2012.6508156
  5. Sean Kenneth Barker and Prashant Shenoy. 2010. Empirical evaluation of latency-sensitive application performance in the cloud. In Proceedings of the first annual ACM SIGMM conference on Multimedia systems (MMSys ’10). ACM, New York, NY, USA, 35-46. DOI=10.1145/1730836.1730842
  6. Amir Hossein Borhani, Philipp Leitner, Bu-Sung Lee, Xiaorong Li and Terence Hung. 2014. WPress: An Application-Driven Performance Benchmark For Cloud-Based Virtual Machines. In Proceedings of the 2014 Enterprise Computing Conference (EDOC’14).
  7. Paul Brebner and Anna Liu. 2010. Performance and cost assessment of cloud services. In Proceedings of the 2010 international conference on Service-oriented computing (ICSOC’10), E. Michael Maximilien, Gustavo Rossi, Soe-Tsyr Yuan, Heiko Ludwig, and Marcelo Fantinato (Eds.). Springer-Verlag, Berlin, Heidelberg, 39-50.
  8. Marc Bux and Ulf Leser. 2013. DynamicCloudSim: simulating heterogeneity in computational clouds. In Proceedings of the 2nd ACM SIGMOD Workshop on Scalable Workflow Execution Engines and Technologies (SWEET ’13). ACM, New York, NY, USA, Article 1, 12 pages. DOI=10.1145/2499896.2499897
  9. Carlyle, A.G.; Harrell, S.L.; Smith, P.M., Cost-Effective HPC: The Community or the Cloud? In Proceedings of the 2010 IEEE Second International Conference on Cloud Computing Technology and Science (CloudCom), pp. 169-176, Nov. 30-Dec. 3 2010. DOI=10.1109/CloudCom.2010.115
  10. Cerotti, D.; Gribaudo, M.; Piazzolla, P.; Serazzi, G., Flexible CPU Provisioning in Clouds: A New Source of Performance Unpredictability. In Proceedings of the Ninth International Conference on Quantitative Evaluation of Systems (QEST), pp. 230-237, 17-20 Sept. 2012. DOI=10.1109/QEST.2012.23
  11. Carlo A. Curino, Djellel E. Difallah, Andrew Pavlo, and Philippe Cudre-Mauroux. 2012. Benchmarking OLTP/web databases in the cloud: the OLTP-bench framework. In Proceedings of the fourth international workshop on Cloud data management (CloudDB ’12). ACM, New York, NY, USA, 17-20. DOI=10.1145/2390021.2390025
  12. Jiang Dejun, Guillaume Pierre, and Chi-Hung Chi. 2009. EC2 performance analysis for resource provisioning of service-oriented applications. In Proceedings of the 2009 international conference on Service-oriented computing (ICSOC/ServiceWave’09), Asit Dan, and Farouk Toumani (Eds.). Springer-Verlag, Berlin, Heidelberg, 197-207.
  13. Expósito, Roberto R., Taboada, Guillermo L., Pardo, Xoán C., Touriño, Juan, and Doallo, Ramón. Running scientific codes on Amazon EC2: a performance analysis of five high-end instances. In Journal of Computer Science & Technology, vol. 13, no. 3, 2013
  14. Benjamin Farley, Ari Juels, Venkatanathan Varadarajan, Thomas Ristenpart, Kevin D. Bowers, and Michael M. Swift. 2012. More for your money: exploiting performance heterogeneity in public clouds. In Proceedings of the Third ACM Symposium on Cloud Computing (SoCC ’12). ACM, New York, NY, USA, Article 20, 14 pages. DOI=10.1145/2391229.2391249
  15. Fittkau, F.; Frey, S.; Hasselbring, W., CDOSim: Simulating cloud deployment options for software migration support. In Proceedings of the IEEE 6th International Workshop on the Maintenance and Evolution of Service-Oriented and Cloud-Based Systems (MESOCA), pp. 37-46, 2012. DOI=10.1109/MESOCA.2012.6392599
  16. Ian P. Gent and Lars Kotthoff. Reliability of Computational Experiments on Virtualised Hardware. ArXiv e-prints, Oct 2011.
  17. Lee Gillam, Bin Li, John O’Loughlin and Anuz Pratap Singh Tomar. Fair Benchmarking for Cloud Computing systems. In Journal of Cloud Computing: Advances, Systems and Applications 2013, 2:6  doi:10.1186/2192-113X-2-6
  18. Scott Hazelhurst. 2008. Scientific computing using virtual high-performance computing: a case study using the Amazon elastic computing cloud. In Proceedings of the 2008 annual research conference of the South African Institute of Computer Scientists and Information Technologists on IT research in developing countries: riding the wave of technology (SAICSIT ’08). ACM, New York, NY, USA, 94-103. DOI=10.1145/1456659.1456671
  19. Qiming He, Shujia Zhou, Ben Kobler, Dan Duffy, and Tom McGlynn. 2010. Case study for running HPC applications in public clouds. In Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing (HPDC ’10). ACM, New York, NY, USA, 395-401. DOI=10.1145/1851476.1851535
  20. Z. Hill and M. Humphrey,  A quantitative analysis of high performance computing with Amazon’s EC2 infrastructure: The death of the local cluster? In Proceedings of GRID. 2009, 26-33.
  21. Zach Hill, Jie Li, Ming Mao, Arkaitz Ruiz-Alvarez, and Marty Humphrey. 2010. Early observations on the performance of Windows Azure. In Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing (HPDC ’10). ACM, New York, NY, USA, 367-376. DOI=10.1145/1851476.1851532
  22. Gabor Imre, Hassan Charaf, Laszlo Lengyel. 2013. Performance Analysis of a Java Web Application Running on Amazon EC2. Acta Electrotechnica et Informatica, Vol. 13, No. 4, 2013, 32–39, DOI: 10.15546/aeei-2013-0046
  23. Alexandru Iosup, Simon Ostermann, Nezih Yigitbasi, Radu Prodan, Thomas Fahringer, and Dick Epema. 2011. Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing. IEEE Trans. Parallel Distrib. Syst. 22, 6 (June 2011), 931-945. DOI=10.1109/TPDS.2011.66
  24. Alexandru Iosup, Nezih Yigitbasi, and Dick Epema. 2011. On the Performance Variability of Production Cloud Services. In Proceedings of the 2011 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID ’11). IEEE Computer Society, Washington, DC, USA, 104-113. DOI=10.1109/CCGrid.2011.22
  25. Shigeru Imai, Thomas Chestna, and Carlos A. Varela. 2013. Accurate Resource Prediction for Hybrid IaaS Clouds Using Workload-Tailored Elastic Compute Units. In Proceedings of the 2013 IEEE/ACM 6th International Conference on Utility and Cloud Computing (UCC ’13). IEEE Computer Society, Washington, DC, USA, 171-178. DOI=10.1109/UCC.2013.40
  26. Keith R. Jackson, Lavanya Ramakrishnan, Krishna Muriki, Shane Canon, Shreyas Cholia, John Shalf, Harvey J. Wasserman, and Nicholas J. Wright. 2010. Performance Analysis of High Performance Computing Applications on the Amazon Web Services Cloud. In Proceedings of the 2010 IEEE Second International Conference on Cloud Computing Technology and Science (CLOUDCOM ’10). IEEE Computer Society, Washington, DC, USA, 159-168. DOI=10.1109/CloudCom.2010.69
  27. Donald Kossmann, Tim Kraska, and Simon Loesing. 2010. An evaluation of alternative architectures for transaction processing in the cloud. In Proceedings of the 2010 ACM SIGMOD International Conference on Management of data (SIGMOD ’10). ACM, New York, NY, USA, 579-590. DOI=10.1145/1807167.1807231
  28. G. Kousiouris, G. Giammatteo, A. Evangelinou, N. Galante, E. Kevani, C. Stampoltas, A. Menychtas, A. Kopaneli, K. Ramasamy Balraj, D. Kyriazis, T. Varvarigou, P. Stuer, L. Orue-Echevarria Arrieta. A Multi-Cloud Framework for Measuring and Describing Performance Aspects of Cloud Services Across Different Application Types. In Proceedings of the 2014 Cloud Computing and Services Science Conference (CLOSER).
  29. Dilip Kumar Krishnappa, Eric Lyons, David Irwin, and Michael Zink. 2012. Network capabilities of cloud services for a real time scientific application. In Proceedings of the 2012 IEEE 37th Conference on Local Computer Networks (LCN ’12). IEEE Computer Society, Washington, DC, USA, 487-495. DOI=10.1109/LCN.2012.6423665
  30. Katrina LaCurts, Shuo Deng, Ameesh Goyal, and Hari Balakrishnan. 2013. Choreo: network-aware task placement for cloud applications. In Proceedings of the 2013 conference on Internet measurement conference (IMC ’13). ACM, New York, NY, USA, 191-204. DOI=10.1145/2504730.2504744
  31. Craig A. Lee. 2010. A perspective on scientific cloud computing. In Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing (HPDC ’10). ACM, New York, NY, USA, 451-459. DOI=10.1145/1851476.1851542
  32. Lenk, A.; Menzel, M.; Lipsky, J.; Tai, S.; Offermann, P. What Are You Paying For? Performance Benchmarking for Infrastructure-as-a-Service Offerings, in Proceedings of the IEEE International Conference on Cloud Computing (CLOUD) , 2011. doi: 10.1109/CLOUD.2011.80
  33. Zheng Li, Liam O’Brien, Rajiv Ranjan, and Miranda Zhang. 2013. Early Observations on Performance of Google Compute Engine for Scientific Computing. In Proceedings of the 2013 IEEE International Conference on Cloud Computing Technology and Science – Volume 01 (CLOUDCOM ’13), Vol. 1. IEEE Computer Society, Washington, DC, USA, 1-8. DOI=10.1109/CloudCom.2013.7
  34. Zheng Li, Liam O’Brien, He Zhang, and Rainbow Cai. 2013. Boosting Metrics for Cloud Services Evaluation — The Last Mile of Using Benchmark Suites. In Proceedings of the 2013 IEEE 27th International Conference on Advanced Information Networking and Applications (AINA ’13). IEEE Computer Society, Washington, DC, USA, 381-388. DOI=10.1109/AINA.2013.99
  35. Ang Li, Xiaowei Yang, Srikanth Kandula, and Ming Zhang. 2010. CloudCmp: comparing public cloud providers. In Proceedings of the 10th ACM SIGCOMM conference on Internet measurement (IMC ’10). ACM, New York, NY, USA, 1-14. DOI=10.1145/1879141.1879143
  36. Ming Mao and Marty Humphrey. 2012. A Performance Study on the VM Startup Time in the Cloud. In Proceedings of the 2012 IEEE Fifth International Conference on Cloud Computing (CLOUD ’12). IEEE Computer Society, Washington, DC, USA, 423-430. DOI=10.1109/CLOUD.2012.103
  37. Piyush Mehrotra, Jahed Djomehri, Steve Heistand, Robert Hood, Haoqiang Jin, Arthur Lazanoff, Subhash Saini, and Rupak Biswas. 2012. Performance evaluation of Amazon EC2 for NASA HPC applications. In Proceedings of the 3rd Workshop on Scientific Cloud Computing (ScienceCloud ’12). ACM, New York, NY, USA, 41-50. DOI=10.1145/2287036.2287045
  38. Mukherjee, J.; Wang, M.; Krishnamurthy, D., Performance Testing Web Applications on the Cloud. In Proceedings of the 2014 IEEE Seventh International Conference on Software Testing, Verification and Validation Workshops (ICSTW), pp. 363-369, March 31-April 4 2014. DOI=10.1109/ICSTW.2014.57
  39. John O’Loughlin and Lee Gillam. 2013. Towards Performance Prediction for Public Infrastructure Clouds: An EC2 Case Study. In Proceedings of the 2013 IEEE International Conference on Cloud Computing Technology and Science – Volume 01 (CLOUDCOM ’13), Vol. 1. IEEE Computer Society, Washington, DC, USA, 475-480. DOI=10.1109/CloudCom.2013.69
  40. John O’Loughlin and Lee Gillam. 2014. Performance Evaluation for Cost-Efficient Public Infrastructure Cloud Use. In Proceedings of the 11th International Conference on Economics of Grids, Clouds, Systems and Services (GECON).
  41. Simon Ostermann, Alexandru Iosup, Nezih Yigitbasi, Radu Prodan, Thomas Fahringer, Dick Epema. A Performance Analysis of EC2 Cloud Computing Services for Scientific Computing. In Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, Volume 34, 2010, pp. 115-131
  42. Zhonghong Ou, Hao Zhuang, Andrey Lukyanenko, Jukka K. Nurminen, Pan Hui, Vladimir Mazalov, and Antti Yla-Jaaski, Is the Same Instance Type Created Equal? Exploiting Heterogeneity of Public Clouds, IEEE Transactions on Cloud Computing, vol. 1, no. 2, pp. 201-214, July-December, 2013
  43. Zhonghong Ou, Hao Zhuang, Jukka K. Nurminen, Antti Ylä-Jääski, and Pan Hui. 2012. Exploiting hardware heterogeneity within the same instance type of Amazon EC2. In Proceedings of the 4th USENIX conference on Hot Topics in Cloud Computing (HotCloud’12). USENIX Association, Berkeley, CA, USA, 4-4.
  44. Stephen C. Phillips, Vegard Engen, and Juri Papay. 2011. Snow White Clouds and the Seven Dwarfs. In Proceedings of the 2011 IEEE Third International Conference on Cloud Computing Technology and Science (CLOUDCOM ’11). IEEE Computer Society, Washington, DC, USA, 738-745. DOI=10.1109/CloudCom.2011.114
  45. Radu Prodan and Michael Sperk. 2013. Scientific computing with Google App Engine. Future Gener. Comput. Syst. 29, 7 (September 2013), 1851-1859. DOI=10.1016/j.future.2012.12.018
  46. Zia Ur Rehman, Farookh Khadeer Hussain, Omar Khadeer Hussain, and Jaipal Singh. 2013. Is There Self-Similarity in Cloud QoS Data?. In Proceedings of the 2013 IEEE 10th International Conference on e-Business Engineering (ICEBE ’13). IEEE Computer Society, Washington, DC, USA, 76-81. DOI=10.1109/ICEBE.2013.12
  47. Arkaitz Ruiz-Alvarez and Marty Humphrey. 2011. An automated approach to cloud storage service selection. In Proceedings of the 2nd international workshop on Scientific cloud computing (ScienceCloud ’11). ACM, New York, NY, USA, 39-48. DOI=10.1145/1996109.1996117
  48. Jörg Schad, Jens Dittrich, and Jorge-Arnulfo Quiané-Ruiz. 2010. Runtime measurements in the cloud: observing, analyzing, and reducing variance. Proc. VLDB Endow. 3, 1-2 (September 2010), 460-471. DOI=10.14778/1920841.1920902
  49. Scheuner J, Leitner P, Cito J, Gall H. Cloud WorkBench – Infrastructure-as-Code Based Cloud Benchmarking. ArXiv e-prints, Aug 2014.
  50. Radu Tudoran, Alexandru Costan, Gabriel Antoniu, and Luc Bougé. 2012. A performance evaluation of Azure and Nimbus clouds for scientific applications. In Proceedings of the 2nd International Workshop on Cloud Computing Platforms (CloudCP ’12). ACM, New York, NY, USA, Article 4, 6 pages. DOI=10.1145/2168697.2168701
  51. Radu Tudoran, Kate Keahey, Pierre Riteau, Sergey Panitkin, Gabriel Antoniu. Evaluating Streaming Strategies for Event Processing Across Infrastructure Clouds. In Proceedings of the 2014 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), pp. 151-159. DOI=10.1109/CCGrid.2014.89
  52. Venkatanathan Varadarajan, Thawan Kooburat, Benjamin Farley, Thomas Ristenpart, and Michael M. Swift. 2012. Resource-freeing attacks: improve your cloud performance (at your neighbor’s expense). In Proceedings of the 2012 ACM conference on Computer and communications security (CCS ’12). ACM, New York, NY, USA, 281-292. DOI=10.1145/2382196.2382228
  53. Edward Walker. 2008. Benchmarking Amazon EC2 for High-Performance Scientific Computing. In ;login:, Volume 33, Number 5, October 2008
  54. Guohui Wang and T. S. Eugene Ng. 2010. The impact of virtualization on network performance of amazon EC2 data center. In Proceedings of the 29th conference on Information communications(INFOCOM’10). IEEE Press, Piscataway, NJ, USA, 1163-1171.
  55. Jinhui Yao, Alex Ng, Shiping Chen, Dongxi Liu, Carsten Friedrich, Surya Nepal. 2012. A Performance Evaluation of Public Cloud Using TPC-C. In Service-Oriented Computing – ICSOC 2012 Workshops.
  56. Liang Zhao, Anna Liu, and Jacky Keung. 2010. Evaluating Cloud Platform Architecture with the CARE Framework. In Proceedings of the 2010 Asia Pacific Software Engineering Conference (APSEC ’10). IEEE Computer Society, Washington, DC, USA, 60-69. DOI=10.1109/APSEC.2010.17