Continuous Performance Evaluation Tool
- Compare performance measurements from different tests (load/performance tests, simulations)
- User-friendly web front end to visualize evaluation results
- REST interface as API for the PMWT
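The REST interface is not specified in detail here; as a rough sketch of how a client might consume such an API (the base URL, the `/measurements` endpoint, and the query parameters below are illustrative assumptions, not PET's actual API):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen


def measurements_url(base, test_run, metric, **params):
    """Build a query URL for a hypothetical PET measurements endpoint.

    Example: measurements_url("http://pet.example/api", "run-42",
    "response_time") narrows results to one test run and one metric.
    """
    query = {"testRun": test_run, "metric": metric, **params}
    return f"{base}/measurements?{urlencode(query)}"


def fetch_measurements(base, test_run, metric):
    """Fetch and decode a measurement series (assumes a JSON response)."""
    with urlopen(measurements_url(base, test_run, metric)) as resp:
        return json.load(resp)
```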
Performance measurements and simulations produce large amounts of data in a short period of time. Release cycles are getting shorter due to the DevOps movement and rely heavily on live data from production or test environments. In addition, performance simulations are becoming increasingly accurate, approaching exact predictions. Results from these simulations are reliable and can be compared with live data to detect deviations from expected behavior.
We present our continuous performance evaluation tool PET. It manages performance measurement and simulation data independently of the data collection software. Since monitoring software systems easily produces logs with high volume and velocity even on test systems, the tool stores measurements in a consistent repository built for big data. It provides analytics to compare measurements from different test runs. Furthermore, PET includes a user-friendly web front end to visualize evaluation results.
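The abstract does not describe the comparison analytics in detail; a minimal sketch of one plausible check, flagging live measurements that stray from simulated predictions by more than a tolerance (the function names and the 10% default threshold are assumptions, not PET's actual algorithm):

```python
def relative_deviation(measured, predicted):
    """Relative deviation of one measurement from a simulated prediction."""
    return abs(measured - predicted) / predicted


def flag_deviations(measured, predicted, tolerance=0.10):
    """Pairwise-compare two sample series (e.g. response times per
    interval) and return the indices where the live measurement deviates
    from the simulation by more than `tolerance` (default 10%)."""
    return [
        i
        for i, (m, p) in enumerate(zip(measured, predicted))
        if relative_deviation(m, p) > tolerance
    ]
```

For instance, `flag_deviations([100, 250, 110], [100, 120, 115])` flags only the middle interval, where the measured value is more than double the prediction.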
Kroß, Johannes; Willnecker, Felix; Zwickl, Thomas; Krcmar, Helmut (2016):
PET - Continuous Performance Evaluation Tool. In: Proceedings of the International Workshop on Quality-Aware DevOps (QUDOS) co-located with the International Symposium on Software Testing and Analysis (ISSTA), July 21, 2016, Saarbrücken, Germany. http://dx.doi.org/10.1145/2945408.2945418.
Third-Party Software Licenses/Notices
|Component|Website|License|
|---|---|---|
|Bootstrap|getbootstrap.com|The MIT License|
|Date Picker|amsul.ca/pickadate.js/|The MIT License|
|DropzoneJS|dropzonejs.com|The MIT License|
|Highstock|highcharts.com|Creative Commons (CC BY-NC 3.0)|
|jQuery|jquery.com|The MIT License|