Measuring application performance as part of automation testing
When application performance is measured as part of automation testing, you get interesting and useful results.
Results by test case:
• SQL statements executed during each test
• Counts broken down by SELECT, INSERT, UPDATE and DELETE
• The number of rows read, lock wait time, CPU usage and many more metrics
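As a concrete sketch of how such per-test-case collection can work, the snippet below snapshots a set of database counters before and after each test and stores the delta. The `DbCounters` fields and the counter source are assumptions for illustration; in practice the values would come from the database's monitoring views.

```python
from contextlib import contextmanager
from dataclasses import dataclass, fields

@dataclass
class DbCounters:
    # A few of the metrics listed above; real monitoring views expose many more.
    selects: int = 0
    rows_read: int = 0
    lock_wait_ms: int = 0

    def minus(self, earlier):
        # Field-by-field difference between two snapshots.
        return DbCounters(**{f.name: getattr(self, f.name) - getattr(earlier, f.name)
                             for f in fields(self)})

class MetricRecorder:
    """Snapshot database counters around each test case and store the delta."""

    def __init__(self, read_counters):
        self._read = read_counters   # callable returning the current DbCounters
        self.per_test = {}           # test case name -> counters used by that test

    @contextmanager
    def measure(self, test_name):
        before = self._read()
        try:
            yield
        finally:
            self.per_test[test_name] = self._read().minus(before)

# Demo with a fake counter source standing in for the database.
state = DbCounters()
recorder = MetricRecorder(lambda: DbCounters(**vars(state)))

with recorder.measure("test_user_search"):
    # Pretend the test under measurement ran three SELECTs reading 500 rows.
    state.selects += 3
    state.rows_read += 500

print(recorder.per_test["test_user_search"])
# DbCounters(selects=3, rows_read=500, lock_wait_ms=0)
```

The context manager makes the capture easy to attach to an existing test framework, for example as a fixture wrapped around each test case.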
Why is care needed?
In an application, even small code changes often ripple into many different services. For example, a new field is added to the user interface, but the current service does not return the required information, so the developer adds the lookup to a widely used general service and the application makes a new service call to retrieve it. In a local test everything works quickly and conveniently, the acceptance test environment behaves the same, and the automation tests stay green. However, things are not that simple.
The service change for the new field is poorly implemented: it iterates over lists and issues several unnecessary queries. The problems only start to emerge in performance tests, as slowdowns in many batch runs and sluggish web service interfaces.
At that point the hunt for the fault begins, and several days are easily spent before it is found. In many cases such a small change does not trigger a comprehensive performance test at all, and the problems only surface in production. Performance testing that covers a whole application typically takes a lot of calendar time, and correcting the findings delays the entire delivery schedule.
When we add performance metrics to the automation testing of an application, we catch both the SQL statements that have changed in the different services and the change in the number of SQL statements executed. In this example, the change to the general service affected many different test cases.
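One way to surface such changes is to compare the per-test-case SQL statement counts of a run against a stored baseline. A minimal sketch, where the data shapes (test case name mapped to statement counts) are an assumption, not a fixed format:

```python
def changed_statements(baseline, current):
    """Compare per-test-case SQL counts against a baseline run.

    Both arguments map test case name -> {sql text: execution count}.
    Returns only the statements whose count changed, as (old, new) pairs.
    """
    report = {}
    for test, stmts in current.items():
        base = baseline.get(test, {})
        diffs = {sql: (base.get(sql, 0), count)
                 for sql, count in stmts.items()
                 if base.get(sql, 0) != count}
        for sql, count in base.items():   # statements that disappeared entirely
            if sql not in stmts:
                diffs[sql] = (count, 0)
        if diffs:
            report[test] = diffs
    return report

# The list-iteration bug described above shows up as one extra query per row.
baseline = {"test_user_list": {"SELECT * FROM users": 1}}
current = {"test_user_list": {"SELECT * FROM users": 1,
                              "SELECT dept FROM departments WHERE id = ?": 50}}
print(changed_statements(baseline, current))
# {'test_user_list': {'SELECT dept FROM departments WHERE id = ?': (0, 50)}}
```

A statement count jumping from 0 to 50 in one test case is exactly the kind of signal that turns a multi-day production hunt into a same-day code review comment.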
SQL statement changes
Another similar issue arises when a developer changes a widely used SQL statement, for example by adding a new subquery. The SELECT then starts reading a whole, much larger table. This kind of problem cannot be found even in smaller test runs, because a small table is read quickly. The sluggishness only begins when the application is tested against a large performance or production database.
Increase your chances of success
When we add performance metrics to the standard automation tests of an application, we can immediately see how the number of rows read in different services has changed compared to the situation before the change.
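The rows-read comparison can be automated in the same way as the statement counts. A small sketch that flags test cases whose rows-read total has grown sharply since the baseline run; the threshold values are arbitrary assumptions and would be tuned per project:

```python
def rows_read_regressions(baseline, current, factor=2.0, min_rows=100):
    """Flag test cases whose rows-read count grew by more than `factor`.

    Both arguments map test case name -> total rows read during the test.
    `min_rows` suppresses noise from trivially small counts.
    """
    flagged = []
    for test, rows in sorted(current.items()):
        before = baseline.get(test, 0)
        if rows >= min_rows and rows > before * factor:
            flagged.append((test, before, rows))
    return flagged

# A subquery change turning an index lookup into a table scan shows up
# as a large jump in rows read, even though the test still passes quickly.
baseline = {"test_invoice_batch": 1_200, "test_login": 40}
current = {"test_invoice_batch": 95_000, "test_login": 45}
print(rows_read_regressions(baseline, current))
# [('test_invoice_batch', 1200, 95000)]
```

Because rows read grows with table size, catching the ratio change on a small test database is what lets the problem be fixed before it ever meets a production-sized one.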
In large development projects, performance testing as part of test automation greatly accelerates finding performance problems, and thus gives the whole project a better chance of success. Too often, performance is only fixed afterwards, putting out the fires wherever they rage.
In addition, you get much more: the number of sorts, the number of failed SQL statements, the number of commits and rollbacks, lock wait times, and so on.
What does my service do?
The analysis service I provide to you and your organization works as follows:
The first run shows the performance of the selected test cases. For example, I identify and list the Top 10 best and worst performing cases.
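Ranking the cases is straightforward once the per-test metrics exist. A sketch, assuming elapsed time per test case as the ranking metric (any of the collected counters would work the same way):

```python
def top_and_bottom(durations, n=10):
    """Rank test cases by elapsed time: returns (n fastest, n slowest)."""
    ranked = sorted(durations.items(), key=lambda item: item[1])
    return ranked[:n], list(reversed(ranked[-n:]))

# Hypothetical elapsed times in seconds per test case.
durations = {"test_login": 0.3, "test_search": 4.1, "test_report": 12.7}
best, worst = top_and_bottom(durations, n=2)
print(best)   # [('test_login', 0.3), ('test_search', 4.1)]
print(worst)  # [('test_report', 12.7), ('test_search', 4.1)]
```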
Over the following days, test cases whose performance has changed since the previous run are captured.
The service is available for SQL Server, DB2 and Oracle.