Scaling is an essential part of load testing that involves increasing the number of virtual users, transactions, or data volume to measure the system’s performance under stress.
It helps identify potential bottlenecks, such as hardware limitations or software bugs, and lets teams adjust the system's configuration or hardware to improve performance.
Conducting scaling tests before running full-scale tests is a good idea because it helps teams determine the system's capacity and confirm it can support the required load before committing to expensive and time-consuming tests.
By doing so, teams can avoid the risk of system failures, minimize downtime, and ultimately provide a better user experience.
How to run scaling tests
Scaling tests are run on the main test environment to establish that a single script, and the expected mix of scripts, can run on the available tool hardware and that the server can support the necessary loads.
Bottlenecks are usually discovered and fixed during scaling tests. Baseline information about response times and the capacity of hardware is also collected to predict the results of larger tests so that you can plan and acquire the right hardware.
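One way to use that baseline data is a rough capacity estimate. The sketch below assumes roughly linear scaling, which is a simplification (contention and coordination costs usually make real systems scale sublinearly), and the numbers are illustrative:

```python
def servers_needed(target_users: int, users_per_server: int) -> int:
    """Estimate servers required for a target load from a baseline
    measurement, assuming roughly linear scaling."""
    if users_per_server <= 0:
        raise ValueError("users_per_server must be positive")
    # Round up: partial capacity still requires a whole server.
    return -(-target_users // users_per_server)

# Baseline: one server handled 250 virtual users before response
# times started to degrade.
print(servers_needed(2000, 250))  # 8 servers for a 2,000-user test
```

Even a crude estimate like this helps when planning and acquiring hardware for the larger tests, provided the linearity assumption is treated as an upper bound on what each server can do.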
Scaling tests also serve as practice for test definition, execution, monitoring, analysis, reporting, and environment reset.
During scaling tests, you learn whether the hardware can support the workload; if it cannot, the hardware, configuration, application, or goals can be adjusted.
Full-scale testing is the formal demonstration of that capacity.
In some cases, soak testing is also performed, where it is seen if the application and hardware can run for an extended period.
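A common concern during a soak test is gradual degradation, such as memory slowly climbing over hours of steady load. A minimal sketch of one way to flag this, assuming periodic samples of a metric (the fitted linear trend is an illustration, not a prescribed method):

```python
def degrades_over_time(samples: list[float], max_slope: float = 0.0) -> bool:
    """Fit a simple linear trend to periodic measurements taken during a
    soak test (e.g. memory usage in MB, sampled hourly). A slope above
    max_slope hints at a leak or gradual degradation."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    # Ordinary least-squares slope over evenly spaced samples.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return slope > max_slope

print(degrades_over_time([512, 540, 571, 603]))  # True: steady climb
print(degrades_over_time([512, 514, 511, 513]))  # False: flat
```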
A full-scale test should always be run at least twice, in case one of the results is a fluke.
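When comparing the two runs, a simple relative-tolerance check can flag a fluke. This is an illustrative sketch; the 10% tolerance and the p95 metric are assumptions, not fixed rules:

```python
def runs_agree(run_a: float, run_b: float, tolerance: float = 0.10) -> bool:
    """Check whether two full-scale test results (e.g. p95 response
    times in seconds) agree within a relative tolerance. A large gap
    suggests one run was a fluke and the test should be repeated."""
    return abs(run_a - run_b) <= tolerance * max(run_a, run_b)

print(runs_agree(1.82, 1.95))  # True: within 10%, results confirm each other
print(runs_agree(1.82, 3.40))  # False: rerun before trusting either number
```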
Looking at the collected data
After a test, the collected data is analyzed to determine whether the goals have been met, identify any issues or trouble spots, and decide how they can be fixed.
Changes are then planned, such as hardware, software, or configuration adjustments, forming a ranked list of candidate fixes.
Starting at the top of the list, changes are applied in order of which would be the quickest, cheapest, and lowest risk to implement.
After each change, the test is rerun to verify that the fix worked. This is often an iterative process, repeated until either time or money runs out.
The final report compares the results from full-scale tests to the goals and adjustments made.
The final report will also state whether the current configuration met the goals set earlier and how long that is expected to hold.
Sometimes the solution will last for some time, sometimes it is fragile and needs further work, and sometimes the goals simply cannot be met.
In most cases, testing stops before all possible improvements have been made, so the final report identifies opportunities that may improve stability or performance.