Software performance testing
In software engineering, performance testing is testing that is performed to determine how fast some aspect of a system performs under a particular workload.
Performance testing is a subset of performance engineering, an emerging computer science practice which strives to build performance into the design and architecture of a system, prior to the onset of actual coding effort.
Performance testing can serve different purposes. It can demonstrate that the system meets performance criteria, compare two systems to find which performs better, or measure which parts of the system or workload cause the system to perform badly. In the diagnostic case, software engineers use tools such as profilers to measure which parts of a device or application contribute most to the poor performance, or to establish throughput levels (and thresholds) at which acceptable response times are maintained. It is critical to the cost-effectiveness of a new system that performance test efforts begin at the inception of the development project and extend through to deployment. The later a performance defect is detected, the higher the cost of remediation. This is true of functional testing as well, but even more so of performance testing, due to the end-to-end nature of its scope.
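As a minimal illustration of the diagnostic case, the sketch below uses Python's built-in cProfile module to show which parts of a program contribute most to elapsed time. The handler and its stages are hypothetical stand-ins, not part of any real system.

```python
# Minimal sketch: profiling a hypothetical request handler with Python's
# built-in cProfile to see which calls dominate the elapsed time.
import cProfile
import pstats
import time


def parse_request():          # hypothetical stand-ins for real subsystems
    time.sleep(0.01)


def query_database():
    time.sleep(0.05)          # deliberately the slowest step


def render_response():
    time.sleep(0.02)


def handle_request():
    parse_request()
    query_database()
    render_response()


profiler = cProfile.Profile()
profiler.enable()
for _ in range(20):           # exercise the code path under a small workload
    handle_request()
profiler.disable()

# Sort by cumulative time so the slowest part (query_database) tops the list.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```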
It is important (and often difficult to arrange) for the performance test conditions to be similar to the expected actual use.
Testing technology
Performance testing technology employs one or more PCs to act as injectors – each emulating the presence of a number of users and each running an automated sequence of interactions (recorded as a script, or as a series of scripts to emulate different types of user interaction) with the host whose performance is being tested. Usually, a separate PC acts as a test conductor, coordinating and gathering metrics from each of the injectors and collating performance data for reporting purposes. The usual sequence is to ramp up the load – starting with a small number of virtual users and increasing the number over a period to some maximum. The test result shows how performance varies with load, expressed as number of users versus response time. Various tools are available to perform such tests; tools in this category usually execute a suite of tests that emulate real users against the system. Sometimes the results can reveal oddities, e.g., that while the average response time might be acceptable, a few key transactions have outliers that take considerably longer to complete – something that might be caused by inefficient database queries, etc.
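The following is a minimal single-machine sketch of the injector idea: virtual users are modeled as threads, ramped up in stages, and each records the response time of one scripted request. The target URL, ramp profile, and step duration are placeholder assumptions, not a real test configuration.

```python
# Minimal single-machine sketch of a load injector: virtual users are ramped
# up in stages and each records the response time of a scripted request.
import threading
import time
import urllib.request
from statistics import mean

TARGET = "http://localhost:8080/"    # hypothetical system under test
results = []                          # (virtual_user_count, response_time) samples
results_lock = threading.Lock()


def virtual_user(active_users, duration_s):
    """Run one scripted interaction repeatedly for duration_s seconds."""
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        start = time.monotonic()
        try:
            urllib.request.urlopen(TARGET, timeout=10).read()
        except OSError:
            continue                  # count only successful transactions here
        with results_lock:
            results.append((active_users, time.monotonic() - start))


# Ramp up: 5, 10, ..., 25 concurrent virtual users, 30 seconds per step.
for users in range(5, 30, 5):
    threads = [threading.Thread(target=virtual_user, args=(users, 30))
               for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Report response time versus load, as a test conductor would collate it.
for users in sorted({u for u, _ in results}):
    times = [rt for u, rt in results if u == users]
    print(f"{users:3d} users: mean {mean(times):.3f}s, max {max(times):.3f}s")
```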
Performance testing can be combined with stress testing, in order to see what happens when an acceptable load is exceeded — does the system crash? How long does it take to recover after a large load is reduced? Does it fail in a way that causes collateral damage?
Performance specifications
It is critical to detail performance specifications (requirements) and document them in any performance test plan. Ideally, this is done during the requirements development phase of any system development project, prior to any design effort.
Except in real-time computing, performance testing is frequently not performed against a specification; that is, no one has expressed the maximum acceptable response time for a given population of users.
Performance testing will identify the constraints of the System Under Test. These constraints (bottlenecks) in processing data may be in software or hardware.
Performance testing is often used as part of the process of performance profile tuning. The goal is to identify the "weakest links": there are often a small number of parts in the system which, if they are made faster, will result in the overall system running noticeably faster. It is sometimes difficult to identify which parts of the system represent the critical paths. To help identify critical paths, some test tools include (or have as add-ons) instrumentation agents that run on the server and report transaction times, database access times, network overhead, and other server metrics. Without such instrumentation, the use of primitive system tools may be required (e.g. Task Manager in Microsoft Windows).
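A very reduced sketch of the instrumentation idea follows: a timing context manager records how long named stages of a transaction take so the slowest stages can be ranked. The stage names and sleep calls are hypothetical placeholders for real server work.

```python
# Minimal sketch of server-side instrumentation: record per-stage timings of
# a transaction so candidate "weakest links" can be ranked by total time.
import time
from collections import defaultdict
from contextlib import contextmanager

stage_timings = defaultdict(list)


@contextmanager
def timed(stage):
    start = time.perf_counter()
    try:
        yield
    finally:
        stage_timings[stage].append(time.perf_counter() - start)


def handle_transaction():
    with timed("database access"):
        time.sleep(0.04)              # stand-in for a database query
    with timed("business logic"):
        time.sleep(0.01)
    with timed("render response"):
        time.sleep(0.02)


for _ in range(50):
    handle_transaction()

# Report stages ranked by total time spent, worst first.
for stage, samples in sorted(stage_timings.items(), key=lambda kv: -sum(kv[1])):
    print(f"{stage:16s} total {sum(samples):.2f}s over {len(samples)} calls")
```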
Performance testing can be performed across the web, and even from different parts of the country, since the response times of the internet itself vary regionally. It can also be done in-house, although routers would then need to be configured to introduce the lag that would typically occur on public networks. Loads should be introduced to the system from realistic points. For example, if 50% of a system's user base will be accessing the system via a 56K modem connection and the other half over a T1, then the load injectors (computers that simulate real users) should either inject load over the same connections (ideal) or simulate the network latency of such connections, following the same user profile.
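The sketch below illustrates the simulated-latency option under the 50/50 user profile mentioned above: each virtual user is assigned a connection profile and an artificial delay approximating that link is added to every request. The bandwidth, latency, and payload figures are rough illustrative assumptions.

```python
# Minimal sketch of simulating connection profiles in an injector: half the
# virtual users are on 56K modems and half on a T1, and each request gets an
# added delay approximating that link. Figures are illustrative assumptions.
import random
import time

PROFILES = {
    "56K modem": {"bandwidth_bps": 56_000, "latency_s": 0.150},
    "T1":        {"bandwidth_bps": 1_544_000, "latency_s": 0.010},
}


def simulated_network_delay(profile, payload_bytes):
    """Return the extra delay this link would add to one request/response."""
    p = PROFILES[profile]
    return p["latency_s"] + (payload_bytes * 8) / p["bandwidth_bps"]


def virtual_user_request(profile, payload_bytes=20_000):
    # In a real injector the request to the system under test would happen
    # here; this sketch only models the added link delay.
    delay = simulated_network_delay(profile, payload_bytes)
    time.sleep(delay)
    return delay


# 50/50 user profile, as in the example above.
for _ in range(10):
    profile = random.choice(["56K modem", "T1"])
    print(f"{profile:9s} added {virtual_user_request(profile):.3f}s of link delay")
```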
It is always helpful to have a statement of the likely peak number of users that might be expected to use the system at peak times. If there is also a statement of the maximum allowable 95th percentile response time, then an injector configuration can be used to test whether the proposed system meets that specification.
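As a small worked example, the sketch below computes the 95th percentile of a set of measured response times and compares it against a stated maximum. The 2-second threshold and the generated sample data are placeholders, not figures from any real specification.

```python
# Minimal sketch: check measured response times against a stated maximum
# allowable 95th percentile. Threshold and samples are placeholder values.
import random

MAX_95TH_PERCENTILE_S = 2.0           # hypothetical requirement at peak load

# Stand-in for response times collected by the injectors at peak user load.
response_times = sorted(random.uniform(0.2, 2.5) for _ in range(1000))

# 95th percentile by rank: 95% of samples are at or below this value.
index = max(0, int(0.95 * len(response_times)) - 1)
p95 = response_times[index]

verdict = "PASS" if p95 <= MAX_95TH_PERCENTILE_S else "FAIL"
print(f"95th percentile response time: {p95:.2f}s -> {verdict}")
```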
Performance specifications (requirements) should address the following questions, at a minimum:
- In detail, what is the scope of the performance test? Which subsystems, interfaces, components, etc. are in and out of scope for this test?
- For the user interfaces (UIs) involved, how many concurrent users are expected for each (specify peak vs. nominal)?
- What does the target system (hardware) look like (specify all server and network appliance configurations)?
- What is the Application Workload Mix of each application component? (for example: 20% login, 40% search, 30% item select, 10% checkout – see the sketch after this list)
- What is the System Workload Mix? [Multiple workloads may be simulated in a single performance test] (for example: 30% Workload A, 20% Workload B, 50% Workload C)
- What are the time requirements for any/all backend batch processes (specify peak vs. nominal)?
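To make the workload-mix idea concrete, the sketch below drives a scripted virtual user according to the Application Workload Mix from the example above (20% login, 40% search, 30% item select, 10% checkout), picking each transaction by weighted random selection. The transaction functions are empty placeholders.

```python
# Minimal sketch of applying an Application Workload Mix: each virtual-user
# iteration picks its next transaction according to the specified weights.
import random

def login():        pass
def search():       pass
def item_select():  pass
def checkout():     pass

WORKLOAD_MIX = [          # (transaction, share of all transactions)
    (login, 0.20),
    (search, 0.40),
    (item_select, 0.30),
    (checkout, 0.10),
]

transactions = [t for t, _ in WORKLOAD_MIX]
weights = [w for _, w in WORKLOAD_MIX]


def run_virtual_user(iterations=1000):
    counts = {t.__name__: 0 for t in transactions}
    for _ in range(iterations):
        tx = random.choices(transactions, weights=weights)[0]
        tx()
        counts[tx.__name__] += 1
    return counts


# Over many iterations the observed mix approaches the specified percentages.
print(run_virtual_user())
```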
Tasks to undertake
Tasks to perform such a test would include:
- Decide whether to use internal or external resources to perform the tests, depending on in-house expertise (or lack thereof)
- Gather or elicit performance requirements (specifications) from users and/or business analysts
- Develop a high-level plan (or project charter), including requirements, resources, timelines and milestones
- Develop a detailed performance test plan (including detailed scenarios and test cases, workloads, environment information, etc.)
- Choose test tool(s)
- Specify the test data needed and charter the effort to prepare it (often overlooked, but often the death of a valid performance test)
- Develop proof-of-concept scripts for each application/component under test, using chosen test tools and strategies
- Develop detailed performance test project plan, including all dependencies and associated timelines
- Install and configure injectors/controller
- Configure the test environment (ideally identical hardware to the production platform), router configuration, a quiet network (so that results are not skewed by other traffic), deployment of server instrumentation, development of database test sets, etc.
- Execute tests – probably repeatedly (iteratively) – in order to see whether any unaccounted-for factor might affect the results (a sketch of this kind of run-to-run comparison appears after this list)
- Analyze the results: either pass/fail, or an investigation of the critical path and recommendation of corrective action
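The last two tasks can be illustrated with the small sketch below: the same test is run several times and any run whose mean response time deviates markedly from the overall mean is flagged as worth investigating (background jobs, caching effects, other network traffic). The run_test() placeholder and the 20% deviation threshold are assumptions made only for illustration.

```python
# Minimal sketch of the execute-and-analyze steps: run the same test several
# times, then flag runs whose mean response time deviates markedly from the
# overall mean, which may indicate an unaccounted-for factor.
import random
from statistics import mean


def run_test():
    """Placeholder for one full execution of the performance test scenario.

    Returns the response-time samples (seconds) collected during the run.
    """
    return [random.uniform(0.5, 1.5) for _ in range(200)]


run_means = [mean(run_test()) for _ in range(5)]
overall = mean(run_means)

for i, m in enumerate(run_means, start=1):
    deviation = abs(m - overall) / overall
    flag = "  <-- investigate" if deviation > 0.20 else ""
    print(f"run {i}: mean {m:.3f}s (deviation {deviation:.0%}){flag}")
```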
Also see stress testing.
Newsgroups
- comp.software.measurement
- gmane.comp.lang.c++.perfometer (Web)
- gmane.comp.lang.c++.perfometer (NNTP)
Tools
- C/C++ Program Perfometer
- OpenSTA - Open Systems Testing Architecture
- Mercury LoadRunner - Industry Standard Performance Test tool
- Facilita Forecast - Innovative Performance Test tool used by many corporations in the UK
- Segue SilkPerformer
See also benchmarking.