Worst-case execution time
The worst-case execution time (WCET) of a computational task is the maximum length of time the task could take to execute on a specific hardware platform. Knowing worst-case execution times is of prime importance for the schedulability analysis of hard real-time systems.
Analysis structure
Timing analysis is in general performed on two levels:
- Worst-case execution time (WCET) analysis
- Higher-level/system-level analysis
WCET analysis considers the execution time of a single task in isolation. At this level, activities other than those related to the task under consideration are ignored, and tasks are assumed never to block or be interrupted (blocking is dealt with by the schedulability analysis).
At the higher level, the overall system performance is analyzed, given the results of the WCET analysis for each task or program in the system. Multiple tasks are usually assumed to execute on a single processor and to compete for resources, and may therefore block while attempting to access them. The most common type of analysis at this level is schedulability analysis, for example fixed-priority or rate-monotonic scheduling analysis. The tightness, or precision, of schedulability analysis depends on the accuracy of the WCET analysis: if the WCET values are pessimistic (greater than any execution time that can occur in the running system), the scheduler will be forced to allocate more time to those tasks than they actually require.
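As a minimal illustration of how WCET estimates feed into schedulability analysis, the sketch below (in Python, with hypothetical task parameters) applies the classic Liu and Layland utilization bound for rate-monotonic scheduling; the WCET bounds appear as the execution-time terms of the utilization sum.

```python
def rm_utilization_test(tasks):
    """Liu & Layland utilization-based test for rate-monotonic scheduling.

    tasks: list of (wcet, period) pairs; deadlines are assumed equal to periods.
    Returns True if the task set is guaranteed schedulable under fixed-priority
    rate-monotonic scheduling (the test is sufficient, not necessary).
    """
    n = len(tasks)
    utilization = sum(wcet / period for wcet, period in tasks)
    bound = n * (2 ** (1.0 / n) - 1)   # tends to ln 2 (about 0.693) as n grows
    return utilization <= bound

# Hypothetical task set: (WCET bound, period) in the same time unit.
# Pessimistic WCET values inflate the utilization and may cause this test
# to reject a task set that would in fact meet all its deadlines.
tasks = [(2, 10), (3, 15), (5, 35)]
print(rm_utilization_test(tasks))
```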
A static WCET analysis tool should be able to work at a high level to determine the structure of a program's task, working either on a piece of source code or on a disassembled binary executable. It should also work at a low level, using timing information about the real hardware the task will execute on, with all its specific features. By combining these two kinds of analysis, the tool should produce an upper bound on the time required to execute a given task on a given hardware platform.
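The sketch below illustrates, under simplifying assumptions, how the two levels can be combined: per-basic-block time bounds stand in for the low-level hardware analysis, and a structured program tree with explicit loop bounds stands in for the high-level flow analysis (a "timing schema" style computation; real tools typically use more general formulations, such as integer linear programming over the control-flow graph). All block names, times, and loop bounds are hypothetical.

```python
# Minimal "timing schema" sketch: combine per-basic-block time bounds
# (from low-level hardware analysis) with program structure and loop
# bounds (from high-level flow analysis) to obtain a WCET upper bound.
# The node shapes and numbers below are illustrative only.

def wcet(node, block_time):
    kind = node[0]
    if kind == "block":                      # ("block", name)
        return block_time[node[1]]
    if kind == "seq":                        # ("seq", [children...])
        return sum(wcet(child, block_time) for child in node[1])
    if kind == "if":                         # ("if", cond, then, else)
        _, cond, then_branch, else_branch = node
        return wcet(cond, block_time) + max(
            wcet(then_branch, block_time), wcet(else_branch, block_time))
    if kind == "loop":                       # ("loop", bound, body)
        _, bound, body = node
        return bound * wcet(body, block_time)
    raise ValueError(f"unknown node kind: {kind}")

# Per-block upper bounds in cycles (a real tool would derive these from
# a timing model of the target hardware).
block_time = {"init": 12, "cond": 4, "work": 30, "fixup": 8, "exit": 6}

program = ("seq", [
    ("block", "init"),
    ("loop", 100,                            # loop bound from flow analysis
        ("if", ("block", "cond"), ("block", "work"), ("block", "fixup"))),
    ("block", "exit"),
])

print(wcet(program, block_time), "cycles (upper bound)")
```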
At the low level, static WCET analysis is complicated by the presence of architectural features that improve the average-case performance of the processor, such as instruction and data caches, branch prediction, and instruction pipelines. Tight WCET bounds can still be determined if these features are taken into account in the timing model used by the analysis.
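As one example of how such features are modeled, many static analyzers classify memory accesses with an abstract-interpretation-based cache "must" analysis: accesses guaranteed to hit need not be charged a miss penalty. The sketch below shows the core update and join operations for a fully associative LRU cache under simplified assumptions; the associativity and access sequence are illustrative only.

```python
# Sketch of an LRU cache "must" analysis as used in many static WCET tools.
# An abstract state maps a memory block to an upper bound on its age in a
# fully associative cache with ASSOC ways. Blocks present in the abstract
# state on every path are guaranteed hits, so no miss penalty is charged.
# The associativity and access sequence below are made up for illustration.

ASSOC = 4

def update(state, block):
    """Abstract cache update for an access to `block`."""
    new_state = {}
    old_age = state.get(block, ASSOC)        # ASSOC means "possibly not cached"
    for b, age in state.items():
        if b == block:
            continue
        new_age = age + 1 if age < old_age else age   # blocks younger than the
        if new_age < ASSOC:                           # accessed one grow older
            new_state[b] = new_age
    new_state[block] = 0                     # accessed block becomes youngest
    return new_state

def join(s1, s2):
    """Join at control-flow merge points: keep only blocks guaranteed to be
    cached on both incoming paths, at their maximal (safest) age."""
    return {b: max(s1[b], s2[b]) for b in s1.keys() & s2.keys()}

state = {}
for access in ["a", "b", "a", "c"]:
    hit_guaranteed = access in state
    state = update(state, access)
    print(access, "guaranteed hit" if hit_guaranteed else "charge miss penalty")
```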
Besides static analysis approaches, which have dominated research in the area since the late 1980s, dynamic or measurement-based approaches have recently entered the research arena. The motivation cited by researchers in this branch is that computing hardware (the CPU in particular) has reached a level of complexity that is extremely hard to model. The modeling process can introduce errors from several sources: errors in chip design, lack of documentation, errors in documentation, and errors in model creation, all leading to cases where the model predicts different behavior from that observed on real hardware. Certification authorities such as the European Aviation Safety Agency therefore rely on model validation suites. On the other hand, measurement-based approaches are also considered potentially inaccurate, because they rely on observing the worst-case effects during testing. Measurement-based approaches usually measure the execution times of short code segments (basic blocks) and then use static analysis methods to compute the worst-case behavior of the code as a whole. This is driven by the observation that the WCET of a basic block is easy to measure, whereas creating a test case in which every block on the worst-case path is exercised is extremely difficult. In addition, the dependence of execution time on the execution state of the architecture causes an explosion of the search space.
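A minimal sketch of the hybrid idea described above: the maximum observed execution time of each basic block is taken over a set of instrumented test runs, and those per-block maxima are then combined by a static path analysis (such as the structural computation sketched earlier) rather than relying on any single end-to-end measurement. The trace data here are made up.

```python
# Hybrid measurement-based sketch: per-block execution times come from
# instrumentation traces of many test runs; the per-block maxima are then
# combined by static path analysis instead of trusting any single
# end-to-end measurement. The trace values below are hypothetical.

traces = {
    "init":  [10, 11, 12],
    "cond":  [3, 4, 4],
    "work":  [25, 30, 28],
    "fixup": [7, 8, 8],
    "exit":  [5, 6, 6],
}

# Caveat: the observed maximum of a block may still undershoot its true
# WCET if the worst hardware state was never triggered during testing.
measured_block_time = {name: max(samples) for name, samples in traces.items()}

# These maxima can now replace the statically derived block_time table
# in the structural WCET computation sketched above.
print(measured_block_time)
```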
Historically, industry has either used end-to-end measurements with an added safety margin for soft real-time systems, or manual static analysis on simple hardware for safety-critical systems. Recently, industry has shown more interest in research into automated methods of calculating WCET. Complexity is increasingly becoming an issue in manual analysis, and safety margins have become a liability in soft real-time systems: they are either too generous, increasing the cost of the device, or too tight, causing device failure. In the future, it is likely that safety-critical systems will be required to be analyzed using both static and measurement-based approaches.
Research groups
The most active research groups are in Sweden (Mälardalen, Linköping), Germany (Saarbrücken, Dortmund, Braunschweig), France (Toulouse, Rennes), Austria (Vienna), UK (York), Italy (Bologna), Spain (Cantabria, Valencia), and Switzerland (Zurich). Recently, the topic of code-level timing analysis has received more attention outside of Europe from research groups in the US (North Carolina, Florida), Canada, Australia, and Singapore.
WCET Tool Challenge
The first international WCET Tool Challenge took place during the autumn of 2006. It was organized by the University of Mälardalen and sponsored by the ARTIST2 Network of Excellence on Embedded Systems Design. The aim of the Challenge was to inspect and compare different approaches to analyzing the worst-case execution time. All available tools and prototypes able to determine safe upper bounds for the WCET of tasks participated. The final results were presented in November 2006 at the ISoLA 2006 International Symposium in Paphos, Cyprus.
A second Challenge took place in 2008 [1].
See also
- WCET-aware Compilation / The WCET-aware C Compiler WCC
- Big O notation
- Optimization (computer science)
- Best and worst cases
Articles and white papers
- The Worst-Case Execution Time Problem - Overview of Methods and Survey of Tools (PDF)
- Worst-Case Execution Time Prediction by Static Program Analysis (PDF)
- WCET Analysis of Probabilistic Hard Real Time Systems (PDF)
- OTAWA, a Framework for Experimenting WCET Computations (PDF)
- WCET Tool Challenge 2006 extended test results analysis of final report (Journal article in Springer)
- WCET Tool Challenge 2006 final report (PDF)
- A compiler framework for the reduction of worst-case execution times (PDF)