Software metric
A software metric is a measure of some property of a piece of software or its specifications.
Since quantitative methods have proved so powerful in the other sciences, computer science practitioners and theoreticians have worked hard to bring similar approaches to software development. Tom DeMarco stated, “You can’t control what you can’t measure.”[1]
Common software metrics
Common software metrics include:
- Source lines of code
- Cyclomatic complexity
- Function point analysis
- Bugs per line of code
- Code coverage
- Number of lines of customer requirements
- Number of classes and interfaces
- Robert Cecil Martin’s software package metrics
- Cohesion
- Coupling
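As a rough illustration of the first two metrics in the list above, the following sketch counts non-blank, non-comment lines and approximates McCabe's cyclomatic complexity (one plus the number of decision points) for a Python snippet. The conventions used here are simplified assumptions, not the definitions used by any particular measurement tool.

```python
# Simplified sketch of two metrics from the list above, for Python source.
# Real measurement tools use more careful definitions of both quantities.
import ast


def source_lines_of_code(source: str) -> int:
    """Count non-blank, non-comment lines (one common SLOC convention)."""
    return sum(
        1
        for line in source.splitlines()
        if line.strip() and not line.strip().startswith("#")
    )


def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity as 1 + the number of decision points."""
    decision_nodes = (ast.If, ast.For, ast.While, ast.BoolOp,
                      ast.ExceptHandler, ast.IfExp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, decision_nodes) for node in ast.walk(tree))


example = '''
def classify(n):
    # negative, zero, or positive
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
'''

print(source_lines_of_code(example))   # 6
print(cyclomatic_complexity(example))  # 3 (two if tests, plus one)
```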
Limitations
It is very difficult to satisfactorily define or measure "how much" software there is in a program, especially when making such a prediction prior to detailed design. The practical utility of software metrics has thus been limited to narrow domains where the measurement process can be stabilized.
Management methodologies such as the Capability Maturity Model or ISO 9000 have therefore focused more on process metrics which assist in monitoring and controlling the processes that produce the software.
Examples of process metrics affecting software:
- Number of times the program failed to rebuild overnight
- Number of defects introduced per developer hour
- Number of changes to requirements
- Hours of programmer time available and spent per week
- Number of patch releases required after first product ship
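As a simple illustration, a process metric such as defects introduced per developer hour is just a ratio over data the process already records. The record fields below are hypothetical.

```python
# Hypothetical weekly process data; the field names are illustrative only.
weekly_records = [
    {"week": 1, "defects_introduced": 6, "developer_hours": 160},
    {"week": 2, "defects_introduced": 3, "developer_hours": 152},
    {"week": 3, "defects_introduced": 9, "developer_hours": 168},
]

total_defects = sum(r["defects_introduced"] for r in weekly_records)
total_hours = sum(r["developer_hours"] for r in weekly_records)

# 18 defects over 480 developer hours, i.e. 0.0375 defects per developer hour.
print(total_defects / total_hours)
```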
Criticisms
Software metrics tend to be used as an aid in judging the quality of software development. Metrics are relatively easy to produce, but their use as a management instrument has drawbacks:
- Unethical: It is said to be unethical to reduce a person’s performance to a small set of numbers and then judge them by that measure. A supervisor may assign the most talented programmer to the hardest tasks on a project; those tasks may then take the longest to complete and generate the most defects because of their difficulty. Uninformed managers overseeing the project might judge that programmer as performing poorly, without consulting the supervisor who has the full picture.
- Demeaning: “Management by numbers” without regard to the quality of experience of the employees, instead of “managing people.”
- Gaming: Employees being measured can bias the measurement process by acting to maximize management’s perception of their performance. For example, if lines of code are used to judge performance, employees will write as many separate lines of code as possible, and if they find a way to shorten their code, they may not use it (see the example following this list).
- Inaccurate: No known metrics are both meaningful and accurate. Lines of code measure exactly what is typed, but not the difficulty of the problem. Function points were developed to better measure the complexity of the code or specification, but they require personal judgment to apply well, and different estimators will produce different results. This makes function points hard to use fairly and unlikely to be used well by everyone.
- Uneconomical/Suboptimal: It has been argued that when the economic value of measurements is computed using proven methods from decision theory, measuring software developer performance turns out to be a much lower priority than measuring uncertain benefits and risks.[2]
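The lines-of-code gaming described above can be seen in a small example: two functionally identical routines whose raw line counts differ by roughly a factor of three, so the padded version "scores" better if lines of code is the measure.

```python
# Two functionally identical routines. If raw line counts drive reviews,
# the padded version "scores" roughly three times higher than the concise one.

def sum_of_squares_padded(values):
    total = 0
    for value in values:
        square = value * value
        total = total + square
    return total


def sum_of_squares_concise(values):
    return sum(v * v for v in values)


assert sum_of_squares_padded([1, 2, 3]) == sum_of_squares_concise([1, 2, 3]) == 14
```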
Gaming metrics
Industry experience suggests that the design of metrics will encourage certain kinds of behaviour from the people being measured. The common phrase applied is “you get what you measure” (or “be careful what you wish for”).
A simple and quite common example is the cost-per-function-point metric applied in some software process improvement programs as an indicator of productivity. The simplest way to achieve a lower cost per function point is to make function points arbitrarily smaller. Since there is no standard way of measuring function points, the metric is wide open to gaming – that is, cheating.
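A hypothetical worked example of this effect: the project and its cost stay the same, but a more generous function point count halves the reported cost per function point.

```python
# Hypothetical figures: same project, same cost, two function point counts.
project_cost = 100_000          # total cost, in some currency unit

honest_fp_count = 200           # one estimator's count
inflated_fp_count = 400         # same functionality, counted as smaller "points"

print(project_cost / honest_fp_count)    # 500.0 per function point
print(project_cost / inflated_fp_count)  # 250.0 per function point; "productivity" doubled on paper
```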
One school of thought on metrics design suggests that metrics communicate the real intention behind the goal, and that people should do exactly what the metric tells them to do. This is a spin-off of test-driven development, where developers are encouraged to write code specifically to pass the test; if that is the wrong code, then they wrote the wrong test. In the metrics design process, gaming is a useful tool for testing metrics and making them more robust, as well as for helping teams articulate their real goals more clearly and effectively.
Very few industry-standard metrics stand up to even moderate gaming.
Balancing metrics
One way to avoid the “be careful what you wish for” trap is to apply a suite of metrics that balance each other out. In software projects, it is advisable to have at least one metric for each of the following:
- Schedule
- Size/Complexity
- Cost
- Quality
Too much emphasis on any one of these aspects of performance is likely to create an imbalance in the team’s motivations, leading to a dysfunctional project.
The Balanced scorecard is a useful tool for managing a suite of metrics that address multiple performance perspectives.
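A minimal sketch of such a balanced suite, with one assumed indicator per perspective, might look like the following; the metric choices and target values are illustrative only, not standards.

```python
# Hypothetical balanced suite: one indicator per perspective, reported together.
from dataclasses import dataclass


@dataclass
class Metric:
    name: str
    value: float
    target: float
    higher_is_better: bool


balanced_suite = {
    "Schedule": Metric("milestones hit on time (%)", 82.0, 90.0, True),
    "Size":     Metric("function points delivered", 310.0, 300.0, True),
    "Cost":     Metric("cost per function point", 480.0, 500.0, False),
    "Quality":  Metric("defects found after release", 12.0, 10.0, False),
}

for perspective, m in balanced_suite.items():
    on_target = m.value >= m.target if m.higher_is_better else m.value <= m.target
    print(f"{perspective:8s} {m.name}: {m.value} ({'on target' if on_target else 'off target'})")
```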
See also
- Software engineering
- Computer science
- Software quality
- Software package metrics
- ohloh: quantitative analysis of hundreds of open source projects
- List of code quality management dashboards
References
- ^ DeMarco, Tom. Controlling Software Projects: Management, Measurement and Estimation. ISBN 0-13-171711-1.
- ^ Hubbard, Douglas. “The IT Measurement Inversion”. CIO Magazine, 1999.
External links
- International Function Point Users Group
- What is FPA at Nemsa website
- Estimating With Use Case Points by Mike Cohn. Describes how to measure the size of an application modeled with UML, using use cases.
- OO & Agile Metrics Resources - includes workshop material on gaming metrics to improve their design
- A pragmatic approach to software engineering measurement tools
- Further defines the term Software Metrics with examples.