Cumulative accuracy profile
A cumulative accuracy profile (CAP) is a concept used in data science to visualize the discriminative power of a classification model. The CAP of a model plots the cumulative number of positive outcomes along the y-axis against the cumulative number of cases, ordered by the model's classifying parameter, along the x-axis. The resulting curve is called the CAP curve.[1] The CAP is distinct from the receiver operating characteristic (ROC) curve, which plots the true-positive rate against the false-positive rate. The CAP is used to evaluate the performance of a classification model and to understand its robustness.
Analyzing a CAP
A cumulative accuracy profile can be used to evaluate a model by comparing its curve to the 'perfect' CAP, in which the maximum number of positive outcomes is reached as quickly as possible, and to the 'random' CAP, in which the positive outcomes are distributed uniformly along the classifying parameter. A good model has a CAP between the perfect and random curves; the closer a model's CAP is to the perfect CAP, the better the model.
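A minimal sketch, not drawn from the article's sources, of how the model, random, and perfect CAP curves can be constructed and compared; the names `y_true` and `y_score` and the simulated data are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def cap_curve(y_true, y_score):
    """Return (x, y): cumulative fraction of cases vs. cumulative fraction of positives captured."""
    order = np.argsort(-y_score)               # rank cases from highest to lowest score
    cum_positives = np.cumsum(y_true[order])   # positives captured as we move down the ranking
    x = np.arange(1, len(y_true) + 1) / len(y_true)
    y = cum_positives / y_true.sum()
    return x, y

# Simulated data: binary outcomes and a noisy score correlated with them (illustrative only).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = y_true * 0.5 + rng.random(1000)

x, y_model = cap_curve(y_true, y_score)
y_random = x                                                   # random CAP: straight diagonal
y_perfect = np.minimum(x * len(y_true) / y_true.sum(), 1.0)    # perfect CAP: all positives ranked first

plt.plot(x, y_model, label="model CAP")
plt.plot(x, y_random, "--", label="random CAP")
plt.plot(x, y_perfect, ":", label="perfect CAP")
plt.xlabel("cumulative fraction of cases (ranked by score)")
plt.ylabel("cumulative fraction of positives")
plt.legend()
plt.show()
```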
The accuracy ratio (AR) is defined as the ratio of the area between the model CAP and the random CAP to the area between the perfect CAP and the random CAP.[2] For a successful model, the AR lies between zero and one; the higher the value, the stronger the model.
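Reusing the arrays from the sketch above, the accuracy ratio can be approximated by numerically integrating the areas between the curves; this is an illustrative calculation under those assumptions, not a prescribed procedure.

```python
import numpy as np

# For simplicity, the curves start at the first ranked case rather than at the origin.
area_model   = np.trapz(y_model,   x) - np.trapz(y_random, x)   # area between model and random CAP
area_perfect = np.trapz(y_perfect, x) - np.trapz(y_random, x)   # area between perfect and random CAP
accuracy_ratio = area_model / area_perfect
print(f"AR = {accuracy_ratio:.3f}")   # close to 1 for a strong model, near 0 for a random one
```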
Another indication of a model's strength is the cumulative number of positive outcomes captured at 50% of the classifying parameter. For a successful model, this value lies between 50% and 100% of the maximum, with higher percentages for stronger models.
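The same assumed arrays can be used to read off the fraction of positives captured at 50% of the ranked cases, as a rough illustration of this check.

```python
import numpy as np

idx = np.searchsorted(x, 0.5)        # index closest to 50% of the ranked cases
captured_at_half = y_model[idx]      # fraction of all positives captured by that point
print(f"positives captured at 50%: {captured_at_half:.1%}")  # between 50% and 100% for a useful model
```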
In rare cases, the accuracy ratio can be negative, which indicates that the model performs worse than the random CAP.
Applications
- The cumulative accuracy profile (CAP) and the receiver operating characteristic (ROC) are both commonly used by banks and regulators to analyze the discriminatory ability of rating systems that evaluate credit risks.[3]
- CAP is also used by instructional design engineers: it provides an objective method, based on the CAP curve, for assessing, retraining, and rebuilding the instructional design models used in constructing courses.[4]
- Professors and school authorities also use this approach to improve decision-making and to manage educational resources more efficiently.
References
- ^ "CUMULATIVE ACCURACY PROFILE AND ITS APPLICATION IN CREDIT RISK". www.linkedin.com. Retrieved 2020-12-11.
- ^ Calabrese, Raffaella (2009), The validation of Credit Rating and Scoring Models (PDF), Swiss Statistics Meeting, Geneva, Switzerland
{{citation}}
: CS1 maint: location missing publisher (link) - ^ Engelmann, Bernd; Hayden, Evelyn; Tasche, Dirk (2003), "Measuring the Discriminative Power of Rating Systems", Discussion Paper, Series 2: Banking and Financial Supervision (No 01)
{{citation}}
:|issue=
has extra text (help) - ^ Sobehart, Jorge; Keenan, Sean; Stein, Roger (2000-05-15), "Validation methodologies for default risk models" (PDF), Moody's Risk Management Services