Cumulative accuracy profile

From Wikipedia, the free encyclopedia

The cumulative accuracy profile (CAP) is a concept used in data science to visualize the discriminative power of a model. The CAP of a model plots the cumulative number of positive outcomes along the y-axis against the corresponding cumulative number of cases, ranked by the classifying parameter, along the x-axis. The CAP is distinct from the receiver operating characteristic (ROC), which plots the true-positive rate against the false-positive rate. The CAP is used to evaluate the performance and robustness of a classification model.
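
For illustration, the curve can be constructed as follows, shown here as a minimal Python sketch (the function name build_cap and the use of NumPy are assumptions made for the example, not part of any standard implementation): the cases are sorted by the model's score, the positives are accumulated, and the running total is plotted against the number of cases considered.

    import numpy as np

    def build_cap(scores, outcomes):
        """Return x (fraction of cases considered) and y (fraction of positives
        captured) coordinates of a CAP curve.
        scores   -- model output used to rank the cases (higher means more likely positive)
        outcomes -- binary array with 1 for a positive outcome and 0 otherwise
        """
        order = np.argsort(scores)[::-1]                  # rank cases from highest to lowest score
        cum_pos = np.cumsum(np.asarray(outcomes)[order])  # positives captured so far
        x = np.arange(1, len(order) + 1) / len(order)     # fraction of cases considered
        y = cum_pos / cum_pos[-1]                         # fraction of positives captured
        return x, y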

Example

Consider a store with a total of 100,000 customers, who are placed on the horizontal axis. Experience shows that whenever the customers are offered a deal, roughly 10% of them respond and buy the product, so the 10,000 expected buyers are placed on the vertical axis. If a new offer is sent to customers chosen at random, purchases accumulate at this average rate, which is represented by a straight line whose slope corresponds to the 10% response rate. The question is then how to pick which customers to contact first. To answer it, a classification model is built. Since "purchased" is a binary variable (yes or no), data from an earlier campaign can be used: for a group of customers who received an offer, it is recorded who actually purchased, together with characteristics such as gender, country, age group and whether they browsed on a mobile device or on a computer. Putting these factors into, for example, a logistic regression yields a model that estimates the likelihood of purchase for each customer based on their demographic and behavioral characteristics.

The CAP curves of a perfect, a good and a random model predicting the buying customers from a pool of 100,000 individuals.

Once the model has been built, it is applied to the customer base: it ranks all customers by their predicted probability of purchasing, and the offer is sent to the highest-ranked customers first. Contacting nobody yields zero responses, but contacting the top 20,000 customers ranked by the model captures far more buyers than the roughly 2,000 expected when 20,000 customers are chosen at random. If the model is good, by the time around 60,000 customers have been contacted, nearly all of the 10,000 buyers have been reached. Plotting the cumulative number of purchases against the number of customers contacted, and drawing a line through these points, gives the cumulative accuracy profile of the model.
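
A minimal numerical sketch of this example in Python (the simulated model score, the random seed and the variable names are illustrative assumptions; only the 100,000 customers, the 10% purchase rate and the 20,000-contact comparison come from the description above):

    import numpy as np

    rng = np.random.default_rng(0)
    n_customers = 100_000

    # Roughly 10% of customers would actually buy the product.
    buyers = rng.random(n_customers) < 0.10

    # Hypothetical model score: noisy but correlated with the true outcome,
    # standing in for the output of, for example, a fitted logistic regression.
    score = buyers + rng.normal(scale=1.0, size=n_customers)

    order = np.argsort(score)[::-1]        # contact the highest-scoring customers first
    captured = np.cumsum(buyers[order])    # cumulative number of buyers reached

    print("Buyers reached in the top 20,000 contacts:", int(captured[20_000 - 1]))
    print("Buyers expected from 20,000 random contacts: about 2,000")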

Analyzing a CAP

The CAP can be used to evaluate a model by comparing its curve to the perfect CAP, in which the maximum number of positive outcomes is achieved directly, and to the random CAP, in which the positive outcomes are distributed equally. A good model has a CAP between the perfect CAP and the random CAP, and the better the model, the closer its curve lies to the perfect CAP.

The accuracy ratio (AR) is defined as the ratio of the area between the model CAP and the random CAP to the area between the perfect CAP and the random CAP.[1] For a successful model the AR has values between zero and one, with a higher value for a stronger model.
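
Under the definitions above, the ratio could be computed from the curve coordinates roughly as follows (a sketch only; the function names and the trapezoidal approximation of the areas are assumptions):

    import numpy as np

    def _area(y, x):
        # Trapezoidal approximation of the area under a piecewise-linear curve.
        return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

    def accuracy_ratio(x, y_model, y_perfect):
        """Area between the model CAP and the random CAP, divided by the area
        between the perfect CAP and the random CAP.
        x         -- cumulative fraction of cases considered (from 0 to 1)
        y_model   -- cumulative fraction of positives captured by the model
        y_perfect -- cumulative fraction of positives captured by a perfect model
        """
        y_random = x                       # the random CAP is the diagonal
        a_model = _area(y_model, x) - _area(y_random, x)
        a_perfect = _area(y_perfect, x) - _area(y_random, x)
        return a_model / a_perfect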

Another indication of a model's strength is given by the cumulative number of positive outcomes at 50% of the classifying parameter. For a successful model, this value should lie between 50% and 100% of the maximum, with a higher percentage for stronger models.
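
With the x and y coordinates of a CAP curve normalized to fractions, this check amounts to reading off the curve at the halfway point; a short sketch with purely illustrative values:

    import numpy as np

    # Illustrative CAP coordinates: fraction of cases considered (x) and
    # fraction of positives captured at those points (y).
    x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
    y = np.array([0.0, 0.45, 0.70, 0.90, 1.0])

    value_at_half = np.interp(0.5, x, y)   # positives captured at 50% of the cases
    print(f"{value_at_half:.0%} of all positives captured at the halfway point")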

In very rare cases the accuracy ratio can be negative. In this case, the model performs worse than the random CAP.

Applications

The CAP and the receiver operating characteristic (ROC) are both commonly used by banks and regulators to analyze the discriminatory ability of rating systems that evaluate credit risks.[2][3]

References

  1. ^ Calabrese, Raffaella (2009), The validation of Credit Rating and Scoring Models (PDF), Swiss Statistics Meeting, Geneva, Switzerland.
  2. ^ Engelmann, Bernd; Hayden, Evelyn; Tasche, Dirk (2003), "Measuring the Discriminative Power of Rating Systems", Discussion Paper, Series 2: Banking and Financial Supervision, No. 01.
  3. ^ Sobehart, Jorge; Keenan, Sean; Stein, Roger (2000-05-15), "Validation methodologies for default risk models" (PDF), Moody's Risk Management Services.