<!DOCTYPE html>
<html>
<head>
<title>InformationValue R Package</title>
<meta charset="utf-8">
<meta name="Description" content="R Language Tutorials for Advanced Statistics">
<meta name="Keywords" content="R, Tutorial, Machine learning, Statistics, Data Mining, Analytics, Data science, Linear Regression, Logistic Regression, Time series, Forecasting">
<meta name="Distribution" content="Global">
<meta name="Author" content="Selva Prabhakaran">
<meta name="Robots" content="index, follow">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="shortcut icon" href="/screenshots/iconb-64.png" type="image/x-icon" />
<link href="www/bootstrap.min.css" rel="stylesheet">
<link href="www/highlight.css" rel="stylesheet">
<link href='http://fonts.googleapis.com/css?family=Inconsolata:400,700'
rel='stylesheet' type='text/css'>
<!-- Color Script -->
<style type="text/css">
a {
color: #3675C5;
color: rgb(25, 145, 248);
color: #4582ec;
color: #3F73D8;
}
li {
line-height: 1.65;
}
/* reduce spacing around math formula*/
.MathJax_Display {
margin: 0em 0em;
}
</style>
<!-- Add Google search -->
<script language="Javascript" type="text/javascript">
function my_search_google()
{
var query = document.getElementById("my-google-search").value;
window.open("http://google.com/search?q=" + query
+ "%20site:" + "http://r-statistics.co");
}
</script>
</head>
<body>
<div class="container">
<div class="masthead">
<!--
<ul class="nav nav-pills pull-right">
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">
Table of contents<b class="caret"></b>
</a>
<ul class="dropdown-menu pull-right" role="menu">
<li class="dropdown-header"></li>
<li class="dropdown-header">Tutorial</li>
<li><a href="R-Tutorial.html">R Tutorial</a></li>
<li class="dropdown-header">ggplot2</li>
<li><a href="ggplot2-Tutorial-With-R.html">ggplot2 Short Tutorial</a></li>
<li><a href="Complete-Ggplot2-Tutorial-Part1-With-R-Code.html">ggplot2 Tutorial 1 - Intro</a></li>
<li><a href="Complete-Ggplot2-Tutorial-Part2-Customizing-Theme-With-R-Code.html">ggplot2 Tutorial 2 - Theme</a></li>
<li><a href="Top50-Ggplot2-Visualizations-MasterList-R-Code.html">ggplot2 Tutorial 3 - Masterlist</a></li>
<li><a href="ggplot2-cheatsheet.html">ggplot2 Quickref</a></li>
<li class="dropdown-header">Foundations</li>
<li><a href="Linear-Regression.html">Linear Regression</a></li>
<li><a href="Statistical-Tests-in-R.html">Statistical Tests</a></li>
<li><a href="Missing-Value-Treatment-With-R.html">Missing Value Treatment</a></li>
<li><a href="Outlier-Treatment-With-R.html">Outlier Analysis</a></li>
<li><a href="Variable-Selection-and-Importance-With-R.html">Feature Selection</a></li>
<li><a href="Model-Selection-in-R.html">Model Selection</a></li>
<li><a href="Logistic-Regression-With-R.html">Logistic Regression</a></li>
<li><a href="Environments.html">Advanced Linear Regression</a></li>
<li class="dropdown-header">Advanced Regression Models</li>
<li><a href="adv-regression-models.html">Advanced Regression Models</a></li>
<li class="dropdown-header">Time Series</li>
<li><a href="Time-Series-Analysis-With-R.html">Time Series Analysis</a></li>
<li><a href="Time-Series-Forecasting-With-R.html">Time Series Forecasting </a></li>
<li><a href="Time-Series-Forecasting-With-R-part2.html">More Time Series Forecasting</a></li>
<li class="dropdown-header">High Performance Computing</li>
<li><a href="Parallel-Computing-With-R.html">Parallel computing</a></li>
<li><a href="Strategies-To-Improve-And-Speedup-R-Code.html">Strategies to Speedup R code</a></li>
<li class="dropdown-header">Useful Techniques</li>
<li><a href="Association-Mining-With-R.html">Association Mining</a></li>
<li><a href="Multi-Dimensional-Scaling-With-R.html">Multi Dimensional Scaling</a></li>
<li><a href="Profiling.html">Optimization</a></li>
<li><a href="Information-Value-With-R.html">InformationValue package</a></li>
</ul>
</li>
</ul>
-->
<ul class="nav nav-pills pull-right">
<div class="input-group">
<form onsubmit="my_search_google()">
<input type="text" class="form-control" id="my-google-search" placeholder="Search..">
</form>
</div><!-- /input-group -->
</ul><!-- /.col-lg-6 -->
<h3 class="muted"><a href="/">r-statistics.co</a><small> by Selva Prabhakaran</small></h3>
<hr>
</div>
<div class="row">
<div class="col-xs-12 col-sm-3" id="nav">
<div class="well">
<ul class="list-unstyled">
<li class="dropdown-header"></li>
<li class="dropdown-header">Tutorial</li>
<li><a href="R-Tutorial.html">R Tutorial</a></li>
<li class="dropdown-header">ggplot2</li>
<li><a href="ggplot2-Tutorial-With-R.html">ggplot2 Short Tutorial</a></li>
<li><a href="Complete-Ggplot2-Tutorial-Part1-With-R-Code.html">ggplot2 Tutorial 1 - Intro</a></li>
<li><a href="Complete-Ggplot2-Tutorial-Part2-Customizing-Theme-With-R-Code.html">ggplot2 Tutorial 2 - Theme</a></li>
<li><a href="Top50-Ggplot2-Visualizations-MasterList-R-Code.html">ggplot2 Tutorial 3 - Masterlist</a></li>
<li><a href="ggplot2-cheatsheet.html">ggplot2 Quickref</a></li>
<li class="dropdown-header">Foundations</li>
<li><a href="Linear-Regression.html">Linear Regression</a></li>
<li><a href="Statistical-Tests-in-R.html">Statistical Tests</a></li>
<li><a href="Missing-Value-Treatment-With-R.html">Missing Value Treatment</a></li>
<li><a href="Outlier-Treatment-With-R.html">Outlier Analysis</a></li>
<li><a href="Variable-Selection-and-Importance-With-R.html">Feature Selection</a></li>
<li><a href="Model-Selection-in-R.html">Model Selection</a></li>
<li><a href="Logistic-Regression-With-R.html">Logistic Regression</a></li>
<li><a href="Environments.html">Advanced Linear Regression</a></li>
<li class="dropdown-header">Advanced Regression Models</li>
<li><a href="adv-regression-models.html">Advanced Regression Models</a></li>
<li class="dropdown-header">Time Series</li>
<li><a href="Time-Series-Analysis-With-R.html">Time Series Analysis</a></li>
<li><a href="Time-Series-Forecasting-With-R.html">Time Series Forecasting </a></li>
<li><a href="Time-Series-Forecasting-With-R-part2.html">More Time Series Forecasting</a></li>
<li class="dropdown-header">High Performance Computing</li>
<li><a href="Parallel-Computing-With-R.html">Parallel computing</a></li>
<li><a href="Strategies-To-Improve-And-Speedup-R-Code.html">Strategies to Speedup R code</a></li>
<li class="dropdown-header">Useful Techniques</li>
<li><a href="Association-Mining-With-R.html">Association Mining</a></li>
<li><a href="Multi-Dimensional-Scaling-With-R.html">Multi Dimensional Scaling</a></li>
<li><a href="Profiling.html">Optimization</a></li>
<li><a href="Information-Value-With-R.html">InformationValue package</a></li>
</ul>
</div>
<div class="well">
<p>Stay up-to-date. <a href="https://docs.google.com/forms/d/1xkMYkLNFU9U39Dd8S_2JC0p8B5t6_Yq6zUQjanQQJpY/viewform">Subscribe!</a></p>
<p><a href="https://docs.google.com/forms/d/13GrkCFcNa-TOIllQghsz2SIEbc-YqY9eJX02B19l5Ow/viewform">Chat!</a></p>
</div>
<h4>Contents</h4>
<ul class="list-unstyled" id="toc"></ul>
<!--
<hr>
<p><a href="/contribute.html">How to contribute</a></p>
<p><a class="btn btn-primary" href="">Edit this page</a></p>
-->
</div>
<div id="content" class="col-xs-12 col-sm-8 pull-right">
<h1>InformationValue</h1>
<p>The functions in the <a href="https://cran.r-project.org/web/packages/InformationValue/">InformationValue</a> package are broadly divided into the following categories:</p>
<p><strong>1. Diagnostics of predicted probability scores</strong></p>
<p><strong>2. Performance analysis</strong></p>
<p><strong>3. Functions that aid accuracy improvement</strong></p>
<p>First, let's define the various terms used in this document.</p>
<h2>How to install</h2>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">install.packages</span>(<span class="st">"InformationValue"</span>) <span class="co"># For stable CRAN version</span>
devtools::<span class="kw">install_github</span>(<span class="st">"selva86/InformationValue"</span>) <span class="co"># For latest dev version.</span></code></pre></div>
<h2>Definitions of functions</h2>
<p><strong>Sensitivity</strong>, a.k.a. <em>True Positive Rate</em>, is the proportion of the events (ones) that a model predicted correctly as events, for a given prediction probability cut-off.</p>
<p><strong>Specificity</strong>, a.k.a. <em>1 - False Positive Rate</em>, is the proportion of the non-events (zeros) that a model predicted correctly as non-events, for a given prediction probability cut-off.</p>
<p><strong>False Positive Rate</strong> is the proportion of non-events (zeros) that were predicted as events (ones).</p>
<p><strong>False Negative Rate</strong> is the proportion of events (ones) that were predicted as non-events (zeros).</p>
<p><strong>Mis-classification error</strong> is the proportion of observations (both events and non-events) that were not predicted correctly.</p>
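<p>As a quick illustration of these definitions, here is a minimal sketch that computes each of these quantities by hand. The <code>actuals</code> and <code>predicted</code> vectors below are made-up toy data (not part of the package), and a cutoff of 0.5 is assumed.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"># toy data, for illustration only
actuals    <- c(1, 1, 1, 0, 0, 0, 0, 1)
predicted  <- c(0.9, 0.8, 0.3, 0.6, 0.2, 0.1, 0.4, 0.7)
pred_class <- ifelse(predicted > 0.5, 1, 0)  # apply the 0.5 cut-off

sens     <- sum(pred_class == 1 & actuals == 1) / sum(actuals == 1)  # sensitivity (true positive rate)
spec     <- sum(pred_class == 0 & actuals == 0) / sum(actuals == 0)  # specificity (1 - false positive rate)
fpr      <- 1 - spec                                                 # false positive rate
fnr      <- 1 - sens                                                 # false negative rate
misclass <- mean(pred_class != actuals)                              # mis-classification error</code></pre></div>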
<p><strong>Concordance</strong> is the percentage of <em>all possible pairs of Ones and Zeros</em> in which the score of the actual one is greater than the score of the actual zero. It represents the predictive power of a binary classification model.</p>
<p><strong>Weights of Evidence (WOE)</strong> provides a method of recoding a categorical <code>x</code> variable as a continuous variable. For each category of a categorical variable, the <strong>WOE</strong> is calculated as:</p>
<p><br /><span class="math display">$$WOE = ln\left(\frac{perc\ good\ of\ all\ goods}{perc\ bad\ of\ all\ bads} \right)$$</span><br /></p>
<p>In the above formula, <em>goods</em> is synonymous with <em>ones</em>, <em>events</em>, <em>positives</em> or <em>responders</em>, and <em>bads</em> is synonymous with <em>zeros</em>, <em>non-events</em>, <em>negatives</em> or <em>non-responders</em>.</p>
<p><strong>Information Value (IV)</strong> is a measure of the predictive capability of a categorical <code>x</code> variable to accurately predict the goods and bads. For each category of <code>x</code>, information value is computed as:</p>
<p><br /><span class="math display">$$IV = (perc\ good\ of\ all\ goods - perc\ bad\ of\ all\ bads) \times WOE$$</span><br /></p>
<p>The total IV of a variable is the sum of the IVs of its categories. Here is what the values of IV mean, according to Siddiqi (2006):</p>
<ul>
<li>Less than 0.02, then the predictor is not useful for modeling (separating the Goods from the Bads)</li>
<li>0.02 to 0.1, then the predictor has only a weak relationship.</li>
<li>0.1 to 0.3, then the predictor has a medium strength relationship.</li>
<li>0.3 or higher, then the predictor has a strong relationship.</li>
</ul>
<p>Here is a sample MS Excel file that shows how to <a href="./IVCalc.xlsx">calculate WOE and Information Value</a>.</p>
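<p>The same arithmetic can also be sketched directly in R. The category counts below are hypothetical and only serve to mirror the WOE and IV formulas above.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"># hypothetical counts of goods (ones) and bads (zeros) per category of a categorical X
goods <- c(GroupA = 40, GroupB = 30, GroupC = 30)
bads  <- c(GroupA = 10, GroupB = 30, GroupC = 60)

perc_good <- goods / sum(goods)          # perc good of all goods
perc_bad  <- bads  / sum(bads)           # perc bad of all bads

woe <- log(perc_good / perc_bad)         # WOE of each category
iv  <- (perc_good - perc_bad) * woe      # IV contribution of each category
sum(iv)                                  # total IV of the variable</code></pre></div>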
<p><strong>KS Statistic</strong> or <em>Kolmogorov-Smirnov statistic</em> is the maximum difference between the cumulative true positive rate and the cumulative false positive rate. It is often used as the deciding metric to judge the efficacy of models in credit scoring. The higher the <code>ks_stat</code>, the more effective the model is at capturing the responders (Ones). This should not be confused with the <code>ks.test</code> function.</p>
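<p>To make the definition concrete, the sketch below reproduces the KS statistic by hand on hypothetical data: sort by predicted score, accumulate the true positive and false positive rates, and take the maximum gap. The package's <code>ks_stat</code> function (section 2.4) does this for you.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"># hypothetical actuals and scores, for illustration only
actuals   <- c(1, 1, 1, 0, 0, 0, 0, 1)
predicted <- c(0.9, 0.8, 0.3, 0.6, 0.2, 0.1, 0.4, 0.7)

ord     <- order(predicted, decreasing = TRUE)               # rank by predicted score
cum_tpr <- cumsum(actuals[ord] == 1) / sum(actuals == 1)     # cumulative true positive rate
cum_fpr <- cumsum(actuals[ord] == 0) / sum(actuals == 0)     # cumulative false positive rate
max(cum_tpr - cum_fpr)                                       # the KS statistic</code></pre></div>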
<h2>1.1. <code>plotROC</code></h2>
<p><code>plotROC</code> uses the <code>ggplot2</code> framework to create the ROC curve and prints the <code>AUROC</code> on the plot. It also comes with an option to display the change points of the prediction probability scores on the graph if you set <code>Show.labels = TRUE</code>.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">data</span>(<span class="st">'ActualsAndScores'</span>)
<span class="kw">plotROC</span>(<span class="dt">actuals=</span>ActualsAndScores$Actuals, <span class="dt">predictedScores=</span>ActualsAndScores$PredictedScores)</code></pre></div>
<p><img src='screenshots/ROC-Curve.png' width='528' height='528' /></p>
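<p>To also mark the change points of the prediction probability scores mentioned above, turn on <code>Show.labels</code>:</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">plotROC(actuals=ActualsAndScores$Actuals, predictedScores=ActualsAndScores$PredictedScores,
        Show.labels=TRUE)  # label the change points of the probability scores on the curve</code></pre></div>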
<p>You can also get the sensitivity matrix used to make the plot by turning on <code>returnSensitivityMat = TRUE</code>.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">sensMat <-<span class="st"> </span><span class="kw">plotROC</span>(<span class="dt">actuals=</span>ActualsAndScores$Actuals, <span class="dt">predictedScores=</span>ActualsAndScores$PredictedScores, <span class="dt">returnSensitivityMat =</span> <span class="ot">TRUE</span>)</code></pre></div>
<h2>1.2. <code>sensitivity</code> or <code>recall</code></h2>
<p>Sensitivity, also known as the ‘True Positive Rate’ or ‘recall’, is the proportion of ‘Events’ (or ‘Ones’) correctly predicted by the model, for a given prediction probability cutoff score. Unless specified through the <code>threshold</code> argument, the default cutoff for the <code>sensitivity</code> function is <code>0.5</code>.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">sensitivity</span>(<span class="dt">actuals =</span> ActualsAndScores$Actuals, <span class="dt">predictedScores =</span> ActualsAndScores$PredictedScores)
<span class="co">#> [1] 1</span></code></pre></div>
<p>If the objective of your problem is to maximise the ability of your model to detect the ‘Events’ (or ‘Ones’), even at the cost of wrongly predicting non-events (‘Zeros’) as events (‘Ones’), then you could set the threshold determined by <code>optimalCutoff()</code> with <code>optimiseFor='Ones'</code>.</p>
<p><strong>NOTE</strong>: This may not be the best example, because we already achieve the maximum sensitivity of 1 with the default cutoff of 0.5. However, it is shown here just to illustrate how this could be implemented in real projects.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">max_sens_cutoff <-<span class="st"> </span><span class="kw">optimalCutoff</span>(<span class="dt">actuals=</span>ActualsAndScores$Actuals, <span class="dt">predictedScores =</span> ActualsAndScores$PredictedScores, <span class="dt">optimiseFor=</span><span class="st">'Ones'</span>) <span class="co"># determine cutoff to maximise sensitivity.</span>
<span class="kw">print</span>(max_sens_cutoff) <span class="co"># This would be cut-off score that achieved maximum sensitivity.</span>
<span class="co">#> [1] 0.5531893</span>
<span class="kw">sensitivity</span>(<span class="dt">actuals =</span> ActualsAndScores$Actuals, <span class="dt">predictedScores =</span> ActualsAndScores$PredictedScores, <span class="dt">threshold=</span>max_sens_cutoff)
<span class="co">#> [1] 1</span></code></pre></div>
<h2>1.3. <code>specificity</code></h2>
<p>For a given probability score cutoff (<code>threshold</code>), specificity computes what proportion of the total non-events (zeros) were predicted accurately. It can also be computed as <code>1 - False Positive Rate</code>. Unless specified otherwise, the default <code>threshold</code> is <code>0.5</code>, which means the values of <code>ActualsAndScores$PredictedScores</code> above <code>0.5</code> are considered events (Ones).</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">specificity</span>(<span class="dt">actuals=</span>ActualsAndScores$Actuals, <span class="dt">predictedScores=</span>ActualsAndScores$PredictedScores)
<span class="co">#> [1] 0.1411765</span></code></pre></div>
<p>If you wish to know what proportion of non-events could be detected by lowering the <code>threshold</code>:</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">specificity</span>(<span class="dt">actuals=</span>ActualsAndScores$Actuals, <span class="dt">predictedScores=</span>ActualsAndScores$PredictedScores, <span class="dt">threshold =</span> <span class="fl">0.35</span>)
<span class="co">#> [1] 0.01176471</span></code></pre></div>
<h2>1.4. <code>precision</code></h2>
<p>For a given probability score cutoff (<code>threshold</code>), precision, a.k.a. ‘positive predictive value’, computes the proportion of the observations predicted to be events (ones) that are actually events (ones).</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">precision</span>(<span class="dt">actuals=</span>ActualsAndScores$Actuals, <span class="dt">predictedScores=</span>ActualsAndScores$PredictedScores)
<span class="co">#> [1] 0.5379747</span></code></pre></div>
<h2>1.5. <code>npv</code></h2>
<p>For a given probability score cutoff (<code>threshold</code>), npv, a.k.a. ‘negative predictive value’, computes the proportion of the observations predicted to be non-events (zeros) that are actually non-events (zeros).</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">npv</span>(<span class="dt">actuals=</span>ActualsAndScores$Actuals, <span class="dt">predictedScores=</span>ActualsAndScores$PredictedScores)
<span class="co">#> [1] 1</span></code></pre></div>
<h2>1.6. <code>youdensIndex</code></h2>
<p>Youden’s J Index (Youden 1950), calculated as <br /><span class="math display">$$J = Sensitivity + Specificity - 1$$</span><br /> represents the proportion of correctly predicted observations for both the events (Ones) and non-events (Zeros). It is particularly useful if you want a single measure that accounts for both the <em>false-positive</em> and <em>false-negative</em> rates.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">youdensIndex</span>(<span class="dt">actuals=</span>ActualsAndScores$Actuals, <span class="dt">predictedScores=</span>ActualsAndScores$PredictedScores)
<span class="co">#> [1] 0.1411765</span></code></pre></div>
<h2>2.1. <code>misClassError</code></h2>
<p>Mis-classification error is the proportion of all observations (both events and non-events) that were incorrectly classified, for a given probability cutoff score.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">misClassError</span>(<span class="dt">actuals=</span>ActualsAndScores$Actuals, <span class="dt">predictedScores=</span>ActualsAndScores$PredictedScores, <span class="dt">threshold=</span><span class="fl">0.5</span>)
<span class="co">#> [1] 0.4294</span></code></pre></div>
<h2>2.2. <code>Concordance</code></h2>
<p>Concordance is the percentage of predicted probability score pairs in which the score of the actual positive is greater than the score of the actual negative. It is calculated by taking into account the scores of all possible pairs of <em>Ones</em> and <em>Zeros</em>. If the concordance of a model is 100%, it means that, by tweaking the prediction probability cutoff, we could accurately predict all of the events and non-events.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">Concordance</span>(<span class="dt">actuals=</span>ActualsAndScores$Actuals, <span class="dt">predictedScores=</span>ActualsAndScores$PredictedScores)
<span class="co">#> $Concordance </span>
<span class="co">#> [1] 0.8730796</span>
<span class="co">#> </span>
<span class="co">#> $Discordance </span>
<span class="co">#> [1] 0.1269204</span>
<span class="co">#> </span>
<span class="co">#> $Tied </span>
<span class="co">#> [1] 0</span>
<span class="co">#> </span>
<span class="co">#> $Pairs </span>
<span class="co">#> [1] 7225</span></code></pre></div>
<h2>2.3. <code>somersD</code></h2>
<p><code>somersD</code> computes the number of concordant pairs minus the number of discordant pairs, divided by the total number of pairs. The larger the Somers’ D value, the better the model’s predictive ability.</p>
<p><br /><span class="math display">$$Somers'\ D = \frac{Concordant\ Pairs - Discordant\ Pairs}{Total\ Pairs}$$</span><br /></p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">somersD</span>(<span class="dt">actuals=</span>ActualsAndScores$Actuals, <span class="dt">predictedScores=</span>ActualsAndScores$PredictedScores)
<span class="co">#> [1] 0.7461592</span></code></pre></div>
<h2>2.4. <code>ks_stat</code></h2>
<p><code>ks_stat</code> computes the Kolmogorov-Smirnov statistic that is widely used in credit scoring to determine the efficacy of binary classification models. The higher the <code>ks_stat</code>, the more effective the model is at capturing the responders.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">ks_stat</span>(<span class="dt">actuals=</span>ActualsAndScores$Actuals, <span class="dt">predictedScores=</span>ActualsAndScores$PredictedScores)
<span class="co">#> [1] 0.6118</span></code></pre></div>
<h2>2.5. <code>ks_plot</code></h2>
<p><code>ks_plot</code> plots the lift in capturing the responders (Ones) against the random case where no model is used. The higher the model curve rises above the random line, the better your model.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">><span class="st"> </span><span class="kw">ks_plot</span>(<span class="dt">actuals=</span>ActualsAndScores$Actuals, <span class="dt">predictedScores=</span>ActualsAndScores$PredictedScores)</code></pre></div>
<p><img src='screenshots/KS_statistic_plot.png' width='670' height='419' /></p>
<h4>How to interpret this plot?</h4>
<p>This plot aims to answer the question: <em>Which part of the population should I target for my marketing campaign, and what conversion rate can I expect?</em> Now, let's understand how it is computed and what those numbers mean.</p>
<p>After computing the prediction probability scores from a given logit model, the datapoints are sorted in descending order of prediction probability score. This set of datapoints is split into 10 groups (or ranks) as marked on the X-axis, such that the group ranked 1 contains the top 10% of datapoints with the highest prediction probability scores, the group ranked 2 contains the next 10%, and so on.</p>
<p>The <strong>‘random’</strong> line in the above chart corresponds to capturing the responders (‘Ones’) by random selection, i.e., when you don’t have any model-generated probability scores at your disposal. The <strong>‘model’</strong> line represents the case of capturing the responders if you go by the model-generated probability scores, starting with the datapoints that have the highest scores. In simpler terms, it shows the proportion of total responders you can expect to capture as you target the datapoints from bucket rank 1 through 10, in that order. So you know which part of the population to target and what conversion rate to expect.</p>
<p>For example, from the above chart, by targeting the first 40% of the population the model captures 70.59% of the total responders (Ones), whereas random targeting without a model would capture only about 40% of the responders.</p>
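<p>The numbers behind the plot can be reproduced with a short decile computation. This is only a sketch of the logic described above, not the package's internal code, and it assumes the data splits evenly into ten groups.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">df <- data.frame(actual = ActualsAndScores$Actuals, score = ActualsAndScores$PredictedScores)
df <- df[order(-df$score), ]                              # sort by descending predicted score
df$rank <- ceiling(seq_len(nrow(df)) / (nrow(df) / 10))   # split into 10 equal-sized groups
captured <- tapply(df$actual, df$rank, sum)               # responders captured within each group
cumsum(captured) / sum(df$actual)                         # cumulative proportion of responders by decile</code></pre></div>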
<h2>3.1. <code>optimalCutoff</code></h2>
<p><code>optimalCutoff</code> determines the optimal threshold for the prediction probability score based on your specific problem objectives. By adjusting the <code>optimiseFor</code> argument as follows, you can find the optimal cutoff that:</p>
<ol>
<li><code>Ones</code>: maximizes detection of events or ones</li>
<li><code>Zeros</code>: maximizes detection of non-events or zeros</li>
<li><code>Both</code>: controls both the <em>false positive rate</em> and the <em>false negative rate</em> by maximizing Youden’s J Index</li>
<li><code>misclasserror</code>: minimizes misclassification error (default)</li>
</ol>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">><span class="st"> </span><span class="kw">optimalCutoff</span>(<span class="dt">actuals =</span> ActualsAndScores$Actuals, <span class="dt">predictedScores =</span> ActualsAndScores$PredictedScores) <span class="co"># returns cutoff that gives minimum misclassification error.</span></code></pre></div>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">><span class="st"> </span><span class="kw">optimalCutoff</span>(<span class="dt">actuals =</span> ActualsAndScores$Actuals, <span class="dt">predictedScores =</span> ActualsAndScores$PredictedScores,
+<span class="st"> </span><span class="dt">optimiseFor =</span> <span class="st">"Both"</span>) <span class="co"># returns cutoff that gives maximum of Youden's J Index</span>
><span class="st"> </span><span class="co"># > [1] 0.6431893</span></code></pre></div>
<p>By setting <code>returnDiagnostics = TRUE</code>, you can get the <code>sensitivityTable</code>, which shows the <code>FPR</code>, <code>TPR</code>, <code>YOUDENSINDEX</code>, <code>SPECIFICITY</code> and <code>MISCLASSERROR</code> for various cutoff values.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">><span class="st"> </span>sens_table <-<span class="st"> </span><span class="kw">optimalCutoff</span>(<span class="dt">actuals =</span> ActualsAndScores$Actuals, <span class="dt">predictedScores =</span> ActualsAndScores$PredictedScores,
+<span class="st"> </span><span class="dt">optimiseFor =</span> <span class="st">"Both"</span>, <span class="dt">returnDiagnostics =</span> <span class="ot">TRUE</span>)$sensitivityTable</code></pre></div>
<h2>3.2. <code>WOE</code></h2>
<p>Computes the Weights Of Evidence (WOE) for each group of a given categorical X and binary response Y.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">WOE</span>(<span class="dt">X=</span>SimData$X.Cat, <span class="dt">Y=</span>SimData$Y.Binary)</code></pre></div>
<h2>3.3. <code>WOETable</code></h2>
<p>Generates the WOE table showing the percentage goods, bads, WOE and IV for each category of <code>X</code>. WOE for a given category of <code>X</code> is computed as:</p>
<p><br /><span class="math display">$$WOE = ln(\frac{perc.Good}{perc.Bad}) $$</span><br /></p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">options</span>(<span class="dt">scipen =</span> <span class="dv">999</span>, <span class="dt">digits =</span> <span class="dv">2</span>)
<span class="kw">WOETable</span>(<span class="dt">X=</span>SimData$X.Cat, <span class="dt">Y=</span>SimData$Y.Binary)</code></pre></div>
<table>
<thead>
<tr class="header">
<th align="left">CAT</th>
<th align="left">GOODS</th>
<th align="left">BADS</th>
<th align="left">TOTAL</th>
<th align="left">PCT_G</th>
<th align="left">PCT_B</th>
<th align="left">WOE</th>
<th align="left">IV</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td align="left">Group1</td>
<td align="left">179</td>
<td align="left">1500</td>
<td align="left">1679</td>
<td align="left">0.0246488571</td>
<td align="left">0.0659108885</td>
<td align="left">-0.9835731</td>
<td align="left">0.0405842251</td>
</tr>
<tr class="even">
<td align="left">Group2</td>
<td align="left">346</td>
<td align="left">525</td>
<td align="left">871</td>
<td align="left">0.0476452768</td>
<td align="left">0.0230688110</td>
<td align="left">0.7253020</td>
<td align="left">0.0178253591</td>
</tr>
<tr class="odd">
<td align="left">Group3</td>
<td align="left">560</td>
<td align="left">1354</td>
<td align="left">1914</td>
<td align="left">0.0771137428</td>
<td align="left">0.0594955620</td>
<td align="left">0.2593798</td>
<td align="left">0.0045698000</td>
</tr>
<tr class="even">
<td align="left">Group4</td>
<td align="left">6</td>
<td align="left">6</td>
<td align="left">6</td>
<td align="left">0.0008262187</td>
<td align="left">0.0002636436</td>
<td align="left">1.1422615</td>
<td align="left">0.0006426079</td>
</tr>
<tr class="odd">
<td align="left">Group5</td>
<td align="left">4595</td>
<td align="left">16369</td>
<td align="left">20964</td>
<td align="left">0.6327458001</td>
<td align="left">0.7192635557</td>
<td align="left">-0.1281591</td>
<td align="left">0.0110880366</td>
</tr>
<tr class="even">
<td align="left">Group6</td>
<td align="left">577</td>
<td align="left">461</td>
<td align="left">1038</td>
<td align="left">0.0794546957</td>
<td align="left">0.0202566131</td>
<td align="left">1.3667057</td>
<td align="left">0.0809063559</td>
</tr>
<tr class="odd">
<td align="left">Group7</td>
<td align="left">658</td>
<td align="left">1670</td>
<td align="left">2328</td>
<td align="left">0.0906086478</td>
<td align="left">0.0733807892</td>
<td align="left">0.2108875</td>
<td align="left">0.0036331398</td>
</tr>
<tr class="even">
<td align="left">Group8</td>
<td align="left">327</td>
<td align="left">859</td>
<td align="left">1186</td>
<td align="left">0.0450289177</td>
<td align="left">0.0377449688</td>
<td align="left">0.1764527</td>
<td align="left">0.0012852725</td>
</tr>
<tr class="odd">
<td align="left">Group9</td>
<td align="left">14</td>
<td align="left">14</td>
<td align="left">14</td>
<td align="left">0.0019278436</td>
<td align="left">0.0006151683</td>
<td align="left">1.1422615</td>
<td align="left">0.0014994184</td>
</tr>
</tbody>
</table>
<h2>3.4. <code>IV</code></h2>
<p>Compute the information value of a given categorical <code>X</code> (Factor) and binary <code>Y</code> (numeric) response. The information value of a category of <code>X</code> is calculated as: <br /><span class="math display">$$IV = (perc.Good - perc.Bad) \times WOE$$</span><br /> The <code>IV</code> of a categorical variable is the sum of the information values of its individual categories.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">options</span>(<span class="dt">scipen =</span> <span class="dv">999</span>, <span class="dt">digits =</span> <span class="dv">4</span>)
<span class="kw">IV</span>(<span class="dt">X=</span>SimData$X.Cat, <span class="dt">Y=</span>SimData$Y.Binary)
<span class="co">#> 1] 0.162 </span>
<span class="co">#> attr(,“howgood”)</span>
<span class="co">#> [1] “Highly Predictive”</span></code></pre></div>
<blockquote>
<p>“He who gives up code safety for code speed deserves neither.”</p>
</blockquote>
<p>For more information and examples, visit <a href="http://rstatistics.net">rstatistics.net</a></p>
</div>
</div>
<div class="footer">
<hr>
<p>© 2016-17 Selva Prabhakaran. Powered by <a href="http://jekyllrb.com/">jekyll</a>,
<a href="http://yihui.name/knitr/">knitr</a>, and
<a href="http://johnmacfarlane.net/pandoc/">pandoc</a>.
This work is licensed under the <a href="http://creativecommons.org/licenses/by-nc/3.0/">Creative Commons License.</a>
</p>
</div>
</div> <!-- /container -->
<script src="//code.jquery.com/jquery.js"></script>
<script src="www/bootstrap.min.js"></script>
<script src="www/toc.js"></script>
<!-- MathJax Script -->
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}
});
</script>
<script type="text/javascript"
src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
<!-- Google Analytics Code -->
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-69351797-1', 'auto');
ga('send', 'pageview');
</script>
<style type="text/css">
/* reduce spacing around math formula*/
.MathJax_Display {
margin: 0em 0em;
}
body {
font-family: 'Helvetica Neue', Roboto, Arial, sans-serif;
font-size: 16px;
line-height: 27px;
font-weight: 400;
}
blockquote p {
line-height: 1.75;
color: #717171;
}
.well li{
line-height: 28px;
}
li.dropdown-header {
display: block;
padding: 0px;
font-size: 14px;
}
</style>
</body>
</html>