---
title: "Tree-based models"
#author: "Dr. Stephan Dietrich & Michelle González Amador"
#date: '2022-09-21'
output: html_document
---
## **Tree-based models for classification problems** {.tabset .tabset-fade .tabset-pills}
**A general overview of tree-based methods**
An introduction to tree-based machine learning models is given to us by [Dr. Francisco Rosales](https://www.linkedin.com/in/fraroma/), Assistant Professor at ESAN University (Perú) and Lead Data Scientist at [BREIN](https://www.linkedin.com/company/breinhub/). You can watch the pre-recorded session below:
<center>
```{r, echo=FALSE}
library("vembedr")
embed_url("https://youtu.be/l7s3k2TlQeY?si=ezqWJ0tPGP1sX0Fa")
```
</center>
Some key points to keep in mind when working through the practical exercise include:
+ Tree-based methods work for both classification and regression problems.
+ Decision Trees are both a logical and a technical tool:
    + they involve stratifying or segmenting the predictor space into a number of simple regions;
    + from each region, we obtain a relevant metric (e.g. the mean/average) and then use that information to make predictions about the observations that belong to that region.
+ Decision Trees are the simplest version of a tree-based method. To improve on a simple splitting algorithm, there exist ensemble learning techniques such as bagging and boosting (a minimal code sketch contrasting the two follows this list):
    + bagging: also known as bootstrap aggregating, an ensemble technique used to decrease a model's variance. A <strong>Random Forest</strong> is a tree-based method built on the concept of bagging. The main idea behind a Random Forest model is that if you draw bootstrap resamples of the data that would be used to grow a single decision tree, grow one tree on each resample, and then use a method to "average" the results of all of these different trees, you should end up with a better model.
    + boosting: an ensemble technique mainly used to decrease a model's bias. Like bagging, we create multiple trees from our training dataset. However, whilst bagging uses the bootstrap to create the various data samples (from which each tree is grown), in boosting each tree is grown sequentially, using information from the previously built tree. So boosting doesn't use the bootstrap: instead, each tree is fit to a modified version of the original dataset (each subsequent tree is built from the residuals of the previous model).
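To make the contrast concrete, here is a minimal sketch of both approaches in caret. This is illustrative only: it assumes a hypothetical data frame `df` with a factor outcome `y`, and uses caret's built-in "treebag" (bagged CART) and "gbm" (gradient boosting) methods.
```{r, eval=FALSE}
# bagging: many trees grown independently on bootstrap resamples; predictions are averaged
bagged_fit <- train(y ~ ., data = df, method = "treebag")
# boosting: trees grown sequentially, each one trying to correct the errors of the trees before it
boosted_fit <- train(y ~ ., data = df, method = "gbm", verbose = FALSE)
```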
<br>
To conclude our Malawi case study, we will apply a Random Forest algorithm to our classification problem: given a set of features X (e.g. ownership of a toilet, size of household, etc.), how likely are we to correctly identify an individual's income class? Recall that we have already approached this problem with a linear regression model (and a lasso linear model) and with a logistic classification (i.e. an eager learner model). Whilst there was no improvement between the linear and the lasso linear model, we did increase our model's predictive ability when we switched from a linear prediction to a classification approach. I previously claimed that the improvement was marginal. But since the model will be used to determine who does and who doesn't get an income supplement (i.e. who is an eligible recipient of a cash transfer, as part of Malawi's social protection policies), any improvement is critical and we should try various methods until we find the one that best fits our data.
<br>
Some discussion points before the practical:
+ Why did we decide to switch models (from linear to classification)?
+ Intuitively, why did a classification model perform better than a linear regression at predicting an individual's social class based on their monthly per capita consumption?
+ How would a Random Forest classification approach improve our predictive ability? (hint: the answer may be similar to the one above)
### **R practical**
<br>
As always, start by opening the libraries that you'll need to reproduce the script below. We will continue to use the Caret library for machine learning purposes, and some other general libraries for data wrangling and visualisation.
<br>
```{r, message=FALSE}
rm(list = ls()) # this line cleans your Global Environment.
setwd("/Users/michellegonzalez/Documents/GitHub/Machine-Learning-for-Public-Policy") # set your working directory
# Do not forget to install a package with the install.packages() function if it's the first time you use it!
library(dplyr) # core package for dataframe manipulation. Usually installed and loaded with the tidyverse, but sometimes needs to be loaded in conjunction to avoid warnings.
library(tidyverse) # a large collection of packages for data manipulation and visualisation.
library(caret) # a library with key functions that streamline the process for predictive modelling
library(skimr) # a package with a set of functions to describe dataframes and more
library(plyr) # a package for data wrangling
library(party) # provides a user-friendly interface for creating and analyzing decision trees using recursive partitioning
library(rpart) # recursive partitioning and regression trees
library(rpart.plot) # visualising decision trees
library(rattle) # to obtain a fancy wrapper for the rpart.plot
library(RColorBrewer) # import more colours
# import data
data_malawi <- read_csv("malawi.csv") # the file is directly read from the working directory/folder previously set
```
<br>
For this exercise, we will skip all the data pre-processing steps. At this point, we are all well acquainted with the Malawi dataset, and should be able to create our binary outcome, poor (or not), and clean the dataset in general. If you need to, you can always go back to the [Logistic Classification tab](https://www.ml4publicpolicy.com/classification.html) and repeat the data preparation process described there.
<br>
<h3> Data Split and Fit </h3>
```{r, echo=FALSE}
# object: vector containing the names of the variables that we want to drop (notice this time lnzline is still there)
cols <- c("ea", "EA", "psu","hhwght", "strataid", "case_id","eatype")
# subset of the data_malawi object (a dataframe)
data_malawi <- data_malawi[,-which(colnames(data_malawi) %in% cols)] # the minus sign indicates deletion of cols
# transform all binary/categorical data into factor class
min_count <- 3 # variables with 3 or fewer unique values will be treated as categorical
# store boolean (true/false) if the number of unique values is lower or equal to the min_count vector
n_distinct2 <- apply(data_malawi, 2, function(x) length(unique(x))) <= min_count
# select the identified categorical variables and transform them into factors
data_malawi[n_distinct2] <- lapply(data_malawi[n_distinct2], factor)
# recall poverty line contains 1 unique value (it is static), let's transform the variable into numeric again
data_malawi$lnzline <- as.numeric(as.character(data_malawi$lnzline))
# if the log of per capita expenditure is below the estimated poverty line, classify individual as poor, else classify individual as not poor. Store as factor (default with text is class character)
data_malawi$poor <- as.factor(ifelse(data_malawi$lnexp_pc_month<= data_malawi$lnzline,"Y","N")) # Y(es) N(o)
# make sure that the factor target variable has poor = Y as reference category (this step is important when running the logistic regression)
data_malawi$poor <- relevel(data_malawi$poor, ref="Y") # make Y reference category
# Final data pre-processing: delete static variable (poverty line)
# and along with it: remove the continuous target (as it perfectly predicts the binary target)
data_malawi <- data_malawi[,-c(1,31)] # delete columns no. 1 and 31 from the dataset
```
```{r}
set.seed(1234) # ensures reproducibility of our data split
# data partitioning: train and test datasets
train_idx <- createDataPartition(data_malawi$poor, p = .8, list = FALSE, times = 1)
Train_df <- data_malawi[ train_idx,]
Test_df <- data_malawi[-train_idx,]
# data fit: fit a random forest model
# (be warned that this may take longer to run than previous models)
rf_train <- train(poor ~ .,
                  data = Train_df,
                  method = "ranger" # estimates a Random Forest algorithm via the ranger pkg (you may need to install the ranger pkg)
                  )
# First glimpse at our random forest model
print(rf_train)
```
<br>
If you read the final box of the print() output, you'll notice that, given our input Y and X features and no other information, the optimal random forest model uses the following:
+ mtry = 2: mtry is the number of variables to sample at random at each split. This is the number we feed to the recursive partitioning algorithm. At each split, the algorithm will consider mtry (= 2) variables chosen at random (a fresh random draw at each split) and pick the best split point among them.
+ splitrule = gini: the splitting rule/algorithm used. The Gini impurity measures how mixed the classes within a node are, on a scale from $0$ to $1$ (the formula is given below this list). The lower the value, the purer the node. Recall that a node that is $100\%$ pure contains only data from a single class (no noise!), and therefore the splitting stops.
+ Accuracy (or $1$ minus the error rate): at $0.81$, it improves on our eager learner classification (logistic) approach by $0.01$ and is highly accurate.
+ Kappa (adjusted accuracy): at $0.55$, it indicates that our random forest model (on the training data) seems to perform the same as our logistic model. To make a proper comparison, we need to look at the out-of-sample prediction evaluation statistics.
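For reference, the Gini impurity of a node can be written as follows, where $p_k$ is the proportion of observations of class $k$ in the node and $K$ is the number of classes ($K = 2$ in our poor/not-poor problem):

$$G = \sum_{k=1}^{K} p_k\,(1 - p_k) = 1 - \sum_{k=1}^{K} p_k^2$$

A perfectly pure node has $G = 0$; with two classes, $G$ reaches its maximum of $0.5$ when the node is an even 50/50 mix.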
<br>
<h3> Out-of-sample predictions</h3>
<br>
```{r}
# make predictions using the trained model and the test dataset
set.seed(12345)
pr1 <- predict(rf_train, Test_df, type = "raw")
head(pr1) # Yes and No output
# evaluate the predictions using the ConfusionMatrix function from Caret pkg
confusionMatrix(pr1, Test_df[["poor"]], positive = "Y") # positive = "Y" indicates that our category of interest is Y (1)
```
Based on our out-of-sample predictions, the Random Forest algorithm seems to yield predictive accuracy similar to the logistic classification algorithm. The performance metrics (accuracy, sensitivity, specificity, kappa) are the same ones we rely on for most classification problems; their definitions are recapped below. If you want a refresher on how to interpret them, go back one session for a more thorough explanation!
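As a quick reference, writing $TP$, $TN$, $FP$, $FN$ for the true/false positives/negatives from the confusion matrix:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP}$$

Kappa adjusts accuracy for chance agreement: $\kappa = \frac{p_o - p_e}{1 - p_e}$, where $p_o$ is the observed accuracy and $p_e$ is the accuracy expected by chance alone.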
<br>
<h3> Fine-tuning parameters</h3>
<br>
We can try to improve our Random Forest model by fine-tuning its hyperparameters with two tools: a tuning grid and cross-validation.
```{r}
# prepare the grid (create a larger search space)
tuneGrid <- expand.grid(mtry = c(1, 2, 3, 4),
                        splitrule = c("gini", "extratrees"),
                        min.node.size = c(1, 3, 5))
# prepare the folds: 5-fold cross-validation
trControl <- trainControl(method = "cv",
                          number = 5,
                          search = 'grid',
                          classProbs = TRUE,
                          savePredictions = "final")
# fine-tune the model with the optimised parameters
# (again, be ready to wait a few minutes for this to run)
rf_train_tuned <- train(poor ~ .,
                        data = Train_df,
                        method = "ranger",
                        tuneGrid = tuneGrid,
                        trControl = trControl)
# let's see how the fine-tuned model fared
print(rf_train_tuned)
```
<br>
Fine-tuning the parameters has not done much for our in-sample model. The chosen mtry value and splitting rule are the same. The only metric where I see improvement is the (training set) Kappa, from $0.55$ to $0.56$. Will the out-of-sample predictions improve?
<br>
```{r}
# make predictions using the trained model and the test dataset
set.seed(12345)
pr2 <- predict(rf_train_tuned, Test_df, type = "raw")
head(pr2) # Yes and No output
# evaluate the predictions using the ConfusionMatrix function from Caret pkg
confusionMatrix(pr2, Test_df[["poor"]], positive = "Y") # positive = "Y" indicates that our category of interest is Y (1)
```
<br>
Consistent with the improvements on the train set, the out-of-sample predictions also return a higher adjusted accuracy (Kappa statistic), and improved specificity and sensitivity. Not by much (e.g. a Kappa increase of $0.01$), but we'll take what we can get.
<br>
These results also suggest that the biggest prediction improvements happen when we make big modelling decisions, such as forgoing the variability of a continuous outcome in favour of classes. Exploring classification algorithms (in this case, a logistic and a random forest model) was definitely worthwhile, but did not yield large returns on our predictive ability.
<h3> Visualising our model </h3>
<br>
To close the chapter, let's have a quick look at the sort of plots we can make with a Random Forest algorithm.
```{r}
# we'll need to re-estimate the model using rpart, which fits a single decision tree
# (a ranger forest contains hundreds of trees, so we plot a single-tree analogue instead)
MyRandomForest <- rpart(poor ~ ., data = Train_df)
# visualise the decision tree
fancyRpartPlot(MyRandomForest, palettes = c("Oranges","Blues"), main = "Visualising nodes and splits")
```
The fancyRpartPlot function returns the flow chart that we have now learned to call a decision tree. Recall that we have used different packages (and different specifications) for the Random Forest, so the visualisation we're looking at now is not an exact replica of our preferred fine-tuned model. It is, nonetheless, a good way to help you understand how classifications and decisions are made with tree-based methods. If you'd like an in-depth explanation of the plot, you can visit the [Rpart.plot pkg documentation](http://www.milbo.org/rpart-plot/prp.pdf).
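Another useful view inside a Random Forest is a variable-importance plot. Below is a minimal sketch, assuming we re-train the caret/ranger model with impurity-based importance switched on (the `importance` argument is passed through by caret to the ranger package):
```{r, eval=FALSE}
# re-train the forest, asking ranger to record impurity-based variable importance
rf_train_imp <- train(poor ~ .,
                      data = Train_df,
                      method = "ranger",
                      importance = "impurity")
# rank the predictors by importance and plot the top 10
plot(varImp(rf_train_imp), top = 10)
```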
### **Python practical**
<br>
As always, start by opening the libraries that you'll need to reproduce the script below. We will continue to use the scikit-learn library for machine learning purposes, and some other general libraries for data wrangling and visualisation.
<br>
```{r, setup, include = FALSE}
library(reticulate)
use_miniconda("/Users/michellegonzalez/Library/r-miniconda-arm64", required = T)
```
```{python}
#==== Python version: 3.10.12 ====#
# Opening libraries
import sklearn as sk # our trusted Machine Learning library
from sklearn.model_selection import train_test_split # split the dataset into train and test
from sklearn.model_selection import cross_val_score # to obtain the cross-validation score
from sklearn.model_selection import cross_validate # to perform cross-validation
from sklearn.ensemble import RandomForestClassifier # to perform a Random Forest classification model
from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score, ConfusionMatrixDisplay # returns performance evaluation metrics
from sklearn.model_selection import RandomizedSearchCV # for fine-tuning parameters
from scipy.stats import randint # generate random integer
# Tree visualisation
from sklearn.tree import export_graphviz
from IPython.display import Image # for Jupyter Notebook users
import graphviz as gv
# Non-ML libraries
import random # for random state
import csv # a library to read and write csv files
import numpy as np # a library for handling numerical arrays and operations
import pandas as pd # a library to help us easily navigate and manipulate dataframes
import seaborn as sns # a data visualisation library
import matplotlib.pyplot as plt # a data visualisation library
# Uploading data
malawi = pd.read_csv('/Users/michellegonzalez/Documents/GitHub/Machine-Learning-for-Public-Policy/malawi.csv')
```
<br>
For this exercise, we will skip all the data pre-processing steps. At this point, we are all well acquainted with the Malawi dataset, and should be able to create our binary outcome, poor (or not), and clean the dataset in general. If you need to, you can always go back to the [Logistic Classification tab](https://www.ml4publicpolicy.com/classification.html) and repeat the data preparation process described there.
<br>
<h3> Data Split and Fit </h3>
```{python, echo=FALSE}
# preparing the dataset
lnzline = malawi['lnzline'] # keep a copy of the (static) poverty line before we drop it
# Instead of deleting case_id, we will set it as an index (we did not do this last time!).
# This is essentially using the case_id variable as row names (and won't be included in your ML model)
malawi = malawi.set_index('case_id')
# deleting variables from pandas dataframe
cols2delete = ['ea', 'EA', 'hhwght', 'psu', 'strataid', 'lnzline', 'eatype', 'region']
malawi = malawi.drop(cols2delete,axis=1) # axis=0 means delete rows and axis=1 means delete columns
#==== Correctly identify each vector type: ====#
# for-loop that iterates over variables in dataframe, if they have 2 unique values, transform vector into categorical
for column in malawi:
if malawi[column].nunique() == 2:
malawi[column] = pd.Categorical(malawi[column])
#==== Create a binary target variable: ====#
# use numPy to create a binary vector (notice we have rounded up the threshold)
malawi['Poor'] = np.where(malawi['lnexp_pc_month'] <= 7.555, 1, 0)
# transform target variable into categorical type
malawi['Poor'] = pd.Categorical(malawi['Poor']) # use malawi['Poor'].info() if you want to see the transformation
# Final data pre-processing: remove the continuous target (as it perfectly predicts the binary target in a non-informative way)
cont_target = ['lnexp_pc_month']
malawi = malawi.drop(cont_target , axis=1)
```
Let's use a simple 80:20 split for train and test data subsets.
```{python}
# First, recall the df structure
malawi.info() # returns the column number, e.g. hhsize = column number 0, hhsize2 = 1... etc.
# Then, split! (select the predictors and the target by name, which is safer than hard-coded column positions)
X = malawi.drop('Poor', axis=1) # X is a matrix containing all variables except our binary target variable
y = malawi['Poor'] # y is a vector containing our target variable
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=12345) # random_state is for reproducibility purposes
```
Now, let's fit a Random Forest model:
```{python}
# data fit: fit a random forest model
rf = RandomForestClassifier(random_state=42) # empty random forest object
rf.fit(X_train, y_train) # fit the rf classifier using the training data
```
We have now successfully trained a Random Forest model, and there is no need to go over in-sample predictions. We can simply evaluate the model's ability to make out-of-sample predictions.
<br>
<h3> Out-of-sample predictions</h3>
<br>
```{python}
# predict our test-dataset target variable based on the trained model
y_pred = rf.predict(X_test)
# evaluate the prediction's performance (estimate accuracy score, report confusion matrix)
# create a confusion matrix object (we're improving from our previous confusion matrix exploration ;))
cm = confusion_matrix(y_test, y_pred)
print("Accuracy:", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred))
print("Recall:", recall_score(y_test, y_pred))
print("Confusion Matrix:", cm)
ConfusionMatrixDisplay(confusion_matrix=cm).plot() # create confusion matrix plot
plt.show() # display confusion matrix plot created above
```
Based on our out-of-sample predictions, the Random Forest algorithm seems to yield predictive accuracy similar to the logistic classification algorithm. If you want a reminder of how to interpret accuracy, precision and recall scores, or how to read the confusion matrix, go back one session for a thorough explanation!
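Incidentally, we imported cross_val_score earlier but have not used it yet. As a quick sanity check (a sketch, reusing the fitted rf object), you could estimate 5-fold cross-validated accuracy on the training data before tuning anything:
```{python, eval=FALSE}
# 5-fold cross-validated accuracy on the training data (a sanity check, not a replacement for the test set)
cv_scores = cross_val_score(rf, X_train, y_train, cv=5, scoring='accuracy')
print('CV accuracy per fold:', cv_scores)
print('Mean CV accuracy:', cv_scores.mean())
```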
<br>
<h3> Fine-tuning parameters</h3>
<br>
To improve the performance of our random forest model, we can try hyperparameter tuning. You can think of the process as optimising the learning model by defining the settings that will govern the learning process of the model. In Python, and for a random forest model, we can use RandomizedSearchCV to find the optimal parameters within a range of parameters.
```{python}
# define hyperparameters and their ranges in a "parameter_distance" dictionary
parameter_distance = {'n_estimators': randint(50, 500),
                      'max_depth': randint(1, 10)
                      }
# n_estimators: the number of decision trees in the forest (scipy's randint(50, 500) samples integers from 50 to 499)
# max_depth: the maximum depth of each decision tree (between 1 and 9 levels of splits)
```
<br>
There are other hyperparameters, but searching for the optimal values of these two is a good start to our model optimisation!
<br>
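For instance, here is a sketch of a broader (hypothetical) search space using other common RandomForestClassifier hyperparameters; the ranges are illustrative, not recommendations:
```{python, eval=FALSE}
# a broader search space: other common RandomForestClassifier hyperparameters
parameter_distance_extended = {'n_estimators': randint(50, 500),
                               'max_depth': randint(1, 10),
                               'min_samples_split': randint(2, 20),  # minimum no. of samples required to split an internal node
                               'min_samples_leaf': randint(1, 10),   # minimum no. of samples required at a leaf node
                               'max_features': ['sqrt', 'log2']}     # no. of features considered when looking for the best split
```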
```{python}
# Please note that the script below might take a while to run (don't be alarmed if you have to wait a couple of minutes)
# Use a random search to find the best hyperparameters
random_search = RandomizedSearchCV(rf,
                                   param_distributions=parameter_distance,
                                   n_iter=5,
                                   cv=5,
                                   random_state=42)
# Fit the random search object to the training data
random_search.fit(X_train, y_train)
# create an object / variable that contains the best hyperparameters, according to our search:
best_rf_hype = random_search.best_estimator_
print('Best random forest hyperparameters:', random_search.best_params_)
```
**Now we can re-train our model using the retrieved hyperparameters and evaluate the out-of-sample-predictions of the model.**
```{python}
# for simplicity, store the best parameters again in a variable called x
x = random_search.best_params_
# Train the random forest model using the best max_depth and n_estimators
rf_best = RandomForestClassifier(**x, random_state=1234) # ** unpacks the best-parameter dictionary into keyword arguments
rf_best.fit(X_train, y_train)
# Make out-of-sample predictions
y_pred_hype = rf_best.predict(X_test)
# Evaluate the model
accuracy = accuracy_score(y_test, y_pred_hype)
recall = recall_score(y_test, y_pred_hype)
precision = precision_score(y_test, y_pred_hype)
print(f"Accuracy with best hyperparameters: {accuracy}")
print(f"Recall with best hyperparameters: {recall}")
print(f"Precision with best hyperparameters: {precision}")
```
<br>
It looks like we have not improved our accuracy or other performance evaluation metrics by fine-tuning our model parameters, save perhaps for a marginal improvement of $0.01$ in the precision score.
These results show that the biggest prediction improvements happen when we make big modelling decisions, such as forgoing the variability of a continuous outcome in favour of classes. Exploring classification algorithms (in this case, a logistic and a random forest model) was definitely worthwhile, but did not yield large returns on our predictive ability.
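Before moving on to plots of individual trees, it can be informative to check which features drive the forest's predictions. Here is a small sketch using scikit-learn's feature_importances_ attribute (available on any fitted RandomForestClassifier) and the pandas/matplotlib libraries we already loaded:
```{python, eval=FALSE}
# impurity-based feature importances from the fitted forest
importances = pd.Series(rf_best.feature_importances_, index=X_train.columns)
importances.sort_values(ascending=False).head(10).plot(kind='barh')
plt.xlabel('Mean decrease in impurity')
plt.title('Top 10 features in the random forest')
plt.show()
```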
<br>
<h3> Visualising our model </h3>
<br>
To close the chapter, let's have a quick look at the sort of plots we can make with a Random Forest algorithm.
While we cannot visualise the entirety of the forest, we can certainly have a look at the first two or three trees in our forest.
```{python}
# Select the first decision tree to display from our random forest object
# (recall that in Python, the first element is at index 0; alternatively, use a for-loop to display the first two, three, four... trees, as sketched below)
tree = rf_best.estimators_[0] # select the first tree from the forest
# transform the tree into a graph object
dot_data = export_graphviz(tree,
                           feature_names=X_train.columns, # names of the predictor columns from the X_train dataset
                           filled=True,
                           max_depth=2, # how many layers/levels of the tree we want to display (only 2 after the initial split in this case)
                           impurity=False,
                           proportion=True)
graph = gv.Source(dot_data) # gv.Source reads the DOT language source of the graph (needed for rendering the image)
graph.render('tree_visualisation', format='png') # this will save the tree visualisation directly into your working folder
```
<center>
![Decision Tree](tree_visualisation.png)
</center>
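As mentioned in the comments above, a short for-loop can render the first few trees of the forest. This is a sketch reusing the same export_graphviz call; the output file names (tree_visualisation_0, tree_visualisation_1, ...) are just illustrative:
```{python, eval=FALSE}
# render the first three trees of the forest, one PNG file per tree
for i in range(3):
    dot_i = export_graphviz(rf_best.estimators_[i],
                            feature_names=X_train.columns,
                            filled=True,
                            max_depth=2,
                            impurity=False,
                            proportion=True)
    gv.Source(dot_i).render(f'tree_visualisation_{i}', format='png')
```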
### **Practice at home**
As usual, you can replicate this exercise using the Bolivia dataset. To let us know that you're still active, please answer this [Qualtrics-based question](https://maastrichtuniversity.eu.qualtrics.com/jfe/form/SV_1RNFd75exNEZVvU).