---
output: github_document
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r, include = FALSE, echo=FALSE}
options("progress_enabled" = TRUE)
knitr::opts_chunk$set(fig.path = "man/figures/README-", collapse=TRUE)
```
# cito
<!-- badges: start -->
[![Project Status: Active -- The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active) [![License: GPL v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0) [![CRAN_Status_Badge](http://www.r-pkg.org/badges/version/cito)](https://cran.r-project.org/package=cito) [![R-CMD-check](https://github.com/citoverse/cito/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/citoverse/cito/actions/workflows/R-CMD-check.yaml)
[![Publication](https://img.shields.io/badge/Publication-10.1111/ecog.07143-green.svg)](https://doi.org/10.1111/ecog.07143)
<!-- badges: end -->
The 'cito' package provides a user-friendly interface for training and interpreting deep neural networks (DNNs). 'cito' simplifies the fitting of DNNs by supporting the familiar formula syntax and hyperparameter tuning under cross-validation, and it helps to detect and handle convergence problems. DNNs can be trained on CPUs, GPUs, and macOS GPUs. In addition, 'cito' offers many downstream functionalities, such as various explainable AI (xAI) metrics (e.g. variable importance, partial dependence plots, accumulated local effect plots, and effect estimates), to interpret trained DNNs. 'cito' optionally provides confidence intervals (and p-values) for all xAI metrics and predictions. At the same time, 'cito' is computationally efficient because it is based on the deep learning framework 'torch'. The 'torch' package is native to R, so no Python installation or other API is required.
## Installation
Before installing 'cito', make sure 'torch' is installed. See the code chunk below if you are unsure how to check this:
```{r, eval = FALSE}
# Check that the 'torch' package is installed
if (!require('torch', quietly = TRUE)) install.packages('torch')
library('torch')
# Install the 'torch' backend libraries if needed
if (!torch_is_installed()) install_torch()
```
If you have trouble installing 'torch', please [visit the website of the 'torch' package](https://torch.mlverse.org/docs/articles/installation.html) or open an issue on [our GitHub repository](https://github.com/citoverse/cito/issues). We are happy to help you.
A stable version of 'cito' can be installed from CRAN with:
```{r, eval=FALSE}
install.packages("cito")
```
The development version from [GitHub](https://github.com/) can be installed with:
```{r, eval=FALSE}
if(!require('devtools', quietly = TRUE)) install.packages('devtools')
devtools::install_github('citoverse/cito')
```
## Example
Once installed, the main function `dnn()` can be used; see the example below. A more in-depth explanation can be found in the vignettes or [here under articles](https://citoverse.github.io/cito/).
1. Fit a model with bootstrapping (to obtain confidence intervals). All methods work with and without bootstrapping:
```{r}
library(cito)
nn.fit <- dnn(Sepal.Length~., data = datasets::iris, bootstrap = 30L)
```
2. Check whether the models have converged (compare the training loss against the baseline loss, i.e. an intercept-only model):
```{r, eval=FALSE}
analyze_training(nn.fit)
# At first glance, the networks converged: the loss is lower than the baseline loss and the training loss plateaus at the end of training.
```
3. Plot the model architecture:
```{r}
plot(nn.fit)
```
4. 'cito' supports many advanced functionalities, such as common explainable AI metrics that can be used for inference (i.e. to interpret the models). Variable importance (similar to a variation partitioning) and linear effects are directly returned by the `summary()` function:
```{r}
summary(nn.fit)
```
5. Predict (with confidence intervals):
```{r}
dim(predict(nn.fit, newdata = datasets::iris))
```
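Beyond `summary()`, the individual xAI metrics can also be computed directly. A minimal sketch (the function names `PDP()` and `ALE()` are taken from the 'cito' documentation; check `?PDP` and `?ALE` in your installed version):

```{r, eval=FALSE}
# Partial dependence plot for a single feature
PDP(nn.fit, variable = "Petal.Length")
# Accumulated local effect plot for the same feature
ALE(nn.fit, variable = "Petal.Length")
```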
### Hyperparameter tuning
Certain arguments/parameters, such as the architecture, the activation function, and the learning rate, can be tuned automatically under cross-validation (for a full list, see `?dnn`). Parameters that should be tuned can be flagged by passing the function `tune()` instead of a fixed hyperparameter value:
```{r}
nn.fit <- dnn(Sepal.Length~., data = datasets::iris, lr = tune(0.0001, 0.1))
nn.fit$tuning
```
The tuning can be configured with `tuning = config_tuning()`. After tuning, a final model trained with the best hyperparameters is returned. Hyperparameter combinations that do not achieve a loss below the baseline loss are aborted early and not fully cross-validated; these runs are assigned a test loss of infinity.
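As a sketch of how such a configuration could look (the argument names for the number of tuning steps and CV folds are assumptions here; see `?config_tuning` for the actual arguments of your installed version):

```{r, eval=FALSE}
# Tune the learning rate with a customized tuning budget
# (argument names `steps` and `CV` are assumptions -- check ?config_tuning)
nn.fit <- dnn(Sepal.Length ~ ., data = datasets::iris,
              lr = tune(0.0001, 0.1),
              tuning = config_tuning(steps = 15L, CV = 3L))
```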
## Advanced
We can pass custom loss functions to 'cito', optionally with additional parameters that should be fitted. The only requirement is that all calculations are written with the 'torch' package ('cito' automatically converts the initial values of the custom parameters to 'torch' objects).
As an example, we use a multivariate normal distribution as the likelihood and fit the covariance matrix of the distribution alongside the network:
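In equations, the steps below parameterize the covariance via a low-rank factor $L$ plus a strictly positive diagonal, and minimize the negative mean log-likelihood:

$$
\Sigma = L L^\top + \operatorname{diag}\left(\exp(d) + 0.001\right),
\qquad
\text{loss} = -\frac{1}{n}\sum_{i=1}^{n} \log \mathcal{N}\left(y_i \mid \hat{y}_i, \Sigma\right)
$$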
1. We need a helper function, `create_cov()`, that builds the covariance matrix from a lower triangular factor and the (log) diagonal (a low-rank approximation of the covariance matrix).
2. We need a custom likelihood function that uses the `distr_multivariate_normal(…)` function from the 'torch' package:
```{r}
# Build the covariance matrix from a lower triangular factor and the log-diagonal
create_cov = function(L, Diag) {
  return(torch::torch_matmul(L, L$t()) + torch::torch_diag(Diag$exp() + 0.001))
}

# Negative mean log-likelihood of a multivariate normal
custom_loss_MVN = function(true, pred) {
  Sigma = create_cov(SigmaPar, SigmaDiag)
  logLik = torch::distr_multivariate_normal(pred,
                                            covariance_matrix = Sigma)$log_prob(true)
  return(-logLik$mean())
}
```
3. We use `SigmaPar` and `SigmaDiag` as additional parameters that we want to optimize alongside the DNN. We pass a named list with starting values to 'cito', and 'cito' infers the shapes of the parameters automatically (from the shapes of the R objects):
```{r, warning=FALSE}
nn.fit <- dnn(cbind(Sepal.Length, Sepal.Width, Petal.Length) ~ .,
              data = datasets::iris,
              lr = 0.01,
              epochs = 200L,
              loss = custom_loss_MVN,
              verbose = FALSE,
              plot = FALSE,
              custom_parameters =
                list(SigmaDiag = rep(0, 3),                       # parameters with
                     SigmaPar = matrix(rnorm(6, sd = 0.001), 3, 2)) # starting values
              )
```
Estimated covariance matrix:
```{r}
as.matrix(create_cov(nn.fit$loss$parameter$SigmaPar,
                     nn.fit$loss$parameter$SigmaDiag))
```
Empirical covariance matrix:
```{r}
cov(predict(nn.fit) - nn.fit$data$Y)
```