diff --git a/dev/articles/BuildingDeepModels.html b/dev/articles/BuildingDeepModels.html
index 5ba98f5..60b8f01 100644
--- a/dev/articles/BuildingDeepModels.html
+++ b/dev/articles/BuildingDeepModels.html
@@ -105,7 +105,7 @@

Jenna Reps, Egill Fridgeirsson, Chungsoo Kim, Henrik John, Seng Chan You, Xiaoyong Pan

-2024-05-15
+2024-05-16

Source: vignettes/BuildingDeepModels.Rmd
@@ -115,7 +115,7 @@

2024-05-15

@@ -159,8 +159,8 @@

Background

torch
-but we invite the community to add other backends.

+In the package we use pytorch through the
+reticulate package.

Many network architectures have recently been proposed and we have implemented a number of them; however, this list will grow in the near future. It is important to understand that some of these architectures
@@ -170,7 +170,7 @@

Background

Requirements
@@ -183,16 +183,17 @@

Integration with PatientLevelPrediction

The DeepPatientLevelPrediction package provides additional model settings that can be used within the
-PatientLevelPrediction package runPlp()
-function. To use both packages you first need to pick the deep learning
-architecture you wish to fit (see below) and then you specify this as
-the modelSettings inside runPlp().

+PatientLevelPrediction package runPlp() and
+runMultiplePlp() functions. To use both packages you first
+need to pick the deep learning architecture you wish to fit (see below)
+and then you specify this as the modelSettings inside
+runPlp().

 # load the data
 plpData <- PatientLevelPrediction::loadPlpData('locationOfData')
 
 # pick the set<Model> function from DeepPatientLevelPrediction
-deepLearningModel <- DeepPatientLevelPrediction::setResNet()
+deepLearningModel <- DeepPatientLevelPrediction::setDefaultResNet()
 
 # use PatientLevelPrediction to fit model
 deepLearningResult <- PatientLevelPrediction::runPlp(
@@ -220,10 +221,10 @@ 

Overall concept

+some ground truth and involves automatically calculating the derivative
+of the error between the model’s predictions and the ground truth with
+respect to the model parameters. Then the model learns how to
+adjust its parameters to reduce the error.
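In symbols (a standard formulation, not specific to this package): writing the error as a loss L(θ) over the model parameters θ, each update takes a gradient descent step with learning rate η:

\theta \leftarrow \theta - \eta \, \nabla_{\theta} L(\theta)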

Example
@@ -247,9 +248,9 @@

Inputs

set to 0. This is used to reduce overfitting.

The sizeEmbedding input specifies the size of the embedding used. The first layer is an embedding layer which converts
-each sparse feature to a dense vector which it learns. An embedding is a
-lower dimensional projection of the features where distance between
-points is a measure of similarity.

+each sparse feature to a dense learned vector. An embedding is a lower
+dimensional projection of the features where distance between points is
+a measure of similarity.
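To make this concrete, here is a minimal reticulate sketch of an embedding lookup (illustrative only, not the package’s internal code; the feature count, embedding size and feature indices are made up):

library(reticulate)
torch <- import("torch")
nn <- torch$nn

# a made-up vocabulary of 1,000 sparse features, each mapped to a
# 16-dimensional dense vector that is learned during training
embedding <- nn$Embedding(1000L, 16L)

# indices of the features present for one hypothetical patient
featureIds <- torch$tensor(c(3L, 42L, 917L))

# look up the learned dense vectors; the result has shape [3, 16]
denseVectors <- embedding(featureIds)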

The weightDecay input corresponds to the weight decay in the objective function. During model fitting the aim is to minimize the objective function. The objective function is made up of the prediction
@@ -344,19 +345,18 @@
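For the weightDecay input specifically, the usual L2 formulation of this combined objective (a sketch of the standard form, not package-specific; λ is the weight decay value) is:

L_{\text{total}}(\theta) = L_{\text{prediction}}(\theta) + \lambda \sum_i \theta_i^2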

ResNet

Overall concept

Deep learning models are often trained via a process known as
-gradient descent during backpropogation. During this process the network
-weights are updated based on the gradient of the error function for the
-current weights. However, as the number of layers in the network
-increase, there is a greater chance of experiencing an issue known as
-the vanishing or exploding gradient during this process. The vanishing
-or exploding gradient is when the gradient goes to 0 or infinity, which
-negatively impacts the model fitting.

+gradient descent. During this process the network weights are updated
+based on the gradient of the error function for the current weights.
+However, as the number of layers in the network increases, there is a
+greater chance of experiencing an issue known as vanishing or exploding
+gradients. The vanishing or exploding gradient is when the gradient goes
+to 0 or infinity, which negatively impacts the model fitting.

The residual network (ResNet) was introduced to address the vanishing or exploding gradient issue. It works by adding connections between non-adjacent layers, termed ‘skip connections’.

The ResNet calculates embeddings for every feature and then averages them to compute an embedding per patient.

-This implementation of a ResNet for tabular data is based on this paper.

+Our implementation of a ResNet for tabular data is based on this paper.
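To make the skip connection concrete, here is a minimal reticulate sketch (an illustration of the idea only, not the package’s ResNet implementation; the layer sizes are made up):

library(reticulate)
torch <- import("torch")
nn <- torch$nn

x <- torch$randn(1L, 8L)       # a dummy 8-dimensional input
hidden <- nn$Linear(8L, 8L)    # a single hidden layer

# the skip connection: the layer's output is added back onto its input,
# so the gradient can flow around the layer during backpropagation
y <- torch$add(x, hidden(x))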

This means we are extracting gender as a binary variable, age as a continuous variable and conditions occurring in the long term window,
-which is by default 365 days prior.

+which is by default 365 days prior to index. If you want to know more
+about these terms we recommend checking out the
+Book of OHDSI.
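A sketch of covariate settings matching this description, using FeatureExtraction (the exact argument values here are assumptions):

covariateSettings <- FeatureExtraction::createCovariateSettings(
  useDemographicsGender = TRUE,          # gender as a binary variable
  useDemographicsAge = TRUE,             # age as a continuous variable
  useConditionOccurrenceLongTerm = TRUE, # conditions in the long term window
  longTermStartDays = -365               # window starts 365 days before index
)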

Next we need to define our database details, which specify which database we are using and which cohorts we are extracting from it. Since we don’t have a real database available we are using Eunomia.
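With Eunomia this could look roughly as follows (a hedged sketch: the schema is the Eunomia default, while the cohort table name and cohort ids are hypothetical):

connectionDetails <- Eunomia::getEunomiaConnectionDetails()

databaseDetails <- PatientLevelPrediction::createDatabaseDetails(
  connectionDetails = connectionDetails,
  cdmDatabaseSchema = "main",    # Eunomia keeps the CDM in the 'main' schema
  cohortDatabaseSchema = "main",
  cohortTable = "cohort",        # hypothetical table holding the cohorts
  targetId = 1,                  # hypothetical target cohort id
  outcomeIds = 2                 # hypothetical outcome cohort id
)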

@@ -222,7 +223,7 @@

The model

Egill Fridgeirsson

-2024-05-15
+2024-05-16

Source:
vignettes/Installing.Rmd
@@ -115,7 +115,7 @@

2024-05-15

@@ -213,7 +213,7 @@

Installing DeepPatientLevelPrediction

This should install the required Python packages. If that doesn’t happen it can be triggered by calling:

library(DeepPatientLevelPrediction)
-torch$trandn(10L)
+torch$randn(10L)

This should print out a tensor with ten different values.

When installing make sure to close any other RStudio sessions that are using DeepPatientLevelPrediction or any dependency.

diff --git a/dev/pkgdown.yml b/dev/pkgdown.yml
index 08e901d..3225e67 100644
--- a/dev/pkgdown.yml
+++ b/dev/pkgdown.yml
@@ -5,5 +5,5 @@ articles:
 BuildingDeepModels: BuildingDeepModels.html
 FirstModel: FirstModel.html
 Installing: Installing.html
-last_built: 2024-05-15T08:14Z
+last_built: 2024-05-16T12:03Z

diff --git a/dev/reference/index.html b/dev/reference/index.html
index 4d7a5c2..8aa44bb 100644
--- a/dev/reference/index.html
+++ b/dev/reference/index.html
@@ -153,6 +153,10 @@

All functions

snakeCaseToCamelCaseNames()

Convert the names of an object from snake case to camel case

+torch

+Pytorch module

trainingCache

diff --git a/dev/reference/torch.html b/dev/reference/torch.html
new file mode 100644
index 0000000..a8de66d
--- /dev/null
+++ b/dev/reference/torch.html
@@ -0,0 +1,123 @@
+Pytorch module — torch • DeepPatientLevelPrediction

+The `torch` module object is the equivalent of
+`reticulate::import("torch")` and is provided mainly as a convenience.


Format


An object of class `python.builtin.module`


Value


the torch Python module
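For example, once the package is loaded the module can be used directly (a minimal sketch; randn draws standard-normal values):

library(DeepPatientLevelPrediction)

# draw a 3x3 tensor of standard-normal values via the exposed torch module
x <- torch$randn(3L, 3L)
x$shape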

Site built with pkgdown 2.0.9.

+
diff --git a/dev/reference/trainingCache.html b/dev/reference/trainingCache.html
index 70c5224..e9a75f2 100644
--- a/dev/reference/trainingCache.html
+++ b/dev/reference/trainingCache.html
@@ -220,6 +220,14 @@

Usage

trainingCache$trimPerformance(hyperparameterResults)


+Arguments

+hyperparameterResults
+List of hyperparameter results


Method clone()

@@ -229,7 +237,7 @@

Usage

-Arguments

+Arguments

deep

Whether to make a deep clone.

diff --git a/dev/sitemap.xml b/dev/sitemap.xml
index b36b3da..35744da 100644
--- a/dev/sitemap.xml
+++ b/dev/sitemap.xml
@@ -78,6 +78,9 @@
 /reference/snakeCaseToCamelCaseNames.html
+ /reference/torch.html
 /reference/trainingCache.html