diff --git a/dev/articles/BuildingDeepModels.html b/dev/articles/BuildingDeepModels.html
index 5ba98f5..60b8f01 100644
--- a/dev/articles/BuildingDeepModels.html
+++ b/dev/articles/BuildingDeepModels.html
@@ -105,7 +105,7 @@
vignettes/BuildingDeepModels.Rmd
In the package we use pytorch through the
+reticulate
package.
Many network architectures have recently been proposed and we have
implemented a number of them; however, this list will grow in the near
future. It is important to understand that some of these architectures
@@ -170,7 +170,7 @@
The DeepPatientLevelPrediction package provides
additional model settings that can be used within the
-PatientLevelPrediction package runPlp()
-function. To use both packages you first need to pick the deep learning
-architecture you wish to fit (see below) and then you specify this as
-the modelSettings inside runPlp().
+PatientLevelPrediction package runPlp() and
+runMultiplePlp() functions. To use both packages you first
+need to pick the deep learning architecture you wish to fit (see below)
+and then you specify this as the modelSettings inside
+runPlp().
# load the data
plpData <- PatientLevelPrediction::loadPlpData('locationOfData')
# pick the set<Model> from DeepPatientLevelPrediction
-deepLearningModel <- DeepPatientLevelPrediction::setResNet()
+deepLearningModel <- DeepPatientLevelPrediction::setDefaultResNet()
# use PatientLevelPrediction to fit model
deepLearningResult <- PatientLevelPrediction::runPlp(
@@ -220,10 +221,10 @@ Overall concept
+some ground truth and involves automatically calculating the derivative
+of the error between the model’s predictions and the ground truth with
+respect to the model’s parameters. These derivatives are then used to
+adjust the model’s parameters and reduce the error.
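To make this concrete, below is a minimal gradient descent sketch in R for a single toy parameter; the squared-error objective, learning rate and data are illustrative and are not the package’s actual optimizer.
w <- 0                             # model parameter
x <- c(1, 2, 3); y <- c(2, 4, 6)   # toy inputs and their ground truth
learningRate <- 0.05
for (i in 1:50) {
  pred <- w * x
  grad <- mean(2 * (pred - y) * x) # derivative of the error w.r.t. w
  w <- w - learningRate * grad     # adjust the parameter to reduce the error
}
w                                  # approaches 2, which minimizes the error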
The sizeEmbedding input specifies the size of the
embedding used. The first layer is an embedding layer which converts
-each sparse feature to a dense vector which it learns. An embedding is a
-lower dimensional projection of the features where distance between
-points is a measure of similarity.
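To illustrate the idea behind sizeEmbedding, the sketch below looks up a dense vector for each of a patient’s sparse features; here the embedding matrix is random, whereas in the model it is learned during fitting.
numFeatures <- 5L
sizeEmbedding <- 3L
embedding <- matrix(rnorm(numFeatures * sizeEmbedding),
                    nrow = numFeatures, ncol = sizeEmbedding)
patientFeatures <- c(2L, 5L)   # indices of the patient's observed features
embedding[patientFeatures, ]   # one dense vector per observed feature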
The weightDecay input corresponds to the weight decay in
the objective function. During model fitting the aim is to minimize the
objective function. The objective function is made up of the prediction
@@ -344,19 +345,18 @@
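As a hedged illustration of the weightDecay input described above, a weight-decay (L2) penalized objective can be sketched as follows; the exact loss the package minimizes is not reproduced here.
penalizedObjective <- function(predictionLoss, weights, weightDecay) {
  # prediction error plus an L2 penalty that shrinks large weights
  predictionLoss + weightDecay * sum(weights^2)
}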
Deep learning models are often trained via a process known as
-gradient descent during backpropogation. During this process the network
-weights are updated based on the gradient of the error function for the
-current weights. However, as the number of layers in the network
-increase, there is a greater chance of experiencing an issue known as
-the vanishing or exploding gradient during this process. The vanishing
-or exploding gradient is when the gradient goes to 0 or infinity, which
-negatively impacts the model fitting.
+gradient descent. During this process the network weights are updated
+based on the gradient of the error function for the current weights.
+However, as the number of layers in the network increases, there is a
+greater chance of experiencing an issue known as vanishing or exploding
+gradients. The vanishing or exploding gradient is when the gradient goes
+to 0 or infinity, which negatively impacts the model fitting.
The residual network (ResNet) was introduced to address the
vanishing or exploding gradient issue. It works by adding connections
between non-adjacent layers, termed a ‘skip connection’.
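A minimal sketch of a skip connection, assuming a generic layer function; this is illustrative and not the package’s ResNet code.
residualBlock <- function(x, layer) {
  x + layer(x)   # the input skips past the layer and is added to its output
}
residualBlock(c(1, 2, 3), function(x) 0.1 * x)   # returns 1.1 2.2 3.3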
The ResNet calculates embeddings for every feature and then averages them to compute an embedding per patient.
-This implementation of a ResNet for tabular data is based on this paper.
+Our implementation of a ResNet for tabular data is based on this paper.
resset <- setResNet(
numLayers = c(2L),
@@ -499,14 +500,15 @@ Model inputs:
numBlocks: How many Transformer blocks to use, each
block includes a self-attention layer and a feedforward block with two
linear layers.
-dimToken: Dimension of the embedding for each feature’s
-embedding
+dimToken: Dimension of the embedding for each
+feature.
dimOut: Dimension of output, for binary problems this
is 1.
numHeads: Number of attention heads for the
-self-attention
-attDropout, ffnDropout and
-resDropout: How much dropout to apply on attentions, in
+self-attention, dimToken needs to be divisible by
+numHeads.
+attDropout, ffnDropout and
+resDropout: How much dropout to apply on attentions,
feedforward block or in residual connections
dimHidden: How many neurons in linear layers inside the
feedforward block
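Putting these inputs together, a configuration could look like the sketch below; the setTransformer() name follows the package’s set<Model> pattern, but the exact argument set and the values shown are assumptions for illustration. Note that dimToken must be divisible by numHeads (192 / 8 = 24).
transformer <- DeepPatientLevelPrediction::setTransformer(
  numBlocks = 3L,     # self-attention + feedforward blocks
  dimToken = 192L,    # embedding dimension per feature
  dimOut = 1L,        # binary outcome
  numHeads = 8L,      # dimToken is divisible by numHeads
  attDropout = 0.2,   # dropout on attentions
  ffnDropout = 0.1,   # dropout in the feedforward block
  resDropout = 0.0,   # dropout in residual connections
  dimHidden = 256L    # neurons in the feedforward linear layers
)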
diff --git a/dev/articles/FirstModel.html b/dev/articles/FirstModel.html
index 6b8f9d8..5f3bbdb 100644
--- a/dev/articles/FirstModel.html
+++ b/dev/articles/FirstModel.html
@@ -104,7 +104,7 @@ Developing your first DeepPLP model
Egill
Fridgeirsson
- 2024-05-15
+ 2024-05-16
Source: vignettes/FirstModel.Rmd
@@ -114,7 +114,7 @@ 2024-05-15
@@ -160,7 +160,8 @@ Our settings)
This means we are extracting gender as a binary variable, age as a
continuous variable and conditions occurring in the long term window,
-which is by default 365 days prior.
+which is by default 365 days prior to index. If you want to know more
+about these terms we recommend checking out The Book of OHDSI.
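For illustration, covariate settings matching this description might be created as sketched below; this assumes the FeatureExtraction package’s createCovariateSettings() and the argument names shown, so check that function’s documentation for the exact interface.
covariateSettings <- FeatureExtraction::createCovariateSettings(
  useDemographicsGender = TRUE,           # gender as a binary variable
  useDemographicsAge = TRUE,              # age as a continuous variable
  useConditionOccurrenceLongTerm = TRUE,  # conditions in the long term window
  longTermStartDays = -365                # window starts 365 days before index
)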
Next we need to define our database details, which defines from which
database we are getting which cohorts. Since we don’t have a database we
are using Eunomia.
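A hedged sketch of such database details using Eunomia follows; the schema and table names are the usual Eunomia defaults and the cohort ids are placeholders, so adapt them to your own study.
connectionDetails <- Eunomia::getEunomiaConnectionDetails()
databaseDetails <- PatientLevelPrediction::createDatabaseDetails(
  connectionDetails = connectionDetails,
  cdmDatabaseSchema = "main",    # Eunomia keeps the CDM in 'main'
  cohortDatabaseSchema = "main",
  cohortTable = "cohort",
  targetId = 1,                  # placeholder target cohort id
  outcomeIds = 3                 # placeholder outcome cohort id
)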
@@ -222,7 +223,7 @@ The model
Egill
Fridgeirsson
vignettes/Installing.Rmd
This should install the required Python packages. If that doesn’t happen it can be triggered by calling:
library(DeepPatientLevelPrediction)
-torch$trandn(10L)
+torch$randn(10L)
This should print out a tensor with ten different values.
When installing make sure to close any other RStudio sessions that
are using DeepPatientLevelPrediction
or any dependency.
diff --git a/dev/pkgdown.yml b/dev/pkgdown.yml
index 08e901d..3225e67 100644
--- a/dev/pkgdown.yml
+++ b/dev/pkgdown.yml
@@ -5,5 +5,5 @@ articles:
BuildingDeepModels: BuildingDeepModels.html
FirstModel: FirstModel.html
Installing: Installing.html
-last_built: 2024-05-15T08:14Z
+last_built: 2024-05-16T12:03Z
diff --git a/dev/reference/index.html b/dev/reference/index.html
index 4d7a5c2..8aa44bb 100644
--- a/dev/reference/index.html
+++ b/dev/reference/index.html
@@ -153,6 +153,10 @@
snakeCaseToCamelCaseNames()
Convert the names of an object from snake case to camel case
Pytorch module
The `torch` module object is the equivalent of
+`reticulate::import("torch")` and is provided mainly as a convenience.
+An object of class `python.builtin.module`
+the torch Python module
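As a small usage note, the exported torch object can be called like any reticulate module; this mirrors the installation check shown earlier.
library(DeepPatientLevelPrediction)
torch$randn(10L)   # same as reticulate::import("torch")$randn(10L)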
+trainingCache$trimPerformance(hyperparameterResults)