From 35cfec1a4feace167c4e1fe9eff5c8b236cab53b Mon Sep 17 00:00:00 2001 From: egillax Date: Mon, 8 Jul 2024 15:22:10 +0000 Subject: [PATCH] =?UTF-8?q?Deploying=20to=20gh-pages=20from=20@=20OHDSI/De?= =?UTF-8?q?epPatientLevelPrediction@5fd319559b44e0866956867af67f84d9a702f0?= =?UTF-8?q?9d=20=F0=9F=9A=80?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- 404.html | 23 +- articles/BuildingDeepModels.html | 114 +++---- articles/FirstModel.html | 50 +-- articles/Installing.html | 44 +-- articles/TransferLearning.html | 358 ++++++++++++++++++++++ articles/index.html | 25 +- authors.html | 47 +-- index.html | 23 +- news/index.html | 36 ++- pkgdown.yml | 8 +- reference/DeepPatientLevelPrediction.html | 39 ++- reference/camelCaseToSnakeCase.html | 33 +- reference/camelCaseToSnakeCaseNames.html | 33 +- reference/checkHigher.html | 31 +- reference/checkHigherEqual.html | 31 +- reference/checkIsClass.html | 31 +- reference/fitEstimator.html | 37 ++- reference/gridCvDeep.html | 37 ++- reference/index.html | 41 ++- reference/predictDeepEstimator.html | 33 +- reference/setDefaultResNet.html | 31 +- reference/setDefaultTransformer.html | 29 +- reference/setEstimator.html | 58 ++-- reference/setFinetuner.html | 129 ++++++++ reference/setMultiLayerPerceptron.html | 43 +-- reference/setResNet.html | 49 +-- reference/setTransformer.html | 61 ++-- reference/snakeCaseToCamelCase.html | 129 ++++++++ reference/snakeCaseToCamelCaseNames.html | 129 ++++++++ reference/torch.html | 124 ++++++++ reference/trainingCache.html | 59 ++-- sitemap.xml | 109 ++----- 32 files changed, 1500 insertions(+), 524 deletions(-) create mode 100644 articles/TransferLearning.html create mode 100644 reference/setFinetuner.html create mode 100644 reference/snakeCaseToCamelCase.html create mode 100644 reference/snakeCaseToCamelCaseNames.html create mode 100644 reference/torch.html diff --git a/404.html b/404.html index 7f6cb97..a50d5b5 100644 --- a/404.html +++ b/404.html @@ -6,7 +6,7 @@ Page not found (404) • DeepPatientLevelPrediction - + @@ -18,7 +18,7 @@ - +
@@ -41,7 +41,7 @@
  • - +
  • @@ -56,7 +56,7 @@
  • @@ -82,7 +85,7 @@
  • - +
  • @@ -93,7 +96,7 @@
    - +
    @@ -121,16 +124,16 @@

    Page not found (404)

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/articles/BuildingDeepModels.html b/articles/BuildingDeepModels.html index 9374ddd..9a7998f 100644 --- a/articles/BuildingDeepModels.html +++ b/articles/BuildingDeepModels.html @@ -6,20 +6,19 @@ Building Deep Learning Models • DeepPatientLevelPrediction - + - - +
    @@ -42,7 +41,7 @@
  • - +
  • @@ -57,7 +56,7 @@
  • @@ -83,7 +85,7 @@
  • - +
  • @@ -94,7 +96,7 @@
    - +
    @@ -104,9 +106,9 @@

    Jenna Reps, Egill Fridgeirsson, Chungsoo Kim, Henrik John, Seng Chan You, Xiaoyong Pan

    -

    2023-12-22

    +

    2024-07-08

    - Source: vignettes/BuildingDeepModels.Rmd + Source: vignettes/BuildingDeepModels.Rmd
    @@ -114,7 +116,7 @@

    2023-12-22

    @@ -158,8 +160,8 @@

    Backgroundtorch -but we invite the community to add other backends.

    +

    In the package we use pytorch through the +reticulate package.

    Many network architectures have recently been proposed and we have implemented a number of them, however, this list will grow in the near future. It is important to understand that some of these architectures @@ -169,7 +171,7 @@

    Background

    Requirements @@ -182,16 +184,17 @@

    Integration with PatientLevelPr

    The DeepPatientLevelPrediction package provides additional model settings that can be used within the -PatientLevelPrediction package runPlp() -function. To use both packages you first need to pick the deep learning -architecture you wish to fit (see below) and then you specify this as -the modelSettings inside runPlp().

    +PatientLevelPrediction package runPlp() and +runMultiplePlp() functions. To use both packages you first +need to pick the deep learning architecture you wish to fit (see below) +and then you specify this as the modelSettings inside +runPlp().

     # load the data
     plpData <- PatientLevelPrediction::loadPlpData('locationOfData')
     
     # pick the set<Model> from  DeepPatientLevelPrediction
    -deepLearningModel <- DeepPatientLevelPrediction::setResNet()
    +deepLearningModel <- DeepPatientLevelPrediction::setDefaultResNet()
     
     # use PatientLevelPrediction to fit model
     deepLearningResult <- PatientLevelPrediction::runPlp(
    @@ -209,7 +212,7 @@ 

    Non-Temporal ArchitecturesWe implemented the following non-temporal (2D data matrix) architectures:

    -

    Simple MLP +

    Simple MultiLayerPerceptron

    Example @@ -246,9 +249,9 @@

    Inputs set to 0. This is used to reduce overfitting.

    The sizeEmbedding input specifies the size of the embedding used. The first layer is an embedding layer which converts -each sparse feature to a dense vector which it learns. An embedding is a -lower dimensional projection of the features where distance between -points is a measure of similarity.

    +each sparse feature to a dense learned vector. An embedding is a lower +dimensional projection of the features where distance between points is +a measure of similarity.

    The weightDecay input corresponds to the weight decay in the objective function. During model fitting the aim is to minimize the objective function. The objective function is made up of the prediction @@ -343,19 +346,18 @@

    ResNet

    Overall concept

    Deep learning models are often trained via a process known as -gradient descent during backpropogation. During this process the network -weights are updated based on the gradient of the error function for the -current weights. However, as the number of layers in the network -increase, there is a greater chance of experiencing an issue known as -the vanishing or exploding gradient during this process. The vanishing -or exploding gradient is when the gradient goes to 0 or infinity, which -negatively impacts the model fitting.

+gradient descent. During this process the network weights are updated
+based on the gradient of the error function for the current weights.
+However, as the number of layers in the network increases, there is a
+greater chance of experiencing an issue known as vanishing or exploding
+gradients. The vanishing or exploding gradient is when the gradient goes
+to 0 or infinity, which negatively impacts the model fitting.

    The residual network (ResNet) was introduced to address the vanishing or exploding gradient issue. It works by adding connections between non-adjacent layers, termed a ‘skip connection’.
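To make the skip-connection idea concrete, here is a minimal conceptual
sketch in plain R (an editorial illustration, not the package's layer
code): the block applies its transformation and then adds the unchanged
input back, so gradients always have an identity path to flow through.

# conceptual residual (skip) connection: output = f(x) + x
residual_block <- function(x, f) {
  f(x) + x
}

# toy usage: even if f nearly zeroes its input, the identity path keeps the signal
x <- c(1, 2, 3)
residual_block(x, function(v) 0.001 * v)
## [1] 1.001 2.002 3.003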

    The ResNet calculates embeddings for every feature and then averages them to compute an embedding per patient.

    -

    This implementation of a ResNet for tabular data is based on this paper.

    +

    Our implementation of a ResNet for tabular data is based on this paper.

    ## To cite package 'DeepPatientLevelPrediction' in publications use:
     ## 
    -##   Fridgeirsson E, Reps J, Chan You S, Kim C, John H (22).
    +##   Fridgeirsson E, Reps J, Chan You S, Kim C, John H (8).
     ##   _DeepPatientLevelPrediction: Deep Learning For Patient Level
     ##   Prediction Using Data In The OMOP Common Data Model_. R package
    -##   version 2.0.3, <https://github.com/OHDSI/DeepPatientLevelPrediction>.
    +##   version 2.1.0, <https://github.com/OHDSI/DeepPatientLevelPrediction>.
     ## 
     ## A BibTeX entry for LaTeX users is
     ## 
    @@ -577,8 +581,8 @@ 

    Acknowledgments## title = {DeepPatientLevelPrediction: Deep Learning For Patient Level Prediction Using Data In The ## OMOP Common Data Model}, ## author = {Egill Fridgeirsson and Jenna Reps and Seng {Chan You} and Chungsoo Kim and Henrik John}, -## year = {22}, -## note = {R package version 2.0.3}, +## year = {8}, +## note = {R package version 2.1.0}, ## url = {https://github.com/OHDSI/DeepPatientLevelPrediction}, ## }

    Please reference this paper if you use the PLP Package in @@ -593,9 +597,7 @@

    Acknowledgments - -

    +

    @@ -608,16 +610,16 @@

    Acknowledgments

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/articles/FirstModel.html b/articles/FirstModel.html index a504500..c11cc70 100644 --- a/articles/FirstModel.html +++ b/articles/FirstModel.html @@ -6,20 +6,19 @@ Developing your first DeepPLP model • DeepPatientLevelPrediction - + - - +
    @@ -42,7 +41,7 @@
  • - +
  • @@ -57,7 +56,7 @@
  • @@ -83,7 +85,7 @@
  • - +
  • @@ -94,7 +96,7 @@
    - +
    @@ -103,9 +105,9 @@

    Developing your first DeepPLP model

    Egill Fridgeirsson

    -

    2023-12-22

    +

    2024-07-08

    - Source: vignettes/FirstModel.Rmd + Source: vignettes/FirstModel.Rmd
    @@ -113,7 +115,7 @@

    2023-12-22

    @@ -159,7 +161,8 @@

    Our settings)

    This means we are extracting gender as a binary variable, age as a continuous variable and conditions occurring in the long term window, -which is by default 365 days prior.

    +which is by default 365 days prior to index. If you want to know more +about these terms we recommend checking out the
    book of OHDSI.

    Next we need to define our database details, which defines from which database we are getting which cohorts. Since we don’t have a database we are using Eunomia.

    @@ -221,7 +224,7 @@

    The modelThe modelAcknowledgmentscitation("DeepPatientLevelPrediction")

    ## To cite package 'DeepPatientLevelPrediction' in publications use:
     ## 
    -##   Fridgeirsson E, Reps J, Chan You S, Kim C, John H (22).
    +##   Fridgeirsson E, Reps J, Chan You S, Kim C, John H (8).
     ##   _DeepPatientLevelPrediction: Deep Learning For Patient Level
     ##   Prediction Using Data In The OMOP Common Data Model_. R package
    -##   version 2.0.3, <https://github.com/OHDSI/DeepPatientLevelPrediction>.
    +##   version 2.1.0, <https://github.com/OHDSI/DeepPatientLevelPrediction>.
     ## 
     ## A BibTeX entry for LaTeX users is
     ## 
    @@ -268,8 +272,8 @@ 

    Acknowledgments## title = {DeepPatientLevelPrediction: Deep Learning For Patient Level Prediction Using Data In The ## OMOP Common Data Model}, ## author = {Egill Fridgeirsson and Jenna Reps and Seng {Chan You} and Chungsoo Kim and Henrik John}, -## year = {22}, -## note = {R package version 2.0.3}, +## year = {8}, +## note = {R package version 2.1.0}, ## url = {https://github.com/OHDSI/DeepPatientLevelPrediction}, ## }

    Please reference this paper if you use the PLP Package in @@ -284,9 +288,7 @@

    Acknowledgments - - + @@ -299,16 +301,16 @@

    Acknowledgments

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/articles/Installing.html b/articles/Installing.html index 20bed3f..8332311 100644 --- a/articles/Installing.html +++ b/articles/Installing.html @@ -6,20 +6,19 @@ DeepPatientLevelPrediction Installation Guide • DeepPatientLevelPrediction - + - - +
    @@ -42,7 +41,7 @@
  • - +
  • @@ -57,7 +56,7 @@
  • @@ -83,7 +85,7 @@
  • - +
  • @@ -94,7 +96,7 @@
    - +
    @@ -104,9 +106,9 @@

    DeepPatientLevelPrediction Installation

    Egill Fridgeirsson

    -

    2023-12-22

    +

    2024-07-08

    - Source: vignettes/Installing.Rmd + Source: vignettes/Installing.Rmd
    @@ -114,7 +116,7 @@

    2023-12-22

    @@ -212,7 +214,7 @@

    Installing DeepPati

    This should install the required python packages. If that doesn’t happen it can be triggered by calling:

    library(DeepPatientLevelPrediction)
    -torch$trandn(10L)
    +torch$randn(10L)

    This should print out a tensor with ten different values.

    When installing make sure to close any other Rstudio sessions that are using DeepPatientLevelPrediction or any dependency. @@ -282,10 +284,10 @@

    Acknowledgmentscitation("DeepPatientLevelPrediction")

    ## To cite package 'DeepPatientLevelPrediction' in publications use:
     ## 
    -##   Fridgeirsson E, Reps J, Chan You S, Kim C, John H (22).
    +##   Fridgeirsson E, Reps J, Chan You S, Kim C, John H (8).
     ##   _DeepPatientLevelPrediction: Deep Learning For Patient Level
     ##   Prediction Using Data In The OMOP Common Data Model_. R package
    -##   version 2.0.3, <https://github.com/OHDSI/DeepPatientLevelPrediction>.
    +##   version 2.1.0, <https://github.com/OHDSI/DeepPatientLevelPrediction>.
     ## 
     ## A BibTeX entry for LaTeX users is
     ## 
    @@ -293,8 +295,8 @@ 

    Acknowledgments## title = {DeepPatientLevelPrediction: Deep Learning For Patient Level Prediction Using Data In The ## OMOP Common Data Model}, ## author = {Egill Fridgeirsson and Jenna Reps and Seng {Chan You} and Chungsoo Kim and Henrik John}, -## year = {22}, -## note = {R package version 2.0.3}, +## year = {8}, +## note = {R package version 2.1.0}, ## url = {https://github.com/OHDSI/DeepPatientLevelPrediction}, ## }

    Please reference this paper if you use the PLP Package in @@ -309,9 +311,7 @@

    Acknowledgments - -

    + @@ -324,16 +324,16 @@

    Acknowledgments

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/articles/TransferLearning.html b/articles/TransferLearning.html new file mode 100644 index 0000000..0378081 --- /dev/null +++ b/articles/TransferLearning.html @@ -0,0 +1,358 @@ + + + + + + + +How to use DeepPatientLevelPrediction for Transfer Learning • DeepPatientLevelPrediction + + + + + + + + + + + +
    +
    + + + + +
    +
    + + + + + +
    +

    Introduction +

    +

This vignette describes how to use the DeepPatientLevelPrediction
+package for transfer learning. Transfer learning is a machine learning
+technique where a model trained on one task is used as a starting point
+for training a model on a different task. This can be useful when you
+have a small dataset for the new task but a large dataset for a related
+task. In this vignette we show how to apply it to a
+patient-level prediction task.

    +
    +
    +

    Training initial model +

    +

    The first step in transfer learning is to train an initial model. In +this example, we will train a model to predict the risk of a patient +developing a certain condition based on their electronic health record +data. We will use the Eunomia package to access a dataset +to train the model. The following code shows how to train the initial +model:

    +
    +library(DeepPatientLevelPrediction)
    +
    +# Get connection details for the Eunomia dataset and create the cohorts
    +connectionDetails <- Eunomia::getEunomiaConnectionDetails()
    +Eunomia::createCohorts(connectionDetails)
    +

The default Eunomia package includes four cohorts: gastrointestinal
+bleeding (GiBleed) and the use of three different drugs,
+diclofenac, NSAIDS and celecoxib.
+Usually we would then use one of the three drug cohorts as our target
+cohort and predict the risk of gastrointestinal bleeding. The
+cohort_definition_ids of these are:
+celecoxib: 1, diclofenac: 2,
+GiBleed: 3 and NSAIDS: 4.

    +

After creating the cohorts we can see that the NSAIDS cohort
+contains the most patients. We will use this cohort as our target
+cohort for the initial model. The diclofenac cohort
+(excluding GiBleed) contains the fewest patients, so we
+will use it as the target cohort for the transfer learning
+model.

    +
    +# create some simple covariate settings using Sex, Age and Long-term conditions and drug use in the last year.
    +covariateSettings <- FeatureExtraction::createCovariateSettings(
    +  useDemographicsGender = TRUE,
    +  useDemographicsAge = TRUE,
    +  useConditionOccurrenceLongTerm = TRUE,
    +  useDrugEraLongTerm = TRUE,
    +  endDays = 0
    +)
    +
    +# Information about the database. In Eunomia sqlite there is only one schema, main and the cohorts are in a table named `cohort` which is the default. 
    +databaseDetails <- PatientLevelPrediction::createDatabaseDetails(
    +  connectionDetails = connectionDetails,
    +  cdmDatabaseId = "2", # Eunomia version used
    +  cdmDatabaseSchema = "main",
    +  targetId = 4,
    +  outcomeIds = 3,
    +  cdmDatabaseName = "eunomia"
    +)
    +
    +# Let's now extract the plpData object from the database
    +plpData <- PatientLevelPrediction::getPlpData(
    +  databaseDetails = databaseDetails,
    +  covariateSettings = covariateSettings,
    +  restrictPlpDataSettings = PatientLevelPrediction::createRestrictPlpDataSettings()
    +)
    +

    Now we can set up our initial model development. We will use a simple +ResNet.

    +
    +modelSettings <- setResNet(numLayers = c(2),
    +                           sizeHidden = 128,
    +                           hiddenFactor = 1,
    +                           residualDropout = 0.1,
    +                           hiddenDropout = 0.1,
    +                           sizeEmbedding = 128,
    +                           estimatorSettings = setEstimator(
    +                             learningRate = 3e-4,
    +                             weightDecay = 0,
    +                             device = "cpu", # use cuda here if you have a gpu
    +                             batchSize = 256,
    +                             epochs = 5,
    +                             seed = 42
    +                           ),
    +                           hyperParamSearch = "random",
    +                           randomSample = 1)
    +
    +plpResults <- PatientLevelPrediction::runPlp(
    +  plpData = plpData,
+  outcomeId = 3, # 3 is the id of GiBleed
    +  modelSettings = modelSettings,
    +  analysisName = "Nsaids_GiBleed",
    +  analysisId = "1",
    +  # Let's predict the risk of Gibleed in the year following start of NSAIDs use
    +  populationSettings = PatientLevelPrediction::createStudyPopulationSettings(
    +    requireTimeAtRisk = FALSE,
    +    firstExposureOnly = TRUE,
    +    riskWindowStart = 1,
    +    riskWindowEnd = 365
    +  ),
    +  splitSettings = PatientLevelPrediction::createDefaultSplitSetting(splitSeed = 42),
    +  saveDirectory = "./output" # save in a folder in the current directory
    +)
    +

This should take a few minutes on a CPU. Now that we have a developed
+model we can further finetune it on the diclofenac
+cohort. First we need to extract it.

    +
    +databaseDetails <- PatientLevelPrediction::createDatabaseDetails(
    +  connectionDetails = connectionDetails,
    +  cdmDatabaseId = "2", # Eunomia version used
    +  cdmDatabaseSchema = "main",
    +  targetId = 2, # diclofenac cohort
    +  outcomeIds = 3,
    +  cdmDatabaseName = "eunomia"
    +)
    +
    +plpDataTransfer <- PatientLevelPrediction::getPlpData(
    +  databaseDetails = databaseDetails,
    +  covariateSettings = covariateSettings, # same as for the developed model
    +  restrictPlpDataSettings = PatientLevelPrediction::createRestrictPlpDataSettings()
    +)
    +

Now we can set up our transfer learning model development. For this
+we need to use a different modelSettings function,
+setFinetuner. We also need to know the path to the
+previously developed model. This should be of the form
+outputDir/analysisId/plpResult/model, where outputDir is the
+directory specified when we developed our model and analysisId is the
+id we gave the analysis. In this case it is 1 and the path to
+the model is: ./output/1/plpResult/model.

    +
    +modelSettingsTransfer <- setFinetuner(modelPath = './output/1/plpResult/model',
    +                                      estimatorSettings = setEstimator(
    +                                        learningRate = 3e-4,
    +                                        weightDecay = 0,
    +                                        device = "cpu", # use cuda here if you have a gpu
    +                                        batchSize = 256,
    +                                        epochs = 5,
    +                                        seed = 42
    +                                      ))
    +

Currently the basic transfer learning works by loading the previously
+trained model and resetting its last layer, often called the prediction
+head. Then it will train only the parameters in this last layer. The
+hope is that the other layers have learned some generalizable
+representations of our data, and that by modifying the last layer we can
+mix those representations to suit the new task.
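As a rough sketch of this idea (illustrative only, not the package's
internal code, and assuming the torch Python package is available), one
could freeze every parameter of a loaded PyTorch model and re-initialize
only a prediction head; the names `model` and `head` here are assumed
purely for the illustration.

library(reticulate)
torch <- import("torch")

freeze_all_but_head <- function(model, n_features) {
  # stop gradient updates for every existing parameter
  iterate(model$parameters(), function(p) p$requires_grad_(FALSE))
  # replace the (assumed) prediction head with a fresh, trainable linear layer
  model$head <- torch$nn$Linear(as.integer(n_features), 1L)
  model
}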

    +
    +plpResultsTransfer <- PatientLevelPrediction::runPlp(
    +  plpData = plpDataTransfer,
    +  outcomeId = 3,
    +  modelSettings = modelSettingsTransfer,
    +  analysisName = "Diclofenac_GiBleed",
    +  analysisId = "2",
    +  populationSettings = PatientLevelPrediction::createStudyPopulationSettings(
    +    requireTimeAtRisk = FALSE,
    +    firstExposureOnly = TRUE,
    +    riskWindowStart = 1,
    +    riskWindowEnd = 365
    +  ),
    +  splitSettings = PatientLevelPrediction::createDefaultSplitSetting(splitSeed = 42),
    +  saveDirectory = "./outputTransfer" # save in a folder in the current directory
    +)
    +

This should be much faster since it's only training the last layer.
+Unfortunately the results are poor, but this is a toy example on
+synthetic data; the process on large observational data is
+exactly the same.

    +
    +
    +

    Conclusion +

    +

    Now you have finetuned a model on a new cohort using transfer +learning. This can be useful when you have a small dataset for the new +task, but a large dataset for a related task or from a different +database. The DeepPatientLevelPrediction package makes it easy to +perform transfer learning on patient-level prediction tasks.

    +
    +
    +

    Acknowledgments +

    +

Considerable work has been dedicated to providing the
+DeepPatientLevelPrediction package.

    +
    +citation("DeepPatientLevelPrediction")
    +
    ## To cite package 'DeepPatientLevelPrediction' in publications use:
    +## 
    +##   Fridgeirsson E, Reps J, Chan You S, Kim C, John H (8).
    +##   _DeepPatientLevelPrediction: Deep Learning For Patient Level
    +##   Prediction Using Data In The OMOP Common Data Model_. R package
    +##   version 2.1.0, <https://github.com/OHDSI/DeepPatientLevelPrediction>.
    +## 
    +## A BibTeX entry for LaTeX users is
    +## 
    +##   @Manual{,
    +##     title = {DeepPatientLevelPrediction: Deep Learning For Patient Level Prediction Using Data In The
    +## OMOP Common Data Model},
    +##     author = {Egill Fridgeirsson and Jenna Reps and Seng {Chan You} and Chungsoo Kim and Henrik John},
    +##     year = {8},
    +##     note = {R package version 2.1.0},
    +##     url = {https://github.com/OHDSI/DeepPatientLevelPrediction},
    +##   }
    +

    Please reference this paper if you use the PLP Package in +your work:

    +

    Reps JM, Schuemie +MJ, Suchard MA, Ryan PB, Rijnbeek PR. Design and implementation of a +standardized framework to generate and evaluate patient-level prediction +models using observational healthcare data. J Am Med Inform Assoc. +2018;25(8):969-975.

    +
    +
    + + + +
    + + + +
    + +
    +

    +

    Site built with pkgdown 2.1.0.

    +
    + +
    +
    + + + + + + + + diff --git a/articles/index.html b/articles/index.html index 60a411d..60d2dc3 100644 --- a/articles/index.html +++ b/articles/index.html @@ -1,9 +1,9 @@ -Articles • DeepPatientLevelPredictionArticles • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - + @@ -97,15 +102,15 @@

    All vignettes

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/authors.html b/authors.html index 35c63c8..ee45190 100644 --- a/authors.html +++ b/authors.html @@ -1,9 +1,9 @@ -Authors and Citation • DeepPatientLevelPredictionAuthors and Citation • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - +
    - +
    • -

      Egill Fridgeirsson. Author, maintainer. +

      Egill Fridgeirsson. Author, maintainer.

    • -

      Jenna Reps. Author. +

      Jenna Reps. Author.

    • -

      Seng Chan You. Author. +

      Seng Chan You. Author.

    • -

      Chungsoo Kim. Author. +

      Chungsoo Kim. Author.

    • -

      Henrik John. Author. +

      Henrik John. Author.

    Citation

    - Source: DESCRIPTION + Source: DESCRIPTION
    -

    Fridgeirsson E, Reps J, Chan You S, Kim C, John H (2023). +

    Fridgeirsson E, Reps J, Chan You S, Kim C, John H (2024). DeepPatientLevelPrediction: Deep Learning For Patient Level Prediction Using Data In The OMOP Common Data Model. -R package version 2.0.3, https://github.com/OHDSI/DeepPatientLevelPrediction. +R package version 2.1.0, https://github.com/OHDSI/DeepPatientLevelPrediction.

    @Manual{,
       title = {DeepPatientLevelPrediction: Deep Learning For Patient Level Prediction Using Data In The OMOP Common Data Model},
       author = {Egill Fridgeirsson and Jenna Reps and Seng {Chan You} and Chungsoo Kim and Henrik John},
    -  year = {2023},
    -  note = {R package version 2.0.3},
    +  year = {2024},
    +  note = {R package version 2.1.0},
       url = {https://github.com/OHDSI/DeepPatientLevelPrediction},
     }
    @@ -131,15 +134,15 @@

    Citation

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/index.html b/index.html index 5c829b7..4869d50 100644 --- a/index.html +++ b/index.html @@ -6,7 +6,7 @@ Deep Learning For Patient Level Prediction Using Data In The OMOP Common Data Model • DeepPatientLevelPrediction - + @@ -19,7 +19,7 @@ - +
    @@ -42,7 +42,7 @@
  • - +
  • @@ -57,7 +57,7 @@
  • @@ -83,7 +86,7 @@
  • - +
  • @@ -94,7 +97,7 @@
    - +
    @@ -222,16 +225,16 @@

    Developers

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/news/index.html b/news/index.html index f232d42..0091055 100644 --- a/news/index.html +++ b/news/index.html @@ -1,9 +1,9 @@ -Changelog • DeepPatientLevelPredictionChangelog • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - +
    +
    + +
    • Added basic transfer learning functionality. See vignette(“TransferLearning”)
    • +
    • Add a gpu memory cleaner to clean cached memory after out of memory error
    • +
    • The python module torch is now accessed through an exported function instead of loading the module at package load
    • +
    • Added gradient accumulation. Studies running at different sites using different hardware can now use same effective batch size by accumulating gradients.
    • +
    • Refactored out the cross validation from the hyperparameter tuning
    • +
    • Remove predictions from non-optimal hyperparameter combinations to save space
    • +
    • Only use html vignettes
    • +
    • Rename MLP to MultiLayerPerceptron
    • +
    • Hotfix: Fix count for polars v0.20.x
    • @@ -180,15 +194,15 @@
    - - + + diff --git a/pkgdown.yml b/pkgdown.yml index de66f35..69bc959 100644 --- a/pkgdown.yml +++ b/pkgdown.yml @@ -1,9 +1,9 @@ -pandoc: 2.19.2 -pkgdown: 2.0.7 +pandoc: 3.1.11 +pkgdown: 2.1.0 pkgdown_sha: ~ articles: BuildingDeepModels: BuildingDeepModels.html FirstModel: FirstModel.html Installing: Installing.html -last_built: 2023-12-22T14:37Z - + TransferLearning: TransferLearning.html +last_built: 2024-07-08T15:21Z diff --git a/reference/DeepPatientLevelPrediction.html b/reference/DeepPatientLevelPrediction.html index a7838c7..60892d0 100644 --- a/reference/DeepPatientLevelPrediction.html +++ b/reference/DeepPatientLevelPrediction.html @@ -1,10 +1,10 @@ -DeepPatientLevelPrediction — DeepPatientLevelPrediction • DeepPatientLevelPredictionDeepPatientLevelPrediction — DeepPatientLevelPrediction • DeepPatientLevelPrediction - +
    @@ -26,7 +26,7 @@
    - +
    @@ -86,6 +89,20 @@

    DeepPatientLevelPrediction

    + +
    +

    Author

    +

    Maintainer: Egill Fridgeirsson e.fridgeirsson@erasmusmc.nl

    +

    Authors:

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/reference/camelCaseToSnakeCase.html b/reference/camelCaseToSnakeCase.html index 6ca2ea4..9ab9d05 100644 --- a/reference/camelCaseToSnakeCase.html +++ b/reference/camelCaseToSnakeCase.html @@ -1,9 +1,9 @@ -Convert a camel case string to snake case — camelCaseToSnakeCase • DeepPatientLevelPredictionConvert a camel case string to snake case — camelCaseToSnakeCase • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - +
    @@ -89,15 +92,15 @@

    Convert a camel case string to snake case

    Arguments

    -
    string
    + + +
    string

    The string to be converted

    Value

    - - -

    A string

    +

    A string

    @@ -112,15 +115,15 @@

    Value

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/reference/camelCaseToSnakeCaseNames.html b/reference/camelCaseToSnakeCaseNames.html index a32460e..228bd2e 100644 --- a/reference/camelCaseToSnakeCaseNames.html +++ b/reference/camelCaseToSnakeCaseNames.html @@ -1,9 +1,9 @@ -Convert the names of an object from camel case to snake case — camelCaseToSnakeCaseNames • DeepPatientLevelPredictionConvert the names of an object from camel case to snake case — camelCaseToSnakeCaseNames • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - +
    @@ -89,15 +92,15 @@

    Convert the names of an object from camel case to snake case

    Arguments

    -
    object
    + + +
    object

    The object of which the names should be converted

    Value

    - - -

    The same object, but with converted names.

    +

    The same object, but with converted names.

    @@ -112,15 +115,15 @@

    Value

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/reference/checkHigher.html b/reference/checkHigher.html index c3ecffe..440e038 100644 --- a/reference/checkHigher.html +++ b/reference/checkHigher.html @@ -1,9 +1,9 @@ -helper function to check that input is higher than a certain value — checkHigher • DeepPatientLevelPredictionhelper function to check that input is higher than a certain value — checkHigher • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - +
    @@ -89,11 +92,13 @@

    helper function to check that input is higher than a certain value

    Arguments

    -
    parameter
    + + +
    parameter

    the input parameter to check, can be a vector

    -
    value
    +
    value

    which value it should be higher than

    @@ -110,15 +115,15 @@

    Arguments

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/reference/checkHigherEqual.html b/reference/checkHigherEqual.html index 2b8e2f1..839517a 100644 --- a/reference/checkHigherEqual.html +++ b/reference/checkHigherEqual.html @@ -1,9 +1,9 @@ -helper function to check that input is higher or equal than a certain value — checkHigherEqual • DeepPatientLevelPredictionhelper function to check that input is higher or equal than a certain value — checkHigherEqual • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - +
    @@ -89,11 +92,13 @@

    helper function to check that input is higher or equal than a certain value<

    Arguments

    -
    parameter
    + + +
    parameter

    the input parameter to check, can be a vector

    -
    value
    +
    value

    which value it should be higher or equal than

    @@ -110,15 +115,15 @@

    Arguments

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/reference/checkIsClass.html b/reference/checkIsClass.html index 25a43c5..5a78f1c 100644 --- a/reference/checkIsClass.html +++ b/reference/checkIsClass.html @@ -1,9 +1,9 @@ -helper function to check class of input — checkIsClass • DeepPatientLevelPredictionhelper function to check class of input — checkIsClass • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - +
    @@ -89,11 +92,13 @@

    helper function to check class of input

    Arguments

    -
    parameter
    + + +
    parameter

    the input parameter to check

    -
    classes
    +
    classes

    which classes it should belong to (one or more)

    @@ -110,15 +115,15 @@

    Arguments

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/reference/fitEstimator.html b/reference/fitEstimator.html index f33a0ea..4bb80d6 100644 --- a/reference/fitEstimator.html +++ b/reference/fitEstimator.html @@ -1,9 +1,9 @@ -fitEstimator — fitEstimator • DeepPatientLevelPredictionfitEstimator — fitEstimator • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - +
    @@ -89,23 +92,25 @@

    fitEstimator

    Arguments

    -
    trainData
    + + +
    trainData

    the data to use

    -
    modelSettings
    +
    modelSettings

    modelSettings object

    -
    analysisId
    +
    analysisId

    Id of the analysis

    -
    analysisPath
    +
    analysisPath

    Path of the analysis

    -
    ...
    +
    ...

    Extra inputs

    @@ -122,15 +127,15 @@

    Arguments

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/reference/gridCvDeep.html b/reference/gridCvDeep.html index 1a9e0f1..9f1d064 100644 --- a/reference/gridCvDeep.html +++ b/reference/gridCvDeep.html @@ -1,9 +1,9 @@ -gridCvDeep — gridCvDeep • DeepPatientLevelPredictiongridCvDeep — gridCvDeep • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - +
    @@ -89,23 +92,25 @@

    gridCvDeep

    Arguments

    -
    mappedData
    + + +
    mappedData

    Mapped data with covariates

    -
    labels
    +
    labels

    Dataframe with the outcomes

    -
    modelSettings
    +
    modelSettings

    Settings of the model

    -
    modelLocation
    +
    modelLocation

    Where to save the model

    -
    analysisPath
    +
    analysisPath

    Path of the analysis

    @@ -122,15 +127,15 @@

    Arguments

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/reference/index.html b/reference/index.html index ec0b462..1f4ab2b 100644 --- a/reference/index.html +++ b/reference/index.html @@ -1,9 +1,9 @@ -Function reference • DeepPatientLevelPredictionPackage index • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - +
    @@ -82,7 +85,7 @@

    All functions

    -

    DeepPatientLevelPrediction

    +

    DeepPatientLevelPrediction-package DeepPatientLevelPrediction

    DeepPatientLevelPrediction

    @@ -129,6 +132,10 @@

    All functions setEstimator()

    setEstimator

    + +

    setFinetuner()

    + +

    setFinetuner

    setMultiLayerPerceptron()

    @@ -141,6 +148,18 @@

    All functions setTransformer()

    create settings for training a non-temporal transformer

    + +

    snakeCaseToCamelCase()

    + +

    Convert a camel case string to snake case

    + +

    snakeCaseToCamelCaseNames()

    + +

    Convert the names of an object from snake case to camel case

    + +

    torch

    + +

    Pytorch module

    trainingCache

    @@ -158,15 +177,15 @@

    All functions
    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/reference/predictDeepEstimator.html b/reference/predictDeepEstimator.html index 9893530..13d7a35 100644 --- a/reference/predictDeepEstimator.html +++ b/reference/predictDeepEstimator.html @@ -1,9 +1,9 @@ -predictDeepEstimator — predictDeepEstimator • DeepPatientLevelPredictionpredictDeepEstimator — predictDeepEstimator • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - +
    @@ -89,15 +92,17 @@

    predictDeepEstimator

    Arguments

    -
    plpModel
    + + +
    plpModel

    the plpModel

    -
    data
    +
    data

    plp data object or a torch dataset

    -
    cohort
    +
    cohort

    data.frame with the rowIds of the people

    @@ -114,15 +119,15 @@

    Arguments

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/reference/setDefaultResNet.html b/reference/setDefaultResNet.html index 291ab30..7713ff7 100644 --- a/reference/setDefaultResNet.html +++ b/reference/setDefaultResNet.html @@ -1,9 +1,9 @@ -setDefaultResNet — setDefaultResNet • DeepPatientLevelPredictionsetDefaultResNet — setDefaultResNet • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - +
    @@ -92,8 +95,10 @@

    setDefaultResNet

    Arguments

    -
    estimatorSettings
    -

    created with ```setEstimator```

    + + +
    estimatorSettings
    +

created with `setEstimator`

    @@ -114,15 +119,15 @@

    Details

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/reference/setDefaultTransformer.html b/reference/setDefaultTransformer.html index 0bd3658..5cba796 100644 --- a/reference/setDefaultTransformer.html +++ b/reference/setDefaultTransformer.html @@ -1,9 +1,9 @@ -Create default settings for a non-temporal transformer — setDefaultTransformer • DeepPatientLevelPredictionCreate default settings for a non-temporal transformer — setDefaultTransformer • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - +
    @@ -92,7 +95,9 @@

    Create default settings for a non-temporal transformer

    Arguments

    -
    estimatorSettings
    + + +
    estimatorSettings

    created with `setEstimator`

    @@ -114,15 +119,15 @@

    Details

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/reference/setEstimator.html b/reference/setEstimator.html index 09497c9..6526178 100644 --- a/reference/setEstimator.html +++ b/reference/setEstimator.html @@ -1,9 +1,9 @@ -setEstimator — setEstimator • DeepPatientLevelPredictionsetEstimator — setEstimator • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - +
    @@ -96,52 +99,54 @@

    setEstimator

    criterion = torch$nn$BCEWithLogitsLoss, earlyStopping = list(useEarlyStopping = TRUE, params = list(patience = 4)), metric = "auc", + accumulationSteps = NULL, seed = NULL )

    Arguments

    -
    learningRate
    + + +
    learningRate

    what learning rate to use

    -
    weightDecay
    +
    weightDecay

    what weight_decay to use

    -
    batchSize
    +
    batchSize

    batchSize to use

    -
    epochs
    +
    epochs

    how many epochs to train for

    -
    device
    +
    device

    what device to train on, can be a string or a function to -that evaluates -to the device during runtime

    +that evaluates to the device during runtime

    -
    optimizer
    +
    optimizer

    which optimizer to use

    -
    scheduler
    +
    scheduler

    which learning rate scheduler to use

    -
    criterion
    +
    criterion

    loss function to use

    -
    earlyStopping
    +
    earlyStopping

    If earlyStopping should be used which stops the training of your metric is not improving

    -
    metric
    +
    metric

    either `auc` or `loss` or a custom metric to use. This is the metric used for scheduler and earlyStopping. Needs to be a list with function `fun`, mode either `min` or `max` and a @@ -150,7 +155,12 @@

    Arguments

    outputs a score.

    -
    seed
    +
    accumulationSteps
    +

    how many steps to accumulate gradients before +updating weights, can also be a function that is evaluated during runtime

    + + +
    seed

    seed to initialize weights of model with

    @@ -167,15 +177,15 @@

    Arguments

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/reference/setFinetuner.html b/reference/setFinetuner.html new file mode 100644 index 0000000..35692d0 --- /dev/null +++ b/reference/setFinetuner.html @@ -0,0 +1,129 @@ + +setFinetuner — setFinetuner • DeepPatientLevelPrediction + + +
    +
    + + + +
    +
    + + +
    +

    creates settings for using transfer learning to finetune a model

    +
    + +
    +
    setFinetuner(modelPath, estimatorSettings = setEstimator())
    +
    + +
    +

    Arguments

    + + +
    modelPath
    +

    path to existing plpModel directory

    + + +
    estimatorSettings
    +

    settings created with `setEstimator`

    + +
    + +
    + +
    + + +
    + +
    +

    Site built with pkgdown 2.1.0.

    +
    + +
    + + + + + + + + diff --git a/reference/setMultiLayerPerceptron.html b/reference/setMultiLayerPerceptron.html index cbc527c..7b6a5c7 100644 --- a/reference/setMultiLayerPerceptron.html +++ b/reference/setMultiLayerPerceptron.html @@ -1,9 +1,9 @@ -setMultiLayerPerceptron — setMultiLayerPerceptron • DeepPatientLevelPredictionsetMultiLayerPerceptron — setMultiLayerPerceptron • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - +
    @@ -99,38 +102,40 @@

    setMultiLayerPerceptron

    Arguments

    -
    numLayers
    + + +
    numLayers

    Number of layers in network, default: 1:8

    -
    sizeHidden
    +
    sizeHidden

    Amount of neurons in each default layer, default: 2^(6:9) (64 to 512)

    -
    dropout
    +
    dropout

    How much dropout to apply after first linear, default: seq(0, 0.3, 0.05)

    -
    sizeEmbedding
    +
    sizeEmbedding

    Size of embedding default: 2^(6:9) (64 to 512)

    -
    estimatorSettings
    +
    estimatorSettings

    settings of Estimator created with `setEstimator`

    -
    hyperParamSearch
    +
    hyperParamSearch

    Which kind of hyperparameter search to use random sampling or exhaustive grid search. default: 'random'

    -
    randomSample
    +
    randomSample

    How many random samples from hyperparameter space to use

    -
    randomSampleSeed
    +
    randomSampleSeed

    Random seed to sample hyperparameter combinations

    @@ -151,15 +156,15 @@

    Details

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/reference/setResNet.html b/reference/setResNet.html index cfd0ee6..7c1c326 100644 --- a/reference/setResNet.html +++ b/reference/setResNet.html @@ -1,9 +1,9 @@ -setResNet — setResNet • DeepPatientLevelPredictionsetResNet — setResNet • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - +
    @@ -101,50 +104,52 @@

    setResNet

    Arguments

    -
    numLayers
    + + +
    numLayers

    Number of layers in network, default: 1:16

    -
    sizeHidden
    +
    sizeHidden

    Amount of neurons in each default layer, default: 2^(6:10) (64 to 1024)

    -
    hiddenFactor
    +
    hiddenFactor

    How much to grow the amount of neurons in each ResLayer, default: 1:4

    -
    residualDropout
    +
    residualDropout

    How much dropout to apply after last linear layer in ResLayer, default: seq(0, 0.3, 0.05)

    -
    hiddenDropout
    +
    hiddenDropout

    How much dropout to apply after first linear layer in ResLayer, default: seq(0, 0.3, 0.05)

    -
    sizeEmbedding
    +
    sizeEmbedding

    Size of embedding layer, default: 2^(6:9) '(64 to 512)

    -
    estimatorSettings
    -

    created with ```setEstimator```

    +
    estimatorSettings
    +

created with `setEstimator`

    -
    hyperParamSearch
    +
    hyperParamSearch

    Which kind of hyperparameter search to use random sampling or exhaustive grid search. default: 'random'

    -
    randomSample
    +
    randomSample

    How many random samples from hyperparameter space to use

    -
    randomSampleSeed
    +
    randomSampleSeed

    Random seed to sample hyperparameter combinations

    @@ -165,15 +170,15 @@

    Details

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/reference/setTransformer.html b/reference/setTransformer.html index ff0982e..4474afe 100644 --- a/reference/setTransformer.html +++ b/reference/setTransformer.html @@ -1,9 +1,9 @@ -create settings for training a non-temporal transformer — setTransformer • DeepPatientLevelPredictioncreate settings for training a non-temporal transformer — setTransformer • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - +
    @@ -86,13 +89,13 @@

    create settings for training a non-temporal transformer

    setTransformer(
       numBlocks = 3,
    -  dimToken = 96,
    +  dimToken = 192,
       dimOut = 1,
       numHeads = 8,
    -  attDropout = 0.25,
    -  ffnDropout = 0.25,
    +  attDropout = 0.2,
    +  ffnDropout = 0.1,
       resDropout = 0,
    -  dimHidden = 512,
    +  dimHidden = 256,
       dimHiddenRatio = NULL,
       estimatorSettings = setEstimator(weightDecay = 1e-06, batchSize = 1024, epochs = 10,
         seed = NULL),
    @@ -104,59 +107,61 @@ 

    create settings for training a non-temporal transformer

    Arguments

    -
    numBlocks
    + + +
    numBlocks

    number of transformer blocks

    -
    dimToken
    +
    dimToken

    dimension of each token (embedding size)

    -
    dimOut
    +
    dimOut

    dimension of output, usually 1 for binary problems

    -
    numHeads
    +
    numHeads

    number of attention heads

    -
    attDropout
    +
    attDropout

    dropout to use on attentions

    -
    ffnDropout
    +
    ffnDropout

    dropout to use in feedforward block

    -
    resDropout
    +
    resDropout

    dropout to use in residual connections

    -
    dimHidden
    +
    dimHidden

    dimension of the feedworward block

    -
    dimHiddenRatio
    +
    dimHiddenRatio

    dimension of the feedforward block as a ratio of dimToken (embedding size)

    -
    estimatorSettings
    +
    estimatorSettings

    created with `setEstimator`

    -
    hyperParamSearch
    +
    hyperParamSearch

    what kind of hyperparameter search to do, default 'random'

    -
    randomSample
    +
    randomSample

    How many samples to use in hyperparameter search if random

    -
    randomSampleSeed
    +
    randomSampleSeed

    Random seed to sample hyperparameter combinations

    @@ -178,15 +183,15 @@

    Details

    -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/reference/snakeCaseToCamelCase.html b/reference/snakeCaseToCamelCase.html new file mode 100644 index 0000000..a2b68cb --- /dev/null +++ b/reference/snakeCaseToCamelCase.html @@ -0,0 +1,129 @@ + +Convert a camel case string to snake case — snakeCaseToCamelCase • DeepPatientLevelPrediction + + +
    +
    + + + +
    +
    + + +
    +

    Convert a camel case string to snake case

    +
    + +
    +
    snakeCaseToCamelCase(string)
    +
    + +
    +

    Arguments

    + + +
    string
    +

    The string to be converted

    + +
    +
    +

    Value

    +

    A string

    +
    + +
    + +
    + + +
    + +
    +

    Site built with pkgdown 2.1.0.

    +
    + +
    + + + + + + + + diff --git a/reference/snakeCaseToCamelCaseNames.html b/reference/snakeCaseToCamelCaseNames.html new file mode 100644 index 0000000..c564acf --- /dev/null +++ b/reference/snakeCaseToCamelCaseNames.html @@ -0,0 +1,129 @@ + +Convert the names of an object from snake case to camel case — snakeCaseToCamelCaseNames • DeepPatientLevelPrediction + + +
    +
    + + + +
    +
    + + +
    +

    Convert the names of an object from snake case to camel case

    +
    + +
    +
    snakeCaseToCamelCaseNames(object)
    +
    + +
    +

    Arguments

    + + +
    object
    +

    The object of which the names should be converted

    + +
    +
    +

    Value

    +

    The same object, but with converted names.

    +
    + +
    + +
    + + +
    + +
    +

    Site built with pkgdown 2.1.0.

    +
    + +
    + + + + + + + + diff --git a/reference/torch.html b/reference/torch.html new file mode 100644 index 0000000..2e9ac34 --- /dev/null +++ b/reference/torch.html @@ -0,0 +1,124 @@ + +Pytorch module — torch • DeepPatientLevelPrediction + + +
    +
    + + + +
    +
    + + +
    +

The `torch` module object is the equivalent of
+`reticulate::import("torch")` and is provided mainly as a convenience.
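For example, the quick check from the installation vignette uses it
directly:

library(DeepPatientLevelPrediction)
torch$randn(10L)  # should print a tensor with ten random values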

    +
    + + +
    +

    Format

    +

    An object of class `python.builtin.module`

    +
    +
    +

    Value

    +

    the torch Python module

    +
    + +
    + +
    + + +
    + +
    +

    Site built with pkgdown 2.1.0.

    +
    + +
    + + + + + + + + diff --git a/reference/trainingCache.html b/reference/trainingCache.html index 6e80a67..367f2b1 100644 --- a/reference/trainingCache.html +++ b/reference/trainingCache.html @@ -1,9 +1,9 @@ -TrainingCache — trainingCache • DeepPatientLevelPredictionTrainingCache — trainingCache • DeepPatientLevelPrediction - +
    @@ -25,7 +25,7 @@
    - +
    @@ -86,22 +89,14 @@

    TrainingCache

    Value

    - - -

    Whether the provided and cached parameter grid is identical

    - - +

    Whether the provided and cached parameter grid is identical

    Grid search results from the training cache

    - -

    Boolen

    - -

    Last grid search index

    Methods

    - +


    Method new()

    @@ -211,15 +207,32 @@

    Usage

    +


    +

    Method trimPerformance()

    +

    Trims the performance of the hyperparameter results by removing +the predictions from all but the best performing hyperparameter

    +

    Usage

    +

    trainingCache$trimPerformance(hyperparameterResults)

    +
    + +
    +

    Arguments

    +

    hyperparameterResults
    +

    List of hyperparameter results

    + + +

    +
    +


    Method clone()

    The objects of this class are cloneable with this method.

    -

    Usage

    +

    Usage

    trainingCache$clone(deep = FALSE)

    -

    Arguments

    +

    Arguments

    deep

    Whether to make a deep clone.

    @@ -243,15 +256,15 @@

    Arguments -

    Site built with pkgdown 2.0.7.

    +

    Site built with pkgdown 2.1.0.

    - - + + diff --git a/sitemap.xml b/sitemap.xml index 6ab78c0..dc43dd6 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -1,78 +1,33 @@ - - - - /404.html - - - /articles/BuildingDeepModels.html - - - /articles/FirstModel.html - - - /articles/Installing.html - - - /articles/index.html - - - /authors.html - - - /index.html - - - /news/index.html - - - /reference/DeepPatientLevelPrediction.html - - - /reference/camelCaseToSnakeCase.html - - - /reference/camelCaseToSnakeCaseNames.html - - - /reference/checkHigher.html - - - /reference/checkHigherEqual.html - - - /reference/checkIsClass.html - - - /reference/fitEstimator.html - - - /reference/gridCvDeep.html - - - /reference/index.html - - - /reference/predictDeepEstimator.html - - - /reference/setDefaultResNet.html - - - /reference/setDefaultTransformer.html - - - /reference/setEstimator.html - - - /reference/setMultiLayerPerceptron.html - - - /reference/setResNet.html - - - /reference/setTransformer.html - - - /reference/trainingCache.html - + +/404.html +/articles/BuildingDeepModels.html +/articles/FirstModel.html +/articles/Installing.html +/articles/TransferLearning.html +/articles/index.html +/authors.html +/index.html +/news/index.html +/reference/DeepPatientLevelPrediction.html +/reference/camelCaseToSnakeCase.html +/reference/camelCaseToSnakeCaseNames.html +/reference/checkHigher.html +/reference/checkHigherEqual.html +/reference/checkIsClass.html +/reference/fitEstimator.html +/reference/gridCvDeep.html +/reference/index.html +/reference/predictDeepEstimator.html +/reference/setDefaultResNet.html +/reference/setDefaultTransformer.html +/reference/setEstimator.html +/reference/setFinetuner.html +/reference/setMultiLayerPerceptron.html +/reference/setResNet.html +/reference/setTransformer.html +/reference/snakeCaseToCamelCase.html +/reference/snakeCaseToCamelCaseNames.html +/reference/torch.html +/reference/trainingCache.html +