
Commit 1870841

add pkgdown build to github actions
1 parent f239842 commit 1870841

7 files changed: +567 -10 lines

.Rbuildignore

+1
@@ -1,2 +1,3 @@
 ^.*\.Rproj$
 ^\.Rproj\.user$
+^\.github$

.github/.gitignore

+1
@@ -0,0 +1 @@
+*.html

.github/workflows/pkgdown.yaml

+35
@@ -0,0 +1,35 @@
+# Workflow derived from https://github.com/r-lib/actions/tree/master/examples
+# Need help debugging build failures? Start at https://github.com/r-lib/actions#where-to-find-help
+on:
+  push:
+    branches: [main, develop]
+  release:
+    types: [published]
+  workflow_dispatch:
+
+name: pkgdown
+
+jobs:
+  pkgdown:
+    runs-on: ubuntu-latest
+    env:
+      GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
+    steps:
+      - uses: actions/checkout@v2
+
+      - uses: r-lib/actions/setup-pandoc@v1
+
+      - uses: r-lib/actions/setup-r@v1
+        with:
+          use-public-rspm: true
+
+      - uses: r-lib/actions/setup-r-dependencies@v1
+        with:
+          extra-packages: pkgdown
+          needs: website
+
+      - name: Deploy package
+        run: |
+          git config --local user.name "$GITHUB_ACTOR"
+          git config --local user.email "[email protected]"
+          Rscript -e 'pkgdown::deploy_to_branch(new_process = FALSE)'
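For reference, the deploy step hands the actual site build to pkgdown inside R. A minimal sketch of doing the same thing locally before the workflow runs; this is an illustrative aside rather than part of the commit, and it assumes pkgdown is installed and the package root is the working directory:

```r
# A preview of the site can be built locally; by default pkgdown renders
# static HTML into the docs/ directory of the package.
pkgdown::build_site()

# The workflow's deploy step runs the line below instead: it builds the site
# and commits the rendered files to the gh-pages branch of the repository.
# new_process = FALSE keeps the build inside the current R session.
# pkgdown::deploy_to_branch(new_process = FALSE)
```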

DeepPatientLevelPrediction.Rproj

+1
@@ -14,6 +14,7 @@ LaTeX: pdfLaTeX
 
 BuildType: Package
 PackageUseDevtools: Yes
+PackageCleanBeforeInstall: Yes
 PackageInstallArgs: --no-multiarch --with-keep.source
 PackageBuildArgs: --compact-vignettes=both
 PackageCheckArgs: --as-cran

vignettes/BuildingDeepModels.Rmd

+12 -10
@@ -23,11 +23,13 @@ output:
 number_sections: yes
 toc: yes
 ---
+
+```{=html}
 <!--
 %\VignetteEngine{knitr}
 %\VignetteIndexEntry{Building Deep Learning Models}
 -->
-
+```
 ```{r setup, include=FALSE}
 knitr::opts_chunk$set(echo = TRUE)
 ```
@@ -36,30 +38,30 @@ knitr::opts_chunk$set(echo = TRUE)
 
 Patient level prediction aims to use historic data to learn a function between an input (a patient's features such as age/gender/comorbidities at index) and an output (whether the patient experienced an outcome during some time-at-risk). Deep learning is an example of the current state-of-the-art classifiers that can be implemented to learn the function between inputs and outputs.
 
-Deep Learning models are widely used to automatically learn high-level feature representations from the data, and have achieved remarkable results in image processing, speech recognition and computational biology. Recently, interesting results have been shown using large observational healthcare data (e.g., electronic healthcare data or claims data), but more extensive research is needed to assess the power of Deep Learning in this domain.
+Deep Learning models are widely used to automatically learn high-level feature representations from the data, and have achieved remarkable results in image processing, speech recognition and computational biology. Recently, interesting results have been shown using large observational healthcare data (e.g., electronic healthcare data or claims data), but more extensive research is needed to assess the power of Deep Learning in this domain.
 
-This vignette describes how you can use the Observational Health Data Sciences and Informatics (OHDSI) [`PatientLevelPrediction`](http://github.com/OHDSI/PatientLevelPrediction) package and [`DeepPatientLevelPrediction`](http://github.com/OHDSI/DeepPatientLevelPrediction) package to build Deep Learning models. This vignette assumes you have read and are comfortable with building patient level prediction models as described in the [`BuildingPredictiveModels` vignette](https://github.com/OHDSI/PatientLevelPrediction/blob/main/inst/doc/BuildingPredictiveModels.pdf). Furthermore, this vignette assumes you are familiar with Deep Learning methods.
+This vignette describes how you can use the Observational Health Data Sciences and Informatics (OHDSI) [`PatientLevelPrediction`](http://github.com/OHDSI/PatientLevelPrediction) package and [`DeepPatientLevelPrediction`](http://github.com/OHDSI/DeepPatientLevelPrediction) package to build Deep Learning models. This vignette assumes you have read and are comfortable with building patient level prediction models as described in the [`BuildingPredictiveModels` vignette](https://github.com/OHDSI/PatientLevelPrediction/blob/main/inst/doc/BuildingPredictiveModels.pdf). Furthermore, this vignette assumes you are familiar with Deep Learning methods.
 
 # Background
 
-Deep Learning models are built by stacking an often large number of neural network layers that perform feature engineering steps, e.g. embedding, and are collapsed in a final softmax layer (basically a logistic regression layer). These algorithms need a lot of data to converge to a good representation, but the sizes of the large observational healthcare databases are currently growing fast, which makes Deep Learning an interesting approach to test within OHDSI's [Patient-Level Prediction Framework](https://academic.oup.com/jamia/article/25/8/969/4989437). The current implementation allows us to perform research at scale on the value and limitations of Deep Learning using observational healthcare data.
+Deep Learning models are built by stacking an often large number of neural network layers that perform feature engineering steps, e.g. embedding, and are collapsed in a final softmax layer (basically a logistic regression layer). These algorithms need a lot of data to converge to a good representation, but the sizes of the large observational healthcare databases are currently growing fast, which makes Deep Learning an interesting approach to test within OHDSI's [Patient-Level Prediction Framework](https://academic.oup.com/jamia/article/25/8/969/4989437). The current implementation allows us to perform research at scale on the value and limitations of Deep Learning using observational healthcare data.
 
-In the package we have used [torch](https://cran.r-project.org/web/packages/torch/index.html) and [tabnet](https://cran.r-project.org/web/packages/tabnet/index.html), but we invite the community to add other backends.
+In the package we have used [torch](https://cran.r-project.org/web/packages/torch/index.html) and [tabnet](https://cran.r-project.org/web/packages/tabnet/index.html), but we invite the community to add other backends.
 
-Many network architectures have recently been proposed and we have implemented a number of them; however, this list will grow in the near future. It is important to understand that some of these architectures require a 2D data matrix, i.e. |patient|x|feature|, and others use a 3D data matrix |patient|x|feature|x|time|. The [FeatureExtraction Package](www.github.com\ohdsi\FeatureExtraction) has been extended to enable the extraction of both data formats as will be described with examples below.
+Many network architectures have recently been proposed and we have implemented a number of them; however, this list will grow in the near future. It is important to understand that some of these architectures require a 2D data matrix, i.e. \|patient\|x\|feature\|, and others use a 3D data matrix \|patient\|x\|feature\|x\|time\|. The [FeatureExtraction Package](www.github.com\ohdsi\FeatureExtraction) has been extended to enable the extraction of both data formats as will be described with examples below.
 
 Note that training Deep Learning models is computationally intensive; our implementation therefore supports both GPU and CPU. It will automatically check whether there is a GPU in your computer. A GPU is highly recommended for Deep Learning!
 
 # Non-Temporal Architectures
+
 We implemented the following non-temporal (2D data matrix) architectures:
 
-1) ...
+1) ...
 
 For the above two methods, we implemented support for a stacked autoencoder and a variational autoencoder to reduce the feature dimension as a first step. These autoencoders learn efficient data encodings in an unsupervised manner by stacking multiple layers in a neural network. Compared to the standard implementations of LR and MLP, these implementations can use the GPU power to speed up the gradient descent approach in the back propagation to optimize the weights of the classifier.
 
 ## Example
 
-
 # Acknowledgments
 
 Considerable work has been dedicated to providing the `DeepPatientLevelPrediction` package.
@@ -69,5 +71,5 @@ citation("PatientLevelPrediction")
 ```
 
 **Please reference this paper if you use the PLP Package in your work:**
-
-[Reps JM, Schuemie MJ, Suchard MA, Ryan PB, Rijnbeek PR. Design and implementation of a standardized framework to generate and evaluate patient-level prediction models using observational healthcare data. J Am Med Inform Assoc. 2018;25(8):969-975.](http://dx.doi.org/10.1093/jamia/ocy032)
+
+[Reps JM, Schuemie MJ, Suchard MA, Ryan PB, Rijnbeek PR. Design and implementation of a standardized framework to generate and evaluate patient-level prediction models using observational healthcare data. J Am Med Inform Assoc. 2018;25(8):969-975.](http://dx.doi.org/10.1093/jamia/ocy032)
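As an aside on the 2D vs. 3D data matrices discussed in the vignette's Background section above: these correspond to FeatureExtraction's non-temporal and temporal covariate settings. A rough sketch follows; the object names and argument choices are illustrative and not taken from this commit:

```r
library(FeatureExtraction)

# 2D input (|patient| x |feature|): standard, non-temporal covariates.
covSettings2D <- createCovariateSettings(
  useDemographicsGender = TRUE,
  useDemographicsAgeGroup = TRUE,
  useConditionOccurrenceLongTerm = TRUE
)

# 3D input (|patient| x |feature| x |time|): the same kind of concepts, but
# extracted per time window; here, 30-day windows over the year before index.
covSettings3D <- createTemporalCovariateSettings(
  useConditionOccurrence = TRUE,
  temporalStartDays = seq(-360, -30, by = 30),
  temporalEndDays = seq(-331, -1, by = 30)
)
```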
