- Add an option to use torch.compile
- More efficient conversions from polars to torch in dataset processing (see the sketch after this list)
- Automatically detect broken links in docs using GitHub Actions
- Model initialization made more flexible with classes
- Added basic transfer learning functionality. See vignette("TransferLearning")
- Add a GPU memory cleaner to release cached memory after an out-of-memory error (see the sketch after this list)
- The Python module torch is now accessed through an exported function instead of being loaded when the package is loaded
- Added gradient accumulation. Studies running at different sites on different hardware can now use the same effective batch size by accumulating gradients (see the sketch after this list).
- Refactored cross-validation out of the hyperparameter tuning
- Remove predictions from non-optimal hyperparameter combinations to save space
- Only use HTML vignettes
- Rename MLP to MultiLayerPerceptron
- Hotfix: Fix count for polars v0.20.x
- Ensure output from predict_proba is numeric instead of a 1d array
- Refactoring: Move cross-validation to a separate function
- Refactoring: Move paramsToTune to a separate function
- Linting: enforce HADES style
- Calculate AUC ourselves with torch and drop the scikit-learn dependency (see the sketch after this list)
- Added Andromeda to dev dependencies
- Fixed the connection parameter to be in line with the newest polars
- Fixed a bug where LRFinder used a hardcoded batch size
- Seed is now used in LRFinder so it's reproducible
- Fixed a bug in NumericalEmbedding
- Fixed a bug for Transformer and numerical features
- Fixed a bug when resuming from a full TrainingCache (thanks Zoey Jiang and Linying Zhang)
- Updated installation documentation after feedback from HADES hackathon
- Fixed a bug where the order of numeric features wasn't preserved between training and test set
- TrainingCache now only saves prediction dataframe for the best performing model
- New backend that uses PyTorch through reticulate instead of torch in R
- All models ported over to Python
- Dataset class now in Python
- Estimator class in Python
- Learning rate finder in Python
- Added input checks and tests for wrong inputs
- Training-cache for single hyperparameter combination added
- Fixed empty test for training-cache
- Caching and resuming of hyperparameter iterations
- Fix bug where device function was not working for LRFinder
- Remove torchopt dependency since AdamW is now in torch
- Update torch dependency to >=0.10.0
- Allow device to be a function that resolves during Estimator initialization (see the sketch after this list)
- Fix actions after torch updated to v0.10 (#65)
- Fix bug introduced by removing modelType from attributes (#59)
- Fixed the check for whether the number of heads is compatible with the embedding dimension (#55)
- Transformer width can now be specified as a ratio of the embedding dimension (dimToken) (#53)
- A custom metric can now be defined for earlyStopping and the learning rate schedule (#51)
- Added a setEstimator function to configure the estimator (#51)
- Seed added for model weight initialization to improve reproducibility (#51)
- Added a learning rate finder for automatic calculation of the learning rate (#51)
- Add seed for sampling hyperparameters (#50)
- Used vectorised torch operations to speed up data conversion in the torch dataset
- Fix torch binaries issue when running tests from other GitHub Actions
- Fix link on website
- Fix tidyselect to silence warnings
- Added changelog to website
- Added a first model tutorial
- Fixed small bug in default ResNet and Transformer
- Created an Estimator R6 class to handle model fitting
- Added three non-temporal models: an MLP, a ResNet and a Transformer
- ResNet and Transformer have default versions of hyperparameters
- Created tests and documentation for the package
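
For the entry on more efficient polars-to-torch conversion, a minimal sketch of the idea, converting whole columns at once instead of looping over rows in Python; the column names here are illustrative, not the package's actual schema:

```python
import polars as pl
import torch

df = pl.DataFrame({
    "rowId": [1, 1, 2, 3],
    "covariateId": [10, 12, 10, 11],
    "covariateValue": [1.0, 0.5, 1.0, 2.0],
})

# whole columns go to NumPy in one call; torch.as_tensor avoids an extra copy where it can
values = torch.as_tensor(df["covariateValue"].to_numpy(), dtype=torch.float32)
indices = torch.as_tensor(df.select(["rowId", "covariateId"]).to_numpy(), dtype=torch.long)
```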
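For the GPU memory cleaner entry, a hypothetical sketch of the pattern; the wrapper function and its name are illustrative, not the package's API:

```python
import gc
import torch

def run_with_oom_cleanup(fit_fn, *args, **kwargs):
    try:
        return fit_fn(*args, **kwargs)
    except torch.cuda.OutOfMemoryError:
        gc.collect()              # drop Python-side references to tensors
        torch.cuda.empty_cache()  # return cached CUDA blocks to the driver
        raise                     # caller can retry, e.g. with a smaller batch size
```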
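For the gradient accumulation entry, a minimal sketch of the technique: several small batches contribute to one optimizer step, so sites with less GPU memory can match a larger effective batch size. `model`, `optimizer`, `loader` and `loss_fn` are placeholders, not the package's actual objects:

```python
accumulation_steps = 4  # effective batch size = loader batch size * 4

def train_one_epoch(model, optimizer, loader, loss_fn):
    model.train()
    optimizer.zero_grad()
    for step, (x, y) in enumerate(loader):
        loss = loss_fn(model(x), y) / accumulation_steps  # scale so the accumulated gradient averages over the small batches
        loss.backward()                                   # gradients add up in .grad across iterations
        if (step + 1) % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```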
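For the AUC entry, a sketch of how AUC can be computed with plain torch operations via the rank-based Mann-Whitney U statistic; ties are ignored for brevity, and this is not necessarily the package's exact implementation:

```python
import torch

def auc(predictions: torch.Tensor, labels: torch.Tensor) -> float:
    order = predictions.argsort()
    ranks = torch.empty_like(order, dtype=torch.float)
    ranks[order] = torch.arange(1, len(predictions) + 1, dtype=torch.float)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return (u / (n_pos * n_neg)).item()

# auc(torch.tensor([0.1, 0.4, 0.35, 0.8]), torch.tensor([0, 0, 1, 1]))  # 0.75
```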
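For the entry on allowing device to be a function, a hypothetical sketch of the pattern, where a callable is only resolved when the estimator is created; the class and argument names are illustrative, not the package's actual API:

```python
import torch

def default_device():
    return "cuda" if torch.cuda.is_available() else "cpu"

class Estimator:
    def __init__(self, device="cpu"):
        # a callable device is resolved here, at initialization time
        self.device = torch.device(device() if callable(device) else device)

estimator = Estimator(device=default_device)
```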