All instructions are relative to the root directory of the repository.
- To install the needed packages, a `requirements.txt` file is provided; run `pip install -r requirements.txt`.
- Alternatively, on Linux you can use conda to manage the environment. Create it from the provided `vagus.yml` file by running `conda env create -f vagus.yml`. Make sure conda is installed on your computer; otherwise, see https://docs.anaconda.com/anaconda/install/index.html
- In `initialise_run()` in `src/config.py`, change the directory to the root directory of the repository.
- The data is owned by EPFL STI IBI-STI TNE and we are not allowed to share it; please ask them for access. Download the data from Google Drive, unzip the folder, and put it in `data/original_dataset`.
- Run `src/data_convert.py` to convert the dataset to a model-ready format (only run this once).
- Make sure the `cur_model_id` parameter in `src/config.py` is set to the model whose results you want to visualize. To use our best pre-trained model, set `cur_model_id = 'FL_and_BCE_Adam_default'`.
- To visualize the model predictions and the evaluation scores, run `src/visualisation.py`.
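The configuration step above touches `src/config.py`. A minimal sketch of what the relevant part of that file might look like is shown below; only `cur_model_id` and `initialise_run()` are named in this README, so the surrounding details (the path computation in particular) are illustrative assumptions rather than the actual implementation:

```python
import os

# src/config.py (sketch) -- only cur_model_id and initialise_run() are
# confirmed by the README; everything else here is an assumption.

# Identifier of the model whose checkpoints and results are loaded.
# 'FL_and_BCE_Adam_default' is the provided best pre-trained model.
cur_model_id = 'FL_and_BCE_Adam_default'

def initialise_run():
    """Change the working directory to the repository root so that
    relative paths such as data/original_dataset resolve correctly."""
    repo_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    os.chdir(repo_root)
```

With this in place, relative paths used elsewhere in the code resolve the same way regardless of where the scripts are launched from.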
Steps required only if you want to train the model again (training takes around 4 hours):
- Change the `cur_model_id` parameter in `src/config.py` to a new name in order to avoid overriding our pre-trained models.
- Run `src/run.py`.
- To investigate different loss functions, change `_loss` in `src/train.py`. Refer to the Keras documentation for appropriate loss functions. Examples are provided in `src/loss.py`. Beware that our predictions are integers and not one-hot labels.
- `data` - contains datasets
- `model_checkpoints` - contains pretrained models
- `model_losses` - contains metrics and losses of model training runs
- `report` - report figures and LaTeX
- `results` - more pictures of various results
- `src` - code folder
  - `augmentation.py` - augmentation utility functions
  - `config.py` - constant and configuration parameters
  - `data_convert.py` - converts the original dataset to a model-ready format
  - `data_utils.py` - various data utility functions
  - `eval.py` - evaluation utility functions
  - `fine_tune.py` - fine-tuning function (to be used for future extensions of this work)
  - `loss.py` - loss functions and metrics that are passed into the model during training
  - `model.py` - function to build the model
  - `post_processing.py` - utility functions for post-processing
  - `run.py` - main run script for model training
  - `stats.py` - utility functions for prediction statistics
  - `train.py` - main script for training
  - `visualisation.py` - main script for visualising model results
- Ivan Bioli
- Umer Hasan
- Cao Khanh Nguyen