Code for my Bachelor's project (TB): Utilizing the neural network architecture of a fly brain for image processing
Author: Michael Strefeler
Just before the media focused on the awarding of the Physics and Chemistry Nobel Prizes to Machine Learning researchers, another great achievement was making the headlines. After more than 10 years of research, the "connectome" of the fly brain was completed and published (https://flywire.ai/). A connectome is a comprehensive map of the neural connections in a brain, and may be thought of as its "wiring diagram". An organism's nervous system is made up of neurons that communicate through synapses. A connectome is thus constructed by tracing the neurons in a nervous system and mapping where they are connected through synapses. The fly brain connectome consists of around 140'000 neurons and more than 50 million synapses. Researchers have published not only the wiring, but also the type of each neuron.
In Deep Learning, defining an adequate neural network architecture for a particular task is a major issue. In this project, we aim to evaluate the usefulness of leveraging the wiring of the fly brain (*Drosophila melanogaster*) to process images. We will use this architecture to process images and perform both fine-tuning and transfer learning, as we normally do with neural network architectures designed by engineers. Besides evaluating the performance of the resulting system, we will take advantage of this project to learn about the way the fly brain is wired, in particular the brain structures that process visual stimuli.
- Ubuntu or Debian; I have only tested the code on these operating systems. It might also work on Windows if CUDA is installed.
- CUDA, needed for GPU acceleration.
- uv, the extremely fast Python package and project manager.
- FFmpeg, a CLI tool used to generate animation videos in this project.
- Optional: git, to clone the repository if needed.
- Optional: tmux, a terminal multiplexer.
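As a quick sanity check before installing, you can verify that the main prerequisites are available. These are the tools' standard version commands; adjust if your setup differs:

```bash
# Check that the GPU and CUDA driver are visible
nvidia-smi

# Check that uv and FFmpeg are installed
uv --version
ffmpeg -version
```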
- Download the files or clone them from GitHub
- Open a terminal in the directory
- Create a virtual env with uv using `uv venv --python=3.11 .venv` and activate it using `source .venv/bin/activate`
- Install dependencies with `uv pip install -r requirements.txt`
- Create a file named `.env` in the directory and set the `FLYVIS_ROOT_DIR` environment variable to a location on your device that you want the data to be in. Here's an example of what to put in that file: `FLYVIS_ROOT_DIR="/home/username_here/flyvis-data"`. Run `export FLYVIS_ROOT_DIR="/home/username_here/flyvis-data"` to make sure that the environment variable is saved.
- Now you can open the directory in VS Code and run the notebooks and scripts.
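Putting the steps together, a typical installation session might look like this; the repository URL and the data path are placeholders to adapt to your setup:

```bash
# Clone the repository and enter it (URL is a placeholder)
git clone https://github.com/username_here/repo_name_here.git
cd repo_name_here

# Create a Python 3.11 virtual environment and activate it
uv venv --python=3.11 .venv
source .venv/bin/activate

# Install the project's dependencies
uv pip install -r requirements.txt

# Tell the project where to store its data
echo 'FLYVIS_ROOT_DIR="/home/username_here/flyvis-data"' > .env
export FLYVIS_ROOT_DIR="/home/username_here/flyvis-data"
```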
`optical_flow_training.py` is a Python script that trains the connectome on a given stimulus and generates animations of the input, the ground truth, and the model's prediction for one input.
The dataset must already exist before you can run the script! Create it by running the Jupyter notebook with the same name. This script works with the following datasets: `big_shape`, `bouncy_squares` and `moving_MNIST`.
Make sure you've activated the environment, then run the following command: `uv run --active scripts/optical_flow_training.py`
```
Usage: optical_flow_training.py [OPTIONS]

  This script trains the model on a given stimulus (that must exist already)
  and saves animations of the input, ground truth, and prediction.

Options:
  -s, --stimulus TEXT       Name of the dataset/stimulus to use  [required]
  -b, --batch-size INTEGER  Batch size
  -e, --epochs INTEGER      Number of epochs
  -d, --dont-save-model     Don't save the trained model and decoder
  --help                    Show this message and exit.
```
Example: `optical_flow_training.py -s bouncy_squares -b 50 -e 250`
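Combining the runner command with the example options above, a complete invocation looks like this:

```bash
# Train on bouncy_squares with batch size 50 for 250 epochs
uv run --active scripts/optical_flow_training.py -s bouncy_squares -b 50 -e 250
```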
`optical_flow_load_model.py` is a Python script that loads a model (network and decoder) and generates `.mp4` files containing the model's prediction.
The dataset must already exist before you can run the script! Create it by running the Jupyter notebook with the same name. This script works with the following datasets: `big_shape`, `bouncy_squares` and `moving_MNIST`.
Make sure you've activated the environment, then run: `uv run --active scripts/optical_flow_load_model.py`
```
Usage: optical_flow_load_model.py [OPTIONS]

  This script loads the model pretrained on a given stimulus (that must
  exist already) and saves animations. The name of the model is
  batch_X_Y_epochs.

Options:
  -n, --name TEXT      Name of the pretrained model to load  [required]
  -s, --stimulus TEXT  Name of the dataset/stimulus to use  [required]
  -a, --all            Create animations for input, ground truth, and
                       prediction
  --help               Show this message and exit.
```
Example: `optical_flow_load_model.py -n batch_50_250_epochs -s bouncy_squares`
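A complete invocation through uv, here also passing `-a` so that input and ground-truth animations are saved alongside the prediction:

```bash
# Load the model trained with batch size 50 for 250 epochs on bouncy_squares
# and generate animations for input, ground truth, and prediction
uv run --active scripts/optical_flow_load_model.py -n batch_50_250_epochs -s bouncy_squares -a
```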
`depth.py` is a Python script that trains the model on the depth estimation task using the NYU Depth V2 dataset. It saves images of the input, ground truth, and prediction in the `images/NYU_Depth` directory.
You must download the dataset file by running `wget -P $FLYVIS_ROOT_DIR/NYU_Depth_V2 http://horatio.cs.nyu.edu/mit/silberman/nyu_depth_v2/nyu_depth_v2_labeled.mat` before running the script.
After activating the environment, run: `uv run --active scripts/depth.py`
```
Usage: depth.py [OPTIONS]

Options:
  -b, --batch-size INTEGER  Batch size
  -e, --epochs INTEGER      Number of epochs
  --help                    Show this message and exit.
```
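The help output gives no usage example, so here is a hypothetical end-to-end run; the batch size and epoch count are illustrative values, not recommendations:

```bash
# Download the labeled NYU Depth V2 dataset into the data directory (only needed once)
wget -P $FLYVIS_ROOT_DIR/NYU_Depth_V2 http://horatio.cs.nyu.edu/mit/silberman/nyu_depth_v2/nyu_depth_v2_labeled.mat

# Train the depth estimation model (batch size and epochs are illustrative)
uv run --active scripts/depth.py -b 50 -e 250
```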