The project is organized following the cookiecutter-data-science guidelines.
The code is organized into different projects, namely `beardetection`, `bearfacedetection`, `bearfacesegmentation`, `bearidentification` and `bearedge`.
All the data lives in the `data` folder and follows some data engineering conventions.
Each project should use these subfolders, keyed by its project name (e.g. `data/05_model_input/beardetection`).
The notebooks live in the `notebooks` folder and are also keyed by the project names.
The scripts live in the `scripts` folder and are keyed by the project names as well.
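Putting these conventions together, the repository layout looks roughly like the sketch below. Only `data/05_model_input` appears above; the other numbered data layer is an assumption based on common data engineering conventions:

```
.
├── data
│   ├── 01_raw              # assumed layer name
│   └── 05_model_input
│       ├── beardetection
│       └── bearfacedetection
├── notebooks
│   └── beardetection
├── scripts
│   └── beardetection
└── Makefile
```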
A `Makefile` makes it easy to prepare commands and execute them in a DAG fashion. If we need something more involved for running code, we will add it later.
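To illustrate the DAG style: make targets can declare other targets as prerequisites, so running one target first runs everything it depends on. This is a hypothetical sketch with made-up target and script names, not the project's actual `Makefile`:

```make
# Hypothetical sketch: target and script names below are illustrative.
data/05_model_input/beardetection:
	PRIVATE_KEY=$(PRIVATE_KEY) ./scripts/beardetection/download.sh

train: data/05_model_input/beardetection
	python scripts/beardetection/train.py
```

Running `make train` would then download the model input first if it is missing.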
Run the following command to start a Jupyter Lab environment and start editing/running your notebooks:

```sh
make dev_notebook
```
Install rclone and configure a remote for your Google Drive following this documentation. Then download the dataset:

```sh
make download_dataset
```
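Before running the target, you can check that the remote is configured correctly by listing its top-level directories. The remote name `gdrive` below is an assumption; use whatever name you chose during `rclone config`:

```sh
# Interactively create a Google Drive remote (pick any name, e.g. "gdrive")
rclone config

# Sanity check: list the top-level directories of the remote
rclone lsd gdrive:
```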
Find the private key on Roboflow: click on Export Dataset, select the YOLOv8 format and show the download code. The raw URL is displayed and the private key is located after the `key` parameter:

```
https://app.roboflow.com/ds/b8vuUrGhDn?key=***
```
One can use the following command:

```sh
PRIVATE_KEY=findmeonroboflow make download_roboflow_bearfacedetection
```
Or export the `PRIVATE_KEY` as follows before running the subsequent commands:

```sh
export PRIVATE_KEY=findmeonroboflow
```
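For reference, a Roboflow export like this is typically fetched by requesting the raw URL with the key appended and unpacking the resulting zip. This is a sketch of that pattern, reusing the dataset URL shown above; the destination folder is illustrative:

```sh
# Fetch the YOLOv8 export as a zip and unpack it (destination folder is illustrative)
curl -L "https://app.roboflow.com/ds/b8vuUrGhDn?key=${PRIVATE_KEY}" -o roboflow.zip
unzip roboflow.zip -d bearfacedetection/
```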
To download all the data, run the following commands:

```sh
export PRIVATE_KEY=findmeonroboflow
make data
```
Create a virtualenv using your tool of choice (e.g. conda, pyenv, regular python, ...) and activate it. For instance, with conda:

```sh
conda create -n ai4bears python=3.9
conda activate ai4bears
```
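With plain Python instead of conda, the equivalent would look something like this (the environment directory name is arbitrary):

```sh
# Create and activate a virtual environment with the standard library's venv module
python3.9 -m venv .venv
source .venv/bin/activate
```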
Then one can run the following command to install the python dependencies:

```sh
make dev_setup
```
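If you prefer to install the dependencies by hand, the target presumably wraps something along these lines; the requirements file name is an assumption, so check the `Makefile` for the actual recipe:

```sh
# Assumed equivalent of the make target; the file name is a guess
pip install -r requirements.txt
```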