What's with the old fish logo in CBAS v2?
The original CBAS logo featured fish, which confused some of our rodent-focused users! The name CBAS is pronounced like "sea bass," which was the source of the original pun.
> [!NOTE]
> CBAS v3 is the actively developed version and is recommended for new projects and large videos. It introduces a streamed/chunked encoder (preventing multi-GB RAM spikes), a more robust backend, and a simpler labeling/training workflow. Please report any issues you find!
> [!IMPORTANT]
> Need to reproduce a published result exactly? Use the v2-stable branch that matches the paper.
CBAS (Circadian Behavioral Analysis Suite) is a full-featured, open-source application for phenotyping complex animal behaviors. It automates behavior classification from video and provides a streamlined interface for labeling, training, visualization, and analysis.
Originally created by Logan Perry and now maintained by the Jones Lab at Texas A&M University.
- Standalone Desktop App: Robust, cross-platform (Windows/macOS/Linux) Electron app. No browser required.
- Real-time Video Acquisition: Record and process streams from any number of RTSP cameras.
- High-Performance AI: Supports state-of-the-art DINOv3 Vision Transformer backbones (gated via Hugging Face).
- Active Learning Workflow: Pre-label with an existing model, then rapidly “Review & Correct” using the interactive timeline.
- Confidence-Based Filtering: Jump directly to uncertain segments to spend time where it matters most (see the sketch below).
- Automated Model Training: Create custom classifiers with balanced/weighted options and detailed performance reports.
- Rich Visualization: Multi-plot actograms, side-by-side behavior comparisons, adjustable binning/light cycles, optional acrophase.
- Streamed Encoding (No OOM): Videos are processed in chunks rather than loaded entirely into RAM, which fixes v2’s large-video memory failures.
- Standalone Desktop Application: Cross-platform Electron app; runs fully offline once models are cached.
- Supercharged Labeling: Active-learning with confidence-guided review and fast boundary correction (keyboard-first workflow).
- Enhanced Visualization: Tiled, side-by-side actograms for direct behavior comparison.
- Modern, Stable Backend: Dedicated worker threads keep the UI responsive during long encodes/training.
- Self-Describing Models: Bundled metadata ensures trained heads reload with the correct dimensions (prevents “shape mismatch” errors).
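The confidence-based filtering mentioned above comes down to scanning per-frame class probabilities for spans where the model is unsure. The sketch below only illustrates that idea; the function name, threshold, and array shapes are placeholders, not CBAS's actual API.

```python
# Illustrative sketch of confidence-based filtering (not CBAS's actual API):
# given per-frame class probabilities, find segments whose top-class
# confidence falls below a threshold so a reviewer can jump straight to them.
import numpy as np

def low_confidence_segments(probs: np.ndarray, threshold: float = 0.6):
    """probs: (n_frames, n_classes) softmax outputs -> list of (start, end) frame spans."""
    uncertain = probs.max(axis=1) < threshold  # frames the model is unsure about
    segments, start = [], None
    for i, flag in enumerate(uncertain):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(uncertain)))
    return segments
```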
| Scenario | Recommended |
|---|---|
| New project, large videos (≥ 5–10 min) | v3 (beta) – streamed encoder prevents RAM exhaustion |
| Active labeling/training with confidence-guided review | v3 (beta) |
| Exact reproduction of published results | v2-stable |
| Very old machines with a known v2 workflow | v2-stable (use shorter/segmented clips) |
Seeing “Unable to allocate N GiB” in v2? Switch to v3: it streams frames and eliminates whole-video RAM spikes.
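Conceptually, the streamed encoder reads and embeds frames a fixed-size chunk at a time instead of decoding the whole video up front. The snippet below is a simplified sketch of that pattern, not CBAS's actual encoder; `embed_fn` and `chunk_size` are placeholders.

```python
# Simplified sketch of streamed (chunked) encoding -- not CBAS's actual code.
# Frames are read and embedded one chunk at a time, so peak RAM is bounded by
# the chunk size rather than the full video length.
import cv2
import numpy as np

def encode_video_streamed(path, embed_fn, chunk_size=256):
    cap = cv2.VideoCapture(path)
    features, chunk = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        chunk.append(frame)
        if len(chunk) == chunk_size:
            features.append(embed_fn(np.stack(chunk)))  # embed, then drop the raw frames
            chunk = []
    if chunk:
        features.append(embed_fn(np.stack(chunk)))
    cap.release()
    return np.concatenate(features)
```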
(Left) Real-time video recording of an individual mouse. (Right) Real-time actogram generation of nine distinct home cage behaviors.
The acquisition module is capable of batch processing streaming video data from any number of network-configured real-time streaming protocol (RTSP) IP cameras. This module's core functionality remains consistent with v2.
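For context, connecting to a single RTSP camera looks roughly like the snippet below (the URL is a placeholder); the acquisition module handles connecting to, recording, and segmenting many such streams for you.

```python
# Minimal illustration of opening one RTSP stream with OpenCV.
# The URL is a placeholder; CBAS manages many streams like this in parallel.
import cv2

cap = cv2.VideoCapture("rtsp://user:password@192.168.1.42:554/stream1")
if not cap.isOpened():
    raise RuntimeError("Could not connect to the RTSP stream")

ok, frame = cap.read()
if ok:
    print("Received a frame with shape:", frame.shape)
cap.release()
```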
This module uses a powerful machine learning model to automatically classify behaviors and provides tools to analyze the results.
- High-Performance ML Backend: CBAS supports DINOv3 vision transformers as feature backbones with a custom LSTM head for time-series classification (see the sketch after this list).
- Multi-Actogram Analysis: Tiled, side-by-side behavior plots with distinct colors for clear analysis.
- Interactive Plotting: Adjust bin size, start time, thresholds, light cycles; optionally plot acrophase.
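As a rough illustration of the backbone-plus-head pairing described above, the sketch below classifies precomputed per-frame features with a small LSTM head. The dimensions and layer sizes are illustrative, not the exact CBAS architecture.

```python
# Illustrative backbone + LSTM head (not the exact CBAS architecture).
# Per-frame DINOv3 features are assumed to be precomputed; the LSTM head
# classifies every frame using temporal context from the surrounding frames.
import torch
import torch.nn as nn

class BehaviorHead(nn.Module):
    def __init__(self, feature_dim=768, hidden_dim=256, n_behaviors=9):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, n_behaviors)

    def forward(self, features):           # features: (batch, time, feature_dim)
        out, _ = self.lstm(features)
        return self.classifier(out)        # logits: (batch, time, n_behaviors)

head = BehaviorHead()
logits = head(torch.randn(1, 64, 768))     # one clip of 64 encoded frames
```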
The training module in v3 introduces a modern, efficient workflow for creating high-quality, custom datasets and models.
- Active Learning Interface: Pre-label, then “Review & Correct” using confidence filters and an interactive timeline.
- Flexible Training Options: Balanced oversampling or weighted loss for rare behaviors (see the weighted-loss sketch after this list).
- Automated Performance Reports: F1/precision/recall plots and confusion matrices generated at the end of training.
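The weighted-loss option mentioned above follows a standard recipe: behaviors with fewer labeled frames receive proportionally larger loss weights. A minimal sketch with made-up label counts (not CBAS's exact implementation):

```python
# Sketch of class-weighted loss for rare behaviors (counts are made up).
# Rare classes get larger weights so they are not drowned out during training.
import torch
import torch.nn as nn

label_counts = torch.tensor([12000.0, 900.0, 350.0])  # e.g. frames of rest, groom, rear
weights = label_counts.sum() / (len(label_counts) * label_counts)
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 3)                 # (batch, n_behaviors)
targets = torch.randint(0, 3, (8,))
loss = criterion(logits, targets)
```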
We have written and tested the installation instructions to be as straightforward and user-friendly as possible, even for users with limited programming experience.
Click here for step-by-step instructions on how to install CBAS v3 from source.
As CBAS v3 is in active development, we recommend updating frequently to get the latest features and bug fixes. Because you installed CBAS from source using Git, updating is simple.
- Open a Command Prompt (Windows) or Terminal (macOS/Linux).

- Navigate to your CBAS directory. This is the folder where you originally ran the `git clone` command.

  ```bash
  # Example for Windows
  cd C:\Users\YourName\Documents\CBAS
  ```

- Pull the latest changes from the main branch on GitHub. This is the core update command:

  ```bash
  git pull origin main
  ```

- Update dependencies. Occasionally, we may add or update the required Python or Node.js packages. It's good practice to run these commands after updating:

  > [!NOTE]
  > Remember to activate your virtual environment before running the `pip` command.

  ```bash
  # Activate your virtual environment first
  # On Windows:
  .\venv\Scripts\activate
  # On macOS/Linux:
  source venv/bin/activate

  # Update Python packages
  pip install -r requirements.txt

  # Update Node.js packages
  npm install
  ```
After these steps, you can launch the application as usual with `npm start`, and you will be running the latest version.
The documentation is organized to follow the logical workflow of a typical project.
1. Hardware & Project Setup
2. Core Workflows
3. Support
CBAS includes a pre-trained model, the JonesLabModel, which was originally developed for the Guided Labeling workflow. It was trained under tightly controlled conditions using Jones Lab’s own recording hardware and lighting environment. Performance on your own data will vary substantially.
The JonesLabModel is a legacy CBAS v2 model. It uses a different model head, parameter dimensions, and encoder configuration than CBAS v3.
It is not compatible with CBAS v3 inference without manual conversion.
Attempting to load it directly in CBAS v3 will result in shape mismatch or “too many values to unpack” errors.
Most users should not copy or attempt to use this model in new projects.
If you want a working demonstration, train a small v3 model on your own labeled data.
If you fully understand the v2 model structure and still wish to explore it:
- Locate the model: Find the `JonesLabModel` folder inside the source directory (`CBAS/models/JonesLabModel`).
- Copy to your project: Copy the entire folder.
- Paste into your project: Paste it into your project’s `models/` directory.
When you next open the Label/Train page or click Refresh Datasets, the JonesLabModel card will appear.
Note: This model will only function correctly under CBAS v2 or a v3 installation explicitly adapted for legacy support.
CBAS allows power users to experiment with different feature encoder models on a per-project basis. Copy `cbas_config.yaml.example` into your project root as `cbas_config.yaml` and edit the `encoder_model_identifier`.
> [!NOTE]
> Using DINOv3 requires a one-time Hugging Face authentication (read token) and accepting the model’s terms. Switching encoders requires re-encoding videos in that project.
Some state-of-the-art models require you to agree to their terms of use and authenticate with Hugging Face before you can download them. This is a one-time setup per computer.
Step 1: Get Your Hugging Face Access Token
- Log into your account on huggingface.co.
- Go to your Settings page by clicking your profile picture in the top-right.
- Navigate to the Access Tokens tab on the left.
- Create a New token. Give it a name (e.g., `cbas-access`) and assign it the `read` role.
- Copy the generated token (`hf_...`) to your clipboard.
Step 2: Log In from the Command Line
You must log in from the same terminal environment you use to run CBAS.
- Open a Command Prompt (Windows) or Terminal (macOS/Linux).
- Navigate to your main CBAS source code directory (e.g., `cd C:\Users\YourName\Documents\CBAS`).
- Activate your Python virtual environment:
  - On Windows: `.\venv\Scripts\activate`
  - On macOS/Linux: `source venv/bin/activate`
- Run the login command:

  ```bash
  huggingface-cli login
  ```
- Paste your access token when prompted and press Enter. Your terminal is now authenticated.
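If you prefer, the same login can be performed or verified from Python using the `huggingface_hub` package, assuming it is available in your virtual environment (it ships with the Hugging Face libraries). The token value below is a placeholder.

```python
# Optional: perform or verify the Hugging Face login from Python instead of
# the CLI. The token string is a placeholder; paste your own hf_... token.
from huggingface_hub import login, whoami

login(token="hf_your_token_here")   # equivalent to `huggingface-cli login`
print(whoami()["name"])             # prints your username if the token works
```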
Step 3: Agree to the Model's Terms
Before you can download the model, you must accept its terms on the model's page.
- Go to the model's page on Hugging Face, for example: facebook/dinov3-vitb16-pretrain-lvd1689m.
- If prompted, click the button to review and accept the terms of use.
Step 4: Configure Your Project and Run CBAS
- In your specific CBAS project folder, edit your `cbas_config.yaml` file to uncomment the line for the DINOv3 model:

  ```yaml
  encoder_model_identifier: "facebook/dinov3-vitb16-pretrain-lvd1689m"
  ```

- Save the file and run CBAS normally (`npm start`).
The first time you launch a project with the new model, the backend will download the model files, which may take several minutes. All subsequent launches will be fast. Because this is a new encoder, all videos in this project will be automatically re-queued for encoding.
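If you want to confirm that authentication and the terms acceptance worked before launching CBAS, you can optionally pre-download the encoder with `huggingface_hub`. This snippet assumes the model is fetched into the standard Hugging Face cache.

```python
# Optional pre-flight check: download (or reuse the cached copy of) the gated
# encoder. This fails with a clear error if your token is missing or the
# model's terms have not been accepted yet.
from huggingface_hub import snapshot_download

snapshot_download("facebook/dinov3-vitb16-pretrain-lvd1689m")
print("Model files are cached locally.")
```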
If you encounter issues with installation, "No files found" errors, or model training:
➡️ Click here to view the comprehensive Troubleshooting Guide
This guide covers:
- Installation errors (e.g., "No matching distribution for torch")
- "No new files found" / Inference errors
- Training stopping early
- Visual artifacts in recordings
While not required, we strongly recommend using a modern NVIDIA GPU (RTX 20-series or newer) to allow for GPU-accelerated training and inference.
Our lab's test machines:
- CPU: AMD Ryzen 9 5900X / 7900X
- RAM: 32 GB DDR4/DDR5
- SSD: 1TB+ NVMe SSD
- GPU: NVIDIA GeForce RTX 3090 24GB / 4090 24GB
As this is a beta, feedback and bug reports are highly encouraged! Please open an Issue to report any problems you find.






