**cellseg-models.pytorch** is a library built upon [PyTorch](https://pytorch.org/) that contains multi-task encoder-decoder architectures along with dedicated post-processing methods for segmenting cell/nuclei instances. As the name suggests, this library is heavily inspired by the [segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch) library for semantic segmentation.
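A minimal sketch of the high-level API follows. The factory name `cellpose_base` and the `type_classes` argument follow the project's earlier quick-start examples; the current API may differ slightly between versions.

```python
import torch
import cellseg_models_pytorch as csmp

# Build a multi-task CellPose model: one decoder branch predicts the
# cellpose flow maps, another predicts nuclei type classes.
model = csmp.models.cellpose_base(type_classes=5)

# Raw multi-task predictions come out as a dict, one tensor per head.
x = torch.rand([1, 3, 256, 256])
outputs = model(x)

# NOTE: the raw outputs still need the dedicated post-processing step
# to become labeled instance masks.
```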
## What's new? 📢
- Now you can use any pre-trained image encoder from the [timm](https://github.com/huggingface/pytorch-image-models) library as the model backbone (provided that it implements the `forward_intermediates` method, which most of them do). See the sketch below this list.
- New example notebooks showing how to finetune **Cellpose** and **Stardist** with the new *state-of-the-art* foundation model backbones: [*UNI*](https://www.nature.com/articles/s41591-024-02857-3#Sec13) from the [MahmoodLab](https://faisal.ai/), and [Prov-GigaPath](https://www.nature.com/articles/s41586-024-07441-w) from Microsoft Research. Check out the notebooks [here (UNI)](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cellpose_UNI.ipynb), and [here (Prov-GigaPath)](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_stardist_provgigapath.ipynb).
- **NOTE!**: **These foundation models are licensed under restrictive licenses, and you need to agree to each model's terms to be able to run the above notebooks.** You can request access on the model pages: [UNI](https://huggingface.co/MahmoodLab/UNI) and [Prov-GigaPath](https://huggingface.co/prov-gigapath/prov-gigapath). These models may only be used for non-commercial, academic research purposes with proper attribution. **Be sure that you have read and understood the terms before using the models.**
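For example, swapping in a different timm encoder could look roughly like this. The `enc_name`/`enc_pretrain` parameter names are assumptions based on the project's model constructors, so verify them against the current signatures:

```python
import cellseg_models_pytorch as csmp

# Any timm encoder exposing `forward_intermediates` should work as the
# backbone. `enc_name` and `enc_pretrain` are assumed parameter names.
model = csmp.models.cellpose_base(
    type_classes=5,
    enc_name="convnext_small",  # any suitable timm encoder name
    enc_pretrain=True,          # load timm's pre-trained weights
)
```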
## Features 🌟
- High-level API to define cell/nuclei instance segmentation models.
- 6 cell/nuclei instance segmentation model architectures and more to come.
- Open source datasets for training and benchmarking.
- Flexibility to modify the components of the model architectures.
- Sliding window inference for large images (see the sketch after this list).
- Multi-GPU inference.
- All model architectures can be augmented to [panoptic segmentation](https://arxiv.org/abs/1801.00868).
- Popular training losses and benchmarking metrics.
- Benchmarking utilities both for model latency & segmentation performance.
- Regularization techniques to tackle batch effects/domain shifts such as [Strong Augment](https://arxiv.org/abs/2206.15274), [Spectral decoupling](https://arxiv.org/abs/2011.09468), [Label smoothing](https://arxiv.org/abs/1512.00567).
- Example notebooks to train models with [lightning](https://lightning.ai/docs/pytorch/latest/) or [accelerate](https://huggingface.co/docs/accelerate/index).
- Example notebooks to finetune models with foundation model backbones such as UNI, Prov-GigaPath, and DINOv2.
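As a rough sketch, sliding window inference follows the pattern below. The `SlidingWindowInferer` arguments shown here are modeled on earlier releases of the library and may differ in the current API:

```python
import cellseg_models_pytorch as csmp

model = csmp.models.cellpose_base(type_classes=5)

# Tile each large image with a fixed-size window moving `stride` pixels
# at a time, run the model per tile, and merge the predictions.
inferer = csmp.inference.SlidingWindowInferer(
    model=model,
    input_path="/path/to/images/",            # folder of large images
    checkpoint_path="/path/to/weights.ckpt",  # trained model weights
    out_activations={"cellpose": "tanh", "type": "softmax"},
    out_boundary_weights={"cellpose": True, "type": False},
    instance_postproc="cellpose",             # dedicated post-processing
    patch_size=(256, 256),
    stride=128,
    batch_size=8,
)
inferer.infer()
```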
## Installation 🛠️

**Basic installation**

```shell
pip install cellseg-models-pytorch
```

**To install extra dependencies (training utilities and datamodules for open-source datasets), use:**
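The command presumably takes the usual pip extras form; the `[all]` group name below is carried over from earlier releases, so check the project's `pyproject.toml` for the exact extras:

```shell
pip install "cellseg-models-pytorch[all]"
```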
<details>
<summary style="margin-left: 25px;">Finetuning CellPose with UNI backbone</summary>
<div style="margin-left: 25px;">

[Finetuning CellPose with UNI](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cellpose_UNI.ipynb). Here we finetune the CellPose multi-class nuclei segmentation model with the `UNI` foundation-model image encoder as the backbone (check out [UNI](https://www.nature.com/articles/s41591-024-02857-3#Sec13)). Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained (with checkpointing) using [accelerate](https://huggingface.co/docs/accelerate/index) by Hugging Face. **NOTE**: you need to have been granted access to the UNI weights and agreed to the model's terms to run the notebook.
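
For reference, the gated UNI encoder is loaded through timm roughly as on the UNI model card (a sketch; the notebook handles wiring the encoder into the csmp model):

```python
import timm
from huggingface_hub import login

# Requires a Hugging Face token for an account that has been granted
# access to the MahmoodLab/UNI repository.
login()

# ViT-L/16 UNI encoder; keyword arguments follow the UNI model card.
encoder = timm.create_model(
    "hf-hub:MahmoodLab/UNI",
    pretrained=True,
    init_values=1e-5,
    dynamic_img_size=True,
)
```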
</div>
</details>
<details>
<summary style="margin-left: 25px;">Finetuning Stardist with Prov-GigaPath backbone</summary>
<div style="margin-left: 25px;">

[Finetuning Stardist with Prov-GigaPath](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_stardist_provgigapath.ipynb). Here we finetune the Stardist multi-class nuclei segmentation model with the `Prov-GigaPath` foundation-model image encoder as the backbone (check out [Prov-GigaPath](https://www.nature.com/articles/s41586-024-07441-w)). Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained (with checkpointing) using [accelerate](https://huggingface.co/docs/accelerate/index) by Hugging Face. **NOTE**: you need to have been granted access to the Prov-GigaPath weights and agreed to the model's terms to run the notebook.
</div>
</details>
<details>
<summary style="margin-left: 25px;">Finetuning CellPose with DINOv2 backbone</summary>
<div style="margin-left: 25px;">

[Finetuning CellPose with DINOv2 backbone for Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cellpose_dinov2.ipynb). Here we finetune the CellPose multi-class nuclei segmentation model with an `LVD-142M` pretrained `DINOv2` backbone. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained (with checkpointing) using [lightning](https://lightning.ai/docs/pytorch/latest/).
</div>
</details>
<details>
<summary style="margin-left: 25px;">Finetuning CellVit-SAM with Pannuke</summary>
<div style="margin-left: 25px;">

[Finetuning CellVit-SAM with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cellvit.ipynb). Here we finetune the CellVit-SAM multi-class nuclei segmentation model with a `SA-1B` pretrained `SAM` image-encoder backbone (check out [SAM](https://github.com/facebookresearch/segment-anything)). The encoder is a transformer-based `VitDet` model. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained (with checkpointing) using [accelerate](https://huggingface.co/docs/accelerate/index) by Hugging Face.
</div>
</details>
<details>
<summary style="margin-left: 25px;">Training Hover-Net with Pannuke</summary>
<div style="margin-left: 25px;">

[Training Hover-Net with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_hovernet.ipynb). Here we train the `Hover-Net` nuclei segmentation model with an `imagenet` pretrained `resnet50` backbone from the `timm` library. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained using [lightning](https://lightning.ai/docs/pytorch/latest/).
</div>
</details>
<details>
<summary style="margin-left: 25px;">Training Stardist with Pannuke</summary>
<div style="margin-left: 25px;">

[Training Stardist with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_stardist.ipynb). Here we train the `Stardist` multi-class nuclei segmentation model with an `imagenet` pretrained `efficientnetv2_s` backbone from the `timm` library. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained using [lightning](https://lightning.ai/docs/pytorch/latest/).
</div>
</details>
<details>
<summary style="margin-left: 25px;">Training CellPose with Pannuke</summary>
<div style="margin-left: 25px;">

[Training CellPose with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cellpose.ipynb). Here we train the `CellPose` multi-class nuclei segmentation model with an `imagenet` pretrained `convnext_small` backbone from the `timm` library. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained (with checkpointing) using [accelerate](https://huggingface.co/docs/accelerate/index) by Hugging Face.
</div>
</details>
<details>
<summary style="margin-left: 25px;">Training OmniPose with Pannuke</summary>
<div style="margin-left: 25px;">

[Training OmniPose with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_omnipose.ipynb). Here we train the OmniPose multi-class nuclei segmentation model with an `imagenet` pretrained `focalnet_small_lrf` backbone from the `timm` library. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained (with checkpointing) using [accelerate](https://huggingface.co/docs/accelerate/index) by Hugging Face.
</div>
</details>
<details>
<summary style="margin-left: 25px;">Training CPP-Net with Pannuke</summary>
<div style="margin-left: 25px;">

[Training CPP-Net with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cppnet.ipynb). Here we train the CPP-Net multi-class nuclei segmentation model with an `imagenet` pretrained `efficientnetv2_s` backbone from the `timm` library. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained using [lightning](https://lightning.ai/docs/pytorch/latest/).
</div>
</details>
<details>
<summary style="margin-left: 25px;">Benchmarking Cellpose Trained on Pannuke</summary>
<div style="margin-left: 25px;">

[Benchmarking Cellpose Trained on Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_cellpose_benchmark.ipynb). Here we benchmark the `Cellpose` model that was trained on Pannuke. Both model performance and latency benchmarking are covered.

</div>
</details>