Commit e97bc7d

chore: update readme, bump version, add changelog

1 parent 547c848 commit e97bc7d

File tree

4 files changed: +74 −63 lines changed

CHANGELOG.md

+29 −1

@@ -1,5 +1,33 @@
 
-<a id='changelog-0.1.23'></a>
+<a id='changelog-0.1.25'></a>
+# 0.1.25 — 2024-07-05
+
+## Features
+
+- Image encoders are now imported only from timm models.
+- Added `enc_out_indices` to the model classes to enable selecting which layers to use as the encoder outputs.
+
+## Removed
+
+- Removed the original SAM and DINOv2 image-encoder implementations from this repo. These can be found in timm models these days.
+- Removed the `cellseg_models_pytorch.training` module, which was left unused after the example notebooks were updated.
+
+## Examples
+
+- Updated example notebooks.
+- Added new example notebooks utilizing the UNI foundation model from the MahmoodLab.
+- Added new example notebooks utilizing the Prov-GigaPath foundation model from Microsoft Research.
+- **NOTE:** These examples use the Hugging Face model hub to load the weights. Permission to use the model weights is required to run these examples.
+
+## Chore
+
+- Updated the timm version requirement to >=1.0.0.
+
+## Breaking changes
+
+- Dropped support for Python 3.9.
+- The `self.encoder` in each model is new, so models with trained weights from previous versions of the package will not work with this version.
+
+<a id='changelog-0.1.24'></a>
 # 0.1.24 — 2023-10-13
 
 ## Style

README.md

+44 −46

@@ -5,8 +5,8 @@
 **Python library for 2D cell/nuclei instance segmentation models written with [PyTorch](https://pytorch.org/).**
 
 [![Generic badge](https://img.shields.io/badge/License-MIT-<COLOR>.svg?style=for-the-badge)](https://github.com/okunator/cellseg_models.pytorch/blob/master/LICENSE)
-[![PyTorch - Version](https://img.shields.io/badge/PYTORCH-1.8.1+-red?style=for-the-badge&logo=pytorch)](https://pytorch.org/)
-[![Python - Version](https://img.shields.io/badge/PYTHON-3.9+-red?style=for-the-badge&logo=python&logoColor=white)](https://www.python.org/)
+[![PyTorch - Version](https://img.shields.io/badge/PYTORCH-2+-red?style=for-the-badge&logo=pytorch)](https://pytorch.org/)
+[![Python - Version](https://img.shields.io/badge/PYTHON-3.10+-red?style=for-the-badge&logo=python&logoColor=white)](https://www.python.org/)
 <br>
 [![Github Test](https://img.shields.io/github/actions/workflow/status/okunator/cellseg_models.pytorch/tests.yml?label=Tests&logo=github&&style=for-the-badge)](https://github.com/okunator/cellseg_models.pytorch/actions/workflows/tests.yml)
 [![Pypi](https://img.shields.io/pypi/v/cellseg-models-pytorch?color=blue&logo=pypi&style=for-the-badge)](https://pypi.org/project/cellseg-models-pytorch/)
@@ -22,40 +22,37 @@
 
 ## Introduction
 
-**cellseg-models.pytorch** is a library built upon [PyTorch](https://pytorch.org/) that contains multi-task encoder-decoder architectures along with dedicated post-processing methods for segmenting cell/nuclei instances. As the name might suggest, this library is heavily inspired by [segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch) library for semantic segmentation.
+**cellseg-models.pytorch** is a library built upon [PyTorch](https://pytorch.org/) that contains multi-task encoder-decoder architectures along with dedicated post-processing methods for segmenting cell/nuclei instances. As the name might suggest, this library is heavily inspired by the [segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch) library for semantic segmentation.
 
-## Features
+## What's new? 📢
+- You can now use any pre-trained image encoder from the [timm](https://github.com/huggingface/pytorch-image-models) library as the model backbone (most of them qualify, provided they implement the `forward_intermediates` method).
+- New example notebooks show how to finetune **Cellpose** and **Stardist** with the new *state-of-the-art* foundation model backbones: [*UNI*](https://www.nature.com/articles/s41591-024-02857-3#Sec13) from the [MahmoodLab](https://faisal.ai/) and [Prov-GigaPath](https://www.nature.com/articles/s41586-024-07441-w) from Microsoft Research. Check out the notebooks [here (UNI)](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cellpose_UNI.ipynb) and [here (Prov-GigaPath)](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_stardist_provgigapath.ipynb).
+- **NOTE!** **These foundation models are licensed under restrictive licenses, and you need to agree to the terms of the respective model to run the above notebooks.** You can request access on the model pages: [UNI](https://huggingface.co/MahmoodLab/UNI) and [Prov-GigaPath](https://huggingface.co/prov-gigapath/prov-gigapath). These models may only be used for non-commercial, academic research purposes with proper attribution. **Be sure that you have read and understood the terms before using the models.**
+
+## Features 🌟
 
 - High level API to define cell/nuclei instance segmentation models.
-- 6 cell/nuclei instance segmentation models and more to come.
+- 6 cell/nuclei instance segmentation model architectures and more to come.
 - Open source datasets for training and benchmarking.
-- Pre-trained backbones/encoders from the [timm](https://github.com/huggingface/pytorch-image-models) library.
-- Pre-trained transformer backbones like [DinoV2](https://arxiv.org/abs/2304.07193) and [SAM](https://ai.facebook.com/research/publications/segment-anything/).
-- All the architectures can be augmented to [panoptic segmentation](https://arxiv.org/abs/1801.00868).
 - Flexibility to modify the components of the model architectures.
 - Sliding window inference for large images.
 - Multi-GPU inference.
+- All model architectures can be augmented to [panoptic segmentation](https://arxiv.org/abs/1801.00868).
 - Popular training losses and benchmarking metrics.
 - Benchmarking utilities both for model latency & segmentation performance.
 - Regularization techniques to tackle batch effects/domain shifts such as [Strong Augment](https://arxiv.org/abs/2206.15274), [Spectral decoupling](https://arxiv.org/abs/2011.09468), and [Label smoothing](https://arxiv.org/abs/1512.00567).
-- Ability to add transformers to the decoder layers.
 - Example notebooks to train models with [lightning](https://lightning.ai/docs/pytorch/latest/) or [accelerate](https://huggingface.co/docs/accelerate/index).
+- Example notebooks to finetune models with foundation model backbones such as UNI, Prov-GigaPath, and DINOv2.
 
-## Installation
-
-**Basic installation**
-
-```shell
-pip install cellseg-models-pytorch
-```
-
-**To install extra dependencies (training utilities and datamodules for open-source datasets) use**
+## Installation 🛠️
 
 ```shell
-pip install cellseg-models-pytorch[all]
+pip install cellseg-models-pytorch
 ```
 
-## Models
+## Models 🤖
 
 | Model                      | Paper                                                                            |
 | -------------------------- | -------------------------------------------------------------------------------- |
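One feature listed above, sliding window inference for large images, can be illustrated independently of the library. Below is a minimal NumPy sketch of overlap-and-average tiling under assumed tile/stride values, with a toy stand-in for the segmentation network (the function names are illustrative, not the library's API):

```python
import numpy as np

def tile_coords(h, w, tile, stride):
    """Top-left coordinates of overlapping tiles covering an (h, w) image."""
    assert h >= tile and w >= tile, "sketch assumes image at least tile-sized"
    ys = list(range(0, h - tile + 1, stride))
    xs = list(range(0, w - tile + 1, stride))
    # Make sure the bottom/right edges are fully covered.
    if ys[-1] != h - tile:
        ys.append(h - tile)
    if xs[-1] != w - tile:
        xs.append(w - tile)
    return [(y, x) for y in ys for x in xs]

def sliding_window_predict(img, predict, tile=256, stride=128):
    """Run `predict` on overlapping tiles and average the overlapping regions."""
    h, w = img.shape[:2]
    out = np.zeros((h, w), dtype=np.float64)
    counts = np.zeros((h, w), dtype=np.float64)
    for y, x in tile_coords(h, w, tile, stride):
        out[y:y + tile, x:x + tile] += predict(img[y:y + tile, x:x + tile])
        counts[y:y + tile, x:x + tile] += 1
    return out / counts  # every pixel is covered by at least one tile

# Toy "model": a pointwise function standing in for a segmentation network.
img = np.random.rand(600, 700)
pred = sliding_window_predict(img, lambda t: t * 2.0)
```

With a pointwise `predict`, averaging the overlaps reproduces the full-image result exactly; a real network benefits further because tile borders get blended.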
@@ -71,102 +68,103 @@ pip install cellseg-models-pytorch[all]
 | Dataset                       | Paper                                                                                              |
 | ----------------------------- | -------------------------------------------------------------------------------------------------- |
 | [[7, 8](#References)] Pannuke | https://arxiv.org/abs/2003.10778 , https://link.springer.com/chapter/10.1007/978-3-030-23937-4_2 |
-| [[9](#References)] Lizard | http://arxiv.org/abs/2108.11195 |
 
-## Notebook examples
+## Notebook examples 👇
 
 <details>
-<summary style="margin-left: 25px;">Training Hover-Net with Pannuke</summary>
+<summary style="margin-left: 25px;">Finetuning CellPose with UNI backbone</summary>
 <div style="margin-left: 25px;">
 
-- [Training Hover-Net with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_hovernet.ipynb). Here we train the `Hover-Net` nuclei segmentation model with an `imagenet`-pretrained `resnet50` backbone from the `timm` library. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained by utilizing [lightning](https://lightning.ai/docs/pytorch/latest/).
+- [Finetuning CellPose with UNI](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cellpose_UNI.ipynb). Here we finetune the CellPose multi-class nuclei segmentation model with the foundation-model `UNI` image-encoder backbone (check out [UNI](https://www.nature.com/articles/s41591-024-02857-3#Sec13)). Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained (with checkpointing) by utilizing [accelerate](https://huggingface.co/docs/accelerate/index) by Hugging Face. **NOTE:** you need to have been granted access to the UNI weights and agreed to the terms of the model to run the notebook.
 
 </div>
 </details>
 
 <details>
-<summary style="margin-left: 25px;">Training Stardist with Pannuke</summary>
+<summary style="margin-left: 25px;">Finetuning Stardist with Prov-GigaPath backbone</summary>
 <div style="margin-left: 25px;">
 
-- [Training Stardist with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_stardist.ipynb). Here we train the `Stardist` multi-class nuclei segmentation model with an `imagenet`-pretrained `efficientnetv2_s` backbone from the `timm` library. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained by utilizing [lightning](https://lightning.ai/docs/pytorch/latest/).
+- [Finetuning Stardist with Prov-GigaPath](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_stardist_provgigapath.ipynb). Here we finetune the Stardist multi-class nuclei segmentation model with the foundation-model `Prov-GigaPath` image-encoder backbone (check out [Prov-GigaPath](https://www.nature.com/articles/s41586-024-07441-w)). Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained (with checkpointing) by utilizing [accelerate](https://huggingface.co/docs/accelerate/index) by Hugging Face. **NOTE:** you need to have been granted access to the Prov-GigaPath weights and agreed to the terms of the model to run the notebook.
 
 </div>
 </details>
 
 <details>
-<summary style="margin-left: 25px;">Training CellPose with Pannuke</summary>
+<summary style="margin-left: 25px;">Finetuning CellPose with DINOv2 backbone</summary>
 <div style="margin-left: 25px;">
 
-- [Training CellPose with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cellpose.ipynb). Here we train the `CellPose` multi-class nuclei segmentation model with an `imagenet`-pretrained `convnext_small` backbone from the `timm` library. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained (with checkpointing) by utilizing [accelerate](https://huggingface.co/docs/accelerate/index) by Hugging Face.
+- [Finetuning CellPose with DINOv2 backbone for Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cellpose_dinov2.ipynb). Here we finetune the CellPose multi-class nuclei segmentation model with a `LVD-142M`-pretrained `DINOv2` backbone. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained (with checkpointing) by utilizing [lightning](https://lightning.ai/docs/pytorch/latest/).
 
 </div>
 </details>
 
 <details>
-<summary style="margin-left: 25px;">Training OmniPose with Pannuke</summary>
+<summary style="margin-left: 25px;">Finetuning CellVit-SAM with Pannuke</summary>
 <div style="margin-left: 25px;">
 
-- [Training OmniPose with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_omnipose.ipynb). Here we train the OmniPose multi-class nuclei segmentation model with an `imagenet`-pretrained `focalnet_small_lrf` backbone from the `timm` library. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained (with checkpointing) by utilizing [accelerate](https://huggingface.co/docs/accelerate/index) by Hugging Face.
+- [Finetuning CellVit-SAM with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cellvit.ipynb). Here we finetune the CellVit-SAM multi-class nuclei segmentation model with a `SA-1B`-pretrained `SAM` image-encoder backbone (check out [SAM](https://github.com/facebookresearch/segment-anything)). The encoder is a transformer-based `ViTDet` model. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained (with checkpointing) by utilizing [accelerate](https://huggingface.co/docs/accelerate/index) by Hugging Face.
 
 </div>
 </details>
 
 <details>
-<summary style="margin-left: 25px;">Training CPP-Net with Pannuke</summary>
+<summary style="margin-left: 25px;">Training Hover-Net with Pannuke</summary>
 <div style="margin-left: 25px;">
 
-- [Training CPP-Net with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cppnet.ipynb). Here we train the CPP-Net multi-class nuclei segmentation model with an `imagenet`-pretrained `efficientnetv2_s` backbone from the `timm` library. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained by utilizing [lightning](https://lightning.ai/docs/pytorch/latest/).
+- [Training Hover-Net with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_hovernet.ipynb). Here we train the `Hover-Net` nuclei segmentation model with an `imagenet`-pretrained `resnet50` backbone from the `timm` library. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained by utilizing [lightning](https://lightning.ai/docs/pytorch/latest/).
 
 </div>
 </details>
 
 <details>
-<summary style="margin-left: 25px;">Finetuning CellPose with DINOv2 backbone</summary>
+<summary style="margin-left: 25px;">Training Stardist with Pannuke</summary>
 <div style="margin-left: 25px;">
 
-- [Finetuning CellPose with DINOv2 backbone for Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cellpose_dinov2.ipynb). Here we finetune the CellPose multi-class nuclei segmentation model with a `LVD-142M`-pretrained `DINOv2` backbone. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained (with checkpointing) by utilizing [lightning](https://lightning.ai/docs/pytorch/latest/).
+- [Training Stardist with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_stardist.ipynb). Here we train the `Stardist` multi-class nuclei segmentation model with an `imagenet`-pretrained `efficientnetv2_s` backbone from the `timm` library. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained by utilizing [lightning](https://lightning.ai/docs/pytorch/latest/).
 
 </div>
 </details>
 
 <details>
-<summary style="margin-left: 25px;">Finetuning CellVit-SAM with Pannuke</summary>
+<summary style="margin-left: 25px;">Training CellPose with Pannuke</summary>
 <div style="margin-left: 25px;">
 
-- [Finetuning CellVit-SAM with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cellvit.ipynb). Here we finetune the CellVit-SAM multi-class nuclei segmentation model with a `SA-1B`-pretrained `SAM` image-encoder backbone (check out [`SAM`](https://github.com/facebookresearch/segment-anything)). The encoder is a transformer-based `ViTDet` model. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained (with checkpointing) by utilizing [accelerate](https://huggingface.co/docs/accelerate/index) by Hugging Face.
+- [Training CellPose with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cellpose.ipynb). Here we train the `CellPose` multi-class nuclei segmentation model with an `imagenet`-pretrained `convnext_small` backbone from the `timm` library. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained (with checkpointing) by utilizing [accelerate](https://huggingface.co/docs/accelerate/index) by Hugging Face.
 
 </div>
 </details>
 
 <details>
-<summary style="margin-left: 25px;">Benchmarking Cellpose Trained on Pannuke</summary>
+<summary style="margin-left: 25px;">Training OmniPose with Pannuke</summary>
 <div style="margin-left: 25px;">
 
-- [Benchmarking Cellpose Trained on Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_cellpose_benchmark.ipynb). Here we run benchmarking for `Cellpose` that was trained on Pannuke. Both model performance and latency benchmarking are covered.
+- [Training OmniPose with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_omnipose.ipynb). Here we train the OmniPose multi-class nuclei segmentation model with an `imagenet`-pretrained `focalnet_small_lrf` backbone from the `timm` library. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained (with checkpointing) by utilizing [accelerate](https://huggingface.co/docs/accelerate/index) by Hugging Face.
 
 </div>
 </details>
 
 <details>
-<summary style="margin-left: 25px;">Training CellPose with Lizard</summary>
+<summary style="margin-left: 25px;">Training CPP-Net with Pannuke</summary>
 <div style="margin-left: 25px;">
 
-- [Training CellPose with Lizard](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/lizard_nuclei_segmentation_cellpose.ipynb). Here we train the `Cellpose` model with the Lizard dataset, which is composed of images of varying sizes. This example is old and might not be up to date.
+- [Training CPP-Net with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cppnet.ipynb). Here we train the CPP-Net multi-class nuclei segmentation model with an `imagenet`-pretrained `efficientnetv2_s` backbone from the `timm` library. Folds 1 & 2 of the Pannuke dataset are used as training data and fold 3 as validation data. The model is trained by utilizing [lightning](https://lightning.ai/docs/pytorch/latest/).
 
 </div>
 </details>
 
+<details>
+<summary style="margin-left: 25px;">Benchmarking Cellpose Trained on Pannuke</summary>
+<div style="margin-left: 25px;">
+
+- [Benchmarking Cellpose Trained on Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_cellpose_benchmark.ipynb). Here we run benchmarking for `Cellpose` that was trained on Pannuke. Both model performance and latency benchmarking are covered.
+
+</div>
+</details>
 
-## Code Examples
+## Code Examples 💻
 
 **Define Cellpose for cell segmentation.**

cellseg_models_pytorch/__init__.py

+1 −1

@@ -1,7 +1,7 @@
 from . import inference, models, utils
 from .models import CellPoseUnet, HoverNet, StarDistUnet
 
-__version__ = "0.1.24"
+__version__ = "0.1.25"
 submodules = ["utils", "models", "inference"]
 __all__ = [
     "__version__",

0 commit comments
