5 changes: 5 additions & 0 deletions README.md
@@ -16,6 +16,11 @@ Leveraging large-scale pre-trained ViTs, EoMT achieves accuracy similar to state

Turns out, *your ViT is secretly an image segmentation model*. EoMT shows that architectural complexity isn't necessary. For segmentation, a plain Transformer is all you need.

## News

- [2025-08-15]: 🚀 **EoMT is supported in [LightlyTrain](https://github.com/lightly-ai/lightly-train)!**
Pre-train ViT backbones and fine-tune with EoMT in just [a few lines of code](https://docs.lightly.ai/train/stable/semantic_segmentation.html). LightlyTrain is compatible with DINOv3 models. 🚀

## 🤗 Transformers Quick Usage Example

EoMT is also available on [Hugging Face Transformers](https://huggingface.co/docs/transformers/main/model_doc/eomt). To install:
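The install command itself is collapsed in this diff view. For Hugging Face Transformers it is presumably the standard pip install sketched below; whether the original README pins a specific version is unknown, so treat this as an assumption rather than the exact command from the diff:

```shell
# Assumed standard install of Hugging Face Transformers (the diff collapses
# the original command, which may pin a version or extra dependencies)
pip install transformers
```

After installing, the EoMT checkpoints can be loaded through the usual Transformers auto classes, as described in the linked model documentation.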