merveenoyan/smol-vision

Smol Vision 🐣

Recipes for shrinking, optimizing, and customizing cutting-edge vision and multimodal AI models.

Latest examples 👇🏻

Note: The script and notebook have been updated to fix a few issues related to QLoRA!

| Topic | Notebook | Description |
|---|---|---|
| Quantization/ONNX | Faster and Smaller Zero-shot Object Detection with Optimum | Quantize the state-of-the-art zero-shot object detection model OWLv2 using Optimum ONNX Runtime tools. |
| VLM Fine-tuning | Fine-tune PaliGemma | Fine-tune the state-of-the-art vision language model PaliGemma using transformers. |
| Intro to Optimum/ORT | Optimizing DETR with 🤗 Optimum | A soft introduction to exporting vision models to ONNX and quantizing them. |
| Model Shrinking | Knowledge Distillation for Computer Vision | Knowledge distillation for image classification. |
| Quantization | Fit in vision models using Quanto | Fit vision models into smaller hardware using Quanto. |
| Speed-up | Faster foundation models with torch.compile | Improve latency for foundation models using torch.compile. |
| VLM Fine-tuning | Fine-tune Florence-2 | Fine-tune Florence-2 on the DocVQA dataset. |
| VLM Fine-tuning | QLoRA Fine-tune IDEFICS3 on VQAv2 | QLoRA fine-tune IDEFICS3 on the VQAv2 dataset. |
| VLM Fine-tuning (Script) | QLoRA Fine-tune IDEFICS3 on VQAv2 | QLoRA fine-tune IDEFICS3 on the VQAv2 dataset, using a standalone training script. |
| Multimodal RAG | Multimodal RAG using ColPali and Qwen2-VL | Retrieve documents with ColPali through Byaldi, without heavy document processing, and generate answers with Qwen2-VL. |
| Speed-up/Memory Optimization | Vision language model serving using TGI (SOON) | Explore speed-ups and memory improvements for vision-language model serving with text-generation-inference. |
| Quantization/Optimum/ORT | All levels of quantization and graph optimizations for Image Segmentation using Optimum (SOON) | End-to-end model optimization using Optimum. |
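Several of the recipes above (Quanto, Optimum/ONNX Runtime) revolve around post-training quantization: storing weights in a low-precision integer format to shrink the model. As a minimal, library-agnostic sketch of the idea, not code from any of the notebooks, PyTorch's built-in dynamic quantization converts the `Linear` weights of a toy model (hypothetical, chosen only for illustration) to int8:

```python
import torch
import torch.nn as nn
from torch.ao.quantization import quantize_dynamic

# Hypothetical toy classifier head, standing in for a real vision model.
fp32_model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Dynamic quantization: weights are stored as int8 and activations
# are quantized on the fly at inference time.
int8_model = quantize_dynamic(fp32_model, {nn.Linear}, dtype=torch.qint8)

# The quantized model is a drop-in replacement for inference.
out = int8_model(torch.randn(1, 512))
```

The Quanto and Optimum notebooks apply the same principle to full vision backbones, with calibration and graph-level optimizations on top.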
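The knowledge-distillation notebook trains a small student model to mimic a larger teacher by matching the teacher's temperature-softened class probabilities. A minimal sketch of that soft-target loss in pure Python (the logits below are made up for illustration; the notebook itself works with real image-classification models):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between teacher and student soft targets,
    scaled by T^2 as in the standard distillation formulation."""
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl
```

A higher temperature flattens the teacher's distribution, exposing the relative similarities between classes that a one-hot label hides; in practice this term is combined with the ordinary cross-entropy loss on the ground-truth labels.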