
llama : first attempt to implement vision API (WIP) #9687

Draft · wants to merge 9 commits into master
Conversation

@ngxson ngxson commented Sep 29, 2024

(Hopefully) fix #8010

Important

This is still a WIP; only the simple example is working.
Collaborators are encouraged to discuss and give feedback on this.

Motivation

Currently, the vision capability is provided by the llava example, which is a CLIP implementation in ggml. While it's a good start, the API needs some refactoring to be cleaner and more future-proof.

Inspired by the ongoing rework of the sampling API, I propose moving the CLIP implementation into the main libllama, providing users with a stable, easy-to-use API, like what we did for llama_encode.

The goals of this refactoring are:

  • Provide a good API and code architecture for more models to come in the future
  • A single GGUF file for both vision and language (so, no more model surgery)
  • Only llava for now (because it is simple for me to understand)
  • Have llama-cli accept image input

The no-goals:

  • No changes to llama-server; that will be another PR
  • No minicpm, llama-3.2-vision, phi-3-vision, etc.; again, that will be another PR

Plan

  • define the plan:
    • gguf metadata and tensor naming scheme
    • define API to be exposed from libllama
  • upgrade convert_hf_to_gguf.py to support llava --> not an ideal implementation, but kinda works
  • extend llama_model and llama_context to hold vision-related data
  • add llama-vision.{cpp|h}
  • add image capability to llama-cli

Implementation

Naming scheme

For metadata, we will add a vision.* namespace.

  • vision.type: the type of vision encoder. We only support "clip" for now (not sure if there are any other implementations out there)
  • vision.*: other params for vision encoding, for example patch size, image size, etc.
  • vision.clip.*: CLIP-related params

Example:

vision.type = 'clip'
vision.image_size = 336
vision.patch_size = 14
vision.clip.architecture = 'llava'
vision.clip.block_count = 24
vision.clip.embedding_length = 1024
vision.clip.feed_forward_length = 4096
vision.clip.attention.head_count = 16
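
For illustration, here is a minimal sketch of how a tool might read these proposed keys with ggml's gguf reader (the gguf_* calls existed in ggml.h at the time of this PR; the file name is a placeholder and the u32/string value types are assumptions based on the example above):

#include "ggml.h"   // gguf_* reader API (lived in ggml.h at the time of this PR)
#include <cstdio>

int main() {
    // "llava.gguf" is a placeholder path to a converted single-file model
    struct gguf_init_params params = { /*no_alloc =*/ true, /*ctx =*/ nullptr };
    struct gguf_context * gctx = gguf_init_from_file("llava.gguf", params);
    if (gctx == nullptr) {
        return 1;
    }

    // key names follow the proposed vision.* scheme above
    const int i_type = gguf_find_key(gctx, "vision.type");
    const int i_isz  = gguf_find_key(gctx, "vision.image_size");
    const int i_psz  = gguf_find_key(gctx, "vision.patch_size");

    if (i_type >= 0 && i_isz >= 0 && i_psz >= 0) {
        printf("vision.type       = %s\n", gguf_get_val_str(gctx, i_type));
        printf("vision.image_size = %u\n", gguf_get_val_u32(gctx, i_isz));
        printf("vision.patch_size = %u\n", gguf_get_val_u32(gctx, i_psz));
    }

    gguf_free(gctx);
    return 0;
}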

For the tensor naming scheme, we will prefix all vision-related tensors with v.* (encoder tensors with v.enc.*). For example:

v.mmproj_a.bias
v.mmproj_a.weight
v.enc.embd.cls
v.enc.embd.patch.weight
v.enc.embd.pos.weight
v.enc.blk.0.input_norm.bias
v.enc.blk.0.input_norm.weight

API

libllama will be responsible for:

  • Accepting a bitmap image (RGB format) and splitting it into patches
  • It will NOT process specific formats like PNG, JPG, etc. Users must convert these formats into a bitmap (for example, using STB) before passing it to libllama
  • The API returns embeddings that can be added to a language batch (see the usage sketch after the declarations below)
// represent an RGB image
// size of data must be equal to 3*nx*ny
struct llama_img {
    uint32_t nx;
    uint32_t ny;
    unsigned char * data;
};

typedef struct llama_img_batch {
    int32_t            n_imgs;
    struct llama_img * imgs;
    // add other things in future?
} llama_img_batch;

// encode image into embeddings
int32_t llama_vision_encode(struct llama_context * ctx, llama_img_batch * batch);

// get output embeddings, to be put into language batch
float * llama_vision_get_embeddings(struct llama_context * ctx, int32_t idx);
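
For context, a rough usage sketch of how the proposed calls could fit together, assuming stb_image for decoding; the 0-on-success return convention and the way the embeddings are later appended to a language batch are assumptions, not part of this proposal:

// rough usage sketch of the proposed API (not final)
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
#include "llama.h"

static bool encode_image(struct llama_context * ctx, const char * path) {
    int nx, ny, nc;
    // decode PNG/JPG into a raw RGB bitmap ourselves; libllama only accepts raw RGB
    unsigned char * rgb = stbi_load(path, &nx, &ny, &nc, 3);
    if (rgb == nullptr) {
        return false;
    }

    llama_img img = { (uint32_t) nx, (uint32_t) ny, rgb };
    llama_img_batch batch = { /*n_imgs =*/ 1, &img };

    // assumption: llama_vision_encode returns 0 on success
    const bool ok = llama_vision_encode(ctx, &batch) == 0;
    if (ok) {
        // embeddings for image 0, to be appended to a language batch later
        float * embd = llama_vision_get_embeddings(ctx, 0);
        (void) embd;
    }

    stbi_image_free(rgb);
    return ok;
}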

@ngxson changed the title from "llama : refactor vision API (WIP)" to "llama : first attempt to refactor vision API (WIP)" on Sep 29, 2024
@github-actions bot added the python (python script changes) label on Sep 29, 2024
@ngxson changed the title from "llama : first attempt to refactor vision API (WIP)" to "llama : first attempt to implement vision API (WIP)" on Sep 29, 2024
@@ -178,6 +178,28 @@ class Adapter:
    TYPE       = "adapter.type"
    LORA_ALPHA = "adapter.lora.alpha"

class Vision:
    # only support vision.type = "clip" for now

Probably better to be very specific here that you are supporting ViT. HuggingFace has also moved away from using generic terms for everything. Also, I don't think the purpose is to support actual CLIP inference.

@ngxson (Collaborator, Author) commented Oct 2, 2024

Yeah, agreed. I think for now it's safer to call this clip-vit to reflect that the base implementation is openai/clip-vit-*.

At the moment it's quite complicated for me to drop the clip_ prefix from all functions. But hey, at least the file name is now llama-vision.{cpp|h} instead of llama-clip, which should reflect that we can support ViT and also other things to come in the future.

image_output = std::move(padded_image);
}

static void normalize_image_u8_to_f32(const clip_image_u8 src, clip_image_f32 dst, const std::array<float, 3> & mean, const std::array<float, 3> & std) {
A contributor commented:

This might be a root cause for some bugs ;)

@ngxson (Collaborator, Author) replied:

Yes, probably missing clip_image_f32 &; thanks for pointing this out.
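
For reference, a sketch of the fixed signature with the references restored (passing by reference also avoids copying the images):

static void normalize_image_u8_to_f32(const clip_image_u8 & src, clip_image_f32 & dst,
                                      const std::array<float, 3> & mean, const std::array<float, 3> & std);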

@Nekotekina (Contributor) commented:

Forgive me if this is (probably) off-topic, but would it be possible to use a CLIP model with llama.cpp to compute text embeddings for the "classic" purpose of matching text with images?

@ngxson (Collaborator, Author) commented Oct 11, 2024

> Forgive me if this is (probably) off-topic, but would it be possible to use a CLIP model with llama.cpp to compute text embeddings for the "classic" purpose of matching text with images?

I'm not 100% sure, but in theory, everything is possible with the correct model weights and the correct compute graph.

@danbev (Collaborator) commented Nov 12, 2024

I took a shot at rebasing this branch, as there have been quite a few upstream changes that affect this PR and I wanted to try this API out. I wanted to share the rebased repo in case it would be useful/save time.

I moved the example to a new one named simple-vision, as the simple example upstream has been updated to remove the dependencies on common. There are probably some improvements to be made to the changes from rebasing, but it might be easier/quicker to make those changes on the rebased branch.

@sragrawal commented:

@ngxson are you still planning on working on this? Having this as a first step towards llama-3.2-vision would be very useful.

@ngxson (Collaborator, Author) commented Dec 3, 2024

I'm not actively working on it, but will continue soon. Ref discussion: #10381

@danbev (Collaborator) commented Jan 2, 2025

@ngxson I've been taking a look at this and tried adding some initial support for Llama 3.2 Vision Instruct. I've added a simple-vision-mllama-example that shows the usage and also contains some more details.

The code needs more work and possibly integration with the existing vision API, but my goal was only to get something working as a first step. I've rebased the above linked branch (which builds upon this PR's code) onto master, and after the latest code refactoring there I need to revisit some of my changes. But if this looks like it would be worth pursuing, I'd be happy to continue working on it. If nothing else, perhaps the model conversion and quantization could be used from this.

Labels: examples, python (python script changes)
Successfully merging this pull request may close: server: Bring back multimodal support
7 participants