Multimodal: Fix memory leak with MMEmbeddings
In a plain Python class, a mutable class attribute is created once and
shared by reference across all instances, so every embeddings instance
appended to the same underlying list and memory grew without ever being
released.

Switch to a Pydantic model and use `default_factory` so each instance
gets its own containers.

Signed-off-by: kingbri <[email protected]>
kingbri1 committed Feb 2, 2025
1 parent bd16681 commit 96e8375
Showing 1 changed file with 5 additions and 5 deletions.
10 changes: 5 additions & 5 deletions common/multimodal.py
@@ -1,20 +1,20 @@
-from typing import List
 from backends.exllamav2.vision import get_image_embedding
 from common import model
 from loguru import logger
+from pydantic import BaseModel, Field
+from typing import List
 
 from common.optional_dependencies import dependencies
 
 if dependencies.exllamav2:
     from exllamav2 import ExLlamaV2VisionTower
 
 
-class MultimodalEmbeddingWrapper:
+class MultimodalEmbeddingWrapper(BaseModel):
     """Common multimodal embedding wrapper"""
 
     type: str = None
-    content: List = []
-    text_alias: List[str] = []
+    content: list = Field(default_factory=list)
+    text_alias: List[str] = Field(default_factory=list)
 
     async def add(self, url: str):
         # Determine the type of vision embedding to use
