Bug Description
The ChatMessage class in LlamaIndex 0.12.11 no longer recognizes ImageBlock, regardless of the Python version used. Instead, it processes the content as though it contained only TextBlock. This behavior was observed with both Python 3.9.6 and Python 3.13.1.
In earlier versions of LlamaIndex (e.g., 0.12.10), this issue did not occur with Python 3.9.6. The regression appears to have been introduced in 0.12.11 and significantly impacts workflows that rely on non-text blocks in ChatMessage.
Version
0.12.11
Steps to Reproduce
1. Set up a virtual environment with Python 3.13.1.
2. Install LlamaIndex and other dependencies.
3. Create a ChatMessage instance with both TextBlock and ImageBlock in the content.
4. Observe the behavior when processing the ChatMessage.
e.g.:
from llama_index.llms.openai import OpenAI
from llama_index.core.llms import (
    ChatMessage,
    ImageBlock,
    TextBlock,
    MessageRole,
)

# Build a multimodal message containing both a text block and an image block
msg = ChatMessage(
    role=MessageRole.USER,
    blocks=[
        TextBlock(text="Describe the images as an alternative text"),
        ImageBlock(url="https://letsenhance.io/static/8f5e523ee6b2479e26ecc91b9c25261e/1015f/MainAfter.jpg"),
    ],
)

# Send the message; in 0.12.11 the image block is not picked up
openai_llm = OpenAI(model="gpt-4o", max_new_tokens=300)
response = openai_llm.chat(messages=[msg])
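To narrow down where the image gets dropped, here is a minimal diagnostic sketch; it assumes the msg object from the snippet above and only relies on ChatMessage exposing its list of blocks:

# Check that the ChatMessage itself still holds both block types before the
# message is handed to the LLM. If only TextBlock shows up, the content is
# already collapsed at construction time rather than during the OpenAI
# conversion.
for block in msg.blocks:
    print(type(block).__name__)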
Hey @n1vedhar1! I'm here to help you with any bugs, questions, or contributions you have for LlamaIndex. Let's squash those bugs together!
The issue you're experiencing with the ChatMessage class not recognizing ImageBlock in LlamaIndex 0.12.11 could be related to how the ChatMessage processes its content. In version 0.12.11, the ChatMessage class is designed to handle different block types, such as TextBlock and ImageBlock, by storing them in a list called blocks. When a ChatMessage is created, it should correctly identify and process both TextBlock and ImageBlock instances [1][2].
If the ImageBlock is not being recognized, it might be due to a bug or a change in how the ChatMessage class processes these blocks in version 0.12.11. You might want to verify that the ImageBlock is correctly initialized and added to the blocks list. Additionally, ensure that the OpenAI LLM is configured to handle multimodal content, as it interfaces with ChatMessage through functions that convert the message into a format suitable for OpenAI's API [3][4].
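One way to see what actually reaches OpenAI is to run the message through the conversion layer directly. This is a minimal sketch, assuming the helper to_openai_message_dicts is still exported from llama_index.llms.openai.utils in the installed version (its exact location and signature may differ between releases):

from llama_index.llms.openai.utils import to_openai_message_dicts

# Dump the message in the shape it would be sent to the OpenAI API; a correct
# conversion should produce a content list with both a text part and an
# image_url part.
payload = to_openai_message_dicts([msg])
print(payload)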
If the problem persists, consider checking for any updates or patches to LlamaIndex that might address this issue, or review the implementation details to ensure compatibility with the current version.
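It can also help to confirm exactly which package versions are installed, since the regression is reported against 0.12.11. A minimal sketch using the standard library, assuming the distribution names llama-index-core and llama-index-llms-openai match what is installed in your environment:

from importlib.metadata import version

# Print the installed versions of the relevant LlamaIndex distributions
for dist in ("llama-index-core", "llama-index-llms-openai"):
    print(dist, version(dist))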