
[Question]: Settings.context_window AttributeError: 'NoneType' object has no attribute 'context_window'. #17564

ArshiaSangwan opened this issue Jan 20, 2025 · 4 comments
Labels
question Further information is requested

Comments

@ArshiaSangwan

Question Validation

  • I have searched both the documentation and discord for an answer.

Question

I am getting an AttributeError: 'NoneType' object has no attribute 'context_window' when setting Settings.context_window = 3900 with v0.12.11 while implementing GraphRAG with LlamaIndex.

I am importing the Settings module. When I instead try the previously suggested ServiceContext workaround, I get an error stating that it has been deprecated in the latest version of llama-index.
Any help would be appreciated.

@ArshiaSangwan added the question label Jan 20, 2025

dosubot bot commented Jan 20, 2025

Hey @ArshiaSangwan! I'm here to help you with any bugs, questions, or contributions you have. Let's tackle this issue together!

The error occurs because the Settings object is not properly initialized or is None when you try to set Settings.context_window = 3900. This suggests that the Settings object might not be correctly instantiated or imported in your code.

The ServiceContext is deprecated as of version 0.10.0, and Llama-Index has moved to using a global Settings object. To resolve this, ensure you are using the Settings object correctly:

from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core import Settings

Settings.llm = OpenAI(model="gpt-3.5-turbo")
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")
Settings.node_parser = SentenceSplitter(chunk_size=512, chunk_overlap=20)
Settings.num_output = 512
Settings.context_window = 3900

Make sure the Settings object is properly imported and used as shown above to avoid the NoneType error [1].


@logan-markewich
Collaborator

Works fine for me? Can you give actual code to reproduce?

For example
https://colab.research.google.com/drive/1jxxtstn5KY-UDHeBtRntwokGT4J4rmKq?usp=sharing

@ArshiaSangwan
Author

ArshiaSangwan commented Jan 21, 2025

Thank you for the prompt response @logan-markewich !
Below is the code I am running on my local machine (llama-index v0.12.11) when I encounter the error. Oddly, when I ran the same code on Colab with llama-index v0.12.12, it worked just fine.

#importing the necessary libraries
from dotenv import load_dotenv
import os
from llama_index.core import Settings
from langchain_openai import ChatOpenAI
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.vector_stores.neo4jvector import Neo4jVectorStore
from llama_index.core import VectorStoreIndex
from llama_index.core.indices.query.query_transform import HyDEQueryTransform
from llama_index.core.query_engine import TransformQueryEngine
from llama_index.core.node_parser import SentenceSplitter

# Load environment variables

load_dotenv()
NEO4J_URI=os.environ['NEO4J_URI']
NEO4J_USERNAME=os.environ['NEO4J_USERNAME']
NEO4J_PASSWORD=os.environ['NEO4J_PASSWORD']
NEO4J_DATABASE=os.environ['NEO4J_DATABASE']
OPENAI_API_KEY = os.environ['OPENAI_API_KEY']
OPENAI_API_BASE = os.environ['OPENAI_API_BASE']

llm_model = ChatOpenAI(model='o1-mini', base_url=os.environ['OPENAI_API_BASE'], api_key=os.environ['OPENAI_API_KEY'])
embed_model = OpenAIEmbedding(model="text-embedding-3-large", dimensions=3072, api_base=os.environ['OPENAI_API_BASE'], api_key=os.environ['OPENAI_API_KEY'], disallowed_special=())

Settings._llm = llm_model
Settings._embed_model = embed_model
Settings.node_parser = SentenceSplitter(chunk_size=512, chunk_overlap=20)

# Debug

print("BEFORE")
Settings.context_window = 3900

print("SUCCESS")

@AxelSjoberg

I have the same issue. For some reason setting the context_window before the LLM resolves the issue, which is weird.

Settings._llm = llm_model
Settings._embed_model = embed_model
Settings.node_parser = SentenceSplitter(chunk_size=512, chunk_overlap=20)
Settings.num_output = 512
Settings.context_window = 3900

which raises:


AttributeError Traceback (most recent call last)
Cell In[4], line 4
2 Settings._embed_model = embed_model
3 Settings.node_parser = SentenceSplitter(chunk_size=512, chunk_overlap=20)
----> 4 Settings.num_output = 512
5 Settings.context_window = 3900

File ~/miniforge3/envs/kg_rag_gr_120/lib/python3.12/site-packages/llama_index/core/settings.py:220, in _Settings.num_output(self, num_output)
217 @num_output.setter
218 def num_output(self, num_output: int) -> None:
219 """Set the number of outputs."""
--> 220 self.prompt_helper.num_output = num_output

File ~/miniforge3/envs/kg_rag_gr_120/lib/python3.12/site-packages/llama_index/core/settings.py:201, in _Settings.prompt_helper(self)
199 """Get the prompt helper."""
200 if self._llm is not None and self._prompt_helper is None:
--> 201 self._prompt_helper = PromptHelper.from_llm_metadata(self._llm.metadata)
202 elif self._prompt_helper is None:
203 self._prompt_helper = PromptHelper()

File ~/miniforge3/envs/kg_rag_gr_120/lib/python3.12/site-packages/llama_index/core/indices/prompt_helper.py:120, in PromptHelper.from_llm_metadata(cls, llm_metadata, chunk_overlap_ratio, chunk_size_limit, tokenizer, separator)
106 @classmethod
107 def from_llm_metadata(
108 cls,
(...)
113 separator: str = " ",
114 ) -> "PromptHelper":
115 """Create from llm predictor.
116
117 This will autofill values like context_window and num_output.
118
119 """
--> 120 context_window = llm_metadata.context_window
122 if llm_metadata.num_output == -1:
123 num_output = DEFAULT_NUM_OUTPUTS

AttributeError: 'NoneType' object has no attribute 'context_window'


Whereas the following:

Settings.num_output = 512
Settings.context_window = 3900
Settings._llm = llm_model
Settings._embed_model = embed_model
Settings.node_parser = SentenceSplitter(chunk_size=512, chunk_overlap=20)

works fine. Something seems a bit off here?
