Labels: bug, triage
Description
Do you need to file an issue?
- I have searched the existing issues and this bug is not already filed.
- My model is hosted on OpenAI or Azure. If not, please look at the "model providers" issue and don't file a new one here.
- I believe this is a legitimate bug, not just a question. If this is a question, please use the Discussions area.
Describe the bug
I followed the tutorial in get_started.md to index the book, but it didn't generate the community reports. I found these exceptions in the log:
```
^^^^^^^^^^^^^^^^^^^
  File "/Volumes/workplace/community/graphrag/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 859, in acompletion
    raise OpenAIError(
litellm.llms.openai.common_utils.OpenAIError: Error code: 400 - {'error': {'message': "Invalid parameter: 'response_format' of type 'json_schema' is not supported with this model. Learn more about supported models at the Structured Outputs guide: https://platform.openai.com/docs/guides/structured-outputs", 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
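For context, the 400 comes from Structured Outputs: the community-report step sends a `response_format` of type `json_schema`, which only certain models (gpt-4o and newer) accept; older models reject the request outright. A minimal sketch that reproduces the check outside GraphRAG, using a hypothetical schema rather than GraphRAG's actual community-report schema:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A 'json_schema' response_format (Structured Outputs) is only accepted by
# newer models such as gpt-4o; swapping in an unsupported model here
# reproduces the 400 error shown in the log above.
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this community of entities."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "community_report",  # hypothetical schema, for illustration only
            "schema": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "summary": {"type": "string"},
                },
                "required": ["title", "summary"],
                "additionalProperties": False,
            },
            "strict": True,
        },
    },
)
print(resp.choices[0].message.content)
```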
After switching the model to gpt-4o in settings.yaml, it ran as expected. Can we change the default model in this repo?
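For reference, this is roughly the change that worked, sketched against the `models` section layout of recent GraphRAG settings.yaml files; key names and defaults may differ by version:

```yaml
models:
  default_chat_model:
    type: openai_chat            # or azure_openai_chat for Azure deployments
    api_key: ${GRAPHRAG_API_KEY} # assumes the key is supplied via a .env file
    model: gpt-4o                # a model that supports json_schema structured outputs
```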
Steps to reproduce
No response
Expected Behavior
No response
GraphRAG Config Used
```yaml
# Paste your config here
```
Logs and screenshots
No response
Additional Information
- GraphRAG Version:
- Operating System:
- Python Version:
- Related Issues: