Integrates more models #716
JinHai-CN changed the title from "[Feature Request]: Integrates more models" to "Integrates more models" on May 10, 2024.
Request to support Qwen-max. Can I modify the code?
@OXOOOOX Yes, you can modify the code and submit a PR. We will merge it into the code base.
This was referenced Jun 28, 2024
KevinHuSh pushed a commit that referenced this issue on Jul 4, 2024:
### What problem does this PR solve?

feat: Integrates LLM Azure OpenAI #716

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

### Other

It's just the back-end code; the front-end still needs to provide the Azure OpenAI model-addition form.

#### Required parameters

- base_url
- api_key

Co-authored-by: yonghui li <[email protected]>
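As context for the required parameters above, here is a minimal sketch (not RAGFlow's actual implementation) of how `base_url` and `api_key` map onto an Azure OpenAI chat-completion call. Azure OpenAI routes requests to a named deployment under the resource endpoint and authenticates with an `api-key` header; the deployment name and API version used here are illustrative placeholders.

```python
def build_azure_chat_request(base_url, api_key, deployment, messages,
                             api_version="2024-02-01"):
    """Assemble the URL, headers, and JSON body for an Azure OpenAI
    chat-completion request. Pure string/dict construction; no network I/O."""
    url = (f"{base_url.rstrip('/')}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    headers = {
        "api-key": api_key,            # Azure uses api-key, not a Bearer token
        "Content-Type": "application/json",
    }
    body = {"messages": messages}
    return url, headers, body


if __name__ == "__main__":
    # Hypothetical values for illustration only.
    url, headers, body = build_azure_chat_request(
        base_url="https://my-resource.openai.azure.com/",
        api_key="secret",
        deployment="gpt-4o",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(url)
```

The actual PR wraps this behind RAGFlow's common LLM interface, so the front-end form only needs to collect the two required fields.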
Would you please add llama-3.1-70b-versatile and llama-3.1-8b-instant for Groq? Only Llama 3.0 is supported for now. Thank you.
This issue is used to document the LLM, embedding, reranker, etc. models that need to be integrated with RAGFlow.