Integrates more models #716

Open
20 tasks done
JinHai-CN opened this issue May 10, 2024 · 4 comments

JinHai-CN pinned this issue May 10, 2024
JinHai-CN changed the title from "[Feature Request]: Integrates more models" to "Integrates more models" May 10, 2024

OXOOOOX commented May 29, 2024

Request to support Qwen-max. Can I modify the code?


JinHai-CN commented May 30, 2024

@OXOOOOX
We intend to create an international community, so we encourage using English for communication.

Yes, you can modify the code and submit a PR. We will merge it into the code base.
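
For reference, a minimal sketch of what calling qwen-max could look like through an OpenAI-compatible client. This is an illustration only, not ragflow's integration code; the DashScope compatible-mode endpoint shown here is an assumption:

```python
# Minimal sketch (not ragflow code): calling qwen-max through an
# OpenAI-compatible client. The base_url assumes DashScope's
# compatible-mode endpoint; adjust it for your account/region.
from openai import OpenAI

client = OpenAI(
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
    api_key="<dashscope-api-key>",
)

resp = client.chat.completions.create(
    model="qwen-max",
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```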

KevinHuSh pushed a commit that referenced this issue Jul 4, 2024
### What problem does this PR solve?

feat: Integrates LLM Azure OpenAI #716 

### Type of change

- [x] New Feature (non-breaking change which adds functionality)

### Other
This PR covers only the back-end code; the front end still needs to provide the form for adding an Azure OpenAI model.
   
#### Required parameters

- base_url
- api_key

---------

Co-authored-by: yonghui li <[email protected]>
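
To illustrate how the two required parameters map onto a client, here is a minimal sketch using the openai SDK's Azure client. This is an illustration only, not the ragflow back-end code; the API version and deployment name are assumptions:

```python
# Minimal sketch (not the ragflow implementation): how base_url and api_key
# map onto the openai SDK's AzureOpenAI client.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # base_url
    api_key="<azure-api-key>",                                  # api_key
    api_version="2024-02-01",  # assumed API version
)

resp = client.chat.completions.create(
    model="<deployment-name>",  # the Azure deployment name, not the raw model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```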
yingfeng unpinned this issue Aug 6, 2024
yingfeng pinned this issue Aug 6, 2024
@lionkingc

Would you please add llama-3.1-70b-versatile and llama-3.1-8b-instant for Groq? For now only llama3.0 is supported.

Thank you

@JinHai-CN

#1853
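
For context, a minimal sketch of exercising the two requested Groq model IDs through an OpenAI-compatible client. The Groq endpoint URL is an assumption for illustration; this is not ragflow's Groq integration:

```python
# Minimal sketch (not ragflow code): calling the requested Groq models
# through an OpenAI-compatible client. The base_url assumes Groq's
# OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # assumed endpoint
    api_key="<groq-api-key>",
)

for model in ("llama-3.1-70b-versatile", "llama-3.1-8b-instant"):
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "ping"}],
    )
    print(model, "->", resp.choices[0].message.content)
```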
