adding support for anthropic, azure, cohere, llama2 #26
Conversation
Hi, thanks for this wonderful library. Just one quick question - does it support function calling for other models, or even just OpenAI models? This app relies on JSON responses.
Yes, it supports function calling - exactly like how OpenAI calls it: https://litellm.readthedocs.io/en/latest/input/
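For reference, a minimal sketch of what an OpenAI-style function-calling payload looks like. The function name and schema below are made-up illustrations, not from this PR; the litellm call is shown only in a comment since it needs an API key:

```python
# JSON-schema-style function definition, in the format the OpenAI API uses
# (and which litellm passes through unchanged). Purely illustrative names.
functions = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# With litellm the call mirrors openai-python, e.g. (requires provider keys):
#   from litellm import completion
#   response = completion(model="gpt-3.5-turbo", messages=messages, functions=functions)
```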
Nice! I'll try it later, thanks
One difference I found is in the way to set `timeout`. Could you please also add `litellm` to `requirements.txt`?
Hey @polyrabbit, I updated the requirements.txt. Re: timeout - I thought that was for the completions endpoint; I don't recall seeing a timeout parameter for ChatCompletions. If you could share any relevant documentation, happy to check it out. Let me know if there are any remaining blockers for this PR.
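For context, the dependency addition discussed above would be a single line in `requirements.txt` (the exact version pin, if any, isn't visible in this thread):

```
litellm
```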
I see it here: https://github.com/openai/openai-python/blob/b82a3f7e4c462a8a10fa445193301a3cefef9a4a/openai/api_resources/chat_completion.py#L21-L28

```python
def create(cls, *args, **kwargs):
    """
    Creates a new chat completion for the provided messages and parameters.

    See https://platform.openai.com/docs/api-reference/chat-completions/create
    for a list of valid parameters.
    """
    start = time.time()
    timeout = kwargs.pop("timeout", None)
```

So `timeout` is accepted by `ChatCompletion.create` as well - it's just popped from `kwargs` client-side.
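To illustrate the pattern in the snippet above - a toy sketch, not the actual library code - `timeout` is removed from `**kwargs` before the remaining kwargs would be forwarded as API parameters:

```python
import time

def create_like(*args, **kwargs):
    """Toy sketch (not openai-python itself) of the quoted pattern:
    `timeout` is popped from kwargs before the rest would be sent on."""
    start = time.time()  # kept only to mirror the quoted snippet
    timeout = kwargs.pop("timeout", None)
    return timeout, kwargs  # what would be forwarded to the API

timeout, remaining = create_like(model="gpt-3.5-turbo", timeout=30)
# `timeout` is intercepted client-side; only the other kwargs reach the API.
```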
Got it - will make a fix for it and update the PR
Hi @polyrabbit,
I noticed you're only calling OpenAI. I'm working on litellm (a simple library to standardize LLM API calls - https://github.com/BerriAI/litellm) and was wondering if we could be helpful.
This PR adds support for Claude, Cohere, Azure, and Llama2 (via Replicate) by replacing the ChatOpenAI completion call with a litellm completion call. The code is pretty similar to the OpenAI class, as litellm follows the same pattern as the openai-python SDK.
Would love to know if this helps.
Happy to add additional tests / update documentation if the initial PR looks good to you.
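A hedged sketch of the kind of swap described above - the helper function and model names below are illustrative, not taken from this PR's actual diff:

```python
# Before (OpenAI only), roughly:
#   response = openai.ChatCompletion.create(model=model, messages=messages)
# After (multi-provider via litellm), the call shape stays the same:
#   from litellm import completion
#   response = completion(model=model, messages=messages)
# The model string alone selects the provider, e.g. "gpt-3.5-turbo",
# a Claude model (Anthropic), a Cohere model, or a Replicate Llama2 id.

def build_payload(model: str, user_prompt: str) -> dict:
    # Illustrative helper (not from the PR): the provider-agnostic payload
    # that both the openai-python and litellm call signatures accept.
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
    }

payload = build_payload("claude-2", "Hello!")
```

Because the payload shape is shared, switching providers is a matter of changing the model string rather than rewriting the integration.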