GPT 4o support? #2918
Closed
Unlok-Simon started this conversation in General

Any thoughts on when we will be able to select GPT 4o as the model for the OpenAI API? Keen to be able to use it from Jan for performance and cost reasons.

Replies: 2 comments
-
It looks like the configuration already exists: https://github.com/janhq/jan/blob/dev/extensions/inference-openai-extension/resources/models.json
{
  "sources": [
    {
      "url": "https://openai.com"
    }
  ],
  "id": "gpt-4o",
  "object": "model",
  "name": "OpenAI GPT 4o",
  "version": "1.1",
  "description": "OpenAI GPT 4o is a new flagship model with fast speed and high quality",
  "format": "api",
  "settings": {},
  "parameters": {
    "max_tokens": 4096,
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": true,
    "stop": [],
    "frequency_penalty": 0,
    "presence_penalty": 0
  },
  "metadata": {
    "author": "OpenAI",
    "tags": [
      "General"
    ]
  },
  "engine": "openai"
}
(taken from the models.json linked above)
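
For context on what that entry controls: the "parameters" block mirrors the standard OpenAI chat-completions request fields. Below is a minimal sketch of the equivalent direct API call, assuming the `openai` Python package (v1+) and an `OPENAI_API_KEY` environment variable; the prompt text is just a placeholder:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Parameters mirror the "parameters" block in the models.json entry above;
# "stop": [] is omitted because an empty stop list is the default.
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=4096,
    temperature=0.7,
    top_p=0.95,
    frequency_penalty=0,
    presence_penalty=0,
    stream=True,
)

# With stream=True the response arrives as chunks; print tokens as they come in.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```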
-
Saw this has been added in the latest release 0.4.13. Thanks!