Different response behaviors when using OpenAIStream. How do you test your stream output? #330
Unanswered
StephenTangCook asked this question in Help
Replies: 1 comment · 3 replies
-
We do some processing in OpenAIStream that could be related. Thanks for the high-quality report @StephenTangCook, I'll try to investigate early next week.
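For readers wondering what "processing" a wrapper like `OpenAIStream` could mechanically involve: it pipes the upstream response body through a transform that rewrites each chunk. The sketch below is illustrative only, not the library's actual code — the uppercasing transform and the `upperCaseWrapper`/`textStream` helper names are invented for the demo — but it shows the Web Streams pattern (available globally in Node 18) where per-chunk processing happens:

```typescript
// Create a fresh TransformStream that rewrites each text chunk.
// (Stand-in for whatever processing a real stream wrapper performs.)
function upperCaseWrapper(): TransformStream<Uint8Array, Uint8Array> {
  const dec = new TextDecoder();
  const enc = new TextEncoder();
  return new TransformStream({
    transform(chunk, controller) {
      const text = dec.decode(chunk, { stream: true });
      controller.enqueue(enc.encode(text.toUpperCase()));
    },
  });
}

// Build a byte stream from text parts, simulating an upstream response body.
function textStream(...parts: string[]): ReadableStream<Uint8Array> {
  const enc = new TextEncoder();
  return new ReadableStream({
    start(controller) {
      for (const p of parts) controller.enqueue(enc.encode(p));
      controller.close();
    },
  });
}

// Drain a stream and return its full decoded contents.
async function collect(stream: ReadableStream<Uint8Array>): Promise<string> {
  const dec = new TextDecoder();
  const reader = stream.getReader();
  let out = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    out += dec.decode(value, { stream: true });
  }
  return out;
}

collect(textStream("Hello, ", "world").pipeThrough(upperCaseWrapper()))
  .then((s) => console.log(s)); // logs "HELLO, WORLD"
```

If the transform stage buffers, re-batches, or re-encodes chunks, clients that render output chunk-by-chunk can plausibly behave differently than they do against a plain pass-through stream.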
-
(Discord cross-post)
Depending on what client/tool I use to test my API responses, I get troublingly different behaviors when `OpenAIStream` is used to wrap the response. I can't tell if this is a bug, whether the tools are implemented differently, or what the best practice for testing this is. I've tested with a setup nearly identical to the Vercel AI SDK example. My code: https://github.com/StephenTangCook/vercel-ai-sdk-example/blob/b6d34406f7e7c636924c8504684f0dc8cfc24236/app/api/completion/route.ts#L32
Versions:
- "ai": "^2.1.20"
- "next": "13.4.9"
- "openai-edge": "^1.2.0"
- node v18.16.0
1. With `OpenAIStream` and `StreamingTextResponse`: (output screenshot not reproduced)
2. Only `StreamingTextResponse` (no `OpenAIStream`): (output screenshot not reproduced)

As you can see, only #2 consistently returns a stream. What does wrapping the response with `OpenAIStream` do in #1 that makes these tools unhappy?
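On the "how do you test your stream output?" part of the question, one tool-independent approach is to read the `Response` body directly with the Web Streams reader API, rather than relying on how a particular client renders chunks. A minimal sketch, using a locally constructed `Response` in place of an actual `fetch` to the route (the `readChunks` helper name is invented for the demo):

```typescript
// Read a streamed Response chunk-by-chunk, returning each decoded piece,
// the way `curl -N` would show them arriving.
async function readChunks(res: Response): Promise<string[]> {
  const decoder = new TextDecoder();
  const reader = res.body!.getReader();
  const chunks: string[] = [];
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(decoder.decode(value, { stream: true }));
  }
  return chunks;
}

// Self-contained demo: a Response over a local stream stands in for
// `await fetch("/api/completion", { method: "POST", ... })`.
const enc = new TextEncoder();
const body = new ReadableStream<Uint8Array>({
  start(controller) {
    controller.enqueue(enc.encode("first "));
    controller.enqueue(enc.encode("second"));
    controller.close();
  },
});

readChunks(new Response(body)).then((chunks) => {
  console.log(chunks.join("")); // logs "first second"
});
```

Because this reads the raw body, it separates "is the server actually streaming?" from "does this particular client buffer before displaying?" — which seems to be the distinction at the heart of case #1 vs #2.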