Potential bug in ai.jsx #420
Comments
Which model are you using by default?
I followed the Quickstart verbatim, meaning that the model is the GitHub one. The only "personal" choice I made was to select several fields from GitHub. I do not remember which setting I changed at GitHub to allow this sample to access GitHub.
I tried to run that same instance; with the first prompt I got back bullet points, and the one I asked next resulted in "Got response from".
Note that I am harping on this issue because it is possible that I found a bug 😄
I think we have partially addressed some of the confusion we were creating in the Quickstart with this PR, which spells out the various types of docs collections and explains that there is a public collection for Git/GitHub. Have you still been seeing the error WRT max tokens?
No, I did not try anything else - I will try more tomorrow. When will this fix be "live"? Is it already? (I always have such questions because there is no information about "fixes in the current code and docs".)
I deployed (to the cloud) the sample https://docs.ai-jsx.com/sidekicks/sidekicks-quickstart and asked the question "show me what can you help with". This resulted in the following error from `lookUpGitHubKnowledgeBase`:
This model response had an error: "Error during generation: AI.JSX(1032): OpenAI API Error: 400 This model's maximum context length is 4097 tokens. However, your messages resulted in 6069 tokens (5970 in the messages, 99 in the functions). Please reduce the length of the messages or functions." It's unclear whether this was caused by a bug in AI.JSX, in your code, or is an expected runtime error.
I have no doubt that I exceeded my token limit - I am reporting it just to be safe, so that this possible bug is on record.
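For what it's worth, here is a minimal sketch of how one could check whether retrieved knowledge-base text would push a request over the 4097-token window before sending it. This assumes the `tiktoken` npm package and the `gpt-3.5-turbo` encoding; the `truncateForContext` helper is hypothetical and not part of AI.JSX.

```ts
import { encoding_for_model } from "tiktoken";

// Rough guard against the 400 "maximum context length is 4097 tokens" error:
// count the tokens in the prompt plus the retrieved text, and truncate the
// retrieved text if the total would not leave room for the completion.
const MAX_CONTEXT_TOKENS = 4097;
const RESERVED_FOR_COMPLETION = 512;

function truncateForContext(promptText: string, retrievedText: string): string {
  const enc = encoding_for_model("gpt-3.5-turbo");
  try {
    const promptTokens = enc.encode(promptText).length;
    const budget = MAX_CONTEXT_TOKENS - RESERVED_FOR_COMPLETION - promptTokens;
    const retrievedTokens = enc.encode(retrievedText);
    if (retrievedTokens.length <= budget) {
      return retrievedText;
    }
    // Keep only as many tokens of the retrieved text as the budget allows.
    const kept = retrievedTokens.slice(0, Math.max(budget, 0));
    return new TextDecoder().decode(enc.decode(kept));
  } finally {
    enc.free();
  }
}
```

Token counting like this is approximate (it ignores the per-message overhead and the 99 tokens the functions add), but it would at least surface the overflow before the API call instead of after it.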
Added later: I reran with a different (but similar) question, "show me what can you help with", and this time everything went fine. Perhaps this LLM is too smart for me: running it again with the first question, which had previously resulted in "Error - This model's maximum context length is 4097 tokens", now responded fine.
Note: I am fascinated by the difference between the answers to the first and second question. Debugging this seems like a nightmare 😄