Node re-run and iteration judgment termination #8489
Labels
💪 enhancement
New feature or request
Self Checks
1. Is this request related to a challenge you're experiencing? Tell me about your story.
The LLM always generates the specified content in a single pass, but in a long workflow a problem may occur at some intermediate step, or we may need to adjust the prompt and regenerate based on an intermediate result. The iteration feature is indeed useful for long workflows, but it lacks a way to act on the output after judging a condition on each pass.

I found a similar feature in Coze, where a question-and-answer step can be inserted inside the loop to interact with the user and to terminate the loop separately. One gap remains, though: it would help if new auxiliary prompts could be added when the loop runs again, i.e. the user's suggestions from the Q&A step, so that the LLM can regenerate accordingly.
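To make the requested behavior concrete, here is a minimal sketch of such an iteration loop: re-run the LLM step, terminate when a judged condition passes or the user declines to continue, and otherwise append the user's feedback as an auxiliary prompt for the next pass. All names (`iterate_with_feedback`, `call_llm`, `judge`, `ask_user`) are hypothetical placeholders, not APIs of this project or of Coze.

```python
from typing import Callable, Optional

def iterate_with_feedback(
    base_prompt: str,
    call_llm: Callable[[str], str],   # placeholder: the LLM step inside the loop
    judge: Callable[[str], bool],     # placeholder: the termination condition
    ask_user: Callable[[str], Optional[str]],  # placeholder: Q&A with the user;
                                               # None means "stop the loop"
    max_rounds: int = 3,
) -> str:
    """Re-run the LLM step until the judged condition passes, the user
    terminates the loop, or max_rounds is reached. User feedback from the
    Q&A step is appended as an auxiliary prompt for the next pass."""
    prompt = base_prompt
    output = call_llm(prompt)
    for _ in range(max_rounds - 1):
        if judge(output):
            break  # condition met: terminate the loop
        feedback = ask_user(output)  # pause the loop and interact with the user
        if feedback is None:
            break  # user chose to terminate instead of re-running
        # Add the user's suggestion as an auxiliary prompt and regenerate.
        prompt = f"{base_prompt}\n\nAdditional guidance: {feedback}"
        output = call_llm(prompt)
    return output
```

A usage example: if the first draft fails the condition, the user's feedback is folded into the prompt and the next pass regenerates with it.

```python
def llm(p: str) -> str:
    return f"draft({p})"

feedbacks = iter(["make it good"])
result = iterate_with_feedback(
    "write", llm,
    judge=lambda o: "good" in o,
    ask_user=lambda o: next(feedbacks, None),
)
# The second pass sees "Additional guidance: make it good" and passes the judge.
```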
2. Additional context or comments
I saw this requested as early as a year ago, but the feature does not seem to have received any attention since.
3. Can you help us with this feature?