Theory on how to alleviate the token max issue. #28
```js
async function getRelevantDomSegments(dom, instructions, llm) {
  while (startIndex < dom.length) {
    // … (truncated in the original comment)
  }
  // const finalInstructions = …
  return result;
}
```
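Filled out, the fragment above might look something like this: a minimal sketch, assuming an `llm` callback of the form `async (prompt) => string` and a fixed chunk size. Both are assumptions for illustration, not the extension's actual API.

```javascript
// Sketch of the chunking idea: split the serialized DOM into fixed-size
// pieces, ask the model which parts are relevant to the user's
// instructions, and concatenate the answers. CHUNK_SIZE and the shape of
// `llm` are assumed here.
async function getRelevantDomSegments(dom, instructions, llm) {
  const CHUNK_SIZE = 3000; // characters per request (assumed)
  let result = '';
  let startIndex = 0;
  while (startIndex < dom.length) {
    const chunk = dom.slice(startIndex, startIndex + CHUNK_SIZE);
    result += await llm(`${instructions}\n\nHTML chunk:\n${chunk}`);
    startIndex += CHUNK_SIZE;
  }
  return result;
}
```

Each chunk costs one model call, so this trades latency for staying under the token limit.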
@xbwtyz cool solution! My suggestion: after simplifying the HTML, create indexes for the elements and keep them. Then ask the model questions about the user's interactions and retrieve only the relevant parts.
It seems that keeping this in HTML format to send to GPT takes up a lot of space (tokens). Wouldn't it be better to generate a flat list of [id, type, text] and send that over? So something like [1833, 'c', 'Read More'], where the types are 'c' for clickable, 'i' for inputable, etc. Edit: I'm doing some testing here https://github.com/tluyben/browser-extension/tree/feature/token-limit-attempt; it seems to work better on the sites I have tested.
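The flat-list idea could be sketched as a DOM walk that emits [id, type, text] triples. This is a hedged sketch, not the code in the linked branch: the tag-to-type mapping and the locally assigned ids are assumptions.

```javascript
// Walk the element tree and emit [id, type, text] triples:
// 'c' for clickable elements, 'i' for inputable ones. Ids are
// assigned here in document order, not taken from the page.
function domToFlatList(root) {
  const entries = [];
  let nextId = 0;
  const walk = (node) => {
    if (node.nodeType !== 1) return; // elements only
    const tag = node.tagName.toLowerCase();
    let type = null;
    if (tag === 'a' || tag === 'button') type = 'c';
    else if (tag === 'input' || tag === 'textarea' || tag === 'select') type = 'i';
    if (type) {
      const text = (node.textContent || node.getAttribute('placeholder') || '').trim();
      entries.push([nextId++, type, text]);
    }
    for (const child of node.children) walk(child);
  };
  walk(root);
  return entries;
}
```

A triple like `[0, 'c', 'Read More']` carries far fewer tokens than the `<a>` element it summarizes, which is presumably where the savings come from.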
I see you are already working on a method that uses the viewport to cut down the token count, but what if we applied the following as well?
This would of course introduce unnecessary room for error and ultimately slow down the extension, but maybe it would help in niche cases?
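For context, the viewport approach mentioned above might be sketched like this: keep only elements whose bounding box intersects the visible area. The `viewport` object and rect shape are assumptions; in a real content script you would use `window.innerWidth`/`window.innerHeight` and `el.getBoundingClientRect()`.

```javascript
// Returns true if a bounding rect intersects the viewport rectangle.
function inViewport(rect, viewport) {
  return rect.bottom > 0 && rect.top < viewport.height &&
         rect.right > 0 && rect.left < viewport.width;
}

// Keep only elements currently (at least partially) on screen.
function filterToViewport(elements, viewport) {
  return elements.filter((el) => inViewport(el.getBoundingClientRect(), viewport));
}
```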