Requested tokens (XXXX) exceed context window of 2048 #30
Sorry for the late response. You can try modifying the context window to avoid the problem: link
I am on a Mac M2 and it is working. I am mainly using it to organize a PDF ebook library, but as soon as I put more than 10 PDF files in a folder I get the error "Requested tokens (2162) exceed context window of 2048" (the numbers vary). I have already tried your fix of modifying the context window, but the problem persists. Is it possible to set it so that only the directory structure gets changed but filenames remain the same? Will there be EPUB support in the future?
I tried that too; unfortunately it doesn't work. I think the parameter isn't being passed correctly to the nexasdk. If I set it there directly, then it works.
How do you set it correctly in the nexasdk? What value have you set it to there?
As I said before, you can try to modify the context window via n_ctx to avoid the problem: link
Sorry for the late feedback. I tried this before, and it seems that the parameter is not being passed through to Nexa. However, when I change the value in nexa_inference_text.py on line 108, the change takes effect. The file is located in the virtual environment:
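For anyone trying the same workaround without hand-editing site-packages, here is a hedged sketch of overriding the default from user code instead. The constant name nexa.constants.DEFAULT_TEXT_GEN_PARAMS and its nctx key are assumptions about the SDK's internals; verify them against your installed nexa-sdk version.

```python
# A hedged alternative to hand-editing nexa_inference_text.py: mutate the
# SDK's default generation params before constructing the inference object.
# ASSUMPTION: nexa.constants.DEFAULT_TEXT_GEN_PARAMS exists and holds the
# "nctx": 2048 default -- check your installed nexa-sdk version.
import nexa.constants as nexa_constants

nexa_constants.DEFAULT_TEXT_GEN_PARAMS["nctx"] = 4096  # was 2048
```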
Error Description: ValueError: Requested tokens (3857) exceed context window of 2048. I checked the code, and
Same here. I added
Did you try changing the value directly in nexa_inference_text.py?
What is the max value that n_ctx can be set to?
After having a look at the codebase of both the Local-File-Organizer and the Nexa SDK, I have tested it: the parameter is properly passed and set.
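For anyone who would rather not patch site-packages at all, here is a minimal sketch of passing the value at construction time. It assumes the NexaTextInference constructor accepts and forwards an nctx keyword, and the model_path value is a placeholder, not the project's actual model string.

```python
from nexa.gguf import NexaTextInference

# ASSUMPTION: the constructor accepts an nctx keyword and forwards it
# to the underlying GGUF/llama context; verify against your SDK version.
text_inference = NexaTextInference(
    model_path="llama3.2",  # placeholder model identifier
    nctx=4096,              # context window large enough for your inputs
    max_new_tokens=256,
    temperature=0.5,
)

response = text_inference.create_completion("Summarize: ...")
```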
That depends on the model you refer to. For the text model used here,
Take into account that for large PDFs, such as books, it might take a very long time if you choose option '1. By Content'. Maybe the tool could be optimised in that respect, so that it does not need to digest the whole document; that would speed things up. A sketch of one such approach follows.
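One way to address both the token overflow and the long runtimes would be to cap the extracted text before it ever reaches the model, so the prompt always fits the context window. A rough sketch; the helper name and the characters-per-token ratio are illustrative assumptions, not project code.

```python
def truncate_for_context(text: str, n_ctx: int = 2048,
                         reserved_output_tokens: int = 256,
                         chars_per_token: float = 3.5) -> str:
    """Crudely cap input text so prompt plus completion fit in n_ctx.

    chars_per_token is a rough heuristic; real tokenizers vary, so the
    budget is deliberately conservative.
    """
    budget_tokens = n_ctx - reserved_output_tokens
    max_chars = int(budget_tokens * chars_per_token)
    return text[:max_chars]

# Usage: trim the extracted PDF text before building the prompt, e.g.
# prompt = f"Summarize:\n{truncate_for_context(pdf_text)}"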
nctx works and got things going. But then, on hundreds of markdown files, most not exceeding 50 lines, each md file takes about 5 to 6 minutes to process. It seems the model is queried 10 to 15 times before it decides to assign a category. It takes 66 hours to finish in my case, so some optimization would help here.
I've reviewed the code, and it appears to process only the first three pages of any PDF. The bottleneck is that the CPU is handling that processing. It would be more efficient to convert those first three pages into images and use the GPU for text extraction, leveraging tools like EasyOCR.
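A minimal sketch of that suggestion using pdf2image and EasyOCR. Both libraries exist with the calls shown (pdf2image additionally requires poppler to be installed), but wiring them into the organizer this way is a proposal, not something the project currently does.

```python
import numpy as np
from pdf2image import convert_from_path  # requires poppler installed
import easyocr

def extract_first_pages_gpu(pdf_path: str, pages: int = 3) -> str:
    """Render the first few PDF pages to images and OCR them on the GPU."""
    images = convert_from_path(pdf_path, first_page=1, last_page=pages)
    reader = easyocr.Reader(["en"], gpu=True)  # falls back to CPU if no GPU
    chunks = []
    for img in images:
        # detail=0 returns plain strings instead of (bbox, text, conf) tuples
        chunks.extend(reader.readtext(np.array(img), detail=0))
    return "\n".join(chunks)
```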
There are two existing (closed) issues related to this (#18 and #7), but neither offers a solution. I've tweaked the n_ctx value, but the error persists: 2048 tokens still aren't enough. So, is this parameter ineffective, or am I missing something?
Traceback (most recent call last):
  File "c:\coding\Local-File-Organizer-gpu\Local-File-Organizer\main.py", line 339, in <module>
    main()
  File "c:\coding\Local-File-Organizer-gpu\Local-File-Organizer\main.py", line 254, in main
    data_texts = process_text_files(text_tuples, text_inference, silent=silent_mode, log_file=log_file)
  File "c:\coding\Local-File-Organizer-gpu\Local-File-Organizer\text_data_processing.py", line 60, in process_text_files
    data = process_single_text_file(args, text_inference, silent=silent, log_file=log_file)
  File "c:\coding\Local-File-Organizer-gpu\Local-File-Organizer\text_data_processing.py", line 37, in process_single_text_file
    foldername, filename, description = generate_text_metadata(text, file_path, progress, task_id, text_inference)
  File "c:\coding\Local-File-Organizer-gpu\Local-File-Organizer\text_data_processing.py", line 71, in generate_text_metadata
    description = summarize_text_content(input_text, text_inference)
  File "c:\coding\Local-File-Organizer-gpu\Local-File-Organizer\text_data_processing.py", line 21, in summarize_text_content
    response = text_inference.create_completion(prompt)
  File "C:\Users\pflic\miniconda3\envs\local_file_organizer-gpu\Lib\site-packages\nexa\gguf\nexa_inference_text.py", line 234, in create_completion
    return self.model.create_completion(prompt=prompt, temperature=temperature, max_tokens=max_tokens, top_k=top_k, top_p=top_p, echo=echo, stream=stream, stop=stop, logprobs=logprobs, top_logprobs=top_logprobs)
  File "C:\Users\pflic\miniconda3\envs\local_file_organizer-gpu\Lib\site-packages\nexa\gguf\llama\llama.py", line 1748, in create_completion
    completion: Completion = next(completion_or_chunks)  # type: ignore
  File "C:\Users\pflic\miniconda3\envs\local_file_organizer-gpu\Lib\site-packages\nexa\gguf\llama\llama.py", line 1191, in _create_completion
    raise ValueError(
ValueError: Requested tokens (2302) exceed context window of 2048
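A quick way to tell whether the setting ever reaches the model would be to print the loaded context size at runtime. A hedged sketch: it assumes text_inference.model is the llama-cpp-style Llama object that the traceback's llama.py points at, which exposes an n_ctx() method, and the model_path string is a placeholder.

```python
from nexa.gguf import NexaTextInference

# Placeholder model identifier; use whatever Local-File-Organizer loads.
text_inference = NexaTextInference(model_path="llama3.2", nctx=4096)

# ASSUMPTION: .model is the underlying llama.py Llama instance from the
# traceback, which exposes n_ctx() for the effective context size.
print("Effective context window:", text_inference.model.n_ctx())
# If this prints 2048 despite nctx=4096, the keyword is not being
# forwarded, and editing the default inside nexa_inference_text.py
# (as described above) remains the workaround.
```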