Error_class=URI::InvalidURIError error="bad URI(is not URI) .." #4646
Comments
@raulgupto Thanks for your report. The placeholder is replaced when retrying as long as the chunk still has its metadata (the tag). For example, with a config like this:

<match test>
  @type http
  endpoint http://localhost:9880/${tag}
  <format>
    @type json
  </format>
  <buffer tag>
    @type file
    path ...
    flush_mode immediate
  </buffer>
</match>
Can you try killing the fluentd process? I can't figure out the exact scenario to reproduce this issue. What I've noticed is that in a normal scenario there are two buffer files for a chunk of messages, but in this case only one is present most of the time.
I have tried.
This should be the cause.
The file buffer (buf_file) stores each chunk's metadata, including its tag, in a separate .meta file alongside the chunk data file.
If the .meta file is missing, the ${tag} placeholder cannot be resolved, so the chunk can never be sent.
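For context, here is a minimal sketch of the relevant buffer section (the path and file names are illustrative, not taken from the report); the comments describe how the file buffer lays chunks out on disk:

<buffer tag>
  @type file
  # Each staged chunk becomes a pair of files under this path, e.g.:
  #   buffer.b<chunk_id>.log        <- the chunk data
  #   buffer.b<chunk_id>.log.meta   <- the chunk metadata, including the tag used to resolve ${tag}
  # If the .log.meta file is lost, ${tag} in the endpoint can no longer be resolved for that chunk.
  path /var/log/fluent/buffer/http
  flush_mode immediate
</buffer>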
I understand that without a location you don't know where to send it. But since retry_forever is true, fluentd keeps retrying this chunk. What I've noticed is that, instead of just waiting for this one chunk to be flushed, the fluentd process does not go down but consumes the whole buffer space and remains stuck forever. There is a way to manually clear that buffer, but that requires manual intervention to delete the buffer in a production environment, which is not sustainable.
Another approach would be to prevent this problem from appearing in the first place. I've seen it appear frequently: around 3-5 unique hosts out of 160 hit this every month. Is there any existing config change that would fix this issue?
To address the root cause, please investigate why some buffer files are disappearing. If this is a bug in Fluentd, we need a way to reproduce the phenomenon in order to fix it.
Some errors are considered non-retriable, and Fluentd gives up retrying on them. The error in this issue is treated as retriable in the current implementation, so Fluentd keeps retrying. The situation might improve if this error could be classified as non-retriable instead.
You can stop using retry_forever and instead bound the retries with retry_max_times or retry_timeout, and add a <secondary> output (out_secondary_file) so that chunks that give up retrying are dumped to local files rather than staying stuck in the buffer.
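A minimal sketch of that suggestion, assuming the endpoint shape from the report; the match pattern, paths, and retry limits are illustrative:

<match app.**>
  @type http
  endpoint https://myexternalendpoint.com/v0/${tag}
  <format>
    @type json
  </format>
  <buffer tag>
    @type file
    # illustrative buffer path
    path /var/log/fluent/buffer/http
    flush_mode interval
    flush_interval 5s
    # bound retries instead of retry_forever so that <secondary> can take over
    retry_type exponential_backoff
    retry_max_times 17
  </buffer>
  # chunks that exhaust their retries are dumped to local files
  # instead of staying stuck in the buffer
  <secondary>
    @type secondary_file
    directory /var/log/fluent/failed
    basename dump.${chunk_id}
  </secondary>
</match>

Note that, as far as I can tell, <secondary> only takes over once the primary output gives up retrying (or hits an unrecoverable error), so with retry_forever true a retriable error like this one would keep retrying and never reach the secondary output.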
Certainly, we should improve how buffers are handled on this point.
I'll definitely add secondary_file. One question:
I don't want to stop after N tries or N duration. I want to keep retrying, assuming my endpoint will come back after recovering from a failure or a release.
@raulgupto Sorry for my late response.
Chunks that cannot resolve placeholders due to missing metafiles fail to be transferred.
Thank you for the secondary_file workaround. It will help to manually recover and send logs in case of failures. It would, however, be great to have a retry mechanism or other solution that can recover buffers when the .meta file is lost.
So, it would be better to avoid the disappearance of buffer files in the first place. Do you have any idea why the buffer file disappears? Is more than one Fluentd instance running against the same buffer?
I've added graceful kill commands to stop the running process, plus around 10 seconds of sleep before restarts.
Sorry for my late response.
I see...
No. We need a way to reproduce the phenomenon.
Hi all. We've been experiencing the same problem on fluentd 1.16.5. Though we haven't been able to reproduce it, we can offer some clues about when and how we started seeing it. While running it under EKS, we applied a VPA component to it, which meant CPU and memory limits were adjusted automatically instead of the hardcoded limits we had before. Our current working hypothesis is that when fluentd hit one of these conditions (OOM), it failed to write the .meta file for one of the received logs and thus left that chunk in an unprocessable state.
Describe the bug
I'm getting this error continuously when using the http output plugin.
I've not been able to find the root cause, but I've noticed it coincides with my external endpoint being down for restarts.
I have buffering enabled, which writes to my local disk, and I do not drop any log chunks, i.e. I have retry_forever set to true. But when the service is back up, this one chunk goes into periodic retries forever because the dynamic ${tag} in the http endpoint is not resolved during retries.
So the whole error looks like this:
Error_class=URI::InvalidURIError error="bad URI(is not URI) \"https://myexternalendpoint.com/v0/${tag}\""
fluentd version: 1.16.5
To Reproduce
Use the http output plugin with an endpoint containing ${tag} and retry_forever set to true.
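A minimal sketch of such a setup (the match pattern, buffer path, and flush settings are illustrative; only the endpoint shape and retry_forever come from the report):

<match app.**>
  @type http
  endpoint https://myexternalendpoint.com/v0/${tag}
  <format>
    @type json
  </format>
  <buffer tag>
    @type file
    # illustrative buffer path
    path /var/log/fluent/buffer/http
    flush_mode interval
    flush_interval 5s
    retry_forever true
  </buffer>
</match>

Per the discussion above, the error only appears for chunks whose .meta file has gone missing, and a reliable way to make that happen was not identified in this thread.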
Expected behavior
The buffer chunk should be sent. It should not complain about an invalid URI.
Your Environment
Your Configuration
Your Error Log
Additional context
No response