
Fluent Bit filters are not working as expected #9855

Open
pranavsankar07 opened this issue Jan 21, 2025 · 0 comments
Bug Report

Describe the bug
We have a scenario where security-related logs are sent to S3 using Fluent Bit. This service is shared by two of our teams (Network & Security). The Network team had already set up two INPUT plugins, multiple FILTERs, and two OUTPUTs for sending their respective events to S3. After I added the Security team's filters along with their own INPUT and OUTPUT plugins, objects do appear in S3, but not with the desired results: it is dumping all logs instead of only the matching ones.

[INPUT]
    Name              tail
    Tag               soc
    Path              /var/log/**/*.log
    Skip_Long_Lines   On
    Refresh_Interval  10
    Inotify_Watcher   false
    multiline.parser  cri
    storage.type      filesystem
    Buffer_Chunk_Size 64KB
    Buffer_Max_Size   128KB

[FILTER]
    Name      modify
    Match     soc
    Condition Key_value_matches log (securityContext:.+privileged:\strue|securityContext:.+allowPrivilegeEscalation:\strue)
    Set       keep true

[FILTER]
    Name      modify
    Match     soc
    Condition Key_value_matches log (securityContext:.+runAsNonRoot:\sfalse|containerSecurityContext:.+runAsNonRoot:\sfalse)
    Set       keep true

[FILTER]
    Name      modify
    Match     soc
    Condition Key_value_matches log authorization.k8s.io.+selfsubjectaccessreviews
    Set       keep true

[FILTER]
    Name      modify
    Match     soc
    Condition Key_value_matches log authorization.k8s.io.+selfsubjectrulesreviews
    Set       keep true

[OUTPUT]
    Name                   s3
    Match                  soc
    bucket                 bucketname
    region                 awsregion
    json_date_key          date
    json_date_format       iso8601
    total_file_size        100M
    upload_chunk_size      6M
    upload_timeout         10m
    store_dir              /var/log/s3/AWS/accountid/awsregion
    store_dir_limit_size   256M
    s3_key_format          /var/log/s3/AWS/accountid/awsregion/$TAG/%Y/%m/%d/%H/${HOSTNAME}$UUID.log
    auto_retry_requests    true
    preserve_data_ordering true
    retry_limit            1
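For quick local sanity-checking, the four `Condition Key_value_matches` regexes above can be exercised against sample log lines. This is a minimal sketch in Python; the sample lines are hypothetical and not taken from the actual cluster:

```python
import re

# Regex conditions copied verbatim from the [FILTER] blocks above.
conditions = [
    r"(securityContext:.+privileged:\strue|securityContext:.+allowPrivilegeEscalation:\strue)",
    r"(securityContext:.+runAsNonRoot:\sfalse|containerSecurityContext:.+runAsNonRoot:\sfalse)",
    r"authorization.k8s.io.+selfsubjectaccessreviews",
    r"authorization.k8s.io.+selfsubjectrulesreviews",
]

# Hypothetical sample log lines, for illustration only.
samples = [
    "pod spec: securityContext: {privileged: true}",
    "container spec: securityContext: {runAsNonRoot: false}",
    "audit: apiGroup=authorization.k8s.io resource=selfsubjectaccessreviews",
    "unrelated application log line",
]

def keep(line: str) -> bool:
    """True if any filter condition matches, i.e. 'Set keep true' would fire."""
    return any(re.search(c, line) for c in conditions)

for line in samples:
    print(f"{keep(line)}  {line}")
```

The first three samples match a condition and the last does not, which at least confirms the regexes themselves behave as intended on matching lines.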

Listed above are my INPUT, FILTER & OUTPUT plugins. The change was committed and applied via Argo CD and passed without any errors, but tag-based logs are not being sent to S3; instead, all /var/log messages are sent.

Can anyone help me understand what issue I'm facing here?
