Currently, if my Kafka input tries to send events to Elasticsearch/OpenSearch and the write fails (in my recent scenario the write index alias was missing), the events from that period appear to be "dropped on the floor", yet the consumer group offset still advances past them, so they are never retried after the problem is fixed.

After the alias was fixed, all new events ingested properly. However, the events from the roughly 15-hour window when the alias was broken were never ingested.

I need a way to track those items so they can be re-synced. Changing the consumer offset reset from latest to earliest would replay a large number of duplicate events, so that isn't an option.

The offset commit (to ZooKeeper) should account for the fact that the events weren't successfully indexed.
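For reference, here is a minimal sketch of the at-least-once behavior I'm asking for, written against the modern Kafka consumer API (the report above mentions the ZooKeeper-based commit, but the idea is the same): disable auto-commit and advance the offset only after the Elasticsearch/OpenSearch write succeeds. The `indexEvent()` call and `IndexingException` below are hypothetical stand-ins for the actual bulk-indexing path, not part of any existing plugin.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CommitAfterIndexConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "es-indexer");
        // Disable auto-commit so the offset only advances when we say so.
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                if (records.isEmpty()) {
                    continue;
                }
                try {
                    for (ConsumerRecord<String, String> record : records) {
                        indexEvent(record.value()); // hypothetical ES/OpenSearch write
                    }
                    // Only after every record indexed successfully is it safe
                    // to move the consumer group offset forward.
                    consumer.commitSync();
                } catch (IndexingException e) {
                    // Indexing failed (e.g. missing write alias): leave the
                    // offset uncommitted and seek back to the first record of
                    // the batch so the next poll replays it instead of
                    // dropping it on the floor.
                    records.partitions().forEach(tp ->
                            consumer.seek(tp, records.records(tp).get(0).offset()));
                }
            }
        }
    }

    /** Hypothetical indexing call; throws if the bulk request fails. */
    static void indexEvent(String json) throws IndexingException {
        // ... build and execute a bulk request against the write alias ...
    }

    static class IndexingException extends Exception {}
}
```

With this shape, a 15-hour alias outage just stalls the consumer group; once indexing recovers, the backlog is replayed from the last committed offset instead of being lost.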