The Kafka producer in the MQ service writes the event messages with the send method. Sending is asynchronous, however, and unless the blocking flush method is called, messages may be lost if the process exits. See the docs for an example.
When everything goes right this isn't a problem, because the event loop keeps running and waiting for new messages, so there is always time for a sent message to actually be produced.
If something goes wrong, though – an error occurs, an exception is raised, or the consumer times out – the event loop ends and so does the whole process, which kills the producer thread without waiting for it. This can be prevented by calling the KafkaProducer.flush method after the loop ends, and possibly also when an exception occurs.
This needs to be implemented in the KafkaEventProducer wrapper.
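For concreteness, a minimal sketch of what this could look like, assuming kafka-python's KafkaProducer. The shape of the KafkaEventProducer wrapper, the topic name, and the run_event_loop helper are illustrative assumptions, not the actual service code:

```python
from kafka import KafkaProducer


class KafkaEventProducer:
    """Thin wrapper that makes sure buffered messages are delivered on shutdown."""

    def __init__(self, bootstrap_servers="localhost:9092", topic="events"):
        self._producer = KafkaProducer(bootstrap_servers=bootstrap_servers)
        self._topic = topic

    def send(self, message: bytes):
        # send() only enqueues the record in an internal buffer and returns a
        # future; a background I/O thread delivers it later.
        return self._producer.send(self._topic, message)

    def flush(self):
        # Block until every buffered record has been delivered (or failed),
        # so nothing is lost when the process exits.
        self._producer.flush()


def run_event_loop(events, producer: KafkaEventProducer):
    # 'events' stands in for whatever the consumer loop yields.
    try:
        for event in events:
            producer.send(event)
    finally:
        # Runs both on a normal exit and when an exception or consumer
        # timeout ends the loop, so the producer thread gets to drain.
        producer.flush()
```

The key point is the `finally` block: it covers both the normal end of the loop and the error paths, so a separate flush call in each `except` branch isn't strictly needed.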
Thinking out loud: