Question: How can I ensure all asynchronous logged events are actually sent before closing the process? #196
I have a question about how to use this library.
How can I ensure all asynchronous logged events are actually sent before closing the process?
I'm having an issue where the process is closed by a SIGTERM and not all messages are actually sent. I'd like to create a handler that catches the SIGTERM signal and blocks until all asynchronous messages have been sent.
I'm using the :splunk_http appender.
I assume it would be a simple SemanticLogger.flush, but I'd like someone to confirm my assumption.
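Here's roughly what I have in mind, as a minimal sketch. The appender URL and token are placeholders, not my real config, and the trap handler only sets a flag so the flush happens on the main thread rather than inside the signal handler:

```ruby
require "semantic_logger"

# Placeholder Splunk HEC settings; substitute real values.
SemanticLogger.add_appender(
  appender: :splunk_http,
  url:      "https://splunk.example.com:8088",
  token:    "SPLUNK-HEC-TOKEN"
)
logger = SemanticLogger["Worker"]

# Signal handlers run in a restricted context, so only record the request here.
shutdown = false
Signal.trap("TERM") { shutdown = true }

until shutdown
  logger.info("processing")
  sleep 1
end

# Block until the async appender thread has written everything queued so far.
SemanticLogger.flush
```

Is an explicit SemanticLogger.flush like this the right/necessary call, or is it redundant?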
Replies: 2 comments
Semantic Logger has a built-in flush that occurs before a process exits: https://github.com/reidmorrison/semantic_logger/blob/master/lib/semantic_logger.rb#L44
We use SIGTERM heavily in our AWS ECS deployment when Docker containers are scaled down.
We previously had issues where other at_exit handlers did not finish quickly enough after a SIGTERM, so AWS ECS hard-killed the process and messages were lost. Once those at_exit handlers were fixed, we get all of our log messages.
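A rough illustration of that ordering problem (the stdout appender and the timings below are stand-ins, not our actual setup): Ruby runs at_exit handlers in reverse order of registration, so any slow handler registered after Semantic Logger is required runs before its built-in flush, and the hard kill can land in between.

```ruby
require "semantic_logger"

SemanticLogger.add_appender(io: $stdout) # stand-in for the real :splunk_http appender
logger = SemanticLogger["Worker"]

# Registered after the gem was required, so this runs *before* Semantic
# Logger's own at_exit flush. If it is slow enough for the orchestrator's
# stop timeout to expire, the process is hard-killed and messages are lost.
at_exit do
  sleep 5 # simulate slow cleanup
end

# Use exit (not exit!) so at_exit handlers, including the flush, still run.
Signal.trap("TERM") { exit }

logger.info("waiting for SIGTERM")
sleep
```

So the fix on our side was making the other at_exit handlers fast, not adding an extra flush call.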
I might have a similar issue to the ECS one, since my problem also happens when a server is shutting down.