Today, receivers are agnostic to the data encoding format. For example, the HTTP receiver can decode JSON, influx, protocol buffers, M&R and so on, and all the other receivers can do the same, since decoding is coupled with the handler.
However, the same cannot be said for senders. Most of the time this isn't a big deal, since format-specific senders typically only require small tweaks - the influx sender might need specific HTTP headers to send an authentication token, for example.
However, it's becoming increasingly obvious that separating encoding from transport would be beneficial. As such, a new module type is required: encoders, the inverse of parsers (incidentally, maybe we should rename parsers to "decoders"?).
I don't think every sender should be required to use an encoder - it makes no sense to implement an SQL sender on top of a generic encoder, for example, and by and large, sender modules should do what you expect them to do by default, with no encoder set. But this change should at least reduce code duplication in the HTTP senders (plural - http, influx and hec all use similar code).
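A minimal sketch of what such an encoder module could look like (the `Container`/`Metric` shapes here are simplified placeholders, not skogul's actual types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for skogul's container/metric types; the real
// definitions carry more fields (timestamps etc.).
type Metric struct {
	Metadata map[string]interface{} `json:"metadata,omitempty"`
	Data     map[string]interface{} `json:"data"`
}

type Container struct {
	Metrics []*Metric `json:"metrics"`
}

// Encoder is the inverse of a parser: it serializes a container so a
// transport-only sender (HTTP, UDP, ...) can ship the bytes as-is.
type Encoder interface {
	Encode(c *Container) ([]byte, error)
}

// JSONEncoder is a trivial Encoder built on encoding/json.
type JSONEncoder struct{}

func (JSONEncoder) Encode(c *Container) ([]byte, error) {
	return json.Marshal(c)
}

func main() {
	c := &Container{Metrics: []*Metric{{Data: map[string]interface{}{"x": 1}}}}
	var e Encoder = JSONEncoder{}
	b, err := e.Encode(c)
	fmt.Println(string(b), err)
}
```

With this split, a sender that takes an `Encoder` only moves bytes, and the same transport code serves JSON, GOB or anything else.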
This also ties in nicely with #227: being able to send GOB data over HTTPS, but also over UDP, could make sense - e.g. using UDP to duplicate data to test instances, which would isolate the sending skogul from issues with the receiving skogul.
Great care needs to be taken to avoid further overcomplicating the configuration though.
OK, I ran into a conundrum today. The Kafka sender has implicit support for batching, which means the metrics[] logic is superfluous. I ended up adding an EncodeMetric method to the Encoder interface, but I'm still not convinced this is a universal thing, and it means there's a mismatch between encoder and parser.
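Roughly, the asymmetry looks like this - a parser has a single entry point, while the encoder now has two (types simplified for illustration, not skogul's actual API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for skogul's types.
type Metric struct {
	Data map[string]interface{} `json:"data"`
}

type Container struct {
	Metrics []*Metric `json:"metrics"`
}

// A parser has exactly one entry point: bytes in, container out.
type Parser interface {
	Parse(data []byte) (*Container, error)
}

// The encoder now has two entry points, and it is the sender, not the
// handler, that picks which one is used.
type Encoder interface {
	Encode(c *Container) ([]byte, error)    // one payload per container
	EncodeMetric(m *Metric) ([]byte, error) // one payload per metric
}

type JSONEncoder struct{}

func (JSONEncoder) Encode(c *Container) ([]byte, error)    { return json.Marshal(c) }
func (JSONEncoder) EncodeMetric(m *Metric) ([]byte, error) { return json.Marshal(m) }

func main() {
	var e Encoder = JSONEncoder{}
	m := &Metric{Data: map[string]interface{}{"x": 1}}
	whole, _ := e.Encode(&Container{Metrics: []*Metric{m}})
	single, _ := e.EncodeMetric(m)
	fmt.Println(string(whole))  // the shape the JSON parser expects
	fmt.Println(string(single)) // the shape a per-message consumer sees
}
```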
The approach I used "flattens" the container and transmits each metric as an independent message, which presumably makes things easier for other consumers. But it also forced me to add a "JSONMetric"/"SkogulMetric" parser to match it. Now we have a situation where the JSON encoder can be used to send data with the HTTP sender and have it parsed by the JSON parser through an HTTP receiver, but you CAN'T use the JSON parser to parse messages sent over Kafka with the same encoder, because it's up to the sender which method to use.
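The round-trip failure can be shown with a small sketch (types and function names are illustrative, not skogul's actual API): a flattened payload unmarshals cleanly, but lacks the metrics[] envelope the container-level parser insists on.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type Metric struct {
	Data map[string]interface{} `json:"data"`
}

type Container struct {
	Metrics []*Metric `json:"metrics"`
}

// encodeMetric flattens: one independent payload per metric, the way a
// per-message transport like Kafka would emit them.
func encodeMetric(m *Metric) ([]byte, error) {
	return json.Marshal(m)
}

// parseContainer mimics the container-level JSON parser: it requires
// the full {"metrics": [...]} envelope.
func parseContainer(b []byte) (*Container, error) {
	var c Container
	if err := json.Unmarshal(b, &c); err != nil {
		return nil, err
	}
	if c.Metrics == nil {
		return nil, fmt.Errorf("no metrics[] envelope; payload looks like a bare metric")
	}
	return &c, nil
}

func main() {
	b, _ := encodeMetric(&Metric{Data: map[string]interface{}{"x": 1}})
	// Valid JSON, but not a container: the parser rejects it, which is
	// exactly the encoder/parser mismatch described above.
	if _, err := parseContainer(b); err != nil {
		fmt.Println("round-trip failed:", err)
	}
}
```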
I don't think there's an obvious solution here, and currently I'm leaning towards just seeing how this plays out as we extend the encoder concept.