The Vulcan serializer and deserializer show some unintended behaviour, as demonstrated by the tests in #608.
Schema auto-registration
The schema that the `KafkaAvroSerializer` registers isn't always identical to the one used for encoding. This is because the writer schema isn't passed explicitly to the serializer, so it has to be inferred from the encoded value. That makes no difference for subtypes of `GenericContainer`, as they carry a reference to the schema they were encoded with, but it means logical type information is lost in the case of top-level primitive types. Consequently, these will always fail to decode, as the Vulcan codec requires the writer schema's logical type to match the reader schema's. (We have a test showing that `Codec.instant` will fail to decode a value written with `Codec.long`, but it will also fail to decode a top-level value written with `Codec.instant`.)
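To make the mismatch concrete, here is what the two writer schemas look like in Avro's JSON notation (assuming `Codec.instant` maps to Avro's `timestamp-millis` logical type on `long`, as the Avro specification defines it). The schema the codec actually encodes with is:

```json
{ "type": "long", "logicalType": "timestamp-millis" }
```

while the schema inferred from a bare encoded `Long`, and hence the one that gets auto-registered, is just:

```json
{ "type": "long" }
```

so the logical type never reaches the registry, and decoding with `Codec.instant` fails on the mismatch.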
Java schema resolution
In the `fs2-kafka-vulcan` deserializer, unlike in `Codec.fromBinary`, we use the Java library's schema resolution to adapt the decoded value to the reader schema. This interacts with a bug in Vulcan's record decoder (fd4s/vulcan#335) so that information about the writer schema's logical type is lost. I'm writing some further notes to add to the Vulcan PR - I think the behaviour should be fixed, but we should also try to avoid breaking previously-working code.
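For contrast, vulcan's own binary helpers take the writer schema explicitly, so the logical-type check happens against the real writer schema. A minimal sketch, assuming vulcan's `Codec.toBinary`/`Codec.fromBinary` signatures (this is illustrative and not tied to the `fs2-kafka-vulcan` internals described above):

```scala
import java.time.Instant

import vulcan.Codec

// Sketch: encode a bare Long, then decode it as an Instant the way
// Codec.fromBinary does, with the writer schema passed explicitly.
val result = for {
  writerSchema <- Codec.long.schema            // plain "long", no logical type
  bytes        <- Codec.toBinary(42L)
  instant      <- Codec.fromBinary[Instant](bytes, writerSchema)
} yield instant

// Here the mismatch stays visible to Vulcan: Codec.instant requires the
// writer schema to carry the timestamp-millis logical type, so this decode
// fails with an AvroError instead of the Java resolution silently adapting
// the value and dropping the logical type.
```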