Releases: fd4s/fs2-kafka
fs2-kafka v0.19.0
Changes
- Add `KafkaProducer#producePassthrough` for only keeping the passthrough after producing. (#74)
- Change `KafkaConsumer#stream` to be an alias for `partitionedStream.parJoinUnbounded`. (#78)
  - This also removes `ConsumerSettings#fetchTimeout`, as it is now unused.
- Change to improve type inference of `ProducerMessage`. (#74, #76)
  - To support better type inference, a custom `fs2.kafka.ProducerRecord` has been added.
  - If you were using the Java `ProducerRecord`, change to `fs2.kafka.ProducerRecord`.
- Change to replace `Sink`s with `Pipe`s, and usage of `Stream#to` with `Stream#through`. (#73)
- Remove `ProducerMessage#single`, `multiple`, and `passthrough`. (#74)
  - They have been replaced with `ProducerMessage#apply` and `ProducerMessage#one`.
  - If you were previously using `single` in isolation, you can now use `one`.
  - For all other cases, you can now use `ProducerMessage#apply` instead.
- Rename `KafkaProducer#produceBatched` to `produce`. (#74)
- Remove the previous `KafkaProducer#produce`. (#74)
  - For the previous behavior, `flatten` the result from `produce`.
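The producer changes above can be sketched as a small migration, assuming a `producer: KafkaProducer[IO, String, String]` already obtained elsewhere; the topic, key, and value below are illustrative assumptions, not part of the release notes:

```scala
import cats.effect.IO
import fs2.kafka._

// Hypothetical migration sketch for the v0.19.0 changes; the topic, key,
// value, and `producer` parameter are assumptions for illustration.
def sendOne(producer: KafkaProducer[IO, String, String]): IO[Unit] = {
  // Use the fs2.kafka.ProducerRecord, not the Java one.
  val record = ProducerRecord("topic", "key", "value")
  // Previously ProducerMessage.single(record); now ProducerMessage.one(record).
  val message = ProducerMessage.one(record)
  // `produce` (previously `produceBatched`) wraps the result in an outer
  // effect; `flatten` recovers the behavior of the removed `produce`,
  // waiting for acknowledgement before continuing.
  producer.produce(message).flatten.void
}
```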
Miscellaneous
- Change to include current year in license notices. (#72)
Released on 2019-01-18.
fs2-kafka v0.18.1
fs2-kafka v0.18.0
Additions
- Add support for a default `ExecutionContext` for `KafkaConsumer`s. (#60)
  If you've been using the context from `consumerExecutionContextResource` or `consumerExecutionContextStream`, then not providing a context when creating `ConsumerSettings` now yields the same result.
- Add `KafkaConsumer#subscribeTo` for subscribing to topics with varargs. (#62)
- Add `KafkaConsumer#seek` for setting starting offsets. Thanks @danielkarch. (#64)
Changes
- Change `KafkaConsumer#subscribe` to work for any `Reducible`. (#62)
- Change `KafkaConsumer#subscribe` to return `F[Unit]` instead of `Stream[F, Unit]`. (#62)
- Change `KafkaConsumer` requests to be attempted and errors returned directly. (#66)
- Change to use an internal singleton for `KafkaConsumer` poll requests. (#69)
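The new subscription API above can be sketched as follows; the `consumer` parameter and topic names are illustrative assumptions:

```scala
import cats.data.NonEmptyList
import cats.effect.IO
import fs2.kafka._

// Hypothetical sketch of the changed subscription API; the `consumer`
// value and topic names are assumptions for illustration.
def subscribeAll(consumer: KafkaConsumer[IO, String, String]): IO[Unit] =
  for {
    // subscribeTo accepts topics as varargs (#62) ...
    _ <- consumer.subscribeTo("topic-a", "topic-b")
    // ... while subscribe works for any Reducible, e.g. NonEmptyList,
    // and now returns F[Unit] rather than Stream[F, Unit] (#62).
    _ <- consumer.subscribe(NonEmptyList.of("topic-c"))
  } yield ()
```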
Fixes
- Fix `toString` for custom exceptions. (#61)
- Fix to always create new instances of `NotSubscribedException`. (#65)
- Fix `KafkaConsumer` requests to check that the consumer has not shut down. (#66)
- Fix `Show[ProducerRecord[K, V]]` when the partition is `null`. (#68)
Documentation
- Change to simplify the 'quick example' in the documentation. (#63)
Miscellaneous
- Change `OVO Energy Ltd` to `OVO Energy Limited` in license texts. (#67)
Released on 2018-12-16.
fs2-kafka v0.17.3
fs2-kafka v0.17.2
fs2-kafka v0.17.1
fs2-kafka v0.17.0
Additions
- Add support for subscribing to topics with `Regex` patterns. (#29)
- Add support for committing record metadata along with offsets. (#28)
- Add `CommittableOffsetBatch#updated(CommittableOffsetBatch)`. (#27)
- Add `ProducerSettings#withLinger` and `withRequestTimeout`. (#35)
- Add support for overriding `Consumer` and `Producer` creation. (#17)
- Add support for setting the number of `ExecutionContext` threads. (#45)
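Two of the additions above can be sketched briefly; the `consumer` and `settings` parameters and the topic pattern are illustrative assumptions:

```scala
import cats.effect.IO
import scala.concurrent.duration._
import fs2.kafka._

// Hypothetical sketch of regex subscription (#29) and the new producer
// settings (#35); parameter values are assumptions for illustration.
def example(
  consumer: KafkaConsumer[IO, String, String],
  settings: ProducerSettings[String, String]
): ProducerSettings[String, String] = {
  // Subscribe to every topic matching a Regex pattern (#29).
  consumer.subscribe("events-.*".r)

  // Tune linger and request timeout on the producer settings (#35).
  settings
    .withLinger(5.millis)
    .withRequestTimeout(30.seconds)
}
```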
Changes
- Change to make `KafkaConsumer` and `KafkaProducer` sealed. (#33)
- Change to move `KafkaConsumerActor` to an internal package. (#32)
- Change to improve performance of offset batching. (#20)
- Change to improve performance of `KafkaConsumerActor`. (#18, #21, #22)
- Change the behaviour of `KafkaConsumer#partitionedStream` to that of `parallelPartitionedStream`. (#19) The `parallelPartitionedStream` function on `KafkaConsumer` has therefore been removed. (#30)
- Change to an alternative encoding of `ProducerMessage` and `ProducerResult`. (#38, #26)
- Change to use `AnyVal` for `Resource` and `Stream` builders. (#31)
- Change internal `private` definitions to `private[this]`. (#37)
Fixes
- Fix to propagate `KafkaConsumerActor` errors to `Stream`s. (#36)
Updates
- Update fs2 to 1.0.1. (#46)
- Update cats-effect to 1.1.0. (#44)
- Update kafka-client to 2.0.1. (#34)
- Update sbt to 1.2.7. (#43)
Miscellaneous
- Library is now published on Maven Central instead of on Bintray. (#42)
Released on 2018-12-03.
fs2-kafka v0.16.4
fs2-kafka v0.16.3
fs2-kafka v0.16.2
Changes
- Add support for offset commit recovery. By default, only `RetriableCommitFailedException`s are retried, using a jittered exponential backoff for 10 attempts, then switching to fixed-rate retries for up to 5 attempts (see `CommitRecovery#Default` for more details). If you want to keep the previous behaviour (no retries), use the following on `ConsumerSettings`. (#9)

  ```scala
  consumerSettings.withCommitRecovery(CommitRecovery.None)
  ```

- Fix the `KafkaConsumer#fiber` instance to work as expected. Most notably, `join` and `cancel` should work as expected, with `join` no longer possibly becoming non-terminating after calling `cancel`. This also means that streams should be interrupted as expected. See `KafkaConsumer#fiber` for more details. (#11)
Miscellaneous
- Change to run `doc` instead of `packageDoc` in `validate`. (#10)
Released on 2018-11-05.