Releases · Blizzard/node-rdkafka
v2.2.0
Minor release
- Configuration fixes to ensure all system library locations are used for include directories and library directories.
- Include JS linting in the CI build process.
- Support for Node 8
Consumer Changes
- Support for resume, pause, and offset store.
- `committed` now takes an array of topic partitions to fetch committed offsets for. Defaults to the current assignment.
- `position` now takes an array of topic partitions to get positions for, or defaults to the current assignment (see the sketch below).
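A minimal sketch of the new consumer calls. The broker address, group id, topic name, and partition list are illustrative; it assumes a reachable Kafka broker.

```js
var Kafka = require('node-rdkafka');

var consumer = new Kafka.KafkaConsumer({
  'group.id': 'example-group',
  'metadata.broker.list': 'localhost:9092'
}, {});

consumer.connect();

consumer.on('ready', function() {
  consumer.subscribe(['my-topic']);
  consumer.consume();

  var toppars = [{ topic: 'my-topic', partition: 0 }];

  // Pause and later resume consumption of specific partitions.
  consumer.pause(toppars);
  consumer.resume(toppars);

  // Fetch committed offsets for the given partitions (omit the list to
  // default to the current assignment).
  consumer.committed(toppars, 5000, function(err, committedOffsets) {
    if (err) {
      return console.error(err);
    }
    console.log('committed:', committedOffsets);
  });

  // Read the current positions for the same partitions.
  console.log('positions:', consumer.position(toppars));
});
```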
v2.1.1
Minor release
- Configuration fixes to ensure all system library features are used, as determined by librdkafka's make process.
- Specify the full source list and correctly use conditionals to determine which sources to compile.
- Windows Support. See README.
v2.1.0
Minor release
- Upgraded librdkafka to 0.11.1
Consumer Changes
- Partition EOF will no longer stop a batch from completing when consuming (see the sketch below).
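A minimal sketch of batch consumption, assuming a KafkaConsumer named `consumer` that is already connected and subscribed; the batch size of 100 is illustrative.

```js
// With this release, reaching the end of a partition no longer
// terminates the batch early.
consumer.consume(100, function(err, messages) {
  if (err) {
    return console.error(err);
  }
  // `messages` may hold up to 100 messages gathered across partitions,
  // even if some partitions hit EOF while the batch was being filled.
  messages.forEach(function(message) {
    console.log(message.topic, message.partition, message.offset);
  });
});
```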
v2.0.0
Major release because there are breaking changes!
Breaking Changes
- Keys are now returned as buffers in delivery reports
- Keys are now produced as buffers. If you pass one in as a string it will be converted.
- Topic objects have been removed. You should use topic name strings to create topics (see the sketch after this list).
- New librdkafka produce methods do not support topic objects because they are in the process of being removed.
- You should use topic configuration to configure topics, and separate producers if special cases are needed. Producers are cheap!
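A minimal sketch of the new produce conventions, assuming a Producer named `producer` created with `dr_cb: true` and already connected; the topic name, key, and payload are illustrative.

```js
var value = Buffer.from('a message payload');
var key = Buffer.from('a-key'); // string keys are converted to Buffers for you

// Topic objects are gone; pass the topic name string directly.
producer.produce('my-topic', null, value, key);

// Delivery reports now return the key as a Buffer.
producer.on('delivery-report', function(err, report) {
  if (err) {
    return console.error(err);
  }
  console.log('delivered message with key:', report.key);
});
```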
v1.0.6
New release version v1.0.6.
Root Object Changes
- Added `librdkafkaVersion` and `features` properties to the root object (see the sketch below).
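A minimal sketch of reading the new properties from the root object:

```js
var Kafka = require('node-rdkafka');

// Version of the bundled librdkafka and the features it was built with.
console.log('librdkafka version:', Kafka.librdkafkaVersion);
console.log('build features:', Kafka.features);
```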
Consumer Changes
- `assign` and `unassign` in the rebalance callback now check whether the consumer is connected before they throw.
v1.0.5
New release version v1.0.5. This release note includes changes from v1.0.2 - v1.0.5
Consumer Changes
- Bug fix for custom rebalance callbacks where assignments were not being set. (v1.0.5)
- Kafka read stream now supports the `streamAsBatch` option when running in `objectMode`. This will push arrays of messages instead of individual messages (v1.0.4; see the sketch below).
- Fixed callback "leak" in the read stream under high message volume (v1.0.3)
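A minimal sketch of a read stream with `streamAsBatch` enabled; the broker address, group id, and topic are illustrative.

```js
var Kafka = require('node-rdkafka');

var stream = Kafka.KafkaConsumer.createReadStream({
  'group.id': 'example-group',
  'metadata.broker.list': 'localhost:9092'
}, {}, {
  topics: ['my-topic'],
  objectMode: true,   // streamAsBatch only applies in object mode
  streamAsBatch: true
});

// Each 'data' event now delivers an array of messages rather than a
// single message.
stream.on('data', function(messages) {
  console.log('received a batch of', messages.length, 'messages');
});
```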
v1.0.1
New release version v1.0.1
Producer Changes
- Passing a buffer as a key will no longer convert it to a string before sending it to Kafka
v1.0.0
New release version v1.0.0
Producer API Changes
- Producer write stream is now its own class that has its own producer. Producer methods can be accessed through the member variable.
- Create one by using `Producer.createWriteStream()`.
Consumer API Changes
- Added `offset_commit_cb`.
- The rebalance callback's first parameter is now an error object, which can be checked to see whether it represents an assignment or an unassignment (see the sketch after this list).
- Consumer write stream is now its own class that has its own consumer. Consumer methods can be accessed through the member variable.
- Create one by using `Consumer.createReadStream()`.
- Added `query_watermark_offsets` support.
- Added support for the `seek` method, but it is not currently supported in librdkafka 0.9.5. It will likely be in the next release.
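A minimal sketch of the new stream helper and the rebalance callback error object, assuming current method names (`Producer.createWriteStream`, `Kafka.CODES.ERRORS`); broker addresses, group id, and topic are illustrative.

```js
var Kafka = require('node-rdkafka');

// Producer write stream: the stream owns its own producer, which is
// reachable through a member variable on the stream.
var writeStream = Kafka.Producer.createWriteStream({
  'metadata.broker.list': 'localhost:9092'
}, {}, { topic: 'my-topic' });
writeStream.write(Buffer.from('hello'));

// Consumer whose rebalance callback receives an error object as its
// first parameter, indicating assignment vs. unassignment.
var consumer = new Kafka.KafkaConsumer({
  'group.id': 'example-group',
  'metadata.broker.list': 'localhost:9092',
  'rebalance_cb': function(err, assignment) {
    if (err.code === Kafka.CODES.ERRORS.ERR__ASSIGN_PARTITIONS) {
      this.assign(assignment);
    } else if (err.code === Kafka.CODES.ERRORS.ERR__REVOKE_PARTITIONS) {
      this.unassign();
    }
  }
}, {});
consumer.connect();
```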
v0.10.2
New release version v0.10.2
Producer API Changes
- The producer `flush` method is now asynchronous and must be provided a callback (see the sketch below).
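A minimal sketch of the asynchronous flush, assuming a connected Producer named `producer`; the 5000 ms timeout is illustrative.

```js
producer.flush(5000, function(err) {
  if (err) {
    return console.error('flush failed:', err);
  }
  // All queued messages have been delivered (or the timeout expired).
  console.log('flush complete');
  producer.disconnect();
});
```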
v0.10.0
New release version v0.10.0
API Changes
- The `error` event is renamed `event.error` to show it corresponds with errors reported by the librdkafka internals. Only streams emit events named `error` now, when there are stream-related errors (see the sketch below).
- Added new error codes for librdkafka.
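A minimal sketch of listening for the renamed event, assuming a connected client (producer or consumer) named `client`.

```js
client.on('event.error', function(err) {
  // Errors reported by the librdkafka internals now arrive here rather
  // than on a plain 'error' event.
  console.error('librdkafka error:', err);
});
```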
Consumer API Changes
- `commit` asynchronous methods no longer take a callback, and instead map directly to the librdkafka async commit variants (see the sketch below).
- Internal queue timeouts are no longer considered error-worthy for consume methods.
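A minimal sketch of the fire-and-forget commit variants, assuming a connected KafkaConsumer named `consumer`, a `message` previously consumed from it, and the `commit`/`commitMessage` method names.

```js
consumer.commit();               // commit all current offsets; no callback
consumer.commitMessage(message); // commit the offset of a single consumed message
```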