diff --git a/website/versioned_docs/version-2.2.4/Consuming.md b/website/versioned_docs/version-2.2.4/Consuming.md
new file mode 100644
index 000000000..dd02f75bf
--- /dev/null
+++ b/website/versioned_docs/version-2.2.4/Consuming.md
@@ -0,0 +1,476 @@
+---
+id: version-2.2.4-consuming
+title: Consuming Messages
+original_id: consuming
+---
+
+Consumer groups allow a group of machines or processes to coordinate access to a list of topics, distributing the load among the consumers. When a consumer fails, the load is automatically distributed to other members of the group. Consumer groups __must have__ unique group ids within the cluster, from a Kafka broker perspective.
+
+Creating the consumer:
+
+```javascript
+const consumer = kafka.consumer({ groupId: 'my-group' })
+```
+
+Subscribing to some topics:
+
+```javascript
+await consumer.connect()
+
+await consumer.subscribe({ topics: ['topic-A'] })
+
+// You can subscribe to multiple topics at once
+await consumer.subscribe({ topics: ['topic-B', 'topic-C'] })
+
+// It's possible to start from the beginning of the topic
+await consumer.subscribe({ topics: ['topic-D'], fromBeginning: true })
+```
+
+Alternatively, you can subscribe to any topic that matches a regular expression:
+
+```javascript
+await consumer.connect()
+await consumer.subscribe({ topics: [/topic-(eu|us)-.*/i] })
+```
+
+When supplying a regular expression, the consumer will not match topics created after the subscription. If your broker has `topic-A` and `topic-B` and you subscribe to `/topic-.*/`, and `topic-C` is created afterwards, your consumer will not be automatically subscribed to `topic-C`.
+
+KafkaJS offers you two ways to process your data: `eachMessage` and `eachBatch`.
+
+## eachMessage
+
+The `eachMessage` handler provides a convenient and easy-to-use API, feeding your function one message at a time. It is implemented on top of `eachBatch`, and it will automatically commit your offsets and heartbeat at the configured interval for you. If you are just looking to get started with Kafka consumers, this is a good place to start.
+
+```javascript
+await consumer.run({
+  eachMessage: async ({ topic, partition, message, heartbeat, pause }) => {
+    console.log({
+      key: message.key.toString(),
+      value: message.value.toString(),
+      headers: message.headers,
+    })
+  },
+})
+```
+
+Be aware that the `eachMessage` handler should not block for longer than the configured [session timeout](#options), or else the consumer will be removed from the group. If your workload involves very slow processing times for individual messages, then you should either increase the session timeout or make periodic use of the `heartbeat` function exposed in the handler payload.
+
+The `pause` function is a convenience for `consumer.pause({ topic, partitions: [partition] })`. It pauses the current topic-partition and returns a function that allows you to resume consuming later.
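+
+As a rough sketch, a slow handler might heartbeat between processing steps and pause the current topic-partition when a downstream dependency is overloaded. Here `processPartOne`, `processPartTwo` and `isOverloadedError` are hypothetical placeholders for your own logic:
+
+```javascript
+await consumer.run({
+  eachMessage: async ({ topic, partition, message, heartbeat, pause }) => {
+    try {
+      await processPartOne(message) // hypothetical slow step
+      await heartbeat() // keep the consumer alive between slow steps
+      await processPartTwo(message) // hypothetical slow step
+    } catch (e) {
+      if (isOverloadedError(e)) { // hypothetical check for a transient downstream failure
+        // Pause this topic-partition and resume it after a minute
+        const resumeThisPartition = pause()
+        setTimeout(resumeThisPartition, 60 * 1000)
+      }
+      // Rethrow so the message is not marked as processed
+      throw e
+    }
+  },
+})
+```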
+
+## eachBatch
+
+Some use cases require dealing with batches directly. This handler will feed your function batches and provide some utility functions to give your code more flexibility: `resolveOffset`, `heartbeat`, `commitOffsetsIfNecessary`, `uncommittedOffsets`, `isRunning`, `isStale`, and `pause`. All resolved offsets will be automatically committed after the function is executed.
+
+> Note: Be aware that using `eachBatch` directly is considered a more advanced use case as compared to using `eachMessage`, since you will have to understand how session timeouts and heartbeats are connected.
+
+```javascript
+await consumer.run({
+  eachBatchAutoResolve: true,
+  eachBatch: async ({
+    batch,
+    resolveOffset,
+    heartbeat,
+    commitOffsetsIfNecessary,
+    uncommittedOffsets,
+    isRunning,
+    isStale,
+    pause,
+  }) => {
+    for (let message of batch.messages) {
+      console.log({
+        topic: batch.topic,
+        partition: batch.partition,
+        highWatermark: batch.highWatermark,
+        message: {
+          offset: message.offset,
+          key: message.key.toString(),
+          value: message.value.toString(),
+          headers: message.headers,
+        }
+      })
+
+      resolveOffset(message.offset)
+      await heartbeat()
+    }
+  },
+})
+```
+
+* `eachBatchAutoResolve` configures auto-resolve of batch processing. If set to true, KafkaJS will automatically commit the last offset of the batch if `eachBatch` doesn't throw an error. Default: true.
+* `batch.highWatermark` is the last committed offset within the topic partition. It can be useful for calculating lag.
+* `resolveOffset()` is used to mark a message in the batch as processed. In case of errors, the consumer will automatically commit the resolved offsets.
+* `heartbeat(): Promise` can be used to send a heartbeat to the broker according to the `heartbeatInterval` value set in the consumer [configuration](#options), which means that if you invoke `heartbeat()` sooner than `heartbeatInterval`, it will be ignored.
+* `commitOffsetsIfNecessary(offsets?): Promise` is used to commit offsets based on the autoCommit configurations (`autoCommitInterval` and `autoCommitThreshold`). Note that auto commit won't happen in `eachBatch` if `commitOffsetsIfNecessary` is not invoked. Take a look at [autoCommit](#auto-commit) for more information.
+* `uncommittedOffsets()` returns all offsets by topic-partition which have not yet been committed.
+* `isRunning()` returns true if the consumer is in a running state, otherwise it returns false.
+* `isStale()` returns whether the messages in the batch have been rendered stale through some other operation and should be discarded. For example, when calling [`consumer.seek`](#seek) the messages in the batch should be discarded, as they are not at the offset we seeked to.
+* `pause()` can be used to pause the consumer for the current topic-partition. All offsets resolved up to that point will be committed (subject to `eachBatchAutoResolve` and [autoCommit](#auto-commit)). Throw an error to pause in the middle of the batch without resolving the current offset. Alternatively, disable `eachBatchAutoResolve`. The returned function can be used to resume processing of the topic-partition. See [Pause & Resume](#pause-resume) for more information about this feature.
+
+### Example
+
+```javascript
+consumer.run({
+  eachBatchAutoResolve: false,
+  eachBatch: async ({ batch, resolveOffset, heartbeat, isRunning, isStale }) => {
+    for (let message of batch.messages) {
+      if (!isRunning() || isStale()) break
+      await processMessage(message)
+      resolveOffset(message.offset)
+      await heartbeat()
+    }
+  }
+})
+```
+
+In the example above, if the consumer is shutting down in the middle of the batch, the remaining messages won't be resolved and therefore not committed. This way, you can quickly shut down the consumer without losing/skipping any messages. If the batch goes stale for some other reason (like calling `consumer.seek`) none of the remaining messages are processed either.
+
+## Partition-aware concurrency
+
+By default, [`eachMessage`](Consuming.md#each-message) is invoked sequentially for each message in each partition. In order to process several messages concurrently, you can increase the `partitionsConsumedConcurrently` option:
+
+```javascript
+consumer.run({
+  partitionsConsumedConcurrently: 3, // Default: 1
+  eachMessage: async ({ topic, partition, message }) => {
+    // This will be called up to 3 times concurrently
+  },
+})
+```
+
+Messages in the same partition are still guaranteed to be processed in order, but messages from multiple partitions can be processed at the same time. If `eachMessage` consists of asynchronous work, such as network requests or other I/O, this can improve performance. If `eachMessage` is entirely synchronous, this will make no difference.
+
+The same thing applies if you are using [`eachBatch`](Consuming.md#each-batch). Given `partitionsConsumedConcurrently > 1`, you will be able to process multiple batches concurrently.
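+
+As a minimal sketch, the option is passed to `consumer.run` the same way when using a batch handler; `processMessage` here is a hypothetical processing function:
+
+```javascript
+consumer.run({
+  partitionsConsumedConcurrently: 3,
+  eachBatch: async ({ batch, resolveOffset, heartbeat }) => {
+    // Up to 3 batches, each from a different partition, may be processed at the same time
+    for (let message of batch.messages) {
+      await processMessage(message) // hypothetical
+      resolveOffset(message.offset)
+      await heartbeat()
+    }
+  },
+})
+```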
+
+A guideline for setting `partitionsConsumedConcurrently` would be that it should not be larger than the number of partitions consumed. Depending on whether or not your workload is CPU bound, it may also not benefit you to set it to a higher number than the number of logical CPU cores. A recommendation is to start with a low number and measure if increasing leads to higher throughput.
+
+## autoCommit
+
+The messages are always fetched in batches from Kafka, even when using the `eachMessage` handler. All resolved offsets will be committed to Kafka after processing the whole batch.
+
+Committing offsets periodically during a batch allows the consumer to recover from group rebalancing, stale metadata and other issues before it has completed the entire batch. However, committing more often increases network traffic and slows down processing. Auto-commit offers more flexibility when committing offsets; there are two flavors available:
+
+`autoCommitInterval`: The consumer will commit offsets after a given period, for example, five seconds. Value in milliseconds. Default: `null`
+
+```javascript
+consumer.run({
+  autoCommitInterval: 5000,
+  // ...
+})
+```
+
+`autoCommitThreshold`: The consumer will commit offsets after resolving a given number of messages, for example, a hundred messages. Default: `null`
+
+```javascript
+consumer.run({
+  autoCommitThreshold: 100,
+  // ...
+})
+```
+
+Having both flavors at the same time is also possible; the consumer will commit the offsets whenever either condition (interval or number of messages) is met.
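+
+For example, a minimal sketch combining both options, where whichever condition is met first triggers the commit:
+
+```javascript
+consumer.run({
+  // Commit after 5 seconds or after 100 resolved messages, whichever happens first
+  autoCommitInterval: 5000,
+  autoCommitThreshold: 100,
+  eachMessage: async ({ topic, partition, message }) => {
+    // ...
+  },
+})
+```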
+
+`autoCommit`: Advanced option to disable auto committing altogether. Instead, you can [manually commit offsets](#manual-commits). Default: `true`
+
+## Manual committing
+
+When disabling [`autoCommit`](#auto-commit) you can still manually commit message offsets, in a couple of different ways:
+
+- By using the `commitOffsetsIfNecessary` method available in the `eachBatch` callback. The `commitOffsetsIfNecessary` method will still respect the other autoCommit options if set.
+- By [sending message offsets in a transaction](Transactions.md#offsets).
+- By using the `commitOffsets` method of the consumer (see below).
+
+The `consumer.commitOffsets` method is the lowest-level option and ignores all other auto commit settings, but in doing so it allows the committed offset to be set to any value and lets you commit several offsets at once. This can be useful, for example, for building a processing reset tool. It can only be called after `consumer.run`. Committing offsets does not change which message we'll consume next once we've started consuming, but is instead only used to determine **from which place to start**. To immediately change the offset you're consuming from, you'll want to [seek](#seek) instead.
+
+```javascript
+consumer.run({
+  autoCommit: false,
+  eachMessage: async ({ topic, partition, message }) => {
+    // Process the message somehow
+  },
+})
+
+consumer.commitOffsets([
+  { topic: 'topic-A', partition: 0, offset: '1' },
+  { topic: 'topic-A', partition: 1, offset: '3' },
+  { topic: 'topic-B', partition: 0, offset: '2' }
+])
+```
+
+Note that you don't *have* to store consumed offsets in Kafka; you can instead store them in a storage mechanism of your own choosing. That is an especially useful approach when the results of consuming a message are written to a datastore that allows atomically writing the consumed offset together with them, such as a SQL database. When possible, this can make the consumption fully atomic and give "exactly once" semantics that are stronger than the default "at-least once" semantics you get with Kafka's offset commit functionality.
+
+The usual usage pattern for offsets stored outside of Kafka is as follows:
+
+- Run the consumer with `autoCommit` disabled.
+- Store a message's `offset + 1` in the store together with the results of processing. `1` is added to prevent that same message from being consumed again.
+- Use the externally stored offset on restart to [seek](#seek) the consumer to it.
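+
+A sketch of that pattern follows; `doSomeWork`, `saveResultAndOffset` and `readStoredOffset` are hypothetical functions backed by your own datastore:
+
+```javascript
+await consumer.run({
+  autoCommit: false,
+  eachMessage: async ({ topic, partition, message }) => {
+    const result = await doSomeWork(message) // hypothetical processing
+    // Hypothetical: write the result and `offset + 1` atomically, e.g. in one SQL transaction
+    await saveResultAndOffset(result, { topic, partition, offset: Number(message.offset) + 1 })
+  },
+})
+
+// On restart, seek back to the externally stored offset (seek can only be called after run)
+const storedOffset = await readStoredOffset('topic-A', 0) // hypothetical lookup
+consumer.seek({ topic: 'topic-A', partition: 0, offset: storedOffset.toString() })
+```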
+
+## fromBeginning
+
+The consumer group will use the latest committed offset when starting to fetch messages. If the offset is invalid or not defined, `fromBeginning` defines the behavior of the consumer group. This can be configured when subscribing to a topic:
+
+```javascript
+await consumer.subscribe({ topics: ['test-topic'], fromBeginning: true })
+await consumer.subscribe({ topics: ['other-topic'], fromBeginning: false })
+```
+
+When `fromBeginning` is `true`, the group will use the earliest offset. If set to `false`, it will use the latest offset. The default is `false`.
+
+## Options
+
+```javascript
+kafka.consumer({
+  groupId: <String>,
+  partitionAssigners: <Array>,
+  sessionTimeout: <Number>,
+  rebalanceTimeout: <Number>,
+  heartbeatInterval: <Number>,
+  metadataMaxAge: <Number>,
+  allowAutoTopicCreation: <Boolean>,
+  maxBytesPerPartition: <Number>,
+  minBytes: <Number>,
+  maxBytes: <Number>,
+  maxWaitTimeInMs: <Number>,
+  retry: <Object>,