
Kafka perf improvements #1964

Merged: 6 commits from kafka-improvements into restatedev:main on Sep 18, 2024

Conversation

@slinkydeveloper (Contributor) commented Sep 16, 2024

This is the result of investigating the Kafka ingress performance.

Before:

[Screenshot: Grafana dashboard panel, 2024-09-16 17:35:03]

After:

[Screenshot: Grafana dashboard panel, 2024-09-16 17:35:53]

The red line is the Bifrost append throughput, the green line is the PP Invoke command throughput. The Kafka load tool generates as much load as possible (around 25k records/s on my machine).

In both cases the initial slow section seems to be caused by the load tool, which takes a good amount of my CPU. After it finishes producing, Restate takes all the CPU.

In the after case, the Kafka container OOMs before finishing (probably because of the high load generated by the consumer), so I cut off the section after that point.

I ran this test using the Rust SDK with a virtual object as the target, and the following subscription:

http localhost:9070/subscriptions source="kafka://my-cluster/test-topic" sink="service://Greeter/greet" \
  options["fetch.queue.backoff.ms"]="500" \
  options["queued.max.messages.kbytes"]="131072" \
  options["reconnect.backoff.max.ms"]="1000"

The Kafka topic has 24 partitions (the same number as Restate's partitions). The throughput improvement in this PR is greatly affected by the number of Kafka topic partitions: more Kafka partitions means higher throughput.

Tuning the knobs, though, has a negligible impact most of the time (at least on my machine). I also tried tuning fetch.wait.max.ms; it just increases CPU usage.

@slinkydeveloper (Contributor, Author) commented:

In the last commit I've also added a metric to monitor the consumer group (the blue line here):

[Screenshot: Grafana dashboard panel, 2024-09-16 18:11:58]
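
Purely as an illustration of what recording such a per-consumer-group counter can look like with the handle-style macros of the `metrics` crate; the metric name and label below are made up and are not necessarily what this commit adds:

```rust
use metrics::counter;

// Hypothetical metric name and label, for illustration only.
fn record_consumed(consumer_group: &str, records: u64) {
    counter!(
        "kafka_ingress_consumed_records_total",
        "consumer_group" => consumer_group.to_owned()
    )
    .increment(records);
}
```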

@slinkydeveloper (Contributor, Author) commented Sep 16, 2024

The last commit I added optimizes for reading from Kafka as fast as possible, reaching a much higher throughput (but potentially overloading Bifrost?!)

[Screenshot: Grafana dashboard panel, 2024-09-16 18:47:43]

This test was run with Kafka partitions = 3

Not sure if we should do this.

consumer.store_offset(&topic, partition, offset)?;
last_offset = msg.offset();

buffer.push(sender.send(&consumer_group_id, msg));
@slinkydeveloper (Contributor, Author):

Under the hood, this send awaits on the Bifrost append.
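
To make the discussion concrete, here is a minimal, hypothetical sketch of the buffering pattern this hunk introduces: instead of awaiting each Bifrost append inline, the send futures are pushed into a FuturesUnordered and drained with a bound on the number of in-flight appends. Names like pipeline_sends and max_in_flight are illustrative, not the actual ingress code:

```rust
use futures::stream::{FuturesUnordered, StreamExt};

// Pipeline up to `max_in_flight` sends instead of awaiting each one inline.
async fn pipeline_sends<F, E>(
    sends: impl IntoIterator<Item = F>,
    max_in_flight: usize,
) -> Result<(), E>
where
    F: std::future::Future<Output = Result<(), E>>,
{
    let mut in_flight = FuturesUnordered::new();
    for send in sends {
        in_flight.push(send);
        // Bound the number of outstanding appends so the buffer cannot grow without limit.
        if in_flight.len() >= max_in_flight {
            if let Some(res) = in_flight.next().await {
                res?;
            }
        }
    }
    // Drain whatever is still outstanding.
    while let Some(res) = in_flight.next().await {
        res?;
    }
    Ok(())
}
```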

@slinkydeveloper (Contributor, Author):

After some thinking: for the time being it perhaps just makes sense to remove the commit with the buffering. In any case, this code will change dramatically when #1651 happens.

@slinkydeveloper (Contributor, Author):

Removed it.

@slinkydeveloper (Contributor, Author):

What's your take on this, @AhmedSoliman?

@AhmedSoliman (Contributor):

I suggest you use Appender or BackgroundAppender when writing to Bifrost; it's an easy change and will get you even better performance IMHO. This is adjacent to what we'll do regarding ingress -> pp communication in the future. The downside with all current options (your PR doesn't introduce the issue) is that we don't have proper back-pressure or control over fairness from the partition processor, so it's easy to overwhelm the partition processor if Kafka's ingestion rate is faster than the PP's processing rate.

@slinkydeveloper (Contributor, Author):

For now I won't make this change; we should revisit it, though, once we sort out how the Kafka ingress behaves in the distributed setup.
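
For readers who don't know the Bifrost internals: the pattern suggested above is essentially a dedicated append task fed through a bounded queue, which is also what provides back-pressure. A generic sketch under that assumption follows; this is not the actual Appender/BackgroundAppender API, which is internal to Restate:

```rust
use tokio::sync::mpsc;
use tokio::task::JoinHandle;

struct Record(Vec<u8>); // placeholder payload type

// Spawn a background task that owns the append loop; producers get a bounded
// Sender. Once `capacity` records are queued, `send().await` suspends the
// producer (e.g. a Kafka consumer task) until the appender catches up, which
// is the back-pressure the current code path lacks.
fn spawn_background_appender(capacity: usize) -> (mpsc::Sender<Record>, JoinHandle<()>) {
    let (tx, mut rx) = mpsc::channel::<Record>(capacity);
    let handle = tokio::spawn(async move {
        while let Some(record) = rx.recv().await {
            append_to_log(record).await;
        }
    });
    (tx, handle)
}

async fn append_to_log(_record: Record) {
    // Stand-in for the actual Bifrost append.
}
```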

… to split the main consumer queue into subqueues, such that we can spawn a subtask for each topic-partition tuple.

This roughly gives us an 8-10x improvement in throughput (on my machine).
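
For context, a rough sketch of the per-partition sub-queue approach described here, built on rust-rdkafka's split_partition_queue. This is an illustrative sketch under those assumptions, not the actual consumer_task.rs code; the downstream forwarding step is left as a placeholder:

```rust
use std::sync::Arc;

use rdkafka::consumer::{Consumer, StreamConsumer};

async fn consume_with_subqueues(consumer: Arc<StreamConsumer>, topic: String, partitions: i32) {
    for partition in 0..partitions {
        // Detach a dedicated queue for this topic-partition tuple.
        let queue = consumer
            .split_partition_queue(&topic, partition)
            .expect("partition queue");
        let consumer = Arc::clone(&consumer);
        // One subtask per topic-partition tuple.
        tokio::spawn(async move {
            loop {
                match queue.recv().await {
                    Ok(msg) => {
                        // Forward the record downstream (placeholder), then store
                        // the offset so the next commit picks it up.
                        // forward(&msg).await;
                        let _ = consumer.store_offset_from_message(&msg);
                    }
                    Err(e) => {
                        // In real code: log and decide whether to bail out.
                        eprintln!("partition {partition}: recv error: {e}");
                    }
                }
            }
        });
    }

    // The main stream must still be polled so that rebalances and other events
    // are serviced; messages for the split partitions no longer show up here.
    loop {
        let _ = consumer.recv().await;
    }
}
```
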
crates/ingress-kafka/src/consumer_task.rs (outdated review thread, resolved)

@AhmedSoliman (Contributor) left a comment:

Nice improvements indeed. 🚢

@slinkydeveloper merged commit c17f82b into restatedev:main on Sep 18, 2024; 11 checks passed. The kafka-improvements branch was deleted on Sep 18, 2024 at 07:30.