Description
The gRPC Manual Flow Control Example README contains the following passage:
Outgoing Flow Control
The underlying layer (such as Netty) will make the write wait when there is no space to write the next message. This causes the request stream to go into a not ready state and the outgoing onNext method invocation waits. You can explicitly check that the stream is ready for writing before calling onNext to avoid blocking.
This implies that when the caller tries to send streaming messages too fast, the onNext() method on the StreamObserver instance provided by the gRPC framework will block, effectively limiting the send rate.
From my testing, though, the onNext() method never blocks. If the caller sends messages too fast, all of those messages are queued in gRPC's internal buffer without limit, ultimately resulting in an OOM.
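
For illustration, here is a minimal sketch of the kind of unbounded send loop that exhibits this behavior. It assumes the generated classes from the manual flow control example (StreamingGreeterGrpc, HelloRequest, HelloReply) and a local server on port 50051; adjust to your own service.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.StreamObserver;

public class UnboundedSender {
  public static void main(String[] args) {
    ManagedChannel channel =
        ManagedChannelBuilder.forAddress("localhost", 50051).usePlaintext().build();
    // Generated stub from the manual flow control example proto (assumed names).
    StreamingGreeterGrpc.StreamingGreeterStub stub = StreamingGreeterGrpc.newStub(channel);

    StreamObserver<HelloRequest> requestObserver =
        stub.sayHelloStreaming(new StreamObserver<HelloReply>() {
          @Override public void onNext(HelloReply reply) { }
          @Override public void onError(Throwable t) { }
          @Override public void onCompleted() { }
        });

    // onNext() returns immediately even when the peer cannot keep up, so a fast
    // producer just grows gRPC's internal buffer until the heap is exhausted.
    for (long i = 0; ; i++) {
      requestObserver.onNext(HelloRequest.newBuilder().setName("msg-" + i).build());
    }
  }
}
```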
Is my understanding correct that the example README is outdated, and that the real reason the caller should check CallStreamObserver.isReady() is not to avoid blocking but to avoid an OOM?
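
For comparison, the pattern I believe the README intends (and what ManualFlowControlClient does) is to install an onReadyHandler through ClientResponseObserver and only call onNext() while isReady() returns true, so the outbound buffer never grows without bound. A rough sketch, reusing the same assumed generated classes as above:

```java
import java.util.Iterator;

import io.grpc.stub.ClientCallStreamObserver;
import io.grpc.stub.ClientResponseObserver;

class BackpressureAwareSender {

  // Replaces the naive loop above; stub and message types are the same assumed
  // generated classes from the manual flow control example.
  static void sendAll(StreamingGreeterGrpc.StreamingGreeterStub stub,
                      Iterator<HelloRequest> pending) {
    ClientResponseObserver<HelloRequest, HelloReply> requestObserver =
        new ClientResponseObserver<HelloRequest, HelloReply>() {
          private boolean completed = false;

          @Override
          public void beforeStart(ClientCallStreamObserver<HelloRequest> requestStream) {
            // Fires when the stream first becomes ready and again after every
            // not-ready -> ready transition.
            requestStream.setOnReadyHandler(() -> {
              // Write only while the transport reports readiness; stop as soon
              // as isReady() is false so nothing piles up in the outbound buffer.
              while (requestStream.isReady() && pending.hasNext()) {
                requestStream.onNext(pending.next());
              }
              if (!pending.hasNext() && !completed) {
                completed = true;
                requestStream.onCompleted();
              }
            });
          }

          @Override public void onNext(HelloReply reply) { /* handle responses */ }
          @Override public void onError(Throwable t) { }
          @Override public void onCompleted() { }
        };

    stub.sayHelloStreaming(requestObserver);
  }
}
```

The onReadyHandler may run after every not-ready to ready transition, which is why the loop re-checks isReady() on each iteration and why the completed flag guards against calling onCompleted() twice.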