
"java.util.concurrent.CancellationException: Disposed" with Docker image 1.4.0 #52

Open
jeanemvista opened this issue Mar 2, 2022 · 2 comments
Labels: bug (Something isn't working)

Comments

@jeanemvista

Hi, with Docker image 1.4.0, I get this stack trace in the container when I send metrics from a Spring Boot application:

```
reactor.core.Exceptions$ErrorCallbackNotImplemented: java.util.concurrent.CancellationException: Disposed
Caused by: java.util.concurrent.CancellationException: Disposed
	at io.rsocket.internal.UnboundedProcessor.dispose(UnboundedProcessor.java:550) ~[rsocket-core-1.1.1.jar:na]
	at io.rsocket.transport.netty.TcpDuplexConnection.doOnClose(TcpDuplexConnection.java:67) ~[rsocket-transport-netty-1.1.1.jar:na]
	at io.rsocket.internal.BaseDuplexConnection.lambda$new$0(BaseDuplexConnection.java:30) ~[rsocket-core-1.1.1.jar:na]
	at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.runFinally(FluxDoFinally.java:163) ~[reactor-core-3.4.14.jar:3.4.14]
	at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onComplete(FluxDoFinally.java:146) ~[reactor-core-3.4.14.jar:3.4.14]
	at reactor.core.publisher.SinkEmptyMulticast$VoidInner.complete(SinkEmptyMulticast.java:238) ~[reactor-core-3.4.14.jar:3.4.14]
	at reactor.core.publisher.SinkEmptyMulticast.tryEmitEmpty(SinkEmptyMulticast.java:70) ~[reactor-core-3.4.14.jar:3.4.14]
	at reactor.core.publisher.SinkEmptySerialized.tryEmitEmpty(SinkEmptySerialized.java:46) ~[reactor-core-3.4.14.jar:3.4.14]
	at io.rsocket.internal.BaseDuplexConnection.dispose(BaseDuplexConnection.java:51) ~[rsocket-core-1.1.1.jar:na]
	at io.rsocket.transport.netty.TcpDuplexConnection.lambda$new$0(TcpDuplexConnection.java:49) ~[rsocket-transport-netty-1.1.1.jar:na]
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578) ~[netty-common-4.1.73.Final.jar:4.1.73.Final]
	at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571) ~[netty-common-4.1.73.Final.jar:4.1.73.Final]
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550) ~[netty-common-4.1.73.Final.jar:4.1.73.Final]
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491) ~[netty-common-4.1.73.Final.jar:4.1.73.Final]
	at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616) ~[netty-common-4.1.73.Final.jar:4.1.73.Final]
	at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605) ~[netty-common-4.1.73.Final.jar:4.1.73.Final]
	at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104) ~[netty-common-4.1.73.Final.jar:4.1.73.Final]
	at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) ~[netty-transport-4.1.73.Final.jar:4.1.73.Final]
	at io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:1164) ~[netty-transport-4.1.73.Final.jar:4.1.73.Final]
	at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:755) ~[netty-transport-4.1.73.Final.jar:4.1.73.Final]
	at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:731) ~[netty-transport-4.1.73.Final.jar:4.1.73.Final]
	at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:620) ~[netty-transport-4.1.73.Final.jar:4.1.73.Final]
	at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.shutdownInput(AbstractEpollChannel.java:522) ~[netty-transport-classes-epoll-4.1.73.Final.jar:4.1.73.Final]
	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:818) ~[netty-transport-classes-epoll-4.1.73.Final.jar:4.1.73.Final]
	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480) ~[netty-transport-classes-epoll-4.1.73.Final.jar:4.1.73.Final]
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378) ~[netty-transport-classes-epoll-4.1.73.Final.jar:4.1.73.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) ~[netty-common-4.1.73.Final.jar:4.1.73.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.73.Final.jar:4.1.73.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.73.Final.jar:4.1.73.Final]
	at java.base/java.lang.Thread.run(Unknown Source) ~[na:na]
```

Docker version: 20.10.12, on a Mac M1.
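
For context (this is not from the report itself): the outer reactor.core.Exceptions$ErrorCallbackNotImplemented is how Reactor surfaces an onError signal when a pipeline was subscribed without an error consumer. A minimal sketch in plain Reactor, with nothing proxy-specific assumed:

```java
import java.util.concurrent.CancellationException;

import reactor.core.publisher.Mono;

public class ErrorCallbackDemo {
    public static void main(String[] args) {
        // Subscribing with no error consumer means any onError signal is
        // rethrown wrapped in Exceptions$ErrorCallbackNotImplemented,
        // the same outer exception seen at the top of the trace above.
        Mono.error(new CancellationException("Disposed"))
            .subscribe(); // no onError callback supplied
    }
}
```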

@shakuzen
Member

I'm seeing this happen when shutting down an application that's sending metrics to the proxy server. Is that what you are seeing, or does it happen at some other time?
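
If it is shutdown-related, the client side of that path can be sketched with plain rsocket-java. This is an illustration only, assuming a local proxy on the default RSocket TCP port 7001; it is not the actual wiring of the metrics client:

```java
import io.rsocket.RSocket;
import io.rsocket.core.RSocketConnector;
import io.rsocket.transport.netty.client.TcpClientTransport;

public class DisposeOnShutdown {
    public static void main(String[] args) {
        // Connect roughly the way a metrics-pushing client would
        // (host and port here are placeholder assumptions).
        RSocket rSocket = RSocketConnector.create()
                .connect(TcpClientTransport.create("localhost", 7001))
                .block();

        // Disposing the connection is effectively what a client does on
        // JVM shutdown; on the proxy side this fires TcpDuplexConnection's
        // close listener, the code path visible in the stack trace above.
        rSocket.dispose();
    }
}
```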

@shakuzen added the bug (Something isn't working) label and removed the status: waiting for triage label on Mar 16, 2022
@jeanemvista
Author

I did not notice under what conditions the crash happens.
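
One way to check whether the crash coincides with client shutdown: have the client JVM log a marker when it begins shutting down, and compare that with the timestamp of the proxy-side exception. A minimal sketch:

```java
import java.time.Instant;

public class ShutdownMarker {
    public static void main(String[] args) throws InterruptedException {
        // Print a marker when the JVM begins shutting down, so it can be
        // correlated with the proxy container's log timestamps.
        Runtime.getRuntime().addShutdownHook(new Thread(() ->
                System.out.println("client JVM shutting down at " + Instant.now())));

        // Stand-in for the running application.
        Thread.sleep(Long.MAX_VALUE);
    }
}
```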
