Translator Proxy shuts down after disconnecting downstream miner #1426
I started drafting an integration test to reproduce this bug, but soon realized that we still lack some primitives:

```rust
#[tokio::test]
async fn tproxy_survives_downstream_disconnect() {
    let (_tp, tp_addr) = start_template_provider(None).await;
    let (_pool, pool_addr) = start_pool(Some(tp_addr)).await;
    let (_, tproxy_addr) = start_sv2_translator(pool_addr).await;
    // connect the first mining device
    // note: this function doesn't return anything;
    // ideally we need something that allows us to kill the mining device (see the sketch below)
    start_mining_device_sv1(tproxy_addr).await;
    // tproxy still up (binding to its address must fail because the port is taken)
    assert!(tokio::net::TcpListener::bind(tproxy_addr).await.is_err());
    // connect the second mining device
    start_mining_device_sv1(tproxy_addr).await;
    // tproxy still up
    assert!(tokio::net::TcpListener::bind(tproxy_addr).await.is_err());
    // here we need something to kill the first mining device
    // tproxy still up (if this panics, we still have the bug)
    assert!(tokio::net::TcpListener::bind(tproxy_addr).await.is_err());
}
```
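The kind of primitive I have in mind is sketched below. Note that `start_killable_mining_device_sv1` and its internals are hypothetical, not part of the current integration-test framework; it assumes the mining-device loop can be driven from a dedicated task instead of being detached internally, so the test gets back an `AbortHandle` it can use to kill that specific device.

```rust
use std::net::SocketAddr;
use tokio::task::AbortHandle;

// Hypothetical helper: same job as start_mining_device_sv1, but the device is
// kept on its own task and that task's AbortHandle is returned to the caller.
// Assumption: tproxy_addr is a SocketAddr and the device loop actually runs
// inside this future rather than spawning and detaching internally.
async fn start_killable_mining_device_sv1(tproxy_addr: SocketAddr) -> AbortHandle {
    let task = tokio::spawn(async move {
        start_mining_device_sv1(tproxy_addr).await;
    });
    task.abort_handle()
}
```

With something like that, the test above could call `device_a.abort()` at the point where the comment says "here we need something to kill the first mining device".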
I can confirm this was introduced by #1319. Here's what I did:
cc @Shourya742
An alternative way to reproduce the bug (using a plain TCP connection as the downstream):
This is just to show that we don't really need a CPU miner to reproduce it.
The comment above was just to showcase that there's a simpler way to try to reproduce this bug as an integration test, which is to simply emulate a downstream with a raw TCP connection.

Btw, I enabled logs by cherry-picking from #1430 to get a better understanding of what's going on.

```rust
#[tokio::test]
async fn tproxy_survives_downstream_disconnect() {
    tracing_subscriber::fmt()
        .with_env_filter(EnvFilter::from_default_env())
        .init();
    let (_tp, tp_addr) = start_template_provider(None).await;
    let (_pool, pool_addr) = start_pool(Some(tp_addr)).await;
    let (_, tproxy_addr) = start_sv2_translator(pool_addr).await;
    // emulate the first downstream
    let downstream_a = std::net::TcpStream::connect(tproxy_addr).unwrap();
    // emulate the second downstream
    let _downstream_b = std::net::TcpStream::connect(tproxy_addr).unwrap();
    // wait a bit to make sure the TCP sockets are processed
    tokio::time::sleep(std::time::Duration::from_secs(2)).await;
    // kill downstream_a
    downstream_a.shutdown(std::net::Shutdown::Both).unwrap();
    drop(downstream_a);
    // wait a bit to make sure the disconnect is processed
    tokio::time::sleep(std::time::Duration::from_secs(2)).await;
    // wait... tproxy is still up?? if the bug reproduced, this should panic!
    assert!(tokio::net::TcpListener::bind(tproxy_addr).await.is_err());
    // this should fail too, yet a new downstream can still connect
    let _downstream_c = std::net::TcpStream::connect(tproxy_addr).unwrap();
}
```

Now the puzzling part... the tProxy port remains taken after the first connection is killed 🤷🏼 and we can even see the ...
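One thing I might still try (just a guess on my side, I haven't checked whether the tproxy only registers a downstream once it has actually spoken): make the fake downstream send a plain SV1 `mining.subscribe` line before dying, so the connection looks more like a real miner.

```rust
use std::io::Write;
use std::net::{Shutdown, TcpStream};

// emulate a downstream that at least speaks some SV1 before being killed
let mut downstream_a = TcpStream::connect(tproxy_addr).unwrap();
downstream_a
    .write_all(b"{\"id\": 1, \"method\": \"mining.subscribe\", \"params\": []}\n")
    .unwrap();
// give the tproxy a moment to process the message, then kill the connection
tokio::time::sleep(std::time::Duration::from_secs(2)).await;
downstream_a.shutdown(Shutdown::Both).unwrap();
drop(downstream_a);
```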
@Shourya742 any theories on the comment above? My instinct is that ...
Can you cherry-pick the translator commits from #1360?
OK, I can see the expected behavior now:
While testing some PRs, I noticed that Translator Proxy shuts down gracefully if a downstream miner pointed at it is killed.
I'm pretty sure this behaviour was introduced in PR #1319, but further investigation is needed.
What we want to achieve is a Translator Proxy that closes the connection with any downstream that shuts down, without shutting itself down too.
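To make that intended behaviour concrete, here is a minimal sketch of the containment we're after, written as a generic tokio accept loop rather than the actual Translator code (function and variable names are illustrative only):

```rust
use tokio::io::AsyncReadExt;
use tokio::net::TcpListener;

// Minimal sketch: each downstream gets its own task, and a read error or EOF
// only ends that task; it never propagates a shutdown to the whole proxy.
async fn accept_downstreams(listener: TcpListener) {
    loop {
        let (mut socket, peer) = match listener.accept().await {
            Ok(conn) => conn,
            Err(e) => {
                eprintln!("accept failed: {e}");
                continue;
            }
        };
        tokio::spawn(async move {
            let mut buf = [0u8; 1024];
            loop {
                match socket.read(&mut buf).await {
                    // EOF: the downstream disconnected, drop only this connection
                    Ok(0) => break,
                    Ok(_n) => { /* hand the bytes to the translation logic */ }
                    Err(_) => break,
                }
            }
            println!("downstream {peer} disconnected, proxy keeps running");
        });
    }
}
```

The point is just that a downstream dropping should only terminate its own connection handling, never the accept loop or the proxy itself.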
Here are some logs representing the issue described above: