feat: add option to use native dotnet http handler for http(s) #35
Conversation
add WriteClosableStreamWrapper related to stdin operations for nativehandler
Force-pushed from 2b86300 to 73203b9.
@HofmeisterAn This way of testing different DockerClients might help with the SSH feature too.
(otherwise performance test fails with connect timeout)
@HofmeisterAn I added a performance test, but it fails locally for the ManagedHttpHandler when the thread count is greater than ~60 (depending on hardware). I also had to raise the connect timeout for named pipes; otherwise these tests fail too.
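A minimal sketch of the kind of parallel load test described above. This assumes the public Docker.DotNet API (`DockerClientConfiguration`, `ISystemOperations.PingAsync`); the endpoint URI and thread count are placeholders, and the class name is hypothetical:

```csharp
// Sketch of a parallel load test against one client configuration.
// A handler that opens a new connection per request tends to exhaust
// sockets (or hit connect timeouts) once concurrency grows.
using System;
using System.Linq;
using System.Threading.Tasks;
using Docker.DotNet;

public static class PerformanceTestSketch
{
    public static async Task RunAsync(Uri endpoint, int threadCount = 60)
    {
        using var client = new DockerClientConfiguration(endpoint).CreateClient();

        // Fire many concurrent pings against the daemon.
        var tasks = Enumerable.Range(0, threadCount)
            .Select(_ => client.System.PingAsync());

        await Task.WhenAll(tasks);
    }
}
```

Running the same test body once per client configuration (npipe, unix, http, https) is what surfaces a failure in the client rather than in the test itself.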
Thanks for the contributions and effort! Could you share a bit more about why you chose to implement the native HTTP handler? I understand that using built-in classes is generally the better approach and something we should aim for instead of custom implementations, but I am curious whether there are other reasons, since this PR touches some critical parts. I really appreciate the extra tests; they are very helpful.
@HofmeisterAn Added memory output to the new tests:
do not use automatic wrapper for dotnet 5+
Could you quickly elaborate on what this PR is about? Does it address a specific issue? What changes are included?
Are there any other changes besides that (apart from the tests)? I like the idea of testing all client configurations, but I don't think it'll work out of the box (unless the endpoints are available). We'll probably need to make it configurable and adjust the CI pipeline (agent configuration).
I updated my first comment. I need more time to fix the test actions with 'dind'; it would also be good to test the https client with self-signed certificates.
* configure daemons and clients to match GitHub and local test environments
* use the runner's temp directory for certs
* checkout in a different path
* reduce performance test asserts
* give the task some time to start monitoring
@HofmeisterAn The new parallel test for the 'ManagedHttps' client also fails, and currently I don't know why, but I don't think it's related to the test itself. See here: https://github.com/bruegth/TestContainers.Docker.DotNet/actions/runs/17950135102/job/51046871489#step:9:18
… managedHttps Client
Maybe related to dotnet/runtime#107051?
I haven't had the time to look closely at the changes yet, but they do seem quite complex. Maybe it makes sense to split them into smaller chunks. I'll take a closer look in the next few days. BTW, we use a similar setup/configuration in Testcontainers to test the SSL support/implementation; maybe that helps (I assume in general it should work). You can find the fixtures (
I see, but testing with different fixtures that create Docker dind containers adds a dependency on Testcontainers to this project. We would also need to write the same tests again for each fixture, so I think my solution is smarter: the tests are only written once, and a failure shows up in the client rather than in the test. If you wish, I'll create one PR that only adds tests for the clients and another one that adds the native handler, but before that I need to know whether you accept this PR in general.
I meant this as a reference to a working dind setup running on GH-hosted runners.
Using built-in classes is great, and moving further in that direction sounds good 👍. Proper abstractions would really help (maybe start with DI, then add more handlers), especially if we support more protocols in the future (like SSH). WDYT? My only concern is that too many if-else blocks make the code harder to maintain. It's also worth keeping in mind that the library wasn't originally designed for many of the things it's doing now.
@HofmeisterAn Please take a look at #44. But that solution uses reflection, which makes it difficult for AOT in the future.
@HofmeisterAn Closing this one in favor of PR #44.
In high-load scenarios (many calls to the Docker daemon), the custom HTTP handler seems to consume a lot of sockets, memory, and CPU time.
For these scenarios, this new option uses the native HTTP handler, which works with socket pools and consumes less memory and CPU time. However, the native HTTP handler does not support 'npipe' or 'unix' socket connections.
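The option described above could be sketched as a scheme-based handler selection. This is an illustration, not the PR's actual code: `SocketsHttpHandler` and its `PooledConnectionLifetime` property are the real built-in .NET handler API, but the class and method names here are hypothetical:

```csharp
// Sketch: pick the built-in pooled handler only for http/https endpoints.
// npipe:// and unix:// endpoints still need the library's custom handler.
using System;
using System.Net.Http;

public static class HandlerSelectionSketch
{
    public static HttpMessageHandler Create(Uri endpoint, bool useNativeHandler)
    {
        bool isHttp = endpoint.Scheme == Uri.UriSchemeHttp
                   || endpoint.Scheme == Uri.UriSchemeHttps;

        if (useNativeHandler && isHttp)
        {
            return new SocketsHttpHandler
            {
                // Reuse pooled connections instead of opening
                // a new one per request.
                PooledConnectionLifetime = TimeSpan.FromMinutes(5)
            };
        }

        // Fallback for npipe/unix: the library's custom managed handler
        // would be returned here (omitted in this sketch).
        throw new NotSupportedException(
            $"Scheme '{endpoint.Scheme}' requires the custom managed handler.");
    }
}
```

The pooled connections are what reduce socket and CPU pressure under high concurrency, since requests reuse open connections instead of paying a connect/TLS handshake each time.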