chore(linters): Fix findings found by `testifylint: go-require` for `instrumental` and parsers/processors #15887
Conversation
I see the following options for handling multiple errors in the code simulating the TCP Server in a separate goroutine:
@srebhan @DStrand1
@zak-pawel is there any reason to not collect errors in the …
@srebhan I think I prefer option number 2 (…
@zak-pawel exactly, if we do store the errors we can then also check the error messages, types etc. But option 2 also works for me, just make sure the test doesn't sit and wait for the test-timeout (10min) to be hit...
@srebhan Please check …
Download PR build artifacts for linux_amd64.tar.gz, darwin_arm64.tar.gz, and windows_amd64.zip. 👍 This pull request doesn't change the Telegraf binary size.
The code is OK, even though I don't like the channel stuff in the influx parser... But it's better than breaking things. IMO we would still be better off implementing mock-server stuff for instrumental and checking the errors/metrics outside of the goroutine.
Anyway, thanks for your work @zak-pawel!
@srebhan I would like to understand why you would prefer it that way. The only difference is the place where we "mark" the test as failed. If we do it outside the goroutine, we lose very valuable information about exactly which line the error occurred on — we would have to guess it from the content of the message...
Well, sometimes we want to test that an error occurs, e.g. if providing bad configuration settings. So my take is: returning an error via the normal communication means would be the best option (e.g. return an HTTP error code on HTTP connections). Alternatively, implement a mock server that can trace issues during communication (as I proposed); this way we can test expected error cases as well as compare, in the unit test, what actually ends up on the server side against what we expect. This also adds the flexibility to adapt metrics in the test cases, as input and expected output are in one place. The least preferable option in my view is to check the result directly on the server side, as it creates a rigid structure where the input needs to be known in the unit-test function as well as in the mocked server side. Furthermore, it provides no means to test the code stepwise and react on the unit-test side without a lot of synchronization-fu. But just my 2 cents...
…`instrumental` and parsers/processors (influxdata#15887)
Summary
This is only the first part of a larger effort to address the findings identified by `testifylint: go-require`: #15535. Once all the findings have been addressed, `testifylint: go-require` can be enabled in `.golangci.yml`.

In this PR, I'm focusing on parser/processors tests and `instrumental_test.go`. While fixing the parser/processors tests is quite straightforward, it seems that the approach to testing the simulation of the TCP server's response in a separate goroutine requires discussion — there are quite a few such tests in Telegraf's test code, and it would be great to have a consistent approach in future PRs.

Here are the findings that this PR addresses:
Checklist