Add JET.jl to testsuite #249
Conversation
* Add JET to extras and target in Project.toml
* Add a file ./test/jet_test.jl
* Include this test in runtests.jl conditioned on Julia version >= 1.7
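For reference, a minimal sketch of what such a gated setup could look like. This is a hypothetical reconstruction, not necessarily the exact contents of the files added in this PR; `JET.test_package` and the `toplevel_logger` keyword come from JET's documented test integration, but treat the details as assumptions.

```julia
# Hypothetical sketch -- not necessarily the exact files added in this PR.

# test/jet_test.jl:
using JET
# Analyze the whole package; any JET report fails the enclosing test set.
JET.test_package("Graphs"; toplevel_logger=nothing)

# runtests.jl (excerpt), gated on the Julia version as the PR description says,
# since JET relies on recent compiler internals:
if VERSION >= v"1.7"
    include("jet_test.jl")
end
```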
Codecov Report
@@ Coverage Diff @@
##           master     #249      +/-   ##
==========================================
+ Coverage   97.23%   97.26%   +0.02%
==========================================
  Files         114      114
  Lines        6585     6579       -6
==========================================
- Hits         6403     6399       -4
+ Misses        182      180       -2
I just skimmed over these reports - do you think some of them hint at actual errors that we should fix, or are they all false positives? We obviously cannot merge this with all these incorrect reports. For now we cannot fix them, so maybe we have to wait until JET.jl allows us to run checks at a more granular level, i.e. only for certain files, methods, or even only some specific tests.
Waiting on aviatesk/JET.jl#539
One error remains: aviatesk/JET.jl#541
Wouldn't it make more sense to wait until JET is more mature? Or somehow deploy it in a way that it doesn't block CI if there is an error, so that we don't have to rewrite completely correct functions?
From what I understand, the maturity of JET is closely related to the maturity of Julia itself, so what manifests as JET issues might also indicate deeper problems with type inference.
I tried looking through the docs but didn't find a way to ignore individual functions. Of course we could encourage contributors to run JET locally before pushing, but just like for tests or formatting, there's nothing like a good old CI to beat laziness.
I agree that rewriting is a hassle, but I just took care of said hassle. From now on, if we activate JET by default, we won't be merging anything that makes it mad, and it will encourage contributors to write healthier code. Of course the definition of healthy is subjective, but at the moment there is no objective definition, and JET is one of the best tools we have to measure it. As a side note, given the size of the Graphs codebase, the magnitude of this PR seems pretty reasonable to me: "I give you 12 modified files with trivial fixes for the most part, you give me an additional layer of safety for your code from now on."
Maturity of JET could mean more than one thing. I think @simonschoelly may have been thinking of the maturity of the tooling's ergonomics. I think with a bit more work to understand the data structure, it could be made more robust.
I think there are some things we will want to disable. Maybe this is a way to look at it:

Also, there are some things I don't yet know how to handle. For example, if I trap an error and throw an error with a helpful message, JET sometimes says: "your code can throw an error here." I think, "I know, I want it to." I think there is some answer to this conundrum. I haven't thought much about it yet, nor have I asked about it.
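To illustrate the pattern being described, here is a generic, hypothetical example (not code from this package): a function that deliberately traps bad input and throws a helpful error, which JET's error analysis may still flag as a potential throw site even though throwing is the intended behavior.

```julia
# Hypothetical example of an *intentional* throw that JET's
# error-analysis mode may still report as "this code may throw".
function parse_weight(s::AbstractString)
    x = tryparse(Float64, s)
    # Trapping the bad input and rethrowing with a helpful message
    # is exactly the behavior we want here.
    x === nothing && throw(ArgumentError("expected a numeric weight, got \"$s\""))
    return x
end
```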
There's no doubt that the current API, which forces users to filter reports manually, is very cumbersome to use. I'm considering addressing this issue by introducing enhancements over this weekend. If you have any particular suggestions or requests, could you kindly leave a comment at aviatesk/JET.jl#515?

I believe the first step would be to develop a feature that disregards all reports involved with a specific annotated method. This would likely "resolve" the cases of false positives, such as when …

Still, I welcome hearing from actual users, since I anticipate that there may be additional considerations. For instance, due to technical constraints, supporting annotations through comments is currently challenging, leading me to lean towards macro-based annotations first. Yet, adding JET solely for this purpose into the package dependencies doesn't strike me as the best design choice. Therefore, I envisage the need to create a lightweight package exclusively for facilitating JET annotations. In addition, I believe there's a vast scope for discussion on other aspects as well, such as deciding what kind of information JET should be taught and what capabilities should be enabled through annotations.
The possibility of …
Just FYI: [email protected] and higher never reports such cases.
I wasn't sure that these were incorrect; I thought maybe in some sense this level of strictness is correct, and is something to override. But I think you are saying that it may not be correct at all.

I wonder if a separate system for associating comments with code, one with an API, might not be better than adding this to JET.jl itself. Then again, if it is not flexible and featureful enough, projects will implement their own. At any rate, I guess that having something like this tangled up with JET.jl proper sounds like a distraction... you mentioned macros. That's reasonable. Perhaps a macro system that could be compatible with a future comment-based system would be good. Or do you think that is a Python-ism, and that macros are good for a long-term solution?

If there were a bit more documentation or a tutorial about the data structures, I could have done a more careful job with my filter code. Again, I'm thinking of decoupling the filtering system and its UI from JET's reports. I don't have much time to commit to helping in other packages. But for my own, I can see that it may become less work for me to make a small package for filtering JET that I depend on. If experiments there are useful, maybe it's worth moving it to JET... But I'm getting ahead of myself; my job requires that I spend most of my time writing technical software in pure Python.
Could you provide a bit more detail on this? At the moment, we are considering a system to tell JET some information about user code, mainly things like "it's acceptable to ignore errors that arise within this method". However, are you proposing something that goes beyond JET, implying the communication of some information to the Julia compiler? Something like the interface declaration would fall under that category, but that's a broad topic, and it would require changes to the language itself. Alternatively, are you suggesting a filtering system that users can somehow extend? Ultimately, JET.jl is just a Julia package, and users can always extend and use it (as you said, even with the current "API", it's possible if there's a grasp of the data structure). Maybe you are suggesting the provision of another API to simplify things since the current one is not very nice, but it might be tough to discuss this without a concrete design.
I didn't mean to label it as Python-ism or anything similar. Extracting information from code comments in a reliable way demands considerably sophisticated software techniques. In other words, we need an advanced parser that can perform the parsing and lowering of a program while preserving the location and content of the comments. Recently, JuliaSyntax.jl has been introduced in Julia, which has made such tasks easier than before. However, as we prototype the feature we're discussing now, JET has not yet been integrated with JuliaSyntax.jl. This is why we're initially adopting a macro-based approach instead of a comment-based one. In short, conveying information about a method to Julia's runtime is easier using macros rather than reading comments. Naturally, in the future, we should implement comment-based functionalities as well, and when we do, it would be crucial to maintain as much compatibility with the macro-based features as possible.
"Completely correct" might not include the following desirable properties:
These are very good reasons to be aggressive about trying to incorporate JET into package tests. Nice work, @gdalle, on improving inferrability in this package!
I think your choice of allowing …
Re-run tests as soon as JuliaRegistries/General#86248 is merged |
@simonschoelly @jlapeyre with the recent JET releases there was basically no code rewrite necessary. If you're okay with this I'm gonna merge
I am OK with merging this, but maybe one last question: why do we need to change the error thrown in badj(x...) = _NI("badj")? It's not a big thing, as it is very unlikely that anyone relies on that behavior anyway, but we are still able to throw that error in other places without any issues.
Initially I changed it because JET was throwing a warning. Perhaps a later version of JET fixed it, and I could revert. I happen to think it is better Julia style to define interfaces using empty functions (with no methods) rather than a single method throwing an error by default. The end result is nearly the same in terms of user experience, but I find the first error clearer because it directly tells you what to implement.

julia> function myfunc1 end
myfunc1 (generic function with 0 methods)

julia> myfunc2(args...) = error("Not implemented")
myfunc2 (generic function with 1 method)

julia> myfunc1(1)
ERROR: MethodError: no method matching myfunc1(::Int64)
Stacktrace:
 [1] top-level scope
   @ REPL[4]:1

julia> myfunc2(1)
ERROR: Not implemented
Stacktrace:
 [1] error(s::String)
   @ Base ./error.jl:35
 [2] myfunc2(args::Int64)
   @ Main ./REPL[2]:1
 [3] top-level scope
   @ REPL[5]:1
I think this is a misunderstanding. I did not mean to imply it. I have not thought much about this. I guess Python tools attach semantics to some comments because that's all that's available to them. Julia could use this, but also has macros available.
I was thinking that this is a separate concern, and it would be best to try to maintain an interface between tools for user ergonomics (which sounds like a big project) from what JET.jl does now. But, you are in a much better position to understand whether this makes sense.
I think it's great that you and Tim are pursuing this. It's important work.
See #247
I did not fix any "reports". There are currently 159 of them.
My best understanding is that there is not much that can be done to ignore selected reports. There is a mechanism to ignore them in specified dependencies.
For other packages I hacked something together, which I include here. There are two mechanisms. For one, you can give pairs (message, filename) to skip, where message is the content of a field msg in a struct returned by JET, and filename is the file it occurs in. I could also specify the line number, but this would change when editing the file. Hopefully better tooling in JET itself, or a companion package, is in the offing.