Initial repo layout #6
Conversation
Force-pushed from 7943007 to 35e0f05
Signed-off-by: Dominykas Blyžė <[email protected]>
I believe that in imported configs the order is backwards, so I think "oldest first" is correct here (but we should verify on an actual Travis build matrix)
all/gte-10.yml (Outdated)
- 11
- 12
- 13
- 14
This could import gte-11, etc., so they're all composed together - is there a reason you didn't go that route?
That did not occur to me, TBH - I kept it simple and explicit.
I figure explicit is better than indirect for someone looking at the file as well - no need to click through and trace things?
I doubt performance is a major issue, but fewer imports should be faster too?
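For reference, the chained alternative being suggested might look roughly like this - a sketch only, built on Travis's build config imports feature; the same-repo path form and the merge mode are assumptions that would need checking against the docs:

```yaml
# all/gte-10.yml -- hypothetical chained variant: each file adds only
# its own major and pulls in the next file up the chain.
node_js:
  - 10
import:
  - source: ./all/gte-11.yml    # same-repo import; exact path form assumed
    mode: deep_merge_append     # one of Travis's documented merge modes;
                                # would need verifying that it preserves
                                # the intended version order
```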
I doubt it affects performance much; I have a chain of imports building up almost 300 jobs on all 250+ of my projects, and it works fine :-)
As for explicit vs. implicit, that's a fair point - I'm thinking more about the likelihood of an error in a single file going unnoticed, whereas in a chain of imports it would be very noticeable.
Yeah, fair point on error tracking, but that's probably more of an issue when the files are a little more complex than what we have here?
Having errors in these files would have a pretty bad knock-on effect, if the ecosystem adopts this, but we can mitigate that:
- I do have a generator script; it's pretty trivial, so I didn't bother putting it anywhere other than a branch on my fork. I don't know where to put it just yet, but unit tests on it would be the first line of defense. Alternatively, I'm pondering whether I can force Renovate to update these files and avoid having to think about where to place the script. Time enough till October to figure this out 😂
- There should be an "integration" test as well. One idea I have is to add a `.travis.yml` in here, which just imports all the available files via local import (with `node --version` or smth as the test script itself). It shouldn't matter if Travis deduplicates or duplicates the final list (it has the various merge strategies, but I don't recall anything about deduplication) - doing this would at least guarantee the file validation - I think the job just freezes if it doesn't validate? Might need to work around "You can import up to 25 build configuration snippets in total".
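A minimal sketch of that integration-test idea - assuming Travis's local import syntax and that `node --version` is enough to exercise each file; the file names here are illustrative:

```yaml
# Hypothetical .travis.yml for this repo: import every published file
# locally so that any push validates all of them (subject to the
# 25-snippet import limit mentioned above).
language: node_js
import:
  - ./all/gte-10.yml
  - ./all/gte-12.yml
  # ...one entry per shared config file
script:
  - node --version   # the "test" is just that every imported job boots
```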
I've been asking Travis for item 2 there for a long time :-/ Some kind of `travis reify` command would be amazing (we could build it, it's just better to use the canonical implementation). Certainly a viable (but complex) approach would be generating a build on a throwaway branch, downloading the derived config from it, and then cancelling the build (all theoretically doable via Travis' API).
You're also totally right about the 25-import limit, which I've told them repeatedly is way too low, so maybe your item 1 is a more reliable approach.
(To be clear, I'm content to consider this a follow-on and not a blocker.)
> Some kind of `travis reify` command would be amazing (we could build it, it's just better to use the canonical implementation)

I did build some of that into detect-node-support. It's not canonical, but I think it's close enough (checked against https://github.com/travis-ci/travis-yaml), and the only missing feature, I think, is the `if` support (because I didn't need it). There is code in there for the merge strategies, but I didn't check how it fetches things, esp. from private repos 🤔 I wonder if it's possible to write a CLI wrapper or smth around their travis-yaml thing 🤔
Doesn't seem to be backwards: https://travis-ci.com/github/dominykas/allow-scripts/builds/174521172 - so now you're going to have to tell me why you think the order matters at all :)
Travis runs jobs from top to bottom, and the newest nodes are both the most important ones and the fastest ones, so to get the fastest feedback, you want to see those results first (and you want them to thus finish first). This is more important when you have more than 5 builds (since 5 is the current throttling threshold for Travis) but can also happen when you have other projects in the same org running.
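Concretely, "newest first" means the version list - and therefore the order in which Travis spawns jobs - leads with the newest majors; a sketch (the exact set of majors is illustrative):

```yaml
node_js:
  - 14   # newest: most important and fastest, so it reports first
  - 13
  - 12
  - 11
  - 10   # oldest last
```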
Signed-off-by: Dominykas Blyžė <[email protected]>
Updated the order - I figured it was that; it does make sense here.
LGTM
Since specifying only the major version uses the latest version of that major range, wouldn't it be best to also include, in each list, an entry for the first version of the lowest major (e.g. `10.0.0` in the `gte-10` config)?
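If I read the suggestion right, it would add an explicit floor alongside the bare majors; a sketch, where `10.0.0` stands in for the first release of the lowest supported major:

```yaml
node_js:
  - 14
  - 13
  - 12
  - 11
  - 10       # a bare major resolves to the latest 10.x
  - 10.0.0   # hypothetical addition: also pin the very first release
```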
If I understand this correctly, your question is about the fact that earlier versions might not have some new features that a package under test might be using, and therefore it could break for people who do use earlier versions?

I think the general recommendation we're giving is that people should always be running the latest version in production (to have the latest security fixes, etc.), so I think it makes sense to also run tests on the latest version?

Either way, have you seen examples of people actually doing that in their testing setup? I know there are some who test on all minor versions of a major release (or possibly even all versions?), and I do have that mentioned in the suggestion in #5, but I figure that for the initial release this is the most common case that needs to be covered?
Yeah, that's what I'm trying to highlight.

I think that's a reasonable recommendation, but not everyone is set up in a way to do this easily or automatically - especially for a package that defines … Now, of course, if someone did define the lower bound of their range as higher than …

I can't say I've really been on the lookout for it in other projects, but it has been my plan for my own projects for a while. I recently started using package-preview to run integration tests against the packaged versions of my projects. With Babel in place, it wouldn't have been beneficial before, since the transpilation happened as part of the test run. Here is the config that I'm working to add along these lines. Outside of a duplicated version, which seems unrelated, it's accomplishing the goal I'm after.

You may be right that this covers the most common case, but since adding it to these configs later could cause build failures that didn't happen before, you would introduce a breaking change to these configs. Maybe this truly is a different use case, but I think the …
That's a good suggestion. I do that in my testing setups by testing every minor version - in other words, I'm not worried about x.y.0 breaking, as long as the latest x.y works. In terms of "breaking changes" to a testing matrix, adding new things that cause the build to properly fail is not a breaking change - however, removing anything is a breaking change.
This is an interesting point. Is there a way to define "latest version in a major release line" in semver?

I know the above support does not cover the needs of all maintainers, but I don't think the goal of this repo should be to support all of them (not initially, anyway). That's a tradeoff - flexibility vs. simplicity - and I favor simplicity to drive adoption, given that probably ~30-50% of the top 1000 packages do not test in the latest version.
Given that the recommended and most common case is that people support the latest release in a major release line, then using new features available in that supported version is not a breaking change, because you're not breaking the "contract"? If you've communicated that you only support the latest version, then dropping support for the versions you never intended to support is perfectly fine. Whether we have the means to communicate such support in a machine-readable format is a different question, which we should probably bring up somewhere else 🤔
Yeah, OK, I can see how it can be interpreted that way. Is there a succinct way to communicate "latest versions in all major release lines after X"?

Either way, if you're going to support X.0.0, then you want all versions in your test matrix (i.e. X.1.n, X.2.n, ...), unless you explicitly say that you're going to support only X.0.0, or only X.0.0 and X.latest.latest (does anyone do that?). I have suggested a provision in #5 on how to extend the current layout to include all minor versions in the test matrix (i.e. either in a sub-folder or via …).

Does the above provision cover the extensions you think we should add here, @travi? If so - can we continue discussing them in a separate issue and get this merged? I'm keen to start promoting this and actually making some PRs to start using it in the ecosystem.
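For illustration, the "all minors" provision could look something like this - purely a sketch of the idea referenced from #5; the file name, location, and version set are exactly what remains to be decided:

```yaml
# e.g. all-minors/gte-12.yml (hypothetical name and location)
node_js:
  - 14
  - 13
  # ...every released 12.x minor, newest first, quoted so YAML
  # keeps them as strings rather than floats:
  - "12.2"
  - "12.1"
  - "12.0"
```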
Opened up nodejs/package-maintenance#393 as a starting point for further discussion.
While this might be desired by some maintainers, it's not how semantic versioning works, and it would be a real pain as a consumer. I think there are two cases for why this is desired.

The primary one is that it is painful to configure a testing matrix that catches a change unintentionally using a feature that didn't exist all the way down to the lowest supported version.

The other case is when a change is introduced that does use a feature that did not exist in …
Even if this were possible, it would effectively mean that consumers are then required to upgrade as soon as a new version is available, or risk packages not working. Worse, it is uncertain whether it will work or not if no machine-readable information is provided, like `engines`.
I still question whether the reason for this is their intended support policy, or that it would be more burdensome on their maintenance process to verify in such a matrix. I would hope that more maintainers would be supportive of the proper range definition if configuring an automated test matrix were simpler.

Side note: I see this as similar to the common response to bug reports suggesting that a consumer of a package should delete their lockfile so that the latest version of a dependency is pulled into their project. I understand from a maintenance perspective why it is hard to test against all package versions, but raising the lower bound of the required semantic range when a new feature of a package dependency is used would allow npm itself to enforce the lockfile update and avoid the issue being filed in the first place. For many reasons, it is dangerous to assume that a consumer is always on the latest version - even within a semantic range - of a package dependency or of Node.
I don't think this is true. Semantically speaking, if you are testing in …

Sure, there are cases where some random patch version may fail if you were to test against the full matrix, but that would be a bug, and very likely to be fixed on the Node side once discovered.

In my mind, the goal of these configs should be to encourage maintainers to use a support policy that properly covers the semantic range of node versions, by testing the …
In the end, that decision is certainly in your hands. I do think it is a disservice to the community of consumers to encourage maintainers to use a policy that really means only the latest version of a major is "supported", for the reasons above, but it is your decision whether my points are valid.
@dominykas no, there is no way to define "latest major" in semver. If you want to lift the bottom threshold of your package's engines, or peer deps - which are part of your package's API - then you must do a manual bump and release. There's nothing that makes a package's acceptable versions change over time without a new release, and it's a good thing there's not.
@travi my 200+ packages' build matrix is likely the most thorough on the internet - I test almost every single Node minor that has ever existed, and my Travis job count is approaching 200 - and I think on only two occasions have I had a bug appear that was present in a …

In other words, while I fully agree with the spirit of your suggestion, I don't think it would actually provide value to the vast majority of projects or maintainers.
lgtm
Thanks everyone for the input and approvals. I'll leave this open for a few more days and will aim to merge it later in the week, unless we get any objections.
You do have some very good points - thanks! I think we'll address them once we add the "all minors" configurations. We can then start nudging people to use that, if it's not a burden to them. I will also open a new PR with the missing READMEs and make sure to include information on how to explicitly add more versions to the test matrix. At this point in time, I just want to get this rolling, so that we can start upgrading folks to supported major lines of Node.js (which is very much not the case right now).
Works for me. Thanks for giving them consideration.
Just a quick clarification in case it wasn't already clear: I was only proposing the addition of one more version for each file - a single lower-bound entry.

Something along the lines of "all minors" could be valuable to some, but I think just one lower bound would likely cover most that are interested in that verification. Happy to hold further discussion for a different issue, though.
This scaffolds out the import files as suggested in #5.