
RFC: Cabal support for LTS Snapshots #7556

Open
emilypi opened this issue Aug 18, 2021 · 48 comments
Labels
type: discussion type: RFC Requests for Comment

Comments

@emilypi
Member

emilypi commented Aug 18, 2021

Continuing from this discussion here: https://twitter.com/bkmlep/status/1427696844662919169

What is this RFC for?

Discussion amongst the Cabal team has led us to consider the idea of supporting Stackage Nightly snapshots and LTS (Long Term Support) releases. This issue should serve as a preliminary request for comment and the start of a design document for supporting Stackage LTS snapshots and features associated with it.

Why should I care?

LTS snapshots provide closed universes of package sets that are guaranteed to work together. Many of us already work with Nix, which relies on Stackage to inform its stable package sets; working with Cabal + Nix is already implicitly working against an LTS. This would be a good feature for Cabal to support: many users strictly consume dependencies and do not particularly care about setting bounds, which is a concern for library maintainers and consumers who need finer-grained dependency management.

With #6528, we could support LTS snapshots as remote freeze files, and allow users to consume other kinds of stable package sets.
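As a rough sketch of what that might look like (the field name here is hypothetical; the actual syntax would be settled as part of #6528), a project could point at the cabal.config that Stackage already publishes per snapshot:

-- cabal.project (hypothetical syntax: the 'freeze-file' field is illustrative only)
packages: ./

-- pull a snapshot's constraints from a remotely hosted freeze file
freeze-file: https://www.stackage.org/lts-18.5/cabal.config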

When will this go in?

At least 1 year (Cabal 3.10), at most 2 years (Cabal 3.14)

I like this, how can I help?

Speak up on this issue, or get hold of a member of the Cabal team in #hackage on libera.chat

@parsonsmatt
Collaborator

parsonsmatt commented Aug 18, 2021

[admin edit, by @Mikolaj: the PVP section mentioned below has since been removed from the main post]

I'd love to be able to point cabal to a Stackage resolver. That would be fantastic.

I agree with gbaz - I'm not sure why PVP and version bounds are being discussed here. After some thought, I'm curious if you're intending to support a resolver attribute as a package-level property? If so, then I can see how it would be relevant - suppose someone has a .cabal file like

name: my-library
resolver: lts-14.5
library
  build-depends:
      base
    , aeson

While the resolver in this case specifies the versions of the libraries in question, it does not specify any version bounds. So the intent may be to take the bounds from lts-14.5 and force them to be equal in the build-depends upon publish to Hackage or similar.
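For illustration, publishing would then rewrite the build-depends above into exact pins taken from the snapshot (the version numbers below are made up for the example, not the actual lts-14.5 contents):

name: my-library
library
  build-depends:
      base ==4.12.0.0
    , aeson ==1.4.4.0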

I can see that feature being useful as an application developer, to avoid having a cabal.project or stack.yaml file. However, having those files is so useful for other things that I don't see that as a huge pain point. And, since we'd need to support something like extra-deps in stack.yaml to provide non-resolver dependencies, I don't think complicating the .cabal file format any further makes much sense.

As a library developer, I would not want to commit to == library-version.in.resolver, and would not use the feature in that context.

Having a --resolver flag on the CLI would be great for doing builds and specifying a known-good set of dependencies. And having resolver as a property in the cabal.project file would also be fantastic, though you would want to copy extra-deps too.

@emilypi
Member Author

emilypi commented Aug 18, 2021

@parsonsmatt I removed the PVP commentary - @gbaz talked me down off the ledge 😄

@gbaz
Collaborator

gbaz commented Aug 18, 2021

(deleted my earlier comments since the issue was edited)

I'd be against setting resolvers as a package-level property, as packages can work with much larger sets of version bounds than in a single resolver, and furthermore that info would make it hard to back into full dep sets for solving without a lot of extra fetching. Having a minimal cabal.project file seems fine (and note that I'd imagine we'd be able to set remote freeze files directly as a command line flag to configure).

#6528 covers remote freeze files, and I think the only slightly subtle design work is caching. The remaining aspect of bridging this is either having stackage again more prominently expose its freeze files, or creating a more general stack-package-set.yaml -> freeze file bridge as a service, which wouldn't be much work either.

I'd also like to improve hackage-server support for storing and serving package sets, which is some design work but maybe a GSoC worth of code (at most).

@Gabriella439
Contributor

My understanding is the same as @gbaz's: resolvers should be a project-level property, not a package-level property. This would also be consistent with how Nix and Stack work.

@jkachmar

At a high-level I think something like this would be quite useful for all the reasons you've listed; I don't really have much to add to this other than to be another voice of support.

Thanks for starting this conversation and writing up the RFC!


As a library developer, I would not want to commit to == library-version.in.resolver, and would not use the feature in that context.

Do you think it would be useful to have something like gen-bounds which would use the dependency info from the snapshot specified in a cabal.project file to generate lower bounds?

@parsonsmatt
Collaborator

Do you think it would be useful to have something like gen-bounds which would use the dependency info from the snapshot specified in a cabal.project file to generate lower bounds?

It'd be nice to have a list of resolvers that I intend on supporting, with version bounds specified by those resolvers if not otherwise present. This would make it easier to have accurate version bounds when trying to support a wide range of dependencies.

@angerman
Collaborator

Having support for fixed/curated package sets is certainly something I support. And as I've mentioned a long time ago, we could support this almost today if we allowed hackage-overlays not only to augment hackage, but also to restrict/constrain the available package set.

I am just a bit confused about this statement:

Many of us already work with Nix, which relies on Stackage to inform its stable package sets. Working with Cabal + Nix is already implicitly working in an LTS.

@AlistairBurrowes

👋 Just a +1 from me, a user of stack. I think with this change I would eagerly switch back to cabal, and I suspect many other stack users would too. I think cabal getting more users is good for cabal.

As an aside, in the long term I think you could even consider discontinuing stack in favor of cabal with this key addition (and perhaps a few others). Obviously that is a much broader discussion, but I think it would be good for cabal and Haskell more broadly.

@vaibhavsagar
Collaborator

Aren't these already supported by downloading the correct cabal.config from Stackage?

@gbaz
Collaborator

gbaz commented Aug 18, 2021

It'd be nice to have a list of resolvers that I intend on supporting, with version bounds specified by those resolvers if not otherwise present. This would make it easier to have accurate version bounds when trying to support a wide range of dependencies.

@parsonsmatt can you expand on how this might work? Like you could have a set of resolvers and CI would check against all of them? I imagine this would be maybe a twofold thing. First, make it easy to pass different resolvers on the command line, and second then teach the CI script to run a project multiple times, one with each setting...

I was just looking at a related ticket (#7367) on generalizing freeze file handling to decouple it as directly from cabal.project files -- I imagine some similar considerations would have to be taken into consideration in terms of where/how the configuration could be set in this proposal.

@Kleidukos
Member

As an industrial user of both stack and cabal, I am in full support of being compatible with Stackage. I am truly happy to see that the new Cabal team is motivated and dynamic on the subject of improving the tooling.

@hasufell
Member

Obviously, I support the idea of remote freeze files.


However, I'm mildly against adding direct syntax support for stack resolvers, whether it's *.cabal or cabal.project. So far this idea doesn't seem to be too strong here, but I'll list the arguments ahead of time:

  1. It will require supporting pantry syntax, which can instead be done by 3rd party tools like stack2cabal. Cabal already has syntax to express version constraints etc. and it should stick to it. All stackage versions also provide cabal.config files that can be used as cabal.project.freeze already. Ultimately, we could simply collaborate with stackage maintainers here to make this better as a service.
  2. It adds yet another way to do the same thing you can already do with freeze files. Tools that provide multiple options to do the same thing tend to become confusing. Instead, existing options need to be enhanced and better documented.
  3. Although this is a bit vague, cabal should stay true to its unix philosophy and UX. That, to me, means make as few assumptions about workflows as possible and provide powerful passive integration. That's exactly what remote freeze files are, IMO.

One problem with remote freeze files is the lack of support for pinning revisions. This can currently only be done implicitly via hackage-index, which is not the same as stackage snapshots. This needs discussion.

@michaelpj
Collaborator

michaelpj commented Aug 18, 2021

An issue if we want a stack-like workflow is that the semantics of freeze files don't seem to be the same as that of resolvers. The most important difference is that in Stack user settings (e.g. extra-deps) override the resolver, whereas with a Cabal freeze file they conflict.

This is pretty important since essentially every project I've ever worked on with a Stack configuration has needed to tweak the resolver like this in some way.

So we might need some extensions to freeze files or the solver to make this usable.

Aren't these already supported by downloading the correct cabal.config from Stackage?

Having tried this, yes that gets you a file you can use as a freeze file, but it's not really usable for the reasons above.
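For reference, the manual workflow being discussed is to save the snapshot's cabal.config as the project's freeze file. The fetched file is essentially a compiler pin plus one large constraints block, along these lines (abridged; versions illustrative):

-- cabal.project.freeze, saved from https://www.stackage.org/lts-18.5/cabal.config
with-compiler: ghc-8.10.4
constraints: aeson ==1.5.6.0,
             base installed,
             text ==1.2.4.1
-- ...and so on for every package in the snapshot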

@Kleidukos
Member

Kleidukos commented Aug 18, 2021

The cabal.config files provided by Stackage are not immune to revisions, which is a problem if we want to use them directly
(see https://www.stackage.org/lts-18.5/cabal.config for example).

It will require to support pantry syntax, which can instead be done by 3rd party tools like stack2cabal.

cabal-install could integrate stack2cabal under the hood; I don't see what the problem is here.

@hasufell
Member

hasufell commented Aug 18, 2021

@Kleidukos

The cabal.config files provided by Stackage are not immune to revisions, which is a problem if we want to use them directly
(see https://www.stackage.org/lts-18.5/cabal.config for example).

Yes, I pointed that out above. This needs to be discussed.

It will require to support pantry syntax, which can instead be done by 3rd party tools like stack2cabal.

cabal-install could integrate stack2cabal under the hood; I don't see what the problem is here.

Many:

  1. You rely on a file format you don't control and don't have any guarantees about backwards compatibility. Afaik, these aren't even versioned.
  2. You don't control what gets uploaded to the stackage server, so now you have to potentially synchronize releases with stack.
  3. It adds a dependency to possibly Cabal the library, which is a major concern. Having it live only in cabal-install will limit what 3rd party tools can do with Cabal.
  4. stack2cabal is a best-effort tool. I'm not even sure the conversion can be done losslessly.

@hasufell
Member

hasufell commented Aug 18, 2021

@michaelpj

An issue if we want a stack-like workflow is that the semantics of freeze files don't seem to be the same as that of resolvers. The most important difference is that in Stack user settings (e.g. extra-deps) override the resolver, whereas with a Cabal freeze file they conflict.

This is pretty important since essentially every project I've ever worked on with a Stack configuration has needed to tweak the resolver like this in some way.

So we might need some extensions to freeze files or the solver to make this usable.

Excellent point. Stack2cabal indeed works around this by cloning source-repository-packages and dropping the package names from the freeze file.

I think our options are this:

  1. change default behavior and have source-repository-package and packages overwrite freeze file constraints. What will this break? Is the old behavior a use case we need to support?
  2. Add a config option to control the behavior. Will this make UX worse?
  3. Provide a tool/command that does this conversion for you (editing the freeze file) without changing default behavior and without adding config setting

I'm tilting towards 1, but I'm not sure I see the whole picture of use cases (e.g. source-repository-package can reference a branch, which may change the underlying package version.. in that case you'd want it in the freeze file, no?).

@michaelpj
Collaborator

Excellent point. Stack2cabal indeed works around this by cloning source-repository-packages and dropping the package names from the freeze file.

It's not just source-repository-packages. Consider the very common case that you want a different version of foo than is in the snapshot. This is often perfectly possible while keeping everything building (well, at least for your package).

In stack.yaml, you add

extra-deps:
- foo-1.5

and that all works fine. But if you write in cabal.project

constraints:
  foo == 1.5

you will get a conflict with the constraints from the freeze file.

I don't know if it's possible given how the solver works, but if it were possible for the constraints from the freeze file to be overridden by "later" constraints, I think this would solve the problem.

(But arguably this is then the wrong behaviour for a "freeze" file, which should force you to take exactly the constraints it specifies. So maybe "freeze files" are the wrong vehicle for this, and we want something more like "default constraint files".)

@hasufell
Member

hasufell commented Aug 18, 2021

@michaelpj

Excellent points. I've given this some thought as well and I think we might wanna reconsider our approach.

Adding more files will be confusing for new users, IMO, because the interaction needs to be documented:

  • cabal.project: main project file, serves as the basis
  • cabal.project.local: overwrites cabal.project settings, but not cabal.project.freeze
  • cabal.project.freeze: overwrites cabal.project and cabal.project.local
  • cabal.project.default_constraints: doesn't overwrite any file, so is higher in the hierarchy than cabal.project

I believe this is too much.


Instead I propose to redo how we handle project files. This may be a breaking change.

Include Directives

Spec

  1. remove explicit support for cabal.project.local and cabal.project.freeze (or deprecate it)
  2. instead, allow including project files (both remote and local) from within cabal.project, giving full flexibility of order
  3. From all the files and includes, generate a cabal.project.lock that shows the end-result
  4. Allow setting certain properties/conditions on includes
    • for local ones e.g. ignore-if-absent: <bool>
    • for remote ones the fetch strategy: fetch-strategy: "always-refetch" | "once" | "etag" | ...
  5. cabal init should generate a default cabal.project that resembles the original setup with cabal.project.local and cabal.project.freeze
  6. cabal freeze will generate a cabal.project.freeze file and then add the section in cabal.project if it doesn't already exist

Example

packages: ./
          ./3rdparty/mylib

-- stackage LTS file goes first, so later entries overwrite it
include
  url: https://www.stackage.org/lts-18.5/cabal.config
  fetch-strategy: once

-- some manual constraints for our projects, using newer versions than stackage LTS
constraints: abstract-deque ==0.3

-- local settings for dev, if the file exists
include
  file: cabal.project.local
  ignore-if-absent: True

-- hard constraint requirements not overwritable (e.g. making sure that vulnerable tls versions aren't selected)
include
  file: cabal.project.freeze
  ignore-if-absent: False

As such, a cabal.project.lock is direly needed. But I think this covers all of the use cases and more, and doesn't try to merely mimic an existing workflow. It's also a simplification, IMO, since there are no special files.

This will need some thought about semantics of the lock file, about fetching, when to update the lock file etc etc.

Also: instead of relying on order, we could add a priority: <num> field. That might also make the job easier for e.g. cabal freeze and other tools.

@michaelpj
Collaborator

Instead I propose to redo how we handle project files.

I don't think this works because it's the wrong kind of overriding. The *.project overriding works at the level of settings in those files, but what we need is overriding of constraints.

You can see this today: put constraints: foo == 1.0 in cabal.project and constraints: foo == 2.0 in cabal.project.local. You will get a solver error.

And you certainly don't want a constraints setting in cabal.project.local to entirely replace a constraints setting in cabal.project.

@hasufell
Member

hasufell commented Aug 18, 2021

I don't think this works because it's the wrong kind of overriding.

Yes, constraints will have to be merged. I don't see the problem.

@fgaz
Member

fgaz commented Aug 18, 2021

iirc --allow-newer=somepkg (and/or older) followed by --constraint=somepkg==x.y works (not sure if it does in project files too)
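In project-file form, that suggestion would presumably look like the following (an untested sketch; as the comment above says, it's not confirmed that this works in project files):

-- cabal.project (sketch; behaviour unverified)
allow-newer: somepkg
constraints: somepkg ==1.5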

@merijn
Collaborator

merijn commented Aug 18, 2021

To re-iterate my comments on IRC:

  • I feel like adding opt-in support for other resolvers like stackage would be great (for people who are not me).

  • I feel that changing the default behaviour of cabal-install to use stackage resolvers is wildly unacceptable.

  • I'm worried by the lack of concrete proposal how (possibly remote) resolvers/freeze files/whatever will affect the "happy path" of cabal-install.

hvr worked hard to keep the runtime of no-op invocations of cabal-install sub-200 milliseconds, so that it is easy to invoke unconditionally (say, via makefiles). I rely fairly heavily on this and I would hate to see automatic network access to fetch remote freeze files invalidate all this. It would also create difficulties with the current hashing/recompilation avoidance.

I would really like to hear how any possible remote resolver/freeze file design plans to deal with those issues, because if adding snapshot support breaks these usecases that'd be a big regression in cabal-install.

@hasufell
Member

hvr worked hard to keep the runtime of no-op invocations of cabal-install sub-200 milliseconds, so that it is easy to invoke unconditionally (say, via makefiles). I rely fairly heavily on this and I would hate to see automatic network access to fetch remote freeze files invalidate all this. It would also create difficulties with the current hashing/recompilation avoidance.

We already discussed this here: #6528 (comment)

@jneira
Member

jneira commented Aug 18, 2021

I agree with @merijn that this should be opt-in: which snapshot would be used as the default? Stackage, a new Hackage one to be added, or one set up by the user? It's hard to make all users happy.
I would make the default snapshot configurable, defaulting to empty (no snapshot).

@michaelpj
Collaborator

Note that if we implement this through the intersection of two features (freeze files (or something like them), and remote freeze files (or something like them)), then users who care about performance have a simple option: pull down the remote file, check it in, and just use it locally. Since this is a cabal.project concern, users can make this decision for their own project as they like without affecting e.g. Hackage.

@dcoutts
Contributor

dcoutts commented Aug 18, 2021

I have an old RFC on this topic:

https://mail.haskell.org/pipermail/cabal-devel/2015-July/010214.html

@dcoutts
Contributor

dcoutts commented Aug 18, 2021

And just to reiterate the obvious points:

  • The most natural way to integrate this is at the project level and not per-package
  • No separate "modes", just have snapshots as a particular kind of freeze file / package collection that is expressed as a collection of constraints for the solver.
  • Thinking in terms of constraints makes it fit the solver style and allows flexibility, e.g. partial collections, or tweaking collections to be less strict.

Nice to have (and the subject of the RFC above): a common format for expressing and distributing collections. Obviously one would want to be able to convert fixed collections (like stackage) into this format. But it's nice to have a format that anyone can create, host, distribute.

Nice to have: distribution via hackage (as well as private local or http) so distribution is open to anyone.

@dcoutts
Contributor

dcoutts commented Aug 18, 2021

One important consideration here (and something not covered in my old RFC) is that many use cases need more than just unions of sets of constraints. Sometimes one needs to be able to override/replace rather than just union.

So one thing I played with a little in the past was a little algebra for combining collections, with both union and override. One could invent a syntax for that to use in the cabal.project file to combine named collections. That would let you do things like say "I want that LTS version with these local tweaks".

Another good thing about this approach is it might be able to replace the existing mechanism where hackage trustees tweak dependencies. Instead of tweaking .cabal files the constraint adjustments could be one of these package collections.

@hasufell
Member

So one thing I played with a little in the past was a little algebra for combining collections, with both union and override. One could invent a syntax for that to use in the cabal.project file to combine named collections. That would let you do things like say "I want that LTS version with these local tweaks".

I'm not sure what is the difference to my proposed include directives, where you would do:

-- base constraints
include
  url: https://www.stackage.org/lts-18.5/cabal.config

-- explicit constraints are right-biased union
constraints: abstract-deque ==0.3

-- right-biased union
include
  url: https://www.stackage.org/security-masks/cabal.config
  combine -- this is default and can be omitted
    constraints: union

-- override, dropping all previous constraints
include
  file: cabal.project.freeze
  combine
    constraints: override

@parsonsmatt
Collaborator

There's a lot of reinventing the wheel going on here.

The existing stack stuff is well-tested and has pretty good UX. The current design is the result of six years of feature development and bug fixes, with an explicit focus on practical users and beginner experience. In my opinion, the wise thing to do here is to reuse as much of that work and experience as possible. Changes to it should ideally be informed by things the stack team would do but can't, either because they don't want to break backwards compatibility or because it isn't feasible outside of a greenfield new feature.

Supporting Stackage snapshots directly is by far the easiest thing. The relevant parts of pantry are upstreamed, cabal learns how to deal with it (lots and lots of prior art from stack here), and the cabal.project gets support for something analogous to resolver and extra-deps.

I don't know if re-using a .freeze file for this makes sense. A .freeze file is a way of saying "Only use these dependencies." A stack resolver is a stronger guarantee: "these dependencies are known to build together, and you're intended to override or add new things to it." The two file formats have different features. As far as I can tell, the work here is either:

a) extend the freeze mechanism and format to support what resolvers do, or
b) support stack resolvers directly

It's not at all clear to me that extending the freeze mechanism is less work than just supporting stack resolvers directly. Why duplicate work if we don't have to? Or, rephrasing - Why do we need to duplicate this work?

Implementing the support for Stackage resolvers directly would be excellent UX for cabal users. I would love to be able to have, as an example:

-- cabal.project
packages
     foo

resolver
    stackage: lts-18.5  
    overrides:               -- aka extra-deps
        text-1.2.5.0
        aeson-0.6.6.6

@parsonsmatt can you expand on how this might work? Like you could have a set of resolvers and CI would check against all of them? I imagine this would be maybe a twofold thing. First, make it easy to pass different resolvers on the command line, and second then teach the CI script to run a project multiple times, one with each setting...

@gbaz With stack, you can specify a different stack file at the command-line, so you can do stack build --stack-yaml stack-ghc-8.8.yaml with a resolver+extra-deps setup that works for that. In CI, I'd have a matrix of env vars, one of which is the stack.yaml that I want to use for that build.

With a gen-bounds approach, I'm not sure how it would look - maybe something similar to the Tested-With stanza in a cabal for specifying which GHC versions it should work with. The haskell-ci stuff works with that to generate CI files - seems straightforward to add resolvers to it and then have gen-bounds work with that information.

Even now gen-bounds could use that to inform dependencies on base and possibly other wired-in boot packages?

@hasufell
Member

There's a lot of reinventing the wheel going on here.

I don't think so. We're trying to find a solution in the design space of Cabal that fits it. Just adding features from other tools in the same fashion will cause regressions in overall UX.

Supporting Stackage snapshots directly is by far the easiest thing. The relevant parts of pantry are upstreamed, cabal learns how to deal with it (lots and lots of prior art from stack here), and the cabal.project gets support for something analogous to resolver and extra-deps.

As explained earlier, it is not easy. pantry isn't under haskell namespace. You want to make it a dependency of Cabal and pull it into core libraries. That will require coordination and limit the things stack can do to it (e.g. breaking backwards-compat).

Implementing the support for Stackage resolvers directly would be excellent UX for cabal users.

Stackage resolvers aren't special. They are just a set of constraints. We should support constraints better (remote constraints in particular). Whether that's a stackage resolver or something else doesn't matter. Cabal shouldn't assume use cases.

So I think either include directives (more powerful) or the RFC from @dcoutts (a bit more specific to constraints) is the way to go.

@parsonsmatt
Collaborator

I don't think so. We're trying to find a solution in the design space of Cabal that fits it. Just adding features from other tools in the same fashion will cause regressions in overall UX.

This thread started with "Let's support stack resolvers" and has now evolved to "Let's implement a custom programming language for set manipulation in the cabal.project file format" or "Let's invent a method for include-ing different files into our cabal.config wholesale." This is NIH syndrome and overcomplication at its finest.

As explained earlier, it is not easy. pantry isn't under haskell namespace. You want to make it a dependency of Cabal and pull it into core libraries. That will require coordination and limit the things stack can do to it (e.g. breaking backwards-compat).

"Upstreaming a library" has always been easier for me than "redesigning and reimplementing all of that library's features."

So I think either include directives (more powerful) or the RFC from @dcoutts (a bit more specific to constraints) is the way to go.

Both of these suggestions are far more complicated with far worse UX than a simple resolver: stackage: lts-18.5.

If you want a complicated way of specifying these things, you can just use nix, which is a language for including other files and doing set manipulation on them. There's no need to reinvent that wheel.

@hasufell
Member

@parsonsmatt

If you want a complicated way of specifying these things, you can just use nix, which is a language for including other files and doing set manipulation on them. There's no need to reinvent that wheel.

I agree, but are we re-inventing stack here or are we trying to improve cabal? I mean, as you say about nix, it already exists.

@michaelpj
Collaborator

A stack resolver is a more strong guarantee - "these dependencies are known to build together, and you're intended to override or add new things to it." The two file formats have different features.

At least from my perspective there's something to be gained from "cablifying" the concept of a resolver. It seems to me (possibly wrongly) that a resolver just is a set of overridable constraints (i.e. constraints which can be "beaten" by conflicting user constraints). This lets us embrace the resolver workflow by adding smaller, more orthogonal features to cabal, namely:

  • Overridable constraints (which you perhaps could then use in other places that accept constraints too!)
  • Some mechanism for distributing and including sets of constraints (which you could also use for other purposes)

And there's no reason why we couldn't also have a resolver: foo sugar on top of these.

I think this is nice if what we want is the stack-resolver workflow, rather than just the feature exactly as-it-is-in-stack.
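Concretely, the sugar might look like this (entirely hypothetical syntax, sketching the desugaring described above):

-- cabal.project (hypothetical 'resolver' sugar)
resolver: lts-18.5        -- sugar for: fetch the snapshot's constraints, marked overridable

-- local constraints win over the snapshot's overridable ones
constraints: foo ==1.5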

@gbaz
Collaborator

gbaz commented Aug 18, 2021

@gbaz With stack, you can specify a different stack file at the command-line, so you can do stack build --stack-yaml stack-ghc-8.8.yaml with a resolver+extra-deps setup that works for that. In CI, I'd have a matrix of env vars, one of which is the stack.yaml that I want to use for that build.

With a gen-bounds approach, I'm not sure how it would look - maybe something similar to the Tested-With stanza in a cabal for specifying which GHC versions it should work with. The haskell-ci stuff works with that to generate CI files - seems straightforward to add resolvers to it and then have gen-bounds work with that information.

@parsonsmatt what I sketched above would let one set different files -- in fact, we can set different cabal.project files already (with different bounds for different ghcs), we just can't tie them to resolvers. So I think that use case is a good one, but covered. That said, I think referring to this approach as the gen-bounds approach is odd -- the gen-bounds command imho is unrelated to this discussion.

On the broader stuff, I agree with Michael and Duncan. As I said, 95% of the way there is remote freeze files, and those are very straightforward to implement, involving teaching cabal about a new flag, but not a new way of fetching, nor a new format, etc.

As others have noted, that doesn't quite suffice because we want some overridable syntax for bounds. But I think such a syntax is a good idea anyway. And again, extending that syntax is a pretty clean and straightforward path once the design is nailed down, leveraging a ton of code that already exists. I think we can decide we do want overriding and merging in this thread, and have a separate discussion to nail down the exact best design to do so (I can think of a few alternatives, including some perhaps not even listed here).

Vis-à-vis the concern from @merijn:

I'm worried by the lack of concrete proposal how (possibly remote) resolvers/freeze files/whatever will affect the "happy path" of cabal-install.

I discussed a number of approaches in that ticket. Most fundamentally, we can use ETags and HTTP date checks to avoid refetching unchanged files (i.e. use HTTP's own built-in caching mechanisms). Furthermore, I think it's reasonable to only check for changed files on configure rather than install. And finally, since users have to specifically request a resolver, this behavior would only occur if they did so.

Finally, I think the revisions question can be ignored in the first pass, and we can work to resolve it if/when it becomes an issue. Index-pinning is a very good approximation, and should suffice for most scenarios.
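For reference, "index-pinning" here means cabal's existing index-state field, which fixes the solver's view of the Hackage index at a point in time (timestamp below is hypothetical):

```
-- cabal.project
-- Pin the Hackage index so that later uploads and metadata
-- revisions cannot change the solver's view of the package set.
index-state: 2021-08-18T00:00:00Z
```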

@hasufell
Copy link
Member

hasufell commented Aug 19, 2021

As others have noted, that doesn't quite suffice because we want some overridable syntax for bounds. But I think such a syntax is a good idea anyway.

I think there are two ideas about this. One is new syntax in the freeze file for the constraints, such as foo ~== 1.0 (@michaelpj). The other, from what I understood of @dcoutts' RFC, is that constraint sets wouldn't have special syntax; instead, the interpretation/interaction of those sets is up to the tool (which would e.g. provide an algebra for how to combine them). That also keeps things clean, since a set is self-contained and doesn't contain special rules about specific constraints.

I believe interpreting sets on the tooling side is a good idea and allows us to implement these things gradually. The next step could be naming sets, fetching multiple freeze files, and then having an algebra for combining them in cabal.project.

I'd also point out that there's a subtle difference between a freeze file and a constraint set: freeze files can contain everything cabal.project can, pin the index state, and so on. A constraint set would be just constraints.
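To illustrate that distinction with hypothetical contents: a freeze file is a full project-configuration fragment, while a constraint set carries only the constraints field.

```
-- a freeze file may carry any project configuration:
index-state: 2021-08-19T00:00:00Z
constraints: aeson ==1.5.6.0,
             bytestring ==0.10.12.0

-- a pure constraint set would contain only:
constraints: aeson ==1.5.6.0,
             bytestring ==0.10.12.0
```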

@dcoutts
Copy link
Contributor

dcoutts commented Aug 19, 2021

I'm not sure what is the difference to my proposed include directives

@hasufell indeed I think it's very similar in concept.

@ivanperez-keera
Copy link
Contributor

ivanperez-keera commented Aug 19, 2021

At a high level, I would just say:

  • I'd like to see cabal support LTS.
  • I would not make the use of an LTS the default behaviour just yet.
  • I'd advocate for a later release date (>= 2y) as opposed to an earlier one (<2y), especially if the use of stackage's LTSs is made the default. Several installation methods currently survive in parallel, and adding one more (even if it's a "better" one) is not going to help as much, IMO, as having stability. Creators of tools/libraries/applications with Haskell (as opposed to tools for Haskell) need time to adapt. The Haskell ecosystem changes very quickly and, overall, a lot of time is spent adapting to the latest changes.

@Mikolaj
Copy link
Member

Mikolaj commented Aug 19, 2021

I'd advocate for a later release date (>= 2y) as opposed to an earlier one (<2y), especially if the use of stackage's LTSs is made the default.

I don't think making it the default in a compatibility-breaking way is even on the table. For making it the default in what cabal init produces, I agree we'd need some time to stabilize the feature, collect feedback, build awareness. For completely optional integration (in particular, incurring no performance regression), I don't think we need any particular delays, though we may have them anyway due to cabal having plenty of bugs, half-implemented features and other urgent tasks keeping contributors occupied.

@emilypi
Copy link
Member Author

emilypi commented Aug 26, 2021

It sounds like there's overwhelming support for this, and I'm happy to include it for the Cabal 3.8 release!

@gbaz
Copy link
Collaborator

gbaz commented Aug 26, 2021

I'd like to sketch a slightly modified version of @hasufell's proposal, based on some IRC chat:

include
  [url | file]: xxxx
  fetch-strategy: [always | once]

And that's it!

For now we leave .freeze and .local semantics as is -- we could add more configuration in a future release. This means that we can therefore omit the ignore-if-absent thing for now.

This only leaves the question of the combining stuff. But if we consider the use-cases, there's only two that matter in my opinion. First: some default package set that we may want to override (i.e. bumping the bytestring version). Second: pinning down a specific version for freeze purposes. If we think in terms of "and" and "or" the first scenario is "union with or" and the second is "union with and". But both cases are covered by simply having the semantics be "always override." For version constraints, each declaration of new constraints would simply replace the old constraints. For flag constraints, we would not override on a per-package basis, but a per-flag basis. I.e. "-foo" would override "+foo" but "-bar" wouldn't clear out any existing "foo" flags.

This should avoid needing to introduce any explicit package algebra for now, or modifying existing constraint syntax. As far as I know it should not harm any existing behavior, and it gives us a default that is easy to understand and which obeys least surprise.
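Under the "always override" semantics sketched above, a later stanza would behave like this (package names, versions, and flags are all hypothetical):

```
-- from the included package set:
constraints: bytestring ==0.10.12.0,
             somepkg +foo

-- from the user's project file (later, so it wins):
constraints: bytestring ==0.11.1.0,
             somepkg -bar

-- result: bytestring ==0.11.1.0  (version constraint replaced per package)
--         somepkg +foo -bar      (flags merged per individual flag)
```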

(edit: the one use case this doesn't cover is "negative collections" of blacklisted versions, etc. that said, the preferences and deprecation mechanism on hackage give us at least a single "global" way of warning off certain packages. I think the negative collections idea is nice, but should be considered as a future extension of this work, since they're not the main initial use case we want to target, and it simplifies things considerably to just not worry about all the "algebra" questions of combining for a first cut)

Thoughts?

@michaelpj
Copy link
Collaborator

For version constraints, each declaration of new constraints would simply replace the old constraints.

I'm a bit unclear on what the behaviour of overriding sets of constraints is supposed to be. Let me list out some options that I see and see if that clarifies things.

  1. Per constraints stanza

That is:

constraints: foo == 1.0, bar == 1.0
constraints: foo == 2.0

would give us just foo == 2.0. I'm pretty sure nobody meant this but just including it for completeness.

  2. Per package

That is:

constraints: foo == 1.0, bar == 1.0
constraints: foo == 2.0

would give us foo == 2.0, bar == 1.0, which seems good.

But what about this case? (ref: "adding inequality to existing equality")

-- from "resolver"
constraints: foo == 1.0
-- from "user project file"
constraints: foo >= 1.0

If this gives us just foo >= 1.0, we could pick foo-2.0 in this scenario. I think this is less nice than getting foo >= 1.0, foo == 1.0 and being forced to pick foo-1.0.

  3. Per-package, but overriding only in case of conflict

The general case here is where the "overriding" constraints for a package p do not conflict with the earlier constraints for that package. Then it seems nicer to keep them.

However, I'm not sure it's easy to actually specify what this behaviour should be, so maybe it's not worth it. We could try and say something like:

s1 `overridingUnion` s2 = s2 ∪ { c ∈ s1 | {c} ∪ s2 is satisfiable }

or something, but quite possibly this has ugly corner cases and might be hard to compute.

Anyway, I think the "adding inequality to existing equality" usecase is worth considering, but it's probably fairly niche and shouldn't dictate the design.

the one use case this doesn't cover is "negative collections" of blacklisted versions, etc.

Doesn't it? I guess it depends what you want. I think the design you proposed (assuming we're overriding per-package) would let you give a set of blacklisted packages, but allow a user to override them, i.e.

-- poor man's negation, in the "resolver"
constraints: foo > 1.0 || foo < 1.0
-- from "user project file"
constraints: foo == 1.0

would give us foo == 1.0, which seems fine. What it wouldn't let you do is blacklist versions in a way that the user can't override, but that seems right to me.

@gbaz
Copy link
Collaborator

gbaz commented Aug 27, 2021

I meant 2 -- full overrides per package, since I think the behavior will be clearest and easiest to implement.

As for blacklists, yes, this will give us what you describe -- overridable ones. I suppose that's OK, although I worry a bit about the UX.
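For concreteness, here is a small Python sketch of the option-2 semantics described above: later version constraints replace earlier ones wholesale per package, while flag choices merge per individual flag. This is purely illustrative; the data model is made up and has nothing to do with cabal-install's actual internals.

```python
# Illustrative sketch of the "always override" combining semantics:
# - version constraints: later declaration fully replaces the earlier
#   one for the same package
# - flag assignments: merged per individual flag, later wins
# Hypothetical data model, not cabal-install's actual types.

def combine(base, override):
    # deep-copy the base so we never mutate the caller's data
    result = {
        pkg: {"version": e.get("version"), "flags": dict(e.get("flags", {}))}
        for pkg, e in base.items()
    }
    for pkg, e in override.items():
        merged = result.setdefault(pkg, {"version": None, "flags": {}})
        if e.get("version") is not None:
            merged["version"] = e["version"]      # per-package replacement
        for flag, value in e.get("flags", {}).items():
            merged["flags"][flag] = value         # per-flag replacement
    return result

lts = {"bytestring": {"version": "==0.10.12.0",
                      "flags": {"integer-simple": False}}}
user = {"bytestring": {"version": "==0.11.1.0",
                       "flags": {"dev": True}}}

combined = combine(lts, user)
# bytestring's version constraint is fully replaced;
# the two flag assignments merge, since they touch different flags.
```

Note that, under these semantics, the base set's `integer-simple` choice survives because the override never mentions that flag, matching the "-bar wouldn't clear out any existing foo flags" behaviour described above.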

@phadej
Copy link
Collaborator

phadej commented Aug 30, 2021

FWIW:

[polinukli] /codetmp % cabal get nix-paths
Unpacking to nix-paths-1.0.1/
[polinukli] /codetmp % cd nix-paths-1.0.1 
[polinukli] /codetmp/nix-paths-1.0.1 % cabal build -w ghc-8.10.4
Resolving dependencies...
cabal: Could not resolve dependencies:
[__0] trying: nix-paths-1.0.1 (user goal)
[__1] next goal: nix-paths:setup.Cabal (dependency of nix-paths)
[__1] rejecting: nix-paths:setup.Cabal-3.2.1.0/installed-3.2.1.0 (conflict:
nix-paths => nix-paths:setup.Cabal>=1.9 && <1.25)
[__1] skipping: nix-paths:setup.Cabal-3.6.0.0, nix-paths:setup.Cabal-3.4.0.0,
nix-paths:setup.Cabal-3.2.1.0, nix-paths:setup.Cabal-3.2.0.0,
nix-paths:setup.Cabal-3.0.2.0, nix-paths:setup.Cabal-3.0.1.0,
nix-paths:setup.Cabal-3.0.0.0, nix-paths:setup.Cabal-2.4.1.0,
nix-paths:setup.Cabal-2.4.0.1, nix-paths:setup.Cabal-2.4.0.0,
nix-paths:setup.Cabal-2.2.0.1, nix-paths:setup.Cabal-2.2.0.0,
nix-paths:setup.Cabal-2.0.1.1, nix-paths:setup.Cabal-2.0.1.0,
nix-paths:setup.Cabal-2.0.0.2 (has the same characteristics that caused the
previous version to fail: excluded by constraint '>=1.9 && <1.25' from
'nix-paths')
[__1] rejecting: nix-paths:setup.Cabal-1.24.2.0,
nix-paths:setup.Cabal-1.24.0.0, nix-paths:setup.Cabal-1.22.8.0,
nix-paths:setup.Cabal-1.22.7.0, nix-paths:setup.Cabal-1.22.6.0,
nix-paths:setup.Cabal-1.22.5.0, nix-paths:setup.Cabal-1.22.4.0,
nix-paths:setup.Cabal-1.22.3.0, nix-paths:setup.Cabal-1.22.2.0,
nix-paths:setup.Cabal-1.22.1.1, nix-paths:setup.Cabal-1.22.1.0,
nix-paths:setup.Cabal-1.22.0.0, nix-paths:setup.Cabal-1.20.0.4,
nix-paths:setup.Cabal-1.20.0.3, nix-paths:setup.Cabal-1.20.0.2,
nix-paths:setup.Cabal-1.20.0.1, nix-paths:setup.Cabal-1.20.0.0,
nix-paths:setup.Cabal-1.18.1.7, nix-paths:setup.Cabal-1.18.1.6,
nix-paths:setup.Cabal-1.18.1.5, nix-paths:setup.Cabal-1.18.1.4,
nix-paths:setup.Cabal-1.18.1.3, nix-paths:setup.Cabal-1.18.1.2,
nix-paths:setup.Cabal-1.18.1.1, nix-paths:setup.Cabal-1.18.1,
nix-paths:setup.Cabal-1.18.0, nix-paths:setup.Cabal-1.16.0.3,
nix-paths:setup.Cabal-1.16.0.2, nix-paths:setup.Cabal-1.16.0.1,
nix-paths:setup.Cabal-1.16.0, nix-paths:setup.Cabal-1.14.0,
nix-paths:setup.Cabal-1.12.0, nix-paths:setup.Cabal-1.10.2.0,
nix-paths:setup.Cabal-1.10.1.0, nix-paths:setup.Cabal-1.10.0.0,
nix-paths:setup.Cabal-1.8.0.6, nix-paths:setup.Cabal-1.8.0.4,
nix-paths:setup.Cabal-1.8.0.2, nix-paths:setup.Cabal-1.6.0.3,
nix-paths:setup.Cabal-1.6.0.2, nix-paths:setup.Cabal-1.6.0.1,
nix-paths:setup.Cabal-1.4.0.2, nix-paths:setup.Cabal-1.4.0.1,
nix-paths:setup.Cabal-1.4.0.0, nix-paths:setup.Cabal-1.2.4.0,
nix-paths:setup.Cabal-1.2.3.0, nix-paths:setup.Cabal-1.2.2.0,
nix-paths:setup.Cabal-1.2.1, nix-paths:setup.Cabal-1.1.6,
nix-paths:setup.Cabal-1.24.1.0 (constraint from minimum version of Cabal used
by Setup.hs requires >=3.2)
[__1] fail (backjumping, conflict set: nix-paths, nix-paths:setup.Cabal)
After searching the rest of the dependency tree exhaustively, these were the
goals I've had most trouble fulfilling: nix-paths:setup.Cabal, nix-paths

However https://www.stackage.org/lts-18.8/package/nix-paths-1.0.1 is in Stackage LTS 18.8 (GHC-8.10.6)

I was really surprised to find this, but it makes a point: Stackage snapshots should be test-built with cabal-install too, if Stackage support is integrated into cabal-install.


Somewhat related: Stackage snapshots record (also automatic) flag selections, which break across different environments (operating systems). I actually have no idea to what extent Stackage snapshots are built on Windows (and macOS), and how these differences are expressed.

Anyway, the collection grammar should be expressive enough to allow flipping flags in different (statically known) environments. (Or people have to rely on automatic flags being selected correctly, which should be the case if the branches are mutually exclusive; that is the design guideline, but it is not enforced, so paranoid people may still want to pin the automatic flags too.)
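Something along these lines could let a collection pin flags per environment (a sketch only; whether project-file conditionals would be the mechanism was an open question, and the package and flag names below are made up):

```
-- sketch: environment-dependent flag pinning in a collection
if os(windows)
  constraints: somepkg -use-posix
else
  constraints: somepkg +use-posix
```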

@emilypi emilypi added the type: RFC Requests for Comment label Sep 9, 2021
@gbaz
Copy link
Collaborator

gbaz commented Jan 5, 2022

related: #7783

This covers the ability to include freeze files, other project files, etc in project files. Which leaves only the constraints stuff, which really should be a distinct PR.

@AlistairB
Copy link

Hi, is this feature still planned for 3.8? I'm a stack user who is eager to switch to cabal if this support is in place.

@gbaz
Copy link
Collaborator

gbaz commented May 8, 2022

In 3.8 we will have conditionals and includes (including remote) in cabal project files. So it will cover many use cases. However, we will not yet have the ability to override constraints in included files. So if you include a given lts, you will not be able to bump the version of anything in the lts, outside of downloading and modifying its freeze file, then depending on the modified freeze file.

There's a design for changing the semantics of multiple constraints on the same package to allow later constraints to override earlier ones, but it will not make it in time for 3.8.
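The workaround described above would look roughly like the following (the snapshot URL pattern is Stackage's real cabal.config endpoint; the local file name is made up):

```
-- 1. download the snapshot's freeze file, e.g.
--      https://www.stackage.org/lts-18.8/cabal.config
-- 2. edit the constraint you need to bump in the local copy
-- 3. point cabal.project at the edited copy:

packages: .
import: ./lts-18.8-modified.config
```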

@ulysses4ever
Copy link
Collaborator

@gbaz you write:

In 3.8 we will have conditionals and includes (including remote) in cabal project files. So it will cover many use cases. However, we will not yet have the ability to override constraints in included files. So if you include a given lts, you will not be able to bump the version of anything in the lts, outside of downloading and modifying its freeze file, then depending on the modified freeze file.

There's a design for changing the semantics of multiple constraints on the same package to allow later constraints to override earlier ones, but it will not make it in time for 3.8.

Is there a ticket describing the feature request and/or design you mentioned? Should the current ticket be closed as completed in its basic form? (The ticket title reads "support for LTS", not "support for overriding constraints from LTS".)
