support cosa init --ostree docker://quay.io/coreos-assembler/fcos:testing-devel
#2685
Comments
A specific thing this would really help unblock is reworking our build/CI flow to be more like:
The remainder of stuff here could be parallelized/configurable:
And we could now much more naturally represent stages of CI with container image tags. For example we might push…
This is my favorite part of the proposal. It would enable consumers of these images to get feedback about the overall image state. Just to clarify: when you say OCI standard keys, are you referring to https://github.com/opencontainers/image-spec/blob/main/annotations.md?
Yep! Specifically…
For coreos/coreos-assembler#2685 we want to copy e.g. `rpmostree.input-hash` into the container image. Extend the `ExportOpts` struct to support this, and also expose it via the CLI, e.g. `ostree container encapsulate --copymeta=rpmostree.input-hash ...`. And while I was thinking about this: we should by default copy some core ostree keys, such as `ostree.bootable` and `ostree.linux`, since they are key pieces of metadata.
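As a rough illustration of what that could look like, here's a minimal sketch assuming the proposed `--copymeta` flag lands as written; the repo path, ref, and image reference are placeholders, and `skopeo`/`jq` are used only to inspect the result:
```
# Sketch only: flag syntax follows the proposal above; the repo path, ref,
# and image names are placeholders.
$ ostree container encapsulate --repo=/srv/repo \
    --copymeta=rpmostree.input-hash \
    fedora/x86_64/coreos/testing-devel \
    registry:quay.io/example/fcos:testing-devel
# Inspect the resulting image metadata (labels) on the registry.
$ skopeo inspect docker://quay.io/example/fcos:testing-devel | jq .Labels
```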
ostreedev/ostree-rs-ext#234 will help with this.
I'm not sure I understand the value here. Maybe we can talk about it at the next video community meeting to make it clearer.
Was that a response to me? If so, I still don't understand how that answers the question.
Nope, just keeping track of related PRs.
I tried to elaborate on all this in coreos/fedora-coreos-tracker#828. The simplest way to say it is that our center of gravity shifts much closer to container image builds, and away from a custom JSON schema stored in a blob store. Right now the container image is exported from the blob store; this would flip things around: the source of truth is a container image, and disk image builds are secondary/derivatives of that.
Builds on ostreedev/ostree-rs-ext#235. Part of coreos/coreos-assembler#2685. Note that making use of this will require bumping ostree-ext here.
Builds on coreos/rpm-ostree#3402. Relates to coreos/coreos-assembler#2685. Basically, the source of truth for the CMD moves from being hardcoded in cosa hackily to being part of the ostree commit, which means it survives a full round trip of ostree → container → ostree → container.
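For what it's worth, a minimal way to check that round trip by hand (the image reference is a placeholder; `skopeo inspect --config` prints the OCI image configuration, which includes `Cmd`):
```
# Sketch: verify CMD survived ostree → container → ostree → container.
# The image reference is a placeholder.
$ skopeo inspect --config docker://quay.io/example/fcos:testing-devel | jq .config.Cmd
```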
Hmm, also unsure about this. At the end of the day, we'll probably still always want public images sitting in object stores so it's convenient for users/higher-level tools to download and run them without involving a container stack. Which means we'd still have something like the builds dir in S3. So there's a lot of force pulling us towards keeping it canonical too.
In our world, "images" is an ambiguous term. You're thinking disk/boot images, right? Yes, I agree. Wrapping those in a container is currently a bit of a weird thing to do.
I think the more interesting angle here is having disk images come after (follow, derive from) the container builds. But yes, when we go to generate a cosa build, we convert the container back into an ociarchive and store it in S3 as we do currently.
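For concreteness, that conversion step could be as simple as the following sketch; the image name, archive filename, and bucket are placeholders:
```
# Sketch: pull the canonical container image back down as an OCI archive,
# then store it alongside the rest of the build in S3. Names are placeholders.
$ skopeo copy docker://quay.io/example/fcos:testing-devel oci-archive:fcos.ociarchive
$ aws s3 cp fcos.ociarchive s3://example-builds/builds/latest/x86_64/
```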
I feel like if we're pushing in this direction we should probably have a larger discussion about it. Would you like to bring it up at this week's meeting?
For now, we need to support having the new-format oscontainer in `meta.json`. Part of coreos#2685; see coreos#2685 (comment) in particular.
Quite a while ago we added special-case code to kola which learned how to do in-place updates to the weird bespoke "ostree repo in container" OCP/RHCOS-specific container image. A huge benefit of the change to ostree-native containers is that this approach can now be shared across FCOS/RHCOS. (Also, rpm-ostree natively understands this, so it's much more efficient and less awkward than the wrappers we had in `pivot` around rpm-ostree.) But the benefits get even larger: coreos#2685 proposes rethinking our pipeline to more cleanly split "build OS update container" from "build disk images". With this, it becomes extra convenient to do a flow of:
- build the OS update container, push it to a registry
- `kola run -p stable --oscontainer quay.io/fcos-devel/testos@sha256...`
IOW we're not generating a disk image to test the OS update; we're using the *stable* disk image and doing an in-place update before we run tests (see the sketch below). As of right now nothing in the pipeline passes this flag, so the code won't be used except for manual testing. But with this, the number of tests we can run roughly *doubles*: for example, we can run `kola run rpm-ostree` both with and without `--oscontainer`. In most cases, things should be the same. Still, I think it will be interesting to explicitly use this for at least some tests; it's almost a full generalization of the `kola run-upgrades` bits.
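A concrete version of that flow might look like the following sketch; the registry, tag, and test selection are placeholders, and only `--oscontainer` comes from this change:
```
# Sketch: push the freshly built OS update container, then boot the existing
# stable disk image and update it in place before the tests run.
# Image names and the test selection are placeholders.
$ skopeo copy oci-archive:fcos.ociarchive docker://quay.io/fcos-devel/testos:pr-test
$ kola run --oscontainer quay.io/fcos-devel/testos:pr-test rpm-ostree
```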
Part of coreos#2685. I'm looking at replacing the guts of `cosa build ostree` with the new container-native `rpm-ostree compose image`. In order for that to work, we need two things:
- the committed overlays from `overlays/` (xref coreos/rpm-ostree#4005)
- the rendered `image.json`, which is also an overlay now
In combination with the above PR, this works now when invoked manually:
```
$ cosa build --prepare-only
$ sudo rpm-ostree compose image --cachedir=cache/buildimage --layer-repo tmp/repo src/config/manifest.yaml oci:tmp/fcos.oci
```
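As a hedged follow-up: the `oci:tmp/fcos.oci` output above is a regular OCI layout, so it can be pushed straight to a registry; the destination here is a placeholder:
```
# Sketch: publish the locally composed OCI layout; destination is a placeholder.
$ skopeo copy oci:tmp/fcos.oci docker://quay.io/example/fcos:testing-devel
```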
This is a big step towards coreos#2685. I know there's a lot going on with the pipeline, and I don't want to conflict with all that work, but at the same time, in my opinion we are just too dependent on complex Jenkins flows and our bespoke "meta.json in S3". The core of CoreOS *is a container image* now. This new command adds an opinionated flow where one can do:
```
$ cosa init
$ cosa build-cimage quay.io/cgwalters/ostest
```
And *that's it*: we do proper change detection, reading from and writing to the remote container image. We don't do silly things like storing an `.ociarchive` in S3 when we have native registries available. Later, we can build on this and rework our disk images to derive from that container image, as coreos#2685 calls for. In the near-term future, I think we can also rework `cmd-build` such that it reuses this flow but outputs to an `.ociarchive` instead. However, this code is going to need a bit more work to run in supermin.
PR in #3128, which starts the ball rolling here.
This will cause us to run through the ostree-native container stack when generating the disk images. Today for RHCOS we're using the "custom origin" stuff, which lets us inject metadata about the built source, but rpm-ostree doesn't understand it. With this, in the future (particularly after coreos/coreos-assembler#2685) `rpm-ostree status` will show the booted container and *understand it*. We'll have the digest of the OCI archive at least, though that may get changed if it gets converted to docker v2s2 when pushing to a registry. What we want in the future is to entirely rework our build pipeline along the lines of coreos/coreos-assembler#2685.
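To make "understand it" concrete: with ostree-native containers the booted image is a first-class rpm-ostree object, so stock commands apply. A minimal sketch, where the image reference is a placeholder and `ostree-unverified-registry:` is the signature-unverified transport:
```
# Sketch: rebase to (and then inspect) an ostree-native container image.
# The image reference is a placeholder.
$ sudo rpm-ostree rebase ostree-unverified-registry:quay.io/example/fcos:testing-devel
$ rpm-ostree status
```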
This is part of coreos/fedora-coreos-tracker#828 conceptually, which in retrospect was framed too broadly. Focus shifted to CoreOS layering, but that's really the "user experience" half. Re-engineering how we build and ship FCOS (the first issue) still applies.
In particular, I think we should support a `cosa init --ostree` mode that takes a container image as input and outputs just a container. We may not even generate a `builds/` directory, and no `meta.json` stuff should be created for this. Note how the input to "cosa image build" is just the ostree container, not config git (or RPMs).
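Putting the title and the sketch together, the proposed flow would be roughly as follows; the first command is from the issue title, while "cosa image build" is the proposed, not-yet-implemented command, whose exact CLI is unspecified here:
```
# Sketch of the proposed mode: no builds/ directory, no meta.json.
$ cosa init --ostree docker://quay.io/coreos-assembler/fcos:testing-devel
# "cosa image build" is the proposed command named above; exact CLI is TBD.
$ cosa image build
```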
Further, I want to emphasize that "build ostree container" and "build disk images" can be (and would normally be) separate processes. (How testing is integrated here is not depicted, but basically we'd still probably generate a qemu image to sanity-test our container builds; it would be discarded, and regenerated by the image build process only once that image had passed other approvals.)