
Investigate Copr as the build system #1795

Closed
1 task
evgeni opened this issue Nov 3, 2022 · 40 comments

@evgeni (Member) commented Nov 3, 2022

Open questions:

  • can we have our own dedicated builders (paid for by us) to allow better performance?
@ehelms ehelms changed the title Move Koji off of current AWS account to a new account or new infrastructure (e.g. Copr). Investigate Copr as the build system Nov 16, 2022
@ehelms (Member) commented Nov 16, 2022

After refreshing my Copr knowledge and digging into current availability, I have put together an initial draft of what our usage of Copr would look like. As acknowledged, there are some open questions around how we configure it and around tooling choices. The biggest initial undertaking in migrating to Copr would be getting the tooling in place: our previous Copr tooling relied on tito, and that would need to be revamped.

Copr Usage Design

Copr provides the concept of a project, which represents one or more repositories defined by the chroots (e.g. RHEL, CentOS Stream) included in the project.

The Foreman project would use Copr as a build system and as the location for staging repositories that are tested before being pulled to yum.theforeman.org.

Managing Configuration

The configuration can be stored within the foreman-packaging repository, in package_manifest.yaml or a new file, and propagated to Copr either through Obal functionality or a stand-alone script. This allows storing the configuration alongside the packages and makes it easier to keep them in sync when branching.
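As a sketch, such configuration might look like the following (the file name comes from the paragraph above, but every key here is a hypothetical schema for illustration, not something Copr or Obal defines):

```yaml
# package_manifest.yaml (hypothetical copr section)
copr:
  project: "@theforeman/foreman-nightly"
  chroots:
    - rhel-8-x86_64
  modules: []              # modules enabled in the buildroot
  external_repos: []       # extra repositories enabled during builds
  manual_createrepo: true  # "Create repositories manually" for staging
```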

Repositories

Nightly Server

  • Create new projects:
    • @theforeman/foreman-nightly
    • @theforeman/foreman-plugins-nightly
    • @theforeman/katello-nightly
  • Add chroots:
    • rhel-8-x86_64 (to be added by Copr team still)
  • Add the modules that are used
  • Add our external repositories
  • Enable "Create repositories manually" (to allow staging workflow)

Client

  • Create new projects:
    • @theforeman/foreman-client
  • Add chroots:
    • rhel-7-x86_64 (to be added by Copr team still)
    • rhel-8-x86_64 (to be added by Copr team still)
    • rhel-9-x86_64 (to be added by Copr team still)
  • Add our external repositories
  • Enable "Create repositories manually" (to allow staging workflow)

Modules

Copr can generate modulemd during repository generation, based either on configuration within Copr or on uploaded existing module metadata. This allows continuity of the current modules.

https://docs.fedoraproject.org/en-US/modularity/building-modules/copr/building-modules-in-copr/#_submit_an_existing_modulemd_yaml

Branching

  • Create new projects:
    • @theforeman/foreman-<version>
    • @theforeman/foreman-plugins-<version>
    • @theforeman/katello-<version>
  • Update the configuration in foreman-packaging and apply it to the Copr projects
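A sketch of how branching might be scripted (the nightly/version project names come from this design; whether to fork from nightly with copr-cli or create fresh projects and re-apply the foreman-packaging configuration is an open tooling choice):

```python
STREAMS = ["foreman", "foreman-plugins", "katello"]


def branch_commands(version: str) -> list[str]:
    """Derive the copr-cli invocations that branch nightly into <version>."""
    cmds = []
    for stream in STREAMS:
        src = f"@theforeman/{stream}-nightly"
        dst = f"@theforeman/{stream}-{version}"
        cmds.append(f"copr-cli fork {src} {dst}")
    return cmds


for cmd in branch_commands("3.8"):  # "3.8" is a placeholder version
    print(cmd)
```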

Scratch Builds

Copr does not have the notion of scratch builds as Koji does; instead, a new project that is a copy of the primary project must be created. The process, using foreman-nightly as an example:

  • copr-cli fork @theforeman/foreman-nightly @theforeman/foreman-nightly-scratch-<uuid>
  • Set "Delete after days" on @theforeman/foreman-nightly-scratch-<uuid> to 7 so it is automatically deleted after a week
  • Run builds targeted against the scratch project

There would be a single scratch build repo per pull request, so all RPM builds in a PR would end up in the same temporary project. Repoclosure would be run against this forked project, in theory giving a better repoclosure result.
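The scratch workflow above could be driven by a small helper that derives a unique project name per pull request and emits the copr-cli steps (copr-cli fork is from the steps above; the modify flag for the deletion timer is an assumption to verify against copr-cli's help):

```python
import uuid


def scratch_commands(base: str = "@theforeman/foreman-nightly") -> list[str]:
    """Fork the primary project into a unique, auto-expiring scratch copy."""
    scratch = f"{base}-scratch-{uuid.uuid4().hex[:8]}"
    return [
        f"copr-cli fork {base} {scratch}",
        # Assumed flag for "Delete after days"; check `copr-cli modify --help`.
        f"copr-cli modify {scratch} --delete-after-days 7",
    ]


for cmd in scratch_commands():
    print(cmd)
```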

Benefits from what we have today:

  • "Scratch builds" generate a usable, completely testable set of repositories automatically
  • RPMs would no longer need to be downloaded locally and converted to a repository for repoclosure to be performed
  • Repoclosure may become more robust, as it could run against a complete, dedicated repository

Tooling

Copr provides a few interfaces: the web UI, the copr-cli command-line tool, and the python-copr API library.

The python-copr library would allow easy integration into Obal, including support for building modules, keeping our current workflows behind the same interface.
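As a rough sketch of that integration (the project names and chroots come from the design above; treat the python-copr calls themselves as assumptions to check against the library documentation):

```python
def chroots_for(project: str) -> list[str]:
    """Hypothetical mapping from our Copr projects to their chroots."""
    table = {
        "@theforeman/foreman-nightly": ["rhel-8-x86_64"],
        "@theforeman/foreman-client": [
            "rhel-7-x86_64", "rhel-8-x86_64", "rhel-9-x86_64",
        ],
    }
    return table[project]


def submit_srpm(project: str, srpm_path: str):
    """Submit a local SRPM build, restricted to the project's chroots."""
    # Imported lazily so the helper above is usable without python-copr.
    from copr.v3 import Client  # assumption: python-copr v3 API
    client = Client.create_from_config_file()  # reads ~/.config/copr
    owner, name = project.lstrip("@").split("/")
    return client.build_proxy.create_from_file(
        ownername=owner,
        projectname=name,
        path=srpm_path,
        buildopts={"chroots": chroots_for(project)},
    )


print(chroots_for("@theforeman/foreman-client"))
```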

Comps

Comps can be uploaded to the project configuration and are scoped per chroot. This means any updates to comps will need to be identified and uploaded to the chroot after merge.

Open Questions

  • Copr projects can have multiple chroots, and a submitted build is built against all of them (e.g. rhel-8 and rhel-9). Do we want all builds coupled this way, or would we rather have dedicated projects per OS?
    • foreman-nightly-el8 vs. foreman-nightly
    • Pros:
      • Less to manage as it's configured in a single project
      • Submit a single build and it either passes or fails for all OSes
    • Cons:
      • If a build fails, it fails for all OSes, which can make onboarding a new OS harder
      • Less control/flexibility over individual OS streams of builds
    • Copr does provide the ability to run a build against a specific chroot, giving per-OS build granularity while keeping a single project with multiple chroots defined
  • How to sign packages with Copr?
  • How to sync repositories from Copr to yum.theforeman.org?

@evgeni (Member, Author) commented Jan 30, 2023

One thing that came up somewhere else:

Today we have a very crude way to build the foreman-discovery-image and we don't ship it as an RPM upstream. Downstream does ship it as an RPM, but builds it differently (using Koji/Brew), and we see a variety of problems due to the differences.
Copr does not support building ISOs, and the recommended "solution" is image-builder, but I am not sure image-builder would work for our use case (my understanding is that it customizes the RHEL installer, while we actually want a live CD with our own software).

This by no means suggests we shouldn't use Copr, just that we should also unify FDI builds, and that won't be the current downstream solution if we use Copr.

@evgeni (Member, Author) commented Jan 30, 2023

And to answer your open question: I think I prefer a layout as close to the current repo layout as possible, so no dedicated per OS projects. As you say, target chroots can be configured if needed and otherwise I think it's a fair thing to say that if one OS fails, the build as such is faulty.

If you're cautious about bootstrapping net-new OSes (like EL9), I think this can either be done in a separate bootstrap project and then copied over, or by using the config to limit chroots more granularly.

@evgeni (Member, Author) commented Feb 28, 2023

Last infra-sig you asked for explicit YAY/NAY on the design.

I'd have one question: the client repo currently has no nightly/release differentiation, which matches what we have downstream, but not upstream (right now). I'd be happy to change that, but I also don't think it's something we should piggy-back on the Copr change?

Overall I'm "YAY" tho :)

@ekohl @pcreech @Odilhao @zjhuntin any opinions here?

@ekohl (Member) commented Feb 28, 2023

I'm mentally going through the tasks that need to be done.

First of all: I think we said we want to maintain our yum.theforeman.org host, so we'll need to figure out how we sync from COPR to our repositories. Today we use rsync, but I'm not sure whether COPR supports that (or whether it's the best choice).

That also implies we consider COPR as staging repositories. How do we make it clear for users that they're not intended as real repositories? Do we include -staging- in the repo names?

Speaking of naming: should we use the chance to rename Katello to use the Foreman version numbering? So foreman-katello-3.6 instead of katello-4.8? That makes branching procedures easier since you can just replace nightly with the version number instead of figuring out which is which.

That brings us to branching. We'll need to replace the tooling we have to branch and update the procedure. Given we just branched, I do think it's the best time since I'd rather not do it under time pressure. One implication is that (at least for Foreman) https://github.com/theforeman/tool_belt becomes obsolete, which I think is a good thing.

During releasing we also sign RPMs with GPG, which relies on Koji. How do we replace this?

This is not a YAY/NAY on the design: I think it's incomplete and needs a bit of refinement.

  • foreman-nightly-el8 vs. foreman-nightly

My vote would be foreman-nightly.

@ehelms (Member) commented Mar 1, 2023

That also implies we consider COPR as staging repositories. How do we make it clear for users that they're not intended as real repositories? Do we include -staging- in the repo names?

Sounds like a good approach.

First of all: I think we said we want to maintain our yum.theforeman.org host so we'll need to figure out how we sync from COPR to our repositories. Today we use rsync, but not sure if COPR supports that (and if it's the best choice).

I'll work on tracking down this answer.

Speaking of naming: should we use the chance to rename Katello to use the Foreman version numbering? So foreman-katello-3.6 instead of katello-4.8? That makes branching procedures easier since you can just replace nightly with the version number instead of figuring out which is which.

+1

During releasing we also sign RPMs with GPG, which relies on Koji. How do we replace this?

I am asking about this one.

@ehelms (Member) commented Mar 14, 2023

During releasing we also sign RPMs with GPG, which relies on Koji. How do we replace this?

I am asking about this one.

The answer to this is that Copr can generate a key and sign packages with it, but it does not allow a key generated outside Copr to be used for signing. We can request this as a feature and see where we get. Generally this seems like it should work for us, except for the fact that we also sign the tarballs.

@ekohl (Member) commented Mar 14, 2023

Technically I think we can maintain our current process where we sign manually for a release. That they're signed with another key in copr isn't really a problem.

Perhaps then the question becomes what we want to signal to users with it. If you say that it's about integrity and we've verified the bits then I'm not sure how true that really is.

And for Debian we sign automatically. Would the same be good enough for RPMs?

@ehelms (Member) commented Mar 30, 2023

To close this issue out, we'll establish the next steps here, open follow-on issues, and close this investigation.

@ehelms (Member) commented May 4, 2023

Making a note of something we will have to handle:

  • Copr does not handle building modules to the level of our needs; we will need a post-processing solution similar to our current mashing procedure.

@ehelms (Member) commented Jul 7, 2023

@evgeni @ekohl Could y'all read over fedora-copr/copr#2782 and let me know your thoughts on the tradeoffs of the work the Copr team would have to do just for us versus us implementing the necessary steps ourselves until we get off EL8?

@ehelms (Member) commented Jul 17, 2023

Here is a proposal for how we handle modularity with Copr given modularity is deprecated and asking the Copr team to address this is a lot of work (see: fedora-copr/copr#2782).

Process proposal:

  1. Builds are created in Copr for the respective project which is 1:1 to a repository we publish (e.g. https://copr.fedorainfracloud.org/coprs/g/theforeman/foreman-nightly/)
  2. The repository is copied to a new location: https://yum.stage.theforeman.org
  3. A script is run on the stage repository to add modularity metadata
  4. Tests are run against https://yum.stage.theforeman.org in our pipelines
  5. If tests pass, the stage repository is promoted to production https://yum.theforeman.org

This proposal will require us to create a new vhost and ensure we have enough storage on web02 to support the stage repositories.

Alternatively, we could continue to use http://koji.katello.org/releases/, however, this will mean it takes us longer to get off of the Koji infrastructure after we have migrated to Copr.

@ekohl (Member) commented Jul 21, 2023

I think a new name makes a lot of sense. More than trying to shoehorn it into Koji.

The repository is copied to a new location: https://yum.stage.theforeman.org

I think we should look at consistency with debs, so, deriving from stagingdeb.theforeman.org, I'm proposing stagingyum. But this is a minor detail.

A script is run on the stage repository to add modularity metadata

AFAIK this is something we don't have on Koji now, but do have such a script on the real repos. Having those match would be a good thing for testing quality.

ensure we have enough storage on web02 to support the stage repositories.

Keeping old releases on Koji doesn't matter much, since they're hard links and we need to keep them anyway, but on web02 they'll have costs. I think we can also adopt cleaning: for example, removing repos for old (unsupported) releases. We may need to consider n-2 testing support, though I'd prefer to have n-2 and n-1 consume the release repos.

Overall 👍 for the plan.

@Odilhao (Member) commented Jul 21, 2023

Sorry for the delay. With the Copr team providing the chroot, does that mean we don't have control over the buildroot?

I'm trying to find the best design for our necessary Python packages, especially the effort to enable PEP 517 on EL8. My plan was to later create one branch or one project with all the spec files necessary to build the buildroot.

@FrostyX commented Jul 21, 2023

with copr team providing the chroot, that means that we don't have control over the buildroot?

It depends on what control you have in mind. Copr allows you to specify what packages and modules should be installed in the buildroot and what repositories should be enabled.

@FrostyX commented Jul 21, 2023

Add chroots:
rhel-7-x86_64 (to be added by Copr team still)
rhel-8-x86_64 (to be added by Copr team still)
rhel-9-x86_64 (to be added by Copr team still)

Just for the record, those are already available in Copr.

@ehelms (Member) commented Jul 22, 2023

@Odilhao Here is the working example of the configuration for Foreman: https://github.com/theforeman/foreman-packaging/pull/9290/files

Initially, I have provided support for defining buildroot packages, modules and external repos in Obal: https://github.com/theforeman/obal/blob/master/obal/data/roles/copr_project/tasks/main.yaml#L18C14-L20

@ehelms (Member) commented Jul 22, 2023

Add chroots:
rhel-7-x86_64 (to be added by Copr team still)
rhel-8-x86_64 (to be added by Copr team still)
rhel-9-x86_64 (to be added by Copr team still)

Just for the record, those are already available in Copr.

@FrostyX So we know for the future: are the buildroots based on the latest release or the .0 release? (e.g. 8.8 or 8.0)

@FrostyX commented Jul 25, 2023

@FrostyX So we know in the future, are the buildroots based off of latest or .0 release? (e.g. 8.8 or 8.0)

When I do this on our builders:

$ mock -r rhel-8-x86_64 --shell
<mock-chroot> sh-4.4# cat /etc/os-release

it says

VERSION="8.8 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.8 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.8
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.8"

Update: You can see the mock config and what repositories are used here:
https://github.com/rpm-software-management/mock/blob/main/mock-core-configs/etc/mock/templates/rhel-8.tpl

@ehelms (Member) commented Aug 4, 2023

Following up on the staging repository: we will need to decide where the work happens, based on storage and parallelization concerns. I've outlined this in the PR that adds the generation script, for feedback:

theforeman/foreman-packaging#9596 (comment)

@ehelms (Member) commented Sep 1, 2023

Let's talk about RPM signing with Copr. At this point we have a workflow that looks like:

Copy repository from Copr -> Add modularity -> Copy repository to stagingyum.theforeman.org -> Test

We have three options for package signing:

  1. All packages are automatically signed by Copr, and we can choose to use the GPG key created for a given repository as our signing key for packages (we would need a separate one for any non-RPM assets to be signed)
  2. Generate the stage repository, then halt the pipeline. Download RPMs locally from the staging repository, sign them, run createrepo, and rsync them back to stagingyum to continue testing.
  3. Via a pipeline, generate the stage repository, copy it to stagingyum on web01, sign packages on web01 automatically, run a createrepo update, and then run tests.

@ekohl (Member) commented Sep 5, 2023

All packages are automatically signed by Copr and we can choose to use the GPG key created for a given repository as our signing key for packages (we would need a separate one for any non-RPM assets to be signed)

We have foreman-release, which stores the GPG key, allowing users to simply dnf install https://yum.theforeman.org/..../foreman-release.rpm. I like that workflow. That means we should set up a process that makes sure we have the correct key in foreman-packaging. Not too different from today.

One thing to keep in mind is that Copr may have one GPG key per repo. So katello might differ from foreman, which differs from plugins, and client is yet another key. That would argue for option 2 from an end-user perspective.

Generate stage repository, then halt the pipeline. Download RPMs locally from the staging repository, sign them, createrepo and then rsync them back to stagingyum and continue testing.

This feels error prone. Because if you have:

  • copr -> stagingyum
  • stagingyum -> user machine
  • user machine -> stagingyum
  • stagingyum -> yum

Then the next time you sync new files in from copr, you will overwrite the already-signed files with copr-signed ones. Today Koji takes care of that for us, and I'm not sure how I'd rewrite the logic.

Via a pipeline generate stage repository, copy to stagingyum on web01, sign packages on web01 automatically, createrepo update and then run tests.

I'm not sure I like this approach, because in my mind GPG signing is a fixed point where you "bless" the files that they went through the proper channels. If it's automated, there's no guarantee at all anymore and supply chain attacks may be easier. On the other hand, you could argue that today we barely inspect the RPM files either.

Overall I'm still torn. I was slightly leaning to option 1, but having multiple GPG keys for a single release is tedious. We'll need to expand our process to publish them so users can securely sync them (to systems like Pulp). Perhaps the copr team can say if it's technically possible to use a key between multiple repositories.

@ehelms (Member) commented Sep 5, 2023

having multiple GPG keys for a single release is tedious. We'll need to expand our process to publish them so users can securely sync them (to systems like Pulp). Perhaps the copr team can say if it's technically possible to use a key between multiple repositories.

@FrostyX what do you think? Any related experiences you have seen for Copr users?

@ehelms (Member) commented Sep 5, 2023

From some initial testing, it does not appear that comps support in Copr does what we expect from comps in our traditional method, so I've reached out to ask whether there is a bug here or a native way to exclude packages. This may be another case where we need to handle the filtering during stage repository creation, similar to modularity.

Tracking issue for Copr -- fedora-copr/copr#2901

@ehelms (Member) commented Sep 7, 2023

For the time being I have proposed doing the comps filtering in our stage generation script -- theforeman/theforeman-rel-eng#275

Another Option

Splitting out repositories based on comps during stage generation opens up the possibility of a single build repository that is then split into end-user repositories. This would also solve the single-GPG-key issue, since all packages would be built into one repository signed by a single Copr GPG key.

Proposal for using our own key

If we stick with our own key, I'd propose the following workflow for the release process:

Stage 1
Generate stage repository -> push to stagingyum -> Run installation tests -> STOP

Stage 2
Release engineer reposyncs the repository from stagingyum -> signs packages -> rsyncs back to stagingyum -> updates the repo -> runs signature verification -> pushes RPMs to production
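Stage 2 above could be sketched as an ordered command plan (reposync, rpmsign, createrepo_c and rsync are the standard tools for these steps, but the paths, key ID, destination, and exact flags here are placeholders/assumptions):

```python
def stage2_commands(repo: str = "foreman-3.9", key: str = "PLACEHOLDER_KEY") -> list[str]:
    """Ordered manual signing pass: pull, sign, regenerate metadata, push."""
    dest = f"stagingyum.theforeman.org:/path/to/{repo}/"  # placeholder path
    return [
        f"reposync --repoid={repo} --download-path=./{repo}",  # pull staged RPMs
        f"rpmsign --addsign --key-id={key} ./{repo}/*.rpm",    # sign locally
        f"createrepo_c --update ./{repo}",                     # refresh metadata
        f"rsync -av ./{repo}/ {dest}",                         # back to stagingyum
    ]


for cmd in stage2_commands():
    print(cmd)
```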

@FrostyX commented Sep 8, 2023

having multiple GPG keys for a single release is tedious. We'll need to expand our process to publish them so users can securely sync them (to systems like Pulp). Perhaps the copr team can say if it's technically possible to use a key between multiple repositories.

@FrostyX what do you think? Any related experiences you have seen for Copr users?

Hello @ehelms,
I brought the topic up at a Copr team meeting and found out that there were some discussions in the past about the scope at which GPG keys are used. The conclusion has always been that, while having a different GPG key for every project might not be ideal in some use cases, the decision was already made and changing it would be too expensive to be worth it.

One of the discussed options was uploading someone's GPG keys to be used for their Copr namespace. We don't want this - I can elaborate if necessary.

The other option was to have a checkbox in user/group settings that would make all projects in that namespace use the same GPG key (which would be generated the same way other Copr GPG keys are but it would be assigned to the user/group namespace instead of a project). We can implement this feature if it is a hard requirement for you but it will probably take some time.

Tagging @xsuchy and @praiskup in case they want to add something.

@ehelms (Member) commented Sep 8, 2023

One of the discussed options was uploading someone's GPG keys to be used for their Copr namespace. We don't want this - I can elaborate if necessary.

Nah, we understand this one, managing the security of this would be painful at best.

The other option was to have a checkbox in user/group settings that would make all projects in that namespace use the same GPG key (which would be generated the same way other Copr GPG keys are but it would be assigned to the user/group namespace instead of a project). We can implement this feature if it is a hard requirement for you but it will probably take some time.

I think for our use case this would be too broad. I would think of it more like:

  1. Interface to tell Copr to generate a GPG key
  2. See all of my GPG keys in a list
  3. Be able to assign a GPG key to a project selectively

I would equate this to creating tokens in Github where I can assign different permissions to different tokens and manage them.

We are a bigger, and in this respect possibly more unique, project, so I understand not undertaking all of this if you don't have other stakeholders asking for it.

@ehelms (Member) commented Sep 12, 2023

After some further thought, I am leaning towards this workflow.

For Releases

  1. Release engineer generates stage repositories locally.
  2. Release engineer signs the RPMs locally and generates the repository.
  3. Release engineer runs a script to rsync local repo to stagingyum.
  4. Release engineer kicks off pipeline.

For Nightly

  1. Jenkins generates stage repository.
  2. Jenkins rsyncs to stagingyum.
  3. Jenkins runs tests.

@ekohl @evgeni

@ekohl (Member) commented Sep 19, 2023

I also thought about this and it does make sense, but we don't want to overwrite files which were already signed. I wonder how we prevent that.

Practical use case: when we release Foreman 3.9.0-rc2, how do we prevent rewriting all the packages which were already signed when we released RC1?

With Koji we have download_rpms, where we list all tagged packages including their sigs, and any package not signed with the correct key is downloaded.

Could we download the production repo metadata, list all RPMs, and then do the same for the staging repo? All files on staging that aren't in production would be downloaded and signed. Is that feasible?

@ehelms (Member) commented Sep 19, 2023

An example using repodiff to identify only changed packages:

$ repodiff --repofrompath old,https://stagingyum.theforeman.org/foreman/nightly/el8/x86_64/ --repofrompath new,https://download.copr.fedorainfracloud.org/results/@theforeman/foreman-nightly-staging/rhel-8-x86_64/ --repo-old old --repo-new new --refresh --archlist x86_64,noarch,src --simple
Added old repo from https://stagingyum.theforeman.org/foreman/nightly/el8/x86_64/
Added new repo from https://download.copr.fedorainfracloud.org/results/@theforeman/foreman-nightly-staging/rhel-8-x86_64/
old                                                                                                          24 kB/s | 3.5 kB     00:00    
new                                                                                                          17 kB/s | 2.3 kB     00:00    
Added package  : nodejs-tslib-2.6.2-1.el8
Added package  : python-websockify-0.10.0-3.el8

Modified packages
nodejs-babel-core-7.22.10-1.el8 -> nodejs-babel-core-7.22.17-1.el8
nodejs-graphql-tag-2.11.0-1.el8 -> nodejs-graphql-tag-2.12.6-1.el8
nodejs-node-gyp-6.1.0-9.el8 -> nodejs-node-gyp-6.1.0-10.el8
nodejs-node-sass-4.13.1-1.el8 -> nodejs-node-sass-4.14.1-2.el8
nodejs-react-ellipsis-with-tooltip-1.0.8-4.el8 -> nodejs-react-ellipsis-with-tooltip-1.1.1-1.el8
nodejs-theforeman-vendor-12.0.1-1.el8 -> nodejs-theforeman-vendor-12.2.0-1.el8
nodejs-uuid-3.3.2-4.el8 -> nodejs-uuid-3.4.0-1.el8
rubygem-autoprefixer-rails-10.4.13.0-1.el8 -> rubygem-autoprefixer-rails-10.4.15.0-1.el8
rubygem-autoprefixer-rails-doc-10.4.13.0-1.el8 -> rubygem-autoprefixer-rails-doc-10.4.15.0-1.el8
rubygem-css_parser-1.14.0-1.el8 -> rubygem-css_parser-1.16.0-1.el8
rubygem-css_parser-doc-1.14.0-1.el8 -> rubygem-css_parser-doc-1.16.0-1.el8
rubygem-excon-0.100.0-1.el8 -> rubygem-excon-0.102.0-1.el8
rubygem-excon-doc-0.100.0-1.el8 -> rubygem-excon-doc-0.102.0-1.el8
rubygem-execjs-2.8.1-1.el8 -> rubygem-execjs-2.9.0-1.el8
rubygem-execjs-doc-2.8.1-1.el8 -> rubygem-execjs-doc-2.9.0-1.el8
rubygem-foreman_maintain-1:1.3.5-1.el8 -> rubygem-foreman_maintain-1:1.4.0-1.el8
rubygem-foreman_maintain-doc-1:1.3.5-1.el8 -> rubygem-foreman_maintain-doc-1:1.4.0-1.el8
rubygem-globalid-1.1.0-1.el8 -> rubygem-globalid-1.2.1-1.el8
rubygem-globalid-doc-1.1.0-1.el8 -> rubygem-globalid-doc-1.2.1-1.el8
rubygem-kafo-7.0.0-1.el8 -> rubygem-kafo-7.1.0-1.el8
rubygem-kafo-doc-7.0.0-1.el8 -> rubygem-kafo-doc-7.1.0-1.el8
rubygem-mime-types-3.5.0-1.el8 -> rubygem-mime-types-3.5.1-1.el8
rubygem-mime-types-doc-3.5.0-1.el8 -> rubygem-mime-types-doc-3.5.1-1.el8
rubygem-pg-1.5.3-1.el8 -> rubygem-pg-1.5.4-1.el8
rubygem-pg-debuginfo-1.5.3-1.el8 -> rubygem-pg-debuginfo-1.5.4-1.el8
rubygem-pg-debugsource-1.5.3-1.el8 -> rubygem-pg-debugsource-1.5.4-1.el8
rubygem-pg-doc-1.5.3-1.el8 -> rubygem-pg-doc-1.5.4-1.el8
rubygem-sequel-5.71.0-1.el8 -> rubygem-sequel-5.72.0-1.el8
rubygem-sequel-doc-5.71.0-1.el8 -> rubygem-sequel-doc-5.72.0-1.el8
rubygem-sprockets-4.2.0-1.el8 -> rubygem-sprockets-4.2.1-1.el8
rubygem-sprockets-doc-4.2.0-1.el8 -> rubygem-sprockets-doc-4.2.1-1.el8

Summary
Added packages: 2
Removed packages: 0
Modified packages: 31

@ekohl (Member) commented Sep 19, 2023

That output looks hard to parse, and you would need to glob to get exact paths, but it is generally an idea that would work.
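A sketch of parsing the --simple output above into the set of new builds that need signing (the line formats are taken from the example output; verify them against your repodiff version):

```python
import re


def packages_to_sign(repodiff_output: str) -> list[str]:
    """Collect added packages and the new side of modified packages."""
    pkgs = []
    for raw in repodiff_output.splitlines():
        line = raw.strip()
        m = re.match(r"Added package\s*:\s*(\S+)$", line)
        if m:
            pkgs.append(m.group(1))
            continue
        m = re.match(r"\S+ -> (\S+)$", line)
        if m:
            pkgs.append(m.group(1))  # only the new build needs signing
    return pkgs


sample = """Added package  : nodejs-tslib-2.6.2-1.el8

Modified packages
rubygem-pg-1.5.3-1.el8 -> rubygem-pg-1.5.4-1.el8"""
print(packages_to_sign(sample))
```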

@ehelms (Member) commented Sep 19, 2023

  • Release engineer runs a script to rsync local repo to stagingyum.

How would we handle this? Right now only the yumrepostage user has the rsync script available. Would we deploy something for each user on web01 to be able to rsync signed RPMs to it, with that script putting them in the right place? And I suppose we'd add a createrepo step there.

@praiskup commented

One of the discussed options was uploading someone's GPG keys to be used for their Copr namespace. We don't want this - I can elaborate if necessary.

This feature would be relatively expensive to implement, and the question is whether it is worth it.
We could, IMO, manually upload keys if a few teams really needed it.

I think for our use case this would be too broad. I would think of it having to be more like:

  1. Interface to tell Copr to generate a GPG key
  2. See all of my GPG keys in a list
  3. Be able to assign a GPG key to a project selectively

This is an interesting idea.

After some further thought, I am leaning towards this workflow.

It seems like you propose GPG signing outside of Copr, while the building stays in Copr. Correct?
How much effort would we save you if we implemented the GPG key handling mechanism, or at least uploaded a custom key upon your request in the meantime? (This is somewhat needed to know before we start working on something non-trivial.)

@Odilhao (Member) commented Sep 28, 2023

One question: do we have any quota/rate limit while building on Copr? I'm building one project with 100+ packages and releasing locally to control the build order. It looks like one build got stuck and never started; it has been in the same status for 4 hours and counting.

@FrostyX commented Sep 28, 2023

looks like one build got stuck and never started, it's on the same status for 4 hours and counting.

We had issues with spawning new builders because subscription-manager was timing out. @xsuchy has already resolved it. There is a bit of a queue, but builds are running again.

@ehelms ehelms added this to Copr Oct 13, 2023
@ehelms ehelms moved this to In Progress in Copr Oct 13, 2023
@ekohl (Member) commented Oct 26, 2023
ekohl commented Oct 26, 2023

https://github.com/orgs/theforeman/projects/6 lists various tasks.

@ehelms (Member) commented Nov 3, 2023

Two things I am noticing that I don't have a clear understanding of:

  1. Even if rsync does not push an updated file, it appears to change the last-modified date (e.g. https://stagingyum.theforeman.org/plugins/nightly/el8/x86_64/)
  2. Some files are getting cached, I assume by Fastly, and if those packages change they are not updated from the webserver's perspective. This results in the RPM metadata having one checksum and the downloaded file having another. Example:

@ehelms (Member) commented Dec 14, 2023

This is the build system now! This is active for nightly and 3.9, so I will close this issue.

@ehelms ehelms closed this as completed Dec 14, 2023
@github-project-automation github-project-automation bot moved this from In Progress to Done in Copr Dec 14, 2023
@Odilhao (Member) commented Dec 14, 2023

Investigation completed

@evgeni (Member, Author) commented Dec 14, 2023

No idea why, but I've read "This is the build system now!" in @zjhuntin's voice.

Labels: none yet · Projects: Copr (Done) · 6 participants