Umbrella Issue: Lingering 3rd Party Resource Usage #7708
IMHO, CNCF should be contacted (by steering?) to provide a safe path for infrastructure migration.
I disagree. I think we should contact these projects to notify them that they need to stop depending on our resources (pending agreement in the discussion here that we should eliminate this), and they can contact CNCF themselves if they require additional resources: they have a better understanding of their own requirements, and all CNCF projects have maintainers with service desk access. (Also, you can only see your own service desk tickets, so filing our own is a visibility problem for those projects.) However, I also think we should discuss this on the sub-issues, because they're not all quite the same (e.g. for 2. nobody should need additional resources anyhow).
This is also roughly how kubernetes/test-infra#12863 and kubernetes/test-infra#33226 were previously handled: we reached out to each project to give a deadline and then eventually cut them off if nothing was done. It cannot permanently be our responsibility if no action is taken by the external project.
+1 on removing those projects from the K8s CI, but I don't think it's worth breaking their CI. I'm on board with whatever option we pick to close the long-standing conversation.
For (1), are we covering CI as well, or do we scope the problem to artifact hosting and distribution?
I think we can set a reasonable grace period and still shut it down if we actually reach that point, as we have in all past cases. Sometimes we attempt to help and the other project is not responsive, in which case we can't permanently limit ourselves. I agree that we should prefer to avoid breakage, though. (See also the proposal to publish no further content while not removing existing content.)
This issue should track both overall, but I intended 1) to cover pkgs.k8s.io and 3) to cover CI #7709. |
Since 2018, the K8s Infra working group (now SIG K8s Infra) has been disentangling third-party CI and content hosting from the Kubernetes project's infrastructure, as we began planning the migration of everything to Kubernetes community control (versus being largely provided and run directly by Google).
This meant kicking out, e.g., CI for rules_k8s and rules_docker, which were not actually Kubernetes projects but were using our CI due to decisions made by Googlers at the time, when it was all funded and run by Google anyhow.
Today we have all but eliminated these issues, so that the Kubernetes project can self-manage the resources needed to run the Kubernetes project, without being responsible for, or exposed to, external projects' utilization.
We have one notable recent regression, and a few specific lingering legacies. This issue tracks resolving them in total.
I don't think there are others, except perhaps the CoreDNS image used by kubeadm; that one is debatably an essential Kubernetes release artifact and again predates SIG K8s Infra. We may revisit this as well, however.
/sig k8s-infra
/priority important-longterm
cc @kubernetes/sig-k8s-infra-leads