
OTA-1378,OTA-1379: add retry logic for pulling images and less logs for sigs #969

Merged: 3 commits merged into openshift:master from fix-sig-log-retry on Oct 24, 2024

Conversation

PratikMahajan (Contributor)

We're adding retry logic that re-attempts pulling layers when a pull fails for any reason. We retry 3 times before ultimately failing.

Also moved the warn log for signature pulls down to debug, and added a counter that reports how many signature images we've ignored.
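For illustration only, here is a minimal Rust sketch of the retry shape described here. The helper name, signature, and error handling are assumptions, not the PR's actual code; only the fixed count of 3 attempts comes from the description above.

  // Minimal sketch: retry a fallible async fetch up to 3 times before
  // giving up. `with_retries` and its signature are illustrative.
  async fn with_retries<T, E, F, Fut>(mut fetch: F) -> Result<T, E>
  where
      F: FnMut() -> Fut,
      Fut: std::future::Future<Output = Result<T, E>>,
  {
      const MAX_ATTEMPTS: usize = 3;
      let mut last_err = None;
      for _ in 0..MAX_ATTEMPTS {
          match fetch().await {
              Ok(value) => return Ok(value),
              Err(e) => last_err = Some(e),
          }
      }
      // The loop ran at least once, so an error was recorded.
      Err(last_err.unwrap())
  }

In the PR this shape is applied to the manifest, manifest-ref, and blob fetches, per the commit messages further down.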

openshift-ci bot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) on Oct 23, 2024.
PratikMahajan force-pushed the fix-sig-log-retry branch 5 times, most recently from 4d9eb3f to 0368585, on October 24, 2024 at 15:14.
PratikMahajan changed the title from "add retry logic for pulling images and less logs for sigs" to "OTA-1379: add retry logic for pulling images and less logs for sigs" on Oct 24, 2024.
openshift-ci-robot added the jira/valid-reference label (Indicates that this PR references a valid Jira ticket of any type.) on Oct 24, 2024.
openshift-ci-robot commented Oct 24, 2024

@PratikMahajan: This pull request references OTA-1379, which is a valid Jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.18.0" version, but no target version was set.

In response to this:

We're adding retry logic that re-attempts pulling layers when a pull fails for any reason. We retry 3 times before ultimately failing.

Also moved the warn log for signature pulls down to debug, and added a counter that reports how many signature images we've ignored.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

Review thread on the diff, at the line: if tag.contains(".sig") {
Member:

Tag-based discovery is one option for finding Sigstore signatures. In the future, we might move to listing referrers (containers/image#2030). But if we do, the failure modes for this line:

  • Misclassifying a non-sig as a Sigstore signature because it happens to use this tag structure, or
  • Misclassifying a Sigstore signature as a non-sig, non-release ignored image,

both seem low, so 🤷, I'm ok with this heuristic.

PratikMahajan (Contributor, Author):

Misclassifying a non-sig as a sig will always be a risk if we do string comparison, but IMO it should be rare.
If a signature gets classified as a non-sig, the logs should bring that to our attention, so I'm not too worried about the mismatch.
We can also change this logic once dkregistry gets listing-referrers support and we pull it downstream into Cincinnati.
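To make the heuristic concrete, here is a small, hypothetical Rust example of the string comparison being discussed. The sample tags are illustrative only, based on cosign's convention of pushing Sigstore signatures under tags shaped like sha256-<digest>.sig; nothing here is taken from the PR's code beyond the contains(".sig") check.

  // The heuristic under discussion: a plain substring check on the tag.
  fn looks_like_sigstore_signature(tag: &str) -> bool {
      tag.contains(".sig")
  }

  fn main() {
      // cosign-style signature tag (digest shortened for readability).
      assert!(looks_like_sigstore_signature("sha256-0123abcd.sig"));
      // An ordinary release tag does not match.
      assert!(!looks_like_sigstore_signature("4.16.20-x86_64"));
  }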

move the encountered-signatures log from warn to debug and
count the number of signatures as well as invalid releases,
logging the counts instead.

add retry logic so we're more resilient to failures on the
container registry side: we try fetching the manifest and
manifest ref 3 times before ultimately failing.

retry fetching the blob instead of erroring out and erasing
the progress made up to that point.
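A rough sketch of what the third commit describes: retry the failing blob fetch in place so that layers already downloaded are kept. fetch_blob, the error type, and the function names are hypothetical stand-ins, not the project's actual API.

  // Hypothetical stand-in for the real registry call.
  async fn fetch_blob(digest: &str) -> Result<Vec<u8>, String> {
      // The real code streams the blob from the container registry.
      Ok(digest.as_bytes().to_vec())
  }

  // Retry each blob up to 3 times; earlier layers stay in `layers`,
  // so a transient failure no longer erases the progress so far.
  async fn fetch_all_layers(digests: &[&str]) -> Result<Vec<Vec<u8>>, String> {
      const MAX_ATTEMPTS: usize = 3;
      let mut layers = Vec::with_capacity(digests.len());
      for digest in digests {
          let mut result = fetch_blob(digest).await;
          for _ in 1..MAX_ATTEMPTS {
              if result.is_ok() {
                  break;
              }
              result = fetch_blob(digest).await;
          }
          layers.push(result?);
      }
      Ok(layers)
  }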
Review thread on the new error handling:

  Err(e) => {
      // signatures are not identified by dkregistry and not useful for the
      // cincinnati graph; don't retry, return the error
      if tag.contains(".sig") {
          return Err(e);
Member:

and then this error bubbles up and is converted to a debug message via the .sig branch of fetch_releases's get_manifest_layers handling in 022a8d6.
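A hypothetical, self-contained sketch of that caller-side handling: a failure on a .sig tag is downgraded to a debug message and counted, while any other failure still propagates. get_manifest_layers is a stub here, and the log macros assume the log crate; neither is the project's actual code.

  // Stub standing in for the real manifest-layer fetch.
  async fn get_manifest_layers(repo: &str, tag: &str) -> Result<Vec<String>, String> {
      Err(format!("no manifest layers for {}/{}", repo, tag))
  }

  async fn scan_tags(repo: &str, tags: &[&str]) -> Result<(), String> {
      let mut ignored_signatures: u64 = 0;
      for tag in tags {
          match get_manifest_layers(repo, tag).await {
              Ok(_layers) => { /* build a release from the layers */ }
              Err(e) if tag.contains(".sig") => {
                  // Downgraded from warn to debug; counted instead of
                  // logged per image.
                  log::debug!("ignoring signature tag {}/{}: {}", repo, tag, e);
                  ignored_signatures += 1;
              }
              Err(e) => return Err(e),
          }
      }
      log::info!("ignored {} signature images in {}", ignored_signatures, repo);
      Ok(())
  }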

wking (Member) left a comment:

/lgtm

openshift-ci bot added the lgtm label (Indicates that a PR is ready to be merged.) on Oct 24, 2024.
openshift-ci bot commented Oct 24, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: PratikMahajan, wking

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [PratikMahajan,wking]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-ci-robot:

/retest-required

Remaining retests: 0 against base HEAD 4de7870 and 2 for PR HEAD 31ceb1d in total

wking changed the title from "OTA-1379: add retry logic for pulling images and less logs for sigs" to "OTA-1378,OTA-1379: add retry logic for pulling images and less logs for sigs" on Oct 24, 2024.
openshift-ci-robot commented Oct 24, 2024

@PratikMahajan: This pull request references OTA-1378, which is a valid Jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.18.0" version, but no target version was set.

This pull request references OTA-1379, which is a valid Jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.18.0" version, but no target version was set.

In response to this:

We're adding retry logic that re-attempts pulling layers when a pull fails for any reason. We retry 3 times before ultimately failing.

Also moved the warn log for signature pulls down to debug, and added a counter that reports how many signature images we've ignored.


PratikMahajan (Contributor, Author):

/override ci/prow/customrust-cargo-test
/override ci/prow/cargo-test

override known test failures

openshift-ci bot commented Oct 24, 2024

@PratikMahajan: Overrode contexts on behalf of PratikMahajan: ci/prow/cargo-test, ci/prow/customrust-cargo-test

In response to this:

/override ci/prow/customrust-cargo-test
/override ci/prow/cargo-test

override known test failures

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

openshift-ci bot commented Oct 24, 2024

@PratikMahajan: all tests passed!

Full PR test history. Your PR dashboard.


openshift-merge-bot merged commit 894e8e8 into openshift:master on Oct 24, 2024
13 checks passed
JianLi-RH added a commit to JianLi-RH/cincinnati that referenced this pull request Oct 25, 2024
This PR is only used to test openshift#969

Please ignore it, I will close it later
wking added a commit to wking/cincinnati that referenced this pull request Oct 25, 2024
…X_REPLICAS

We'd dropped 'replicas' in 8289781 (replace HPA with keda
ScaledObject, 2024-10-09, openshift#953), following AppSRE advice [1].  Rolling
that Template change out caused the Deployment to drop briefly to
replicas:1 before Keda raised it back up to MIN_REPLICAS (as predicted
[1]).  But in our haste to recover from the incident, we raised both
MIN_REPLICAS (good) and restored the replicas line in 0bbb1b8
(bring back the replica field and set it to min-replicas, 2024-10-24, openshift#967).

That means we will need some future Template change to revert
0bbb1b8 and re-drop 'replicas'.  In the meantime, every Template
application will cause the Deployment to blip to the Template-declared
value briefly, before Keda resets it to the value it prefers.  Before
this commit, the blip value is MIN_REPLICAS, which can lead to
rollouts like:

  $ oc -n cincinnati-production get -w -o wide deployment cincinnati
  NAME         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                                          IMAGES                                                                SELECTOR
  ...
  cincinnati   0/6     6            0           86s   cincinnati-graph-builder,cincinnati-policy-engine   quay.io/app-sre/cincinnati:latest,quay.io/app-sre/cincinnati:latest   app=cincinnati
  cincinnati   0/2     6            0           2m17s   cincinnati-graph-builder,cincinnati-policy-engine   quay.io/app-sre/cincinnati:latest,quay.io/app-sre/cincinnati:latest   app=cincinnati
  ...

when Keda wants 6 replicas and we push:

  $ oc process -p MIN_REPLICAS=2 -p MAX_REPLICAS=12 -f dist/openshift/cincinnati-deployment.yaml | oc -n cincinnati-production apply -f -
  deployment.apps/cincinnati configured
  prometheusrule.monitoring.coreos.com/cincinnati-recording-rule unchanged
  service/cincinnati-graph-builder unchanged
  ...

The Pod terminations on the blip to MIN_REPLICAS will drop our
capacity to serve clients, and at the moment it can take some time to
recover that capacity in replacement Pods.  Changes like 31ceb1d
(add retry logic to fetching blob from container registry, 2024-10-24, openshift#969)
should speed new-Pod availability and reduce that risk.

This commit moves the blip over to MAX_REPLICAS to avoid
Pod-termination risk entirely.  Instead, we'll surge unnecessary Pods,
and potentially autoscale unnecessary Machines to host those Pods.
But then Keda will return us to its preferred value, and we'll delete
the still-coming-up Pods and scale down any extra Machines.  Spending
a bit of money on extra cloud Machines for each Template application
seems like a lower risk than the Pod-termination risk, to get us
through safely until we are prepared to remove 'replicas' again and
eat its one-time replicas:1, Pod-termination blip.

[1]: https://gitlab.cee.redhat.com/service/app-interface/-/blob/649aa9b681acf076a39eb4eecf0f88ff1cacbdcd/docs/app-sre/runbook/custom-metrics-autoscaler.md#L252 (internal link, sorry external folks)
wking added a commit to wking/cincinnati that referenced this pull request Oct 25, 2024
Labels
  • approved: Indicates a PR has been approved by an approver from all required OWNERS files.
  • jira/valid-reference: Indicates that this PR references a valid Jira ticket of any type.
  • lgtm: Indicates that a PR is ready to be merged.