
The Goldilocks dashboard is not displaying all containers in a namespace #729

Open
2 tasks done
meadows12 opened this issue Sep 30, 2024 · 3 comments
Labels
bug Something isn't working triage This bug needs triage

Comments

@meadows12

What happened?

I have enabled Goldilocks for all namespaces in the cluster, but I am not receiving recommendations for all containers in the dashboard. Only one container is missing, and I see this error in the dashboard logs: no matching Workloads found for VPA/goldilocks-webhook-portal.

When I checked the VPA Custom Resource Definition (CRD), I found that the VPA for this container has already been created with the appropriate recommendation.

What did you expect to happen?

The Goldilocks dashboard should display all the containers in the namespace.

How can we reproduce this?

  • Install Goldilocks and the VPA recommender using the Helm chart, and enable Goldilocks for all namespaces.
  • Check whether you get recommendations for all the containers in the dashboard. If not, check the logs of the dashboard/goldilocks-controller, and also check whether a VPA has been created for each container in the namespace.
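For reference, the steps above can be sketched with commands along these lines (the release name, namespace, and flags are assumptions based on a default Helm install; verify them against the chart's documented values):

```
# Install Goldilocks from the Fairwinds chart repository -- flags are illustrative
helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm install goldilocks fairwinds-stable/goldilocks --namespace goldilocks --create-namespace

# Label each namespace Goldilocks should watch
kubectl label namespace webhook goldilocks.fairwinds.com/enabled=true

# Verify a VPA exists for each workload in the namespace
kubectl get vpa -n webhook
```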

Version

9.0.0

Search

  • I did search for other open and closed issues before opening this.

Code of Conduct

  • I agree to follow this project's Code of Conduct

Additional context

No response

@meadows12 meadows12 added bug Something isn't working triage This bug needs triage labels Sep 30, 2024
@sudermanjr
Member

If the VPA is there, it is very unusual for the dashboard not to display it. However, reproducing this will be very difficult without a lot more detail. First step would be to turn up the logging on the dashboard, share those logs, and then share the full YAML of the workload and the VPA that was created.
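For anyone gathering the requested detail, something like the following should capture it (the resource names and namespaces are assumptions based on a default install; adjust them to match your cluster):

```
# Dashboard logs
kubectl logs -n goldilocks deploy/goldilocks-dashboard

# Full YAML of the VPA that Goldilocks created
kubectl get vpa goldilocks-webhook-portal -n webhook -o yaml

# Full YAML of the workload the VPA targets (substitute the actual kind)
kubectl get <workload-kind> webhook-portal -n webhook -o yaml
```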

@meadows12
Author

meadows12 commented Oct 1, 2024

Hey @sudermanjr, attaching the logs from the dashboard and the VPA that was created.

E1001 05:55:32.790906       1 summary.go:162] no matching Workloads found for VPA/goldilocks-db-init
E1001 05:55:32.888959       1 summary.go:162] no matching Workloads found for VPA/goldilocks-webhook-portal

And this is the VPA that was created:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  generation: 4
  labels:
    creator: Fairwinds
    source: goldilocks
  managedFields:
    - apiVersion: autoscaling.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:creator: {}
            f:source: {}
        f:spec:
          .: {}
          f:targetRef: {}
          f:updatePolicy:
            .: {}
            f:updateMode: {}
      manager: goldilocks
      operation: Update
    - apiVersion: autoscaling.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          .: {}
          f:conditions: {}
          f:recommendation:
            .: {}
            f:containerRecommendations: {}
      manager: recommender
      operation: Update
      subresource: status
  name: goldilocks-webhook-portal
  namespace: webhook
status:
  conditions:
    - lastTransitionTime: ***
      status: 'True'
      type: RecommendationProvided
  recommendation:
    containerRecommendations:
      - containerName: ***
        lowerBound:
          cpu: 10m
          memory: '52428800'
        target:
          cpu: 11m
          memory: '52428800'
        uncappedTarget:
          cpu: 11m
          memory: '52428800'
        upperBound:
          cpu: 11m
          memory: '52428800'
      - containerName: ***
        lowerBound:
          cpu: 22m
          memory: '716657383'
        target:
          cpu: 23m
          memory: '716711186'
        uncappedTarget:
          cpu: 23m
          memory: '716711186'
        upperBound:
          cpu: 23m
          memory: '743613777'
spec:
  targetRef:
    apiVersion: ***
    kind: ***
    name: webhook-portal
  updatePolicy:
    updateMode: 'Off'

And what do you mean by the YAML of the workload, exactly?

@meadows12
Author

meadows12 commented Oct 8, 2024

@sudermanjr btw, the kind of this targetRef is a custom resource definition (CRD) we created ourselves, and it produces a custom workload, not a standard Kubernetes workload like a Deployment, StatefulSet, or DaemonSet. Would that be an issue?
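A custom targetRef kind could plausibly explain the log line: a dashboard that resolves each VPA's targetRef against a fixed set of built-in workload kinds would find no match for a custom kind. The sketch below is purely hypothetical, assuming such an allow-list; the function name and kind set are illustrative, not Goldilocks's actual implementation:

```python
# Hypothetical sketch: resolve a VPA's targetRef kind against a fixed
# allow-list of built-in workload kinds. The allow-list and function are
# assumptions for illustration, not Goldilocks code.
KNOWN_WORKLOAD_KINDS = {"Deployment", "StatefulSet", "DaemonSet", "ReplicaSet"}

def match_workload(vpa_name: str, target_kind: str) -> str:
    """Return a match result, or an error mirroring the dashboard log line."""
    if target_kind not in KNOWN_WORKLOAD_KINDS:
        return f"no matching Workloads found for VPA/{vpa_name}"
    return f"matched {target_kind} for VPA/{vpa_name}"

print(match_workload("goldilocks-webhook-portal", "MyCustomWorkload"))
# → no matching Workloads found for VPA/goldilocks-webhook-portal
```

Under this assumption, a VPA can exist and carry recommendations (as the YAML above shows) while the dashboard still skips it, because matching happens on the workload side, not the VPA side.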
