
Unable to initialize database tables: unable to open database file: out of memory - persistent volume attached to metricsScraper #9654

Open
rubber-ant opened this issue Nov 12, 2024 · 0 comments
Labels: kind/bug

What happened?

I1112 15:36:28.587847       1 main.go:43] "Starting Metrics Scraper" version="1.2.1"
W1112 15:36:28.587961       1 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I1112 15:36:28.588173       1 main.go:51] Kubernetes host: https://10.96.0.1:443
I1112 15:36:28.588181       1 main.go:52] Namespace(s): []
F1112 15:36:28.588834       1 main.go:70] Unable to initialize database tables: unable to open database file: out of memory (14)
Stream closed EOF for kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7c97bc8d64-wz254 (kubernetes-dashboard-metrics-scraper)
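For context, the trailing (14) appears to be SQLite's SQLITE_CANTOPEN result code, which usually points to a path or permission problem on the mounted directory rather than actual memory exhaustion; that is an inference from the error code, not something confirmed here. The pod's mounts and securityContext can be inspected with (pod name taken from the log above):

kubectl -n kubernetes-dashboard describe pod kubernetes-dashboard-metrics-scraper-7c97bc8d64-wz254
kubectl -n kubernetes-dashboard get pod kubernetes-dashboard-metrics-scraper-7c97bc8d64-wz254 -o yaml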

What did you expect to happen?

The volume should be mounted into the metricsScraper container and the metrics database created at the configured path (/mnt/metrics.db).

How can we reproduce it (as minimally and precisely as possible)?

pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: kubernetes-dashboard
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
    volume.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: xfs
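For reference, the claim can be created and its binding verified with the usual commands (assuming the manifest above is saved as pvc.yaml):

kubectl apply -f pvc.yaml
kubectl -n kubernetes-dashboard get pvc my-pvc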

Helm values

    values:
      - metricsScraper:
          containers:
            args:
              - --v=0
              - --db-file=/mnt/metrics.db
            volumeMounts:
              - mountPath: /mnt
                name: scraper-pv-storage
          volumes:
            - name: scraper-pv-storage
              persistentVolumeClaim:
                claimName: my-pvc
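The block above is written as a helmfile-style values entry; with a plain Helm install, the same settings (with metricsScraper as a top-level key rather than a list item) saved to values.yaml could be applied roughly as follows. The release name and chart reference are assumptions, not taken from this report:

helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
  --namespace kubernetes-dashboard -f values.yaml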

PV and PVC

kubectl get pv,pvc -A | grep dashboard
persistentvolume/pvc-d837a84b-2d16-4b2b-a2d1-54d28c93fb82   100Gi      RWO            Retain           Bound    kubernetes-dashboard/my-pvc       xfs            <unset>                          4m52s
kubernetes-dashboard   persistentvolumeclaim/my-pvc                    Bound    pvc-d837a84b-2d16-4b2b-a2d1-54d28c93fb82   100Gi      RWO            xfs            <unset>                 12m
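The PV and PVC are Bound, so a follow-up check is whether the volume is actually wired into the scraper pod spec, e.g. (deployment name inferred from the pod name in the log above):

kubectl -n kubernetes-dashboard get deploy kubernetes-dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.volumes}'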

Anything else we need to know?

The storage backend is Ceph (rook-ceph); the same StorageClass is used by other StatefulSets without any issue.

Ref: #5537

What browsers are you seeing the problem on?

No response

Kubernetes Dashboard version

7.9.0

Kubernetes version

1.29.10

Dev environment

No response

rubber-ant added the kind/bug label on Nov 12, 2024