This repository has been archived by the owner on Apr 12, 2023. It is now read-only.

chown: changing ownership of '/var/lib/grafana': Operation not permitted #121

Open
piyushkv1 opened this issue Feb 22, 2019 · 3 comments

@piyushkv1

Deploying in an OpenShift environment as an admin user, the deployment fails with the error below.
chown: changing ownership of '/var/lib/grafana': Operation not permitted

[root@sc2-rdops-vm05-dhcp-162-15 ~]# oc project monitoring
Now using project "monitoring" on server "https://sc2-rdops-vm05-dhcp-162-15.eng.vmware.com:8443".
[root@sc2-rdops-vm05-dhcp-162-15 ~]# oc get all
NAME READY STATUS RESTARTS AGE
pod/alertmanager-668794449d-rlpqh 1/1 Running 0 6m
pod/grafana-core-8547f86b4b-9rp26 0/1 CrashLoopBackOff 6 6m
pod/grafana-import-dashboards-5xffj 0/1 Init:0/1 0 6m
pod/kube-state-metrics-69b9d65dd5-xnzpv 1/1 Running 0 6m
pod/node-directory-size-metrics-v7zdz 2/2 Running 0 6m
pod/node-directory-size-metrics-vql5w 2/2 Running 0 6m
pod/node-directory-size-metrics-xgz8f 2/2 Running 0 6m
pod/prometheus-core-86b8455f76-8szgp 0/1 CrashLoopBackOff 6 6m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/alertmanager NodePort 172.30.184.34 <none> 9093:30825/TCP 6m
service/grafana NodePort 172.30.108.211 <none> 3000:32502/TCP 6m
service/kube-state-metrics ClusterIP 172.30.47.40 <none> 8080/TCP 6m
service/prometheus NodePort 172.30.180.246 <none> 9090:32270/TCP 6m
service/prometheus-node-exporter ClusterIP None <none> 9100/TCP 6m

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/node-directory-size-metrics 3 3 3 3 3 6m
daemonset.apps/prometheus-node-exporter 0 0 0 0 0 6m

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/alertmanager 1 1 1 1 6m
deployment.apps/grafana-core 1 1 1 0 6m
deployment.apps/kube-state-metrics 1 1 1 1 6m
deployment.apps/prometheus-core 1 1 1 0 6m

NAME DESIRED CURRENT READY AGE
replicaset.apps/alertmanager-668794449d 1 1 1 6m
replicaset.apps/grafana-core-8547f86b4b 1 1 0 6m
replicaset.apps/kube-state-metrics-69b9d65dd5 1 1 1 6m
replicaset.apps/prometheus-core-86b8455f76 1 1 0 6m

NAME DESIRED SUCCESSFUL AGE
job.batch/grafana-import-dashboards 1 0 6m

[root@sc2-rdops-vm05-dhcp-162-15 ~]# oc logs grafana-core-8547f86b4b-9rp26
chown: changing ownership of '/var/lib/grafana': Operation not permitted
chown: changing ownership of '/var/log/grafana': Operation not permitted

[root@sc2-rdops-vm05-dhcp-162-15 ~]# oc logs grafana-import-dashboards-5xffj
Error from server (BadRequest): container "grafana-import-dashboards" in pod "grafana-import-dashboards-5xffj" is waiting to start: PodInitializing

[root@sc2-rdops-vm05-dhcp-162-15 ~]# oc logs prometheus-core-86b8455f76-8szgp
time="2019-02-22T15:46:35Z" level=warning msg="Flag -storage.local.memory-chunks is deprecated. Its value 500000 is used to override -storage.local.target-heap-size to 1536000000." source="config.go:317"
time="2019-02-22T15:46:35Z" level=info msg="Starting prometheus (version=1.7.0, branch=master, revision=bfa37c8ee39d11078662dce16c162a61dccf616c)" source="main.go:88"
time="2019-02-22T15:46:35Z" level=info msg="Build context (go=go1.8.3, user=root@7a6329cc02bb, date=20170607-09:43:48)" source="main.go:89"
time="2019-02-22T15:46:35Z" level=info msg="Host details (Linux 3.10.0-693.el7.x86_64 #1 SMP Thu Jul 6 19:56:57 EDT 2017 x86_64 prometheus-core-86b8455f76-8szgp (none))" source="main.go:90"
time="2019-02-22T15:46:35Z" level=info msg="Loading configuration file /etc/prometheus/prometheus.yaml" source="main.go:252"
time="2019-02-22T15:46:35Z" level=error msg="Error opening memory series storage: cannot create persistent directory /prometheus/data: mkdir data: permission denied" source="main.go:192"

@saedabdu

@piyushkv1 Had a similar issue on AKS, and the following helped.

---
apiVersion: batch/v1
kind: Job
metadata:
  name: grafana-chown
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: grafana-chown
        image: busybox:latest
        command: [chown, -R, "472:472", /var/lib/grafana]
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var/lib/grafana
      volumes:
      - name: grafana-persistent-storage
        persistentVolumeClaim:
          claimName: grafana-persistent-storage
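
A possible way to run it (a sketch; the file name grafana-chown-job.yaml and the --timeout value are just examples, and this assumes the grafana-persistent-storage PVC already exists in the same namespace):

kubectl apply -f grafana-chown-job.yaml
kubectl wait --for=condition=complete job/grafana-chown --timeout=120s

Once the Job completes, the crashlooping grafana-core pod should come up on its next restart attempt, since /var/lib/grafana is then owned by the Grafana UID (472).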

@pipo02mix
Contributor

Yeah, it could work. You can also run it as an initContainer to make sure it runs before the deployment starts.
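
A minimal sketch of that approach, assuming the grafana-core Deployment and the grafana-persistent-storage PVC names used in this thread (adjust to your manifests); the init container fixes ownership of the data volume before the Grafana container starts:

# Excerpt of the grafana-core Deployment's pod template (sketch, names assumed)
spec:
  template:
    spec:
      initContainers:
      - name: init-chown-data
        image: busybox:latest
        # Matches the 472:472 ownership used in the Job above
        command: [chown, -R, "472:472", /var/lib/grafana]
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var/lib/grafana
      containers:
      - name: grafana-core
        image: grafana/grafana:latest  # keep whatever image the Deployment already uses
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var/lib/grafana
      volumes:
      - name: grafana-persistent-storage
        persistentVolumeClaim:
          claimName: grafana-persistent-storage

Note that on OpenShift the init container itself must be allowed to run as root (for example via an SCC that permits it); under the default restricted SCC the chown would hit the same "Operation not permitted" error.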

@bkmanikandanraj

bkmanikandanraj commented Nov 15, 2022

The chown Job above worked on Amazon EKS too (Grafana + EFS).
