feat: Add k0s support #1399
Conversation
Can we verify this with our operator installation once? Ref: https://github.com/kubearmor/KubeArmor/tree/main/deployments/helm/KubeArmorOperator
This is not working with our operator because the k0s socket path is not present in the `ContainerRuntimeSocketMap` variable.

```
{"level":"warn","ts":1694083369.086009,"caller":"runtime/runtime.go:34","msg":"Could'nt detect runtime"}
{"level":"error","ts":1694083369.086016,"caller":"snitch-cmd/root.go:115","msg":"Not able to runtime","stacktrace":"github.com/kubearmor/KubeArmor/pkg/KubeArmorOperator/cmd/snitch-cmd.snitch\n\t/KubeArmor/pkg/KubeArmorOperator/cmd/snitch-cmd/root.go:115\ngithub.com/kubearmor/KubeArmor/pkg/KubeArmorOperator/cmd/snitch-cmd.glob..func2\n\t/KubeArmor/pkg/KubeArmorOperator/cmd/snitch-cmd/root.go:63\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:944\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:992\nmain.Execute\n\t/KubeArmor/pkg/KubeArmorOperator/cmd/main.go:32\nmain.main\n\t/KubeArmor/pkg/KubeArmorOperator/cmd/main.go:39\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"}
```

Do we need to update the
After updating the `ContainerRuntimeSocketMap` and `RuntimeStorageVolumes` variables to include the k0s socket and state volume paths, this works with our operator:

```
{
"level": "info",
"ts": 1694093231.398183,
"caller": "snitch-cmd/root.go:112",
"msg": "Detected containerd as node runtime, runtime socket=/run/k0s/containerd.sock"
}
{
"level": "info",
"ts": 1694093231.398207,
"caller": "snitch-cmd/root.go:120",
"msg": "Detected runtime storage location /run/k0s/containerd"
}
```

```
anurag@k0s:~$ k -n kube-system get all -l kubearmor-app
NAME READY STATUS RESTARTS AGE
pod/kubearmor-operator-8465fc8dc4-9wtvw 1/1 Running 0 10m
pod/kubearmor-relay-55969ff67-kdzhd 1/1 Running 0 9m54s
pod/kubearmor-apparmor-containerd-e6e27-zzmch 1/1 Running 0 9m18s
pod/kubearmor-controller-555db79b49-zd5zn 2/2 Running 0 9m15s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubearmor-controller-metrics-service ClusterIP 10.97.235.6 <none> 8443/TCP 9m55s
service/kubearmor ClusterIP 10.103.176.111 <none> 32767/TCP 9m54s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/kubearmor-apparmor-containerd-e6e27 1 1 1 1 1 kubearmor.io/btf=yes,kubearmor.io/enforcer=apparmor,kubearmor.io/runtime-storage=run_containerd,kubearmor.io/runtime=containerd,kubearmor.io/socket=run_k0s_containerd.sock,kubernetes.io/os=linux 9m18s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kubearmor-operator 1/1 1 1 10m
deployment.apps/kubearmor-relay 1/1 1 1 9m54s
deployment.apps/kubearmor-controller 1/1 1 1 9m54s
NAME DESIRED CURRENT READY AGE
replicaset.apps/kubearmor-operator-8465fc8dc4 1 1 1 10m
replicaset.apps/kubearmor-relay-55969ff67 1 1 1 9m54s
replicaset.apps/kubearmor-controller-555db79b49 1 1 1 9m54s
anurag@k0s:~$
```

However, if we don't update the

```
{
"level": "info",
"ts": 1694090635.6830027,
"caller": "snitch-cmd/root.go:112",
"msg": "Detected containerd as node runtime, runtime socket=/run/k0s/containerd.sock"
}
{
"level": "info",
"ts": 1694090635.683014,
"caller": "snitch-cmd/root.go:120",
"msg": "Detected runtime storage location /run/containerd"
}
```

So, my question is, should we also update the
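For context, here is a rough sketch of what the k0s entries could look like in the operator's lookup tables. The map shapes and values are my assumptions for illustration only; the real definitions in the KubeArmorOperator code may differ.

```go
package common

// Hypothetical sketch only: the actual names, types, and defaults of these
// variables in the KubeArmorOperator package may differ. The point is that
// k0s runs its own containerd, so both its socket and its state directory
// need to be known to the operator.
var ContainerRuntimeSocketMap = map[string][]string{
	"containerd": {
		"/var/run/containerd/containerd.sock",
		"/run/containerd/containerd.sock",
		"/run/k0s/containerd.sock", // k0s-managed containerd socket
	},
}

var RuntimeStorageVolumes = map[string][]string{
	"containerd": {
		"/run/containerd",
		"/run/k0s/containerd", // k0s containerd state directory
	},
}
```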
Force-pushed from ddc0916 to 8934a53
@anurag-rajawat Can you try owner only policies to verify whether the need to update RuntimeVolumes is needed or not. KubeArmor/tests/ksp/ksp_test.go Line 529 in 88f1632
You can check for difference in Alerts (it will show DefaultPosture if the runtime volume is configured incorrectly ) Regarding CRI Socket, yes let's update the values in @rksharma95 Do you think we should update the operator to accept custom values, needing to update operator shouldn't be a necessity, it should accept custom paths for socket as well. WDYT? |
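To illustrate the alert check being suggested, here is a small hedged Go helper; it is not the actual ksp_test.go code, and the `Alert` struct only mirrors the two fields that matter for the comparison.

```go
package main

import "fmt"

// Alert mirrors only the fields needed here; the real KubeArmor alert type
// has many more (see the JSON alert later in this thread).
type Alert struct {
	PolicyName string
	Action     string
}

// checkOwnerOnlyAlert is a hypothetical helper: when the runtime storage
// volume is mounted correctly, an owner-only KSP match reports the policy
// name; when it is not, KubeArmor falls back to the namespace default
// posture and the alert shows "DefaultPosture" instead.
func checkOwnerOnlyAlert(a Alert, expectedPolicy string) error {
	if a.PolicyName == "DefaultPosture" {
		return fmt.Errorf("got DefaultPosture (action=%s): runtime storage volume is likely misconfigured", a.Action)
	}
	if a.PolicyName != expectedPolicy {
		return fmt.Errorf("unexpected policy %q, want %q", a.PolicyName, expectedPolicy)
	}
	return nil
}

func main() {
	a := Alert{PolicyName: "DefaultPosture", Action: "Block"}
	fmt.Println(checkOwnerOnlyAlert(a, "ksp-group-2-audit-file-path-owner-from-source-path"))
}
```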
Yes, we can make this change. We should be fine with adding this option either as a command-line input or as an option field added to the
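A minimal sketch of the command-line variant of that idea, assuming a hypothetical `--criSocket` flag; this is not the operator's actual implementation.

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

// criSocket would let users override auto-detection when the runtime socket
// lives at a non-standard path (e.g. /run/k0s/containerd.sock on k0s).
var criSocket string

var rootCmd = &cobra.Command{
	Use:   "snitch",
	Short: "detect node runtime (sketch)",
	RunE: func(cmd *cobra.Command, args []string) error {
		if criSocket != "" {
			fmt.Println("using user-supplied CRI socket:", criSocket)
			return nil
		}
		fmt.Println("no override given, falling back to auto-detection")
		return nil
	},
}

func main() {
	rootCmd.PersistentFlags().StringVar(&criSocket, "criSocket", "",
		"override the auto-detected container runtime socket path")
	if err := rootCmd.Execute(); err != nil {
		fmt.Println(err)
	}
}
```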
I can definitely try it, but the
Force-pushed from 8934a53 to 0cc14ee
While debugging, I found that we need to include the containerd runtime storage path; otherwise container creation fails with:

```
Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/run/k0s/containerd.sock" to rootfs at "/var/run/containerd/containerd.sock": open /run/k0s/containerd/io.containerd.runtime.v2.task/k8s.io/kubearmor/rootfs/run/containerd/containerd.sock: read-only file system: unknown
```
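For reference, a hypothetical illustration of the hostPath volume/mount pair involved; the volume names and exact mount targets are assumptions, with the socket target taken from the error above.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// k0sContainerdVolumes is a hypothetical illustration of the hostPath
// volume/mount pair the KubeArmor DaemonSet needs on k0s: the containerd
// socket at /run/k0s/containerd.sock and the containerd state directory
// at /run/k0s/containerd (instead of the default /run/containerd).
func k0sContainerdVolumes() ([]corev1.Volume, []corev1.VolumeMount) {
	socket := corev1.HostPathSocket
	dir := corev1.HostPathDirectory

	volumes := []corev1.Volume{
		{
			Name: "containerd-sock-path",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/run/k0s/containerd.sock", Type: &socket},
			},
		},
		{
			Name: "containerd-storage-path",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/run/k0s/containerd", Type: &dir},
			},
		},
	}

	mounts := []corev1.VolumeMount{
		{Name: "containerd-sock-path", MountPath: "/var/run/containerd/containerd.sock", ReadOnly: true},
		{Name: "containerd-storage-path", MountPath: "/run/k0s/containerd", ReadOnly: true},
	}
	return volumes, mounts
}

func main() {
	v, m := k0sContainerdVolumes()
	fmt.Printf("volumes: %d, mounts: %d\n", len(v), len(m))
}
```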
I ran the ksp test suite and only this policy test was failing. The `karmor probe` output is as follows:

```
$ karmor probe
Found KubeArmor running in Kubernetes
Daemonset :
kubearmor Desired: 1 Ready: 1 Available: 1
Deployments :
kubearmor-operator Desired: 1 Ready: 1 Available: 1
kubearmor-relay Desired: 1 Ready: 1 Available: 1
kubearmor-controller Desired: 1 Ready: 1 Available: 1
Containers :
kubearmor-operator-b945487bd-9nf9g Running: 1 Image Version: anuragrajawat/kubearmor-operator:v0.1
kubearmor-relay-bb96bd6d5-jscb5 Running: 1 Image Version: kubearmor/kubearmor-relay-server
kubearmor-controller-6b5d689967-q8q89 Running: 2 Image Version: gcr.io/kubebuilder/kube-rbac-proxy:v0.12.0
kubearmor-apparmor-containerd-e6e27-mbdxb Running: 1 Image Version: kubearmor/kubearmor:stable
Node 1 :
OS Image: Ubuntu 20.04.6 LTS
Kernel Version: 5.15.0-83-generic
Kubelet Version: v1.27.5+k0s
Container Runtime: containerd://1.7.5
Active LSM: AppArmor
Host Security: false
Container Security: true
Container Default Posture: audit(File) audit(Capabilities) audit(Network)
Host Default Posture: audit(File) audit(Capabilities) audit(Network)
Host Visibility: none
Armored Up pods :
+-------------+--------------------------------+------------+--------------------------------------+----------------------------------------------------+
| NAMESPACE | DEFAULT POSTURE | VISIBILITY | NAME | POLICY |
+-------------+--------------------------------+------------+--------------------------------------+----------------------------------------------------+
| multiubuntu | File(audit), | none | ubuntu-1-deployment-6676567dd5-88gzz | |
| | Capabilities(audit), Network | | | |
| | (audit) | | | |
+ + + +--------------------------------------+----------------------------------------------------+
| | | | ubuntu-2-deployment-75b69b5979-hgh68 | |
| | | | | |
| | | | | |
+ + + +--------------------------------------+----------------------------------------------------+
| | | | ubuntu-3-deployment-9cd84c7b5-r5cdf | |
| | | | | |
| | | | | |
+ + + +--------------------------------------+----------------------------------------------------+
| | | | ubuntu-5-deployment-85fc9485dd-qx7x2 | ksp-group-2-audit-file-path-owner-from-source-path |
| | | | | |
| | | | | |
+ + + +--------------------------------------+----------------------------------------------------+
| | | | ubuntu-4-deployment-f66bb7fd-7rnbp | ksp-group-2-audit-file-path-owner-from-source-path |
| | | | | |
| | | | | |
+-------------+--------------------------------+------------+--------------------------------------+----------------------------------------------------+
```

and the alert is as follows:

```
{
"Timestamp": 1694419544,
"UpdatedTime": "2023-09-11T08:05:44.872156Z",
"ClusterName": "default",
"HostName": "hp-notebook",
"NamespaceName": "multiubuntu",
"Owner": {
"Ref": "Deployment",
"Name": "ubuntu-4-deployment",
"Namespace": "multiubuntu"
},
"PodName": "ubuntu-4-deployment-f66bb7fd-7rnbp",
"Labels": "container=ubuntu-4,group=group-2",
"ContainerID": "2f79ccb0a07c316a560f02b6837c138b27a5566e6b0532ebbce5693558acf2d9",
"ContainerName": "ubuntu-4-container",
"ContainerImage": "docker.io/kubearmor/ubuntu-w-utils:0.1@sha256:b4693b003ed1fbf7f5ef2c8b9b3f96fd853c30e1b39549cf98bd772fbd99e260",
"HostPPID": 166381,
"HostPID": 166387,
"PPID": 234,
"PID": 240,
"UID": 1000,
"ParentProcessName": "/bin/su",
"ProcessName": "/bin/bash",
"PolicyName": "DefaultPosture",
"Type": "MatchedPolicy",
"Source": "/bin/bash -c cat /home/user1/secret_data1.txt",
"Operation": "File",
"Resource": "/dev/pts/0",
"Data": "syscall=SYS_OPENAT fd=-100 flags=O_RDWR|O_NONBLOCK",
"Enforcer": "AppArmor",
"Action": "Block",
"Result": "Permission denied"
}
```

Can you please help me figure out the problem?
Force-pushed from 0cc14ee to 6332061
Force-pushed from a933a09 to fb01ead
Please update the support matrix document with k0s support.
Is there any way to add a single test case that can automate the k0s test?
Force-pushed from fb01ead to a6773f4
Force-pushed from a6773f4 to e3aab1e
Force-pushed from e3aab1e to 445df19
Force-pushed from 743eac6 to 43b6d99
Signed-off-by: Anurag Rajawat <[email protected]>
Signed-off-by: Anurag Rajawat <[email protected]>
Signed-off-by: Anurag Rajawat <[email protected]>
Force-pushed from 43b6d99 to f696513
LGTM 👌🏽
Purpose of PR?: Fixes #1318

Does this PR introduce a breaking change? No.