Fix downward api for status fields in k3k-kubelet #185

Merged: 2 commits merged into rancher:main on Jan 16, 2025

Conversation

galal-hussein (Collaborator)

Comment on lines 74 to 89
// look for status.* fields in the env
if pod.Annotations == nil {
    pod.Annotations = make(map[string]string)
}
for i, container := range pod.Spec.Containers {
    for j, env := range container.Env {
        if env.ValueFrom != nil {
            if env.ValueFrom.FieldRef != nil {
                if strings.Contains(env.ValueFrom.FieldRef.FieldPath, "status.") {
                    pod.Annotations[FieldpathField+"_"+strconv.Itoa(i)+"_"+env.Name] = env.ValueFrom.FieldRef.FieldPath
                    pod.Spec.Containers[i].Env = removeEnv(pod.Spec.Containers[i].Env, j)
                }
            }
        }
    }
}
Collaborator:

When possible I try to avoid too many nested ifs; what about a small rewrite like:

Suggested change:

// look for status.* fields in the env
if pod.Annotations == nil {
    pod.Annotations = make(map[string]string)
}
for i, container := range pod.Spec.Containers {
    for j, env := range container.Env {
        if env.ValueFrom == nil || env.ValueFrom.FieldRef == nil {
            continue
        }
        fieldPath := env.ValueFrom.FieldRef.FieldPath
        if strings.Contains(fieldPath, "status.") {
            annotationKey := fmt.Sprintf("%s_%d_%s", FieldpathField, i, env.Name)
            pod.Annotations[annotationKey] = fieldPath
            pod.Spec.Containers[i].Env = removeEnv(pod.Spec.Containers[i].Env, j)
        }
    }
}

Also, because of the strconv and all the params, I find it cleaner to use a fmt.Sprintf.

I would also move this logic into a separate func with a specific name and a small comment on why it's needed. Something like cleanupFieldRef, as sketched below.
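For illustration, a minimal sketch of that extraction could look like this. The cleanupFieldRef name and signature are only the suggestion above wrapped in a func, assuming the package's existing FieldpathField constant and removeEnv helper, plus the fmt, strings, and corev1 ("k8s.io/api/core/v1") imports:

// cleanupFieldRef records env vars whose fieldRef points at a status.* field
// as pod annotations and strips them from the spec, so they can be re-added
// later (see the discussion below about re-adding these envs).
func cleanupFieldRef(pod *corev1.Pod) {
    if pod.Annotations == nil {
        pod.Annotations = make(map[string]string)
    }
    for i, container := range pod.Spec.Containers {
        for j, env := range container.Env {
            if env.ValueFrom == nil || env.ValueFrom.FieldRef == nil {
                continue
            }
            fieldPath := env.ValueFrom.FieldRef.FieldPath
            if strings.Contains(fieldPath, "status.") {
                annotationKey := fmt.Sprintf("%s_%d_%s", FieldpathField, i, env.Name)
                pod.Annotations[annotationKey] = fieldPath
                pod.Spec.Containers[i].Env = removeEnv(pod.Spec.Containers[i].Env, j)
            }
        }
    }
}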

Comment on lines 684 to 692
s := strings.SplitN(name, "_", 3)
if len(s) != 3 {
    continue
}
containerIndex, err = strconv.Atoi(s[1])
if err != nil {
    return err
}
envName = s[2]
Collaborator:

I wouldn't spread this logic outside; it looks like part of the webhook fieldRef stuff. Would it make sense to have two funcs inside the webhook package to handle this?

Something like:

func FormatFieldPathAnnotationKey() {
    // formatting logic here
}

func ParseFieldPathAnnotationKey(annotationKey string) (name string, containerIndex int, envName string, err error) {
    // parse logic here
}

The cleanest way would be to have a FieldPathAnnotationKey type, but this is up to you. If it's used only here it's fine enough.
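Filled in, the two helpers could look roughly like this. This is only a sketch: it assumes the FieldpathField constant and the "<FieldpathField>_<containerIndex>_<envName>" key layout from the snippets above, needs the fmt, strings, and strconv imports, and simplifies the parse signature to return just the container index and env name, since that is what the parsing code above uses:

// FormatFieldPathAnnotationKey builds the annotation key that remembers a
// stripped status.* fieldRef env var: "<FieldpathField>_<containerIndex>_<envName>".
func FormatFieldPathAnnotationKey(containerIndex int, envName string) string {
    return fmt.Sprintf("%s_%d_%s", FieldpathField, containerIndex, envName)
}

// ParseFieldPathAnnotationKey is the inverse: it splits an annotation key back
// into the container index and env name, failing on keys with an unexpected layout.
func ParseFieldPathAnnotationKey(annotationKey string) (containerIndex int, envName string, err error) {
    s := strings.SplitN(annotationKey, "_", 3)
    if len(s) != 3 {
        return 0, "", fmt.Errorf("unexpected fieldpath annotation key %q", annotationKey)
    }
    containerIndex, err = strconv.Atoi(s[1])
    if err != nil {
        return 0, "", err
    }
    return containerIndex, s[2], nil
}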

galal-hussein (Collaborator Author):

Ah okay, I can add the logic to the webhook package.

}
envName = s[2]

// re-adding these envs to the pod
Collaborator:

After re-adding them, should we clean up the annotations as well? Do we need them anymore?

galal-hussein (Collaborator Author):

I will remove them from the pods on the host cluster; removing them after they are created on the virtual cluster would require a new controller, which I think is overkill.
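A hedged sketch of what that host-side cleanup could look like once the envs have been re-added; the FieldpathField prefix comes from the snippets above and everything else here is illustrative:

// After the stripped envs have been re-added to the host-cluster pod, the
// bookkeeping annotations are no longer needed and can be dropped.
for key := range pod.Annotations {
    if strings.HasPrefix(key, FieldpathField+"_") {
        delete(pod.Annotations, key)
    }
}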

@enrichman (Collaborator) left a comment:

A couple of comments!

galal-hussein (Collaborator Author):

> A couple of comments!

Thanks, I will address the comments today.

Signed-off-by: galal-hussein <[email protected]>
galal-hussein merged commit 9d0c907 into rancher:main on Jan 16, 2025
2 checks passed