
Changes to make IPAM deployable with CAPI #427

Conversation

@peppi-lotta (Member) commented Feb 6, 2024

These changes allow the metal3 ip-address-manager (IPAM) to be deployed with cluster-api (CAPI). Currently, deploying IPAM with CAPI assumes the CRDs are in a git release and that the release has a metadata.yaml. IPAM's current release does not have a metadata.yaml, so this needs to be added. Action also needs to be taken to register IPAM as an IPAM provider for CAPI, so that metal3 IPAM can be specified when running clusterctl init with the --ipam flag.

IPAM's ipam-components.yaml has CRDs for IPClaim and IPAddress, which work, but IPAM is unable to resolve the IPAddressClaims that CAPI has defined. Making IPAM resolve CAPI's IPAddressClaims is a next step after these changes are merged. See the CAPI IPAM contract for more details.
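For context, clusterctl discovers a provider's versions and API contract from a metadata.yaml published alongside the release manifests. A minimal sketch in the clusterctl metadata format (the release series numbers here are illustrative, not IPAM's actual ones):

```yaml
# metadata.yaml (sketch) - shipped next to ipam-components.yaml in the release
apiVersion: clusterctl.cluster.x-k8s.io/v1alpha3
kind: Metadata
releaseSeries:
  # maps a release series to the CAPI API contract it implements
  - major: 1
    minor: 6
    contract: v1beta1
```

Once IPAM is registered as an IPAM provider, installing it would then look something like `clusterctl init --ipam metal3` (the provider name here is an assumption; the existing in-cluster provider is installed with `clusterctl init --ipam in-cluster`).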

@metal3-io-bot (Contributor) commented:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign furkatgofurov7 for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@metal3-io-bot added the size/S label (denotes a PR that changes 10-29 lines, ignoring generated files) Feb 6, 2024
@Rozzii (Member) commented Feb 15, 2024

/cc @furkatgofurov7

# dependency for CAPM3
namespace: capm3-system
# Adds namespace to all resources
namespace: ipam-provider-m3-system
A reviewer (Member) commented:
This is a bit of a mouthful. Could we simplify it?
I suggest ipam-metal3-system. For reference, the CAPI in-cluster IPAM has the namespace capi-ipam-in-cluster-system, so following that we would get capi-ipam-metal3-system, but since this can also be used without CAPI, I think we can drop the capi- prefix.

Suggested change:
- namespace: ipam-provider-m3-system
+ namespace: ipam-metal3-system

Another reviewer (Member):
+1

A third reviewer (Member):
For the in-cluster provider I went with capi-ipam-in-cluster-, so it could be capi-ipam-metal3- to stay consistent here. I'd argue that consistency makes it easier to navigate for people who are new to CAPI, but it's entirely up to you.

# dependency for CAPM3
namespace: capm3-system
# Adds namespace to all resources
namespace: ipam-provider-m3-system

namePrefix: ipam-
A reviewer (Member) commented:
This prefix must match so that it, plus the namespace name (= system), becomes the namespace above.

Suggested change:
- namePrefix: ipam-
+ namePrefix: ipam-metal3-
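A sketch of how the two fields interact in the top-level kustomization, assuming the usual CAPI-style layout where the Namespace object in namespace.yaml is literally named system (the file path is an assumption):

```yaml
# config/default/kustomization.yaml (sketch)
namespace: ipam-metal3-system   # applied to every namespaced resource
namePrefix: ipam-metal3-        # prepended to every resource name
```

kustomize prepends namePrefix to the Namespace object's own name, so ipam-metal3- + system yields ipam-metal3-system, matching the namespace field above.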

@kashifest (Member) commented:

@furkatgofurov7 @schrej, can we get some reviews from you folks on this?

@furkatgofurov7 (Member) left a comment:

AFAIK there is no way to make this work without the CAPM3 changes. Can we verify both changes in CI? (It should be possible to run CI from CAPM3 pointing to the branch of the current change.)

@@ -3,6 +3,7 @@ kind: Kustomization

resources:
- manager.yaml
- namespace.yaml
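The namespace.yaml being added to the resource list would be a plain Namespace object along these lines (a sketch; the literal name system is an assumption based on the namePrefix discussion above):

```yaml
# namespace.yaml (sketch)
apiVersion: v1
kind: Namespace
metadata:
  name: system   # rewritten by namePrefix, e.g. ipam-metal3- + system
```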

# dependency for CAPM3
namespace: capm3-system
# Adds namespace to all resources
namespace: ipam-provider-m3-system
A reviewer (Member) commented:
+1

@schrej (Member) commented Feb 20, 2024

> Currently deploying IPAM with CAPI assumes the path to the CRDs has a directory named 'in-cluster' in it

What exactly do you mean by this? That the --ipam flag of clusterctl expects in-cluster? Or did we hardcode some directory into CAPI tooling somewhere? If so, that's probably a mistake.

> IPAM's ipam-components.yaml has CRDs for IPClaim and IPAddress, which work, but IPAM is unable to resolve the IPAddressClaims that CAPI has defined.

IMHO we should avoid deploying custom claims and addresses when using clusterctl to install metal3 IPAM. If the plan is to support the CAPI variants of those resources anyway, deploying the custom resources is probably superfluous and will only lead to confusion (and annoying autocompletion). As long as the official resources aren't supported, I'd also not accept the addition of metal3 IPAM to the official list of providers integrated in clusterctl, as it's not compatible with other providers. Not sure if the CAPI team has the same opinion here.
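For reference, the CAPI contract resource that metal3 IPAM would need to resolve looks roughly like this (a sketch; the pool kind, group, and names are illustrative):

```yaml
# CAPI's IPAddressClaim (sketch): a provider watches claims whose poolRef
# points at one of its pool kinds and answers with an IPAddress
apiVersion: ipam.cluster.x-k8s.io/v1beta1
kind: IPAddressClaim
metadata:
  name: example-claim
spec:
  poolRef:
    apiGroup: ipam.metal3.io   # assumed group if a metal3 IPPool served as the pool
    kind: IPPool
    name: example-pool
```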

@lentzi90 (Member) commented:

@peppi-lotta can you update the description? The part about the "in-cluster directory" was a misunderstanding, if I recall correctly.

@kashifest (Member) commented:

> IMHO we should avoid deploying custom claims and addresses when using clusterctl to install metal3 IPAM. If the plan is to support the CAPI variants of those resources anyway, deploying the custom resources is probably superfluous and will only lead to confusion (and annoying autocompletion). As long as the official resources aren't supported, I'd also not accept the addition of metal3 IPAM to the official list of providers integrated in clusterctl, as it's not compatible with other providers. Not sure if the CAPI team has the same opinion here.

I want to understand this part a bit more. I can agree that metal3 IPAM needs to handle the CAPI variant of the IPAM resources to be accepted as a CAPI IPAM provider. But on the point of deploying metal3-specific resources and CRDs, it should remain as it is now: if a user wants to deploy metal3 IPAM, the user is aware that metal3 IPAM allows these custom claims on top and offers some additional workflow, so I don't see any harm in deploying them. Otherwise, if someone doesn't want metal3 IPAM but only the CAPI-specific variants, the current IPAM provider should be enough.
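To make the contrast concrete, metal3 IPAM's own claim type is a separate CRD in the ipam.metal3.io group (a sketch; names are illustrative):

```yaml
# metal3 IPClaim (sketch): the metal3-specific claim referred to above,
# distinct from CAPI's IPAddressClaim
apiVersion: ipam.metal3.io/v1alpha1
kind: IPClaim
metadata:
  name: example-claim
spec:
  pool:
    name: example-pool   # reference to an ipam.metal3.io IPPool
```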

@Rozzii (Member) commented May 6, 2024

What is the current state of this ticket? @peppi-lotta, could you please summarize it here? The PR went silent in February.

@peppi-lotta marked this pull request as draft May 7, 2024
@metal3-io-bot added the do-not-merge/work-in-progress label (indicates that a PR should not merge because it is a work in progress) May 7, 2024
@peppi-lotta (Member, Author) commented:

These changes, along with metal3-io/cluster-api-provider-metal3#1442, help make it possible to deploy this ip-address-manager with clusterctl. This has been on hold while I've been researching the possibility of switching CAPM3 to the in-cluster IPAM. That is currently not possible, so I'll continue making this ip-address-manager an IPAM provider for CAPI. I hope to be able to continue with this soon.

@metal3-io-bot (Contributor) commented:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues will close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@metal3-io-bot added the lifecycle/stale label (denotes an issue or PR has remained open with no activity and has become stale) Aug 5, 2024
@Rozzii (Member) commented Aug 6, 2024

/remove-lifecycle stale

@metal3-io-bot removed the lifecycle/stale label Aug 6, 2024
@peppi-lotta closed this Oct 9, 2024
Labels: do-not-merge/work-in-progress, size/S
Projects: Done / Closed
7 participants