Uyuni Terraform integration
According to the documentation, one should use cloud-init, which avoids configuring an SSH connection to the host. If cloud-init cannot be used for some reason, a remote execution provisioner should be used. Terraform even has a section about best practices.
A Uyuni provider, where we can use the output of another resource (an AWS instance, for example) to extract the IP address or DNS name, pass more properties such as the SSH key or a bastion machine, and declaratively onboard systems to Uyuni.
For this solution we need (see the sketch after this list):
- Access to the Uyuni XML-RPC API. This can be a security issue, depending on where we run `terraform apply`.
- To know the private DNS name or IP address of the recently created machine, so that Uyuni can connect to it.
- SSH connection information for Uyuni to connect and bootstrap the machine. This could be a username and password or an authentication key.
- One resource defined per created machine. The number of machines can be dynamic.
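A minimal sketch of what such a provider could look like. The `uyuni` provider and the `uyuni_system` resource are hypothetical and do not exist today; all attribute names are illustrative:

```hcl
# Hypothetical Uyuni provider configuration (no such provider exists yet).
provider "uyuni" {
  api_url  = "https://uyuni.example.com/rpc/api"
  username = var.uyuni_user
  password = var.uyuni_password
}

resource "aws_instance" "minion" {
  count         = var.instance_count
  ami           = var.ami_id
  instance_type = "t3.small"
}

# Hypothetical resource: declaratively onboard each created instance to Uyuni,
# reusing the AWS resource output (private DNS) and passing SSH details.
resource "uyuni_system" "minion" {
  count          = var.instance_count
  host           = aws_instance.minion[count.index].private_dns
  ssh_user       = "ec2-user"
  ssh_key        = var.ssh_private_key
  activation_key = "1-terraform-minions"
}
```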
Cons:
- More codebase to support
- It is not the recommended way to onboard machines into a systems management tool
- Users will be forced to define one more resource per machine
- Possible security issue in exposing the Uyuni XML-RPC API
We can set a run command (cloud-init `runcmd`) when creating the machine which downloads the bootstrap script and registers the machine. No connection between the machine running Terraform and the recently created machine is needed, and no connection between the machine running Terraform and the Uyuni server is needed.
Configure bootstrap script: https://documentation.suse.com/external-tree/en-us/suma/4.0/suse-manager/client-configuration/registration-bootstrap.html
Run command: curl -s http://hub-server.tf.local/pub/bootstrap/bootstrap-default.sh | bash -s
Note: any time `user_data` is updated to change the provisioning, Terraform will destroy and then recreate the machines (with new IP addresses, etc.).
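A minimal sketch, assuming an AWS instance and the bootstrap script URL from the run command above (`hub-server.tf.local` is an example hostname):

```hcl
resource "aws_instance" "minion" {
  ami           = var.ami_id
  instance_type = "t3.small"

  # cloud-init user data: download the Uyuni bootstrap script and run it on first boot.
  user_data = <<-EOF
    #cloud-config
    runcmd:
      - curl -s http://hub-server.tf.local/pub/bootstrap/bootstrap-default.sh | bash -s
  EOF
}
```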
If bootstrapping fails for some reason but the machine is created successfully, ideally there should be a way to onboard these machines. To put it another way: should we find a way to bootstrap an existing machine created with Terraform, or can we use the existing mechanisms and do it by hand?
Cloud-init on AWS examples (a sketch using this data source follows the links):
- https://registry.terraform.io/providers/hashicorp/template/latest/docs/data-sources/cloudinit_config
- https://registry.terraform.io/providers/hashicorp/cloudinit/latest/docs/data-sources/cloudinit_config#example-usage
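A short sketch using the `cloudinit_config` data source from the links above; it combines a cloud-config part with a shell-script part that performs the registration (values are illustrative):

```hcl
# Requires the hashicorp/cloudinit provider.
data "cloudinit_config" "uyuni_bootstrap" {
  gzip          = false
  base64_encode = false

  # First part: generic cloud-config settings.
  part {
    content_type = "text/cloud-config"
    content      = <<-EOF
      #cloud-config
      package_update: true
    EOF
  }

  # Second part: shell script that registers the machine with Uyuni.
  part {
    content_type = "text/x-shellscript"
    content      = <<-EOF
      #!/bin/bash
      curl -s http://hub-server.tf.local/pub/bootstrap/bootstrap-default.sh | bash -s
    EOF
  }
}

resource "aws_instance" "minion" {
  ami           = var.ami_id
  instance_type = "t3.small"
  user_data     = data.cloudinit_config.uyuni_bootstrap.rendered
}
```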
https://www.terraform.io/docs/provisioners/remote-exec.html
A remote call running on the recently created machine. It downloads the bootstrap script and registers the machine in Uyuni. For this solution we need to:
- Open an SSH connection between the machine running Terraform and the recently created one.
- Define a bootstrap script the same way as in the cloud-init section.
- Download and run the command the same way as in the cloud-init section (a sketch follows this list).
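A minimal sketch of this approach, assuming an AWS instance reachable over SSH from the machine running Terraform (hostnames, users, and key paths are illustrative):

```hcl
resource "aws_instance" "minion" {
  ami           = var.ami_id
  instance_type = "t3.small"
  key_name      = var.ssh_key_name

  # SSH connection used by the provisioner below.
  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ec2-user"
    private_key = file(var.ssh_private_key_path)
  }

  # Download and run the Uyuni bootstrap script on the new machine.
  provisioner "remote-exec" {
    inline = [
      "curl -s http://hub-server.tf.local/pub/bootstrap/bootstrap-default.sh | sudo bash -s",
    ]
  }
}
```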
Registration should be done using cloud-init and, in case that is not possible, using the remote-exec provisioner. For both solutions, we do not need to develop any new code. Documentation should be written to help customers define the auto-registration of recently created machines in the cloud.
Return IP addresses and hostnames and offer them in the onboarding page/API. The registration page should also be enhanced to support SSH bastion hosts.
Cons:
- Schema depends on the provider and no backward compatibility is ensured
- Terraform specific solution
VHM (Virtual Host Manager) can connect to cloud providers and virtual host managers to inspect which machines are available. We can develop a feature which looks for hosts registered as systems in Uyuni but no longer visible in the VHM, to find which systems can potentially be removed/deleted.
Machines analyzed for possible deletion should first be linked to machines inspected by the VHM. The workflow can be:
- Register the cloud configuration/provider in the VHM for inspection
- Start registering machines
- Newly registered machines are automatically linked to the corresponding machine in the VHM
- When the deletion analysis runs, only machines with a corresponding match in the VHM should be analyzed. If the match in the VHM is no longer present, the registered machine should/can be proposed for deletion.
This implementation can work with all existing tools (Terraform, CloudFormation, etc.) since it is tied to each cloud provider and not to a specific tool.
A Terraform remote-exec provisioner running a removal script. It should be discarded in favor of option 1, since it is Terraform specific.
Docs: https://www.terraform.io/docs/backends/types/http.html
Uyuni would need to implement all the required API methods. This allows teams to share the Terraform state file. There is no direct support for Terraform workspaces.
Not too difficult to implement, and it would be straightforward to add workspace support. There is a chance we could tie workspaces to CLM environments.
Only very basic security checks are possible. Nothing granular.
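A minimal sketch of how a configuration could point at such a backend, assuming Uyuni exposed a state endpoint over HTTP(S); the URL path below is hypothetical:

```hcl
terraform {
  backend "http" {
    # Hypothetical Uyuni-hosted state endpoint (does not exist today).
    address        = "https://uyuni.example.com/terraform/state/my-project"
    lock_address   = "https://uyuni.example.com/terraform/state/my-project/lock"
    unlock_address = "https://uyuni.example.com/terraform/state/my-project/lock"
    username       = "admin"
    password       = var.uyuni_password
  }
}
```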
High barrier to entry (~300 methods), and the only implementation available at this point is Terraform Cloud/Enterprise. It is not clear whether this could be an option.