As an extension to this issue, which was a workaround to logically implement this open feature request, I tried manually determining which machines are in the Ready state in order to filter node configuration targets.
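For reference, the filtering step itself is a terraform_data resource that shells out to the MAAS CLI and jq, as seen in the log further below. A minimal sketch, assuming a logged-in maas CLI profile in $PROFILE and a writable $READY_NODES_YAML_PATH (and omitting the replace trigger that forces it to re-run on every apply):

```hcl
resource "terraform_data" "fetch_ready_machines_list" {
  provisioner "local-exec" {
    # Write the hostnames of all bare-metal machines currently in the Ready
    # state to a YAML list consumed by the rest of the configuration.
    command = <<-EOT
      maas $PROFILE machines read \
        | jq -r '.[] | select(.virtualmachine_id == null)
                     | select(.status_name == "Ready")
                     | "- \(.hostname)"' > $READY_NODES_YAML_PATH
    EOT
  }
}
```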
However, because of the dependency between resource blocks such as maas_network_interface_link on maas_network_interface_physical, it is impossible to successfully apply Terraform modules that include both resources when multiple machines have to be configured. The TF provider currently passes every error message from the MAAS API straight through to the TF user, and this hard failure heavily limits the utility of any maas_ resource that depends on another resource destined to fail because one of the maas_machines being configured is not in the expected state (i.e. in the Deployed or Release Failed state instead of the Ready state).
In cases like this, it would be better for the provider to "tone down" the error from the MAAS API to something like a warning so that execution of subsequent dependent resources in the module can proceed for those machines that did not encounter any errors.
I currently simulate this behavior using a child module that applies the maas_block_device, maas_network_interface_physical, and maas_network_interface_link resources on a per-machine basis (a sketch of that layout follows the log below). While this approach ensures that all Ready machines are properly configured, the parent module still fails, as expected, with errors like the following:
terraform_data.fetch_ready_machines_list: Destroying... [id=c86987d7-2132-fe7c-d987-0201bcf84f9f]
terraform_data.fetch_ready_machines_list: Destruction complete after 0s
terraform_data.fetch_ready_machines_list: Creating...
terraform_data.fetch_ready_machines_list: Provisioning with 'local-exec'...
terraform_data.fetch_ready_machines_list (local-exec): Executing: ["/bin/sh" "-c" "maas $PROFILE machines read | jq -r '.[] | select( .virtualmachine_id == null ) | select( .status_name == \"Ready\") | \"- \\(.hostname)\"' > $READY_NODES_YAML_PATH\n"]
maas_network_interface_link.configure_subnet["compute-node05ob48-enp0s25"]: Destroying... [id=87]
maas_network_interface_link.configure_subnet["compute-node04ob48-usb0"]: Destroying... [id=86]
maas_network_interface_link.configure_subnet["compute-node04ob48-enp0s25"]: Destroying... [id=89]
maas_block_device.disks["compute-node08ob48-sdb"]: Destroying... [id=40]
maas_block_device.disks["compute-node04ob48-sdb"]: Destroying... [id=29]
maas_block_device.disks["compute-node08ob48-sda"]: Destroying... [id=26]
maas_block_device.disks["compute-node05ob48-sda"]: Destroying... [id=30]
maas_network_interface_link.configure_subnet["compute-node06ob48-usb0"]: Destroying... [id=78]
maas_block_device.disks["compute-node09ob48-sdb"]: Destroying... [id=35]
maas_network_interface_link.configure_subnet["compute-node05ob48-usb0"]: Destroying... [id=82]
maas_network_interface_link.configure_subnet["compute-node08ob48-enp0s25"]: Destroying... [id=80]
maas_block_device.disks["compute-node06ob48-sda"]: Destroying... [id=32]
maas_block_device.disks["compute-node09ob48-sda"]: Destroying... [id=38]
maas_network_interface_link.configure_subnet["controller-node01ob48-enp0s25"]: Destroying... [id=84]
maas_block_device.disks["controller-node02ob48-sda"]: Destroying... [id=42]
maas_network_interface_link.configure_subnet["compute-node10ob48-enp0s25"]: Destroying... [id=94]
maas_block_device.disks["compute-node10ob48-sda"]: Destroying... [id=33]
maas_network_interface_link.configure_subnet["controller-node02ob48-usb0"]: Destroying... [id=90]
terraform_data.fetch_ready_machines_list: Still creating... [10s elapsed]
maas_network_interface_link.configure_subnet["compute-node10ob48-usb0"]: Destroying... [id=93]
terraform_data.fetch_ready_machines_list: Creation complete after 12s [id=f28a571d-8cdd-b486-1122-94d879e81aac]
...truncated-output...
module.disk-nic-config["node07ob48"].maas_network_interface_physical.configure_nic["compute-only-node07ob48-usb0"]: Creating...
module.disk-nic-config["node07ob48"].maas_network_interface_physical.configure_nic["compute-only-node07ob48-enp0s25"]: Creating...
module.disk-nic-config["node07ob48"].maas_network_interface_physical.configure_nic["compute-only-node07ob48-enp0s25"]: Creation complete after 8s [id=177]
module.disk-nic-config["node07ob48"].maas_network_interface_physical.configure_nic["compute-only-node07ob48-usb0"]: Creation complete after 8s [id=178]
module.disk-nic-config["node07ob48"].maas_network_interface_link.configure_subnet["compute-only-node07ob48-usb0"]: Creating...
module.disk-nic-config["node07ob48"].maas_network_interface_link.configure_subnet["compute-only-node07ob48-enp0s25"]: Creating...
module.disk-nic-config["node07ob48"].maas_network_interface_link.configure_subnet["compute-only-node07ob48-enp0s25"]: Creation complete after 3s [id=406]
module.disk-nic-config["node07ob48"].maas_network_interface_link.configure_subnet["compute-only-node07ob48-usb0"]: Creation complete after 4s [id=405]
╷
│ Error: ServerError:409 Conflict (Cannot unlink subnet interface because the machine is not New, Ready, Allocated, or Broken.)
│
│
╵
╷
│ Error: ServerError:409 Conflict (Cannot unlink subnet interface because the machine is not New, Ready, Allocated, or Broken.)
│
│
╵
╷
│ Error: ServerError:409 Conflict (Cannot delete block device because the machine is not Ready.)
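For context, the child/parent module layout looks roughly like the sketch below. Variable names and the local values feeding the parent call are illustrative assumptions from my own configuration, and the resource arguments are abridged (the provider docs remain the authoritative schema):

```hcl
# Child module (./modules/disk-nic-config): configure one machine's disks and NICs.
variable "machine" { type = string }   # MAAS hostname of a single Ready machine
variable "disks"   { type = map(any) } # disk name => attributes (illustrative shape)
variable "nics"    { type = map(any) } # NIC name  => attributes (illustrative shape)

resource "maas_block_device" "disks" {
  for_each       = var.disks
  machine        = var.machine
  name           = each.key
  size_gigabytes = each.value.size_gigabytes
  id_path        = each.value.id_path
}

resource "maas_network_interface_physical" "configure_nic" {
  for_each    = var.nics
  machine     = var.machine
  name        = each.key
  mac_address = each.value.mac_address
}

# The reference below is exactly the dependency described above: when the
# physical NIC resource fails on a non-Ready machine, the link resource (and
# with it the whole apply) fails too.
resource "maas_network_interface_link" "configure_subnet" {
  for_each          = var.nics
  machine           = var.machine
  network_interface = maas_network_interface_physical.configure_nic[each.key].id
  subnet            = each.value.subnet
  mode              = "AUTO"
}
```

The parent module then instantiates the child once per machine reported as Ready by the filtering step (local.ready_machines and the *_by_machine lookups are assumed helpers):

```hcl
module "disk-nic-config" {
  source   = "./modules/disk-nic-config"
  for_each = toset(local.ready_machines)

  machine = each.key
  disks   = local.disks_by_machine[each.key]
  nics    = local.nics_by_machine[each.key]
}
```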
The provider should offer one of the following features for configuring maas_machine resources in an IaC/scriptable way:
1. Filter the machines that configurations will be applied to before calling the MAAS API; reporting warnings for the machines that fail the filter would be acceptable.
2. Ignore or refresh the state for such operations against the current infrastructure status before proceeding to configure the node(s); ideally this would happen during the plan phase itself.
3. Implement a single machine configuration resource that takes care of both storage and networking on a per-machine level. I am not sure whether a "wrapper" resource could do this job, but it should be able to solve the dependency issues above (a hypothetical shape is sketched after this list).
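To make the third option concrete, a wrapper resource could perhaps look like the sketch below. This is purely hypothetical: the resource type, its arguments, and all values are invented for illustration and do not exist in the provider today.

```hcl
# Hypothetical single per-machine configuration resource (does not exist).
resource "maas_machine_configuration" "node07ob48" {
  machine        = "node07ob48"
  allowed_states = ["Ready"] # skip machines in any other state...
  on_state_error = "warn"    # ...and emit a warning instead of failing the apply

  block_device {
    name           = "sda"
    size_gigabytes = 447 # placeholder
  }

  network_interface {
    name        = "enp0s25"
    mac_address = "00:00:00:00:00:00" # placeholder

    link {
      subnet = "10.0.0.0/24" # placeholder
      mode   = "AUTO"
    }
  }
}
```

Because storage, NIC, and link configuration would live in one resource, the provider could validate the machine's state once, up front, and either skip or warn instead of propagating a 409 from the MAAS API mid-apply.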