
Error marshaling MachineConfigurationInput #80

Open
trarbr opened this issue Dec 21, 2023 · 12 comments

@trarbr

trarbr commented Dec 21, 2023

Hello 👋 New user of Talos and Pulumi here. I hope I am creating this issue in the right place - if not, please point me to where it should be.

My goal is to use Pulumi to provision a Talos cluster on VMs running in Azure. Right now I have provisioned a single VM (and all the related Azure resources), and the VM uses the Talos Linux 1.6.0 disk image from the community gallery. I want this VM to act as a control plane node.

I am working from the example in the Pulumi registry (https://www.pulumi.com/registry/packages/talos/), with the YAML engine. Just copy/pasting that YAML and running pulumi up gives me this error:

% pulumi up
Previewing update (dev):
     Type                 Name                          Plan     Info
     pulumi:pulumi:Stack  provision-sandbox-yaml-2-dev           2 errors

Diagnostics:
  pulumi:pulumi:Stack (provision-sandbox-yaml-2-dev):
    error: rpc error: code = Unknown desc = invocation of talos:machine/getConfiguration:getConfiguration returned an error: cannot encode config to call ReadDataSource for "talos_machine_configuration": objectEncoder failed on property "machine_secrets": objectEncoder failed on property "secrets": objectEncoder failed on property "bootstrap_token": Expected a string, got: {map[]}
    
      on Pulumi.yaml line 6:
       6:     fn::invoke:
       7:       function: talos:machine/getConfiguration:getConfiguration
       8:       arguments:
       9:         clusterName: "exampleCluster"
      10:         machineType: "controlplane"
      11:         clusterEndpoint: "https://cluster.local:6443"
      12:         machineSecrets: ${secrets.machineSecrets}
      13:       return: machineConfiguration
    error: an unhandled error occurred: waiting for RPCs: marshaling properties: awaiting input property "machineConfigurationInput": runtime error

My Pulumi program looks like this:

name: provision-sandbox-yaml-2
runtime: yaml
description: A minimal Pulumi YAML program
variables:
  configuration:
    fn::invoke:
      function: talos:machine/getConfiguration:getConfiguration
      arguments:
        clusterName: "exampleCluster"
        machineType: "controlplane"
        clusterEndpoint: "https://cluster.local:6443"
        machineSecrets: ${secrets.machineSecrets}
      return: machineConfiguration

resources:
  secrets:
    type: talos:machine/secrets:Secrets
  configurationApply:
    type: talos:machine/configurationApply:ConfigurationApply
    properties:
      clientConfiguration: ${secrets.clientConfiguration}
      machineConfigurationInput: ${configuration}
      node: "10.5.0.2"
      configPatches:
        - fn::toJSON:
            machine:
              install:
                disk: "/dev/sdd"
  bootstrap:
    type: talos:machine:Bootstrap
    properties:
      node: "10.5.0.2"
      clientConfiguration: ${secrets.clientConfiguration}
    options:
      dependsOn:
        - ${configurationApply}

outputs: {}

Let me know if you need more context/logging.

@UnstoppableMango
Collaborator

Yep, you're in the right spot!

At a glance your program looks fine, but I'll be honest: I haven't actually used the YAML provider yet, so the semantics are a little unfamiliar. I'll see if I can reproduce locally.

I assume in your actual program you've replaced clusterEndpoint: "https://cluster.local:6443" with your Azure VM's details?

@trarbr
Author

trarbr commented Dec 22, 2023 via email

@trarbr
Author

trarbr commented Jan 2, 2024

Happy New Year 😄

I've done a bit of additional debugging.

The following is a complete program that attempts to stand up a Talos control plane node on a VM in Azure:

name: sandbox-provisioner
runtime: yaml
description: Provision the sandbox infrastructure to run the entire substation gateway
variables:
  centralManagementEndpoint:
    fn::invoke:
      arguments:
        expand: ${controlPlaneCentralManagementVirtualMachine.id}
        publicIpAddressName: ${controlPlaneCentralManagementPublicIPAddress.name}
        resourceGroupName: ${resourceGroup.name}
      function: azure-native:network:getPublicIPAddress
      return: ipAddress
  talosConfiguration:
    fn::invoke:
      function: talos:machine/getConfiguration:getConfiguration
      arguments:
        clusterName: "central-management-cluster"
        machineType: "controlplane"
        clusterEndpoint: "https://${centralManagementEndpoint}:6443"
        machineSecrets: ${talosSecrets.machineSecrets}
      return: machineConfiguration
outputs:
  centralManagementEndpoint: ${centralManagementEndpoint}
  talosSecrets: ${talosSecrets.machineSecrets}
resources:
  talosSecrets:
    type: talos:machine/secrets:Secrets
  # talosConfigurationApply:
  #   type: talos:machine/configurationApply:ConfigurationApply
  #   properties:
  #     clientConfiguration: ${talosSecrets.clientConfiguration}
  #     machineConfigurationInput: ${talosConfiguration}
  #     node: "10.0.1.4" # TODO: Figure out how to pull this from the NetworkInterface
  #     configPatches:
  #      - fn::toJSON:
  #         machine:
  #           install:
  #             disk: "/dev/sdd"
  # talosBootstrap:
  #   type: talos:machine:Bootstrap
  #   properties:
  #     node: "10.0.1.4"
  #     clientConfiguration: ${talosSecrets.clientConfiguration}
  #   options:
  #     dependsOn:
  #       - ${talosConfigurationApply}
  resourceGroup:
    type: azure-native:resources:ResourceGroup
    properties:
      resourceGroupName: rg-sandbox-cluster-we-001
  networkSecurityGroup: # we could have different network security groups for control plane and worker nodes
    type: azure-native:network:NetworkSecurityGroup
    properties:
      networkSecurityGroupName: sandbox-network-security-group
      resourceGroupName: ${resourceGroup.name}
      securityRules:
        - access: Allow
          destinationAddressPrefix: '*'
          destinationPortRange: '50000'
          direction: Inbound
          name: apid
          priority: 1001
          protocol: Tcp
          sourceAddressPrefix: '*'
          sourcePortRange: '*'
        - access: Allow
          destinationAddressPrefix: '*'
          destinationPortRange: '50001'
          direction: Inbound
          name: trustd
          priority: 1002
          protocol: Tcp
          sourceAddressPrefix: '*'
          sourcePortRange: '*'
        - access: Allow
          destinationAddressPrefix: '*'
          destinationPortRange: '2379-2380'
          direction: Inbound
          name: etcd
          priority: 1003
          protocol: Tcp
          sourceAddressPrefix: '*'
          sourcePortRange: '*'
        - access: Allow
          destinationAddressPrefix: '*'
          destinationPortRange: '6443'
          direction: Inbound
          name: k8s
          priority: 1004
          protocol: Tcp
          sourceAddressPrefix: '*'
          sourcePortRange: '*'
  centralManagementNetwork:
    type: azure-native:network:VirtualNetwork
    properties:
      virtualNetworkName: central-management-network
      resourceGroupName: ${resourceGroup.name}
      addressSpace:
        addressPrefixes:
          - 10.0.0.0/16
  controlPlaneCentralManagementSubnet:
    type: azure-native:network:Subnet
    properties:
      subnetName: control-plane-central-management-subnet
      virtualNetworkName: ${centralManagementNetwork.name}
      resourceGroupName: ${resourceGroup.name}
      addressPrefix: 10.0.1.0/24
  controlPlaneCentralManagementPublicIPAddress:
    type: azure-native:network:PublicIPAddress
    properties:
      publicIpAddressName: control-plane-central-management-node-01-public-ip
      resourceGroupName: ${resourceGroup.name}
      publicIPAllocationMethod: Static
      publicIPAddressVersion: IPv4
  controlPlaneCentralManagementNetworkInterface:
    type: azure-native:network:NetworkInterface
    properties:
      networkInterfaceName: control-plane-central-management-node-01-network-interface
      networkSecurityGroup:
        id: ${networkSecurityGroup.id}
      resourceGroupName: ${resourceGroup.name}
      ipConfigurations:
        - name: ipconfig
          publicIPAddress:
            id: ${controlPlaneCentralManagementPublicIPAddress.id}
          subnet:
            id: ${controlPlaneCentralManagementSubnet.id}
  controlPlaneCentralManagementVirtualMachine:
    type: azure-native:compute:VirtualMachine
    properties:
      vmName: control-plane-central-management-virtual-machine
      resourceGroupName: ${resourceGroup.name}
      osProfile:
        computerName: control-plane-central-management
        adminUsername: talosadmin # ignored by Talos but required by Pulumi
        adminPassword: talosadmin # ignored by Talos but required by Pulumi
        # customData: '' # TODO: Talos machine stuff
      hardwareProfile:
        vmSize: Standard_B4as_v2 # 4 vCPU, 16 GiB RAM, recommended by Talos
      networkProfile:
        networkInterfaces:
          - id: ${controlPlaneCentralManagementNetworkInterface.id}
            primary: true
      storageProfile:
        imageReference:
          communityGalleryImageId: /CommunityGalleries/siderolabs-c4d707c0-343e-42de-b597-276e4f7a5b0b/Images/talos-x64/Versions/1.6.0
        osDisk:
          name: control-plane-central-management-os-disk
          caching: ReadWrite
          createOption: FromImage
          managedDisk:
            storageAccountType: Premium_LRS

When I run this program, I get the following output:

% pulumi up -y 
Previewing update (sandbox):
     Type                 Name                         Plan     Info
     pulumi:pulumi:Stack  sandbox-provisioner-sandbox           1 error

Diagnostics:
  pulumi:pulumi:Stack (sandbox-provisioner-sandbox):
    error: rpc error: code = Unknown desc = invocation of talos:machine/getConfiguration:getConfiguration returned an error: cannot encode config to call ReadDataSource for "talos_machine_configuration": objectEncoder failed on property "machine_secrets": objectEncoder failed on property "secrets": objectEncoder failed on property "bootstrap_token": Expected a string, got: {map[]}
    
      on Pulumi.yaml line 14:
      14:     fn::invoke:
      15:       function: talos:machine/getConfiguration:getConfiguration
      16:       arguments:
      17:         clusterName: "central-management-cluster"
      18:         machineType: "controlplane"
      19:         clusterEndpoint: "https://${centralManagementEndpoint}:6443"
      20:         machineSecrets: ${talosSecrets.machineSecrets}
      21:       return: machineConfiguration


Updating (sandbox):
     Type                 Name                         Status         Info
     pulumi:pulumi:Stack  sandbox-provisioner-sandbox  **failed**     1 error

Diagnostics:
  pulumi:pulumi:Stack (sandbox-provisioner-sandbox):
    error: rpc error: code = Unknown desc = invocation of talos:machine/getConfiguration:getConfiguration returned an error: cannot encode config to call ReadDataSource for "talos_machine_configuration": objectEncoder failed on property "machine_secrets": objectEncoder failed on property "cluster": objectEncoder failed on property "secret": Expected a string, got: {map[]}
    
      on Pulumi.yaml line 14:
      14:     fn::invoke:
      15:       function: talos:machine/getConfiguration:getConfiguration
      16:       arguments:
      17:         clusterName: "central-management-cluster"
      18:         machineType: "controlplane"
      19:         clusterEndpoint: "https://${centralManagementEndpoint}:6443"
      20:         machineSecrets: ${talosSecrets.machineSecrets}
      21:       return: machineConfiguration

Outputs:
    centralManagementEndpoint: "51.124.226.166"
    talosSecrets             : {
        certs     : {
            etcd             : {
                cert: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJmekNDQVNTZ0F3SUJBZ0lSQUpROEVQRTFpbkFuZzgvTUFnQUprOW93Q2dZSUtvWkl6ajBFQXdJd0R6RU4KTUFzR0ExVUVDaE1FWlhSalpEQWVGdzB5TkRBeE1ESXdPRE0yTlRCYUZ3MHpNekV5TXpBd09ETTJOVEJhTUE4eApEVEFMQmdOVkJBb1RCR1YwWTJRd1dUQVRCZ2NxaGtqT1BRSUJCZ2dxaGtqT1BRTUJCd05DQUFRMDBOb3VUdzFZCkdPUnRIN3NyN2o3TjkvdFZjNkp0bm5XQkxYUVlMaHRyY0UwWWJURy9xZDloT1JNck11d3Z3NUVEYk5ZTFdsNDcKQjE2TnZSK3B2Ym14bzJFd1h6QU9CZ05WSFE4QkFmOEVCQU1DQW9Rd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSApBd0VHQ0NzR0FRVUZCd01DTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3SFFZRFZSME9CQllFRk9wdHhBbEZVaDJkCjNLOER3S1R2M29zcFNwR2ZNQW9HQ0NxR1NNNDlCQU1DQTBrQU1FWUNJUUNJZG94SVpqRlM4c3g5OEhzNnoyUlIKYmRRVFE2bExFb21oeXBIdlFUL2ZPd0loQUtPeXBVeVNqVDQwc2JvZ2c1bmFKK3kveFdvUnZYU2o1WWJVbDZXbwp1ZW04Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
                key : {}
            }
            k8s              : {
                cert: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJpVENDQVMrZ0F3SUJBZ0lRTEFlUWd0SW9KbWtDYTJhbkduVFdCekFLQmdncWhrak9QUVFEQWpBVk1STXcKRVFZRFZRUUtFd3ByZFdKbGNtNWxkR1Z6TUI0WERUSTBNREV3TWpBNE16WTFNRm9YRFRNek1USXpNREE0TXpZMQpNRm93RlRFVE1CRUdBMVVFQ2hNS2EzVmlaWEp1WlhSbGN6QlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VICkEwSUFCTURhUWJDdGZOd0VOaC9CbXZtRkMzNmZ3VXd4UjlPSmt3R3NURmRLWUtsTU5wMGJ1ck56ZVI4eCtWeUYKTGQzdm5yUXZZN0h0cDdCOHdHSnhTeFd2cU5hallUQmZNQTRHQTFVZER3RUIvd1FFQXdJQ2hEQWRCZ05WSFNVRQpGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFCkZnUVVMTVJrNXE3ZFN5YkIyM2xHQ3M4YXNIbnBBOHd3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUlnREdsekUrYVgKaUh0WG8xdDg4bGZPYVl3bnJFeTh5RWxHMmFQeE15UDQzamdDSVFDUFZQa00vN1JNMDFId1V1WEkxditCZ2sxbQpxYUZzUXN4UHdZRkFqdXk0VHc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
                key : {}
            }
            k8sAggregator    : {
                cert: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJYekNDQVFhZ0F3SUJBZ0lSQUtlRXVhTmRWRi9qK2s4bmg4K2J0Rzh3Q2dZSUtvWkl6ajBFQXdJd0FEQWUKRncweU5EQXhNREl3T0RNMk5UQmFGdzB6TXpFeU16QXdPRE0yTlRCYU1BQXdXVEFUQmdjcWhrak9QUUlCQmdncQpoa2pPUFFNQkJ3TkNBQVExOVRmUnFPRks1MkpyaHlBbEI4aDVVR1pMaDdPUVVvekloWGRPb1pUU3VvZWdXalF4Cjhia3NQUGVHL2N3QnVRRTlDTW12NW1RT2pzemR1ckdxWkdiam8yRXdYekFPQmdOVkhROEJBZjhFQkFNQ0FvUXcKSFFZRFZSMGxCQll3RkFZSUt3WUJCUVVIQXdFR0NDc0dBUVVGQndNQ01BOEdBMVVkRXdFQi93UUZNQU1CQWY4dwpIUVlEVlIwT0JCWUVGTnFrbzlITlY5bUo3VUhkSHc5eStSeC9ieWVBTUFvR0NDcUdTTTQ5QkFNQ0EwY0FNRVFDCklFQzAwWWI2YmVNbUhma3JHVEcxeHVtRWdxaVFOK2t5b1VkWGpVUmdFVFY2QWlCQkZha2dXRzVsUW5yaUZ0cXQKSXVGc29mcFFhLzJJWmJZdENkU1JRYlhGNnc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
                key : {}
            }
            k8sServiceaccount: {
                key: {}
            }
            os               : {
                cert: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJQakNCOGFBREFnRUNBaEE1RHkra25NUnBZUStoQWJMaUFjVW5NQVVHQXl0bGNEQVFNUTR3REFZRFZRUUsKRXdWMFlXeHZjekFlRncweU5EQXhNREl3T0RNMk5UQmFGdzB6TXpFeU16QXdPRE0yTlRCYU1CQXhEakFNQmdOVgpCQW9UQlhSaGJHOXpNQ293QlFZREsyVndBeUVBU2g0VG9OYlYyK3FmYzR0aXViMXVRRjRkUFpNR2N2a215b0RFClgyd08yMzZqWVRCZk1BNEdBMVVkRHdFQi93UUVBd0lDaERBZEJnTlZIU1VFRmpBVUJnZ3JCZ0VGQlFjREFRWUkKS3dZQkJRVUhBd0l3RHdZRFZSMFRBUUgvQkFVd0F3RUIvekFkQmdOVkhRNEVGZ1FVMnY3bVRFOGV1aWhocnUzRgpsbXdmTXdUVFpQTXdCUVlESzJWd0EwRUFIQnVuZ0hmU3l1d0l6ZTBDVmRjL01GZUg1a1J5U0ZmSWNVaHlGQjQxCnM4UmROZ0J1QUhiZDhTSkR5WVF2QUdpUHpXaURnM0c5TzhvWE1kSjVjMXlqRFE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
                key : {}
            }
        }
        cluster   : {
            id    : "_hMmvjK5TtnbZ-izSUil7KrMQ55xhw5rUTeeXiXdXDM="
            secret: {}
        }
        secrets   : {
            bootstrapToken           : {}
            secretboxEncryptionSecret: {}
        }
        trustdinfo: {
            token: {}
        }
    }

Resources:
    9 unchanged

Duration: 2s

This is with the v0.2.0 of the Talos provider (I also tried v0.1.8, same result).

The specific property that triggers the error varies from run to run, but it always seems to be one of those that prints as an empty map in the talosSecrets output.

@UnstoppableMango
Collaborator

Happy new year!

Sorry for the delay, I took a look before the holidays but didn't get very far.

I'm able to reproduce, and it seems like something is failing to create the Secrets.machineSecrets.certs.*.key properties, and the error surfaces because something else is treating them as objects instead of strings. You can see the key : {} lines in your output. I'm not sure whether that represents null and Pulumi just formats it as {} for display, or whether it's actually {} at that point. It could be a clue as to where in the process the error is happening.

Going out on a limb, the keys are probably null to begin with and either the YAML parser or the gRPC ser/des is making its best guess that they were supposed to be objects. I'm not familiar with the inner workings of Talos, so I can't say whether those fields start as null or whether it's correct for them to be null. Judging by the fact that we have Certificate.key marked as required in the provider, I'd wager they're not supposed to be null. This provider is a Terraform wrapper, so there's not much custom code in the repo. If I had to guess, I'd say the bug is either in our type mappings in provider/resources.go or in the YAML provider.

I double-checked with the TypeScript SDK and that's still working fine. You could try a different SDK in the meantime while we work this out.
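
For reference, a minimal TypeScript sketch of the same flow. The package name (@pulumiverse/talos) and the function and property names are assumed from the registry example and the YAML program above, so treat this as an approximation rather than a verified program:

import * as talos from "@pulumiverse/talos";

// Generate machine secrets and feed them into getConfiguration,
// mirroring the YAML program above.
const secrets = new talos.machine.Secrets("secrets");

const configuration = talos.machine.getConfigurationOutput({
    clusterName: "exampleCluster",
    machineType: "controlplane",
    clusterEndpoint: "https://cluster.local:6443",
    machineSecrets: secrets.machineSecrets,
});

const configurationApply = new talos.machine.ConfigurationApply("configurationApply", {
    clientConfiguration: secrets.clientConfiguration,
    machineConfigurationInput: configuration.machineConfiguration,
    node: "10.5.0.2",
    // config patches are JSON strings, equivalent to fn::toJSON in YAML
    configPatches: [JSON.stringify({ machine: { install: { disk: "/dev/sdd" } } })],
});

new talos.machine.Bootstrap("bootstrap", {
    node: "10.5.0.2",
    clientConfiguration: secrets.clientConfiguration,
}, { dependsOn: [configurationApply] });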

You could also try populating the secrets yourself. The Sidero folks made a pretty nice API where you can just pass machineSecrets: ${secrets.machineSecrets}, but machineSecrets is a full object and nothing stops you from building it up yourself.

I'm still learning the YAML SDK so my code is a little janky, but it would look something like this:

configuration:
  fn::invoke:
    function: talos:machine/getConfiguration:getConfiguration
    arguments:
      clusterName: "exampleCluster"
      machineType: "controlplane"
      clusterEndpoint: "https://cluster.local:6443"
      machineSecrets:
        certs:
          etcd:
            cert: ${secrets.machineSecrets.certs.etcd.cert}
            key: ${secrets.machineSecrets.certs.etcd.key}
          k8s:
            cert: ${secrets.machineSecrets.certs.k8s.cert}
            key: ${secrets.machineSecrets.certs.k8s.key}
          k8s_aggregator:
            cert: ${secrets.machineSecrets.certs.k8s_aggregator.cert}
            key: ${secrets.machineSecrets.certs.k8s_aggregator.key}
          k8s_serviceaccount:
            key: ${secrets.machineSecrets.certs.k8s_serviceaccount.key}
          os:
            cert: ${secrets.machineSecrets.certs.os.cert}
            key: ${secrets.machineSecrets.certs.os.key}
        cluster:
          id: ${secrets.machineSecrets.cluster.id}
          secret: ${secrets.machineSecrets.cluster.secret}
        secrets:
          bootstrapToken: ${secrets.machineSecrets.secrets.bootstrap_token}
          secretboxEncryptionSecret: ${secrets.machineSecrets.secrets.secretbox_encryption_secret}
        trustdinfo:
          token: ${secrets.machineSecrets.trustdinfo.token}
    return: machineConfiguration

To get certs for testing (or to use, if you so desire), you can use the Pulumi TLS package:

key:
  type: tls:index/privateKey:PrivateKey
  properties:
    rsaBits: 256
    algorithm: ECDSA

cert:
  type: tls:index/selfSignedCert:SelfSignedCert
  properties:
    allowedUses: ['any_extended']
    privateKeyPem: ${key.privateKeyPem}
    validityPeriodHours: 60

@UnstoppableMango
Collaborator

Updated sample; this one at least runs.

name: pulumi-talos-80-repro
runtime: yaml
description: A minimal Pulumi YAML program
variables:
  configuration:
    fn::invoke:
      function: talos:machine/getConfiguration:getConfiguration
      arguments:
        clusterName: "exampleCluster"
        machineType: "controlplane"
        clusterEndpoint: "https://cluster.local:6443"
        machineSecrets:
          certs:
            etcd:
              cert: ${secrets.machineSecrets.certs.etcd.cert}
              key: ${secrets.machineSecrets.certs.etcd.key}
            k8s:
              cert: ${secrets.machineSecrets.certs.k8s.cert}
              key: ${secrets.machineSecrets.certs.k8s.key}
            k8sAggregator:
              cert:
                fn::toBase64: ${cert.certPem}
              key:
                fn::toBase64: ${key.privateKeyPem}
            k8sServiceaccount:
              key:
                fn::toBase64: 'somekey'
            os:
              cert: ${secrets.machineSecrets.certs.os.cert}
              key: ${secrets.machineSecrets.certs.os.key}
          cluster:
            id: ${secrets.machineSecrets.cluster.id}
            secret: ${secrets.machineSecrets.cluster.secret}
          secrets:
            bootstrapToken:
              fn::toBase64: 'sometoken'
            secretboxEncryptionSecret:
              fn::toBase64: 'somesecret'
          trustdinfo:
            token: ${secrets.machineSecrets.trustdinfo.token}
      return: machineConfiguration

resources:
  secrets:
    type: talos:machine/secrets:Secrets

  key:
    type: tls:index/privateKey:PrivateKey
    properties:
      rsaBits: 256
      algorithm: ECDSA

  cert:
    type: tls:index/selfSignedCert:SelfSignedCert
    properties:
      allowedUses: ['any_extended']
      privateKeyPem: ${key.privateKeyPem}
      validityPeriodHours: 60

  # configurationApply:
  #   type: talos:machine/configurationApply:ConfigurationApply
  #   properties:
  #     clientConfiguration: ${secrets.clientConfiguration}
  #     machineConfigurationInput: ${configuration}
  #     node: "10.5.0.2"
  #     configPatches:
  #       - fn::toJSON:
  #           machine:
  #             install:
  #               disk: "/dev/sdd"

  # bootstrap:
  #   type: talos:machine:Bootstrap
  #   properties:
  #     node: "10.5.0.2"
  #     clientConfiguration: ${secrets.clientConfiguration}
  #   options:
  #     dependsOn:
  #       - ${configurationApply}

outputs:
  secrets: ${secrets.machineSecrets}
  # config: ${configuration}
  pem: ${cert.certPem}
  pem64:
    fn::toBase64: ${cert.certPem}

@trarbr
Author

trarbr commented Jan 3, 2024

Thank you very much for the example; it is very useful.

While testing your example, I found a potentially related issue: properties that have an underscore in their name, e.g. k8s_aggregator, are converted from snake case to camel case somewhere, and it's not possible to access them in YAML.

I tested it with a few variations:

  • ${talosSecrets.machineSecrets.certs.k8s_aggregator.cert}.
    Fails with: error: receiver must be a list or object, not nil

  • ${talosSecrets.machineSecrets.certs.k8sAggregator.cert}
    Fails with:

    Error: k8sAggregator does not exist on talosSecrets.machineSecrets.certs
      on Pulumi.yaml line 187:
    187:         ${talosSecrets.machineSecrets.certs.k8sAggregator.cert}
    Existing properties are: k8s_aggregator, k8s, etcd, k8s_serviceaccount, os
    
  • ${talosSecrets.machineSecrets.certs["k8s_aggregator"].cert}
    Fails with: Index property access is only allowed on Maps and Lists

I don't know if this is related to the original issue, but I just wanted to call it out.

@UnstoppableMango
Collaborator

Regarding the underscores, I believe that is an issue with how Pulumi logs outputs. When you write your program, you shouldn't use underscores. I'll see if there is an existing issue in the Pulumi repo; I think that problem has been around for a while :)

@UnstoppableMango
Collaborator

I'm not sure if it's related, but I was able to get a similar underscore problem in a TS Pulumi program. I think something is mixed up with the property naming.
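
For illustration, a rough TypeScript sketch of the kind of property access involved (the camelCase names are assumed from the generated SDK types and the Pulumi output above; this is a hypothetical example, not the exact program referenced):

import * as talos from "@pulumiverse/talos";

const secrets = new talos.machine.Secrets("secrets");

// The SDK types expose the cert bundle under camelCase names such as
// k8sAggregator, while the value coming back from the engine appears to
// keep the snake_case keys (k8s_aggregator), so a lookup like this can
// resolve to undefined at runtime even though it type-checks.
export const aggregatorCert = secrets.machineSecrets.apply(
    ms => ms.certs.k8sAggregator?.cert,
);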

@pjoomen

pjoomen commented Feb 24, 2024

It looks like the Python Pulumi program has a similar issue (i.e. property naming and a camelCase/underscore mismatch):

Exception: invoke of talos:machine/getConfiguration:getConfiguration failed: invocation of talos:machine/getConfiguration:getConfiguration returned an error: [AttributeName("machine_secrets").AttributeName("certs").AttributeName("k8s_serviceaccount")] Missing Configuration for Required Attribute: Must set a configuration value for the machine_secrets.certs.k8s_serviceaccount attribute as the provider has marked it as required.

Inspecting the secret (generated with talos.machine.Secrets("secrets")) shows:

"k8s_serviceaccount": null,

Inspecting a manually generated secrets.yaml shows that the key being used is k8sserviceaccount, and I get the same error when manually importing this resource.

@pjoomen

pjoomen commented Feb 24, 2024

Python program:

import pulumiverse_talos as talos

this_secrets = talos.machine.Secrets("thisSecrets")
this_configuration = talos.machine.get_configuration_output(
    cluster_name="example-cluster",
    machine_type="controlplane",
    cluster_endpoint="https://cluster.local:6443",
    machine_secrets=this_secrets.machine_secrets,
)

Results in:

Exception: invoke of talos:machine/getConfiguration:getConfiguration failed: invocation of talos:machine/getConfiguration:getConfiguration returned an error: [AttributeName("machine_secrets").AttributeName("secrets").AttributeName("bootstrap_token")] Missing Configuration for Required Attribute: Must set a configuration value for the machine_secrets.secrets.bootstrap_token attribute as the provider has marked it as required.

@UnstoppableMango
Collaborator

I've done some more testing and am pretty confident the properties need to be renamed. I was hoping to get some integration tests added first so I could be more confident in the fix, and I've got those in a branch. I should be able to get a branch up for the actual rename fix eventually.

@ringods
Member

ringods commented Mar 24, 2024

I think this is an issue in the code generation for the SDKs when using ExtraTypes in the schema. I filed an issue for further investigation:

pulumi/pulumi-terraform-bridge#1786
