Platform Request: Proxmox #736
Comments
To be clear: do you know if Proxmox requires that the userdata be specifically in cloud-config format? Fedora CoreOS expects the user to be able to pass arbitrarily-formatted userdata (containing an Ignition config). FCOS doesn't use or support cloud-init.
(And thanks for writing this up!)
Thank you for the reply.
Hmm, let me rephrase the question. I assume there's a web page or API call where I can configure the userdata for a new VM, right? If I put a JSON file in the userdata instead of a cloud-config, will Proxmox prevent me from publishing that JSON file to the VM?
Hmm. The point is that you fill in a form containing user data, yes. I mean, there is a form where you enter this information (SSH key, network config), but it is a key=value form, and at the end of the day it is exposed in cloud-config format to the VM. There is no way to supply an arbitrary JSON file.
We discussed this during the community meeting today. #738 and #739 were raised as topics for further discussion.
I was able to get Fedora CoreOS running on Proxmox by bypassing the cloud-init stuff in the UI, and instead mounting the Ignition file with a qemu arg. You can reproduce this by following the autologin tutorial and using this script. Place the qcow2 image and Ignition JSON in the right place.

#!/bin/bash
set -x
VMID=101
IMG=/root/fedora-coreos.qcow2
POOL=local-zfs
IGN=/mnt/pve/myserver/snippets/autologin.ign
qm status $VMID
if [[ "$?" == 0 ]]; then
qm stop $VMID
qm destroy $VMID --purge
fi
set -e
qm create ${VMID} \
--name fcos-${VMID} \
--storage ${POOL} \
--bios ovmf \
--scsihw virtio-scsi-pci \
--scsi0 ${POOL}:vm-${VMID}-disk-1 \
--efidisk0 $POOL:1 \
--net0 virtio,bridge=vmbr10 \
--memory 8192 \
--serial0 socket \
--vga serial0 \
--args "-fw_cfg name=opt/com.coreos/config,file=$IGN"
qm importdisk ${VMID} $IMG $POOL --format=qcow2
qm showcmd ${VMID}
qm start ${VMID}
To anyone following this issue: I bumped into this tonight, a wrapper to convert cloud-init to Ignition (using VMs, not LXC, though).
FWIW, it looks like the wrapper is pretty hard-coded.
That, plus the wrapper works fine with FCOS 32.20201018, but less so with more recent versions (I had issues with 34.20211031.3.0 upwards). The custom embedded
Looking at the Proxmox API documentation, it looks like you can set the args value in a VM's config: https://pve.proxmox.com/pve-docs/api-viewer/index.html#/nodes/{node}/qemu/{vmid}/config
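To make that concrete, here's a hedged sketch, not verified against a live cluster: per the API viewer page above, the qemu args option can be set through the Proxmox API instead of the qm CLI. The node name, VM ID, and Ignition snippet path below are placeholders.

```shell
# Placeholders: adjust node name, VM ID, and Ignition file path to your setup.
NODE=pve
VMID=101
IGN=/mnt/pve/myserver/snippets/autologin.ign
FWCFG="-fw_cfg name=opt/com.coreos/config,file=${IGN}"

# On the Proxmox host, pvesh wraps the same REST endpoint:
#   pvesh set /nodes/${NODE}/qemu/${VMID}/config --args "${FWCFG}"
# Remotely, the endpoint accepts a PUT with an API token:
#   curl -k -X PUT \
#     -H "Authorization: PVEAPIToken=user@pam!mytoken=SECRET" \
#     "https://${NODE}.example.com:8006/api2/json/nodes/${NODE}/qemu/${VMID}/config" \
#     --data-urlencode "args=${FWCFG}"
echo "${FWCFG}"
```

The value passed as args is the same -fw_cfg string used in the script earlier in this thread.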
I tried the wrapper over the weekend with FCOS 35 and it worked great, except for the Geco motd service, which failed, but I just disabled that.
Initially I posted here asking whether anyone had gotten Fedora CoreOS 37 to work using the Geco code. There were errors with the motd and issue files, as well as the QEMU guest service. However, a posting on the Proxmox forum gave some idea of what was required to remediate it (though it did not speak to the /etc/issue concern). I was able to work out that changing the fcos-base-tmplt.yaml file in the Geco GitHub repo allows Fedora CoreOS 37 to run properly under Proxmox. Below are the changes I had to make to get a functional Fedora CoreOS 37 environment within Proxmox.
I hope this information is of use to people here; I have also posted it in the Proxmox forum. For your reference: Stuart, N3GWG
It seems like the ibmcloud platform is implemented the same way as described by @alcir. They are using cloud-init user-data:
Thanks for pointing this out, SimonErm! I went looking, and the KubeVirt and Nutanix platform editions also use the same ConfigDrive code. I suppose it wouldn't be too hard to add Proxmox as a provider by just copying e.g. the ibmcloud module. I think it would be a good idea to add a Proxmox provider to Ignition, so that we aren't relying on another provider not changing their metadata approach!

The frustrating thing about Proxmox, though, is that there isn't a particularly sane, secure (i.e., respecting Proxmox RBAC), out-of-the-box way to substitute the generated cloud-init metadata with custom metadata. This is something I have been investigating for a while. Basically, with e.g. a Fedora CoreOS IBM Cloud image (or a theoretical future Proxmox one that works the same way), you're substituting the YAML cloud-init user-data blob with a JSON blob by setting a custom user-data file (the cicustom option).

I'd be totally fine with that limitation, because I intend to drive this via the API anyway, but the cicustom data blob needs to be stored as a file on one of the Proxmox server's "Storage" datastores that accepts "Snippets," and there's no way through the API or web UI to upload such a "Snippet," which completely precludes the use of proper RBAC and Proxmox's authentication -- an API token alone is no longer enough to add an Ignition file, and you can't use SSO to do it either. There are workarounds, but they all boil down to either "make and upload your own cloud-init ISO," which requires external tooling, or "host an upload server and set the Proxmox host to look for Snippets there," which requires bending RBAC and bypassing authentication much more than I'm comfortable with.

There's been a Proxmox Bugzilla entry about Snippet management via the API, but it was posted in 2019 and is still marked "undecided," so I'm not confident that this will be added any time soon, nor am I hopeful about official Butane/Ignition support in the UI...
And I sort of wonder how much sense it would make to add a Proxmox module to Ignition if it's not properly fleshed out on their end either. I hope it would, because this would make me a lot more confident than using the IBM Cloud or KubeVirt images and crossing my fingers that those APIs don't change.

With all of that said, I found this lovely project recently. I haven't tried it out yet, and will probably play with it as soon as I finish typing this comment... but it enables remote, API-based access and control of a Proxmox server, and has built-in custom cloud-init/config-drive ISO management that looks like it should solve the bootstrapping issue of "how do I get my Ignition data on the server?"
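For reference, the cicustom substitution discussed in this thread can be sketched like this. It's a hedged sketch: the storage name "local" and the snippet filename are placeholders, and the file must already exist on a datastore with the "Snippets" content type enabled, which is exactly the upload problem described above.

```shell
# Placeholders: VM ID, storage name, and snippet filename are examples only.
VMID=101
SNIPPET="local:snippets/ignition-wrapper.yaml"

# On the Proxmox host, point the VM's cloud-init at the custom user-data:
#   qm set ${VMID} --cicustom "user=${SNIPPET}"
echo "user=${SNIPPET}"
```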
Gentlemen, please do forgive my ignorance; but why would you want to use this code when you have a project like this: robinelfrink/ansible-proxmox-api (Ansible Proxmox API, on github.com)? What are the advantages of the Go application over using an Ansible module such as this? Granted, I am not sure it is part of the community collections yet, but that is potentially easily rectified.

Disclaimers:
1) I do work for Red Hat.
2) This is my personal opinion and my personal question.
3) If Red Hat as a corporation happens to be in agreement with my thinking, it is purely coincidental.
PR to implement Afterburn support: coreos/afterburn#1023
This was discussed in today's community meeting.
That's great news! @jlebon, if I wanted to try and start this by, e.g., duplicating the
That's great that you're interested! Thank you!

We now have two "levels" of platform enablement: an emerging path and the full path. The former is less work, but no disk images are built or published (see #1569 for more details). In that case, users are expected to take e.g. the QEMU image and "restamp" the platform ID within it to the target ID.

We have issue templates to guide contributors. Here's the emerging enablement checklist, and here's the full one. So it mostly comes down to how much work you're willing to do. :) What I would suggest is to go with the emerging platform checklist to start. An emerging platform can always be promoted to a full one later, and you or anyone else interested can pursue that.

All that said, either way the next required step is Ignition support, which, yes, you can base off of other existing config-drive-based providers.
In order to implement support for a new cloud platform in Fedora CoreOS, we need to know several things about the platform. Please try to answer as many questions as you can.
Proxmox Virtual Environment is an open-source server virtualization management platform. It is a Debian-based Linux distribution with a modified Ubuntu LTS kernel and allows deployment and management of virtual machines (KVM) and containers (LXC).
Proxmox is used by various VPS providers, and it is also adopted in home labs and small-business private servers.
Proxmox VE, PVE for short.
In an environment where you own the hypervisor, you could easily use the installation ISO and perform the install process using a console. This requires some sort of manual deployment. In a private environment you are in control of the network, and you can use a DHCP server, for instance.
On the contrary, in an environment that is not under your control, cloud-init metadata is usually provided by exposing a /dev/sr0 device that, when mounted, contains three files:

meta-data
network-config
user-data

The first one contains the instance-id, the second one contains the network configuration (see below), and the last one can contain the SSH key of the root user.

If using a raw image and userdata is not provided, the VM is simply unconfigured: no network address if there is no DHCP (pretty common in a VPS provider environment), no default user.
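For illustration, the three files can be read like this once the drive is mounted. This is a runnable sketch: a temporary directory with fake contents stands in for /dev/sr0, since the real drive only exists inside a Proxmox guest.

```shell
# Inside a real guest you would do: mkdir -p "$MNT" && mount -o ro /dev/sr0 "$MNT"
MNT=$(mktemp -d)
echo "instance-id: vm-101" > "$MNT/meta-data"        # fake content for the demo
printf 'version: 1\nconfig: []\n' > "$MNT/network-config"
: > "$MNT/user-data"                                 # may hold the root SSH key

# Read the instance-id out of meta-data, as cloud-init would:
INSTANCE_ID=$(awk -F': ' '/^instance-id:/ {print $2}' "$MNT/meta-data")
echo "$INSTANCE_ID"   # → vm-101
rm -rf "$MNT"
```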
Yes, as said before, the metadata is contained in the files on that drive.
The point is this: what if no DHCP is available? However, the network config can be retrieved from the network-config file, in this form:

It is contained in the user-data file:

No
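The inline example appears to have been lost from this comment. For orientation only, a typical cloud-init network-config in version-1 format (the format Proxmox generates) looks roughly like this; the interface name and addresses are placeholders:

```yaml
version: 1
config:
  - type: physical
    name: eth0
    subnets:
      - type: static
        address: 192.0.2.10/24
        gateway: 192.0.2.1
  - type: nameserver
    address:
      - 192.0.2.1
```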
I'm not quite sure (by the way, no mechanism is required); it uses KVM and it can rely on the QEMU guest agent.
It is not required; by the way, it can use the QEMU guest agent.
KVM supports: QCOW2, RAW, VHD, VMDK
https://pve.proxmox.com/wiki/Cloud-Init_Support