HackWeek 20: Uyuni SUSE Manager containerization project
- This is the right place to record progress and plans
- Rocket.Chat for discussions
- Official HackWeek page for announcements
- A concluding Lightning Talk slot was booked for presenting the results
- have a lot of fun!
- learn about container building (buildah)
- learn about container orchestration (podman with Docker Compose support, k3s)
- learn about containerized application deployment (helm)
- learn about roadblocks in delivering Uyuni as containers
- all development happens on the Uyuni `containers` branch/PR
- all new files are to be added in the `containers/` directory, for now
- we use Dockerfiles (built with `docker build` or `buildah`, see the build sketch below), locally for now
- we explicitly do not care about traditional clients, at least for now
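As a point of reference, a local build could look like this (a sketch only; the `containers/proxy/` path and the image tag are assumptions, not settled conventions):

```sh
# build the same Dockerfile with either tool
docker build -t uyuni/proxy:hackweek containers/proxy/

# buildah's Dockerfile mode ("bud" = build-using-dockerfile)
buildah bud -t uyuni/proxy:hackweek containers/proxy/
```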
- create a "fat" container with everything needed for a Proxy
- 🟢 start from https://github.com/SUSE/spacewalk/wiki/Proxy-container
- 🟢 find out which directories need to be mounted externally
- 🟢 find out how best to specify configuration parameters (environment variables? answer files?); a run sketch follows this list
- 🟢 pass the machine-id as parameter
- 🟢 add a startup (Salt) script. Figure out registration to the Server, activation of Proxy functionality (`configure-proxy.sh`), certs
- 🟢 add a Server-side Python script to prepare configuration to onboard a Proxy
- 🟢 ensure the Proxy works (manual test)
- 🟡 ensure the Proxy container can be killed and restarted. With the same mount points and parameters, it should come back to full functionality ("proto-HA")
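To make the mount and restart requirements concrete, a run could look roughly like this (a sketch under assumptions: the environment variable name and host paths are invented, the container paths come from the directory lists further down this page):

```sh
# recreating the container with the same mounts and parameters should
# bring the Proxy back to full functionality ("proto-HA")
podman run -d --name uyuni-proxy \
    -e UYUNI_SERVER=server.example.com \
    -v /srv/proxy/machine-id:/etc/machine-id:ro \
    -v /srv/proxy/squid-cache:/var/cache/squid \
    -v /srv/proxy/rhn-spool:/var/spool/rhn-proxy \
    -v /srv/proxy/pub:/srv/www/htdocs/pub \
    uyuni/proxy:hackweek
```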
- try to slim down the Proxy container
- ⚪ remove traditional stack processes/packages, if possible
- ⚪ split out a first component (e.g. salt-broker) into another container. In parallel:
- ⚪ try orchestration with Podman (see the sketch below)
- ⚪ try orchestration with k3s/Helm (see the sketch below)
- ⚪ try to skip packaging for one of the packages (e.g. salt-broker) - sources straight from git to image
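Both orchestration routes could be compared with short command sequences like these (sketches only; the image names, chart path, and published ports are assumptions):

```sh
# option 1: a podman pod holding the split containers
podman pod create --name uyuni-proxy-pod -p 443:443 -p 4505:4505 -p 4506:4506
podman run -d --pod uyuni-proxy-pod --name proxy-core uyuni/proxy-core:hackweek
podman run -d --pod uyuni-proxy-pod --name salt-broker uyuni/salt-broker:hackweek

# option 2: a Helm chart deployed onto k3s
helm install uyuni-proxy ./containers/helm/uyuni-proxy
```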
- create a "fat" container with everything needed for a Server
- ⚪ start from https://gitlab.suse.de/mbologna/sumadocker/-/tree/saltcontainer
- ⚪ find out which directories need to be mounted externally. Starting point: HA paper
- ⚪ add a startup (Python?) script; a rough entrypoint sketch follows this list
- ⚪ try until it breaks
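A rough idea of what such an entrypoint could do, written in shell for brevity (the environment variable name is invented, and `spacewalk-service` relies on systemd, so a real container would need a replacement for the last line):

```sh
#!/bin/sh
# hypothetical Server entrypoint: render config from the environment,
# then keep the main service in the foreground so the container stays up
set -e

# point rhn.conf at the (possibly external) database
sed -i "s|^db_host *=.*|db_host = ${UYUNI_DB_HOST:-localhost}|" /etc/rhn/rhn.conf

# placeholder: spacewalk-service wraps systemd units and would need to be
# replaced by foreground process supervision inside a container
exec /usr/sbin/spacewalk-service start
```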
- try to slim down the Server container
- ⚪ carve PostgreSQL out. Try Postgres-in-containers or outside of them (see the sketch after this list)
- ⚪ disable Cobbler. What needs to be done in order to make Cobbler "optional"?
- ⚪ disable or remove the traditional stack
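For the in-container variant, the official postgres image could be a starting point (a sketch; database name, credentials, and version are placeholders to be aligned with what the Server expects):

```sh
# PostgreSQL in its own container, data persisted on a named volume
podman run -d --name uyuni-db \
    -e POSTGRES_DB=susemanager \
    -e POSTGRES_USER=spacewalk \
    -e POSTGRES_PASSWORD=spacewalk \
    -v uyuni-db-data:/var/lib/postgresql/data \
    docker.io/library/postgres:13
```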
- other research
- ⚪ we will need a solution for command-line tools. Would it be possible to build a UI around them like Rancher does?
- Dockerfile syntax, best practices
- K8s init containers
- How to preseed a minion with an accepted key (sketched below)
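The standard Salt layout allows preseeding along these lines (a sketch; the minion id `proxy-minion` is an example):

```sh
# generate a keypair for the future minion
mkdir -p /tmp/keys
salt-key --gen-keys=proxy-minion --gen-keys-dir=/tmp/keys

# install it on the minion (baked into the image or mounted)
install -m 400 /tmp/keys/proxy-minion.pem /etc/salt/pki/minion/minion.pem
install -m 644 /tmp/keys/proxy-minion.pub /etc/salt/pki/minion/minion.pub

# on the master: a public key dropped here counts as already accepted
cp /tmp/keys/proxy-minion.pub /etc/salt/pki/master/minions/proxy-minion
```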
- mounted permanent directories require correct permissions; the setup can fix them from inside the container. When a mounted directory should have content, we need to create it or copy it over at startup. A best practice is still needed (one possible pattern is sketched below)
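One possible pattern, as an entrypoint fragment (the skeleton path `/usr/share/uyuni/pub-skel` is invented for illustration; `wwwrun:www` is the usual Apache user and group on SUSE):

```sh
# seed a mounted directory on first start, then fix ownership
if [ -z "$(ls -A /srv/www/htdocs/pub/ 2>/dev/null)" ]; then
    cp -a /usr/share/uyuni/pub-skel/. /srv/www/htdocs/pub/
fi
chown -R wwwrun:www /srv/www/htdocs/pub/
```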
- configuration files changed by `configure-proxy.sh`:
/etc/apache2/conf.d/cobbler-proxy.conf
/etc/apache2/vhosts.d/ssl.conf
/etc/jabberd/c2s.xml
/etc/jabberd/router-users.xml
/etc/jabberd/router.xml
/etc/jabberd/s2s.xml
/etc/jabberd/sm.xml
/etc/squid/squid.conf
/etc/ssh/sshd_config
/etc/sysconfig/rhn/up2date
/etc/rhn/rhn.conf
- configuration files related to the `mgrsshtunnel` user (a build-time sketch follows this list):
/etc/group
/etc/passwd
/etc/shadow
/var/spacewalk/mgrsshtunnel/
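Instead of mounting the account databases, the user could be created at image build time so the `/etc/passwd`, `/etc/group`, and `/etc/shadow` entries are baked in (a sketch; home directory and shell are assumptions):

```sh
# create the tunnel user during the image build
useradd --system \
        --home-dir /var/spacewalk/mgrsshtunnel \
        --create-home \
        --shell /bin/bash \
        mgrsshtunnel
```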
- Key material
/etc/salt/pki/minion/minion.pem
/etc/salt/pki/minion/minion.pub
/etc/ssh/ssh_host_*_key.*
/etc/apache2/ssl.crt/server.crt
/etc/apache2/ssl.csr/server.csr
/etc/apache2/ssl.key/server.key
/etc/pki/spacewalk/jabberd/server.pem
/etc/jabberd/server.pem
/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT
/var/lib/ca-certificates/*
/var/spacewalk/gpgdir
- Identifiers
/etc/salt/minion_id
/etc/machine-id
/etc/sysconfig/rhn/systemid
/var/lib/dbus/machine-id
- "permanent" directories
/srv/www/htdocs/pub/
/var/cache/squid/
/var/spool/rhn-proxy
/var/log/
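Collecting the identifiers, key material, and permanent directories above into one host-side state directory would let a recreated container keep its identity (a sketch; the layout under `/srv/uyuni-proxy-state` is invented):

```sh
STATE=/srv/uyuni-proxy-state
podman run -d --name uyuni-proxy \
    -v $STATE/machine-id:/etc/machine-id:ro \
    -v $STATE/salt-pki:/etc/salt/pki/minion \
    -v $STATE/apache-ssl-key:/etc/apache2/ssl.key \
    -v $STATE/apache-ssl-crt:/etc/apache2/ssl.crt \
    -v $STATE/squid-cache:/var/cache/squid \
    -v $STATE/rhn-spool:/var/spool/rhn-proxy \
    uyuni/proxy:hackweek
```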
- what does configure-proxy.sh do?
- it requests, via Salt events received by the `fetch-certificate` script, the `/etc/sysconfig/rhn/systemid` file. `systemid` is generated on the Server side by Java code and essentially contains the system ID (duh) and other basic system info, with a signature (example). It should only be useful for traditional clients
- performs "proxy activation" via XMLRPC calls (the rhn-proxy-activate script)
- installs the spacewalk-proxy-management package
- configures jabberd
- changes Squid config files
- changes Cobbler config files
- optionally generates SSL certs
- configures SSL certs for Apache and Jabberd
- configures (and optionally generates) an SSH key for salt-ssh proxying. Configures sshd via the mgr-proxy-ssh-push-init script
- uninteresting parts: optionally setting up a configuration channel for Proxy configuration, opening firewall ports, activating SLP, enabling services (code)
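For reference, `configure-proxy.sh` can run unattended with an answer file, which is probably how a container would drive it (a sketch; verify the exact keys against the script version in use):

```sh
# answer file keys follow configure-proxy.sh conventions; values are examples
cat > /root/proxy-answers.txt <<'EOF'
RHN_PARENT=server.example.com
TRACEBACK_EMAIL=admin@example.com
USE_SSL=1
EOF

configure-proxy.sh --non-interactive --answer-file=/root/proxy-answers.txt
```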
- does it make sense to send traceback emails? Specifically from the Proxy?