HackWeek 20: Uyuni SUSE Manager containerization project
Silvio Moioli edited this page Mar 20, 2021 · 22 revisions
- This is the right place to record progress and plans
- Rocket.Chat for discussions
- Official HackWeek page for announcements
- A concluding Lightning Talk Slot was booked for a presentation of results
- have a lot of fun!
- learn about container building (buildah)
- learn about container orchestration (podman with Docker Compose support, k3s)
- learn about containerized application deployment (helm)
- learn about roadblocks in delivering Uyuni as containers
- all development happens on the Uyuni `containers` branch/PR
- all new files are to be added in the `containers/` directory, for now
- we use Dockerfiles (built with `docker build` or `buildah`), locally for now
- we explicitly do not care about traditional clients, at least for now
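The ground rules above call for Dockerfiles built locally. A minimal sketch of what a Proxy Dockerfile could look like — the base image, package names, and entrypoint below are assumptions, not project decisions:

```dockerfile
# Hypothetical sketch: base image, package names and entrypoint are
# assumptions, not settled project decisions.
FROM registry.suse.com/suse/sle15:latest

# Install the Proxy stack (package list to be verified)
RUN zypper --non-interactive install \
        squid \
        spacewalk-proxy-management

# Directories expected to be mounted externally
VOLUME ["/etc/uyuni", "/var/cache/squid", "/var/log"]

# Startup script handling registration/configuration (to be written)
COPY proxy-entrypoint.py /usr/local/bin/
ENTRYPOINT ["/usr/local/bin/proxy-entrypoint.py"]
```

Such a file could be built locally with `docker build -t uyuni/proxy .` or `buildah bud -t uyuni/proxy .`.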
- create a "fat" container with everything needed for a Proxy
- start from https://github.com/SUSE/spacewalk/wiki/Proxy-container
- find out which directories need to be mounted externally
  - surely: one for configuration/answer files/certs, one for the Squid cache (`/var/cache/squid`), one for logs
  - we need to see if there are others and why
  - for any directory we decide not to mount for a given reason, document it in the Dockerfile
- find out how best to specify configuration parameters (environment variables? answer files?)
- pass the machine-id as parameter
- add a startup (Python?) script. Figure out registration to the Server, activation of Proxy functionality (`configure-proxy.sh`), certs
- ensure the Proxy works (manual test)
- ensure the Proxy container can be killed and restarted. With the same mount points and parameters, it should come back to full functionality ("proto-HA")
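The mount and restart requirements above can be sketched as a small helper that assembles the `podman run` invocation. All host paths, the image name, and the environment variable names are hypothetical placeholders, not settled choices:

```python
# Sketch of running the "fat" Proxy container. Host paths, the image name
# and the environment variable names are hypothetical placeholders.

# Directories that must survive container restarts ("proto-HA"):
# host path -> container path
MOUNTS = {
    "/srv/uyuni-proxy/config": "/etc/uyuni",       # config/answer files/certs
    "/srv/uyuni-proxy/squid": "/var/cache/squid",  # Squid cache
    "/srv/uyuni-proxy/logs": "/var/log",           # logs
}

def proxy_run_command(machine_id, server="server.example.com",
                      image="uyuni/proxy:latest"):
    """Build the podman invocation; parameters are passed as env variables."""
    cmd = ["podman", "run", "-d", "--name", "uyuni-proxy"]
    for host_dir, container_dir in MOUNTS.items():
        cmd += ["-v", f"{host_dir}:{container_dir}"]
    cmd += ["-e", f"UYUNI_SERVER={server}",
            "-e", f"MACHINE_ID={machine_id}"]  # machine-id passed as parameter
    cmd.append(image)
    return cmd

# Print the command instead of executing it, so the sketch runs without podman
print(" ".join(proxy_run_command("0123abcd")))
```

Killing `uyuni-proxy` and re-running the same command with identical mounts and parameters should bring it back to full functionality, which is what the "proto-HA" check above is meant to verify.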
- try to slim down the Proxy container
- remove traditional stack processes/packages, if possible
- split out a first component (e.g. salt-broker) into another container. In parallel:
- try orchestration with Podman
- try orchestration with k3s/Helm
- try to skip packaging for one of the packages (e.g. salt-broker): sources go straight from git to the image
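Since podman supports the Docker Compose format, the salt-broker split could be prototyped with a compose file along these lines. Service names, image names, and mount paths are assumptions; only the Salt ports (4505/4506) are standard:

```yaml
# Hypothetical sketch: Proxy with salt-broker split into its own container.
# Service names, image names and mount paths are assumptions.
version: "3"
services:
  proxy:
    image: uyuni/proxy:latest
    volumes:
      - /srv/uyuni-proxy/config:/etc/uyuni
      - /srv/uyuni-proxy/squid:/var/cache/squid
      - /srv/uyuni-proxy/logs:/var/log
    environment:
      - UYUNI_SERVER=server.example.com
  salt-broker:
    image: uyuni/salt-broker:latest
    ports:
      - "4505:4505"   # Salt publish port
      - "4506:4506"   # Salt return port
```

The same topology could then be expressed as a Helm chart for the k3s experiment.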
- create a "fat" container with everything needed for a Server
- start from https://gitlab.suse.de/mbologna/sumadocker/-/tree/saltcontainer
- find out which directories need to be mounted externally. Starting point: HA paper
- add a startup (Python?) script
- try until it breaks
- try to slim down the Server container
- carve PostgreSQL out. Try Postgres-in-containers or outside of them
- disable Cobbler. What needs to be done in order to make Cobbler "optional"?
- disable or remove the traditional stack
- other research
- we will need a solution for command-line tools. Would it be possible to build a UI around them, like Rancher does?
- Dockerfile syntax, best practices
- what does configure-proxy.sh do?
- it requests, via Salt events received by the `fetch-certificate` script, the `/etc/sysconfig/rhn/systemid` file. `systemid` is generated on the Server side by Java code and essentially contains the system ID (duh) and other basic system info, with a signature (example). It should only be useful for traditional clients
- performs "proxy activation" via XMLRPC calls (the rhn-proxy-activate script)
- installs the spacewalk-proxy-management package
- configures jabberd
- changes Squid config files
- changes Cobbler config files
- optionally generates SSL certs
- configures SSL certs for Apache and Jabberd
- configures (and optionally generates) an SSH key for salt-ssh proxying. Configures sshd via the mgr-proxy-ssh-push-init script
- uninteresting parts: optionally setting up a configuration channel for Proxy configuration, opening firewall ports, activating SLP, enabling services (code)
- does it make sense to send traceback emails? Specifically from the Proxy?