Releases: scality/metalk8s
MetalK8s 2.5.0
2.5.0 Release Notes
Documentation
https://metal-k8s.readthedocs.io/en/2.5.0/
Upgrade Notes
Please follow the upgrade instructions here.
Customizations done on MetalK8s services or deployments, such as the number of replicas for specific services like Prometheus, Alertmanager and Grafana, will be lost after upgrading to 2.5.0. This issue is fixed starting with this release; see the instructions here.
Warning: Username and password customizations for K8s and Grafana will reset to default values once you upgrade to 2.5.0 or higher versions.
Changelog
Full list of closed issues is available here.
What's new
- MetalK8s 2.5.0 is now based on Kubernetes 1.16.8
- Rebrand of the MetalK8s UI
- Kubernetes API and Grafana are now configured to use OIDC, and Dex is deployed to serve as their trusted Identity Provider (see here to manage Dex)
- A new framework has been added to manage Services configuration, ensuring that node reboots, upgrades, downgrades or restore operations do not lead to loss of configuration (more details in the documentation)
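As a purely illustrative sketch of what managing Dex users can look like: Dex supports static password entries in its configuration, so a local-user definition of roughly the following shape is typically involved. The ConfigMap name, namespace, and all values below are assumptions, not MetalK8s defaults; follow the linked documentation for the actual procedure.

```yaml
# Illustrative only: a Dex staticPasswords entry of the kind used for
# local users. ConfigMap name/namespace are assumptions; all values
# are placeholders. Consult the MetalK8s documentation before editing.
apiVersion: v1
kind: ConfigMap
metadata:
  name: metalk8s-dex-config        # assumed name
  namespace: metalk8s-auth         # assumed namespace
data:
  config.yaml: |-
    staticPasswords:
      - email: "admin@example.com"
        hash: "$2y$10$..."          # bcrypt hash of the password (placeholder)
        username: "admin"
        userID: "00000000-0000-0000-0000-000000000000"
```

The `staticPasswords` block itself follows Dex's upstream configuration schema; only the surrounding ConfigMap layout is specific to how MetalK8s deploys Dex.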
MetalK8s 2.4.3
2.4.3 Release Notes
2.4 is now deprecated and will no longer be supported after July 2020.
It is recommended to upgrade to 2.5.0 and higher versions as soon as possible.
Documentation
https://metal-k8s.readthedocs.io/en/2.4.3
Upgrade Notes
Please follow the upgrade instructions here.
Customizations done on MetalK8s services or deployments, such as the number of replicas
for a specific service or alert rules in AlertManager, will be lost after
upgrading to 2.4.3. This issue is solved starting from the 2.5.0 release.
Changelog since 2.4.2
Full list of closed issues is available here.
What's new
- MetalK8s 2.4.3 is now based on Kubernetes 1.15.11.
- Revamped solution lifecycle and environments, as well as the associated tooling.
More information is available in the documentation.
MetalK8s 2.4.2
This is a maintenance release, which features:
- prometheus-adapter is deployed, enabling the use of kubectl top, among others
- a host-local nginx on every node to provide HA access to kube-apiserver
- documentation access from the UI
- safer etcd expansions
MetalK8s 2.4.1
This is a maintenance release, which features:
- Ability to add labels when creating Volumes from the UI
- Fix a bug where Yum database gets locked during deployment of a new Node
(including bootstrap)
MetalK8s 2.4.0
This is the first GA release for MetalK8s 2.x, which installs a cluster
using Kubernetes 1.15.
Here is a highlight of some of its features:
- Support for CentOS 7 and RHEL 7
- Offline installation (except core OS repositories)
- Infrastructure management based on SaltStack
- Management web UI
- Standard Nodes and Pods monitoring through Prometheus, Alertmanager and Grafana
- Declarative local storage provisioning built on top of PersistentVolumes
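The declarative provisioning mentioned above revolves around a Volume custom resource: creating one asks MetalK8s to prepare a backing device on the target node and expose it as a PersistentVolume. A minimal sketch follows; the exact field names and API version should be checked against the MetalK8s documentation.

```yaml
# Minimal Volume sketch (illustrative; the authoritative schema is in
# the MetalK8s documentation). MetalK8s provisions the backing device
# on the named node and creates a matching PersistentVolume.
apiVersion: storage.metalk8s.scality.com/v1alpha1
kind: Volume
metadata:
  name: example-volume
spec:
  nodeName: node-1                    # node that will host the storage
  storageClassName: metalk8s-prometheus
  sparseLoopDevice:                   # one backing type; a raw block device is the other common option
    size: 10Gi
```

Applying such a manifest with kubectl is the declarative counterpart of creating the Volume from the management UI.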
MetalK8s 2.4.0-beta2
Objectives
- Use status.conditions for Volume CRs in place of status.phase
- Add persistent storage for Prometheus and Alertmanager
- Set LimitNOFILE for the containerd service
- Add CoreDNS and node-exporter dashboards
- Add documentation PDFs in the ISO
- Add a template field to the PersistentVolume definition
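As a hypothetical illustration of the template objective above: the idea is to let a Volume propagate metadata, such as labels, onto the PersistentVolume it creates. The shape of the field below is an assumption for illustration and should be verified against the documentation.

```yaml
# Hypothetical sketch of a Volume carrying a template block whose
# metadata (labels here) would be applied to the resulting
# PersistentVolume. Field layout is an assumption, not a reference.
apiVersion: storage.metalk8s.scality.com/v1alpha1
kind: Volume
metadata:
  name: labelled-volume
spec:
  nodeName: node-1
  storageClassName: metalk8s-prometheus
  template:                # assumed shape of the new field
    metadata:
      labels:
        app.kubernetes.io/part-of: example
  sparseLoopDevice:
    size: 20Gi
```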
MetalK8s 2.4.0-beta1
Objectives
- Support for automated in-place upgrades in the future
- TLS encryption for SaltAPI
MetalK8s 2.4.0-alpha1
Objectives
- First iteration of volume management
- Tooling to manage solutions
- Deployment of an Ingress controller
- Various bug fixes
MetalK8s 1.1.0
This release primarily upgrades the Kubernetes version to 1.11.10.
Other changes include:
- The ability to override Helm chart configurations ('values') of several charts deployed with MetalK8s
- More platform checks during deployment
- Updates of various services and dashboards deployed as part of the cluster
- Increased memory limits for etcd on low-resource servers
- Various bug-fixes
See the Changes section in the reference guide, or ChangeLog.rst for an exhaustive listing.
MetalK8s 2.0.0-alpha3
Objectives
- Offline platform upgrade
- Cluster monitoring
- UI: Node management
- UI: Cluster status
- Start 'offline solution lifecycle'
- Start 'basic log collection'