---
title: Announcing KubeDB v2025.10.17
date: "2025-10-27"
weight: 15
authors:
- Arnob Kumar Saha
tags:
- autoscaler
- backup
- clickhouse
- cloud-native
- database
- distributed
- kafka
- kubedb
- kubernetes
- mariadb
- postgres
- recommendation
- redis
- restore
- security
- tls
- valkey
---

KubeDB **v2025.10.17** introduces enhancements like rack awareness for Kafka, distributed autoscaling and advanced backup/restore for MariaDB, health check improvements for Postgres, ACL support for Redis/Valkey, and autoscaling with recommendations for ClickHouse. This release focuses on improving fault tolerance, security, scalability, and recovery capabilities for databases in Kubernetes.

## Key Changes
- **Rack Awareness for Kafka**: Added support for rack-aware replica placement to enhance fault tolerance.
- **Distributed MariaDB Enhancements**: Introduced autoscaling support and KubeStash (Stash 2.0) backup/restore, with a Restic driver for continuous archiving and new replication strategies for PITR.
- **Postgres Improvements**: Updated health checks to avoid unnecessary LSN advancement and fixed standby join issues.
- **Redis/Valkey ACL**: Added Access Control List (ACL) support for fine-grained user permissions, plus new Redis versions.
- **ClickHouse Features**: Introduced autoscaling for compute and storage, along with a recommendation engine for version updates, TLS, and auth rotations.

## ClickHouse

This release introduces Recommendations and the AutoScaling feature for ClickHouse.

### AutoScaling

This release introduces the ClickHouseAutoscaler — a Kubernetes Custom Resource Definition (CRD) — that enables automatic compute (CPU/memory) and storage autoscaling for ClickHouse. Here’s a sample manifest to deploy ClickHouseAutoscaler for a KubeDB-managed ClickHouse:

```yaml
apiVersion: autoscaling.kubedb.com/v1alpha1
kind: ClickHouseAutoscaler
metadata:
  name: ch-compute-autoscale
  namespace: demo
spec:
  databaseRef:
    name: clickhouse-prod
  compute:
    clickhouse:
      trigger: "On"
      podLifeTimeThreshold: 5m
      resourceDiffPercentage: 20
      minAllowed:
        cpu: 1
        memory: 2Gi
      maxAllowed:
        cpu: 2
        memory: 3Gi
      controlledResources: ["cpu", "memory"]
      containerControlledValues: "RequestsAndLimits"
```
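Storage autoscaling can be declared in the same CR alongside (or instead of) the compute section. Below is a minimal sketch; the `usageThreshold`, `scalingThreshold`, and `expansionMode` fields follow the conventions of other KubeDB autoscalers and should be treated as illustrative, not confirmed ClickHouse field names:

```yaml
apiVersion: autoscaling.kubedb.com/v1alpha1
kind: ClickHouseAutoscaler
metadata:
  name: ch-storage-autoscale
  namespace: demo
spec:
  databaseRef:
    name: clickhouse-prod
  storage:
    clickhouse:
      trigger: "On"            # enable storage autoscaling
      usageThreshold: 60       # expand once disk usage crosses 60% (assumed field)
      scalingThreshold: 50     # grow capacity by 50% per expansion (assumed field)
      expansionMode: "Online"  # online expansion needs a CSI driver that supports it
```

Note that volume expansion of any kind requires a StorageClass with `allowVolumeExpansion: true`.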

### Recommendation Engine

KubeDB now supports recommendations for ClickHouse, including Version Update, TLS Certificate Rotation, and Authentication Secret Rotation. Auth rotation recommendations are generated if `.spec.authSecret.rotateAfter` is set, based on the following conditions:

- AuthSecret lifespan > 1 month with < 1 month remaining.
- AuthSecret lifespan < 1 month with < 1/3 of the lifespan remaining.

Example recommendation:

```yaml
apiVersion: supervisor.appscode.com/v1alpha1
kind: Recommendation
metadata:
  annotations:
    kubedb.com/recommendation-for-version: 24.4.1
  creationTimestamp: "2025-10-20T09:40:21Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: ch
    app.kubernetes.io/managed-by: kubedb.com
    app.kubernetes.io/type: version-update
    kubedb.com/version-update-recommendation-type: major-minor
  name: ch-x-clickhouse-x-update-version-lr21eg
  namespace: demo
  resourceVersion: "192088"
  uid: 286152eb-ba5c-45d8-bc54-6ef0a7255362
spec:
  backoffLimit: 10
  description: Latest Major/Minor version is available. Recommending version Update
    from 24.4.1 to 25.7.1.
  operation:
    apiVersion: ops.kubedb.com/v1alpha1
    kind: ClickHouseOpsRequest
    metadata:
      name: update-version
      namespace: demo
    spec:
      databaseRef:
        name: ch
      type: UpdateVersion
      updateVersion:
        targetVersion: 25.7.1
    status: {}
  recommender:
    name: kubedb-ops-manager
  requireExplicitApproval: true
  rules:
    failed: has(self.status) && has(self.status.phase) && self.status.phase == 'Failed'
    inProgress: has(self.status) && has(self.status.phase) && self.status.phase ==
      'Progressing'
    success: has(self.status) && has(self.status.phase) && self.status.phase == 'Successful'
  target:
    apiGroup: kubedb.com
    kind: ClickHouse
    name: ch
  vulnerabilityReport:
    message: no matches for kind "ImageScanReport" in version "scanner.appscode.com/v1alpha1"
    status: Failure
status:
  approvalStatus: Pending
  failedAttempt: 0
  outdated: false
  parallelism: Namespace
  phase: Pending
  reason: WaitingForApproval
```

## Kafka

This release introduces rack awareness support for Kafka. A new `brokerRack` field has been added to the Kafka CRD to enable rack-aware replica placement using a specified Kubernetes topology key, such as `topology.kubernetes.io/zone`. When enabled, a default `replica.selector.class` configuration is automatically applied to distribute replicas across different racks or zones for improved fault tolerance and high availability.

```yaml
apiVersion: kubedb.com/v1alpha2
kind: Kafka
metadata:
  name: kafka-prod
  namespace: demo
spec:
  version: 4.0.0
  brokerRack:
    topologyKey: topology.kubernetes.io/zone
  ...
```
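For reference, rack awareness in Kafka itself is driven by two standard broker properties. For a broker scheduled in zone `us-east-1a`, the effective broker configuration would look roughly like the snippet below (a sketch: the zone value depends on your node labels, and the selector shown is Kafka's built-in rack-aware replica selector):

```properties
# Populated from the node's topology.kubernetes.io/zone label
broker.rack=us-east-1a
# Lets consumers fetch from the closest in-rack replica
replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector
```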

## MariaDB

### Distributed MariaDB Autoscaler
In this release, we have introduced support for the Distributed MariaDB Autoscaler and KubeStash (also known as Stash 2.0) backup and restore functionalities.

To enable autoscaling, the metrics server and a monitoring agent (Prometheus) must be installed on all clusters where Distributed MariaDB pods are running. You can provide the monitoring agent details via the PlacementPolicy CR.

```yaml
apiVersion: apps.k8s.appscode.com/v1
kind: PlacementPolicy
metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
  name: distributed-mariadb
spec:
  clusterSpreadConstraint:
    distributionRules:
    - clusterName: demo-controller
      monitoring:
        prometheus:
          url: http://prometheus-operated.monitoring.svc.cluster.local:9090
      replicaIndices:
      - 2
    - clusterName: demo-worker
      monitoring:
        prometheus:
          url: http://prometheus-operated.monitoring.svc.cluster.local:9090
      replicaIndices:
      - 0
      - 1
      - 3
    slice:
      projectNamespace: kubeslice-demo-distributed-mariadb
      sliceName: demo-slice
  nodeSpreadConstraint:
    maxSkew: 1
    whenUnsatisfiable: ScheduleAnyway
  zoneSpreadConstraint:
    maxSkew: 1
    whenUnsatisfiable: ScheduleAnyway
```
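With the PlacementPolicy in place, the autoscaler itself can be deployed. The sketch below mirrors the ClickHouseAutoscaler structure shown earlier; the object name, the referenced database name, and the resource bounds are illustrative values, not prescribed defaults:

```yaml
apiVersion: autoscaling.kubedb.com/v1alpha1
kind: MariaDBAutoscaler
metadata:
  name: md-compute-autoscale
  namespace: demo
spec:
  databaseRef:
    name: mariadb-dist            # name of the distributed MariaDB CR (illustrative)
  compute:
    mariadb:
      trigger: "On"
      podLifeTimeThreshold: 5m
      resourceDiffPercentage: 20
      minAllowed:
        cpu: 600m
        memory: 1Gi
      maxAllowed:
        cpu: 1
        memory: 2Gi
      controlledResources: ["cpu", "memory"]
      containerControlledValues: "RequestsAndLimits"
```

The autoscaler reads usage metrics through the per-cluster monitoring configuration declared in the PlacementPolicy above.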

We have implemented several enhancements to bolster continuous archiving and point-in-time recovery for MariaDB in KubeDB. Here's an overview of the key updates:

### Restic Driver for Base Backup Support

We now support the Restic driver in MariaDB continuous archiving and recovery operations. Previously, only the VolumeSnapshotter driver was supported.

To use the Restic driver, configure the MariaDBArchiver Custom Resource (CR) by setting `.spec.fullBackup.driver` to "Restic".

### Replication Strategies for MariaDB Archiver Restore

We have introduced a new replication strategy feature that supports two distinct methods for restoring MariaDB replicas. The available strategies are outlined below:

***none***: Each MariaDB replica restores the base backup and binlog files independently. Once the restore process is complete, the replicas join the cluster individually.

***sync***: The base backup and binlog files are restored solely on pod-0. The other replicas then synchronize their data from pod-0 using MariaDB’s native replication mechanism.

Two more strategies will be added in upcoming releases. Below is a sample YAML configuration for setting up a MariaDBArchiver in KubeDB.

Note: You must set `runAsUser` to the database user ID (999) in the `jobTemplate`.

```yaml
apiVersion: archiver.kubedb.com/v1alpha1
kind: MariaDBArchiver
metadata:
  name: mariadbarchiver-sample
  namespace: demo
spec:
  pause: false
  databases:
    namespaces:
      from: Selector
      selector:
        matchLabels:
          kubernetes.io/metadata.name: demo
    selector:
      matchLabels:
        archiver: "true"
  retentionPolicy:
    name: rp
    namespace: demo
  encryptionSecret:
    name: "encrypt-secret"
    namespace: "demo"
  fullBackup:
    driver: "Restic"
    scheduler:
      successfulJobsHistoryLimit: 1
      failedJobsHistoryLimit: 1
      schedule: "0 0 * * *"
    sessionHistoryLimit: 2
    jobTemplate:
      spec:
        securityContext:
          runAsUser: 999
          runAsGroup: 0
  manifestBackup:
    scheduler:
      successfulJobsHistoryLimit: 1
      failedJobsHistoryLimit: 1
      schedule: "0 0 * * *"
    sessionHistoryLimit: 2
  backupStorage:
    ref:
      name: "storage"
      namespace: "demo"
```

Here’s a sample YAML configuration for restoring MariaDB from a backup using the new features:

```yaml
apiVersion: kubedb.com/v1
kind: MariaDB
metadata:
  name: restore-mariadb
  namespace: demo
spec:
  init:
    archiver:
      replicationStrategy: sync
      encryptionSecret:
        name: encrypt-secret
        namespace: demo
      fullDBRepository:
        name: md-full
        namespace: demo
      recoveryTimestamp: "2026-10-01T06:33:02Z"
  version: "11.6.2"
  replicas: 3
  storageType: Durable
  storage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
  deletionPolicy: WipeOut
```

## Postgres

### Health Check

The write check for Postgres databases has changed. Previously, we created a `kubedb_write_check` table to verify that the database was writable before marking it healthy. From now on, we run `BEGIN READ WRITE; ROLLBACK;` instead.

As a result, the LSN will no longer advance unnecessarily. Note that this might trigger a `HighRollBackAlert` for your Postgres database if you have set a low rollback threshold.

### Bug fix

We have fixed a bug that prevented a standby from joining the primary, failing with a `grpc call to pg_controldata failed` error.

## Redis/Valkey

### ACL (Access Control List)

In this release, we have added Access Control List (ACL) support for Redis and Valkey.

To start, deploy Redis/Valkey with users and their associated passwords and rules:

```yaml
apiVersion: kubedb.com/v1
kind: Redis
metadata:
  name: vk
  namespace: demo
spec:
  version: 8.2.2
  mode: Cluster
  cluster:
    shards: 3
    replicas: 2
  storageType: Durable
  deletionPolicy: WipeOut
  acl:
    secretRef:
      name: acl-secret
    rules:
      - app1 ${k1} allkeys +@string +@set -SADD
      - app2 ${k2} allkeys +@string +@set -SADD
```

Deploy the secret that has the necessary passwords for ACL users:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: acl-secret
  namespace: demo
type: Opaque
stringData:
  k1: "pass1"
  k2: "pass2"
```

To add, update, or delete ACL users, you can use a RedisOpsRequest. For example:

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: RedisOpsRequest
metadata:
  name: rdops-reconfigure
  namespace: demo
spec:
  type: Reconfigure
  databaseRef:
    name: vk
  configuration:
    auth:
      syncACL:
        - app1 ${k1} +get ~mykeys:*
        - app10 ${k10} +get ~mykeys:*
      deleteUsers:
        - app2
      secretRef:
        name: <new/old secret name which contains referenced keys>
```

Please follow the docs below for complete guidance on Redis ACL: [Using Access Control Lists (ACL) in Redis](https://kubedb.com/docs/v2025.10.17/guides/redis/configuration/acl/), [Reconfiguring Redis ACL](https://kubedb.com/docs/v2025.10.17/guides/redis/reconfigure/acl/).

### New version support
Redis versions `7.4.6`, `8.0.4`, and `8.2.2` are now available in KubeDB.

## Support
- **Contact Us**: Reach out via [our website](https://appscode.com/contact/).
- **Release Updates**: Join our [Google Group](https://groups.google.com/a/appscode.com/g/releases) for release updates.
- **Stay Updated**: Follow us on [Twitter/X](https://x.com/KubeDB) for product announcements.
- **Tutorials**: Subscribe to our [YouTube channel](https://youtube.com/@appscode) for tutorials on production-grade Kubernetes tools.
- **Learn More**: Explore [Production-Grade Databases in Kubernetes](https://kubedb.com/).
- **Report Issues**: File bugs or feature requests on [GitHub](https://github.com/kubedb/project/issues/new).