Releases: karmada-io/karmada
karmada v1.0.1 release
Changes since v1.0.0
Bug Fixes
- `karmadactl` and `kubectl-karmada`: Fixed the issue that `init` cannot update the `APIService`. (#1207, @prodanlabs)
- `karmada-controller-manager`: Fixed the `ApplyPolicySucceed` event type mistake (it should be `Normal`, not `Warning`). (#1267, @Garrybest)
- `karmada-controller-manager` and `karmada-agent`: Fixed the issue that resync slows down reconciliation. (#1265, @Garrybest)
karmada v1.0.0 release
What's New
Aggregated Kubernetes API Endpoint
The newly introduced `karmada-aggregated-apiserver` component aggregates all registered clusters and allows users to access member clusters through Karmada by the proxy endpoint, e.g.
- Retrieve `Node` from `member1`: /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/api/v1/nodes
- Retrieve `Pod` from `member2`: /apis/cluster.karmada.io/v1alpha1/clusters/member2/proxy/api/v1/namespaces/default/pods
Please refer to user guide for more details.
(Feature contributor: @kevin-wangzefeng @GitHubxsy @XiShanYongYe-Chang @mrlihanbo @jrkeen @prodanlabs @carlory @RainbowMango)
Promoting Workloads from Legacy Clusters to Karmada
Legacy workloads running in Kubernetes clusters can now be promoted to Karmada smoothly, without restarting containers.
With the `promote` command added to the Karmada CLI, any kind of Kubernetes resource can be promoted to Karmada easily, e.g.
# Promote deployment(default/nginx) from cluster1 to Karmada
kubectl karmada promote deployment nginx -n default -c cluster1
(Feature contributor: @lonelyCZ @iawia002 @dddddai)
Verified Integration with Ecosystem
Benefiting from its Kubernetes native API support, Karmada can easily integrate the single-cluster ecosystem for multi-cluster and multi-cloud purposes. The following components have been verified by the Karmada community:
- argo-cd: refer to working with argo-cd
- Flux: refer to propagating helm charts with flux
- Istio: refer to working with Istio
- Filebeat: refer to working with Filebeat
- Submariner: refer to working with Submariner
- Velero: refer to working with Velero
- Prometheus: refer to working with Prometheus
(Feature contributor: @lfbear @learner0810 @zirain @Rains6 @gy95 @XiShanYongYe-Chang )
OverridePolicy Improvements
By leveraging the newly introduced `RuleWithCluster` field in `OverridePolicy` and `ClusterOverridePolicy`, users are now able to define cluster-specific override rules for specified workloads within a single policy, as sketched below.
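A minimal sketch of such a policy, assuming a hypothetical `nginx` Deployment and clusters named `member1`/`member2`:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: nginx-override        # hypothetical name
  namespace: default
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  overrideRules:              # each entry is a RuleWithCluster
    - targetCluster:
        clusterNames: [member1]
      overriders:
        imageOverrider:
          - component: Tag
            operator: replace
            value: "1.21.0"
    - targetCluster:
        clusterNames: [member2]
      overriders:
        imageOverrider:
          - component: Tag
            operator: replace
            value: "1.20.0"
```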
(Feature contributor: @iawia002 @lfbear @RainbowMango @lonelyCZ @jameszhangyukun )
Karmada Installation Improvements
Introduced the `init` command to the Karmada CLI. Users are now able to install Karmada with a single command.
Please refer to Installing Karmada for more details.
(Feature contributor: @prodanlabs @lonelyCZ @jrkeen )
Configuring Karmada Controllers
All controllers provided by Karmada now work as plug-ins. Users are now able to turn off any of them from the default enabled list.
See the `--controllers` flag of `karmada-controller-manager` and `karmada-agent` for more details.
(Feature contributor: @snowplayfire @iawia002 @jameszhangyukun )
Resource Interpreter Webhook Enhancement
Introduced `ReviseReplica` support for the Resource Interpreter Webhook framework, which enables scheduling all customized workloads just like Kubernetes native ones.
Refer to Resource Interpreter Webhook Proposal for more design details.
(Feature contributor: @iawia002)
Other Notable Changes
Bug Fixes
- `karmada-controller-manager`: Fixed the issue that the annotations of a resource template cannot be updated. (@mrlihanbo #1012)
- `karmada-controller-manager`: Fixed the issue of generating the binding reference key. (@JarHMJ #1003)
- `karmada-controller-manager`: Fixed inefficient en-queueing of failed tasks. (@Garrybest #1068)
Features & Enhancements
- Karmada CLI: Introduced the `--cluster-provider` flag to the `join` command to specify the provider of the joining cluster. (@2hangchen #1025)
- Karmada CLI: Introduced the `taint` command to set taints on clusters. (@lonelyCZ #889)
- Karmada CLI: The `Applied` condition of `Work` and the `Scheduled`/`FullyApplied` conditions of `ResourceBinding` are now available for `kubectl get`. (@lonelyCZ #1110)
- `karmada-controller-manager`: The cluster discovery feature now supports `v1beta1` of `cluster-api`. (@iawia002 #1029)
- `karmada-controller-manager`: The `Job`'s `startTime` and `completionTime` are now available in the resource template. (@Garrybest #1034)
- `karmada-controller-manager`: Introduced the `--controllers` flag to enable or disable controllers. (@snowplayfire #1083)
- `karmada-controller-manager`: Supports retaining the `ownerReference` from observed objects. (@snowplayfire #1116)
- `karmada-controller-manager` and `karmada-agent`: Introduced the `cluster-cache-sync-timeout` flag to specify the time to wait for cache sync. (@snowplayfire #1112)
Instrumentation (Metrics and Events)
- `karmada-scheduler-estimator`: Introduced the `/metrics` endpoint to emit metrics. (@Garrybest #1030)
- Introduced `ApplyPolicy` and `ScheduleBinding` events for the resource template. (@mrlihanbo #1070)
Deprecation
- The `ReplicaSchedulingPolicy` API, deprecated in v0.9.0, has now been removed in favor of `ReplicaScheduling` in `PropagationPolicy`. (@iawia002 #1161)
Contributors
Thank you to everyone who contributed to this release!
Users whose commits are in this release (alphabetically by user name)
- @2hangchen
- @aven-ai
- @BDXGD
- @carlory
- @dddddai
- @eightzero
- @fanzhihai0215
- @feeltimeQ
- @fleeto
- @Garrybest
- @ghl116
- @gy95
- @haiker2011
- @Haleygo
- @iawia002
- @imroc
- @JackZxj
- @jameszhangyukun
- @JarHMJ
- @jrkeen
- @kevin-wangzefeng
- @leonharetd
- @lfbear
- @lonelyCZ
- @mrlihanbo
- @Phil-sun
- @pigletfly
- @prodanlabs
- @RainbowMango
- @Rains6
- @Shike-Ada
- @snowplayfire
- @wawa0210
- @XiShanYongYe-Chang
- @zirain
karmada v0.10.1 release
Changes since v0.10.0
Bug Fixes
- karmada-controller-manager: Fixed the issue of generating the binding reference key. (#1003, @JarHMJ)
- karmada-controller-manager: Fixed the issue that resource template annotations cannot be updated. (#1012, @mrlihanbo)
karmada v0.10.0 release
What's New
Resource Interpreter Webhook
The newly introduced Resource Interpreter Webhook
framework allows users to implement their own CRD plugins that will be consulted at all parts of propagation process. With this feature, CRDs and CRs will be propagated just like Kubernetes native resources, which means all scheduling primitives also support custom resources. An example as well as some helpful utilities are provided to help users better understand how this framework works.
Refer to Proposal for more details.
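As a rough sketch of how such a webhook is registered, assuming a hypothetical `Workload` CRD and webhook endpoint:

```yaml
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterWebhookConfiguration
metadata:
  name: examples
webhooks:
  - name: workloads.example.com             # hypothetical webhook name
    rules:
      - operations: ["InterpretReplica", "ReviseReplica"]
        apiGroups: ["workload.example.io"]  # hypothetical CRD group
        apiVersions: ["v1alpha1"]
        kinds: ["Workload"]
    clientConfig:
      url: https://interpreter.example.svc:443/interpret   # hypothetical endpoint
      caBundle: "<CA_BUNDLE>"
    interpreterContextVersions: ["v1alpha1"]
    timeoutSeconds: 3
```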
(Feature contributor: @RainbowMango, @XiShanYongYe-Chang, @gy95)
Significant Scheduling Enhancement
- Introduced the `dynamicWeight` primitive to `PropagationPolicy` and `ClusterPropagationPolicy`. With this feature, replicas can be divided by a dynamic weight list, where the weight of each cluster is calculated based on its available replicas during scheduling. This feature can balance cluster utilization significantly; see the sketch after this list. (#841)
- Introduced `Job` schedule (divide) support. A `Job` that desires many replicas can now be divided across many clusters, just like a `Deployment`. This feature makes it possible to run huge Jobs across small clusters. (#898)
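A minimal sketch of the `dynamicWeight` primitive in a `PropagationPolicy` (the Deployment name is hypothetical):

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation     # hypothetical name
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    replicaScheduling:
      replicaSchedulingType: Divided
      replicaDivisionPreference: Weighted
      weightPreference:
        # each cluster's weight is computed from its available replicas
        dynamicWeight: AvailableReplicas
```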
(Feature contributor: @Garrybest )
Workloads Observation from Karmada Control Plane
After workloads (e.g. Deployments) are propagated to member clusters, users may also want to get the overall workload status across many clusters, especially the status of each `pod`. In this release, a `get` subcommand was introduced to `kubectl-karmada`. With this command, users are now able to get all kinds of resources deployed in member clusters from the Karmada control plane.
For example (get `deployment` and `pods` across clusters):
$ kubectl karmada get deployment
NAME CLUSTER READY UP-TO-DATE AVAILABLE AGE ADOPTION
nginx member2 1/1 1 1 19m Y
nginx member1 1/1 1 1 19m Y
$ kubectl karmada get pods
NAME CLUSTER READY STATUS RESTARTS AGE
nginx-6799fc88d8-vzdvt member1 1/1 Running 0 31m
nginx-6799fc88d8-l55kk member2 1/1 Running 0 31m
(Feature contributor: @lfbear @QAQ-rookie)
Other Notable Changes
- karmada-scheduler-estimator: The number of pods becomes an important reference when calculating available replicas for the cluster. (@Garrybest, #777)
- The labels (`resourcebinding.karmada.io/namespace`, `resourcebinding.karmada.io/name`, `clusterresourcebinding.karmada.io/name`) which were previously added on the Work object have now been moved to annotations. (@XiShanYongYe-Chang, #752)
- Bugfix: Fixed the impact of cluster unjoining on resource status aggregation. (@dddddai, #817)
- Instrumentation: Introduced events (`SyncFailed` and `SyncSucceed`) to the Work object. (@wawa0210, #800)
- Instrumentation: Introduced the condition (`Scheduled`) to the `ResourceBinding` and `ClusterResourceBinding`. (@dddddai, #823)
- Instrumentation: Introduced events (`CreateExecutionNamespaceFailed` and `RemoveExecutionNamespaceFailed`) to the Cluster object. (@pigletfly, #749)
- Instrumentation: Introduced several metrics (`workqueue_adds_total`, `workqueue_depth`, `workqueue_longest_running_processor_seconds`, `workqueue_queue_duration_seconds_bucket`) for `karmada-agent` and `karmada-controller-manager`. (@Garrybest, #831)
- Instrumentation: Introduced the condition (`FullyApplied`) to the `ResourceBinding` and `ClusterResourceBinding`. (@lonelyCZ, #825)
. (@lonelyCZ, #825) - karmada-scheduler: Introduced feature gates. (@iawia002, #805)
- karmada-controller-manager: Resources deleted from member clusters now use "Background" as the default delete option. (@RainbowMango, #970)
Contributors
Thank you to everyone who contributed to this release!
Users whose commits are in this release (alphabetically by user name)
- @2hangchen
- @algebra2k
- @benjaminhuo
- @Charlie17Li
- @ctripcloud
- @dddddai
- @duguhaotian
- @fleeto
- @Garrybest
- @gf457832386
- @gy95
- @hyschumi
- @iawia002
- @jameszhangyukun
- @kerthcet
- @kevin-wangzefeng
- @learner0810
- @lfbear
- @lonelyCZ
- @mrlihanbo
- @penghuima
- @Phil-sun
- @pigletfly
- @QAQ-rookie
- @RainbowMango
- @snowplayfire
- @TeodoraBoros
- @wawa0210
- @wzshiming
- @XiShanYongYe-Chang
- @youhonglian
- @yvoilee
karmada v0.9.0 release
What's New
Upgrading support
Users are now able to upgrade from the previous version smoothly. With the multiple-version feature of CRDs, objects with different schemas can be automatically converted between versions. Karmada uses semantic versioning and will provide workarounds for inevitable breaking changes.
In this release, `ResourceBinding` and `ClusterResourceBinding` are promoted to `v1alpha2`, and the previous `v1alpha1` version remains available for one more release. With the upgrading instructions, previous versions of Karmada can be upgraded smoothly.
(Feature contributor: @RainbowMango )
Introduced karmada-scheduler-estimator to facilitate end-to-end scheduling accuracy
The Karmada scheduler aims to assign workloads to clusters according to constraints and the available resources of each member cluster. The `kube-scheduler` running in each cluster takes the responsibility of assigning Pods to Nodes.
Even though Karmada has the capacity to reschedule failed workloads between member clusters, the community still commits a lot of effort to improving the accuracy of end-to-end scheduling.
The `karmada-scheduler-estimator` is an effective assistant to the `karmada-scheduler`: it provides prediction-based scheduling decisions that can significantly improve scheduling efficiency and avoid waves of rescheduling among clusters. Note that this feature is implemented as a pluggable add-on. For instructions, please refer to the scheduler estimator guideline.
(Feature contributor: @Garrybest )
Maintainability improvements
A bunch of significant maintainability improvements were added to this release, including:
- Simplified Karmada installation with a Helm chart. (Feature contributor: @algebra2k @jrkeen)
- Provided metrics to observe scheduler status; the metrics API is now served at `/metrics` of `karmada-scheduler`. With these metrics, users are now able to evaluate the scheduler's performance and identify bottlenecks. (Feature contributor: @qianjun1993)
- Provided events on Karmada API objects as supplemental information for debugging problems. (Feature contributor: @pigletfly)
Other Notable Changes
- karmada-controller-manager: The `ResourceBinding`/`ClusterResourceBinding` won't be deleted after the associated `PropagationPolicy`/`ClusterPropagationPolicy` is removed, and remains available until the `resource template` is removed. (@qianjun1993, #601)
- Introduced `--leader-elect-resource-namespace` to specify the namespace of the election object for the components `karmada-controller-manager`/`karmada-scheduler`/`karmada-agent`. (@XiShanYongYe-Chang #698)
- Deprecation: The `ReplicaSchedulingPolicy` API has been deprecated and will be removed in the following release. The feature has now been integrated into ReplicaScheduling.
- Introduced `kubectl-karmada` commands as extensions for `kubectl`. (@XiShanYongYe-Chang #686)
- `karmada-controller-manager` introduced a `version` command to report version information. (@RainbowMango #717)
- `karmada-scheduler`/`karmada-webhook`/`karmada-agent`/`karmada-scheduler-estimator` introduced a `version` command to report version information. (@lonelyCZ #719)
- Provided instructions about how to use `Submariner` to connect the network between member clusters. (@XiShanYongYe-Chang #737)
- Added four metrics to the `karmada-scheduler` to monitor scheduler performance. (@qianjun1993 #747)
Contributors
Thank you to everyone who contributed to this release!
Users whose commits are in this release (alphabetically by user name)
karmada v0.8.0 release
What's New
Automatic cluster discovery with cluster-api
For users who are using cluster-api (sigs.k8s.io/cluster-api), Karmada is now able to automatically discover & join clusters when they are provisioned, and unjoin them when they are destroyed.
Note that this feature is implemented as a built-in plugin. To enable it, simply specify the following two flags in the `karmada-controller-manager` config:
--cluster-api-kubeconfig string Path to the cluster-api management cluster kubeconfig file.
--cluster-api-context string Name of the cluster context in cluster-api management cluster kubeconfig file.
(Feature contributor: @XiShanYongYe-Chang )
Introduced CommandOverrider and ArgsOverrider to simplify commands customization per cluster
For multi-cluster applications, it's quite common to set different arguments when running on different clusters or environments.
In this release, two overrider plugins, `CommandOverrider` and `ArgsOverrider`, are introduced based on industry best practices. These two handy tools allow users to declare complex customizations and avoid configuration mistakes.
The workload types supported now are `Deployment`, `ReplicaSet`, `DaemonSet`, `StatefulSet`, and `Pod`; more types, including CRDs, will be supported in later releases. A sketch follows below.
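For example, a minimal sketch of an `ArgsOverrider` that appends an argument to a container on one cluster (the Deployment, container, cluster name, and argument are hypothetical; `CommandOverrider` follows the same shape for the container command):

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: nginx-args-override    # hypothetical name
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  targetCluster:
    clusterNames: [member1]
  overriders:
    argsOverrider:
      - containerName: nginx
        operator: add          # add/remove entries of the container args
        value: ["--log-level=debug"]
```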
(Feature contributor: @lfbear @betaincao )
Better integration support with Kubernetes ecosystem
Karmada's support for Kubernetes native APIs and its patterns for running cloud-native applications make it quite easy to quickly integrate with other projects in the Kubernetes ecosystem.
This release adds several useful features that help Karmada work seamlessly with other systems:
- `ResourceBinding` and `ClusterResourceBinding` now support presenting the `applied` status. (@pigletfly #595)
- More types of resources now support aggregating status to the resource template, including `Job`, `Service`, and `Ingress`. (@mrlihanbo #609)
- argo-cd is also verified to run fully featured with Karmada to achieve multi-cluster GitOps.
Other Notable Changes
- karmadactl: introduced the `cordon` and `uncordon` commands to mark a cluster as un-schedulable or schedulable. (#464, @algebra2k)
- karmada-controller-manager: introduced the `--skipped-propagating-namespaces` flag to skip propagating resources in certain namespaces. (#533, @pigletfly)
- karmada-controller-manager/karmada-agent/karmada-scheduler: Introduced flags to configure the QPS and burst used to control the client traffic interacting with Karmada or a member cluster's kube-apiserver. (#611, @Garrybest)
  - `--cluster-api-qps` QPS to use while talking with cluster kube-apiserver.
  - `--cluster-api-burst` Burst to use while talking with cluster kube-apiserver.
  - `--kube-api-qps` QPS to use while talking with karmada-apiserver.
  - `--kube-api-burst` Burst to use while talking with karmada-apiserver.
- Karmada quick-start scripts now support running on macOS. (#538, @lfbear)
Contributors
Thank you to everyone who contributed to this release!
Users whose commits are in this release (alphabetically by user name)
karmada v0.7.0 release
What's New
Support multi-cluster service discovery
In many cases, a Kubernetes user may want to split their deployments across multiple clusters, but still retain mutual dependencies between workloads running in those clusters.
Users are now able to `export` and `import` services between clusters with the Multi-Cluster Service API (MCS-API). (@XiShanYongYe-Chang)
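A minimal sketch with the MCS-API types (the service name and namespace are hypothetical): a `ServiceExport` in the cluster where the service resides exposes it, and a `ServiceImport` makes it consumable from other clusters.

```yaml
# Export the service "serve" from its home cluster
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: serve
  namespace: default
---
# Import it into the clusters where consumers run
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: serve
  namespace: default
spec:
  type: ClusterSetIP
  ports:
    - port: 80
      protocol: TCP
```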
Support more precise cluster status management
Besides reporting cluster status, the `cluster status controller` now also renews the `lease`. The newly introduced `cluster monitor` watches the `lease` and will mark the cluster's ready status as `unknown` in case the `cluster status controller` stops working. (@Garrybest)
Support replica scheduling based on cluster resources
In some scenarios, users want to `divide` the replicas of a `deployment` across multiple clusters when a single cluster doesn't have sufficient resources.
Users are now able to declare the replica scheduling preference via the new field `ReplicaDivisionPreference` in `PropagationPolicy` and `ClusterPropagationPolicy`, as sketched below. (@qianjun1993)
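A minimal sketch of the relevant placement section; with the `Aggregated` preference, one possible configuration, the scheduler divides replicas while respecting each cluster's available resources:

```yaml
spec:
  placement:
    replicaScheduling:
      replicaSchedulingType: Divided
      # divide replicas based on cluster resource estimation
      replicaDivisionPreference: Aggregated
```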
Support more convenient APIs to divide replicas by weight list
Users are now able to declare cluster weights via `ReplicaDivisionPreference` in `PropagationPolicy` and `ClusterPropagationPolicy`; with the preference `Weighted`, the scheduler will divide replicas according to the `WeightPreference`, as sketched below. (@qianjun1993)
This feature is designed to replace the standalone `ReplicaSchedulingPolicy` API in the future.
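A minimal sketch of a static weight list (cluster names and weights are hypothetical); replicas are divided 2:1 between the two clusters:

```yaml
spec:
  placement:
    replicaScheduling:
      replicaSchedulingType: Divided
      replicaDivisionPreference: Weighted
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames: [member1]
            weight: 2
          - targetCluster:
              clusterNames: [member2]
            weight: 1
```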
Other Notable Changes
- karmada-agent: Introduced the `--karmada-context` flag to indicate the cluster context in the karmada kubeconfig file. (#415, @mrlihanbo)
- karmada-agent and karmada-controller-manager: Introduced the `--cluster-lease-duration` and `--cluster-lease-renew-interval-fraction` flags to specify the lease expiration period and renew interval fraction. (#421, @pigletfly)
- karmada-scheduler: Added a filter plugin to prevent scheduling to a cluster if the required API is not installed. (#470, @vincent-pli)
- karmada-controller-manager: Introduced the `--skipped-propagating-apis` flag to skip resources from propagating. (#345, @pigletfly)
- Installation: The `hack/deploy-karmada.sh` and `hack/deploy-karmada-agent.sh` scripts now support installing Karmada components on both `Kind` clusters and standalone clusters. (#458, @lfbear)
- If resources already exist in member clusters, Karmada will refuse to propagate and adopt them by default, in order to avoid conflicts. (#471, @mrlihanbo)
Contributors
Thank you to everyone who contributed to this release!
Users whose commits are in this release (alphabetically by user name)
karmada v0.6.0 release
What's New
Support syncing with member cluster behind proxy
In some scenarios, certain clusters may not be directly reachable from the Internet, such as:
- The member clusters are behind a NAT gateway from the Karmada control plane
- The member clusters are in an on-prem Intranet while Karmada runs in the cloud
By setting `proxy-url` in the `kubeconfig` when registering member clusters, Karmada will talk to member clusters through the indicated proxy, as sketched below. (#307, @liufen90)
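For example, a kubeconfig cluster entry with `proxy-url` set (the server and proxy addresses are hypothetical):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: member1
    cluster:
      server: https://10.0.0.1:6443              # hypothetical member cluster API server
      proxy-url: http://proxy.example.com:8080   # hypothetical proxy
      certificate-authority-data: "<CA_DATA>"
```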
Introduced ImageOverrider for simplifying image replacement
In most scenarios where clusters are running in different clouds or data centers, the workload requires a different image registry. `ImageOverrider` is a handy tool to override images for a workload before it is propagated to clusters. (#370, @XiShanYongYe-Chang)
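A minimal sketch that replaces the image registry for a workload before propagation (the registry, Deployment, and cluster name are hypothetical):

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: nginx-image-override   # hypothetical name
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  targetCluster:
    clusterNames: [member1]
  overriders:
    imageOverrider:
      - component: Registry    # Registry, Repository, or Tag
        operator: replace
        value: registry.my-cloud.example.com   # hypothetical registry
```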
Support scheduling based on cluster taint toleration
`Karmada-scheduler` now respects taints on member clusters and tolerations defined in `PropagationPolicy` and `ClusterPropagationPolicy` when scheduling resources. (#320, @mrlihanbo)
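For example, a `PropagationPolicy` placement fragment declaring a toleration (the taint key is hypothetical):

```yaml
spec:
  placement:
    clusterTolerations:
      - key: example.io/unstable   # hypothetical taint key on a cluster
        operator: Exists
        effect: NoSchedule
```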
Support scheduling based on cluster topology
`Karmada-scheduler` now supports scheduling resources according to the topology information (cluster/provider/region/zone) defined in `cluster` objects, as sketched below. (#357, @mrlihanbo)
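As a sketch, spread constraints in the placement can tie scheduling to the cluster topology (the group bounds below are hypothetical):

```yaml
spec:
  placement:
    spreadConstraints:
      - spreadByField: region   # cluster, provider, region, or zone
        maxGroups: 2
        minGroups: 1
```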
Other Notable Changes
- Installation: introduced `hack/remote-up-karmada.sh` to install Karmada on a specified Kubernetes cluster as host. (#367, @lfbear)
- karmadactl: introduced the `version` command to show the version it was built from. Try it with the command `karmadactl version`. (#285, @algebra2k)
- API: added short names for most APIs. (#376, @pigletfly)
- The resource templates now match PropagationPolicy or ClusterPropagationPolicy in alphabetical order when multiple policies match. (#306, @XiShanYongYe-Chang)
- Always generate `ResourceBinding` objects for namespace-scoped resource templates. (#315, @vincent-pli)
- karmada-controller-manager: introduced the `leader-elect` command line flag to enable or disable leader election. (#321, @pigletfly)
- The `Work` objects' names now consist of the resource template's `name`, `kind`, and `namespace`. (#359, @Garrybest)
Contributors
Thank you to everyone who contributed to this release!
Users whose commits are in this release (alphabetically by user name)
- @algebra2k
- @anirudhramnath
- @daixiang0
- @futuretea
- @Garrybest
- @gy95
- @hantmac
- @huiwq1990
- @Iceber
- @kevin-wangzefeng
- @leofang94
- @LeoLiuYan
- @liufen90
- @lfbear
- @mrlihanbo
- @pigletfly
- @RainbowMango
- @vincent-pli
- @XiShanYongYe-Chang
- @yangcheng-icbc
karmada v0.5.0 release
What's New
Support resource status aggregation from Karmada
Users are now able to query the aggregated status of resources (propagated by Karmada) from the Karmada API server, with no need to connect to each member cluster.
Each resource's status in member clusters will be aggregated to its `binding` objects.
In addition, if the resource type is `deployment`, the deployment status will also be reflected.
`karmada-agent` to support pull-based synchronization between the control plane and member clusters
`karmada-agent` is introduced in this release to support cases where the member clusters are not directly reachable from the Karmada control plane.
The agent basically pulls all useful configurations from the Karmada control plane and applies them to the member clusters it serves.
The `karmada-agent` also completes cluster registration automatically.
`ReplicaSchedulingPolicy` API to customize replica scheduling constraints of Deployments
Users are now able to customize the replica scheduling constraints of Deployments with the `ReplicaSchedulingPolicy` API.
The replicas will be divided among member clusters according to the weight list indicated by the policy.
Other Notable Changes
- The labels `karmada.io/override` and `karmada.io/cluster-override` have been deprecated and replaced by `policy.karmada.io/applied-overrides` and `policy.karmada.io/applied-cluster-overrides` to indicate applied override rules.
- The `ResourceBinding` and `ClusterResourceBinding` names now consist of `resource kind` and `resource name`.
- Both `PropagationPolicy` and `ClusterPropagationPolicy` names are now restricted to no more than 63 characters.
- `OverridePolicy` and `ClusterOverridePolicy` changes now take effect immediately.
- Users are now able to use the new flag `--cluster-status-update-frequency` when configuring `karmada-agent` and `karmada-controller-manager`, to specify the cluster status update frequency.
Contributors
Thank you to everyone who contributed to this release!
Users whose commits are in this release (alphabetically by user name)
- @kevin-wangzefeng
- @mrlihanbo
- @RainbowMango
- @tinyma123
- @XiShanYongYe-Chang
- @yangcheng-icbc
karmada v0.4.0 release
What's New
New policy APIs have been added to support cluster level resources propagation and customization
Users are now able to use `ClusterPropagationPolicy` to propagate both cluster-scoped and namespace-scoped resources, as sketched below. In addition, users are able to use `ClusterOverridePolicy` to define override policies across clusters to realize differentiated propagation.
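A minimal sketch propagating a cluster-scoped resource (the ClusterRole and cluster names are hypothetical):

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
  name: clusterrole-propagation   # hypothetical name
spec:
  resourceSelectors:
    - apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      name: example-reader        # hypothetical ClusterRole
  placement:
    clusterAffinity:
      clusterNames: [member1, member2]
```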
Support resource and policy detector
The detector watches changes to both resources and policies (PropagationPolicy and ClusterPropagationPolicy); all changes to resources or policies take effect immediately.
Namespace auto-provision feature gets on board
Namespaces created on `Karmada` will be synced to all member clusters automatically. Users don't need to propagate namespaces anymore.
Scheduler is now able to reschedule resources when policy changes
Once the `Placement` rule in the `PropagationPolicy` changes, the scheduler will reschedule to meet the declaration.
Scheduler now supports failure recovery
Once any of the clusters fails, the scheduler is now able to re-schedule the resources to available clusters.
This feature is controlled by the flag `--failover` and is disabled by default.
Other Notable Changes
- The `PropagationWork` API is now `Work` and located in the `work.karmada.io` group.
- The `PropagationBinding` API is now `ResourceBinding` and located in the `work.karmada.io` group.
- The label `karmada.io/driven-by` has been deprecated and replaced by `propagationpolicy.karmada.io/namespace`, `propagationpolicy.karmada.io/name`, and `clusterpropagationpolicy.karmada.io/name`.
- The label `karmada.io/created-by` has been deprecated and replaced by `propagationpolicy.karmada.io/namespace`, `propagationpolicy.karmada.io/name`, `clusterpropagationpolicy.karmada.io/name`, `resourcebinding.karmada.io/namespace`, `resourcebinding.karmada.io/name`, `clusterresourcebinding.karmada.io/name`, `work.karmada.io/namespace`, and `work.karmada.io/name`.
- Added the new annotation `policy.karmada.io/applied-placement` for both `ResourceBinding` and `ClusterResourceBinding` resources, to indicate the placement rule.
- Added a Validating Admission Webhook to restrict resource selector changes for `PropagationPolicy` and `ClusterPropagationPolicy` objects.
Contributors
Thank you to everyone who contributed to this release!
Users whose commits are in this release (alphabetically by user name)