I work for Sift, and we have a possibly unique use case for our Bigtable instances. Each of our instances has two clusters. At any given time one cluster is the "primary" and serves all live requests (configured via application profiles). The "secondary" cluster is mostly idle, so its CPU usage is low. We want to keep the secondary the same size as the primary so that we can switch live traffic to it at any time.
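For reference, the routing piece is just a single-cluster app profile pointed at whichever cluster is currently primary, roughly along these lines (project, instance, profile, and cluster names here are placeholders):

```java
import com.google.cloud.bigtable.admin.v2.BigtableInstanceAdminClient;
import com.google.cloud.bigtable.admin.v2.models.AppProfile.SingleClusterRoutingPolicy;
import com.google.cloud.bigtable.admin.v2.models.CreateAppProfileRequest;

public class CreateLiveTrafficProfile {
  public static void main(String[] args) throws Exception {
    try (BigtableInstanceAdminClient admin = BigtableInstanceAdminClient.create("my-project")) {
      // "live-traffic" routes all live requests to the cluster that is currently primary.
      admin.createAppProfile(
          CreateAppProfileRequest.of("my-instance", "live-traffic")
              .setRoutingPolicy(SingleClusterRoutingPolicy.of("my-instance-primary"))
              .setDescription("Routes live traffic to the primary cluster"));
    }
  }
}
```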
This use case does not seem to be supported in the current implementation here. It seems scaling decisions are made independently for each cluster based on the metrics for that cluster.
Our plan is to fork the repo and make a few tweaks. In AutoscaleJob we look up the primary cluster for the instance (by fetching the app profile data) and use that cluster as the source of metrics for the scaling algorithms. This will work for us, but it probably couldn't be contributed upstream in its current form.
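Roughly, the lookup we have in mind is something like this sketch (using the Bigtable admin client; names are placeholders, and it assumes the instance has a single single-cluster-routing profile):

```java
import java.util.Optional;
import com.google.cloud.bigtable.admin.v2.BigtableInstanceAdminClient;
import com.google.cloud.bigtable.admin.v2.models.AppProfile;
import com.google.cloud.bigtable.admin.v2.models.AppProfile.SingleClusterRoutingPolicy;

/** Finds the cluster that currently receives live traffic for an instance. */
class PrimaryClusterResolver {
  private final BigtableInstanceAdminClient admin;

  PrimaryClusterResolver(BigtableInstanceAdminClient admin) {
    this.admin = admin;
  }

  /** Returns the cluster id targeted by the instance's single-cluster app profile, if any. */
  Optional<String> primaryClusterId(String instanceId) {
    return admin.listAppProfiles(instanceId).stream()
        .map(AppProfile::getPolicy)
        .filter(policy -> policy instanceof SingleClusterRoutingPolicy)
        .map(policy -> ((SingleClusterRoutingPolicy) policy).getClusterId())
        .findFirst();
  }
}
```

The scaling algorithms would then read CPU and storage metrics for that cluster instead of the cluster being resized.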
Is there interest in a feature like this being contributed? Have others expressed interest in similar functionality?
cc @rravi-sift
This is an interesting use case. To answer your question, I don't believe this functionality has been requested before (at least not exactly this).
Thinking about it, is this the best approach for you? I mean, do these switches happen very often?
If not, wouldn't it make sense to have a service that scales the secondary cluster up to the same number of nodes as the primary only when it's about to be used? If you did that, the autoscaler would be able to handle the traffic changes after the secondary takes over the traffic.
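Something like this rough sketch, run right before the failover (all names are made up):

```java
import com.google.cloud.bigtable.admin.v2.BigtableInstanceAdminClient;
import com.google.cloud.bigtable.admin.v2.models.Cluster;

/** One-shot pre-failover step: make the secondary cluster as large as the primary. */
public class MatchSecondaryToPrimary {
  public static void main(String[] args) throws Exception {
    String instanceId = "my-instance";
    String primaryId = "my-instance-primary";
    String secondaryId = "my-instance-secondary";

    try (BigtableInstanceAdminClient admin = BigtableInstanceAdminClient.create("my-project")) {
      Cluster primary = admin.getCluster(instanceId, primaryId);
      Cluster secondary = admin.getCluster(instanceId, secondaryId);

      if (secondary.getServeNodes() < primary.getServeNodes()) {
        // Resize, then allow time for tablets to rebalance onto the new nodes.
        admin.resizeCluster(instanceId, secondaryId, primary.getServeNodes());
      }
    }
  }
}
```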
Failing over to the secondary does not happen frequently, but when it does we need to be able to fail over very quickly. Adding a manual step to scale up the secondary cluster and then wait for tablets to be moved to the new nodes would delay our recovery. Likewise, just switching traffic to the secondary and waiting for autoscaling to kick in would mean very poor performance for an unacceptable length of time.
Suppose we put together a PR that adds a field to BigtableCluster for the app profile that identifies the primary cluster. If the field is set, the scaling logic would resolve the primary cluster from that app profile and use its metrics when scaling the cluster. Would you be interested in accepting that, or should we continue with our forked repo?
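Roughly the shape we have in mind (the field and lookup below are only a sketch of the proposal, nothing that exists in the project today):

```java
import java.util.Optional;
import java.util.function.Function;

/** Sketch of the proposed behaviour only; these names do not exist in the project today. */
class MetricsSourceSelection {

  /**
   * If the cluster is configured with a "primary app profile" (the proposed new
   * BigtableCluster field), scale it using the metrics of the cluster that profile
   * routes to; otherwise fall back to its own metrics, as the autoscaler does today.
   */
  static String metricsClusterId(
      String ownClusterId,
      Optional<String> primaryAppProfileId,
      Function<String, String> appProfileToClusterId) {
    return primaryAppProfileId.map(appProfileToClusterId).orElse(ownClusterId);
  }
}
```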