Re-run the stack after some changes #38

Open
brasrox opened this issue Nov 5, 2023 · 2 comments
brasrox commented Nov 5, 2023

I am using the stack and have deployed it. Now I have made some changes, such as deploying the ingress, but Terraform reports that the entire stack will be replaced, starting with the VCN.

How can I make my cluster stack less destructive?
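A plan that replaces everything starting with the VCN usually means an immutable VCN argument changed, or the module decided to create a new VCN instead of reusing the old one. A minimal sketch of how a module can switch between creating and reusing a VCN, assuming it wires the `create_new_vcn` / `existent_vcn_ocid` variables roughly like this (the stack's real module logic may differ, and `var.compartment_ocid` is an assumed variable name):

```hcl
# Sketch only: create the VCN conditionally, otherwise look up the existing one.
resource "oci_core_vcn" "oke_vcn" {
  count          = var.create_new_vcn ? 1 : 0
  compartment_id = var.compartment_ocid   # assumed variable name
  cidr_blocks    = [var.vcn_cidr_blocks]
}

data "oci_core_vcn" "existing" {
  count  = var.create_new_vcn ? 0 : 1
  vcn_id = var.existent_vcn_ocid
}

locals {
  # Downstream resources reference local.vcn_id instead of the resource
  # directly, so flipping create_new_vcn does not rewire every dependent.
  vcn_id = var.create_new_vcn ? oci_core_vcn.oke_vcn[0].id : data.oci_core_vcn.existing[0].id
}
```

If the module instead always declares the VCN resource, toggling `create_new_vcn` after the first apply will plan a destroy of the managed VCN, which then cascades to everything inside it.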


brasrox commented Nov 5, 2023

[screenshot: terraform plan output showing the VCN forced to be recreated]

It forces recreation of the VCN and destroys all other resources.

My tfvars:


# App name suffix
app_name="Homolog"

##################################################################################################
tag_values={
	"freeformTags" = {
	    "Environment" = "Homolog-QA",
	    "DeploymentType" = "App-Cluster-Terraform"
	},
	"definedTags" = {}
}

vcn_cidr_blocks="10.5.0.0/16"
#create_new_vcn=false #tested false and true, but it always recreates it
#existent_vcn_ocid="ocid1.vcn.oc1.xxxxxx" #same here

# Cluster config
cluster_workers_visibility="Private"
cluster_endpoint_visibility="Public"

# BASIC_CLUSTER or ENHANCED_CLUSTER
cluster_type="ENHANCED_CLUSTER"

# Pool of nodes
node_pool_autoscaler_enabled_1=false
node_pool_initial_num_worker_nodes_1=3
node_pool_max_num_worker_nodes_1=5

# Cluster Tools
cert_manager_enabled=false
metrics_server_enabled=false
prometheus_enabled=false
grafana_enabled=false

# Ingress confs
ingress_nginx_enabled=false
ingress_tls=true
ingress_cluster_issuer="letsencrypt-prod"
ingress_email_issuer="[email protected]"


# Extra configs
#extra_route_tables
#extra_security_lists

brokedba commented

Something similar happened to me because the worker-node image IDs no longer matched the data source after a few months (the image filter was "last"). You can use a lifecycle block under your VCN / node-pool resource. See my fork for an example:

resource "oci_containerengine_node_pool" "oke_node_pool" {
  cluster_id     = var.oke_cluster_ocid
  compartment_id = var.oke_cluster_compartment_ocid
  # ... snippet trimmed ...

  lifecycle {
    ignore_changes = [
      node_config_details.0.size,
      node_source_details.0.image_id, # ignore changes to image_id
    ]
  }
}
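If the goal is specifically to make the VCN harder to destroy, Terraform's built-in `prevent_destroy` lifecycle flag can also be added. A sketch, assuming the resource address matches the one in the actual stack and `var.compartment_ocid` is an assumed variable name:

```hcl
resource "oci_core_vcn" "oke_vcn" {
  compartment_id = var.compartment_ocid   # assumed variable name
  cidr_blocks    = [var.vcn_cidr_blocks]

  lifecycle {
    # Any plan that would delete or replace this VCN fails with an error
    # instead of silently cascading through the whole stack.
    prevent_destroy = true
  }
}
```

Note that `prevent_destroy` turns a destructive plan into a hard error; it does not remove the underlying diff that triggered the replacement, which still has to be found and fixed (e.g. with `ignore_changes` as above).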

HTH
Clouddude
