Deploy and maintain Quilt stacks with Terraform using this comprehensive Infrastructure as Code (IaC) repository.
📚 Complete Documentation: This README covers common configuration scenarios and deployment workflows. For a complete reference of all Terraform variables with types, defaults, and validation rules, see VARIABLES.md. For comprehensive deployment examples covering multiple scenarios, see EXAMPLES.md. For operational procedures and maintenance guides, see OPERATIONS.md.
- Cloud Team Operations Guide
- Prerequisites
- Quick Start
- Rightsize Your Search Domain
- Database Configuration
- Network Configuration
- CloudFormation Parameters
- Complete Variable Reference
- Deployment Examples
- Troubleshooting
- Terraform Commands Reference
This section provides step-by-step instructions specifically for cloud teams to ensure simple installation and maintenance of the Quilt platform.
Step 1.1: Install Required Tools
# Install Terraform (if not already installed)
# macOS
brew install terraform
# Linux
wget https://releases.hashicorp.com/terraform/1.6.0/terraform_1.6.0_linux_amd64.zip
unzip terraform_1.6.0_linux_amd64.zip
sudo mv terraform /usr/local/bin/
# Verify installation
terraform --version # Should show >= 1.5.0
Step 1.2: Configure AWS CLI
# Install AWS CLI (if not already installed)
# macOS
brew install awscli
# Configure AWS credentials
aws configure
# Enter: Access Key ID, Secret Access Key, Region, Output format (json)
# Verify access
aws sts get-caller-identity
Step 1.3: Set Up Terraform State Backend
# Create S3 bucket for Terraform state (one-time setup)
aws s3 mb s3://YOUR-COMPANY-terraform-state --region YOUR-AWS-REGION
# Enable versioning
aws s3api put-bucket-versioning \
--bucket YOUR-COMPANY-terraform-state \
--versioning-configuration Status=Enabled
# Optional: Create DynamoDB table for state locking
aws dynamodb create-table \
--table-name terraform-locks \
--attribute-definitions AttributeName=LockID,AttributeType=S \
--key-schema AttributeName=LockID,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
--region YOUR-AWS-REGION
Step 2.1: Request SSL Certificate
# Request certificate in AWS Certificate Manager
aws acm request-certificate \
--domain-name "data.YOUR-COMPANY.com" \
--subject-alternative-names "*.data.YOUR-COMPANY.com" \
--validation-method DNS \
--region YOUR-AWS-REGION
# Note the CertificateArn from the output
Step 2.2: Validate Certificate
# Get validation records
aws acm describe-certificate --certificate-arn "arn:aws:acm:YOUR-AWS-REGION:YOUR-ACCOUNT-ID:certificate/YOUR-CERT-ID"
# Add the DNS validation records to your domain's DNS
# Wait for validation (usually 5-10 minutes)
Step 3.1: Create Project Directory
# Create project directory
mkdir quilt-production
cd quilt-production
# Initialize git repository
git init
Step 3.2: Download Template Files
# Download the example configuration
curl -o main.tf https://raw.githubusercontent.com/quiltdata/iac/main/examples/main.tf
# Create variables file for sensitive data
cat > terraform.tfvars << 'EOF'
# Add your sensitive variables here
google_client_secret = "your-google-oauth-secret"
okta_client_secret = "your-okta-oauth-secret"
EOF
# Create .gitignore
cat > .gitignore << 'EOF'
.terraform/
*.tfplan
*.tfstate
*.tfstate.backup
terraform.tfvars
.terraform.lock.hcl
EOF
Step 3.3: Obtain CloudFormation Template
Contact your Quilt account manager to obtain the CloudFormation template file and save it as quilt-template.yml in your project directory.
Edit main.tf with your specific values:
# Open main.tf in your preferred editor
vim main.tf # or code main.tf, nano main.tf, etc.
Required changes:
- AWS Account ID: Replace "YOUR-ACCOUNT-ID" with your AWS account ID
- AWS Region: Replace "YOUR-AWS-REGION" with your preferred AWS region
- S3 Backend: Replace "YOUR-TERRAFORM-STATE-BUCKET" with your bucket name
- Stack Name: Update local.name (≤20 chars, lowercase + hyphens)
- Domain: Replace "YOUR-COMPANY" in local.quilt_web_host with your domain
- Certificate ARN: Replace "YOUR-CERT-ID" with your SSL certificate ID
- Route53 Zone: Replace "YOUR-ROUTE53-ZONE-ID" with your hosted zone ID
- All other placeholders: Replace any remaining YOUR-* values with actual values
⚠️ WARNING: Do NOT run terraform apply with placeholder values. This will cause deployment failures and may create resources with incorrect configurations.
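Rather than hard-coding every placeholder, several of these values can be looked up with Terraform data sources. A minimal sketch, assuming the certificate and hosted zone from Step 2 already exist (the names below are illustrative):
data "aws_caller_identity" "current" {} # exposes your AWS account ID

data "aws_acm_certificate" "quilt" {
  domain   = "data.YOUR-COMPANY.com" # certificate requested in Step 2.1
  statuses = ["ISSUED"]
}

data "aws_route53_zone" "primary" {
  name = "YOUR-COMPANY.com." # hosted zone used for validation and CNAMEs
}
data.aws_caller_identity.current.account_id, data.aws_acm_certificate.quilt.arn, and data.aws_route53_zone.primary.zone_id can then stand in for the corresponding hard-coded placeholders.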
Choose ElasticSearch sizing based on your data volume:
- Small (< 100GB): Use commented "Small" configuration
- Medium (100GB-1TB): Use default configuration (already uncommented)
- Large (1TB-5TB): Uncomment "Large" configuration
- Enterprise (5TB+): Uncomment "X-Large" or larger configuration
# Initialize Terraform
terraform init
# Validate configuration
terraform validate
# Format code
terraform fmt
# Create execution plan
terraform plan -out=tfplan
# Review the plan carefully - ensure no unexpected resource deletions
# Apply the configuration
terraform apply tfplan
# Deployment typically takes 20-30 minutes
# Monitor progress in AWS Console if needed
# Get outputs
terraform output admin_password # Save this password securely
terraform output quilt_url # Your Quilt catalog URL
# Test access
curl -I https://data.YOUR-COMPANY.com # Should return 200 OK
- Access Quilt Catalog: Navigate to your Quilt URL
- Login: Use admin email and the password from terraform output
- Change Password: Immediately change the default admin password
- Configure Users: Set up additional users and permissions as needed
Health Checks (5 minutes daily)
# Check infrastructure status
terraform refresh
terraform plan # Should show "No changes"
# Check application health
curl -f https://data.YOUR-COMPANY.com/health || echo "Health check failed"
# Check ElasticSearch cluster health
aws es describe-elasticsearch-domain --domain-name your-stack-name
Backup Verification (10 minutes weekly)
# Verify RDS automated backups
aws rds describe-db-snapshots --db-instance-identifier your-stack-name
# Check ElasticSearch snapshots (if configured)
aws es describe-elasticsearch-domain --domain-name your-stack-name
Security Updates (15 minutes weekly)
# Check for Terraform module updates
# Visit: https://github.com/quiltdata/iac/releases
# Update to latest stable version if available
# Edit main.tf and update the ref= parameter
# Example: ref=1.3.0 -> ref=1.4.0
Capacity Planning (20 minutes monthly)
# Check ElasticSearch storage usage
aws cloudwatch get-metric-statistics \
--namespace AWS/ES \
--metric-name StorageUtilization \
--dimensions Name=DomainName,Value=YOUR-STACK-NAME Name=ClientId,Value=YOUR-ACCOUNT-ID \
--start-time $(date -u -d '30 days ago' +%Y-%m-%dT%H:%M:%S) \
--end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
--period 86400 \
--statistics Average
# Check RDS storage usage
aws cloudwatch get-metric-statistics \
--namespace AWS/RDS \
--metric-name DatabaseConnections \
--dimensions Name=DBInstanceIdentifier,Value=your-stack-name \
--start-time $(date -u -d '30 days ago' +%Y-%m-%dT%H:%M:%S) \
--end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
--period 86400 \
--statistics Average
When to Scale: When storage utilization > 80%
Step 1: Plan the Scaling
# Current configuration check
terraform show | grep search_volume_size
# Calculate new size needed (current_size * 1.5 recommended)
# Example: 1024GB -> 1536GB
Step 2: Update Configuration
# Edit main.tf
vim main.tf
# Update search_volume_size value
# Example: search_volume_size = 1536
# Plan the change
terraform plan -out=tfplan
Step 3: Apply During Maintenance Window
# Schedule during low-usage period
# Scaling causes temporary performance impact
terraform apply tfplan
# Monitor the scaling process
aws es describe-elasticsearch-domain --domain-name your-stack-name
Vertical Scaling (Instance Size)
# Edit main.tf
# Update db_instance_class
# Example: db.t3.small -> db.t3.medium
terraform plan -out=tfplan
terraform apply tfplan # Causes brief downtime
Storage Scaling
# RDS storage scales automatically if enabled
# Check current storage
aws rds describe-db-instances --db-instance-identifier your-stack-name
Database Backup
# Create manual snapshot
aws rds create-db-snapshot \
--db-instance-identifier your-stack-name \
--db-snapshot-identifier your-stack-name-manual-$(date +%Y%m%d)
Configuration Backup
# Backup Terraform state
aws s3 cp s3://YOUR-TERRAFORM-STATE-BUCKET/quilt/terraform.tfstate \
./terraform.tfstate.backup.$(date +%Y%m%d)
# Backup configuration files
tar -czf quilt-config-backup-$(date +%Y%m%d).tar.gz *.tf *.yml
Database Recovery
# List available snapshots
aws rds describe-db-snapshots --db-instance-identifier your-stack-name
# Restore from snapshot (update main.tf)
# Add: db_snapshot_identifier = "snapshot-name"
# Then: terraform plan && terraform apply
ElasticSearch Monitoring
# Create storage utilization alarm
aws cloudwatch put-metric-alarm \
--alarm-name "Quilt-ES-Storage-High" \
--alarm-description "ElasticSearch storage utilization > 80%" \
--metric-name StorageUtilization \
--namespace AWS/ES \
--statistic Average \
--period 300 \
--threshold 80 \
--comparison-operator GreaterThanThreshold \
--dimensions Name=DomainName,Value=YOUR-STACK-NAME Name=ClientId,Value=YOUR-ACCOUNT-ID \
--evaluation-periods 2 \
--alarm-actions arn:aws:sns:YOUR-AWS-REGION:YOUR-ACCOUNT-ID:quilt-alerts
RDS Monitoring
# Create CPU utilization alarm
aws cloudwatch put-metric-alarm \
--alarm-name "Quilt-RDS-CPU-High" \
--alarm-description "RDS CPU utilization > 80%" \
--metric-name CPUUtilization \
--namespace AWS/RDS \
--statistic Average \
--period 300 \
--threshold 80 \
--comparison-operator GreaterThanThreshold \
--dimensions Name=DBInstanceIdentifier,Value=your-stack-name \
--evaluation-periods 2 \
--alarm-actions arn:aws:sns:YOUR-AWS-REGION:YOUR-ACCOUNT-ID:quilt-alerts
Symptoms: Terraform apply fails with database parameter errors
Solution:
# Check current RDS version
aws rds describe-db-instances --db-instance-identifier your-stack-name
# If PostgreSQL < 11.22, upgrade manually first
aws rds modify-db-instance \
--db-instance-identifier your-stack-name \
--engine-version 11.22 \
--apply-immediately
# Wait for upgrade to complete, then retry Terraform
Symptoms: "ValidationException: A change/update is in progress"
Solution:
# Check domain status
aws es describe-elasticsearch-domain --domain-name your-stack-name
# Wait for current operation to complete (check Processing field)
# Then retry Terraform apply
Symptoms: Certificate remains in "Pending Validation" status
Solution:
# Check DNS validation records
aws acm describe-certificate --certificate-arn your-cert-arn
# Verify DNS records are correctly added to your domain
# Use DNS lookup tools to confirm propagation
dig _validation-record.data.YOUR-COMPANY.com CNAME
- Use IAM roles instead of access keys where possible
- Enable MFA for all administrative accounts
- Rotate credentials regularly (quarterly)
- Use least privilege principle for all permissions
- Use internal ALB for VPN-only access when possible
- Configure WAF with appropriate geofencing
- Enable VPC Flow Logs for network monitoring
- Use private subnets for all internal services
- Enable encryption at rest for all storage services
- Use SSL/TLS for all data in transit
- Configure CloudTrail for audit logging
- Enable GuardDuty for threat detection
# Check monthly costs by service
aws ce get-cost-and-usage \
--time-period Start=2023-11-01,End=2023-12-01 \
--granularity MONTHLY \
--metrics BlendedCost \
--group-by Type=DIMENSION,Key=SERVICE
# Identify optimization opportunities
# - Unused EBS volumes
# - Over-provisioned instances
# - Unnecessary data transfer
- Use Reserved Instances for production workloads
- Right-size instances based on actual usage
- Implement lifecycle policies for S3 storage
- Use Spot Instances for non-critical workloads where applicable
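As one illustration of the lifecycle-policy recommendation above, a hedged sketch; the bucket name, threshold, and storage tier are placeholders, not Quilt defaults:
resource "aws_s3_bucket_lifecycle_configuration" "quilt_data" {
  bucket = "YOUR-QUILT-BUCKET" # one of your Quilt data buckets

  rule {
    id     = "tier-older-objects"
    status = "Enabled"
    filter {} # apply to the whole bucket

    transition {
      days          = 90
      storage_class = "STANDARD_IA" # move objects untouched for 90 days to a cheaper tier
    }
  }
}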
- Level 1: Cloud team member (daily operations)
- Level 2: Senior cloud engineer (scaling, troubleshooting)
- Level 3: Cloud architect (design changes, major issues)
- Quilt Support: Contact your account manager for application issues
- AWS Support: Use your AWS support plan for infrastructure issues
- Community: GitHub issues for module-related problems
- Cloud Team Lead: [contact information]
- On-call Engineer: [contact information]
- Quilt Account Manager: [contact information]
📖 Additional Documentation: For comprehensive enterprise installation guidance, refer to the official documentation at docs.quilt.bio. This Terraform module complements the standard installation process with Infrastructure as Code automation.
- Terraform >= 1.5.0
- AWS CLI >= 2.0 configured with appropriate permissions
- Git for version control and configuration management
- jq (optional) for JSON processing in automation scripts
Quilt provides Terraform-compatible CloudFormation templates via email:
- Initial Installation: Template delivered in your installation welcome email from Quilt
- Platform Updates: Updated templates sent regularly via platform update emails
- Template Location: Save the template as quilt-template.yml in your project directory
- Version Management: Always use the latest template version for updates and security patches
- Template Validation: Verify template integrity before deployment
- AWS Account with administrative permissions or specific IAM policies (see AWS Permissions)
- AWS Region selection based on data residency and compliance requirements
- SSL Certificate in AWS Certificate Manager for HTTPS access
- Domain Name with DNS control for certificate validation and CNAME setup
- VPC Planning (if using existing VPC) with proper subnet architecture
For Internet-Facing Deployments:
- Public subnets in at least 2 Availability Zones for load balancer
- Private subnets in at least 2 Availability Zones for application services
- Isolated subnets in at least 2 Availability Zones for database and search
- Internet Gateway for public subnet access
- NAT Gateways for private subnet internet access
For Internal/VPN-Only Deployments:
- Private subnets in at least 2 Availability Zones for application services and load balancer
- Isolated subnets in at least 2 Availability Zones for database and search
- VPC Endpoints for AWS service access (S3, ECR, CloudWatch, etc.)
- VPN or Direct Connect for user access
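As an example of the VPC endpoints mentioned above, a gateway endpoint for S3 might look like the following sketch (the VPC and route table IDs are placeholders):
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = "vpc-0123456789abcdef0" # your existing VPC
  service_name      = "com.amazonaws.YOUR-AWS-REGION.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = ["rtb-0123456789abcdef0"] # route tables of the private subnets
}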
Security Groups:
- Application Load Balancer security group (port 443 from users)
- Application services security group (port 80 from ALB)
- Database security group (port 5432 from application)
- ElasticSearch security group (port 443 from application)
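To illustrate the first rule above (HTTPS from users to the load balancer), a minimal sketch; the VPC ID and CIDR range are placeholders:
resource "aws_security_group" "alb" {
  name        = "quilt-alb"
  description = "HTTPS from users to the Quilt load balancer"
  vpc_id      = "vpc-0123456789abcdef0" # placeholder VPC ID

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"] # placeholder: your users' address range
  }
}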
Minimum Requirements:
- Database: db.t3.small (2 vCPU, 2GB RAM) for development
- ElasticSearch: 1x m5.large.elasticsearch (2 vCPU, 8GB RAM, 512GB storage) for development
- Application: ECS Fargate tasks (0.5 vCPU, 1GB RAM per task)
Production Recommendations:
- Database: db.t3.medium or larger (2+ vCPU, 4+ GB RAM) with Multi-AZ
- ElasticSearch: 2x m5.xlarge.elasticsearch (4 vCPU, 16GB RAM, 1TB+ storage) with zone awareness
- Application: Multiple ECS Fargate tasks across availability zones
Storage Considerations:
- Database Storage: 100GB minimum, auto-scaling enabled
- ElasticSearch Storage: Size based on data volume (see Rightsize Your Search Domain)
- Application Logs: CloudWatch Logs with appropriate retention policies
The deploying user or role needs the following AWS permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:*",
"rds:*",
"es:*",
"ecs:*",
"elbv2:*",
"elasticloadbalancing:*",
"cloudformation:*",
"iam:CreateRole",
"iam:DeleteRole",
"iam:GetRole",
"iam:PassRole",
"iam:AttachRolePolicy",
"iam:DetachRolePolicy",
"iam:CreateInstanceProfile",
"iam:DeleteInstanceProfile",
"iam:AddRoleToInstanceProfile",
"iam:RemoveRoleFromInstanceProfile",
"s3:CreateBucket",
"s3:DeleteBucket",
"s3:GetBucketLocation",
"s3:GetBucketVersioning",
"s3:PutBucketVersioning",
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"cloudwatch:*",
"logs:*",
"route53:*",
"acm:*",
"secretsmanager:*",
"kms:*"
],
"Resource": "*"
}
]
}
Ensure the following AWS service-linked roles exist (created automatically if missing):
- AWSServiceRoleForElasticLoadBalancing
- AWSServiceRoleForECS
- AWSServiceRoleForRDS
- AWSServiceRoleForElasticsearch
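These roles are normally created on demand, but if you need to pre-create one, the AWS provider offers the aws_iam_service_linked_role resource; a sketch for the Elasticsearch role:
resource "aws_iam_service_linked_role" "es" {
  aws_service_name = "es.amazonaws.com" # service principal for the Elasticsearch service-linked role
}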
- VPC Flow Logs: Enable for network monitoring and security analysis
- Security Groups: Follow principle of least privilege
- NACLs: Additional layer of network security (optional)
- WAF: Web Application Firewall for additional protection (configured in CloudFormation)
- Encryption at Rest: Enabled for RDS, ElasticSearch, and S3
- Encryption in Transit: TLS 1.2+ for all communications
- Key Management: AWS KMS for encryption key management
- Backup Encryption: All backups encrypted with KMS
- IAM Roles: Use IAM roles instead of access keys where possible
- MFA: Multi-factor authentication for administrative access
- Audit Logging: CloudTrail enabled for all API calls
- Monitoring: CloudWatch and GuardDuty for security monitoring
- Choose AWS region based on data residency requirements
- Consider AWS Local Zones for specific geographic requirements
- Review AWS compliance certifications for your region
- SOC 2: AWS infrastructure is SOC 2 compliant
- GDPR: Configure data retention and deletion policies
- HIPAA: Use HIPAA-eligible AWS services if handling PHI
- FedRAMP: Use FedRAMP authorized regions if required
- CloudWatch Metrics: Infrastructure and application metrics
- CloudWatch Logs: Application and infrastructure logs
- CloudWatch Alarms: Proactive alerting for issues
- AWS X-Ray: Distributed tracing (optional)
- AWS Config: Configuration compliance monitoring
- AWS GuardDuty: Threat detection
- AWS Security Hub: Centralized security findings
- AWS Systems Manager: Patch management and compliance
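Several of these are account-level services that can be managed with Terraform outside the Quilt module. For example, a minimal GuardDuty sketch:
resource "aws_guardduty_detector" "this" {
  enable = true # turn on threat detection for this account and region
}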
Your project structure should look like this:
quilt_stack/
├── main.tf
├── variables.tf # Optional: for sensitive variables
├── terraform.tfvars # Optional: for configuration values
└── my-company.yml # Your CloudFormation template
Use examples/main.tf as a starting point for your main.tf.
It is neither necessary nor recommended to modify any module in this repository. All supported customization is possible with arguments to module.quilt.
Here's a minimal configuration:
provider "aws" {
region = "YOUR-AWS-REGION"
allowed_account_ids = ["YOUR-ACCOUNT-ID"]
default_tags {
tags = {
Environment = "production"
Project = "quilt"
}
}
}
terraform {
backend "s3" {
bucket = "YOUR-TERRAFORM-STATE-BUCKET"
key = "quilt/terraform.tfstate"
region = "YOUR-AWS-REGION"
}
}
locals {
name = "quilt-prod"
build_file_path = "./quilt-template.yml"
quilt_web_host = "quilt.yourcompany.com"
}
module "quilt" {
source = "github.com/quiltdata/iac//modules/quilt?ref=1.3.0"
name = local.name
template_file = local.build_file_path
internal = false
create_new_vpc = true
cidr = "10.0.0.0/16"
parameters = {
AdminEmail = "[email protected]"
CertificateArnELB = "arn:aws:acm:YOUR-AWS-REGION:YOUR-ACCOUNT-ID:certificate/YOUR-CERT-ID"
QuiltWebHost = local.quilt_web_host
PasswordAuth = "Enabled"
Qurator = "Enabled"
}
}
terraform init
terraform plan -out=tfplan
terraform apply tfplan
| Argument | internal = true (private ALB for VPN) | internal = false (internet-facing ALB) |
|---|---|---|
| intra_subnets | Isolated subnets (no NAT) for db & search | " |
| private_subnets | For Quilt services | " |
| public_subnets | n/a | For IGW, ALB |
| user_subnets | For ALB (when create_new_vpc = false) | n/a |
| user_security_group | For ALB access | n/a |
| api_endpoint | For API Gateway when create_new_vpc = false | n/a |
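To show how these arguments fit together, here is a hypothetical internal deployment into an existing VPC; every ID below is a placeholder, and the argument names follow the table above (see VARIABLES.md for the authoritative reference):
module "quilt" {
  source         = "github.com/quiltdata/iac//modules/quilt?ref=1.3.0"
  name           = "quilt-internal"
  template_file  = "./quilt-template.yml"
  internal       = true
  create_new_vpc = false

  vpc_id              = "vpc-0123456789abcdef0"
  intra_subnets       = ["subnet-aaa", "subnet-bbb"] # isolated: database & search
  private_subnets     = ["subnet-ccc", "subnet-ddd"] # Quilt services
  user_subnets        = ["subnet-eee", "subnet-fff"] # load balancer
  user_security_group = "sg-0123456789abcdef0"
  api_endpoint        = "vpce-0123456789abcdef0" # API Gateway VPC endpoint (see below)

  # parameters = { ... } # as in the minimal configuration above
}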
This endpoint must be reachable by your VPN clients.
resource "aws_vpc_endpoint" "api_gateway_endpoint" {
vpc_id = ""
service_name = "com.amazonaws.${var.region}.execute-api"
vpc_endpoint_type = "Interface"
subnet_ids = module.vpc.private_subnet_ids
security_group_ids = [""] # list of security group IDs allowing access from your VPN clients
private_dns_enabled = true
}
You may wish to set a specific AWS profile before executing terraform
commands.
export AWS_PROFILE=your-aws-profile
We discourage the use of provider.profile in team environments where profile names may differ across users and machines.
Your primary consideration is the total data node disk size. If you multiply your average document size (likely a function of the number of deep-indexed documents and your depth limit) by the total number of documents, that will give you the "Source data" figure below.
Each shallow-indexed document requires a constant number of bytes on the order of 1kB.
Follow AWS's documentation on Sizing Search Domains and note the following simplified formula:
Source data * (1 + number of replicas) * 1.45 = minimum storage requirement
For a production Quilt deployment the number of replicas will be 1, so multiplying "Source data" by 3 (2.9 rounded up) is a fair starting point. Be sure to account for growth in your Quilt buckets. "Live" resizing of existing domains is supported but requires time and may reduce quality of service during the blue/green update.
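For example, with roughly 500GB of source data and one replica, the formula gives 500GB * (1 + 1) * 1.45 ≈ 1,450GB of minimum storage; compare that figure against the total storage across all data nodes in the configurations below and leave headroom for growth.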
Below are known-good search sizes that you can set on the quilt module.
search_dedicated_master_enabled = false
search_zone_awareness_enabled = false
search_instance_count = 1
search_instance_type = "m5.large.elasticsearch"
search_volume_size = 512

search_dedicated_master_enabled = true
search_zone_awareness_enabled = true
search_instance_count = 2
search_instance_type = "m5.xlarge.elasticsearch"
search_volume_size = 1024

search_dedicated_master_enabled = true
search_zone_awareness_enabled = true
search_instance_count = 2
search_instance_type = "m5.xlarge.elasticsearch"
search_volume_size = 2*1024
search_volume_type = "gp3"search_dedicated_master_enabled = true
search_zone_awareness_enabled = true
search_instance_count = 2
search_instance_type = "m5.2xlarge.elasticsearch"
search_volume_size = 3*1024
search_volume_type = "gp3"
search_volume_iops = 16000

search_dedicated_master_enabled = true
search_zone_awareness_enabled = true
search_instance_count = 2
search_instance_type = "m5.4xlarge.elasticsearch"
search_volume_size = 6*1024
search_volume_type = "gp3"
search_volume_iops = 18750

search_dedicated_master_enabled = true
search_zone_awareness_enabled = true
search_instance_count = 2
search_instance_type = "m5.12xlarge.elasticsearch"
search_volume_size = 18*1024
search_volume_type = "gp3"
search_volume_iops = 40000
search_volume_throughput = 1187

search_dedicated_master_enabled = true
search_zone_awareness_enabled = true
search_instance_count = 4
search_instance_type = "m5.12xlarge.elasticsearch"
search_volume_size = 18*1024
search_volume_type = "gp3"
search_volume_iops = 40000
search_volume_throughput = 1187

As a rule, terraform apply is sufficient to both deploy and update Quilt.
Before calling apply read terraform plan carefully to ensure that it does
not inadvertently destroy and recreate the stack. The following modifications
are known to cause issues (see examples/main.tf for context).
- Modifying local.name.
- Modifying local.build_file_path.
- Modifying quilt.template_file.
And for older versions of Terraform and customers whose usage predates the present module:
- Modifying template_url (in older versions of Terraform).
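A routine update, by contrast, usually only bumps the ref in the module source; a sketch (the version numbers are illustrative):
module "quilt" {
  # previously: source = "github.com/quiltdata/iac//modules/quilt?ref=1.3.0"
  source = "github.com/quiltdata/iac//modules/quilt?ref=1.4.0"
  # all other arguments unchanged
}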
terraform init
If, for instance, you change the provider pinning you may need to -upgrade:
terraform init -upgrade
terraform fmt
terraform validate
terraform plan -out tfplan
If the plan is what you want:
terraform apply tfplan
Sensitive values must be named in order to display on the command line:
terraform output admin_password
terraform state list
Or, to show a specific entity:
terraform state show 'thing.from.list'
terraform refresh
terraform destroy
- Start with a clean commit of the previous apply in your Quilt Terraform folder (nothing uncommitted).
- In your main.tf file, do the following:
- Initialize.
- Plan.
- Verify the plan.
- Apply.
- Commit the appropriate files:
  - *.tf
  - terraform.lock.hcl
  - Your Quilt build_file
You may wish to create a .gitignore file similar to the following:
.terraform
tfplan
We recommend that you use remote state so that no passwords are checked into version control.
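For example, an S3 backend reusing the bucket and lock table from the Quick Start (names are placeholders), with state encryption enabled:
terraform {
  backend "s3" {
    bucket         = "YOUR-COMPANY-terraform-state"
    key            = "quilt/terraform.tfstate"
    region         = "YOUR-AWS-REGION"
    encrypt        = true              # encrypt state at rest
    dynamodb_table = "terraform-locks" # optional state locking
  }
}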
Due to how Terraform evaluates (or fails to evaluate) arguments in a precondition
(e.g. user_security_group = aws_security_group.lb_security_group.id) you may
see the following error message. Provide a static string instead of a dynamic value.
│ 27: condition = !local.configuration_error
│ ├────────────────
│ │ local.configuration_error is true
│
│ This check failed, but has an invalid error message as described in the other accompanying messages.
Provide a static string instead (e.g. user_security_group = "123") and you should
receive a more informative message similar to the following:
│ In order to use an existing VPC (create_new_vpc == false) correct the following attributes:
│ ❌ api_endpoint (required if var.internal == true, else must be null)
│ ✅ create_new_vpc == false
│ ✅ intra_subnets (required)
│ ✅ private_subnets (required)
│ ❌ public_subnets (required if var.internal == false, else must be null)
│ ✅ user_security_group (required)
│ ❌ user_subnets (required if var.internal == true and var.create_new_vpc == false, else must be null)
│ ✅ vpc_id (required)
InvalidParameterCombination: Cannot upgrade postgres from 11.X to 15.Y
Later versions of the current module set database auto_minor_version_upgrade = false.
As a result some users may find their Quilt RDS instance on Postgres 11.19.
These users should first upgrade to 11.22 using the AWS Console and then apply
a recent version of the present module, which will upgrade Postgres to 15.5.
Users who have auto-minor-version-upgraded to 11.22 can apply the present module to automatically upgrade to 15.5 (without any manual steps).
Engine version changes are applied during the next maintenance window, so you may not see them immediately in the AWS Console.
Error: updating Elasticsearch Domain (arn:aws:es:foo:bar/baz) config: ValidationException: A change/update is in progress. Please wait for it to complete before requesting another change.
If you encounter the above error we suggest that you use the latest version of the
current repo which no longer uses an auto_tune_options configuration block in
the search module. We further recommend that you only use
search instances that support Auto-Tune
as the AWS service may automatically enable Auto-Tune without cause and without warning,
leading to search domains that are difficult to upgrade.
Some users have overcome the above error by pinning the provider to 5.20.0 as shown below but this is not recommended given that 5.20.0 is an older version.
provider "aws" {
version = "= 5.20.0"
}