Update main.tf #144
base: master
Conversation
Bridgecrew has found errors in this PR ⬇️
Name = "${local.resource_prefix.value}-data" | ||
Environment = local.resource_prefix.value | ||
} | ||
} |
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
  bucket = aws_s3_bucket.data.bucket

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
Ensure data stored in the S3 bucket is securely encrypted at rest
Resource: aws_s3_bucket.data | ID: BC_AWS_S3_14
Description
SSE helps prevent unauthorized access to S3 buckets. Encrypting and decrypting data at the S3 bucket level is transparent to users when accessing data.
Benchmarks
- PCI-DSS V3.2.1 3.4
- PCI-DSS V3.2 3
- NIST-800-53 AC-17, SC-2
- CIS AWS V1.3 2.1.1
- FEDRAMP (MODERATE) SC-28
Name = "${local.resource_prefix.value}-data" | ||
Environment = local.resource_prefix.value | ||
} | ||
} |
  }
}

resource "aws_s3_bucket" "data_log_bucket" {
  bucket = "data-log-bucket"
}

resource "aws_s3_bucket_logging" "data" {
  bucket        = aws_s3_bucket.data.id
  target_bucket = aws_s3_bucket.data_log_bucket.id
  target_prefix = "log/"
}
Ensure AWS access logging is enabled on S3 buckets
Resource: aws_s3_bucket.data | ID: BC_AWS_S3_13
Description
Access logging provides detailed audit logging for all objects and folders in an S3 bucket.
Benchmarks
- HIPAA 164.312(B) Audit controls
@@ -15,4 +15,26 @@ resource "aws_kms_key" "c" {
}

resource "aws_s3_bucket" "data" {
Ensure S3 Bucket has public access blocks
Resource: aws_s3_bucket.data | ID: BC_AWS_NETWORKING_52
How to Fix
resource "aws_s3_bucket" "bucket_good_1" {
bucket = "bucket_good"
}
resource "aws_s3_bucket_public_access_block" "access_good_1" {
bucket = aws_s3_bucket.bucket_good_1.id
block_public_acls = true
block_public_policy = true
}
Description
When you create an S3 bucket, it is good practice to add the companion resource **aws_s3_bucket_public_access_block** so the bucket is never accidentally made public. We recommend you ensure the S3 bucket has public access blocks; if the public access block is not attached, it defaults to False.
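Applied to the bucket in this PR, a minimal sketch might look like the following, assuming the aws_s3_bucket.data resource from this diff. The two extra settings, ignore_public_acls and restrict_public_buckets, also default to false when omitted.
resource "aws_s3_bucket_public_access_block" "data" {
  # Assumes the aws_s3_bucket.data resource defined in this PR
  bucket = aws_s3_bucket.data.id

  # Enable all four protections so the bucket cannot be made public
  # through either ACLs or bucket policies
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}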
Name = "${local.resource_prefix.value}-data" | ||
Environment = local.resource_prefix.value | ||
} | ||
} |
  }
}

resource "aws_s3_bucket_versioning" "data" {
  bucket = aws_s3_bucket.data.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket" "destination" {
  # NOTE: the replica needs its own, distinct bucket name; this suffix is a placeholder
  bucket = "${local.resource_prefix.value}-data-replica"
}

resource "aws_s3_bucket_versioning" "destination" {
  bucket = aws_s3_bucket.destination.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_iam_role" "replication" {
  name               = "aws-iam-role"
  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
POLICY
}

resource "aws_s3_bucket_replication_configuration" "data" {
  # Versioning must be enabled on the source bucket before replication is configured
  depends_on = [aws_s3_bucket_versioning.data]

  role   = aws_iam_role.replication.arn
  bucket = aws_s3_bucket.data.id

  rule {
    id     = "foobar"
    status = "Enabled"

    destination {
      bucket        = aws_s3_bucket.destination.arn
      storage_class = "STANDARD"
    }
  }
}
Ensure S3 bucket has cross-region replication enabled
Resource: aws_s3_bucket.data | ID: BC_AWS_GENERAL_72
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication copies only new S3 objects created after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
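One gap worth noting in the suggestion above: the replication role only carries an assume-role policy, so S3 would still lack permission to read from the source and write to the replica. A minimal permissions sketch, assuming the aws_iam_role.replication, aws_s3_bucket.data, and aws_s3_bucket.destination names used above, with the action list taken from the standard AWS replication walkthrough:
resource "aws_iam_role_policy" "replication" {
  # Hypothetical policy attachment; names follow the suggestion above
  name = "replication"
  role = aws_iam_role.replication.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # Read the replication configuration and list the source bucket
        Effect   = "Allow"
        Action   = ["s3:GetReplicationConfiguration", "s3:ListBucket"]
        Resource = [aws_s3_bucket.data.arn]
      },
      {
        # Read object versions, ACLs, and tags from the source bucket
        Effect   = "Allow"
        Action   = ["s3:GetObjectVersionForReplication", "s3:GetObjectVersionAcl", "s3:GetObjectVersionTagging"]
        Resource = ["${aws_s3_bucket.data.arn}/*"]
      },
      {
        # Write replicated objects, deletes, and tags to the destination bucket
        Effect   = "Allow"
        Action   = ["s3:ReplicateObject", "s3:ReplicateDelete", "s3:ReplicateTags"]
        Resource = ["${aws_s3_bucket.destination.arn}/*"]
      }
    ]
  })
}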
main.tf (Outdated)
  # bucket does not have access logs
  # bucket does not have versioning
  bucket = "${local.resource_prefix.value}-data"
  acl    = "public-read"
acl = "public-read" | |
acl = "private" |
Ensure bucket ACL does not grant READ permission to everyone
Resource: aws_s3_bucket.data | ID: BC_AWS_S3_1
Description
Unprotected S3 buckets are one of the major causes of data theft and intrusions. An S3 bucket that allows **READ** access to everyone can give attackers the ability to read object data within the bucket, which can lead to the exposure of sensitive data. The only S3 buckets that should be globally accessible to unauthenticated users or to **Any AWS Authenticated Users** are those used for hosting static websites. The bucket ACL helps manage access to S3 bucket data. We recommend that AWS S3 buckets are not publicly accessible for READ actions, to protect S3 data from unauthorized users and avoid exposing sensitive data to public access.
Benchmarks
- NIST-800-53 AC-17
🎉 Fixed by commit 5252e0f - Bridgecrew bot fix for main.tf
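Note that on AWS provider v4 or later the inline acl argument on aws_s3_bucket is deprecated; a hedged sketch of the equivalent standalone resource, assuming the aws_s3_bucket.data name from this file:
resource "aws_s3_bucket_acl" "data" {
  # Assumes the aws_s3_bucket.data resource from this PR
  bucket = aws_s3_bucket.data.id
  acl    = "private"
}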
Name = "${local.resource_prefix.value}-data" | ||
Environment = local.resource_prefix.value | ||
} | ||
} |
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
  bucket = aws_s3_bucket.data.bucket

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}
Ensure S3 buckets are encrypted with KMS by default
Resource: aws_s3_bucket.data | ID: BC_AWS_GENERAL_56
Description
TBA
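The description is listed as TBA; one detail worth spelling out is that with sse_algorithm = "aws:kms" and no key specified, S3 falls back to the AWS-managed key. A hedged sketch that instead points at the customer managed aws_kms_key.c already visible in this file's diff context:
resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
  bucket = aws_s3_bucket.data.bucket

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
      # Assumes the aws_kms_key.c resource shown in the diff hunk above
      kms_master_key_id = aws_kms_key.c.arn
    }
  }
}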
    Name        = "${local.resource_prefix.value}-data"
    Environment = local.resource_prefix.value
  }
}
  }
}

resource "aws_s3_bucket_versioning" "data" {
  bucket = aws_s3_bucket.data.id
  versioning_configuration {
    status = "Enabled"
  }
}
Ensure AWS S3 object versioning is enabled
Resource: aws_s3_bucket.data | ID: BC_AWS_S3_16
Description
S3 versioning is a managed data backup and recovery service provided by AWS. When enabled, it allows users to retrieve and restore previous versions of objects in their buckets. S3 versioning can be used for data protection and retention scenarios, such as recovering objects that have been accidentally or intentionally deleted or overwritten.
Benchmarks
- PCI-DSS V3.2.1 10.5.3
- FEDRAMP (MODERATE) CP-10, SI-12
resource "aws_s3_bucket_object" "data_object" {
Ensure S3 bucket Object is encrypted by KMS using a customer managed Key (CMK)
Resource: aws_s3_bucket_object.data_object | ID: BC_AWS_GENERAL_106
How to Fix
resource "aws_s3_bucket_object" "object" {
bucket = "your_bucket_name"
key = "new_object_key"
source = "path/to/file"
+ kms_key_id = "ckv_kms"
# The filemd5() function is available in Terraform 0.11.12 and later
# For Terraform 0.11.11 and earlier, use the md5() function and the file() function:
# etag = "${md5(file("path/to/file"))}"
etag = filemd5("path/to/file")
}
No description provided.
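Applied to this repository, a hedged sketch of the fix might reference the customer managed key already defined in main.tf rather than the "ckv_kms" placeholder, assuming the object belongs in the data bucket and using the aws_s3_bucket_object.data_object and aws_kms_key.c names from the diff context; the key and source values below are hypothetical.
resource "aws_s3_bucket_object" "data_object" {
  bucket = aws_s3_bucket.data.id
  key    = "data-object"    # hypothetical key, not taken from the diff
  source = "path/to/file"   # hypothetical path, not taken from the diff

  # Encrypt the object with the customer managed key defined earlier in main.tf
  kms_key_id = aws_kms_key.c.arn
}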