Bakst feedback
tsmithv11 committed Nov 24, 2024
1 parent f5031ac commit 851fc30
Showing 8 changed files with 28 additions and 28 deletions.
@@ -25,9 +25,9 @@

=== Description

This policy is checking whether AWS Simple Queue Service (SQS) queues are using Customer Master Keys (CMK) instead of the default AWS-managed keys for encryption.
This policy detects whether AWS Simple Queue Service (SQS) queues are encrypted with default AWS-managed keys instead of customer-managed keys (CMKs).

The use of CMK over default keys is encouraged because CMKs allow for enhanced security and control. CMKs enable users to manage key policies, set usage permissions, and closely monitor access controls and key rotations. Using AWS-managed keys, on the other hand, places these controls in the hands of AWS, potentially broadening access and reducing oversight for the user. By ensuring SQS queues use CMK, organizations can enforce stricter access control and auditing, thus improving the security of data stored in and transmitted through SQS.
Using a customer-managed key (CMK) is recommended over default AWS-managed keys as CMKs provide enhanced security and greater control. With CMKs, users can define key policies, manage usage permissions, and monitor access controls and key rotations. By ensuring SQS queues use CMKs, organizations can enforce stricter access control and auditing, improving the security of data stored in and transmitted through SQS. In contrast, relying on AWS-managed keys shifts these controls to AWS, potentially broadening access and reducing user oversight.

=== Fix - Buildtime

@@ -36,9 +36,10 @@ The use of CMK over default keys is encouraged because CMKs allow for enhanced s
* *Resource:* aws_sqs_queue
* *Arguments:* kms_master_key_id

To ensure AWS SQS uses a Customer Managed Key (CMK) rather than the default AWS keys, you need to specify the `kms_master_key_id` in your `aws_sqs_queue` resource. This attribute should reference the ARN of the CMK you intend to use for encryption.
Specify the `kms_master_key_id` attribute in your `aws_sqs_queue` resource to ensure AWS SQS uses a Customer Managed Key (CMK) for encryption instead of the default AWS-managed keys. Set this attribute to the ARN of the CMK you intend to use.

This example shows how to modify an SQS queue resource in Terraform to use a CMK for encryption.

Here's how you can update the SQS queue resource in Terraform to use a CMK for encryption:

[source,go]
----
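# Illustrative sketch with assumed resource and queue names: the queue is
# encrypted with a customer-managed KMS key instead of the default AWS key.
resource "aws_kms_key" "sqs_cmk" {
  description         = "CMK for SQS queue encryption"
  enable_key_rotation = true
}

resource "aws_sqs_queue" "example" {
  name              = "example-queue"
  kms_master_key_id = aws_kms_key.sqs_cmk.arn
}
----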
@@ -27,7 +27,7 @@

The policy is checking whether AWS CloudFront web distributions have geographic restrictions enabled. This involves setting up geo restrictions to control access based on the geographic location of users attempting to access the content distributed by CloudFront.

The reason this is considered important is because enabling geographic restrictions allows for better control over where your content can be accessed from, which can help comply with legal and regulatory requirements specific to certain regions. It can also improve security by preventing access from regions that are not relevant to your business or where you know that malicious activity might originate. Overall, implementing geo restrictions can help protect your data and ensure compliance with regional laws and policies.
Enabling geographic restrictions is important for maintaining control over where your content is accessible. This helps ensure compliance with regional legal and regulatory requirements while enhancing security by blocking access from regions irrelevant to your business or associated with potential malicious activity. By implementing geo restrictions, you can better protect your data and align with regional laws and policies.

=== Fix - Buildtime

@@ -36,11 +36,11 @@ The reason this is considered important is because enabling geographic restricti
* *Resource:* aws_cloudfront_distribution
* *Arguments:* restrictions

Enable geo restriction for your AWS CloudFront distribution. Include a `restrictions` block inside the `aws_cloudfront_distribution` resource to configure geo restrictions by specifying which countries are allowed or denied.
Enable geo restriction for your AWS CloudFront distribution by including a `restrictions` block inside the `aws_cloudfront_distribution` resource. This block allows you to configure geo restrictions by specifying which countries are allowed or denied access.

Here is an example of how to enable geo restriction for an AWS CloudFront distribution using Terraform:
This example demonstrates how to enable geo restriction for an AWS CloudFront distribution using Terraform.

[source,hcl]
[source,go]
----
resource "aws_cloudfront_distribution" "example" {
  # ... other required distribution settings (origin, default_cache_behavior, viewer_certificate) ...
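  # Allow access only from the listed countries; the values here are assumptions.
  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["US", "CA", "GB"]
    }
  }
}
----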
@@ -25,9 +25,7 @@

=== Description

This policy is checking to ensure that an AWS S3 bucket does not have global view ACL (Access Control List) permissions enabled. The focus is on preventing the bucket from being publicly accessible by anyone on the internet, which could happen if global read permissions are allowed through its ACL settings.

Having global view permissions enabled on an S3 bucket means that anyone with the correct URL can view the contents of the bucket, potentially exposing sensitive data or files. This is a security risk because it can lead to unauthorized access or data breaches, wherein malicious actors could exploit the publicly available data for nefarious purposes. Ensuring that S3 buckets do not have these permissions enabled is crucial for maintaining data privacy and security in cloud environments.
This policy detects whether AWS S3 buckets have global view ACL (Access Control List) permissions enabled. It aims to prevent buckets from being publicly accessible, which could occur if global read permissions are granted through its ACL settings.

=== Fix - Buildtime

@@ -36,9 +34,9 @@ Having global view permissions enabled on an S3 bucket means that anyone with th
* *Resource:* aws_s3_bucket_acl
* *Arguments:* access_control_policy

Ensure that your AWS S3 bucket does not have global view permissions by avoiding 'public-read', 'public-read-write', or 'authenticated-read' ACL settings. Properly restrict access by setting the ACL to 'private' or using more specific bucket policies and IAM roles.
Set your AWS S3 bucket ACL to `private` to avoid global view permissions. Do not use settings like `public-read`, `public-read-write`, or `authenticated-read`. Instead, use specific bucket policies and IAM roles to restrict access.

Here's how to update your Terraform configuration to ensure the S3 bucket does not have global view ACL permissions:
This example demonstrates how to modify your Terraform configuration to ensure the S3 bucket's ACL does not grant global view permissions.

[source,go]
----
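# Illustrative sketch with assumed bucket and resource names: the ACL is kept
# private rather than public-read, public-read-write, or authenticated-read.
resource "aws_s3_bucket" "example" {
  bucket = "example-bucket"
}

resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket.example.id
  acl    = "private"
}
----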
@@ -25,9 +25,9 @@

=== Description

The policy is checking for the use of TLS/SSL protocols in AWS Elastic Load Balancer (ELB) listeners. The purpose of this check is to ensure that data transmitted between clients and the load balancer is encrypted and secure. TLS/SSL (Transport Layer Security/Secure Sockets Layer) are cryptographic protocols designed to provide secure communication over a network by encrypting the data exchanged.
This policy detects whether AWS Elastic Load Balancer (ELB) listeners are configured to use TLS/SSL (Transport Layer Security/Secure Sockets Layer) protocols. These protocols safeguard communication by encrypting data exchanged over the network, ensuring that data transmitted between clients and the load balancer remains secure and protected from unauthorized access.

Without TLS/SSL, data transmitted over the network is susceptible to being intercepted, read, or tampered with by malicious actors. This can lead to data breaches, loss of sensitive information, and other security vulnerabilities. Therefore, it is important for ELB listeners to use TLS/SSL to protect the integrity and confidentiality of data in transit, ensuring secure communication channels for applications.
Without TLS/SSL, transmitted data is vulnerable to interception, tampering, or unauthorized access by malicious actors, potentially leading to data breaches and the loss of sensitive information. Enforcing TLS/SSL on ELB listeners is essential to protect the integrity and confidentiality of data in transit, ensuring secure communication channels for applications.

=== Fix - Buildtime

@@ -36,9 +36,9 @@ Without TLS/SSL, data transmitted over the network is susceptible to being inter
* *Resource:* aws_elb
* *Arguments:* instance_protocol

Ensure the AWS Elastic Load Balancer listener uses TLS/SSL by specifying the `instance_protocol` as `HTTPS` or `SSL` in your `aws_elb` resource configuration.
Specify the `instance_protocol` as `HTTPS` or `SSL` in your `aws_elb` resource configuration to ensure the AWS Elastic Load Balancer listener uses TLS/SSL.

To fix this issue, you should update your Terraform configuration to use `HTTPS` or `SSL` as the protocol for the load balancer listener. This will ensure that traffic between clients and the load balancer is encrypted.
In this example, the Terraform configuration is modified to set HTTPS or SSL as the protocol for the load balancer listener. This ensures that traffic between clients and the load balancer is encrypted.

[source,go]
----
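# Illustrative sketch with assumed names and a placeholder certificate ARN:
# the listener uses HTTPS so traffic to the load balancer is encrypted.
resource "aws_elb" "example" {
  name               = "example-elb"
  availability_zones = ["us-east-1a"]

  listener {
    instance_port      = 443
    instance_protocol  = "https"
    lb_port            = 443
    lb_protocol        = "https"
    ssl_certificate_id = "arn:aws:acm:us-east-1:123456789012:certificate/example"
  }
}
----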
@@ -24,7 +24,7 @@

=== Description

This policy is ensuring that Route 53 domains have transfer lock protection enabled. The transfer lock feature is crucial because it prevents unauthorized domain transfers to another registrar. Whenever a domain is locked, it cannot be transferred without explicit permission from the domain owner, safeguarding against accidental or malicious domain hijacking. Without this protection, a domain could be transferred away without the knowledge or approval of the owner, leading to potential service disruptions, loss of business, and security issues, especially if the domain is critical for business operations or brand presence.
This policy detects whether Route 53 domains have transfer lock protection disabled. The transfer lock feature is important because it prevents unauthorized domain transfers to another registrar. When a domain is locked, it cannot be transferred without explicit permission from the domain owner, protecting against accidental or malicious domain hijacking. Without this protection, a domain could be transferred without the owner’s knowledge or approval, leading to potential service disruptions, loss of business, and security risks, especially for domains critical to business operations or brand presence.

=== Fix - Buildtime

@@ -33,9 +33,9 @@ This policy is ensuring that Route 53 domains have transfer lock protection enab
* *Resource:* aws_route53domains_registered_domain
* *Arguments:* transfer_lock

Ensure that your Route 53 domains have transfer lock protection enabled. The domain transfer lock is a security feature that prevents unauthorized domain transfers. For each `aws_route53domains_registered_domain` resource, set the `transfer_lock` attribute to `true`.
Set the `transfer_lock` attribute to `true` for each `aws_route53domains_registered_domain` resource to ensure that your Route 53 domains have transfer lock protection enabled. This security feature prevents unauthorized domain transfers.

In this example, the transfer lock protection for an AWS Route 53 domain is enabled using Terraform templates.
In this example, transfer lock protection for an AWS Route 53 domain is enabled using a Terraform configuration.

[source,go]
----
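# Illustrative sketch with an assumed domain name: the registrar transfer lock
# is enabled so the domain cannot be moved without the owner's approval.
resource "aws_route53domains_registered_domain" "example" {
  domain_name   = "example.com"
  transfer_lock = true
}
----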
@@ -25,7 +25,7 @@

=== Description

The policy is checking to ensure that an AWS S3 bucket is configured to enforce secure data transport, specifically by requiring the use of HTTPS for data transfer. This is important because transmitting data over HTTPS ensures that the data is encrypted during transit. This encryption helps protect the data from interception or eavesdropping by unauthorized parties during transmission between clients and the S3 bucket. By requiring secure data transport, the risk of cyberattacks, such as man-in-the-middle attacks, is reduced, thereby enhancing the overall security posture of the cloud environment.
This policy detects whether AWS S3 buckets are configured to enforce secure data transport by requiring HTTPS for all data transfers. Transmitting data over HTTPS ensures it is encrypted in transit, protecting it from interception or eavesdropping by unauthorized parties and reducing the risk of attacks such as man-in-the-middle.

=== Fix - Buildtime

@@ -34,8 +34,9 @@ The policy is checking to ensure that an AWS S3 bucket is configured to enforce
* *Resource:* aws_s3_bucket_acl
* *Arguments:* aws_s3_bucket_public_access_block, access_control_policy

To ensure secure data transport, configure your AWS S3 bucket to either be public, block public access or else explicitly enforce `aws:SecureTransport = true`.
To ensure secure data transport, configure your AWS S3 bucket to block public access or explicitly enforce `aws:SecureTransport = true` in your bucket policy. This ensures that all data transfers to and from the bucket use HTTPS, providing encryption and protecting the data from unauthorized access during transit.

The following example demonstrates how to configure an AWS S3 bucket policy in Terraform to enforce secure data transport by requiring HTTPS for all data transfers to and from the bucket.

[source,go]
----
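# Illustrative sketch with assumed names: the bucket policy denies any request
# that does not arrive over HTTPS (aws:SecureTransport = false).
resource "aws_s3_bucket" "example" {
  bucket = "example-bucket"
}

resource "aws_s3_bucket_policy" "require_tls" {
  bucket = aws_s3_bucket.example.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyInsecureTransport"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:*"
      Resource = [
        aws_s3_bucket.example.arn,
        "${aws_s3_bucket.example.arn}/*",
      ]
      Condition = {
        Bool = { "aws:SecureTransport" = "false" }
      }
    }]
  })
}
----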
@@ -25,7 +25,7 @@

=== Description

This policy is checking to ensure that an AWS Transfer Server is configured to use the latest security policy, as defined as no older than 24 months. In the context of AWS Transfer, security policies dictate the encryption algorithms and protocols that are used during file transfers. Using outdated security policies can be detrimental because they may include deprecated or weaker encryption methods, which can be more susceptible to security vulnerabilities and attacks. By ensuring that the latest security policy is used, it helps to maintain strong encryption standards, enhance data protection, and comply with best practices for secure communications. This can protect sensitive data being transferred to and from the server from unauthorized access and potential breaches.
This policy detects whether an AWS Transfer Server is not configured to use the latest security policy, defined as one that is no older than 24 months. Security policies in AWS Transfer specify the encryption algorithms and protocols used during file transfers. Using outdated policies can be risky, as they may rely on deprecated or weaker encryption methods vulnerable to security threats. By ensuring that the latest security policy is applied, organizations can maintain strong encryption standards, enhance data protection, and adhere to best practices for secure communication, safeguarding sensitive data from unauthorized access and potential breaches.

=== Fix - Buildtime

@@ -34,9 +34,9 @@ This policy is checking to ensure that an AWS Transfer Server is configured to u
* *Resource:* aws_transfer_server
* *Arguments:* security_policy_name

Ensure your AWS Transfer Server uses the latest security policy to secure data transfers. Associate each `aws_transfer_server` resource with the latest available `security_policy_name` to maintain high security standards.
Set your AWS Transfer Server to use the latest security policy to secure data transfers. Associate each `aws_transfer_server` resource with the most recent `security_policy_name` to maintain high security standards.

In this example, a security policy for an AWS Transfer Server is updated to the latest version using Terraform.
In this example, the security policy for an AWS Transfer Server is updated to the latest version using Terraform.

[source,go]
----
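# Illustrative sketch; the policy name below is an assumption and should be
# replaced with the most recent policy AWS publishes for Transfer Family.
resource "aws_transfer_server" "example" {
  security_policy_name = "TransferSecurityPolicy-2024-01"
}
----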
@@ -25,7 +25,7 @@

=== Description

This policy is checking for the use of local users within Azure Storage accounts. Local users can pose a security risk because they might have direct access to the storage account, which can potentially lead to unauthorized data access or compromise if those credentials are exposed or not managed securely. The best practice is to use Azure Active Directory (AD) identities or managed identities to handle access control. This is preferred because Azure AD provides advanced security features, centralized management, and stronger authentication methods, reducing the risk of unauthorized access compared to local users that rely solely on static credentials.
This policy detects whether local users are used within Azure Storage accounts. Local users can pose a security risk as they may have direct access to the storage account, which could lead to unauthorized data access or compromise if their credentials are exposed or not managed securely. The best practice is to use Azure Active Directory (AD) identities or managed identities for access control. Azure AD offers advanced security features, centralized management, and stronger authentication methods, reducing the risk of unauthorized access compared to local users, which rely solely on static credentials.

=== Fix - Buildtime

@@ -34,9 +34,9 @@ This policy is checking for the use of local users within Azure Storage accounts
* *Resource:* azurerm_storage_account
* *Arguments:* local_user_enabled

Avoid the use of local users for Azure Storage to enhance security and manageability. Instead, use Azure Active Directory or other centralized identity management solutions to control access to storage accounts. This reduces the dependency on local users, which can be less secure and harder to manage.
Use Azure Active Directory or other centralized identity management solutions to control access to Azure Storage accounts. Avoid using local users, as they rely on static credentials, which can be less secure and harder to manage.

In this example, `local_user_enabled` is switched from `true` to `false`.
In this example, the Azure Storage account configuration disables local users by switching the `local_user_enabled` setting from `true` to `false`, ensuring access is managed through centralized identity management.

[source,go]
----
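# Illustrative sketch with assumed account settings: local users are disabled
# so access is controlled through Azure AD instead of static credentials.
resource "azurerm_storage_account" "example" {
  name                     = "examplestorageacct"
  resource_group_name      = "example-rg"
  location                 = "eastus"
  account_tier             = "Standard"
  account_replication_type = "LRS"
  local_user_enabled       = false
}
----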
