The Five Terraform Misconfigurations That Fail an AWS Well-Architected Security Review
Five specific Terraform patterns that consistently fail the AWS Well-Architected Security pillar — with HCL before/after examples you can fix today.
Most linters tell you what is wrong. Checkov, tfsec, and Trivy are excellent at flagging misconfigured resources. What they don’t tell you is why the misconfiguration matters to your overall architecture, or how it maps to the questions an AWS Well-Architected Security review would actually ask.
The AWS Well-Architected Framework (Security Pillar) isn’t a linter ruleset — it’s a structured set of questions that evaluate your architecture’s design decisions. The five patterns below are the most common failures we see in real-world Terraform codebases when we run them through a Well-Architected Security lens.
Every example below includes the specific WAF question it violates, a BEFORE/AFTER HCL snippet, and the one-line architectural reason it matters.
IMDSv1 is not disabled on EC2 instances
WAF Question
“How do you protect credentials and secrets? — SEC 02”
Instance Metadata Service v1 (IMDSv1) allows any process on an EC2 instance to call http://169.254.169.254/latest/meta-data/ without a session token. If your application has a Server-Side Request Forgery (SSRF) vulnerability, an attacker can reach that URL from the application's network context and harvest temporary IAM credentials — exactly what happened in the Capital One breach. IMDSv2 requires a PUT request to obtain a session token first, which SSRF cannot replicate.
Before:

```hcl
# main.tf — EC2 instance without IMDSv2 enforcement
resource "aws_instance" "api_server" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.medium"

  # No metadata_options block = IMDSv1 is active by default
}
```

After:

```hcl
# main.tf — IMDSv2 enforced
resource "aws_instance" "api_server" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.medium"

  metadata_options {
    http_tokens                 = "required" # enforces IMDSv2
    http_put_response_hop_limit = 1          # blocks container escape to IMDS
    http_endpoint               = "enabled"
  }
}
```

Also check `aws_launch_template`: the same `metadata_options` block applies there.
The fix: Set `http_tokens = "required"` and `http_put_response_hop_limit = 1` on every `aws_instance` and `aws_launch_template` resource.
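For instances managed through Auto Scaling groups, the same enforcement belongs on the launch template. A minimal sketch, with illustrative resource and name values:

```hcl
# launch_template.tf — IMDSv2 enforced at the template level (names are illustrative)
resource "aws_launch_template" "api_server" {
  name_prefix   = "api-server-"
  image_id      = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.medium"

  metadata_options {
    http_tokens                 = "required" # enforces IMDSv2
    http_put_response_hop_limit = 1
    http_endpoint               = "enabled"
  }
}
```

Every instance the Auto Scaling group launches from this template then inherits the IMDSv2 requirement, so drift on individual instances is no longer possible.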
IAM policies use wildcards on sensitive actions
WAF Question
“How do you manage permissions for people and machines? — SEC 03”
Wildcard IAM policies — Action: 's3:*' or Resource: '*' — violate the principle of least privilege. A single compromised role with s3:* can read, write, and delete every bucket in your account. The WAF Security Pillar explicitly asks reviewers to verify that machine identities only have permissions required for their function, evaluated against the blast radius of a compromise.
Before:

```hcl
# iam.tf — overly permissive policy
resource "aws_iam_policy" "app_s3_policy" {
  name = "app-s3-access"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = "s3:*" # wildcard action
        Resource = "*"    # wildcard resource
      }
    ]
  })
}
```

After:

```hcl
# iam.tf — least-privilege policy
resource "aws_iam_policy" "app_s3_policy" {
  name = "app-s3-access"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:GetObject",
          "s3:PutObject"
        ]
        Resource = "arn:aws:s3:::${var.app_bucket_name}/*"
      }
    ]
  })
}
```

Use IAM Access Analyzer to generate least-privilege policies.
The fix: Replace wildcard actions with specific action lists and scope Resource to the ARN(s) the role actually needs to touch.
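Access Analyzer itself can be provisioned from Terraform; a minimal sketch, with an illustrative analyzer name (policy generation is then run from the console or CLI against recorded CloudTrail activity):

```hcl
# analyzer.tf — account-level IAM Access Analyzer (analyzer name is illustrative)
resource "aws_accessanalyzer_analyzer" "account" {
  analyzer_name = "account-analyzer"
  type          = "ACCOUNT" # flags resources shared outside the account
}
```

With the analyzer in place, findings for externally accessible resources surface continuously rather than only at review time.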
S3 buckets lack server-side encryption with a customer-managed KMS key
WAF Question
“How do you protect your data at rest? — SEC 08”
Default S3 SSE-S3 encryption is applied automatically but uses AWS-managed keys, giving you no control over key rotation, access auditing, or cross-account boundaries. The Well-Architected Security Pillar specifically calls for customer-managed KMS keys (SSE-KMS) for sensitive data so that key policy enforcement, CloudTrail key-usage logs, and key disablement are within your control — not AWS's. Without this, a data exfiltration incident may be entirely silent at the key layer.
Before:

```hcl
# storage.tf — bucket with default (SSE-S3) or no encryption config
resource "aws_s3_bucket" "customer_data" {
  bucket = "acme-customer-data-prod"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "customer_data" {
  bucket = aws_s3_bucket.customer_data.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256" # SSE-S3: AWS-managed keys
    }
  }
}
```

After:

```hcl
# storage.tf — bucket encrypted with a customer-managed KMS key
resource "aws_kms_key" "s3_key" {
  description             = "CMK for customer data S3 bucket"
  deletion_window_in_days = 30
  enable_key_rotation     = true
}

resource "aws_s3_bucket" "customer_data" {
  bucket = "acme-customer-data-prod"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "customer_data" {
  bucket = aws_s3_bucket.customer_data.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms" # SSE-KMS
      kms_master_key_id = aws_kms_key.s3_key.arn
    }
    bucket_key_enabled = true # reduces KMS API costs
  }
}
```

The fix: Add an `aws_s3_bucket_server_side_encryption_configuration` resource referencing a customer-managed `aws_kms_key` ARN on every sensitive-data bucket.
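To keep the CMK from being bypassed at upload time, a bucket policy can deny any `PutObject` that does not request SSE-KMS. A sketch reusing the bucket from the example above; the `Sid` and statement scope are illustrative:

```hcl
# storage.tf — deny uploads that bypass SSE-KMS (statement details are illustrative)
resource "aws_s3_bucket_policy" "require_kms" {
  bucket = aws_s3_bucket.customer_data.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "DenyNonKMSUploads"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:PutObject"
        Resource  = "${aws_s3_bucket.customer_data.arn}/*"
        Condition = {
          StringNotEquals = {
            "s3:x-amz-server-side-encryption" = "aws:kms"
          }
        }
      }
    ]
  })
}
```

Default encryption handles the happy path; the deny statement turns a quiet misconfiguration in a client SDK into a hard `AccessDenied`.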
CloudTrail is not enabled in all regions or not writing to a protected bucket
WAF Question
“How do you detect and investigate security events? — SEC 04”
CloudTrail is your primary source of truth for control-plane activity. If it is only enabled in your primary region, API calls made in any other region (where an attacker may deliberately operate) are invisible. If the Trail's S3 bucket lacks Object Lock or is writable by the same role that runs workloads, attackers can delete or tamper with their own trail. The WAF Security Pillar expects multi-region trail coverage and tamper-evident log storage as baseline posture.
Before:

```hcl
# cloudtrail.tf — single-region trail, no log protection
resource "aws_cloudtrail" "main" {
  name                          = "main-trail"
  s3_bucket_name                = aws_s3_bucket.cloudtrail_logs.id
  include_global_service_events = true
  is_multi_region_trail         = false # only captures primary region
  enable_log_file_validation    = false # logs can be tampered with silently
}
```

After:

```hcl
# cloudtrail.tf — multi-region, tamper-evident trail
resource "aws_cloudtrail" "main" {
  name                          = "main-trail"
  s3_bucket_name                = aws_s3_bucket.cloudtrail_logs.id
  include_global_service_events = true
  is_multi_region_trail         = true # captures all regions
  enable_log_file_validation    = true # SHA-256 digest per log file

  event_selector {
    read_write_type           = "All"
    include_management_events = true

    data_resource {
      type   = "AWS::S3::Object"
      values = ["arn:aws:s3:::${var.sensitive_bucket_name}/"]
    }
  }
}

# Protect the log bucket from deletion
resource "aws_s3_bucket_object_lock_configuration" "cloudtrail_lock" {
  bucket = aws_s3_bucket.cloudtrail_logs.id

  rule {
    default_retention {
      mode = "GOVERNANCE"
      days = 90
    }
  }
}
```

Note that `enable_log_file_validation` (integrity digests) is separate from Object Lock (deletion protection); you want both.
The fix: Set is_multi_region_trail = true and apply an S3 Object Lock configuration on the CloudTrail bucket with governance retention.
Security groups allow unrestricted ingress from 0.0.0.0/0 on management ports
WAF Question
“How do you protect your network resources? — SEC 05”
Security groups permitting SSH (port 22) or RDP (port 3389) from 0.0.0.0/0 expose management surfaces to the entire internet. Automated scanners find open port 22 in under four minutes after an EC2 instance becomes publicly reachable. The Well-Architected Security Pillar expects you to limit exposure by using private subnets and AWS Systems Manager Session Manager (no open ports required) or, at minimum, restricting CIDR ranges to known bastion or VPN endpoints.
Before:

```hcl
# networking.tf — security group open to the world
resource "aws_security_group" "app_server" {
  name   = "app-server-sg"
  vpc_id = var.vpc_id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # entire internet can reach SSH
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

After:

```hcl
# networking.tf — no inbound SSH; use SSM Session Manager instead
resource "aws_security_group" "app_server" {
  name   = "app-server-sg"
  vpc_id = var.vpc_id

  # No SSH ingress rule at all: SSM Session Manager requires no inbound port

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Grant the instance permission to use SSM
resource "aws_iam_role_policy_attachment" "ssm_core" {
  role       = aws_iam_role.app_instance_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}
```

As a bonus, SSM Session Manager also eliminates SSH key management.
The fix: Remove `0.0.0.0/0` ingress rules on ports 22 and 3389. Replace them with SSM Session Manager (no inbound rules needed at all) or restrict `cidr_blocks` to a bastion CIDR variable.
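Where SSM is not an option, the bastion-CIDR variant can be sketched as follows; the variable name and example CIDR are illustrative:

```hcl
# networking.tf — restrict SSH to a known bastion CIDR (variable name is illustrative)
variable "bastion_cidr" {
  description = "CIDR block of the bastion or VPN egress, e.g. 203.0.113.0/28"
  type        = string
}

resource "aws_security_group_rule" "ssh_from_bastion" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = [var.bastion_cidr]
  security_group_id = aws_security_group.app_server.id
}
```

Keeping the rule in a separate `aws_security_group_rule` resource makes it easy to delete later without touching the rest of the security group.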
WAF Security Pillar — Quick Checklist
Five checks. Run these before your next Well-Architected Review.
| # | Check | WAF | Severity |
|---|---|---|---|
| 1 | IMDSv2 enforced on all EC2 / launch templates | SEC 02 | HIGH |
| 2 | No wildcard IAM actions or resources | SEC 03 | HIGH |
| 3 | SSE-KMS with CMK on sensitive S3 buckets | SEC 08 | HIGH |
| 4 | Multi-region CloudTrail with log validation + Object Lock | SEC 04 | HIGH |
| 5 | No 0.0.0.0/0 on SSH/RDP; prefer SSM Session Manager | SEC 05 | MEDIUM |
See All Five in Your Own Terraform
ArchGuard runs a full AWS Well-Architected Security review against your Terraform and surfaces findings like these with remediation context — not just flag names.
Request a Free Review. No credit card. You share your Terraform; we return a structured findings report.