Security Pillar

The Five Terraform Misconfigurations That Fail an AWS Well-Architected Security Review

Five specific Terraform patterns that consistently fail the AWS Well-Architected Security pillar — with HCL before/after examples you can fix today.

Apr 21, 2026 · 9 min read · #terraform #aws #well-architected #iac

Most Terraform security posts focus on passing Checkov. This one asks a different question: which patterns fail an AWS Well-Architected Security review — and therefore carry real blast-radius risk even if your linter score is green?

Every example below includes the specific WAF question it violates, a BEFORE/AFTER HCL snippet, and the one-line architectural reason it matters.

Why passing Checkov isn’t the same as passing a Well-Architected review

Checkov, tfsec, and Trivy operate on a rule-based model: they check whether specific resource attributes match known-bad configurations. That’s valuable, fast, and should be part of every CI pipeline. But there’s a ceiling on what any linter can evaluate.

The AWS Well-Architected Security Pillar doesn’t ask “is this configuration valid?” — it asks “is this architecture sound for your workload?” That’s a different question. It requires understanding how services connect, what the blast radius of a failure looks like, and whether your design has the properties your SLA demands.

According to the Verizon 2025 Data Breach Investigations Report, 15% of breaches involve cloud misconfiguration. The five patterns below all pass default Checkov rulesets in isolation. All five are findings in an AWS Well-Architected Security review.

1. HIGH · SEC 02 — Protect credentials & secrets

IMDSv1 is not disabled on EC2 instances

WAF Question

How do you protect credentials and secrets? — SEC 02

Instance Metadata Service v1 (IMDSv1) allows any process on an EC2 instance to call http://169.254.169.254/latest/meta-data/ without a session token. If your application has a Server-Side Request Forgery (SSRF) vulnerability, an attacker can reach that URL from the application's network context and harvest temporary IAM credentials — exactly what happened in the Capital One breach. IMDSv2 requires a PUT request to obtain a session token first, which SSRF cannot replicate.

main.tf — ✗ Before

# main.tf — EC2 instance without IMDSv2 enforcement
resource "aws_instance" "api_server" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.medium"

  # No metadata_options block = IMDSv1 is active by default
}

main.tf — ✓ After

# main.tf — IMDSv2 enforced (Well-Architected compliant)
resource "aws_instance" "api_server" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.medium"

  metadata_options {
    http_tokens                 = "required"   # enforces IMDSv2
    http_put_response_hop_limit = 1            # blocks container escape to IMDS
    http_endpoint               = "enabled"
  }
}

Also check aws_launch_template

If you use Auto Scaling groups with launch templates, the metadata_options block must be set there too — aws_instance settings do not propagate automatically.
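The same enforcement on a launch template looks like this — a minimal sketch; the resource name, AMI, and instance type are illustrative:

```hcl
# launch_template.tf — IMDSv2 enforced for Auto Scaling instances (illustrative names)
resource "aws_launch_template" "api_server" {
  name_prefix   = "api-server-"
  image_id      = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.medium"

  metadata_options {
    http_tokens                 = "required"   # enforces IMDSv2
    http_put_response_hop_limit = 1
    http_endpoint               = "enabled"
  }
}
```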

The fix: Set http_tokens = "required" and http_put_response_hop_limit = 1 on every aws_instance and aws_launch_template resource.

2. HIGH · SEC 03 — Permissions management

IAM policies use wildcards on sensitive actions

WAF Question

How do you manage permissions for people and machines? — SEC 03

Wildcard IAM policies — Action: 's3:*' or Resource: '*' — violate the principle of least privilege. A single compromised role with s3:* can read, write, and delete every bucket in your account. The WAF Security Pillar explicitly asks reviewers to verify that machine identities only have permissions required for their function, evaluated against the blast radius of a compromise.

iam.tf — ✗ Before

# iam.tf — overly permissive policy
resource "aws_iam_policy" "app_s3_policy" {
  name = "app-s3-access"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = "s3:*"   # wildcard action
        Resource = "*"      # wildcard resource
      }
    ]
  })
}

iam.tf — ✓ After

# iam.tf — least-privilege policy
resource "aws_iam_policy" "app_s3_policy" {
  name = "app-s3-access"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:GetObject",
          "s3:PutObject"
        ]
        Resource = "arn:aws:s3:::${var.app_bucket_name}/*"   # scoped to the app bucket
      }
    ]
  })
}

Use IAM Access Analyzer to generate least-privilege policies

AWS IAM Access Analyzer Policy Generation observes actual CloudTrail activity and proposes a minimal policy. Run it against your staging role before hardening production.

The fix: Replace wildcard actions with specific action lists and scope Resource to the ARN(s) the role actually needs to touch.
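The same least-privilege policy can also be expressed with the aws_iam_policy_document data source, which catches structural mistakes at plan time instead of at apply time — a sketch, assuming the same var.app_bucket_name variable:

```hcl
# iam.tf — equivalent policy via a data source instead of hand-written jsonencode
data "aws_iam_policy_document" "app_s3" {
  statement {
    effect    = "Allow"
    actions   = ["s3:GetObject", "s3:PutObject"]
    resources = ["arn:aws:s3:::${var.app_bucket_name}/*"]
  }
}

resource "aws_iam_policy" "app_s3_policy" {
  name   = "app-s3-access"
  policy = data.aws_iam_policy_document.app_s3.json
}
```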

3. HIGH · SEC 08 — Protect data at rest

S3 buckets lack SSE-KMS with a customer-managed key or a public access block

WAF Question

How do you protect your data at rest? — SEC 08

Default S3 SSE-S3 encryption uses AWS-managed keys, giving you no control over key rotation, access auditing, or cross-account boundaries. The Well-Architected Security Pillar calls for customer-managed KMS keys (SSE-KMS) for sensitive data so that key policy enforcement, CloudTrail key-usage logs, and key disablement are within your control. A missing aws_s3_bucket_public_access_block is a separate but equally critical gap: without it, a permissive bucket policy or ACL change can silently re-expose the bucket.

storage.tf — ✗ Before

# storage.tf — bucket with default (SSE-S3) or no encryption config
resource "aws_s3_bucket" "customer_data" {
  bucket = "acme-customer-data-prod"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "customer_data" {
  bucket = aws_s3_bucket.customer_data.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"   # SSE-S3: AWS-managed keys
    }
  }
}

# No aws_s3_bucket_public_access_block — public access can be re-enabled

storage.tf — ✓ After

# storage.tf — SSE-KMS with customer-managed key + public access blocked
resource "aws_kms_key" "s3_key" {
  description             = "CMK for customer data S3 bucket"
  deletion_window_in_days = 30
  enable_key_rotation     = true
}

resource "aws_s3_bucket" "customer_data" {
  bucket = "acme-customer-data-prod"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "customer_data" {
  bucket = aws_s3_bucket.customer_data.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.s3_key.arn
    }
    bucket_key_enabled = true   # reduces KMS API costs
  }
}

resource "aws_s3_bucket_public_access_block" "customer_data" {
  bucket                  = aws_s3_bucket.customer_data.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

The fix: Add an aws_s3_bucket_server_side_encryption_configuration block with a customer-managed aws_kms_key ARN, and add aws_s3_bucket_public_access_block to every bucket holding sensitive data.

4. MEDIUM · SEC 05 — Network protection

Security groups allow unrestricted ingress from 0.0.0.0/0 on management ports

WAF Question

How do you protect your network resources? — SEC 05

Security groups permitting SSH (port 22) or RDP (port 3389) from 0.0.0.0/0 expose management surfaces to the entire internet. Automated scanners routinely find an open port 22 within minutes of an EC2 instance becoming publicly reachable. The Well-Architected Security Pillar expects you to limit exposure by using private subnets and AWS Systems Manager Session Manager (no open ports required) or, at minimum, restricting CIDR ranges to known bastion or VPN endpoints.

networking.tf — ✗ Before

# networking.tf — security group open to the world
resource "aws_security_group" "app_server" {
  name   = "app-server-sg"
  vpc_id = var.vpc_id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]   # entire internet can reach SSH
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

networking.tf — ✓ After

# networking.tf — no inbound SSH; use SSM Session Manager instead
resource "aws_security_group" "app_server" {
  name   = "app-server-sg"
  vpc_id = var.vpc_id

  # No SSH ingress rule — SSM Session Manager requires no inbound port

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Grant the instance permission to use SSM
resource "aws_iam_role_policy_attachment" "ssm_core" {
  role       = aws_iam_role.app_instance_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

SSM Session Manager also solves key management

With Session Manager, there are no SSH keypairs to rotate, no bastion hosts to patch, and every session is logged to CloudWatch/S3 automatically — which is itself a WAF Security Pillar win for SEC 04.

The fix: Remove 0.0.0.0/0 ingress rules on port 22/3389. Replace with SSM Session Manager (no inbound rules needed at all) or restrict cidr_blocks to a bastion CIDR variable.
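One caveat: instances in private subnets with no internet route need VPC interface endpoints to reach the SSM APIs. A sketch of the three endpoints Session Manager requires — var.region, var.vpc_id, var.private_subnet_ids, and the vpce security group are assumed names from your own configuration:

```hcl
# vpc_endpoints.tf — interface endpoints required by SSM Session Manager
# (var.* names and the vpce security group are illustrative)
resource "aws_vpc_endpoint" "ssm" {
  for_each            = toset(["ssm", "ssmmessages", "ec2messages"])
  vpc_id              = var.vpc_id
  service_name        = "com.amazonaws.${var.region}.${each.key}"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = var.private_subnet_ids
  security_group_ids  = [aws_security_group.vpce.id]
  private_dns_enabled = true   # lets the SSM agent use the default service hostnames
}
```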

5. HIGH · SEC 02 — Protect credentials & secrets

Secrets and passwords hardcoded in Terraform variables or tfvars files

WAF Question

How do you protect credentials and secrets? — SEC 02

Terraform variable defaults and tfvars files are checked into version control, and their values are persisted verbatim in terraform.tfstate. Anyone with repository read access — or access to the state backend S3 bucket — can retrieve the plaintext credential. The WAF Security Pillar calls for eliminating long-lived static credentials and storing secrets in a dedicated secrets management service (AWS Secrets Manager or SSM Parameter Store) with rotation enforced.

variables.tf — ✗ Before

# variables.tf — database password hardcoded as a default value
variable "db_password" {
  description = "Production database master password"
  default     = "Sup3rS3cr3tP@ssw0rd!"   # stored in git and in terraform.tfstate
}

resource "aws_db_instance" "main" {
  identifier     = "prod-db"
  engine         = "postgres"
  instance_class = "db.t3.medium"
  username       = "admin"
  password       = var.db_password   # plaintext in state file
}

variables.tf — ✓ After

# RDS manages the credential in Secrets Manager — Terraform never handles plaintext
resource "aws_db_instance" "main" {
  identifier     = "prod-db"
  engine         = "postgres"
  instance_class = "db.t3.medium"
  username       = "admin"

  # RDS generates a strong password and stores it in Secrets Manager automatically.
  # The plaintext never appears in Terraform state or version control.
  manage_master_user_password   = true
  master_user_secret_kms_key_id = aws_kms_key.rds_secret_key.arn
}

resource "aws_kms_key" "rds_secret_key" {
  description         = "CMK for RDS master user secret"
  enable_key_rotation = true
}

tfvars files are not a safe alternative

Passing secrets via terraform.tfvars or -var flags still writes them to terraform.tfstate in plaintext. The state backend must be encrypted (S3 + SSE-KMS), but the real fix is keeping secrets out of Terraform values entirely.
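Encrypting the state backend is worth doing regardless — a sketch of an S3 backend with SSE-KMS; the bucket name, key path, region, and KMS ARN are placeholders for your own values:

```hcl
# backend.tf — Terraform state encrypted at rest with a customer-managed key
terraform {
  backend "s3" {
    bucket     = "acme-terraform-state"                          # placeholder
    key        = "prod/terraform.tfstate"                        # placeholder
    region     = "us-east-1"                                     # placeholder
    encrypt    = true                                            # server-side encryption
    kms_key_id = "arn:aws:kms:us-east-1:111111111111:key/EXAMPLE" # placeholder ARN
  }
}
```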

The fix: Remove all credential defaults from Terraform variables. For RDS, enable manage_master_user_password to have RDS generate and rotate the credential via Secrets Manager. For application secrets, use aws_secretsmanager_secret and reference the ARN — never the value — in Terraform.
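For application secrets, the shape of that pattern is roughly this — Terraform manages the secret container and hands the ARN to consumers; the value itself is written out-of-band (resource and key names are illustrative):

```hcl
# secrets.tf — Terraform manages the secret container, never the value
resource "aws_secretsmanager_secret" "api_key" {
  name       = "prod/app/api-key"            # illustrative name
  kms_key_id = aws_kms_key.app_secrets.arn   # assumes a CMK defined elsewhere
}

# Consumers (ECS task definitions, Lambda env resolution, etc.) resolve the
# value at runtime from this ARN — the plaintext stays out of state and git.
output "api_key_secret_arn" {
  value = aws_secretsmanager_secret.api_key.arn
}
```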

What an AWS Well-Architected review catches beyond these five

A Well-Architected review examines patterns that span individual resources — things no linter can evaluate from a single resource block:

  • CloudTrail coverage across all regions with tamper-evident log storage (is_multi_region_trail = true, enable_log_file_validation = true, and S3 Object Lock on the log bucket) — a WAF Security SEC 04 finding that requires evaluating the trail, the S3 bucket, and the bucket policy together.
  • VPC network topology: whether workloads are correctly isolated between subnets, whether NAT gateways are deployed redundantly across AZs, and whether VPC Flow Logs are enabled for network forensics.
  • Cross-service IAM trust relationships: whether Lambda execution roles can assume roles in other accounts, or whether resource-based policies on S3 or SQS expose resources to unintended principals.
  • Blast radius analysis: if a given role or instance is compromised, what other resources become accessible? Linters check individual resources; architectural reviews map lateral movement paths.
  • Workload context: is this a dev environment being over-hardened, or a production database being under-protected relative to its SLA? Context changes which findings are Critical versus Informational.
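The CloudTrail item above spans at least three resources; the trail side alone looks roughly like this — a sketch with the log bucket, its policy, and Object Lock omitted, and the trail name illustrative:

```hcl
# cloudtrail.tf — multi-region, tamper-evident trail (log bucket/policy not shown)
resource "aws_cloudtrail" "org" {
  name                          = "org-trail"                  # illustrative
  s3_bucket_name                = aws_s3_bucket.trail_logs.id  # assumes bucket elsewhere
  is_multi_region_trail         = true   # cover every region, not just the home one
  enable_log_file_validation    = true   # tamper-evident digest files
  include_global_service_events = true   # capture IAM / STS events
}
```

A reviewer still has to check the bucket policy and Object Lock configuration alongside this — which is exactly the cross-resource evaluation a single-resource linter rule cannot do.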

These cross-resource patterns are why the ArchGuard methodology combines deterministic checks with AI-assisted architectural reasoning — and why the output is a structured findings report rather than a linter log.

WAF Security Pillar — Quick Checklist

Five checks. Run these before your next Well-Architected Review.

#  Check                                                           WAF     Severity
1  IMDSv2 enforced on all EC2 instances / launch templates         SEC 02  HIGH
2  No wildcard IAM actions or resources                            SEC 03  HIGH
3  SSE-KMS with CMK + public access block on sensitive S3 buckets  SEC 08  HIGH
4  No 0.0.0.0/0 on SSH/RDP; prefer SSM Session Manager             SEC 05  MEDIUM
5  No credentials or passwords in Terraform variables or tfvars    SEC 02  HIGH

Frequently asked questions

Does Checkov catch all five of these misconfigurations?

Partially. Checkov has rules for IMDSv2 (CKV_AWS_79), IAM wildcard actions (CKV_AWS_63), S3 encryption (CKV_AWS_19), and open security groups (CKV_AWS_24). Its core Terraform ruleset does not evaluate whether a variable default is actually a credential — that requires semantic understanding of what the value represents, which rule-based checks cannot provide. And for the patterns it does check, Checkov reports the WHAT without the architectural WHY or blast-radius context.

Which WAF Security pillar question covers hardcoded secrets?

SEC 02: "How do you protect your credentials and secrets?" The pillar requires that credentials are stored in a dedicated secrets management service (AWS Secrets Manager or SSM Parameter Store), never in configuration files, environment variables, or source control.

What's the difference between these five fixes and passing a full Well-Architected Review?

A full Well-Architected Security review examines the relationships between resources — cross-service IAM trust chains, network isolation between workloads, CloudTrail coverage, blast-radius analysis for each finding, and workload context (prod vs. dev SLA expectations). These five fixes address the most common individual-resource failures, but a review evaluates how those resources interact as an architecture.

How do I prioritise remediating all five today?

Start with the four HIGH severity items: IMDSv1 (fix in minutes with a metadata_options block), IAM wildcards (use IAM Access Analyzer to generate least-privilege policies), hardcoded secrets (enable manage_master_user_password on RDS; migrate app secrets to Secrets Manager), and S3 public access blocks (automated with aws_s3_bucket_public_access_block). The MEDIUM severity security group fix can follow once management access is routed through SSM.

Does AWS Secrets Manager add meaningful cost?

AWS Secrets Manager costs approximately $0.40 per secret per month, plus $0.05 per 10,000 API calls. For a typical production workload with 10–20 secrets, that is $4–8/month — negligible relative to the exposure cost of a plaintext credential in a state file.

AI-Assisted Review

See All Five in Your Own Terraform

ArchGuard runs a full AWS Well-Architected Security review against your Terraform and surfaces findings like these with remediation context — not just flag names.

Get Started

Upload your Terraform. Get a structured findings report.