GCP VM instances should not use project-wide SSH keys for access control.

Why is this an issue?

SSH keys stored in a project's metadata are automatically propagated to every VM instance in that project by default. Managing SSH keys at the project level bypasses fine-grained, per-VM access control, is error-prone, and widens the blast radius when a key is compromised. This rule raises an issue on google_compute_instance resources whose metadata does not set block-project-ssh-keys = true.

What is the potential impact?

If a project-level SSH key is compromised, an attacker may gain access to every VM instance in the project. Unlike OS Login, manual SSH key management does not enforce the principle of least privilege, and it is easy to forget to remove a key from project metadata when a user leaves, so their access silently persists.

How to fix it

Code examples

Noncompliant code example

resource "google_compute_instance" "example" { # Noncompliant: missing block-project-ssh-keys
  name         = "example"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  network_interface {
    network = "default"

    access_config {
    }
  }
}

Compliant solution

resource "google_compute_instance" "example" {
  name         = "example"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  metadata = {
    block-project-ssh-keys = true # reject SSH keys from project metadata
  }

  network_interface {
    network = "default"

    access_config {
    }
  }
}

Prefer OS Login over manual key management

While blocking project-wide SSH keys reduces the attack surface, using OS Login is the preferred approach. OS Login ties SSH access to IAM identities, enforces the principle of least privilege, and makes it straightforward to revoke access when a user leaves the project.
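As a sketch of this alternative, OS Login is enabled through the enable-oslogin instance metadata key (a documented GCP feature; the resource name below is illustrative):

```terraform
resource "google_compute_instance" "example" {
  name         = "example"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  metadata = {
    enable-oslogin = "TRUE" # delegate SSH access to IAM via OS Login
  }

  network_interface {
    network = "default"

    access_config {
    }
  }
}
```

With OS Login enabled, SSH access is granted through IAM roles such as roles/compute.osLogin, so removing a user's IAM binding immediately revokes their SSH access across the project.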
