Secrets Management
Policy In Code, Not In Hope
In the cloud, consistency beats good intentions: guardrails expressed in code, least privilege by default, and continuous control of configuration drift. For secrets management that means automation leads, so you keep your velocity without security depending on manual luck.
Why Secrets Management
The core of secrets management is risk reduction in practice: technical context informs which measures you pick, but implementation and verification are what count. The checklist at the end of this article shows where to start.
Hardcoded credentials have been among the leading causes of data breaches for years. Research by GitGuardian found more than 12 million secrets in public GitHub repositories in 2024 alone. The problem is not that developers don't know it's wrong; the problem is that it's so easy to do it wrong.
What is a secret?
| Type | Examples | Risk if leaked |
|---|---|---|
| API keys | AWS access keys, Stripe keys, Google API keys | Full service access, financial abuse |
| Database credentials | Connection strings, passwords | Data exfiltration, ransomware |
| Certificates & private keys | TLS certs, SSH private keys, signing keys | Man-in-the-middle, impersonation |
| Tokens | OAuth tokens, JWT signing secrets, PATs | Account takeover, lateral movement |
| Encryption keys | AES keys, KMS key material | Decryption of all protected data |
| Service accounts | GCP service account JSON, Azure SP credentials | Full cloud environment compromise |
Lifecycle of a secret
```text
Creation  →  Storage    →  Distribution  →  Usage    →  Rotation    →  Revocation
   │            │              │              │            │               │
   ▼            ▼              ▼              ▼            ▼               ▼
Strong      Encrypted      Encrypted      Minimal     Automatic      Immediate
random      at rest        in transit     scope       & frequent     upon compromise
```
Every step you skip is an attack vector. Most organizations do steps 1 and 2 reasonably well and ignore the rest.
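The "strong random" step is the easiest to get right, yet it is routinely botched with timestamps or non-cryptographic RNGs. A minimal sketch using Python's CSPRNG-backed `secrets` module (variable names are illustrative):

```python
import secrets
import string

# API-token style secret: URL-safe, 32 bytes (~256 bits) of entropy
token = secrets.token_urlsafe(32)

# Password over a constrained alphabet, e.g. for systems that
# reject special characters -- still drawn from a CSPRNG
alphabet = string.ascii_letters + string.digits
password = ''.join(secrets.choice(alphabet) for _ in range(32))
```

Never use the `random` module for this: it is a Mersenne Twister, and its output is predictable once an attacker has seen a handful of values.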
HashiCorp Vault
Vault is the de facto standard for centralized secrets management. Core concepts: seal/unseal (master key via Shamir's Secret Sharing, in production auto-unseal via cloud KMS), auth methods (AppRole, Kubernetes, AWS IAM, OIDC), secret engines (KV, database, transit, PKI).
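To illustrate the key-splitting idea behind seal/unseal, here is a toy XOR-based n-of-n split: all shares are required to reconstruct the key, and any subset short of that reveals nothing. This is a deliberate simplification; Vault's actual Shamir scheme is k-of-n, so any quorum of shares can reconstruct:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list[bytes]:
    """Split into n shares; the XOR of ALL n restores the secret.
    Each share on its own is indistinguishable from random bytes."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for share in shares:
        last = xor_bytes(last, share)
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    result = shares[0]
    for share in shares[1:]:
        result = xor_bytes(result, share)
    return result

root_key = secrets.token_bytes(32)
assert combine(split(root_key, 5)) == root_key
```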
KV Secrets Engine (v2)
```shell
# Store and retrieve a secret
vault kv put secret/myapp/database username="dbadmin" password="s3cur3-p@ss"
vault kv get secret/myapp/database
vault kv get -version=2 secret/myapp/database       # Specific version
vault kv rollback -version=1 secret/myapp/database  # Restore a previous version
```
Dynamic Secrets: Database Credentials On-Demand
Instead of long-lived credentials, Vault creates temporary credentials per request:
```shell
# Configure database secret engine
vault secrets enable database
vault write database/config/postgres \
    plugin_name=postgresql-database-plugin \
    connection_url="postgresql://{{username}}:{{password}}@db.internal:5432/prod" \
    allowed_roles="readonly" \
    username="vault_admin" \
    password="vault_admin_password"

# Role: credentials valid for 1 hour
vault write database/roles/readonly \
    db_name=postgres \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' \
        VALID UNTIL '{{expiration}}'; \
        GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
    default_ttl="1h" \
    max_ttl="24h"

# Retrieve dynamic credentials -- automatically revoked after 1 hour
vault read database/creds/readonly
```
Transit Engine & Policy
```shell
# Encryption as a Service
vault secrets enable transit
vault write -f transit/keys/payment-data
vault write transit/encrypt/payment-data \
    plaintext=$(echo -n "NL91ABNA0417164300" | base64)
```

```hcl
# policy: app-readonly.hcl
path "secret/data/myapp/*" {
  capabilities = ["read", "list"]
}
path "database/creds/readonly" {
  capabilities = ["read"]
}
path "transit/encrypt/payment-data" {
  capabilities = ["update"]
}
path "sys/*" {
  capabilities = ["deny"]
}
```
Python hvac Library
```python
import hvac

client = hvac.Client(url='https://vault.internal:8200')
client.auth.approle.login(
    role_id='db02de05-c0f8-4d4b-a7c3-xxx',
    secret_id='6a174c20-f6de-a53c-74d2-xxx',
)

# Retrieve KV secret
secret = client.secrets.kv.v2.read_secret_version(
    path='myapp/database', mount_point='secret'
)
db_password = secret['data']['data']['password']

# Dynamic database credentials
creds = client.secrets.database.generate_credentials(
    name='readonly', mount_point='database'
)
```
Cloud-Native Solutions
AWS Secrets Manager
```python
import boto3, json

client = boto3.client('secretsmanager', region_name='eu-west-1')
response = client.get_secret_value(SecretId='prod/myapp/database')
secret = json.loads(response['SecretString'])
```

```shell
# Enable automatic rotation (30 days)
aws secretsmanager rotate-secret \
    --secret-id prod/myapp/database \
    --rotation-lambda-arn arn:aws:lambda:eu-west-1:111111111111:function:SecretsRotation \
    --rotation-rules '{"AutomaticallyAfterDays": 30}'

# SSM Parameter Store: cheaper alternative
aws ssm put-parameter --name "/prod/myapp/db-password" \
    --value "s3cur3-p@ss" --type SecureString --key-id alias/myapp-key
```
Azure Key Vault
```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()  # Managed identity in Azure
client = SecretClient(
    vault_url="https://myapp-vault.vault.azure.net/",
    credential=credential,
)
secret = client.get_secret("database-password")
db_password = secret.value
```
GCP Secret Manager
```python
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()
name = "projects/my-project/secrets/database-password/versions/latest"
response = client.access_secret_version(request={"name": name})
db_password = response.payload.data.decode("UTF-8")
```
Comparison Table
| Feature | AWS Secrets Manager | Azure Key Vault | GCP Secret Manager | HashiCorp Vault |
|---|---|---|---|---|
| Automatic rotation | Yes (Lambda) | Yes (Event Grid) | Yes (Cloud Functions) | Yes (dynamic secrets) |
| Versioning | Yes | Yes | Yes | Yes (KV v2) |
| Audit logging | CloudTrail | Azure Monitor | Cloud Audit Logs | Audit device |
| Encryption | KMS | HSM-backed | Cloud KMS | Transit / auto-unseal |
| Dynamic secrets | No | No | No | Yes |
| Multi-cloud | No | No | No | Yes |
| Managed identity | IAM roles | Managed Identity | Workload Identity | AppRole / K8s auth |
| Cost | ~$0.40/secret/month | ~$0.03/operation | ~$0.06/secret/month | Open source / Enterprise |
Secrets in CI/CD
GitHub Actions Secrets
```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy
        env:
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}  # Automatically masked in logs
        run: ./deploy.sh
```
GitLab CI/CD Variables
Configure secrets as masked (hidden in logs), protected (only on protected branches), and with environment scope (only for specific environments).
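Masking in CI systems boils down to replacing known secret values in job output before it is written. A sketch of the idea (the `redact` helper is hypothetical, not a GitLab or GitHub API):

```python
def redact(text: str, secrets_to_mask: list[str]) -> str:
    """Replace every occurrence of a known secret value with [MASKED],
    mirroring what CI systems do with masked variables."""
    for value in secrets_to_mask:
        if value:  # never mask the empty string
            text = text.replace(value, "[MASKED]")
    return text

log_line = "connecting with password s3cur3-p@ss to db.internal"
print(redact(log_line, ["s3cur3-p@ss"]))
# connecting with password [MASKED] to db.internal
```

Note the limits of this approach: masking only catches exact matches, so a secret printed base64-encoded, URL-encoded, or split across lines sails straight through. The real fix is to never print secrets at all.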
OIDC Federation: No More Static Credentials
```yaml
# GitHub Actions → AWS without long-lived keys
name: Deploy to AWS
on:
  push:
    branches: [main]
permissions:
  id-token: write   # Required for OIDC
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials via OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::111111111111:role/GitHubActions-Deploy
          aws-region: eu-west-1
      # No AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY needed
      - name: Deploy
        run: aws ecs update-service --cluster prod --service myapp --force-new-deployment
```
Secrets in Containers
Kubernetes Secrets: Base64 is Not Encryption
```shell
# Anyone with kubectl get secret can decode this:
kubectl get secret db-creds -o jsonpath='{.data.password}' | base64 -d
```
Enable encryption at rest via EncryptionConfiguration, or better: use External Secrets Operator.
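To drive the point home: "decoding" a Kubernetes Secret value requires no key at all. For example, in Python:

```python
import base64

# A value as it might appear in a Kubernetes Secret manifest
encoded = "czNjdXIzLXBAc3M="

# No key, no algorithm negotiation -- base64 is an encoding, not encryption
print(base64.b64decode(encoded).decode())  # s3cur3-p@ss
```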
External Secrets Operator
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
  namespace: production
spec:
  refreshInterval: 5m
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: db-creds
    creationPolicy: Owner
  data:
    - secretKey: password
      remoteRef:
        key: secret/data/myapp/database
        property: password
```
Vault Agent Injector
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "myapp"
    vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/readonly"
    vault.hashicorp.com/agent-inject-template-db-creds: |
      {{- with secret "database/creds/readonly" -}}
      postgresql://{{ .Data.username }}:{{ .Data.password }}@db:5432/prod
      {{- end }}
spec:
  containers:
    - name: myapp
      image: myapp:latest
      # Credentials available in /vault/secrets/db-creds
```
Secrets Never in Docker Images
```dockerfile
# WRONG: secret in image layer (even if you delete it in a later layer)
FROM python:3.12-slim
ENV DATABASE_URL=postgresql://admin:password123@db:5432/prod
```

```dockerfile
# RIGHT: multi-stage build, secret only mounted during the build stage
FROM python:3.12-slim AS builder
COPY requirements.txt .
RUN --mount=type=secret,id=pip_token \
    PIP_INDEX_URL=https://$(cat /run/secrets/pip_token)@pypi.internal/simple/ \
    pip install -r requirements.txt

FROM python:3.12-slim
COPY --from=builder /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
COPY . .
# No secrets in the final image
```
Rotation
Rotation limits the impact of a compromise. If a leaked key automatically becomes invalid after 24 hours, an attacker has a limited window of opportunity.
| Strategy | How it works | Suitable for |
|---|---|---|
| Dynamic secrets | New credentials per request, short TTL | Database credentials, cloud tokens |
| Scheduled rotation | Periodic replacement (30/60/90 days) | API keys, service accounts |
| Event-driven rotation | Rotate upon suspicious activity | Any type of secret |
| Dual-version graceful | Two versions temporarily active | Everything without downtime |
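Dual-version rotation also has a client side: during the rotation window a consumer should tolerate either version being the one the backend accepts. A sketch with stand-in `fetch_secret` and `authenticate` callables (illustrative, not a real SDK API):

```python
def connect_with_fallback(fetch_secret, authenticate):
    """Try the current secret first; during a rotation window the
    previous version may still be the one the backend accepts."""
    for stage in ("current", "previous"):
        password = fetch_secret(stage)
        if password is not None and authenticate(password):
            return stage
    raise RuntimeError("no valid credential found -- rotation broken?")

# Simulated store mid-rotation: the backend still accepts the old password
store = {"current": "new-pass", "previous": "old-pass"}
backend_accepts = "old-pass"

stage = connect_with_fallback(store.get, lambda p: p == backend_accepts)
print(stage)  # previous
```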
Graceful Rotation with Two Active Versions
```python
import boto3

def lambda_handler(event, context):
    """AWS Secrets Manager rotation Lambda (4 steps)."""
    secret_id = event['SecretId']
    step = event['Step']
    client = boto3.client('secretsmanager')

    if step == "createSecret":
        new_password = client.get_random_password(
            PasswordLength=32, ExcludeCharacters='/@"\\',
        )['RandomPassword']
        client.put_secret_value(
            SecretId=secret_id,
            ClientRequestToken=event['ClientRequestToken'],
            SecretString=new_password,
            VersionStages=['AWSPENDING'],
        )
    elif step == "setSecret":
        # Apply the new password to the database
        pass  # ALTER USER ... PASSWORD ...
    elif step == "testSecret":
        # Verify that the new credentials work
        pass
    elif step == "finishSecret":
        # Promote AWSPENDING → AWSCURRENT; the old version must be
        # demoted explicitly via RemoveFromVersionId
        metadata = client.describe_secret(SecretId=secret_id)
        current_version = next(
            vid for vid, stages in metadata['VersionIdsToStages'].items()
            if 'AWSCURRENT' in stages
        )
        client.update_secret_version_stage(
            SecretId=secret_id, VersionStage='AWSCURRENT',
            MoveToVersionId=event['ClientRequestToken'],
            RemoveFromVersionId=current_version,
        )
```
Detection of Leaked Secrets
Pre-commit Hooks
```shell
# gitleaks: scan for secrets
gitleaks detect --source . --verbose

# trufflehog: deeper scan including git history
trufflehog git file://. --only-verified
```

```yaml
# Pre-commit configuration (.pre-commit-config.yaml)
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0
    hooks:
      - id: gitleaks
```
GitHub/GitLab Secret Scanning
Enable secret scanning and push protection via repository settings. GitHub blocks pushes that contain known secret patterns (AWS keys, GCP service accounts, Stripe keys, etc.).
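Push protection is essentially pattern matching on diffs. A simplified sketch of the idea (real scanners ship hundreds of curated rules plus entropy heuristics):

```python
import re

# A few well-known secret patterns (simplified)
PATTERNS = {
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github-pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private-key-header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text: str) -> list[str]:
    """Return the names of all secret patterns found in `text`."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

diff = 'aws_key = "AKIAIOSFODNN7EXAMPLE"'
print(scan(diff))  # ['aws-access-key-id']
```

Regex rules catch structured tokens well; generic passwords and connection strings need entropy analysis or, better, verification against the issuing service, which is what trufflehog's `--only-verified` does.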
.gitignore Best Practices
```gitignore
# Secrets and credentials
.env
.env.*
*.pem
*.key
*.p12
credentials.json
service-account.json
secrets.yaml
vault-token

# Terraform state (contains plaintext secrets!)
*.tfstate
*.tfstate.backup
.terraform/
```
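A quick way to sanity-check the coverage of patterns like these is to match candidate filenames against them. The sketch below uses Python's `fnmatch`, whose globbing differs slightly from git's `.gitignore` semantics (no directory-aware or negation rules), so treat it as an approximation:

```python
from fnmatch import fnmatch

# Mirrors the ignore list above (file patterns only)
IGNORE_PATTERNS = [
    ".env", ".env.*", "*.pem", "*.key", "*.p12",
    "credentials.json", "service-account.json",
    "secrets.yaml", "vault-token",
    "*.tfstate", "*.tfstate.backup",
]

def is_ignored(filename: str) -> bool:
    return any(fnmatch(filename, p) for p in IGNORE_PATTERNS)

for f in [".env.production", "server.key", "terraform.tfstate", "app.py"]:
    print(f, is_ignored(f))
```

For the authoritative answer, ask git itself: `git check-ignore -v <path>` reports which rule (if any) matches.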
What to Do When a Leak Occurs
```shell
# 1. IMMEDIATELY: rotate the leaked secret
#    Don't wait. Do it now. Not after the standup.

# 2. Check whether the secret was abused
#    Check CloudTrail, Azure Activity Log, GCP Audit Logs

# 3. Remove it from git history with BFG Repo-Cleaner
bfg --replace-text passwords.txt repo.git
cd repo.git && git reflog expire --expire=now --all
git gc --prune=now --aggressive && git push --force

# 4. ALL team members: delete local clones and re-clone

# 5. Document the incident
```
Common Mistakes
| Mistake | Why it goes wrong | Solution |
|---|---|---|
| Secrets in source code | "It's just for testing" | Vault or cloud secrets manager from day 1 |
| `.env` in git | `.gitignore` forgotten or added too late | Pre-commit hooks; commit only a template, never the real file |
| Base64 as encryption | Kubernetes docs almost suggest it | Encryption at rest, External Secrets Operator |
| Shared service accounts | "Everyone uses the same API key" | Per-service credentials, dynamic secrets |
| No rotation | "It works, doesn't it?" | Automatic rotation, short TTLs |
| Secrets in CI/CD logs | `echo $PASSWORD` in debug mode | Masked variables, never print secrets |
| Secrets in Docker layers | `COPY .env .` in Dockerfile | Multi-stage builds, runtime injection |
| Terraform state in git | State contains plaintext secrets | Remote backend (S3, GCS) with encryption |
| Long-lived PATs | Tokens that never expire | Short expiry, OIDC federation |
| Secrets in Slack/Teams | "Can you send me the password?" | Share the Vault URL, never the secret itself |
| Same key everywhere | Dev, staging, and prod share a key | Separate secrets per environment |
| No audit logging | No idea who read which secret | Vault audit logs, CloudTrail |
Checklist
| Priority | Measure | Category |
|---|---|---|
| P0 - Now | Scan repositories for existing secrets (gitleaks/trufflehog) | Detection |
| P0 - Now | Rotate all discovered leaked secrets | Incident response |
| P0 - Now | Enable GitHub/GitLab secret scanning | Detection |
| P1 - This sprint | Implement pre-commit hooks for secret detection | Prevention |
| P1 - This sprint | Migrate hardcoded secrets to Vault or cloud secrets manager | Storage |
| P1 - This sprint | Remove static credentials from CI/CD; use OIDC | CI/CD |
| P1 - This sprint | Enable encryption at rest for Kubernetes Secrets | Container |
| P2 - This quarter | Implement automatic rotation for database credentials | Rotation |
| P2 - This quarter | Migrate to dynamic secrets (Vault) where possible | Storage |
| P2 - This quarter | Implement External Secrets Operator in Kubernetes | Container |
| P2 - This quarter | Configure audit logging for all secret access | Monitoring |
| P3 - Roadmap | OIDC federation for all CI/CD pipelines | CI/CD |
| P3 - Roadmap | Transit engine for application encryption | Encryption |
| P3 - Roadmap | Automatic incident response upon secret leak detection | Automation |
| P3 - Roadmap | Central secrets management dashboard with compliance reporting | Governance |
We build the most advanced cloud architectures in the world. Kubernetes clusters with service meshes. Zero-trust networking. Everything Infrastructure as Code, everything automated. And then someone commits `AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE` to a public GitHub repository. At 4:47 PM on a Friday. In a commit with the message "quick fix."

It would be funny if it weren't so depressing. We have Vault. We have AWS Secrets Manager. We have OIDC federation. And yet `.env` files are still the "secrets management solution" for some teams. A `.env` file is not secrets management. It's a text file with passwords in it. It's the digital equivalent of a Post-it on your monitor.

And then there's the developer who says: "It's a private repo, so it's safe." Private. The repo that three former employees, two interns who left last year, and that one contractor from 2021 still have access to. But yeah, private. So it's safe. Sleep well.
Summary
Secrets management is not an optional addition but a fundamental part of every security architecture. Use a centralized secrets manager (HashiCorp Vault for multi-cloud or the native solution from your cloud provider), implement automatic rotation with short TTLs, use dynamic secrets where possible, eliminate long-lived credentials in CI/CD via OIDC federation, and protect containers with External Secrets Operator or Vault Agent Injector. Proactively scan for leaked secrets with pre-commit hooks and platform-native scanning. The majority of data breaches don't start with a sophisticated attack but with a forgotten credential in a Git repository. Preventing that is more effective than any detection measure.
Further reading in the knowledge base
These articles in the portal provide more background and practical context:
- The cloud — someone else's computer, your responsibility
- Containers and Docker — what it is and why you need to secure it
- Encryption — the art of making things unreadable
- Least Privilege — give people only what they need