Secure Development Lifecycle
Web Security Without Firefighting
Web risk is rarely mysterious. It usually lies in predictable mistakes that persist under time pressure.
In a Secure Development Lifecycle (SDL), the greatest gains come from secure defaults that are enforced automatically in every release. That turns security from a loose afterthought check into a standard quality attribute of your product.
Why this matters
The core of the Secure Development Lifecycle is practical risk reduction. Technical context informs the choice of measures, but implementation and embedding in daily work are what count.
Shift Left: Security in the Development Process
The term "shift left" refers to moving security activities to earlier phases in the development cycle. In a traditional waterfall model, security sits on the right -- a penetration test just before release. In an SDL model, security is in every phase, from the first design decision.
| SDL Phase | Activities | Tools / Deliverables |
|---|---|---|
| Training | Security awareness for developers, OWASP Top 10, secure coding | OWASP WebGoat, Juice Shop, internal workshops |
| Requirements | Define security requirements, privacy requirements, compliance mapping | Abuse cases, misuse cases, data classification |
| Design | Threat modeling, security architecture review, crypto choices | STRIDE, Attack Trees, security design patterns |
| Implementation | Secure coding standards, peer review, SAST, pre-commit hooks | Semgrep, SonarQube, gitleaks, IDE plugins |
| Verification | DAST, SCA, fuzzing, penetration tests, code review | OWASP ZAP, Nuclei, Trivy, pip-audit |
| Release | Final security review, generate SBOM, prepare incident response plan | CycloneDX, release gates in CI/CD |
| Response | Vulnerability disclosure, patch management, post-incident analysis | PSIRT process, CVE request, lessons learned |
The economics are mercilessly clear. A widely cited figure attributed to IBM's Systems Sciences Institute puts the cost of fixing a defect found in the maintenance phase at up to a hundred times that of fixing the same defect during design. For security defects the factor is even higher, because the costs of incident response, reputational damage, and potential fines come on top.
Static Application Security Testing (SAST)
SAST analyzes source code without running the application. It finds patterns known to be vulnerable: SQL concatenation (see Web 01), unvalidated input in templates (see Web 05), hardcoded credentials, insecure cryptographic functions. It is the digital equivalent of a building inspector checking the blueprint before a single brick is laid.
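To make the pattern concrete, here is a minimal sketch of what a SAST rule flags, using sqlite3 and a hypothetical users table; the vulnerable variant is left commented out:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"  # classic injection payload

# VULNERABLE: f-string concatenation -- exactly what SAST tools flag.
# The payload becomes part of the SQL and the WHERE clause matches every row:
# rows = conn.execute(f"SELECT id FROM users WHERE name = '{user_input}'").fetchall()

# SAFE: parameterized query -- the payload is treated as data, not SQL
rows = conn.execute(
    "SELECT id FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- no user is literally named "alice' OR '1'='1"
```

The safe variant matches the `pattern-not` clause in the Semgrep rule shown later in this section.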
Tools per language
| Tool | Language / Framework | Open Source | Details |
|---|---|---|---|
| Semgrep | Multi-language (Python, JS, Java, Go, Ruby, ...) | Yes | Rule-based, fast, custom rules |
| SonarQube | Multi-language | Community Edition free | Quality gates, technical debt tracking |
| Bandit | Python | Yes | Specific to Python security |
| SpotBugs + Find Security Bugs | Java | Yes | Bytecode analysis |
| gosec | Go | Yes | Go-specific checks |
| Brakeman | Ruby on Rails | Yes | Framework-aware analysis |
| CodeQL | Multi-language | Free for open source | GitHub integration, dataflow analysis |
Semgrep: custom rules
Semgrep is particularly powerful because you can write custom rules that are specific to your codebase. Two example rules for a Flask application, covering SQL injection and open redirects (see Web 12 for input validation):
# .semgrep/flask-sql-injection.yml
rules:
  - id: flask-raw-sql-query
    patterns:
      - pattern: |
          cursor.execute(f"...", ...)
      - pattern-not: |
          cursor.execute("...", (...))
    message: >
      Possible SQL injection: use parameterized queries
      instead of f-strings. See Web 01.
    languages: [python]
    severity: ERROR
    metadata:
      cwe: "CWE-89"
      owasp: "A03:2021 Injection"
      confidence: HIGH
  - id: flask-unvalidated-redirect
    pattern: |
      redirect(request.args.get(...))
    message: >
      Open redirect: validate the redirect URL against an allowlist.
    languages: [python]
    severity: WARNING
    metadata:
      cwe: "CWE-601"
Running locally:
# Scan the entire codebase with your own rules
semgrep --config .semgrep/ --config p/owasp-top-ten .
# Only scan changed files (useful in CI)
semgrep --config .semgrep/ --config p/python \
  $(git diff --name-only HEAD~1 -- '*.py')
SAST in GitHub Actions
A complete SAST workflow that runs on every pull request:
# .github/workflows/sast.yml
name: SAST

on:
  pull_request:
    branches: [main, develop]
  push:
    branches: [main]

jobs:
  semgrep:
    runs-on: ubuntu-latest
    container:
      image: semgrep/semgrep
    steps:
      - uses: actions/checkout@v4
      - name: Semgrep scan
        run: |
          semgrep ci \
            --config p/owasp-top-ten \
            --config p/python \
            --config .semgrep/ \
            --sarif --output semgrep.sarif
        env:
          SEMGREP_APP_TOKEN: ${{ secrets.SEMGREP_APP_TOKEN }}
      - name: Upload SARIF
        if: always()
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: semgrep.sarif
  bandit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Bandit scan
        run: |
          pip install "bandit[sarif]"
          bandit -r src/ -f sarif -o bandit.sarif \
            -ll  # medium and higher only
      - name: Upload SARIF
        if: always()
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: bandit.sarif
Note: SAST produces false positives. A SAST tool that reports everything is just as useless as one that reports nothing. Tune your rules: start with high severity, fix those findings, and gradually lower the threshold. A team buried under five hundred SAST warnings will ignore all of them, just like a car alarm that goes off too often.
Dynamic Application Security Testing (DAST)
Where SAST looks at source code, DAST tests the running application. It sends requests, observes the responses, and looks for vulnerabilities that are only visible in a live environment: incorrect HTTP headers (see Web 11), missing CSP (see Web 09), open redirects, and more.
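As an illustration of the kind of passive check a DAST baseline scan performs, here is a small sketch that flags missing security headers. The header set and the captured response are illustrative, and a real scanner checks far more:

```python
# Headers a baseline scan typically expects on an HTML response
# (illustrative subset; see Web 11 for the full list)
EXPECTED = {
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
}

def missing_security_headers(headers: dict) -> set:
    """Return the expected security headers absent from a response."""
    present = {name.title() for name in headers}  # header names are case-insensitive
    return {h for h in EXPECTED if h.title() not in present}

# A captured response from a hypothetical staging server:
response_headers = {
    "Content-Type": "text/html",
    "X-Content-Type-Options": "nosniff",
}
print(missing_security_headers(response_headers))
# flags Strict-Transport-Security and Content-Security-Policy as missing
```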
Tools
| Tool | Type | Open Source | Usage |
|---|---|---|---|
| OWASP ZAP | Full DAST | Yes | Active and passive scanning |
| Nuclei | Template-based scanner | Yes | Fast checks with community templates |
| Burp Suite Pro | Full DAST | No (commercial) | Manual + automated tests |
| Nikto | Web server scanner | Yes | Quick config checks |
ZAP in CI/CD
OWASP ZAP offers a Docker image that fits directly into your CI/CD pipeline:
# .github/workflows/dast.yml
name: DAST
on:
workflow_run:
workflows: ["Deploy Staging"]
types: [completed]
jobs:
zap-scan:
if: ${{ github.event.workflow_run.conclusion == 'success' }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: ZAP Baseline Scan
uses: zaproxy/action-baseline@v0.14.0
with:
target: "https://staging.example.com"
rules_file_name: ".zap/rules.tsv"
cmd_options: >
-a
-j
-config api.disablekey=true
-config scanner.threadPerHost=2
- name: Upload ZAP report
if: always()
uses: actions/upload-artifact@v4
with:
name: zap-report
path: report_html.htmlConfigure rules via a TSV file to suppress false positives:
# .zap/rules.tsv
# Rule ID Action Description
10035 IGNORE Strict-Transport-Security Header Not Set (staging has no TLS)
10096 IGNORE Timestamp Disclosure
90033 WARN Loosely Scoped Cookie
Nuclei for quick checks
Nuclei is lighter than ZAP and particularly suited for automated checks on known misconfigurations:
# Install nuclei
go install -v github.com/projectdiscovery/nuclei/v3/cmd/nuclei@latest
# Scan with OWASP-related templates
nuclei -u https://staging.example.com \
-t http/misconfiguration/ \
-t http/exposures/ \
-t http/vulnerabilities/ \
-severity medium,high,critical \
-o nuclei-results.txt
Software Composition Analysis (SCA)
According to estimates, 70 to 90 percent of modern applications consist of open-source components. You may write ten percent of your code yourself; the rest is npm packages, PyPI libraries, and Maven dependencies that you install with a single command and then never look at again. Until someone publishes a CVE for that one logging library that is woven through your entire stack -- Log4Shell, anyone?
SCA scans your dependencies for known vulnerabilities and license issues.
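Several of these tools consult the OSV vulnerability database under the hood. As a sketch, this is roughly the request body for OSV's /v1/query endpoint (the package name and version are examples):

```python
import json

def osv_query(name: str, version: str, ecosystem: str = "PyPI") -> dict:
    """Build a request body for the OSV /v1/query endpoint,
    the same database pip-audit consults."""
    return {
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }

body = osv_query("requests", "2.19.1")
print(json.dumps(body))
# POST this to https://api.osv.dev/v1/query; a non-empty "vulns"
# list in the response means known vulnerabilities affect this version.
```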
Tools
| Tool | Ecosystem | Open Source | Details |
|---|---|---|---|
| npm audit | Node.js | Yes (built-in) | npm audit --omit=dev |
| pip-audit | Python | Yes | Uses OSV database |
| Trivy | Multi (OS, containers, code) | Yes | All-in-one scanner |
| Dependabot | Multi | Free (GitHub) | Automatic PRs for updates |
| Snyk | Multi | Free tier | Fix suggestions, container scanning |
| Grype | Multi | Yes | SBOM-compatible, fast |
SBOM: Software Bill of Materials
An SBOM is an ingredient list for your software. Two standards:
- CycloneDX (OWASP) -- XML or JSON, security-focused
- SPDX (Linux Foundation) -- ISO standard, license-focused
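To show roughly what such an ingredient list contains, here is an illustrative sketch of a minimal CycloneDX-style JSON document; use Trivy or an official CycloneDX generator for real SBOMs:

```python
import json

def minimal_cyclonedx(components):
    """Sketch of a minimal CycloneDX 1.5 JSON skeleton.
    Illustrative only -- not a full generator."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "version": 1,
        "components": [
            {
                "type": "library",
                "name": name,
                "version": version,
                # purl: package URL, the key scanners match CVEs against
                "purl": f"pkg:pypi/{name}@{version}",
            }
            for name, version in components
        ],
    }

sbom = minimal_cyclonedx([("flask", "3.0.3"), ("requests", "2.32.3")])
print(json.dumps(sbom, indent=2))
```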
Generating with Trivy:
# Generate SBOM in CycloneDX format
trivy fs --format cyclonedx --output sbom.json .
# Scan SBOM for vulnerabilities
trivy sbom sbom.json --severity HIGH,CRITICAL
Trivy in GitHub Actions
# .github/workflows/sca.yml
name: SCA

on:
  pull_request:
    paths:
      - "requirements*.txt"
      - "package*.json"
      - "go.sum"
      - "Dockerfile"

jobs:
  trivy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Trivy vulnerability scan
        uses: aquasecurity/trivy-action@0.28.0
        with:
          scan-type: "fs"
          scan-ref: "."
          format: "sarif"
          output: "trivy-results.sarif"
          severity: "MEDIUM,HIGH,CRITICAL"
          ignore-unfixed: true
      - name: Upload Trivy SARIF
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: trivy-results.sarif
  pip-audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: pip-audit
        run: |
          pip install pip-audit
          pip-audit -r requirements.txt \
            --desc \
            --format json \
            --output pip-audit-results.json
Secret Scanning & Pre-commit Hooks
There is a special corner in hell reserved for developers who commit AWS keys to a public repository, and that corner is overcrowded. In its 2024 report, GitGuardian counted more than 12.8 million secrets detected in public Git commits. API keys, database passwords, private keys, OAuth client secrets -- you could fill an entire assessment with them without ever writing an exploit.
Tools
| Tool | Approach | Open Source |
|---|---|---|
| gitleaks | Regex patterns on Git history | Yes |
| truffleHog | Entropy + regex, scans history | Yes |
| detect-secrets | Baseline model, plugin architecture | Yes (Yelp) |
| GitHub Secret Scanning | Built into GitHub (public repos free) | N/A |
| GitGuardian | SaaS + CLI, real-time | Free tier |
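The entropy approach from the table can be sketched in a few lines: long tokens with high Shannon entropy look random, and random usually means a key. The threshold and token pattern here are illustrative:

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; random keys score high."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

# Candidate tokens: 20+ chars from a base64-ish alphabet (illustrative)
TOKEN = re.compile(r"[A-Za-z0-9+/=_\-]{20,}")

def suspicious_tokens(line: str, threshold: float = 4.0):
    """Flag long, high-entropy tokens -- the truffleHog-style heuristic."""
    return [t for t in TOKEN.findall(line) if shannon_entropy(t) > threshold]

# A random-looking key is flagged; an ordinary long identifier is not
print(suspicious_tokens('AWS_SECRET = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"'))
print(suspicious_tokens("for item in shopping_basket_items:"))  # []
```

Real tools combine this heuristic with provider-specific regexes, because entropy alone misses short structured keys and flags hashes in lockfiles.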
Pre-commit configuration with gitleaks
Pre-commit hooks run before code is committed. If the scan finds a secret, the commit is blocked. Install once and you prevent a category of incidents that can ruin your career.
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.21.2
    hooks:
      - id: gitleaks
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0
    hooks:
      - id: check-added-large-files
        args: ["--maxkb=500"]
      - id: detect-private-key
      - id: check-merge-conflict
  - repo: https://github.com/PyCQA/bandit
    rev: 1.8.3
    hooks:
      - id: bandit
        args: ["-ll", "-q"]
Installation:
# Install pre-commit framework
pip install pre-commit
# Activate hooks in your repository
pre-commit install
# One-time scan of all files
pre-commit run --all-files
# Run gitleaks separately on the entire Git history
gitleaks detect --source . --verbose
Gitleaks custom config
Configure gitleaks to detect specific patterns or ignore false positives:
# .gitleaks.toml
title = "Custom gitleaks config"
[extend]
useDefault = true
[[rules]]
id = "internal-api-key"
description = "Internal API key pattern"
regex = '''(?i)x-api-key\s*[:=]\s*['"]?[a-z0-9]{32,}['"]?'''
tags = ["api", "internal"]
[allowlist]
paths = [
'''tests/fixtures/''',
'''docs/examples/''',
]
regexTarget = "line"
regexes = [
'''EXAMPLE_KEY''',
'''test-api-key-not-real''',
]
Code Review for Security
Automated tooling catches a lot, but not everything. Business logic vulnerabilities -- an IDOR where user A can access user B's data (see Web 14 for API security), a race condition in a payment flow, an authorization check missing on an admin endpoint -- escape every scanner. For those you need human reviewers.
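The fix for an IDOR is conceptually simple, which is exactly why reviewers must look for its absence. A minimal sketch of an ownership check, with a hypothetical in-memory store:

```python
# Toy in-memory store; structure and names are illustrative
DOCUMENTS = {
    1: {"owner_id": 42, "body": "alice's notes"},
    2: {"owner_id": 7, "body": "bob's notes"},
}

def get_document(current_user_id: int, doc_id: int) -> dict:
    """Fetch a document, enforcing ownership -- the check a scanner
    cannot know is missing, because the code works fine without it."""
    doc = DOCUMENTS.get(doc_id)
    if doc is None:
        raise KeyError("not found")
    if doc["owner_id"] != current_user_id:
        # Same response as "not found", so attackers cannot probe
        # which IDs exist
        raise PermissionError("not found")
    return doc

print(get_document(42, 1)["body"])  # alice reads her own document
# get_document(42, 2) raises PermissionError: user 42 does not own doc 2
```

A reviewer checks that every data-access path goes through such a check; a scanner only sees valid code either way.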
Security-focused review checklist
When reviewing code, check at minimum the following:
| Category | Checkpoints |
|---|---|
| Authentication | Passwords hashed with bcrypt/argon2? No hardcoded credentials? Brute-force protection? (See Web 10) |
| Authorization | Is authorization checked on every endpoint? No direct object references without ownership check? |
| Input validation | All input validated and sanitized? Parameterized queries? Template variables? (See Web 01, Web 12) |
| Output encoding | Context-aware output encoding for HTML, JS, URL? (See Web 02) |
| Cryptography | No homemade crypto? Current algorithms (AES-256-GCM, Ed25519)? No ECB mode? (See Web 13) |
| Error handling | No stack traces in production? No sensitive data in error messages? |
| Logging | Are security events logged? No passwords or tokens in logs? |
| Dependencies | New dependencies checked? Known vulnerabilities? License compatible? |
| Secrets | No API keys, passwords, or tokens in the code? Configuration via environment variables? |
| HTTP headers | Security headers present? CORS correctly configured? (See Web 11) |
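For the logging row in the checklist, one defensive pattern is to scrub credential-shaped values before they reach the log. A sketch using Python's logging filters; the redaction pattern is illustrative and deliberately incomplete:

```python
import logging
import re

# Values shaped like credentials (illustrative pattern, extend per codebase)
REDACT = re.compile(r"(password|token|api[_-]?key)=\S+", re.IGNORECASE)

class RedactingFilter(logging.Filter):
    """Scrub credential-like values from log records before output."""
    def filter(self, record):
        record.msg = REDACT.sub(r"\1=[REDACTED]", str(record.msg))
        return True

logger = logging.getLogger("app")
logger.addHandler(logging.StreamHandler())
logger.addFilter(RedactingFilter())

logger.warning("login failed for url ?token=abc123secret")
# logs: login failed for url ?token=[REDACTED]
```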
Pull request template
Use a PR template that forces reviewers to think about security:
<!-- .github/pull_request_template.md -->
## Description
<!-- What does this change do? -->
## Security checklist
- [ ] No hardcoded credentials or secrets
- [ ] Input is validated and sanitized
- [ ] Authorization is checked on all new endpoints
- [ ] No sensitive data in logs or error messages
- [ ] New dependencies checked for CVEs
- [ ] SAST/SCA pipeline is green
## Test evidence
<!-- How did you test that this is secure? -->
Security Champions Program
You can install the best tools, build the tightest CI/CD pipelines, and write the most complete checklists -- if no developer understands why those measures exist, they will be seen as obstacles and circumvented. A Security Champions program anchors security knowledge directly in the development teams.
What is a Security Champion?
A Security Champion is a developer who -- alongside their regular work -- serves as the point of contact for security within their team. It is not a full-time security engineer. It is someone who:
- Can explain the OWASP Top 10 to colleagues
- Does security-relevant code review
- Translates new vulnerabilities into impact for the team
- Escalates to the security team when needed
- Participates in monthly Security Champions meetups
Setup
| Aspect | Details |
|---|---|
| Selection | Volunteers, minimum 1 per team of 5-8 developers |
| Time investment | 10-15% of work time (4-6 hours per week) |
| Training | Initial: 2-day workshop. Ongoing: monthly session |
| Tooling | Access to SAST/DAST dashboards, threat modeling tools, internal wiki |
| Recognition | Inclusion in performance reviews, security certifications, conference budget |
| Community | Monthly meetup, Slack/Teams channel, joint CTF participation |
Training topics
- OWASP Top 10 and how to recognize them in code (see all Web chapters)
- Threat modeling in practice (see section 17.8)
- Secure coding patterns per language/framework
- Interpreting and triaging SAST/DAST results
- Incident response basics: what to do when things go wrong
- Supply chain security and SCA
Threat Modeling
Threat modeling is the structured process of thinking about what can go wrong, before it goes wrong. It is like mapping out the route for a car trip and thinking in advance about where the tires could go flat, instead of waiting until you are standing on the highway with a blowout.
When in the SDLC?
Threat modeling belongs in the design phase, before code is written. But it is also valuable for significant changes to existing systems -- a new API, an additional integration layer, a migration to the cloud.
Methodologies
| Method | Approach | Suitable for |
|---|---|---|
| STRIDE | Per component: Spoofing, Tampering, Repudiation, Information Disclosure, DoS, Elevation of Privilege | Systematic analysis of individual components |
| PASTA | 7 steps, risk-based, business context central | Complex systems with many stakeholders |
| Attack Trees | Hierarchical decomposition of attack goals | In-depth analysis of specific scenarios |
| LINDDUN | Privacy-focused threat model | Systems with personal data (GDPR) |
STRIDE in practice
For each component in your architecture you ask six questions:
Component: REST API for user management
[S] Spoofing - Can someone impersonate another user?
→ Mitigation: JWT validation, mutual TLS (see Web 14)
[T] Tampering - Can someone modify data in transit?
→ Mitigation: TLS 1.3, request signing (see Web 13)
[R] Repudiation - Can someone deny having performed an action?
→ Mitigation: Audit logging with tamper-evident storage
[I] Info Disclosure - Can sensitive data leak?
→ Mitigation: Output filtering, error handling (see Web 12)
[D] Denial of Service - Can someone make the service unavailable?
→ Mitigation: Rate limiting, input validation (see Web 14)
[E] Elevation - Can a user obtain admin privileges?
→ Mitigation: RBAC, principle of least privilege
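For teams that want to systematize this, the six questions can be expanded into a per-component checklist with a few lines of code (questions paraphrased from the example above):

```python
# The six STRIDE questions, paraphrased
STRIDE = {
    "Spoofing": "Can someone impersonate another identity?",
    "Tampering": "Can data be modified in transit or at rest?",
    "Repudiation": "Can an action be denied afterwards?",
    "Information Disclosure": "Can sensitive data leak?",
    "Denial of Service": "Can the component be made unavailable?",
    "Elevation of Privilege": "Can a user gain rights they should not have?",
}

def stride_checklist(components):
    """Expand a component list into the full question matrix."""
    return [
        (component, threat, question)
        for component in components
        for threat, question in STRIDE.items()
    ]

for component, threat, question in stride_checklist(["REST API", "Database"]):
    print(f"[{threat[0]}] {component}: {question}")
```

Two components yield twelve questions; the value is that none of them can be skipped silently.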
Threat model as code
For teams that prefer code over diagrams, pytm (Python Threat Modeling) offers the ability to write threat models as code:
# threat_model.py
from pytm import TM, Server, Datastore, Dataflow, Boundary, Actor
tm = TM("Web Application Threat Model")
tm.description = "Threat model for customer portal"
internet = Boundary("Internet")
dmz = Boundary("DMZ")
internal = Boundary("Internal network")
user = Actor("User")
user.inBoundary = internet
web = Server("Web Application")
web.inBoundary = dmz
web.protocol = "HTTPS"
web.sanitizesInput = True
web.encodesOutput = True
db = Datastore("Database")
db.inBoundary = internal
db.isEncryptedAtRest = True
user_to_web = Dataflow(user, web, "HTTPS requests")
user_to_web.protocol = "HTTPS"
web_to_db = Dataflow(web, db, "SQL queries")
web_to_db.protocol = "TLS"
tm.process()
# Generate threat model report
python threat_model.py --dfd | dot -Tpng -o threat_model.png
python threat_model.py --report threats.md
Summary
- Shift left: the earlier you find vulnerabilities, the cheaper the fix. Integrate security in every phase of the SDLC, not just at the end.
- SAST (Semgrep, Bandit, SonarQube) analyzes source code for known vulnerable patterns. Run it in your CI/CD pipeline on every pull request.
- DAST (OWASP ZAP, Nuclei) tests your running application. Use it against staging environments after every deploy.
- SCA (Trivy, pip-audit, npm audit) scans your dependencies for known CVEs. Generate an SBOM for compliance and insight.
- Secret scanning (gitleaks, truffleHog) prevents credentials from ending up in your Git history. Pre-commit hooks are your first line of defense.
- Code review catches business logic vulnerabilities that no tool can detect. Use checklists and PR templates to guide reviewers.
- Security Champions anchor security knowledge in development teams. Without human involvement, tools will be ignored.
- Threat modeling (STRIDE, PASTA) forces you to think about attack scenarios before building. Do it in the design phase, repeat for major changes.
- The best security is not a tool or configuration -- it is a culture in which developers view security as their responsibility, not as someone else's problem.
Further reading in the knowledge base
These articles in the portal provide more background and practical context:
- APIs -- the invisible glue of the internet
- SSL/TLS -- why that padlock in your browser matters
- Encryption -- the art of making things unreadable
- Password hashing -- how websites store your password
- Penetration tests vs. vulnerability scans