
SOC 2 Type 2 mapping to Secure SDLC Requirements

We started to talk about the SOC 2 Type 2 attestation a while ago, and I feel we have neglected it a bit since.

I have written a bit about the SDLC, and the Secure SDLC in particular, so now it is time to bring the two together.

 

SOC 2 Type 2 and Secure SDLC — the big picture

SOC 2 Type 2 evaluates whether controls are operating effectively over time (typically 6–12 months). It is not a point-in-time snapshot.

Your SDLC is not an isolated engineering practice — it feeds directly into several Trust Services Criteria (TSC).

All nine Common Criteria map to the SDLC in some way, but they do so at different layers.

CC1 (Control Environment) is the foundation. It is not about code or process — it is about organizational accountability. The auditor checks that your Secure SDLC has a named owner, that the policy carries formal authority, and that security has a defined role in the development organization. Without this, every other control lacks a governance backbone.

CC2 (Communication) requires that developers know the rules. A Secure SDLC policy that exists but was never distributed or acknowledged does not satisfy this criterion. The auditor looks for training records, policy sign-offs, or equivalent evidence that the people making security decisions in each SDLC phase were aware of their obligations.

CC3 (Risk Assessment) maps directly to the Idea and PoC phases. The criterion requires that risks are identified and analyzed before work begins. A threat model, a risk register entry, or a documented security review of the proposed design all serve as evidence. The auditor wants to see that risk was considered as an input to scope decisions, not evaluated after the fact.

CC4 (Monitoring Activities) requires ongoing evaluation of whether controls are working. In SDLC terms this means SAST, DAST, and SCA scans must run regularly, their results must be reviewed, and findings must be tracked to resolution. Running a scan whose results are never acted on does not satisfy CC4.
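What "findings tracked to resolution" can look like in practice: a minimal Python sketch of a remediation record with an SLA check. The field names and the severity-to-SLA mapping are illustrative assumptions, not SOC 2 requirements.

```python
# Minimal sketch of a remediation log entry; fields and SLA values are
# illustrative assumptions, not prescribed by SOC 2.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

@dataclass
class Finding:
    tool: str                    # e.g. "SAST", "DAST", "SCA"
    title: str
    severity: str                # key into SLA_DAYS
    owner: str
    opened: date
    closed: Optional[date] = None

    def sla_deadline(self) -> date:
        return self.opened + timedelta(days=SLA_DAYS[self.severity])

    def is_overdue(self, today: date) -> bool:
        return self.closed is None and today > self.sla_deadline()

findings = [
    Finding("SCA", "Outdated TLS library", "high", "alice", date(2024, 3, 1)),
    Finding("SAST", "SQL built by string concatenation", "critical", "bob",
            date(2024, 3, 10), closed=date(2024, 3, 12)),
]

print("Overdue:", [f.title for f in findings if f.is_overdue(date.today())])
```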

CC5 (Control Activities) covers the specific rules that govern how code is written and reviewed. Secure coding standards, mandatory peer review, branch protection, and secrets scanning policies all live here. CC5 is about the guardrails built into the development process itself, not just the approval chain around it.
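Evidence for guardrails like branch protection can usually be exported straight from the hosting platform. A minimal sketch, assuming a GitHub-hosted repository and a GITHUB_TOKEN environment variable; the endpoint and response fields follow the GitHub REST API for branch protection, so verify them against the current documentation before relying on the output.

```python
# Minimal sketch: export branch protection settings as CC5 evidence.
# Assumes a GitHub-hosted repository; owner/repo/branch are placeholders.
import os
import requests

OWNER, REPO, BRANCH = "example-org", "example-repo", "main"

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    timeout=30,
)
resp.raise_for_status()
protection = resp.json()

reviews = protection.get("required_pull_request_reviews", {})
print("Required approving reviews:",
      reviews.get("required_approving_review_count", 0))
print("Force pushes allowed:",
      protection.get("allow_force_pushes", {}).get("enabled"))
```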

CC6 (Logical Access) runs across the widest range of SDLC phases. It covers who has access to source code, build pipelines, deployment tools, and production environments — and whether that access is appropriate at each phase. PoC access that was never revoked and production credentials embedded in a repository are both CC6 findings.

CC7 (System Operations) requires that running systems can detect and respond to threats. Its SDLC relevance is that logging, alerting, and incident response readiness must be built into the product before it reaches production. If these are treated as post-launch concerns, CC7 is a gap.

CC8 (Change Management) is the criterion most directly owned by the SDLC. Every code change from PoC through EOL must be authorized, reviewed, and traceable. This criterion generates the highest sample volume in a Type 2 audit — typically 20 to 25 change records — and every sampled item needs a complete evidence chain.
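Because CC8 drives the largest sample, it pays to be able to reproduce the evidence chain on demand. A minimal sketch, again assuming a GitHub-hosted repository, that lists recent merged pull requests and whether each one carried an approving review; treat it as a starting point, not a complete audit export.

```python
# Minimal sketch: check that recently merged PRs had an approving review.
# Assumes a GitHub-hosted repository; names and the sample size are placeholders.
import os
import requests

OWNER, REPO = "example-org", "example-repo"
BASE = f"https://api.github.com/repos/{OWNER}/{REPO}"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

pulls = requests.get(f"{BASE}/pulls", headers=HEADERS, timeout=30,
                     params={"state": "closed", "per_page": 25}).json()

for pr in pulls:
    if not pr.get("merged_at"):
        continue  # only merged changes reach production
    reviews = requests.get(f"{BASE}/pulls/{pr['number']}/reviews",
                           headers=HEADERS, timeout=30).json()
    approved = any(r.get("state") == "APPROVED" for r in reviews)
    print(f"PR #{pr['number']}: merged {pr['merged_at']}, approved={approved}")
```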

CC9 (Risk Mitigation) addresses third-party and vendor risk. In a software development context this means evaluating open-source libraries, SDKs, and external dependencies before they are adopted. Running a dependency scan satisfies part of this, but CC9 specifically requires that a conscious risk decision was documented — not just that a tool ran.
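The documented risk decision does not require heavy tooling. A minimal sketch of an append-only decision log for newly adopted dependencies; the file name, columns, and example entry are illustrative assumptions.

```python
# Minimal sketch: record the risk decision behind adopting a dependency (CC9).
# File name, columns, and the example entry are illustrative, not prescriptive.
import csv
from datetime import date

def record_dependency_decision(path, package, version, reviewer, decision, rationale):
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), package, version, reviewer, decision, rationale]
        )

record_dependency_decision(
    "dependency-decisions.csv",
    package="requests",
    version="2.32.0",
    reviewer="alice",
    decision="accepted",
    rationale="Actively maintained; no known critical CVEs at adoption",
)
```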

The practical takeaway is that CC1, CC2, and CC9 are the criteria most commonly missing at organizations that believe their Secure SDLC is well covered.

They focus on CC8 (change management) and CC6 (access) but leave governance, communication, and vendor risk undocumented.

 

Summary mapping of SOC 2 controls to the SDLC

For each Common Criterion, the mapping below lists the control name and the SDLC intersection or evidence an auditor typically expects. Phase-by-phase applicability (Idea, PoC, MVP, Release, EOL) is covered in the summary further down.

CC1 (Control environment): SDLC policy with a named owner. Security team has formal authority to block releases.
CC2 (Communication): Secure SDLC policy published and acknowledged. Developer training completion records.
CC3 (Risk assessment): Threat model at the Idea phase. Risk register updated before PoC scope is confirmed.
CC4 (Monitoring activities): SAST/DAST results reviewed. Recurring vulnerability scans in production. Findings tracked to closure.
CC5 (Control activities): Secure coding standards document. Code review policy enforced. Branch protection rules active.
CC6 (Logical access): Repository and pipeline access logs. Secrets management reviewed. Production access revoked at EOL.
CC7 (System operations): Logging enabled pre-release. Alerting configured in production. Incident runbook referenced.
CC8 (Change management): PR records with approvals. Pipeline gates enforced. EOL change ticket required.
CC9 (Risk mitigation): Third-party libraries assessed. OSS license and security risk reviewed before adoption.

 

Practical Checklist — SDLC Evidence by Common Criteria

CC1 — Control environment

SDLC policy with version, date, and named owner. Org chart showing security’s authority. Evidence security can block a release.

CC2 — Communication

Policy acknowledgment log with names and dates. Annual security training completion records. Re-communication evidence if policy changed during the audit period.

CC3 — Risk assessment

Threat model dated before PoC began. Risk register with severity ratings and owners. Security requirements traceable to backlog items.

CC4 — Monitoring activities

SAST and SCA scan reports on a recurring cadence, not one-off. Vulnerability remediation log showing finding, severity, owner, SLA target, and closure date.

CC5 — Control activities

Secure coding standards document. Branch protection configuration blocking direct pushes to main. Secrets scanning active in the repository.
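Dedicated scanners are the usual way to satisfy the secrets-scanning item, but even a small script illustrates the shape of the control. A minimal sketch with a deliberately incomplete, illustrative pattern set:

```python
# Minimal sketch of a repository secrets scan; the pattern set is a small,
# illustrative subset of what a real scanner would cover.
import re
import sys
from pathlib import Path

PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    "hardcoded credential": re.compile(
        r"(?i)(api_key|password|secret)\s*=\s*['\"][^'\"]{8,}"
    ),
}

def scan(root: str) -> int:
    hits = 0
    for path in Path(root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        text = path.read_text(errors="ignore")
        for name, pattern in PATTERNS.items():
            if pattern.search(text):
                print(f"{path}: possible {name}")
                hits += 1
    return hits

if __name__ == "__main__":
    # Non-zero exit so a pre-commit hook or pipeline gate can block the change.
    sys.exit(1 if scan(".") else 0)
```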

CC6 — Logical access

User access list per environment with roles. Annual access review log. MFA enforcement evidence. Secrets stored in a secrets manager, not in code. Access revocation records for leavers and decommissioned systems.

CC7 — System operations

Logging configuration in place before first production release. Alerting thresholds and escalation paths documented. At least one security alert triaged and recorded during the audit period.
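What "logging configuration in place before release" can look like: a minimal sketch using Python's standard logging module to emit structured security events that can later be sampled as CC7 evidence. The event names and fields are illustrative.

```python
# Minimal sketch: structured security-event logging wired up before release.
# Event names and fields are illustrative assumptions.
import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "event": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())

security_log = logging.getLogger("security")
security_log.setLevel(logging.INFO)
security_log.addHandler(handler)

# The kind of entries an auditor could later sample during the period.
security_log.info("login_failed user=alice source_ip=203.0.113.7")
security_log.warning("privilege_change user=bob new_role=admin")
```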

CC8 — Change management

PR records with reviewer names and approval timestamps for every sampled change. Pipeline logs showing tests passed before deployment. Rollback procedure documented. Change ticket for every production deployment including EOL.

CC9 — Risk mitigation

Dependency evaluation process documented. SCA reports showing library risk at adoption and on a recurring basis. Risk acceptance record for each significant new dependency introduced during the audit period.

Secure SDLC and SOC 2 Type 2 — Summary

SOC 2 Type 2 evaluates whether security controls operated consistently over an audit period, typically 6 to 12 months. A Secure SDLC is not a separate compliance workstream. It is the operational mechanism through which most of the Common Criteria are satisfied.

 

All nine Common Criteria (CC1–CC9) have at least one touchpoint in the SDLC. No phase is audit-free.

Idea is the most governance-heavy phase. CC1, CC2, CC3, CC5, and CC9 all apply here. Before a single line of code is written, the auditor expects a threat model, a risk register entry, a policy that developers have acknowledged, and evidence that third-party dependencies were evaluated. Skipping security at this phase creates gaps that are difficult to close retroactively.

PoC is where CC6 findings most often hide. Auditors check whether PoC environments were isolated from production data and whether access granted during PoC was later revoked. CC8 also applies — even exploratory work needs a change record.

MVP is the most evidence-dense phase and where auditors spend the most time. CC4, CC5, CC7, and CC8 all apply. The auditor will sample pull request records, SAST and SCA scan reports, vulnerability remediation logs, and logging configuration. Controls must have operated on every change, not just most of them.

Release is primarily about authorized change (CC8) and least-privilege access to production (CC6). Pipeline logs are strong evidence because they show controls were enforced automatically. A documented rollback procedure satisfies CC7.

EOL is the most commonly under-documented phase. CC6 requires proof that access was revoked. CC8 requires a change ticket for the decommission. CC7 applies if the system handled live data up to shutdown. Data disposal records satisfy C1.2 if confidentiality is in scope.

The controls most frequently missing in practice are CC1 (no named SDLC policy owner), CC2 (policy exists but was never formally acknowledged by developers), CC7 (logging treated as a post-launch concern rather than a release requirement), and CC9 (dependency risk decisions not documented, even when scans were run).

The key principle for SOC 2 Type 2 is consistency. A control that worked 90% of the time is still a finding. Every sampled change needs a complete evidence chain from its originating phase through to deployment or decommission.

The post SOC 2 Type 2 mapping to Secure SDLC Requirements first appeared on Sorin Mustaca’s blog.

How-To create Security User Stories

In the previous article, we explored how Scrum enables teams to add security to the backlog and prioritize it based on risk.

Incorporating security into the SDLC ensures that security is not an afterthought but an integral part of the development process.

Security User Stories support that goal: they are specific, actionable backlog items that articulate the security needs of the software in the same way functional requirements are handled, giving each sprint clear security requirements to work against.

By treating security stories with the same importance as functional stories, developers can ensure that the software they build is not only feature-complete but also secure.

 

What are Security User Stories?

Security User Stories are descriptions of security requirements written from the perspective of the user or the system. They focus on specific security needs, ensuring that the software not only meets functional requirements but also protects against potential vulnerabilities. Just like traditional user stories that describe a feature or function, security stories express how the system should behave securely.

A typical Security User Story follows the same format as a regular user story:

  • As a [role], I want [goal], so that [benefit].

For example, a Security User Story for web development might look like this:

  • “As a user, I want my session to expire after 15 minutes of inactivity, so that my account is protected from unauthorized access.”

Why are Security User Stories Needed?

Security is often treated as an afterthought, addressed late in the development process or after an incident occurs. This reactive approach leads to vulnerabilities, increased technical debt, and costly security fixes post-release. Security User Stories shift this paradigm by making security an integral part of the development process from the outset. They are necessary for several reasons:

  1. Proactive Security Integration: By incorporating security needs into the backlog from the start, you ensure that security considerations are addressed in each sprint, reducing the risk of vulnerabilities later on.
  2. Clear Requirements for Developers: Security User Stories provide clear, actionable security requirements, helping developers understand exactly what is expected to make the software secure.
  3. Accountability: Writing security stories holds the development team accountable for implementing security features and allows for better tracking of security tasks within the development cycle.
  4. Risk Mitigation: When security is considered early in the SDLC, potential security issues are identified and addressed before they become significant risks. This aligns with the concept of “Shift Left” security, where security is integrated into earlier stages of the development process.

How to Use Security User Stories

Security User Stories should be written as part of the Product Backlog and prioritized based on the level of risk or impact. Here’s how to use them effectively:

  1. Collaboration with Security Experts: Work with security professionals to identify potential threats and risks specific to the application or platform. They can help create and refine security user stories based on threat modeling and vulnerability assessments.
  2. Define Acceptance Criteria: Each Security User Story should have clear, testable acceptance criteria. These criteria define when the story is considered complete and what tests should be performed to verify the security requirement has been met (see the sketch after this list).
  3. Prioritize Based on Risk: Security User Stories should be prioritized just like functional features, based on the level of risk they address. Stories that cover high-risk areas, such as authentication or encryption, should be addressed early in the development cycle.
  4. Regular Review and Updates: Security is an evolving field. As new threats emerge, Security User Stories should be reviewed and updated to address the latest vulnerabilities. Regular threat assessments help ensure the backlog remains current.
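To make step 2 concrete, the session-expiry story used earlier can be pinned down as automated checks. A minimal pytest-style sketch, assuming a hypothetical Flask application exposed as myapp.app; the module name and the 15-minute value are assumptions for illustration.

```python
# Minimal sketch: acceptance criteria for the session-expiry story expressed
# as automated checks. "myapp" is a hypothetical module, not from this post.
from datetime import timedelta

from myapp import app  # hypothetical Flask application


def test_sessions_expire_after_15_minutes_of_inactivity():
    # Acceptance criterion: idle sessions are invalidated after 15 minutes.
    assert app.permanent_session_lifetime == timedelta(minutes=15)


def test_session_cookie_is_protected_from_script_access():
    # Acceptance criterion: the session cookie cannot be read by client-side scripts.
    assert app.config["SESSION_COOKIE_HTTPONLY"] is True
```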

Examples of Security User Stories Across Different Platforms

1. Web Application Development

Web applications face numerous security threats, from SQL injection to Cross-Site Scripting (XSS). Below are examples of Security User Stories that address common web application security issues; a short code sketch for the first two follows the list:

  • “As a user, I want my password to be stored securely using a strong hashing algorithm like bcrypt, so that my account is protected from unauthorized access.”
  • “As a system, I want to validate all user inputs server-side to prevent injection attacks.”
  • “As a system, I must use HTTPS for all data transmitted between the client and the server, to ensure data confidentiality.”
  • “As a user, I want to be logged out after 15 minutes of inactivity, so that my session cannot be hijacked.”
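The first two stories above map to well-known implementation patterns. A minimal Python sketch, assuming the bcrypt package and a SQLite database; the function names are illustrative.

```python
# Minimal sketch for the first two web stories: bcrypt password storage and
# parameterized queries for server-side input handling. Names are illustrative.
import sqlite3
import bcrypt

def hash_password(plaintext: str) -> bytes:
    # Store only a salted bcrypt hash, never the plaintext password.
    return bcrypt.hashpw(plaintext.encode("utf-8"), bcrypt.gensalt())

def verify_password(plaintext: str, stored_hash: bytes) -> bool:
    return bcrypt.checkpw(plaintext.encode("utf-8"), stored_hash)

def find_user(conn: sqlite3.Connection, username: str):
    # Parameterized query: user input never becomes part of the SQL text.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchone()
```

Parameterized queries are what server-side input handling reduces to for SQL specifically; other sinks, such as HTML output or shell commands, need their own validation or encoding.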

2. Windows Software Development

Windows software may face risks such as privilege escalation or malicious code execution. Security User Stories for Windows development could include:

  • “As a user, I want my application to run with the minimum necessary privileges, so that the system is protected from privilege escalation attacks.”
  • “As a system administrator, I want all logs to be stored securely and be tamper-proof, so that I can audit user activities reliably.”
  • “As a developer, I want the application to verify all digital signatures before executing code, to ensure the code has not been tampered with.”
  • “As a system, I want to enforce Data Execution Prevention (DEP) to prevent malicious code from executing in memory.”

3. Android App Development

Mobile applications, particularly Android apps, face unique security challenges, such as improper storage of sensitive information and unauthorized access to device features. Examples of Android-related Security User Stories include:

  • “As a user, I want my sensitive data (e.g., passwords, payment information) to be encrypted using the Android Keystore system, so that my data is safe even if the device is compromised.”
  • “As a developer, I want the app to request only the necessary permissions, so that the user’s privacy is respected.”
  • “As a user, I want to be required to authenticate using biometrics before making sensitive changes, such as resetting my password, to ensure the security of my account.”
  • “As a system, I want to securely store session tokens and prevent them from being accessible via insecure storage mechanisms (e.g., SharedPreferences).”

4. iOS App Development

iOS apps must adhere to strict privacy and security guidelines, and improper handling of user data can lead to severe breaches. Below are Security User Stories specific to iOS development:

  • “As a user, I want all sensitive information (e.g., authentication tokens) to be stored in the iOS Keychain, so that my data is protected from unauthorized access.”
  • “As a system, I want to ensure that network communication is secured using TLS 1.2 or above, to protect against man-in-the-middle attacks.”
  • “As a user, I want to enable Face ID for sensitive transactions (e.g., payments), to ensure that unauthorized users cannot perform critical actions.”
  • “As a developer, I want to implement App Transport Security (ATS) to ensure all connections are encrypted.”

Conclusion

Security User Stories are a powerful tool for developers to integrate security into their development process. By writing clear, actionable stories with defined acceptance criteria, development teams can proactively address security risks while ensuring that they meet functional requirements.

Whether you’re building a web app, Windows software, or mobile applications for Android or iOS, incorporating Security User Stories into the backlog ensures that security remains a priority throughout the SDLC.

With this approach, developers can create secure, reliable software that meets the needs of both the business and the users.

The post How-To create Security User Stories first appeared on Sorin Mustaca on Cybersecurity.
