
SOC 2 Type 2 mapping to Secure SDLC Requirements

We started talking about the SOC 2 Type 2 certification a while ago, and I feel that we have neglected it a bit.

I have written a bit about the SDLC, and the Secure SDLC in particular, but now it is time to bring the two together.

 

SOC 2 Type 2 and Secure SDLC — the big picture

SOC 2 Type 2 evaluates whether controls are operating effectively over time (typically 6–12 months). It is not a point-in-time snapshot.

Your SDLC is not an isolated engineering practice — it feeds directly into several Trust Services Criteria (TSC).

All nine Common Criteria map to the SDLC in some way, but they do so at different layers.

CC1 (Control Environment) is the foundation. It is not about code or process — it is about organizational accountability. The auditor checks that your Secure SDLC has a named owner, that the policy carries formal authority, and that security has a defined role in the development organization. Without this, every other control lacks a governance backbone.

CC2 (Communication) requires that developers know the rules. A Secure SDLC policy that exists but was never distributed or acknowledged does not satisfy this criterion. The auditor looks for training records, policy sign-offs, or equivalent evidence that the people making security decisions in each SDLC phase were aware of their obligations.

CC3 (Risk Assessment) maps directly to the Idea and PoC phases. The criterion requires that risks are identified and analyzed before work begins. A threat model, a risk register entry, or a documented security review of the proposed design all serve as evidence. The auditor wants to see that risk was considered as an input to scope decisions, not evaluated after the fact.

CC4 (Monitoring Activities) requires ongoing evaluation of whether controls are working. In SDLC terms this means SAST, DAST, and SCA scans must run regularly, their results must be reviewed, and findings must be tracked to resolution. Running a scan whose results are never acted on does not satisfy CC4.
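
To illustrate, one way to make "findings tracked to resolution" auditable is a machine-checkable remediation log. Here is a minimal sketch in Python, assuming a hypothetical log format and severity-based SLA targets; neither is prescribed by SOC 2:

```python
from datetime import date

# Hypothetical remediation log entries; the field names are illustrative.
findings = [
    {"id": "VULN-101", "severity": "high", "opened": date(2024, 3, 1), "closed": date(2024, 3, 10)},
    {"id": "VULN-102", "severity": "critical", "opened": date(2024, 3, 5), "closed": None},
]

# Example SLA targets in days per severity; set these per your own policy.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def overdue(finding, today=date(2024, 3, 20)):  # fixed example date
    """Return True if the finding breached its SLA target."""
    end = finding["closed"] or today
    return (end - finding["opened"]).days > SLA_DAYS[finding["severity"]]

for f in findings:
    status = "OVERDUE" if overdue(f) else "within SLA"
    print(f["id"], f["severity"], status)
```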

CC5 (Control Activities) covers the specific rules that govern how code is written and reviewed. Secure coding standards, mandatory peer review, branch protection, and secrets scanning policies all live here. CC5 is about the guardrails built into the development process itself, not just the approval chain around it.

CC6 (Logical Access) runs across the widest range of SDLC phases. It covers who has access to source code, build pipelines, deployment tools, and production environments — and whether that access is appropriate at each phase. PoC access that was never revoked and production credentials embedded in a repository are both CC6 findings.

CC7 (System Operations) requires that running systems can detect and respond to threats. Its SDLC relevance is that logging, alerting, and incident response readiness must be built into the product before it reaches production. If these are treated as post-launch concerns, CC7 is a gap.

CC8 (Change Management) is the criterion most directly owned by the SDLC. Every code change from PoC through EOL must be authorized, reviewed, and traceable. This criterion generates the highest sample volume in a Type 2 audit — typically 20 to 25 change records — and every sampled item needs a complete evidence chain.
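
Since every sampled change needs a complete evidence chain, it pays to verify that chain continuously instead of during audit preparation. A minimal sketch, assuming a hypothetical change-record shape (ticket, approver, pipeline result); the field names are mine, not SOC 2's:

```python
# Hypothetical change records exported from your ticketing/VCS tooling.
changes = [
    {"pr": 482, "ticket": "CHG-1201", "approved_by": "alice", "pipeline_passed": True},
    {"pr": 483, "ticket": None, "approved_by": None, "pipeline_passed": True},
]

REQUIRED = ("ticket", "approved_by", "pipeline_passed")

def evidence_gaps(change):
    """List which links of the evidence chain are missing for one change."""
    return [field for field in REQUIRED if not change.get(field)]

for c in changes:
    gaps = evidence_gaps(c)
    print(f"PR #{c['pr']}: " + ("complete" if not gaps else f"missing {', '.join(gaps)}"))
```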

CC9 (Risk Mitigation) addresses third-party and vendor risk. In a software development context this means evaluating open-source libraries, SDKs, and external dependencies before they are adopted. Running a dependency scan satisfies part of this, but CC9 specifically requires that a conscious risk decision was documented — not just that a tool ran.
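
The difference matters in practice: a scan report proves a tool ran, while CC9 asks for a recorded decision. One way to capture it is a small, append-only record per dependency. A sketch with illustrative fields:

```python
import json
from datetime import date

# Hypothetical risk-acceptance record for a new dependency; the schema is
# illustrative; CC9 requires a documented decision, not this exact format.
record = {
    "dependency": "left-pad",
    "version": "1.3.0",
    "scan_report": "sca-2024-03-18.html",
    "known_issues": [],
    "decision": "accepted",
    "decided_by": "jane.doe",
    "decided_on": str(date(2024, 3, 18)),
    "review_due": str(date(2025, 3, 18)),
}

# Append to a simple JSON-lines log that serves as the decision trail.
with open("risk-decisions.jsonl", "a") as log:
    log.write(json.dumps(record) + "\n")
```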

The practical takeaway is that CC1, CC2, and CC9 are the criteria most commonly missing at customers who believe their Secure SDLC is well covered.

They focus on CC8 (change management) and CC6 (access) but leave governance, communication, and vendor risk undocumented.

 

Summary mapping of SOC 2 controls to the SDLC

CC1 — Control environment: SDLC policy with a named owner. Security team has formal authority to block releases.
CC2 — Communication: Secure SDLC policy published and acknowledged. Developer training completion records.
CC3 — Risk assessment: Threat model at Idea phase. Risk register updated before PoC scope is confirmed.
CC4 — Monitoring activities: SAST/DAST results reviewed. Recurring vulnerability scans in production. Findings tracked to closure.
CC5 — Control activities: Secure coding standards document. Code review policy enforced. Branch protection rules active.
CC6 — Logical access: Repo and pipeline access logs. Secrets management reviewed. Production access revoked at EOL.
CC7 — System operations: Logging enabled pre-release. Alerting configured in production. Incident runbook referenced.
CC8 — Change management: PR records with approvals. Pipeline gates enforced. EOL change ticket required.
CC9 — Risk mitigation: Third-party libraries assessed. OSS license and security risk reviewed before adoption.

 

Practical Checklist — SDLC Evidence by Common Criteria

CC1 — Control environment

SDLC policy with version, date, and named owner. Org chart showing security’s authority. Evidence security can block a release.

CC2 — Communication

Policy acknowledgment log with names and dates. Annual security training completion records. Re-communication evidence if policy changed during the audit period.

CC3 — Risk assessment

Threat model dated before PoC began. Risk register with severity ratings and owners. Security requirements traceable to backlog items.

CC4 — Monitoring activities

SAST and SCA scan reports on a recurring cadence, not one-off. Vulnerability remediation log showing finding, severity, owner, SLA target, and closure date.

CC5 — Control activities

Secure coding standards document. Branch protection configuration blocking direct pushes to main. Secrets scanning active in the repository.
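
If the repository lives on GitHub, this evidence can be exported directly from the API instead of screenshotted. A sketch using GitHub's documented branch-protection endpoint; the owner, repo, and token handling are placeholders:

```python
import os
import requests

# GitHub REST API: GET /repos/{owner}/{repo}/branches/{branch}/protection
owner, repo, branch = "example-org", "example-repo", "main"  # placeholders
resp = requests.get(
    f"https://api.github.com/repos/{owner}/{repo}/branches/{branch}/protection",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
)
resp.raise_for_status()
protection = resp.json()

# Evidence points an auditor typically looks for:
reviews = protection.get("required_pull_request_reviews", {})
print("required approvals:", reviews.get("required_approving_review_count"))
print("force pushes allowed:", protection.get("allow_force_pushes", {}).get("enabled"))
```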

CC6 — Logical access

User access list per environment with roles. Annual access review log. MFA enforcement evidence. Secrets stored in a secrets manager, not in code. Access revocation records for leavers and decommissioned systems.

CC7 — System operations

Logging configuration in place before first production release. Alerting thresholds and escalation paths documented. At least one security alert triaged and recorded during the audit period.

CC8 — Change management

PR records with reviewer names and approval timestamps for every sampled change. Pipeline logs showing tests passed before deployment. Rollback procedure documented. Change ticket for every production deployment including EOL.

CC9 — Risk mitigation

Dependency evaluation process documented. SCA reports showing library risk at adoption and on a recurring basis. Risk acceptance record for each significant new dependency introduced during the audit period.

Secure SDLC and SOC 2 Type 2 — Summary

SOC 2 Type 2 evaluates whether security controls operated consistently over an audit period, typically 6 to 12 months. A Secure SDLC is not a separate compliance workstream. It is the operational mechanism through which most of the Common Criteria are satisfied.

 

All nine Common Criteria (CC1–CC9) have at least one touchpoint in the SDLC. No phase is audit-free.

Idea is the most governance-heavy phase. CC1, CC2, CC3, CC5, and CC9 all apply here. Before a single line of code is written, the auditor expects a threat model, a risk register entry, a policy that developers have acknowledged, and evidence that third-party dependencies were evaluated. Skipping security at this phase creates gaps that are difficult to close retroactively.

PoC is where CC6 findings most often hide. Auditors check whether PoC environments were isolated from production data and whether access granted during PoC was later revoked. CC8 also applies — even exploratory work needs a change record.

MVP is the most evidence-dense phase and where auditors spend the most time. CC4, CC5, CC7, and CC8 all apply. The auditor will sample pull request records, SAST and SCA scan reports, vulnerability remediation logs, and logging configuration. Controls must have operated on every change, not just most of them.

Release is primarily about authorized change (CC8) and least-privilege access to production (CC6). Pipeline logs are strong evidence because they show controls were enforced automatically. A documented rollback procedure satisfies CC7.

EOL is the most commonly under-documented phase. CC6 requires proof that access was revoked. CC8 requires a change ticket for the decommission. CC7 applies if the system handled live data up to shutdown. Data disposal records satisfy C1.2 if confidentiality is in scope.

The controls most frequently missing in practice are CC1 (no named SDLC policy owner), CC2 (policy exists but was never formally acknowledged by developers), CC7 (logging treated as a post-launch concern rather than a release requirement), and CC9 (dependency risk decisions not documented, even when scans were run).

The key principle for SOC 2 Type 2 is consistency. A control that worked 90% of the time is still a finding. Every sampled change needs a complete evidence chain from its originating phase through to deployment or decommission.


EU Cyber Resilience Act (CRA) – Overview

What is the Cyber Resilience Act – CRA

The Cyber Resilience Act is the first European regulation to set a mandatory minimum level of cyber security for all connected products available on the EU market – something that did not exist before.

The CRA is a regulation from the European Union — formally Regulation (EU) 2024/2847 — but it also affects manufacturers in other parts of the world that produce for and sell products on the EU market.

It covers both hardware and software products whose intended or foreseeable use involves connection (direct or indirect) to a device or network. That includes things like smartphones, laptops, IoT devices (smart-home cameras, smart fridges, connected toys), embedded systems, routers, industrial control systems, and even software with network connectivity.

Open source software products made available outside a commercial activity are exempt from the CRA and therefore do not have to fulfill its requirements.

Some product categories are excluded because they are already covered by other sector-specific regulation (e.g. certain medical devices, aviation, automotive, defense).

As can be seen, the aim is to increase cybersecurity within the European Union. The new regulation applies in all EU Member States and will be implemented gradually.

Timeline & Legal Effect

The CRA entered into force on 10 December 2024. There is a transition / compliance period: the full requirements become applicable by 11 December 2027 for new products.

Starting 11 June 2026, the Conformity Assessment Bodies can assess the fulfillment of the requirements.

Reporting of vulnerabilities and security incidents starts on 11 September 2026.

[Figure: CRA timeline, including the milestones above and the role of Conformity Assessment Bodies (CABs). Source: BSI]

Key Requirements & Obligations

For manufacturers, importers, or distributors of in-scope products, the CRA demands:

Secure-by-design and secure-by-default

During design and development, implement baseline cybersecurity controls (minimizing attack surface, secure defaults, applying cryptography, access control, integrity protection, etc.).

If you design or manufacture hardware or software intended for the EU market — start including security early: threat modelling, secure defaults, update mechanisms, patch management, SBOM (software-bill-of-materials) for components, documentation.
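
The CRA does not mandate a specific SBOM format, but CycloneDX and SPDX are the common choices. Here is a minimal sketch of a CycloneDX-style SBOM skeleton assembled in Python; the component values are invented for illustration:

```python
import json

# Minimal CycloneDX-style SBOM skeleton; the values are illustrative.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "openssl",
            "version": "3.0.13",
            "purl": "pkg:generic/openssl@3.0.13",  # package URL identifier
        }
    ],
}

print(json.dumps(sbom, indent=2))
```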

Lifecycle security

Maintain security across the lifecycle — through production, deployment, maintenance, updates (patches), and eventual decommissioning.

Prepare to collect and maintain documentation of the build, supply chain components, update/maintenance history, and test results for many years.

Vulnerability & incident reporting

If a product is affected by an “actively exploited vulnerability” or a “severe security incident”, the manufacturer must report promptly (early warning within 24 h, full notification within 72 h, final report within defined timeframes) via the CRA Single Reporting Platform.
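
Since the deadlines are anchored to the moment the manufacturer becomes aware of the issue, they can be computed mechanically. A trivial sketch; the final-report window varies by case, so it is left out:

```python
from datetime import datetime, timedelta, timezone

# Moment the manufacturer became aware of the actively exploited
# vulnerability; the timestamp is an example value.
aware_at = datetime(2027, 1, 4, 9, 30, tzinfo=timezone.utc)

early_warning_due = aware_at + timedelta(hours=24)      # CRA early warning
full_notification_due = aware_at + timedelta(hours=72)  # CRA full notification

print("early warning due:", early_warning_due.isoformat())
print("full notification due:", full_notification_due.isoformat())
```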

For software vendors — ensure that update/patch infrastructure is robust and built in, and that notification processes for vulnerabilities are in place.

Documentation & traceability

Maintain technical documentation, data inventories and evidence of security measures for a defined period (often many years) after placing the product on the market.

CE-marking with security

Products that comply must carry the CE-mark, indicating conformity with the CRA’s cybersecurity requirements — similar to CE marking for safety or environmental compliance.

For buyers/customers — expect CE-mark + transparency regarding security posture. Choose vendors who commit to long-term patching and vulnerability response.

Conformity assessments for higher-risk products

While many products (roughly 90%) fall under a “default” tier and can be self-assessed by manufacturers, certain more critical or important product types (e.g. firewalls, security modules, intrusion detection systems, certain embedded systems) may require third-party assessment before being placed on market.

Why It Matters

The CRA establishes a common, EU-wide baseline for cybersecurity of digital products. This helps avoid fragmentation where different member states might otherwise have different rules. It forces manufacturers and vendors to adopt security by default + lifecycle security, rather than treating cybersecurity as an optional afterthought. This helps reduce the attack surface and improves resilience against cyber threats.

It increases transparency for consumers and businesses: when they buy a product with digital elements, they can expect a baseline of security and support — including updates and vulnerability management.

For vendors and developers — in the enterprise, embedded, IoT or consumer space — it’s a legal obligation. Non-compliance could lead to regulatory consequences, and non-compliant products will not be allowed on the EU market once the deadlines lapse.

 

CRA Product Classification

Criteria & Examples

The CRA divides “products with digital elements (PDEs)” into four classification tiers. Classification drives what conformity assessment, certification, and compliance rigour you must apply.

Default
When a product is placed here: Products that are not listed in the “Important” or “Critical” annexes — i.e. with no particularly sensitive cybersecurity function and no high risks associated with compromise.
Typical product examples*: Many consumer devices and software: smart toys, basic IoT devices, simple smart-home equipment, non-security-critical apps, common consumer electronics.

Important – Class I
When a product is placed here: PDEs that provide a cybersecurity-relevant function (authentication, access control, network access, system functions) but whose compromise would carry a moderate risk (less than Class II).
Typical product examples*: Identity management systems, privileged-access software or hardware (e.g. access readers), standalone/embedded browsers, password managers, VPN clients, network management tools, operating systems, microcontrollers/microprocessors with security-related functions, routers, modems, switches.

Important – Class II
When a product is placed here: PDEs whose function involves a significant cybersecurity risk, or whose compromise could have a wide or severe impact, especially on many other systems — higher criticality than Class I. For these, third-party conformity assessment is mandatory.
Typical product examples*: Firewalls, intrusion detection/prevention systems (IDS/IPS), virtualisation/hypervisor/container runtime systems, tamper-resistant microprocessors/microcontrollers, industrial-grade network/security systems.

Critical
When a product is placed here: PDEs with cybersecurity-related functionality whose compromise could disrupt or control a large number of other products, critical infrastructure, supply chains or sensitive services. These must either obtain an EU cybersecurity certificate (per the relevant scheme) or undergo strict third-party assessment.
Typical product examples*: Hardware security modules (“security boxes”), smart meter gateways, smartcards/secure elements, secure cryptoprocessing hardware — devices central to critical infrastructure, secure identity, secure communication or supply chain security.

* These examples reflect currently published annex examples and guidance. Regulatory technical specification updates (e.g. by the European Commission) may refine or expand the lists.

 

Assessment & conformity requirements per class

Below are examples of software products affected by the Cyber Resilience Act, classified into the CRA categories:

  • Default Category – non-critical, low inherent risk

  • Important Class I – higher exposure, widely deployed, could be abused at scale

  • Important Class II – products with elevated security relevance, including security software and products in Annex III

  • Critical – core components of cybersecurity, identity, encryption, or essential network infrastructure

These classifications follow the CRA’s conceptual tiers, not an official certification list, because exact classification depends on the manufacturer’s intended use and applicability of Annex III.

Examples of Software Products Classification

Disclaimer: this is my current understanding of products with digital elements (PDEs). No official list of product categories has been published, or at least I did not find one.

This list was created with the help of AI, and there is no guarantee that it is complete or correct.

 

  • CRM Platforms (Salesforce, HubSpot, MS Dynamics) – Default: general business software; no direct security function.
  • Blogging/CMS Platforms (WordPress, Ghost, Drupal) – Default: consumer and enterprise web software; not security-critical by default.
  • Office Productivity Tools (LibreOffice, MS Office) – Default: widely used but not security components.
  • Developer Tools (IDEs, build systems) – Important Class I: used in software supply chains; compromise impacts downstream.
  • Cloud Management Consoles (AWS CLI tools, Azure Portal extensions) – Important Class I: access to infrastructure; security implications.
  • Antivirus / Endpoint Protection (CrowdStrike, Defender, Bitdefender) – Important Class II: security products explicitly listed under risk-sensitive categories.
  • EDR/XDR Platforms (SentinelOne, Trellix, Microsoft XDR) – Important Class II: security monitoring and threat response capabilities.
  • Firewalls, software-based (pfSense, OPNsense, Cisco, Juniper) – Important Class II: security enforcement components.
  • VPN Clients (OpenVPN Client, WireGuard clients) – Important Class II: encryption and secure communications; directly covered.
  • Identity & Access Software (SSO, MFA clients, IdP agents) – Critical: core identity systems; high systemic impact.
  • Key Management & Crypto Libraries (OpenSSL, libsodium) – Critical: cryptographic primitives/implementations; part of critical components.
  • Secure Configuration Agents (MDM agents, compliance agents) – Important Class II: affect system posture and policy enforcement.
  • Network Monitoring / SIEM (Splunk, Elastic, QRadar) – Important Class II: security event analysis and detection.
  • Container Security Tools (Aqua, Twistlock) – Important Class II: protect containerized workloads; tied to infrastructure security.

 


From Idea to Proof of Concept to MVP – 3 article series

This is a developer-focused guide in three parts to evolving code, architecture, and processes in order to turn a raw concept into a usable product. This process is one of the hardest parts of software development.

Teams often jump into implementation too early, or they build something polished before testing whether the underlying assumptions hold.

A structured flow—Idea → Proof of Concept (POC) → Minimum Viable Product (MVP)—keeps this journey predictable and reduces waste.

Each stage exists for a specific reason, and each stage demands a different mindset about code quality, design rigor, and security.
For developers, this is also a shift in how code is written, reused, refactored, and prepared for production.
This article explains the journey from the perspective of engineering teams, with practical backend and frontend examples and a clear separation of security activities.

The Idea

A raw concept describing a problem and a possible technical direction. It has no validated assumptions.

At this point, teams focus on understanding why the problem matters and what a potential solution could look like. No production-ready code exists yet.

Read the full article: From Idea to Proof of Concept to MVP: The Idea stage (1/3)

The Proof of Concept (POC)

A disposable implementation created to validate one or two critical assumptions. The focus is feasibility, not quality.

The POC answers narrow engineering questions such as: Can this API be used to implement the idea? or Can the frontend render this interaction reliably?

Code is expected to be thrown away or heavily rewritten later.

Read the full article: From Idea to Proof of Concept to MVP: The POC stage (2/3) .

The Minimum Viable Product (MVP)

A functional, small-scope product that solves a real user need with the minimum set of features.

Unlike a POC, the MVP requires maintainable code, basic architecture, observability, initial security measures, and repeatable engineering processes.

It is the first version that can be deployed and measured with real users.

Read the full article: From Idea to Proof of Concept to MVP: The Minimum Viable Product – MVP (3/3)


From Idea to Proof of Concept to MVP: The Minimum Viable Product – MVP (3/3)

We continue the series of 3 articles with the third one, about the Minimum Viable Product (MVP).

Here is the first article in the series, From Idea to Proof of Concept to MVP: The Idea stage (1/3), and the second article, From Idea to Proof of Concept to MVP: The POC stage (2/3).

3. The Minimum Viable Product (MVP)

Once the team has validated feasibility, the work shifts to building a usable, reliable product with a minimal but complete set of features.
The MVP is the first version that serves real users and collects real feedback.

Code quality, architecture, and processes now matter because the MVP becomes the foundation for all future iterations.

Purpose and Scope

The MVP implements the core value with enough stability, scalability, and security to run in production.
It does not include every possible feature—only the essentials—but it must be well-engineered.

Inputs and Outputs

Inputs include the validated POC, UX designs, refined requirements, and mandatory security needs.
Outputs include a deployable product, operational metrics, user feedback, and a backlog for enhancements.

Actors

The full engineering team is involved: backend, frontend, QA, DevOps, Security, UX, Product, and Operations.
Cross-team communication becomes essential, because making the MVP stable requires alignment across all disciplines.

Engineering Expectations at This Stage

Code Quality and Reuse

Developers now take the core logic from the POC and turn it into production-ready modules.
This involves consistent naming, clear responsibilities, robust error handling, schema validation, and test coverage.
The team extracts reusable libraries, shared components, or service interfaces to avoid future duplication.
The MVP becomes the beginning of a long-term codebase.

Required Technical Changes

  • Transform API drafts into versioned, documented REST or GraphQL interfaces.

  • Move throwaway scripts into properly structured modules or services.

  • Add input validation, sanitization, and schema enforcement.

  • Introduce unit tests, integration tests, and E2E tests.

  • Replace temporary mock data with real data pipelines.

  • Add observability: logs, metrics, traces, dashboards.

  • Integrate with continuous delivery pipelines.

Process Evolution

The team adopts formal processes:

  • CI/CD, code reviews with defined guidelines, branching strategies, automated testing, deployment checklists, and observability standards.
  • Documentation becomes mandatory because the product is no longer experimental.

Backend Example

The recommendation engine now becomes a stable service.
The POC endpoint turns into a versioned API with full request validation, structured logging, retry logic, error mapping, and test coverage.
The integration with the ML service now uses proper authentication, rate limiting, and timeouts.
Monitoring dashboards track latency, throughput, and error rates.
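
To make the contrast with the POC version concrete, here is a minimal sketch of what the endpoint could look like at MVP stage, using FastAPI and Pydantic. The route name, request model, and limits are invented for illustration:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field

app = FastAPI()

class RecommendationRequest(BaseModel):
    # Schema enforcement replaces the POC's unvalidated payload.
    user_id: str = Field(min_length=1, max_length=64)
    limit: int = Field(default=10, ge=1, le=50)

def fetch_from_ml_service(user_id: str, limit: int) -> list[str]:
    """Stand-in for the authenticated, rate-limited ML client."""
    return [f"item-{i}" for i in range(limit)]

@app.post("/v1/recommendations")  # versioned, documented route
def recommendations(req: RecommendationRequest):
    try:
        items = fetch_from_ml_service(req.user_id, req.limit)
    except TimeoutError:
        # Error mapping: backend failures become well-defined API errors.
        raise HTTPException(status_code=504, detail="recommendation backend timed out")
    return {"user_id": req.user_id, "items": items}
```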

Frontend Example

The rough POC component becomes part of the application’s design system.
It uses reusable UI components, handles loading and error states gracefully, and integrates with the global state store.
Unit tests confirm component behavior, and end-to-end tests validate the full user flow.
Telemetry captures user interactions so the team can validate assumptions after launch.

Security

Security now moves from conceptual and experimental checks to real, enforceable controls.
This includes:

  • Authentication and authorization integration

  • Input validation and output encoding

  • Protection against injection vulnerabilities

  • HTTPS enforcement and secure cookie settings

  • Audit logging

  • Secrets management

  • Data-handling guarantees for sensitive information

The MVP does not need every advanced security feature, but it must meet the minimum standards required for production—especially if it processes personal or regulated data.

Here is the first article in the series, From Idea to Proof of Concept to MVP: The Idea stage (1/3), and the second article, From Idea to Proof of Concept to MVP: The POC stage (2/3).


From Idea to Proof of Concept to MVP: The POC stage (2/3)

We continue the series of 3 articles with the second one, about the Proof of Concept (POC).

Here is the first article in the series, From Idea to Proof of Concept to MVP: The Idea stage (1/3) .

2. The Proof of Concept (POC)

The POC is where the team tests a specific risky assumption that could make or break the idea.
The aim is not to build a usable product but to verify that a key technical, architectural, or data-processing challenge is solvable.

POC code is intentionally imperfect. It moves fast and cuts corners. However, it should still be written in a way that reduces friction when extracting reusable parts for the MVP.

What Defines a POC

A POC is short-lived and narrowly focused. It often tests only one or two questions:

  • Can we integrate with this external system?
  • Can this algorithm scale?
  • Can the frontend render a dynamic timeline with the required performance?

The purpose is to generate a clear yes/no answer, not to produce a polished outcome.

Inputs and Outputs

Inputs include the problem statement and hypothesis defined in the idea stage.
Outputs include a working demonstration, documentation of findings, architectural constraints, and a clear decision: continue, pivot, or stop.

Actors

Developers implement the experiment.
Tech leads help evaluate results.
QA may help with validation but does not perform full product testing.
Security engineers review risks that appear during the experiment.

Engineering Expectations at This Stage

Code and Reuse

POC code is disposable, but that does not mean it should be sloppy. Developers should write code that can be extracted later without major re-architecture. This typically means:

  • Avoid hardcoded credentials, external URLs, or secrets.

  • Organize files in a simple but meaningful structure.

  • Implement the core logic in isolated modules instead of burying it inside an ad-hoc script.

  • Use interfaces or adapters to make future dependency injection easier.

The mindset should be: “This code may be thrown away, but if it works well, we want to reuse pieces of it.”
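
As a minimal sketch of the last two points, the POC can hide the external dependency behind a small adapter, so that injecting a real implementation at MVP stage is painless. The interface and names below are invented for illustration:

```python
from typing import Protocol

class RecommendationBackend(Protocol):
    """Adapter interface; POC and future MVP code both target this."""
    def recommend(self, user_id: str) -> list[str]: ...

class FakeBackend:
    """Throwaway POC implementation with canned data."""
    def recommend(self, user_id: str) -> list[str]:
        return ["demo-item-1", "demo-item-2"]

def render_recommendations(backend: RecommendationBackend, user_id: str) -> None:
    # Core logic depends on the interface, not on the fake implementation.
    for item in backend.recommend(user_id):
        print(item)

render_recommendations(FakeBackend(), "user-42")
```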

What Must Change Later

Before integrating POC code into the MVP, the team will need to refactor it: add error handling, consistent logging, tests, and proper abstractions.
In other words, the POC shows the core idea works, but the MVP requires turning this into real engineering.

Process Evolution

The POC often introduces small process steps such as:

  • Lightweight code reviews

  • A temporary branch in the repository

  • Simple build scripts to allow teammates to run the demonstration

This is still not production engineering. CI/CD pipelines and test automation usually come only at the MVP stage.

Backend Example

Suppose the team is building a new recommendation engine.
The backend POC might implement a single endpoint that forwards a request to an external ML service and measures latency and response quality.
Logging might be minimal, validation might be non-existent, and error handling might be crude—but the team learns whether the external ML service meets the performance requirements.
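
A sketch of what such a probe might boil down to, crude by design but enough to answer the feasibility question. The URL and payload are placeholders:

```python
import time
import requests

ML_SERVICE_URL = "https://ml.example.internal/recommend"  # placeholder

def probe(payload: dict, runs: int = 20) -> None:
    """Crude POC check: is the external ML service fast enough?"""
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        resp = requests.post(ML_SERVICE_URL, json=payload, timeout=5)
        latencies.append(time.perf_counter() - start)
        resp.raise_for_status()  # no real error handling: this is a POC
    latencies.sort()
    print(f"p50={latencies[len(latencies) // 2]:.3f}s  max={latencies[-1]:.3f}s")

probe({"user_id": "test-user"})
```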

Frontend Example

A frontend POC might involve building a rough React component that displays personalized recommendations using mock data.
The component may not follow the design system, may not handle loading states cleanly, and may ignore error cases.
The goal is to check whether the UI interaction model feels intuitive and whether the state updates behave as expected.

Security

Security engineers examine how the POC handles sensitive data, even if the handling is mocked.
They validate risky paths such as authentication flows, data transformation logic, or external integrations.
The POC must identify whether the solution will require additional compliance measures, encrypted storage, or stricter authentication schemes.
This becomes a mandatory input for the MVP.


From Idea to Proof of Concept to MVP: The Idea stage (1/3)

This is a developer-focused guide in three parts to evolving code, architecture, and processes in order to turn a raw concept into a usable product. This process is one of the hardest parts of software development.

Teams often jump into implementation too early, or they build something polished before testing whether the underlying assumptions hold.

A structured flow—Idea → Proof of Concept (POC) → Minimum Viable Product (MVP)—keeps this journey predictable and reduces waste.

Each stage exists for a specific reason, and each stage demands a different mindset about code quality, design rigor, and security.
For developers, this is also a shift in how code is written, reused, refactored, and prepared for production.
This article explains the journey from the perspective of engineering teams, with practical backend and frontend examples and a clear separation of security activities.

Legend

Idea
A raw concept describing a problem and a possible technical direction. It has no validated assumptions.

At this point, teams focus on understanding why the problem matters and what a potential solution could look like. No production-ready code exists yet.

Proof of Concept (POC)
A disposable implementation created to validate one or two critical assumptions. The focus is feasibility, not quality.

The POC answers narrow engineering questions such as: Can this API be used to implement the idea? or Can the frontend render this interaction reliably?

Code is expected to be thrown away or heavily rewritten later.

Minimum Viable Product (MVP)
A functional, small-scope product that solves a real user need with the minimum set of features.

Unlike a POC, the MVP requires maintainable code, basic architecture, observability, initial security measures, and repeatable engineering processes.

It is the first version that can be deployed and measured with real users.

Transition: Idea → POC
The transition tests feasibility. Only the highest-risk technical assumptions are validated. Success means the idea has enough technical grounding to justify investment.

Transition: POC → MVP
The transition focuses on turning validated feasibility into a real product. Teams refactor or rebuild the POC code into production-ready components. Architecture stabilizes, security controls appear, and processes become repeatable.

1. The Idea Stage

The idea stage is where the team defines the problem and shapes the first version of the solution direction. The discussion is broad, and uncertainty is still high.

At this point the team is not writing code in any meaningful sense, but rather exploring possibilities, boundaries (technological, legal, usability related), and early risks.

The goal here is not to “design the whole system”. The goal is to understand whether the idea is worth testing and whether the technical foundation appears feasible. This prevents teams from sinking time into something that cannot work or is not worth the investment.

What Makes This Stage Unique

The idea stage is low-cost, low-risk, and exploratory. Developers participate mainly by assessing feasibility, identifying potential architectural constraints, and sketching which components might be reused later. The conversation stays intentionally shallow. Nothing should be implemented that the team cannot abandon without regret.

Inputs and Outputs

Inputs include the product need, early UX sketches, discussions about the problem, and high-level constraints such as data privacy, integration requirements, or performance expectations.
Outputs include a defined problem statement, a preliminary solution outline, and a clear hypothesis that the POC must validate.

Actors

Product managers frame the problem. Engineering leads assess feasibility. UX designers shape initial user interactions. Security architects provide early warnings about potential data-handling or compliance pitfalls.

Engineering Expectations at This Stage

Code and Architecture

No production code is written. If developers create anything, it is lightweight and disposable:
simple mock APIs written in Postman collections, small HTML/JS mockups, or rough OpenAPI drafts.
Nothing created at this stage is meant to be reused directly, but these drafts help teams align on concepts.

However, developers should already think about potential reuse paths.
For instance, if the solution will likely need a shared data-access layer or a reusable front-end state-management module, this is the time to name those opportunities—even if nothing is implemented.

Process Implications

The team documents assumptions, potential dependencies, and cases where reuse might save time later.
There is no review process, no CI pipeline changes, and no branching strategy decisions.
This remains a design and exploration stage.

Backend Example

A backend developer might sketch a future architecture in AWS or draft a sequence diagram showing how the system would communicate with an external payment service.
They might explore the integration constraints by reading documentation and checking rate limits, but no or very little code is produced.

Frontend Example

A frontend developer might draft wireframes and map out how new UI states could fit into existing structures.
They might also check whether existing UI components can be repurposed to avoid re-inventing layout patterns later.

Security and Privacy

Security work is limited to conceptual analysis. No real data is supposed to be used in this stage, so privacy concerns should not exist.
Security architects identify which data categories will be processed, assess whether regulatory frameworks apply, and highlight technical constraints that must be tested in the POC.
No security implementation takes place at this stage, but early awareness helps avoid blind spots later.


Delivering often in small increments with Scrum

Agile software development, particularly using Scrum, has revolutionized the way software is built and delivered.

At its core, Agile embraces iterative and incremental development, a stark contrast to traditional “waterfall” methodologies.

The primary objective is to deliver working software frequently and in small increments, ensuring continuous feedback, adaptability, and rapid value delivery.

However, we know from experience that this is not always the case, and if you have worked long enough in the software development industry, you know that usually, it is not the case.

I wrote before about this and the articles were well read (on LinkedIn), but I still see the need to summarize those articles:

Guide for delivering frequently software features that matter (series) #1/2: the Pillars of successful frequent delivery

Guide for delivering frequently software features that matter (series) #2/2: Challenges and the path forward

 

Key principles and practices

In order to deliver frequently in small increments, you need to implement several key principles and practices:

Decomposition and User Stories

Break down large features or requirements into smaller, manageable user stories.
A well-formed user story describes a desired functionality from the perspective of an end-user, following the format: “As a [type of user], I want [some goal] so that [some reason].”
These stories are then estimated and prioritized.

Time-boxed Sprints

Scrum operates in short, fixed-length iterations called “sprints,” typically 2-4 weeks long.
Each sprint has a specific goal and a defined set of user stories to be completed.
The time-box ensures a consistent rhythm of delivery and prevents scope creep within an iteration.

Definition of Done (DoD)

A clear and shared “Definition of Done” is crucial.
This defines the criteria that a user story must meet to be considered complete, including coding, testing, documentation, and integration.
This ensures quality and prevents partially finished work from accumulating.

 

Cross-functional Teams

Scrum teams are self-organizing and cross-functional, meaning they possess all the skills necessary to take a user story from conception to delivery.
This reduces dependencies and streamlines the development process.

 

Frequent Feedback Loops

Scrum incorporates several built-in feedback loops:

  • Daily Scrums: Short daily meetings where the team synchronizes, discusses progress, and identifies impediments.
  • Sprint Demo: At the end of each sprint, the team demonstrates the “potentially shippable increment” to stakeholders, gathering feedback for future sprints.
  • Sprint Retrospectives: The team reflects on the past sprint to identify what went well, what could be improved, and creates actionable plans for the next sprint.

Prioritization and Backlog Refinement

The Product Owner is responsible for maintaining and prioritizing the Product Backlog, a living list of all desired features.
Regular “backlog refinement” sessions ensure that upcoming user stories are well-understood, estimated, and ready for development.

 

Now, if you think that doing all this solves all your problems, well, you are not entirely wrong, but not entirely right either. 🙂

As with any methodology, there are challenges.

Challenges and Solutions

Large, Undifferentiated Requirements

Stakeholders often present high-level, monolithic requirements that are difficult to break down into small, shippable increments. This can lead to long development cycles and delayed feedback.

Solutions

  • Invest in User Story Mapping: Collaboratively map out the user’s journey and identify smaller, deliverable “slices” of functionality.
  • Employ techniques like “Splitting User Stories”: Learn patterns and techniques to effectively break down large stories into smaller, valuable pieces (e.g., by workflow steps, by data type, by role).
  • Product Owner Focus: The Product Owner plays a critical role in collaborating with stakeholders to refine and decompose requirements, ensuring they are “INVEST” (Independent, Negotiable, Valuable, Estimable, Small, Testable).

Technical Debt and Integration Issues

Rapid delivery can sometimes lead to accumulating technical debt (shortcuts taken for speed) and integration headaches if not managed carefully.

This can slow down future development and make small increments harder to achieve.

Solutions

  • Prioritize Technical Excellence: Bake in time for refactoring, code quality, and automated testing within each sprint. The Definition of Done should include these aspects.
  • Continuous Integration and Continuous Delivery (CI/CD): Implement robust CI/CD pipelines to automate builds, tests, and deployments, ensuring that the software is always in a releasable state.
  • Pair Programming and Code Reviews: collaborative development and peer review usually catch issues early and maintain code quality, but they also slow down delivery. Use with care.

Lack of Clear Prioritization

Without a clear and stable Product Backlog and a Product Owner empowered to make decisions, teams can struggle with shifting priorities, leading to wasted effort and delayed delivery.

Solutions

  • Empower the Product Owner: Ensure the Product Owner has the authority and understanding to prioritize the Product Backlog effectively, balancing business value, risk, and dependencies.
  • Regular Backlog Refinement: Conduct frequent and collaborative backlog refinement sessions to ensure upcoming stories are well-understood and ready for development.
  • Transparency: Make the Product Backlog visible and accessible to everyone, fostering understanding and aligning expectations.

 

External Dependencies and Silos

In larger organizations, external dependencies (e.g., other teams, external vendors, compliance departments) or internal silos can hinder a team’s ability to deliver independently and frequently.

Solutions

  • Active Stakeholder Management: The Product Owner and Scrum Master should proactively identify and manage external dependencies, facilitating communication and coordination.
  • Cross-team Collaboration: Encourage regular communication and collaboration between teams, potentially through “Scrum of Scrums” or other scaling frameworks if applicable.
  • Shift to a “Value Stream” Mindset: Focus on optimizing the flow of value across the entire organization, identifying and removing bottlenecks that span multiple teams or departments.


Navigating AI Standards and Regulations

Note: This post is written with a lot of help from AI, used to summarize the standards mentioned below.

 

Artificial intelligence (AI) is reshaping industries, but it also brings new risks.

From security vulnerabilities to compliance challenges, organizations must balance innovation with responsibility.

New standards have been created and newer ones are emerging to guide this effort, most notably ISO/IEC 42001, ISO/IEC 22989, the NIST AI RMF and the EU AI Act.

Together, they define how we should understand, manage, and regulate AI.

 

The Standards: ISO/IEC 42001, ISO/IEC 22989, NIST AI Risk Management Framework (AI RMF)

ISO/IEC 22989 focuses on concepts and terminology. By standardizing the language around AI, it ensures consistency in communication between developers, regulators, and policymakers. It provides a shared foundation for technical and strategic discussions, making it easier to align projects and compliance efforts.

 

ISO/IEC 42001 sets the framework for an Artificial Intelligence Management System (AIMS). As if we didn’t have enough Management Systems (ISMS, CSMS, DRMS, etc.), now we have AIMS.

It provides requirements for organizations to govern AI responsibly throughout its lifecycle.

Much like ISO 27001 for information security, this standard enables organizations to implement repeatable processes, assign roles, manage risks, and continuously improve their AI practices.

In short, ISO/IEC 22989 tells us how to talk about AI, while ISO/IEC 42001 tells us how to manage it.

NIST AI Risk Management Framework (AI RMF) is developed by the National Institute of Standards and Technology.  It gives guidance on managing the risks of AI systems: trustworthiness, safety, fairness, explainability, etc.

NIST also works on “crosswalks” linking the AI RMF to international standards like ISO, OECD guidelines, etc.

 

The Regulation: EU AI Act

The EU AI Act goes beyond voluntary standards. It is a regulation with binding legal requirements for AI systems placed on the EU market.

The Act classifies AI systems by risk:

  • Unacceptable risk systems (e.g., manipulative or exploitative applications) are prohibited.
  • High-risk systems (e.g., AI in healthcare, critical infrastructure, recruitment) must meet strict conformity assessments, documentation, and testing requirements.
  • Limited and minimal risk systems face transparency obligations or no specific restrictions.

Unlike ISO standards, which are voluntary, the EU AI Act will be legally enforced. Non-compliance may lead to heavy fines and product bans.
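
To make the tiers tangible, here is a toy sketch that tags example use cases with the Act’s risk tiers. The mapping is a simplification for illustration only, not legal advice:

```python
# Toy illustration of the EU AI Act risk tiers; real classification
# depends on the Act's annexes and legal analysis, not a lookup table.
RISK_TIERS = {
    "social scoring of citizens": "unacceptable (prohibited)",
    "CV screening for recruitment": "high-risk",
    "customer service chatbot": "limited risk (transparency duties)",
    "spam filtering": "minimal risk",
}

for use_case, tier in RISK_TIERS.items():
    print(f"{use_case}: {tier}")
```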

 

Comparing Standards and Regulation

  • ISO/IEC 22989 provides consistent terminology.
  • ISO/IEC 42001 defines organizational governance for AI.
  • NIST AI RMF provides guidance on managing the risks of AI systems: trustworthiness, safety, fairness, explainability.
  • EU AI Act imposes legally binding obligations at the product and deployment level.

While ISO and NIST standards are process-driven and supportive, the EU AI Act mandates specific outcomes.

Organizations can use ISO/IEC 42001 to establish governance processes that make compliance with the EU AI Act easier, but certification alone does not replace the legal requirements.

U.S. standards tend to be voluntary or guidance-based, not binding across all states or businesses, unlike the EU AI Act. There is no single federal law with comprehensive AI regulation yet; instead, there is a patchwork of executive orders, agency actions, state laws, and voluntary standards. The U.S. places strong emphasis on risk management frameworks, public-private collaboration, innovation, and aligning with international standards.

In the U.S. there are further AI-related efforts, such as the Center for AI Standards and Innovation (CAISI) and various initiatives and plans for AI systems. There are also state laws and regulations that require some large AI model developers to publicly disclose safety protocols and report certain kinds of risks or incidents (California SB 53).

 

Key Risks Introduced by AI

  1. Model drift and performance risk — AI systems degrade over time, causing hidden failures.
  2. Bias and discrimination — Training data can produce unfair outcomes, raising legal and ethical issues.
  3. Lack of explainability — Black-box models hinder audits, accountability, and trust.
  4. Data protection risks — Models may leak or memorize personal data, creating privacy concerns.
  5. Security vulnerabilities — Adversarial attacks, poisoning, and prompt injection threaten system integrity.
  6. Supply chain dependency — Reliance on third-party models introduces hidden weaknesses.
  7. Regulatory non-compliance — Misclassifying risk or skipping assessments can result in fines and reputational damage.

How Standards Address These Risks

  • ISO/IEC 22989 ensures clarity in measurement and reporting.
  • ISO/IEC 42001 and the NIST AI RMF require lifecycle controls, risk assessments, monitoring, and continuous improvement.
  • EU AI Act mandates transparency, testing, and conformity assessments tailored to specific use cases.

When combined, these frameworks help organizations create trustworthy AI systems while meeting regulatory demands.

 

The Next Level of Compliance

To reach the “next level” of compliance, organizations must integrate voluntary standards and mandatory regulation into one cohesive program:

  1. Adopt common terminology using ISO/IEC 22989 across all teams.
  2. Implement an AI management system aligned with ISO/IEC 42001.
  3. Map AI products against EU risk categories and prepare compliance checklists.
  4. Generate technical evidence such as model cards, data lineage, and test results.
  5. Automate monitoring and incident response to detect model drift and adversarial attacks.
  6. Integrate privacy engineering to ensure alignment with GDPR.
  7. Secure the AI supply chain by tracking third-party components and models.
  8. Prepare for external audits and conformity assessments, leveraging ISO processes as supporting evidence.

Compliance should not be treated as a static checklist. The future of responsible AI lies in continuous monitoring, automated governance, and embedding compliance into MLOps pipelines.
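
For step 4, the model card is the most code-adjacent artifact. A minimal sketch of one as a plain data structure; the fields loosely follow common model-card practice, and every value is invented:

```python
import json

# Hypothetical model card; fields and values are illustrative only.
model_card = {
    "model": "credit-risk-scorer",
    "version": "2.1.0",
    "intended_use": "Pre-screening of consumer credit applications",
    "out_of_scope": ["Employment decisions", "Insurance pricing"],
    "training_data": {"source": "internal-loans-2019-2023", "rows": 1_200_000},
    "evaluation": {"auc": 0.87, "demographic_parity_gap": 0.03},
    "limitations": ["Performance degrades for applicants under 21"],
    "risk_tier_eu_ai_act": "high-risk",  # per the classification above
}

print(json.dumps(model_card, indent=2))
```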

Conclusions

AI standards and regulations are converging to create a new compliance landscape.

ISO/IEC 22989 provides the vocabulary, ISO/IEC 42001 offers governance, and the EU AI Act enforces legal obligations.

Organizations that align with all three will not only reduce risk but also strengthen trust in their AI systems. The next level of compliance means going beyond certification—building AI practices that are transparent, secure, and continuously monitored.

The EU provides a strong, comprehensive, binding regulatory framework for AI with clear risk categories, prohibited uses, and enforcement.

The U.S. currently relies more on existing laws, executive orders, and sectoral regulation, giving more flexibility but less predictability.

For global players, achieving dual compliance is increasingly necessary. The trend suggests U.S. regulation will become stronger over time, potentially drawing from EU models.

 


Policy vs Standard vs Procedure: why, what, how

Ever wondered what the differences between these terms are?

We use them in GRC very often, but we rarely think about what they mean. Over time this stretches the concepts, so that their meanings overlap to a certain degree.

 

A Policy is a high-level, mandatory statement of principles and intent.
A Standard is a mandatory, specific requirement that defines what is needed to comply with a policy.
A Procedure is a detailed, step-by-step set of instructions on how to implement a standard or fulfill a policy.
Policies set goals, standards define the required outcomes, and procedures provide the detailed roadmap to achieve them, forming a hierarchical structure within an organization.

Policy

What is it
A high-level, broad statement of principles, intent, or requirements designed to guide decisions and achieve outcomes.
Purpose
To establish strategic goals, the intent, to support an organization’s mission, comply with laws, or minimize risk.
Answers
Describes the Why: why must something be done.
Mandatory
Yes, policies are mandatory and define why something must be done. Because they define the need rather than the implementation, they rarely change and are not negotiable.
Example
An IT Security Policy that states the organization will protect sensitive data from unauthorized access.

Standard 

What is it
A mandatory, specific technical requirement or rule that provides concrete, measurable details for policy compliance.
Purpose
To provide the specific rules, metrics, and technical configurations necessary to make policies meaningful and effective.
Answers
Describes the What: what must be done to implement the policy.
Mandatory
Yes, standards are mandatory and define specific configurations, timelines, or processes. Because they describe the implementation, they change with the dynamics of the specific industry.
Example
An IT Security Standard for data encryption, required by a policy stating that the organization will protect sensitive data from unauthorized access. The standard defines which encryption algorithm will be used, when to use it, what kind of data must be encrypted, and who is responsible for implementing it.

Procedure

What is it 
A detailed, step-by-step set of instructions outlining the specific actions to be performed to implement a standard or policy. 

Purpose
To provide clear, actionable guidance on how to execute a task and to ensure consistent, repeatable, measurable results. It also defines Who should do something and When.

Answers
Describes the How: how something must be done, as defined by the standard or directly by the policy.
Mandatory
Yes, procedures are mandatory and specify the exact steps an employee must follow. Because they define in detail how to implement a standard or policy, they change as needed.

Example
A step-by-step instruction set on how to encrypt data in a database, a hard drive, emails and other types of information.

How They Work Together (Hierarchically) 

  1. Policy (The Goal): the high-level statement of intent, like an IT security policy.
  2. Standard (The Rule): the specific requirements that support the policy, such as password complexity standards.
  3. Procedure (The Steps): the detailed instructions on how to follow the standard, like the steps to change a password.

This top-down structure ensures that policies are actionable and that goals are met through consistent, documented processes.

What about Guidelines?

Guidelines are at the bottom, offering recommended and flexible support for the entire framework. They are optional and usually accompany procedures and standards.



Comparing Annex A in ISO/IEC 27001:2013 vs. ISO/IEC 27001:2022

I wrote this article ages ago, comparing briefly the Annex A in the two versions of the standard: https://www.sorinmustaca.com/annex-a-of-iso-27001-2022-explained/

But, I feel that there is still need to detail a bit the changes, especially that now more and more business are forced to re-audit for the newer standard.

 

Overview of Annex A

Annex A groups its controls into categories, each addressing specific aspects of information security management within an organization. These categories cover policies, procedures, and technical and organizational measures designed to safeguard critical assets, prevent unauthorized access, and mitigate security threats.

The primary purpose of Annex A controls is to guide organizations in selecting appropriate security measures based on their specific context and identified risks. They are not mandatory requirements but serve as best practices for information security management.

Many auditors and practitioners recommend not focusing exclusively on these controls, because on their own they will not get you through the audit. I agree: do not rely exclusively on them, but use them as a starting point.

 

  • 2013 edition:

    • 114 controls

    • Grouped in 14 control domains (e.g., A.5 Information Security Policies, A.6 Organization of Information Security, etc.).

    • Numbering is A.x.y.z.

  • 2022 edition:

    • 93 controls (reduced by consolidation, merging, and restructuring).

    • Grouped in 4 control themes:

      • Organizational (37 controls)

      • People (8 controls)

      • Physical (14 controls)

      • Technological (34 controls)

    • Numbering is A.5–A.8 only, reflecting the 4 control themes.

 

New Controls Introduced in 2022

ISO/IEC 27001:2022 introduced 11 new controls to address modern risks. Each expands the ISMS scope to include practices that were not explicitly covered in the 2013 edition.

I personally love this addition, because now the standard is in sync with the reality out there. I especially love A.8.28 Secure Coding, which has been ignored for far too long, despite the evidence that many major exploits have been caused by not respecting secure coding practices.

  1. A.5.7 Threat Intelligence

    • Requires collection and analysis of threat intelligence.

    • Sources: security vendors, government advisories, industry ISACs, internal incident data.

    • Outcome: anticipate and defend against emerging attack methods.

  2. A.5.23 Information Security for Use of Cloud Services

    • Establishes rules for assessing and managing cloud providers.

    • Covers due diligence, contracts, data residency, shared responsibility.

    • Goal: ensure cloud adoption is secure and consistent.

  3. A.5.30 ICT Readiness for Business Continuity

    • Ensures IT and communications systems are resilient to disruptions.

    • Focus: backup, recovery testing, failover, disaster readiness.

    • Bridges ISMS with business continuity (ISO 22301).

  4. A.7.4 Physical Security Monitoring

    • Monitoring of physical facilities using CCTV, access logs, alarms, motion sensors.

    • Detects unauthorized access and environmental hazards.

    • Complements access restriction controls.

  5. A.8.9 Configuration Management

    • Requires baseline configurations for systems and software.

    • Covers patching, secure hardening, prevention of unauthorized changes.

    • Reduces risks from misconfigurations.

  6. A.8.10 Information Deletion

    • Secure and verified erasure of data when no longer needed.

    • Applies to disks, mobile devices, cloud storage, and backups.

    • Prevents data recovery by unauthorized parties.

  7. A.8.11 Data Masking

    • Techniques to obscure sensitive information.

    • Useful in non-production environments and analytics.

    • Supports privacy requirements (GDPR, HIPAA, etc.).

  8. A.8.12 Data Leakage Prevention (DLP)

    • Deployment of technical and procedural measures to prevent data leaks.

    • Examples: DLP software, email scanning, outbound traffic filtering.

    • Helps against insider threats and accidental data loss.

  9. A.8.16 Monitoring Activities

    • Expands on logging to include continuous monitoring of systems and networks.

    • Goal: real-time detection of anomalies and policy violations.

    • Supports SOC operations and incident response.

  10. A.8.23 Web Filtering

    • Restricts or blocks access to malicious or inappropriate websites.

    • Prevents phishing, malware, and unauthorized browsing.

    • Often implemented via secure DNS or proxy gateways.

  11. A.8.28 Secure Coding

    • Mandates secure software development practices.

    • Includes developer training, code review, automated scanning, use of vetted libraries.

    • Supports DevSecOps integration and early vulnerability prevention.
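
For teams preparing a transition audit, the list above is easy to turn into a machine-checkable gap list. Below is a minimal Python sketch; the `implemented` set and the status labels are invented placeholders for your own assessment data.

```python
# The 11 controls introduced in ISO/IEC 27001:2022, as listed above.
new_controls_2022 = {
    "A.5.7":  "Threat intelligence",
    "A.5.23": "Information security for use of cloud services",
    "A.5.30": "ICT readiness for business continuity",
    "A.7.4":  "Physical security monitoring",
    "A.8.9":  "Configuration management",
    "A.8.10": "Information deletion",
    "A.8.11": "Data masking",
    "A.8.12": "Data leakage prevention",
    "A.8.16": "Monitoring activities",
    "A.8.23": "Web filtering",
    "A.8.28": "Secure coding",
}

# Hypothetical self-assessment: controls that already have evidence.
implemented = {"A.8.9", "A.8.16", "A.8.28"}

for control_id, title in new_controls_2022.items():
    status = "covered" if control_id in implemented else "GAP"
    print(f"{control_id:7} {title:48} {status}")
```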

 

Merged Controls

Some 2013 controls were consolidated to reduce duplication:

  • Logging and monitoring (A.12.4.1–A.12.4.3, 2013) merged into A.8.15 & A.8.16 (2022).

  • Cryptographic controls (A.10.1.1, A.10.1.2, 2013) merged into A.8.24 (2022).

  • Access management controls consolidated into A.5.15–A.5.18 (2022).
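
If you maintain a Statement of Applicability keyed to 2013 control numbers, the consolidations above can be expressed as a simple lookup. A minimal Python sketch, limited to the merges just listed (it is not a complete transition map):

```python
# Partial 2013 -> 2022 renumbering map based on the merges listed above
# (the A.12.4.x logging controls also feed A.8.16 Monitoring activities).
merged_2013_to_2022 = {
    "A.12.4.1": "A.8.15",  # event logging      -> Logging
    "A.12.4.2": "A.8.15",  # protection of logs -> Logging
    "A.12.4.3": "A.8.15",  # admin/op logs      -> Logging
    "A.10.1.1": "A.8.24",  # crypto policy      -> Use of cryptography
    "A.10.1.2": "A.8.24",  # key management     -> Use of cryptography
}

def migrate(control_id_2013: str) -> str:
    """Return the 2022 control ID, or the input if no merge applies."""
    return merged_2013_to_2022.get(control_id_2013, control_id_2013)

print(migrate("A.10.1.2"))  # -> A.8.24
```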

 

Removed / Reorganized Controls

No controls were truly eliminated; instead, they were rephrased or merged.

  • Example: Removal of assets (A.11.2.5, 2013) was absorbed into Storage media (A.7.10, 2022).

  • Teleworking and mobile device policies were folded into broader controls (A.6.7 Remote working and A.8.1 User end point devices).

 

Attributes in Annex A (2022)

A new classification model (“attributes”) was introduced to tag each control.

Categories include:

  • Control type: Preventive, Detective, Corrective

  • Security properties: Confidentiality, Integrity, Availability

  • Cybersecurity concepts: Identify, Protect, Detect, Respond, Recover (aligned with NIST CSF)

  • Operational capabilities: Governance, Asset management, Identity, Resilience, etc.

  • Security domains: Governance and ecosystem, Protection, Defence, Resilience

Why Attributes Matter

Attributes enable flexible mapping to frameworks like NIST CSF, CIS Controls, and especially TISAX.

  • They make ISO 27001 more practical and flexible.

  • Help you cross-map ISO 27001 controls to:

    • NIST CSF (via cybersecurity concepts)

    • CIA triad (via security properties)

    • Defense-in-depth planning (via control type)

  • Useful for gap analysis: you can check whether your ISMS is too prevention-heavy and weak on detection or recovery (see the sketch below).

  • Improve communication with stakeholders: executives, auditors, regulators, or IT operations can each view controls through the lens that matters most to them.

In simple words: Attributes are like tags in a library. They don’t change the book (control), but they let you find it faster depending on whether you search by topic, author, or year.
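
To make the library-tag analogy concrete, here is a small Python sketch of a control catalogue tagged with two attribute categories. The attribute values follow the 2022 taxonomy described above, but the choice of controls and their tags is illustrative, not copied from the standard.

```python
from collections import Counter

# A few 2022 controls tagged with two attribute categories.
controls = [
    {"id": "A.8.15", "title": "Logging",
     "control_type": "Detective",  "csf_concept": "Detect"},
    {"id": "A.8.16", "title": "Monitoring activities",
     "control_type": "Detective",  "csf_concept": "Detect"},
    {"id": "A.8.24", "title": "Use of cryptography",
     "control_type": "Preventive", "csf_concept": "Protect"},
    {"id": "A.8.28", "title": "Secure coding",
     "control_type": "Preventive", "csf_concept": "Protect"},
    {"id": "A.5.26", "title": "Response to incidents",
     "control_type": "Corrective", "csf_concept": "Respond"},
]

# Gap analysis: is the ISMS prevention-heavy, weak on detection/recovery?
by_type = Counter(c["control_type"] for c in controls)
print(by_type)
# Counter({'Detective': 2, 'Preventive': 2, 'Corrective': 1})

# Cross-mapping: every control tagged with the NIST CSF 'Detect' concept.
detect = [c["id"] for c in controls if c["csf_concept"] == "Detect"]
print(detect)  # ['A.8.15', 'A.8.16']
```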

Since TISAX is my favorite certification (ok, ok, it is a label, but bear with me here) I need to point to column P, “Reference to other standards”, where this categorization is used several times.

The reference “3.1.10” in cell P50 of the ISA (VDA) 6.0.3 catalogue decodes as:

  • 3 → attribute category: Cybersecurity Concepts

  • 1 → concept: Detect

  • 10 → control identifier

This is a mapping between control A.8.15 (Logging) and the NIST CSF Cybersecurity Concept “Detect”:

Identifier | Control code | Title
3.1.1 | A.7.X | Employee event reporting
3.1.2 | A.7.X | Information security event reporting
3.1.3 | A.5.24 | Information security incident planning/prep
3.1.4 | A.5.25 | Assessment & decision on info security events
3.1.5 | A.5.26 | Response to information security incidents
3.1.6 | A.5.27 | Learning from information security incidents
3.1.7 | A.7.4 | Physical security monitoring
3.1.8 | A.8.12 | Data leakage prevention
3.1.9 | A.8.16 | Monitoring activities
3.1.10 | A.8.15 | Logging

A.8.15 Logging -> mapping -> Cybersecurity Concept: Detect
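
The decoding above is mechanical enough to script. A hedged Python sketch, using only the table rows with unambiguous control codes; the function and variable names are invented:

```python
# Lookup transcribed from the mapping table above (ISA/VDA 6.0.3, column P);
# rows 3.1.1 and 3.1.2 are omitted because their control codes are unclear.
ISA_DETECT_MAP = {
    "3.1.3": "A.5.24", "3.1.4": "A.5.25", "3.1.5": "A.5.26",
    "3.1.6": "A.5.27", "3.1.7": "A.7.4",  "3.1.8": "A.8.12",
    "3.1.9": "A.8.16", "3.1.10": "A.8.15",
}

CSF_CONCEPTS = {1: "Detect"}  # only the concept used in this example

def decode_reference(ref: str) -> tuple[str, str, str]:
    """Split an ISA reference like '3.1.10' into its three parts:
    attribute category, cybersecurity concept, and the mapped ISO control."""
    category, concept, _identifier = ref.split(".")
    assert category == "3", "3 = Cybersecurity Concepts category"
    return ("Cybersecurity Concepts",
            CSF_CONCEPTS[int(concept)],
            ISA_DETECT_MAP[ref])

print(decode_reference("3.1.10"))
# -> ('Cybersecurity Concepts', 'Detect', 'A.8.15')
```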

This is useful for aligning ISO/IEC 27001 with NIST CSF, TISAX, ISA/IEC 62443, and others.

I think there is a lot more to write about them, perhaps in another article.

 

Summary

The table below maps each 2013 control (by domain) to its closest 2022 equivalent (by theme):

2013 Control (Domain) | 2022 Control (Theme) | Notes
A.5.1.1 Information security policy | A.5.1 Policies for information security | Mostly unchanged
A.5.1.2 Review of policies | A.5.1 Policies for information security | Merged
A.6.1.1 Roles and responsibilities | A.5.2 Information security roles and responsibilities | Direct
A.6.1.2 Segregation of duties | A.5.3 Segregation of duties | Direct
A.6.1.3 Contact with authorities | A.5.5 Contact with authorities | Direct
A.6.1.4 Contact with special interest groups | A.5.6 Contact with special interest groups | Direct
A.6.1.5 Project management | A.5.8 Information security in project management | Expanded
A.6.2.1 Mobile device policy | A.8.1 User end point devices | Merged
A.6.2.2 Teleworking | A.6.7 Remote working | Renamed/expanded
A.7.1.1 Screening | A.6.1 Screening | Direct
A.7.1.2 Terms of employment | A.6.2 Terms and conditions of employment | Direct
A.7.2.1 Management responsibilities | A.5.4 Management responsibilities | Direct
A.7.2.2 Information security awareness, education, and training | A.6.3 Information security awareness, education and training | Direct
A.7.2.3 Disciplinary process | A.6.4 Disciplinary process | Direct
A.7.3 Termination/change responsibilities | A.6.5 Responsibilities after termination or change of employment | Direct
A.8.1.1 Inventory of assets | A.5.9 Inventory of information and other associated assets | Direct
A.8.1.2 Ownership of assets | A.5.9 Inventory of information and other associated assets | Merged
A.8.1.3 Acceptable use of assets | A.5.10 Acceptable use of information and other associated assets | Direct
A.8.1.4 Return of assets | A.5.11 Return of assets | Direct
A.8.2.1 Classification of information | A.5.12 Classification of information | Direct
A.8.2.2 Labeling of information | A.5.13 Labelling of information | Direct
A.8.2.3 Handling of assets | A.5.10 Acceptable use of information and other associated assets | Merged
A.8.3.1 Management of removable media | A.7.10 Storage media | Merged
A.8.3.2 Disposal of media | A.7.10 Storage media | Merged
A.8.3.3 Physical media transfer | A.7.10 Storage media | Merged
A.9.1.1 Access control policy | A.5.15 Access control | Merged
A.9.1.2 Access to networks and network services | A.5.15 Access control | Merged
A.9.2.x User access management (all) | A.5.16 Identity management, A.5.17 Authentication information, A.5.18 Access rights | Consolidated
A.9.3 User responsibilities | A.5.17 Authentication information | Merged
A.9.4 System and application access control | A.8.3–A.8.5, A.8.18 (plus A.5.17) | Redistributed
A.10.1.1 Policy on cryptographic controls | A.8.24 Use of cryptography | Merged
A.10.1.2 Key management | A.8.24 Use of cryptography | Merged
A.11.x Physical and environmental controls | A.7.1–A.7.14 | Simplified/merged
A.12.1.x Operational procedures | A.5.37, A.8.6, A.8.31, A.8.32 | Redistributed
A.12.4.1–A.12.4.3 Logging & monitoring | A.8.15 Logging, A.8.16 Monitoring activities | Merged
A.12.5.x Control of operational software | A.8.19 Installation of software on operational systems | Consolidated
A.12.6.x Technical vulnerability mgmt. | A.8.8 Management of technical vulnerabilities | Direct
A.13.1.x Network security controls | A.8.20–A.8.22 | Split
A.13.2.x Information transfer | A.5.14 Information transfer | Consolidated
A.14.1.x Security requirements of information systems | A.8.26 Application security requirements (A.14.1.1 into A.5.8) | Consolidated
A.14.2.1 Secure development policy | A.8.25 Secure development life cycle | Direct
A.14.2.5 Secure system engineering | A.8.27 Secure system architecture and engineering principles | Direct
A.15.1 Supplier security | A.5.19–A.5.21 | Consolidated
A.15.2 Supplier service delivery mgmt. | A.5.22 Monitoring, review and change management of supplier services | Merged
A.16.1.x Incident mgmt. | A.5.24–A.5.28 | Direct
A.17.1 Business continuity planning | A.5.29 Information security during disruption (ICT readiness is new A.5.30) | Expanded
A.18.1 Compliance with legal requirements | A.5.31–A.5.34 | Consolidated
A.18.2 Information security reviews | A.5.35–A.5.36 | Direct

 

 

Conclusions

  • The shift from ISO/IEC 27001:2013 to ISO/IEC 27001:2022 is less about reducing the number of controls and more about modernizing and simplifying them. While the 2013 version spread 114 controls across 14 domains, the 2022 edition organizes 93 controls into just four clear themes, which makes the standard easier to understand and apply.

  • The addition of 11 new controls shows how the standard has kept pace with today’s security challenges: cloud services, secure coding, threat intelligence, data leakage prevention, and stronger monitoring. At the same time, many older controls were merged or rephrased, removing overlaps and making the framework more practical.

  • Perhaps the biggest improvement is the introduction of attributes. These tags let organizations view the controls through different lenses: confidentiality, integrity, availability, NIST CSF functions, or operational capabilities. That flexibility makes it much easier to map ISO 27001 to other frameworks and compliance requirements.

  • For organizations, the transition means more than just updating documentation. It is an opportunity to strengthen governance, align with modern practices, and close gaps in areas that were not well covered before, such as cloud and DevSecOps.

The post Comparing Annex A in ISO/IEC 27001:2013 vs. ISO/IEC 27001:2022 first appeared on Sorin Mustaca’s blog.