Navigating AI Standards and Regulations

Note: This post is written with a lot of help from AI, used to summarize the standards mentioned below.

 

Artificial intelligence (AI) is reshaping industries, but it also brings new risks.

From security vulnerabilities to compliance challenges, organizations must balance innovation with responsibility.

New standards have been created and more are emerging to guide this effort, most notably ISO/IEC 42001, ISO/IEC 22989, the NIST AI RMF, and the EU AI Act.

Together, they define how we should understand, manage, and regulate AI.

 

The Standards: ISO/IEC 42001, ISO/IEC 22989, NIST AI Risk Management Framework (AI RMF)

ISO/IEC 22989 focuses on concepts and terminology. By standardizing the language around AI, it ensures consistency in communication between developers, regulators, and policymakers. It provides a shared foundation for technical and strategic discussions, making it easier to align projects and compliance efforts.

 

ISO/IEC 42001 sets the framework for an Artificial Intelligence Management System (AIMS). As if we didn’t have enough Management Systems (ISMS, CSMS, DRMS, etc.), now we have AIMS.

It provides requirements for organizations to govern AI responsibly throughout its lifecycle.

Much like ISO 27001 for information security, this standard enables organizations to implement repeatable processes, assign roles, manage risks, and continuously improve their AI practices.

In short, ISO/IEC 22989 tells us how to talk about AI, while ISO/IEC 42001 tells us how to manage it.

The NIST AI Risk Management Framework (AI RMF), developed by the National Institute of Standards and Technology, gives guidance on managing the risks of AI systems: trustworthiness, safety, fairness, explainability, and so on.

NIST also works on “crosswalks” linking the AI RMF to international standards like ISO, OECD guidelines, etc.

 

The Regulation: EU AI Act

The EU AI Act goes beyond voluntary standards. It is a regulation with binding legal requirements for AI systems placed on the EU market.

The Act classifies AI systems by risk:

  • Unacceptable risk systems (e.g., manipulative or exploitative applications) are prohibited.
  • High-risk systems (e.g., AI in healthcare, critical infrastructure, recruitment) must meet strict conformity assessments, documentation, and testing requirements.
  • Limited and minimal risk systems face transparency obligations or no specific restrictions.

Unlike ISO standards, which are voluntary, the EU AI Act will be legally enforced. Non-compliance may lead to heavy fines and product bans.

 

Comparing Standards and Regulation

  • ISO/IEC 22989 provides consistent terminology.
  • ISO/IEC 42001 defines organizational governance for AI.
  • NIST AI RMF provides guidance on managing the risks of AI systems: trustworthiness, safety, fairness, explainability.
  • EU AI Act imposes legally binding obligations at the product and deployment level.

While ISO and NIST standards are process-driven and supportive, the EU AI Act mandates specific outcomes.

Organizations can use ISO/IEC 42001 to establish governance processes that make compliance with the EU AI Act easier, but certification alone does not replace the legal requirements.

U.S. standards tend to be voluntary or guidance-based, not binding across all states or businesses, unlike the EU AI Act. There is no single federal law with comprehensive AI regulation yet; instead, there is a patchwork of executive orders, agency actions, state laws, and voluntary standards. The U.S. places strong emphasis on risk management frameworks, public-private collaboration, innovation, and aligning with international standards.

In the U.S. there are also further bodies and initiatives on AI, such as the Center for AI Standards and Innovation (CAISI), as well as various plans for AI systems. In addition, some state laws and regulations require certain large AI model developers to publicly disclose safety protocols and report certain kinds of risks or incidents (for example, California SB 53).

 

Key Risks Introduced by AI

  1. Model drift and performance risk — AI systems degrade over time, causing hidden failures.
  2. Bias and discrimination — Training data can produce unfair outcomes, raising legal and ethical issues.
  3. Lack of explainability — Black-box models hinder audits, accountability, and trust.
  4. Data protection risks — Models may leak or memorize personal data, creating privacy concerns.
  5. Security vulnerabilities — Adversarial attacks, poisoning, and prompt injection threaten system integrity.
  6. Supply chain dependency — Reliance on third-party models introduces hidden weaknesses.
  7. Regulatory non-compliance — Misclassifying risk or skipping assessments can result in fines and reputational damage.

How Standards Address These Risks

  • ISO/IEC 22989 ensures clarity in measurement and reporting.
  • ISO/IEC 42001 and the NIST AI RMF require lifecycle controls, risk assessments, monitoring, and continuous improvement.
  • EU AI Act mandates transparency, testing, and conformity assessments tailored to specific use cases.

When combined, these frameworks help organizations create trustworthy AI systems while meeting regulatory demands.

 

The Next Level of Compliance

To reach the “next level” of compliance, organizations must integrate voluntary standards and mandatory regulation into one cohesive program:

  1. Adopt common terminology using ISO/IEC 22989 across all teams.
  2. Implement an AI management system aligned with ISO/IEC 42001.
  3. Map AI products against EU risk categories and prepare compliance checklists.
  4. Generate technical evidence such as model cards, data lineage, and test results.
  5. Automate monitoring and incident response to detect model drift and adversarial attacks.
  6. Integrate privacy engineering to ensure alignment with GDPR.
  7. Secure the AI supply chain by tracking third-party components and models.
  8. Prepare for external audits and conformity assessments, leveraging ISO processes as supporting evidence.

Compliance should not be treated as a static checklist. The future of responsible AI lies in continuous monitoring, automated governance, and embedding compliance into MLOps pipelines.
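To make step 5 above a bit more concrete, here is a minimal sketch of automated drift monitoring that could run inside an MLOps pipeline. It is only an illustration under assumptions I am adding (feature arrays as NumPy matrices, a per-feature two-sample Kolmogorov-Smirnov test, an arbitrary significance threshold); none of this is prescribed by the standards discussed here.

```python
# Minimal drift-monitoring sketch: compare current feature distributions
# against a reference window. Data and thresholds are illustrative only.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, current: np.ndarray, alpha: float = 0.01):
    """Return the features whose distribution differs significantly
    from the reference window (two-sample Kolmogorov-Smirnov test)."""
    drifted = []
    for i in range(reference.shape[1]):
        result = ks_2samp(reference[:, i], current[:, i])
        if result.pvalue < alpha:            # distributions differ significantly
            drifted.append((i, result.statistic, result.pvalue))
    return drifted

# Illustrative data: feature 0 has shifted, feature 2 has a wider spread
rng = np.random.default_rng(42)
reference = rng.normal(0.0, 1.0, size=(5000, 3))
current = np.column_stack([
    rng.normal(0.5, 1.0, 5000),   # mean shift -> should be flagged
    rng.normal(0.0, 1.0, 5000),   # unchanged
    rng.normal(0.0, 1.5, 5000),   # variance change -> should be flagged
])
print(detect_drift(reference, current))
```

In a real pipeline, such a check would feed the incident-response process mentioned in the same step, for example by opening a ticket or triggering retraining when drift is detected.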

Conclusions

AI standards and regulations are converging to create a new compliance landscape.

ISO/IEC 22989 provides the vocabulary, ISO/IEC 42001 offers governance, and the EU AI Act enforces legal obligations.

Organizations that align with all three will not only reduce risk but also strengthen trust in their AI systems. The next level of compliance means going beyond certification—building AI practices that are transparent, secure, and continuously monitored.

The EU provides a strong, comprehensive, binding regulatory framework for AI with clear risk categories, prohibited uses, and enforcement.

The U.S. currently relies more on existing laws, executive orders, and sectoral regulation, giving more flexibility but less predictability.

For global players, achieving dual compliance is increasingly necessary. The trend suggests U.S. regulation will become stronger over time, potentially drawing from EU models.

 

The post Navigating AI Standards and Regulations first appeared on Sorin Mustaca’s blog.

Policy vs Standard vs Procedure: why, what, how

Ever wondered what the differences between these terms are?

We use them in GRC very often, but we rarely think about what they mean. Over time, this stretches the concepts, so that their meanings overlap to a certain degree.

 

A Policy is a high-level, mandatory statement of principles and intent.
A Standard is a mandatory, specific requirement that defines what is needed to comply with a policy.
A Procedure is a detailed, step-by-step set of instructions on how to implement a standard or fulfill a policy.
Policies set goals, standards define the required outcomes, and procedures provide the detailed roadmap to achieve them, forming a hierarchical structure within an organization.

Policy

What is it
A high-level, broad statement of principles, intent, or requirements designed to guide decisions and achieve outcomes.
Purpose
To establish strategic goals and intent, to support an organization’s mission, comply with laws, or minimize risk.
Answers
It describes why something must be done.
Mandatory
Yes, policies are mandatory and define why something must be done. Because they describe the need rather than the implementation, they rarely change and are not negotiable.
Example
An IT Security Policy that states the organization will protect sensitive data from unauthorized access.

Standard 

What is it
A mandatory, specific technical requirement or rule that provides concrete, measurable details for policy compliance.
Purpose
To provide the specific rules, metrics, and technical configurations necessary to make policies meaningful and effective.
Answers
It describes what must be done to implement the policy.
Mandatory
Yes, standards are mandatory and define specific configurations, timelines, or processes. Because they describe the implementation in specific terms, they change as the industry evolves.
Example
An IT Security Standard for data encryption that supports a policy stating that the organization will protect sensitive data from unauthorized access. The standard defines which encryption algorithm will be used, when to use it, what kind of data must be encrypted, and who is responsible for implementing it.

Procedure

What is it 
A detailed, step-by-step set of instructions outlining the specific actions to be performed to implement a standard or policy. 

Purpose
To provide clear, actionable guidance on how to execute a task and to ensure consistent, repeatable, and measurable results. It also defines who should do something and when.

Answers
It describes how something must be done, as defined by the standard or directly by the policy.
Mandatory
Yes, procedures are mandatory and specify the exact steps an employee must follow. Because they define detailed requirements on how to implement a standard or policy, they change as needed. 

Example
A step-by-step instruction set on how to encrypt data in a database, a hard drive, emails and other types of information.
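To make the example tangible, here is a minimal sketch of the kind of executable step such a procedure might spell out; the use of Python’s cryptography library (Fernet), the file names, and the key handling are my assumptions for illustration, not a prescribed procedure.

```python
# Illustrative procedure step: encrypt a file with a symmetric key (Fernet).
# In a real procedure the key would come from a key vault, not be generated ad hoc.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # placeholder for "retrieve key from vault"
fernet = Fernet(key)

with open("customer_export.csv", "rb") as f:          # hypothetical input file
    ciphertext = fernet.encrypt(f.read())

with open("customer_export.csv.enc", "wb") as f:      # encrypted output
    f.write(ciphertext)

# Decryption step for the authorized data owner
plaintext = fernet.decrypt(ciphertext)
```

The procedure would wrap such steps with the who and when: which role runs them, on which systems, and how the result is verified and documented.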

How They Work Together (Hierarchically) 

  1. Policy (The Goal): The high-level statement of intent, like an IT security policy.
  2. Standard (The Rule): The specific requirements that support the policy, such as password complexity standards.
  3. Procedure (The Steps): The detailed instructions on how to follow the standard, like the steps to change a password.
This top-down structure ensures that policies are actionable and that goals are met through consistent, documented processes.

What about Guidelines?

Guidelines are at the bottom, offering recommended and flexible support for the entire framework. They are optional and usually accompany procedures and standards.


The post Policy vs Standard vs Procedure: why, what, how first appeared on Sorin Mustaca’s blog.

Comparing Annex A in ISO/IEC 27001:2013 vs. ISO/IEC 27001:2022

I wrote this article ages ago, where I briefly compared Annex A in the two versions of the standard: https://www.sorinmustaca.com/annex-a-of-iso-27001-2022-explained/

But I feel there is still a need to detail the changes a bit, especially now that more and more businesses are forced to re-audit against the newer standard.

 

Overview of Annex A

Annex A groups its controls into categories, each encompassing a set of controls designed to address specific aspects of information security management within an organization. These categories cover policies, procedures, and technical and organizational measures designed to safeguard critical assets, prevent unauthorized access, and mitigate security threats.

The primary purpose of Annex A controls is to guide organizations in selecting appropriate security measures based on their specific context and identified risks. They are not mandatory requirements but serve as best practices for information security management.

Many auditors and practitioners recommend not focusing exclusively on these controls, because on their own they will not get you through the audit. I agree: do not rely exclusively on them, but use them as a starting point.

 

  • 2013 edition:

    • 114 controls

    • Grouped in 14 control domains (e.g., A.5 Information Security Policies, A.6 Organization of Information Security, etc.).

    • Numbering is A.x.y.z.

  • 2022 edition:

    • 93 controls (reduced by consolidation, merging, and restructuring).

    • Grouped in 4 control themes:

      • Organizational (37 controls)

      • People (8 controls)

      • Physical (14 controls)

      • Technological (34 controls)

    • Numbering is A.5–A.8 only, reflecting the 4 control themes.

 

New Controls Introduced in 2022

ISO/IEC 27001:2022 introduced 11 new controls to address modern risks. Each expands the ISMS scope to include practices that were not explicitly covered in the 2013 edition.

I personally love this addition, because now the standard is in sync with the reality out there. I especially love A.8.28 Secure Coding, which has been ignored for far too long, despite the evidence that many major exploits have been caused by not respecting secure coding practices.

  1. A.5.7 Threat Intelligence

    • Requires collection and analysis of threat intelligence.

    • Sources: security vendors, government advisories, industry ISACs, internal incident data.

    • Outcome: anticipate and defend against emerging attack methods.

  2. A.5.23 Information Security for Use of Cloud Services

    • Establishes rules for assessing and managing cloud providers.

    • Covers due diligence, contracts, data residency, shared responsibility.

    • Goal: ensure cloud adoption is secure and consistent.

  3. A.5.30 ICT Readiness for Business Continuity

    • Ensures IT and communications systems are resilient to disruptions.

    • Focus: backup, recovery testing, failover, disaster readiness.

    • Bridges ISMS with business continuity (ISO 22301).

  4. A.7.4 Physical Security Monitoring

    • Monitoring of physical facilities using CCTV, access logs, alarms, motion sensors.

    • Detects unauthorized access and environmental hazards.

    • Complements access restriction controls.

  5. A.8.9 Configuration Management

    • Requires baseline configurations for systems and software.

    • Covers patching, secure hardening, prevention of unauthorized changes.

    • Reduces risks from misconfigurations.

  6. A.8.10 Information Deletion

    • Secure and verified erasure of data when no longer needed.

    • Applies to disks, mobile devices, cloud storage, and backups.

    • Prevents data recovery by unauthorized parties.

  7. A.8.11 Data Masking

    • Techniques to obscure sensitive information.

    • Useful in non-production environments and analytics.

    • Supports privacy requirements (GDPR, HIPAA, etc.).

  8. A.8.12 Data Leakage Prevention (DLP)

    • Deployment of technical and procedural measures to prevent data leaks.

    • Examples: DLP software, email scanning, outbound traffic filtering.

    • Helps against insider threats and accidental data loss.

  9. A.8.16 Monitoring Activities

    • Expands on logging to include continuous monitoring of systems and networks.

    • Goal: real-time detection of anomalies and policy violations.

    • Supports SOC operations and incident response.

  10. A.8.23 Web Filtering

    • Restricts or blocks access to malicious or inappropriate websites.

    • Prevents phishing, malware, and unauthorized browsing.

    • Often implemented via secure DNS or proxy gateways.

  11. A.8.28 Secure Coding

    • Mandates secure software development practices.

    • Includes developer training, code review, automated scanning, use of vetted libraries.

    • Supports DevSecOps integration and early vulnerability prevention.

 

Merged Controls

Some 2013 controls were consolidated to reduce duplication:

  • Logging and monitoring (A.12.4.1–A.12.4.3, 2013) merged into A.8.15 & A.8.16 (2022).

  • Cryptographic controls (A.10.1.1, A.10.1.2, 2013) merged into A.8.24 (2022).

  • Access management controls consolidated into A.5.15–A.5.18 (2022).

 

Removed / Reorganized Controls

No controls were truly eliminated; instead, they were rephrased or merged.

  • Example: Removal of assets (A.11.2.7, 2013) became part of Return of assets (A.5.11, 2022).

  • Teleworking and mobile device policies combined under broader organizational controls.

 

Attributes in Annex A (2022)

A new classification model (“attributes”) was introduced to tag each control.

Categories include:

  • Control type: Preventive, Detective, Corrective

  • Security properties: Confidentiality, Integrity, Availability

  • Cybersecurity concepts: Identify, Protect, Detect, Respond, Recover (aligned with NIST CSF)

  • Operational capabilities: Governance, Asset management, Identity, Resilience, etc.

  • Security domains: Align with organizational, people, physical, technological

Why Attributes Matter

Attributes enable flexible mapping to frameworks like NIST, CIS, and especially TISAX.

  • They make ISO 27001 more practical and flexible.

  • Help you cross-map ISO 27001 controls to:

    • NIST CSF (via cybersecurity concepts)

    • CIA triad (via security properties)

    • Defense-in-depth planning (via control type)

  • Useful for gap analysis: you can check whether your ISMS is too prevention-heavy and weak on detection or recovery.

  • Improve communication with stakeholders: executives, auditors, regulators, or IT operations can each view controls in the lens that matters most to them.

In simple words: Attributes are like tags in a library. They don’t change the book (control), but they let you find it faster depending on whether you search by topic, author, or year.
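To stay with the library analogy, here is a minimal sketch of how attribute tags can drive a quick gap analysis. The control IDs exist in Annex A, but the attribute values below are made up for illustration and are not the official tagging from the standard.

```python
# Illustrative gap analysis over attribute tags (values are examples, not official).
from collections import Counter

controls = {
    "A.5.7":  {"type": "Detective",  "csf": "Identify", "properties": ["C", "I", "A"]},
    "A.8.9":  {"type": "Preventive", "csf": "Protect",  "properties": ["C", "I", "A"]},
    "A.8.15": {"type": "Detective",  "csf": "Detect",   "properties": ["C", "I", "A"]},
    "A.8.16": {"type": "Detective",  "csf": "Detect",   "properties": ["C", "I", "A"]},
    "A.8.24": {"type": "Preventive", "csf": "Protect",  "properties": ["C"]},
}

# Are we prevention-heavy and weak on detection or recovery?
print(Counter(attrs["type"] for attrs in controls.values()))
# Counter({'Detective': 3, 'Preventive': 2})

print(Counter(attrs["csf"] for attrs in controls.values()))
# Counter({'Protect': 2, 'Detect': 2, 'Identify': 1})
```

The same tags make it straightforward to cross-map a control set to NIST CSF functions or to the CIA triad, as described above.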

Since TISAX is my favorite certification (ok, ok, it is a label, but bear with me here), I need to point to column P, “Reference to other standards”, where this category has been used several times.

Reference “3.1.10” in Cell P50 from the ISA-VDA-6.0.3:

3 -> Cybersecurity Concept

1 -> Detect

10 -> Control Identifier

This is a mapping between control A.8.15 (Logging) and the Cybersecurity Concept “Detect” from the NIST CSF:

| Identifier | Control_Code | Title |
|---|---|---|
| 3.1.1 | A.7.X | Employee event reporting |
| 3.1.2 | A.7.X | Information security event reporting |
| 3.1.3 | A.5.24 | Information security incident planning/prep |
| 3.1.4 | A.5.25 | Assessment & decision on info security events |
| 3.1.5 | A.5.26 | Response to information security incidents |
| 3.1.6 | A.5.27 | Learning from information security incidents |
| 3.1.7 | A.7.4 | Physical security monitoring |
| 3.1.8 | A.8.12 | Data leakage prevention |
| 3.1.9 | A.8.16 | Monitoring activities |
| 3.1.10 | A.8.15 | Logging |

A.8.15 Logging -> mapping -> Cybersecurity Concept: Detect

This is useful for aligning ISO/IEC 27001 with NIST CSF, TISAX, ISA/IEC 62443, and others.
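As a small illustration of how such references could be handled programmatically, here is a minimal sketch that decodes the notation described above; the lookup tables are stubs I am assuming for the example (only the values from the text are filled in), not the full official ISA mapping.

```python
# Decode an ISA "Reference to other standards" value like "3.1.10".
# Only the entries mentioned in the text are included; the rest is an assumption.
CATEGORIES = {"3": "Cybersecurity Concept"}
CONCEPTS = {"1": "Detect"}        # NIST CSF function within this category

def decode_reference(ref: str) -> dict:
    category, concept, identifier = ref.split(".", 2)
    return {
        "category": CATEGORIES.get(category, f"category {category}"),
        "concept": CONCEPTS.get(concept, f"concept {concept}"),
        "control_identifier": identifier,
    }

print(decode_reference("3.1.10"))
# {'category': 'Cybersecurity Concept', 'concept': 'Detect', 'control_identifier': '10'}
```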

I think there is a lot more to write about them, perhaps in another article.

 

Summary

| 2013 Control (Domain) | 2022 Control (Theme) | Notes |
|---|---|---|
| A.5.1.1 Information security policy | A.5.1 Policies for information security | Mostly unchanged |
| A.5.1.2 Review of policies | A.5.1 Policies for information security | Merged |
| A.6.1.1 Roles and responsibilities | A.5.2 Information security roles and responsibilities | Direct |
| A.6.1.2 Segregation of duties | A.5.3 Segregation of duties | Direct |
| A.6.1.3 Contact with authorities | A.5.4 Contact with authorities | Direct |
| A.6.1.4 Contact with special interest groups | A.5.5 Contact with special interest groups | Direct |
| A.6.1.5 Project management | A.5.8 Information security in project management | Expanded |
| A.6.2.1 Mobile device policy | A.6.2.1 (2013) merged → A.6.2 (2022 People theme) | Consolidated |
| A.6.2.2 Teleworking | A.5.10 Acceptable use of information and other assets + A.5.11 Return of assets | Reorganized |
| A.7.1.1 Screening | A.6.1 Screening | Direct |
| A.7.1.2 Terms of employment | A.6.2 Terms of employment | Direct |
| A.7.2.1 Management responsibilities | A.6.3 Management responsibilities | Direct |
| A.7.2.2 Information security awareness, education, and training | A.6.4 Information security awareness, education, and training | Direct |
| A.7.2.3 Disciplinary process | A.6.5 Disciplinary process | Direct |
| A.7.3 Termination/responsibilities | A.5.9 Return of assets | Consolidated |
| A.8.1.1 Inventory of assets | A.5.9 Inventory of information and other assets | Direct |
| A.8.1.2 Ownership of assets | A.5.9 Inventory of information and other assets | Consolidated |
| A.8.1.3 Acceptable use of assets | A.5.10 Acceptable use of information and other assets | Direct |
| A.8.1.4 Return of assets | A.5.11 Return of assets | Direct |
| A.8.2.1 Classification of information | A.5.12 Classification of information | Direct |
| A.8.2.2 Labeling of information | A.5.13 Labelling of information | Direct |
| A.8.2.3 Handling of assets | A.5.14 Handling of information | Direct |
| A.8.3.1 Management of removable media | A.8.10 Information deletion | Merged/expanded |
| A.8.3.2 Disposal of media | A.8.10 Information deletion | Direct |
| A.8.3.3 Physical media transfer | A.5.14 Handling of information | Consolidated |
| A.9.1.1 Access control policy | A.5.15 Access control | Direct |
| A.9.1.2 Access to networks and services | A.5.16 Access to network and network services | Direct |
| A.9.2.x User access management (all) | A.5.17–A.5.18 | Consolidated |
| A.9.3 User responsibilities | A.5.18 Access rights | Direct |
| A.9.4 System and application access | A.5.19–A.5.22 | Expanded |
| A.10.1.1 Policy on cryptographic controls | A.8.24 Use of cryptography | Direct |
| A.10.1.2 Key management | A.8.25 Key management | Direct |
| A.11.x Physical and environmental controls | A.7.1–A.7.4 | Simplified/merged |
| A.12.1.x Operational procedures | A.8.1–A.8.8 | Direct |
| A.12.4.1–A.12.4.3 Logging & monitoring | A.8.15–A.8.16 Monitoring activities | Merged |
| A.12.5.x Control of operational software | A.8.7–A.8.9 | Consolidated |
| A.12.6.x Technical vulnerability mgmt. | A.8.8 Management of technical vulnerabilities | Direct |
| A.13.1.x Network security controls | A.8.20 Network security | Direct |
| A.13.2.x Information transfer | A.5.14 Handling of information | Consolidated |
| A.14.1.x Security requirements for IS | A.8.26 Application security requirements | Direct |
| A.14.2.1 Secure development policy | A.8.28 Secure coding | Expanded |
| A.14.2.5 Secure system engineering | A.8.27 Secure system architecture and engineering principles | Direct |
| A.15.1 Supplier security | A.5.19 Supplier relationships | Direct |
| A.15.2 Supplier service delivery mgmt. | A.5.20–A.5.21 | Consolidated |
| A.16.1.x Incident mgmt. | A.5.25–A.5.27 | Direct |
| A.17.1 Business continuity planning | A.5.29 ICT readiness for business continuity | Expanded |
| A.18.1 Compliance with legal | A.5.32 Compliance obligations | Direct |
| A.18.2 Information security reviews | A.5.33 Independent review of information security | Direct |

 

 

Conclusions

The shift from ISO/IEC 27001:2013 to ISO/IEC 27001:2022 is less about reducing the number of controls and more about modernizing and simplifying them.

While the 2013 version spread 114 controls across 14 domains, the 2022 edition organizes 93 controls into just four clear themes. This makes the standard easier to understand and apply.

The addition of 11 new controls shows how the standard has kept pace with today’s security challenges: cloud services, secure coding, threat intelligence, data leakage prevention, and stronger monitoring.

At the same time, many older controls were merged or rephrased, removing overlaps and making the framework more practical.

Perhaps the biggest improvement is the introduction of attributes. These tags let organizations view the controls through different lenses — confidentiality, integrity, availability, NIST CSF functions, or operational capabilities. That flexibility makes it much easier to map ISO 27001 to other frameworks and compliance requirements.

For organizations, the transition means more than just updating documentation. It is an opportunity to strengthen governance, align with modern practices, and close gaps in areas that were not well covered before, such as cloud and DevSecOps.

The post Comparing Annex A in ISO/IEC 27001:2013 vs. ISO/IEC 27001:2022 first appeared on Sorin Mustaca’s blog.

NIS2 Fulfillment through TISAX Assessment and ISA6

ENX has released an interesting article about how NIS2 requirements map to TISAX requirements. There is a short introductory article called “TISAX and Cybersecurity in Industry – Expert Analysis Confirms NIS2 Coverage” and a full article of 75 pages: https://enx.com/TISAX-NIS2-en.pdf

An analysis conducted within ENX’s expert working groups examined how well a TISAX assessment based on the ISA6 catalog aligns with the requirements of the NIS2 Directive.

The key findings include:

  • All relevant NIS2 requirements are addressed, including risk management, incident response, supply chain security, governance, and technical safeguards.
  • TISAX goes beyond minimum legal requirements, incorporating structured maturity assessments, systematic vulnerability management, and continuous improvement mechanisms.
  • The established three-year assessment cycle is considered appropriate in the context of NIS2.
  • TISAX labels are publicly accessible via the ENX database, enabling transparent verification.
  • Additional national requirements must be addressed separately. This includes, in particular, country-specific reporting obligations to authorities or national CSIRTs. While not part of the TISAX standard, these requirements can be effectively managed using existing TISAX structures.

 

Here is the summary of the PDF above created with NotebookLM (9 pages):

Detailed Briefing Document: NIS2 Fulfillment Through TISAX

Date: October 26, 2023
Prepared for: Key Stakeholders concerned with NIS2 Compliance in the Automotive Industry
Subject: Review of the “NIS2 fulfilment through TISAX” Expert Opinion, detailing how TISAX assessments align with NIS2 Directive requirements.

Executive Summary

The automotive industry, through the ENX Association and the ISA requirements catalogue, has proactively addressed cybersecurity for years, culminating in the TISAX assessment standard established in 2017. This expert opinion, published by the ENX Association, concludes that companies with TISAX-compliant sites fully implement the requirements of the NIS2 Directive. The ISA catalogue and TISAX assessments go beyond NIS2 requirements, defining and continuously upholding the “state of the art” in information and cybersecurity for the industry. Independent auditors confirm implementation in a three-year cycle, deemed appropriate even when compared to the two-year cycle for critical infrastructure operators under German law. A common exchange mechanism allows organizations to query TISAX status and, by extension, NIS2 compliance, of partners.

Key Takeaway: Organizations with a valid TISAX label are generally well-prepared for the material requirements of NIS2, with the caveat that they must still manage national reporting requirements in parallel and ensure that their TISAX assessment objectives reflect their overall risk and cover all NIS2-affected sites.

1. Introduction and Overview of NIS2 and TISAX

The NIS2 Directive (EU) 2022/2555 aims to strengthen cyber resilience across the European Union, replacing the NIS1 Directive. It expands the scope of affected organizations, including many in the automotive industry. The automotive industry recognized the need for industry-wide information and cybersecurity and developed the TISAX Assessment standard and its underlying ISA requirements catalogue. The purpose of this analysis is to demonstrate that TISAX assessments, based on ISA6, can be considered proof of compliance with NIS2 requirements.

  • Purpose of Analysis: To assist companies in the automotive industry in assessing whether TISAX compliance covers NIS2 requirements.
  • Scope of Analysis: Focuses exclusively on NIS2 Directive requirements with specific implementation guidelines for companies. It does not provide implementation assistance or confirm a company’s readiness for NIS2 outside of TISAX. Country-specific implementations and additional material requirements are not covered.
  • Target Audience: Experts from companies affected by NIS2 that use or undergo TISAX assessments, and authorities responsible for NIS2 compliance and supervision.

2. TISAX Assessment and Underlying Catalogue of Requirements (ISA6)

TISAX assessments, conducted by independent auditors in a three-year cycle, are based on ISA catalogue version 6 (ISA6). A critical distinction is made between TISAX scope definition and ISO management system certifications:

  • TISAX Assessment Scope: Utilizes a generally defined standard scope, ensuring comparability and a similar level of security across companies. This contrasts with ISO/IEC 27001, where the audited organization defines its ISMS scope. For the conclusions of this document to apply, TISAX Assessment objectives must reflect the company’s overall risk, and all NIS2-affected sites must have corresponding TISAX labels.
  • TISAX Assessment Objectives: Allow for scaling the assessment content based on risk and criticality of information processed (e.g., Confidential, Strictly Confidential, High Availability, Very High Availability, Data, Special Data, Prototype Protection).
  • TISAX Assessment Levels (AL):
    • AL 1: Self-assessment, auditor checks completion, low confidence, not used in TISAX.
    • AL 2: Auditor performs plausibility check of self-assessment, checks evidence, conducts interviews (usually web conference).
    • AL 3: Comprehensive review, auditor verifies documents, conducts planned and unplanned interviews, observes implementation, and considers local conditions. Generally takes place on-site at all locations.
    • If multiple objectives are used, the highest AL is applied to the overall assessment.
  • TISAX Group Assessments (Simplified Group Assessment – SGA): Designed for companies with many locations and a centralized, highly developed ISMS.
    • S-SGA (Sample-based): Main site extensively assessed, sample sites assessed, other sites assessed at one AL lower.
    • R-SGA (Rotating Schedule-based): Main site extensively assessed, other locations assessed at the same AL but distributed over the three-year validity period. Not available for prototype protection objectives.
  • TISAX Control Questions and Requirements:
    • Requirements are categorized (Must, Should, Additional requirements for high protection needs, Additional requirements for very high protection needs, Additional requirements for SGA).
    • “Must” requirements are strict, “Should” allows for justified deviations.
    • Additional requirements are subdivided by protection objectives (Confidentiality (C), Integrity (I), Availability (A)).
    • Individual control questions cannot be excluded as “not applicable”; they must be implemented holistically.
  • Deviations in TISAX Model: TISAX includes a maturity model (six levels, target is “established”) to assess practical implementation. Identified deviations require corrective action plans with defined implementation periods (up to 3, 6, or 9 months). Failure to correct deviations results in a failed audit.
  • Validity Period: TISAX assessments are valid for three years. Companies must continuously implement specified measures, conduct regular internal audits, and report significant changes affecting the ISMS or physical conditions, potentially requiring interim assessments.

3. NIS2 Article 20: Governance and Training

NIS2 Article 20 focuses on the governance body’s responsibility for cybersecurity risk management and their participation in relevant training.

  • NIS2 Article 20 (1): Governing Body’s Role in Risk Management: Requires the governing body to establish and monitor structures for cybersecurity risk management.
  • TISAX Fulfilment: Fully covered by ISA6 controls (1.2.1, 1.2.2, 1.4.1, 1.5.1, 1.5.2, 7.1.1). These controls check for defined ISMS scope, determined requirements, management commissioning and approval of ISMS, communication channels, regular reviews of ISMS effectiveness, defined responsibilities, resource availability, adequate security structure, qualified employees, conflict of interest avoidance, regular risk assessments, risk classification and allocation, security risk handling, compliance verification, independent ISMS reviews, and consideration of regulatory/contractual provisions.
  • Summary: “The requirement that the governing body of an organization has created appropriate structures to implement and monitor the implementation of the cybersecurity risk management measures taken to comply with Article 21 (NIS2 Article 20 (1)) is described by the controls defined in the ISA6 assessment standard and is fully checked for existence and implementation by the responsible auditor within a TISAX assessment.” The three-year TISAX cycle is considered appropriate given NIS2’s risk-based approach.
  • NIS2 Article 20 (2): Training for Governing Body and Relevant Members: Requires regular training for governing body members and other relevant individuals to acquire sufficient knowledge and skills in cybersecurity risk identification, assessment, and management.
  • TISAX Fulfilment: Checked by ISA6 control 2.1.3 (“To what extent is staff made aware of and trained with respect to the risks arising from the handling of information?”). This includes comprehensive training for all employees (including management), an awareness training concept covering relevant areas, consideration of target groups, regular execution, and documentation of participation.
  • Summary: While ISA does not explicitly list “management body” for training, it mandates training for “all employees” and differentiation by “target group,” implicitly covering management. This ensures the requirements of NIS2 Article 20 (2) are met.

4. NIS2 Article 21: Risk Management Measures

NIS2 Article 21 mandates appropriate and proportionate technical, operational, and organizational measures to manage risks to network and information systems.

  • NIS2 Article 21 (1): General Measures for Risk Management: Requires appropriate and proportionate measures to manage risks and minimize incident impact, considering the state of the art and implementation costs.
  • TISAX Fulfilment: Covered by ISA6 controls 1.2.1 (“To what extent is information security managed within the organization?”) and 1.4.1 (“To what extent are information security risks managed?”). These check for defined ISMS scope, determined requirements, existence and regular updating of risk assessments, assignment of risk owners, and action plans for risks.
  • Summary: “The requirements of NIS2 Article 21 (1) are described by the controls defined in the ISA6 assessment standard and are checked for existence and implementation by the auditor responsible during a TISAX assessment.” The TISAX assessment ensures a risk-based approach tailored to the company’s circumstances.
  • NIS2 Article 21 (2) a) – j): Specific Measures: These sub-articles detail specific areas for cybersecurity measures.
  • a) Policies on Risk Analysis and Information System Security: Fully covered by ISA6 controls 1.4.1, 5.2.7, 5.3.1, checking for procedures to identify, assess, and address risks, network management requirements, and information security consideration in new/developed IT systems.
  • b) Incident Handling: Fully covered by ISA6 controls 1.6.1, 1.6.2, checking for definition of reportable events, reporting channels, communication strategies, and incident processing procedures (categorization, qualification, prioritization, response, escalation). “The processes for detection, reporting channels and procedures, classification, processing and escalation (if necessary), go beyond the requirements stipulated in NIS2.”
  • c) Business Continuity, Backup Management, Disaster Recovery, Crisis Management: Fully covered by ISA6 controls 1.6.3, 5.2.8, 5.2.9, checking for crisis management preparedness, IT service continuity planning, and backup/recovery of data and IT services.
  • d) Supply Chain Security: Fully covered by ISA6 controls 1.2.4, 1.3.3, 1.6.1, 1.6.2, 1.6.3, 5.3.3, 6.1.1, 6.1.2. This includes defining responsibilities with external IT service providers, ensuring use of evaluated services, incident reporting and management from external parties, secure removal of information from external services, ensuring information security among contractors and partners, and contractual non-disclosure agreements. “The requirements in the ISA6 assessment standard go beyond the requirements of NIS2 and additionally include, for example, compliance with information security standards beyond the direct providers or service providers.”
  • e) Security in Network and Information Systems Acquisition, Development, and Maintenance (including vulnerability handling): Fully covered by ISA6 controls 1.2.3, 1.2.4, 1.3.4, 5.2.1, 5.2.4, 5.2.5, 5.2.6, 5.3.1, 5.3.2, 5.3.3, 5.3.4. This extensive coverage includes considering information security in projects, responsibilities with external IT service providers, approved software usage, change management, event logging, vulnerability identification and addressing, technical checks of IT systems, security in new/developed IT systems, network service requirements, and information protection in shared external services. “The assessment goes beyond the requirements of NIS2 by considering the return and secure removal of information assets from IT services outside the organization.”
  • f) Policies and Procedures to Assess Effectiveness of Cybersecurity Risk-Management Measures: Fully covered by ISA6 controls 1.2.1, 1.4.1, 1.5.1, 1.5.2, 1.6.2, 5.2.6, checking for regular review of ISMS effectiveness by management, up-to-date risk assessments, regular compliance checks, independent ISMS reviews, continuous improvement based on security events, and regular technical audits of IT systems and services. The three-year cycle is considered appropriate.
  • g) Basic Cyber Hygiene Practices and Cybersecurity Training: Covered by a wide range of ISA6 controls (1.1.1, 2.1.2, 2.1.3, 4.1.3, 4.2.1, 5.1.1, 5.1.2, 5.2.1, 5.2.2, 5.2.3, 5.2.4, 5.2.5, 5.2.6, 5.2.7, 5.2.8, 5.2.9, 5.3.1, 5.3.2, 5.3.3, 5.3.4). This includes information security policies, contractual obligations for staff, comprehensive training, secure management of user accounts/login info, access rights management, cryptographic procedures, information protection during transfer, change management, separation of environments, malware protection, event logging, vulnerability management, technical audits, network management, continuity planning, backup/recovery, and secure handling of information assets.
  • h) Policies and Procedures Regarding Cryptography and Encryption: Fully covered by ISA6 controls 5.1.1, 5.1.2, checking for adherence to industry standards, technical rules, lifecycle management of cryptographic keys, key sovereignty, and protection of information during transfer (including encryption).
  • i) Human Resources Security, Access Control Policies, and Asset Management: Fully covered by a comprehensive set of ISA6 controls (1.3.1, 1.3.2, 1.3.3, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 3.1.3, 3.1.4, 4.1.1, 4.1.2, 4.1.3, 4.2.1, 5.2.1, 5.2.2, 5.2.3, 5.2.4, 5.2.5, 5.2.6, 5.2.7, 5.2.8, 5.2.9). This includes identification and classification of information assets, use of approved external IT services, employee qualification for sensitive roles, contractual obligations, training, mobile work regulations, handling of supporting assets, mobile device management, identification means management, user access security, user account/login info management, access rights, change management, separation of environments, malware protection, event logging, vulnerability management, technical audits, network management, continuity planning, and backup/recovery.
  • j) Multi-factor Authentication, Continuous Authentication, Secured Communications, and Emergency Communication Systems: Fully covered by ISA6 controls 1.6.3, 4.1.2, 4.1.3, 5.1.2, 5.2.8. This involves crisis planning for communication, user authentication procedures (including strong authentication/MFA for privileged accounts), secure management of user accounts/login info, protection of information during transfer (secure voice/video/text communication), and continuity planning that includes alternative communication strategies.
  • NIS2 Article 21 (4): Immediate Corrective Measures for Non-Compliance: Requires immediate necessary, appropriate, and proportionate corrective measures upon awareness of non-compliance with Article 21 (2) measures.
  • TISAX Fulfilment: Fully covered by ISA6 controls 1.5.1, 1.5.2, checking for verification of policy observation, regular review of policies/procedures, documented results, regular compliance checks, and initiation/pursuit of corrective measures based on internal and independent reviews. The three-year cycle is deemed appropriate.

5. NIS2 Article 23: Incident Reporting

NIS2 Article 23 outlines requirements for reporting security incidents.

  • NIS2 Article 23 (1): Notification of Significant Security Incidents: Essential and important entities must notify their CSIRT or competent authority without undue delay of significant security incidents. Recipients of services must also be informed immediately. Information enabling cross-border impact determination must be provided.
  • TISAX Fulfilment: Almost fully met by ISA6 controls 1.6.1, 1.6.2. These check for defined reportable events, known reporting mechanisms based on severity, available reporting channels, handling of events by category, knowledge of reporting obligations and contact information, and communication strategies.
  • Summary: “One exception here is the disclosure of cross-border effects, which is not explicitly required within the ISA. It has already been defined here that emergency communication must be expanded to include the specifications from NIS2. Once this extension has been considered, the requirements are fully met.”
  • NIS2 Article 23 (2): Communication of Remedial Actions to Recipients: Entities must promptly communicate to affected recipients any measures or remedial actions they can take in response to a significant cyber threat, and inform them of the threat itself.
  • TISAX Fulfilment: Covered by ISA6 control 1.6.2. This includes categorization, qualification, and prioritization of reported events, appropriate responses, and communication strategies considering target recipients and reporting periods.
  • Summary: “The explicit contact information, reporting channels and languages must be included in the Business Continuity Management (BCM) by the companies following their publication by the EU member states. The auditor cannot guarantee that this information is available, as the information to be included is company-specific and can therefore take a variety of forms.”
  • NIS2 Article 23 (3): Definition of Significant Security Incident: Provides an informative definition (serious disruption or financial/material/immaterial damage).
  • TISAX Fulfilment: Purely informative, no assessable measures.
  • NIS2 Article 23 (4): Reporting Timelines and Content: Specifies detailed reporting timelines (early warning within 24 hours, incident notification within 72 hours, intermediate reports, final report within one month).
  • TISAX Fulfilment: Covered by ISA6 controls 1.6.1, 1.6.2, 1.6.3. These check for defined reportable events, mechanisms based on severity, accessible reporting channels, obligation to report, feedback procedures, categorization/prioritization, maximum response times, escalation, and crisis communication strategy.
  • Summary: “In addition to the knowledge and existence of the necessary reporting channels and deadlines, the ISA standard also requires the establishment of crisis-proof communication. At this point, the requirements of the ISA go beyond the requirements of NIS2.” Similar to 23(2), explicit contact information and channels are company-specific and not directly assessed by TISAX.
  • NIS2 Article 23 (5-11): No explicit demands on affected companies requiring preparatory measures.

6. NIS2 Article 24

  • NIS2 Article 24 (1): No explicit demands on affected companies that require preparatory measures.
  • TISAX Fulfilment: No assessable measures.

7. NIS2 Article 25: European and International Standards

NIS2 Article 25 addresses the application of European and international standards for network and information system security.

  • TISAX Fulfilment: “The requirements of NIS2 Article 25 to use European and international standards and technical specifications for the security of network and information systems to ensure the implementation of the requirements for companies resulting from NIS2 are met by an audit of an organization’s ISMS carried out in accordance with TISAX, as this report demonstrates.” No explicit demands for preparatory measures are made on companies.

8. NIS2 Articles 22, 26-29

  • NIS2 Article 22: Coordinated Risk Assessments for Critical Supply Chains: No specific requirements for companies, not considered further in this report.
  • NIS2 Articles 26-28 (Jurisdiction, Register of Entities, Domain Name Registration Data): No measures to be examined for companies, not considered in this document.
  • NIS2 Article 29: Exchange of Cybersecurity Information: “The requirements of NIS2 Article 29 are not assessed within the TISAX assessment.”

9. Overall Summary and Conclusion

The “NIS2 fulfilment through TISAX” document strongly asserts that TISAX assessments, based on the ISA requirements catalogue, provide comprehensive evidence that companies meet the material requirements of the NIS2 Directive.

  • State of the Art: ISA and TISAX are considered “state of the art” for information and cybersecurity in the automotive industry due to their continuous development by experts, application by thousands of companies, and resulting knowledge gain.
  • Management Responsibility and Risk Management: A TISAX label indicates that the management of an assessed company fulfills the responsibility required in NIS2 Article 20 and has implemented all state-of-the-art risk management measures of Article 21, provided the assessment objectives reflect overall risk and all NIS2-affected sites were included.
  • Audit Cycle: The three-year TISAX audit cycle is deemed appropriate, even compared to the two-year cycle for critical infrastructure operators under German law, due to the continuous monitoring and documentation obligations within the cycle.
  • Preparation for NIS2: Companies with a valid TISAX label are “well positioned to meet the requirements of the NIS2 directive in these areas.”
  • Reporting Requirements: TISAX provides proof of established mechanisms for mandatory reporting to authorities and customers. However, companies are responsible for integrating country-specific additional requirements and verifying them against implemented measures.

In essence, TISAX is presented as a robust framework that aligns with and often exceeds the cybersecurity requirements set forth by NIS2 for the automotive sector.

 

 

The post NIS2 Fulfillment through TISAX Assessment and ISA6 first appeared on Sorin Mustaca’s blog.

Guide for delivering frequently software features that matter (series) #2/2: Challenges and the path forward



Challenges that stop teams from delivering and how to solve them

Objection 1: “Our features are too complex for short sprints”

This is the most common objection I hear, and it reveals a fundamental misunderstanding. The solution isn’t longer sprints or more sprints — it’s better feature decomposition.

Take an e-commerce checkout flow. Instead of trying to build the entire process in one Sprint, break it down: first, just shopping cart management; next, shipping information; then payment processing; finally, order confirmation.

Each piece provides immediate value and teaches you something about user behavior.

The key insight? Users will happily use a partial feature if it solves a real problem for them. Of course, some partial features can be used on their own; others cannot.

In the above example, it makes no sense to allow ordering without being able to pay or to enter a delivery address.

It’s important to apply common sense and decompose features in such a way that they provide some value to the user or stakeholder.

Another aspect is that sometimes you don’t deliver the feature to the users right away; instead, you accumulate a few deliverables and then ship them together, when it makes sense.

The key takeaway is: there is no recipe for how small or big the features should be in order to allow delivery. Try to decompose them and use common sense about when to deliver them: individually or in sets.

 

Objection 2: “We can’t maintain quality at this pace”

Quality isn’t something you add at the end—it’s built into every step. The teams with the highest delivery frequency actually have the fewest quality issues because they’ve automated their quality checks and made them part of their daily workflow.

But this has one mandatory requirement: the automation must be in place.

If you postpone automation, you eventually run into technical debt, which is more expensive to pay off later.

 

Objection 3: “Our stakeholders don’t understand this approach” or “they don’t know what they want”

Stakeholder education is crucial. They need to understand that their active participation is what makes frequent delivery valuable. Regular “show and tell” sessions where stakeholders can actually use the software create enthusiasm and provide immediate feedback.

One technique that works well: frame frequent delivery as risk reduction. Instead of betting everything on a big release, you’re placing smaller, safer bets that can be adjusted based on market response.

Ask for feedback about what you delivered and what you plan to deliver. You will see that even if the stakeholders don’t know exactly what they want, they will find it easier to provide feedback or corrections to your plans.

 

Advanced strategies for teams

Release planning without rigidity

While Scrum focuses on Sprint-level planning, successful teams also think several Sprints ahead. I use story mapping to visualize how features relate to user workflows, which helps identify what should be delivered together versus what can stand alone.

Think of it as planning a road trip—you know your major destinations but remain flexible about the exact route based on what you discover along the way.

Manage dependencies

Dependencies kill delivery predictability. The best teams minimize them through smart architecture choices (like microservices) and careful Sprint planning. When dependencies exist, make them visible through dependency boards that show how different teams’ work interconnects.

Define and collect metrics that actually matter

Velocity is useful for Sprint planning, but business metrics tell the real story.

  • Did you receive any feedback or complaints from customers/users/stakeholders?
  • How quickly can you respond to customer requests?
  • How often do users engage with new features?
  • How many bugs did you have in the last delivery?
  • Were the features delivered used?

These metrics show whether frequent delivery actually translates into business success.
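To show how lightweight such tracking can be, here is a minimal sketch that answers two of the questions above (feature usage and bugs per delivery) from exported data; the event and bug records are hypothetical, and the analytics or issue-tracker export format is an assumption.

```python
# Illustrative metrics from hypothetical product-analytics and issue-tracker exports.
from collections import Counter

usage_events = [
    {"user": "u1", "feature": "export_pdf"},
    {"user": "u2", "feature": "export_pdf"},
    {"user": "u2", "feature": "dark_mode"},
    {"user": "u3", "feature": "dark_mode"},
]
bugs = [
    {"id": 101, "release": "2024.05"},
    {"id": 102, "release": "2024.06"},
    {"id": 103, "release": "2024.06"},
]

active_users = {e["user"] for e in usage_events}
adopters = {e["user"] for e in usage_events if e["feature"] == "export_pdf"}
print(f"export_pdf adoption: {len(adopters) / len(active_users):.0%}")   # 67%

print(Counter(b["release"] for b in bugs))   # bugs per delivery
```

Even a rough number per Sprint is enough to start a conversation with stakeholders about whether the delivered features are actually used.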

Building the culture that makes it work

Creating psychological safety

Frequent delivery requires teams to take risks and experiment. This only works when people feel safe to voice concerns, to make mistakes, and to admit them.

The goal is not to make mistakes, but to be aware that they might occur and react accordingly.

In my retrospectives, I focus on systems and processes, not individual performance.

When problems arise, we ask “how do we prevent this?” not “who caused this?”

Yes, sometimes direct feedback is needed, but in general I try to focus that feedback on myself and less on other team members.

 

Real customer collaboration

The Agile Manifesto’s emphasis on customer collaboration isn’t just philosophy—it’s a practical necessity.

Whenever feasible, try to involve actual end users in sprint reviews, not just business stakeholders. Their feedback often reveals usability issues that internal teams miss.

Implement user analytics directly in your application to provide continuous insight into how people actually use your software.

 

Instead of conclusions

Mastering frequent delivery is a journey, not a destination.

The teams I’ve worked with who succeed share three characteristics:

  • They embrace change as opportunity,
  • They prioritize working software over comprehensive documentation (who doesn’t?), and
  • They value collaboration over rigid processes.

Start with the fundamentals—reliable Sprint execution and solid engineering practices—then layer on advanced techniques as your team matures.

The goal isn’t perfection; it’s continuous progress toward more effective value delivery.

Organizations that master frequent delivery gain significant competitive advantage. They respond quickly to market changes, incorporate user feedback rapidly, and create more engaging work environments where team members see the immediate impact of their efforts.

Your journey starts with the next Sprint. Focus on delivering something valuable to users, measure their response, and use that learning to make the next Sprint even better.

That’s the path to software that actually matters.

The post Guide for delivering frequently software features that matter (series) #2/2: Challenges and the path forward first appeared on Sorin Mustaca on Cybersecurity.

Guide for delivering frequently software features that matter (series) #1/2: the Pillars of successful frequent delivery


Guide for delivering frequently software features that matter: the three Pillars of successful frequent delivery

If you’re a software engineer older than 30, then you have definitely worked following a non-agile methodology.

Those methodologies are based on a fixed structure, a lot of planning, and hope that everything will go as planned. And they never worked 🙂

 

Small bets, less risk

After helping many teams transform their delivery approach over the past two decades, I’ve learned that the most successful software projects share one trait: they deliver working software early and often. Think of it like learning to cook—you taste as you go rather than waiting until the entire meal is prepared to discover it needs salt, or that it has too much.

Scrum’s power lies in its ability to turn software development from a high-stakes gamble into a series of small, manageable bets. It basically lowers the risk of building something that turns out to be a failure before it is even released.

Instead of spending months building features that might miss the mark, you deliver value every 2 weeks and course-correct based on real user/stakeholder feedback.

 

The Three Pillars of successful frequent delivery

1. Sprint Planning that actually delivers value

Here’s where most teams go wrong: they focus on completing tasks instead of delivering outcomes.

In my experience, the magic question that transforms Sprint planning is: “What could we deliver to users at the end of this Sprint that would make them say ‘this is useful’?”

Or, if you’re not that far yet, think in terms of: what do we have to do in order to have something to show to customers/users/stakeholders?

This shift in thinking leads to what I call “vertical slicing”—delivering complete, end-to-end functionality rather than building in horizontal layers.

Instead of spending a sprint on a “database framework,” you deliver a complete feature like “user login” that touches the database, business logic, and user interface.

Or, instead of building a “GUI framework”, implement one GUI element and make it testable. You will still need to lay the foundation of the GUI framework, but you will likely (or hopefully) implement only those parts needed to deliver that one element.

 

2. Your Definition of Done (DoD) is your safety net

The Definition of Done isn’t bureaucracy—it’s your insurance policy against the dreaded “90% complete” syndrome. I’ve seen too many teams rush to demo features that weren’t actually ready for users, creating technical debt that haunts them for months.

A solid Definition of Done includes peer reviews, automated tests, security checks, performance validation, and sometimes stakeholder approval.

Think of it as your quality gateway: nothing passes through unless it meets production standards.

 

3. What enables speed

CI/CD

Continuous Integration isn’t just a nice-to-have—it’s the foundation that makes frequent delivery possible. When code is integrated and tested frequently, you eliminate the integration nightmares that plague traditional development.

Anything manual, especially testing, costs more time in the long run. And in software development you are running a multi-stage marathon. Invest in automated end-to-end testing and you invest the time once, not in every release cycle.
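
As an example of what “invest the time once” can look like, here is a minimal automated end-to-end test written with pytest and requests. The base URL, the /login contract, and the seeded demo account are assumptions for this sketch.

```python
# Minimal end-to-end test: exercise the running service the way a user would.
# BASE_URL, the /login contract, and the demo credentials are assumptions.
import os

import requests

BASE_URL = os.environ.get("APP_URL", "http://localhost:5000")


def test_login_rejects_wrong_password():
    response = requests.post(
        f"{BASE_URL}/login",
        json={"email": "demo@example.com", "password": "wrong-password"},
        timeout=5,
    )
    assert response.status_code == 401


def test_login_accepts_seeded_demo_user():
    response = requests.post(
        f"{BASE_URL}/login",
        json={"email": "demo@example.com", "password": os.environ.get("DEMO_PASSWORD", "demo-password")},
        timeout=5,
    )
    assert response.status_code == 200
```

Once tests like these run in the pipeline on every commit, the cost is paid once instead of at every release.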

 

Main branch development

The teams who excel at frequent delivery have embraced “trunk-based development” where everyone works from the main branch. This forces smaller, more frequent commits and prevents the merge conflicts that can derail Sprint goals.

You might say this is not always possible, and I agree. Sometimes you need a branch to allow parallel development of a larger feature that you don’t want to deliver step by step. While I don’t like this approach, I understand that it sometimes makes sense.

But even in such cases you can apply the same strategy on the parallel branch: make many small commits so that you can release often and test often.
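
A technique that often goes hand in hand with trunk-based development, although not discussed above, is the feature flag: unfinished work can be merged to the main branch but stays switched off until it is ready. A minimal sketch, with the flag read from an environment variable purely for illustration:

```python
# Minimal feature-flag sketch: unfinished code can live on the main branch but stays dark.
# Reading flags from environment variables is an assumption made for this example.
import os


def is_enabled(flag_name: str) -> bool:
    """A flag is on when FEATURE_<NAME>=1 is set in the environment."""
    return os.environ.get(f"FEATURE_{flag_name.upper()}") == "1"


def render_dashboard() -> str:
    if is_enabled("new_reports"):
        return "dashboard with the new (still incomplete) reports section"
    return "dashboard as users know it today"
```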

 

I’ll stop here for now, but as you can see, there are many challenges that stop teams from releasing often.

I’ll address them in the next article in this series.

The post Guide for delivering frequently software features that matter (series) #1/2: the Pillars of successful frequent delivery first appeared on Sorin Mustaca on Cybersecurity.


Beyond “Move Fast and Fail Fast”: Balancing Speed, Security, and … Sanity in Software Development (with Podcast)


Move fast and fail fast

In software development, the mantra “move fast and fail fast” has become both a rallying cry and a source of considerable debate.

It champions rapid iteration, prioritizing speed and output, often at the perceived expense of meticulous planning and architectural foresight. This approach, deeply intertwined with the principles of agile development, presents a stark contrast to the traditional model of lengthy planning cycles, rigorous architecture design, and a focus on minimizing risk through exhaustive preparation.

Fail fast

The allure of “fast” is undeniable. In today’s competitive market, speed to market can be the difference between success and failure. Rapid prototyping allows for early user feedback, facilitating continuous improvement and ensuring the product aligns with real-world needs. In essence, it’s about validating hypotheses quickly and pivoting when necessary. This iterative approach, inherent in agile methodologies, fosters a culture of adaptability and responsiveness, crucial in environments where change is the only constant.

So “fail fast” mostly means validating the MVP (minimum viable product) quickly and dropping it if the results are unsatisfactory. This is generally a good thing, because it is an optimal use of resources.

Speed vs. Integrity

However, the emphasis on speed can raise legitimate concerns, particularly regarding security and long-term architectural integrity.

The fear is that a “move fast” mentality might lead to shortcuts, neglecting essential security considerations and creating a foundation prone to technical debt.

This is where the misconception often lies: “fast” in this context does not necessitate “insecure” or “bad.” Rather, it implies a prioritization of development output, which can, and should, be balanced with robust security practices and a forward-thinking architectural vision.

But how can this forward thinking be achieved when the team is focused mostly on delivering value in order to validate its assumptions with customers?

The key lies in understanding that agile development, when implemented effectively, incorporates security and architecture as an integral part of the process.

Concepts like “shift left security” emphasize integrating security considerations early in the development lifecycle, rather than as an afterthought.

Automated security testing, continuous integration/continuous deployment (CI/CD) pipelines with security gates, and regular security audits can be woven into the fabric of rapid development, ensuring that speed does not compromise security.
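
As a small illustration of such a security gate, the sketch below breaks the build when a scanner report contains high-severity findings. The report format and field names are assumptions, not the output of any specific tool.

```python
# Sketch of a CI security gate: fail the build on high-severity findings.
# The JSON report format ({"id": ..., "severity": ...} entries) is an assumption.
import json
import sys

BLOCKING_SEVERITIES = {"HIGH", "CRITICAL"}


def gate(report_path: str) -> int:
    with open(report_path) as handle:
        findings = json.load(handle)
    blocking = [f for f in findings if f.get("severity", "").upper() in BLOCKING_SEVERITIES]
    for finding in blocking:
        print(f"Blocking finding: {finding.get('id')} ({finding.get('severity')})")
    return 1 if blocking else 0


if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```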

Validating early in the process proves not only that the product meets expectations, but also that the architecture it is built on holds up.

The traditional approach

On the other hand, the traditional approach, with its emphasis on extensive planning and architecture, offers the perceived stability of a well-defined blueprint.

However, this approach carries its own risks. The extended planning phase can lead to delays, rendering the final product obsolete by the time it reaches the market. Moreover, the rigid nature of pre-defined architectures can hinder adaptability, making it difficult to respond to unexpected changes in user needs or market dynamics. The risk of “failing due to delays and lack of adaptation” is a real threat in fast-paced environments.

The modern software developer must navigate this tension, finding a balance between speed and stability. This involves adopting a pragmatic approach, leveraging the benefits of agile methodologies while mitigating the associated risks.

This can involve:

  • Establishing clear security guidelines and incorporating them into the development process. An SSDLC (secure software development lifecycle) is mandatory when you have to deliver fast.
  • Prioritizing a modular and adaptable architecture that can evolve with changing requirements. Modules should be quick to implement and painless to drop if they prove unsuccessful.
  • Implementing robust testing and monitoring to identify and address issues early on. A CI/CD pipeline lets the team focus on delivering new features rather than on constant manual testing and integration.
  • Fostering a culture of continuous learning and improvement, where developers are encouraged to experiment and innovate while remaining accountable for security and quality.
  • Utilizing threat modeling and risk assessment early in the design process. Threat modeling includes a risk assessment which, when done properly, prevents major issues later (a minimal scoring sketch follows this list).
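
For the last point, the arithmetic behind a basic risk assessment is simple: risk is likelihood times impact, compared against a threshold the team agrees on. The threats, scores, and threshold below are invented for illustration.

```python
# Minimal risk-assessment sketch: risk = likelihood x impact on 1-5 scales.
# Threats, scores, and the acceptance threshold are invented for illustration.
THREATS = [
    {"threat": "credential stuffing on the login endpoint", "likelihood": 4, "impact": 4},
    {"threat": "API key leaked in a public repository", "likelihood": 2, "impact": 5},
    {"threat": "verbose errors exposing stack traces", "likelihood": 3, "impact": 2},
]
ACCEPTABLE_RISK = 9  # anything above this needs a mitigation before release

for item in THREATS:
    risk = item["likelihood"] * item["impact"]
    verdict = "mitigate now" if risk > ACCEPTABLE_RISK else "accept and monitor"
    print(f'{item["threat"]}: risk={risk} -> {verdict}')
```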

Instead of Conclusions: my experience

Ultimately, the most effective approach is not about choosing between “fast” and “slow,” but about finding the right cadence of delivering value for each specific project.

The goal is to constantly deliver small pieces of code that bring value, while avoiding a failure of the project as a whole. If deliverables are constantly validated, a failure can only affect a small increment, which can be quickly improved, completely removed, or replaced with something else.

What matters is to learn from it quickly and adapt, ensuring that software development remains a dynamic and evolving process.

When I run a project, I define the goal and the high-level path to achieve it. Sometimes this path is clear; sometimes many experiments are needed, and some will fail while others succeed.

The post Beyond “Move Fast and Fail Fast”: Balancing Speed, Security, and … Sanity in Software Development (with Podcast) first appeared on Sorin Mustaca on Cybersecurity.

Project management with Scrum (with Podcast)


They can’t mix, can they?

Seems like a contradiction to talk about classical project management and the best agile software development methodology?

But let me ask you this: ever feel like traditional project management is great for mapping out the big picture but falls short when it comes to the nitty-gritty of execution?

And conversely, while Scrum is fantastic for rapid iteration and delivering value quickly, it sometimes lacks that long-term strategic view?

If you feel this, then you’re not alone!

Yes, they can mix

Let’s talk about how to get the best of both worlds when managing projects: having a solid long-term plan and the flexibility to adapt and deliver quickly.

Sometimes it feels like traditional project management is great for the big picture but not so hot on the details, right?

And Scrum is awesome for getting stuff done in short bursts, but can sometimes lose sight of the overall direction.

Turns out, a lot of teams are finding a sweet spot by mixing these two. Think of it like having a good map for your road trip and a sturdy vehicle to handle any bumps along the way.

So, what does each approach bring to the party?

Classical Project Management: The Grand Plan

Imagine classical project management as your strategic guide. It’s all about figuring out the project’s scope, setting those long-term goals, marking important milestones, and creating a project plan.

We’re talking budget, resources, timeline – the whole thing.

It’s about answering the big questions:

  • What are we trying to do?
  • When does it need to be finished?
  • How much will it cost?
  • Who’s in charge of what?

This is great for having a clear vision and a roadmap. It helps everyone stay on the same page and lets you track progress.

The tricky part? Sometimes those detailed plans can go out of date pretty fast. Because things change, right?

 

Scrum: Getting Things Done

Now, Scrum is your agile friend. It’s built for doing things in short bursts, perfect for navigating the twists and turns of, well, pretty much any project.

You break the project into smaller chunks – sprints – usually 2 weeks long. Each sprint has specific goals, and the team works together to deliver something useful by the end.

Scrum is all about talking to each other a lot, having quick daily meetings, and checking in regularly. It’s about being flexible and delivering value bit by bit.

Scrum is great at handling feedback, adding new stuff, and showing real results quickly.

The thing is, on its own, Scrum can lack the long-term direction that classical project management provides.

The Perfect Mix: Working Together, Delivering Fast

The magic happens when you put these two together:

  • You use classical project management to set the long-term vision, make the initial plan, and decide where you’re going. This gives you a good map.
  • Use Scrum to actually get there, one sprint at a time. Scrum becomes your engine for delivering value along the route laid out by classical project management.

Here’s a simple way to think about it:

  1. Big Picture: Classical project management sets the overall project scope, goals, and timeline. Everyone knows what the target is.

  2. Breaking it Down: The project gets broken down into smaller pieces, often using the classical project management approach. This makes the work manageable.

  3. Sprint Time: The Scrum team takes a chunk of work and plans it out for a sprint. They figure out what they can realistically do in that time.

  4. Daily Check-ins: The team has quick daily meetings to talk about progress, any problems, and adjust as needed. Keeps everyone in sync.

  5. Show and Tell: At the end of each sprint, the team shows what they’ve built and gets feedback. This feedback helps plan future sprints.

  6. Getting Better: Regular team meetings let everyone think about how they’re working and find ways to improve.

So, by mixing classical project management and Scrum, you get the best of both worlds. You have a clear long-term plan and the flexibility to adapt and deliver quickly. It’s a great way to work together, deliver fast, and make sure projects stay on track while being able to handle whatever comes up.
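
One way to picture the hybrid is as data: the classical plan owns the milestone and its deadline, and Scrum fills it with sprint-sized goals. The milestone, dates, and sprint goals below are invented purely to illustrate the mapping.

```python
# Toy sketch of a hybrid plan: a classical milestone decomposed into sprint goals.
# Milestone name, due date, and sprint contents are invented for illustration.
from datetime import date

plan = {
    "milestone": "Public beta",
    "due": date(2025, 6, 30),
    "sprints": [
        {"goal": "user can register and log in", "weeks": 2},
        {"goal": "user can create and share a report", "weeks": 2},
        {"goal": "billing for the first paid tier", "weeks": 2},
    ],
}

committed_weeks = sum(sprint["weeks"] for sprint in plan["sprints"])
print(f'{plan["milestone"]} due {plan["due"]}: {committed_weeks} weeks of sprint work planned')
```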

The post Project management with Scrum (with Podcast) first appeared on Sorin Mustaca on Cybersecurity.

Comparing “Records of Processing Activities” (ROPA) and “Data Protection Impact Assessments” (DPIA) (with Podcast)

Understanding ROPA and DPIA: Key GDPR Concepts for Tech Companies



Let’s explore two essential components of GDPR compliance: Records of Processing Activities (ROPA) and Data Protection Impact Assessments (DPIA).

ROPA provides a comprehensive overview of your data handling, while DPIA focuses on assessing and mitigating risks for specific, higher-risk activities.

Records of Processing Activities (ROPA): Your Company’s Data Map

Think of ROPA as your company’s data map. It documents every step of the data journey, from collection to deletion.

It covers what data you collect, why and how you process it, and with whom you share it.

A well-maintained ROPA is crucial for demonstrating GDPR compliance and building trust with your users.

What ROPA Covers

  • Purposes of Processing: Be specific! Instead of “marketing,” say “personalized email marketing based on user browsing history” or “improving product recommendations based on user purchase data.”
  • Categories of Data Subjects: Identify who the data relates to (e.g., customers, employees, website visitors, app users).
  • Categories of Personal Data: List the types of data you process (e.g., name, email address, IP address, location data, browsing history, biometric data).
  • Recipients of Personal Data: Specify who you share data with (e.g., cloud storage providers, marketing agencies, analytics platforms, law enforcement). Include both internal and external recipients.
  • Transfers to Third Countries: If you transfer data outside the EU, document the safeguards in place (e.g., adequacy decisions, standard contractual clauses).
  • Data Retention Periods: Specify how long you keep different types of data. This should be based on legal requirements and business needs.
  • Technical and Organizational Security Measures: Briefly describe the security measures you have in place to protect the data (e.g., encryption, access controls, data masking).
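
In practice, each ROPA entry can be kept as structured data so it is easy to review and update. Below is a minimal sketch of one record; the fields mirror the list above and the values are invented, so adapt it to your own Article 30 obligations.

```python
# Minimal sketch of a single ROPA entry kept as structured data.
# Field names mirror the list above; all values are invented examples.
from dataclasses import dataclass, field


@dataclass
class RopaEntry:
    processing_activity: str
    purpose: str
    data_subjects: list[str]
    data_categories: list[str]
    recipients: list[str]
    third_country_transfers: str
    retention_period: str
    security_measures: list[str] = field(default_factory=list)


newsletter = RopaEntry(
    processing_activity="Email newsletter",
    purpose="Personalized email marketing based on explicit opt-in",
    data_subjects=["customers", "website visitors"],
    data_categories=["name", "email address", "browsing history"],
    recipients=["email delivery provider (processor)"],
    third_country_transfers="None (EU-hosted provider)",
    retention_period="Until consent is withdrawn, then 30 days",
    security_measures=["encryption at rest", "role-based access control"],
)
```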

ROPA Examples for Tech Companies

  • Social Media Platform: A social media platform’s ROPA would detail processing activities related to user profiles, posts, photos, friend connections, messaging, targeted advertising, and data analytics. It would specify data categories (e.g., profile information, IP address, location data, browsing history), purposes (e.g., personalized content delivery, targeted advertising, platform improvement), and recipients (e.g., advertising partners, analytics providers).
  • SaaS Provider: A SaaS provider’s ROPA would document processing related to user account management, data storage, application usage tracking, customer support interactions, and billing. It would include details about data categories (e.g., user credentials, company data, usage logs), purposes (e.g., providing the service, improving performance, customer support), and recipients (e.g., cloud hosting providers, payment processors).
  • Mobile App Developer: A mobile app developer’s ROPA would cover data processing within the app, such as collecting user location data for personalized recommendations, accessing contacts for social features, or tracking in-app purchases. It would detail the data categories (e.g., location, contacts, purchase history), purposes (e.g., personalized recommendations, social features, in-app advertising), and recipients (e.g., location services providers, advertising networks).

Data Protection Impact Assessments (DPIA): Proactive Risk Management

A DPIA is a more in-depth analysis triggered by specific processing activities that pose a high risk to individuals.

With a DPIA you identify risks, find ways to mitigate them, and demonstrate that you have considered data protection throughout the process.

What DPIA Covers

  • Description of the Processing Operations: Clearly explain the planned processing, including the purposes, data categories, and processing methods.
  • Necessity and Proportionality: Justify why the processing is necessary and proportionate to the intended purpose. Are there less intrusive ways to achieve the same goal?
  • Assessment of Risks to Individuals: Identify potential risks to individuals’ rights and freedoms, such as identity theft, discrimination, loss of control over their data, or reputational damage. Consider the likelihood and severity of these risks.
  • Measures to Address the Risks: Describe the measures you will implement to mitigate the identified risks. This might include technical measures (e.g., encryption, anonymization), organizational measures (e.g., access controls, data minimization policies), and legal measures (e.g., data processing agreements).
  • Consultation with Data Protection Authorities (DPA): In some cases, you may need to consult with your local DPA before carrying out high-risk processing.
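
The risk part of a DPIA also lends itself to a simple structured form: each risk to individuals gets a likelihood and a severity, is mapped to mitigations, and a high residual score is a signal to consider consulting the DPA. The scales, values, and threshold in this sketch are invented for illustration.

```python
# Sketch of DPIA risk entries: likelihood and severity on 1-3 scales (low/medium/high),
# each mapped to mitigations. Scales, values, and the threshold are invented examples.
from dataclasses import dataclass


@dataclass
class DpiaRisk:
    description: str
    likelihood: int  # 1 = low, 3 = high
    severity: int    # 1 = low, 3 = high
    mitigations: list[str]

    @property
    def score(self) -> int:
        return self.likelihood * self.severity


risks = [
    DpiaRisk("re-identification of users from recommendation profiles", 2, 3,
             ["data minimization", "aggregation before analysis"]),
    DpiaRisk("discriminatory recommendations for protected groups", 2, 3,
             ["bias testing before release", "human review of model changes"]),
]

CONSULT_DPA_ABOVE = 6  # residual high risk may call for prior consultation (GDPR Art. 36)
for risk in risks:
    print(f"{risk.description}: score={risk.score}, mitigations={', '.join(risk.mitigations)}")
    if risk.score > CONSULT_DPA_ABOVE:
        print("  -> consider prior consultation with the supervisory authority")
```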

DPIA Examples for Tech Companies

  • Facial Recognition Software: A company developing facial recognition software for security purposes would need a DPIA. The DPIA would assess risks related to accuracy, bias, potential for misuse, and impact on individuals’ privacy and freedom of movement. Mitigation measures might include strict access controls, data anonymization techniques, and clear guidelines for use.
  • AI-Powered Recommendation Engine: A company launching a new AI-powered personalized recommendation engine that analyzes large volumes of user data would require a DPIA. The DPIA would analyze the risks of profiling, discrimination, and loss of privacy. Mitigation measures could include data minimization, differential privacy techniques, and user consent mechanisms.
  • Biometric Authentication: A company implementing large-scale biometric authentication for access control would need a DPIA. The DPIA would evaluate the risks of data breaches, identity theft, and potential misuse of biometric data. Mitigation measures could include secure storage of biometric data, multi-factor authentication, and strict access controls.

ROPA and DPIA: Similarities and Differences

ROPA and DPIA are like two sides of the same coin – both essential for responsible data handling under GDPR. They work together to ensure your data processing is transparent, accountable, and respects individuals’ privacy.

Similarities

  • GDPR Compliance:
    • Both ROPA and DPIA are mandated by the GDPR (Articles 30 and 35, respectively).
    • They’re not optional; they’re legal requirements for many organizations.   
  • Focus on Data Protection:
    • At their core, both aim to protect individuals’ rights and freedoms related to their personal data.
    • They promote a privacy-first approach to data processing.
  • Documentation is Key:
    • Both require thorough documentation.
    • ROPA is the documented record of your processing activities, and DPIA results in a documented risk assessment report.
    • Good record-keeping is crucial for demonstrating compliance.   
  • Accountability:
    • Both contribute to demonstrating accountability.
    • By maintaining a ROPA and conducting DPIAs, you show that you’re taking data protection seriously and actively managing risks. 

Differences

  • Scope:
    • ROPA covers all your data processing activities,
    • DPIA focuses on specific, high-risk processing activities.
    • Think of ROPA as the big picture and DPIA as a focused close-up.
  • Purpose:
    • ROPA’s primary purpose is to document and provide transparency about all your data processing.
    • DPIA’s main goal is to assess and mitigate the risks of particular processing activities that are likely to be high-risk.  
  • Requirement:
    • ROPA is a general requirement for most organizations (mandatory for those with 250 or more employees, and for smaller ones whose processing is not occasional, is risky, or involves sensitive data).
    • DPIA is only required when processing activities are likely to result in a high risk to individuals’ rights and freedoms. It’s triggered by specific circumstances.
  • Outcome:
    • ROPA produces a comprehensive record of your processing activities.
    • DPIA results in a risk assessment report outlining potential risks and the measures you’ll take to mitigate them.
    • One is a detailed inventory, the other a focused risk analysis.  
  • Timing:
    • ROPA is an ongoing requirement – you need to keep it updated as your processing activities change.
    • DPIA is conducted for specific projects or plans before they are implemented. It is a point-in-time assessment. 

In a nutshell:

  • ROPA is your ongoing data processing inventory, demonstrating your overall approach to data protection.
  • DPIA is a targeted risk assessment for specific, potentially high-risk projects, ensuring you’ve considered and addressed privacy concerns before they become a problem.
  • Both are essential tools in your GDPR compliance toolkit.

The post Comparing “Records of Processing Activities” (ROPA) and “Data Protection Impact Assessments” (DPIA) (with Podcast) first appeared on Sorin Mustaca on Cybersecurity.