Navigating AI Standards and Regulations

Note: This post was written with a lot of help from AI, which was used to summarize the standards mentioned below.

 

Artificial intelligence (AI) is reshaping industries, but it also brings new risks.

From security vulnerabilities to compliance challenges, organizations must balance innovation with responsibility.

New standards have been created and more are emerging to guide this effort, most notably ISO/IEC 42001, ISO/IEC 22989, the NIST AI RMF, and the EU AI Act.

Together, they define how we should understand, manage, and regulate AI.

 

The Standards: ISO/IEC 42001, ISO/IEC 22989, NIST AI Risk Management Framework (AI RMF)

ISO/IEC 22989 focuses on concepts and terminology. By standardizing the language around AI, it ensures consistency in communication between developers, regulators, and policymakers. It provides a shared foundation for technical and strategic discussions, making it easier to align projects and compliance efforts.

 

ISO/IEC 42001 sets the framework for an Artificial Intelligence Management System (AIMS). As if we didn’t have enough Management Systems (ISMS, CSMS, DRMS, etc.), now we have AIMS.

It provides requirements for organizations to govern AI responsibly throughout its lifecycle.

Much like ISO 27001 for information security, this standard enables organizations to implement repeatable processes, assign roles, manage risks, and continuously improve their AI practices.

In short, ISO/IEC 22989 tells us how to talk about AI, while ISO/IEC 42001 tells us how to manage it.

The NIST AI Risk Management Framework (AI RMF) was developed by the U.S. National Institute of Standards and Technology. It gives guidance on managing the risks of AI systems: trustworthiness, safety, fairness, explainability, and so on.

NIST also publishes “crosswalks” that map the AI RMF to international frameworks such as the ISO/IEC standards and the OECD AI guidelines.

 

The Regulation: EU AI Act

The EU AI Act goes beyond voluntary standards. It is a regulation with binding legal requirements for AI systems placed on the EU market.

The Act classifies AI systems by risk:

  • Unacceptable risk systems (e.g., manipulative or exploitative applications) are prohibited.
  • High-risk systems (e.g., AI in healthcare, critical infrastructure, recruitment) must meet strict conformity assessments, documentation, and testing requirements.
  • Limited-risk systems face transparency obligations, while minimal-risk systems face no specific restrictions.
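
As a rough, purely illustrative sketch of how an organization might record these tiers in an internal AI inventory, here is a minimal Python example. The tier names come from the Act itself, but the example use cases, the mapping, and the default-to-high fallback are assumptions for illustration, not legal classifications.

```python
from enum import Enum

class EUAIActRiskTier(Enum):
    """Risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict conformity assessment, docs, testing
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific restrictions

# Illustrative examples only -- real classification requires legal review
# of the Act's annexes and prohibited-practices list, not a lookup table.
EXAMPLE_USE_CASES = {
    "social_scoring": EUAIActRiskTier.UNACCEPTABLE,
    "cv_screening_for_recruitment": EUAIActRiskTier.HIGH,
    "medical_diagnosis_support": EUAIActRiskTier.HIGH,
    "customer_service_chatbot": EUAIActRiskTier.LIMITED,
    "spam_filter": EUAIActRiskTier.MINIMAL,
}

def classify(use_case: str) -> EUAIActRiskTier:
    """Look up a use case; unknown cases default to HIGH to force review."""
    return EXAMPLE_USE_CASES.get(use_case, EUAIActRiskTier.HIGH)

if __name__ == "__main__":
    for uc in EXAMPLE_USE_CASES:
        print(f"{uc}: {classify(uc).value}")
    print(f"unlisted_system: {classify('unlisted_system').value}")
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative choice: it forces a manual legal review rather than silently treating an unclassified system as unregulated.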

Unlike ISO standards, which are voluntary, the EU AI Act is legally binding. Non-compliance may lead to heavy fines and product bans.

 

Comparing Standards and Regulation

  • ISO/IEC 22989 provides consistent terminology.
  • ISO/IEC 42001 defines organizational governance for AI.
  • The NIST AI RMF provides guidance on managing the risks of AI systems: trustworthiness, safety, fairness, explainability.
  • EU AI Act imposes legally binding obligations at the product and deployment level.

While ISO and NIST standards are process-driven and supportive, the EU AI Act mandates specific outcomes.

Organizations can use ISO/IEC 42001 to establish governance processes that make compliance with the EU AI Act easier, but certification alone does not replace the legal requirements.

Unlike the EU AI Act, U.S. standards tend to be voluntary or guidance-based, not binding across all states or businesses. There is no single federal law comprehensively regulating AI yet; instead, there is a patchwork of executive orders, agency actions, state laws, and voluntary standards.

The U.S. places strong emphasis on risk management frameworks, public-private collaboration, innovation, and alignment with international standards.

The U.S. also hosts further AI standardization efforts, such as the Center for AI Standards and Innovation (CAISI), along with various initiatives and plans for AI systems. In addition, some state laws and regulations require certain large AI model developers to publicly disclose safety protocols and to report specific kinds of risks or incidents (for example, California SB 53).

 

Key Risks Introduced by AI

  1. Model drift and performance risk — AI model performance can degrade over time as real-world data shifts away from the training distribution, causing hidden failures (a minimal drift-check sketch follows this list).
  2. Bias and discrimination — Training data can produce unfair outcomes, raising legal and ethical issues.
  3. Lack of explainability — Black-box models hinder audits, accountability, and trust.
  4. Data protection risks — Models may leak or memorize personal data, creating privacy concerns.
  5. Security vulnerabilities — Adversarial attacks, poisoning, and prompt injection threaten system integrity.
  6. Supply chain dependency — Reliance on third-party models introduces hidden weaknesses.
  7. Regulatory non-compliance — Misclassifying risk or skipping assessments can result in fines and reputational damage.
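
To make risk no. 1 concrete, here is a minimal drift-check sketch. It uses a two-sample Kolmogorov-Smirnov test (via `scipy.stats.ks_2samp`) to compare the live distribution of a single numeric input feature against a training-time reference sample; the feature, sample sizes, and significance threshold are illustrative assumptions, and production monitoring usually relies on dedicated tooling rather than a hand-rolled check.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(reference: np.ndarray,
                         live: np.ndarray,
                         alpha: float = 0.01) -> bool:
    """Flag drift when the live distribution of one numeric feature
    differs significantly from the training-time reference sample.

    Uses a two-sample Kolmogorov-Smirnov test; alpha is an
    illustrative threshold, not a recommended production value.
    """
    statistic, p_value = ks_2samp(reference, live)
    if p_value < alpha:
        print(f"Drift suspected: KS statistic={statistic:.3f}, p={p_value:.4f}")
        return True
    return False

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    ref = rng.normal(loc=0.0, scale=1.0, size=5_000)      # training-time sample
    same = rng.normal(loc=0.0, scale=1.0, size=5_000)     # same distribution
    shifted = rng.normal(loc=0.5, scale=1.0, size=5_000)  # mean-shifted inputs
    print("no-drift case flagged:", detect_feature_drift(ref, same))
    print("drift case flagged:   ", detect_feature_drift(ref, shifted))
```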

How Standards Address These Risks

  • ISO/IEC 22989 provides the shared terminology that makes measurement and reporting unambiguous.
  • ISO/IEC 42001 and the NIST AI RMF require lifecycle controls, risk assessments, monitoring, and continuous improvement.
  • EU AI Act mandates transparency, testing, and conformity assessments tailored to specific use cases.

When combined, these frameworks help organizations create trustworthy AI systems while meeting regulatory demands.

 

The Next Level of Compliance

To reach the “next level” of compliance, organizations must integrate voluntary standards and mandatory regulation into one cohesive program:

  1. Adopt common terminology using ISO/IEC 22989 across all teams.
  2. Implement an AI management system aligned with ISO/IEC 42001.
  3. Map AI products against EU risk categories and prepare compliance checklists.
  4. Generate technical evidence such as model cards, data lineage, and test results (a minimal model-card sketch follows this list).
  5. Automate monitoring and incident response to detect model drift and adversarial attacks.
  6. Integrate privacy engineering to ensure alignment with GDPR.
  7. Secure the AI supply chain by tracking third-party components and models.
  8. Prepare for external audits and conformity assessments, leveraging ISO processes as supporting evidence.
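
As a minimal sketch of step 4, the snippet below assembles a model card and writes it out as JSON. The field names follow the general model-card idea rather than any specific standard's schema, and every value shown is a placeholder to be filled in from real documentation and test results.

```python
import json
from datetime import date

def build_model_card(name: str, version: str, metrics: dict) -> dict:
    """Assemble a minimal model card as a plain dict.

    Field names are illustrative, not taken from any particular schema.
    """
    return {
        "model_name": name,
        "version": version,
        "date": date.today().isoformat(),
        "intended_use": "PLACEHOLDER: approved use cases",
        "out_of_scope_use": "PLACEHOLDER: prohibited use cases",
        "training_data": "PLACEHOLDER: dataset names and lineage",
        "evaluation_metrics": metrics,
        "eu_ai_act_risk_tier": "PLACEHOLDER: e.g. 'high'",
        "known_limitations": "PLACEHOLDER: bias, drift, edge cases",
    }

if __name__ == "__main__":
    card = build_model_card(
        name="cv-screener",
        version="1.2.0",
        metrics={"accuracy": 0.91, "fairness_gap": 0.03},
    )
    with open("model_card.json", "w") as f:
        json.dump(card, f, indent=2)
    print(json.dumps(card, indent=2))
```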

Compliance should not be treated as a static checklist. The future of responsible AI lies in continuous monitoring, automated governance, and embedding compliance into MLOps pipelines.

Conclusions

AI standards and regulations are converging to create a new compliance landscape.

ISO/IEC 22989 provides the vocabulary, ISO/IEC 42001 offers governance, and the EU AI Act enforces legal obligations.

Organizations that align with all three will not only reduce risk but also strengthen trust in their AI systems. The next level of compliance means going beyond certification—building AI practices that are transparent, secure, and continuously monitored.

The EU provides a strong, comprehensive, binding regulatory framework for AI with clear risk categories, prohibited uses, and enforcement.

The U.S. currently relies more on existing laws, executive orders, and sectoral regulation, giving more flexibility but less predictability.

For global players, achieving dual compliance is increasingly necessary. The trend suggests U.S. regulation will become stronger over time, potentially drawing from EU models.

 
