EU AI Act, NIST AI RMF and ISO/IEC 42001: A Plain English Comparison
- Ken Huang
- Ethical Hacking
Security practitioners, governance/risk/compliance leaders, internal auditors, risk managers, and executives need to navigate a thicket of emerging artificial intelligence (AI) regulations and standards. The European Union’s Artificial Intelligence Act (EU AI Act), the U.S. National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF), and the international standard ISO/IEC 42001 all aim to increase trust in AI systems, but they differ in scope, enforcement, and obligations. Understanding their intersections and differences helps teams build programs that satisfy multiple frameworks without duplicating effort.
Why This Matters Now
The AI ecosystem is moving from voluntary guidelines to enforceable obligations. The EU AI Act, adopted in 2024 and supplemented by Commission guidance in 2025, introduces a risk-based regulatory regime. Providers of general-purpose AI (GPAI) models must comply with obligations effective August 2, 2025; enforcement by the European Commission begins on August 2, 2026, and models already on the market before August 2, 2025 must comply by August 2, 2027 (European Commission, 2025). High-risk AI applications (for example, credit scoring, HR screening, or critical infrastructure) face stringent requirements for data, governance, and transparency, while “unacceptable risk” uses (such as social scoring) are banned (European Commission, n.d.). At the same time, many organizations are using NIST’s voluntary AI RMF to structure risk management programs and adopting ISO/IEC 42001, published in 2023 as the first global standard for an AI management system. Regulatory coverage remains a patchwork, but the direction is clear: AI risk management is becoming mandatory and auditable.
Comparing Scope and Intent
EU AI Act
The EU AI Act is a regulation (law) with extraterritorial reach. Its risk-based taxonomy divides applications into four main categories (European Commission, n.d.):
- Unacceptable Risk: Practices that are prohibited outright (e.g., social scoring or real-time biometric identification for law enforcement except in narrow circumstances).
- High Risk: Applications that significantly affect individuals’ safety or fundamental rights, such as medical devices, employment screening, credit scoring, and essential public services. Providers and users of high-risk AI must implement a management system covering data governance, documentation, transparency, human oversight, and post-market monitoring. A “conformity assessment” must be performed before placing high-risk AI on the market.
- Limited Risk: Systems that interact with people or generate synthetic content; these are subject to transparency obligations (e.g., disclosing chatbots or labeling deepfakes).
- Minimal Risk: Most AI systems currently in use in the EU, such as spam filters; systems deemed no risk or minimal risk are not subject to any rules under the EU AI Act.
GPAI models, including large language models, receive special attention. Providers must maintain technical documentation, publish a summary of training data, implement reasonable policies to address risks, and notify the new EU AI Office if a model poses systemic risks. GPAI models released under free and open-source licenses are exempt from some obligations, but those that pose systemic risk must still meet safety, security, and incident-reporting requirements (European Commission, 2025).
NIST AI RMF
The AI RMF is a voluntary framework developed by the U.S. NIST in close consultation with industry and civil society. It does not carry the force of law, but regulators and standards bodies reference it as a baseline. The AI RMF defines risk as the likelihood and magnitude of harm from an AI system and encourages organizations to manage negative impacts while maximizing benefits (NIST, 2023). The framework’s core comprises four functions:
- Govern: Establish an organizational environment that cultivates responsible AI. Governance applies across the AI lifecycle and includes policies, accountability structures, and continuous improvement.
- Map: Understand the context and the AI system. Mapping covers stakeholder needs, intended purpose, societal impacts, and system limitations.
- Measure: Analyze and monitor AI risks and benefits. This includes measuring model performance, uncertainty, bias, and other relevant attributes.
- Manage: Prioritize and respond to risks. Organizations integrate risk responses into their workflows and decision-making processes.
NIST emphasizes that the AI RMF is flexible, sector-agnostic, and can be tailored to organizations of different sizes and maturity levels. It complements, rather than replaces, legal obligations such as the EU AI Act.
ISO/IEC 42001:2023
ISO/IEC 42001 is the first global standard for an AI Management System (AIMS). It provides requirements and guidance for establishing, implementing, maintaining, and continually improving an AIMS within organizations of any size (ISO, 2023). ISO/IEC 42001 is structured similarly to other ISO management standards (e.g., ISO/IEC 27001 for information security), focusing on responsible AI governance and continuous improvement. Key requirements include:
- Leadership and Organizational Context: Senior management must define the scope of the AIMS and demonstrate commitment.
- AI Policy and Objectives: Organizations should articulate a policy aligned with applicable legal and ethical principles and set measurable objectives.
- Risk Management: Risk assessments must cover the AI lifecycle, identify hazards, and implement controls to mitigate risks while fostering innovation.
- Data Governance and Lifecycle Controls: Policies should ensure data quality, privacy, protection of intellectual property, and respect for licenses.
- Transparency and Accountability: Documentation and communication should promote explainability and human oversight.
- Performance Evaluation and Continual Improvement: Organizations must monitor, measure, and improve the AIMS over time.
ISO/IEC 42001 thus translates AI governance principles into a certifiable management system. Certification, however, depends on auditors and certification bodies that meet the separate standard BS ISO/IEC 42006:2025, which aims to ensure that AI auditors are qualified and consistent (BSI, 2025).
Strengths and Limitations
| Framework | Nature | Strengths | Limitations |
|---|---|---|---|
| EU AI Act | Binding legislation | Provides legal clarity and enforceable obligations; risk-based approach; dedicated authority (AI Office) | Compliance costs can be significant; definitions and scope may evolve; extraterritorial reach may conflict with other jurisdictions |
| NIST AI RMF | Voluntary guideline | Flexible, sector-agnostic; focuses on risk management and trustworthiness; widely referenced by regulators and industry | Nonbinding; lacks specific enforcement mechanisms; may require mapping to sector regulations |
| ISO/IEC 42001 | Certifiable standard | Provides a structured management system; integrates with other ISO standards; emphasizes continual improvement | Implementation effort may be high; the certification ecosystem is still maturing; it is not yet mandated by law |
Mapping Frameworks: Toward Integrated Compliance
Organizations often need to satisfy multiple frameworks simultaneously. Crosswalks can help align requirements. For example, a mapping template that connects ISO/IEC 42001 controls to NIST AI RMF functions can ensure that key controls are not overlooked. A 2025 industry article notes that automated crosswalks simplify evidence collection and ensure critical controls aren’t missed (Sethupathy, 2025). An effective mapping exercise involves:
- Identify Applicable Controls: List ISO/IEC 42001 clauses relevant to your AI system (e.g., risk assessment, data governance, transparency).
- Map to AI RMF Functions: Assign each clause to the corresponding AI RMF function (Govern, Map, Measure, Manage). For example, ISO/IEC 42001’s requirement to maintain an inventory of AI systems supports the AI RMF’s Map function.
- Add EU AI Act Requirements: Annotate where the EU AI Act imposes additional obligations. For a high-risk credit-scoring model, you must perform a conformity assessment and ensure human oversight; these align with the AI RMF’s Measure and Manage functions.
- Determine Gaps and Overlaps: Identify controls that satisfy multiple frameworks and note any gaps requiring new policies or processes.
Below is a simplified crosswalk checklist illustrating how a high-risk AI use case might align across frameworks. The checklist is a starting point and should be adapted to your organization’s specific context.
| Control/Requirement | EU AI Act | NIST AI RMF | ISO/IEC 42001 |
|---|---|---|---|
| Maintain AI inventory and register | High-risk systems must be registered; GPAI providers must notify the AI Office | Map (identify AI systems and context) | 8.4: maintain an inventory of AI systems and their purposes |
| Data governance and quality | High-risk models must use high-quality, relevant data; log and store data securely | Measure (assess data quality and bias) | 8.5: define data acquisition, labelling, and quality controls; 8.6: protect data and respect licenses |
| Human oversight | Mandatory for high-risk AI; ensure humans can override decisions | Manage (decide risk responses) | 8.3: define roles and responsibilities; 8.7: implement human-in-the-loop controls |
| Risk assessment and impact analysis | Conformity assessment required for high-risk systems | Govern, Measure | 8.2: identify, analyze, and evaluate risks; 9.1: plan for continual improvement |
| Transparency and user information | Provide information to users about the functionality, limitations, and purpose of AI | Map, Manage | 8.8: ensure explainability; 8.9: communicate transparently to stakeholders |
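To make gap checks repeatable, the checklist above can be kept as a machine-readable register. The sketch below is a minimal Python illustration, not a normative mapping: the control names are invented for this example, and the clause numbers simply mirror the table, so verify them against your copy of ISO/IEC 42001 before relying on them.

```python
# Minimal sketch of the crosswalk checklist as a machine-readable register.
# Control names are illustrative; clause numbers mirror the table above and
# should be verified against the published standard.

CROSSWALK = {
    "ai_inventory": {
        "eu_ai_act": "Register high-risk systems; GPAI providers notify the AI Office",
        "nist_ai_rmf": ["Map"],
        "iso_42001": ["8.4"],
    },
    "data_governance": {
        "eu_ai_act": "High-quality, relevant data; secure logging and storage",
        "nist_ai_rmf": ["Measure"],
        "iso_42001": ["8.5", "8.6"],
    },
    "human_oversight": {
        "eu_ai_act": "Mandatory for high-risk AI; humans can override decisions",
        "nist_ai_rmf": ["Manage"],
        "iso_42001": ["8.3", "8.7"],
    },
    "risk_assessment": {
        "eu_ai_act": "Conformity assessment required for high-risk systems",
        "nist_ai_rmf": ["Govern", "Measure"],
        "iso_42001": ["8.2", "9.1"],
    },
    "transparency": {
        "eu_ai_act": "Inform users of functionality, limitations, and purpose",
        "nist_ai_rmf": ["Map", "Manage"],
        "iso_42001": ["8.8", "8.9"],
    },
}

def gaps(crosswalk: dict, framework: str) -> list:
    """Return controls with no mapping recorded for the given framework."""
    return [name for name, row in crosswalk.items() if not row.get(framework)]
```

Querying a framework column that has not been filled in (for example, an internal policy set) returns every control, which is exactly the gap list called for in the mapping exercise.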
Practical Example: Building a Cross-Framework Compliance Plan
Imagine your organization develops a machine learning model to recommend credit limits for small business applicants in the EU. The model likely falls under the EU AI Act’s high-risk category because it influences access to credit. A cross-framework plan might include:
- Inventory and Classification: List the model, its purpose, inputs, and outputs; classify it as high risk.
- Risk Assessment: Perform a bias assessment and evaluate potential harms; document the likelihood and magnitude of impacts in line with NIST’s Measure function.
- Data Governance: Verify the legitimacy and quality of training data; document sources; ensure compliance with data protection laws.
- Governance Structure: Appoint an AI governance lead; develop a cross-functional oversight committee; align policies with ISO/IEC 42001’s leadership requirements.
- Conformity Assessment: Compile technical documentation, risk management measures, and human oversight procedures; engage notified bodies or internal auditors qualified under BS ISO/IEC 42006:2025.
- Continual Improvement: Monitor model performance; conduct post-market surveillance; update the AIMS and risk assessments regularly.
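The inventory-and-classification step can be prototyped as a simple triage function. This is a toy sketch only: the category names follow the EU AI Act, but the use-case lists and decision order are illustrative assumptions, and an actual classification requires legal review.

```python
# Toy triage helper for the inventory-and-classification step.
# Category names follow the EU AI Act; the use-case sets and decision
# order are illustrative assumptions, not legal determinations.

PROHIBITED_USES = {"social_scoring"}
HIGH_RISK_USES = {"credit_scoring", "employment_screening",
                  "medical_device", "critical_infrastructure"}

def triage(use_case: str, user_facing: bool = False) -> str:
    """Return a first-pass EU AI Act risk category for a use case."""
    if use_case in PROHIBITED_USES:
        return "unacceptable"
    if use_case in HIGH_RISK_USES:
        return "high"
    if user_facing:
        return "limited"   # transparency obligations apply
    return "minimal"
```

For the credit-limit model above, `triage("credit_scoring")` lands in the high-risk bucket, triggering the conformity assessment and human oversight steps in the plan.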
Next Steps
- Perform Gap Analysis: Evaluate your existing AI policies against the EU AI Act, NIST AI RMF, and ISO/IEC 42001. Identify overlapping controls, any gaps, and unique obligations.
- Develop a Cross-Framework Register: Document each AI system, its risk category, applicable regulations, and mapped controls. Use automation tools where feasible to maintain evidence and facilitate audits.
- Engage Stakeholders: Involve legal counsel, risk management, technical teams, and end users in the governance process. Clear roles and accountability are essential.
- Stay Current: Regulators continue to refine guidelines. Monitor updates to the EU AI Act (delegated acts, codes of practice), NIST’s AI RMF resources, and ISO/IEC 42001 revisions to ensure sustained compliance.
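A cross-framework register can start as a small data structure before graduating to GRC tooling. The sketch below uses hypothetical field names; the review-date check supports the "stay current" step by flagging entries that are due for reassessment.

```python
# Sketch of a cross-framework register entry with review tracking.
# Field names are illustrative; adapt them to your GRC tooling.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class RegisterEntry:
    system: str
    risk_category: str                  # e.g., "high" under the EU AI Act
    mapped_controls: dict = field(default_factory=dict)
    next_review: date = date.max        # when the entry must be revisited

def overdue(entries, today: date) -> list:
    """Return systems whose scheduled review date has passed."""
    return [e.system for e in entries if e.next_review < today]
```

Running the overdue check on a schedule (or in CI) keeps the register from silently going stale between audits.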
References
BSI. (2025, July 21). BSI publishes standard to ensure quality among growing AI audit market. https://www.bsigroup.com/en-GB/insights-and-media/media-centre/press-releases/2025/july/bsi-publishes-standard-to-ensure-quality-among-growing-ai-audit-market/
European Commission. (n.d.). AI Act. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai#1720699867912-0
European Commission. (2025, August 2). Guidelines on the scope of obligations for providers of general-purpose AI models under the AI Act. https://digital-strategy.ec.europa.eu/en/library/guidelines-scope-obligations-providers-general-purpose-ai-models-under-ai-act
ISO. (2023, December). ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system. https://www.iso.org/standard/42001
NIST. (2023, January 26). NIST AI 100-1: Artificial Intelligence Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology, U.S. Department of Commerce. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
Sethupathy, G. (2025, October 3). Integrating the NIST AI RMF and ISO 42001: A Practical Guide. FairNow. https://fairnow.ai/map-nist-ai-rmf-iso-42001/
About the Author
Ken Huang
Ken Huang is a leading author and expert in AI applications and agentic AI security, serving as CEO and chief AI officer at DistributedApps.ai. He is co-chair of AI safety groups at the Cloud Security Alliance and the OWASP AIVSS project, and co-chair of the AI STR Working Group at the World Digital Technology Academy. He is an EC-Council instructor and an adjunct professor at the University of San Francisco, where he teaches GenAI security and agentic AI security for data scientists. He coauthored OWASP’s Top 10 for LLM Applications and contributes to the NIST Generative AI Public Working Group. His books are published by Springer, Cambridge, Wiley, Packt, and China Machine Press, including Generative AI Security, Agentic AI Theories and Practices, Beyond AI, and Securing AI Agents. A frequent global speaker, he engages at major technology and policy forums.

