Welcome to the AI Standards Revolution
New ISO/EU Frameworks to transform Risk Management & Compliance
The AI landscape is evolving at an unprecedented pace, with advanced models like Claude 4 Opus demonstrating unprecedented capabilities in AI reasoning, coding and problem-solving, while agentic AI systems which are designed to operate autonomously, are multiplying in scale and autonomy at (seemingly!) lightning speed.
This game changing flow of next gen technologies is democratising AI deployment across industries, but as AI systems become more autonomous, capable, and pervasive, a critical question emerges: how do we truly understand, test, and manage the multifaceted risks that increasingly sophisticated AI systems expose individuals and organisations to?
An answer lies in the growing collection of international standards that prioritise systematic testing, evaluation, and assurance of AI systems—providing auditable frameworks for governing and managing AI risks across entire lifecycles.
ISO 42001: World’s First AI Management Standard
The International Organisation for Standardisation (ISO) responded to the urgent need for AI risk understanding in 2023 with an integrated approach that places testing, evaluation, and assurance of organisational AI systems at its core. The ISO 42001 provides the foundational framework for AI management systems, creating organisational governance structures for the first time that mandates systematic risk assessment and continuous evaluation throughout the AI lifecycle.
Building Blocks
Building on this foundation, the recently released ISO 42005 and ISO 42006, delineate additional requirements for bodies like Advai that audit and certify artificial intelligence management systems (AIMS) to ensure rigorous system-specific testing and evaluation. ISO 42005, published in May, offers the first international standard specifically for AIMS, enabling systematic identification and evaluation of risks across ethical, legal, and social dimensions. Crucially, it requires organisations to understand not just what their AI systems do, but what types of harm they might cause to individuals and society. Meanwhile, ISO 42006 released this month, ensures these AIMS can be independently audited and verified by accredited certification bodies—providing the external assurance that stakeholders need.
Red Teaming
Another welcome update is the new working group and ISO draft framework for AI security, (ISO/IEC AWI TS 42119-7), aiming to introduce a comprehensive methodology for expert teams identifying risks and testing systems, commonly known as red teaming. This exciting new standard can build on previous AI standards to provide a set of technology-agnostic guidance for conducting red teaming assessments, covering identification of risks, attack vectors, and methodologies for planning and executing adversarial testing on AI systems.
Why will this be important?
Because it will provide a standard framework for organisations to understand their AI systems' vulnerabilities and failure modes, which will inform how they manage risk.
Simplifying Compliance: EU AI Act
These testing and assurance standards are critical for regulatory frameworks worldwide. The EU AI Act, initially perceived as complex, is undergoing significant simplification efforts, which is welcome news.
This simplification strategy is key to the EU's broader AI Master Plan, which explicitly aims to simplify regulations to enable the advancement of homegrown AI, lessening reporting requirements and administrative burdens for companies. The EU Commission has launched public consultations seeking additional measures to ensure the simple application of the AI Act, demonstrating a genuine commitment to reducing regulatory complexity while maintaining robust AI governance.
The emphasis shifts from compliance paperwork to genuine risk understanding and mitigation. This should continue to form the core of the regulatory approach as frameworks evolve.
FCA & NVIDIA’s ‘Supercharged Sandbox’
The practical value of systematic AI testing and evaluation is exemplified by the Financial Conduct Authority's (FCA) groundbreaking initiative announced this week.
On June 9, the FCA partnered with NVIDIA to create a "Supercharged Sandbox" within their AI Lab - demonstrating how regulators can facilitate comprehensive AI system evaluation in controlled environments, previously a key impediment for AI take up for financial services firms.
This world-first collaboration, unveiled during London Tech Week, provides financial services firms with NVIDIA's full-stack accelerated computing platform and AI Enterprise Software from October 2025. Crucially, the sandbox environment allows firms to evaluate AI risks under real time conditions with access to better datasets, technical expertise, and regulatory support—while gathering the evidence regulators need to understand potential impacts on consumers and markets.
The FCA's approach exemplifies how the new ISO standards can be operationalised in practice. Firms can conduct comprehensive impact assessments, perform systematic testing protocols, and build auditable evidence trails—all within a regulatory framework designed to support rather than hinder innovation.
Goodbye Risk: Hello Assurance
These breakthrough regulatory developments address the most significant challenge in AI adoption: creating auditable evidence that AI systems are thoroughly understood, risk tested, and appropriately controlled. The new ISO standards provide systematic methodologies for:
- Comprehensive risk identification: Understanding the full spectrum of potential harms AI systems might cause to individuals, organisations, and society
- Systematic testing protocols: Implementing structured evaluation processes, including adversarial red teaming approaches as outlined in the ISO/IEC AWI TS 42119-7 standard
- Auditable documentation: Creating evidence trails that demonstrate not just what testing was done, but how risks were identified, evaluated, and mitigated
- Continuous assurance: Establishing ongoing monitoring and evaluation processes that adapt as AI systems evolve and new risks emerge
- Independent verification: Enabling third-party auditors to assess and validate an organisation's AI risk management practices
This shift from compliance theatre to genuine assurance represents a fundamental evolution in how we approach AI governance..
AI Testing a Competitive Advantage
The convergence of advanced AI capabilities with sophisticated testing and assurance frameworks marks a turning point in technology adoption. Organisations can now deploy AI systems with confidence, knowing they have systematic approaches to understand, evaluate, and manage the risks these systems present to individuals and society.
As these standards mature and regulatory frameworks continue to evolve, we're moving toward a future where comprehensive AI testing and evaluation aren't compliance burdens—they're competitive advantages. Organisations that can demonstrate thorough understanding of their AI systems' capabilities and limitations, backed by rigorous testing and independent assurance, will earn the trust of customers, regulators, and stakeholders.
The AI revolution is here. The testing and assurance revolution is ensuring it's trustworthy.
At Advai, we’re experts in streamlining AI adoption by identifying and mitigating points of AI failure, our clients include the Ministry of Defence, and major private sector companies. Contact us if you want a demo or to hear more about what we do.
Referenced Articles and URLs:
ISO/IEC AWI TS 42119-7 - Artificial intelligence — Testing of AI — Part 7: Red teaming
EU could postpone flagship AI rules, tech chief says – POLITICO
Simplification and gigafactories: what's in the EU's new AI master plan? - Euractiv
https://sifted.eu/articles/eu-ai-act-pause-analysis
https://www.fca.org.uk/news/press-releases/fca-allows-firms-experiment-ai-alongside-nvidia