ISO 42001 Explained: The AI Standard You Will Be Asked About
ISO 42001 is the first international standard for managing AI systems. It was published in December 2023, and adoption has accelerated through 2025 as enterprises have started adding it to their vendor security questionnaires. If you sell software with AI in it, the question 'are you ISO 42001 certified or working towards it?' will be on a procurement form within twelve months.
What ISO 42001 actually covers
It is a management system standard — meaning it specifies how you govern AI across its lifecycle, not what your AI must do. If you have seen ISO 27001 (information security) or ISO 9001 (quality), the structure will be familiar: a policy, defined responsibilities, risk assessment, controls, monitoring, internal audit, management review, continual improvement.
- AI policy and objectives signed off by leadership
- AI risk assessment methodology and a live risk register
- Annex A controls covering data, transparency, human oversight, and lifecycle management
- Impact assessments for systems affecting individuals or groups
- Supplier and third-party AI management
- Incident response and reporting for AI failures
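Of the artifacts above, the live risk register is the one most teams get wrong — it ends up as a stale spreadsheet. A minimal sketch of a structured register in Python; the field names and 1–5 scoring are illustrative choices, not something the standard mandates:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIRisk:
    """One row of a live AI risk register (fields are illustrative)."""
    risk_id: str
    system: str        # which AI system the risk relates to
    description: str
    likelihood: int    # e.g. 1 (rare) .. 5 (almost certain)
    impact: int        # e.g. 1 (negligible) .. 5 (severe)
    owner: str
    treatment: str     # accept / mitigate / transfer / avoid
    review_date: date  # keeps the register "live", not a one-off

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring; use whatever
        # methodology your risk assessment actually defines.
        return self.likelihood * self.impact

register = [
    AIRisk("R-001", "support-chatbot",
           "Model states an incorrect refund policy to customers",
           likelihood=3, impact=4, owner="Head of Product",
           treatment="mitigate", review_date=date(2026, 1, 15)),
]

# Highest-scoring risks first, ready for the management review.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(risk.risk_id, risk.system, risk.score)
```

The point is not the tooling — a spreadsheet passes audit — but that every risk has an owner, a treatment decision, and a review date an auditor can check.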
Why enterprises are asking for it
Three pressures are converging. The EU AI Act creates legal exposure for enterprises that deploy non-compliant AI from suppliers. NIST's AI Risk Management Framework is becoming the US baseline. And boards are asking CIOs to evidence AI governance after a string of public AI failures. ISO 42001 is the cleanest single answer to all three: an internationally recognised, independently audited certification.
The certification path
- Step 1 — gap analysis against the standard (1–2 weeks)
- Step 2 — implementation: write the AI policy, build the risk register, define controls (8–16 weeks for a small team)
- Step 3 — operate the management system for at least 3 months to generate audit evidence
- Step 4 — Stage 1 audit (documentation review) by an accredited certification body
- Step 5 — Stage 2 audit (operational verification) — pass and you are certified for 3 years
- Ongoing — annual surveillance audits and a full recertification at year 3
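Laid end to end, the steps above give a feel for the overall timeline. A rough sketch — the first three durations are the ranges quoted in this article, while the audit durations are my assumption of 1–2 weeks each:

```python
# Rough end-to-end timeline for the certification path.
# (name, best-case weeks, worst-case weeks)
steps = [
    ("Gap analysis", 1, 2),
    ("Implementation", 8, 16),
    ("Operate the management system", 12, 12),  # ~3 months of evidence
    ("Stage 1 audit (documentation review)", 1, 2),     # assumed
    ("Stage 2 audit (operational verification)", 1, 2), # assumed
]

best = sum(lo for _, lo, _ in steps)    # 23 weeks
worst = sum(hi for _, _, hi in steps)   # 34 weeks
print(f"Kick-off to certificate: {best}-{worst} weeks "
      f"(roughly {best // 4}-{worst // 4} months)")
```

That lands close to the 6–9 month figure below; scheduling lead time with the certification body is the usual source of extra slip.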
Cost and effort for an SME
Internal effort. Expect 0.5–1 FTE for 4 months during implementation, then ~0.2 FTE ongoing. A small product team can do this without hiring a full-time AI risk officer.
External cost. Audit fees from an accredited body typically run £8k–£20k for an SME, depending on scope. Consultancy support varies wildly — be wary of any quote that does not include a templated risk register and Annex A controls library.
Timeline. 6–9 months from kick-off to certificate is realistic. Faster is possible if you already hold ISO 27001, because the management system structure overlaps significantly.
Should you get certified now?
Get certified now if you sell into regulated industries, public sector, or any enterprise with a mature procurement function. Start the implementation now even if you do not certify yet — the discipline of writing the policy, doing the risk assessment, and defining human oversight will sharpen your product. Wait if your buyer is consumer or SMB and AI is incidental to your value proposition.
Hypergility is ISO 42001 certified — the AI we build into client products is governed to the standard enterprise buyers are starting to demand. We do not sell AI Act compliance as a service; we build AI products that meet it. If that is what you need, talk to us.
Hypergility also helps clients through gap analysis and implementation. If you want to know whether the standard is right for your stage, book a call.
Talk to Hypergility