
    Board-Level AI Governance: 12 Questions Non-Execs Must Ask

    Tuli Faas May 9, 2026

    Director duties are catching up with AI. The FRC, the ICO, the FCA and the EU AI Act each push board accountability for AI use up the agenda. Non-execs are being asked, in interviews and reviews, what they have done to oversee AI risk. 'The exec team has it covered' is no longer a sufficient answer.

    The 12 questions every board should ask

    • Which decisions in our business are now influenced by AI, and at what scale?
    • Which of those decisions are high-risk under the EU AI Act, FCA Consumer Duty, or our own ethics policy?
    • Who in the executive team owns AI risk, and to whom does it escalate?
    • Do we have an AI policy and a live AI risk register? When were they last reviewed?
    • Which third-party AI providers do we depend on, and what is our concentration risk?
    • Where is human oversight applied, and is it real or rubber-stamping?
    • How do customers, employees and the public know when AI is involved?
    • What is the worst-case AI incident that could happen to us, and what is our response plan?
    • How would we know if a model was producing biased or wrong outputs in production?
    • Are we using customer or employee data to train models, and is that disclosed and lawful?
    • What standards (ISO 42001, NIST AI RMF) are we aligning to, and what evidence supports that claim?
    • What is our disclosure obligation to investors, regulators and customers when something goes wrong?

    The audit trail executives need to produce

    AI policy and risk register. Live, version-controlled, owned by a named executive, reviewed at least annually by the board.

    AI inventory. A current list of every AI system in use, with risk classification.

    Impact assessments. For high-risk systems, a documented assessment covering affected stakeholders, intended outcomes, mitigations and monitoring.

    Incident log. Every AI incident captured, investigated and closed with a documented lesson. Reviewed at audit committee.

    Independent assurance. For material AI use, second- or third-line review. ISO 42001 certification, internal audit reviews, external assurance reports — whichever is proportionate to the risk.

    Where boards typically fall short

    Three patterns recur. AI risk is delegated to a single CTO with no peer challenge. The board sees only good-news case studies, not the risk register. AI is treated as an IT topic when it is also a customer, regulatory and reputational topic.

    Director liability is sharpening

    Under existing UK director duties (Companies Act 2006, ss. 172 and 174), a failure to take reasonable care in overseeing a foreseeable risk is already actionable. Once AI causes a public failure at a peer organisation, the foreseeability bar drops fast.

    Hypergility is ISO 42001 certified, meaning the AI we build into client products carries exactly the audit trail boards now need.

    Talk to Hypergility

    That need is exactly what we built the Hypergility AI companion for: infused with the domain expertise of our team and our innovation hub partners, it is designed to challenge your thinking rather than rubber-stamp it. We also take clients through ISO 42001 gap analysis and implementation. If you want to know whether the standard is right for your stage, book a call.

    We Are Certified

    ISO 9001 (Quality Management)
    ISO 27001 (Information Security)
    ISO 42001 (AI Management System)
    Cyber Essentials (UK Cyber Security)
