
I had the privilege of sharing my views on AI risk at the AI Security Summit, where senior leaders and practitioners came together to translate high-level anxiety into practical guardrails. This post distils the key themes and real-world strategies into a short playbook.
AI risk may not land as a single catastrophic event. It accumulates from many small, avoidable failures:
- Shadow AI and AI agents: unsanctioned models and bots that teams or individuals run outside any governance process (see the detection sketch after this list).
- Identity and access gaps: excessive or unmanaged access to models and data expands the attack surface.
- Model bias and data issues: poor-quality training data resurfaces as reputational, legal and fairness risk.
- Supply chain weaknesses: vulnerable or poorly licensed open-source components and third-party models.
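
One lightweight way to start surfacing shadow AI is to scan dependency manifests for known LLM SDKs. A minimal sketch, assuming Python repos with requirements files; the package list is illustrative and deliberately incomplete:

```python
import pathlib

# Illustrative, incomplete list of packages that suggest direct LLM API usage.
# Extend with whatever your organisation actually needs to track.
LLM_PACKAGES = {"openai", "anthropic", "google-generativeai", "cohere", "litellm"}

def find_llm_dependencies(repo_root: str) -> dict[str, set[str]]:
    """Scan requirements*.txt files under repo_root for known LLM SDKs."""
    hits: dict[str, set[str]] = {}
    for req in pathlib.Path(repo_root).rglob("requirements*.txt"):
        found = set()
        for line in req.read_text().splitlines():
            # Take the bare package name, ignoring version pins and comments.
            name = line.split("#")[0].split("==")[0].split(">=")[0].strip().lower()
            if name in LLM_PACKAGES:
                found.add(name)
        if found:
            hits[str(req)] = found
    return hits

if __name__ == "__main__":
    for path, packages in find_llm_dependencies(".").items():
        print(f"{path}: {', '.join(sorted(packages))}")
```

This only catches one channel (declared Python dependencies); pairing it with egress monitoring for known model API endpoints covers teams that call providers directly.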
Practical governance tips
Move beyond policy documents to practical controls:
- AI-aware CI/CD: enforce evaluations (privacy, fairness, robustness) as pipeline gates, not optional checks (see the gate sketch after this list).
- Cross-functional guardrails: embed legal, security and product teams in deployment decisions.
- Agent threat modelling: map threats across data, model, deployment and supply chain before anything reaches production.
- Continuous runtime monitoring: detect drift, unusual inputs and exfiltration patterns with anomaly detection and explainability signals (drift sketch below).
- Defence-in-depth: combine input sanitisation, access control and detection rather than relying on a single countermeasure (layered-check sketch below).
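
To make the first bullet concrete, here is a minimal sketch of an evaluation gate: a script the pipeline runs after tests, failing the build when any check misses its threshold. The check names, thresholds and stubbed scores are illustrative assumptions; wire in whatever evaluation harness you actually use:

```python
import sys

# Illustrative gates: name -> (threshold, higher_is_better).
# Swap in the metrics your evaluation harness actually produces.
GATES = {
    "privacy_leak_rate": (0.01, False),  # at most 1% of probes leak PII
    "fairness_gap":      (0.05, False),  # max acceptable subgroup gap
    "robustness_score":  (0.90, True),   # min score under perturbed inputs
}

def run_check(name: str) -> float:
    """Stub standing in for a real evaluation harness call."""
    stub_results = {
        "privacy_leak_rate": 0.004,
        "fairness_gap": 0.07,       # deliberately failing, to show the gate
        "robustness_score": 0.93,
    }
    return stub_results[name]

def main() -> int:
    failed = False
    for name, (threshold, higher_is_better) in GATES.items():
        score = run_check(name)
        ok = score >= threshold if higher_is_better else score <= threshold
        print(f"{'PASS' if ok else 'FAIL'} {name}: {score:.3f} (threshold {threshold})")
        failed = failed or not ok
    return 1 if failed else 0  # a non-zero exit code fails the pipeline stage

if __name__ == "__main__":
    sys.exit(main())
```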
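
For the runtime-monitoring bullet, a simple starting point is statistical drift detection on input features, comparing live traffic against a reference window. This sketch uses a population stability index (PSI); the bucket count and the 0.2 alert threshold are common rules of thumb, not universal constants:

```python
import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, buckets: int = 10) -> float:
    """Population stability index between a reference and a live sample."""
    # Bucket edges come from the reference distribution's quantiles.
    edges = np.quantile(reference, np.linspace(0, 1, buckets + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    live_frac = np.histogram(live, bins=edges)[0] / len(live)
    # A small floor avoids log-of-zero for empty buckets.
    eps = 1e-6
    ref_frac = np.clip(ref_frac, eps, None)
    live_frac = np.clip(live_frac, eps, None)
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

rng = np.random.default_rng(0)
reference = rng.normal(0, 1, 10_000)  # e.g. prompt lengths at launch
live = rng.normal(0.5, 1.2, 10_000)   # live traffic has shifted
score = psi(reference, live)
print(f"PSI = {score:.3f} -> {'ALERT' if score > 0.2 else 'ok'}")
```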
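
And for the defence-in-depth bullet, the point is structural: each request passes through several independent layers, so bypassing one control is not a full compromise. A minimal sketch; the blocked patterns, scope name and anomaly heuristic are hypothetical placeholders:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    scopes: set[str]
    prompt: str

# Illustrative deny-list; real sanitisation needs far more than substrings.
BLOCKED_PATTERNS = ("ignore previous instructions", "system prompt")

def sanitise_input(req: Request) -> bool:
    """Layer 1: reject inputs matching known prompt-injection patterns."""
    return not any(p in req.prompt.lower() for p in BLOCKED_PATTERNS)

def check_access(req: Request) -> bool:
    """Layer 2: require an explicit scope for model access."""
    return "model:invoke" in req.scopes

def looks_anomalous(req: Request) -> bool:
    """Layer 3: crude anomaly signal; swap in a real detector."""
    return len(req.prompt) > 20_000

def handle(req: Request) -> str:
    if not sanitise_input(req):
        return "rejected: input failed sanitisation"
    if not check_access(req):
        return "rejected: missing model:invoke scope"
    if looks_anomalous(req):
        return "flagged: routed to review queue"
    return "accepted"

print(handle(Request("alice", {"model:invoke"}, "Summarise this report.")))
print(handle(Request("bob", set(), "Summarise this report.")))
```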
Balancing speed and safety
Innovation and control aren’t mutually exclusive, but the trade-offs have to be made explicit up front:
- Automate safe-by-default pipelines so developers can move fast without skipping checks.
- Use policy-as-code to keep governance scalable (see the sketch after this list).
- Apply incremental rollouts with observability and rollback plans, using feature flags and canary deployments (canary sketch below).
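
Policy-as-code deserves a concrete shape. Dedicated engines such as Open Policy Agent are a common choice; the sketch below shows the same idea in plain Python, checking a deployment manifest against declarative rules. The rule set and manifest fields are assumptions for illustration:

```python
# Hypothetical deployment manifest, e.g. parsed from a YAML file in CI.
manifest = {
    "model": "support-bot-v3",
    "pii_training_data": False,
    "eval_suite_passed": True,
    "human_review_signoff": True,
    "rollout": "canary",
}

# Each rule: (description, predicate over the manifest).
POLICIES = [
    ("no PII in training data", lambda m: not m["pii_training_data"]),
    ("evaluation suite must pass", lambda m: m["eval_suite_passed"]),
    ("human sign-off required", lambda m: m["human_review_signoff"]),
    ("initial rollout must be a canary", lambda m: m["rollout"] == "canary"),
]

violations = [desc for desc, rule in POLICIES if not rule(manifest)]
if violations:
    raise SystemExit("policy violations: " + "; ".join(violations))
print("all deployment policies satisfied")
```

Because the rules live in version control alongside the code, changes to governance go through the same review process as changes to the product.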
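
For incremental rollouts, the core loop is: shift a small slice of traffic, watch the metrics, then widen or roll back. A minimal sketch; the error_rate stub and the 1% budget stand in for your real observability signals and thresholds:

```python
import random

ROLLOUT_STEPS = [0.01, 0.05, 0.25, 1.0]  # fraction of traffic on the new model
ERROR_BUDGET = 0.01                       # roll back above a 1% error rate

def error_rate(traffic_fraction: float) -> float:
    """Stand-in for a real observability query (e.g. errors over the last N minutes)."""
    return random.uniform(0, 0.02)

def rollout() -> bool:
    for fraction in ROLLOUT_STEPS:
        print(f"shifting {fraction:.0%} of traffic to the new model")
        observed = error_rate(fraction)
        if observed > ERROR_BUDGET:
            print(f"error rate {observed:.2%} exceeds budget; rolling back")
            return False  # a feature flag flips all traffic back to the old model
        print(f"error rate {observed:.2%} within budget; continuing")
    return True

if __name__ == "__main__":
    random.seed(1)
    print("rollout", "completed" if rollout() else "aborted")
```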
