AI and Cyber for Board Directors

It was good to attend the Essential Director Update – a timely reminder that good governance now requires foresight as well as oversight.

Staying at the forefront of contemporary governance demands competency in AI and cybersecurity.

My key takeaways for boards and executives:
☑️ Data is the fuel: protect data integrity (accurate, consistent, timely) and focus governance where it creates the most value.
☑️ AI is everywhere, no longer just an IT challenge: adopt a human-centred approach, define guardrails around intent, and factor legal and ethical considerations into every deployment.
☑️ Balance innovation with risk: prioritise highest-value use cases, automate safety controls where possible, but don’t outsource accountability.
☑️ Cybersecurity must be risk-based: know your crown jewels, expect incidents, build crisis response plans and regularly test your defences.
☑️ People first: changing work practices will affect roles and culture; steer the transition and invest in policy and education.

Evolution of third-party risk, accountability and trust

It was great to join last night’s panel, where I shared practical lessons from managing AI in vendor ecosystems – including ethical implications, regulatory uncertainties and resilience at scale.

If you run a restaurant, your supplier gives you a batch of ingredients and you use them in meals for customers. You’re responsible if the food makes people sick. AI vendors are ingredient suppliers – you are the chef.

Guardrails don’t have to block progress – they can make AI reliable and trustworthy.

Responsible Management Prize

I’ve been awarded the Responsible Management Prize 🏆

This award recognises the values that guide me every day: honesty, integrity and leading with purpose.

In today’s evolving business landscape, where AI, risk management and cybersecurity intersect, ethical practice is essential. Because what we stand for today shapes the world we build tomorrow.

As algorithms power more of our decisions, we must ensure they’re transparent, fair and aligned with human values. Balancing innovation with resilience means anticipating unintended consequences, protecting stakeholders and driving sustainable outcomes.

Safeguarding data and privacy isn’t merely a technical challenge – it’s a trust imperative that underpins every relationship.

Thank you to the selection committee for recognising the work we’ve done together to build an inclusive, principled and forward-looking learning community.

AI leadership in an accelerating world

I have just completed two leadership programs, both focused on AI-powered strategy.

I particularly enjoyed the interactive hands-on session on designing AI capabilities to better understand:
✅ What the current trajectory of AI is, and why it reinforces the need for a clear AI strategy
✅ How AI is reshaping strategic thinking and planning, and how we can use it to strengthen techniques such as scenario analysis
✅ How AI is transforming day-to-day work, and practical steps that can help us build genuine AI readiness

Huge thanks to the academics and industry experts for sharing their research-backed insights. I look forward to applying these frameworks and tools to drive purposeful, data-driven impact in my own leadership journey.

Board observership

I’m excited to join the Lokahi Foundation as a Board Observer through The Observership Program, in partnership with the Australian Institute of Company Directors and The Ethics Centre.

The program is designed to equip leaders with skills and practical experience for not-for-profit board and social impact leadership. I found the sessions on governance, finance, risk and strategy for not-for-profit directors very useful.

I appreciate the opportunity to contribute to an organisation that is truly making a difference and driving systemic change in our communities.

AI guardrails and governance

Just wrapped up an engaging panel on AI guardrails where we explored the shifting ground beneath enterprise AI adoption.

The best AI governance starts not with controls, but with culture. When people start asking not just ‘Can we do this?’ but ‘Should we?’ – that’s when you know you’re on the right path.

Secure by Design is a widely understood concept in cybersecurity; it can be extended to Ethics by Design when building and adopting AI capabilities. Ethical considerations should be embedded from the start, with continuous assurance throughout the lifecycle.

AI in the Enterprise: Balancing Innovation and Security

It was great to debate balancing innovation and security on the keynote panel, where we dug into both the promise and the perils of AI adoption from the CISO and CIO perspectives.

Your biggest AI risks really depend on where and how you’re using it. I recommend reviewing your product roadmap for AI-powered features to anticipate potential gaps.

Map out whether AI is home-grown, vendor-sourced or embedded. When it comes to governance, we can borrow from what we learned with BYOD, cloud and shadow IT. Extend existing security reviews, supply-chain checks and third-party assessments into your AI program. For quick wins, manage it like a SaaS risk: think privacy controls and boundaries around sensitive data.

Cyber risk quantification

I really enjoyed the cyber risk quantification workshop led by Richard Seiersen, co-author of How to Measure Anything in Cybersecurity Risk.

During the session, Richard broke down risk quantification, focusing on identifying the risks most likely to cause significant business losses where assets, threats and vulnerabilities intersect.

I’m also glad to have received his book for correctly estimating a cost during our discussions. It’s one of the most influential books in security: it challenges subjective risk assessments and offers practical frameworks for using data, probability and economics to drive smarter security decisions.
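The quantification idea above can be sketched with a small Monte Carlo simulation. This is a minimal illustration only: the risk register figures are hypothetical, and losses are drawn uniformly between a low and high estimate rather than using the calibrated lognormal ranges the book works with.

```python
import random

random.seed(7)  # fixed seed so the sketch is repeatable

# Hypothetical risk register: (name, annual probability of a loss event,
# low loss estimate, high loss estimate). Figures are illustrative only.
risks = [
    ("ransomware outage",   0.10,   200_000,  5_000_000),
    ("vendor data breach",  0.05,   500_000, 10_000_000),
    ("phishing fraud",      0.30,    10_000,    250_000),
]

def simulate_annual_loss(risks):
    """One simulated year: each risk either occurs or not; if it occurs,
    draw a loss uniformly between its low and high estimates."""
    total = 0.0
    for _, prob, low, high in risks:
        if random.random() < prob:
            total += random.uniform(low, high)
    return total

# Many simulated years give a loss distribution, from which we can read
# exceedance probabilities, e.g. the chance of losing more than $1M.
trials = 10_000
losses = sorted(simulate_annual_loss(risks) for _ in range(trials))
p_exceed_1m = sum(1 for x in losses if x > 1_000_000) / trials
print(f"P(annual loss > $1M) ~ {p_exceed_1m:.2%}")
```

Even this toy version shows where the method earns its keep: it forces explicit probability and loss-range estimates per risk, and the resulting distribution highlights which intersections of assets, threats and vulnerabilities drive the tail losses.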

Adapting to EU regulatory changes: navigating compliance and building resilience

I had the privilege of joining a panel discussion on the rapidly evolving regulatory landscape and its impact on businesses worldwide. With cyber threats, operational disruptions, and AI risks on the rise, governments are strengthening regulations to drive security, resilience and accountability across industries.

In Europe, major frameworks like DORA (Digital Operational Resilience Act), NIS2 (Network and Information Security Directive) and the EU AI Act are reshaping how organisations approach cybersecurity, operational resilience, and responsible AI governance. But this shift isn’t limited to the EU – regulatory scrutiny is increasing globally, from the U.S. to APAC, with frameworks reinforcing risk management, third-party oversight and AI transparency.

A huge thank you to my fellow panelists and engaged audience members for an insightful discussion.

Navigating the endless sea of threats

Cyber security is a relentless race to keep pace with evolving threats, where staying ahead isn’t always possible. Advancing cyber maturity demands more than just reactive measures – it requires proactive strategies, cultural alignment and a deep understanding of emerging risks.

I had the opportunity to share my thoughts with Corinium’s Maddie Abe on staying informed about threats, defining cyber maturity and aligning security metrics with business goals, ahead of my appearance as a speaker at CISO Sydney next month.