Design security for people: how behavioural science beats friction

Security failures are rarely a technology problem alone. They’re socio-technical failures: mismatches between how controls are designed and how people actually work under pressure. If you want resilient organisations, start by redesigning security so it fits human cognition, incentives and workflows. Then measure and improve it.

Think like a behavioural engineer

Apply simple behavioural-science tools to reduce errors and increase adoption:

  • Defaults beat persuasion. Make the secure choice the path of least resistance: automatic updates, default multi-factor authentication, managed device profiles, single sign-on with conditional access. Defaults change behaviour at scale without relying on willpower; a quick way to verify they actually hold is sketched after this list.
  • Reduce friction where it matters. Map high-risk workflows (sales demos, incident response, customer support) and remove unnecessary steps that push people toward risky workarounds (like using unapproved software). Where friction is unavoidable, provide fast, well-documented alternatives.
  • Nudge, don’t nag. Use contextual micro-prompts (like in-app reminders) at the moment of decision rather than one-off training. Framing matters: emphasise how a control helps the person do their job, not just what it prevents.
  • Commitment and incentives. Encourage teams to publicly adopt small security commitments (e.g. “we report suspicious emails”) and recognise them. Social proof is powerful – people emulate peers more than policies.
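
Defaults only help if they actually hold, so verify them continuously rather than assuming compliance. Below is a minimal sketch, assuming a hypothetical users.csv export from your identity provider with user and mfa_enabled columns; the file name and column names are illustrative, not any real product’s schema.

```python
import csv

# Minimal sketch: check that a secure default (MFA) actually holds.
# Assumes a hypothetical users.csv export with 'user' and 'mfa_enabled'
# columns; adapt the names to whatever your identity provider exports.
def accounts_missing_mfa(path="users.csv"):
    with open(path, newline="") as f:
        return [row["user"]
                for row in csv.DictReader(f)
                if row["mfa_enabled"].strip().lower() != "true"]

if __name__ == "__main__":
    missing = accounts_missing_mfa()
    print(f"{len(missing)} accounts without MFA enabled by default")
```

Tracked weekly, a shrinking list is direct evidence the default is holding; a growing one points to an enrolment or exception process that needs attention.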

Build trust, not fear

A reporting culture requires psychological safety.

  • Adopt blameless post-incident reviews for honest mistakes; separate malice investigations from learning reviews.
     
  • Be transparent: explain why rules exist, how they are enforced and what happens after a report.
     
  • Lead by example: executives and managers must follow the rules visibly. Norms are set from the top.

Practical programme components

  1. Security champion network. One trained representative per team. Responsibilities: localising guidance, triaging near-misses and feeding back usability problems to the security team.
     
  2. Lightweight feedback loops. Short surveys, near-miss logs and regular champion roundtables to capture usability issues and unearth workarounds.
     
  3. Blameless playbooks. Clear incident reporting channels, response expectations, and public, learning-oriented postmortems.
     
  4. Measure what matters. Track metrics tied to risk and behaviour.
     

Metrics that inform action (not vanity)

Stop counting clicks and start tracking signals that show cultural change and risk reduction:

  • Reporting latency: median time from detection to report. Rising latency can indicate reduced psychological safety (fear of blame), friction in the reporting path (a hard-to-find reporting button) or gaps in frontline detection capability. A drop in latency after a campaign usually signals improved awareness or reduced friction.

Always interpret in context: rising near-miss reports with falling latency can be positive (visibility improving). Review volume and type alongside latency before deciding.

  • Inquiry rate: the median number of proactive security inquiries per period (help requests, pre-deployment checks, risk questions). An increase usually signals growing trust and willingness to engage with security; a sustained fall may indicate rising friction, unresponsiveness or fear.

If the rate rises sharply with no matching reduction in incidents, validate whether confusion is driving the questions (update the documentation) or whether new features need security approvals (streamline the process).

  • Confidence and impact: employees’ reported confidence to perform required security tasks (backups, secure file sharing, suspicious email reporting) and their belief that those actions produce practical organisational outcomes (risk reduction, follow-up action, leadership support).

An increase may signal stronger capability and perceived efficacy of security actions, while a decrease may indicate skills gaps, tooling or access friction, or a perception that actions don’t lead to change.

Metrics should prompt decisions (e.g., simplify guidance if dwell time on key security pages is low, fund an automated patching project if mean time to remediate is unacceptable), not decorate slide decks.
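
To make these signals concrete, here is a minimal sketch of how reporting latency and inquiry volume could be computed from a simple event log. The event tuples and timestamps are illustrative assumptions; in practice you would pull the records from your ticketing or SIEM tooling.

```python
from datetime import datetime
from statistics import median

# Minimal sketch of two behavioural signals. The event log below is a
# hypothetical in-memory stand-in for records you would normally pull
# from a ticketing system or SIEM.
events = [
    # (kind, detected_at, reported_at) -- timestamps are illustrative
    ("report",  "2024-05-01T09:00", "2024-05-01T09:20"),
    ("report",  "2024-05-02T14:00", "2024-05-02T16:30"),
    ("inquiry", "2024-05-03T10:00", "2024-05-03T10:00"),
]

def median_reporting_latency_minutes(events):
    """Median minutes from detection to report, across incident reports."""
    latencies = [
        (datetime.fromisoformat(reported) - datetime.fromisoformat(detected))
        .total_seconds() / 60
        for kind, detected, reported in events
        if kind == "report"
    ]
    return median(latencies) if latencies else None

def inquiry_count(events):
    """Volume of proactive security inquiries in the period."""
    return sum(1 for kind, *_ in events if kind == "inquiry")

print(median_reporting_latency_minutes(events))  # 85.0 (minutes)
print(inquiry_count(events))                     # 1
```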

Experiment, measure, repeat

Treat culture change like product development: hypothesis → experiment → measure → adjust. Run small pilots (one business unit, one workflow), measure impact on behaviour and operational outcomes, then scale the successful patterns.
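
As an illustration of the measure step, here is a minimal sketch comparing median reporting latency between a pilot team and a control team before scaling a change. The latency samples are made-up numbers; a real evaluation would use larger samples and a significance test.

```python
from statistics import median

# Minimal sketch: compare a behavioural metric between a pilot team and a
# control team before scaling. The latency samples (minutes from detection
# to report) are made-up numbers for illustration.
pilot   = [12, 18, 25, 30, 41]   # after the pilot change
control = [35, 50, 62, 75, 90]   # no change

def median_latency_reduction(pilot, control):
    """Relative reduction in median reporting latency for the pilot group."""
    p, c = median(pilot), median(control)
    return (c - p) / c

print(f"Pilot median latency is {median_latency_reduction(pilot, control):.0%} lower")
# -> Pilot median latency is 60% lower
```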

Things you can try this month

  • Map 3 high-risk workflows and design safer fast paths.
  • Stand up a security champion pilot in two teams.
  • Change one reporting process to be blameless and measure reporting latency.
  • Implement or verify secure defaults for identity and patching.
  • Define 3 meaningful metrics and publish baseline values.
     

When security becomes the way people naturally work, supported by defaults, fast safe paths and a culture that rewards reporting and improvement, it stops being an obstacle and becomes an enabler. That’s the real return on investment: fewer crises, faster recovery and the confidence to innovate securely.

If you’d like to learn more, check out the second edition of The Psychology of Information Security for more practical guidance on building a positive security culture.

Governing AI – where should we draw the line?

As AI adoption accelerates, leaders face the challenge of setting clear boundaries, not only around what AI should and shouldn’t do, but also around who holds responsibility for its oversight.

It was great to share my thoughts and answer audience questions during this panel discussion.

Governance must be cross-functional: security, risk, data and the business share accountability. I also reinforced the importance of guardrails, particularly for agentic AI: automate low-risk work, but keep humans in the loop for decisions that affect safety, rights or reputation. Classify models and agents by impact and apply controls accordingly.
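
One way to picture that guardrail pattern is a simple impact-tiered gate: low-impact agent actions run automatically, while high-impact ones require human sign-off. This is a minimal sketch; the tiers, example actions and the execute helper are illustrative assumptions, not any specific product’s controls.

```python
from enum import Enum

# Minimal sketch of an impact-tiered guardrail for agentic AI. The tiers,
# example actions and this gate are illustrative assumptions.
class Impact(Enum):
    LOW = 1     # e.g. drafting an internal summary
    MEDIUM = 2  # e.g. sending a customer-facing message
    HIGH = 3    # e.g. decisions affecting safety, rights or reputation

ACTION_IMPACT = {
    "summarise_ticket": Impact.LOW,
    "email_customer": Impact.MEDIUM,
    "approve_refund": Impact.HIGH,
}

def execute(action: str, human_approved: bool = False) -> str:
    # Unknown actions default to HIGH: fail closed, not open.
    impact = ACTION_IMPACT.get(action, Impact.HIGH)
    if impact is Impact.HIGH and not human_approved:
        return f"BLOCKED: '{action}' requires human sign-off"
    return f"RUNNING: '{action}' ({impact.name} impact)"

print(execute("summarise_ticket"))  # automated low-risk work
print(execute("approve_refund"))    # held for a human decision
```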

SANS Cyber Incident Leader

I’m proud to share that I’ve completed SANS’s hands-on LDR553: Cyber Incident Management training and earned the GIAC Cyber Incident Leader (GCIL) certification.

This course sharpened my ability to guide teams through every stage of a breach. I was awarded a challenge coin for the top score in the final capstone project.

More

Scenario analysis in cyber security: building resilience

Resilience matrix, adapted from Burnard, Bhamra & Tsinopoulos (2018, p. 357).

Scenario analysis is a powerful tool for sharpening strategic thinking and shaping strategic responses. It examines how our environment might play out in the future, helping organisations ask the right questions, reduce biases and prepare for the unexpected.

What are scenarios? Simply put, they are short explanatory stories with an attention-grabbing, easy-to-remember title. They define plausible futures and are often based on trends and uncertainties.

More

How to adopt NIST CSF 2.0

CSF 2.0 Functions. Source: NIST

NIST released a new version of the Cybersecurity Framework with a few key changes:

  • It can now be applied beyond critical infrastructure, making it more versatile and straightforward to adopt.
  • It introduces a new core “Govern” function that includes categories from other sections, with increased focus on supply chain risk management and accountability.
  • It highlights synergies with the NIST Privacy Framework.

I often use this framework to develop and deliver information security strategy. Although other methodologies exist, I find that its layout and functions facilitate effective communication with various stakeholder groups, including the Board.

More

How to maximise the return on security investment

Not every conversation a CISO has with the Board should be about asking for a budget increase or an FTE uplift. On the contrary, with the squeeze on security budgets, it can be an opportunity to demonstrate how you do more with less.

To demonstrate business value and achieve desired impact, a CISO’s cyber security strategy should go beyond cyber capability uplift and risk reduction and also improve cost performance.

Security leaders don’t have unlimited resources. Significant security transformation, however, can be achieved by leveraging existing investments and current security resource levels.

More

Starting an Executive MBA

It’s widely understood that cybersecurity should support the business – it’s a common theme of this blog. However, it’s often difficult to achieve true alignment without understanding the business context, priorities and challenges, and without being able to communicate in the language of business stakeholders.

I decided to enrol in the Master of Business Administration (Executive) degree to broaden my knowledge and enhance my strategic thinking to better serve organisations. Developing my skills in finance, leadership, strategy and innovation will equip me to better understand current challenges and make a positive, lasting impact. The Australian Graduate School of Management (AGSM) program at the University of New South Wales will help me learn about the latest business practices and how to apply them effectively to add value to the business.

I have a strong technical background and analytical skills, and I am looking to build on this foundation to enhance my contribution to the C-suite. Throughout my career I’ve worked in consulting, corporate and startup organisations; my understanding of the challenges and opportunities facing both large corporations and nimble startups globally will bring a unique perspective to the AGSM community. I can also leverage my extensive professional network around the world to support fellow Executive MBA candidates and alumni.

I’ll be writing about my experience and learning in this blog, so stay tuned for more updates on how cybersecurity practices can be aligned to wider business strategy and objectives.