Design security for people: how behavioural science beats friction

Security failures are rarely a technology problem alone. They’re socio-technical failures: mismatches between how controls are designed and how people actually work under pressure. If you want resilient organisations, start by redesigning security so it fits human cognition, incentives and workflows. Then measure and improve it.

Think like a behavioural engineer

Apply simple behavioural-science tools to reduce errors and increase adoption:

  • Defaults beat persuasion. Make the secure choice the path of least resistance: automatic updates, default multi-factor authentication, managed device profiles, single sign-on with conditional access. Defaults change behaviour at scale without relying on willpower.
  • Reduce friction where it matters. Map high-risk workflows (sales demos, incident response, customer support) and remove unnecessary steps that push people toward risky workarounds (like using unapproved software). Where friction is unavoidable, provide fast, well-documented alternatives.
  • Nudge, don’t nag. Use contextual micro-prompts (like in-app reminders) at the moment of decision rather than one-off training. Framing matters: emphasise how a control helps the person do their job, not just what it prevents. A minimal sketch of such a prompt appears after this list.
  • Commitment and incentives. Encourage teams to publicly adopt small security commitments (e.g. “we report suspicious emails”) and recognise them. Social proof is powerful – people emulate peers more than policies.
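To make the “nudge, don’t nag” idea concrete, here is a minimal sketch of a contextual micro-prompt. Everything in it (the ShareAttempt record, the nudge_message helper, the file labels) is a hypothetical illustration, not a real product API.

```python
# A minimal sketch of a contextual micro-prompt shown at the moment of decision.
# All names (ShareAttempt, nudge_message, the file labels) are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ShareAttempt:
    recipient_domain: str
    file_label: str                 # e.g. "public", "internal", "confidential"
    prompted_today: bool            # has this user already seen a prompt today?

def nudge_message(attempt: ShareAttempt, org_domain: str = "example.com") -> Optional[str]:
    """Return a short, task-framed prompt, or None if no nudge is needed."""
    external = not attempt.recipient_domain.endswith(org_domain)
    sensitive = attempt.file_label in {"internal", "confidential"}
    if not (external and sensitive):
        return None                 # secure path taken: stay silent
    if attempt.prompted_today:
        return None                 # nudge, don't nag: cap prompt frequency
    return (f"This file is labelled '{attempt.file_label}'. Sending it through the "
            "approved sharing portal keeps the deal moving and protects the client.")

print(nudge_message(ShareAttempt("partner.org", "confidential", prompted_today=False)))
```

Note the framing: the prompt leads with how the secure path helps the person finish the task, and it stays silent whenever a nudge isn’t needed.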

Build trust, not fear

A reporting culture requires psychological safety.

  • Adopt blameless post-incident reviews for honest mistakes; separate malice investigations from learning reviews.
     
  • Be transparent: explain why rules exist, how they are enforced and what happens after a report.
     
  • Lead by example: executives and managers must follow the rules visibly. Norms are set from the top.

Practical programme components

  1. Security champion network. One trained representative per team. Responsibilities: localising guidance, triaging near-misses and feeding back usability problems to the security team.
     
  2. Lightweight feedback loops. Short surveys, near-miss logs and regular champion roundtables to capture usability issues and unearth workarounds (a minimal log-entry sketch follows this list).
     
  3. Blameless playbooks. Clear incident reporting channels, response expectations, and public, learning-oriented postmortems.
     
  4. Measure what matters. Track metrics tied to risk and behaviour.
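To show how lightweight the feedback loop in component 2 can be, here is a minimal sketch of a near-miss log entry and a helper that surfaces the workflows generating the most workarounds. The field names and sample entries are illustrative assumptions, not a prescribed schema.

```python
# A minimal sketch of a near-miss log and a helper that surfaces recurring
# friction. Field names and sample entries are illustrative, not a standard.

from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class NearMiss:
    reported_on: date
    team: str
    workflow: str                   # e.g. "sales demos", "customer support"
    what_almost_happened: str
    workaround_used: str            # the unofficial path people actually took
    suggested_fix: str

def top_friction_points(entries, n: int = 3):
    """Workflows that generate the most near-misses, most frequent first."""
    return Counter(e.workflow for e in entries).most_common(n)

log = [
    NearMiss(date(2024, 5, 1), "sales", "sales demos",
             "demo data copied to a personal drive", "personal cloud storage",
             "provision a sanctioned demo dataset"),
    NearMiss(date(2024, 5, 3), "support", "customer support",
             "credentials pasted into chat", "shared team inbox",
             "one-click secure credential sharing"),
]
print(top_friction_points(log))
```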
     

Metrics that inform action (not vanity)

Stop counting clicks and start tracking signals that show cultural change and risk reduction:

  • Reporting latency: median time from detection to report. Increasing latency can indicate reduced psychological safety (fear of blame), friction in the reporting path (hard-to-find button) or gaps in frontline detection capability. A drop in latency after a campaign usually signals improved awareness or lowered friction.

Always interpret in context: rising near-miss reports with falling latency can be positive (visibility improving). Review volume and type alongside latency before deciding.
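As an illustration, here is a minimal sketch of computing median reporting latency. The record layout (detected_at / reported_at pairs) is an assumption about what your incident tooling exports.

```python
# A minimal sketch of computing median reporting latency from incident records.
# The detected_at / reported_at fields are assumed, not a specific tool's export.

from datetime import datetime
from statistics import median

incidents = [
    {"detected_at": datetime(2024, 6, 1, 9, 0),  "reported_at": datetime(2024, 6, 1, 9, 20)},
    {"detected_at": datetime(2024, 6, 2, 14, 0), "reported_at": datetime(2024, 6, 2, 16, 0)},
    {"detected_at": datetime(2024, 6, 3, 11, 0), "reported_at": datetime(2024, 6, 3, 11, 5)},
]

latencies = [r["reported_at"] - r["detected_at"] for r in incidents]
print("Median reporting latency:", median(latencies))   # 0:20:00 for this sample
```

Trend this figure per month or per business unit rather than quoting a single organisation-wide number.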

  • Inquiry rate: the number of proactive security inquiries per period (help requests, pre-deployment checks, risk questions). An increase usually signals growing trust and willingness to engage with security; a sustained fall may indicate rising friction, unresponsiveness or fear.

If the rate rises sharply with no matching incident reduction, check whether confusion is driving the questions (update the docs) or whether new features need security approvals (streamline the process).
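A minimal sketch of trending the inquiry rate over time, with a breakdown by type so confusion-driven questions can be separated from genuine pre-deployment engagement. The records and category names are illustrative assumptions.

```python
# A minimal sketch of trending proactive security inquiries per month and by
# type. The records and category names are illustrative assumptions.

from collections import Counter
from datetime import date

inquiries = [
    {"asked_on": date(2024, 4, 3),  "type": "pre-deployment check"},
    {"asked_on": date(2024, 4, 18), "type": "risk question"},
    {"asked_on": date(2024, 5, 7),  "type": "help request"},
    {"asked_on": date(2024, 5, 9),  "type": "pre-deployment check"},
    {"asked_on": date(2024, 5, 22), "type": "risk question"},
]

per_month = Counter(q["asked_on"].strftime("%Y-%m") for q in inquiries)
per_type  = Counter(q["type"] for q in inquiries)
print(sorted(per_month.items()))   # [('2024-04', 2), ('2024-05', 3)]
print(per_type.most_common())
```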

  • Confidence and impact: employees’ reported confidence to perform required security tasks (backups, secure file sharing, suspicious email reporting) and their belief that those actions produce practical organisational outcomes (risk reduction, follow-up action, leadership support).

An increase may signal stronger capability and greater perceived efficacy of security actions, while a decrease may point to skills gaps, tooling or access friction, or a perception that actions don’t lead to change.
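For the confidence measure, here is a minimal sketch of summarising survey responses per task. The tasks and the 1-to-5 scores are illustrative, not real survey data.

```python
# A minimal sketch of summarising task-level confidence from a short survey.
# The tasks and the 1-to-5 scores are illustrative, not real data.

from statistics import mean

responses = {
    "report a suspicious email": [5, 4, 4, 5, 3],
    "share files securely":      [3, 2, 4, 3, 3],
    "restore from backup":       [2, 2, 3, 1, 2],
}

for task, scores in responses.items():
    print(f"{task:28s} mean confidence: {mean(scores):.1f} / 5")
```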

Metrics should prompt decisions (e.g., simplify guidance if dwell time on key security pages is low, fund an automated patching project if mean time to remediate is unacceptable), not decorate slide decks.

Experiment, measure, repeat

Treat culture change like product development: hypothesis → experiment → measure → adjust. Run small pilots (one business unit, one workflow), measure impact on behaviour and operational outcomes, then scale the successful patterns.
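To make the loop concrete, here is a minimal sketch of comparing a pilot unit against a control unit on one behavioural metric, using a simple difference-in-differences view. The figures are illustrative, not real results.

```python
# A minimal sketch of a pilot-vs-control comparison on one behavioural metric
# (suspicious-email reports per 100 employees per month). Figures are illustrative.

def report_rate(reports: int, headcount: int) -> float:
    """Reports per 100 employees."""
    return 100 * reports / headcount

pilot_before, pilot_after     = report_rate(12, 240), report_rate(31, 240)
control_before, control_after = report_rate(10, 220), report_rate(12, 220)

# Difference-in-differences: the change in the pilot minus the change in the
# control, which strips out organisation-wide trends affecting both groups.
effect = (pilot_after - pilot_before) - (control_after - control_before)
print(f"Estimated effect: {effect:.1f} extra reports per 100 employees per month")
```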

Things you can try this month

  • Map 3 high-risk workflows and design safer fast paths.
  • Stand up a security champion pilot in two teams.
  • Change one reporting process to be blameless and measure reporting latency.
  • Implement or verify secure defaults for identity and patching.
  • Define 3 meaningful metrics and publish baseline values.
     

When security becomes the way people naturally work, supported by defaults, fast safe paths and a culture that rewards reporting and improvement, it stops being an obstacle and becomes an enabler. That’s the real return on investment: fewer crises, faster recovery and the confidence to innovate securely.

If you’d like to learn more, check out the second edition of The Psychology of Information Security for more practical guidance on building a positive security culture.

Security is a social design problem, not a tech one

I’m super proud to have written this book. It’s the much improved second edition – and I can’t wait to hear what you think about it.

https://amzn.asia/d/9lr6Sd9

Please leave an Amazon review if you can – this really helps beat the algorithm, and is much appreciated!

Collaborating with the enemy: key lessons for cyber security

In cybersecurity, collaboration is essential. With growing complexity in the threat landscape, leaders often find themselves working with parties they may not fully align with—whether internal teams, external stakeholders, or even rival firms.

Adam Kahane’s book Collaborating with the Enemy: How to Work with People You Don’t Agree with or Like or Trust outlines principles for collaborating effectively, especially in challenging environments where trust and agreement are minimal. Kahane’s “stretch collaboration” approach can transform the way cybersecurity leaders address conflicts and turn rivals into partners to meet critical security goals. In this blog, I’ll share my key takeaways.


Trust in People: Macquarie University Cyber Security Industry Workshop

I’ve been invited to share my thoughts on human-centric security at the Macquarie University Cyber Security Industry Workshop.

Drawing on insights from The Psychology of Information Security and my experience in the field, I outlined some of the reasons for friction between security and business productivity and suggested a practical approach to building a better security culture in organisations.

It was great to be able to contribute to the collaboration between the industry, government and academia on this topic.

Book signing

I’ve been asked to sign a large order of my book The Psychology of Information Security and hope that people who receive a copy will appreciate the personal touch!

I wrote this book to help security professionals, and people interested in a career in cyber security, do their jobs better. Not only do we need to help manage cyber security risks, but we also need to communicate effectively in order to be successful. To achieve this, I suggest starting by understanding the wider organisational context of what we are protecting and why.

Communicating often and across functions is essential when developing and implementing a security programme to mitigate identified risks. In the book, I discuss how to engage with colleagues to factor in their experiences and insights to shape security mechanisms around their daily roles and responsibilities. I also recommend orienting security education activities towards the goals and values of individual team members, as well as the values of the organisation.

I also warn against imposing too much security on the business. At the end of the day, the company needs to achieve its business objectives and innovate, albeit securely. The aim should be to educate people about security risks and help colleagues make the right decisions, showing that security is not only important to keep the company afloat or meet a compliance requirement but that it can also be a business enabler. This helps demonstrate to the Board that security contributes to the overall success of the organisation by elevating trust and amplifying the brand message, which in turn leads to happier customers.

Can AI help improve security culture?

I’ve been exploring the current application of machine learning techniques to cybersecurity. Although there are some strong use cases in the areas of log analysis and malware detection, I couldn’t find the same quantity of research on applying AI to the human side of cybersecurity.

Can AI be used to support the decision-making process when developing cyber threat prevention mechanisms in organisations and influence user behaviour towards safer choices? Can modelling adversarial scenarios help us better understand and protect against social engineering attacks?

To answer these questions, a multidisciplinary perspective should be adopted with technologists and psychologists working together with industry and government partners.

While designing such mechanisms, consideration should be given to the fact that many interventions can be perceived by users as negatively impacting their productivity, as they demand additional effort to be spent on security and privacy activities not necessarily related to their primary activities [1, 2].

A number of researchers use principles from behavioural economics to identify cyber security “nudges” (e.g. [3], [4]) or visualisations [5, 6]. This approach helps users make better decisions and minimises the perceived effort of moving away from their default position. The method is being applied in the privacy area, for example to reduce Facebook sharing [7] and improve smartphone privacy settings [8]. Such nudges are also increasingly used as interventions, particularly during the installation of mobile applications [9].

The proposed socio-technical approach to the reduction of cyber threats aims to account for the development of responsible and trustworthy people-centred AI solutions that can use data whilst maintaining personal privacy.

A combination of supervised and unsupervised learning techniques is already being employed to predict new threats and malware based on existing patterns. Machine learning techniques can be used to monitor system and human activity to detect potential malicious deviations.
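As a minimal sketch of the unsupervised side, the snippet below flags unusual user activity with an isolation forest. It assumes scikit-learn and uses synthetic features; it is an illustration, not a production detection pipeline.

```python
# A minimal sketch of unsupervised anomaly detection over user-activity features.
# Assumes scikit-learn; the features and data are synthetic illustrations.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Columns: logins per day, MB transferred, failed auth attempts (synthetic).
normal = rng.normal(loc=[8, 200, 1], scale=[2, 50, 1], size=(500, 3))
unusual = np.array([[40, 5000, 12]])          # an obviously out-of-pattern day
activity = np.vstack([normal, unusual])

model = IsolationForest(contamination=0.01, random_state=0).fit(activity)
flags = model.predict(activity)               # -1 = flagged as anomalous

print("Flagged rows:", np.where(flags == -1)[0])  # the unusual day should appear here
```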

Building adversarial models, designing empirical studies and running experiments (e.g. using Amazon’s Mechanical Turk) can help better measure the effectiveness of attackers’ techniques and develop better defence mechanisms. I believe there is a need to explore opportunities to utilise machine learning to aid the human decision-making process whereby people are supported by, and work together with, AI to better defend against cyber attacks.

We should draw upon participatory co-design and follow a people-centred approach so that relevant stakeholders are engaged in the process. This can help develop personalised and contextualised solutions, crucial to addressing ethical, legal and social challenges that cannot be solved with AI automation alone.


CSO30 Conference – behavioural science in cyber security

I’ve been invited to speak at the CSO30 Conference today on applying behavioural science to cyber security.

I talked about the role behavioural science plays in improving cybersecurity in organisations, the challenges of applying academic theory in practice and how to overcome them.

I shared some tips on how to build a culture of security and measure the success of your security programme.

We also spoke about the differences in approach and the scalability of your security programme depending on the size and context of your organisation, including staffing and resourcing constraints.

Overall, I think we covered a lot of ground in just 30 minutes and registration is still open if you’d like to watch a recording.

Royal Holloway University of London adopts my book for their MSc Information Security programme

Photo by lizsmith

One of the UK’s leading research-intensive universities has selected The Psychology of Information Security to be included in their flagship Information Security programme as part of their ongoing collaboration with industry professionals.

Royal Holloway University of London’s MSc in Information Security was the first of its kind in the world. It is certified by GCHQ, the UK Government Communications Headquarters, and taught by academics and industrial partners in one of the largest and most established Information Security Groups in the world. It is a UK Academic Centre of Excellence for cyber security research, and an Engineering and Physical Sciences Research Council (EPSRC) Centre for Doctoral Training in cyber security.

Researching and teaching behaviours, risk perception and decision-making in security is one of the key components of the programme and my book is one of the resources made available to students.

“We adopted The Psychology of Information Security book for our MSc in Information Security and have been using it for two years now. Our students appreciate the insights from the book and it is on the recommended reading list for the Human Aspects of Security and Privacy module. The feedback from students has been very positive as it brings the world of academia and industry closer together.”

Dr Konstantinos Mersinas,
Director of Distance Learning Programme and MSc Information Security Lecturer.