Can AI help improve security culture?

I’ve been exploring the current application of machine learning techniques to cybersecurity. Although there are strong use cases in areas such as log analysis and malware detection, I couldn’t find the same quantity of research on applying AI to the human side of cybersecurity.

Can AI be used to support the decision-making process when developing cyber threat prevention mechanisms in organisations and influence user behaviour towards safer choices? Can modelling adversarial scenarios help us better understand and protect against social engineering attacks?

To answer these questions, a multidisciplinary perspective is needed, with technologists and psychologists working together alongside industry and government partners.

While designing such mechanisms, it should be recognised that users can perceive many interventions as harming their productivity, since these demand additional effort on security and privacy activities not necessarily related to their primary tasks [1, 2].

A number of researchers use principles from behavioural economics to identify cyber security “nudges” (e.g. [3], [4]) or visualisations [5, 6]. These help users make better decisions and minimise perceived effort by moving them away from their default position. The approach is being applied in the privacy arena, for example to reduce Facebook sharing [7] and improve smartphone privacy settings [8]. Such interventions are also increasingly used during the installation of mobile applications [9].

The proposed socio-technical approach to reducing cyber threats aims to support the development of responsible, trustworthy, people-centred AI solutions that can use data whilst maintaining personal privacy.

A combination of supervised and unsupervised learning techniques is already being employed to predict new threats and malware based on existing patterns. Machine learning techniques can be used to monitor system and human activity to detect potential malicious deviations.
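
To make the second point concrete, here’s a minimal sketch of the unsupervised case: an isolation forest trained on simulated “normal” user activity, which then flags an out-of-hours login with unusual characteristics. All features, figures and thresholds are invented for illustration, not drawn from a real deployment.

```python
# Illustrative sketch: unsupervised anomaly detection over user activity.
# All features and numbers below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate "normal" behaviour: office-hours logins, modest data
# transfers, occasional failed login attempts.
normal_activity = np.column_stack([
    rng.normal(10, 2, 500),   # login hour, centred on mid-morning
    rng.normal(50, 15, 500),  # MB transferred per session
    rng.poisson(0.2, 500),    # failed login attempts
])

# Train on normal activity only; the model learns to isolate outliers.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

# A 3am login moving 800MB after six failed attempts.
suspicious = np.array([[3, 800, 6]])
print(model.predict(suspicious))  # -1 means flagged as a deviation
```

The supervised side follows the same shape, except the classifier is trained on labelled benign and malicious samples rather than unlabelled activity.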

Building adversarial models, designing empirical studies and running experiments (e.g. using Amazon’s Mechanical Turk) can help better measure the effectiveness of attackers’ techniques and develop better defence mechanisms. I believe there is a need to explore opportunities to utilise machine learning to aid the human decision-making process whereby people are supported by, and work together with, AI to better defend against cyber attacks.

We should draw upon participatory co-design and follow a people-centred approach so that relevant stakeholders are engaged in the process. This can help develop personalised and contextualised solutions, crucial to addressing ethical, legal and social challenges that cannot be solved with AI automation alone.


How to be a trusted advisor

Being a security leader is first and foremost acting as a trusted advisor to the business. This includes understanding its objectives and aligning your efforts to support and enable delivery on the wider strategy.

It is also about articulating cyber risks and opportunities and working with the executive team on managing them. This doesn’t mean, however, that your role is to highlight security weaknesses and leave it to the board to figure it all out. Instead, being someone they can turn to for advice is the best way to influence the direction and make the organisation more resilient in combating cyber threats.

For your advice to be effective, you first need to earn the right to offer it. One of the best books I’ve read on the subject is The Trusted Advisor by David Maister, Charles Green and Robert Galford. It’s not a new book, and it’s written from the perspective of a professional services firm, but that doesn’t mean its lessons can’t be applied in a security context. It covers the mindset, attributes and principles of a trusted advisor.

Unsurprisingly, the major focus of this work is on developing trust. The authors summarise their views on the subject in the trust equation:

Trust = (Credibility + Reliability + Intimacy) / Self-Orientation

It’s a simple yet powerful representation of what contributes to and hinders the trust building process.
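
To make the equation concrete, here’s an illustrative calculation with made-up scores out of ten. An advisor rated 9 for credibility, 8 for reliability and 7 for intimacy, with a low self-orientation of 3, scores (9 + 8 + 7) / 3 = 8. The same strengths paired with a self-orientation of 8 drop the score to 3: impressive credentials can’t offset the impression that the advisor is in it for themselves.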

It’s hard to trust someone’s recommendations when they don’t put our interests first, are preoccupied with being right, or jump to solutions without fully understanding the problem.

Equally, as important as credibility is, a long list of professional qualifications and previous experience is not on its own enough to make you trustworthy. Having courage and integrity, following through on your promises and listening actively, among other things, are key. In the words of Maister, “it is not enough to be right, you must also be helpful”.

How to apply the FBI’s behavioural change stairway to security

Unlike the FBI’s Hostage Negotiation Team, cyber security professionals are rarely involved in high-stakes negotiations where lives are on the line. But that doesn’t mean they can’t apply some of the team’s techniques to improve security culture, overcome resistance and guide organisational change.

The Behavioural Change Stairway Model progresses through five steps: active listening, empathy, rapport, influence and, finally, behavioural change. Behind its apparent simplicity, it is a tried and tested way to influence human behaviour over time. The crux of it is that you can’t skip any steps, as each builds on the one before. A common mistake many cyber security professionals make is jumping straight to influence or behavioural change with phishing simulations or security awareness campaigns, but this can be counterproductive.

As explained in the original paper, it is recommended to invest time in active listening, empathy and establishing rapport first. In the security context, this might mean working with the business stakeholders to understand their objectives and concerns, rather than sowing fear of security breaches and regulatory fines.

None of this means you have to treat every interaction like a hostile negotiation, or your business executives as violent felons. The aim is to build trust so you can best support the business, not to manipulate your way into getting an increased budget signed off. I cover some of these techniques in The Psychology of Information Security – feel free to check it out if you would like to learn more.

CSO30 Conference – behavioural science in cyber security

I’ve been invited to speak at the CSO30 Conference today on applying behavioural science to cyber security.

I talked about the role behavioural science plays in improving cybersecurity in organisations, the challenges of applying academic theory in practice and how to overcome them.

I shared some tips on how to build a culture of security and measure the success of your security programme.

We also spoke about how the approach to, and scalability of, your security programme differ depending on the size and context of your organisation, including staffing and resourcing constraints.

Overall, I think we covered a lot of ground in just 30 minutes and registration is still open if you’d like to watch a recording.

Royal Holloway University of London adopts my book for their MSc Information Security programme

Photo by lizsmith

One of the UK’s leading research-intensive universities has selected The Psychology of Information Security to be included in their flagship Information Security programme as part of their ongoing collaboration with industry professionals.

Royal Holloway University of London’s MSc in Information Security was the first of its kind in the world. It is certified by GCHQ, the UK Government Communications Headquarters, and taught by academics and industrial partners in one of the largest and most established Information Security Groups in the world. It is a UK Academic Centre of Excellence for cyber security research, and an Engineering and Physical Sciences Research Council (EPSRC) Centre for Doctoral Training in cyber security.

Researching and teaching behaviours, risk perception and decision-making in security is one of the key components of the programme and my book is one of the resources made available to students.

“We adopted The Psychology of Information Security book for our MSc in Information Security and have been using it for two years now. Our students appreciate the insights from the book and it is on the recommended reading list for the Human Aspects of Security and Privacy module. The feedback from students has been very positive as it brings the world of academia and industry closer together.”

Dr Konstantinos Mersinas,
Director of Distance Learning Programme and MSc Information Security Lecturer.

One year in: a look back

In the past year I had the opportunity to help a tech startup shape its culture and make security a brand differentiator. As the Head of Information Security, I was responsible for driving the resilience, governance and compliance agenda, adjusting to the needs of a dynamic and growing business.


Security lessons from the pandemic


The need for digital transformation is becoming clear as the current pandemic accelerates existing business and technology trends. Despite market uncertainty and tightening budgets, many companies are seeing improved productivity and cost savings from embracing remote working and cloud computing. They are recognising the value of scaling capacity up and down with customer demand and paying only for what they use, rather than maintaining their own datacentres. Supporting staff and trusting them to do the right thing also pays off.

Security programmes must adapt accordingly. They should be agile and cater for this shift, helping people do their jobs better and more securely. Protecting the remote workforce and your cloud infrastructure becomes the focus. It’s also a great opportunity to dust off incident response and business continuity plans to keep them relevant and at the forefront of everyone’s minds.

Work with your staff to explain the ways the bad guys take advantage of media-intense events for scams and fraud. Make it personal: use examples and relate them to scenarios outside the work context too. Secure their devices and know your shared responsibility model when it comes to cloud services. Backups, logging and monitoring, and identity and access management are all important areas to consider. Overall, it’s a good time to review your risk logs and threat models and adjust your approach accordingly.

Photo by Chad Davis.

What can a US Army General teach us about security?


General Douglas MacArthur said “never give an order that can’t be obeyed”. This is sound advice, as doing so can diminish the commander’s authority. If people want to do what you are asking them to do, but can’t, they will doubt your judgement in the future.

Despite the fact that most of us operate in commercial organisations rather than the US Army, there are some lessons to be learned from this.

Security professionals don’t need to rally troops and rarely operate in command-and-control environments. Their role has largely shifted to that of an advisor to the business on managing cyber risk. Yet all too often the advice they give is misguided. In an effort to protect the business, they sometimes fail to grasp the wider context in which it operates. More importantly, they rarely consider the colleagues who will have to follow their guidance.

Angela Sasse gives a brilliant example of this when she talks about phishing. Security professionals expect people to be able to identify a phishing email in order to keep the company secure. Through numerous awareness sessions, they tell staff how dangerous it is to click on a link in a phishing email.

Although it makes sense to some extent, it’s not helpful to expect people to recognise a phishing email 100% of the time. In fact, many information security professionals might struggle to make that distinction themselves, especially with more sophisticated cases of spear phishing. So how can we expect people who are not information security specialists to measure up?

To make matters worse, most modern enterprises depend on emails with links to stay productive. Receiving an email and clicking the link in it is considered normal, part of business as usual. I heard of a scenario where a company hired an external agency and paid good money to survey its employees. Despite advance warnings, engagement with the survey dropped because people kept reporting the external emails as “phishing attempts”. The communications team was not pleased, and that certainly didn’t help establish a productive relationship with the security team.

The bottom line is that if your defences depend on people not clicking on links, you can do better. The aim is not to punish people when they make a mistake, but to build trust. The security team should be there to support people and recognise their challenges, rather than police them.

After all, when someone does eventually click on a malicious link, it’s much better if they pick up the phone to the security team and admit their mistake rather than hope it goes unnoticed. Not only does this speed up incident response, it fosters the role of the security professional as a business enabler, rather than a commander who keeps giving orders that can’t be obeyed.