Can AI help improve security culture?

I’ve been exploring current applications of machine learning techniques in cybersecurity. Although there are strong use cases in areas such as log analysis and malware detection, I couldn’t find the same quantity of research on applying AI to the human side of cybersecurity.

Can AI be used to support the decision-making process when developing cyber threat prevention mechanisms in organisations and influence user behaviour towards safer choices? Can modelling adversarial scenarios help us better understand and protect against social engineering attacks?

To answer these questions, a multidisciplinary perspective should be adopted, with technologists and psychologists working together with industry and government partners.

While designing such mechanisms, we should consider that many interventions are perceived by users as harming their productivity, because they demand additional effort on security and privacy activities not necessarily related to their primary tasks [1, 2].

A number of researchers use principles from behavioural economics to design cyber security “nudges” (e.g. [3], [4]) or visualisations [5, 6]. These interventions help users make better decisions and minimise perceived effort by moving them away from their default position. This method is being applied in the privacy area, for example to reduce Facebook sharing [7] and improve smartphone privacy settings [8]. Additionally, there is growing use of such nudges as interventions, particularly during the installation of mobile applications [9].

The proposed socio-technical approach to reducing cyber threats aims to support the development of responsible and trustworthy people-centred AI solutions that can use data whilst maintaining personal privacy.

A combination of supervised and unsupervised learning techniques is already being employed to predict new threats and malware based on existing patterns. Machine learning techniques can be used to monitor system and human activity to detect potential malicious deviations.
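To illustrate the second point, here is a minimal sketch of detecting deviations in monitored human activity using a simple statistical baseline; the login counts and the three-sigma threshold are assumed for illustration, and a real deployment would use richer features and models:

```python
import statistics

# Hypothetical daily login counts for one user over two weeks (assumed data)
baseline = [4, 5, 6, 5, 4, 5, 6, 5, 4, 5, 6, 5, 4, 5]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_deviation(value: float, threshold: float = 3.0) -> bool:
    """Flag activity more than `threshold` standard deviations from the baseline."""
    return abs(value - mean) / stdev > threshold

print(is_deviation(5))   # typical activity: not flagged
print(is_deviation(40))  # sudden spike: flagged as a potential malicious deviation
```

The same idea scales up to supervised and unsupervised models: establish what “normal” looks like, then surface activity that falls outside it for human review.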

Building adversarial models, designing empirical studies and running experiments (e.g. using Amazon’s Mechanical Turk) can help better measure the effectiveness of attackers’ techniques and develop better defence mechanisms. I believe there is a need to explore opportunities to utilise machine learning to aid the human decision-making process whereby people are supported by, and work together with, AI to better defend against cyber attacks.

We should draw upon participatory co-design and follow a people-centred approach so that relevant stakeholders are engaged in the process. This can help develop personalised and contextualised solutions, crucial to addressing ethical, legal and social challenges that cannot be solved with AI automation alone.


CSO30 Conference – behavioural science in cyber security

I’ve been invited to speak at the CSO30 Conference today on applying behavioural science to cyber security.

I talked about the role behavioural science plays in improving cybersecurity in organisations, the challenges of applying academic theory in practice and how to overcome them.

I shared some tips on how to build a culture of security and measure the success of your security programme.

We also spoke about the differences in approach and the scalability of your security programme depending on the size and context of your organisation, including staffing and resourcing constraints.

Overall, I think we covered a lot of ground in just 30 minutes and registration is still open if you’d like to watch a recording.

Royal Holloway University of London adopts my book for their MSc Information Security programme

Photo by lizsmith

One of the UK’s leading research-intensive universities has selected The Psychology of Information Security to be included in their flagship Information Security programme as part of their ongoing collaboration with industry professionals.

Royal Holloway University of London’s MSc in Information Security was the first of its kind in the world. It is certified by GCHQ, the UK Government Communications Headquarters, and taught by academics and industrial partners in one of the largest and most established Information Security Groups in the world. It is a UK Academic Centre of Excellence for cyber security research, and an Engineering and Physical Sciences Research Council (EPSRC) Centre for Doctoral Training in cyber security.

Researching and teaching behaviours, risk perception and decision-making in security is one of the key components of the programme and my book is one of the resources made available to students.

“We adopted The Psychology of Information Security book for our MSc in Information Security and have been using it for two years now. Our students appreciate the insights from the book and it is on the recommended reading list for the Human Aspects of Security and Privacy module. The feedback from students has been very positive as it brings the world of academia and industry closer together.”

Dr Konstantinos Mersinas,
Director of Distance Learning Programme and MSc Information Security Lecturer.

The foundation of the Zero Trust architecture

Zero Trust is a relatively new term for a concept that’s been around for a while. The shift to remote working and the wider adoption of cloud services have accelerated the transition away from the traditional, well-understood and controlled network perimeter.

Security professionals should help organisations balance the productivity of their employees with appropriate security measures to manage cyber security risks arising from the new ways of working.

When people talk about Zero Trust, however, they often refer to new technologies marketed by security vendors. In my opinion, it is as much (if not more) about communication and foundational IT controls. Effective implementation of the Zero Trust model depends on close cross-departmental collaboration between IT, Security, Risk, HR and Procurement when it comes to access control, the joiner-mover-leaver process, managing identities, detecting threats and more.

Device management is the foundation of an effective Zero Trust implementation. Asset inventory in this model is no longer just a compliance requirement but a prerequisite for managing access to corporate applications. Security professionals should work closely with procurement and IT teams to keep this inventory up to date. Controlling the lifecycle of a device, from procurement and unique identification, through tracking and change management, to decommissioning, should be closely linked with user identities.

People change roles within the company, new employees join and some leave. Collaborating with HR to establish processes for maintaining the connection between device management and employee identities, roles and associated permissions is key to success.

As an example, check out Google’s implementation of the Zero Trust model in their BeyondCorp initiative.

One year in: a look back

In the past year I had the opportunity to help a tech startup shape its culture and make security a brand differentiator. As the Head of Information Security, I was responsible for driving the resilience, governance and compliance agenda, adjusting to the needs of a dynamic and growing business.


What can a US Army General teach us about security?


General Douglas MacArthur said, “never give an order that can’t be obeyed”. This is sound advice, as doing so diminishes the commander’s authority. If people want to do what you are asking them to do but can’t, they will doubt your judgement in the future.

Although most of us operate in commercial organisations rather than the US Army, there are lessons to be learned here.

Security professionals don’t need to rally their troops and rarely operate in command-and-control environments. Their role has largely shifted to that of an advisor to the business when it comes to managing cyber risk. Yet all too often the advice they give is misguided. In an effort to protect the business they sometimes fail to grasp the wider context in which it operates. More importantly, they rarely consider the colleagues who will have to follow their guidance.

Angela Sasse gives a brilliant example of this when she talks about phishing. Security professionals expect people to be able to identify a phishing email in order to keep the company secure. Through numerous awareness sessions they tell employees how dangerous it is to click on a link in a phishing email.

Although this makes sense to some extent, it’s not realistic to expect people to recognise a phishing email 100% of the time. In fact, many information security professionals might struggle to make that distinction themselves, especially in more sophisticated cases of spear phishing. So how can we expect people who are not information security specialists to measure up?

To make matters worse, most modern enterprises depend on emails containing links to be productive. It is considered normal, business-as-usual practice to receive an email and click on the link in it. I heard of a scenario where a company hired an external agency and paid good money to survey its employees. Despite advance warnings, engagement with the survey suffered because people kept reporting the external emails as “phishing attempts”. The communications team was not pleased, and that certainly didn’t help establish a productive relationship with the security team.

The bottom line is that if your defences depend on people not clicking on links, you can do better than that. The aim is not to punish people when they make a mistake, but to build trust. The security team should therefore be there to support people and recognise their challenges rather than police them.

After all, when someone does eventually click on a malicious link, it’s much better if they pick up the phone to the security team and admit their mistake rather than hope it doesn’t get noticed. Not only does this speed up incident response, it fosters the role of the security professional as a business enabler, rather than a commander who keeps giving orders that can’t be obeyed.

Vulnerability scanning gone bad


Security teams often have good intentions when they want to improve the security posture of a company by introducing new tools.

In one organisation, for example, the security team wanted to mitigate the risk of application vulnerabilities being exploited and decided to deploy a code-scanning tool to make sure applications were tested for exploitable flaws before release. A great idea, but uptake of the tool was surprisingly low, and it created a lot of friction.

On closer examination, it turned out that this was primarily due to poor communication with the development teams that were expected to use the tool. The impacted teams weren’t sufficiently trained in its use, and there wasn’t enough management support for adopting it.

Development teams have tight timelines and budgets to work to in order to meet the business objectives. Anything that could disrupt these aspects is viewed with caution.

As a result, applications that should have had their code scanned either hadn’t, or were scanned at a much later stage of the development cycle. Scanning was not incorporated into the DevOps pipeline; instead, the scans were run as a manual check before release into production. Not only did the risk of shipping applications with flaws remain largely unchanged, but the whole process of delivering working software was also prolonged.
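To illustrate what incorporating scanning into the pipeline might look like, here is a deliberately simplified sketch of an automated gate that fails a build when findings appear. The pattern-based “scanner” is a stand-in for a real code-scanning tool, and the secret-matching regex is assumed for illustration:

```python
import re

# Minimal stand-in for a scanner: flag obvious hardcoded secrets in source.
SECRET_PATTERN = re.compile(
    r"(password|api_key)\s*=\s*['\"]\w+['\"]", re.IGNORECASE
)

def scan(source: str) -> list[str]:
    """Return a list of findings for a piece of source code."""
    return [m.group(0) for m in SECRET_PATTERN.finditer(source)]

def gate(source: str) -> int:
    """Pipeline stage: a non-zero return code fails the build early,
    instead of deferring to a manual check before release."""
    findings = scan(source)
    if findings:
        print(f"Build failed: {len(findings)} finding(s)")
        return 1
    return 0

assert gate("api_key = 'abc123'") == 1   # insecure commit blocked at build time
assert gate("print('hello')") == 0       # clean code passes through
```

Running the check on every commit, rather than once before release, is what keeps the feedback loop short enough for development teams to act on.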

These new applications were being delivered to facilitate revenue growth or to streamline existing processes to reduce cost and complexity. The impact on the business was that the new functionality took longer to materialise, resulting in user frustration.

What can you do to prevent such situations from happening? Here are a few recommendations:

  1. Communicate frequently and at the right level. Communication must start at the top of an organisation and work its way down, so that priorities and expectations can be aligned. A person may need to hear the same message multiple times before they take action.
  2. Articulate the benefits. Security and risk teams need to ensure they position any new processes or tools in a way that highlights the benefits to each stakeholder group.
  3. Provide clear steps. In order to ensure the change is successful, security professionals should clearly outline the steps for how to start realising these benefits.

Communicating and providing support on new security policies, tools and practices to impacted teams is absolutely critical. This is especially important in large organisations with many stakeholder groups spread across multiple geographies. Always keep the people in mind when introducing a change, even if it’s a change for the better.

Image by Hugo Chinaglia

Author of the month for January 2019


IT Governance Publishing named me the author of the month and kindly provided a 20% discount on my book.

There’s an interview available in the form of a podcast, where I discuss the most significant challenges related to change management and organisational culture, the common causes of a poor security culture, and my advice for improving the information security culture in your organisation.

ITGP also made one of the chapters of the audio version of my book available for free – I hope you enjoy it!