Trust in People: Macquarie University Cyber Security Industry Workshop

I’ve been invited to share my thoughts on human-centric security at the Macquarie University Cyber Security Industry Workshop.

Drawing on insights from The Psychology of Information Security and my experience in the field, I outlined some of the reasons for friction between security and business productivity and suggested a practical approach to building a better security culture in organisations.

It was great to be able to contribute to the collaboration between the industry, government and academia on this topic.

Book signing

I’ve been asked to sign a large order of my book The Psychology of Information Security and hope that people who receive a copy will appreciate the personal touch!

I wrote this book to help security professionals, and people interested in a career in cyber security, do their job better. Not only do we need to help manage cyber security risks, we also need to communicate effectively in order to be successful. To achieve this, I suggest starting by understanding the wider organisational context of what we are protecting and why.

Communicating often and across functions is essential when developing and implementing a security programme to mitigate identified risks. In the book, I discuss how to engage with colleagues to factor in their experiences and insights to shape security mechanisms around their daily roles and responsibilities. I also recommend orienting security education activities towards the goals and values of individual team members, as well as the values of the organisation.

I also warn against imposing too much security on the business. At the end of the day, the company needs to achieve its business objectives and innovate, albeit securely. The aim should be to educate people about security risks and help colleagues make the right decisions, showing that security is not only important to keep the company afloat or meet a compliance requirement but that it can also be a business enabler. This helps demonstrate to the Board that security contributes to the overall success of the organisation by elevating trust and amplifying the brand message, which in turn leads to happier customers.

Can AI help improve security culture?

I’ve been exploring the current application of machine learning techniques to cybersecurity. Although there are some strong use cases in the areas of log analysis and malware detection, I couldn’t find the same quantity of research on applying AI to the human side of cybersecurity.

Can AI be used to support the decision-making process when developing cyber threat prevention mechanisms in organisations and influence user behaviour towards safer choices? Can modelling adversarial scenarios help us better understand and protect against social engineering attacks?

To answer these questions, a multidisciplinary perspective should be adopted, with technologists and psychologists working together with industry and government partners.

While designing such mechanisms, consideration should be given to the fact that many interventions can be perceived by users as negatively impacting their productivity, as they demand additional effort to be spent on security and privacy activities not necessarily related to their primary tasks [1, 2].

A number of researchers use principles from behavioural economics to identify cyber security “nudges” (e.g. [3], [4]) or visualisations [5, 6]. This approach helps users make better decisions with minimal perceived effort by moving them away from their default position. It is being applied in the privacy area, for example to reduce Facebook sharing [7] and to improve smartphone privacy settings [8]. Such interventions are also increasingly used during the installation of mobile applications [9].

The proposed socio-technical approach to the reduction of cyber threats aims to account for the development of responsible and trustworthy people-centred AI solutions that can use data whilst maintaining personal privacy.

A combination of supervised and unsupervised learning techniques is already being employed to predict new threats and malware based on existing patterns. Machine learning techniques can be used to monitor system and human activity to detect potential malicious deviations.
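As a rough illustration of the unsupervised side of this idea (not drawn from any specific study above), the sketch below uses scikit-learn's IsolationForest to flag unusual user activity. The feature set, numbers and thresholds are entirely hypothetical.

```python
# Minimal sketch: unsupervised anomaly detection over activity data.
# Features (all hypothetical): [logins, failed_logins, MB_downloaded, after_hours_sessions]
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Simulated "normal" per-user, per-day activity
normal_activity = rng.normal(loc=[20, 1, 200, 1], scale=[5, 1, 50, 1], size=(500, 4))

model = IsolationForest(contamination=0.02, random_state=42)
model.fit(normal_activity)

# A day with many failed logins and a large download should stand out
suspicious_day = np.array([[22, 15, 4000, 6]])
print(model.predict(suspicious_day))            # -1 means "anomalous"
print(model.decision_function(suspicious_day))  # lower score = more unusual
```

In practice, flagged deviations like this would be reviewed by analysts rather than acted on automatically, which is precisely where the human decision-making support discussed below comes in.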

Building adversarial models, designing empirical studies and running experiments (e.g. using Amazon’s Mechanical Turk) can help better measure the effectiveness of attackers’ techniques and develop better defence mechanisms. I believe there is a need to explore opportunities to utilise machine learning to aid the human decision-making process whereby people are supported by, and work together with, AI to better defend against cyber attacks.

We should draw upon participatory co-design and follow a people-centred approach so that relevant stakeholders are engaged in the process. This can help develop personalised and contextualised solutions, crucial to addressing ethical, legal and social challenges that cannot be solved with AI automation alone.


CSO30 Conference – behavioural science in cyber security

I’ve been invited to speak at the CSO30 Conference today on applying behavioural science to cyber security.

I talked about the role behavioural science plays in improving cybersecurity in organisations, the challenges of applying academic theory in practice and how to overcome them.

I shared some tips on how to build a culture of security and measure the success of your security programme.

We also spoke about the differences in approach and the scalability of your security programme depending on the size and context of your organisation, including staffing and resourcing constraints.

Overall, I think we covered a lot of ground in just 30 minutes and registration is still open if you’d like to watch a recording.

Royal Holloway University of London adopts my book for their MSc Information Security programme

Photo by lizsmith

One of the UK’s leading research-intensive universities has selected The Psychology of Information Security to be included in their flagship Information Security programme as part of their ongoing collaboration with industry professionals.

Royal Holloway University of London’s MSc in Information Security was the first of its kind in the world. It is certified by GCHQ, the UK Government Communications Headquarters, and taught by academics and industrial partners in one of the largest and most established Information Security Groups in the world. It is a UK Academic Centre of Excellence for cyber security research, and an Engineering and Physical Sciences Research Council (EPSRC) Centre for Doctoral Training in cyber security.

Researching and teaching behaviours, risk perception and decision-making in security is one of the key components of the programme and my book is one of the resources made available to students.

“We adopted The Psychology of Information Security book for our MSc in Information Security and have been using it for two years now. Our students appreciate the insights from the book and it is on the recommended reading list for the Human Aspects of Security and Privacy module. The feedback from students has been very positive as it brings the world of academia and industry closer together.”

Dr Konstantinos Mersinas,
Director of Distance Learning Programme and MSc Information Security Lecturer.

What can a US Army General teach us about security?


General Douglas MacArthur said, “never give an order that can’t be obeyed”. This is sound advice, as doing so can diminish the commander’s authority. If people want to do what you are asking them to do, but can’t, they will doubt your judgement in the future.

Despite the fact that most of us operate in commercial organisations rather than the US Army, there are some lessons to be learned from this.

Security professionals don’t need to rally their troops and rarely operate in command-and-control environments. Their role has largely shifted to that of an advisor to the business when it comes to managing cyber risk. Yet all too often the advice they give is misguided. In an effort to protect the business, they sometimes fail to grasp the wider context in which it operates. More importantly, they rarely consider their colleagues who will have to follow their guidance.

Angela Sasse gives a brilliant example of this when she talks about phishing. Security professionals expect people to be able to identify a phishing email in order to keep the company secure. Through numerous awareness sessions, they tell people how dangerous it is to click on a link in a phishing email.

Although it makes sense to some extent, it’s not helpful to expect people to be able to recognise a phishing email 100% of the time. In fact, a lot of information security professionals might struggle to make that distinction themselves, especially when it comes to more sophisticated cases of spear phishing. So how can we expect people who are not information security specialists to measure up?

To make matters worse, most modern enterprises depend on emails with links to be productive. It is considered normal and part of business as usual to receive an email and click on the link in it. I heard of a scenario where a company hired an external agency and paid good money to survey their employees. Despite advance warnings, engagement with the survey was reduced because people were reporting these external emails as “phishing attempts”. The communications team was not pleased, and that certainly didn’t help establish a productive relationship with the security team.

The bottom line is that if your defences depend on people not clicking on links, you can do better than that. The aim is not to punish people when they make a mistake, but to build trust. The security team should therefore be there to support people and recognise their challenges rather than police them.

After all, when someone does eventually click on a malicious link, it’s much better if they pick up the phone to the security team and admit their mistake rather than hope it doesn’t get noticed. Not only does this speed up incident response, it also fosters the role of the security professional as a business enabler, rather than a commander who keeps giving orders that can’t be obeyed.

Vulnerability scanning gone bad


Security teams often have good intentions when they want to improve the security posture of a company by introducing new tools.

In one organisation, for example, the security team wanted to mitigate the risk of application vulnerabilities being exploited and decided to deploy a code-scanning tool. This would make sure that applications were tested for exploits before release. A great idea, but the uptake of the tool was surprisingly low and it created a lot of friction.

After closer examination, it turned out that this was primarily due to poor communication with the development teams that would need to use the tool. The impacted teams weren’t sufficiently trained in its use and there wasn’t enough support from management to adopt it.

Development teams work to tight timelines and budgets in order to meet business objectives, so anything that might jeopardise them is viewed with caution.

As a result, applications that should have had their code scanned either hadn’t, or were scanned at a much later stage of the development cycle. The tool was not incorporated into the DevOps pipeline; instead, the scans were run as a manual check before release into production. Not only did the risk of shipping applications with flaws remain largely unchanged, but the whole process of delivering working software was also prolonged.
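For contrast, here is a rough sketch of what folding the scan into the pipeline itself could look like. The appscan command, its flags and its JSON output are placeholders for whatever tool a team actually uses, so treat this as an illustration rather than a recipe.

```python
# Minimal sketch: run the code scanner as a pipeline step and fail the build
# on high-severity findings. "appscan" is a hypothetical scanner CLI.
import json
import subprocess
import sys

def run_scan(source_dir: str, max_high_findings: int = 0) -> int:
    """Run the scanner; return a non-zero exit code if the build should fail."""
    result = subprocess.run(
        ["appscan", "--format", "json", source_dir],  # placeholder command
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
        return result.returncode

    findings = json.loads(result.stdout)  # assumes a JSON list of findings
    high = [f for f in findings if f.get("severity") == "high"]
    print(f"{len(findings)} findings, {len(high)} high severity")
    return 1 if len(high) > max_high_findings else 0

if __name__ == "__main__":
    sys.exit(run_scan("."))
```

Even with automation like this in place, the recommendations below still apply: a failing pipeline step only works if the people affected understand why it is there and how to act on the results.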

These new applications were being delivered to facilitate revenue growth or streamline existing processes to reduce cost and complexity. The impact on the business was that the new functionality they were expecting took longer to materialise, resulting in user frustration.

What can you do to prevent such situations from happening? Here are a few recommendations:

  1. Communicate frequently and at the right level. Communication must start at the top of an organisation and work its way down, so that priorities and expectations can be aligned. A person may need to hear the same message multiple times before they take action.
  2. Articulate the benefits. Security and risk teams need to ensure they position any new processes or tools in a way that highlights the benefits to each stakeholder group.
  3. Provide clear steps. In order to ensure the change is successful, security professionals should clearly outline the steps for how to start realising these benefits.

Communicating and providing support on new security policies, tools and practices to impacted teams is absolutely critical. This is especially important in large organisations with many stakeholder groups spread across multiple geographies. Always keep people in mind when introducing a change, even if it’s a change for the better.

Image by Hugo Chinaglia

Transparency in security


I was asked to deliver a keynote in Germany at the Security Transparent conference. Of course, I agreed. Transparency in security is one of the topics that is very close to my heart and I wish professionals in the industry not only talked about it more, but also applied it in practice.

Back in the old days, security through obscurity was one of the many defence layers security professionals employed to protect against attackers. On the surface, it’s hard to argue with such logic: the less the adversary knows about our systems, the less likely they are to find a vulnerability that can be exploited.

There are some disadvantages to this approach, however. For one, you now need to tightly control access to the restricted information about the system to limit the possibility of leaking sensitive details of its design. But this also limits the scope for testing: if only a handful of people are allowed to inspect the system for security flaws, the chances of actually discovering them are greatly reduced, especially when it comes to complex systems. Cryptographers were among the first to realise this. One of Kerckhoffs’s principles states that “a cryptosystem should be secure even if everything about the system, except the key, is public knowledge”.

Modern encryption algorithms are not only completely open to the public, exposing them to intense scrutiny, but have often been developed through open, public competitions, as is the case, for example, with the Advanced Encryption Standard (AES). If a vendor boasts about using their own proprietary encryption algorithm, I suggest giving them a wide berth.
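To make that contrast concrete, here is a small Python sketch using the open-source cryptography library, which implements the public AES standard in GCM mode. Everything about the algorithm is public and heavily scrutinised; only the key has to stay secret, exactly as Kerckhoffs’s principle suggests.

```python
# Minimal sketch: authenticated encryption with the public AES standard,
# via the widely reviewed "cryptography" library. Only the key is secret.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the only secret
nonce = os.urandom(12)                      # unique per message, not secret
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"quarterly results", b"header")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"header")
assert plaintext == b"quarterly results"
```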

Cryptography aside, you can approach transparency from many different angles: the way you handle personal data, respond to a security incident or work with your partners and suppliers. All of these and many more deserve the attention of the security community. We need to move away from ambiguous privacy policies and the desire to save face by not disclosing a security breach affecting our customers or downplaying its impact.

The way you communicate internally and externally while enacting these changes within an organisation matters a lot, which is why I focused on this communication element while presenting at Security Transparent 2019. I also talked about friction between security and productivity and the need for better alignment between security and the business.

I shared some stories from behavioural economics, criminology and social psychology to demonstrate that the challenges we are facing in information security are not always unique – we can often look at other seemingly unrelated fields to borrow and adapt what works for them. Applying lessons learned from other disciplines when it comes to transparency and understanding people is essential when designing security that works, especially if your aim is to move beyond compliance and be an enabler to the business.

Remember, people are employed to do a particular job: unless you’re hired as an information security specialist, your job is not to be an expert in security. In fact, badly designed and implemented security controls can prevent you from doing your job effectively by reducing your productivity.

After all, even Kerckhoffs recognised the importance of context and the fatigue that security can place on people. One of his lesser-known principles states that “given the circumstances in which it is to be used, the system must be easy to use and should not be stressful to use or require its users to know and comply with a long list of rules”. He was a wise man indeed.