Behavioural science in cyber security

Why your staff ignore security policies and what to do about it.

Dale Carnegie’s 1936 bestselling self-help book How To Win Friends And Influence People is one of those titles that sits unloved and unread on most people’s bookshelves. But dust off its cover and crack open its spine, and you’ll find lessons and anecdotes that are relevant to the challenges associated with shaping people’s behaviour when it comes to cyber security.

In one chapter, Carnegie tells the story of George B. Johnson, from Oklahoma, who worked for a local engineering company. Johnson’s role required him to ensure that other employees abided by the organisation’s health and safety policies. Among other things, he was responsible for making sure they wore their hard hats when working on the factory floor.

His strategy was as follows: if he spotted someone not following the company’s policy, he would approach them, admonish them, quote the regulation at them, and insist on compliance. It worked, albeit briefly: the employee would put on their hard hat, and as soon as Johnson left the room, they would just as quickly remove it. So he tried something different: empathy. Rather than addressing his colleagues from a position of authority, Johnson spoke to them almost as a friend and expressed a genuine interest in their comfort. He wanted to know whether the hats were uncomfortable to wear, and whether that was why employees didn’t wear them on the job.

Instead of reciting the rules chapter and verse, he simply pointed out that it was in the employees’ own interest to wear their helmets, because they were designed to prevent workplace injuries.

This shift in approach bore fruit, and workers felt more inclined to comply with the rules. Moreover, Johnson observed that employees were less resentful of management.

The parallels between cyber security and George B. Johnson’s battle to ensure health-and-safety compliance are immediately obvious. Our jobs require us to adequately address the security risks that threaten the organisations we work for. To be successful at this, it’s important to ensure that everyone appreciates the value of security — not just engineers, developers, security specialists, and other related roles.

This isn’t easy. On the one hand, failing to implement security controls can expose an organisation to significant losses. On the other, badly implemented security mechanisms can be worse: they either obstruct employee productivity or foster a culture in which security is resented.

To ensure widespread adoption of secure behaviour, security policies and control implementations not only have to accommodate the needs of those who use them, but they must also be economically attractive to the organisation. To realise this, there are three factors we need to consider: motivation, design, and culture.



Artificial intelligence and cyber security: attacking and defending

Cyber security is a manpower-constrained market, so the opportunities for AI automation are vast. AI is frequently used to make certain defensive aspects of cyber security more wide-reaching and effective: combating spam and detecting malware are prime examples. On the offensive side, there are many incentives to use AI when attempting to attack vulnerable systems belonging to others: the speed of attack, low costs, and the difficulty of attracting skilled staff in an already constrained market.

Current research in the public domain is limited to white-hat hackers employing machine learning to identify vulnerabilities and suggest fixes. At the speed AI is developing, however, it won’t be long before we see attackers using these capabilities at mass scale, if they don’t already.

How do we know for sure? The fact is, it is quite hard to attribute a botnet or a phishing campaign to AI rather than a human. Industry practitioners, however, believe that we will see an AI-powered cyber attack within a year: 62% of surveyed Black Hat conference participants consider it a real possibility.

Many believe that AI is already being deployed for malicious purposes by highly motivated and sophisticated attackers. This is not at all surprising, given that AI systems make an adversary’s job much easier. Why? Resource efficiency aside, they introduce psychological distance between an attacker and their victim. Many offensive techniques have traditionally involved engaging with others and being present, which limited an attacker’s anonymity. AI increases both the anonymity and the distance. Autonomous weapons are a case in point: attackers are no longer required to pull the trigger and observe the impact of their actions.

It doesn’t have to be about human life either. Let’s explore some of the less severe applications of AI for malicious purposes: cybercrime.

Social engineering remains one of the most common attack vectors. How often is malware introduced into systems simply because someone clicks on an innocent-looking link?

The fact is, quite a bit of effort is required to entice the victim to click on that link. Historically, crafting a believable phishing email has been labour-intensive: days and sometimes weeks of research, and the right opportunity, were required to carry out such an attack successfully. Things are changing with the advent of AI in cyber.

Analysing large data sets helps attackers prioritise their victims based on online behaviour and estimated wealth. Predictive models can go further, determining a victim’s willingness to pay a ransom from historical data and even adjusting the size of the pay-out to maximise the chances of payment and, therefore, the revenue for cyber criminals.

Imagine that all the data available in the public domain, as well as secrets previously leaked through various data breaches, can now be combined into the ultimate victim profile in a matter of seconds, with no human effort.

Once the victim is selected, AI can be used to create and tailor the emails and sites they are most likely to click on, based on the crunched data. Trust is built by engaging people in longer dialogues over extended periods on social media, which requires no human effort: chatbots are now capable of maintaining such interactions and even impersonating real contacts by mimicking their writing style.

Machine learning used for victim identification and reconnaissance greatly reduces an attacker’s resource investment. Indeed, there is no longer even a need to speak the same language! This inevitably leads to an increase in the scale and frequency of highly targeted spear-phishing attacks.

The sophistication of such attacks can also increase. Exceeding human capabilities of deception, AI can mimic voices thanks to rapid developments in speech synthesis. These systems can create realistic voice recordings based on existing data and elevate social engineering to the next level through impersonation. Combined with the other techniques discussed above, this paints a rather grim picture.

So what do we do?

Let’s outline some potential defence strategies that we should be thinking about already.

Firstly and rather obviously, increasing the use of AI for cyber defence is not such a bad option. A combination of supervised and unsupervised learning approaches is already being employed to predict new threats and malware based on existing patterns.
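To make this concrete, here is a minimal sketch (in Python, using scikit-learn) of how a supervised classifier trained on labelled samples might be paired with an unsupervised anomaly detector that flags files unlike anything seen before. The feature names and data are invented for illustration; this is a sketch of the general approach, not a production detection pipeline.

```python
# Illustrative sketch only: pairing a supervised classifier with an
# unsupervised anomaly detector. Features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-file features: [entropy, imported_api_count, packed_flag]
X_known = rng.random((200, 3))
y_known = rng.integers(0, 2, 200)  # 0 = benign, 1 = malicious (labelled history)

# Supervised model learns from previously labelled samples...
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_known, y_known)

# ...while an unsupervised model flags samples that look unlike anything seen before.
novelty = IsolationForest(random_state=0).fit(X_known)

new_samples = rng.random((5, 3))
verdicts = clf.predict(new_samples)      # known-pattern detection
outliers = novelty.predict(new_samples)  # -1 = anomalous, 1 = familiar

for v, o in zip(verdicts, outliers):
    print("malicious" if v == 1 else "benign",
          "| anomalous" if o == -1 else "| familiar")
```

The point of combining the two is that the supervised model only recognises patterns it has been taught, while the anomaly detector can surface genuinely novel threats for human review.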

Behaviour analytics is another avenue to explore. Machine learning techniques can be used to monitor system and human activity to detect potential malicious deviations.
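A behaviour-analytics baseline can be as simple as a statistical profile of each user’s normal activity. The sketch below assumes two made-up features (login hour and megabytes downloaded per session) and flags events that deviate sharply from that baseline; real systems use far richer telemetry and models, but the principle is the same.

```python
# Minimal behaviour-analytics sketch: thresholds and features are illustrative,
# not taken from any real monitoring product.
import numpy as np

# Hypothetical history for one user: [login hour, MB downloaded per session]
history = np.array([[9, 120], [10, 95], [9, 130], [11, 110], [10, 100]], dtype=float)
mean, std = history.mean(axis=0), history.std(axis=0) + 1e-9

def is_suspicious(event, threshold=3.0):
    """Flag an event whose z-score on any feature exceeds the threshold."""
    z = np.abs((np.asarray(event, dtype=float) - mean) / std)
    return bool((z > threshold).any())

print(is_suspicious([10, 105]))   # normal working pattern -> False
print(is_suspicious([3, 5000]))   # 3am bulk download      -> True
```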

Importantly though, when using AI for defence, we should assume that attackers anticipate it. We must also keep track of AI development and its application in cyber to be able to credibly predict malicious applications.

In order to achieve this, collaboration between industry practitioners, academic researchers and policymakers is essential. Legislators must account for the potential uses of AI and refresh some of the legal definitions of ‘hacking’. Researchers should carefully consider the potential malicious applications of their work. Patching and vulnerability management programs should be given due attention in the corporate world.

Finally, awareness should be raised among users: preventing social engineering attacks, discouraging password re-use and advocating two-factor authentication wherever possible.

References

“The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” 2018.

Cummings, M. L. 2004. “Creating Moral Buffers in Weapon Control Interface Design,” IEEE Technology and Society Magazine (Fall 2004), 29–30.

Seymour, J. and Tully, P. 2016. “Weaponizing Data Science for Social Engineering: Automated E2E Spear Phishing on Twitter,” Black Hat conference.

Allen, G. and Chan, T. 2017. “Artificial Intelligence and National Security,” Harvard Kennedy School Belfer Center for Science and International Affairs.

Yampolskiy, R. 2017. “AI Is the Future of Cybersecurity, for Better and for Worse,” Harvard Business Review, May 8, 2017.


DevOps and Operational Technology: a security perspective

I have worked in the Operational Technology (OT) environment for years, predominantly in major Oil and Gas companies. And yes, we all know that this space can move quite slowly! Companies traditionally manage projects using a waterfall model, with rigid stage gates and extensive planning and design phases followed by lengthy implementation or development.

It has historically been difficult to adopt more agile approaches in such an environment, for various reasons. For example, I’ve developed architecture blueprints to refresh the industrial control assets of a gas and electricity distribution network provider in the UK on a timeline of seven years. It felt very much like a construction project to me, which is quite different from a software development culture that is typically all about experimenting and failing fast. I’m not sure about you, but I would not like our power grid to fail fast in the name of agility. The difference in culture is justified: we need to prioritise safety and rigour when it comes to industrial control systems, because the impact of a potential mistake can cost more than a few days’ worth of development effort: it can be a human life.

The stakes are not as high when we talk about software development. I’ve spent the past several months at one of the biggest dot-coms in Europe, and it was interesting to compare and contrast its agile approach with the more traditional OT space where I’ve spent most of my career. These two worlds could not be more different.

I arrived at a surprising conclusion though: they are both slow when it comes to security, but for different reasons.

Agile, and Scrum in particular, is great on paper, but it’s quite challenging when it comes to security.

Agile works well when small teams are focused on developing products, but I found it quite hard to introduce a security culture in such an environment. Security is often just not a priority.

Teams mostly focus on what they perceive as a business priority. It is standard practice there to define OKRs (Objectives and Key Results), and teams are then measured on how well they achieve them. So if they’ve met 70% of their OKRs, they’ve had a good quarter. Guess what: security always ends up in the bottom 30%, and security-related backlog items get de-prioritised.

DevOps works well for product improvement, but it can be quite bad for security. For instance, when a new API or a new security process is introduced, it has to touch a lot of teams, which can be a stakeholder-management nightmare in such an environment. A security product has to be shoehorned across multiple DevOps teams, each with its own set of OKRs, resulting in a natural resistance to collaborate.

In a way, both OT and DevOps move slowly when it comes to security. But what do you do about it?

The answer might lie in setting the tone from the top and making sure that everyone is responsible for security, which I’ve discussed in a series of articles on security culture on this blog and in my book The Psychology of Information Security.

How about running your security team like a DevOps team? When it comes to Agile, minimising friction for developers is the name of the game: incorporate your security checks into the deployment process, run automated vulnerability scans, implement continuous control monitoring, describe your security controls in a way developers understand (e.g. as user stories), and so on. A minimal sketch of one such pipeline check follows below.
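As an illustration of what a lightweight pipeline gate might look like, the Python sketch below parses a vulnerability scanner’s report and fails the build stage if any high-severity findings are present. The report filename and JSON layout are assumptions made for this example; adapt them to whatever scanner your pipeline actually runs.

```python
# Sketch of a CI gate that could sit in the deployment pipeline.
# Assumed report layout: {"findings": [{"id": ..., "severity": ...}, ...]}
import json
import sys
from pathlib import Path

BLOCKING_SEVERITIES = {"critical", "high"}

def gate(report_path: str = "scan-report.json") -> int:
    findings = json.loads(Path(report_path).read_text()).get("findings", [])
    blockers = [f for f in findings
                if f.get("severity", "").lower() in BLOCKING_SEVERITIES]
    for f in blockers:
        print(f"BLOCKING: {f.get('id', 'unknown')} ({f['severity']})")
    # A non-zero exit code fails the pipeline stage, so issues are fixed before release
    return 1 if blockers else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"))
```

The design choice here is to make security a routine, automated step that developers see on every build, rather than a separate review that arrives late and blocks the release.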

Most importantly, gather and analyse data to improve. Where is security failing? Where is it clashing with the business process? What does the business actually need here? Is security helping or impeding? Answering these questions is the first step to understanding where security can add value to the business regardless of the environment: Agile or OT.


Sharing views on security culture


I was invited to talk about the human aspects of security at the CyberSecurity Talks & Networking event. The venue and format allowed the audience to participate and ask questions, and we had insightful discussions at the end of my talk. It’s always interesting to hear what challenges people face in various organisations and how a few simple improvements can change the security culture for the better.


Cybersecurity Canon: The Psychology of Information Security


My book has been nominated for the Cybersecurity Canon, a list of must-read books for all cybersecurity practitioners.

Review by Guest Contributor Nicola Burr, Cybersecurity Consultant



Presenting at SANS European Security Awareness Summit

It’s been a pleasure delivering a talk on the psychology of information security culture at the SANS European Security Awareness Summit 2016. It was my first time attending and presenting at this event, and I certainly hope it won’t be the last.

The summit has a great community feel to it, and Lance Spitzner did a great job organising and bringing people together. It was an opportunity for me not only to share my knowledge, but also to learn from others during a number of interactive sessions and workshops. The participants were keen to share tips and tricks for improving security awareness in their companies, as well as war stories of what worked and what didn’t.

It was humbling to find out that my book was quite popular in this community and I even managed to sign a couple of copies.

All speakers’ presentation slides (including from past and future events) can be accessed here.


The Psychology of Information Security – Get 10% Off


IT Governance Publishing kindly provided a 10% discount on my book. Simply use voucher code SPY10 on my publisher’s website.

Offer ends 30 November 2016.