Behavioural science in cyber security

Why your staff ignore security policies and what to do about it.               

Dale Carnegie’s 1936 bestselling self-help book How To Win Friends And Influence People is one of those titles that sits unloved and unread on most people’s bookshelves. But dust off its cover and crack open its spine, and you’ll find lessons and anecdotes that are relevant to the challenges associated with shaping people’s behaviour when it comes to cyber security.

In one chapter, Carnegie tells the story of George B. Johnson, from Oklahoma, who worked for a local engineering company. Johnson’s role required him to ensure that other employees abided by the organisation’s health and safety policies. Among other things, he was responsible for making sure other employees wore their hard hats when working on the factory floor.

His strategy was as follows: if he spotted someone not following the company’s policy, he would approach them, admonish them, quote the regulation at them, and insist on compliance. And it worked — albeit briefly. The employee would put on their hard hat, and as soon as Johnson left the room, they would just as quickly remove it. So he tried something different: empathy. Rather than addressing them from a position of authority, Johnson spoke to his colleagues almost as though he was their friend, and expressed a genuine interest in their comfort. He wanted to know whether the hats were uncomfortable to wear, and whether that was why they didn’t wear them on the job.

Instead of reciting the rules chapter and verse, he simply mentioned that it was in employees’ best interests to wear their hard hats, because they were designed to prevent workplace injuries.

This shift in approach bore fruit, and workers felt more inclined to comply with the rules. Moreover, Johnson observed that employees were less resentful of management.

The parallels between cyber security and George B. Johnson’s battle to ensure health-and-safety compliance are immediately obvious. Our jobs require us to adequately address the security risks that threaten the organisations we work for. To be successful at this, it’s important to ensure that everyone appreciates the value of security — not just engineers, developers, security specialists, and other related roles.

This isn’t easy. On the one hand, failing to implement security controls can result in an organisation facing significant losses. On the other, badly implemented security mechanisms can be worse, either obstructing employee productivity or fostering a culture in which security is resented.

To ensure widespread adoption of secure behaviour, security policies and control implementations not only have to accommodate the needs of those who use them, but must also be economically attractive to the organisation. To realise this, there are three factors we need to consider: motivation, design, and culture.

Understanding the motivation

Understanding motivation begins with understanding why people don’t comply with information security policies. Three common reasons include:

  • There is no obvious reason to comply
  • Compliance comes at a steep cost to workers
  • Employees are simply unable to comply

There is no obvious reason to comply

Risk and threat are part of cyber security specialists’ everyday lives, and they have a universal appreciation for what they entail. But regular employees seldom have an accurate concept of what information security actually is, and what it is trying to protect.

Employees are hazy about the rules themselves, and tend to lack a crystallised understanding of what certain security policies forbid and allow, which results in so-called “security myths.” Furthermore, even in the rare cases where employees are aware of a particular security policy and interpret it correctly, the motivation to comply often isn’t there. They may do the right thing, but their heart isn’t really in it.

People seldom feel that their actions have any bearing on the overall information security of an organisation. As the poet Stanisław Jerzy Lec once said, “No snowflake in an avalanche ever feels responsible.” This is troubling because if adhering to a policy involves a certain amount of effort, and there is no perceived immediate threat, non-compliant behaviour can appear to be the more attractive and comfortable option.

Compliance comes at a steep cost to workers

All people within an organisation have their own duties and responsibilities to execute. A marketing director is responsible for PR and communications; a project manager is responsible for ensuring tasks remain on track; a financial analyst helps the organisation decide which stocks and shares to buy. For most of these employees, their main concern — if not their sole concern — is ensuring their jobs get done. Anything secondary, like information security, falls by the wayside, especially if employees perceive it to be arduous or unimportant.

The evidence shows that if security mechanisms create additional work for employees, they will tend to err on the side of non-compliant behaviour, in order to concentrate on executing their primary tasks efficiently.

There is a troubling lack of concern among security managers about the burden security mechanisms impose on employees. Many assume that employees can simply adjust to new or shifting security requirements without much extra effort. This belief is often mistaken: employees regard new security mechanisms as arduous and cumbersome, draining both their time and effort. From their perspective, the reduced risk to the organisation is not a worthwhile trade-off for the disruption to their productivity.

And in extreme cases — for example, when an individual is faced with an impending deadline — employees may see fit to cut corners and fail to comply with established security procedure, even when they are aware of the risks.

An example of this is file sharing. Many organisations enact punishing restrictions on the exchange of digital files, in an effort to protect themselves from data exfiltration or phishing attempts. These restrictions often take the form of strict permissions, storage or transfer limits, or time-consuming protocols. If pressed for time, an employee may resort to an unapproved alternative — like Dropbox, Google Drive, or Box. Shadow IT is a major security concern for enterprises, and is often a consequence of cumbersome security protocols. From the employee’s perspective, the choice is easy to justify: failing to complete their primary tasks carries more immediate consequences for them than the potential, unclear risk associated with security non-compliance.

Employees are simply unable to comply

In rare and extreme cases, compliance — whether enforced or voluntary — is simply not an option for employees, no matter how much time or effort they are willing to commit. The most frequent scenario is that the security protocols imposed do not match their basic work requirements.

An example of this would be an organisation that distributed encrypted USB flash drives with an insufficient amount of storage. Employees who frequently need to transfer large files — such as those working with audio-visual assets — would be forced to rely on unauthorised mechanisms, like online file sharing services, or larger, non-encrypted external hard drives. It is also common to see users copy files onto their laptops from secure locations, either because the company’s remote access doesn’t work well, or because they’ve been allocated an insufficient amount of storage on their network drives.

Password complexity rules often force employees to break established security codes of conduct. When forced to memorise multiple, highly complex passwords, employees will try to find a shortcut by writing them down — either physically or electronically.

In these situations, the employees are cognisant of the fact that they’re breaking the rules, but they justify it by saying their employer has failed to offer them a workable technical implementation. They assume the company would be more comfortable with a failure to adhere to security rules than with a failure to perform their primary duties. This assumption is often reinforced by non-security managerial staff.

The end result is that poorly implemented security protocols create a chasm between the security function and the rest of the organisation, creating a “them-and-us” scenario in which the security team is perceived as out of touch with the needs of the rest of the organisation. Information security — and information security professionals — become resented, and the wider organisation responds to security enforcers with scepticism or derision. These entrenched perspectives can result in resistance to security measures, regardless of how well-designed or seamlessly implemented they are.

How people make decisions

The price of overly complicated security mechanisms is productivity; the tougher compliance is, the more it’ll interfere with the day-to-day running of the organisation. It’s not uncommon to see the business-critical parts of an organisation engaging heavily in non-compliant behaviour, because they value productivity over security and don’t perceive an immediate risk.

And although employees will often make a sincere effort to comply with an organisation’s policies, their predominant concern is getting their work done. When they violate a rule, it’s usually not due to deliberately malicious behaviour, but rather because of poor control implementation that pays scant attention to their needs.

On the other hand, the more employee-centred a security policy is, the better it incentivises employees to comply, and the more it strengthens the overall security culture. This requires empathy, and actually listening to the users downstream. Crucially, it requires remembering that employee behaviour is primarily driven by meeting goals and key performance indicators. This is often in contrast to the security world, which emphasises managing risks and proactively responding to threats that may or may not emerge, and which is often seen by outsiders as abstract and lacking context.

That’s why developing a security programme that works requires an understanding of the human decision-making process.

How individuals make decisions is a subject of interest for psychologists and economists, who have traditionally viewed human behaviour as regular and highly predictable. This framework let researchers build models of social and economic behaviour that worked almost like clockwork, which could be deconstructed to observe how the moving parts fit together.

But people are unique, and therefore complicated. There is no one-size-fits-all paradigm for humanity. People’s behaviour can be irrational, disordered, and prone to spur-of-the-moment thinking, reflecting the dynamic and ever-changing environments they work in. Research in psychology and economics later pivoted to understanding the drivers behind certain actions. This research is relevant to the information security field.

Among the theories pertaining to human behaviour is the theory of rational choice, which explains how people aim to maximise their benefits and minimise their costs. Self-interest is the main motivator, with people making decisions based on personal benefit, as well as the cost of the outcome.

This can also explain how employees make decisions about what institutional information security rules they choose to obey. According to the theory of rational choice, it may be rational for users to fail to adhere to a security policy because the effort vastly outweighs the perceived benefit — in this case, a reduction in risk.
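To make that trade-off concrete, here is a minimal sketch of the calculation as an employee might implicitly perform it. The variable names and figures are illustrative assumptions, not measured values:

```python
# Illustrative only: a crude rational-choice view of one compliance decision,
# seen entirely from the employee's perspective.
effort_cost_minutes = 15                 # extra time the secure route costs today
perceived_breach_probability = 0.001     # how likely a breach feels to the employee
perceived_personal_impact_minutes = 60   # what a breach would cost *them*, as they see it

expected_benefit = perceived_breach_probability * perceived_personal_impact_minutes

# With these perceptions, skipping the control looks "rational".
print(expected_benefit < effort_cost_minutes)  # True
```

The point isn’t the numbers; it’s that employees tend to discount both the probability and the personal impact of a breach, so almost any visible effort outweighs the perceived benefit.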

University students, for example, have been observed to frequently engage in unsafe computer security practices, like sharing credentials, downloading attachments without taking proper precautions, and failing to back up their data. Although the students — digital natives — were familiar with the principles of safe computing behaviour, they continued to exhibit risky practices. Researchers who have looked into this field believe that simple recommendations aren’t enough to ensure compliance; educational institutions may need to impose secure behaviour through more forceful means.

This brings us to the theory of general deterrence, which states that users will fail to comply with the rules if they know that there will be no consequences. In the absence of punishment, users feel free to behave as they see fit.

Two terms vital to understanding this theory are ‘intrinsic motivation’ and ‘extrinsic motivation.’ As the name suggests, intrinsic motivations come from within, and usually lead to actions that are personally rewarding. The main mover here is one’s own desires. Extrinsic motivations, on the other hand, derive from the hope of gaining a reward or avoiding a punishment.

Research into the application of the theory of general deterrence within the context of information security awareness suggests that the perception of consequences is far more effective in deterring unsafe behaviour than actually imposing sanctions. These findings came after examining the behaviour of a sample of 269 employees from eight different companies who had received security training and were aware of the existence of user-monitoring software on their computers.

But there isn’t necessarily a consensus on this. A criticism of the aforementioned theory is that it’s based solely on extrinsic motivations. This overlooks intrinsic motivation, which is a defining and driving facet of the human character. An analysis of a sample of 602 employees showed that approaches which address intrinsic motivations, rather than ones rooted in punishment and reward, lead to a significant increase in compliant employee behaviour. In short, the so-called “carrot and stick” method might not be particularly effective.

The value of intrinsic motivations is supported by cognitive evaluation theory, which can be used to predict the impact that rewards have on intrinsic motivation. If an effort is recognised externally, such as with an award or prize, the individual will be more likely to adhere to the organisation’s security policies.

However, if rewards are seen as a “carrot” to control behaviour, they have a negative impact on intrinsic motivation. This is due to the fact that a recipient’s sense of individual autonomy and self-determination will diminish when they feel as though they’re being controlled.

Cognitive evaluation theory also explains why non-tangible rewards — like praise — have positive impacts on intrinsic motivation. Verbal rewards boost an employee’s sense of self-esteem and self-worth, and reinforce the view that they’re skilled at a particular task and that their performance is well-regarded by their superiors. However, for non-tangible rewards to be effective, they must not appear to be coercive.

Within an information security context, this theory recommends adopting a non-tangible reward system that recognises positive efforts, in order to encourage constructive behaviour around security policy compliance.

Ultimately, the above theories show that in order to effectively protect an institution, security policies shouldn’t merely ensure formal compliance with legal and regulatory requirements, but should also respect the motivations and attitudes of the employees who must live and work under them.

Designing security that works

A fundamental aspect of ensuring compliance is providing employees with the tools and working environments they need, so they don’t feel compelled to use insecure, unauthorised third-party alternatives. For example, an enterprise could issue encrypted USB flash drives and provide a remotely accessible network drive, so employees can save and access their documents as required. That way, employees aren’t tempted to use Dropbox or Google Drive; of course, the sanctioned options must have enough storage capacity for employees to do their work.

Additionally, these network drives can be augmented with auto-archiving systems, allowing administrators to ensure staffers do not travel with highly sensitive documents. If employees must travel with their laptops, their internal storage drives can be encrypted, so that even if a laptop is left in a restaurant or on a train, there is scant possibility that its contents will be accessed by an unauthorised third party.

Other steps could include the use of remote desktop systems, meaning that no files are actually stored on the device, or single sign-on systems, so that employees aren’t forced to remember, or worse, write down, several unique and complex passwords. Ultimately, whatever security steps are taken must align with the needs of employees and the realities of their day-to-day jobs.

People’s resources are limited. This doesn’t just refer to time, but also to energy. Individuals often find decision-making hard when fatigued. This concept was highlighted by a psychological experiment in which two groups of people had to memorise a number: one group a simple two-digit number, the other a longer seven-digit number. The participants were offered a reward for correctly reciting the number, but had to walk to another part of the building to collect it.

On the way, they were intercepted by a second group of researchers who offered them a snack, which could only be collected after the conclusion of the experiment. The participants were offered a choice between a healthy option and chocolate. Those given the easier number tended to err towards the healthy option, while those tasked with remembering the seven-digit number predominantly selected chocolate.

Another prominent study examined the behaviour of judges at different times of the day. It found that in the mornings and after lunch, judges had more energy and were better able to consider the merits of an individual case. This resulted in more grants of parole. Those appearing before a judge late in the day were denied parole more frequently. This is believed to be because the judges simply ran out of mental energy and defaulted to what they perceived to be the safest option: refusal.

So how do these studies apply to an information security context? Those working in the field should reflect on the individual circumstances of those in the organisation. If people are tired or engaged in activities requiring high concentration, their mental energy is depleted, which affects their ability or willingness to maintain compliance. This makes security breaches a real possibility.

But compliance efforts don’t need to contribute to mental depletion. When people perform tasks that work with their mental models (defined as the way they view the world and expect it to work), the activities are less mentally tiring than those that diverge from those models. If people can apply their previous knowledge and expertise to a problem, less energy is required to solve it in a secure manner.

This is exemplified by research into secure file removal, which showed that merely emptying the Recycle Bin is insufficient: files can easily be recovered through trivial forensic means. Some software products, however, exploit mental models from the physical world. One uses a “shredding” analogy to convey that files are being destroyed securely. If you shred a physical document, it is extremely challenging to piece it back together, and that is what is happening on the computer; the analogy echoes a familiar workplace task. This interface design might lighten the cognitive burden on users.
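To make the contrast with merely emptying the Recycle Bin concrete, here is a minimal Python sketch of the “shredding” idea: overwrite a file’s contents before deleting it. This is an illustration of the mental model, not any vendor’s implementation; note that on SSDs and journaling or copy-on-write filesystems, overwriting in place does not guarantee the data is unrecoverable.

```python
import os
import secrets

def shred(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes before removing it (illustrative only)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # replace contents with random data
            f.flush()
            os.fsync(f.fileno())                # push the overwrite to disk
    os.remove(path)

# Example usage (hypothetical file name):
# shred("draft_contract.docx")
```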

Another example of design that builds on existing experience is the desktop metaphor, introduced by researchers at Xerox and popularised in the 1980s, where people were presented with a graphical interface rather than a text-driven command line. Users could manipulate objects much like they would in the real world (i.e. drag and drop, move files to the recycle bin, and organise files in visual folder-based hierarchies). Building on the way people already think makes it significantly easier for individuals to accept new ways of working and new technologies. However, it’s important to remember that cultural differences can make this hard. Not everything is universal. The original Apple Macintosh trash icon, for example, puzzled users in Japan, where metallic bins were unheard of.

Good interface design isn’t just great for users; it makes things easier for those responsible for cyber security. This contradicts the established thinking that security is antithetical to good design. In reality, design and security can coexist by defining constructive and destructive behaviours. Effective design should streamline constructive behaviours, while making damaging ones hard to accomplish. To do this, security has to be a vocal influence in the design process, and not an afterthought.

Designers can involve security specialists in a variety of ways. One way is iterative design, where design is performed in cycles followed by testing, evaluation, and criticism. The other is participatory design, which ensures that all key stakeholders – especially those working in security – are presented with an opportunity to share their perspective.

Of course, this isn’t a panacea. The involvement of security professionals isn’t a cast-iron guarantee that security-related usability problems won’t crop up later. These problems are categorised as ‘wicked’. A wicked problem is one that is arduous, if not entirely impossible, to solve, often due to vague, inaccurate, changing or missing requirements from stakeholders. Wicked problems cannot be solved through traditional means; they require creative and novel thinking, such as the application of design thinking techniques. This includes performing situational analysis, interviewing stakeholders, creating user profiles, examining how others faced with a similar problem solved it, creating prototypes, and mind-mapping.

Design thinking is summed up by four different rules. The first is “the human rule,” which states that all design activity is “ultimately social in nature.” The ambiguity rule states that “design thinkers must preserve ambiguity.” The redesign rule says that “all design is redesign,” while the tangibility rule mandates that “making ideas tangible always facilitates communication”.

Security professionals should learn these rules and use them in order to design security mechanisms that don’t merely work, but are fundamentally usable. To do this, it’s important they escape their bubbles and engage with the people who actually use the mechanisms they build. This can be done by utilising existing solutions and creating prototypes that demonstrate the application of security concepts within a working environment.

The Achilles heel of design thinking is that while it enables the design of fundamentally better controls, it doesn’t highlight why existing ones fail.

When things go awry, we tend to look at the symptoms and not the cause. Taiichi Ohno, the Japanese industrialist who created the Toyota Production System (which inspired Lean Manufacturing), developed a technique known as the “Five Whys” as a systematic problem-solving tool.

One example, given by Ohno in one of his books, shows this technique in action when trying to diagnose a faulty machine:

  1. Why did the machine stop? There was an overload and the fuse blew.
  2. Why was there an overload? The bearing was not sufficiently lubricated.
  3. Why was it not lubricated sufficiently? The lubrication pump was not pumping sufficiently.
  4. Why was it not pumping sufficiently? The shaft of the pump was worn and rattling.
  5. Why was the shaft worn out? There was no strainer attached and metal scrap got in.

Rather than focusing on the first issue, Ohno drilled down through a series of issues which together culminated in a “perfect storm” that resulted in the machine failure. As security professionals, continuing to ask “why” can help us determine why a mechanism failed.

In the example, Ohno pointed out that the root cause was a human failure (namely, a failure to attach a strainer) rather than a technical one. This is something most security professionals can relate to. As Eric Ries said in his 2011 book The Lean Startup, “the root of every seemingly technical problem is actually a human problem”.

Creating a culture of security

Culture is intangible, and often hard to define. Yet it can be the defining factor in whether a security programme fails or succeeds. Once employees’ primary tasks are identified and aligned with a seamless and considerate set of security controls, it’s vital to demonstrate that information security exists for a purpose, and not to needlessly inconvenience them. Therefore it is also vital we understand the root causes of poor security culture.

The first step is to recognise that bad habits and behaviours tend to be contagious. As highlighted by Canadian journalist Malcolm Gladwell in his book The Tipping Point, there are certain conditions that allow some ideas or behaviours to spread virally. Gladwell refers specifically to the broken windows theory to highlight the importance and power of context. This theory was originally applied in law enforcement, and argued that stopping smaller crimes (like vandalism, hence the “broken window” link) is vital in stopping larger crimes (like murder). If a broken window is left unrepaired in a neighbourhood for several days, more vandalism inevitably ensues: it signals that crime effectively goes unpunished, which invites bigger and more harmful crimes.

The broken windows theory is subject to fierce debate. Some argue that it led to a dramatic crime reduction in the 1990s. Others attribute the drop in crime to other factors, like the elimination of leaded petrol. Regardless of which argument is right, it’s worth recognising that the broken windows theory can be applied in an information security context: addressing smaller infractions can reduce the risk of larger, more damaging ones.

Moving forward, it’s worth recognising that people are often unmotivated to behave in a compliant way because they do not see the financial consequences of violating security policy.

In The Honest Truth about Dishonesty, Dan Ariely tries to understand what motivates people to break the rules. Ariely describes a survey of golf players which tries to find the conditions under which they might be tempted to move the ball into a more advantageous position, and how they would go about it. The golfers were presented with three options: using their club, their foot, or picking up the ball with their hands.

All of these are considered cheating, and are major no-nos. However, the survey is presented in a way where one option is psychologically more acceptable than the others. Predictably, the players said that they would move the ball with their club. Second and third, respectively, were moving the ball with their foot and picking it up with their hands. The survey shows that by psychologically distancing themselves from the act of dishonesty – in this case, by using a tool actually used in the game of golf to cheat – the act of dishonesty becomes more acceptable, and people become more likely to behave in such a fashion. It’s worth mentioning that the “distance” in this experiment is merely psychological. Moving the ball with the club is just as wrong as picking it up. The nature of the action isn’t changed.

In a security context, the vast majority of employees are unlikely to steal confidential information or sabotage equipment, much like professional golfers are unlikely to pick up the ball. However, employees might download a peer-to-peer application, like Gnutella, in order to download music to listen to at work. This could expose an organisation to data exfiltration, much like if someone left the office with a flash drive full of documents that they shouldn’t have. The motivation may be different, but the impact is the same.

This can be used to remind employees that their actions have consequences. Breaking security policy doesn’t seem to have a direct financial cost to the company – at least at first – making it easier for employees to rationalise behaving in a non-compliant way. Policy violations, however, can lead to security breaches. Regulation like the GDPR, with fines of up to €20 million or four per cent of a firm’s global annual turnover (whichever is greater), makes this connection clearer and could help employees understand the consequences of acting improperly.
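As a back-of-the-envelope illustration of why that regulatory ceiling concentrates minds, the upper tier of GDPR fines is the greater of the two amounts, so for a large firm the ceiling scales with turnover. The figures below are purely illustrative:

```python
def max_gdpr_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of a top-tier GDPR fine: the greater of EUR 20m or 4% of turnover."""
    return max(20_000_000, 0.04 * global_annual_turnover_eur)

# A firm turning over EUR 2 billion faces a ceiling of EUR 80 million, not EUR 20 million.
print(max_gdpr_fine(2_000_000_000))  # 80000000.0
```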

Another study relates tangentially to the broader discussion of breaking security policies and cheating. Participants were asked to solve 20 simple math problems, and promised 50 cents for each correct answer. Crucially, the researchers made it technically possible to cheat, by allowing participants to check their work against a sheet containing the correct answers. Participants could shred the sheet, leaving no evidence of cheating.

Compared to control conditions, where cheating wasn’t possible, participants with access to the answer sheet reported solving, on average, five more problems.

The researchers then looked at how a peer might influence behaviour in such circumstances. They introduced a confederate who claimed to have answered all the problems correctly in a seemingly impossible amount of time. Since this behaviour went unchallenged, it had a marked effect on the other participants, who reported solving roughly eight more problems than those working under conditions where cheating wasn’t possible.

Much like the broken windows theory, this reinforces the idea that cheating is contagious, and the same can be said of the workplace. If people see others violating security policies, like using unauthorised tools and services to conduct work business, they may be inclined to exhibit the same behaviour. Non-compliance becomes normalised and, above all, socially acceptable. This normalisation is a key reason why poor security behaviour persists.

Fortunately, the inverse is also true. If employees see others acting in a virtuous manner, they’ll be less inclined to break the rules. This is why, when it comes to security campaigns, it’s important that senior leadership set a positive example, and become role models for the rest of the company. If the CEO takes security policy seriously, it’s more likely the rank-and-file foot soldiers of the company will too.

One example of this is given in the book The Power of Habit, where journalist Charles Duhigg tells the story of Paul O’Neill, then CEO of the Aluminum Company of America (Alcoa), who aimed to make his company the safest in the nation to work for. Initially he experienced resistance, as stakeholders were concerned that his top priority wasn’t margins and other finance-related performance indicators. They failed to see the connection between his aim of zero workplace injuries and the company’s financial performance. And yet Alcoa’s profits reached an all-time high within a year of his announcement, and by the time he retired, the company’s annual income was five times what it had been before he arrived. Moreover, it became one of the safest industrial companies in the world.

Duhigg attributes this to the “keystone habit.” O’Neill identified safety as such a habit, and fervently focused on it. He wanted to change the company, but this couldn’t be done by merely telling people to change their behaviour, explaining: “… That’s not how the brain works. So I decided I was going to start by focusing on one thing. If I could start disrupting the habits around one thing, it would spread throughout the entire company.”

In the book, O’Neill discusses an incident in which a worker died trying to fix a piece of equipment in a way that violated established safety procedures and warning signs. The CEO called an emergency meeting to understand the cause of the event, and took personal responsibility for the worker’s death. He also pinpointed several inadequacies in workplace safety education: specifically, the training material didn’t make clear that employees wouldn’t be sanctioned for hardware failure, and that they shouldn’t commence repairs before first consulting a manager.

In the aftermath, Alcoa’s safety policies were updated and employees were encouraged to engage with management in drafting new policies. This engagement led workers to go a step further and suggest improvements to how the business could be run. By talking about safety, the company was able to improve communication and innovation, which led to a marked improvement in the company’s financial performance.

Timothy D. Wilson, Professor of Psychology at the University of Virginia, says that behaviour change precedes changes in sentiment – not the other way around. Those responsible for security should realise that there is no silver bullet: changing culture requires an atmosphere of constant vigilance, where virtuous behaviour is continually reinforced in order to create and sustain positive habits.

The goal isn’t to teach one-off tricks, but rather to create a culture that is understood and accepted by everyone without resistance. To do this, messages need to cater to each type of employee, eschewing the idea that a one-size-fits-all campaign could work. Questions that must be answered include: What are the benefits? Why should I bother? What are the impacts of my actions?

Tone is important. Campaigns must avoid scare tactics, such as threatening employees with punishment in the case of breaches or non-compliance. These can be dismissed as scaremongering. At the same time, they should acknowledge the damage caused by non-compliant employee behaviour and recognise that employee error can result in risk to the organisation. They should acknowledge the aims and values of the user, as well as the values of the organisation, like professionalism and timely delivery of projects. The campaign should recognise that everyone has a role to play.

Above all, a campaign should emphasise the value that information security brings to the business. This reframes the conversation so that security is no longer about imposing limits on user behaviour, and counters the idea that security is a barrier to employees doing their jobs.

Security campaigns targeted at specific groups offer greater flexibility, and allow information security professionals to communicate risk more effectively to more employees, which is crucial for creating behavioural change. When everyone in the organisation is aware of security risks and procedures, the organisation can identify chinks in the communal knowledge and respond by providing further education.

From this point onwards, role-specific education can be offered. So, if an employee has access to a company laptop and external storage drive, they could be offered guidance on keeping company data secure when out of the office. Additionally, employees should have a library of reference materials to consult on procedure, should they need to reinforce their knowledge later on.

Security professionals should understand the importance of the collective in order to build a vibrant and thriving security culture. Above all, they should remember that as described in the broken windows theory, addressing minor infractions can result in better behaviour across the board.

Conclusion

Companies want to have their cake and eat it. On one hand, they want their employees to be productive; that is obvious as productivity is directly linked to the performance of the business. On the other hand, they are wary of facing security breaches, which can result in financial penalties from regulators, costs associated with remediation and restitution, as well as negative publicity.

As we have seen, employees are concerned primarily with doing their day-to-day jobs in a timely and effective manner. Anything else is secondary, and as far as compliance goes, for many employees the ends justify the means. Therefore, it’s vital that productivity and security be reconciled. When companies fail to do so, they effectively force employees’ hands into breaking policy and heighten risk for the organisation.

Employees will only comply with security policy if they feel motivated to do so. They must see a link between compliance and personal benefit. They must be empowered to adhere to security policy. To do this, they have to be given the tools and means to comprehend risks facing the organisation, and to see how their actions play into this. Once they are sufficiently equipped, they must be trusted to act unhindered to make decisions that mitigate risk at the organisational level.

Crucially, it’s important that front-line information security workers shift away from the role of a policeman enforcing policy from the top down through sanctions and finger-wagging. This traditional approach no longer works, especially when you consider that today’s businesses are geographically distributed and often consist of legions of remote workers.

It’s vital that we shift from identikit, one-size-fits-all frameworks. They fail to take advantage of context, both situational and local. Flexibility and adaptability are key mechanisms to use when faced with conflicts between tasks and established security codes of conduct.

Security mechanisms should be shaped around the day-to-day working lives of employees, and not the other way around. The best way to do this is to engage with employees, and to factor in their unique experiences and insights into the design process. The aim should be to correct the misconceptions, misunderstandings, and faulty decision-making processes that result in non-compliant behaviour. To effectively protect your company’s assets from cyber-attacks, focus on the most important asset – your people.

References

Dale Carnegie, How to Win Friends and Influence People. Simon and Schuster, 2010.

Iacovos Kirlappos, Adam Beautement and M. Angela Sasse, “‘Comply or Die’ Is Dead: Long Live Security-Aware Principal Agents”, in Financial Cryptography and Data Security, Springer, 2013, pages 70–82.

Leron Zinatullin, The Psychology of Information Security: Resolving conflicts between security compliance and human behaviour. IT Governance Ltd, 2016

Kregg Aytes and Terry Connolly, “Computer Security and Risky Computing Practices: A Rational Choice Perspective”, Journal of Organizational and End User Computing, 16(2), 2004, 22–40.

John D’Arcy, Anat Hovav and Dennis Galletta, “User Awareness of Security Countermeasures and Its Impact on Information Systems Misuse: A Deterrence Approach”, Information Systems Research, 17(1), 2009, 79–98

Jai-Yeol Son, “Out of Fear or Desire? Toward a Better Understanding of Employees’ Motivation to Follow IS Security Policies”, Information & Management, 48(7), 2011, 296–302.

Baba Shiv and Alexander Fedorikhin, “Heart and Mind in Conflict: The Interplay of Affect and Cognition in Consumer Decision Making”, Journal of Consumer Research, 1999, 278–292

Shai Danziger, Jonathan Levav and Liora Avnaim-Pesso, “Extraneous Factors in Judicial Decisions”, Proceedings of the National Academy of Sciences, 108(17), 2011, 6889–6892

Simson. L. Garfinkel and Abhi Shelat, “Remembrance of Data Passed: A Study of Disk Sanitization Practices”, IEEE Security & Privacy, 1, 2003, 17–27.

John Maeda, The Laws of Simplicity, MIT Press, 2006.

Horst W. J. Rittel and Melvin M. Webber, “Dilemmas in a General Theory of Planning”, Policy Sciences, 4, 1973, 155–169.  

Hasso Plattner, Christoph Meinel and Larry J. Leifer, eds., Design Thinking: Understand–Improve–Apply, Springer Science & Business Media, 2010

Taiichi Ohno, Toyota Production System: Beyond Large-Scale Production, Productivity Press, 1988.

Eric Ries, The Lean Startup, Crown Business, 2011.

Malcolm Gladwell, The Tipping Point: How Little Things Can Make a Big Difference, Little, Brown, 2006

Dan Ariely, The Honest Truth about Dishonesty, Harper, 2013

Francesca Gino, Shahar Ayal and Dan Ariely, “Contagion and Differentiation in Unethical Behavior: The Effect of One Bad Apple on the Barrel”, Psychological Science, 20(3), 2009, pages 393–398

Charles Duhigg, The Power of Habit: Why We Do What We Do and How to Change, Random House, 2013

Timothy Wilson, Strangers to Ourselves, Harvard University Press, 2004, 212


Artificial intelligence and cyber security: attacking and defending

Cyber security is a manpower-constrained market, so the opportunities for AI automation are vast. Frequently, AI is used to make certain defensive aspects of cyber security more wide-reaching and effective: combating spam and detecting malware are prime examples. On the opposite side, there are many incentives to use AI when attempting to attack vulnerable systems belonging to others. These incentives include the speed of attack, low costs, and the difficulty of attracting skilled staff in an already constrained environment.

Current research in the public domain is limited to white-hat hackers employing machine learning to identify vulnerabilities and suggest fixes. At the speed AI is developing, however, it won’t be long before we see attackers using these capabilities on a mass scale, if they don’t already.

How do we know for sure? The fact is, it is quite hard to attribute a botnet or a phishing campaign to AI rather than a human. Industry practitioners, however, believe that we will see an AI-powered cyber-attack within a year: 62% of surveyed Black Hat conference participants seem to be convinced of such a possibility.

Many believe that AI is already being deployed for malicious purposes by highly motivated and sophisticated attackers. This is not at all surprising, given that AI systems make an adversary’s job much easier. Why? Resource efficiency aside, they introduce psychological distance between an attacker and their victim. Indeed, many offensive techniques traditionally involved engaging with others and being present, which in turn limited the attacker’s anonymity. AI increases both the anonymity and the distance. Autonomous weapons are a case in point: attackers are no longer required to pull the trigger and observe the impact of their actions.

It doesn’t have to be about human life either. Let’s explore some of the less severe applications of AI for malicious purposes: cybercrime.

Social engineering remains one of the most common attack vectors. How often is malware introduced into systems because someone just clicks on an innocent-looking link?

The fact is, in order to entice the victim to click on that link, quite a bit of effort is required. Historically it has been labour-intensive to craft a believable phishing email: days and sometimes weeks of research, plus the right opportunity, were required to successfully carry out such an attack. Things are changing with the advent of AI in cyber security.

Analysing large data sets helps attackers prioritise their victims based on online behaviour and estimated wealth. Predictive models can go further, determining a victim’s willingness to pay a ransom based on historical data and even adjusting the size of the pay-out to maximise the chances of payment and therefore the revenue for cyber criminals.

Imagine all the data available in the public domain, as well as secrets previously leaked through various data breaches, combined into the ultimate victim profile in a matter of seconds with no human effort.

Once the victim is selected, AI can be used to create and tailor emails and sites that are most likely to be clicked on, based on the crunched data. Trust is built by engaging people in longer dialogues over extended periods of time on social media, which requires no human effort – chatbots are now capable of maintaining such interactions and even impersonating real contacts by mimicking their writing style.

Machine learning used for victim identification and reconnaissance greatly reduces the attacker’s resource investment. Indeed, there is no longer even a need to speak the same language! This inevitably leads to an increase in the scale and frequency of highly targeted spear-phishing attacks.

The sophistication of such attacks can also increase. Exceeding human capabilities of deception, AI can mimic voices thanks to rapid developments in speech synthesis. These systems can create realistic voice recordings based on existing samples and elevate social engineering to the next level through impersonation. This, combined with the other techniques discussed above, paints a rather grim picture.

So what do we do?

Let’s outline some potential defence strategies that we should be thinking about already.

Firstly and rather obviously, increasing the use of AI for cyber defence is not such a bad option. A combination of supervised and unsupervised learning approaches is already being employed to predict new threats and malware based on existing patterns.

Behaviour analytics is another avenue to explore. Machine learning techniques can be used to monitor system and human activity to detect potential malicious deviations.
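As a minimal sketch of what that could look like, the snippet below trains an unsupervised anomaly detector over simple per-user activity features. The feature set, figures and thresholds are assumptions for illustration, not a production design:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-user, per-day features: logins, MB uploaded,
# files accessed, after-hours sessions.
rng = np.random.default_rng(0)
normal_activity = rng.normal(loc=[20, 50, 120, 1], scale=[5, 15, 30, 1], size=(500, 4))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)  # learn what "normal" activity looks like

# A suspicious day: few logins, a huge upload, mass file access, many late sessions.
suspicious_day = np.array([[3, 5000, 900, 12]])
print(model.predict(suspicious_day))  # [-1] flags an anomaly worth investigating
```

In practice the interesting work lies in choosing features, baselining per role or team, and triaging alerts, but the principle is the same: flag deviations from learned behaviour rather than relying on static rules.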

Importantly, though, when using AI for defence, we should assume that attackers anticipate it. We must also keep track of AI development and its application in cyber security to be able to credibly predict malicious applications.

In order to achieve this, a collaboration between industry practitioners, academic researchers and policymakers is essential. Legislators must account for potential use of AI and refresh some of the definitions of ‘hacking’. Researchers should carefully consider malicious application of their work. Patching and vulnerability management programs should be given due attention in the corporate world.

Finally, awareness should be raised among users on preventing social engineering attacks, discouraging password re-use and advocating for two-factor authentication where possible.

References

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation 2018

Cummings, M. L. 2004. “Creating Moral Buffers in Weapon Control Interface Design.” IEEE Technology and Society Magazine (Fall 2004), 29–30.

Seymour, J. and Tully, P. 2016. “Weaponizing data science for social engineering: Automated E2E spear phishing on Twitter,” Black Hat conference

Allen, G. and Chan, T. 2017. “Artificial Intelligence and National Security,” Harvard Kennedy School Belfer Center for Science and International Affairs.

Yampolskiy, R. 2017. “AI Is the Future of Cybersecurity, for Better and for Worse,” Harvard Business Review, May 8, 2017.


DevOps and Operational Technology: a security perspective

I have worked in the Operational Technology (OT) environment for years, predominantly in major oil and gas companies. And yes, we all know that this space can move quite slowly! Companies traditionally employ a waterfall model when managing projects, with rigid stage gates and extensive planning and design phases followed by lengthy implementation or development.

It has historically been difficult to adopt more agile approaches in such an environment, for various reasons. For example, I’ve developed architecture blueprints to refresh industrial control assets for a gas and electricity distribution network provider in the UK on a timeline of seven years. It felt very much like a construction project to me, which is quite different from a software development culture that is typically all about experimenting and failing fast. I’m not sure about you, but I would not like our power grid to fail fast in the name of agility. The difference in culture is justified: we need to prioritise safety and rigour when it comes to industrial control systems, as the impact of a potential mistake can cost more than a few days’ worth of development effort – it can be human life.

The stakes are not as high when we talk about software development. I’ve spent the past several months at one of the biggest dot-coms in Europe, and it was interesting to compare and contrast its agile approach with the more traditional OT space I’ve spent most of my career in. These two worlds couldn’t be more different.

I arrived at a surprising conclusion, though: they are both slow when it comes to security, but for different reasons.

Agile, and Scrum in particular, is great on paper but it’s quite challenging when it comes to security.

Agile works well when small teams are focused on developing products, but I found it quite hard to introduce a security culture in such an environment. Security is often just not a priority for them.

Teams mostly focus on what they perceive as a business priority. It is standard practice there to define OKRs – Objectives and Key Results. The teams are then measured on how well they achieved them; so if they’ve met 70% of their OKRs, they had a good quarter. Guess what – security always ends up in the bottom 30%, and security-related backlog items get de-prioritised.

DevOps works well for product improvement, but it can be quite bad for security. For instance, when a new API or a new security process is introduced, it has to touch a lot of teams, which can be a stakeholder management nightmare in such an environment. A security product has to be shoehorned across multiple DevOps teams, where every team has its own set of OKRs, resulting in natural resistance to collaboration.

In a way, both OT and DevOps move slowly when it comes to security. But what do you do about it?

The answer might lie in setting the tone from the top and making sure that everyone is responsible for security, which I’ve discussed in a series of articles on security culture on this blog and in my book The Psychology of Information Security.

How about running your security team like a DevOps team? When it comes to Agile, minimising the friction for developers is the name of the game: incorporate your security checks into the deployment process, run automated vulnerability scans, implement continuous control monitoring, describe your security controls in a way developers understand (e.g. as user stories) and so on.
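As a small, hedged example of what incorporating security checks into the deployment process might look like, here is a sketch of a pipeline gate that runs a dependency vulnerability scan and fails the build on findings. It assumes the pip-audit CLI is installed and exits with a non-zero status when it finds known-vulnerable packages; treat the tool choice and its behaviour as assumptions rather than a prescribed setup:

```python
import subprocess
import sys

def dependency_scan_gate() -> int:
    """Run a dependency vulnerability scan; a non-zero return fails the pipeline."""
    result = subprocess.run(["pip-audit"], capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0:
        print("Security gate: vulnerable dependencies found, failing the build.", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(dependency_scan_gate())
```

Wired into CI, a check like this gives developers fast, familiar feedback (a failed build) rather than a late-stage security review.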

Most importantly, gather and analyse data to improve. Where is security failing? Where is it clashing with the business process? What does the business actually need here? Is security helping or impeding? Answering these questions is the first step to understanding where security can add value to the business regardless of the environment: Agile or OT.


Sharing views on security culture


I’ve been invited to talk about the human aspects of security at the CyberSecurity Talks & Networking event. The venue and the format allowed the audience to participate and ask questions, and we had insightful discussions at the end of my talk. It’s always interesting to hear what challenges people face in various organisations, and how a few simple improvements can change the security culture for the better.


Cybersecurity Canon: The Psychology of Information Security


My book has been nominated for the Cybersecurity Canon, a list of must-read books for all cybersecurity practitioners.

Review by Guest Contributor Nicola Burr, Cybersecurity Consultant



Presenting at SANS European Security Awareness Summit

It’s been a pleasure delivering a talk on the psychology of information security culture at the SANS European Security Awareness Summit 2016. It was my first time attending and presenting at this event, and I certainly hope it won’t be the last.

The summit has a great community feel to it, and Lance Spitzner did a great job organising it and bringing people together. It was an opportunity for me not only to share my knowledge, but also to learn from others during a number of interactive sessions and workshops. The participants were keen to share tips and tricks for improving security awareness in their companies, as well as war stories of what worked and what didn’t.

It was humbling to find out that my book was quite popular in this community and I even managed to sign a couple of copies.

All speakers’ presentation slides (including from past and future events) can be accessed here.


The Psychology of Information Security – Get 10% Off


IT Governance Publishing kindly provided a 10% discount on my book. Simply use voucher code SPY10 on my publisher’s website.

Offer ends 30 November 2016.