Behavioural science in cyber security

Why your staff ignore security policies and what to do about it.               

Dale Carnegie’s 1936 bestselling self-help book How To Win Friends And Influence People is one of those titles that sits unloved and unread on most people’s bookshelves. But dust off its cover and crack open its spine, and you’ll find lessons and anecdotes that are relevant to the challenges associated with shaping people’s behaviour when it comes to cyber security.

In one chapter, Carnegie tells the story of George B. Johnson, from Oklahoma, who worked for a local engineering company. Johnson’s role required him to ensure that other employees abided by the organisation’s health and safety policies. Among other things, he was responsible for making sure workers wore their hard hats on the factory floor.

His strategy was as follows: if he spotted someone not following the company’s policy, he would approach them, admonish them, quote the regulation at them, and insist on compliance. And it worked — albeit briefly. The employee would put on their hard hat, and as soon as Johnson left the room, they would just as quickly remove it. So he tried something different: empathy. Rather than addressing them from a position of authority, Johnson spoke to his colleagues almost as though he was their friend, and expressed a genuine interest in their comfort. He wanted to know whether the hats were uncomfortable to wear, and whether that was why people didn’t wear them on the job.

Instead of reciting the rules chapter and verse, he simply mentioned that it was in the employees’ best interest to wear their helmets, because they were designed to prevent workplace injuries.

This shift in approach bore fruit, and workers felt more inclined to comply with the rules. Moreover, Johnson observed that employees were less resentful of management.

The parallels between cyber security and George B. Johnson’s battle to ensure health-and-safety compliance are immediately obvious. Our jobs require us to adequately address the security risks that threaten the organisations we work for. To be successful at this, it’s important to ensure that everyone appreciates the value of security — not just engineers, developers, security specialists, and other related roles.

This isn’t easy. On one hand, failing to implement security controls can result in an organisation facing significant losses. On the other, badly implemented security mechanisms can be worse: they can obstruct employee productivity, or foster a culture where security is resented.

To ensure widespread adoption of secure behaviour, security policy and control implementations not only have to accommodate the needs of those that use them, but they also must be economically attractive to the organisation. To realise this, there are three factors we need to consider: motivation, design, and culture.

Understanding the motivation

Understanding motivation begins with understanding why people don’t comply with information security policies. Three common reasons include:

  • There is no obvious reason to comply
  • Compliance comes at a steep cost to workers
  • Employees are simply unable to comply

There is no obvious reason to comply

Risk and threat are part of cyber security specialists’ everyday lives, and they have a universal appreciation for what they entail. But regular employees seldom have an accurate concept of what information security actually is, and what it is trying to protect.

Employees are hazy about the rules themselves, and tend to lack a crystallised understanding of what certain security policies forbid and allow, which results in so-called “security myths.” Furthermore, even in the rare cases where employees are aware of a particular security policy and interpret it correctly, the motivation to comply isn’t there. They’ll do the right thing, but their heart isn’t really in it.

People seldom feel that their actions have any bearing on the overall information security of an organisation. As the poet Stanisław Jerzy Lec once said, “No snowflake in an avalanche ever feels responsible.” This is troubling because if adhering to a policy involves a certain amount of effort, and there is no perceived immediate threat, non-compliant behaviour can appear to be the more attractive and comfortable option.

Compliance comes at a steep cost to workers

All people within an organisation have their own duties and responsibilities to execute. A marketing director is responsible for PR and communications; a project manager is responsible for ensuring tasks remain on track; a financial analyst helps an organisation decide which stocks and shares to buy. For most of these employees, their main concern — if not their sole concern — is ensuring their jobs get done. Anything secondary, like information security, falls by the wayside, especially if employees perceive it to be arduous or unimportant.

The evidence shows that if security mechanisms create additional work for employees, they will tend to err on the side of non-compliant behaviour, in order to concentrate on executing their primary tasks efficiently.

There is a troubling lack of concern among security managers about the burden security mechanisms impose on employees. Many assume that employees can simply adjust to shifting security requirements without much extra effort. This belief is often mistaken: employees regard new security mechanisms as arduous and cumbersome, draining both their time and effort. From their perspective, the reduced risk to the organisation is not a worthwhile trade-off for the disruption to their productivity.

And in extreme cases, such as when an individual faces an impending deadline, employees may see fit to cut corners and ignore established security procedures, even when they are aware of the risks.

An example of this is file sharing. Many organisations enact punishing restrictions on the exchange of digital files, in an effort to protect the organisation from data exfiltration or phishing attempts. These often take the form of strict permissions, storage or transfer limits, or time-consuming protocols. If pressed for time, an employee may resort to an unapproved alternative — like Dropbox, Google Drive, or Box. Shadow IT is a major security concern for enterprises, and is often a consequence of cumbersome security protocols. And from the employee’s perspective, the behaviour is easy to justify: failing to complete their primary tasks holds more immediate consequences for them than the potential and unclear risk associated with security non-compliance.

Employees are simply unable to comply

In rare and extreme cases, compliance — whether enforced or voluntary — is simply not an option for employees, no matter how much time or effort they are willing to commit. The most frequent scenario is that the security protocols imposed do not match their basic work requirements.

An example of this would be an organisation that distributed encrypted USB flash drives with an insufficient amount of storage. Employees who frequently need to transfer large files — such as those working with audio-visual assets — would be forced to rely on unauthorised mechanisms, like online file sharing services, or larger, non-encrypted external hard drives. It is also common to see users copy files onto their laptops from secure locations, either because the company’s remote access doesn’t work well, or because they’ve been allocated an insufficient amount of storage on their network drives.

Password complexity rules often force employees to break established security codes of conduct. When forced to memorise multiple, highly complex passwords, employees look for a shortcut, writing them down either physically or electronically.
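One reason complexity rules backfire is that complexity and memorability pull in opposite directions while doing little for actual strength. As a rough, illustrative sketch (the pool sizes are assumptions: roughly 94 printable ASCII characters for a “complex” password, and a 7,776-word Diceware-style list for a passphrase), a four-word passphrase carries about as much entropy as an eight-character complex password while being far easier to recall:

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Entropy in bits of a secret drawn uniformly at random:
    `length` symbols, each chosen from `pool_size` possibilities."""
    return length * math.log2(pool_size)

# An 8-character password over ~94 printable ASCII symbols: hard to memorise.
complex_pw = entropy_bits(94, 8)

# A 4-word passphrase from a 7,776-word Diceware-style list: easy to recall.
passphrase = entropy_bits(7776, 4)

print(f"8-char complex password: {complex_pw:.1f} bits")
print(f"4-word passphrase:       {passphrase:.1f} bits")
```

Both land at roughly 51–52 bits, which is why usability-minded guidance tends to favour length over forced character-class complexity.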

In these situations, the employees are cognisant of the fact that they’re breaking the rules, but they justify it by saying their employer failed to offer them a workable technical implementation. They assume the company would be more comfortable with a failure to adhere to security rules than with a failure to perform their primary duties. This assumption is often reinforced by non-security managerial staff.

The end result is that poorly implemented security protocols create a chasm between the security function and the rest of the organisation, creating a “them-and-us” scenario in which security staff are perceived as “out of touch” with the needs of everyone else. Information security — and information security professionals — become resented, and the wider organisation responds to security enforcers with scepticism or derision. These entrenched perceptions can result in resistance to security measures, regardless of how well-designed or seamlessly implemented they are.

How people make decisions

The price of overly complicated security mechanisms is productivity; the tougher compliance is, the more it’ll interfere with the day-to-day running of the organisation. It’s not uncommon to see the business-critical parts of an organisation engaging heavily in non-compliant behaviour, because they value productivity over security and don’t perceive an immediate risk.

And although employees will often make a sincere effort to comply with an organisation’s policies, their predominant concern is getting their work done. When they violate a rule, it’s usually not due to deliberately malicious behaviour, but rather because of poor control implementation that pays scant attention to their needs.

On the other hand, the more employee-centred a security policy is, the more it incentivises employees to comply, and the more it strengthens the overall security culture. This requires empathy, and actually listening to the users downstream. Crucially, it requires remembering that employee behaviour is primarily driven by meeting goals and key performance indicators. This is often in contrast to the security world, which emphasises managing risks and proactively responding to threats that may or may not emerge, and is often seen by outsiders as abstract and lacking context.

That’s why developing a security programme that works requires an understanding of the human decision-making process.

How individuals make decisions is a subject of interest for psychologists and economists, who have traditionally viewed human behaviour as regular and highly predictable. This framework let researchers build models of social and economic behaviour that worked almost like clockwork: behaviour could be deconstructed to observe how the moving parts fit together.

But people are unique, and therefore complicated. There is no one-size-fits-all paradigm for humanity. Human behaviour can be irrational, disordered, and prone to spur-of-the-moment thinking, reflecting a dynamic and ever-changing working environment. Research in psychology and economics later pivoted to understanding the drivers behind certain actions. This research is relevant to the information security field.

Among the theories pertaining to human behaviour is the theory of rational choice, which explains how people aim to maximise their benefits and minimise their costs. Self-interest is the main motivator, with people making decisions based on personal benefit, as well as the cost of the outcome.

This can also explain how employees decide which institutional information security rules they will obey. According to the theory of rational choice, it may be rational for users not to adhere to a security policy, because the effort vastly outweighs the perceived benefit — in this case, a reduction in risk.
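This trade-off can be expressed as a toy model. The sketch below is purely illustrative — the function name and the arbitrary cost/benefit units are assumptions of mine, not part of any formal rational-choice formulation — but it captures the core claim: an employee complies only when the perceived benefit outweighs the effort.

```python
def will_comply(effort_cost: float, risk_reduction: float,
                perceived_likelihood: float) -> bool:
    """Toy rational-choice model: comply only when the perceived benefit
    (risk reduction weighted by how likely the employee believes an
    incident to be) outweighs the effort that compliance costs them.
    All units are arbitrary and purely illustrative."""
    perceived_benefit = risk_reduction * perceived_likelihood
    return perceived_benefit > effort_cost

# A cumbersome control (high effort) against a threat the employee
# considers remote (low perceived likelihood) loses the trade-off:
print(will_comply(effort_cost=5.0, risk_reduction=100.0,
                  perceived_likelihood=0.01))   # benefit 1.0 < cost 5.0
```

The lever this suggests is twofold: lower the effort cost of the control, or raise the perceived likelihood and relevance of the threat.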

University students, for example, have been observed to frequently engage in unsafe computer security practices, like sharing credentials, downloading attachments without taking proper precautions, and failing to back up their data. Although the students, as digital natives, were familiar with the principles of safe computing, they continued to exhibit risky practices. Researchers in this field believe that simple recommendations aren’t enough to ensure compliance; educational institutions may need to impose secure behaviour through more forceful means.

This brings us to the theory of general deterrence, which states that users will fail to comply with the rules if they know there will be no consequences. In the absence of punishment, users feel free to behave as they see fit.

Two terms vital to understanding this theory are ‘intrinsic motivation’ and ‘extrinsic motivation.’ As the name suggests, intrinsic motivations come from within, and usually lead to actions that are personally rewarding. The main mover here is one’s own desires. Extrinsic motivations, on the other hand, derive from the hope of gaining a reward or avoiding a punishment.

Research into the application of the theory of general deterrence within the context of information security awareness suggests that the perception of consequences is far more effective in deterring unsafe behaviour than actually imposing sanctions. These findings came after examining the behaviour of a sample of 269 employees from eight different companies who had received security training and were aware of the existence of user-monitoring software on their computers.

But there isn’t necessarily a consensus on this. A criticism of the aforementioned theory is that it’s based solely on extrinsic motivations, neglecting intrinsic motivation, which is a defining and driving facet of human character. An analysis of a sample of 602 employees showed that approaches addressing intrinsic motivations led to a significantly greater increase in compliant behaviour than approaches rooted in punishment and reward. In short, the so-called “carrot and stick” method might not be particularly effective.

The value of intrinsic motivations is supported by cognitive evaluation theory, which predicts the impact rewards have on intrinsic motivation. If an effort is recognised externally, such as with an award or prize, the individual will be more likely to adhere to the organisation’s security policies.

However, if rewards are seen as a “carrot” to control behaviour, they have a negative impact on intrinsic motivation. This is due to the fact that a recipient’s sense of individual autonomy and self-determination will diminish when they feel as though they’re being controlled.

The cognitive evaluation theory also explains why non-tangible rewards — like praise — have positive impacts on intrinsic motivation. Verbal rewards boost an employee’s sense of self-esteem and self-worth, and reinforce the view that they’re skilled at a particular task and that their performance is well-regarded by their superiors. However, for non-tangible rewards to be effective, they must not appear to be coercive.

Within an information security context, this theory recommends adopting a positive, non-tangible reward system that recognises positive efforts, in order to encourage constructive behaviour around security policy compliance.

And ultimately, the above theories show that to effectively protect an institution, security policies shouldn’t merely ensure formal compliance with legal and regulatory requirements, but should also respect the motivations and attitudes of the employees who must live and work under them.

Designing security that works

A fundamental aspect of ensuring compliance is providing employees with the tools and working environments they need, so they don’t feel compelled to use insecure, unauthorised third-party alternatives. For example, an enterprise could issue encrypted USB flash drives and provide a remotely accessible network drive, so employees can save and access their documents as required. That way, employees aren’t tempted to use Dropbox or Google Drive — provided the sanctioned options have enough storage capacity for them to do their work.

Additionally, these network drives can be augmented with auto-archiving systems, allowing administrators to ensure staffers do not travel with highly sensitive documents. If employees must travel with their laptops, their internal storage drives can be encrypted, so that even if they leave them in a restaurant or on a train, there is scant possibility that the contents will be accessed by an unauthorised third party.

Other steps could include the use of remote desktop systems, meaning that no files are actually stored on the device, or single sign-on systems, so that employees aren’t forced to remember — or worse, write down — several unique and complex passwords. Ultimately, whatever security steps are taken must align with the needs of employees and the realities of their day-to-day jobs.

People’s resources are limited. This doesn’t just refer to time, but also to energy: individuals often find decision-making hard when fatigued. This was highlighted by a psychological experiment in which two sets of people had to memorise a number — one group a simple two-digit number, the other a longer seven-digit number. Participants were offered a reward for correctly reciting the number, but had to walk to another part of the building to collect it.

On the way, they were intercepted by a second pair of researchers who offered them a snack, which could only be collected after the conclusion of the experiment. The participants were offered a choice between a healthy option and chocolate. Those presented with the easier number tended to err towards the healthy option, while those tasked with remembering the seven-digit number predominantly selected chocolate.

Another prominent study examines the behaviour of judges during different times of the day. It found that in the mornings and after lunch, judges had more energy, and were better able to consider the merits of an individual case. This resulted in more grants of parole. Those seen before a judge in the evenings were denied parole more frequently. This is believed to be because they simply ran out of mental energy, and defaulted to what they perceived to be the safest option: refusal.

So how do these studies apply to an information security context? Those working in the field should reflect on the individual circumstances of people in the organisation. When people are tired, or engaged in activities requiring high concentration, their ability or willingness to maintain compliance suffers. This makes security breaches a real possibility.

But compliance efforts don’t need to contribute to mental depletion. When people perform tasks that work with their mental models (the way they view the world and expect it to work), the activities are less mentally tiring than those that diverge from them. If people can apply previous knowledge and expertise to a problem, less energy is required to solve it in a secure manner.

This is exemplified by research into secure file removal, which showed that merely emptying the Recycle Bin is insufficient: files can easily be recovered through trivial forensic means. There are, however, software products that exploit mental models from the physical world. One uses a “shredding” analogy to signal that files are being destroyed securely. If you shred a physical file, it is extremely challenging to piece it back together, and the analogy — echoing a familiar workplace task — conveys that the same is happening on the computer. This interface design can lighten the cognitive burden on users.

Another example of design that builds on existing experience is the desktop metaphor, introduced by researchers at Xerox in the 1980s, where people were presented with a graphical interface rather than a text-driven command line. Users could manipulate objects much as they would in the real world (drag and drop, move files to the recycle bin, organise files in visual folder-based hierarchies). Building on the way people think makes it significantly easier for individuals to accept new ways of working and new technologies. However, it’s important to remember that cultural differences can make this hard. Not everything is universal. The original Apple Macintosh trash icon, for example, puzzled users in Japan, where metallic bins were unheard of.

Good interface design isn’t just great for users; it makes things easier for those responsible for cyber security. This contradicts the established thinking that security is antithetical to good design. In reality, design and security can coexist by defining constructive and destructive behaviours. Effective design should streamline constructive behaviours, while making damaging ones hard to accomplish. To do this, security has to be a vocal influence in the design process, and not an afterthought.

Designers can involve security specialists in a variety of ways. One way is iterative design, where design is performed in cycles followed by testing, evaluation, and criticism. The other is participatory design, which ensures that all key stakeholders – especially those working in security – are presented with an opportunity to share their perspective.

Of course, this isn’t a panacea. The involvement of security professionals isn’t a cast-iron guarantee that security-based usability problems won’t crop up later. These problems are categorised as ‘wicked’. A wicked problem is one that is arduous, if not entirely impossible, to solve, often due to vague, inaccurate, changing, or missing requirements from stakeholders. Wicked problems cannot be solved through traditional means; they require creative and novel thinking, such as the application of design thinking techniques. This includes performing situational analysis, interviewing stakeholders, creating user profiles, examining how others faced with a similar problem solved it, creating prototypes, and mind-mapping.

Design thinking is summed up by four different rules. The first is “the human rule,” which states that all design activity is “ultimately social in nature.” The ambiguity rule states that “design thinkers must preserve ambiguity.” The redesign rule says that “all design is redesign,” while the tangibility rule mandates that “making ideas tangible always facilitates communication”.

Security professionals should learn these rules and use them in order to design security mechanisms that don’t merely work, but are fundamentally usable. To do this, it’s important they escape their bubbles, and engage with those who actually use them. This can be done by utilising existing solutions and creating prototypes that can demonstrate the application of security concepts within a working environment.

The Achilles heel of design thinking is that while it enables the design of fundamentally better controls, it doesn’t highlight why existing ones fail.

When things go awry, we tend to look at the symptoms and not the cause. Taiichi Ohno, the Japanese industrialist who created the Toyota Production System (which inspired Lean Manufacturing), developed a technique known as the “Five Whys” as a systematic problem-solving tool.

One example, given by Ohno in one of his books, shows this technique in action when trying to diagnose a faulty machine:

  1. Why did the machine stop? There was an overload and the fuse blew.
  2. Why was there an overload? The bearing was not sufficiently lubricated.
  3. Why was it not lubricated sufficiently? The lubrication pump was not pumping sufficiently.
  4. Why was it not pumping sufficiently? The shaft of the pump was worn and rattling.
  5. Why was the shaft worn out? There was no strainer attached and metal scrap got in.
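The chain above can be modelled as a simple list of question-answer pairs, where each answer prompts the next “why” and the root cause is the deepest answer. A minimal, illustrative sketch (the `root_cause` helper is mine, not part of any formal method):

```python
# Each entry pairs a "why?" question with the answer uncovered at that step.
five_whys = [
    ("Why did the machine stop?", "An overload blew the fuse."),
    ("Why was there an overload?", "The bearing was not sufficiently lubricated."),
    ("Why was it not lubricated?", "The lubrication pump was not pumping sufficiently."),
    ("Why was it not pumping sufficiently?", "The pump shaft was worn and rattling."),
    ("Why was the shaft worn?", "No strainer was attached, so metal scrap got in."),
]

def root_cause(chain):
    """The root cause is the answer at the deepest 'why' in the chain."""
    return chain[-1][1]

print(root_cause(five_whys))
```

Stopping after the first pair would have meant replacing the fuse and waiting for the next failure; only the last answer points at a durable fix.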

Rather than focusing on the first issue, Ohno drilled down through a series of issues which together culminated in a “perfect storm”, resulting in the machine failure. As security professionals, we can keep asking “why” in the same way to determine why a mechanism failed.

In the example, Ohno pointed out that the root cause was human (a failure to attach a strainer) rather than technical. This is something most security professionals can relate to. As Eric Ries said in his 2011 book The Lean Startup, “the root of every seemingly technical problem is actually a human problem”.

Creating a culture of security

Culture is intangible, and often hard to define. Yet it can be the deciding factor in whether a security programme fails or succeeds. Once employees’ primary tasks are identified and aligned with a seamless and considerate set of security controls, it’s vital to demonstrate that information security exists for a purpose, not to needlessly inconvenience them. It is therefore also vital that we understand the root causes of poor security culture.

The first step is to recognise that bad habits and behaviours tend to be contagious. As highlighted by Canadian journalist Malcolm Gladwell in his book The Tipping Point, certain conditions allow some ideas or behaviours to spread virally. Gladwell refers specifically to the broken window theory to highlight the importance and power of context. The theory was originally used in law enforcement, and argued that stopping smaller crimes (like vandalism, hence the “broken window” link) is vital in stopping larger crimes (like murder). If a broken window is left unrepaired in a neighbourhood for several days, more vandalism inevitably ensues: the window signals that crime effectively goes unpunished, inviting bigger and more harmful crimes.

The broken window theory is the subject of fierce debate. Some argue that it led to a dramatic crime reduction in the 1990s. Others attribute the drop in crime to other factors, like the elimination of leaded petrol. Whichever argument is right, the theory can be applied in an information security context: addressing smaller infractions can reduce the risk of larger, more damaging ones.

Moving forward, it’s worth recognising that people often feel no impetus to behave in a compliant way because they do not see the financial consequences of violating policy.

In The Honest Truth About Dishonesty, Dan Ariely tries to understand what motivates people to break the rules. Ariely describes a survey of golf players that tries to establish the conditions under which they might be tempted to move the ball into a more advantageous position, and how they would go about it. The golfers were presented with three options: using their club, their foot, or picking up the ball with their hands.

All of these are considered cheating, and are major no-nos. However, the survey is framed so that one option is psychologically more acceptable than the others. Predictably, the players said that they would move the ball with their club. Second and third, respectively, were moving the ball with their foot and picking it up with their hand. The survey shows that by psychologically distancing themselves from the act of dishonesty – in this case, by using a tool actually used in the game of golf to cheat – the act becomes more acceptable, and people become more likely to behave in such a fashion. It’s worth mentioning that the “distance” in this experiment is merely psychological. Moving the ball with the club is just as wrong as picking it up. The nature of the action isn’t changed.

In a security context, the vast majority of employees are unlikely to steal confidential information or sabotage equipment, much like professional golfers are unlikely to pick up the ball. However, employees might download a peer-to-peer application, like Gnutella, in order to download music to listen to at work. This could expose an organisation to data exfiltration, much like if someone left the office with a flash drive full of documents that they shouldn’t have. The motivation may be different, but the impact is the same.

This can be used to remind employees that their actions have consequences. Breaking security policy doesn’t seem to have a direct financial cost to the company — at least at first — making it easier for employees to rationalise non-compliant behaviour. Policy violations, however, can lead to security breaches. Regulation like the GDPR, with fines of up to €20 million or four per cent of a firm’s global annual turnover, whichever is higher, makes this connection clearer and could help employees understand the consequences of acting improperly.
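Article 83(5) of the GDPR sets the higher-tier cap at whichever is greater: €20 million or four per cent of total worldwide annual turnover. A quick calculation (the turnover figures below are hypothetical, for illustration only) shows how the cap scales with firm size:

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Higher-tier GDPR cap under Art. 83(5): the greater of
    EUR 20 million or 4% of total worldwide annual turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# A firm turning over EUR 2 billion faces a cap of EUR 80 million;
# a EUR 100 million firm still faces the flat EUR 20 million floor.
print(f"{gdpr_max_fine(2_000_000_000):,.0f}")
print(f"{gdpr_max_fine(100_000_000):,.0f}")
```

Framed this way, “four per cent of turnover” stops being an abstraction and becomes a concrete number an employee can weigh against the convenience of a shortcut.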

Another study relates tangentially to the broader discussion of breaking security policies and cheating. Participants were asked to solve 20 simple maths problems, and were promised 50 cents for each correct answer. Crucially, the researchers made it possible to cheat: participants marked their own work against a sheet of correct answers, then shredded their worksheet, leaving no evidence of cheating.

Compared with control conditions, where cheating wasn’t possible, participants with access to the answer sheet reported on average five more correct answers.

The researchers also looked at how a peer might influence behaviour in such circumstances. They introduced an individual who answered all the problems correctly in a seemingly impossible amount of time. Since this behaviour went unchallenged, it had a marked effect on the other participants, who reported roughly eight more correct answers than those working under conditions where cheating wasn’t possible.

Much like the broken window theory, this reinforces the idea that cheating is contagious, and the same can be said of the workplace. If people see others violating security policies, like using unauthorised tools and services to conduct company business, they may be inclined to exhibit the same behaviour. Non-compliance becomes normalised and, above all, socially acceptable. This normalisation is a key reason why poor security behaviour persists.

Fortunately, the inverse is also true. If employees see others acting in a virtuous manner, they’ll be less inclined to break the rules. This is why, when it comes to security campaigns, it’s important that senior leadership set a positive example and become role models for the rest of the company. If the CEO takes security policy seriously, it’s more likely the rank and file will too.

One example of this is given in the book The Power of Habit, where journalist Charles Duhigg discusses the story of Paul O’Neill, then CEO of the Aluminium Company of America (Alcoa), who aimed to make his company the safest in the nation to work for. Initially he met resistance, as stakeholders were concerned that his primary priority wasn’t margins and other finance-related performance indicators. They failed to see the connection between his aim of zero workplace injuries and the company’s financial performance. And yet Alcoa’s profits reached an all-time high within a year of his announcement, and by the time he retired, the company’s annual income was five times what it was before he arrived. Moreover, it had become one of the safest industrial companies in the world.

Duhigg attributes this to the “keystone habit.” O’Neill identified safety as such a habit, and focused on it fervently. He wanted to change the company, but knew this couldn’t be done by merely telling people to change their behaviour, explaining: “… That’s not how the brain works. So I decided I was going to start by focusing on one thing. If I could start disrupting the habits around one thing, it would spread throughout the entire company.”

In the book, O’Neill discusses an incident in which a worker died trying to fix a piece of equipment in a way that violated established safety procedures and warning signs. The CEO called an emergency meeting to understand the cause of the event, and took personal responsibility for the worker’s death. He also pinpointed several inadequacies in workplace safety education: specifically, the training material didn’t make clear that employees wouldn’t be sanctioned for hardware failure, and that they shouldn’t commence repairs before first consulting a manager.

In the aftermath, Alcoa's safety policies were updated and employees were encouraged to engage with management in drafting new ones. This engagement led workers to go a step further and suggest improvements to how the business could be run. By talking about safety, the company was able to improve communication and innovation, which led to a marked improvement in its financial performance.

Timothy D. Wilson, Professor of Psychology at the University of Virginia, argues that behaviour change precedes changes in sentiment, not the other way around. Those responsible for security should realise that there is no silver bullet: changing culture requires constant vigilance, with virtuous behaviour continually reinforced in order to create and sustain positive habits.

The goal isn't to teach one-off tricks, but to create a culture that everyone understands and accepts without resistance. To do this, messages need to cater to each type of employee, eschewing the idea that a one-size-fits-all campaign could work. Questions that must be answered include: What are the benefits? Why should I bother? What are the impacts of my actions?

Tone is important. Campaigns must avoid scare tactics, such as threatening employees with punishment for breaches or non-compliance; these can be dismissed as scaremongering. At the same time, they should acknowledge the damage caused by non-compliant behaviour and recognise that employee error can put the organisation at risk. They should acknowledge the aims and values of the user as well as those of the organisation, such as professionalism and timely delivery of projects. The campaign should recognise that everyone has a role to play.

Above all, a campaign should emphasise the value that information security brings to the business. This reframes the conversation around security away from imposing limits on user behaviour, and counters the idea that security is a barrier preventing employees from doing their jobs.

Security campaigns targeted at specific groups allow greater flexibility, and let information security professionals communicate risk more effectively to more employees, which is crucial for creating behavioural change. When everyone in the organisation is aware of security risks and procedures, the organisation can identify chinks in the communal knowledge and respond by providing further education.

From this point onwards, role-specific education can be offered. So, if an employee has access to a company laptop and external storage drive, they could be offered guidance on keeping company data secure when out of the office. Additionally, employees should have a library of reference materials to consult on procedure, should they need to reinforce their knowledge later on.

Security professionals should understand the importance of the collective in order to build a vibrant and thriving security culture. Above all, they should remember that as described in the broken windows theory, addressing minor infractions can result in better behaviour across the board.

Conclusion

Companies want to have their cake and eat it. On the one hand, they want their employees to be productive; that much is obvious, as productivity is directly linked to the performance of the business. On the other hand, they are wary of security breaches, which can result in financial penalties from regulators, costs associated with remediation and restitution, and negative publicity.

As we have seen, employees are concerned primarily with doing their day-to-day jobs in a timely and effective manner. Anything else is secondary, and as far as compliance goes, for many employees the ends justify the means. It's therefore vital that productivity and security be reconciled. When companies fail to do so, they effectively force employees' hands into breaking policy, heightening risk for the organisation.

Employees will only comply with security policy if they feel motivated to do so. They must see a link between compliance and personal benefit. They must be empowered to adhere to security policy. To do this, they have to be given the tools and means to comprehend risks facing the organisation, and to see how their actions play into this. Once they are sufficiently equipped, they must be trusted to act unhindered to make decisions that mitigate risk at the organisational level.

Crucially, front-line information security workers must shift away from the role of a policeman enforcing policy from the top down through sanctions and hand-wringing. This traditional approach no longer works, especially when you consider that today's businesses are geographically distributed and often consist of legions of remote workers.

It’s vital that we shift from identikit, one-size-fits-all frameworks. They fail to take advantage of context, both situational and local. Flexibility and adaptability are key mechanisms to use when faced with conflicts between tasks and established security codes of conduct.

Security mechanisms should be shaped around the day-to-day working lives of employees, and not the other way around. The best way to do this is to engage with employees, and to factor in their unique experiences and insights into the design process. The aim should be to correct the misconceptions, misunderstandings, and faulty decision-making processes that result in non-compliant behaviour. To effectively protect your company’s assets from cyber-attacks, focus on the most important asset – your people.

References

Dale Carnegie, How to Win Friends and Influence People. Simon and Schuster, 2010.

Iacovos Kirlappos, Adam Beautement and M. Angela Sasse, “‘Comply or Die’ Is Dead: Long Live Security-Aware Principal Agents”, in Financial Cryptography and Data Security, Springer, 2013, 70–82.

Leron Zinatullin, The Psychology of Information Security: Resolving conflicts between security compliance and human behaviour, IT Governance Ltd, 2016.

Kregg Aytes and Terry Connolly, “Computer Security and Risky Computing Practices: A Rational Choice Perspective”, Journal of Organizational and End User Computing, 16(2), 2004, 22–40.

John D’Arcy, Anat Hovav and Dennis Galletta, “User Awareness of Security Countermeasures and Its Impact on Information Systems Misuse: A Deterrence Approach”, Information Systems Research, 17(1), 2009, 79–98

Jai-Yeol Son, “Out of Fear or Desire? Toward a Better Understanding of Employees’ Motivation to Follow IS Security Policies”, Information & Management, 48(7), 2011, 296–302.

Baba Shiv and Alexander Fedorikhin, “Heart and Mind in Conflict: The Interplay of Affect and Cognition in Consumer Decision Making”, Journal of Consumer Research, 1999, 278–292

Shai Danziger, Jonathan Levav and Liora Avnaim-Pesso, “Extraneous Factors in Judicial Decisions”, Proceedings of the National Academy of Sciences, 108(17), 2011, 6889–6892

Simson L. Garfinkel and Abhi Shelat, “Remembrance of Data Passed: A Study of Disk Sanitization Practices”, IEEE Security & Privacy, 1, 2003, 17–27.

John Maeda, The Laws of Simplicity, MIT Press, 2006.

Horst W. J. Rittel and Melvin M. Webber, “Dilemmas in a General Theory of Planning”, Policy Sciences, 4, 1973, 155–169.  

Hasso Plattner, Christoph Meinel and Larry J. Leifer, eds., Design Thinking: Understand–Improve–Apply, Springer Science & Business Media, 2010

Taiichi Ohno, Toyota Production System: Beyond Large-Scale Production, Productivity Press, 1988.

Eric Ries, The Lean Startup, Crown Business, 2011.

Malcolm Gladwell, The Tipping Point: How Little Things Can Make a Big Difference, Little, Brown, 2006

Dan Ariely, The Honest Truth about Dishonesty, Harper, 2013

Francesca Gino, Shahar Ayal and Dan Ariely, “Contagion and Differentiation in Unethical Behavior: The Effect of One Bad Apple on the Barrel”, Psychological Science, 20(3), 2009, 393–398.

Charles Duhigg, The Power of Habit: Why We Do What We Do and How to Change, Random House, 2013

Timothy Wilson, Strangers to Ourselves, Harvard University Press, 2004, p. 212.


NIS Directive: are you ready?


Governments across Europe have recognised that, with increased interconnectedness, a cyber incident can affect multiple entities across a number of countries. Moreover, the impact and frequency of cyber attacks are at an all-time high, with recent examples including:

  • 2017 WannaCry ransomware attack
  • 2016 attacks on US water utilities
  • 2015 attack on Ukraine’s electricity network

In order to manage cyber risk, the European Union introduced the Network and Information Systems (NIS) Directive, which requires all Member States to protect their critical national infrastructure by implementing cyber security legislation.

Each Member State is required to set its own rules on financial penalties and must take the necessary measures to ensure they are implemented. In the UK, for example, fines can be up to £17 million.

And yes, in case you are wondering, the UK government has confirmed that the Directive will apply irrespective of Brexit (the NIS Regulations come into effect before the UK leaves the EU).

Who does the NIS Directive apply to?

The law applies to:

  • Operators of Essential Services that are established in the EU
  • Digital Service Providers that offer services to persons within the EU

The sectors affected by the NIS Directive are:

  • Water
  • Health (hospitals, private clinics)
  • Energy (gas, oil, electricity)
  • Transport (rail, road, maritime, air)
  • Digital infrastructure and service providers (e.g. DNS service providers)
  • Financial Services (only in certain Member States e.g. Germany)

NIS Directive objectives

In the UK the NIS Regulations will be implemented in the form of outcome-focused principles rather than prescriptive rules.

The National Cyber Security Centre (NCSC) is the UK’s single point of contact for the legislation. It has published top-level objectives with underlying security principles.

Objective A – Managing security risk

  • A1. Governance
  • A2. Risk management
  • A3. Asset management
  • A4. Supply chain

Objective B – Protecting against cyber attack

  • B1. Service protection policies and processes
  • B2. Identity and access control
  • B3. Data security
  • B4. System security
  • B5. Resilient networks and systems
  • B6. Staff awareness

Objective C – Detecting cyber security events

  • C1. Security monitoring
  • C2. Proactive security event discovery

Objective D – Minimising the impact of cyber security incidents

  • D1. Response and recovery planning
  • D2. Lessons learned

A table view of the principles and related guidance is also available on the NCSC website.

Cyber Assessment Framework

The implementation of the NIS Directive can only be successful if Competent Authorities can adequately assess the cyber security of organisations in scope. To assist with this, the NCSC developed the Cyber Assessment Framework (CAF).

The Framework is based on the 14 outcomes-based principles of the NIS Regulations outlined above. Adherence to each principle is determined based on how well associated outcomes are met. See below for an example:

(Image: example of a CAF principle and its contributing outcomes)

Each outcome is assessed based upon Indicators of Good Practice (IGPs), which are statements that can either be true or false for a particular organisation.
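As a rough illustration of this idea, here is a minimal sketch in Python. Note that this is not the NCSC's actual scoring logic (the real CAF groups IGPs into "achieved", "partially achieved" and "not achieved" tables), and the indicator statements below are hypothetical:

```python
# Illustrative sketch only: a simplified stand-in for CAF outcome
# assessment. The grading rule and indicator names are assumptions,
# not taken from the NCSC's published framework.

def assess_outcome(igps: dict) -> str:
    """Grade an outcome from the truth values of its indicators."""
    met = sum(igps.values())  # count the indicators that hold true
    if met == len(igps):
        return "achieved"
    if met == 0:
        return "not achieved"
    return "partially achieved"

# Hypothetical indicators for an identity-and-access-control outcome
igps = {
    "users are uniquely identified": True,
    "privileged access is reviewed regularly": True,
    "shared accounts are prohibited": False,
}

print(assess_outcome(igps))  # partially achieved
```

The point of the structure is that each indicator is a plain true-or-false statement about the organisation, so the evidence-gathering conversation stays concrete.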

What’s next?

If your organisation is in the scope of the NIS Directive, it is useful to conduct an initial self-assessment using the CAF described above as a starting point. Remember, a formal self-assessment will be required by your Competent Authority, so it is better not to delay this crucial step.

Establishing an early dialogue with the Competent Authority is essential as this will not only help you establish the scope of the assessment (critical assets), but also allow you to receive additional guidance from them.

An initial self-assessment will most probably highlight some gaps. It is important to outline a plan to address them and share it with your Competent Authority. Keep incident response in mind at all times: the process has to be well defined to allow you to report NIS-specific incidents to your Competent Authority within 72 hours.

Remediate the findings within the agreed time frames and monitor ongoing compliance and potential changes in requirements, maintaining the dialogue with the Competent Authority.


Augusta University’s Cyber Institute adopts my book


Just received some great news from my publisher: my book has been accepted for use on a course at Augusta University. Here’s some feedback from the course director:

Augusta University’s Cyber Institute adopted the book “The Psychology of Information Security” as part of our Masters in Information Security Management program because we feel that the human factor plays an important role in securing and defending an organisation. Understanding behavioural aspects of the human element is important for many information security managerial functions, such as developing security policies and awareness training. Therefore, we want our students to not only understand technical and managerial aspects of security, but psychological aspects as well.


Cybersecurity Canon: The Psychology of Information Security


My book has been nominated for the Cybersecurity Canon, a list of must-read books for all cybersecurity practitioners.

Review by Guest Contributor Nicola Burr, Cybersecurity Consultant



Delivering a guest lecture at California State University, Long Beach


I was invited to talk to Masters students at California State University, Long Beach about starting a career in cyber security. My guest lecture in the Fundamentals of Security class was well received. Here’s the feedback I received from the professor:

Leron, thank you so much for talking to my students. We had a great session and everybody was feeling very energised afterwards. It always helps students to interact with industry practitioners and you did a fantastic job inspiring the class. I will be teaching this class next semester, too. Let’s keep in touch and see if you will be available to do a similar session with the next cohort. Again, thank you very much for your time – I wish we could have more time available to talk!


The Psychology of Information Security Culture


In order to reduce security risks within an enterprise, security professionals have traditionally attempted to guide employees towards compliance through security training. However, recurring problems and employee behaviour in this arena indicate that these measures are insufficient and rather ineffective.

Security training tends to focus on specific working practices and defined threat scenarios, leaving the understanding of security culture and its specific principles of behaviour untouched. A security culture should be regarded as a fundamental matter to address. If neglected, employees will not develop habitually secure behaviour or take the initiative to make better decisions when problems arise.

In my talk I will focus on how you can improve security culture in your organisation. I’ll discuss how you can:

  • Understand the root causes of a poor security culture within the workplace
  • Align a security programme with wider organisational objectives
  • Manage and communicate these changes within an organisation

The goal is not to teach tricks, but to create a new culture which is accepted and understood by everyone. Come join us at the Security Awareness Summit on 11 Nov for an amazing opportunity to learn from and share with each other. Activities include show-and-tell, Lightning Talks, video wars, group case studies and numerous networking activities. Learn more and register now for the Summit.



How to conduct a cyber security assessment


I remember conducting a detailed security assessment of UK and US gas and electricity generation and distribution networks using CSET (the Cybersecurity Evaluation Tool developed by the US Department of Homeland Security). The question set contained around 1,200 weighted and prioritised questions and resulted in a very detailed report.

Was it useful?

The main lesson was that, beyond a certain threshold, tracking every grain of sand is no longer effective.

The value, however, came from going deeper into specific controls and treating the exercise almost as an audit, where a category is graded lower if no evidence, or insufficient evidence, is provided. Sometimes this is the only way to gain insight into how security is being managed across the estate.

Why?

What’s apparent in some companies, especially in financial services, is that they often conduct experiments using impressive technology. But have they turned those experiments into a standard and rolled it out consistently across the organisation? The answer is often ‘no’, especially if the company is geographically distributed.

I’ve done a lot of assessments and benchmarking exercises against NIST CSF, ISO 27001, ISF IRAM2 and other standards since that CSET engagement and developed a set of questions that cover the areas of the NIST Cybersecurity Framework.

I felt the need to streamline the process, so I developed a tool to present the scores clearly and help benchmark against the industry. I usually propose a tailor-made questionnaire of 50–100 suitable questions drawn from the question bank. From my experience of these assessments, the answers are not binary. Yes, a capability might be present, but the real questions are:

  • How is it implemented?
  • How consistently is it being rolled out?
  • Can you actually show me the evidence?

So it’s very much about seeking the facts.

As I’ve mentioned, the process might not be the most pleasant for the parties involved but it is the one that delivers the most value for the leadership.

What about maturity?

I usually map the scores to the CMMI (Capability Maturity Model Integration) levels:

  • Initial
  • Managed
  • Defined
  • Quantitatively managed
  • Optimised

But I also consider the NIST Cybersecurity Framework implementation tiers. These are not strictly maturity levels; rather, higher tiers indicate a more complete implementation of the CSF. They run from Tier 1 to Tier 4: Partial, Risk Informed, Repeatable and, finally, Adaptive.

The key here is not just the ultimate score but the relation of the score to the coverage across the estate.
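To make that concrete, here is a minimal Python sketch of my own: the score thresholds and the coverage weighting are illustrative assumptions, not part of CMMI or the NIST CSF, and the function scores are hypothetical:

```python
# Illustrative only: maps 1-5 assessment scores to CMMI level names
# and weights the overall score by how much of the estate each
# capability actually covers. Thresholds and weights are assumptions.

CMMI_LEVELS = ["Initial", "Managed", "Defined",
               "Quantitatively managed", "Optimised"]

def to_cmmi(score: float) -> str:
    """Map a 1-5 score to the nearest CMMI level name."""
    level = min(max(round(score), 1), 5)
    return CMMI_LEVELS[level - 1]

def weighted_maturity(scores: dict, coverage: dict) -> float:
    """Average the scores, weighted by estate coverage (0-1)."""
    total = sum(scores[c] * coverage[c] for c in scores)
    return total / sum(coverage.values())

# Hypothetical per-function scores and coverage across the estate
scores = {"Identify": 4, "Protect": 3, "Detect": 2}
coverage = {"Identify": 0.9, "Protect": 0.5, "Detect": 0.3}

print(to_cmmi(weighted_maturity(scores, coverage)))  # Defined
```

A capability scored highly but deployed on a third of the estate drags the weighted result down, which is exactly the point: the headline number should reflect coverage, not just the best-case implementation.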