Trust in People: Macquarie University Cyber Security Industry Workshop

I was invited to share my thoughts on human-centric security at the Macquarie University Cyber Security Industry Workshop.

Drawing on insights from The Psychology of Information Security and my experience in the field, I outlined some of the reasons for friction between security and business productivity and suggested a practical approach to building a better security culture in organisations.

It was great to be able to contribute to the collaboration between industry, government and academia on this topic.


Behavioural science in cyber security

Why your staff ignore security policies and what to do about it.               

Dale Carnegie’s 1936 bestselling self-help book How To Win Friends And Influence People is one of those titles that sits unloved and unread on most people’s bookshelves. But dust off its cover and crack open its spine, and you’ll find lessons and anecdotes that are relevant to the challenges associated with shaping people’s behaviour when it comes to cyber security.

In one chapter, Carnegie tells the story of George B. Johnson, from Oklahoma, who worked for a local engineering company. Johnson’s role required him to ensure that other employees abided by the organisation’s health and safety policies. Among other things, he was responsible for making sure workers wore their hard hats on the factory floor.

His strategy was as follows: if he spotted someone not following the company’s policy, he would approach them, admonish them, quote the regulation at them and insist on compliance. And it worked — albeit briefly. The employee would put on their hard hat, only to remove it again as soon as Johnson left the room. So he tried something different: empathy. Rather than addressing his colleagues from a position of authority, Johnson spoke to them almost as though he were a friend and expressed a genuine interest in their comfort. He wanted to know whether the hats were uncomfortable to wear, and whether that was why they went unworn on the job.

Instead of reciting the rules chapter and verse, he simply pointed out that it was in the employees’ best interest to wear their helmets, because they were designed to prevent workplace injuries.

This shift in approach bore fruit, and workers felt more inclined to comply with the rules. Moreover, Johnson observed that employees were less resentful of management.

The parallels between cyber security and George B. Johnson’s battle to ensure health-and-safety compliance are immediately obvious. Our jobs require us to adequately address the security risks that threaten the organisations we work for. To be successful at this, it’s important to ensure that everyone appreciates the value of security — not just engineers, developers, security specialists, and other related roles.

This isn’t easy. On the one hand, failing to implement security controls can result in an organisation facing significant losses. On the other, badly implemented security mechanisms can be worse: they either obstruct employee productivity or foster a culture in which security is resented.

To ensure widespread adoption of secure behaviour, security policy and control implementations not only have to accommodate the needs of those that use them, but they also must be economically attractive to the organisation. To realise this, there are three factors we need to consider: motivation, design, and culture.


How to Create a Security Culture at the Workplace

October is National Cyber Security Awareness Month (NCSAM), which is designed to engage and educate public- and private-sector partners through events and initiatives that raise awareness about cyber security.

I’ve been asked to share my views on creating a security culture at the workplace with The State of Security.

I believe the goal is not to teach tricks, but to create a new culture which is accepted and understood by everyone. To do so effectively, messages need to be designed and delivered according to each type of employee: there is no such thing as a one-size-fits-all security campaign. Questions that must always be answered include: What are the benefits? Why does it matter, and why should I care? What impact do my actions have?

Security campaigns must discard scare tactics such as threatening employees with sanctions for breaches. Campaigns should be oriented towards the users’ goals and values, as well as the values of the organisation, such as professionalism and delivery.

A security campaign should emphasise that employees can cause serious damage to an organisation through non-compliant behaviour, even when that behaviour appears insignificant. They should understand that they bear some responsibility for the security of the organisation and its exposure to risk.

Furthermore, the entire organisation needs to perceive security as bringing value to the company, as opposed to being an obstacle preventing employees from doing their job. It is important for employees to understand that they contribute to the smooth and efficient operation of business processes when they follow recommended security practices, just as security enables the availability of resources that support these processes.

The Psychology of Information Security book reviews


I wrote about my book in the previous post. Here I would like to share what others have to say about it.

So often information security is viewed as a technical discipline – a world of firewalls, anti-virus software, access controls and encryption. An opaque and enigmatic discipline which defies understanding, with a priesthood who often protect their profession with complex concepts, language and most of all secrecy.

Leron takes a practical, pragmatic and no-holds barred approach to demystifying the topic. He reminds us that ultimately security depends on people – and that we all act in what we see as our rational self-interest – sometimes ill-informed, ill-judged, even downright perverse.

No approach to security can ever succeed without considering people – and as a profession we need to look beyond our computers to understand the business, the culture of the organisation – and most of all, how we can create a security environment which helps people feel free to actually do their job.
David Ferbrache OBE, FBCS
Technical Director, Cyber Security
KPMG UK

This is an easy-to-read, accessible and simple introduction to information security.  The style is straightforward, and calls on a range of anecdotes to help the reader through what is often a complicated and hard to penetrate subject.  Leron approaches the subject from a psychological angle and will be appealing to both those of a non-technical and a technical background.
Dr David King
Visiting Fellow of Kellogg College
University of Oxford


Digital decisions: Understanding behaviours for safer cyber environments


I was invited to participate in a panel discussion at a workshop on digital decision-making and risk-taking hosted by the Decision, Attitude, Risk & Thinking (DART) research group at Kingston Business School.

During the workshop, we addressed the human dimension in issues arising from increasing digital interconnectedness with a particular focus on cyber security risks and cyber safety in web-connected organisations.

We identified behavioural challenges in cyber security such as insider threats, phishing emails, security culture and achieving stakeholder buy-in. We also outlined a potential further research opportunity which could tackle behavioural security risks inherent in the management of organisational information assets.


Building a security culture

Building on the connection between breaking security policies and cheating, let’s look at a study[1] that asked participants to solve 20 simple maths problems and promised 50 cents for each correct answer.

The participants were allowed to check their own answers and then shred the answer sheet, leaving no evidence of any potential cheating. Under these conditions, participants reported solving, on average, five more problems than under control conditions, where cheating was not possible.

The researchers then introduced David – a student who was asked to raise his hand shortly after the experiment began and proclaim that he had solved all the problems. The other participants were obviously shocked by such a statement: it was clearly impossible to solve all the problems in only a few minutes. The experimenter, however, didn’t question his integrity and suggested that David should shred the answer sheet and take all the money from the envelope.

Interestingly, the other participants’ behaviour adapted as a result. They reported solving, on average, eight more problems than under control conditions.

Much like the broken windows theory mentioned in my previous blog, this demonstrates that unethical behaviour is contagious, as are acts of non-compliance. If employees in a company witness other people breaking security policies and not being punished, they are tempted to do the same. It becomes socially acceptable and normal. This is the root cause of poor security culture.

The good news is that the opposite holds true as well. That’s why security culture has to have strong senior management support. Leading by example is the key to changing the perception of security in the company: if employees see that the leadership team takes security seriously, they will follow.

So, security professionals should focus on how security is perceived. This point is outlined in three basic steps in the book The Social Animal, by David Brooks:[2]

  1. People perceive a situation.
  2. People estimate whether the action is in their long-term interest.
  3. People use willpower to take action.


He claims that, historically, people were mostly focused on the last two steps of this process. In the previous blog I argued that relying solely on willpower has a limited effect. Willpower can be exercised like a muscle, but it is also prone to atrophy.

With regard to the second step of the decision-making process, the assumption is that if people are reminded of the potential negative consequences, they will be less likely to take the action. Brooks, however, refers to ineffective HIV/AIDS awareness campaigns, which focused only on the negative consequences and ultimately failed to change people’s behaviour.

He also suggests that most diets fail because willpower and reason are not strong enough to confront impulsive desires: “You can tell people not to eat the French fry. You can give them pamphlets about the risks of obesity … In their nonhungry state, most people will vow not to eat it. But when their hungry self rises, their well-intentioned self fades, and they eat the French fry”.

This doesn’t only apply to dieting: when people want to get their job done and security gets in the way, they will circumvent it, regardless of the degree of risk they might expose the company to.

That is why perception is the cornerstone of the decision-making process. Employees have to be taught to perceive security violations in a way that minimises the temptation to break the policies.

In ‘Strangers to Ourselves’, Timothy Wilson claims, “One of the most enduring lessons of social psychology is that behaviour change often precedes changes in attitudes and feelings”.[3]

Security professionals should understand that there is no single event that alters users’ behaviour – changing security culture requires regular reinforcement, creating and sustaining habits.

Charles Duhigg, in his book The Power of Habit,[4] tells a story about Paul O’Neill, the CEO of the Aluminum Company of America (Alcoa), who was determined to make his enterprise the safest in the country. At first, people were confused that the newly appointed executive was not talking about profit margins or other financial metrics. They didn’t see the link between his ‘zero-injuries’ goal and the company’s performance. Despite that, Alcoa’s profits reached a record high within a year of his announcement. By the time O’Neill retired, the company’s annual income was five times greater than it had been before his arrival. Moreover, it had become one of the safest companies in the world.

Duhigg explains this phenomenon by highlighting the importance of the “keystone habit”. Alcoa’s CEO identified safety as such a habit and focused solely on it.

O’Neill had a challenging goal to transform the company, but he couldn’t just tell people to change their behaviour. He said, “that’s not how the brain works. So I decided I was going to start by focusing on one thing. If I could start disrupting the habits around one thing, it would spread throughout the entire company.”

He recalled an incident when one of his workers died trying to fix a machine despite the safety procedures and warning signs. The CEO called an emergency meeting to understand what had caused this tragic event.

He took personal responsibility for the worker’s death, identifying numerous shortcomings in safety education. For example, the training programme didn’t highlight the fact that employees wouldn’t be blamed for machinery failure or the fact that they shouldn’t commence repair work before finding a manager.

As a result, the policies were updated and the employees were encouraged to suggest safety improvements. Workers, however, went a step further and started suggesting business improvements as well. Changing their behaviour around safety led to some innovative solutions, enhanced communication and increased profits for the company.

Security professionals should understand the importance of group dynamics and social influence in building an effective security culture.

They should also remember that just as ‘broken windows’ encourage policy violations, changing one security habit can encourage better behaviour across the board.

References:

[1] Francesca Gino, Shahar Ayal and Dan Ariely, “Contagion and Differentiation in Unethical Behavior: The Effect of One Bad Apple on the Barrel”, Psychological Science, 20(3), 2009, 393–398.

[2] David Brooks, The Social Animal: The Hidden Sources of Love, Character, and Achievement, Random House, 2011.

[3] Timothy Wilson, Strangers to Ourselves, Harvard University Press, 2004, 212.

[4] Charles Duhigg, The Power of Habit: Why We Do What We Do and How to Change, Random House, 2013.

To find out more about building a security culture, read Leron’s book, The Psychology of Information Security. Twitter: @le_rond

The root causes of a poor security culture within the workplace


Demonstrating to employees that security is there to make their life easier, not harder, is the first step in developing a sound security culture. But before we discuss the actual steps to improve it, let’s first understand the root causes of poor security culture.

Security professionals must understand that bad habits and behaviours tend to be contagious. Malcolm Gladwell, in his book The Tipping Point,[1] discusses the conditions that allow some ideas or behaviours to “spread like viruses”. He refers to the broken windows theory to illustrate the power of context. This theory advocates tackling minor crimes and maintaining the environment in order to prevent more serious ones. The claim is that a broken window left unrepaired for several days in a neighbourhood will trigger more vandalism: the small defect signals a lack of care and attention, which in turn suggests that crime will go unpunished.

Gladwell describes the efforts of George Kelling, who employed the theory to fight vandalism on the New York City subway system. He argued that cleaning up graffiti on the trains would prevent further vandalism. Gladwell concluded that this several-year-long effort resulted in a dramatically reduced crime rate.

Despite ongoing debate regarding the causes of the 1990s crime rate reduction in the US, the broken windows theory can be applied in an information security context.

Security professionals should remember that minor policy violations tend to lead to bigger ones, eroding the company’s security culture.

The psychology of human behaviour should be considered as well

Sometimes people are not motivated to comply with a security policy because they simply don’t see the financial impact of violating it.

Dan Ariely, in his book The Honest Truth about Dishonesty,[2] tries to understand why people break the rules. Among other experiments, he describes a survey conducted among golf players to determine whether they would be tempted to move the ball to a more advantageous position and, if so, which method they would choose. The golfers were offered three different options: they could use their club, use their shoe or simply pick the ball up with their hands.

Although all of these options break the rules, the survey was designed this way to determine whether one method of cheating is more psychologically acceptable than the others. The results demonstrated that moving the ball with a club was the most common choice, followed by the shoe and, finally, the hand. It turned out that physically and psychologically distancing themselves from the ‘immoral’ action makes people more likely to act dishonestly.

It is important to understand that the ‘distance’ described in this experiment is merely psychological. It doesn’t change the nature of the action.

In a security context, employees will usually be reluctant to steal confidential information, just as golfers will refrain from picking up a ball with their hand to move it to a more favourable position, because that would make them directly involved in the unethical behaviour. However, employees might download a peer-to-peer file-sharing application to listen to music at work, because the impact of this action is less obvious. Yet it can lead to far bigger losses if it results in confidential information being stolen from the corporate network.

Security professionals can use this finding to remind employees of the true meaning of their actions. Breaking security policy does not seem to have a direct financial impact on the company – there is usually no perceived loss, so it is easy for employees to engage in such behaviour. Highlighting this link and demonstrating the correlation between policy violations and the business’s ability to generate revenue could help employees understand the consequences of non-compliance.

References:

[1] Malcolm Gladwell, The Tipping Point: How Little Things Can Make a Big Difference, Little, Brown, 2006.

[2] Dan Ariely, The Honest Truth about Dishonesty, Harper, 2013.

Image by txmx 2 https://flic.kr/p/pFqvpD

To find out more about the behaviours behind information security, read Leron’s book, The Psychology of Information Security. Twitter: @le_rond

‘Wicked’ problems in information security


Incorporating security activities into the natural workflow of productive tasks makes it easier for people to adopt new technologies and ways of working, but it’s not necessarily enough to guarantee that a particular security-usability issue can be solved. The reason is that such problems can be categorised as wicked.

Rittel and Webber, writing in ‘Policy Sciences’, define a wicked problem in the context of social policy planning as one that is challenging – if not impossible – to solve because of missing, poorly defined or inconsistent stakeholder requirements, which may change over time and for which it is demanding to find an optimal solution.[1]

One cannot apply traditional methods to solving a wicked problem; a creative solution must be sought instead. One of these creative solutions could be to apply design thinking techniques.

Methods for design thinking include performing situational analysis, interviewing, creating user profiles, looking at other existing solutions, creating prototypes and mind mapping.

Plattner, Meinel and Leifer in ‘Design Thinking: Understand–Improve–Apply’ assert that there are four rules to design thinking, which can help security professionals better approach wicked problems:[2]

  1. The human rule: all design activity is ultimately social in nature.
  2. The ambiguity rule: design thinkers must preserve ambiguity.
  3. The redesign rule: all design is redesign.
  4. The tangibility rule: making ideas tangible always facilitates communication.

Security professionals should adopt these rules in order to develop secure and usable controls: engaging people, drawing on existing solutions and creating prototypes that allow feedback to be collected.

Although this enables the design of better security controls, the design thinking rules rarely provide an insight into why the existing mechanism is failing.

When a problem occurs, we naturally tend to focus on the symptoms instead of identifying the root cause. In ‘Toyota Production System: Beyond Large-Scale Production’, Taiichi Ohno describes the Five Whys technique, which he developed as a systematic problem-solving tool at Toyota to get to the heart of a problem.

In one of his books, Ohno provides the following example of applying this technique when a machine stopped functioning:[3]

  1. Why did the machine stop? There was an overload and the fuse blew.
  2. Why was there an overload? The bearing was not sufficiently lubricated.
  3. Why was it not lubricated sufficiently? The lubrication pump was not pumping sufficiently.
  4. Why was it not pumping sufficiently? The shaft of the pump was worn and rattling.
  5. Why was the shaft worn out? There was no strainer attached and metal scrap got in.

Instead of focusing on resolving the first reason for the malfunction – i.e. replacing the fuse or the pump shaft – repeating ‘why’ five times helps to uncover the underlying issue and prevent the problem from resurfacing in the near future.
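To make the chain of questions concrete, here is a minimal Python sketch (the function name and structure are mine, purely for illustration) that records Ohno’s example as data and walks down the chain to the final answer:

```python
def five_whys(symptom, answers):
    """Print a chain of answers to 'why?' and return the last one,
    which is treated as the suspected root cause."""
    print(f"Problem: {symptom}")
    for depth, answer in enumerate(answers, start=1):
        print(f"  Why #{depth}: {answer}")
    return answers[-1]

# Ohno's machine-stoppage example from the text
root_cause = five_whys(
    "The machine stopped.",
    [
        "There was an overload and the fuse blew.",
        "The bearing was not sufficiently lubricated.",
        "The lubrication pump was not pumping sufficiently.",
        "The shaft of the pump was worn and rattling.",
        "There was no strainer attached and metal scrap got in.",
    ],
)
print(f"Start the fix here: {root_cause}")
```

The value lies in the questioning rather than any tooling: the last entry in the chain, not the first, is where the fix should start.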

Eric Ries, who adapted this technique to starting up a business in his book The Lean Startup,[4] points out that at “the root of every seemingly technical problem is actually a human problem.”

In Ohno’s example, the root cause turned out to be human error (an employee forgetting to attach a strainer) rather than the technical fault (a blown fuse) that was initially suspected. This is typical of most problems that security professionals face, no matter which industry they are in.

These techniques can help to address the core of the issue and build systems that are both usable and secure. This is not easy to achieve due to the nature of the problem. But, once implemented, such mechanisms can significantly improve the security culture in organisations.

References:

[1] Horst W. J. Rittel and Melvin M. Webber, “Dilemmas in a General Theory of Planning”, Policy Sciences, 4, 1973, 155–169.

[2] Hasso Plattner, Christoph Meinel and Larry J. Leifer, eds.,  Design Thinking: Understand–Improve–Apply, Springer Science & Business Media, 2010.

[3] Taiichi Ohno, Toyota Production System: Beyond Large-Scale Production, Productivity Press, 1988.

[4] Eric Ries, The Lean Startup, Crown Business, 2011.

Image by Paloma Baytelman https://www.flickr.com/photos/palomabaytelman/10299945186/in/photostream/

To find out more about the psychology behind information security, read Leron’s book, The Psychology of Information Security. Twitter: @le_rond

Security and Usability

Many employees find information security secondary to their normal day-to-day work, often leaving their organisation vulnerable to cyber attacks, particularly if they are stressed or tired. Leron Zinatullin, the author of The Psychology of Information Security, looks at the opportunities available to prevent such cognitive depletion.


When users perform tasks that comply with their own mental models (i.e. the way that they view the world and how they expect it to work), the activities present less of a cognitive challenge than those that work against these models.

If people can apply their previous knowledge and experience to a problem, less energy is required to solve it in a secure manner and they are less mentally depleted by the end of the day.

For example, a piece of research on disk sanitisation highlighted the importance of secure file removal from the hard disk.[1] It is not clear to users that emptying the ‘Recycle Bin’ is insufficient and that files can easily be recovered. However, there are software products available that exploit users’ mental models. They employ a ‘shredding’ analogy to indicate that files are being removed securely, which echoes an activity they would perform at work. Such an interface design might help lighten the burden on users.
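As a purely illustrative sketch of why ‘deleting’ is not the same as destroying data, the snippet below overwrites a file’s contents before unlinking it (the function name is mine, and this naive approach is not reliable on SSDs or on journaling and copy-on-write filesystems, where the original blocks may survive):

```python
import os

def naive_shred(path, passes=1):
    """Overwrite a file with zeros before deleting it.

    Illustrative only: wear levelling, journaling and copy-on-write
    can leave the original data recoverable elsewhere on the disk.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)   # replace the contents in place
            f.flush()
            os.fsync(f.fileno())      # force the overwrite onto the disk
    os.remove(path)                   # then unlink the file as usual
```

What the ‘shredding’ interface adds is not technical sophistication but a metaphor users already understand, which is exactly the point the research above makes about mental models.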

Security professionals should pay attention to the usability of security mechanisms, aligning them with users’ existing mental models.

In The Laws of Simplicity,[2] John Maeda supports the importance of making design more user-friendly by relating it to an existing experience. He refers to an example of the desktop metaphor introduced by Xerox researchers in the 1980s. People were able to relate to the graphical computer interface as opposed to the command line. They could manipulate objects similarly to the way they did with a physical desk: storing and categorising files in folders, as well as moving or renaming them, or deleting them by placing them in the recycle bin.

Using mental models

Building on existing mental models makes it easier for people to adopt new technologies and ways of working. However, such mappings must take cultural background into consideration. The metaphor might not work if it is not part of the existing mental model. For instance, Apple Macintosh’s original trash icon was impossible to recognise in Japan, where users were not accustomed to metallic bins of this kind.

Good interface design not only lightens the burden on users but can also complement security. Traditionally, it has been assumed that security and usability always contradict each other – that security makes things more complicated, while usability aims to improve the user experience. In reality, they can support each other by defining constructive and destructive activities. Effective design should make constructive activities simple to perform while hindering destructive ones.

This can be achieved by incorporating security activities into the natural workflow of productive tasks, which requires the involvement of security professionals early in the design process. Security and usability shouldn’t be extra features introduced as an afterthought once the system has been developed, but an integral part of the design from the beginning.

Security professionals can provide input into the design process via several methods, such as iterative or participatory design.[3] In the iterative method, each development cycle is followed by testing and evaluation, while the participatory method ensures that key stakeholders, including security professionals, have an opportunity to be involved.

References:

[1] S. L. Garfinkel and A. Shelat, “Remembrance of Data Passed: A Study of Disk Sanitization Practices”, IEEE Security & Privacy, 1, 2003, 17–27.

[2] John Maeda, The Laws of Simplicity, MIT Press, 2006.

[3] For iterative design see J. Nielsen, “Iterative User Interface Design”, IEEE Computer, 26(11), 1993, 32–41; for participatory design see D. Schuler and A. Namioka, Participatory Design: Principles and Practices, CRC Press, 1993.

Image by Rosenfeld Media https://www.flickr.com/photos/rosenfeldmedia/2141071329/


To find out more about the psychology behind information security, read Leron’s book, The Psychology of Information Security. Twitter: @le_rond

How employees react to security policies


Information security can often be a secondary consideration for many employees, which leaves their company vulnerable to cyber attacks. Leron Zinatullin, author of The Psychology of Information Security, discusses how organisations can address this.

First, security professionals should understand that people’s mental resources are limited. Moreover, people tend to struggle to make effective decisions when they are tired.

To test the validity of this argument, psychologists designed an experiment in which they divided participants into two groups: the first group was asked to memorise a two-digit number (e.g. 54) and the second was asked to remember a seven-digit number (e.g. 4509672).[1] They then asked the participants to go down the hall to another room to collect their reward for participating. This payment, however, could only be received if the number was recalled correctly.

While they were making their way down the corridor, the participants encountered another experimenter, who offered them either fruit or chocolate. They were told that they could collect their chosen snack after they finished the experiment, but they had to make a decision there and then.

The results demonstrated that people who were given the easier task of remembering a two-digit number mostly chose the healthy option, while people overburdened by the more challenging task of recalling a longer string of digits succumbed to the more gratifying chocolate.

The implications of these findings, however, are not limited to dieting. A study looked at the decision-making patterns of judges considering inmates for parole at different stages of the day.[2]

Although the default position was to reject parole, in the mornings and after lunch the judges had more cognitive capacity and energy to consider the details of each case fully and make an informed decision, so parole was granted more frequently. Towards the end of the day, judges tended to reject parole far more often, which is believed to be due to the mental strain they had endured: they simply ran out of energy and defaulted to the safest option.

How can this be applied to the information security context?

Security professionals should bear in mind that when people are stressed at work, making difficult decisions and performing demanding tasks, they get tired. This can affect their ability or willingness to maintain compliance. In a corporate context, such cognitive depletion may result in staff defaulting to core business activities at the expense of secondary security tasks.

To be implemented effectively, security mechanisms must be aligned with people’s primary tasks, factoring in each individual’s perspective, knowledge and awareness, and taking a modern, flexible and adaptable approach to information security. The aim should therefore be to correct the employee misunderstandings and misconceptions that result in non-compliant behaviour, because, in the end, people are a company’s best asset.

References:

[1] B. Shiv and A. Fedorikhin, “Heart and Mind in Conflict: The Interplay of Affect and Cognition in Consumer Decision Making”, Journal of Consumer Research, 1999, 278–292.

[2] Shai Danziger, Jonathan Levav and Liora Avnaim-Pesso, “Extraneous Factors in Judicial Decisions”, Proceedings of the National Academy of Sciences, 108(17), 2011, 6889–6892.

Photo by CrossfitPaleoDietFitnessClasses https://www.flickr.com/photos/crossfitpaleodietfitnessclasses/8205162689

To find out more about the psychology behind information security, read Leron’s book, The Psychology of Information Security. Twitter: @le_rond