Why your staff ignore security policies and what to do about it.
Dale Carnegie’s 1936 bestselling self-help book How To Win Friends And Influence People is one of those titles that sits unloved and unread on most people’s bookshelves. But dust off its cover and crack open its spine, and you’ll find lessons and anecdotes directly relevant to the challenge of shaping people’s behaviour around cyber security.
In one chapter, Carnegie tells the story of George B. Johnson, from Oklahoma, who worked for a local engineering company. Johnson’s role required him to ensure that other employees abided by the organisation’s health and safety policies. Among other things, he was responsible for making sure they wore their hard hats when working on the factory floor.
His strategy was as follows: if he spotted someone not following the company’s policy, he would approach them, admonish them, quote the regulation at them, and insist on compliance. And it worked — albeit briefly. The employee would put on their hard hat, and as soon as Johnson left the room, they would just as quickly remove it. So he tried something different: empathy. Rather than addressing them from a position of authority, Johnson spoke to his colleagues almost as though he was their friend, and expressed a genuine interest in their comfort. He asked whether the hats were uncomfortable to wear, and whether that was why people weren’t wearing them on the job.
Instead of quoting the rules chapter and verse, he simply mentioned that it was in employees’ best interests to wear their helmets, because they were designed to prevent workplace injuries.
This shift in approach bore fruit, and workers felt more inclined to comply with the rules. Moreover, Johnson observed that employees were less resentful of management.
The parallels between cyber security and George B. Johnson’s battle to ensure health-and-safety compliance are immediately obvious. Our jobs require us to adequately address the security risks that threaten the organisations we work for. To be successful at this, it’s important to ensure that everyone appreciates the value of security — not just engineers, developers, security specialists, and other related roles.
This isn’t easy. On the one hand, failing to implement security controls can result in an organisation facing significant losses. On the other, badly implemented security mechanisms can be worse: they obstruct employee productivity or foster a culture in which security is resented.
To ensure widespread adoption of secure behaviour, security policy and control implementations not only have to accommodate the needs of those who use them, but must also be economically attractive to the organisation. To realise this, there are three factors we need to consider: motivation, design, and culture.
Breaking a security policy is, at heart, a form of cheating, so let’s look at a study that asked participants to solve 20 simple maths problems and promised 50 cents for each correct answer.
The participants were allowed to check their own answers and then shred the answer sheet, leaving no evidence of any potential cheating. The results demonstrated that participants reported solving, on average, five more problems than under conditions where cheating was not possible (i.e. control conditions).
The researchers then introduced David – a student tasked with raising his hand shortly after the experiment began and proclaiming that he had solved all the problems. The other participants were understandably shocked: it was clearly impossible to solve every problem in only a few minutes. The experimenter, however, didn’t question his integrity and simply suggested that David shred his answer sheet and take all the money from the envelope.
Interestingly, the other participants’ behaviour adapted as a result: they reported solving, on average, eight more problems than under control conditions.
Much like the broken windows theory mentioned in my previous blog, this demonstrates that unethical behaviour is contagious, as are acts of non-compliance. If employees in a company witness other people breaking security policies and not being punished, they are tempted to do the same. It becomes socially acceptable and normal. This is the root cause of poor security culture.
The good news is that the opposite holds true as well. That’s why security culture has to have strong senior management support. Leading by example is the key to changing the perception of security in the company: if employees see that the leadership team takes security seriously, they will follow.
So, security professionals should focus on how security is perceived. In The Social Animal, David Brooks outlines the decision-making process in three basic steps:
- People perceive a situation.
- People estimate whether the action is in their long-term interest.
- People use willpower to take action.
He claims that, historically, people were mostly focused on the last two steps of this process. In the previous blog I argued that relying solely on willpower has a limited effect. Willpower can be exercised like a muscle, but it is also prone to atrophy.
As for the second step of the decision-making process, one might assume that reminding people of the potential negative consequences of an action would stop them from taking it. Yet Brooks points to HIV/AIDS awareness campaigns that focused only on the negative consequences and ultimately failed to change people’s behaviour.
He also suggests that most diets fail because willpower and reason are not strong enough to confront impulsive desires: “You can tell people not to eat the French fry. You can give them pamphlets about the risks of obesity … In their nonhungry state, most people will vow not to eat it. But when their hungry self rises, their well-intentioned self fades, and they eat the French fry”.
This doesn’t only apply to dieting: when people want to get their job done and security gets in the way, they will circumvent it, regardless of the degree of risk they might expose the company to.
That is why perception is the cornerstone of the decision-making process. Employees have to be taught to perceive security violations in a way that minimises the temptation to break policies.
In ‘Strangers to Ourselves’, Timothy Wilson claims, “One of the most enduring lessons of social psychology is that behaviour change often precedes changes in attitudes and feelings”.
Security professionals should understand that there is no single event that alters users’ behaviour – changing security culture requires regular reinforcement, creating and sustaining habits.
Charles Duhigg, in his book The Power of Habit, tells the story of Paul O’Neill, the CEO of the Aluminum Company of America (Alcoa), who was determined to make his enterprise the safest in the country. At first, people were confused that the newly appointed executive was not talking about profit margins or other financial metrics. They didn’t see the link between his ‘zero-injuries’ goal and the company’s performance. Yet Alcoa’s profits reached a historic high within a year of his announcement. By the time O’Neill retired, the company’s annual income was five times greater than it had been before his arrival, and it had become one of the safest companies in the world.
Duhigg explains this phenomenon by highlighting the importance of the “keystone habit”. Alcoa’s CEO identified safety as such a habit and focused solely on it.
O’Neill had a challenging goal to transform the company, but he couldn’t just tell people to change their behaviour. He said, “that’s not how the brain works. So I decided I was going to start by focusing on one thing. If I could start disrupting the habits around one thing, it would spread throughout the entire company.”
He recalled an incident when one of his workers died trying to fix a machine despite the safety procedures and warning signs. The CEO called an emergency meeting to understand what had caused this tragic event.
He took personal responsibility for the worker’s death, identifying numerous shortcomings in safety education. For example, the training programme didn’t make clear that employees wouldn’t be blamed for machinery failure, or that they shouldn’t start repair work without first finding a manager.
As a result, the policies were updated and the employees were encouraged to suggest safety improvements. Workers, however, went a step further and started suggesting business improvements as well. Changing their behaviour around safety led to some innovative solutions, enhanced communication and increased profits for the company.
Security professionals should understand the importance of group dynamics and social influence in building an effective security culture.
They should also remember that just as ‘broken windows’ encourage policy violations, changing one security habit can encourage better behaviour across the board.
 Francesca Gino, Shahar Ayal and Dan Ariely, “Contagion and Differentiation in Unethical Behavior: The Effect of One Bad Apple on the Barrel”, Psychological Science, 20(3), 2009, 393–398.
 David Brooks, The Social Animal: The Hidden Sources of Love, Character, and Achievement, Random House, 2011.
 Timothy Wilson, Strangers to Ourselves, Harvard University Press, 2004, 212.
 Charles Duhigg, The Power of Habit: Why We Do What We Do and How to Change, Random House, 2013.
Demonstrating to employees that security is there to make their life easier, not harder, is the first step in developing a sound security culture. But before we discuss the actual steps to improve it, let’s first understand the root causes of poor security culture.
Security professionals must understand that bad habits and behaviours tend to be contagious. Malcolm Gladwell, in his book The Tipping Point, discusses the conditions that allow some ideas or behaviours to “spread like viruses”. He refers to the broken windows theory to illustrate the power of context. The theory holds that stopping small crimes and maintaining the environment prevents bigger ones: a broken window left unrepaired for several days signals a lack of care and attention, which in turn suggests that crime will go unpunished, inviting further vandalism.
Gladwell describes the efforts of George Kelling, who employed the theory to fight vandalism on the New York City subway system. Kelling argued that cleaning up graffiti on the trains would prevent further vandalism. Gladwell concluded that this several-year-long effort resulted in a dramatically reduced crime rate.
Despite ongoing debate regarding the causes of the 1990s crime rate reduction in the US, the broken windows theory can be applied in an information security context.
Security professionals should remember that minor policy violations tend to lead to bigger ones, eroding the company’s security culture.
The psychology of human behaviour should be considered as well
Sometimes people are not motivated to comply with a security policy because they simply don’t see the financial impact of violating it.
Dan Ariely, in his book The Honest Truth about Dishonesty, tries to understand why people break the rules. Among other experiments, he describes a survey conducted among golf players to determine the conditions in which they would be tempted to move the ball into a more advantageous position, and if so, which method they would choose. The golfers were offered three different options: they could use their club, use their shoe or simply pick the ball up using their hands.
Although all of these options break the rules, the survey was designed this way to determine whether one method of cheating is more psychologically acceptable than the others. The results demonstrated that moving the ball with a club was the most common choice, followed by the shoe and, finally, the hand. It turns out that physical and psychological distance from the ‘immoral’ action makes people more likely to act dishonestly.
It is important to understand that the ‘distance’ described in this experiment is merely psychological. It doesn’t change the nature of the action.
In a security context, employees will usually be reluctant to steal confidential information outright, just as golfers refrain from picking up the ball with their hand to move it to a more favourable position, because that would make them directly involved in the unethical behaviour. However, employees might download a peer-to-peer file-sharing application to listen to music at work, because the impact of this action is less obvious. Yet it can lead to far bigger losses if confidential information is stolen from the corporate network as a result.
Security professionals can use this finding to remind employees of the true meaning of their actions. Breaking security policy does not seem to have a direct financial impact on the company – there is usually no perceived loss, so it is easy for employees to engage in such behaviour. Highlighting this link and demonstrating the correlation between policy violations and the business’s ability to generate revenue could help employees understand the consequences of non-compliance.
 Malcolm Gladwell, The Tipping Point: How Little Things Can Make a Big Difference, Little, Brown, 2006.
 Dan Ariely, The Honest Truth about Dishonesty, Harper, 2013.
Incorporating security activities into the natural workflow of productive tasks makes it easier for people to adopt new technologies and ways of working, but it’s not necessarily enough to guarantee that you’ll be able to solve a particular security-usability issue. The reason for this is that such problems can be categorised as wicked.
Writing in Policy Sciences, Rittel and Webber define a wicked problem in the context of social policy planning as one that is challenging – if not impossible – to solve because of missing, poorly defined or inconsistent stakeholder requirements, which may shift over time and for which an optimal solution is hard to pin down.
One cannot apply traditional methods to solving a wicked problem; a creative solution must be sought instead. One of these creative solutions could be to apply design thinking techniques.
Methods for design thinking include performing situational analysis, interviewing, creating user profiles, looking at other existing solutions, creating prototypes and mind mapping.
Plattner, Meinel and Leifer in ‘Design Thinking: Understand–Improve–Apply’ assert that there are four rules to design thinking, which can help security professionals better approach wicked problems:
- The human rule: all design activity is ultimately social in nature.
- The ambiguity rule: design thinkers must preserve ambiguity.
- The redesign rule: all design is redesign.
- The tangibility rule: making ideas tangible always facilitates communication.
Security professionals should adopt these rules in order to develop secure and usable controls: engaging people, drawing on existing solutions and creating prototypes that allow feedback to be collected.
Although this enables the design of better security controls, the design thinking rules rarely provide insight into why an existing mechanism is failing.
When a problem occurs, we naturally tend to focus on the symptoms instead of identifying the root cause. In Toyota Production System: Beyond Large-Scale Production, Taiichi Ohno describes the Five Whys technique, a systematic problem-solving tool used in the Toyota production system to get to the heart of a problem.
Ohno provides the following example of applying this technique when a machine stopped functioning:
- Why did the machine stop? There was an overload and the fuse blew.
- Why was there an overload? The bearing was not sufficiently lubricated.
- Why was it not lubricated sufficiently? The lubrication pump was not pumping sufficiently.
- Why was it not pumping sufficiently? The shaft of the pump was worn and rattling.
- Why was the shaft worn out? There was no strainer attached and metal scrap got in.
Instead of focusing on the first reason for the malfunction – i.e. replacing the fuse or the pump shaft – repeating ‘why’ five times helps to uncover the underlying issue and prevent the problem from resurfacing in the near future.
Eric Ries, who adapted this technique to starting up a business in his book The Lean Startup, points out that “at the root of every seemingly technical problem is actually a human problem”.
As in Ohno’s example, the root cause turned out to be human error (an employee forgetting to attach a strainer), rather than a technical fault (a blown fuse), as was initially suspected. This is typical of most problems that security professionals face, no matter which industry they are in.
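To make the chain concrete, here is a minimal Python sketch of the technique (the `FiveWhys` class and its method names are illustrative, not from Ohno’s book): it records each why/because pair from the example above and reports the last answer as the root cause.

```python
from dataclasses import dataclass, field


@dataclass
class FiveWhys:
    """Record a chain of why/because pairs for a single problem."""
    problem: str
    chain: list[tuple[str, str]] = field(default_factory=list)

    def ask(self, why: str, because: str) -> "FiveWhys":
        # Append one question/answer pair and allow chaining.
        self.chain.append((why, because))
        return self

    def root_cause(self) -> str:
        # The answer to the final 'why' is treated as the root cause.
        return self.chain[-1][1] if self.chain else self.problem


analysis = (
    FiveWhys("The machine stopped.")
    .ask("Why did the machine stop?", "There was an overload and the fuse blew.")
    .ask("Why was there an overload?", "The bearing was not sufficiently lubricated.")
    .ask("Why was it not lubricated?", "The lubrication pump was not pumping sufficiently.")
    .ask("Why was it not pumping sufficiently?", "The pump shaft was worn and rattling.")
    .ask("Why was the shaft worn?", "There was no strainer attached and metal scrap got in.")
)
print(analysis.root_cause())  # the missing strainer, not the blown fuse
```

The value is not in the code itself but in the discipline it encodes: each answer becomes the next question, so the analysis can’t stop at the first plausible fix.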
These techniques can help to address the core of the issue and build systems that are both usable and secure. This is not easy to achieve due to the nature of the problem. But, once implemented, such mechanisms can significantly improve the security culture in organisations.
 Horst W. J. Rittel and Melvin M. Webber, “Dilemmas in a General Theory of Planning”, Policy Sciences, 4, 1973, 155–169.
 Hasso Plattner, Christoph Meinel and Larry J. Leifer, eds., Design Thinking: Understand–Improve–Apply, Springer Science & Business Media, 2010.
 Taiichi Ohno, Toyota Production System: Beyond Large-Scale Production, Productivity Press, 1988.
 Eric Ries, The Lean Startup, Crown Business, 2011.
Many employees find information security secondary to their normal day-to-day work, often leaving their organisation vulnerable to cyber attacks, particularly if they are stressed or tired. Leron Zinatullin, the author of The Psychology of Information Security, looks at the opportunities available to prevent such cognitive depletion.
When users perform tasks that comply with their own mental models (i.e. the way that they view the world and how they expect it to work), the activities present less of a cognitive challenge than those that work against these models.
If people can apply their previous knowledge and experience to a problem, less energy is required to solve it in a secure manner and they are less mentally depleted by the end of the day.
For example, research on disk sanitisation highlighted the importance of removing files securely from a hard disk. Many users don’t realise that emptying the ‘Recycle Bin’ is insufficient and that deleted files can easily be recovered. There are, however, software products that exploit users’ mental models: they employ a ‘shredding’ analogy to indicate that files are being removed securely, echoing the familiar act of shredding paper documents. Such an interface design can help lighten the burden on users.
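The ‘shredding’ metaphor also maps onto a real operation: overwriting a file’s contents before deleting it. Below is a deliberately naive Python sketch of that idea (the function and file names are hypothetical); on SSDs, journaling or copy-on-write filesystems old copies of the data may survive, which is why dedicated sanitisation tools go much further.

```python
import os
import secrets


def naive_shred(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes, then delete it.

    A simplified illustration only: wear levelling on SSDs and
    copy-on-write filesystems can preserve old copies of the data.
    """
    size = os.path.getsize(path)
    chunk = 1024 * 1024  # overwrite in 1 MiB chunks
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk, remaining)
                f.write(secrets.token_bytes(n))
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to the disk
    os.remove(path)


naive_shred("draft_contract.tmp")  # hypothetical file
```

A good interface hides all of this behind the shredder icon; the point is that the metaphor and the underlying action line up.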
Security professionals should pay attention to the usability of security mechanisms, aligning them with users’ existing mental models.
In The Laws of Simplicity, John Maeda underlines the value of making design more user-friendly by relating it to an existing experience. He refers to the example of the desktop metaphor introduced by Xerox researchers in the 1980s. People were able to relate to the graphical computer interface in a way they couldn’t to the command line: they could manipulate objects much as they did on a physical desk, storing and categorising files in folders, moving or renaming them, or deleting them by placing them in the recycle bin.
Using mental models
Building on existing mental models makes it easier for people to adopt new technologies and ways of working. However, such mappings must take cultural background into consideration. The metaphor might not work if it is not part of the existing mental model. For instance, Apple Macintosh’s original trash icon was impossible to recognise in Japan, where users were not accustomed to metallic bins of this kind.
Good interface design not only lightens the burden on users but can also complement security. Traditionally, it has been assumed that security and usability always contradict each other – that security makes things more complicated, while usability aims to improve the user experience. In reality, they can support each other by defining constructive and destructive activities. Effective design should make constructive activities simple to perform while hindering destructive ones.
This can be achieved by incorporating security activities into the natural workflow of productive tasks, which requires the involvement of security professionals early in the design process. Security and usability shouldn’t be extra features introduced as an afterthought once the system has been developed, but an integral part of the design from the beginning.
Security professionals can provide input into the design process through methods such as iterative or participatory design. In iterative design, each development cycle is followed by testing and evaluation; participatory design ensures that key stakeholders, including security professionals, have an opportunity to be involved.
 S. L. Garfinkel and A. Shelat, “Remembrance of Data Passed: A Study of Disk Sanitization Practices”, IEEE Security & Privacy, 1, 2003, 17–27.
 John Maeda, The Laws of Simplicity, MIT Press, 2006.
 For iterative design see J. Nielsen, “Iterative User Interface Design”, IEEE Computer, 26(11) (1993), 32–41; for participatory design see D. Schuler and A. Namioka, Participatory Design: Principles and Practices, CRC Press, 1993.
Information security can often be a secondary consideration for many employees, which leaves their company vulnerable to cyber attacks. Leron Zinatullin, author of The Psychology of Information Security, discusses how organisations can address this.
First, security professionals should understand that people’s cognitive resources are limited, and that people tend to struggle to make effective decisions when they are tired.
To test the validity of this argument, psychologists designed an experiment in which they divided participants into two groups: the first group was asked to memorise a two-digit number (e.g. 54) and the second was asked to remember a seven-digit number (e.g. 4509672). The participants were then asked to go down the hall to another room to collect their reward for participating. This payment, however, could only be received if the number was recalled correctly.
While they were making their way down the corridor, the participants encountered another experimenter, who offered them either fruit or chocolate. They were told that they could collect their chosen snack after they finished the experiment, but they had to make a decision there and then.
The results demonstrated that people who were given the easier task of remembering a two-digit number mostly chose the healthy option, while people overburdened by the more challenging task of recalling a longer string of digits succumbed to the more gratifying chocolate.
The implications of these findings, however, are not limited to dieting. One study looked at how judges’ decisions varied when considering inmates for parole at different stages of the day.
The default position was to reject parole. In the mornings and just after lunch, judges had the cognitive capacity and energy to fully consider the details of a case and make an informed decision, and so granted parole more frequently. Later in the day they tended to reject parole far more often, which is believed to be due to the mental strain they endure throughout the day: they simply ran out of energy and defaulted to the safest option.
How can this be applied to the information security context?
Security professionals should bear in mind that when people are stressed at work, making difficult decisions and performing demanding tasks, they get tired. This might affect their ability or willingness to maintain compliance. In a corporate context, this cognitive depletion may result in staff defaulting to core business activities at the expense of secondary security tasks.
To be implemented effectively, security mechanisms must be aligned with people’s primary tasks, factoring in the individual’s perspective, knowledge and awareness, and taking a modern, flexible and adaptable approach to information security. The aim should be to correct the misunderstandings and misconceptions that lead to non-compliant behaviour, because, in the end, people are a company’s best asset.
 B. Shiv and A. Fedorikhin, “Heart and Mind in Conflict: The Interplay of Affect and Cognition in Consumer Decision Making”, Journal of Consumer Research, 1999, 278–292.
 Shai Danziger, Jonathan Levav and Liora Avnaim-Pesso, “Extraneous Factors in Judicial Decisions”, Proceedings of the National Academy of Sciences, 108(17), 2011, 6889–6892.
The majority of employees within an organisation are hired to execute specific jobs, such as marketing, managing projects, manufacturing goods or overseeing financial investments. Their main – sometimes only – priority is to complete their core business activity efficiently, so information security will usually be a secondary consideration at best. Consequently, employees are reluctant to invest more than a limited amount of time and effort in a secondary task that they rarely understand and from which they perceive no benefit.
Research suggests that when security mechanisms cause additional work, employees will favour non-compliant behaviour in order to complete their primary tasks quickly.
Security managers are often unaware of the burden that security mechanisms impose on employees, because it is assumed that users can easily accommodate the effort that compliance requires. In reality, employees tend to experience a negative impact on their performance because cumbersome security mechanisms drain both their time and their effort. From their perspective, the risk mitigation achieved through compliance is not worth the disruption to their productivity. In extreme cases, the more urgent the delivery of the primary task, the more appealing and justifiable non-compliance becomes, regardless of employees’ awareness of the risks.
When security mechanisms hinder or significantly slow down employees’ performance, they will cut corners, and reorganise and adjust their primary tasks in order to avoid them. This seems to be particularly prevalent in file sharing, especially when users are restricted by permissions, by data storage or transfer allowance, and by time-consuming protocols. People will usually work around the security mechanisms and resort to the readily available commercial alternatives, which may be insecure. From the employee’s perspective, the consequences of not completing a primary task are severe, as opposed to the ‘potential’ consequences of the risk associated with breaching security policies.
If organisations continue to set equally high goals for both security and business productivity, they are essentially leaving it up to their employees to resolve potential conflicts between them. Employees will focus most of their time and effort on carrying out their primary tasks efficiently and in a timely manner, which means that their target will be to maximise their own benefit, as opposed to the company’s. It is therefore vital for organisations to find a balance between both security and productivity, because when they fail to do so, they lead – or even force – their employees to resort to non-compliant behaviour. When companies are unable to recognise and correct security mechanisms and policies that affect performance and when they exclusively reward their employees for productivity, not for security, they are effectively enabling and reinforcing non-compliant decision-making on behalf of the employees.
Employees will only comply with security policies if they are motivated to do so: they must have the perception that compliant behaviour results in personal gain. People must be given the tools and the means to understand the potential risks associated with their roles, as well as the benefits of compliant behaviour, both to themselves and to the organisation. Once they are equipped with this information and awareness, they must be trusted to make their own decisions that can serve to mitigate risks at the organisational level.
 Iacovos Kirlappos, Adam Beautement and M. Angela Sasse, “‘Comply or Die’ Is Dead: Long Live Security-Aware Principal Agents”, in Financial Cryptography and Data Security, Springer, 2013, 70–82.
 Leron Zinatullin, The Psychology of Information Security, IT Governance Publishing, 2016.