User Experience Design


Here’s a collection of courses designed to further your knowledge in user experience design. Happy learning!


The root causes of a poor security culture within the workplace


Demonstrating to employees that security is there to make their life easier, not harder, is the first step in developing a sound security culture. But before we discuss the actual steps to improve it, let’s first understand the root causes of poor security culture.

Security professionals must understand that bad habits and behaviours tend to be contagious. Malcolm Gladwell, in his book The Tipping Point,[1] discusses the conditions that allow some ideas or behaviours to “spread like viruses”. He refers to the broken windows theory to illustrate the power of context. The theory holds that tackling small crimes and visibly maintaining the environment prevents bigger ones: a broken window left unrepaired for several days invites further vandalism, because the small defect signals a lack of care and attention on the property, which in turn implies that crime will go unpunished.

Gladwell describes the efforts of George Kelling, who employed the theory to fight vandalism on the New York City subway system. He argued that cleaning up graffiti on the trains would prevent further vandalism. Gladwell concluded that this several-year-long effort resulted in a dramatically reduced crime rate.

Despite ongoing debate regarding the causes of the 1990s crime rate reduction in the US, the broken windows theory can be applied in an information security context.

Security professionals should remember that minor policy violations tend to lead to bigger ones, eroding the company’s security culture.

The psychology of human behaviour should be considered as well

Sometimes people are not motivated to comply with a security policy because they simply don’t see the financial impact of violating it.

Dan Ariely, in his book The Honest Truth about Dishonesty,[2] tries to understand why people break the rules. Among other experiments, he describes a survey conducted among golf players to determine the conditions in which they would be tempted to move the ball into a more advantageous position, and if so, which method they would choose. The golfers were offered three different options: they could use their club, use their shoe or simply pick the ball up using their hands.

Although all of these options break the rules, they were designed this way to determine whether one method of cheating is more psychologically acceptable than the others. The results showed that moving the ball with a club was the most common choice, followed by the shoe and, finally, the hand. It turns out that putting physical and psychological distance between ourselves and an ‘immoral’ action makes us more likely to act dishonestly.

It is important to understand that the ‘distance’ described in this experiment is merely psychological. It doesn’t change the nature of the action.

In a security context, employees will usually be reluctant to steal confidential information outright, just as golfers refrain from picking the ball up with their hand, because that would make them directly complicit in the unethical behaviour. However, employees might happily download a peer-to-peer file-sharing application to listen to music at work, because the impact of that action is far less obvious – even though it can expose the corporate network and lead to much greater losses of confidential information.

Security professionals can use this finding to remind employees of the true meaning of their actions. Breaking security policy does not seem to have a direct financial impact on the company – there is usually no perceived loss, so it is easy for employees to engage in such behaviour. Highlighting this link and demonstrating the correlation between policy violations and the business’s ability to generate revenue could help employees understand the consequences of non-compliance.

References:

[1] Malcolm Gladwell, The Tipping Point: How Little Things Can Make a Big Difference, Little, Brown, 2006.

[2] Dan Ariely, The Honest Truth about Dishonesty, Harper, 2013.

Image by txmx 2 https://flic.kr/p/pFqvpD

To find out more about the behaviours behind information security, read Leron’s book, The Psychology of Information Security. Twitter: @le_rond


‘Wicked’ problems in information security


Incorporating security activities into the natural workflow of productive tasks makes it easier for people to adopt new technologies and ways of working, but it is not necessarily enough to guarantee that you will be able to solve a particular security–usability issue. The reason is that such problems can be categorised as ‘wicked’.

Rittel and Webber, writing in Policy Sciences, define a wicked problem in the context of social policy planning as one that is difficult – if not impossible – to solve because of missing, poorly defined or inconsistent stakeholder requirements, which may also change over time, making it hard to identify an optimal solution.[1]

One cannot apply traditional methods to solving a wicked problem; a creative solution must be sought instead. One of these creative solutions could be to apply design thinking techniques.

Methods for design thinking include performing situational analysis, interviewing, creating user profiles, looking at other existing solutions, creating prototypes and mind mapping.

Plattner, Meinel and Leifer in ‘Design Thinking: Understand–Improve–Apply’ assert that there are four rules to design thinking, which can help security professionals better approach wicked problems:[2]

  1. The human rule: all design activity is ultimately social in nature.
  2. The ambiguity rule: design thinkers must preserve ambiguity.
  3. The redesign rule: all design is redesign.
  4. The tangibility rule: making ideas tangible always facilitates communication.

Security professionals should adopt these rules in order to develop secure and usable controls, by engaging people, utilising existing solutions and creating prototypes that can help by allowing the collection of feedback.

Although this enables the design of better security controls, the design thinking rules rarely provide an insight into why the existing mechanism is failing.

When a problem occurs, we naturally tend to focus on the symptoms instead of identifying the root cause. In ‘Toyota Production System: Beyond Large-Scale Production’, Taiichi Ohno developed the Five Whys technique, which was used in the Toyota production system as a systematic problem-solving tool to get to the heart of the problem.

In one of his books, Ohno provides the following example of applying this technique when a machine stopped functioning:[3]

  1. Why did the machine stop? There was an overload and the fuse blew.
  2. Why was there an overload? The bearing was not sufficiently lubricated.
  3. Why was it not lubricated sufficiently? The lubrication pump was not pumping sufficiently.
  4. Why was it not pumping sufficiently? The shaft of the pump was worn and rattling.
  5. Why was the shaft worn out? There was no strainer attached and metal scrap got in.

Instead of focusing on resolving the first reason for the malfunction – i.e. replacing the fuse or the pump shaft – repeating ‘why’ five times can help to uncover the underlying issue and prevent the problem from resurfacing again in the near future.
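Ohno’s five questions above can be sketched as a simple chain of cause and effect. The toy function below (purely illustrative, not part of any real root-cause-analysis tooling) walks the chain, printing each question–answer pair, and returns the last answer as the root cause:

```python
# A minimal sketch of the Five Whys technique, using Ohno's machine
# example from the text. Each step pairs a "why" question with the
# answer that becomes the subject of the next question; the final
# answer in the chain is treated as the root cause.

CHAIN = [
    ("Why did the machine stop?",
     "There was an overload and the fuse blew."),
    ("Why was there an overload?",
     "The bearing was not sufficiently lubricated."),
    ("Why was it not lubricated sufficiently?",
     "The lubrication pump was not pumping sufficiently."),
    ("Why was it not pumping sufficiently?",
     "The shaft of the pump was worn and rattling."),
    ("Why was the shaft worn out?",
     "There was no strainer attached and metal scrap got in."),
]

def root_cause(chain):
    """Walk the why/answer chain and return the final, underlying cause."""
    for i, (question, answer) in enumerate(chain, start=1):
        print(f"{i}. {question} {answer}")
    return chain[-1][1]
```

Notice that fixing any intermediate answer (replacing the fuse, rebuilding the pump) leaves the last link intact, so the failure would recur.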

Eric Ries, who adapted this technique to starting up a business in his book The Lean Startup,[4] points out that “at the root of every seemingly technical problem is actually a human problem”.

As in Ohno’s example, the root cause turned out to be human error (an employee forgetting to attach a strainer), rather than a technical fault (a blown fuse), as was initially suspected. This is typical of most problems that security professionals face, no matter which industry they are in.

These techniques can help to address the core of the issue and build systems that are both usable and secure. This is not easy to achieve due to the nature of the problem. But, once implemented, such mechanisms can significantly improve the security culture in organisations.

 

References:

[1] Horst W. J. Rittel and Melvin M. Webber, “Dilemmas in a General Theory of Planning”, Policy Sciences, 4, 1973, 155–169.

[2] Hasso Plattner, Christoph Meinel and Larry J. Leifer, eds., Design Thinking: Understand–Improve–Apply, Springer Science & Business Media, 2010.

[3] Taiichi Ohno, Toyota Production System: Beyond Large-Scale Production, Productivity Press, 1988.

[4] Eric Ries, The Lean Startup, Crown Business, 2011.

Image by Paloma Baytelman https://www.flickr.com/photos/palomabaytelman/10299945186/in/photostream/

To find out more about the psychology behind information security, read Leron’s book, The Psychology of Information Security. Twitter: @le_rond


Security and Usability

Many employees find information security secondary to their normal day-to-day work, often leaving their organisation vulnerable to cyber attacks, particularly if they are stressed or tired. Leron Zinatullin, the author of The Psychology of Information Security, looks at the opportunities available to prevent such cognitive depletion.


When users perform tasks that comply with their own mental models (i.e. the way that they view the world and how they expect it to work), the activities present less of a cognitive challenge than those that work against these models.

If people can apply their previous knowledge and experience to a problem, less energy is required to solve it in a secure manner and they are less mentally depleted by the end of the day.

For example, a piece of research on disk sanitisation highlighted the importance of secure file removal from the hard disk.[1] It is not clear to users that emptying the ‘Recycle Bin’ is insufficient and that files can easily be recovered. However, there are software products available that exploit users’ mental models: they employ a ‘shredding’ analogy to indicate that files are being removed securely, echoing the familiar act of shredding paper documents. Such an interface design might help lighten the burden on users.
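To illustrate why emptying the ‘Recycle Bin’ is not enough, here is a minimal sketch of the overwrite-before-delete approach that such ‘shredding’ tools are built on. The function name and pass count are assumptions for illustration, and the caveat in the docstring matters: on journaling file systems, SSDs and backed-up volumes, overwriting in place does not guarantee the original blocks are destroyed.

```python
import os
import secrets

def shred(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes before deleting it.

    Illustrative only: on journaling file systems, SSDs and backed-up
    volumes, copies of the original data may still survive elsewhere.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwritten bytes to disk
    os.remove(path)
```

The point of the ‘shredder’ interface is that users do not need to understand any of this; the metaphor carries the meaning for them.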

Security professionals should pay attention to the usability of security mechanisms, aligning them with users’ existing mental models.

In The Laws of Simplicity,[2] John Maeda supports the importance of making design more user-friendly by relating it to an existing experience. He refers to an example of the desktop metaphor introduced by Xerox researchers in the 1980s. People were able to relate to the graphical computer interface as opposed to the command line. They could manipulate objects similarly to the way they did with a physical desk: storing and categorising files in folders, as well as moving or renaming them, or deleting them by placing them in the recycle bin.

Using mental models

Building on existing mental models makes it easier for people to adopt new technologies and ways of working. However, such mappings must take cultural background into consideration. The metaphor might not work if it is not part of the existing mental model. For instance, Apple Macintosh’s original trash icon was impossible to recognise in Japan, where users were not accustomed to metallic bins of this kind.

Good interface design not only lightens the burden on users but can also complement security. Traditionally, it has been assumed that security and usability always contradict each other – that security makes things more complicated, while usability aims to improve the user experience. In reality, they can support each other by defining constructive and destructive activities. Effective design should make constructive activities simple to perform while hindering destructive ones.

This can be achieved by incorporating security activities into the natural workflow of productive tasks, which requires the involvement of security professionals early in the design process. Security and usability shouldn’t be extra features introduced as an afterthought once the system has been developed, but an integral part of the design from the beginning.

Security professionals can provide input into the design process via several methods such as iterative or participatory design.[3] The iterative method consists of each development cycle being followed by testing and evaluation and the participatory method ensures that key stakeholders, including security professionals, have an opportunity to be involved.

References:

[1] S. L. Garfinkel and A. Shelat, “Remembrance of Data Passed: A Study of Disk Sanitization Practices”, IEEE Security & Privacy, 1, 2003, 17–27.

[2] John Maeda, The Laws of Simplicity, MIT Press, 2006.

[3] For iterative design see J. Nielsen, “Iterative User Interface Design”, IEEE Computer, 26(11) (1993), 32–41; for participatory design see D. Schuler and A. Namioka, Participatory Design: Principles and Practices, CRC Press, 1993.

Image by Rosenfeld Media https://www.flickr.com/photos/rosenfeldmedia/2141071329/


To find out more about the psychology behind information security, read Leron’s book, The Psychology of Information Security. Twitter: @le_rond


How employees react to security policies


Information security can often be a secondary consideration for many employees, which leaves their company vulnerable to cyber attacks. Leron Zinatullin, author of The Psychology of Information Security, discusses how organisations can address this.

First, security professionals should understand that people’s resources are limited. Moreover, people tend to struggle with making effective decisions when they are tired.

To test the validity of this argument, psychologists designed an experiment in which they divided participants into two groups: the first group was asked to memorise a two-digit number (e.g. 54) and the second was asked to remember a seven-digit number (e.g. 4509672).[1] They then asked the participants to go down the hall to another room to collect their reward for participating. This payment, however, could only be received if the number was recalled correctly.

While they were making their way down the corridor, the participants encountered another experimenter, who offered them either fruit or chocolate. They were told that they could collect their chosen snack after they finished the experiment, but they had to make a decision there and then.

The results demonstrated that people who were given the easier task of remembering a two-digit number mostly chose the healthy option, while people overburdened by the more challenging task of recalling a longer string of digits succumbed to the more gratifying chocolate.

The implications of these findings, however, are not limited to dieting. A study looked at the decision-making patterns that can be observed in the behaviour of judges when considering inmates for parole during different stages of the day.[2]

Despite the default position being to reject parole, judges had more cognitive capacity and energy to fully consider the details of the case and make an informed decision in the mornings and after lunch, resulting in more frequently granted paroles. In the evenings, judges tended to reject parole far more frequently, which is believed to be due to the mental strain they endure throughout the day. They simply ran out of energy and defaulted to the safest option.

How can this be applied to the information security context?

Security professionals should bear in mind that when people are stressed at work – making difficult decisions and performing demanding, productive tasks – they get tired, and this can affect their ability or willingness to remain compliant. In a corporate context, this cognitive depletion may result in staff defaulting to core business activities at the expense of secondary security tasks.

To ensure effective implementation, security mechanisms must be aligned with individuals’ primary tasks, factoring in each person’s perspective, knowledge and awareness, and supported by a modern, flexible and adaptable approach to information security. The aim should be to correct the misunderstandings and misconceptions that result in non-compliant behaviour, because, in the end, people are a company’s best asset.

References:

[1] B. Shiv and A. Fedorikhin, “Heart and Mind in Conflict: The Interplay of Affect and Cognition in Consumer Decision Making”, Journal of Consumer Research, 1999, 278–292.

[2] Shai Danziger, Jonathan Levav and Liora Avnaim-Pesso, “Extraneous Factors in Judicial Decisions”, Proceedings of the National Academy of Sciences, 108(17), 2011, 6889–6892.

Photo by CrossfitPaleoDietFitnessClasses https://www.flickr.com/photos/crossfitpaleodietfitnessclasses/8205162689

To find out more about the psychology behind information security, read Leron’s book, The Psychology of Information Security. Twitter: @le_rond


Secure by design


Have you seen security controls implemented just to comply with legal and regulatory requirements? Just like this fence. I’m sure it will pass all the audits: it functions as designed, it blocks the path (at least on paper) and it is bright yellow, just as specified in the documentation. But is it fit for purpose?

It turns out that many security problems arise from this eager drive to comply: if the regulator requires a fence, a fence will be added!

Sometimes controls are introduced late, when the project is well past the design stage, and by then they may no longer align with the real world.


Safety measures, unfortunately, are no exception. Occasionally the solution is simply poorly designed; more often, though, safety requirements are bolted on late, and the resulting implementation is not fit for purpose.


The same holds for privacy. Privacy professionals encourage organisations to adopt the Privacy by Design principle – yet it, too, is often considered only after the fact.



Password Policies: Security vs Productivity

A password policy can include a number of parameters. Let’s examine them from both security and productivity perspectives:

  • Minimum password length defines how many characters a password must contain. The longer the password, the more resistant it is to a brute-force attack, provided other password best practices are followed. Longer passwords, however, are usually harder to remember, which may lead to people writing them down.
  • Password complexity. The more a password mixes upper- and lowercase letters, numbers and special characters, the harder it is to run a dictionary attack against it. As with long passwords, complex passwords are usually harder to remember.
  • A password renewal policy ensures that users change their passwords regularly, minimising the window in which a compromised password remains useful. Although beneficial from a security perspective, it forces users to keep inventing new passwords that satisfy the complexity requirements.
  • A password history policy prevents users from setting passwords they have used before, ensuring that a previously compromised password cannot be reused. Combined with regular renewal, however, it adds to the burden of inventing memorable passwords.
  • Locking out a user’s account after a number of incorrect password attempts is a strong countermeasure against brute-force attacks, since the attacker can no longer try every possible combination with specialised software. From a usability perspective, however, legitimate users mistype their passwords too, and being locked out may increase calls to the company’s help desk and the time spent on manual password resets.
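The trade-offs above can be made concrete with a small validation sketch. Everything here – the function name, the thresholds and the exact character classes – is an assumption chosen for illustration, not taken from any particular standard or product:

```python
import re

MIN_LENGTH = 12          # minimum password length
HISTORY_DEPTH = 5        # how many previous passwords may not be reused
MAX_FAILED_ATTEMPTS = 5  # lockout threshold

def check_new_password(candidate, previous_passwords, failed_attempts=0):
    """Return a list of policy violations (empty if the password passes)."""
    violations = []
    if failed_attempts >= MAX_FAILED_ATTEMPTS:
        violations.append("account locked after too many failed attempts")
    if len(candidate) < MIN_LENGTH:
        violations.append(f"shorter than {MIN_LENGTH} characters")
    if not (re.search(r"[a-z]", candidate)
            and re.search(r"[A-Z]", candidate)
            and re.search(r"\d", candidate)
            and re.search(r"[^A-Za-z0-9]", candidate)):
        violations.append("must mix upper/lowercase letters, digits and symbols")
    if candidate in previous_passwords[-HISTORY_DEPTH:]:
        violations.append(f"matches one of the last {HISTORY_DEPTH} passwords")
    return violations
```

Every rule that tightens security here also adds a way for a legitimate user to fail – which is exactly the tension the bullet points describe.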

Password complexity and usability explained in one comic.