How to plan and deliver benefits on an information security project

Benefits

Major changes introduced by security projects might be seen as necessary evils that deliver no value to the business. To change this perception, a project manager should proactively manage benefits and make sure they are achievable and verifiable.

The key objective of benefits management is to ensure that benefits are identified, defined, and linked to the company’s business strategy.

Realistic planning of benefits is the first step towards project success. It is, however, an ongoing activity and requires many iterations. To drive the realisation of benefits, the following template can be used to capture potential benefits and measure their impact on the organisation:

  • Benefit
  • Expected benefit outcome
  • Benefit type
  • Where will the benefit occur?
  • Who will be affected?


Teaching Information Security Concepts at KPMG


I delivered a 1.5-day Information Security Concepts course at KPMG UK.

We covered a wide range of topics, including information security risk management, access control, and threat and vulnerability management.

According to the feedback I received after the course, the participants were able to understand the core security concepts much better and, more importantly, apply their knowledge in practice.

“Leron is very engaging and interesting to listen to.”
“Leron has the knowledge and he’s very effective making simple delivery of a complex topic.”
“Leron is an effective communicator and explained everything that he was instructing on in a clear and concise manner.”

I will continue to collaborate with the Learning and Development team to deliver this course to all new joiners to the Information Protection and Business Resilience team at KPMG.

Productive Security


Let’s see how some security controls might affect human behaviour in a company.

  • Restricting software installation on computers follows one of the main principles of information security – the principle of least privilege. That way a security manager can make sure that employees don’t install unnecessary programs which may contain vulnerabilities that a potential attacker could exploit. There are instances, however, when a user may require a piece of software to perform their work. Failure to install it quickly and easily may result in unnecessary delays.
  • Restricting access to file sharing websites helps to make sure that a company is not in violation of the data privacy regulation and users don’t store sensitive information in the insecure locations. However, it is important for a company to provide an easy-to-use, secure alternative to enable the business.
  • Restricting access to CD/DVD and USB flash drives. Personal USB flash drives can be a source of malware which users can introduce to the corporate network. Restricting access to CD/DVD and USB flash drives not only helps to prevent this threat, but also limits the possibility of sensitive data leaks. It is important to understand the core business processes in a company to make a decision on restricting the access. Sometimes drawbacks of such a policy may overshadow all possible benefits.
  • Regular full antivirus checks help to make sure that employees’ workstations are free from malware. However, the process of scanning a computer for viruses may take up a lot of resources and slow down the machine, with a possible impact on productivity.
  • Awareness training can be a powerful measure to protect against a wide range of security threats, including social engineering (e.g. phishing). However, research shows that blanket awareness campaigns are ineffective and a better approach is needed to address this issue.
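The trade-off in the first bullet above can be sketched in a few lines of Python. This is an illustrative allowlist check, not any real endpoint-management product; the package names and the escalation path are hypothetical:

```python
# Hypothetical allowlist check illustrating the principle of least privilege:
# installs are approved only for pre-vetted software; anything else is
# escalated rather than silently blocked, so legitimate work is not delayed.
APPROVED_SOFTWARE = {"libreoffice", "7zip", "putty"}  # example entries only

def can_install(package_name: str, approved=APPROVED_SOFTWARE) -> bool:
    """Return True if the requested package is on the allowlist."""
    return package_name.lower() in approved

def request_install(package_name: str) -> str:
    """Approve the install immediately, or route it to security for review."""
    if can_install(package_name):
        return "installed"
    return "escalated for security review"
```

The point of the `request_install` branch is the one the bullet makes: a fast, lightweight escalation path for unlisted software is what stops the control from turning into an obstacle that users will simply work around.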


Thom Langford: Security risk is just one of the many types of risks a business faces on a day-to-day basis

Interview with Thom Langford, Director of security risk management

Could we start with your personal story: your beginnings and how you got to where you are?

I was always interested in computers. My first computer was a Sinclair Spectrum 48K. I’ve always had a technology fascination. I got very much into this during school and university, and my first job was as a VAX/VMS operator, running overnight batch jobs. It was a physically tiring job, as we had to print 70,000 to 100,000 pages overnight for delivery to the client, on a 24-hour shift system, which taught me how to work under pressure. I then got into PCs in a big way, and moved from supporting Autodesk CAD products, to being an IT manager for a small systems integration company in Swindon. When the company was bought out by Coopers and Lybrand and subsequently merged with Price Waterhouse, to become PwC, I became known as a “builder of things”. I built a retail solutions centre, both the technology and the physical environments, from the ground up.

I subsequently built a client showcase development centre in Heathrow, a fast-track product delivery centre in London, and was also doing client work in Swansea building an innovation centre. Again, this included building both the IT as well as the physical environment: buildings, walls, the electrics and the soft furnishings, everything, basically.

I then moved to Sapient as an IT and facilities manager, which was a bit of an odd combination, although a natural move given my previous experience. I was doing that job for a number of years, initially for London and then for our global offices, when I noticed a gap in our capabilities around security, disaster-recovery and business continuity. I then spoke to one of our C level executives, and he agreed. He broadened the scope somewhat further and then asked me to start 10 days later. So it was a very rapid move for me into security. Even though I had already had a strong background in physical and IT security, this was a very different world for me. I tried to get qualified very quickly, which is something that is very difficult when you have little to no budget, which happens when you start mid-year.  So I basically begged, borrowed and stole everything. We brought together a team and got a CISO on board and that’s basically where we are today. Right now I am the acting CISO. I am responsible for teams based out of India and North America, working to strengthen our security posture both internally as well as to the industry.

You are responsible for risk management. What is your view on risk management in general? How do you think is your view different from others, if at all?

I think everybody has a view on risk management, and it is not always a good one. Traditionally, risks are seen as bad things that have to be removed: they never change, and the same risks are going to be there all the time. This was, at least, my perspective in the beginning. Everything is static and you live in the world of Excel spreadsheets: you list your risks in them, you list what you are going to do about them, how you are going to measure them, and then you decide whether you’ve fixed them or not. Nobody was able to tell me whether a risk was acceptable or otherwise. This was basically as far as I saw my responsibility: to act as the conscience of the company, because that was my job. That attitude has changed for me a lot in the last four years. If you act as the conscience of the business, the business will be stifled quite dramatically by your security implementations. Actually, all you are doing is reducing its ability to work effectively, because you don’t see the big picture of how the business operates.

Security risk is just one of the many types of risk a business faces on a day-to-day basis: socioeconomic, financial, geopolitical, legal, personnel – everything has to be taken into account. To say that a business cannot carry out an activity based on one aspect or one facet of risk is, I think, entirely the wrong thing to do. You should act more as an enabler and become more of a yes person than a no person.

When identifying risks you will probably need the help of different stakeholders. How do you identify these different stakeholders? How do you manage the relationship with them? How do you get people to speak up?

Risk in security is just one facet of risk in any business. So any enterprise should have a risk committee composed of a delivery group, a legal group, a financial group, and so on. As long as you are measuring your risks in the same way, whether in ordinal numbers or any format that makes sense, those risks will be filtered as they rise up through the organisation. So if you have, for instance, 1,000 security risks on your risk register, only a single-digit number of risks should be reaching the very top of the organisation. Any more than that is an indication of people not being empowered enough to deal with risks as they emerge. Not everybody in the organisation will be able to address a given risk, and so it needs to be escalated.

Escalation is not a bad thing: it’s about getting to people who are better qualified, more capable, or have more authority to deal with something than you do. Not because you are incapable, but because they are in a better position than you to do so. So from the thousands that arise at the very bottom level, only a few will reach the higher levels, where they can be better dealt with.

As far as stakeholder management goes, senior-level stakeholders only see the very tip of the iceberg, which makes them much easier to deal with. As long as they empower everyone else and can be sure people have the tools to deal with the bulk of the risks, you don’t have to manage a vast spreadsheet covering every single case. By empowering everybody in the organisation, it is easy for them to see why it is important to deal with risks directly. If the people at the top don’t want to deal with the items that reach them, they delegate them back down to somebody else, in which case they are still dealt with in the end.

So filtering is one approach, which is about empowering people at various levels of the organisation to recognise and deal with the risks as they feel appropriate and qualified to do so.

There are two main trains of thought in information security, namely compliance-based and risk-based. What’s your approach, and why do you think it is more beneficial?

I think compliance is extremely useful, but it is not the be-all and end-all. Let’s say that you are using ISO 27001, for example, where measuring risk is a core part of it. But if all you are trying to do is get the certification, you’re only engaging in security theatre. You’re only doing what is required to make the auditor happy: you are ticking things off and writing procedures, but nobody really knows anything about it. Nobody is paying any actual attention to it, apart from that one day when you make sure that the right people are in the right office, the auditor has that long lunch, and so on. So it’s a start, but it is not the way to go.

A proper risk-based approach, on the other hand, will make that conversation continue way beyond the initial compliance. It’s a bit of an old argument that “compliance doesn’t equal security”, though it can, if taken in the right sense and with the right approach. But all too often, organisations will stop at compliance and not continue with real risk-based security. An example of that is a risk register that is only looked at once a year: that is compliance. A risk register should be looked at on a regular basis to check whether risks have changed, likelihoods have changed, exploits have changed, or risk appetites have changed within the organisation. If it becomes a living and breathing document, then you are looking more at a risk-based approach to security. If it’s just a mechanical once-a-year, tick-tick-tick format, then you are in a compliance environment.
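The difference between a once-a-year register and a living one can be made concrete with a simple recency check. The field names, the likelihood-times-impact scoring, and the 90-day threshold below are hypothetical, chosen only for illustration:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RiskEntry:
    """One row of an illustrative risk register (fields are hypothetical)."""
    name: str
    likelihood: int          # e.g. 1 (rare) to 5 (almost certain)
    impact: int              # e.g. 1 (negligible) to 5 (severe)
    last_reviewed: date

    @property
    def score(self) -> int:
        # A simple ordinal rating, as mentioned in the interview.
        return self.likelihood * self.impact

def stale_entries(register, today, max_age_days=90):
    """Return entries not reviewed within max_age_days.

    A register full of stale entries is the 'tick-tick-tick' compliance
    pattern; a register with none is the living, breathing document.
    """
    cutoff = today - timedelta(days=max_age_days)
    return [r for r in register if r.last_reviewed < cutoff]
```

Running `stale_entries` as part of a regular review cycle is one way to check that likelihoods, impacts and appetites are actually being revisited rather than rubber-stamped annually.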

What should companies do in order for them to shift from this traditional compliance approach to the risk-based one?

I think that it is about coming back to understanding what are the benefits of security and the objectives of the business. If you can connect the benefits of your security program to the ability for a company to sell more of its products, to safely enter riskier markets (because they are able to handle their data more securely), to give confidence to their clients, to bring confidence to the industry (or to whatever regulatory body that looks after them), then that’s when you can actually get more done as a result of your security program. If you are just doing security for security’s sake, we go back to being just a conscience again.

So it’s about connecting your security programme to the goals of the business. If you haven’t even read your company’s annual report, how do you know what your security programme is supporting? If you haven’t attended a shareholder meeting or an earnings call, you can’t really know what you are doing; you only have to do this a few times to get your bearings. If you don’t understand what the core purpose of the business is, how can you align your security with it? It’s like IT handing out computers with Linux and OpenOffice when the company actually needs Windows with Microsoft Office. Linux and OpenOffice are perfectly acceptable in themselves, but the choice is not aligned with the business’s needs, which probably include cross-compatibility and other functions that only Microsoft Office provides. If you don’t know what the business needs from security, you need to find out: talk, listen, read, whatever it takes to find out what it is that the business needs from you.

Let’s say that you are assigned as security manager within a company. What are the first things that you would do in your first weeks?

You need to talk: you talk as broadly and as highly as you need to understand where you are standing and what is required. Talk to as many people as possible. For instance, if you are in a manufacturing plant, you start by talking to the people on the shop floor and see how they operate. Talk to the shift leaders and the managers there. If you are in a consultancy, start by talking to the programme directors, to the business development people and to the partners. It doesn’t matter where you are: start talking from the ground upwards, so you actually understand what it is they do and how they do it, what they need and what they know.

These conversations might be very short, or you might run into people who don’t know much, in which case you are starting with a blank slate and you can bring your own influence onto them. If the floor leader tells you that smokers are leaving the shop doors open to go have their cigarette break, well, that’s a problem you have already identified. It’s a small issue, but potentially important. If you start solving their problems, perceived or otherwise, then you start to build fanatical advocates for security.

If you understand that the CFO’s primary goal is to ensure that he’s able to get reports and the payroll out on a monthly basis, then you can start focusing more on the integrity and availability of the data. You can then prioritize disaster recovery and business continuity, so that they have the confidence that what you are doing is helping them do their job more easily and they are able to sleep at night. If your CFO is staying awake the night before payday because he’s not sure if his Oracle systems are going to stay up and running overnight, then that’s a problem you can fix. So you need to communicate, talk and listen (in fact, listen twice as much as you talk, because you’ve got two ears and one mouth) and find out what people’s problems are, perceived or otherwise.

Password Policies: Security vs Productivity

A password policy can include a number of parameters. Let’s examine them from both security and productivity perspectives:

  • Minimum password length defines how many characters a password must contain. The longer the password, the more resistant it is to a brute-force attack, provided other password best practices are followed. Longer passwords, however, are usually harder to remember, which may lead to users writing them down.
  • Password complexity. A password that mixes upper- and lowercase characters with numbers and special characters is much harder to crack with a dictionary attack. As with long passwords, complex passwords are usually harder to remember.
  • Password renewal ensures that users regularly change their passwords. This helps to minimise the potential security impact of a compromised password. Although this policy is beneficial from the security perspective, users may struggle to come up with new passwords that satisfy the security requirements.
  • Password history restrictions prevent users from setting passwords they have used before. This forces them to come up with new passwords, ensuring that a previously compromised password is not reused. Again, users may struggle to invent new passwords that satisfy the security requirements.
  • Locking out a user’s account after a number of wrong password attempts is a strong measure against a brute-force attack: the attacker is unable to try all possible combinations using specialised software. From the usability perspective, however, legitimate users may also enter their passwords incorrectly and be unable to access the system. This may result in an increased number of calls to the company’s Help Desk or increased time spent on manual password resets.
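The parameters above can be combined into a single policy checker. The sketch below is illustrative, not a production implementation: real systems compare salted hashes rather than plaintext password history, and the thresholds (12 characters, 5 remembered passwords, 5 attempts, 15-minute window) are hypothetical:

```python
import re
from datetime import datetime, timedelta

class PasswordPolicy:
    """Illustrative checker for the policy parameters discussed above."""

    def __init__(self, min_length=12, history_size=5,
                 max_attempts=5, lockout_minutes=15):
        self.min_length = min_length
        self.history_size = history_size      # how many old passwords to block
        self.max_attempts = max_attempts      # failed logins before lockout
        self.lockout = timedelta(minutes=lockout_minutes)
        self._failed = {}                     # username -> (count, first_failure)

    def validate_new_password(self, password, previous_passwords):
        """Return a list of violations; an empty list means the password passes."""
        problems = []
        if len(password) < self.min_length:
            problems.append(f"shorter than {self.min_length} characters")
        if not (re.search(r"[a-z]", password) and re.search(r"[A-Z]", password)
                and re.search(r"\d", password) and re.search(r"[^\w\s]", password)):
            problems.append("must mix upper/lowercase, digits and symbols")
        if password in previous_passwords[-self.history_size:]:
            problems.append("reuses a recent password")
        return problems

    def record_failed_login(self, user, now=None):
        """Count a failed attempt; return True once the account should lock."""
        now = now or datetime.utcnow()
        count, first = self._failed.get(user, (0, now))
        count += 1
        self._failed[user] = (count, first)
        return count >= self.max_attempts and now - first < self.lockout
```

Each `if` branch maps to one bullet: length, complexity, and history checks on the way in, and lockout counting on the way back. The returned violation list is also where the usability cost shows up, since every message is a hurdle a legitimate user has to clear.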

Password complexity and usability explained in one comic.

Delivering a Seminar at the London Metropolitan University


I was invited to give a talk on industrial systems security at the London Metropolitan University.

The seminar was intended for academic staff to discuss current problems in this field. We managed to cover a broad range of issues regarding embedded devices, networks, and IT infrastructure in general.

The professors shared their perspectives on the subject, which resulted in the identification of several research opportunities in this area.


Delivering a Seminar at the IT Security & Computer Forensics Pedagogy Workshop


I presented at the HEA STEM Workshop on human aspects of information security.

The aim of the workshop was to share, disseminate and stimulate discussion on the pedagogy of teaching subjects related to IT security and computer forensics, and on issues relating to employability and research in these areas.

During the workshop the speakers presented topics that focus on: delivery of innovative practical tutorials, workshops and case studies; course design issues; demand for skills and employment opportunities; countering the “point & click” approach linked to vendor supplied training in industry; and current research exploring antivirus deployment strategies.

Modern security professionals while fighting cyber threats also have to take human behaviour into account

In today’s corporations, information security managers have a lot on their plate. While facing major and constantly evolving cyber threats, they must comply with numerous laws and regulations, protect the company’s assets, and mitigate risks as best they can. To address this, they formulate policies to establish practices that avoid these dangers. They must then communicate the desired behavior to employees so that they adapt and everything can go according to plan. But is this always the case?

Security managers often find that what they put on paper is only half of the story. Getting the corporation to “cooperate” and follow the policy all the time can be far more challenging than it seems. So why do employees seem to be so reluctant?

Are we even asking the right question here?

The correct question is: do security managers know what imposing new rules means to the average employee within the company?

People’s behavior is goal-driven. If processes are imposed on them, people will usually follow them, as long as they still allow them to achieve their goals. If they come across situations where they are under pressure, or they encounter obstacles, people will cut corners, break rules and violate policies.

So why should the behavior of a corporation’s employees be an exception? They will usually follow the rules willingly while trying to comply with the security policy, but, at the end of the day, their objective is simply to get their work done.

Yes, there are cases of employees with the malicious goal of intentionally violating security policies, but research shows that policy violations most likely result from control implementations that prevent people from performing their tasks.

What happens to an organization when honest workers can’t achieve their goals because of poorly implemented security controls? What happens on the security manager’s end and on the employees’ end that leads to this scenario? A short survey I performed in 2013 shows that there is a huge gap between the employees’ and the security managers’ perceptions of security policies; and it’s this discrepancy that negatively impacts the organization as a whole. Security managers, on their side, assume that they have made all the relevant considerations pertaining to the needs of the employees. In fact, they rarely speak directly to the employees to familiarize themselves with their tasks, their needs, and their goals. It is therefore common to hear employees complain about how security controls hinder or impede their performance.

Let’s consider the following scenario:

In an investment bank, a security manager comes up with a policy document, outlining a list of authorized software which can be installed on computers, according to the principle of least privilege: people can only have the access they require to perform their day-to-day activities and no more. All employees are denied access to install any new software without written permission from the security manager.

John is writing a report for a client. The deadline is fast approaching but he still has a lot of work ahead of him. The night before the deadline, John realizes that in order to finish his work, he requires a special data analysis program which was not included in the list of authorized software. He is also unable to install it on his workstation, because he doesn’t have the required privileges. Getting formal written approval from the security manager is not feasible, because it would take too long. John decides to copy the sensitive information required for the analysis onto his personal computer, using a flash drive, to finish the work at home, where he can install any software he wants. He understands the risk, but he also wants to get the job done in order to avoid missing the deadline and to get a good performance review. Unfortunately, he leaves the bag with the flash drive in the taxi on the way home. He never tells anyone about the incident, to avoid embarrassment or a reprimand.

The security manager in this scenario clearly failed to recognize the employee’s needs before implementing the controls.

A general rule of thumb to never forget is that employees will most likely work around the security controls to get their work done regardless of the risks this might pose, because they value their main business activities more than compliance with security policies.

To address this, security managers should analyse security controls in a given context in order to identify clashes, and resolve potential conflicts by adjusting the policy. They should also communicate the value of security accordingly. Scaring people and imposing sanctions might not be the best approach. They should instead demonstrate to employees that complying with security policies contributes to the efficient operation of the business. Not only does security ensure the confidentiality and integrity of information, it also makes sure that the resources employees need to complete their primary tasks remain available.

Employees need to understand that security is important for achieving the company’s goals, not something that gets in the way. To achieve this, the culture of the organisation must change.

Javvad Malik: One of the biggest challenges that companies are facing is securing at the same rate of innovation

Interview with Javvad Malik – Senior Analyst at 451 Research and blogger at http://www.J4vv4D.com


Could you start by telling us about yourself?

My first proper job, during the work placement year of my degree, was as an IT security administrator at NatWest Bank – a job which, to be honest, I had no idea about. Actually, very few people knew what it was. But as a student doing a degree in Business Information Systems, I needed to specialise in something, so I took the job to see if I could make any sense of the field. I figured that the bank was a huge company, and if things didn’t work out in IT security, I could always explore opportunities in other departments.

Back in the day, there were around seven people in the security operations team for the whole bank, and only three in the monitoring team, with whom we had only intermittent communication. NatWest was then acquired by RBS and I remained in IT security for the next five years, during which I moved to the project side of security, as opposed to the operations side. I had more interactions with the internal consultancy team, and their job appealed to me because they didn’t seem to need to keep so up to date with all the latest technologies from a hands-on perspective, and they made more money. I was unable to make an internal move, so I decided to get into contracting and stayed within financial services, where the majority of my roles involved arguing with auditors, resolving issues through internal consulting, being the middle man between the business and pen-testers, project reviews, and the like.

On the side, I got very interested in blogging. Blogs were the fantastic new boom: readily accessible and cheap for everybody. Suddenly everybody with a blog felt like a professional writer, which I enjoyed, but I found it a difficult area in which to differentiate yourself or bring a unique perspective. I then tried video blogging, which I discovered was bloody hard, because it takes a lot of skill to look like a professional instead of an idiot most of the time. But because I was among the first to get into this type of delivery, my profile rose quite quickly within the security community, and perhaps beyond it. One of the advantages of video blogging that I uncovered is that people who watch you can somehow relate to you better than if they just read your work: they can see your body language, hear your voice, your tone, everything. The result is quite funny, because it often happens that when I go to a conference, somebody will greet me as if I’m their best friend. Because they see me so often on YouTube, they feel like they know me. It’s very nice when people acknowledge you like that, and it goes to show that the delivery channel really has that impact.

So because of this impact, one day, Wendy, the research director at 451 Research, asked me if I would be interested in becoming an analyst. In reality I had no idea what an analyst did. She said that I would have to speak to vendors and write about them, which sounded a lot like blogging to me. She immediately said, “yes, it is pretty much like blogging,” to which I then replied, “well, I have my demands. I do video blogging, I’d like to attend and speak at conferences and I don’t want any restrictions here, because I know that many companies impose restrictions around this kind of activity.”

Currently I’ve been an analyst for the past two years, which I have enjoyed very much and has allowed me to broaden my skillset; not to mention give me the opportunity to meet a ton of extremely talented people.

Where do you predict the security field will go?

When I was starting out in the field, nobody really knew what security was. Then came the perception that it was all about hackers working from their mums’ basements. Then security people were assumed to be IT specialists, and then specialists who didn’t necessarily know much about IT but knew more about risk and/or had a government background – and now everyone is just confused.

Security itself is very broad. It is kind of like medicine: you have GPs who know a little bit about everything, which is the base level of knowledge. For complex cases they will refer you to other doctors who specialise in, say, blood, heart, eyes, ears, and other specific body parts. The same applies to security. You will have some broad generalists and others who are technical experts or those who are more into security development and can tell you how to use code more securely.  You then have non-technical security people, who know more about understanding the business, the risk, and how to implement security into it. You also get product or technology specific experts who are only there to maybe tune your SIEMs for you, forensics experts, incident-response specialists, and so on. You will find specialists with overlapping skills, just as you will find those who possess unique abilities as well. Security has exploded “sideways” like that. So you can call lots of people “security experts” but in reality they are very different from each other, which means that they are not necessarily interchangeable. You can’t, obviously, switch a non-technical person for a technical one. I believe that one of the signs of immaturity within the industry is that people still don’t recognize these differences, which often leads to lots of finger-pointing in situations like: “you don’t know how to code, how can you call yourself a security professional? You don’t understand what the business does. You’ll never be a security professional.” These kinds of things, I think, are the natural growing pains of this and any industry.

What will probably happen going forward is that as things become increasingly interconnected and peoples’ whole lives more and more online, you will have more and more of a visibility of security. Additionally, we will see the need to extend the capabilities outside of the enterprise into the consumer space. We are already seeing an overlap between personal and corporate devices. So I think that everything will kind of bleed into everything else: some areas will become operationalised, others will be commoditised, but I think that there will continuously be a need for security that will always have to be there. What that will look like will probably be different to what we see today.

What kind of challenges do you think companies will face in the future in terms of security?

One of the biggest challenges that companies are facing is securing at the same rate as they innovate. Every company wants to be the first one to develop a new way to hook in with their customers, whether that means being the first with a new app that enables consumers to do banking, or payments and inter-payments, and so on – and this sometimes comes at the cost of security. Balancing the business case between the perceived benefits and the security risks can be very challenging. The speed at which businesses want and need to innovate, because that is what the market is forcing them to do, is making security cost-prohibitive.

The other challenge is that the business model for many companies lies almost exclusively in advertising revenue. Nearly every mobile app, social media site, or other online service that is free typically generates either its primary or supplementary revenue by selling user information. With so many companies trying to grab data and sell it to the highest bidder, we face a big challenge in educating users about the security risks they run, as well as in trying to enforce good security practices within the vendor space without breaking business models.

How would you say companies should then approach this challenge in the first place?

The way that companies typically “solve” this challenge is by burying their head in the sand and outsourcing the problem. So they will go out to another company and ask them: “can you offer us a secure platform to do it?” To which they answer, “of course we can. Just give us your money.” The challenge is that companies and individuals don’t appreciate that poor security choices made today may have an impact that will not be immediately felt, but perhaps in a few months’ or years’ time. Sadly, by then, it’s usually too late. So this is what both companies and individuals need to be careful about.

Returning to the point about security professionals being very diverse, what’s the role of security professionals from the governance, risk, and compliance perspective? Can you elaborate more on the security culture within a company and how it can be developed?

Security culture is a very difficult thing: it is not impossible to build, but it relies on understanding human behaviour more than technical aspects. Understanding human behaviour means understanding personality types and how they respond to different environments and stimuli, which can be more challenging than understanding technical aspects.

The general observation that I can make about human behaviour, regardless of personality type, is that people don’t tend to be aware of what they are giving up. The best and most prevalent example is how much in demand mobile apps are and how insecure they are, because people unknowingly give away lots of data in order to use them. Chris Eng from Veracode makes an excellent analogy by saying that “people usually don’t care what they are agreeing to as long as they can still fling birds against pigs.” This is the crux of it. People don’t think it makes much of a difference if they give their email address away, or if they let an app access their GPS data or their contacts, because they can’t perceive a direct impact. The problem is that this impact might not be felt until ten years from now. So if you are giving data to Facebook, Instagram, and WhatsApp, for example, you can’t really predict what will happen later on. In the last year Facebook acquired both Instagram and WhatsApp. So now you have a single company that holds all the photo data you maybe didn’t want on Facebook, along with all the stats on your behaviour that you’ve been feeding to Facebook, along with the people you are chatting to, and so on. Facebook now has an incredible amount of information about you and can target and market a lot better. Someone could also use all this data for any other purpose. I’m not saying that Facebook or other companies gather users’ personal data for malicious purposes, but it reminds me of the saying, “The road to hell is paved with good intentions.”

How can you make people change their behaviour?

You have to make it real and personal for them. You have to make that personal connection. In security we tend to say: “we have 50,000 phishing emails that come through every day, and people click on them.” But to the individual user, that doesn’t have much of an impact. Are we making this information personal? The communication methods and techniques needed to change behaviour already exist; we don’t need to reinvent them, especially not with security people who don’t understand how communication works or who are not the best communicators to begin with.

We can remember how 15-20 years ago, nobody cared about recycling, because nobody really cared about the environment. It was just a few people in Greenpeace with long hair and who smelled a bit funny who were trying to stop the oil companies from drilling into the sea, for example. Now, you go into any office and you find 10 bins for every different type of recycling material, which everybody now uses. It’s been a long-term campaign which finally created that social change, and which now makes it unacceptable for people to behave in another way. As you walk on the street, you will see that very few people, if any, throw wrappers on the floor. They usually hold onto them until they get to a bin and then they dispose of them. We need to adopt the same practices to change behaviour in security and in many cases that means actually letting people who know how to market and communicate do that for us instead of trying to do it all ourselves.

Risks to Risk Management

Nassim Taleb, in his book The Black Swan, gives the following examples of the Mirage casino’s four largest losses:

  • $100 million from a tiger mauling
  • Unsuccessful attempt to dynamite casino
  • Neglect in completing tax returns
  • Ransom demand for owner’s kidnapped daughter

How many of these losses could have been identified in advance and managed appropriately?

John Adams in his research Risk, Freedom and Responsibility suggests that “Risk management is not rocket science – it’s much more complicated.” He further elaborates on this point in his research: “The risk manager must […] deal not only with risk perceived through science, but also with virtual risk – risks where the science is inconclusive and people are thus liberated to argue from, and act upon, pre-established beliefs, convictions, prejudices and superstitions.”

According to Adams, there are three types of risk:

[Figure: the three kinds of risk, after Adams]

  • Directly perceptible risks are dealt with using judgment. “One does not undertake a formal, probabilistic, risk assessment before crossing the road.”
  • Risks perceived through science are subject to a formal risk management process. “Here one finds not only biological scientists in lab coats peering through microscopes, but physicists, chemists, engineers, doctors, statisticians, actuaries, epidemiologists and numerous other categories of scientist who have helped us to see risks that are invisible to the naked eye. Collectively they have improved enormously our ability to manage risk – as evidenced by the huge increase in average life spans that has coincided with the rise of science and technology.”
  • Virtual risk is not perceived through science, hence people are forced to act based on their convictions and beliefs. “Such risks may or may not be real, but they have real consequences. In the presence of virtual risk what we believe depends on whom we believe, and whom we believe depends on whom we trust.”

Gary Klein, in Streetlights and Shadows: Searching for the Keys to Adaptive Decision Making, identifies the following issues with risk management:

  • It works best in well-ordered situations
  • Fear of speaking out may result in poor risk identification
  • Organisations should understand that plans do not guarantee success and may result in a false sense of safety
  • Risk Management plans may actually increase risk.

Klein also identifies three risk decision making approaches:

  • Prioritise and reduce
  • Calculate and decide
  • Anticipate and adapt

To illustrate an individual’s decision-making process when dealing with risk, Adams introduces another concept, the “risk thermostat”:

[Figure: Adams’s risk thermostat]

The main idea behind it is that people vary in their propensity to take risks, and this propensity is influenced by their perception of risk, their experience of losses, and the potential rewards.

People tend to overestimate spectacular but rare risks and downplay common ones. Personified risks are also perceived as greater than anonymous risks.

Protection measures can also be introduced merely to increase perceived security, rather than to provide actual protection. One example is the deployment of the National Guard in airports after 9/11 to provide reassurance. Such “security theatre” also has applications in relation to motivation, deception, and economics.

Finally, Adams discusses the phenomenon of risk compensation and the corresponding adjustments that take place in the risk thermostat. He argues that introducing safety measures changes behaviour: seat belts, for example, can save a life in a crash, so people buckle up and then take more risks when driving, leading to more accidents. As a result, the overall number of deaths remains roughly unchanged.
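The feedback idea behind the risk thermostat can be captured in a toy model. The sketch below is purely illustrative and not from Adams: the linear “perceived risk grows with speed” function, the numbers, and the function name are all assumptions made for the example.

```python
# Toy sketch of risk compensation in Adams's "risk thermostat".
# Assumption (not from Adams): perceived risk grows linearly with speed.

def equilibrium_speed(target_risk: float, risk_per_kmh: float) -> float:
    """Speed at which perceived risk equals the driver's personal target."""
    return target_risk / risk_per_kmh

# Baseline: each km/h of speed "feels" like 2 units of risk;
# the driver settles where perceived risk hits their target of 100 units.
before = equilibrium_speed(100.0, 2.0)   # 50.0 km/h

# Seat belts halve the felt risk per km/h, so the driver speeds up
# until the ride feels exactly as risky as before.
after = equilibrium_speed(100.0, 1.0)    # 100.0 km/h

print(f"speed before seat belts: {before} km/h, after: {after} km/h")
```

In this caricature the entire safety gain is absorbed as extra speed: perceived risk at equilibrium is identical before and after the intervention, which is the behavioural adjustment Adams describes.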