Discussing Human Aspects of Information Security

June 11 (1)

I delivered a seminar on the human aspects of information security at the University of West London. We discussed conflicts between security and productivity in companies and possible solutions. Research students with different backgrounds helped to drive the debates around usability, awareness and policy design.

We also talked about the practical applications of behavioural theories, where I shared my views on user monitoring and trust in organisations within the context of security culture.

Daniel, one of the participants, summarised his experience in his blog.


Mo Amin: You can transform technology but how do you transform people?

Mo Amin – Information Security Professional


Can you please tell us a little bit about your background?

Long ago in a galaxy far far…oh ok…ok…Just like a lot of people in IT I got asked the same question “My PC has died, can you help me?” When you say yes to one person it’s a downward spiral…and before you know it you’re THE computer guy! Even now I (depending on my mood) will help out. So this was my first real experience in building rapport with clients, charging for my time and to a certain extent being held accountable for the service I provided.

It taught me a lot and was a catalyst in helping me to land my first role in desktop support. I was part of a small team, which allowed me to get involved in some network and application support too. Whilst doing my day role I was involved in a couple of investigations, which got me interested in information security, and through a few lucky breaks I slowly moved into the field. I’ve been lucky enough to have worked in a number of areas ranging from operational security through to consultancy. However, I’ve always intrinsically enjoyed the awareness and education side of things.

What is it that you are working on at the moment?

I am working with Kai Roer of The Roer Group to help develop the Security Culture Framework. Essentially, the framework aims to help organisations to build a security culture within their business, as opposed to simply relying on topic-based security awareness. Making sure that organisations begin to build a security culture into their business is something I believe in strongly, so when Kai asked if I’d like to help I was more than happy.

Let’s talk for a moment about information security in general. What do you think are the biggest challenges that companies are facing at the moment?

I think that one of the biggest challenges is educating staff on the risks that the business faces and getting people to understand and relate to why it is that we are asking them to adopt secure practices. The problem revolves around changing the attitude and overall culture of an organisation. In my humble opinion, this is the biggest challenge. The difficulty lies in changing behaviour because you can change technology but how do you positively change the behaviour of people?

What is your approach or proposed solution to this challenge? What should companies do?

I’ve always learned by seeing something in action or by actually doing it. Obviously, within the context of a busy organisation this isn’t easy to do. However, as information security practitioners, professionals or however we label ourselves, we need to be more creative in our attempts to help those that we work with – we need to make awareness more engaging. I think it’s important to have workshops or sessions in breakout areas where staff can come along and see how quickly weak passwords are cracked, or what can happen if you click on that dodgy but enticing-looking attachment. It’s about visualising and personalising threats for people. For example, if you plan your awareness programme carefully you could map your corporate security messages onto the home environment and provide your staff with a “Top 10 of do’s and don’ts”. Make it creative and engaging, and the messages you give for their home environment they will begin to bring back to the office.
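A minimal sketch of the kind of breakout-area password demo described above, with an invented wordlist and target password; it simply shows an unsalted weak password falling to a tiny dictionary attack in a fraction of a second:

import hashlib
import time

# Illustrative demo only: the wordlist and password below are invented.
wordlist = ["123456", "password", "qwerty", "letmein", "summer2014", "monkey"]
target_hash = hashlib.md5(b"letmein").hexdigest()  # unsalted MD5 of a weak password

start = time.time()
for candidate in wordlist:
    if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
        print(f"Cracked '{candidate}' in {time.time() - start:.6f} seconds")
        break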

Lots of companies offer security awareness training, which doesn’t seem to have much of an impact. What do you think about this training? Should it be changed in some way in terms of targeting, or accounting for individuals’ particular needs, or focusing on behaviour?

The problem is that most of this is simply topic-based awareness, in that it’s not seeking to change behaviour. There seems to be a lot of generic content that applies to everyone in an organisation. Sadly, this is a tick-box exercise for the purposes of compliance. Awareness should be unique to your organisation, where you cater for different personality types as best you can. Some people actually like reading policies whereas others prefer visual aids, so the ways that individuals learn need to be better understood. The process of educating your staff should be a sustained and measured programme; it needs to be strategic in its outlook.

What about communication?

Better engagement with the business is what we need to be doing. Our relationships with the likes of legal, HR, finance, marketing and PR should be on an everyday basis, not only when we actually need their expertise. These departments usually already have the respect of the business. Information security needs to be seen in the same light.

How do you identify the relevant stakeholders and establish communication with them, and further propagate the whole process of communication within the organisation?

Grab a copy of the organisation chart and start from there. Your job is to introduce yourself to everyone. In my experience doing this over a coffee really helps and preferably not in a meeting room, because it is better to create a new business relationship in a social context, wherein the other person gets to understand you, firstly as a human being and secondly as a work colleague. Most importantly, do this at the beginning and not two months down the line. Building relationships at the very beginning increases your chances of being in the position of asking for last minute favours and paves the path for easier collaboration, as opposed to having to ask for people’s help when they don’t even know you. Usually people are open and honest. They may have a negative image of information security, not because they don’t like you, but most likely because of the interaction they’ve had in the past.

So let’s say that you have joined a new organisation that has a very negative preconception of information security because of a bad previous experience. Once you have already identified all the key people you have to work with, how do you fight this negative perception?

You need to find out what was done previously and why the outcome was negative in the first place. Once you’ve established the actual problem, you have to defuse the situation. You need to be positive and open; even simple things like walking around and talking to people help – show your face. Visit different departments and admit any failings; you need to do a PR and marketing exercise. In a previous role I’ve actually said:

“I know what went wrong the last time, I know we screwed up. I want to ask you what you want to see from the information security department from now on.”

People are ready to engage if you are; be personable and be professional. It’s surprising how much positive and usable feedback you actually get.

The majority of the time, people will tell you,

“I just want to be able to do my job without security getting in the way”.

Once you have these sorts of conversations going you begin to understand how the business actually functions on a day-to-day basis. It’s at this stage where you can be influential and change perception.

Delivering a Seminar at the IT Security & Computer Forensics Pedagogy Workshop


I presented at the HEA STEM Workshop on human aspects of information security.

The aim of the workshop was to share, disseminate and stimulate discussions on the pedagogy of teaching subjects related to IT security and computer forensics, and on issues relating to employability and research in these areas.

During the workshop the speakers presented topics that focused on: delivery of innovative practical tutorials, workshops and case studies; course design issues; demand for skills and employment opportunities; countering the “point & click” approach linked to vendor-supplied training in industry; and current research exploring antivirus deployment strategies.

Modern security professionals, while fighting cyber threats, also have to take human behaviour into account

In today’s corporations, information security managers have a lot on their plate. While facing major and constantly evolving cyber threats, they must comply with numerous laws and regulations, protect the company’s assets, and mitigate risks as well as possible. To address this, they have to formulate policies that establish practices intended to avoid these dangers. They must then communicate the desired behavior to employees so that they adapt and everything can go according to plan. But is this always the case?

Security managers often find that what they put on paper is only half of the story. Getting the corporation to “cooperate” and follow the policy all the time can be far more challenging than it seems. So why do employees seem to be so reluctant?

Are we even asking the right question here?

The correct question is: do security managers know what imposing new rules means to the average employee within the company?

People’s behavior is goal-driven. If processes are imposed on them, people will usually follow them, as long as they still allow them to achieve their goals. If they come across situations where they are under pressure, or they encounter obstacles, people will cut corners, break rules and violate policies.

So why should the behavior of a corporation’s employees be an exception? They will usually follow the rules willingly while trying to comply with the security policy, but, at the end of the day, their objective is simply to get their work done.

Yes, there are cases of employees who have a malicious goal of intentionally violating security policies, but research shows that policy violations most likely result from control implementations that prevent people from performing their tasks.

What happens to an organization when honest workers can’t achieve their goals because of poorly implemented security controls? What happens on the security manager’s end and on the employees’ end that leads to this scenario? A short survey I performed in 2013 shows that there is a huge gap between the employees’ and the security managers’ perceptions of security policies, and it’s this discrepancy that negatively impacts the organization as a whole. Security managers, on their side, assume that they have made all the relevant considerations pertaining to the needs of the employees. However, the fact is that they rarely speak directly to the employees to familiarize themselves with their tasks, their needs, and their goals. It is therefore common to hear employees complain about how security controls hinder or impede their performance.

Let’s consider the following scenario:

In an investment bank, a security manager comes up with a policy document, outlining a list of authorized software which can be installed on computers, according to the principle of least privilege: people can only have the access they require to perform their day-to-day activities and no more. All employees are denied access to install any new software without written permission from the security manager.

John is writing a report for a client. The deadline is fast approaching, but he still has a lot of work ahead of him. The night before the deadline, John realizes that in order to finish his work he requires a special data analysis application which was not included in the list of authorized programs. He is also unable to install it on his workstation, because he doesn’t have the required privileges. Getting formal written approval from the security manager is not feasible, because it would take too long. John decides to copy the sensitive information required for the analysis onto his personal computer, using a flash drive, to finish the work at home, where he can install any software he wants. He understands the risk, but he also wants to get the job done in order to avoid missing the deadline and to get a good performance review. Unfortunately, he leaves his bag with the flash drive in the taxi on the way home. He never tells anyone about this incident, to avoid embarrassment or a reprimand.

The security manager in this scenario clearly failed to recognize the employee’s needs before implementing the controls.

A general rule of thumb to never forget is that employees will most likely work around the security controls to get their work done regardless of the risks this might pose, because they value their main business activities more than compliance with security policies.

To address this, security managers should consider analyzing security controls in a given context in order to identify clashes and resolve potential conflicts by adjusting the policy. They should also communicate the value of security accordingly. Scaring people and imposing sanctions might not be the best approach. They should instead demonstrate to employees that, when they comply with security policies, they contribute to the efficient operation of the business. Not only does security ensure the confidentiality and integrity of information, it also makes sure that the resources employees need to complete their primary tasks remain available.

Employees need to understand that security is important for achieving the company’s goals, not something that gets in the way. To achieve this, the culture of the organisation must change.

Javvad Malik: One of the biggest challenges that companies are facing is securing at the same rate of innovation

Interview with Javvad Malik – Senior Analyst at 451 Research and blogger at http://www.J4vv4D.com


Could you start by telling us about yourself?

My first proper job was during the work placement year of my degree, as an IT security administrator at NatWest Bank, and to be honest I had no idea what the job was about. Actually, very few people knew what it was. But as a student doing a degree in Business Information Systems, I needed to specialise in something, so I went and took this job to see if I could make any sense of this field. I figured that the bank was a huge company and, if things didn’t work out in IT security, I could always explore opportunities in other departments.

Back in the day, there were around seven people in the security operations team for the whole bank, and only three in the monitoring team, with whom we only had intermittent communication. NatWest was then acquired by RBS and I remained in IT security for the next five years, during which I moved more to the project side of security, as opposed to the operations side. I had more interactions with the internal consultancy team and their job appealed to me, because they didn’t seem to need to keep so up to date with all the latest technologies from a hands-on perspective, and they made more money. I was unable to make an internal move, so I decided to get into contracting and stayed within financial services, where the majority of my roles involved arguing with auditors, resolving issues through internal consulting, being the middle-man between the business and pen-testers, project reviews, and the sort.

On the side, I got very interested in blogging. Blogs were the fantastic new boom, readily accessible and cheap for everybody. Suddenly everybody with a blog felt like a professional writer, which I enjoyed, but I found it a difficult area in which to differentiate yourself or bring a unique perspective. I then tried video blogging, which I discovered was bloody hard, because it takes a lot of skill to look like a professional instead of an idiot most of the time. But because I was among the first to get into this type of delivery mode, my profile was raised quite quickly within the security community, and perhaps beyond it. One of the advantages of video blogging that I uncovered was that people who watch you can somehow relate to you better than if they just read your work: they can see your body language, hear your voice, your tone, everything. The result is quite funny, because it often happens to me that when I go to a conference, somebody will greet me as if I’m their best friend. Because they see me so often on YouTube, they feel like they know me. It’s very nice when people acknowledge you like that, and it goes to show that the delivery channel really has that impact.

So because of this impact, one day, Wendy, the research director at 451 Research, asked me if I would be interested in becoming an analyst. In reality I had no idea what an analyst did. She said that I would have to speak to vendors and write about them, which sounded a lot like blogging to me. She immediately said, “yes, it is pretty much like blogging,” to which I then replied, “well, I have my demands. I do video blogging, I’d like to attend and speak at conferences and I don’t want any restrictions here, because I know that many companies impose restrictions around this kind of activity.”

I’ve now been an analyst for the past two years, which I have enjoyed very much and which has allowed me to broaden my skillset, not to mention given me the opportunity to meet a ton of extremely talented people.

Where do you predict the security field will go?

When I was starting in the field, nobody really knew what security was. Then came the perception that it was all about hackers working from their mums’ basements. Then security people were assumed to be IT specialists, and later specialists who didn’t necessarily know much about IT but knew more about risk and/or had a government background, and now everyone is just confused.

Security itself is very broad. It is kind of like medicine: you have GPs who know a little bit about everything, which is the base level of knowledge. For complex cases they will refer you to other doctors who specialise in, say, blood, heart, eyes, ears, and other specific body parts. The same applies to security. You will have some broad generalists and others who are technical experts or those who are more into security development and can tell you how to use code more securely.  You then have non-technical security people, who know more about understanding the business, the risk, and how to implement security into it. You also get product or technology specific experts who are only there to maybe tune your SIEMs for you, forensics experts, incident-response specialists, and so on. You will find specialists with overlapping skills, just as you will find those who possess unique abilities as well. Security has exploded “sideways” like that. So you can call lots of people “security experts” but in reality they are very different from each other, which means that they are not necessarily interchangeable. You can’t, obviously, switch a non-technical person for a technical one. I believe that one of the signs of immaturity within the industry is that people still don’t recognize these differences, which often leads to lots of finger-pointing in situations like: “you don’t know how to code, how can you call yourself a security professional? You don’t understand what the business does. You’ll never be a security professional.” These kinds of things, I think, are the natural growing pains of this and any industry.

What will probably happen going forward is that, as things become increasingly interconnected and people’s whole lives move more and more online, security will have more and more visibility. Additionally, we will see the need to extend capabilities outside of the enterprise into the consumer space. We are already seeing an overlap between personal and corporate devices. So I think that everything will kind of bleed into everything else: some areas will become operationalised, others will be commoditised, but I think there will continuously be a need for security. What that will look like will probably be different to what we see today.

What kind of challenges do you think companies will face in the future in terms of security?

One of the biggest challenges that companies are facing is securing at the same rate of innovation. Every company wants to be the first one to develop a new way to hook in with their customers, whether that is being the first to develop a new app that enables consumers to do banking, or payments and inter-payments, and so on, and this sometimes comes at the cost of security. Balancing the business case between the perceived benefits and the security risks can be very challenging. The speed at which businesses want and need to innovate, because that’s what the market is forcing them to do, is making security cost-prohibitive.

The other challenge is that the business model for many companies lies almost exclusively in advertising revenue. Nearly every mobile app, social media site or other online service that is free is typically generating either its primary or supplementary revenue by selling user information. With so many companies trying to grab data and sell it to the highest bidder, we have a big challenge in educating users about the security risks involved, as well as in trying to enforce good security practices within the vendor space without breaking business models.

How, then, would you say companies should approach this challenge in the first place?

The way that companies typically “solve” this challenge is by burying their head in the sand and outsourcing the problem. So they will go out to another company and ask them: “can you offer us a secure platform to do it?” To which they answer, “of course we can. Just give us your money.” The challenge is that companies and individuals don’t appreciate that poor security choices made today may have an impact that will not be immediately felt, but perhaps in a few months’ or years’ time. Sadly, by then, it’s usually too late. So this is what both companies and individuals need to be careful about.

Returning to the point about security professionals being very diverse, what’s the role of security professionals from the risk governance and compliance perspective? Can you elaborate more on the security culture within a company and how can it be developed?

Security culture is a very difficult thing: it is not impossible, but it relies on understanding human behaviour more than technical aspects. Understanding human behaviour means understanding personality types and how they respond to different environments and stimuli, which can be more challenging than understanding technical aspects.

The general observation that I can make about human behaviour, regardless of personality type, is that people don’t tend to be aware of what they are giving up. The best and most prevalent example would be how much in demand mobile apps are and how insecure they are, because people unknowingly give away lots of data in order to have access to them. Chris Eng from Veracode makes an excellent analogy by saying that “people usually don’t care what they are agreeing to as long as they can still fling birds against pigs.” This is the crux of it. People don’t think it makes much of a difference if they give their email address away, or if they let the app access their GPS data or their contacts, because they can’t perceive a direct impact. The problem is that this impact might not be felt for another ten years. So if you are giving data to Facebook, Instagram and WhatsApp, for example, you can’t really predict what will happen later on. In recent years Facebook has acquired both Instagram and WhatsApp. So now you have a single company that holds all of the photo data that you maybe didn’t want on Facebook, along with all the stats on your behaviour that you’ve been feeding to Facebook, along with the people you are chatting to, and so on. So now Facebook has an incredible amount of information about you and can target and market a lot better. Someone could also use all this data for any purpose. I’m not saying that Facebook or other companies gather users’ personal data for malicious purposes, but it reminds me of the saying, “The path to hell is paved with good intentions.”

How can you make people change their behaviour?

You have to make it real and personal for them. You have to make that personal connection. In security we tend to say: “we have 50,000 phishing emails that come through every day, and people click on them.” But to the individual user, that doesn’t really have much of an impact. Are we making this information personal? The communication methods and techniques that we need to change behaviour already exist; we don’t need to reinvent them, especially not as security people who don’t necessarily understand how communication works or who are not the best communicators to begin with.

We can remember how 15-20 years ago, nobody cared about recycling, because nobody really cared about the environment. It was just a few people in Greenpeace with long hair and who smelled a bit funny who were trying to stop the oil companies from drilling into the sea, for example. Now, you go into any office and you find 10 bins for every different type of recycling material, which everybody now uses. It’s been a long-term campaign which finally created that social change, and which now makes it unacceptable for people to behave in another way. As you walk on the street, you will see that very few people, if any, throw wrappers on the floor. They usually hold onto them until they get to a bin and then they dispose of them. We need to adopt the same practices to change behaviour in security and in many cases that means actually letting people who know how to market and communicate do that for us instead of trying to do it all ourselves.

Risks to Risk Management

Nassim Taleb, in his book The Black Swan, provides the following examples of the Mirage Casino’s four largest losses:

  • $100 million from a tiger mauling
  • Unsuccessful attempt to dynamite casino
  • Neglect in completing tax returns
  • Ransom demand for owner’s kidnapped daughter

How many of these losses could’ve been identified and managed appropriately?

John Adams in his research Risk, Freedom and Responsibility suggests that “Risk management is not rocket science – it’s much more complicated.” He further elaborates on this point in his research: “The risk manager must […] deal not only with risk perceived through science, but also with virtual risk – risks where the science is inconclusive and people are thus liberated to argue from, and act upon, pre-established beliefs, convictions, prejudices and superstitions.”

According to Adams, there are three types of risk:

Figure: three kinds of risk

  • Directly perceptible risks are dealt with using judgment. “One does not undertake a formal, probabilistic, risk assessment before crossing the road.”
  • Risks perceived through science are subject to a formal risk management process. “Here one finds not only biological scientists in lab coats peering through microscopes, but physicists, chemists, engineers, doctors, statisticians, actuaries, epidemiologists and numerous other categories of scientist who have helped us to see risks that are invisible to the naked eye. Collectively they have improved enormously our ability to manage risk – as evidenced by the huge increase in average life spans that has coincided with the rise of science and technology.”
  • Virtual risks are not perceived through science, hence people are forced to act based on their convictions and beliefs. “Such risks may or may not be real, but they have real consequences. In the presence of virtual risk what we believe depends on whom we believe, and whom we believe depends on whom we trust.”

Klein, in his Streetlights and Shadows: Searching for the Keys to Adaptive Decision Making, suggests the following issues with risk management:

  • It works best in well-ordered situations
  • Fear of speaking out may result in poor risk identification
  • Organisations should understand that plans do not guarantee success and may result in a false sense of safety
  • Risk Management plans may actually increase risk.

Klein also identifies three risk decision making approaches:

  • Prioritise and reduce
  • Calculate and decide
  • Anticipate and adapt

To illustrate an individual’s decision-making process when dealing with risk, Adams introduces another concept called the “risk thermostat”:

Figure: the risk thermostat

The main idea behind it is that people vary in their propensity to take risks, and that this propensity is influenced by their perception of risk, their experience of losses, and the potential rewards.
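A minimal sketch of the balancing act the thermostat describes, with invented numbers (an illustration of the idea, not Adams’ own model):

# Toy sketch of the "risk thermostat": behaviour adjusts until perceived risk
# roughly matches a personal setpoint shaped by rewards and recent losses.
def adjust_behaviour(propensity: float, perceived_risk: float,
                     reward: float, recent_losses: float) -> str:
    # All quantities are arbitrary values between 0 and 1, chosen for illustration.
    setpoint = propensity + 0.3 * reward - 0.3 * recent_losses
    if perceived_risk < setpoint:
        return "take more risk"        # e.g. drive faster once seat belts feel safe
    if perceived_risk > setpoint:
        return "behave more cautiously"
    return "keep current behaviour"

print(adjust_behaviour(propensity=0.5, perceived_risk=0.2, reward=0.4, recent_losses=0.1))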

People tend to overestimate spectacular but rare risks, but downplay common risks. Also, personified risks are perceived to be greater than anonymous risks.

Protection measures can also be introduced merely to increase perceived security, rather than to provide actual protection. A possible example is the use of the National Guard in airports after 9/11 to provide reassurance. Such security theatre also has other applications in relation to motivation, deception and economics.

Finally, Adams discusses the phenomenon of risk compensation and the adjustments that take place in the risk thermostat. He argues that introducing safety measures changes behaviour: for example, seat belts can save a life in a crash, so people buckle up and take more risks when driving, leading to an increased number of accidents. As a result, he argues, the overall number of deaths remains roughly unchanged.

Daniel Schatz: It is generally appreciated if security professionals understand that they are supposed to support the strategy of an organisation

Interview with Daniel Schatz – Director for Threat & Vulnerability Management


Let’s first discuss how you ended up doing threat and vulnerability management. What is your story?

I actually started off as a banker at Deutsche Bank in Germany, but I was looking for a more technical role, so I hired on with Thomson Reuters as a Senior Support Engineer. I continued on to other roles in the enterprise support and architecture space with an increasing focus on information security (as that was one of my strong interests), so it was just logical for me to move into that area. I particularly liked to spend my time understanding the developing threat landscape and the existing vulnerabilities with the potential to impact the organisation, which naturally led me to become part of that team.

What are you working on at the moment and what challenges are you facing?

On a day-to-day basis I’m busy trying to optimise the way vulnerability management is done and to provide advice on current and potential threats relevant to the organisation. I think one of the challenges in my space is to find a balance between getting the attention of the right people so I can notify them of concerning developments and situations, while doing so in a non-alarmist way. It is very easy to deplete people’s security goodwill, especially if they have many other things to worry about (like budgets, project deadlines, customer expectations, etc.). On the other hand, they may be worried about things they picked up in the news which they shouldn’t waste time on, so providing guidance on what they can put aside for now is also important. Other than that, there are the usual issues that any security professional will face – limited resources, competing priorities with other initiatives, etc.

Can you share your opinion on the current security trends?

I think it is less valuable to look at current security trends, as they tend to be defined by the media and press and reinforced by vendors to suit their own strategies. If you look at, for example, nation-state cyber activities: these have been ongoing for at least a decade, yet we now perceive them as a trend because we see massive reporting on them. I believe it is more sensible to spend time anticipating where the relevant threat landscape will be in a few months’ or years’ time and plan against that, instead of trying to catch up with today’s threats by buying the latest gadget. Initiatives like the ISF Threat Horizon are good ways to start with this, or you can follow a DIY approach like the one I describe in my article.

What is the role of the users in security?

To be honest, I think this is the wrong way to ask the question. Culture and mind-set are two of the most important factors when looking at security, so the question should emphasise the relationship between users and security the right way around. To borrow a phrase from JFK – do not ask what users can do for security, ask what security can do for your users.

What does a good security culture look like?

One description of culture I like defines it as ‘an emotional environment shared by members of the organisation; it reflects how staff feels about themselves, about the people for whom and with whom they work and about their jobs.’ In this context it implies that security is part of the fabric of an organisation, naturally woven into every process and interaction without being perceived as a burden. We see this at work within the Health & Safety area, but that didn’t happen overnight either.

How can one develop it in their company?

There is no cookie cutter approach but talking to the Health & Safety colleagues would not be the worst idea. I also think it is generally appreciated if security professionals understand that they are supposed to support the strategy of an organisation and recognise how their piece of the puzzle fits in. Pushing for security measures that would drive the firm out of the competitive market due to increased cost or lost flexibility is not a good way to go about it.

What are the main reasons for users’ non-secure behaviour?

Inconvenience is probably the main driver of such behaviour. Everyone is unconsciously and constantly doing a cost/benefit calculation; if a user’s expected utility of opening the ‘Cute bunnies’ attachment exceeds the inconvenience of ignoring all those warning messages, a reasonable decision was made, albeit an insecure one.
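A toy version of that calculation, with invented numbers rather than anything from the interview:

# Toy model of the user's unconscious cost/benefit calculation (invented numbers).
benefit_of_opening = 5.0        # perceived value of seeing the cute bunnies
cost_of_warnings = 1.0          # effort of clicking through the warning dialogs
perceived_prob_of_harm = 0.01   # how likely the user *thinks* infection is
perceived_harm = 50.0           # how bad the user *thinks* infection would be

expected_utility = (benefit_of_opening - cost_of_warnings
                    - perceived_prob_of_harm * perceived_harm)
print("open the attachment" if expected_utility > 0 else "leave it alone")
# With these numbers the utility is +3.5, so opening is the "reasonable"
# (if insecure) choice; raising the cost or lowering the benefit flips the decision.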

What is the solution?

Either raise the cost or lower the benefit. While it will be difficult to teach your staff to dislike cute bunnies, raising the cost may work. To stick with the previous example, this could be done by imposing draconian punishment for opening malicious attachments or by deploying technology solutions that aid the user in being compliant. There is an operational and economic perspective to this, of course: if employees are scared to open attachments because of the potential for punishment, it will likely have a depressing effect on your business communications.

Some will probably look for ‘security awareness training’ as the answer here; while I think there is a place for such training, its direct impact is low in my view. If security awareness training aims to change an organisation’s culture, you’re on the right track, but trying to train users’ utility decisions away will fail.

Thank you Daniel!

Research Proposal: People and Security

Purpose: The study aims to develop a model that supports security managers’ decision-making process when implementing security policies in their organisations, and that incorporates users into the system in a way that mitigates the negative impact of users’ behaviour on security controls.

Background: Security managers in companies lack a clear process for implementing security controls in order to ensure compliance with various regulations and standards. A company can be formally compliant but still inefficient in performing its revenue-generating activities.
Security managers may take the ISO 27001 standard as a framework and then make decisions on any particular implementation based on their experience. Such implementations run the risk of colliding with users’ business activities and resulting in violations of the company’s security policies, because they introduce friction with the business process, and users try to avoid such friction. It is important, however, to differentiate between malicious non-compliance and cases where the security policy obstructs business processes, leading to workarounds. There is also a mismatch between users’ and security managers’ perceptions of the workload introduced by security tasks.

Method: To achieve the goal of the study, a combination of quantitative and qualitative methods is applied to research the perception of information security by both users and security managers.

Research benefits: The model points a security manager in the direction of a better understanding of the users in his company. It provides the means to gain an insight into users’ core business activities and to reflect on how they relate to security tasks. This can help security managers to come up with more usable security policies and to reduce the number of potential complaints and instances of security policy violation.
Moreover, this model can help the security manager to understand how much time users in his company spend on various security activities. This information can be used to make better investment decisions and to help optimise the security policy. Additionally, understanding that the security manager’s decisions affect the whole organisation may result in cost savings, by analysing security controls and their relation to the company’s main business processes before implementation.

Giving a seminar at the University of East London


This morning I delivered a seminar for a group of graduate students at the University of East London. A rich mix of participants from various degrees, including information security, forensics and IT law, made the classroom discussions very interesting.
I was very glad to see that the students were eager to learn more about the subject and were willing to share their ideas and experience. We even managed to identify new research opportunities in the field of the economics of information security.
After the presentation, I facilitated a workshop designed around a case study on USB drive encryption. This exercise helped the students to understand the perspectives of both a security manager and an end-user on the same problem.


Information security policy compliance, business processes and human behaviour

This article aims to review the literature on information security policy compliance issues and their relation to core business processes in the company and users’ behaviour. It also provides an insight into particular implementation examples of the ISO 27001 Standard, and methods of analysis of the effectiveness of such implementations.

Information security

Information security issues in organisations were being raised long before the rapid development of technology. Companies have always been concerned with protecting their confidential information, including their intellectual property and trade secrets. There are many possible approaches to addressing information security. Wood [30] points out that security is a broad subject including financial controls, human resource policies, physical protection and safety measures. However, Ruighaver et al. [23] state that information security is usually viewed as a purely technical concern and is therefore expected to have a technical solution. On the other hand, Schneier [25], Lampson [17], and Sasse and Flechais [24] emphasise the human aspect of security, since people play a crucial role as they use and implement security controls.

As stated by Anderson [3], it is essential to properly define information security in order to pay merit to all these aspects.

The Standard for Information Security Management, ISO 27001 [32], defines information security as “the protection of information from a wide range of threats in order to ensure business continuity, minimize business risk, and maximise return on investments and business opportunities”.

Dhillon [10] states that security issues in organisations can arise due to the absence of an information security policy. One way to implement such a security policy is to take the ISO 27001 standard as a framework.

ISO 27001 Standard

The ISO 27001 standard, a member of the ISO 27000 family of standards, evolved from the British national standard BS 7799 [31]. It aims to provide guidance on managing the risk associated with threats to the confidentiality, integrity and availability of an organisation’s assets. Such assets, as defined in ISO 27001 [32], include people, software, hardware, services, etc.

Doherty and Fulford [11], Von Solms [28], and Canavan [8] all came to the conclusion that well-established standards such as ISO 27001 might be a stepping-stone to implementing good information security programs in organisations.

However, Anttila and Kajava in their study [4] identify the following issues with the ISO 27001 standard:

–       The standard is high-level, and basic concepts are not presented consistently within it.

–       It is hard to measure the business benefits of implementing the standard.

–       The process management presented does not fully support current business practices.

–       The standard struggles to recommend solutions for contemporary business environments.

Neubauer et al. [19] state in their research that the main problem with security standards, including ISO 27001, is their “abstract control definition, which leaves space for interpretation”. Furthermore, the authors suggest that companies focus on obtaining formal certification and often do not assess and put in place adequate security controls in line with their main business goals. Ittner and Larcker [14] support this point, adding that organisations also fail to estimate the effectiveness of their investments in such initiatives.

According to Sharma and Dash [26], ISO 27001 does not provide detailed guidance and requires a substantial level of expertise to implement. Moreover, the authors claim that “If risk assessment is flawed, don’t have sufficient security and risk assessment expertise, or do not have the management and organizational commitment to implement security then it is perfectly possible to be fully compliant with the standard, but be insecure.” The results of their study suggest that the participating organisations implemented information security mainly to comply with legal and regulatory requirements, with the consequence that such implementations had low cost-effectiveness. However, the researchers do not analyse the level of users’ acceptance of the implemented controls, nor do they recommend an approach that would support a security manager’s decision-making process when implementing ISO 27001 controls.

Karabacak and Sogukpinar in their paper [16] present a flexible and low-cost ISO 17799 compliance check tool. The authors use survey-based techniques to collect and analyse data and state that “the success of our method depends on the answers of surveyors. Accurately answered questions lead to accurate compliance results.” However, the researchers stop short of analysing the impact of compliance with security policy on users’ behaviour. The authors do not consider the issue that a security manager’s decisions regarding a particular implementation of security policy affect the organisation as a whole and may introduce additional cognitive burdens on users. In extreme cases (e.g. when core business processes are obstructed) these issues may result in non-compliance, as users prioritise their primary task.
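To make the survey-based idea concrete, a toy compliance score can be computed from yes/no answers along the following lines (an illustrative sketch with invented questions, not the tool from [16]):

# Toy compliance score from survey answers; as the authors note, the result is
# only as accurate as the answers themselves.
answers = {
    "Is there a documented information security policy?": True,
    "Are access rights reviewed at least quarterly?": False,
    "Is security awareness training delivered to all staff?": True,
    "Are removable media encrypted by default?": False,
}

compliant = sum(answers.values())
score = 100 * compliant / len(answers)
print(f"Compliance: {compliant}/{len(answers)} controls ({score:.0f}%)")
for question, answered_yes in answers.items():
    if not answered_yes:
        print(f"Gap: {question}")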

Vuppala et al. in their study [29] discuss their experience of implementing an ISO 27001 information security management system. One of the most important lessons learnt was developing an understanding of the role of users’ behaviour in this process. The authors recommend to “not make drastic changes to the current processes; this will only infuriate the users. Remember, users are an important, if not the most important, part of the overall security system.”

Human behaviour

Johnson and Goetz [15] conducted a series of interviews with security managers to identify the main challenges of influencing employees’ behaviour. The results of this study revealed that security managers rely extensively on information security policies, not only as a means of ensuring compliance with legal and regulatory requirements, but also to guide and direct users’ behaviour.

To explore the question of the impact on users’ behaviour while implementing security policies, the following theories were researched:

1. Theory of Rational Choice – a framework which provides insight into social and economic behaviour. It implies that individuals tend to maximise their personal benefit [13]. Beautement et al. in their paper [6] use this theory to build a foundation explaining how people decide whether or not to comply with a particular information security policy.

Herley [12] suggests that it is rational for users not to comply with security policy, because the perceived risk reduction is lower than the effort required.

2. Protection Motivation Theory – a theory which describes four factors that individuals consider when trying to protect themselves [22]:

–       perceived severity

–       perceived probability of the adverse event

–       efficacy of the preventive behaviour

–       self-efficacy

Siponen builds on this theory to understand individuals’ attitudes towards compliance with security policies, and refers to it in order to study the impact of punishment on actual compliance and on the intention to comply [27], [20].

3. The Theory of General Deterrence – this suggests that users will not comply with the rules if they are not concerned about punishment [1].

4. Theory of Planned Behaviour – this suggests that subjective norms and perceived behavioural controls influence individuals’ behaviour [2]. Siponen [27] and Pahnila [20] discovered that social norms play a significant role in users’ intention to comply.

These theories suggest that to effectively protect a company’s assets, the security manager should develop and implement security policies not only to ensure formal compliance with legal and regulatory requirements, but also to make sure that users are considered as a part of the system. Policies should be designed in a way that reduces the mental and physical workload of users [1], [6].

Business process visualisation and compliance

It is important to consider information security compliance and users’ behaviour in the context of a company. Users in organisations are involved in activities which can be represented as business processes.

A business process is defined as a set of logically related tasks (or activities) performed to achieve a defined business outcome [9].

Continuous monitoring of business processes is essential for any organisation, and can be supported by visualising them [21]. However, business processes are usually complex, due to the number of different users and user roles in large companies [7]. Barrett [5] also argues that it is essential to create a “vision of the process” in order to reengineer it successfully.

Namiri and Stojanovic in their paper [18] present a scenario demonstrating a particular business process and the controls necessary to achieve compliance with regulatory requirements. The authors separate business and control objectives, introducing two roles: a business process expert, who is motivated solely by business objectives, and a compliance expert, who is concerned with ensuring the compliance of a given business process.

References

[1]        Adams, A. and Sasse, M.A. 1999. Users are not the enemy. Commun. ACM. 42, 12 (Dec. 1999).

[2]        Ajzen, I. 1991. The theory of planned behavior. Organizational Behavior and Human Decision Processes. 50, 2 (Dec. 1991).

[3]        Anderson, J.M. 2003. Why we need a new definition of information security. Computers & Security. 22, 4 (May 2003).

[4]        Anttila, J. and Kajava, J. 2010. Challenging IS and ISM Standardization for Business Benefits. ARES  ’10 International Conference on Availability, Reliability, and Security, 2010 (2010).

[5]        Barrett, J.L. 1994. Process Visualisation: Getting the Vision Right Is Key. Information Systems Management. 11, 2 (1994).

[6]        Beautement, A. et al. 2008. The compliance budget: managing security behaviour in organisations. Proceedings of the 2008 workshop on New security paradigms (New York, NY, USA, 2008).

[7]        Bobrik, R. et al. 2005. Requirements for the visualization of system-spanning business processes. Sixteenth International Workshop on Database and Expert Systems Applications, 2005. Proceedings (2005), 948–954.

[8]        Canavan, S. 2003. An information security policy development guide for large companies. SANS Institute. (2003).

[9]        Davenport, T.H. and Short, J.E. 2003. Information technology and business process redesign. Operations management: critical perspectives on business and management. 1, (2003), 1–27.

[10]     Dhillon, G. 2007. Principles of information systems security: text and cases. John Wiley & Sons.

[11]     Doherty, N.F. and Fulford, H. 2005. Do Information Security Policies Reduce the Incidence of Security Breaches: An Exploratory Analysis. Information Resources Management Journal. 18, 4 (2005).

[12]     Herley, C. 2009. So long, and no thanks for the externalities: the rational rejection of security advice by users. Proceedings of the 2009 workshop on New security paradigms workshop (New York, NY, USA, 2009).

[13]     Herrnstein, R.J. 1990. Rational choice theory: Necessary but not sufficient. American Psychologist. 45, 3 (1990).

[14]     Ittner, C.D. and Larcker, D.F. 2003. Coming up short on nonfinancial performance measurement. Harvard business review. 81, 11 (2003), 88–95.

[15]     Johnson, M.E. and Goetz, E. 2007. Embedding Information Security into the Organization. IEEE Security Privacy. 5, 3 (2007).

[16]     Karabacak, B. and Sogukpinar, I. 2006. A quantitative method for ISO 17799 gap analysis. Computers & Security. 25, 6 (Sep. 2006).

[17]     Lampson, B.W. 2004. Computer security in the real world. Computer. 37, 6 (2004), 37–46.

[18]     Namiri, K. and Stojanovic, N. 2007. Pattern-based design and validation of business process compliance. On the Move to Meaningful Internet Systems 2007: CoopIS, DOA, ODBASE, GADA, and IS. Springer. 59–76.

[19]     Neubauer, T. et al. 2008. Interactive Selection of ISO 27001 Controls under Multiple Objectives. Proceedings of The Ifip Tc 11 23rd International Information Security Conference. S. Jajodia et al., eds. Springer US. 477–492.

[20]     Pahnila, S. et al. 2007. Employees’ Behavior towards IS Security Policy Compliance. 40th Annual Hawaii International Conference on System Sciences, 2007. HICSS 2007 (2007).

[21]     Rinderle, S.B. et al. 2006. Business process visualization-use cases, challenges, solutions. (2006).

[22]     Rogers, R.W. 1975. A Protection Motivation Theory of Fear Appeals and Attitude Change1. The Journal of Psychology. 91, 1 (1975).

[23]     Ruighaver, A.B. et al. 2007. Organisational security culture: Extending the end-user perspective. Computers & Security. 26, 1 (Feb. 2007).

[24]     Sasse, M.A. and Flechais, I. 2005. Usable Security: Why Do We Need It? How Do We Get It? Security and Usability: Designing secure systems that people can use. L.F. Cranor and S. Garfinkel, eds. O’Reilly.

[25]     Schneier, B. 2003. Beyond Fear: Thinking Sensibly About Security in an Uncertain World. Springer.

[26]     Sharma, D.N. and Dash, P.K. 2012. Effectiveness Of Iso 27001, As An Information Security Management System: An Analytical Study Of Financial Aspects. Far East Journal of Psychology and Business. 9, 5 (2012), 57–71.

[27]     Siponen, M. et al. 2010. Compliance with Information Security Policies: An Empirical Investigation. Computer. 43, 2 (2010).

[28]     Solms, R. von 1999. Information security management: why standards are important. Information Management & Computer Security. 7, 1 (Mar. 1999).

[29]     Vuppala, V. et al. Securing a Control System: Experiences from ISO 27001 Implementation.

[30]     Wood, M.B. 1982. Introducing Computer Security. National Computing Centre.

[31]     BS, BS7799 – Information Technology – Code of practice for information security management, London: BS, 1995.

[32]     ISO/IEC, ISO/IEC 27001 – Information technology – Security techniques – Information security management systems – Requirements, Geneva: ISO/IEC, 2005 and Draft for the new revision ISO/IEC JTC 1/SC 27 N10641, 2011.