Mo Amin: You can transform technology but how do you transform people?

Mo Amin – Information Security Professional


Can you please tell us a little bit about your background?

Long ago in a galaxy far far…oh ok…ok…Just like a lot of people in IT I got asked the same question “My PC has died, can you help me?” When you say yes to one person it’s a downward spiral…and before you know it you’re THE computer guy! Even now I (depending on my mood) will help out. So this was my first real experience in building rapport with clients, charging for my time and to a certain extent being held accountable for the service I provided.

It taught me a lot and was a catalyst in helping me to land my first role in desktop support. I was part of a small team, which allowed me to get involved in some network and application support too. Whilst doing my day role I was involved in a couple of investigations, which got me interested in information security, and through a few lucky breaks I slowly moved into the field. I’ve been lucky enough to have worked in a number of areas ranging from operational security through to consultancy. However, I’ve always intrinsically enjoyed the awareness and education side of things.

What is it that you are working on at the moment?

I am working with Kai Roer of The Roer Group to help develop the Security Culture Framework. Essentially, the framework aims to help organisations to build a security culture within their business, as opposed to simply relying on topic based security awareness. Making sure that organisations begin to build a security culture into their business is something I believe in strongly. So when Kai asked if I’d like to help I was more than happy.

Let’s talk for a moment about information security in general. What do you think are the biggest challenges that companies are facing at the moment?

I think that one of the biggest challenges is educating staff on the risks that the business faces and getting people to understand and relate to why it is that we are asking them to adopt secure practices. The problem revolves around changing the attitude and overall culture of an organisation. In my humble opinion, this is the biggest challenge. The difficulty lies in changing behaviour because you can change technology but how do you positively change the behaviour of people?

What is your approach or proposed solution to this challenge? What should companies do?

I’ve always learned by seeing something in action or by actually doing it. Obviously, within the context of a busy organisation this isn’t easy to do. However, as information security practitioners, professionals or however we label ourselves, we need to be more creative in our attempts to help those that we work with – we need to make awareness more engaging. I think it’s important to have workshops or sessions in breakout areas where staff can come along and see how quickly weak passwords are cracked, or what can happen if you click on that dodgy but enticing-looking attachment. It’s about visualising and personalising threats for people. For example, if you plan your awareness programme carefully, you could map your corporate security messages to the home environment and provide your staff with a “Top 10 of dos and don’ts”. Make it creative and engaging, and people will begin to bring the messages you give them for their home environment back to the office.

Lots of companies offer security awareness training, which doesn’t seem to have much of an impact. What do you think about this training? Should it be changed in some way in terms of targeting, or accounting for individuals’ particular needs, or focusing on behaviour?

The problem is that most of this is simply topic-based awareness, in that it’s not seeking to change behaviour. There seems to be a lot of generic content that applies to everyone in an organisation. Sadly, this is a tick-box exercise for the purposes of compliance. Awareness should be unique to your organisation, where you cater for different personality types as best you can. Some people actually like reading policies, whereas some prefer visual aids, so the ways that individuals learn need to be better understood. The process of educating your staff should be a sustained and measured programme; it needs to be strategic in its outlook.

What about communication?

Better engagement with the business is what we need to be doing. Our relationships with the likes of legal, HR, finance, marketing and PR should be on an everyday basis, not only when we actually need their expertise. These departments usually already have the respect of the business. Information security needs to be seen in the same light.

How do you identify the relevant stakeholders and establish communication with them, and further propagate the whole process of communication within the organisation?

Grab a copy of the organisation chart and start from there. Your job is to introduce yourself to everyone. In my experience doing this over a coffee really helps, and preferably not in a meeting room, because it is better to create a new business relationship in a social context, wherein the other person gets to understand you, firstly as a human being and secondly as a work colleague. Most importantly, do this at the beginning and not two months down the line. Building relationships at the very beginning increases your chances of being in a position to ask for last-minute favours and paves the path for easier collaboration, as opposed to having to ask for people’s help when they don’t even know you. Usually people are open and honest. They may have a negative image of information security, not because they don’t like you, but most likely because of the interactions they’ve had in the past.

So let’s say that you have joined a new organisation that has a very negative preconception of information security because of a bad previous experience. Once you have already identified all the key people you have to work with, how do you fight this negative perception?

You need to find out what was done previously and why the outcome was negative in the first place. Once you’ve established the actual problem, you have to defuse the situation. You need to be positive and open; even simple things like walking around and talking to people help – show your face. Visit different departments and admit any failings; you need to do a PR and marketing exercise. In a previous role I’ve actually said:

“I know what went wrong the last time, I know we screwed up. I want to ask you what you want to see from the information security department from now on.”

People are ready to engage if you are; be personable and professional. It’s surprising how much positive and usable feedback you actually get.

The majority of the time, people will tell you,

“I just want to be able to do my job without security getting in the way”.

Once you have these sorts of conversations going you begin to understand how the business actually functions on a day-to-day basis. It’s at this stage where you can be influential and change perception.


Delivering a Seminar at the IT Security & Computer Forensics Pedagogy Workshop


I presented at the HEA STEM Workshop on human aspects of information security.

The aim of the workshop was to share, disseminate and stimulate discussion on the pedagogy of teaching subjects related to IT security and computer forensics, and on issues relating to employability and research in these areas.

During the workshop the speakers presented topics that focused on: delivery of innovative practical tutorials, workshops and case studies; course design issues; demand for skills and employment opportunities; countering the “point & click” approach linked to vendor-supplied training in industry; and current research exploring antivirus deployment strategies.


Modern security professionals, while fighting cyber threats, also have to take human behaviour into account

In today’s corporations, information security managers have a lot on their plate. While facing major and constantly evolving cyber threats, they must comply with numerous laws and regulations, protect the company’s assets, and mitigate risks as best as possible. To address this, they have to formulate policies that establish desired practices to avoid these dangers. They must then communicate this desired behavior to employees so that they adapt and everything can go according to plan. But is this always the case?

Security managers often find that what they put on paper is only half of the story. Getting the corporation to “cooperate” and follow the policy all the time can be far more challenging than it seems. So why do employees seem to be so reluctant?

Are we even asking the right question here?

The correct question is: do security managers know what imposing new rules means to the average employee within the company?

People’s behavior is goal-driven. If processes are imposed on them, people will usually follow them, as long as they still allow them to achieve their goals. If they come across situations where they are under pressure, or they encounter obstacles, people will cut corners, break rules and violate policies.

So why should the behavior of a corporation’s employees be an exception? They will usually follow the rules willingly while trying to comply with the security policy, but, at the end of the day, their objective is simply to get their work done.

Yes, there are cases of employees who have a malicious goal of intentionally violating security policies, but research shows that policy violations most often result from control implementations that prevent people from performing their tasks.

What happens to an organization when honest workers can’t achieve their goals because of poorly implemented security controls? What happens on the security manager’s end and on the employees’ end that leads to this scenario? A short survey I performed in 2013 shows that there is a huge gap between the employees’ and the security managers’ perceptions of security policies; and it’s this discrepancy that negatively impacts the organization as a whole. Security managers, on their side, assume that they have made all the relevant considerations pertaining to the needs of the employees. However, the fact is that they rarely speak directly to the employees to familiarize themselves with their tasks, their needs, and their goals. It is therefore common to hear employees complain about how security controls hinder or impede their performance.

Let’s consider the following scenario:

In an investment bank, a security manager comes up with a policy document, outlining a list of authorized software which can be installed on computers, according to the principle of least privilege: people can only have the access they require to perform their day-to-day activities and no more. All employees are denied access to install any new software without written permission from the security manager.

John is writing a report for a client. The deadline is fast approaching but he still has a lot of work ahead of him. The night before the deadline, John realizes that in order to finish his work, he requires a special data analysis application which was not included in the list of authorized programs. He is also unable to install it on his workstation, because he doesn’t have the required privileges. Getting formal written approval from the security manager is not feasible, because it is going to take too long. John decides to copy the sensitive information required for the analysis onto a flash drive and finish the work at home on his personal computer, where he can install any software he wants. He understands the risk, but he also wants to get the job done to avoid missing the deadline and to get a good performance review. Unfortunately, he leaves his bag with the flash drive in the taxi on the way home. He never tells anyone about this incident, to avoid embarrassment or a reprimand.

The security manager in this scenario clearly failed to recognize the employee’s needs before implementing the controls.

A general rule of thumb to never forget is that employees will most likely work around the security controls to get their work done regardless of the risks this might pose, because they value their main business activities more than compliance with security policies.

To address this, security managers should consider analyzing security controls in a given context in order to identify clashes and resolve potential conflicts by adjusting the policy. They should also communicate the value of security accordingly. Scaring people and imposing sanctions might not be the best approach. They should instead demonstrate to employees that they contribute to the efficient operation of the business when they comply with security policies. Not only does security ensure the confidentiality and integrity of information, but it also makes sure that the resources people need to complete their primary tasks remain available.

Employees need to understand that security is something that helps them achieve the company’s goals, not something that gets in the way. To achieve this, the culture of the organisation must change.


Javvad Malik: One of the biggest challenges that companies are facing is securing at the same rate of innovation

Interview with Javvad Malik – Senior Analyst at 451 Research and blogger at http://www.J4vv4D.com


Could you start by telling us about yourself?

My first proper job was as an IT security administrator at NatWest Bank, during the work placement year of my degree, and, to be honest, I had no idea what the job was about. Actually, very few people knew what it was. But as a student doing a degree in Business Information Systems, I needed to specialise in something, so I went and took this job to see if I could make any sense of this field. I figured that this bank was a huge company and if things didn’t work out in IT security, I could always explore opportunities in other departments.

Back in the day, there were around seven people in the security operations team for the whole bank, and only three in the monitoring team, with whom we only had intermittent communication. NatWest was then acquired by RBS and I remained in IT security for the next five years, during which I moved more to the project side of security, as opposed to the operations side. I had more interactions with the internal consultancy team and their job appealed to me, because they didn’t seem to need to keep so up to date with all the latest technologies from a hands-on perspective, and they made more money. I was unable to make an internal move, so I decided to get into contracting and stayed within financial services, where the majority of my roles involved arguing with auditors, resolving issues through internal consulting, being the middle-man between the business and pen-testers, project reviews, and the sort.

On the side, I got very interested in blogging. Blogs were the great new boom, readily accessible and cheap for everybody. Suddenly everybody with a blog felt like a professional writer, which I enjoyed, but I found it a difficult area in which to differentiate yourself or bring a unique perspective. I then tried video blogging, which I discovered was bloody hard, because it takes a lot of skill to look like a professional instead of an idiot most of the time. But because I was among the first to get into this type of delivery mode, my profile was raised quite quickly within the security community, and perhaps beyond it. One of the advantages of video blogging that I uncovered was that people who watch you can somehow relate to you better than if they just read your work: they can see your body language, hear your voice, your tone, everything. The result is quite funny, because it often happens to me that when I go to a conference, somebody will greet me as if I’m their best friend. Because they see me so often on YouTube, they feel like they know me. It’s very nice when people acknowledge you like that, and it goes to show that the delivery channel really has that impact.

So because of this impact, one day Wendy, the research director at 451 Research, asked me if I would be interested in becoming an analyst. In reality I had no idea what an analyst did. She said that I would have to speak to vendors and write about them, which sounded a lot like blogging to me, and she immediately confirmed, “yes, it is pretty much like blogging,” to which I then replied, “well, I have my demands. I do video blogging, I’d like to attend and speak at conferences and I don’t want any restrictions here, because I know that many companies impose restrictions around this kind of activity.”

I’ve now been an analyst for two years, which I have enjoyed very much and which has allowed me to broaden my skillset, not to mention given me the opportunity to meet a ton of extremely talented people.

Where do you predict the security field will go?

When I was starting in the field, nobody really knew what security was. Then came the perception that it was all about hackers working from their mums’ basements. Then security people were assumed to be IT specialists, and then specialists who didn’t necessarily know much about IT but who knew more about the risk and/or the government background, and now everyone is just confused.

Security itself is very broad. It is kind of like medicine: you have GPs who know a little bit about everything, which is the base level of knowledge. For complex cases they will refer you to other doctors who specialise in, say, blood, heart, eyes, ears, and other specific body parts. The same applies to security. You will have some broad generalists and others who are technical experts or those who are more into security development and can tell you how to use code more securely.  You then have non-technical security people, who know more about understanding the business, the risk, and how to implement security into it. You also get product or technology specific experts who are only there to maybe tune your SIEMs for you, forensics experts, incident-response specialists, and so on. You will find specialists with overlapping skills, just as you will find those who possess unique abilities as well. Security has exploded “sideways” like that. So you can call lots of people “security experts” but in reality they are very different from each other, which means that they are not necessarily interchangeable. You can’t, obviously, switch a non-technical person for a technical one. I believe that one of the signs of immaturity within the industry is that people still don’t recognize these differences, which often leads to lots of finger-pointing in situations like: “you don’t know how to code, how can you call yourself a security professional? You don’t understand what the business does. You’ll never be a security professional.” These kinds of things, I think, are the natural growing pains of this and any industry.

What will probably happen going forward is that as things become increasingly interconnected and people’s whole lives move more and more online, security will have more and more visibility. Additionally, we will see the need to extend capabilities outside of the enterprise into the consumer space. We are already seeing an overlap between personal and corporate devices. So I think that everything will kind of bleed into everything else: some areas will become operationalised, others will be commoditised, but I think that there will continuously be a need for security. What that will look like will probably be different to what we see today.

What kind of challenges do you think companies will face in the future in terms of security?

One of the biggest challenges that companies are facing is securing at the same rate of innovation. Every company wants to be the first one to develop a new way to hook in their customers, whether that is being the first to develop a new app that enables consumers to do banking, or payments and inter-payments, and so on, and this sometimes comes at the cost of security. Balancing this business case between the perceived benefits and the security risks can be very challenging. The speed at which businesses want to and need to innovate, because that’s what the market is forcing them to do, is making security cost-prohibitive.

The other challenge is that the business model for many companies lies almost exclusively in advertising revenue. Nearly every mobile app, social media site or other online service that is free is typically generating either its primary or supplementary revenue by selling user information. With so many companies trying to grab data and sell it to the highest bidder, we have a big challenge in educating users about the security risks involved, as well as in trying to enforce good security practices within the vendor space without breaking business models.

How would you say companies should then approach this challenge in the first place?

The way that companies typically “solve” this challenge is by burying their head in the sand and outsourcing the problem. So they will go out to another company and ask them: “can you offer us a secure platform to do it?” To which they answer, “of course we can. Just give us your money.” The challenge is that companies and individuals don’t appreciate that poor security choices made today may have an impact that will not be immediately felt, but perhaps in a few months’ or years’ time. Sadly, by then, it’s usually too late. So this is what both companies and individuals need to be careful about.

Returning to the point about security professionals being very diverse, what’s the role of security professionals from the risk, governance and compliance perspective? Can you elaborate more on the security culture within a company and how it can be developed?

Security culture is a very difficult thing: it is not impossible, but it relies on understanding human behaviour more than technical aspects. Understanding human behaviour means understanding personality types and how they respond to different environments and stimuli, which can be more challenging than understanding technical aspects.

The general observation that I can make about human behaviour, regardless of the personality type, is that people don’t tend to be aware of what they are giving up. The best and most prevalent example would be how much in demand mobile apps are and how insecure they are, because people unknowingly give away lots of data in order to have access to them. Chris Eng from Veracode makes an excellent analogy by saying that “people usually don’t care what they are agreeing to as long as they can still fling birds against pigs.” This is the crux of it. People don’t think it makes much of a difference if they give their email address away, or if they let the app access their GPS data or their contacts, because they can’t perceive a direct impact. The problem is that this impact might not be felt for ten years. So if you are giving data to Facebook, Instagram and WhatsApp, for example, you can’t really predict what will happen later on. In the last year Facebook acquired both Instagram and WhatsApp. So now you have a single company that holds all of your photo data that you maybe didn’t want on Facebook, along with all the stats on your behaviour that you’ve been feeding to Facebook, along with the people you are chatting to, and so on. So now Facebook has an incredible amount of information about you and can target and market a lot better. Someone could also use all this data for any purpose. I’m not saying that Facebook or other companies gather users’ personal data for malicious purposes, but it reminds me of the saying, “The path to hell is paved with good intentions.”

How can you make people change their behaviour?

You have to make it real and personal for them. You have to make that personal connection. In security we tend to say: “we have 50,000 phishing emails that come through every day, and people click on them.” But to the individual user, that doesn’t really have much of an impact. Are we making this information personal? The communication methods and techniques that we need to change behaviour are already there; we don’t need to reinvent them, especially not with security people who don’t necessarily understand how communication works or who are not the best communicators to begin with.

We can remember how 15-20 years ago, nobody cared about recycling, because nobody really cared about the environment. It was just a few people in Greenpeace with long hair and who smelled a bit funny who were trying to stop the oil companies from drilling into the sea, for example. Now, you go into any office and you find 10 bins for every different type of recycling material, which everybody now uses. It’s been a long-term campaign which finally created that social change, and which now makes it unacceptable for people to behave in another way. As you walk on the street, you will see that very few people, if any, throw wrappers on the floor. They usually hold onto them until they get to a bin and then they dispose of them. We need to adopt the same practices to change behaviour in security and in many cases that means actually letting people who know how to market and communicate do that for us instead of trying to do it all ourselves.


Risks to Risk Management

Nassim Taleb in his book The Black Swan provides the following examples of the Mirage Casino’s four largest losses:

  • $100 million from a tiger mauling
  • Unsuccessful attempt to dynamite casino
  • Neglect in completing tax returns
  • Ransom demand for owner’s kidnapped daughter

How many of these losses could’ve been identified and managed appropriately?

John Adams in his research Risk, Freedom and Responsibility suggests that “Risk management is not rocket science – it’s much more complicated.” He further elaborates on this point in his research: “The risk manager must […] deal not only with risk perceived through science, but also with virtual risk – risks where the science is inconclusive and people are thus liberated to argue from, and act upon, pre-established beliefs, convictions, prejudices and superstitions.”

According to Adams, there are three types of risk:


  • Directly perceptible risks are dealt with using judgment. “One does not undertake a formal, probabilistic, risk assessment before crossing the road.”
  • Risks perceived through science are subject to a formal risk management process. “Here one finds not only biological scientists in lab coats peering through microscopes, but physicists, chemists, engineers, doctors, statisticians, actuaries, epidemiologists and numerous other categories of scientist who have helped us to see risks that are invisible to the naked eye. Collectively they have improved enormously our ability to manage risk – as evidenced by the huge increase in average life spans that has coincided with the rise of science and technology.”
  • Virtual risks are not perceived through science, hence people are forced to act based on their convictions and beliefs. “Such risks may or may not be real, but they have real consequences. In the presence of virtual risk what we believe depends on whom we believe, and whom we believe depends on whom we trust.”

Klein in his Streetlights and Shadows: Searching for the Keys to Adaptive Decision Making suggests the following issues with risk management:

  • It works best in well-ordered situations
  • Fear of speaking out may result in poor risk identification
  • Organisations should understand that plans do not guarantee success and may result in a false sense of safety
  • Risk management plans may actually increase risk.

Klein also identifies three risk decision making approaches:

  • Prioritise and reduce
  • Calculate and decide
  • Anticipate and adapt

To illustrate an individual’s decision-making process when dealing with risk, Adams introduces another concept called the “risk thermostat”.


The main idea behind it is that people vary in their propensity to take risks, and this propensity is influenced by their perception of risk, their experience of losses, and the potential rewards.

People tend to overestimate spectacular but rare risks, but downplay common risks. Also, personified risks are perceived to be greater than anonymous risks.

Protection measures can also be introduced only to increase perceived security, rather than to implement actual protection mechanisms. A possible example might be deploying the National Guard in airports after 9/11 to provide reassurance. Such security theatre, however, has other applications in relation to motivation, deception and economics.

Finally, Adams discusses the phenomenon of risk compensation and appropriate adjustments which take place in the risk thermostat. He argues that introducing safety measures changes behavior: for example, seat belts can save a life in a crash, so people buckle up and take more risks when driving, leading to an increased number of accidents. As a result, the overall number of deaths remains unchanged.
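To make the feedback loop concrete, below is a deliberately simplified sketch in Python of the risk compensation effect described above. The function names and numbers are illustrative assumptions of my own rather than anything taken from Adams: people are modelled as adjusting their exposure until the risk they perceive matches the level they are prepared to accept.

# Toy model of the "risk thermostat": each person has a target level of
# perceived risk (their propensity). If a safety measure halves the danger
# of each unit of activity (e.g. seat belts), behaviour adjusts until
# perceived risk is back at the target, so expected losses stay roughly
# constant. All numbers are invented for illustration.

def chosen_exposure(propensity: float, danger_per_unit: float) -> float:
    """Exposure (speed, miles driven, risky actions) people settle on so
    that perceived risk ~= their personal target propensity."""
    return propensity / danger_per_unit

def expected_losses(danger_per_unit: float, propensity: float = 0.01) -> float:
    exposure = chosen_exposure(propensity, danger_per_unit)
    return exposure * danger_per_unit  # losses = exposure x danger per unit

print(expected_losses(danger_per_unit=0.002))  # without seat belts -> 0.01
print(expected_losses(danger_per_unit=0.001))  # danger per mile halved -> still 0.01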


Daniel Schatz: It is generally appreciated if security professionals understand that they are supposed to support the strategy of an organisation

Interview with Daniel Schatz – Director for Threat & Vulnerability Management


Let’s first discuss how you ended up doing threat and vulnerability management. What is your story?

I actually started off as a Banker at Deutsche Bank in Germany but was looking for a more technical role so I hired on with Thomson Reuters as Senior Support Engineer. I continued on to other roles in the enterprise support and architecture space with increasing focus on information security (as that was one of my strong interests) so it was just logical for me to move into that area. I particularly liked to spend my time understanding the developing threat landscape and existing vulnerabilities with the potential to impact the organisation which naturally led me to be a part of that team.

What are you working on at the moment and what challenges are you facing?

On a day to day basis I’m busy trying to optimise the way vulnerability management is done and provide advice on current and potential threats relevant to the organisation. I think one of the challenges in my space is to find a balance between getting the attention of the right people to be able to notify them of concerning developments/situations while doing so in a non-alarmist way. It is very easy to deplete the security goodwill of people especially if they have many other things to worry about (like budgets, project deadlines, customer expectations, etc.). On the other hand they may be worried about things that they picked up on the news which they shouldn’t waste time on; so providing guidance on what they can put aside for now is also important. Other than that there are the usual issues that any security professional will face – limited resources, competing priorities with other initiatives, etc.

Can you share your opinion on the current security trends?

I think it is less valuable to look at current security trends, as they tend to be defined by the media/press and reinforced by vendors to suit their own strategies. If you look at, for example, nation-state cyber activities: these have been ongoing for at least a decade, yet we now perceive them as a trend because we see massive reporting on them. I believe it is more sensible to spend time anticipating where the relevant threat landscape will be in a few months’ or years’ time and plan against that, instead of trying to catch up with today’s threats by buying the latest gadget. Initiatives like the ISF Threat Horizon are a good way to start with this; or you can follow a DIY approach like the one I describe in my article.

What is the role of the users in security?

I think this is the wrong way to ask the question, to be honest. Culture and mind-set are two of the most important factors when looking at security, so the question should emphasise the relationship between the user and security in the right way. To borrow a phrase from JFK – do not ask what users can do for security, ask what security can do for your users.

What does a good security culture look like?

One description of culture I like defines it as ‘an emotional environment shared by members of the organisation; it reflects how staff feel about themselves, about the people for whom and with whom they work and about their jobs.’ In this context it implies that security is part of the fabric of an organisation, naturally woven into every process and interaction without being perceived to be a burden. We see this at work within the Health & Safety area, but that didn’t happen overnight either.

How can one develop it in their company?

There is no cookie cutter approach but talking to the Health & Safety colleagues would not be the worst idea. I also think it is generally appreciated if security professionals understand that they are supposed to support the strategy of an organisation and recognise how their piece of the puzzle fits in. Pushing for security measures that would drive the firm out of the competitive market due to increased cost or lost flexibility is not a good way to go about it.

What are the main reasons for users’ non-secure behaviour?

Inconvenience is probably the main driver for such behaviour. Everyone is unconsciously and constantly doing a cost/benefit calculation; if a user’s expected utility of opening the ‘Cute bunnies’ attachment exceeds the inconvenience of ignoring all those warning messages, a reasonable decision was made, albeit an insecure one.

What is the solution?

Either raise the cost or lower the benefit. While it will be difficult to teach your staff to dislike cute bunnies, raising the cost may work. To stick with the previous example, this could be done by imposing draconian punishment for opening malicious attachments, or by deploying technology solutions to aid the user in being compliant. There is an operational and economic perspective to this, of course. If employees are scared to open attachments because of the potential for punishment, it will likely have a dampening effect on your business communications.

Some will probably look for ‘security awareness training’ as the answer here; while I think there is a place for such training, its direct impact is low in my view. If security awareness training aims to change an organisation’s culture, you’re on the right track, but trying to train away users’ utility decisions will fail.

Thank you Daniel!


Managing Risk on Security-related Projects

All companies have assets, which help them generate profit and hence require protection. Information security professionals help companies to assess and manage the risks to these assets and to make sure that cost-effective and appropriate response strategies are chosen to address these risks.

Enterprises in turn may decide to implement mitigation strategies in the form of technical, procedural, physical or legal controls. Such an implementation has a defined start and end date and requires resources, and is therefore a project rather than an operational activity.

However, such implementations have their own project risks. According to the Guide to the Project Management Body of Knowledge, a risk is an uncertain event or condition that, if it occurs, has a positive or negative effect on one or more project objectives.

The project risk management process is similar to information security risk management and consists of four stages:

1. Identification – Log the risk, agree it and assign an owner

2. Analysis – The owner assesses the risk and sets its probability and impact

3. Response planning – What response will be taken to manage the risk

4. Monitoring and Control – An ongoing process of tracking identified risks, monitoring residual risks, identifying new risks, executing risk response plans and evaluating their effectiveness throughout the programme

It is good practice to involve your team and all relevant stakeholders during the project planning stage to identify the risks and populate the risk log:


  • ID – a reference number (e.g. 1, 2, 3)
  • Risk – a specific definition of the risk event
  • Consequence – the effect the risk would have on the business/change programme/project
  • Trigger – an event which signals the risk occurrence
  • Date Raised – when the risk was initially raised
  • Date Updated – when the risk was last updated
  • Owner – the person responsible for monitoring the risk event, notifying the team, and executing the risk response
  • Due Date – when the actions will be completed
  • Probability (on a scale of 1-5) – likelihood of the risk occurring
  • Impact (on a scale of 1-5) – impact if the risk does occur
  • Risk Score – Probability x Impact
  • Response Strategy – the specific agreed actions which will be taken to manage the risk (Avoid, Transfer, Mitigate, Accept)
  • Current Status – the risk status (Red, Amber, Green, Closed)
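To make the scoring concrete, here is a minimal sketch in Python of a single risk log entry; the class and field names are illustrative assumptions that simply mirror the columns above, and the risk score is calculated as Probability x Impact.

from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class RiskLogEntry:
    # Illustrative mirror of the risk log columns described above
    risk_id: int
    risk: str               # specific definition of the risk event
    consequence: str        # effect on the business/programme/project
    trigger: str            # event which signals the risk occurrence
    owner: str              # person monitoring the risk and executing the response
    probability: int        # likelihood, on a scale of 1-5
    impact: int             # impact, on a scale of 1-5
    response_strategy: str  # Avoid, Transfer, Mitigate or Accept
    current_status: str = "Amber"  # Red, Amber, Green or Closed
    date_raised: date = field(default_factory=date.today)
    date_updated: date = field(default_factory=date.today)
    due_date: Optional[date] = None

    @property
    def risk_score(self) -> int:
        # Risk Score = Probability x Impact (1-25)
        return self.probability * self.impact

# Hypothetical example entry
entry = RiskLogEntry(
    risk_id=1,
    risk="Lead firewall engineer may leave before the control is implemented",
    consequence="Implementation of the mitigation is delayed",
    trigger="Resignation notice received",
    owner="Project Manager",
    probability=2,
    impact=4,
    response_strategy="Mitigate",
)
print(entry.risk_score)  # 8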

During the execution of the project, the risk log should be continuously revised and kept up to date to ensure that project issues, risks and mitigating actions are fully and formally assessed and managed throughout the project lifecycle.

Download a sample risk log


Konrads Smelkovs: Very few insiders develop overnight

Interview with Konrads Smelkovs – Incident Response


Could you please tell us a little bit about your background?

I work at KPMG as a manager, and I started working with security when I was around thirteen years old. I used to go to my mother’s work, because there was nobody to look after me. There was an admin there who ran early versions of Linux, which I found rather exciting. I begged him to give me an account on his Linux box, but I didn’t know much about it, so I started searching for information on AltaVista. The only things you could find there were how to hack into Unix, and there were no books I could buy at the time. I downloaded some scripts off the internet and started running them. Some university then complained that my scripts were hacking them, though I didn’t really understand much of what I was doing. So my account got suspended for about half a year, but I got hooked, found it rather interesting and exciting, and developed an aspiration in this direction. I then did all sorts of jobs, but I wanted a job in this field. So I saw an ad in the newspaper and applied for a job at KPMG back in Latvia, six or seven years ago. I was asked what it was I could do, and I explained the sort of things I had done in terms of programming: “a little bit of this, a little bit of that…” I did some reading about security before the interview, and they then asked me if I could do penetration testing. I had a vague idea of what it entailed, because I understood web applications quite well. So I said, “yeah, sure. I can go ahead and do that because I understand these things quite well.”

What are you working on at the moment?

In the past I used to focus mainly on break-ins. Now people come to me for advice on how to detect ongoing intrusions, which takes up a large portion of my time at the moment, but more at a senior level. I do threat modelling for corporations. I have to know how to break in in order to give them reasonable advice, but it’s mainly in the form of PowerPoint presentations and meetings.

When you develop threat models for corporations, how do you factor in insider threats as well as the human aspect of security?

I believe the industry oscillates from one extreme to the other. People spoke a lot about “risk”, but they understood very little about what this risk entailed. They then spoke about IT risks, but it was more of a blanket message. Then it all became very entangled, and there was talk of vulnerability thinking: “you have to patch everything.” But then people realised that there is no way to patch everything, and started talking about defence strategies, which pretty much everybody misunderstands, and so they started ignoring vulnerabilities. This especially happened because we all had firewalls, but we know that those don’t help either. So what we are trying to do here is to spread common sense in one go. When we talk about threat models, we have to talk about who is attacking, what they are after, and how they will do it. The “who” will obviously have a lot of different industry properties: why they are doing it, what their restrictions and their actions are, and so on. Despite the popular belief fuelled by the press – the Financial Times, CNN and so on all talk about the APT, these amazing hackers hacking everything – the day-to-day reality is quite different. There are two main things people are concerned about. One of them is insider threats, because insiders have legitimate access and just want to elevate that access, copying or destroying information. The second is malware, which is such a prevalent thing. Most malware is spread by criminals who are not specifically after you, but are after some of your resources: you are not special to them. There are very few industries where nation-state hacking or competitive hacking is common. So when we talk about threat models, we mainly talk about insider threats within specific business units and how they work. This is what I think people are most afraid of: the exploitation of trust.

How do you normally advise executives in organisations about proper information security? Do you focus on building a proper security culture, on awareness training, or on technological/architectural means? What do you consider the most important thing they should keep in mind?

We need to implement lots of things. I believe that a lot of information security awareness training is misguided. It is not about teaching people how to recognise phishing or these sorts of things. It is about explaining to them why security is important and how they play a part in it.

Very few insiders develop overnight; I believe there is a pattern, and even then, insiders are rare. Most of the time you have admins who are trying to make themselves important, or who, out of vengeance, try to destroy things. So whenever you have destruction of information, you have to look at what kind of privileged access there is. Sometimes people copy things in bulk when they leave the company, to take to the company’s competitors.

So let’s say you develop a threat model and present it to the company, whose executives accept it and use it to develop a policy which they then implement and enforce. Sometimes these policies may clash with end-users’ performance and affect the way business within the company is done. Sometimes employees might resist new controls because privileges get taken away. How would you factor in this human aspect, in order to avoid this unwanted result?

Many companies impose new restrictions on their employees without analysing the unwanted results they may lead to. For example, if companies don’t facilitate a method for sharing large files, employees might resort to Dropbox, which could represent a potential threat. Smart companies learn that it is important to offer alternatives to the privileges they remove from their employees.

How do you go about identifying what the users need?

They will often tell you what it is they need, and they might even have a solution in mind. It’s really about offering their solutions securely. Rarely is it the case that you have to tell them that what they want is very stupid and that they simply should not do it.

Finally, apart from sharp technical skills, what other skills would you say security professionals need in order to qualify for a job?

You have to know the difference between imposing security and learning how to make others collaborate with security. Having good interpersonal skills is very important: you need to know how to convince people to change their behaviour.

Thank you Konrads.


Preventing Insider Attacks

An insider attack is one of the biggest threats faced by modern enterprises, and even a good working culture might not be sufficient to prevent it. Companies implement sophisticated technology to monitor their employees, but it’s not always easy for them to distinguish between an insider attack and an outside one.

Those who target and plan attacks from the outside might create strategies for obtaining insider knowledge and access by either resorting to an existing employee, or by making one of their own an insider.


According to CERT, a malicious insider is a current or former employee, contractor, or business partner who has or had authorised access to an organisation’s network, system or data and intentionally exceeded or misused that access in a manner that negatively affected the confidentiality, integrity, or availability of the organisation’s information. Furthermore, CERT splits insider crimes into three categories:

  • Insider IT Sabotage – the use of IT to direct specific harm at an organisation or an individual.
  • Insider Theft of Intellectual Property – the use of IT to steal proprietary information from an organisation.
  • Insider Fraud – the use of IT to add, modify and/or delete an organisation’s data in an unauthorised manner for personal gain. It also includes the theft of information needed for identity crime.

But how can companies detect and prevent such attacks?

In his paper, A Framework for Understanding and Predicting Insider Attacks, Eugene Schultz suggests that insiders make human errors which, when spotted, can help in preventing such threats. Therefore, constant monitoring, especially focused on low-level employees, is one of the basic measures for preventing insider attacks and gathering evidence.

There are a number of precursors of insider attacks that can help to identify and prevent them:

  • Deliberate markers – These are signs which attackers leave intentionally. They can be very obvious or very subtle, but they all aim to make a statement. Being able to identify the smaller, less obvious markers can help prevent the “big attack.”
  • Meaningful errors – Skilled attackers tend to try to cover their tracks by deleting log files, but error logs are often overlooked.
  • Preparatory behaviour – Collecting information, for example by testing countermeasures or permissions, is the starting point of any social engineering attack.
  • Correlated usage patterns – It is worthwhile to investigate patterns of computer usage across different systems. This can reveal a systematic attempt to collect information or test boundaries (see the toy sketch after this list).
  • Verbal behaviour – Collecting information or voicing dissatisfaction about the current working conditions may be considered one of the precursors of an insider attack.
  • Personality traits – A history of rule violation, drug or alcohol addiction, or inappropriate social skills may contribute to the propensity to commit an insider attack.
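As a toy illustration of the “correlated usage patterns” precursor, the sketch below (in Python) groups access events by user and flags anyone who touches an unusually large number of systems within a short window. The event format, system names and threshold are invented for the example and are not taken from Schultz’s paper.

# Flag users whose activity spans many different systems within one hour.
from collections import defaultdict
from datetime import datetime, timedelta

events = [
    # (user, system, timestamp) - hypothetical audit records
    ("alice", "hr-db", datetime(2015, 3, 2, 23, 10)),
    ("alice", "finance-share", datetime(2015, 3, 2, 23, 25)),
    ("alice", "source-repo", datetime(2015, 3, 2, 23, 40)),
    ("bob", "source-repo", datetime(2015, 3, 2, 10, 5)),
]

WINDOW = timedelta(hours=1)
THRESHOLD = 3  # distinct systems touched within the window

by_user = defaultdict(list)
for user, system, ts in events:
    by_user[user].append((ts, system))

for user, accesses in by_user.items():
    accesses.sort()
    for start, _ in accesses:
        systems = {s for ts, s in accesses if start <= ts <= start + WINDOW}
        if len(systems) >= THRESHOLD:
            print(f"review: {user} touched {sorted(systems)} within an hour")
            break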

A number of insider attackers are merely pawns for another inside or outside mastermind. Such a pawn is usually persuaded or trained to perpetrate or facilitate the attack, alone or in collusion with other (outside) agents, and is motivated by the expectation of personal gain.

Organisations may unknowingly make themselves vulnerable to insider attacks by not screening newcomers properly during recruitment, by not performing threat analyses, or by failing to monitor their company thoroughly. Perhaps the most important thing they overlook is keeping everybody’s morale high by communicating to employees that they are valued and trusted.


Understanding the Attackers


When defining attack vectors, it is useful to know who the attackers are. One should understand that attackers are people too, who differ in resources, motivation, ability and risk propensity. According to Bruce Schneier, author of Beyond Fear, the categories of attackers are:

Opportunists

The most common type of attacker. As the category indicates, they spot and seize an “opportunity” and are convinced that they will not get caught. It is easy to deter such attackers via cursory countermeasures.

Emotional attackers

They may accept a high level of risk and usually want to make a statement through their attack. The most common motivation for them is revenge against an organisation due to actual or perceived injustice. Although emotional attackers feel powerful when causing harm, they sometimes “hope to get caught” as a way of solving the issues they were unhappy with but were unable to change from the beginning.

Cold intellectual attackers

Skilled and resourceful professionals who attack for their own gain or are employed to do so. They target information, not the system, and often use insiders to get it. Unlike opportunists, cold intellectual attackers are not discouraged by cursory countermeasures.

Terrorists

They accept high risk to gain visibility and make a statement. They are not only hard to deter by cursory countermeasures, but can even see them as a thrill.

Friends and relations

They may introduce a problem to both individuals (in the form of financial fraud, for example) and companies (by abusing authorization credentials provided to legitimate employees). In this scenario, a victim and an attacker are sharing physical space, which makes it very easy to gain login and other sensitive information.