NIS Directive: are you ready?


Governments across Europe recognised that, with increased interconnectedness, a cyber incident can affect multiple entities across a number of countries. Moreover, the impact and frequency of cyber attacks are at an all-time high, with recent examples including:

  • 2017 WannaCry ransomware attack
  • 2016 attacks on US water utilities
  • 2015 attack on Ukraine’s electricity network

In order to manage cyber risk, the European Union introduced the Network and Information Systems (NIS) Directive which requires all Member States to protect their critical national infrastructure by implementing cyber security legislation.

Each Member State is required to set its own rules on financial penalties and must take the necessary measures to ensure that they are implemented. In the UK, for example, fines can be up to £17 million.

And yes, in case you are wondering, the UK government has confirmed that the Directive will apply irrespective of Brexit (the NIS Regulations come into effect before the UK leaves the EU).

Who does the NIS Directive apply to?

The law applies to:

  • Operators of Essential Services that are established in the EU
  • Digital Service Providers that offer services to persons within the EU

The sectors affected by the NIS Directive are:

  • Water
  • Health (hospitals, private clinics)
  • Energy (gas, oil, electricity)
  • Transport (rail, road, maritime, air)
  • Digital infrastructure and service providers (e.g. DNS service providers)
  • Financial Services (only in certain Member States e.g. Germany)

NIS Directive objectives

In the UK the NIS Regulations will be implemented in the form of outcome-focused principles rather than prescriptive rules.

The National Cyber Security Centre (NCSC) is the UK's single point of contact for the legislation. It has published top-level objectives with underlying security principles.

Objective A – Managing security risk

  • A1. Governance
  • A2. Risk management
  • A3. Asset management
  • A4. Supply chain

Objective B – Protecting against cyber attack

  • B1. Service protection policies and processes
  • B2. Identity and access control
  • B3. Data security
  • B4. System security
  • B5. Resilient networks and systems
  • B6. Staff awareness

Objective C – Detecting cyber security events

  • C1. Security monitoring
  • C2. Proactive security event discovery

Objective D – Minimising the impact of cyber security incidents

  • D1. Response and recovery planning
  • D2. Lessons learned

A table view of the principles and related guidance is also available on the NCSC website.

Cyber Assessment Framework

The implementation of the NIS Directive can only be successful if Competent Authorities can adequately assess the cyber security of organisations in scope. To assist with this, the NCSC developed the Cyber Assessment Framework (CAF).

The Framework is based on the 14 outcome-focused principles of the NIS Regulations outlined above. Adherence to each principle is determined by how well the associated outcomes are met. See below for an example:

(Figure: an example CAF principle with its associated outcomes)

Each outcome is assessed based upon Indicators of Good Practice (IGPs), which are statements that can either be true or false for a particular organisation.
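
To illustrate how this could be represented, the sketch below rolls a set of true/false IGP statements up into an achievement level for one contributing outcome. This is a simplified illustration, not the official NCSC scoring methodology; the IGP statements are abbreviated examples.

```python
# Simplified sketch of a CAF-style outcome assessment. Illustrative
# only: the IGP statements and roll-up logic are assumptions, not the
# official NCSC methodology.
from dataclasses import dataclass


@dataclass
class Outcome:
    name: str
    igps: dict  # IGP statement -> True/False for this organisation

    def achievement(self) -> str:
        met = sum(self.igps.values())  # True counts as 1
        if met == len(self.igps):
            return "Achieved"
        if met == 0:
            return "Not achieved"
        return "Partially achieved"


outcome = Outcome(
    name="B2.a Identity verification, authentication and authorisation",
    igps={
        "Users are individually authenticated before access is granted": True,
        "Privileged operations require additional validation": False,
    },
)

print(f"{outcome.name}: {outcome.achievement()}")  # Partially achieved
```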

What's next?

If your organisation is in the scope of the NIS Directive, it is useful to conduct an initial self-assessment using the CAF described above as a starting point. Remember, a formal self-assessment will be required by your Competent Authority, so it is better not to delay this crucial step.

Establishing an early dialogue with the Competent Authority is essential as this will not only help you establish the scope of the assessment (critical assets), but also allow you to receive additional guidance from them.

The initial self-assessment will most probably highlight some gaps. It is important to outline a plan to address these gaps and share it with your Competent Authority. Make sure you keep incident response in mind at all times: the process has to be well defined to allow you to report NIS-specific incidents to your Competent Authority within 72 hours.

Remediate the findings within the agreed time frames, and monitor ongoing compliance and potential changes in requirements, maintaining the dialogue with the Competent Authority.


Risk management fundamentals


The focus of many of my projects is on risk. Through multiple assessments across various companies and industries, I've observed a lack of a formalised risk management process. Plans may exist, but they are not linked to specific risks, and risk reduction levels are not measured or reported on appropriately.

The security function can be effective in responding to incidents, but strategic, risk-driven planning is often missing. The root cause of this state of affairs can often be generalised as low maturity of the security function. If that's the case, the team spends most of its time fighting fires and has little capacity to address the challenges that cause these fires in the first place.

To address this, I assess the current state of the security function, define the target maturity level and then develop a high-level roadmap to achieve that desired state.

If the company is geographically distributed, noticeable differences usually exist between business units in terms of the overall policy framework. The suggestion here is to define a baseline level of security controls across the entire enterprise. The first step in defining these is to understand what we are trying to protect: the assets.

Modern corporations own a wide range of assets that enable them to operate and grow, broadly including physical and non-physical assets, people and reputation. Engaging the appropriate parts of the business to identify these assets is important, as attacks on them might negatively affect operations.

By understanding the assets, we are able to better identify risks, enable effective detection and response, and prioritise controls and remediation efforts.

It also helps to conduct a bottom-up review of assets to understand exactly what we've got, focusing on the most critical ones and creating and updating asset inventories.
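
As a minimal illustration, the sketch below records assets with an owner, a category and a criticality rating, then sorts the most critical to the top. The fields and the 1-5 scale are illustrative assumptions.

```python
# Minimal sketch of an asset inventory. The fields and the 1-5
# criticality scale are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Asset:
    name: str
    owner: str
    category: str     # e.g. physical, information, people, reputation
    criticality: int  # assumed scale: 1 (low) to 5 (critical)


inventory = [
    Asset("Customer database", "Head of IT", "information", 5),
    Asset("Payroll system", "HR Director", "information", 4),
    Asset("Head office building", "Facilities", "physical", 3),
]

# Review the most critical assets first.
for asset in sorted(inventory, key=lambda a: a.criticality, reverse=True):
    print(asset.criticality, asset.name, "- owned by", asset.owner)
```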

Understanding the asset base and setting standards and guidance for protecting it will focus your efforts and help you prevent, and better respond to, security issues.

Assets are tightly linked to threat actors, because it's not enough to know what we need to protect: we also need to know whom we are protecting our assets against. Threat actors vary in their motivation and ability and, depending on the company, include nation states, organised crime, insiders, hacktivists and competitors.

A combination of assets and threats helps us to define risks.

Identifying risks and placing them on a heat map helps determine the inherent, residual and target risk levels. Inherent risk shows the level of risk assuming all controls or remediating measures are absent or failing. Think of it as if the security function didn't exist. It's not a happy place: the majority of risks have high impact and likelihood, sitting in the top right-hand corner of the chart.

Luckily, the security function does exist, and even without a formalised risk management process it is usually doing a good job of addressing some of these risks.

The current, or residual, level of risk takes into account all the controls and remediating measures in place. The initial impact and likelihood are usually reduced, sometimes to an acceptable level agreed with the business. The idea here is that although further reduction of impact and likelihood is possible, it might not be cost-effective. In other words, the money might be better spent addressing other risks.

Target risk is the future-state risk level once additional controls and remediation measures have been implemented by the security team.
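
To make the three levels concrete, here is a minimal sketch scoring one risk as impact × likelihood on 1-5 scales. The scales, figures and appetite threshold are illustrative assumptions, not a prescribed methodology.

```python
# Sketch of inherent, residual (current) and target risk levels using
# a simple impact x likelihood score. All figures are illustrative.

def risk_score(impact: int, likelihood: int) -> int:
    """Both inputs on an assumed 1-5 scale; higher is worse."""
    return impact * likelihood


# One risk (e.g. ransomware on a critical system) assessed three ways:
inherent = risk_score(impact=5, likelihood=5)  # controls absent or failing
residual = risk_score(impact=4, likelihood=2)  # with current controls
target = risk_score(impact=3, likelihood=1)    # after planned measures

risk_appetite = 6  # assumed level the business has agreed to accept
for name, score in [("inherent", inherent), ("residual", residual), ("target", target)]:
    verdict = "within appetite" if score <= risk_appetite else "above appetite"
    print(f"{name:>8}: {score:2} ({verdict})")
```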

The main takeaway here is that a formalised risk management approach (with accompanying processes and policies) is needed to ensure all risks are identified and tracked over time, and the appropriate resources and efforts are spent on the top priority risks.


How to conduct a cyber security assessment


I remember conducting a detailed security assessment of UK and US gas and electricity generation and distribution networks using the Cyber Security Evaluation Tool (CSET) developed by the US Department of Homeland Security. The question set contained around 1,200 weighted and prioritised questions and resulted in a very detailed report.

Was it useful?

The main learning was that tracking every grain of sand is no longer effective after a certain threshold.

The value-add, however, came from going deeper into specific controls and treating the exercise almost as an audit, where a category is graded lower if no or insufficient evidence is provided. Sometimes this is the only way to provide insight into how security is being managed across the estate.

Why?

What's apparent in some companies, especially in financial services, is that they conduct experiments, often using impressive technology. But have they made these experiments into a standard and rolled it out consistently across the organisation? The answer is often ‘no’, especially if the company is geographically distributed.

I’ve done a lot of assessments and benchmarking exercises against NIST CSF, ISO 27001, ISF IRAM2 and other standards since that CSET engagement and developed a set of questions that cover the areas of the NIST Cybersecurity Framework.

Feeling the need to streamline the process, I developed a tool to present the scores clearly and help benchmark against the industry. I usually propose a tailor-made questionnaire of 50-100 suitable questions drawn from the question bank. From my experience in these assessments, the answers are not binary. Yes, a capability might be present, but the real questions are:

  • How is it implemented?
  • How consistently is it being rolled out?
  • Can you actually show me the evidence?

So it’s very much about seeking the facts.
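
One way this fact-seeking could be reflected in a scoring model is to discount self-reported answers that lack supporting evidence. The sketch below is purely illustrative: the questions, weights and 50% discount are assumptions, not a published methodology.

```python
# Sketch of evidence-discounted questionnaire scoring. The questions,
# weights and discount factor are illustrative assumptions.

EVIDENCE_DISCOUNT = 0.5  # assumed: halve the credit for unevidenced answers

questions = [
    # (question, weight, self-reported score 0-1, evidence provided?)
    ("Is multi-factor authentication enforced for remote access?", 3, 1.0, True),
    ("Is the capability rolled out across all business units?", 2, 0.8, False),
    ("Are asset inventories reviewed at a defined frequency?", 1, 0.5, True),
]

weighted_total = 0.0
max_total = 0.0
for text, weight, score, evidenced in questions:
    credited = score if evidenced else score * EVIDENCE_DISCOUNT
    weighted_total += weight * credited
    max_total += weight

print(f"Overall score: {weighted_total / max_total:.0%}")
```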

As I’ve mentioned, the process might not be the most pleasant for the parties involved but it is the one that delivers the most value for the leadership.

What about maturity?

I usually map the scores to the CMMI (Capability Maturity Model Integration) levels of:

  • Initial
  • Managed
  • Defined
  • Quantitatively managed
  • Optimised

But I also consider the NIST Cybersecurity Framework implementation tiers, which are not strictly maturity levels; rather, the higher tiers indicate a more complete implementation of the CSF. They run from Tier 1 to Tier 4: Partial, Risk Informed, Repeatable and, finally, Adaptive.
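
As a sketch, a normalised assessment score can be bucketed into the five CMMI levels. The thresholds below are illustrative assumptions; neither CMMI nor the CSF prescribes a score-to-level mapping.

```python
# Sketch: bucket a normalised score (0-1) into CMMI maturity levels.
# The cut-offs are illustrative assumptions, not part of CMMI itself.

CMMI_LEVELS = ["Initial", "Managed", "Defined", "Quantitatively managed", "Optimised"]


def cmmi_level(score: float) -> str:
    cutoffs = [0.2, 0.4, 0.6, 0.8]  # assumed upper bound per level
    for level, cutoff in zip(CMMI_LEVELS, cutoffs):
        if score < cutoff:
            return level
    return CMMI_LEVELS[-1]


print(cmmi_level(0.55))  # -> Defined
```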

The key here is not just the ultimate score, but how that score relates to coverage across the estate.


The Psychology of Information Security book reviews


I wrote about my book in the previous post. Here I would like to share what others have to say about it.

So often information security is viewed as a technical discipline – a world of firewalls, anti-virus software, access controls and encryption. An opaque and enigmatic discipline which defies understanding, with a priesthood who often protect their profession with complex concepts, language and most of all secrecy.

Leron takes a practical, pragmatic and no-holds barred approach to demystifying the topic. He reminds us that ultimately security depends on people – and that we all act in what we see as our rational self-interest – sometimes ill-informed, ill-judged, even downright perverse.

No approach to security can ever succeed without considering people – and as a profession we need to look beyond our computers to understand the business, the culture of the organisation – and most of all, how we can create a security environment which helps people feel free to actually do their job.
David Ferbrache OBE, FBCS
Technical Director, Cyber Security
KPMG UK

This is an easy-to-read, accessible and simple introduction to information security.  The style is straightforward, and calls on a range of anecdotes to help the reader through what is often a complicated and hard to penetrate subject.  Leron approaches the subject from a psychological angle and will be appealing to both those of a non-technical and a technical background.
Dr David King
Visiting Fellow of Kellogg College
University of Oxford



Digital decisions: Understanding behaviours for safer cyber environments


I was invited to participate in a panel discussion at a workshop on digital decision-making and risk-taking hosted by the Decision, Attitude, Risk & Thinking (DART) research group at Kingston Business School.

During the workshop, we addressed the human dimension in issues arising from increasing digital interconnectedness with a particular focus on cyber security risks and cyber safety in web-connected organisations.

We identified behavioural challenges in cyber security such as insider threats, phishing emails, security culture and achieving stakeholder buy-in. We also outlined a potential further research opportunity which could tackle behavioural security risks inherent in the management of organisational information assets.



Correlation vs Causation

(Chart: the number of films featuring Nicolas Cage per year plotted against swimming pool drownings, a spurious correlation)

Scientists in various fields adopt statistical methods to determine relationships between events and assess the strength of such links. Security professionals performing risk assessments are also interested in determining what events are causing the most impact.

When analysing historical data, however, they should remember that correlation doesn’t always imply causation. When patterns of events look similar, it may lead you to believe that one event causes the other. But as demonstrated by the chart above, it is highly unlikely that seeing Nicolas Cage on TV causes people to jump into the pool (although it may in some cases).

This and other spurious correlations can be found on this website, with an option to create your own.
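
The effect is easy to reproduce: any two short series that happen to trend together will show a high correlation coefficient regardless of any causal link. A minimal sketch with made-up numbers:

```python
# Pearson correlation of two made-up, unrelated series. The numbers
# are fabricated purely for illustration.
import statistics


def pearson(xs: list, ys: list) -> float:
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


films_per_year = [2, 2, 3, 4, 1, 1, 2]             # made up
pool_drownings = [102, 98, 109, 120, 90, 85, 100]  # made up

# Prints a coefficient close to 1, despite there being no causal link.
print(f"correlation: {pearson(films_per_year, pool_drownings):.2f}")
```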


The root causes of a poor security culture within the workplace


Demonstrating to employees that security is there to make their life easier, not harder, is the first step in developing a sound security culture. But before we discuss the actual steps to improve it, let’s first understand the root causes of poor security culture.

Security professionals must understand that bad habits and behaviours tend to be contagious. Malcolm Gladwell, in his book The Tipping Point,[1] discusses the conditions that allow some ideas or behaviours to “spread like viruses”. He refers to the broken windows theory to illustrate the power of context. This theory advocates stopping smaller crimes by maintaining the environment in order to prevent bigger ones. The claim goes that a broken window left for several days in a neighbourhood would trigger more vandalism. The small defect signals a lack of care and attention on the property, which in turn implies that crime will go unpunished.

Gladwell describes the efforts of George Kelling, who employed the theory to fight vandalism on the New York City subway system. He argued that cleaning up graffiti on the trains would prevent further vandalism. Gladwell concluded that this several-year-long effort resulted in a dramatically reduced crime rate.

Despite ongoing debate regarding the causes of the 1990s crime rate reduction in the US, the broken windows theory can be applied in an information security context.

Security professionals should remember that minor policy violations tend to lead to bigger ones, eroding the company’s security culture.

The psychology of human behaviour should be considered as well

Sometimes people are not motivated to comply with a security policy because they simply don’t see the financial impact of violating it.

Dan Ariely, in his book The Honest Truth about Dishonesty,[2] tries to understand why people break the rules. Among other experiments, he describes a survey conducted among golf players to determine the conditions in which they would be tempted to move the ball into a more advantageous position, and if so, which method they would choose. The golfers were offered three different options: they could use their club, use their shoe or simply pick the ball up using their hands.

Although all of these options break the rules, they were designed in this way to determine whether one method of cheating is more psychologically acceptable than others. The results of the study demonstrated that moving the ball with a club was the most common choice, followed by the shoe and, finally, the hand. It turned out that physical and psychological distance from the ‘immoral’ action makes people more likely to act dishonestly.

It is important to understand that the ‘distance’ described in this experiment is merely psychological. It doesn’t change the nature of the action.

In a security context, employees will usually be reluctant to steal confidential information, just as golfers will refrain from picking up a ball with their hand to move it to a more favourable position, because that would make them directly involved in the unethical behaviour. However, employees might download a peer-to-peer sharing application to listen to music while at work, as the impact of this action is less obvious. This can potentially lead to even bigger losses due to even more confidential information being stolen from the corporate network.

Security professionals can use this finding to remind employees of the true meaning of their actions. Breaking security policy does not seem to have a direct financial impact on the company – there is usually no perceived loss, so it is easy for employees to engage in such behaviour. Highlighting this link and demonstrating the correlation between policy violations and the business’s ability to generate revenue could help employees understand the consequences of non-compliance.

References:

[1] Malcolm Gladwell, The Tipping Point: How Little Things Can Make a Big Difference, Little, Brown, 2006.

[2] Dan Ariely, The Honest Truth about Dishonesty, Harper, 2013.

Image by txmx 2 https://flic.kr/p/pFqvpD

To find out more about the behaviours behind information security, read Leron’s book, The Psychology of Information Security. Twitter: @le_rond