Vulnerability scanning gone bad

Security teams often have good intentions when they introduce new tools to improve a company’s security posture.

In one organisation, for example, the security team wanted to mitigate the risk of application vulnerabilities being exploited and decided to deploy a code-scanning tool to make sure applications were tested before release. A great idea, but uptake of the tool was surprisingly low and it created a lot of friction.

On closer examination, it turned out that this was primarily due to poor communication with the development teams that had to use the tool. The affected teams weren’t sufficiently trained in its use, and there wasn’t enough management support for adoption.

Development teams work to tight timelines and budgets in order to meet business objectives, so anything that might disrupt them is viewed with caution.

As a result, applications that should have had their code scanned either hadn’t, or were scanned at a much later stage of the development cycle. Scanning was not incorporated into the DevOps pipeline; instead, the scans were run as a manual check before release into production. Not only did the risk of shipping flawed applications remain largely unchanged, but the whole process of delivering working software was prolonged.
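One way to remove that friction is to make scanning a gating step inside the pipeline itself, so developers get feedback on every commit rather than at release time. Below is a minimal sketch in Python of such a gate, assuming a hypothetical scanner CLI called `code-scanner` that prints JSON findings; the command, its flags and the report fields are illustrative, not from any specific product.

```python
import json
import subprocess
import sys

# Hypothetical scanner CLI; substitute your organisation's actual tool.
SCANNER_CMD = ["code-scanner", "--format", "json", "."]

# Severities the team has agreed should block a release.
BLOCKING_SEVERITIES = {"critical", "high"}

def main() -> int:
    # Run the scanner and capture its JSON report from stdout.
    result = subprocess.run(SCANNER_CMD, capture_output=True, text=True)
    findings = json.loads(result.stdout or "[]")

    # Keep only the findings severe enough to block on.
    blocking = [f for f in findings if f.get("severity") in BLOCKING_SEVERITIES]
    for f in blocking:
        print(f"[{f['severity']}] {f.get('rule')}: {f.get('file')}:{f.get('line')}")

    # A non-zero exit code fails the CI stage, and with it the build.
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(main())
```

Run as a pipeline step, this turns the scan into fast, automatic feedback instead of a manual pre-release bottleneck.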

These new applications were being delivered to facilitate revenue growth or to streamline existing processes to reduce cost and complexity. The impact on the business was that the new functionality it was expecting took longer to materialise, resulting in user frustration.

What can you do to prevent such situations from happening? Here are a few recommendations:

  1. Communicate frequently and at the right level. Communication must start at the top of an organisation and work its way down, so that priorities and expectations can be aligned. A person may need to hear the same message multiple times before they take action.
  2. Articulate the benefits. Security and risk teams need to ensure they position any new processes or tools in a way that highlights the benefits to each stakeholder group.
  3. Provide clear steps. In order to ensure the change is successful, security professionals should clearly outline the steps for how to start realising these benefits.

Communicating and providing support on new security policies, tools and practices to impacted teams is absolutely critical. This is especially important in large organisations with many stakeholder groups spread across multiple geographies. Always keep people in mind when introducing a change, even if it’s a change for the better.


Innovating in the age of GDPR


Customers are becoming increasingly aware of their rights when it comes to data privacy, and they expect companies to safeguard the data entrusted to them. With the introduction of the GDPR, a lot of companies had to think about privacy for the first time.

I’ve been invited to share my views on innovating in the age of GDPR as part of the Cloud and Cyber Security Expo in London.

When I was preparing for this panel, I tried to understand why this was even a topic to begin with. Why should innovation stop? If your business model is threatened by the GDPR, then you are clearly doing something wrong: it means your business model relied on exploiting consumers.

But when I thought about it a bit more, I realised that there are also costs to demonstrating compliance to the regulator that a company has to account for. This is arguably easier for bigger companies with established compliance teams than for smaller start-ups, so it can serve as a barrier to entry. Geography also plays a role here. What if a tech firm starts in the US or India, for example, where the regulatory regime is more relaxed when it comes to protecting customer data, and then expands to Europe when it can afford it? At first glance at least, companies starting up in Europe are at a disadvantage, as they face potential regulatory scrutiny from day one.

How big a problem is this? I’ve read people complaining that you need fancy lawyers who understand technology to address this challenge. I would argue, however, that fancy lawyers are only required when you are doing shady things with customer data. Smaller companies that are just starting up have another advantage on their side: they are new. This means they don’t have to go back and retrospectively purge legacy systems of data collected over the years, potentially breaking the business logic of interdependent systems. Instead, they start with a clean slate and have an opportunity to build privacy into their products and core business processes (privacy by design).

Risk may increase as the company grows and collects more data, but I find that this risk-based approach is often missing. How you implement your privacy programme will depend on your risk profile and appetite, and the level of risk will vary with the type and amount of data you collect. For example, a bank can receive thousands of subject access requests per month, while a small B2B company might receive one a year. Their privacy programmes will therefore be vastly different: the bank might look into technology-enabled automation, while the small company might look into outsourcing its subject request process. It is important to note, however, that risk can’t be fully outsourced; the company still owns it at the end of the day.

The market is moving towards technology-enabled privacy processes: automating privacy impact assessments, responding to customer requests, managing and responding to incidents, etc.

I also see the focus shifting from regulatory-driven privacy compliance to a broader data strategy. Companies are increasingly interested in understanding how they can use data as an asset rather than a liability. They are looking for ways to effectively manage marketing consents and opt-outs and to give power and control back to the customer, for example by creating preference centres.

Privacy is more about the philosophy of handling personal data than about specific technology tricks. This mindset can itself lead to innovation rather than stifle it. How can you solve a customer’s problem while collecting the minimum amount of personal data? Can the data be anonymised? Think of personal data like toxic waste – sure, it can be handled, but only with extreme care.
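To make the toxic-waste analogy concrete, here is a minimal Python sketch of pseudonymising a customer identifier before a record leaves the system of record; the field names and the vault-held secret are illustrative assumptions. Note that under the GDPR pseudonymised data is still personal data – true anonymisation requires removing the ability to re-identify altogether.

```python
import hashlib
import hmac

# Illustrative secret held only by the system of record (e.g. in a vault);
# without it, the token below cannot be linked back to the customer.
PEPPER = b"replace-with-a-secret-from-a-vault"

def pseudonymise(customer_id: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(PEPPER, customer_id.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"customer_id": "cust-42", "country": "UK", "plan": "premium"}

# Downstream analytics only ever sees the token, never the raw identifier.
safe_record = {**record, "customer_id": pseudonymise(record["customer_id"])}
print(safe_record)
```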


Cyber Security: Law and Guidance


I’m proud to be one of the contributors to the newly published Cyber Security: Law and Guidance book.

Although the primary focus of this book is on cyber security law and data protection, no discussion is complete without mentioning who all these measures aim to protect: the people.

I draw on my research and practical experience to present a case for a new approach to cyber security and data protection that places people at its core.

Check it out!


NIS Directive: are you ready?


Governments across Europe recognised that, with increased interconnectedness, a cyber incident can affect multiple entities spanning a number of countries. Moreover, the impact and frequency of cyber attacks are at an all-time high, with recent examples including:

  • 2017 WannaCry ransomware attack
  • 2016 attacks on US water utilities
  • 2015 attack on Ukraine’s electricity network

In order to manage cyber risk, the European Union introduced the Network and Information Systems (NIS) Directive which requires all Member States to protect their critical national infrastructure by implementing cyber security legislation.

Each Member State is required to set its own rules on financial penalties and must take the necessary measures to ensure that they are implemented. For example, in the UK, fines can be up to £17 million.

And yes, in case you are wondering, the UK government has confirmed that the Directive will apply irrespective of Brexit (the NIS Regulations come into effect before the UK leaves the EU).

Who does the NIS Directive apply to?

The law applies to:

  • Operators of Essential Services that are established in the EU
  • Digital Service Providers that offer services to persons within the EU

The sectors affected by the NIS Directive are:

  • Water
  • Health (hospitals, private clinics)
  • Energy (gas, oil, electricity)
  • Transport (rail, road, maritime, air)
  • Digital infrastructure and service providers (e.g. DNS service providers)
  • Financial Services (only in certain Member States e.g. Germany)

NIS Directive objectives

In the UK the NIS Regulations will be implemented in the form of outcome-focused principles rather than prescriptive rules.

The National Cyber Security Centre (NCSC) is the UK’s single point of contact for the legislation. It has published top-level objectives with underlying security principles.

Objective A – Managing security risk

  • A1. Governance
  • A2. Risk management
  • A3. Asset management
  • A4. Supply chain

Objective B – Protecting against cyber attack

  • B1. Service protection policies and processes
  • B2. Identity and access control
  • B3. Data security
  • B4. System security
  • B5. Resilient networks and systems
  • B6. Staff awareness

Objective C – Detecting cyber security events

  • C1. Security monitoring
  • C2. Proactive security event discovery

Objective D – Minimising the impact of cyber security incidents

  • D1. Response and recovery planning
  • D2. Lessons learned

A table view of the principles and related guidance is also available on the NCSC website.

Cyber Assessment Framework

The implementation of the NIS Directive can only be successful if Competent Authorities can adequately assess the cyber security of organisations in scope. To assist with this, the NCSC developed the Cyber Assessment Framework (CAF).

The Framework is based on the 14 outcome-focused principles of the NIS Regulations outlined above. Adherence to each principle is determined by how well the associated outcomes are met. See below for an example:

[Figure: example of a CAF principle and its associated outcomes]

Each outcome is assessed based upon Indicators of Good Practice (IGPs), which are statements that can either be true or false for a particular organisation.
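To illustrate, here is a minimal Python sketch of recording IGP answers and rolling them up into an outcome rating; the IGP statements are paraphrased and the roll-up rule is a simplified illustration, not the NCSC’s exact scoring.

```python
# Illustrative IGPs for one outcome: each statement is simply true or
# false for the organisation being assessed.
igps = {
    "A board-level individual is accountable for security": True,
    "Security risks are reviewed at least annually": True,
    "The board regularly discusses security": False,
}

def assess_outcome(indicators: dict[str, bool]) -> str:
    """Roll IGP answers up into an outcome rating (simplified rule)."""
    met = sum(indicators.values())
    if met == len(indicators):
        return "achieved"
    return "partially achieved" if met else "not achieved"

print(assess_outcome(igps))  # -> partially achieved
```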

What’s next?

If your organisation is in the scope of the NIS Directive, it is useful to conduct an initial self-assessment using the CAF described above as a starting point of reference. Remember, a formal self-assessment will be required by your Competent Authority, so it is better not to delay this crucial step.

Establishing an early dialogue with the Competent Authority is essential as this will not only help you establish the scope of the assessment (critical assets), but also allow you to receive additional guidance from them.

The initial self-assessment will most probably highlight some gaps. It is important to outline a plan to address these gaps and share it with your Competent Authority. Make sure you keep incident response in mind at all times: the process has to be well defined to allow you to report NIS-specific incidents to your Competent Authority within 72 hours.

Remediate the findings within the agreed time frames and monitor ongoing compliance and potential changes in requirements, maintaining the dialogue with the Competent Authority.


Using SABSA for application security

Aligning the OWASP Application Security Verification Standard with the SABSA architecture framework.

The OWASP Application Security Verification Standard (the Standard) is used at one of my clients to help develop and maintain secure applications. It has been used as a blueprint to create a secure coding checklist specific to the organisation and the applications in use.

Below is an excerpt from the Standard related to the authentication verification requirements:

[Table: excerpt from the Standard’s authentication verification requirements]

The Standard provides guidance on specific security requirements corresponding to the Physical layer of the SABSA architecture.

[Figure: SABSA architecture views]
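To make the alignment concrete, a secure coding checklist can be generated from requirements tagged with the SABSA layer they support. A minimal Python sketch follows; the requirement texts are paraphrased for illustration rather than quoted from the Standard.

```python
# Each entry pairs an ASVS-style requirement with the SABSA layer it
# supports; texts are paraphrased, not quoted from the Standard.
requirements = [
    {"id": "V2.1", "text": "All pages and resources require authentication by default",
     "layer": "Physical"},
    {"id": "V2.2", "text": "Authentication controls are enforced server-side",
     "layer": "Physical"},
]

def checklist_for(layer: str) -> list[str]:
    """Build a secure coding checklist for one SABSA layer."""
    return [f"{r['id']}: {r['text']}" for r in requirements if r["layer"] == layer]

for item in checklist_for("Physical"):
    print("[ ]", item)
```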



How to conduct a cyber security assessment


I remember conducting a detailed security assessment of UK and US gas and electricity generation and distribution networks using the CSET (Cybersecurity Evaluation Tool) developed by the US Department of Homeland Security. The question set contained around 1,200 weighted and prioritised questions and resulted in a very detailed report.

Was it useful?

The main lesson was that, beyond a certain threshold, tracking every grain of sand is no longer effective.

The value, however, came from going deeper into specific controls and almost treating the exercise as an audit, where a category is graded lower if no or insufficient evidence is provided. Sometimes this is the only way to provide insight into how security is being managed across the estate.

Why?

What’s apparent in some companies – especially in financial services – is that they run experiments, often using impressive technology. But have they turned these experiments into a standard and rolled it out consistently across the organisation? The answer is often ‘no’, especially if the company is geographically distributed.

I’ve done a lot of assessments and benchmarking exercises against NIST CSF, ISO 27001, ISF IRAM2 and other standards since that CSET engagement and developed a set of questions that cover the areas of the NIST Cybersecurity Framework.

I felt the need to streamline the process, so I developed a tool to present the scores clearly and help benchmark against the industry. I usually propose a tailor-made questionnaire that includes 50–100 suitable questions from the question bank. From my experience in these assessments, the answers are not binary. Yes, a capability might be present, but the real questions are:

  • How is it implemented?
  • How consistently is it being rolled out?
  • Can you actually show me the evidence?

So it’s very much about seeking the facts.

As I’ve mentioned, the process might not be the most pleasant for the parties involved but it is the one that delivers the most value for the leadership.

What about maturity?

I usually map the scores to the CMMI (Capability Maturity Model Integration) levels of:

  • Initial
  • Managed
  • Defined
  • Quantitatively managed
  • Optimised

But I also consider the NIST Cybersecurity Framework implementation tiers. These are not strictly maturity levels; rather, the higher tiers point to a more complete implementation of the CSF. Tiers 1–4 run from Partial through Risk Informed and Repeatable to, finally, Adaptive.

The key here is not just the ultimate score but the relation of the score to the coverage across the estate.
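As a minimal sketch of that idea, the roll-up below weights each answer by both its question weight and its estate coverage before mapping the result to a CMMI level; the weights, coverage figures and thresholds are all made up for illustration.

```python
# Illustrative answers: (question weight, score 0-5, estate coverage 0-1).
answers = [
    (3, 4, 0.9),  # e.g. asset management: strong and widely rolled out
    (2, 5, 0.4),  # e.g. monitoring: excellent, but only in one region
    (1, 2, 1.0),  # e.g. supplier assurance: weak, everywhere
]

CMMI_LEVELS = ["Initial", "Managed", "Defined",
               "Quantitatively managed", "Optimised"]

def weighted_score(items) -> float:
    """Weight each score by question weight *and* estate coverage."""
    total_weight = sum(w for w, _, _ in items)
    return sum(w * s * c for w, s, c in items) / total_weight

def maturity(score: float) -> str:
    """Map a 0-5 weighted score onto a CMMI level (illustrative thresholds)."""
    return CMMI_LEVELS[min(int(score), len(CMMI_LEVELS) - 1)]

score = weighted_score(answers)
print(f"Weighted score {score:.2f} -> {maturity(score)}")  # 2.80 -> Defined
```

Coverage pulls the score down even where the underlying capability is strong, which is exactly the relationship the leadership needs to see.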


Building a security culture

Building on the connection between breaking security policies and cheating, let’s look at a study[1] that asked participants to solve 20 simple maths problems and promised 50 cents for each correct answer.

The participants were allowed to check their own answers and then shred the answer sheet, leaving no evidence of any potential cheating. The results demonstrated that participants reported solving, on average, five more problems than under conditions where cheating was not possible (i.e. control conditions).

The researchers then introduced David – a student who was tasked to raise his hand shortly after the experiment began and proclaim that he had solved all the problems. The other participants were obviously shocked by such a statement: it was clearly impossible to solve all the problems in only a few minutes. The experimenter, however, didn’t question his integrity and suggested that David shred the answer sheet and take all the money from the envelope.

Interestingly, other participants’ behaviour adapted as a result: they reported solving, on average, eight more problems than under control conditions.

Much like the broken windows theory mentioned in my previous blog, this demonstrates that unethical behaviour is contagious, as are acts of non-compliance. If employees in a company witness other people breaking security policies and not being punished, they are tempted to do the same. It becomes socially acceptable and normal. This is the root cause of poor security culture.

The good news is that the opposite holds true as well. That’s why security culture has to have strong senior management support. Leading by example is the key to changing the perception of security in the company: if employees see that the leadership team takes security seriously, they will follow.

So, security professionals should focus on how security is perceived. This point is outlined in three basic steps in the book The Social Animal, by David Brooks:[2]

  1. People perceive a situation.
  2. People estimate if the action is in their long-term interest.
  3. People use willpower to take action.


He claims that, historically, people were mostly focused on the last two steps of this process. In the previous blog I argued that relying solely on willpower has a limited effect. Willpower can be exercised like a muscle, but it is also prone to atrophy.

In regard to the second step of the decision-making process, one might expect that people who are reminded of the potential negative consequences would refrain from acting. Brooks, however, refers to ineffective HIV/AIDS awareness campaigns, which focused only on the negative consequences and ultimately failed to change people’s behaviour.

He also suggests that most diets fail because willpower and reason are not strong enough to confront impulsive desires: “You can tell people not to eat the French fry. You can give them pamphlets about the risks of obesity … In their nonhungry state, most people will vow not to eat it. But when their hungry self rises, their well-intentioned self fades, and they eat the French fry”.

This doesn’t only apply to dieting: when people want to get their job done and security gets in the way, they will circumvent it, regardless of the degree of risk they might expose the company to.

That is why perception is the cornerstone of the decision-making process. Employees have to be taught to see security violations in a particular way that minimises the temptation to break policies.

In ‘Strangers to Ourselves’, Timothy Wilson claims, “One of the most enduring lessons of social psychology is that behaviour change often precedes changes in attitudes and feelings”.[3]

Security professionals should understand that there is no single event that alters users’ behaviour – changing security culture requires regular reinforcement, creating and sustaining habits.

Charles Duhigg, in his book The Power of Habit,[4] tells a story about Paul O’Neill, the CEO of the Aluminum Company of America (Alcoa), who was determined to make his enterprise the safest in the country. At first, people were confused that the newly appointed executive was not talking about profit margins or other finance-related metrics. They didn’t see the link between his ‘zero-injuries’ goal and the company’s performance. Despite that, Alcoa’s profits reached a historic high within a year of his announcement. When O’Neill retired, the company’s annual income was five times greater than it had been before his arrival. Moreover, it had become one of the safest companies in the world.

Duhigg explains this phenomenon by highlighting the importance of the “keystone habit”. Alcoa’s CEO identified safety as such a habit and focused solely on it.

O’Neill had a challenging goal to transform the company, but he couldn’t just tell people to change their behaviour. He said, “that’s not how the brain works. So I decided I was going to start by focusing on one thing. If I could start disrupting the habits around one thing, it would spread throughout the entire company.”

He recalled an incident when one of his workers died trying to fix a machine despite the safety procedures and warning signs. The CEO called an emergency meeting to understand what had caused this tragic event.

He took personal responsibility for the worker’s death, identifying numerous shortcomings in safety education. For example, the training programme didn’t highlight the fact that employees wouldn’t be blamed for machinery failure or the fact that they shouldn’t commence repair work before finding a manager.

As a result, the policies were updated and the employees were encouraged to suggest safety improvements. Workers, however, went a step further and started suggesting business improvements as well. Changing their behaviour around safety led to some innovative solutions, enhanced communication and increased profits for the company.

Security professionals should understand the importance of group dynamics and influences to build an effective security culture.

They should also remember that just as ‘broken windows’ encourage policy violations, changing one security habit can encourage better behaviour across the board.

References:

[1] Francesca Gino, Shahar Ayal and Dan Ariely, “Contagion and Differentiation in Unethical Behavior: The Effect of One Bad Apple on the Barrel”, Psychological Science, 20(3), 2009, 393–398.

[2] David Brooks, The Social Animal: The Hidden Sources of Love, Character, and Achievement, Random House, 2011.

[3] Timothy Wilson, Strangers to Ourselves, Harvard University Press, 2004, 212.

[4] Charles Duhigg, The Power of Habit: Why We Do What We Do and How to Change, Random House, 2013.

To find out more about building a security culture, read Leron’s book, The Psychology of Information Security. Twitter: @le_rond