Vulnerability scanning gone bad

Security teams often have good intentions when they want to improve the security posture of a company by introducing new tools.

In one organisation, for example, the security team wanted to mitigate the risk of application vulnerabilities being exploited and decided to deploy a code-scanning tool, so that applications would be tested for exploitable flaws before release. A great idea in principle, but uptake of the tool was surprisingly low and it created a lot of friction.

After closer examination, it turned out that this was primarily due to poor communication with the development teams that were expected to use the tool. The impacted teams weren’t sufficiently trained in its use and there wasn’t enough support from management to drive adoption.

Development teams work to tight timelines and budgets in order to meet business objectives. Anything that threatens those timelines and budgets is viewed with caution.

As a result, applications that should have had their code scanned either weren’t scanned at all, or were scanned at a much later stage of the development cycle. Scanning was not incorporated into the DevOps pipeline: the scans were run as a manual check before release into production. Not only did the risk of shipping applications with flaws remain largely unchanged, but the whole process of delivering working software was prolonged.
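Contrast that with scanning wired into the pipeline itself, where the check runs on every build and developers get feedback long before release. Below is a minimal sketch of such a gate in Python; the `scan-cli` command, its flags and the JSON report format are hypothetical stand-ins for whichever tool your organisation actually uses.

```python
#!/usr/bin/env python3
"""CI gate: fail the build when the code scan reports serious findings.

The `scan-cli` tool and its JSON report schema are hypothetical;
substitute the scanner actually used in your pipeline.
"""
import json
import subprocess
import sys


def main() -> int:
    # Run the scanner against the current checkout (hypothetical CLI).
    subprocess.run(
        ["scan-cli", "--target", ".", "--output", "report.json"],
        check=True,
    )

    with open("report.json") as fh:
        findings = json.load(fh).get("findings", [])

    high = [f for f in findings if f.get("severity") == "high"]
    for finding in high:
        print(f"HIGH: {finding.get('rule')} in {finding.get('file')}")

    if high:
        print(f"Build failed: {len(high)} high-severity finding(s).")
        return 1  # a non-zero exit code fails the CI stage

    print("Scan passed: no high-severity findings.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Running a gate like this on every commit keeps the feedback loop short, which addresses exactly the friction described above: developers fix findings as they appear, rather than facing a pile of them at a manual pre-release check.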

These new applications were being delivered to facilitate revenue growth or streamline existing processes to reduce cost and complexity. The impact on the business was that the new functionality it was expecting took longer to materialise, resulting in user frustration.

What can you do to prevent such situations from happening? Here are a few recommendations:

  1. Communicate frequently and at the right level. Communication must start at the top of an organisation and work its way down, so that priorities and expectations can be aligned. A person may need to hear the same message multiple times before they take action.
  2. Articulate the benefits. Security and risk teams need to ensure they position any new processes or tools in a way that highlights the benefits to each stakeholder group.
  3. Provide clear steps. In order to ensure the change is successful, security professionals should clearly outline the steps for how to start realising these benefits.

Communicating and providing support on new security policies, tools and practices to impacted teams is absolutely critical. This is especially important in large organisations with many stakeholder groups spread across multiple geographies. Always keep people in mind when introducing a change, even if it’s a change for the better.


Transparency in security


I was asked to deliver a keynote in Germany at the Security Transparent conference. Of course, I agreed. Transparency in security is one of the topics that is very close to my heart and I wish professionals in the industry not only talked about it more, but also applied it in practice.

Back in the old days, security through obscurity was one of the many defence layers security professionals employed to protect against attackers. On the surface, it’s hard to argue with such logic: the less the adversary knows about our systems, the less likely they are to find a vulnerability that can be exploited.

There are some disadvantages to this approach, however. For one, you now need to tightly control access to restricted information about the system to limit the possibility of leaking sensitive details of its design. But this also limits the scope for testing: if only a handful of people are allowed to inspect the system for security flaws, the chances of actually discovering them are greatly reduced, especially when it comes to complex systems. Cryptographers were among the first to realise this. One of Kerckhoffs’s principles states that “a cryptosystem should be secure even if everything about the system, except the key, is public knowledge”.

Modern encryption algorithms are not only completely open to the public, exposing them to intense scrutiny, but have often been developed in the open, as is the case, for example, with AES, which was selected through a public competition. If a vendor boasts about using their own proprietary encryption algorithm, I suggest giving them a wide berth.
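This is Kerckhoffs’s principle in action. In the sketch below (Python, using the open-source `cryptography` package), the algorithm, the nonce and even the ciphertext can all be made public; the security of the scheme rests on the key alone.

```python
# Kerckhoffs's principle in practice: AES-GCM is a fully public,
# heavily scrutinised algorithm; security rests on the key alone.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the only secret
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # public, but must be unique per message
ciphertext = aesgcm.encrypt(nonce, b"meet at noon", None)

# The algorithm, the nonce and the ciphertext can all be disclosed
# without weakening the scheme; only the key must remain protected.
assert aesgcm.decrypt(nonce, ciphertext, None) == b"meet at noon"
```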

Cryptography aside, you can approach transparency from many different angles: the way you handle personal data, respond to a security incident or work with your partners and suppliers. All of these and many more deserve the attention of the security community. We need to move away from ambiguous privacy policies and the desire to save face by not disclosing a security breach affecting our customers or downplaying its impact.

The way you communicate internally and externally while enacting these changes within an organisation matters a lot, which is why I focused on this communication element while presenting at Security Transparent 2019. I also talked about friction between security and productivity and the need for better alignment between security and the business.

I shared some stories from behavioural economics, criminology and social psychology to demonstrate that the challenges we face in information security are not always unique – we can often look to other, seemingly unrelated fields and borrow and adapt what works for them. Applying lessons learned from other disciplines when it comes to transparency and understanding people is essential when designing security that works, especially if your aim is to move beyond compliance and be an enabler to the business.

Remember, people are employed to do a particular job: unless you’re hired as an information security specialist, your job is not to be an expert in security. In fact, badly designed and implemented security controls can prevent you from doing your job effectively by reducing your productivity.

After all, even Kerckhoffs recognised the importance of context and the fatigue that security can place on people. One of his lesser-known principles states that “given the circumstances in which it is to be used, the system must be easy to use and should not be stressful to use or require its users to know and comply with a long list of rules”. He was a wise man indeed.


Understanding your threat landscape

Identifying applicable threats is a good step to take before defining the security controls your organisation should put in place. There are various techniques to help you with threat modelling, but I wanted to give you some high-level pointers in this blog to get you started. Of course, all of these should be tailored to your specific business.

I find it useful to think about potential attacks as three broad categories:

1. Commoditised attacks. Usually not targeted and involve off-the-shelf malware: think mass phishing campaigns or generic ransomware delivered opportunistically.

2. Tailored attacks. As the name suggests, these are tailored to a specific organisation and can vary in degree of sophistication, from targeted spear phishing to long-running intrusion campaigns.

3. Accidental. Not every data breach is triggered by a malicious actor, so it is important to recognise that mistakes happen. Unfortunately, they sometimes lead to undesired consequences: sensitive data emailed to the wrong recipient, for example, or a database left exposed to the internet.

Information security professionals can use the above examples in communications with their business stakeholders not to spread fear, but to present certain security challenges in context.

It’s often helpful to make it a bit more personal by defining specific threat actors, their targets, motivation and impact on the business. Again, the breakdown below serves as an example and can be used as a starting point for you to define your own.

  • Organised crime (international hacking groups). Motivation: financial gain. Targets: commercial data; personal data for identity fraud. Impact on business: reputational damage, regulatory fines, loss of customer trust.
  • Insider (intentional or unintentional). Motivation: human error, grudge, financial gain. Targets: intellectual property; commercial data. Impact on business: destruction or alteration of information, theft of information, reputational damage, regulatory fines.
  • Competitors (espionage and sabotage). Motivation: competitive advantage. Targets: intellectual property; commercial information. Impact on business: disruption or destruction, theft of information, reputational damage, loss of customers.
  • State-sponsored (espionage). Motivation: political. Targets: intellectual property; commercial data; personal data. Impact on business: theft of information, reputational damage.
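Once defined, a register like this is easier to keep current and to query if you capture it as structured data. Here is a minimal sketch in Python; the `ThreatActor` structure and its field names are illustrative choices, not a standard schema:

```python
from dataclasses import dataclass, field


@dataclass
class ThreatActor:
    """One entry of the threat register described above (illustrative)."""
    name: str
    description: str
    motivation: str
    targets: list[str] = field(default_factory=list)
    business_impact: list[str] = field(default_factory=list)


register = [
    ThreatActor(
        name="Organised crime",
        description="International hacking groups",
        motivation="Financial gain",
        targets=["Commercial data", "Personal data (identity fraud)"],
        business_impact=["Reputational damage", "Regulatory fines",
                         "Loss of customer trust"],
    ),
    ThreatActor(
        name="Insider",
        description="Intentional or unintentional",
        motivation="Human error, grudge, financial gain",
        targets=["Intellectual property", "Commercial data"],
        business_impact=["Destruction or alteration of information",
                         "Theft of information", "Regulatory fines"],
    ),
]

# Example query: which actors put personal data at risk, and what is
# the business impact if they succeed?
for actor in register:
    if any("Personal data" in target for target in actor.targets):
        print(f"{actor.name}: {', '.join(actor.business_impact)}")
```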

You can then use your understanding of assets and threats relevant to your company to identify security risks. For instance:

  • Failure to comply with relevant regulation – revenue loss and reputational damage due to fines and unwanted media attention as a result of non-compliance with GDPR, PCI DSS, etc.
  • Breach of personal data – regulatory fines, potential litigation and loss of customer trust due to accidental mishandling, external system compromise or insider threat leading to exposure of personal data of customers
  • Disruption of operations – decreased productivity or inability to trade due to compromise of IT systems by malicious actor, denial of service attacks, sabotage or employee error

Again, feel free to use these as examples, but always tailor them based on what’s important to your business. It’s also worth remembering that this is not a one-off exercise. Tracking your assets, threats and risks should be part of your security management function and be incorporated into operational risk management and continuous improvement cycles.

This will allow you to demonstrate the value of security through pragmatic and prioritised security controls, focusing on protecting the most important assets, ensuring alignment to business strategy and embedding security into the business.


Cyber security in divestments

A company may divest its assets for a number of reasons: political, social or purely financial, in order to free up resources to focus on the core business. Regulators may also demand a divestment to prevent one company from holding a monopoly. When such a decision is made, the security function can support the business by managing risks during the process. These risks include not only the obvious legal and regulatory compliance ones, but also risks related to business disruption and leaks of intellectual property or other sensitive information. Security teams can also help the business identify value-adding opportunities, for instance by saving costs on software licences.

The scale of a divestment varies and depends on the nature of the organisation: it can range from a single subsidiary to a whole division. Information usually accompanies physical assets, which opens up potential challenges with data governance when these assets change hands. The magnitude of such risks differs depending on the specific conditions of the deal, for example:

  • Number of assets in scope
  • Criticality of assets
  • Location of assets and applicable jurisdictions

In my experience, divestments are almost always associated with aggressive timelines for completion, usually enshrined in legally binding agreements. Therefore, as a security professional, the last thing you want to do is slow down the process and prevent the business from meeting these timelines.

You need to balance this, however, against the risk exposure. It helps when the security team gets involved early and supports the process from the start. All too often, though, the business asks for security sign-off only after the deal has been finalised. This can be disappointing, particularly when a number of data transfer requirements have already been violated.

So if you’re one of the lucky ones, and the business is asking for your advice on divesting securely, what should you tell them? What areas do you consider? Here are some examples to get you started:

  • Information asset inventories and data maps. These might include data, software and infrastructure assets. You can’t securely transfer something you don’t know exists, so start by establishing visibility and interdependencies.
  • Access control. Who has access to what? Do they need that access? Will they need it in the future? Segregation of duties and least privilege are not just abstract philosophical concepts – they have real applications when it comes to divestments (see the sketch after this list).
  • Consider legal and regulatory requirements when it comes to data asset transfer, retention and disposal. Involve your legal team, but don’t forget about technical controls, like encryption and secure data wipes.
  • Availability of skilled resources and a mature IT function on the ‘buy’ side. Remember, whoever is buying the assets must have their infrastructure ready to support the acquisition and integration of the new assets. Although often perceived as a ‘buyer’s problem’, risks like these can negatively impact the overall project and should be considered.
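On the access control point, a pre-transfer access review can start as something very simple: compare who currently has access to the divested assets against who is approved to retain it after the handover. A minimal sketch, with made-up asset and account names:

```python
# Pre-transfer access review for divested assets (illustrative data:
# asset and account names are made up). Flags accounts whose access
# should be revoked before the handover to the buyer.

# Current access grants: asset -> accounts that can reach it.
current_access = {
    "crm-database": {"alice", "bob", "svc-reporting"},
    "hr-fileshare": {"carol", "bob"},
}

# Accounts approved to retain access once the asset transfers.
approved_for_transfer = {
    "crm-database": {"alice"},
    "hr-fileshare": {"carol"},
}

for asset, accounts in current_access.items():
    to_revoke = accounts - approved_for_transfer.get(asset, set())
    for account in sorted(to_revoke):
        print(f"Revoke {account!r} on {asset!r} before transfer")
```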

All in all, the divestment process can be challenging but the early integration of security professionals ensures the appropriate oversight is given to all relevant areas for a smooth transfer to the buyer.


Behavioural science in cyber security

Why your staff ignore security policies and what to do about it.               

Dale Carnegie’s 1936 bestselling self-help book How To Win Friends And Influence People is one of those titles that sits unloved and unread on most people’s bookshelves. But dust off its cover and crack open its spine, and you’ll find lessons and anecdotes that are relevant to the challenges associated with shaping people’s behaviour when it comes to cyber security.

In one chapter, Carnegie tells the story of George B. Johnson, from Oklahoma, who worked for a local engineering company. Johnson’s role required him to ensure that other employees abided by the organisation’s health and safety policies. Among other things, he was responsible for making sure other employees wore their hard hats when working on the factory floor.

His strategy was as follows: if he spotted someone not following the company’s policy, he would approach them, admonish them, quote the regulation at them and insist on compliance. And it worked, albeit briefly. The employee would put on their hard hat, and as soon as Johnson left the room, they would just as quickly remove it. So he tried something different: empathy. Rather than addressing them from a position of authority, Johnson spoke to his colleagues almost as though he were their friend and expressed a genuine interest in their comfort. He wanted to know whether the hats were uncomfortable to wear, and whether that was why they didn’t wear them on the job.

Instead of simply reciting the rules chapter and verse, he merely mentioned that it was in the employees’ best interests to wear their helmets, because they were designed to prevent workplace injuries.

This shift in approach bore fruit, and workers felt more inclined to comply with the rules. Moreover, Johnson observed that employees were less resentful of management.

The parallels between cyber security and George B. Johnson’s battle to ensure health-and-safety compliance are immediately obvious. Our jobs require us to adequately address the security risks that threaten the organisations we work for. To be successful at this, it’s important to ensure that everyone appreciates the value of security — not just engineers, developers, security specialists, and other related roles.

This isn’t easy. On one hand, failing to implement security controls can result in an organisation facing significant losses. However, badly-implemented security mechanisms can be worse: either by obstructing employee productivity or by fostering a culture where security is resented.

To ensure widespread adoption of secure behaviour, security policy and control implementations not only have to accommodate the needs of those that use them, but they also must be economically attractive to the organisation. To realise this, there are three factors we need to consider: motivation, design, and culture.



NIS Directive: are you ready?


Governments across Europe recognised that, with increased interconnectedness, a cyber incident can affect multiple entities spanning a number of countries. Moreover, the impact and frequency of cyber attacks are at an all-time high, with recent examples including:

  • 2017 WannaCry ransomware attack
  • 2016 attacks on US water utilities
  • 2015 attack on Ukraine’s electricity network

In order to manage this cyber risk, the European Union introduced the Network and Information Systems (NIS) Directive, which requires all Member States to protect their critical national infrastructure by implementing cyber security legislation.

Each Member State is required to set its own rules on financial penalties and must take the necessary measures to ensure they are implemented. In the UK, for example, fines can be up to £17 million.

And yes, in case you are wondering, the UK government has confirmed that the Directive will apply irrespective of Brexit (the NIS Regulations come into effect before the UK leaves the EU).

Who does the NIS Directive apply to?

The law applies to:

  • Operators of Essential Services that are established in the EU
  • Digital Service Providers that offer services to persons within the EU

The sectors affected by the NIS Directive are:

  • Water
  • Health (hospitals, private clinics)
  • Energy (gas, oil, electricity)
  • Transport (rail, road, maritime, air)
  • Digital infrastructure and service providers (e.g. DNS service providers)
  • Financial Services (only in certain Member States e.g. Germany)

NIS Directive objectives

In the UK the NIS Regulations will be implemented in the form of outcome-focused principles rather than prescriptive rules.

The National Cyber Security Centre (NCSC) is the UK’s single point of contact for the legislation. It has published top-level objectives with underlying security principles.

Objective A – Managing security risk

  • A1. Governance
  • A2. Risk management
  • A3. Asset management
  • A4. Supply chain

Objective B – Protecting against cyber attack

  • B1. Service protection policies and processes
  • B2. Identity and access control
  • B3. Data security
  • B4. System security
  • B5. Resilient networks and systems
  • B6. Staff awareness

Objective C – Detecting cyber security events

  • C1. Security monitoring
  • C2. Proactive security event discovery

Objective D – Minimising the impact of cyber security incidents

  • D1. Response and recovery planning
  • D2. Lessons learned

A table view of the principles and related guidance is also available on the NCSC website.

Cyber Assessment Framework

The implementation of the NIS Directive can only be successful if Competent Authorities can adequately assess the cyber security of organisations in scope. To assist with this, the NCSC developed the Cyber Assessment Framework (CAF).

The Framework is based on the 14 outcome-focused principles of the NIS Regulations outlined above. Adherence to each principle is determined by how well the associated outcomes are met.

Each outcome is assessed based upon Indicators of Good Practice (IGPs): statements that can be either true or false for a particular organisation.
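As a rough illustration of how such an assessment can roll up (the outcome names and IGP statements below are paraphrased placeholders, not quotations from the CAF, and the all-or-nothing scoring is a simplification of the framework’s achieved / partially achieved / not achieved levels):

```python
# Illustrative roll-up of a CAF-style self-assessment. Each outcome is
# judged by true/false Indicators of Good Practice (IGPs). The outcome
# names and statements are paraphrased placeholders, and the
# "all IGPs true = achieved" rule simplifies the framework's
# achieved / partially achieved / not achieved levels.

outcome_igps = {
    "B2.a Identity verification": [
        ("Users are uniquely identified before access is granted", True),
        ("Privileged access requires additional authentication", False),
    ],
    "C1.a Monitoring coverage": [
        ("Security events from critical systems are collected", True),
        ("Logs are retained long enough to investigate incidents", True),
    ],
}

for outcome, igps in outcome_igps.items():
    status = "achieved" if all(met for _, met in igps) else "not achieved"
    print(f"{outcome}: {status}")
    for statement, met in igps:
        print(f"  [{'x' if met else ' '}] {statement}")
```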

What’s next?

If your organisation is in the scope of the NIS Directive, it is useful to conduct an initial self-assessment using the CAF described above as a starting point. Remember, a formal self-assessment will be required by your Competent Authority, so it is better not to delay this crucial step.

Establishing an early dialogue with the Competent Authority is essential as this will not only help you establish the scope of the assessment (critical assets), but also allow you to receive additional guidance from them.

An initial self-assessment will most probably highlight some gaps. It is important to outline a plan to address these gaps and share it with your Competent Authority. Make sure you keep incident response in mind at all times: the process has to be well defined to allow you to report NIS-specific incidents to your Competent Authority within 72 hours.

Remediate the findings in the agreed time frames and monitor on-going compliance and potential changes in requirements, maintaining the dialogue with the Competent Authority.


Augusta University’s Cyber Institute adopts my book


Just received some great news from my publisher.  My book has been accepted for use on a course at Augusta University. Here’s some feedback from the course director:

Augusta University’s Cyber Institute adopted the book “The Psychology of Information Security” as part of our Masters in Information Security Management program because we feel that the human factor plays an important role in securing and defending an organisation. Understanding behavioural aspects of the human element is important for many information security managerial functions, such as developing security policies and awareness training. Therefore, we want our students to not only understand technical and managerial aspects of security, but psychological aspects as well.