Can AI help improve security culture?

I’ve been exploring current applications of machine learning techniques to cybersecurity. Although there are strong use cases in areas such as log analysis and malware detection, I couldn’t find the same quantity of research on applying AI to the human side of cybersecurity.

Can AI be used to support the decision-making process when developing cyber threat prevention mechanisms in organisations and influence user behaviour towards safer choices? Can modelling adversarial scenarios help us better understand and protect against social engineering attacks?

To answer these questions, a multidisciplinary perspective should be adopted with technologists and psychologists working together with industry and government partners.

While designing such mechanisms, bear in mind that many interventions are perceived by users as harming their productivity, since they demand additional effort on security and privacy activities not necessarily related to their primary tasks [1, 2].

A number of researchers use principles from behavioural economics to identify cyber security “nudges” (e.g. [3, 4]) or visualisations [5, 6]. These help users make better decisions and minimise perceived effort by moving them away from their default position. The method is being applied in the privacy area, for example to reduce Facebook sharing [7] and improve smartphone privacy settings [8]. Additionally, such nudges are increasingly used as interventions, particularly during the installation of mobile applications [9].

The proposed socio-technical approach to reducing cyber threats aims to support the development of responsible and trustworthy people-centred AI solutions that can use data whilst maintaining personal privacy.

A combination of supervised and unsupervised learning techniques is already being employed to predict new threats and malware based on existing patterns. Machine learning techniques can be used to monitor system and human activity to detect potential malicious deviations.
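As a toy illustration of the kind of monitoring described above (my own sketch, not taken from any cited work), a simple unsupervised baseline can flag activity at unusual times by scoring each event against a user’s historical pattern; real deployments would use richer features and dedicated anomaly detection models:

```python
import statistics

def flag_anomalies(login_hours, threshold=2.0):
    """Flag login hours that deviate strongly from the user's baseline.

    A minimal unsupervised approach: score each login hour by its
    z-score against the historical mean and flag large deviations.
    """
    mean = statistics.mean(login_hours)
    stdev = statistics.stdev(login_hours)
    return [(hour, abs(hour - mean) / stdev > threshold) for hour in login_hours]

# A user who normally logs in around 9am, with one 3am outlier
flags = flag_anomalies([9, 9, 10, 8, 9, 10, 9, 8, 9, 3])
```

Here only the 3am login crosses the threshold; the same idea extends to file-access volumes, geolocation or process activity.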

Building adversarial models, designing empirical studies and running experiments (e.g. using Amazon’s Mechanical Turk) can help better measure the effectiveness of attackers’ techniques and develop better defence mechanisms. I believe there is a need to explore opportunities to utilise machine learning to aid the human decision-making process whereby people are supported by, and work together with, AI to better defend against cyber attacks.

We should draw upon participatory co-design and follow a people-centred approach so that relevant stakeholders are engaged in the process. This can help develop personalised and contextualised solutions, crucial to addressing ethical, legal and social challenges that cannot be solved with AI automation alone.


How to be a trusted advisor

Being a security leader is first and foremost acting as a trusted advisor to the business. This includes understanding its objectives and aligning your efforts to support and enable delivery on the wider strategy.

It is also about articulating cyber risks and opportunities and working with the executive team on managing them. This doesn’t mean, however, that your role is to highlight security weaknesses and leave it to the board to figure it all out. Instead, being someone they can turn to for advice is the best way to influence the direction and make the organisation more resilient in combating cyber threats.

For your advice to be effective, you first need to earn the right to offer it. One of the best books I’ve read on the subject is The Trusted Advisor by David H. Maister. It’s not a new book, and it’s written from the perspective of a professional services firm, but that doesn’t mean its lessons can’t be applied in a security context. It covers the mindset, attributes and principles of a trusted advisor.

Unsurprisingly, the major focus of this work is on developing trust. The author summarises his views on this subject in the trust equation:

Trust = (Credibility + Reliability + Intimacy) / Self-Orientation

It’s a simple yet powerful representation of what contributes to and hinders the trust building process.

It’s hard to trust someone’s recommendations when they don’t put our interests first and instead are preoccupied with being right or jump to solutions without fully understanding the problem.

Equally, as important as credibility is, a long list of professional qualifications and previous experience is not on its own sufficient to make you trustworthy. Having courage and integrity, following through on your promises and listening actively, among other things, are key. In the words of Maister, “it is not enough to be right, you must also be helpful”.

How to pass the Azure Fundamentals exam

Microsoft Certified Azure Fundamentals (AZ-900) is an entry-level qualification focusing on core cloud concepts and Azure services. It doesn’t have any prerequisites, so it’s a natural first step on your journey to mastering the Azure cloud.

If you have experience with cloud technologies from other providers, you might already be familiar with the basics of cloud computing, but that doesn’t mean you should skip this certification. GCP, AWS and Azure each have their own peculiarities and, despite seemingly similar offerings, there are significant differences in how their services are provisioned and configured.

Studying for this exam will give you a good overview of Azure-specific terminology and services, and will be useful regardless of your previous skill level.

I recommend the following free resources to help with your preparation:

How to apply FBI’s behavioural change stairway to security

Unlike the FBI’s Hostage Negotiation Team, cyber security professionals are rarely involved in high-stakes negotiations where human life is at risk. But that doesn’t mean we can’t apply some of the techniques they developed to improve security culture, overcome resistance and guide organisational change.

Behind its apparent simplicity, this model is a tried and tested way to influence human behaviour over time. The stairway runs from Active Listening, through Empathy and Rapport, to Influence and, finally, Behavioural Change. The crux of it is that you can’t skip any steps, as each builds on the ones before. A common mistake cyber security professionals make is jumping straight to Influence or Behavioural Change with phishing simulations or security awareness campaigns, but this can be counterproductive.

As explained in the original paper, it is recommended to invest time in active listening, empathy and establishing rapport first. In the security context, this might mean working with the business stakeholders to understand their objectives and concerns, rather than sowing fear of security breaches and regulatory fines.

All of this doesn’t mean you have to treat every interaction like a hostile negotiation or your business executives as violent felons. The aim is to build trust so you can best support the business, not to manipulate your way into getting an increased budget signed off. I cover some techniques in The Psychology of Information Security – feel free to check it out if you would like to learn more.

Guest lecture at Hull University Business School

I had the pleasure of delivering a guest lecture to students of Hull University Business School. I spoke about current cybersecurity trends and what they mean for businesses. I also used the opportunity to talk about the various career paths one can explore when starting in or transitioning into security.

There is a misconception that you have to have a technical background to be successful in cybersecurity. In reality, though, the field is so broad and roles so varied that diversity of backgrounds, skills, experiences and perspectives is essential to help organisations across the world be more secure. Arguably, communication skills and an understanding of the business can be more valuable in certain contexts than proficiency in a narrow vendor-specific technology.

There are a lot of transferable skills one can bring to the profession regardless of their prior cybersecurity knowledge and I urge everyone who is considering a transition into this field to reflect on how they can contribute.

How to assess security risks using the bow tie method

Bow tie risk diagrams are used in safety-critical environments like aviation, chemicals, and oil and gas. They visualise potential causes and consequences of hazardous events and highlight both preventative and recovery controls.

You don’t have to be a gas engineer or work for a rail operator to benefit from this tool, however. Cyber security professionals can use simplified bow tie diagrams to communicate security risks to non-technical audiences as they succinctly capture business consequences and their precursors on a single slide.

If you already work in one of the safety-critical industries, using this technique to represent cyber risks has the added benefit of aligning with the risk assessment patterns your engineers likely already use, increasing adoption and harmonising terminology.

There are templates available online and, depending on the purpose of the exercise, they can vary in complexity. However, if you are new to the technique and want to focus on improving your business communication when talking about cyber risk, I suggest starting with a simple PowerPoint slide.

Feel free to refer to my example diagram above, where I walk through a sensitive data exposure scenario. In it, exposure can occur through either a phishing attempt or a credential stuffing attack (supply chain and web application or infrastructure exposure being other vectors), leading to a variety of business consequences ranging from loss of funds to reputational damage. The figure also incorporates potential preventative barriers and recovery controls, applicable before and after the incident respectively.
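The scenario can also be captured as data rather than a drawing. The sketch below is my own (not a standard schema), with the threat and consequence names taken from the example scenario and the specific barriers added as illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class BowTie:
    """A minimal bow tie: the top event in the middle, threats with
    preventative barriers on the left, consequences with recovery
    controls on the right."""
    top_event: str
    threats: dict = field(default_factory=dict)       # threat -> preventative barriers
    consequences: dict = field(default_factory=dict)  # consequence -> recovery controls

exposure = BowTie(top_event="Sensitive data exposure")
exposure.threats["Phishing"] = ["Awareness training", "Email filtering"]
exposure.threats["Credential stuffing"] = ["Multi-factor authentication", "Rate limiting"]
exposure.consequences["Loss of funds"] = ["Incident response plan", "Cyber insurance"]
exposure.consequences["Reputational damage"] = ["Crisis communications plan"]
```

A structure like this is easy to render into a slide or diagram later, and keeps the left (prevention) and right (recovery) sides of the bow tie explicit.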

Business alignment framework for security

In my previous blogs on the role of the CISO, CISO’s first 100 days and developing security strategy and architecture, I described some of the points a security leader should consider initially while formulating an approach to supporting an organisation. I wanted to build on this and summarise some of the business parameters in a high-level framework that can be used as a guide to learn about the company in order to tailor a security strategy accordingly.

This framework can also be used as a due diligence cheat sheet while deciding on or prioritising potential opportunities – feel free to adapt it to your needs.


How to set up a bug bounty program

Bug bounty programmes are becoming the norm in larger software organisations, but you don’t have to be Google or Facebook to run one for continuous security testing and engagement with the security community.

Setting one up can be easier than you might think, as platforms like HackerOne and BugCrowd can help with centralised management. They also offer the option to introduce the programme gradually, starting with private participation before opening it to the whole world.

At a minimum, you can have a dedicated email address (e.g. security@yourexamplecompanyname) that security researchers can use to report security issues. Having a page transparently explaining the participation terms, scope and payout rate also helps. Additionally, it’s good to have appropriate tooling to track issues and verify fixes.
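One lightweight way to advertise that reporting channel is a security.txt file (standardised in RFC 9116) served at /.well-known/security.txt. The values below are placeholders of my own, not a working address or policy:

```text
Contact: mailto:security@example.com
Policy: https://example.com/responsible-disclosure
Expires: 2030-01-01T00:00:00.000Z
```

Researchers routinely check this path first, so it costs little and reduces the chance of a finding going unreported.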

Even if you don’t have any of the above, security researchers can still find vulnerabilities in your product and report them to you responsibly, so you effectively get free testing but can exercise limited control over it. Therefore, it’s a good idea to have a process in place to keep them happy enough to avoid them disclosing issues publicly.

There is probably nothing more frustrating for a security researcher than receiving no response (apart, perhaps, from being threatened with legal action), so communication is key. At the very least, thank them and request more information to help verify their finding while you kick off an internal investigation. Bonus points for keeping them in the loop on remediation plans, if appropriate.

There are some prerequisites for setting up a bug bounty programme, though. Beyond the obvious budget for paying researchers for the vulnerabilities they discover, engineering resources need to be available to analyse reported issues and work on improving the security of your products and services. What’s the point of setting up a bug bounty programme if no one is looking at the findings?

Many companies, therefore, might feel they are not ready for a bug bounty programme. They may have too many known issues already and fear they will be overwhelmed with duplicate submissions. These might indeed be problematic to manage, so such organisations are better off focusing their efforts on remediating known vulnerabilities and implementing measures to prevent them (e.g. setting up a Content Security Policy).
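As a hedged sketch of that last measure, a Content Security Policy is delivered as an HTTP response header; the directives below are a restrictive starting point and would need tailoring to the scripts and assets your application actually loads:

```text
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; frame-ancestors 'none'; base-uri 'self'
```

A policy like this limits where content can be loaded from, which blunts common cross-site scripting and clickjacking vectors before a bounty hunter ever finds them.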

They could also consider introducing security tests into the pipeline, as this will help catch potential vulnerabilities much earlier in the process, so the bug bounty programme serves as a fallback mechanism rather than the primary way of identifying security issues.

I’ve been named a CSO30 Awards 2020 winner

I am excited to be recognised as one of the top security executives who have demonstrated outstanding thought leadership and business value.

The winners of the CSO30 Award “demonstrated risk and security excellence in helping guide their organisations through the challenges of COVID19, worked to secure digital transformation initiatives, strengthen security awareness and education efforts, utilize new security technologies, engage with the wider security community to share learnings, and much more.”

It’s a team effort and I’m proud to be working with great professionals helping businesses innovate while managing risks.