A Causal Loop Diagram of The Happy Path Testing Pattern, Acquisition Archetypes, Carnegie Mellon University
Product security is more than running code scanning tools and facilitating pentests. Yet that’s what many security teams focus on. Secure coding is not a standalone discipline; it’s about developing systems that are safe. It starts with organisational culture, embedding the right behaviours and building on existing code quality practices.
I recently had a chance to collaborate with researchers at The Optus Macquarie University Cyber Security Hub. Their interdisciplinary approach brings industry practitioners and academics from a variety of backgrounds to tackle the most pressing cyber security challenges our society and businesses face today.
Both academia and industry practitioners can and should learn from each other. Industry can guide problem definition and provide access to data, but it can also learn to apply the scientific method and test its hypotheses. We often assume the solutions we implement lead to risk reduction, but how this is measured is not always clear. Designing experiments and using research techniques can help bring the necessary rigour when delivering and assessing outcomes.
I had an opportunity to work on some exciting projects: helping build an AI-powered cyber resilience simulator, developing phone scam detection capability and investigating the role of human psychology in improving authentication protocols. I deepened my understanding of modern machine learning techniques, such as topic extraction and emotion analysis, and how they can be applied to solve real-world problems. I also had the privilege of contributing to a research publication presenting our findings, so watch this space for updates next year.
While in quarantine after arriving in Australia, I had a chance to catch up on some learning.
I completed two specialisation tracks on Coursera offered by Macquarie Business School as part of their Global MBA programme. The courses covered a variety of topics, including negotiations, change management, storytelling, board engagement, innovation, strategic management, sustainability, supply chains and more.
NISTIR 7756 Contextual Description of the CAESARS System
Knowing your existing assets, threats and countermeasures is a necessary step in establishing a starting point to begin prioritising cyber risk management activities. Indeed, when driving the improvement of the security posture in an organisation, security leaders often begin with getting a view of the effectiveness of security controls.
A common approach is to perform a security assessment that involves interviewing stakeholders and reviewing policies in line with a security framework (e.g. NIST CSF).
A report is then produced presenting the current state and highlighting the gaps. It can then be used to gain wider leadership support for a remediation programme, justifying the investment for security uplift initiatives. I wrote a number of these reports myself while working as a consultant and also internally in the first few weeks of being a CISO.
These reports have a lot of merit, but they also have limitations. They are, by definition, point-in-time: the document is out of date the day after it’s produced, or even sooner. The threat landscape has already shifted, the state of assets and controls has changed, and business context and priorities are no longer the same.
Some exciting news – I have relocated to Australia đŸ‡¦đŸ‡º
I’m honoured to be awarded the Distinguished Talent (now called Global Talent) visa for my ‘internationally recognised record of exceptional and outstanding achievement’ in cyber security.
Although I will miss the UK, my friends and colleagues there, I look forward to the next adventures in Sydney.
As someone who worked for both large multinationals and small tech startups, I’m often asked whether the scale of the organisation matters when building security culture.
I think it does. Managing stakeholders and communication gets increasingly complex in larger organisations. In a fully connected group, the number of communication paths grows quadratically: every new stakeholder adds a potential path to each person already in the network.
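The growth of communication paths can be made concrete with the standard project-management formula for pairwise links, n(n−1)/2 (each pair of stakeholders is one path). A minimal sketch:

```python
def communication_paths(n: int) -> int:
    """Number of pairwise communication paths in a fully connected
    group of n stakeholders: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# Growth is quadratic: doubling the group roughly quadruples the paths.
for n in (5, 10, 50):
    print(f"{n} stakeholders -> {communication_paths(n)} paths")
```

Going from a 5-person startup team to a 50-person programme multiplies the possible paths more than a hundredfold, which is why communication strategy matters so much more at scale.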
I’ve had the privilege of advising a number of smaller companies at the beginning of their journey and I must admit it’s much more effective to embed secure behaviours from the start. We talk about security by design in the context of technical controls – it’s no different with security culture.
While working as a consultant, I helped large corporations with that challenge too. The key is to start small and focus on the behaviours you want to influence, keeping stakeholder engagement in mind. Active listening, empathy and rapport building are essential – just rolling out an eLearning module is unlikely to be effective.
I’ve been featured in an eBook by Thales sharing my thoughts on the challenges organisations face on their Zero Trust journey and how to overcome them. It’s a huge topic that can be approached from different angles and it’s certainly difficult to capture in a single quote. However, asset management should be an important consideration regardless of the implementation model.
I had the privilege of engaging with NHS Digital as an external consultant in a technical architect capacity to help enhance their cyber security capabilities. NHS Digital continues to play an important role in the current pandemic in the UK and it was an honour to be able to contribute to the security of their operations.
The worst time to write a security incident response plan is during an incident itself. Anticipating adverse events, preparing playbooks for likely scenarios and testing them in advance are important facets of a wider cyber resilience strategy.
Incident response, however, is not only about technology, logs and forensic investigation – managing communication is equally important. It is often a compliance requirement to notify the relevant regulator and customers about a data breach or a cyber incident, so having a plan, as well as an internal and external communication strategy, is key.
Security incidents can quickly escalate into a crisis depending on their scale and impact. There are lessons we can learn from other disciplines when it comes to crisis communication.
One of the best examples is offered by the Centers for Disease Control and Prevention (CDC). The resources, tools and training materials they have created and made freely available online have been tested in emergency situations around the world, including the Covid-19 pandemic.
CDC’s Crisis and Emergency Risk Communication (CERC) manuals and templates emphasise the six core principles of crisis communication:
1. Be first. Quickly sharing information about an incident can help stop the spread, and prevent or reduce impact. Even if the cause is unknown, share facts that are available.
2. Be right. Accuracy establishes credibility. Information should include what is known, what is not known, and what is being done to fill in the information gaps.
3. Be credible. Honesty, timeliness, and scientific evidence encourage the public to trust your information and guidance. Acknowledge when you do not have enough information to answer a question and then work with the appropriate experts to get an answer.
4. Express empathy. Acknowledging what people are feeling and their challenges shows that you are considering their perspectives when you give recommendations.
5. Promote action. Keep action messages simple, short, and easy to remember.
6. Show respect. Respectful communication is particularly important when people feel vulnerable. Respectful communication promotes cooperation and rapport.
Cyber security professionals can adopt the above principles in crisis situations during a cyber incident, demonstrating commitment and competence and communicating with transparency and empathy both inside and outside of the organisation.
In the DevSecOps paradigm, the need for manual testing and review is minimised in favour of speed and agility. Security input should be provided as early as possible, and at every stage of the process. Automation, therefore, becomes key. Responsibility for quality and security as well as decision-making power should also shift to the local teams delivering working software. Appropriate security training must be provided to these teams to reduce the reliance on dedicated security resources.
I created a diagram illustrating a simplified software development lifecycle to show where security-enhancing practices, input and tests are useful. The process should be understood as a continuous cycle but is represented as a straight line for ease of reading.
There will, of course, be variations in this process – the one used in your organisation might be different. The principles presented here, however, can be applied to any development lifecycle, your specific pipeline and tooling.
I deliberately kept this representation tool- and vendor-agnostic. For example tools and techniques at every stage, feel free to check out the DevSecOps tag on this site.
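To illustrate the idea that automated security input can attach to every stage of the lifecycle, here is a minimal, tool-agnostic sketch in Python. The stage names and the checks mapped to them are illustrative assumptions, not a prescription for any particular pipeline:

```python
# Hypothetical sketch: SDLC stages mapped to example security-enhancing
# practices, and a tiny iterator that walks them in order, as a CI
# runner might. All stage and check names are illustrative.

SECURITY_BY_STAGE = {
    "design":  ["threat modelling", "security requirements"],
    "code":    ["secure coding training", "pre-commit secret scanning"],
    "build":   ["static analysis (SAST)", "dependency (SCA) scanning"],
    "test":    ["dynamic analysis (DAST)", "security unit tests"],
    "release": ["artifact scanning", "infrastructure-as-code review"],
    "operate": ["monitoring and alerting", "incident response playbooks"],
}

def pipeline(stages=SECURITY_BY_STAGE):
    """Yield (stage, check) pairs in lifecycle order."""
    for stage, checks in stages.items():
        for check in checks:
            yield stage, check

for stage, check in pipeline():
    print(f"{stage}: {check}")
```

The point of structuring it this way is that the checks live alongside the stages rather than in a separate security silo: swapping a tool in or out changes a list entry, not the shape of the process.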