As someone who has worked for both large multinationals and small tech startups, I’m often asked whether the scale of an organisation matters when building security culture.
I think it does. Managing stakeholders and communication gets increasingly complex in larger organisations. In fact, the number of possible communication paths grows quadratically: every new stakeholder adds a path to each person already in the network.
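This growth follows the well-known handshake formula: among n stakeholders there are up to n(n-1)/2 distinct pairwise communication paths. A minimal sketch:

```python
def communication_paths(n: int) -> int:
    """Number of distinct pairwise communication paths among n stakeholders."""
    return n * (n - 1) // 2

# Doubling the number of stakeholders roughly quadruples the paths:
for n in (5, 10, 50):
    print(n, communication_paths(n))
```

With 5 stakeholders there are 10 paths; with 50 there are already 1,225 – which is why communication that works informally in a startup needs deliberate structure in a multinational.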
I’ve had the privilege of advising a number of smaller companies at the beginning of their journey, and I must admit it’s much more effective to embed secure behaviours from the start. We talk about security by design in the context of technical controls – it’s no different with security culture.
While working as a consultant, I helped large corporations with that challenge too. The key is to start small and focus on the behaviours you want to influence, keeping stakeholder engagement in mind. Active listening, empathy and rapport building are essential – just rolling out an eLearning module is unlikely to be effective.
I’ve been featured in an eBook by Thales, sharing my thoughts on the challenges organisations face on their Zero Trust journey and how to overcome them. It’s a huge topic that can be approached from different angles, and it’s certainly difficult to capture in a single quote. However, asset management should be an important consideration regardless of the implementation model.
I had the privilege of engaging with NHS Digital as an external consultant in a technical architect capacity to help enhance their cyber security capabilities. NHS Digital continues to play an important role in the current pandemic in the UK and it was an honour to be able to contribute to the security of their operations.
The worst time to write a security incident response plan is during an incident itself. Anticipating adverse events, preparing playbooks for likely scenarios and testing them in advance are all important facets of a wider cyber resilience strategy.
Incident response, however, is not only about technology, logs and forensic investigation – managing communication is equally important. It is often a compliance requirement to notify the relevant regulator and customers about a data breach or a cyber incident, so having a plan, as well as an internal and external communication strategy, is key.
Security incidents can quickly escalate into a crisis depending on their scale and impact. There are lessons we can learn from other disciplines when it comes to crisis communication.
One of the best examples is offered by the Centers for Disease Control and Prevention (CDC). The resources, tools and training materials they have created and made freely available online have been tested in emergency situations around the world, including the Covid-19 pandemic.
CDC’s Crisis and Emergency Risk Communication (CERC) manuals and templates emphasise the six core principles of crisis communication:
1. Be first. Quickly sharing information about an incident can help stop the spread, and prevent or reduce impact. Even if the cause is unknown, share facts that are available.
2. Be right. Accuracy establishes credibility. Information should include what is known, what is not known, and what is being done to fill in the information gaps.
3. Be credible. Honesty, timeliness, and scientific evidence encourage the public to trust your information and guidance. Acknowledge when you do not have enough information to answer a question and then work with the appropriate experts to get an answer.
4. Express empathy. Acknowledging what people are feeling and their challenges shows that you are considering their perspectives when you give recommendations.
5. Promote action. Keep action messages simple, short, and easy to remember.
6. Show respect. Respectful communication is particularly important when people feel vulnerable. Respectful communication promotes cooperation and rapport.
Cyber security professionals can adopt the above principles in crisis situations during a cyber incident, demonstrating commitment and competence and communicating with transparency and empathy both inside and outside of the organisation.
In the DevSecOps paradigm, the need for manual testing and review is minimised in favour of speed and agility. Security input should be provided as early as possible, and at every stage of the process. Automation, therefore, becomes key. Responsibility for quality and security as well as decision-making power should also shift to the local teams delivering working software. Appropriate security training must be provided to these teams to reduce the reliance on dedicated security resources.
I created a diagram illustrating a simplified software development lifecycle to show where security-enhancing practices, input and tests are useful. The process should be understood as a continuous cycle but is represented as a straight line for ease of reading.
There will, of course, be variations in this process – the one used in your organisation might be different. The principles presented here, however, can be applied to any development lifecycle, your specific pipeline and tooling.
I deliberately kept this representation tool- and vendor-agnostic. For example tools and techniques at every stage, feel free to check out the DevSecOps tag on this site.
There is a strong correlation between code quality and security: more bugs lead to more security vulnerabilities. Simple coding mistakes can cause serious security problems, as in the case of Apple’s ‘goto fail’ and OpenSSL’s ‘Heartbleed’, highlighting the need for a disciplined engineering and testing culture.
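Apple’s ‘goto fail’ was a single duplicated `goto` in C that silently skipped signature verification. As a hypothetical parallel in Python (the function and secret below are invented for illustration), one leftover debugging token can bypass a security check entirely – exactly the class of mistake static analysis and code review exist to catch:

```python
import hmac

SHARED_SECRET = b"example-secret"  # hypothetical value for illustration

def verify_token_buggy(token: bytes) -> bool:
    # Bug: 'or True' left over from debugging short-circuits the check,
    # so every token is accepted.
    return hmac.compare_digest(token, SHARED_SECRET) or True

def verify_token_fixed(token: bytes) -> bool:
    # Constant-time comparison, with the debugging leftover removed.
    return hmac.compare_digest(token, SHARED_SECRET)
```

The buggy variant still compiles, runs and passes any happy-path test; only a negative test case, a reviewer or a SAST rule would notice that invalid tokens are accepted too.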
An open-source tool that is gaining a lot of momentum in this space is Semgrep. It supports multiple languages and integrates well into the CI/CD pipeline. You can run Semgrep from a Docker container, and the results are conveniently displayed in the command-line interface.
The rule set is still relatively limited compared to more established players, so it might find fewer issues. You can write your own rules, however, and things are bound to improve as the tool continues to develop with contributions from the community.
A better-known tool in this space is SonarQube. A free Community Edition is available for basic testing, but advanced enterprise-level features require a paid subscription. CI/CD integration is possible, and plugins (like the one for ESLint) are also available.
SonarQube has a dashboard view which scores your code across reliability, security, maintainability and other metrics.
As with any tool of this type, some initial configuration is required to get the best out of it and reduce the number of false positives, so don’t be discouraged by the high number of bugs or “code smells” discovered initially.
Keeping in mind that no amount of automation can guarantee finding all vulnerabilities, I found SonarQube’s analysis around Security Hotspots particularly interesting. It also estimates the time required to pay down the tech debt, which can serve as a good indicator of your codebase’s maintainability.
It is important to remember that static code analysis tools are not a substitute for a testing culture, code reviews, threat modelling and secure software engineering. They can, however, be a useful set of guardrails for engineers and act as an additional layer in your defence-in-depth strategy. Although these tools are unlikely to catch every issue, they can certainly help raise awareness among software developers about quality and security, alert on potential vulnerabilities before they become a problem and prevent the accumulation of tech and security debt in your company.
Building on my previous blogs on CISO responsibilities, initial priorities and developing information security strategy, I wanted to share an example of what a security dashboard might look like. It is important to communicate regularly with your stakeholders, and sharing a status update like this might be one way of doing it. The dashboard incorporates a high-level view of the threat landscape, top risks and the security capabilities addressing these risks (with maturity and projected progression for each). Feel free to use this as a starting point and adjust it to your needs.
The dashboard above aligns with the NIST Cybersecurity Framework functions, as structuring your security programme activities in this way, in my experience, allows for better communication with business stakeholders. However, the capabilities can be adjusted to align with any other framework or control set of your choice. Some elements can be deliberately simplified further depending on your target audience.
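The underlying data for such a dashboard can be kept very simple. The sketch below uses the five NIST CSF function names, which are real; every capability name and maturity score is a made-up example to be replaced with your own risk assessment:

```python
from dataclasses import dataclass

NIST_CSF_FUNCTIONS = ("Identify", "Protect", "Detect", "Respond", "Recover")

@dataclass
class Capability:
    name: str
    csf_function: str  # one of NIST_CSF_FUNCTIONS
    maturity: int      # current maturity, e.g. on a 1-5 scale
    target: int        # projected maturity at the next review

# Hypothetical capabilities for illustration only.
capabilities = [
    Capability("Asset management", "Identify", 2, 3),
    Capability("Access control", "Protect", 3, 4),
    Capability("Security monitoring", "Detect", 2, 4),
    Capability("Incident response", "Respond", 3, 3),
    Capability("Backup and recovery", "Recover", 2, 3),
]

def dashboard_rows(caps):
    """Group capabilities by CSF function for a simple status view."""
    rows = {fn: [] for fn in NIST_CSF_FUNCTIONS}
    for cap in caps:
        rows[cap.csf_function].append((cap.name, cap.maturity, cap.target))
    return rows
```

Keeping the data in a plain structure like this makes it easy to re-group the same capabilities under a different framework or control set if your stakeholders prefer one.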