Understanding your threat landscape

Identifying applicable threats is a good step to take before defining security controls your organisation should put in place. There are various techniques to help you with threat modelling but I wanted to give you some high-level pointers in this blog to get you started. Of course, all of these should be tailored to your specific business.

I find it useful to think about potential attacks as three broad categories:

1. Commoditised attacks. These are usually not targeted and involve off-the-shelf malware. Examples include:

2. Tailored attacks. As the name suggests, these are tailored and can vary in degree of sophistication. Examples include:

3. Accidental. Not every data breach is triggered by a malicious actor, so it is important to recognise that mistakes happen. Unfortunately, sometimes they lead to undesired consequences, such as the below:

Information security professionals can use the above examples in communications with their business stakeholders not to spread fear, but to present certain security challenges in context.

It’s often helpful to make it a bit more personal by defining specific threat actors, their targets, motivations and impact on the business. Again, the table below serves as an example and can be used as a starting point for you to define your own.

Threat actor | Description | Motivation | Target | Impact on business
Organised crime | International hacking groups | Financial gain | Commercial data, personal data for identity fraud | Reputational damage, regulatory fines, loss of customer trust
Insider | Intentional or unintentional | Human error, grudge, financial gain | Intellectual property, commercial data | Destruction or alteration of information, theft of information, reputational damage, regulatory fines
Competitors | Espionage and sabotage | Competitive advantage | Intellectual property, commercial information | Disruption or destruction, theft of information, reputational damage, loss of customer trust
State-sponsored | Espionage | Political | Intellectual property, commercial data, personal data | Theft of information, reputational damage

You can then use your understanding of assets and threats relevant to your company to identify security risks. For instance:

  • Failure to comply with relevant regulation – revenue loss and reputational damage due to fines and unwanted media attention as a result of non-compliance with GDPR, PCI DSS, etc.
  • Breach of personal data – regulatory fines, potential litigation and loss of customer trust due to accidental mishandling, external system compromise or insider threat leading to exposure of personal data of customers
  • Disruption of operations – decreased productivity or inability to trade due to compromise of IT systems by malicious actor, denial of service attacks, sabotage or employee error
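If it helps to track and compare risks like these, a lightweight risk register can even be sketched in code. Below is a minimal illustration in Python; the likelihood and impact scales, the scoring approach and the example values are my own assumptions for illustration, not a prescribed methodology.

```python
from dataclasses import dataclass


@dataclass
class Risk:
    """A single entry in a simple risk register."""
    name: str
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain) -- illustrative scale
    impact: int      # 1 (negligible) to 5 (severe)   -- illustrative scale

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring for prioritisation
        return self.likelihood * self.impact


register = [
    Risk("Regulatory non-compliance",
         "Fines and unwanted media attention due to non-compliance with GDPR, PCI DSS, etc.",
         likelihood=3, impact=4),
    Risk("Breach of personal data",
         "Exposure of customer personal data via mishandling, compromise or insider threat.",
         likelihood=2, impact=5),
    Risk("Disruption of operations",
         "Inability to trade due to system compromise, denial of service or employee error.",
         likelihood=3, impact=3),
]

# Review the highest-scoring risks first
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}")
```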

Again, feel free to use these as examples, but always tailor them based on what’s important to your business. It’s also worth remembering that this is not a one-off exercise. Tracking your assets, threats and risks should be part of your security management function and be incorporated into operational risk management and continuous improvement cycles.

This will allow you to demonstrate the value of security through pragmatic and prioritised security controls, focusing on protecting the most important assets, ensuring alignment to business strategy and embedding security into the business.

Startup security

In the past year I have had the pleasure of working with a number of startups on improving their security posture. I would like to share some common pain points here and what to do about them.

Advising startups on security is not easy, as it tends to be a ‘wicked’ problem for a cash-strapped company – they often don’t want to spend money on security but can’t afford not to, given the potentially devastating impact of security breaches. The business models of some startups depend on customer trust, and the entire value of a company can be wiped out in a single incident.

On the plus side, security can actually increase the value of a startup by elevating trust and amplifying the brand message, which in turn leads to happier customers. It can also increase company valuation by demonstrating a mature attitude towards security and governance, which is especially useful in fundraising and acquisition scenarios.

Security is there to support the business, so start by understanding the product and who uses it. Creating personas is quite a useful tool when trying to understand your customers. The same approach can be applied to security. Think through the threat model – who’s after the company and why? At what stage of the customer journey are we likely to be exposed?

Are we trying to protect our intellectual property from competitors or sensitive customer data from organised crime? Develop a prioritised plan and risk management approach to fit the answers. You can’t secure everything – focus on what’s truly important.

A risk-based approach is key. Remember that the company is still relatively small and you need to be realistic about which threats you are trying to protect against. Blindly picking your favourite framework, such as the NIST Cybersecurity Framework, and applying all of its controls might prove counterproductive.

Yes, the challenges are different compared to securing a large enterprise, but there are some upsides too. In a startup, more often than not, you’re in a privileged position to build in security and privacy by design and to deal with much less technical debt. You can embed yourself in product development and engineering from day one. This will save the time and effort of trying to retrofit security later – the unfortunate reality of many large corporations.

Be wary, however, of imposing too much security on the business. At the end of the day, the company is here to innovate, albeit securely. Your aim should be to educate the people in the company about security risks and help them make the right decisions. Communicate often, showing that security is not only important to keep the company afloat but that it can also be an enabler. Changing behaviours around security will create a positive security culture and protect the business value.

How do you apply this in practice? Let’s say we established that we need to guard the company’s reputation, customer data and intellectual property all the while avoiding data breaches and regulatory fines. What should we focus on when it comes to countermeasures?

I recommend an approach that combines process and technology and focuses on three main areas: your product, your people and your platform.

  1. Product

Think of your product and your website as the front of your physical store. That’s what customers see and interact with. It generates sales, so protecting it is often your top priority. Make sure your developers are aware of the OWASP Top 10 vulnerabilities and secure coding practices. Do it from the start; hire a DevOps security expert if you must. Pentest your product regularly. Perform code reviews and use automated code analysis tools. Make sure you have thought through DDoS attack prevention. Look into web application firewalls and encryption. API security is the name of the game here: monitor your APIs for abuse and unusual activity, harden them and think through authentication.
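To illustrate the last point, here is a minimal sketch of per-client API rate limiting in Python. The window size, request budget and client identifier are illustrative assumptions – in practice this logic usually lives in your API gateway or WAF rather than in application code, but the principle is the same.

```python
import time
from collections import defaultdict, deque

# Very simple in-process sliding-window rate limiter. Thresholds are illustrative.
WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100

_request_log = defaultdict(deque)


def allow_request(client_id, now=None):
    """Return True if this client is still within its request budget."""
    now = time.time() if now is None else now
    history = _request_log[client_id]
    # Drop timestamps that have fallen out of the window
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()
    if len(history) >= MAX_REQUESTS_PER_WINDOW:
        return False  # throttle and flag as potential abuse
    history.append(now)
    return True


# Example: the 101st request inside one minute is rejected
results = [allow_request("client-123", now=1000.0 + i * 0.1) for i in range(101)]
print(results[-1])  # False
```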

  2. People

I talked about building a security culture above, but in a startup you go beyond raising awareness of security risks. You develop processes around reporting incidents, document your assets, define standard builds and encryption mechanisms for endpoints, think through 2FA and password managers, lock down admin accounts, secure colleagues’ laptops and phones through mobile device management solutions, and generally do anything else that will help people do their jobs better and more securely.

  3. Platform

Some years ago I would’ve talked about the network perimeter, firewalls and DMZs here. Today it’s all about the cloud. Know your shared responsibility model. Study your cloud service provider’s good practices. The main areas to consider here are data governance, logging and monitoring, identity and access management, and disaster recovery and business continuity. Separate your development and production environments. Resist the temptation to use sensitive (including customer) data in your test systems; minimise it as much as possible. Architect it well from the beginning and it will save you precious time and money down the road.
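To make the data governance and monitoring points a little more concrete, here is a minimal sketch of a cloud hygiene check, assuming an AWS environment and the boto3 SDK (equivalents exist for other providers). It flags S3 buckets that lack default encryption or a public access block; credentials and bucket names are whatever your environment provides.

```python
import boto3
from botocore.exceptions import ClientError

# Minimal hygiene check for S3 buckets: encryption at rest and public access
# blocking. Assumes AWS credentials are already configured for boto3.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]

    try:
        s3.get_bucket_encryption(Bucket=name)
        encrypted = True
    except ClientError:
        encrypted = False  # no default encryption configured

    try:
        block = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        public_blocked = all(block.values())
    except ClientError:
        public_blocked = False  # no public access block configured

    if not encrypted or not public_blocked:
        print(f"Review bucket '{name}': encrypted={encrypted}, "
              f"public_access_blocked={public_blocked}")
```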

Every section above deserves its own blog and I have deliberately kept it high-level. The intention here is to provide a framework for thinking through the challenges that most of the startups I encountered face today.

If the majority of your experience comes from the corporate environment, there are certainly skills you can leverage in the startup world too, but be mindful of the differences. The risks these companies face are different, which calls for a different response. Startups are known to be flexible, nimble and agile, so you should be too.

Image by Ryan Brooks.

Innovating in the age of GDPR

Customers are becoming increasingly aware of their rights when it comes to data privacy and they expect companies to safeguard the data they entrust to them. With the introduction of GDPR, a lot of companies had to think about privacy for the first time.

I’ve been invited to share my views on innovating in the age of GDPR as part of the Cloud and Cyber Security Expo in London.

When I was preparing for this panel I was trying to understand why this was even a topic to begin with. Why should innovation stop? If your business model is threatened by the GDPR then you are clearly doing something wrong: it means your business model was relying on the exploitation of consumers, which is not good.

But when I thought about it a bit more, I realised that there are also costs to demonstrating compliance to the regulator that a company has to account for. This is arguably easier for bigger companies with established compliance teams than for smaller upstarts, so it can act as a barrier to entry. Geography also plays a role here. What if a tech firm starts in the US or India, for example, where the regulatory regime is more relaxed when it comes to protecting customer data, and then expands to Europe when it can afford it? At least at first glance, companies starting up in Europe are at a disadvantage as they face potential regulatory scrutiny from day one.

How big of a problem is this? I’ve been reading about people complaining that you need fancy lawyers who understand technology to address this challenge. I would argue, however, that fancy lawyers are only required when you are doing shady stuff with customer data. Smaller companies that are just starting up have another advantage on their side: they are new. This means they don’t have to go and retrospectively purge legacy systems of data they have been collecting over the years, potentially breaking the business logic in interdependent systems. Instead, they start with a clean slate and have an opportunity to build privacy into their product and core business processes (privacy by design).

Risk may increase as the company grows and collects more data, but I find that this risk-based approach is often missing. Implementation of your privacy programme will depend on your risk profile and appetite. The level of risk will vary depending on the type and amount of data you collect. For example, a bank can receive thousands of subject access requests per month, while a small B2B company might receive one a year. Implementation of their privacy programmes will therefore be vastly different. The bank might look into technology-enabled automation, while the small company might look into outsourcing its subject request process. It is important to note, however, that risk can’t be fully outsourced – the company still ultimately owns it.

The market is moving towards technology-enabled privacy processes: automating privacy impact assessments, responding to customer requests, managing and responding to incidents, etc.

I also see the focus shifting from regulatory-driven privacy compliance to a broader data strategy. Companies are increasingly interested in understanding how they can use data as an asset rather than a liability. They are looking for ways to effectively manage marketing consents and opt-outs and to give power and control back to the customer, for example by creating preference centres.

Privacy is more about the philosophy of handling personal data than specific technology tricks. This mindset in itself can lead to innovation rather than stifling it. How can you solve a customer’s problem while collecting the minimum amount of personal data? Can it be anonymised? Think of personal data like toxic waste – sure, it can be handled, but with extreme care.
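As a small illustration of data minimisation in practice, here is a sketch of keyed pseudonymisation of a direct identifier before it is stored for analytics. The field names and key handling are illustrative assumptions, and note that pseudonymised data is still personal data under the GDPR – it simply reduces the exposure if the dataset leaks.

```python
import hashlib
import hmac

# Keyed pseudonymisation of a direct identifier before storage or analytics.
# The key must be stored and rotated via your key management process; this is
# pseudonymisation, not anonymisation -- the GDPR still applies.
PSEUDONYMISATION_KEY = b"replace-with-a-secret-from-your-key-management-system"


def pseudonymise(value: str) -> str:
    return hmac.new(PSEUDONYMISATION_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()


event = {"email": "jane.doe@example.com", "page": "/pricing", "duration_s": 42}
stored_event = {**event, "email": pseudonymise(event["email"])}
print(stored_event)
```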

Author of the month for January 2019

IT Governance Publishing named me the author of the month and kindly provided a 20% discount on my book.

There’s an interview available in the form of a podcast, where I discuss the most significant challenges related to change management and organisational culture, the common causes of a poor security culture, and my advice for improving the information security culture in your organisation.

ITGP also made one of the chapters of the audio version of my book available for free – I hope you enjoy it!

How to pass the CCSP exam

The CCSP exam is not easy but nothing you can’t prepare for. It tests your knowledge of the following CCSP domains:

  • Cloud Concepts, Architecture and Design
  • Cloud Data Security
  • Cloud Platform and Infrastructure Security
  • Cloud Application Security
  • Cloud Security Operations
  • Legal, Risk and Compliance

The structure and format might change as (ISC)2 continuously revise their exams, so please check the official website to make sure you are up-to-date with the latest developments.

Apart from the official (ISC)2 guides, here are some of the resources I used in my studies:

If you would prefer to add video lectures to your study plan, there’s a free course on Cybrary. For a quick summary, check out these mindmaps. Also, multiple sets of free flashcards are available on Quizlet.

It is a good idea to do some practice questions: there are books and mobile apps out there to help you with this. Practical experience in cloud security is also essential.

On the day, read the questions carefully. It’s not a time pressured exam (I was done in two hours), so it’s worth re-reading the questions and answers again to make sure you are answering exactly what is being asked. Eliminate the wrong options first and then decide on the best out of the remaining ones.

Finally, my suggestion would be to approach the questions from the perspective of a consultant. What would you recommend in each situation? Don’t be too technical – keep the business needs in mind at all times.

Don’t stress too much about the final result. I’m sure you’ll pass, but even if not on your first attempt, you’ll learn either way! Remember, the knowledge you accumulate in the process of preparing for the test itself has the most value, not the credential.

Good luck!

The Psychology of Information Security is now an audiobook too!

Thanks to my publisher, my book is now available in the audio format. It’s been narrated by Peter Silverleaf, who’s done a great job as always.

If you would rather listen while driving, exercising or commuting, this version is for you. The book has intentionally been kept to the point, which means you can finish the audio in slightly over two hours. The fact that it costs the equivalent of two cups of coffee is an added benefit.

You can get it for free on Audible as part of their introductory offer (you can listen to the sample there too), through Apple iTunes or download it in the MP3 format on my publisher’s website.

I know I’m slightly biased here, but I highly recommend it!

Internet of Toys Security

To support my firm’s corporate and social responsibility efforts, I volunteered to help NSPCC, a charity working in child protection, understand the Internet of Toys and its security and privacy implications.

I hope the efforts in this area will result in better policymaking and raise awareness among children and parents about the risks and threats posed by connected devices.

Toys are different from other connected devices not only because of how they are normally used, but also because of who uses them.

For example, children may tell secrets to their toys, sharing particularly sensitive information with them. This, combined with often insufficient security considerations by the manufacturers, may be a cause for concern.

Apart from helping NSPCC in creating campaign materials and educating the staff on the threat landscape, we were able to suggest a high-level framework to assess the security of a connected toy, consisting of parental control, privacy and technology security considerations.

Artificial intelligence and cyber security: attacking and defending

Cyber security is a manpower constrained market – therefore the opportunities for AI automation are vast.  Frequently, AI is used to make certain defensive aspects of cyber security more wide reaching and effective: combating spam and detecting malware are prime examples.  On the opposite side there are many incentives to use AI when attempting to attack vulnerable systems belonging to others.  These incentives could include the speed of attack, low costs and difficulties attracting skilled staff in an already constrained environment.

Current research in the public domain is limited to white hat hackers employing machine learning to identify vulnerabilities and suggest fixes. At the speed AI is developing, however, it won’t be long before we see attackers using these capabilities at mass scale, if they don’t already.

How do we know for sure? The fact is, it is quite hard to attribute a botnet or a phishing campaign to AI rather than a human. Industry practitioners, however, believe that we will see an AI-powered cyber-attack within a year: 62% of surveyed Black Hat conference participants seem convinced of such a possibility.

Many believe that AI is already being deployed for malicious purposes by highly motivated and sophisticated attackers. It’s not at all surprising given that AI systems make an adversary’s job much easier. Why? Resource efficiency aside, they introduce psychological distance between an attacker and their victim. Indeed, many offensive techniques traditionally involved engaging with others and being present, which in turn limited the attacker’s anonymity. AI increases both the anonymity and the distance. Autonomous weapons are the case in point: attackers are no longer required to pull the trigger and observe the impact of their actions.

It doesn’t have to be about human life either. Let’s explore some of the less severe applications of AI for malicious purposes: cybercrime.

Social engineering remains one of the most common attack vectors. How often is malware introduced into systems when someone just clicks on an innocent-looking link?

The fact is, in order to entice the victim to click on that link, quite a bit of effort is required. Historically it’s been labour-intensive to craft a believable phishing email. Days and sometimes weeks of research and the right opportunity were required to successfully carry out such an attack. Things are changing with the advent of AI in cyber.

Analysing large data sets helps attackers prioritise their victims based on online behaviour and estimated wealth. Predictive models can go further and determine the willingness to pay the ransom based on historical data and even adjust the size of pay-out to maximise the chances and therefore revenue for cyber criminals.

Imagine all the data available in the public domain, as well as secrets previously leaked through various data breaches, combined for the ultimate victim profiling in a matter of seconds with no human effort.

Once the victim is selected, AI can be used to create and tailor emails and sites that are most likely to be clicked on, based on the crunched data. Trust is built by engaging people in longer dialogues over extended periods of time on social media, which requires no human effort – chatbots are now capable of maintaining such interactions and even impersonating real contacts by mimicking their writing style.

Machine learning used for victim identification and reconnaissance greatly reduces an attacker’s resource investment. Indeed, there is no longer even a need to speak the same language! This inevitably leads to an increase in the scale and frequency of highly targeted spear phishing attacks.

The sophistication of such attacks can also increase. Exceeding human capabilities of deception, AI can mimic voices thanks to rapid developments in speech synthesis. These systems can create realistic voice recordings based on existing data and elevate social engineering to the next level through impersonation. This, combined with the other techniques discussed above, paints a rather grim picture.

So what do we do?

Let’s outline some potential defence strategies that we should be thinking about already.

Firstly and rather obviously, increasing the use of AI for cyber defence is not such a bad option. A combination of supervised and unsupervised learning approaches is already being employed to predict new threats and malware based on existing patterns.

Behaviour analytics is another avenue to explore. Machine learning techniques can be used to monitor system and human activity to detect potential malicious deviations.
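To make the behaviour analytics idea more concrete, here is a minimal sketch of unsupervised anomaly detection over login events using scikit-learn’s IsolationForest. The features, sample values and contamination rate are illustrative assumptions; a real pipeline would use far richer telemetry and careful tuning.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one login event: [hour_of_day, megabytes_downloaded, failed_attempts].
# The features and values are illustrative.
normal_logins = np.array([
    [9, 12, 0], [10, 8, 1], [11, 20, 0], [14, 15, 0],
    [15, 10, 0], [16, 25, 1], [9, 18, 0], [13, 9, 0],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_logins)

# A 3am login pulling 900 MB after several failed attempts
suspicious = np.array([[3, 900, 6]])
print(model.predict(suspicious))  # -1 indicates an anomaly, 1 indicates normal
```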

Importantly though, when using AI for defence, we should assume that attackers anticipate it. We must also keep track of AI development and its application in cyber to be able to credibly predict malicious applications.

In order to achieve this, a collaboration between industry practitioners, academic researchers and policymakers is essential. Legislators must account for potential use of AI and refresh some of the definitions of ‘hacking’. Researchers should carefully consider malicious application of their work. Patching and vulnerability management programs should be given due attention in the corporate world.

Finally, awareness should be raised among users on preventing social engineering attacks, discouraging password re-use and advocating for two-factor authentication where possible.

References

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation 2018

Cummings, M. L. 2004. “Creating Moral Buffers in Weapon Control Interface Design.” IEEE Technology and Society Magazine (Fall 2004), 29–30.

Seymour, J. and Tully, P. 2016. “Weaponizing data science for social engineering: Automated E2E spear phishing on Twitter,” Black Hat conference.

Allen, G. and Chan, T. 2017. “Artificial Intelligence and National Security,” Harvard Kennedy School Belfer Center for Science and International Affairs.

Yampolskiy, R. 2017. “AI Is the Future of Cybersecurity, for Better and for Worse,” Harvard Business Review, May 8, 2017.

Image by fdecomite.

SABSA architecture and design case study

Let’s talk about applying the SABSA framework to design an architecture that would solve a specific business problem.  In this blog post I’ll be using a fictitious example of a public sector entity aiming to roll-out an accommodation booking service for tourists visiting the country.

To ensure that security meets the needs of the business we’re going to go through the layers of the SABSA architecture from top to bottom.

layers

Start by reading your company’s business strategy, goals and values, have a look at the annual report. Getting the business level attributes from these documents should be straightforward. There’s no need to invent anything new – business stakeholders have already defined what’s important to them.

Contextual architecture

Every single word in these documents has been reviewed and changed potentially hundreds of times. Therefore, there’s usually a good level of buy-in on the vision. Simply use the same language for your business level attributes.

After analysing the strategy of my fictitious public sector client I’m going to settle for the following attributes: Stable, Respected, Trusted, Reputable, Sustainable, Competitive. Detailed definitions for these attributes are agreed with the business stakeholders.

Conceptual architecture

The next step is to link these to the broader objectives for technology. Your CIO or CTO might be able to assist with these. In my example, the Technology department has already done the hard job of translating high-level business requirements into a set of IT objectives. Your task is just to distil these into attributes:

Objectives

Now it’s up to you to define security attributes based on the Technology and Infrastructure attributes above. The examples might be attributes like Available, Confidential, Access-Controlled and so on.

Requirements traceability

The next step would be to highlight or define relationships between attributes on each level:

Attributes

These attributes show how security supports the business and allow for two-way traceability of requirements. This traceability can be used for risk management, assurance and architecture projects.
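If it helps, the same traceability can be captured as data so it can be queried in both directions. Below is a minimal sketch in Python: the security attributes mirror the examples above, while the technology-level attribute names and the mappings themselves are purely illustrative assumptions – yours come from the attribute profiling exercise with your stakeholders.

```python
# Traceability between business, technology and security attributes.
business_to_technology = {
    "Trusted": ["Reliable", "Secure"],
    "Reputable": ["Available", "Secure"],
}

technology_to_security = {
    "Secure": ["Confidential", "Access-Controlled"],
    "Available": ["Available"],
    "Reliable": ["Available"],
}


def security_attributes_for(business_attribute):
    """Trace top-down: which security attributes support a business attribute?"""
    result = set()
    for tech in business_to_technology.get(business_attribute, []):
        result.update(technology_to_security.get(tech, []))
    return result


def business_attributes_for(security_attribute):
    """Trace bottom-up: which business attributes does a security attribute support?"""
    techs = {t for t, secs in technology_to_security.items() if security_attribute in secs}
    return {b for b, ts in business_to_technology.items() if techs & set(ts)}


print(security_attributes_for("Trusted"))            # e.g. {'Available', 'Confidential', 'Access-Controlled'}
print(business_attributes_for("Access-Controlled"))  # e.g. {'Trusted', 'Reputable'}
```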

Back to our case study. Let’s consider a specific example of developing a hotel booking application for the public sector client we started out with. To simplify the scenario, we will limit the application functionality requirements to the following list:

ID | Name | Purpose
P001 | Register Accommodation | Enable the registration of available temporary accommodation
P002 | Update Availability | Enable accommodation managers to update availability status
P003 | Search Availability | Allow international travellers to search for and identify available accommodation
P004 | Book Accommodation | Allow international travellers to book accommodation
P005 | Link to other departments | Allow international travellers to link to other departments and agencies such as the immigration or security services (re-direct)

And here is what the process map would look like:

Process map

There are a number of stakeholders involved within the government serving international travellers’ requests. Tourists can access Immigration Services to get information on visa requirements and Security Services for safety advice. The application itself is owned by the Ministry of Tourism which acts as the “face” of this interaction and provides access to Tourist Board approved options. External accommodation (e.g. hotel chains) register and update their offers on the government’s website.

The infrastructure is outsourced to an external cloud service provider and there are mobile applications available, but these details are irrelevant for the current abstraction level.

Trust modelling

From the Trust Modelling perspective, the relationship will look like this:

Trust

A subdomain policy is derived from, and compliant with, the superdomain policy, but has a specialised local interpretation authorised by the superdomain authority. The government bodies act as Policy Authorities (PA), owning the overall risk of the interaction.

At this stage we might want to re-visit some of the attributes we defined previously to potentially narrow them down to only the ones applicable to the process flows in scope. We will focus on making sure the transactions are trusted:

New attributes

Let’s overlay applicable attributes over process flows to understand requirements for security:

Flows and attributes

Logical Architecture

Now it’s time to go down a level and step into more detailed Designer’s View. Remember requirement “P004 – Book Accommodation” I’ve mentioned above? Below is the information flow for this transaction. In most cases, someone else would’ve drawn these for you.

Flow 1

With security attributes applied (the direction of the orange arrows defines the expectation of a particular attribute being met):

Flow 2

These are the exact attributes we identified as relevant for this transaction on the business process map above. It’s ok if you uncover additional security attributes at this stage. If that’s the case, feel free to add them retrospectively to your business process map at the Conceptual Architecture level.

Physical architecture

After the exercise above is completed for each interaction, it’s time to go down to the Physical Architecture level and define specific security services for each attribute for every transaction:

Security services

Component architecture

At the Component Architecture level, it’s important to define solution-specific mechanisms, components and activities for each security service above. Here is a simplified example for confidentiality and integrity protection for data at rest and in-transit:

Service | Physical mechanism | Component brands, tools, products or technical standards | Service management activities required to manage the solution through-life
Message confidentiality protection | Message encryption | IPSec VPN | Key management, configuration management, change management
Stored data confidentiality protection | Data encryption | AES 256 disk encryption | Key management, configuration management, change management
Message integrity protection | Checksum | SHA 256 hash | Key management, configuration management, change management
Stored data integrity protection | Checksum | SHA 256 hash | Key management, configuration management, change management
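To give a flavour of what one of these components looks like in practice, here is a minimal sketch of the stored-data integrity mechanism from the table: computing a SHA-256 checksum when data is stored and verifying it before the data is trusted. The file name is hypothetical, and in many real deployments you would prefer an HMAC or digital signature so that an attacker who can change the data cannot simply recompute the hash.

```python
import hashlib


def sha256_checksum(path, chunk_size=65536):
    """Compute the SHA-256 checksum of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Record the checksum when the data is stored...
expected = sha256_checksum("booking_export.csv")

# ...and verify it later before trusting the data
if sha256_checksum("booking_export.csv") != expected:
    raise ValueError("Stored data failed integrity check")
```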

As you can see, every specific security mechanism and component is now directly and traceably linked to business requirements. And that’s one of the ways you demonstrate the value of security using the SABSA framework.

Vienna Cyber Security Week 2018

I spent last week in Vienna at the annual intergovernmental conference focused on protecting critical energy infrastructure.

The first two days were dedicated to the issues of security and diplomacy.

A number of panel discussions, talks and workshops covered the following topics:

  • Implementing the EU strategy for safe, open and secure cyberspace
  • Cyber-threats to critical energy infrastructure
  • Operational resilience
  • Reducing the risks of conflicts stemming from the use of cyber-capabilities
  • Cyber-diplomacy: developing capacity and trust between states

For the rest of the conference we moved from the Diplomatic Academy of Vienna to Tech Gate, a science and technology park and home to a number of local cyber startups.

We discussed trends in technology and cyber security, and participated in a Cyber Range simulation tutorial and a scenario-based exercise on policy development to address the growing cyber-threat to the energy sector.

AIT Austrian Institute of Technology together with WKO Austrian Economic Chambers, ASW Austrian Defence and Security Industry, and the Austrian Cyber Security Cluster hosted a technology exhibition of latest solutions and products as well as R&D projects.

Participants had an opportunity to see state-of-the-art next-generation solutions and meet key experts in the field of cyber security for protecting critical infrastructure and fighting cyber-crime and terrorism.

Talks continued throughout the week with topics covering:

  • Securing the energy economy: oil, gas, electricity and nuclear
  • Emerging and future threats to digitalised energy systems
  • Cyber security standards in critical energy infrastructure
  • Public sector, industry and research cooperation in cyber security
  • Securing critical energy infrastructures by understanding global energy markets

The last day focused on innovation and securing emerging technologies. The CIO of the City of Vienna delivered an insightful presentation on cities and the security implications of digitalisation. A closing panel discussed projected trends and emerging areas of technology, approaches and methods for verifying and securing new technologies, and the future of the cyber threat.