Innovating in the age of GDPR


Customers are becoming increasingly aware of their rights when it comes to data privacy and they expect companies to safeguard the data they entrust to them. With the introduction of GDPR, a lot of companies had to think about privacy for the first time.

I’ve been invited to share my views on innovating in the age of GDPR as part of the Cloud and Cyber Security Expo in London.

When I was preparing for this panel, I tried to understand why this was even a topic to begin with. Why should innovation stop? If your business model is threatened by the GDPR, you are clearly doing something wrong: it means your business model relied on exploiting consumers.

But when I thought about it a bit more, I realised that there are costs to demonstrating compliance to the regulator that a company would also have to account for. This is arguably easier for bigger companies with established compliance teams than for smaller upstarts, so it can serve as a barrier to entry. Geography also plays a role here. What if a tech firm starts in the US or India, for example, where the regulatory regime is more relaxed when it comes to protecting customer data, and then expands to Europe once it can afford it? At least at first glance, companies starting up in Europe are at a disadvantage, as they face potential regulatory scrutiny from day one.

How big of a problem is this? I’ve been reading complaints that you need fancy lawyers who understand technology to address this challenge. I would argue, however, that fancy lawyers are only required when you are doing shady things with customer data. Smaller companies that are just starting up have another advantage on their side: they are new. This means they don’t have to go back and retrospectively purge legacy systems of data they have been collecting over the years, potentially breaking the business logic in interdependent systems. Instead, they start with a clean slate and have an opportunity to build privacy into their product and core business processes (privacy by design).

Risk may increase as the company grows and collects more data, but I find that this risk-based approach is often missing. Implementation of your privacy programme will depend on your risk profile and appetite. The level of risk will vary depending on the type and amount of data you collect. For example, a bank can receive thousands of subject access requests per month, while a small B2B company might receive one a year. Implementation of their privacy programmes will therefore be vastly different. The bank might look into technology-enabled automation, while the small company might look into outsourcing its subject access request process. It is important to note, however, that risk can’t be fully outsourced: the company still ultimately owns it.

The market is moving towards technology-enabled privacy processes: automating privacy impact assessments, responding to customer requests, managing and responding to incidents, etc.

I also see the focus shifting from regulatory-driven privacy compliance to a broader data strategy. Companies are increasingly interested in understanding how they can use data as an asset rather than a liability. They are looking for ways to effectively manage marketing consents and opt-outs and to give power and control back to the customer, for example by creating preference centres.

Privacy is more about the philosophy of handling personal data than specific technology tricks. This mindset in itself can lead to innovation rather than stifling it. How can you solve a customer’s problem by collecting the minimum amount of personal data? Can it be anonymised? Think of personal data like toxic waste: sure, it can be handled, but only with extreme care.

Resilience in the Cloud


Modern digital technology underpins the shift that enables businesses to implement new processes, scale quickly and serve customers in a whole new way.

Historically, organisations would invest in their own IT infrastructure to support their business objectives and the IT department’s role would be focused on keeping the ‘lights on’.

To minimise the chance of failure of the equipment, engineers traditionally introduced an element of redundancy in the architecture. That redundancy could manifest itself on many levels. For example, it could be a redundant datacentre, which is kept as a ‘hot’ or ‘warm’ site with a complete set of hardware and software ready to take the workload in case of the failure of a primary datacentre. Components of the datacentre, like power and cooling, can also be redundant to increase the resiliency.

On a smaller scale, within a single datacentre, networking infrastructure elements can be redundant. It is not uncommon to procure two firewalls instead of one and configure them to balance the load, or simply keep the second as a backup. Power and utilities companies still stock up on critical industrial control equipment to be able to react quickly to a failed component.

The majority of the effort, however, went into protecting data storage. Magnetic disks were assembled into RAID arrays to reduce the chance of data loss in case of failure, and backups of less time-sensitive data were written to magnetic tapes stored in separate physical locations.

Depending on specific business objectives or compliance requirements, organisations had to invest heavily in these architectures. One-off investments were, however, only one side of the story. Ongoing maintenance, regular tests and periodic upgrades were also required to keep these components operational. Labour, electricity, insurance and other costs added to the final bill. Moreover, if a company was operating in a regulated space, for example processing payments and cardholder data, then external audits, certification and attestation were also required.

With the advent of cloud computing, companies were able to abstract away a lot of this complexity and let someone else handle the building and operation of datacentres and dealing with compliance issues relating to physical security.

The need for business resilience, however, did not go away.

Cloud providers can offer options that far exceed those of traditional infrastructure at comparable cost, but only if configured appropriately.

One example of this is the use of availability zones, where your resources can be deployed across physically separate datacentres. In this scenario, your service can be balanced across these availability zones and can remain running even if one of the zones goes down. If you were building your own infrastructure for this, you would need a datacentre in each location, and you had better have a solid business case for that.
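To make this concrete, here is a minimal sketch of spreading a workload across two availability zones, assuming AWS and the boto3 library; the AMI and subnet IDs are hypothetical placeholders, and the load balancer that would sit in front is implied rather than shown.

```python
import boto3

ec2 = boto3.resource("ec2", region_name="eu-west-2")

# Hypothetical subnets, each living in a different availability zone
subnets_by_az = {
    "eu-west-2a": "subnet-aaaa1111",
    "eu-west-2b": "subnet-bbbb2222",
}

for az, subnet_id in subnets_by_az.items():
    # One web server per zone; a load balancer in front would spread traffic
    # across them, so losing a single zone does not take the service down.
    ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        SubnetId=subnet_id,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": f"web-{az}"}],
        }],
    )
```

The same pattern applies whether you manage instances directly like this or let an auto-scaling group replace failed instances across zones for you.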

It is important to keep this in mind when deciding to move to the cloud from the traditional infrastructure. Simply lifting and shifting your applications to the cloud may not work. These applications are unlikely to have been developed to run in the cloud and take advantage of these additional resiliency options. Therefore, I advise against such migration in favour of re-architecting.

Cloud service provider SLAs should also be considered. Compensation might be offered for failing to meet them, but it’s your job to check how they compare to the traditional ‘five nines’ of availability promised by an on-premises datacentre.
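The arithmetic behind these figures is worth doing explicitly. The short snippet below simply converts a few common availability targets into an annual downtime budget so they can be compared against ‘five nines’.

```python
# Downtime budget implied by an availability percentage over a (non-leap) year.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

for availability in (99.9, 99.95, 99.99, 99.999):
    downtime_minutes = HOURS_PER_YEAR * 60 * (1 - availability / 100)
    print(f"{availability}% uptime allows ~{downtime_minutes:.1f} minutes of downtime per year")
```

Five nines works out at roughly five minutes of downtime a year, whereas a typical 99.9% SLA allows for several hours; that gap is exactly what you need to assess.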

You should also be aware of the many differences between cloud service models.

When procuring a SaaS, for example, your ability to manage resilience is significantly reduced. Instead, you are relying on your provider to keep the service up and running, which raises concerns about provider outages. Even if you have access to the data itself, your options are limited without a second application on hand to process that data. Study historical performance and pick your SaaS provider carefully.

IaaS gives you more options to design an architecture for your application, but with this great freedom comes great responsibility. The provider is responsible for fewer layers of the overall stack when it comes to IaaS, so you must design and maintain much of it yourself. When doing so, assume failure rather than treating it as a (remote) possibility. Availability zones are helpful, but not always sufficient. Which scenarios call for the use of a separate geographical region? The European Banking Authority recommendations on Exit and Continuity are an interesting example to look at from a testing and deliverability perspective.

Be mindful that some characteristics of SaaS also affect PaaS from a redundancy perspective. For example, if you’re using a proprietary PaaS, you can’t simply lift and shift your data and code elsewhere.

Above all, when designing for resiliency, take a risk-based approach. Not all of your assets have the same criticality, so know your RPOs and RTOs (recovery point and recovery time objectives). Remember that a SaaS can itself be built on top of AWS or Azure, exposing you to supply chain risks.

Even when assuming the worst, you may not have to keep every single service running should the worst actually happen. For one thing, it’s too expensive – just ask your business stakeholders. The very worst time to be defining your approach to resilience is in the middle of an incident, closely followed by shortly after an incident. As with other elements of security in the cloud, resilience should “shift left” and be addressed as early in the delivery cycle as possible. As the Scout movement is fond of saying – “be prepared”.

Image by Berkeley Lab.

Author of the month for January 2019


IT Governance Publishing named me the author of the month and kindly provided a 20% discount on my book.

There’s an interview available in the form of a podcast, where I discuss the most significant challenges related to change management and organisational culture, the common causes of a poor security culture, and my advice for improving the information security culture in your organisation.

ITGP also made one of the chapters of the audio version of my book available for free – I hope you enjoy it!

Securing JSON Web Tokens


JSON Web Tokens (JWTs) are quickly becoming a popular way to implement information exchange and authorisation in single sign-on scenarios.

As with many things, this technology can be quite secure or very insecure, and a lot depends on the implementation. This opens up a number of possibilities for attackers to exploit vulnerabilities when the standard is poorly implemented or outdated libraries are used.

Here are some of the possible attack scenarios:

  • Attackers can modify the token and set the signing algorithm to ‘none’, indicating that the integrity of the token has already been verified and fooling the server into accepting it as a valid token (a sketch of this attack follows the list)
  • Attackers can change the algorithm from ‘RS256’ to ‘HS256’ and use the public key to generate an HMAC signature for the token, if the server trusts the data inside the JWT header and doesn’t validate the algorithm it used to issue the token. The server then treats the token as one generated with the ‘HS256’ algorithm and uses its public key as the HMAC secret to verify it
  • JWTs signed with the HS256 algorithm are susceptible to secret key disclosure when weak keys are used. Attackers can conduct offline brute-force or dictionary attacks against the token, since they do not need to interact with the server to check each guess once a token has been issued
  • Sensitive information (e.g. internal IP addresses) can be revealed, as all the information inside the JWT payload is stored in plain text
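As an illustration of the first scenario, here is a hypothetical Python sketch of how an attacker could rewrite a captured token’s header to claim the ‘none’ algorithm and strip the signature; the sample token and its claims are made up for the example.

```python
import base64
import json

def b64url(data: bytes) -> str:
    # Base64url encoding without padding, as used by JWTs
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# A captured token: RS256 header, {"sub": "alice", "admin": false}, signature truncated
original_token = (
    "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9."
    "eyJzdWIiOiJhbGljZSIsImFkbWluIjpmYWxzZX0."
    "signature-goes-here"
)
_header_b64, payload_b64, _signature = original_token.split(".")

# Decode the claims (re-adding padding), tamper with them, then re-encode
payload = json.loads(base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4)))
payload["admin"] = True

forged_header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
forged_payload = b64url(json.dumps(payload).encode())

# An unsigned token: a vulnerable library that honours 'none' will accept it
forged_token = f"{forged_header}.{forged_payload}."
print(forged_token)
```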

I recommend the following steps to address the concerns above:

  • Reject tokens set with the ‘none’ algorithm when a private key was used to issue them
  • Use an appropriate key length (e.g. 256 bits) to protect against brute-force attacks
  • Adjust the JWT validation time depending on the required security level (e.g. from a few minutes up to an hour). For extra security, consider using reference tokens if you need to be able to revoke or invalidate them
  • Use HTTPS/TLS to ensure JWTs are encrypted in transit between client and server, reducing the risk of man-in-the-middle attacks
  • Overall, follow best practices when implementing them: only use up-to-date, secure libraries and choose the right algorithm for your requirements (a verification sketch follows this list)
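To show what several of these recommendations look like in code, here is a minimal verification sketch assuming the PyJWT library; the function name and parameters are illustrative rather than taken from any particular project.

```python
import jwt  # PyJWT (pip install pyjwt)

def verify_token(token: str, public_key_pem: str) -> dict:
    # Pin the expected algorithm: the library will then reject 'none'
    # and any attempt to downgrade an RS256 token to HS256.
    return jwt.decode(
        token,
        key=public_key_pem,
        algorithms=["RS256"],          # never derive this from the token header
        options={"require": ["exp"]},  # insist on an expiry claim
        leeway=30,                     # tolerate a small amount of clock skew (seconds)
    )

# For symmetric HS256 tokens, use a strong random secret of at least 256 bits, e.g.:
# import secrets; hmac_secret = secrets.token_bytes(32)
```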

OWASP have more detailed recommendations with Java code samples, alongside other noteworthy material on common vulnerabilities and secure coding practices, so I encourage you to check it out if you need more information.

Digital transformation


I’ve recently been involved in a number of digital transformation projects and wanted to share some lessons learned in this blog.

Firstly, there’s no one-size-fits-all approach to successful digital transformation, so it always helps to start with a why. For instance, why is the company considering digitalisation? Perhaps the competitive landscape has changed or some of the existing business models are becoming less relevant in light of new technological trends.

Regardless of the reasons, I would argue that no special digital strategy needs to be developed. Rather, we need to see how digitalisation supports the overall business strategy, and how digital trends affect your company.

While strategising in the boardroom helps, keeping customers in mind is paramount. Rather than simply digitising existing business processes (such as going paperless), it’s useful to think about them as multiple customer journeys to maximise the value for the consumer.

Design thinking is a good method to use when approaching this, as it helps to create a customer-centric solution. It begins with a deep understanding of customer problems and iterates through prototyping, testing and continuous feedback. This process also aligns well with modern iterative frameworks for software development and broader agile working.

Learning from feedback on your minimum viable product (MVP) helps to refine your initial assumptions and adjust the approach where necessary.

For example, adopting and combining technology like Cloud, Big Data and Machine Learning can help improve the decision-making process in one department, so it can then be adopted by the rest of the enterprise once the business benefits have been validated.

Having a clear data architecture is key in such transformation. It’s rarely about just building a mobile app, but about making better business decisions through effective use of data. Therefore, before embarking on any data analytics initiative, it’s imperative to be clear on why the data is being collected and what it’s going to be used for.

While working with a Power and Utilities company, I helped them securely combine Internet of Things devices and Cloud infrastructure to connect assets to the grid, analyse consumption data to predict and respond to demand and automate inventory management. As outlined above, it started with a relatively small pilot and quickly scaled up across the enterprise.

Yes, traditional companies might not be as nimble as startups, but they have other advantages: assets and data are two obvious ones. Digitalisation can help make this data actionable to better serve customers. To enable this, such companies should not only seek out opportunities to digitise their core functions, but also find new growth areas. If some of the capabilities are missing, they can be acquired by engaging with other members of the ecosystem through partnerships or acquisitions.

It’s not all about technology, however. People play a key role in digital transformation. And I’m not only talking about the customers. Employees in your organisation might have to adopt new ways of working and develop new skills to keep up with the pace of change. Recruitment requirements and models might have to adjust accordingly too.

If you would like to learn more, there’s a free online course on digital transformation developed by BCG in collaboration with the University of Virginia that provides a good summary of current technology trends impacting businesses. Feel free to jump straight to week 4 for the last few modules discussing their framework and some case studies if you are after more practical advice.

Image by John Pastor.

How to pass the CCSP exam


The CCSP exam is not easy, but it’s nothing you can’t prepare for. It tests your knowledge of the following CCSP domains:

  • Cloud Concepts, Architecture and Design
  • Cloud Data Security
  • Cloud Platform and Infrastructure Security
  • Cloud Application Security
  • Cloud Security Operations
  • Legal, Risk and Compliance

The structure and format might change as (ISC)2 continuously revise their exams, so please check the official website to make sure you are up-to-date with the latest developments.

Apart from the official (ISC)2 guides, here are some of the resources I used in my studies:

If you would prefer to add video lectures to your study plan, there’s a free course on Cybrary. For a quick summary, check out these mindmaps. Also, multiple sets of free flashcards are available on Quizlet.

It is a good idea to do some practice questions: there are books and mobile apps out there to help you with this. Practical experience in cloud security is also essential.

On the day, read the questions carefully. It’s not a time pressured exam (I was done in two hours), so it’s worth re-reading the questions and answers again to make sure you are answering exactly what is being asked. Eliminate the wrong options first and then decide on the best out of the remaining ones.

Finally, my suggestion would be to approach the questions from the perspective of a consultant. What would you recommend in each situation? Don’t be too technical – keep the business needs in mind at all times.

Don’t stress too much about the final result. I’m sure you’ll pass, but even if not on your first attempt, you’ll learn either way! Remember, the knowledge you accumulate in the process of preparing for the test itself has the most value, not the credential.

Good luck!

Videos for InfoSec Awareness


It was another fantastic event by SANS. This time, apart from a regular line-up of great speakers, there were some interactive workshops.

Javvad Malik facilitated one of them and challenged the participants to create their own awareness videos.


It felt like we covered the entire production cycle in under two hours: we talked about brainstorming, scripting, filming styles, editing and much more! But the most important part was putting the ideas into practice, and we actually got to create our own security awareness videos.

The audience was split into several groups, each tasked with producing an engaging clip with only one requirement: it shouldn’t be boring.

Javvad’s tips certainly helped, and with a bit of humour, my team’s video won first prize!


If you would like to learn more, check out the Summit Archives for presentation slides from this and past events, including Javvad’s workshop deck.

The Psychology of Information Security is now an audiobook too!


Thanks to my publisher, my book is now available in the audio format. It’s been narrated by Peter Silverleaf, who’s done a great job as always.

If you would rather listen while driving, exercising or commuting, this version is for you. The book has intentionally been kept to the point, which means you can finish the audio in slightly over two hours. The fact that it costs the equivalent of two cups of coffee is an added benefit.

You can get it for free on Audible as part of their introductory offer (you can listen to the sample there too), through Apple iTunes or download it in the MP3 format on my publisher’s website.

I know I’m slightly biased here, but I highly recommend it!

Human-computer interaction


I’ve previously written about open online courses you can take to develop your skills in user experience design.  I’ve also talked about how this knowledge can be used and abused when it comes to cyber security.

If you want to build a solid foundation in interaction design, I recommend The Encyclopedia of Human-Computer Interaction. This collection of open source textbooks covers the design of interactive products, services, software and much more.

And while you’re on the website, check out another free and insightful book on gamification. Also on offer you’ll find free UX Courses.

Modelling SABSA architecture using ArchiMate


The ArchiMate modelling language is one of The Open Group’s enterprise architecture standards. It is aligned with TOGAF and aims to help architects (and other interested parties) understand the impact of design choices and changes.

I provide a high-level overview of this standard and the free open-source modelling tool used to describe, analyse and visualise architecture in my previous blog.

Here I would like to build on the foundation we laid while discussing the SABSA architecture and design case study, and share an example of using the Archi tool to model security architecture with the SABSA framework.

Let’s say ACME Corp asked us to help them with their security architecture. Where do we start?

As described in my previous blog post, let’s start by establishing the Contextual Architecture.

[Figure: Contextual architecture]

Using Archi, I select Principles (found in the Motivation section) to represent attributes and define a Composition relationship between elements (e.g. ACME Corp is composed of Cost-effective, Reputable and many other attributes that hopefully define the business).

Here and below I’ll be using a simplified example just to illustrate a point – you will have many more attributes in practice.

From reading company annual reports and talking to business stakeholders, we can start identifying the business drivers of ACME Corp. We can then map these business drivers to attributes. Below is an illustration of mapping the business driver Generate revenue (a Driver element) to the attribute Cost-effective using an Influence relation, as business drivers influence attributes.

[Figure: Business driver to attribute mapping]

At the Conceptual architecture level we need to start defining lower-level attributes. For example, Cost-effective is composed (Composition relation) of Available and Business-driven.

[Figure: Conceptual architecture]

Remember that you can provide definitions of your attributes in the element’s properties (Main section). In this example I’m defining Available as ‘Service should be uninterrupted’. You are also encouraged to establish a measurement approach for each attribute. You can see above that Uptime is the main KPI for availability: a hard measure where we monitor the percentage of time the system is available compared to what is specified in the SLA.

The Logical level provides an insight into which capabilities enable the attributes. In the example below, Available is realised (Realisation relation) by the Backup capability, which in turn is composed of the Synchronous and Asynchronous backup capabilities (Composition relation).

[Figure: Logical model]

The Archi tool allows us to model the SABSA Physical Architecture view by describing services, events, processes, interfaces, functions and other elements of the TOGAF Technology layer.

Below is a simplified example of describing the Asynchronous backup capability.

[Figure: Physical model]

Asynchronous backup is realised by the Backup manager application service (Realisation relation). Backup store is a data object accessed by the Backup manager (Access relation).

You can be quite detailed here, and that’s where the Archi tool can add a lot of value. But to keep things simple, I’m going to leave it at that. You can decompose elements into services and functions, group them together and even go lower, describing actual technology solutions at the SABSA Component architecture level.

The real question is: what do you do with all of this?

My answer is simple: visualise.

[Figure: Visualiser view]

Archi lets you switch into the Visualiser mode and create graphs that bring all your hard work together. By playing with the depth (6 in the example above), you can analyse the architecture and ensure traceability: you can see and, more importantly, demonstrate to your business stakeholders how a particular technology solution contributes to the overall business objective.
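The same traceability idea can be illustrated outside the tool. Below is a toy Python sketch, not part of Archi, that walks the Composition and Realisation chain from the earlier examples up to the top-level business attribute; the relationship map is a simplified stand-in for the model above.

```python
# Simplified stand-in for the model: child element -> the element it realises
# or composes into (taken from the examples above)
relations = {
    "Asynchronous backup": "Backup",
    "Synchronous backup": "Backup",
    "Backup": "Available",
    "Available": "Cost-effective",
    "Business-driven": "Cost-effective",
    "Cost-effective": "ACME Corp",
}

def trace(element: str) -> list[str]:
    """Follow the relationships from an element up to the business it supports."""
    chain = [element]
    while element in relations:
        element = relations[element]
        chain.append(element)
    return chain

print(" -> ".join(trace("Asynchronous backup")))
# Asynchronous backup -> Backup -> Available -> Cost-effective -> ACME Corp
```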

In addition, the Validator allows you to see elements that are orphaned, i.e. not related to any other element. You can then rectify this by introducing a relationship or discontinuing the capability (otherwise, why are you paying for something that is not in use?).

If you follow the steps above, the tool, despite being free, does a lot of the heavy lifting for you and automatically adjusts the models and graphs when changes to the architecture are introduced.

Now it’s your turn to try out Archi for SABSA architecture. Good luck!

I would like to thank Chul Choi for outlining the above technique.