Resilience in the Cloud

Modern digital technology underpins the shift that enables businesses to implement new processes, scale quickly and serve customers in a whole new way.

Historically, organisations would invest in their own IT infrastructure to support their business objectives and the IT department’s role would be focused on keeping the ‘lights on’.

To minimise the chance of equipment failure, engineers traditionally introduced an element of redundancy into the architecture. That redundancy could manifest itself at many levels. For example, it could be a redundant datacentre, kept as a ‘hot’ or ‘warm’ site with a complete set of hardware and software ready to take the workload should the primary datacentre fail. Components of the datacentre, like power and cooling, can also be made redundant to increase resilience.

On a lesser scale, within a single datacentre, networking infrastructure elements can be redundant. It is not uncommon to procure two firewalls instead of just one, configuring them to balance the load or simply keeping the second as a backup. Power and utilities companies still stock up on critical industrial control equipment so they can react quickly to a failed component.

Most effort, however, went into protecting data storage. Magnetic disks were assembled into RAID arrays to reduce the chance of data loss on failure, while less time-sensitive data was backed up to magnetic tape and stored in separate physical locations.

Depending on specific business objectives or compliance requirements, organisations had to invest heavily in these architectures. One-off investments were, however, only one side of the story. Ongoing maintenance, regular tests and periodic upgrades were also required to keep these components operational. Labour, electricity, insurance and other costs added to the final bill. Moreover, if a company operated in a regulated space, for example processing payments and cardholder data, then external audits, certification and attestation were also required.

With the advent of cloud computing, companies were able to abstract away a lot of this complexity and let someone else handle the building and operation of datacentres, as well as the compliance issues relating to physical security.

The need for business resilience, however, did not go away.

Cloud providers can offer resilience options that far exceed those of traditional infrastructure (at comparable cost), but only if configured appropriately.

One example of this is the use of availability ‘zones’, where your resources can be deployed across physically separate datacentres. In this scenario, your service can be balanced across these availability zones and can remain running even if one of the ‘zones’ goes down. If you wanted to build your own infrastructure to match this, you would have to build a datacentre in each zone, and you had better have a solid business case for that.
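
To make this concrete, here is a minimal sketch using Python and the boto3 AWS SDK; the region, zone names and AMI ID are illustrative placeholders rather than a recommendation:

    import boto3

    # A minimal sketch, assuming the eu-west-1 region; the AMI ID is a placeholder.
    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # One instance per availability zone: losing a single zone
    # leaves the service running.
    for zone in ["eu-west-1a", "eu-west-1b"]:
        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder, not a real AMI
            InstanceType="t3.micro",
            MinCount=1,
            MaxCount=1,
            Placement={"AvailabilityZone": zone},
        )

In practice you would also place a load balancer in front of these instances so that traffic automatically drains away from a failed zone.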

It is important to keep this in mind when deciding to move to the cloud from traditional infrastructure. Simply lifting and shifting your applications to the cloud may, in fact, leave you no better off: these applications are unlikely to have been developed to work in the cloud and take advantage of these additional resiliency options. I therefore advise re-architecting rather than a straight migration.

Cloud service provider SLAs should also be considered. Compensation might be offered for failure to meet them, but it’s your job to check how this compares to the ‘five nines’ of availability of a traditional datacentre.
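
To put that in perspective: ‘five nines’ (99.999%) availability allows roughly 5.3 minutes of downtime per year (0.001% of 525,960 minutes), while a more typical 99.9% cloud SLA allows almost nine hours. That gap is worth understanding before you sign.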

You should also be aware of the many differences between cloud service models.

When procuring a SaaS product, for example, your ability to manage resilience is significantly reduced. In this case you are relying completely on your provider to keep the service up and running, which raises the concern of a provider outage; at the very least, make sure you hold an up-to-date copy of your data elsewhere. Even with the data, however, your options are limited without a second application on hand to process it, which may also require data transformation. Study the historical performance and pick your SaaS provider carefully.

IaaS gives you more options to design an architecture for your application, but with this great freedom comes great responsibility. The provider is responsible for fewer layers of the overall stack when it comes to IaaS, so you must design and maintain a lot of it yourself. When doing so, assume failure rather than treating it as a (remote) possibility. Availability zones are helpful, but not always sufficient: which scenarios call for the use of a separate geographical region? The European Banking Authority recommendations on exit and continuity are an interesting example to look at from a testing and deliverability perspective.
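
As one illustration of ‘assume failure’ thinking, here is a hedged sketch in Python (the endpoints are hypothetical) of a client that prefers a primary region but fails over to a secondary one when a health check fails:

    import requests

    # Hypothetical health-check endpoints in two geographical regions.
    REGION_ENDPOINTS = [
        "https://eu-west-1.api.example.com/health",    # primary
        "https://eu-central-1.api.example.com/health", # secondary
    ]

    def pick_healthy_endpoint() -> str:
        """Return the first region that answers its health check."""
        for url in REGION_ENDPOINTS:
            try:
                if requests.get(url, timeout=2).ok:
                    return url
            except requests.RequestException:
                continue  # treat any error as a failed region and move on
        raise RuntimeError("no healthy region available")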

Be mindful that some characteristics of SaaS also affect PaaS from a redundancy perspective. For example, if you’re using a proprietary PaaS, you can’t just lift and shift your data and code elsewhere.

Above all, when designing for resilience, take a risk-based approach. Not all your assets have the same criticality; know your RPO (recovery point objective) and RTO (recovery time objective) for each. Remember that a SaaS product can itself be built on top of AWS or Azure, exposing you to supply chain risks.

Even when assuming the worst, you may not have to keep every single service running should the worst actually happen. For one thing, it’s too expensive – just ask your business stakeholders. The very worst time to define your approach to resilience is in the middle of an incident, closely followed by shortly after an incident. As with other elements of security in the cloud, resilience should ‘shift left’ and be addressed as early in the delivery cycle as possible. As the Scout movement is fond of saying: ‘Be prepared’.


Author of the month for January 2019


IT Governance Publishing named me the author of the month and kindly provided a 20% discount on my book.

There’s an interview available in the form of a podcast, where I discuss the most significant challenges related to change management and organisational culture, the common causes of a poor security culture, and my advice for improving the information security culture in your organisation.

ITGP also made one of the chapters of the audio version of my book available for free – I hope you enjoy it!


Securing JSON Web Tokens


JSON Web Tokens (JWTs) are quickly becoming a popular way to implement information exchange and authorisation in single sign-on scenarios.

As with many things, this technology can be quite secure or very insecure, and a lot depends on the implementation. This opens up a number of possibilities for attackers to exploit vulnerabilities where the standard is poorly implemented or outdated libraries are used.

Here are some of the possible attack scenarios:

  • Attackers can modify the token and set the hashing algorithm to ‘none’, indicating that the integrity of the token has already been verified, fooling the server into accepting it as a valid token
  • Attackers can change the algorithm from ‘RS256’ to ‘HS256’ and use the public key to generate an HMAC signature for the token, since the server trusts the data inside the header of a JWT and doesn’t validate the algorithm it used when issuing the token. The server will then treat the token as one generated with the ‘HS256’ algorithm and use its public key as the HMAC secret to verify it (see the sketch after this list)
  • JWTs signed with the HS256 algorithm are susceptible to key disclosure when weak secret keys are used. Attackers can conduct offline brute-force or dictionary attacks against the token, since no interaction with the server is needed to test candidate keys once a token has been issued
  • Sensitive information (e.g. internal IP addresses) can be revealed, as all the information inside the JWT payload is stored in plain text
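
To make the first two attacks concrete, here is a minimal sketch using Python’s PyJWT library (one popular implementation; the secret is illustrative only), showing why a server should pin the expected algorithm instead of trusting the token header:

    import jwt  # PyJWT: pip install pyjwt

    SECRET = "replace-with-a-long-random-256-bit-secret"  # illustrative only

    token = jwt.encode({"sub": "alice"}, SECRET, algorithm="HS256")

    # Vulnerable pattern: reading the algorithm from the attacker-controlled
    # header is what enables the 'none' and RS256-to-HS256 confusion attacks.
    untrusted_alg = jwt.get_unverified_header(token)["alg"]

    # Safe pattern: pin the algorithm(s) your server actually issues.
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    print(claims)  # {'sub': 'alice'}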

I recommend the following steps to address the concerns above:

  • Reject tokens set with ‘none’ algorithm when a private key was used to issue them
  • Use appropriate key length (e.g. 256 bit) to protect against brute force attacks
  • Adjust the JWT validity period depending on the required security level (e.g. from a few minutes up to an hour). For extra security, consider using reference tokens if you need to be able to revoke/invalidate them (see the sketch after this list)
  • Use HTTPS/TLS to ensure JWTs are encrypted during client–server communication, reducing the risk of man-in-the-middle attacks
  • Overall, follow the best practices for implementing them, only use up-to-date and secure libraries, and choose the right algorithm for your requirements
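
As a minimal sketch of the validity-time advice (again assuming PyJWT; the secret and claims are illustrative), issuing a token that expires after 15 minutes looks like this:

    import datetime
    import jwt  # PyJWT

    SECRET = "replace-with-a-long-random-256-bit-secret"  # illustrative only

    now = datetime.datetime.now(datetime.timezone.utc)
    token = jwt.encode(
        {
            "sub": "alice",
            "iat": now,                                   # issued at
            "exp": now + datetime.timedelta(minutes=15),  # hard expiry
        },
        SECRET,
        algorithm="HS256",
    )

    # decode() verifies the signature and raises ExpiredSignatureError
    # once 'exp' has passed.
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])

A reference token, by contrast, is an opaque identifier looked up server-side on every request, which is what makes revocation possible at the cost of an extra lookup.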

OWASP provides more detailed recommendations with Java code samples, alongside other noteworthy material on common vulnerabilities and secure coding practices, so I encourage you to check it out if you need more information.


Digital transformation

I’ve recently been involved in a number of digital transformation projects and wanted to share some lessons learned in this blog.

Firstly, there’s no one-size-fits-all approach to successful digital transformation, so it always helps to start with a why. For instance, why is the company considering digitalisation? Perhaps the competitive landscape has changed or some of the existing business models are becoming less relevant in light of new technological trends.

Regardless of the reasons, I would argue that no special digital strategy needs to be developed. Rather, we need to see how digitalisation supports the overall business strategy, and how digital trends affect your company.

While strategising in the boardroom helps, keeping customers in mind is paramount. Rather than simply digitising existing business processes (such as going paperless), it’s useful to think about them as multiple customer journeys to maximise the value for the consumer.

Design thinking is a good method to use when approaching this, as it helps to create a customer-centric solution. It begins with a deep understanding of customer problems and iterates through prototyping, testing and continuous feedback. This process also aligns well with modern iterative frameworks for software development and broader agile working.

Learning from feedback on your minimum viable product (MVP) helps to refine your initial assumptions and adjust the approach where necessary.

For example, adopting and combining technology like Cloud, Big Data and Machine Learning can help improve the decision-making process in one department, so it can then be adopted by the rest of the enterprise once the business benefits have been validated.

Having a clear data architecture is key in such transformation. It’s rarely about just building a mobile app, but about making better business decisions through effective use of data. Therefore, before embarking on any data analytics initiative, it’s imperative to be clear on why the data is being collected and what it’s going to be used for.

While working with a Power and Utilities company, I helped them securely combine Internet of Things devices and Cloud infrastructure to connect assets to the grid, analyse consumption data to predict and respond to demand and automate inventory management. As outlined above, it started with a relatively small pilot and quickly scaled up across the enterprise.

Yes, traditional companies might not be as nimble as startups, but they have other advantages: assets and data are two obvious ones. Digitalisation can help make this data actionable to better serve customers. To enable this, such companies should not only seek out opportunities to digitise their core functions, but also find new growth areas. If some of the capabilities are missing, they can be acquired by engaging with other members of the ecosystem through partnerships or acquisitions.

It’s not all about technology, however. People play a key role in digital transformation. And I’m not only talking about the customers. Employees in your organisation might have to adopt new ways of working and develop new skills to keep up with the pace of change. Recruitment requirements and models might have to adjust accordingly too.

If you would like to learn more, there’s a free online course on digital transformation developed by BCG in collaboration with the University of Virginia that provides a good summary of current technology trends impacting businesses. Feel free to jump straight to week 4 for the last few modules discussing their framework and some case studies if you are after more practical advice.


How to pass the CCSP exam


I just passed the Certified Cloud Security Professional (CCSP) exam. It wasn’t easy, but nothing you can’t prepare for.

Apart from the official (ISC)2 guides, here are some of the resources I used in my studies:

If you would prefer to add video lectures to your study plan, there’s a free course on Cybrary. For a quick summary, check out these study notes and mindmaps. Also, multiple sets of free flashcards are available on Quizlet.

It is a good idea to do some practice questions: there are books and mobile apps out there to help you with this. Practical experience in cloud security is also essential.

The exam tests your knowledge of the following CCSP domains:

  • Architectural Concepts and Design Requirements
  • Cloud Data Security
  • Cloud Platform and Infrastructure Security
  • Cloud Application Security
  • Operations
  • Legal and Compliance

The structure and format might change as (ISC)2 continuously revise their exams, so please check the official website to make sure you are up-to-date with the latest developments.

On the day, read the questions carefully. It’s not a time-pressured exam (I was done in two hours), so it’s worth re-reading the questions and answers to make sure you are answering exactly what is being asked. Eliminate the wrong options first and then decide on the best of those remaining.

Finally, my suggestion would be to approach the questions from the perspective of a consultant. What would you recommend in each situation? Don’t go too technical – keep the business needs in mind at all times.

Don’t stress too much about the final result. I’m sure you’ll pass, but even if not on your first attempt, you’ll learn either way! Remember, the knowledge you accumulate in the process of preparing for the test itself has the most value, not the credential.

Good luck!


Videos for InfoSec Awareness


It was another fantastic event by SANS. This time, apart from a regular line up of great speakers, there were some interactive workshops.

Javvad Malik facilitated one of them and challenged the participants to create their own awareness videos.


It felt like we covered the entire production cycle in under two hours: we talked about brainstorming, scripting, filming styles, editing and much more! But the most important part was putting the ideas into practice, and we actually got to create our own security awareness videos.

The audience was split into several groups, each tasked with producing an engaging clip with only one requirement: it shouldn’t be boring.

Javvad’s tips certainly helped and with a bit of humour, my team’s video won the first prize!


If you would like to learn more, check out Summit Archives for presentation slides, including Javvad’s workshop deck and past events.


The Psychology of Information Security is now an audiobook too!


Thanks to my publisher, my book is now available in the audio format. It’s been narrated by Peter Silverleaf, who’s done a great job as always.

If you would rather listen while driving, exercising or commuting, this version is for you. The book has intentionally been kept to the point, which means you can finish the audio in slightly over two hours. The fact that it costs the equivalent of two cups of coffee is an added benefit.

You can get it for free on Audible as part of their introductory offer (you can listen to the sample there too), through Apple iTunes or download it in the MP3 format on my publisher’s website.

I know I’m slightly biased here, but I highly recommend it!