Learn software engineering

I’ve recently decided to brush up on my programming skills with one of the courses on Udemy. Despite completing a degree in Computer Science back in the day, my recent focus has been away from software development and a lot has changed since I graduated.

At university I studied mathematics and algorithms, but actual programming was done in archaic languages – Pascal for high-level and Assembly for low-level programming.

Although they provide a solid foundation, I was looking for something more practical, and I ended up choosing Python for its versatility. Python is not only widely used, but can also be applied to a variety of projects, including data analysis and machine learning.

The course has been very good and Jupyter notebooks with extensive comments and exercises are available for free on GitHub.

You can start applying it in practice straight away or just have some fun with your own pet projects.

If you’re an experienced developer or just want some extra practice, I found the brain teasers below quite entertaining:

On the other hand, if you are just starting out and would like more grounding in computer science, check out Harvard University’s CS50’s Introduction to Computer Science. It’s completely free, online and self-paced. It starts with some basic principles and lets you put them into practice straight away through Scratch, a graphical programming language developed by MIT. You then go on to learn more advanced concepts and apply them using C, Python, JavaScript and more.

The course also has a great community, so I highly recommend checking it out.


Startup security

Over the past year I have had the pleasure of working with a number of startups on improving their security posture. I would like to share some common pain points here and what to do about them.

Advising startups on security is not easy, as it tends to be a ‘wicked’ problem for a cash-strapped company – it often doesn’t want to spend money on security but can’t afford not to, given the potentially devastating impact of a breach. Some business models depend on customer trust, and the entire value of a company can be wiped out in a single incident.

On the plus side, security can actually increase the value of a startup by elevating trust and amplifying the brand message, which in turn leads to happier customers. It can also increase the company’s valuation by demonstrating a mature attitude towards security and governance, which is especially useful in fundraising and acquisition scenarios.

Security is there to support the business, so start by understanding the product and who uses it. Creating personas is a useful tool for understanding your customers, and the same approach can be applied to security. Think through the threat model – who’s after the company and why? At what stage of the customer journey are we likely to be exposed?

Are we trying to protect our intellectual property from competitors or sensitive customer data from organised crime? Develop a prioritised plan and a risk management approach to fit the answers. You can’t secure everything – focus on what’s truly important.

A risk-based approach is key. Remember that the company is still relatively small, so be realistic about which threats you are trying to protect against. Blindly picking your favourite framework – say, the NIST Cybersecurity Framework – and applying all of its controls might prove counterproductive.

Yes, the challenges are different compared to securing a large enterprise, but there are some upsides too. In a startup, more often than not, you’re in a privileged position to build in security and privacy by design and deal with much less technical debt. You can embed yourself in product development and engineering from day one. This will save the time and effort of trying to retrofit security later – the unfortunate reality of many large corporations.

Be wary, however, of imposing too much security on the business. At the end of the day, the company is here to innovate, albeit securely. Your aim should be to educate the people in the company about security risks and help them make the right decisions. Communicate often, showing that security is not only important to keep the company afloat but that it can also be an enabler. Changing behaviours around security will create a positive security culture and protect the business value.

How do you apply this in practice? Let’s say we established that we need to guard the company’s reputation, customer data and intellectual property all the while avoiding data breaches and regulatory fines. What should we focus on when it comes to countermeasures?

I recommend an approach that combines process and technology and focuses on three main areas: your product, your people and your platform.

  1. Product

Think of your product and your website as the front of your physical store. That’s what customers see and interact with. It generates sales, so protecting it is often your top priority. Make sure your developers are aware of the OWASP vulnerabilities and secure coding practices. Do it from the start; hire a DevOps security expert if you must. Pentest your product regularly, perform code reviews and use automated code analysis tools. Make sure you have thought through DDoS attack prevention, and look into Web Application Firewalls and encryption. API security is the name of the game here: monitor your APIs for abuse and unusual activity, harden them and think through authentication.
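The API monitoring point can be made concrete with a small sketch. Below is a minimal token-bucket rate limiter in Python – an illustrative assumption on my part, not a specific product recommendation – of the kind that sits in front of an API endpoint to throttle abusive clients:

```python
import time

class TokenBucket:
    """Per-client rate limiter: allows `rate` requests/sec with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A burst of 15 requests against a 5 req/sec limit with a burst capacity of 10:
bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]
# the first 10 are served from the burst allowance, the rest are throttled
```

In production you would keep one bucket per client (e.g. per API key) and back it with shared storage, but the core logic stays the same.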

  2. People

I talked about building a security culture above, but in a startup you go beyond raising awareness of security risks. You develop processes for reporting incidents, document your assets, define standard builds and encryption mechanisms for endpoints, think through 2FA and password managers, lock down admin accounts, secure colleagues’ laptops and phones through mobile device management, and generally do anything else that helps people do their jobs better and more securely.

  3. Platform

Some years ago I would’ve talked about the network perimeter, firewalls and DMZs here. Today it’s all about the cloud. Know your shared responsibility model and check out your cloud service provider’s good practices. The main areas to consider are data governance, logging and monitoring, identity and access management, and disaster recovery and business continuity. Separate your development and production environments. Resist the temptation to use sensitive (including customer) data in your test systems, and minimise it as much as possible. Architect it well from the beginning and it will save you precious time and money down the road.
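As a sketch of the test-data point, records can be pseudonymised on their way into non-production environments. The field names and hashing scheme here are illustrative assumptions, not a prescription:

```python
import hashlib

def pseudonymise(record: dict, fields=("email", "name", "phone")) -> dict:
    """Replace direct identifiers with stable one-way tokens before loading test data."""
    out = dict(record)
    for field in fields:
        if field in out:
            digest = hashlib.sha256(str(out[field]).encode()).hexdigest()
            out[field] = digest[:12]  # short, stable token; not reversible
    return out

prod_row = {"id": 42, "email": "jane@example.com", "plan": "pro"}
test_row = pseudonymise(prod_row)
# test_row keeps the structure ("id", "plan") but the email becomes an opaque token
```

Note that a plain hash of a low-entropy value such as an email address can be brute-forced, so in practice a keyed hash (e.g. HMAC with a secret held outside the test environment) is the safer choice.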

Every section above deserves its own blog post, and I have deliberately kept things high-level. The intention is to provide a framework for thinking through the challenges most startups I have encountered face today.

If the majority of your experience comes from the corporate environment, there are certainly skills you can leverage in the startup world too, but be mindful of the differences. The risks these companies face are different, which calls for a different response. Startups are known to be flexible, nimble and agile, so you should be too.

Image by Ryan Brooks.


Innovating in the age of GDPR

Customers are becoming increasingly aware of their rights when it comes to data privacy and they expect companies to safeguard the data they entrust to them. With the introduction of GDPR, a lot of companies had to think about privacy for the first time.

I’ve been invited to share my views on innovating in the age of GDPR as part of the Cloud and Cyber Security Expo in London.

When I was preparing for this panel I was trying to understand why this was even a topic to begin with. Why should innovation stop? If your business model is threatened by the GDPR then you are clearly doing something wrong: it was likely relying on the exploitation of consumers’ data.

But when I thought about it a bit more, I realised that there are also costs to demonstrating compliance to the regulator that a company has to account for. This is arguably easier for bigger companies with established compliance teams than for smaller upstarts, so it serves as a barrier to entry. Geography also plays a role here. What if a tech firm starts in the US or India, for example, where the regulatory regime is more relaxed when it comes to protecting customer data, and then expands to Europe when it can afford to? At least at first glance, companies starting up in Europe are at a disadvantage, as they face potential regulatory scrutiny from day one.

How big of a problem is this? I’ve been reading complaints that you need fancy lawyers who understand technology to address this challenge. I would argue, however, that fancy lawyers are only required when you are doing shady stuff with customer data. Smaller companies that are just starting up have another advantage on their side: they are new. This means they don’t have to go and retrospectively purge legacy systems of data they have been collecting over the years, potentially breaking the business logic in interdependent systems. Instead, they start with a clean slate and have an opportunity to build privacy into their product and core business processes (privacy by design).

Risk may increase as the company grows and collects more data, but I find that this risk-based approach is often missing. Implementation of your privacy programme will depend on your risk profile and appetite, and the level of risk will vary with the type and amount of data you collect. For example, a bank can receive thousands of subject access requests per month, while a small B2B company can receive one a year. Their privacy programmes will therefore be vastly different: the bank might look into technology-enabled automation, while the small company might outsource its subject request process. It is important to note, however, that risk can’t be fully outsourced – the company still owns it at the end of the day.

The market is moving towards technology-enabled privacy processes: automating privacy impact assessments, responding to customer requests, managing and responding to incidents, etc.

I also see the focus shifting from regulatory-driven privacy compliance to a broader data strategy. Companies are increasingly interested in understanding how they can use data as an asset rather than a liability. They are looking for ways to effectively manage marketing consents and opt-outs and to give power and control back to the customer, for example by creating preference centres.

Privacy is more about the philosophy of handling personal data than about specific technology tricks. This mindset can itself lead to innovation rather than stifle it. How can you solve a customer’s problem while collecting the minimum amount of personal data? Can the data be anonymised? Think of personal data like toxic waste – sure, it can be handled, but only with extreme care.


Resilience in the Cloud

Modern digital technology underpins the shift that enables businesses to implement new processes, scale quickly and serve customers in a whole new way.

Historically, organisations would invest in their own IT infrastructure to support their business objectives and the IT department’s role would be focused on keeping the ‘lights on’.

To minimise the chance of failure of the equipment, engineers traditionally introduced an element of redundancy in the architecture. That redundancy could manifest itself on many levels. For example, it could be a redundant datacentre, which is kept as a ‘hot’ or ‘warm’ site with a complete set of hardware and software ready to take the workload in case of the failure of a primary datacentre. Components of the datacentre, like power and cooling, can also be redundant to increase the resiliency.

On a lesser scale, within a single datacentre, networking infrastructure elements can be redundant. It is not uncommon to procure two firewalls instead of just one to configure them to balance the load or just to have a second one as a backup. Power and utilities companies still stock up on critical industrial control equipment to be able to quickly react to a failed component.

The majority of the effort, however, went into protecting data storage. Magnetic disks were assembled into RAID arrays to reduce the chance of data loss on failure, while backups of less time-sensitive data were written to magnetic tapes and stored in separate physical locations.

Depending on specific business objectives or compliance requirements, organisations had to invest heavily in these architectures. One-off investments were, however, only one side of the story. Ongoing maintenance, regular tests and periodic upgrades were also required to keep these components operational, and labour, electricity, insurance and other costs added to the final bill. Moreover, if a company operated in a regulated space – for example, processing payments and cardholder data – external audits, certification and attestation were also required.

With the advent of cloud computing, companies were able to abstract away a lot of this complexity and let someone else handle the building and operation of datacentres and dealing with compliance issues relating to physical security.

The need for business resilience, however, did not go away.

Cloud providers can offer options that far exceed (at comparable costs) the traditional infrastructure; but only if configured appropriately.

One example of this is the use of ‘zones’ of availability, where your resources can be deployed across physically separate datacentres. In this scenario, your service can be balanced across these availability zones and can remain running even if one of the ‘zones’ goes down. If you built your own infrastructure for this, you would have to build a datacentre in each zone yourself – and you’d better have a solid business case for that.

It is important to keep this in mind when deciding to move to the cloud from traditional infrastructure. Simply lifting and shifting your applications to the cloud may, in fact, leave you no better off: such applications are unlikely to have been developed to work in the cloud and take advantage of these additional resiliency options. Therefore, I advise against such a migration in favour of re-architecting.

Cloud Service Provider SLAs should also be considered. Compensation might be offered for failure to meet them, but it’s your job to check how they compare to the traditional ‘five nines’ of availability of an on-premises datacentre.
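To make that comparison concrete, availability percentages translate into allowed downtime with a simple back-of-the-envelope calculation (ignoring leap years):

```python
def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of downtime per year permitted by a given availability percentage."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
    return minutes_per_year * (1 - availability_pct / 100)

five_nines = downtime_minutes_per_year(99.999)  # 'five nines': about 5.3 minutes/year
three_nines = downtime_minutes_per_year(99.9)   # a common cloud SLA: about 8.8 hours/year
```

A 99.9% SLA therefore permits roughly a hundred times more downtime than the traditional ‘five nines’, which is worth checking before assuming parity.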

You should also be aware of the many differences between cloud service models.

When procuring a SaaS, for example, your ability to manage resilience is significantly reduced. In this case you are relying completely on your provider to keep the service up and running, which raises the concern of a provider outage. At a minimum, make sure you can regularly export your data. Even with the data, however, your options are limited without a second application on hand to process it, which may also require data transformation. Study the historical performance and pick your SaaS provider carefully.

IaaS gives you more options to design an architecture for your application, but with this great freedom comes great responsibility. The provider is responsible for fewer layers of the overall stack when it comes to IaaS, so you must design and maintain a lot of it yourself. When doing so, assume failure rather than thinking of it as a (remote) possibility.  Availability Zones are helpful, but not always sufficient.  What scenarios require consideration of the use of a separate geographical region? The European Banking Authority recommendations on Exit and Continuity can be an interesting example to look at from a testing and deliverability perspective.

Be mindful of characteristics of SaaS that also affect PaaS from a redundancy perspective. For example, if you’re using a proprietary PaaS then you can’t just lift and shift your data and code.

Above all, when designing for resiliency, take a risk-based approach. Not all your assets have the same criticality: prioritise them, and know your RPO and RTO for each. Remember that a SaaS can itself be built on top of AWS or Azure, exposing you to supply chain risks.

Even when assuming the worst, you may not have to keep every single service running should the worst actually happen. For one thing, it’s too expensive – just ask your business stakeholders. The very worst time to be defining your approach to resilience is in the middle of an incident, closely followed by shortly after an incident.  As with other elements of security in the cloud, resilience should “shift left” and be addressed as early in the delivery cycle as possible.  As the Scout movement is fond of saying – “be prepared”.

Image by Berkeley Lab.


Author of the month for January 2019

IT Governance Publishing named me the author of the month and kindly provided a 20% discount on my book.

There’s an interview available in the form of a podcast, where I discuss the most significant challenges related to change management and organisational culture, the common causes of a poor security culture, and my advice for improving the information security culture in your organisation.

ITGP also made one of the chapters of the audio version of my book available for free – I hope you enjoy it!


Securing JSON Web Tokens

JSON Web Tokens (JWTs) are quickly becoming a popular way to implement information exchange and authorisation in single sign-on scenarios.

As with many things, this technology can be either quite secure or very insecure, and a lot depends on the implementation. This opens up a number of possibilities for attackers to exploit vulnerabilities if the standard is poorly implemented or outdated libraries are used.

Here are some of the possible attack scenarios:

  • Attackers can modify the token’s header to set the algorithm to ‘none’, indicating that the token’s integrity has already been verified and fooling a misconfigured server into accepting it as valid
  • Attackers can change the algorithm from ‘RS256’ to ‘HS256’ and use the public key to generate an HMAC signature for the token. If the server trusts the algorithm declared in the JWT header, it will treat the token as one generated with the ‘HS256’ algorithm and use its public key as the HMAC secret to verify it
  • JWTs signed with the HS256 algorithm can be susceptible to key disclosure when weak secrets are used. Attackers can conduct offline brute-force or dictionary attacks against the token, since they do not need to interact with the server to check candidate keys once a token has been issued
  • Sensitive information (e.g. internal IP addresses) can be revealed, as the JWT payload is only base64-encoded, not encrypted
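The last point is easy to demonstrate with just the Python standard library: a JWT payload is base64url-encoded JSON, so anyone holding the token can read it without any key. The claims below are made up for illustration:

```python
import base64
import json

# Build a token with the standard header.payload.signature structure.
claims = {"sub": "alice", "internal_ip": "10.0.0.5"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = "header." + payload + ".signature"

# Anyone intercepting the token can decode the payload without the signing key.
middle = token.split(".")[1]
middle += "=" * (-len(middle) % 4)  # restore the stripped base64 padding
recovered = json.loads(base64.urlsafe_b64decode(middle))
# recovered now contains the original claims, including the internal IP
```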

I recommend the following steps to address the concerns above:

  • Reject tokens with the ‘none’ algorithm when a signing key was used to issue them
  • Use an appropriate key length (e.g. 256-bit) to protect against brute-force attacks
  • Adjust the JWT validity period to the required security level (e.g. from a few minutes up to an hour). For extra security, consider using reference tokens if you need to be able to revoke or invalidate them
  • Use HTTPS/TLS to ensure JWTs are encrypted during client–server communication, reducing the risk of man-in-the-middle attacks
  • Overall, follow best practices for implementation, only use up-to-date and secure libraries, and choose the right algorithm for your requirements

OWASP have more detailed recommendations with Java code samples alongside other noteworthy material for common vulnerabilities and secure coding practices, so I encourage you to check it out if you need more information.


Digital transformation

I’ve recently been involved in a number of digital transformation projects and wanted to share some lessons learned in this blog.

Firstly, there’s no one-size-fits-all approach to successful digital transformation, so it always helps to start with a why. For instance, why is the company considering digitalisation? Perhaps the competitive landscape has changed or some of the existing business models are becoming less relevant in light of new technological trends.

Regardless of the reasons, I would argue that no special digital strategy needs to be developed. Rather, we need to see how digitalisation supports the overall business strategy, and how digital trends affect your company.

While strategising in the boardroom helps, keeping customers in mind is paramount. Rather than simply digitising existing business processes (such as going paperless), it’s useful to think about them as multiple customer journeys to maximise the value for the consumer.

Design thinking is a good method to use when approaching this, as it helps to create a customer-centric solution. It begins with a deep understanding of customer problems and iterates through prototyping, testing and continuous feedback. This process also aligns well with modern iterative frameworks for software development and broader agile working.

Learning from feedback on your minimum viable product (MVP) helps to refine your initial assumptions and adjust the approach where necessary.

For example, adopting and combining technology like Cloud, Big Data and Machine Learning can help improve the decision-making process in one department, so it can then be adopted by the rest of the enterprise once the business benefits have been validated.

Having a clear data architecture is key in such transformation. It’s rarely about just building a mobile app, but about making better business decisions through effective use of data. Therefore, before embarking on any data analytics initiative, it’s imperative to be clear on why the data is being collected and what it’s going to be used for.

While working with a Power and Utilities company, I helped them securely combine Internet of Things devices and Cloud infrastructure to connect assets to the grid, analyse consumption data to predict and respond to demand and automate inventory management. As outlined above, it started with a relatively small pilot and quickly scaled up across the enterprise.

Yes, traditional companies might not be as nimble as startups, but they have other advantages: assets and data are two obvious ones. Digitalisation can help make this data actionable to better serve customers. To enable this, such companies should seek not only opportunities to digitise their core functions, but also new growth areas. If some capabilities are missing, they can be acquired by engaging with other members of the ecosystem through partnerships or acquisitions.

It’s not all about technology, however. People play a key role in digital transformation. And I’m not only talking about the customers. Employees in your organisation might have to adopt new ways of working and develop new skills to keep up with the pace of change. Recruitment requirements and models might have to adjust accordingly too.

If you would like to learn more, there’s a free online course on digital transformation developed by BCG in collaboration with the University of Virginia that provides a good summary of current technology trends impacting businesses. Feel free to jump straight to week 4 for the last few modules discussing their framework and some case studies if you are after more practical advice.

Image by John Pastor.