How to detect threats in AWS with GuardDuty

GuardDuty

Once some basic asset management, identity and access management, and logging capabilities have been established in AWS, it’s time to move to the threat detection phase of your security programme.

There are several ways to implement threat detection in AWS, but by far the easiest (and perhaps cheapest) setup is to use Amazon’s native GuardDuty. It detects root user logins, policy changes, compromised keys, instances, users and more. As an added benefit, Amazon keep adding new rules as they continue evolving the service.

To detect threats in your AWS environment, GuardDuty ingests CloudTrail, VPC Flow Logs and VPC DNS logs. You don’t need to configure these separately for GuardDuty to be able to access them, which simplifies the setup. The price of the service depends on the number of events analysed, but it comes with a free 30-day trial which allows you to understand the scope, utility and potential costs.

It’s a regional service, so it should be enabled in all regions, even the ones where you currently don’t have any resources. You might start using new regions in the future and, perhaps more importantly, attackers might do it on your behalf. It doesn’t cost extra in regions with no activity, so there is really no excuse not to switch it on everywhere.
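If you have more than a handful of regions, scripting the rollout is easier than clicking through the console. Below is a minimal sketch using boto3; it assumes credentials with the relevant GuardDuty permissions, and the region used for the EC2 client is just an example.

```python
# Minimal sketch: enable a GuardDuty detector in every region visible to the account.
import boto3

# Any region works for listing regions; eu-west-1 is just an example.
ec2 = boto3.client("ec2", region_name="eu-west-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

for region in regions:
    guardduty = boto3.client("guardduty", region_name=region)
    if guardduty.list_detectors()["DetectorIds"]:
        print(f"{region}: detector already enabled")
        continue
    detector_id = guardduty.create_detector(Enable=True)["DetectorId"]
    print(f"{region}: enabled detector {detector_id}")
```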

To streamline the management, I recommend following the AWS guidance on channelling the findings to a single account, where they can be analysed by the security operations team.

Master

This requires establishing a master-member relationship between accounts, where the master account is the one monitored by the security operations team. You will then need to enable GuardDuty in every member account and accept the invite from the master.
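A rough sketch of that invitation flow with boto3 is below; the account ID, email address and region are placeholders, and newer SDK versions expose the same flow under ‘administrator’ rather than ‘master’ naming.

```python
# Minimal sketch of the master-member invitation flow (placeholders throughout).
import boto3

# In the master (security operations) account:
master_gd = boto3.client("guardduty", region_name="eu-west-1")
master_detector = master_gd.list_detectors()["DetectorIds"][0]
master_gd.create_members(
    DetectorId=master_detector,
    AccountDetails=[{"AccountId": "111111111111", "Email": "member@example.com"}],
)
master_gd.invite_members(
    DetectorId=master_detector,
    AccountIds=["111111111111"],
    Message="Please join the central GuardDuty account",
)

# In the member account (run with that account's credentials):
member_gd = boto3.client("guardduty", region_name="eu-west-1")
member_detector = member_gd.list_detectors()["DetectorIds"][0]
invitation = member_gd.list_invitations()["Invitations"][0]
member_gd.accept_invitation(
    DetectorId=member_detector,
    MasterId=invitation["AccountId"],
    InvitationId=invitation["InvitationId"],
)
```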

You don’t have to rely on the AWS console to access GuardDuty findings, as they can be streamed using CloudWatch Events and Kinesis to centralise the analysis. You can also write custom rules specific to your environment and mute existing ones, customising the implementation. These, however, require a bit more practice, so I will cover them in future blogs.
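As an illustration of the streaming part, here is a minimal sketch that routes GuardDuty findings to a Kinesis stream via a CloudWatch Events rule; the rule name, stream ARN and IAM role ARN are placeholders you would replace with your own.

```python
# Minimal sketch: forward GuardDuty findings to a Kinesis stream (placeholder ARNs).
import json
import boto3

events = boto3.client("events", region_name="eu-west-1")

events.put_rule(
    Name="guardduty-findings-to-kinesis",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="guardduty-findings-to-kinesis",
    Targets=[{
        "Id": "central-findings-stream",
        "Arn": "arn:aws:kinesis:eu-west-1:111111111111:stream/guardduty-findings",
        "RoleArn": "arn:aws:iam::111111111111:role/guardduty-events-to-kinesis",
    }],
)
```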


Netrunner: what infosec might look like in the future

Netrunner

Android: Netrunner is a two-player card game that can teach you a great deal about cyber security. It’s fun to play too.

Bad news first: although initially intended as a ‘living card game’ with constantly evolving gameplay, the game has now been discontinued, so no further expansions will be published, which limits community interest, ongoing deckbuilding and tournaments.

Now to the good news, which is pretty much the rest of this blog. None of the above can stop you from enjoying this great game. You can still acquire the initial core set which contains all you need for casual play.

The premise of this game is simple: mega corporations control all aspects of our lives and hackers (known as runners) oppose them. I know it was supposed to be set in a dystopian cyberpunk future, but some elements of it have been coming to life sooner than expected since the original game’s release in 1996.

Runners

The runners vary in their abilities, which closely align with their motivations: money, intellectual curiosity, disdain for corporations. Corporations have their core competencies too. Again, just like in real life. The core set I mentioned earlier consists of seven pre-built decks, balanced by the creators: three for runners and four for corporations, each with its own unique play style.

The game is asymmetrical with different win conditions: runners are trying to hack into corporations’ networks to steal sensitive information (known as agendas in the game) and corporations are aiming to defend their assets to achieve their objectives (advance agendas). This masterfully highlights the red team versus blue team tension commonplace in today’s infosec community.

Troubleshooter

A corporation has to adapt to the evolving threats posed by hackers, installing protective devices and conducting defensive operations, all the while generating revenue to fund these projects and reach its targets to win the game. It’s not only about defence for the corporation either. Today’s “hacking back” debate has apparently been settled in the future, with corporations able to trap, tag and trace hackers to inflict real damage, as an alternative win condition.

Cyberfeeder

Runners differ vastly in their methods of penetrating a corporation’s defences and have to take care of an economy of their own: all this cutting-edge hacking gear costs money and memory units. Example cards in the runner’s toolbox sometimes closely resemble modern methods (e.g. siphoning money off a corp’s accounts) and sometimes gaze far into the future, with brain-machine interfaces to speed up the process.

Basic rules are simple, but there are plenty of intricate details that make players think about strategy and tactics. It’s a game of bluff, risk and careful calculation. There’s also an element of chance in it, which teaches you to make the best use of the resources you currently have and adapt accordingly.

It’s not an educational game, but you can learn some interesting security concepts while playing, as you are forced to think like a hacker taking chances and exploiting weaknesses, or a defender trying to protect your secrets. All you need is a deck of cards and someone to play with.


Understanding your threat landscape

Identifying applicable threats is a good step to take before defining the security controls your organisation should put in place. There are various techniques to help you with threat modelling, but I wanted to give you some high-level pointers in this blog to get you started. Of course, all of these should be tailored to your specific business.

I find it useful to think about potential attacks as three broad categories:

1. Commoditised attacks. Usually not targeted and involve off-the-shelf malware. Examples include:

2. Tailored attacks. As the name suggests, these are tailored and can vary in degree of sophistication. Examples include:

3. Accidental. Not every data breach is triggered by a malicious actor. Therefore, it is important to recognise that mistakes happen. Unfortunately, sometimes they lead to undesired consequences, like the below:

Information security professionals can use the above examples in communications with their business stakeholders not to spread fear, but to present certain security challenges in context.

It’s often helpful to make it a bit more personal, defining specific threat actors, their targets, motivation and impact on the business. Again, the below table serves as an example and can be used as a starting point for you to define your own.

| Threat actor | Description | Motivation | Target | Impact on business |
| --- | --- | --- | --- | --- |
| Organised crime | International hacking groups | Financial gain | Commercial data, personal data for identity fraud | Reputational damage, regulatory fines, loss of customer trust |
| Insider | Intentional or unintentional | Human error, grudge, financial gain | Intellectual property, commercial data | Destruction or alteration of information, theft of information, reputational damage, regulatory fines |
| Competitors | Espionage and sabotage | Competitive advantage | Intellectual property, commercial information | Disruption or destruction, theft of information, reputational damage, loss of customers |
| State-sponsored | Espionage | Political | Intellectual property, commercial data, personal data | Theft of information, reputational damage |

You can then use your understanding of assets and threats relevant to your company to identify security risks. For instance:

  • Failure to comply with relevant regulation – revenue loss and reputational damage due to fines and unwanted media attention as a result of non-compliance with GDPR, PCI DSS, etc.
  • Breach of personal data – regulatory fines, potential litigation and loss of customer trust due to accidental mishandling, external system compromise or insider threat leading to exposure of personal data of customers
  • Disruption of operations – decreased productivity or inability to trade due to compromise of IT systems by malicious actor, denial of service attacks, sabotage or employee error

Again, feel free to use these as examples, but always tailor them based on what’s important to your business. It’s also worth remembering that this is not a one-off exercise. Tracking your assets, threats and risks should be part of your security management function and be incorporated into operational risk management and continuous improvement cycles.

This will allow you to demonstrate the value of security through pragmatic and prioritised security controls, focusing on protecting the most important assets, ensuring alignment to business strategy and embedding security into the business.


Startup security


Over the past year I have had the pleasure of working with a number of startups on improving their security posture. I would like to share some common pain points here and what to do about them.

Advising startups on security is not easy, as it tends to be a ‘wicked’ problem for a cash-strapped company – we often don’t want to spend money on security but can’t afford not to, given the potentially devastating impact of a security breach. Some business models depend on customer trust, and the entire value of a company can be wiped out in a single incident.

On the plus side, security can actually increase the value of a startup by elevating trust and amplifying the brand message, which in turn leads to happier customers. It can also increase company valuation by demonstrating a mature attitude towards security and governance, which is especially useful in fundraising and acquisition scenarios.

Security is there to support the business, so start with understanding the product and who uses it. Creating personas is quite a useful tool when trying to understand your customers. The same approach can be applied to security. Think through the threat model – who’s after the company and why? At what stage of a customer journey are we likely to get exposed?

Are we trying to protect our intellectual property from competitors, or sensitive customer data from organised crime? Develop a prioritised plan and risk management approach to fit the answers. You can’t secure everything – focus on what’s truly important.

A risk-based approach is key. Remember that the company is still relatively small and you need to be realistic about which threats you are trying to protect against. Blindly picking your favourite NIST Cybersecurity Framework and applying all the controls might prove counterproductive.

Yes, the challenges are different compared to securing a large enterprise, but there are some upsides too. In a startup, more often than not, you’re in a privileged position to build in security and privacy by design and deal with much less technical debt. You can embed yourself in product development and engineering from day one. This will save the time and effort of trying to retrofit security later – the unfortunate reality of many large corporations.

Be wary, however, of imposing too much security on the business. At the end of the day, the company is here to innovate, albeit securely. Your aim should be to educate the people in the company about security risks and help them make the right decisions. Communicate often, showing that security is not only important to keep the company afloat but that it can also be an enabler. Changing behaviours around security will create a positive security culture and protect the business value.

How do you apply this in practice? Let’s say we established that we need to guard the company’s reputation, customer data and intellectual property all the while avoiding data breaches and regulatory fines. What should we focus on when it comes to countermeasures?

I recommend an approach that combines process and technology and focuses on three main areas: your product, your people and your platform.

  1. Product

Think of your product and your website as the front of your physical store. That’s what customers see and interact with. It generates sales, so protecting it is often your top priority. Make sure your developers are aware of OWASP vulnerabilities and secure coding practices. Do it from the start; hire a DevOps security expert if you must. Pentest your product regularly. Perform code reviews and use automated code analysis tools. Make sure you have thought through DDoS attack prevention. Look into web application firewalls and encryption. API security is the name of the game here: monitor your APIs for abuse and unusual activity, harden them and think through authentication.

  2. People

I talked about building a security culture above, but in a startup you go beyond raising awareness of security risks. You develop processes for reporting incidents, document your assets, define standard builds and encryption mechanisms for endpoints, think through 2FA and password managers, lock down admin accounts, secure colleagues’ laptops and phones through mobile device management solutions, and generally do anything else that will help people do their jobs better and more securely.

  3. Platform

Some years ago I would’ve talked about the network perimeter, firewalls and DMZs here. Today it’s all about the cloud. Know your shared responsibility model. Check out the good practices published by your cloud service provider. The main areas to consider here are data governance, logging and monitoring, identity and access management, and disaster recovery and business continuity. Separate your development and production environments. Resist the temptation to use sensitive (including customer) data in your test systems; minimise it as much as possible. Architect it well from the beginning and it will save you precious time and money down the road.

Every section above deserves its own blog and I have deliberately kept it high-level. The intention here is to provide a framework for you to think through the challenges most startups I encountered face today.

If the majority of your experience comes from the corporate environment, there are certainly skills you can leverage in the startup world too, but be mindful of the differences. The risks these companies face are different, which leads to the need for a different response. Startups are known to be flexible, nimble and agile, so you should be too.

Image by Ryan Brooks.


Securing JSON Web Tokens


JSON Web Tokens (JWTs) are quickly becoming a popular way to implement information exchange and authorisation in single sign-on scenarios.

As with many things, this technology can be quite secure or very insecure, and a lot depends on the implementation. This opens up a number of possibilities for attackers to exploit vulnerabilities if the standard is poorly implemented or outdated libraries are used.

Here are some of the possible attack scenarios:

  • Attackers can modify the token and set the signing algorithm to ‘none’, indicating that the integrity of the token has already been verified and fooling the server into accepting it as a valid token
  • Attackers can change the algorithm from ‘RS256’ to ‘HS256’ and use the public key to generate an HMAC signature for the token, as the server trusts the data inside the header of a JWT and doesn’t validate the algorithm used to issue the token. The server will then treat the token as one generated with the ‘HS256’ algorithm and use its public key to decode and verify it
  • JWTs signed with the HS256 algorithm can be susceptible to key disclosure when weak secrets are used. Attackers can conduct offline brute-force or dictionary attacks against the token, since a client does not need to interact with the server to check the validity of the signing key after a token has been issued
  • Sensitive information (e.g. internal IP addresses) can be revealed, as all the information inside the JWT payload is stored in plain text
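
To see why the last point matters, here is a small illustration (assuming the PyJWT library; the claims and secret are made up): a token’s header and payload are just base64-encoded JSON, so anyone holding the token can read them without knowing the signing key.

```python
# Minimal sketch (PyJWT): JWT header and payload are readable without the key.
import jwt  # pip install pyjwt

# Issue a token the way a server might (secret and claims are placeholders).
token = jwt.encode(
    {"user": "alice", "internal_ip": "10.0.0.5"},
    "server-secret",
    algorithm="HS256",
)

# An attacker holding only the token can still inspect everything but the signature.
print(jwt.get_unverified_header(token))                        # {'alg': 'HS256', 'typ': 'JWT'}
print(jwt.decode(token, options={"verify_signature": False}))  # {'user': 'alice', 'internal_ip': '10.0.0.5'}
```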

I recommend the following steps to address the concerns above (a minimal validation sketch follows the list):

  • Reject tokens set with the ‘none’ algorithm when a private key was used to issue them
  • Use an appropriate key length (e.g. 256-bit) to protect against brute-force attacks
  • Adjust the JWT validity period depending on the required security level (e.g. from a few minutes up to an hour). For extra security, consider using reference tokens if there’s a need to be able to revoke/invalidate them
  • Use HTTPS/TLS to ensure JWTs are encrypted during client-server communication, reducing the risk of man-in-the-middle attacks
  • Overall, follow the best practices for implementing them: only use up-to-date and secure libraries and choose the right algorithm for your requirements
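
A minimal sketch of what that validation might look like with PyJWT is below; the secret, issuer and 15-minute lifetime are illustrative placeholders, not recommendations for your specific setup.

```python
# Minimal sketch (PyJWT): pin the algorithm, require expiry and keep lifetimes short.
import datetime
import jwt  # pip install pyjwt

SECRET = "use-a-long-random-256-bit-secret"  # placeholder; load from a secrets store

def issue_token(subject: str) -> str:
    now = datetime.datetime.utcnow()
    return jwt.encode(
        {"sub": subject, "iss": "auth.example.com", "iat": now,
         "exp": now + datetime.timedelta(minutes=15)},  # short validity window
        SECRET,
        algorithm="HS256",
    )

def verify_token(token: str) -> dict:
    # algorithms=[...] pins what the server accepts, so a 'none' token or an
    # RS256->HS256 downgrade in the header is rejected rather than trusted.
    return jwt.decode(
        token,
        SECRET,
        algorithms=["HS256"],
        issuer="auth.example.com",
        options={"require": ["exp", "iss", "sub"]},
    )

print(verify_token(issue_token("alice"))["sub"])  # alice
```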

OWASP have more detailed recommendations with Java code samples alongside other noteworthy material for common vulnerabilities and secure coding practices, so I encourage you to check it out if you need more information.


Artificial intelligence and cyber security: attacking and defending


Cyber security is a manpower-constrained market – therefore the opportunities for AI automation are vast. Frequently, AI is used to make certain defensive aspects of cyber security more wide-reaching and effective: combating spam and detecting malware are prime examples. On the opposite side, there are many incentives to use AI when attempting to attack vulnerable systems belonging to others. These incentives include the speed of attack, low costs and the difficulty of attracting skilled staff in an already constrained environment.

Current research in the public domain is limited to white hat hackers employing machine learning to identify vulnerabilities and suggest fixes. At the speed AI is developing, however, it won’t be long before we see attackers using these capabilities at mass scale, if they don’t already.

How do we know for sure? The fact is, it is quite hard to attribute a botnet or a phishing campaign to AI rather than a human. Industry practitioners, however, believe that we will see an AI-powered cyber-attack within a year: 62% of surveyed Black Hat conference participants seem to be convinced of such a possibility.

Many believe that AI is already being deployed for malicious purposes by highly motivated and sophisticated attackers. It’s not at all surprising given that AI systems make an adversary’s job much easier. Why? Resource efficiency aside, they introduce psychological distance between an attacker and their victim. Indeed, many offensive techniques traditionally involved engaging with others and being present, which in turn limited the attacker’s anonymity. AI increases both the anonymity and the distance. Autonomous weapons are a case in point: attackers are no longer required to pull the trigger and observe the impact of their actions.

It doesn’t have to be about human life either. Let’s explore some of the less severe applications of AI for malicious purposes: cybercrime.

Social engineering remains one of the most common attack vectors. How often is malware introduced into systems when someone just clicks on an innocent-looking link?

The fact is, in order to entice the victim to click on that link, quite a bit of effort is required. Historically it’s been labour-intensive to craft a believable phishing email. Days and sometimes weeks of research and the right opportunity were required to successfully carry out such an attack. Things are changing with the advent of AI in cyber.

Analysing large data sets helps attackers prioritise their victims based on online behaviour and estimated wealth. Predictive models can go further and determine the willingness to pay the ransom based on historical data and even adjust the size of pay-out to maximise the chances and therefore revenue for cyber criminals.

Imagine all the data available in the public domain, as well as secrets previously leaked through various data breaches, combined for the ultimate victim profiling in a matter of seconds with no human effort.

Once the victim is selected, AI can be used to create and tailor emails and sites that are most likely to be clicked on, based on the crunched data. Trust is built by engaging people in longer dialogues over extended periods of time on social media, which requires no human effort – chatbots are now capable of maintaining such interaction and even impersonating real contacts by mimicking their writing style.

Machine learning used for victim identification and reconnaissance greatly reduces the attacker’s resource investment. Indeed, there is no longer even a need to speak the same language! This inevitably leads to an increase in the scale and frequency of highly targeted spear phishing attacks.

The sophistication of such attacks can also go up. Exceeding human capabilities of deception, AI can mimic voices thanks to rapid developments in speech synthesis. These systems can create realistic voice recordings based on existing data and elevate social engineering to the next level through impersonation. This, combined with the other techniques discussed above, paints a rather grim picture.

So what do we do?

Let’s outline some potential defence strategies that we should be thinking about already.

Firstly and rather obviously, increasing the use of AI for cyber defence is not such a bad option. A combination of supervised and unsupervised learning approaches is already being employed to predict new threats and malware based on existing patterns.

Behaviour analytics is another avenue to explore. Machine learning techniques can be used to monitor system and human activity to detect potential malicious deviations.

Importantly though, when using AI for defence, we should assume that attackers anticipate it. We must also keep track of AI development and its application in cyber to be able to credibly predict malicious applications.

In order to achieve this, a collaboration between industry practitioners, academic researchers and policymakers is essential. Legislators must account for potential use of AI and refresh some of the definitions of ‘hacking’. Researchers should carefully consider malicious application of their work. Patching and vulnerability management programs should be given due attention in the corporate world.

Finally, awareness should be raised among users on preventing social engineering attacks, discouraging password re-use and advocating two-factor authentication where possible.

References

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation 2018

Cummings, M. L. 2004. “Creating Moral Buffers in Weapon Control Interface Design.” IEEE Technology and Society Magazine (Fall 2004), 29–30.

Seymour, J. and Tully, P. 2016. “Weaponizing data science for social engineering: Automated E2E spear phishing on Twitter,” Black Hat conference

Allen, G. and Chan, T. 2017. “Artificial Intelligence and National Security,” Harvard Kennedy School Belfer Center for Science and International Affairs.

Yampolskiy, R. 2017. “AI Is the Future of Cybersecurity, for Better and for Worse,” Harvard Business Review, May 8, 2017.

Image by fdecomite.


Cyber Wargaming Workshop


I was recently asked to develop a two-day tabletop cyber wargaming exercise. Here’s the agenda.
Please get in touch if you would like to know more.

Day 1
Introduction
Course Objectives
Module 1: What is Business Wargaming?
How Does Business Wargaming Work?

  •         Teams
  •         Interaction
  •         Moves

Module 2: Cyber Fundamentals

  •         Practical Risk Management
  •         Problems with risk management
  •         Human aspects of security
  •         Convergence of physical and information security
  •         Attacker types and motivations
  •         Security Incident management
  •         Security incident handling and response
  •         Crisis management and business continuity
  •         Cyber security trends to consider

Module 3: Introducing a Case Study

  •         Company and organisational structure
  •         Processes and architecture
  •         Issues

Module 4: Case Study Exercises

  •         Case study exercise 1: Risk Management
  •         Case study exercise 2: Infrastructure and Application Security

Day 2
Introducing a wargaming scenario
Roles and responsibilities
Simulated exercise to stress response capabilities
The scenario will be testing:

  •         How organisations respond from a business perspective
  •         How organisations respond to the attacks technically
  •         How organisations are affected by the scenario
  •         How they share information amongst relevant parties

Feedback to the participants
Course wrap up

Image courtesy zirconicusso / FreeDigitalPhotos.net