Artificial intelligence and cyber security: attacking and defending

Cyber security is a manpower-constrained market, so the opportunities for AI automation are vast.  Frequently, AI is used to make certain defensive aspects of cyber security more wide-reaching and effective: combating spam and detecting malware are prime examples.  On the offensive side, there are many incentives to use AI when attacking vulnerable systems belonging to others: the speed of attack, low costs and the difficulty of attracting skilled staff in an already constrained market.

Current research in the public domain is limited to white-hat hackers employing machine learning to identify vulnerabilities and suggest fixes.  At the speed AI is developing, however, it won’t be long before we see attackers using these capabilities at mass scale, if they don’t already.

How do we know for sure? The fact is, it is quite hard to attribute a botnet or a phishing campaign to AI rather than a human. Industry practitioners, however, believe that we will see an AI-powered cyber-attack within a year: 62% of surveyed Black Hat conference participants are convinced of the possibility.

Many believe that AI is already being deployed for malicious purposes by highly motivated and sophisticated attackers. This is not at all surprising, given that AI systems make an adversary’s job much easier. Why? Resource efficiency aside, they introduce psychological distance between an attacker and their victim. Many offensive techniques traditionally involved engaging with others and being present, which limited an attacker’s anonymity. AI increases both anonymity and distance. Autonomous weapons are a case in point: attackers are no longer required to pull the trigger and observe the impact of their actions.

It doesn’t have to be about human life either. Let’s explore some of the less severe applications of AI for malicious purposes: cybercrime.

Social engineering remains one of the most common attack vectors. How often is malware introduced into systems because someone just clicked on an innocent-looking link?

The fact is, quite a bit of effort is required to entice the victim to click on that link. Historically, crafting a believable phishing email has been labour-intensive: days and sometimes weeks of research, plus the right opportunity, were required to carry out such an attack successfully.  Things are changing with the advent of AI in cyber.

Analysing large data sets helps attackers prioritise their victims based on online behaviour and estimated wealth. Predictive models can go further: they can determine a victim’s willingness to pay a ransom based on historical data, and even adjust the size of the demand to maximise the chance of payment and therefore the criminals’ revenue.
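
To make the mechanics concrete, here is a minimal sketch of such a predictive scoring model, trained on purely synthetic data. The features (online activity, estimated wealth, payment history) and the modelling choices are illustrative assumptions, not details of any real campaign:

```python
# Illustrative only: a generic classifier of the kind described above,
# trained on synthetic data. Feature names and labels are made up.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Hypothetical features: online activity, estimated wealth, past payment history
X = rng.random((1000, 3))
# Synthetic labels loosely tied to wealth and payment history
y = (0.5 * X[:, 1] + 0.4 * X[:, 2] + 0.1 * rng.random(1000) > 0.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)
print(model.predict_proba(X[:5])[:, 1])  # predicted likelihood of payment
```

The point is not the model itself but the economics: once such a pipeline exists, scoring a million profiles costs barely more than scoring one.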

Imagine all the data available in the public domain, combined with secrets previously leaked through various data breaches, being assembled into the ultimate victim profile in a matter of seconds, with no human effort.

Once the victim is selected, AI can be used to create and tailor the emails and sites they are most likely to click on, based on the crunched data. Trust is built by engaging people in long dialogues over extended periods on social media, which requires no human effort: chatbots are now capable of maintaining such interactions and can even impersonate real contacts by mimicking their writing style.

Machine learning used for victim identification and reconnaissance greatly reduces an attacker’s resource investment. Indeed, there is no longer even a need to speak the victim’s language! This inevitably leads to an increase in the scale and frequency of highly targeted spear-phishing attacks.

The sophistication of such attacks can also increase. Exceeding human capabilities of deception, AI can mimic voices thanks to rapid developments in speech synthesis. These systems can create realistic voice recordings from existing samples and elevate social engineering to the next level through impersonation. Combined with the other techniques discussed above, this paints a rather grim picture.

So what do we do?

Let’s outline some potential defence strategies that we should be thinking about already.

Firstly, and rather obviously, increasing the use of AI for cyber defence is not such a bad option. A combination of supervised and unsupervised learning approaches is already being employed to predict new threats and malware based on existing patterns.
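
As a rough illustration of that combination, the sketch below pairs a supervised classifier (matching known malicious patterns) with an unsupervised anomaly detector (flagging novel deviations). The features, labels and data are synthetic assumptions:

```python
# A minimal sketch of combining supervised and unsupervised detection.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

rng = np.random.default_rng(42)
X_known = rng.random((500, 8))      # feature vectors of labelled samples (synthetic)
y_known = rng.integers(0, 2, 500)   # 1 = malicious, 0 = benign (labels assumed)

clf = RandomForestClassifier(n_estimators=100).fit(X_known, y_known)  # known patterns
novelty = IsolationForest(random_state=42).fit(X_known)               # baseline of "normal"

X_new = rng.random((5, 8))          # incoming samples to triage
known_hits = clf.predict(X_new)     # 1 = looks like known malware
novel_hits = novelty.predict(X_new) # -1 = anomalous relative to the baseline
for known, novel in zip(known_hits, novel_hits):
    print("alert" if known == 1 or novel == -1 else "ok")
```

The design choice is deliberate: the supervised model catches what we have seen before, while the anomaly detector gives us a chance of catching what we haven’t.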

Behaviour analytics is another avenue to explore. Machine learning techniques can be used to monitor system and human activity to detect potential malicious deviations.
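
A minimal sketch of the idea, assuming a per-user activity baseline and a simple z-score threshold (both hypothetical choices; real deployments use far richer features):

```python
# Behavioural baselining: model each user's normal activity and flag
# sessions that deviate strongly. Data and thresholds are illustrative.
import numpy as np

# Hypothetical history: (login_hour, megabytes_downloaded) per session
history = {
    "alice": np.array([[9, 40], [10, 55], [9, 35], [11, 60], [10, 45]]),
}

def is_anomalous(user, session, z_threshold=3.0):
    """Flag a session whose features deviate strongly from the user's baseline."""
    baseline = history[user]
    mean, std = baseline.mean(axis=0), baseline.std(axis=0) + 1e-9
    z = np.abs((np.asarray(session) - mean) / std)
    return bool((z > z_threshold).any())

print(is_anomalous("alice", (10, 50)))   # typical session -> False
print(is_anomalous("alice", (3, 900)))   # 3 a.m., huge download -> True
```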

Importantly though, when using AI for defence, we should assume that attackers anticipate it. We must also keep track of AI development and its application in cyber to be able to credibly predict malicious applications.

In order to achieve this, collaboration between industry practitioners, academic researchers and policymakers is essential. Legislators must account for the potential use of AI and refresh some of the definitions of ‘hacking’. Researchers should carefully consider the malicious applications of their work. Patching and vulnerability management programmes should be given due attention in the corporate world.

Finally, user awareness should be raised on preventing social engineering attacks, discouraging password re-use and adopting two-factor authentication wherever possible.
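
For reference, the two-factor authentication most apps implement is based on time-based one-time passwords (TOTP, RFC 6238). A self-contained sketch of the mechanism, using an example shared secret:

```python
# TOTP (RFC 6238): the code your authenticator app shows every 30 seconds.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period              # current 30-second window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example base32 secret, not a real credential
```

Both the server and the user’s device derive the same code from the shared secret and the current time, so a stolen password alone is no longer enough.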

References

Brundage, M. et al. 2018. “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.”

Cummings, M. L. 2004. “Creating Moral Buffers in Weapon Control Interface Design.” IEEE Technology and Society Magazine (Fall 2004), 29–30.

Seymour, J. and Tully, P. 2016. “Weaponizing data science for social engineering: Automated E2E spear phishing on Twitter,” Black Hat conference.

Allen, G. and Chan, T. 2017. “Artificial Intelligence and National Security,” Harvard Kennedy School Belfer Center for Science and International Affairs.

Yampolskiy, R. 2017. “AI Is the Future of Cybersecurity, for Better and for Worse,” Harvard Business Review, May 8, 2017.


Security function review

When determining the level of maturity of a security function, I focus on the following areas and try to answer these questions:

Business alignment

  • Is security strategy aligned with business strategy (including vision and mission)?
  • Is it documented and communicated?
  • Is it supported by the leadership?
  • Is there a guiding policy in place to achieve set objectives?

Governance

  • Have accountable individuals been identified?
  • Have risk management practices been established?
  • Have audit and assurance practices been established?

Operating model

  • Have performance measurement practices been established (including KPI definition)?
  • Have global and regional interfaces been defined?
  • Has team structure and funding been agreed?

Risk management fundamentals

Risk

The focus of many of my projects is on risk. Through multiple assessments in various companies and industries, I’ve observed a lack of formalised risk management processes. Plans may exist, but they are not linked to specific risks, and risk reduction levels are not measured or reported on appropriately.

The security function can be effective in responding to incidents, but strategic, risk-driven planning is often missing. The root cause of this state of affairs can often be generalised as low maturity of the security function. If that’s the case, the team spends most of its time fighting fires and has little capacity to address the challenges that cause those fires in the first place.

To address this, I assess the current state of the security function, define the target maturity level, and then develop a high-level roadmap to achieve that desired state.
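
As a sketch of what that gap analysis looks like in practice, assuming a 1 (initial) to 5 (optimised) maturity scale and illustrative scores for the review areas outlined earlier:

```python
# Maturity gap assessment. Domains match the review areas above;
# the scale and scores are illustrative assumptions.
domains = {
    "Business alignment": {"current": 2, "target": 4},
    "Governance":         {"current": 3, "target": 4},
    "Operating model":    {"current": 1, "target": 3},
}

# Order the roadmap by gap size, largest first
for name, s in sorted(domains.items(),
                      key=lambda d: d[1]["target"] - d[1]["current"],
                      reverse=True):
    print(f"{name}: current {s['current']}, target {s['target']}, "
          f"gap {s['target'] - s['current']}")
```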

If the company is geographically distributed, noticeable differences usually exist between business units in terms of their overall policy frameworks. The suggestion here is to define a baseline level of security controls across the entire enterprise. The first step in defining these is to understand what we are trying to protect: the assets.

Modern corporations own a wide range of assets that enable them to operate and grow. These broadly include physical and non-physical assets, people and reputation. Engagement from the appropriate parts of the business to identify these assets is important, as attacks on them might negatively affect operations.

By understanding the assets, we are able to better identify risks, enable effective detection and response, and better prioritise controls and remediation efforts.

It also helps to conduct a bottom-up review of assets to understand exactly what we have, focusing on the most critical ones and creating and maintaining asset inventories.

Understanding the asset base and setting standards and guidance for protecting it will focus your efforts and help you prevent, and better respond to, security issues.
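
A minimal sketch of such an inventory, with hypothetical assets, owners and criticality ratings:

```python
# A simple asset inventory with criticality ratings (all entries illustrative).
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    category: str     # physical, information, people, reputation
    owner: str        # accountable business owner
    criticality: int  # 1 (low) to 5 (critical)

inventory = [
    Asset("Customer database", "information", "Head of CRM", 5),
    Asset("Payment gateway", "physical", "Head of IT", 5),
    Asset("Brand reputation", "reputation", "Head of Marketing", 4),
]

# Focus protection efforts on the most critical assets first
for asset in sorted(inventory, key=lambda a: a.criticality, reverse=True):
    print(f"{asset.criticality}: {asset.name} (owner: {asset.owner})")
```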

Assets are tightly linked to threat actors, because it’s not enough to know what we need to protect: we also need to know what we are protecting our assets against. Threat actors vary in their motivation and ability and, depending on the company, include nation states, organised crime, insiders, hacktivists and competitors.

A combination of assets and threats helps us to define risks.

Identifying risks and placing them on a heat map helps determine the inherent, residual and target risk levels. Inherent risk is the level of risk assuming all controls and remediating measures are absent or failing. Think of it as if the security function didn’t exist. It’s not a happy place: the majority of risks have high impact and likelihood, sitting in the top right-hand corner of the chart.

Luckily, the security function does exist, and even without a formalised risk management process it usually does a good job of addressing some of these risks.

The current (residual) level of risk takes into account all the controls and remediating measures in place. The initial impact and likelihood are usually reduced, sometimes to an acceptable level agreed by the business. The idea here is that although further reduction of impact and likelihood may be possible, it might not be cost-effective; in other words, the money might be better spent addressing other risks.

Target risk is the future-state risk level once additional controls and remediation measures have been implemented by the security team.
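
Putting the three levels together, here is a toy sketch that scores risks on a 5x5 impact-by-likelihood heat map (all risks and figures are illustrative):

```python
# Inherent -> residual -> target risk on a 5x5 heat map (impact x likelihood).
risks = {
    "Ransomware on core systems": {"inherent": (5, 5), "residual": (4, 3), "target": (3, 2)},
    "Insider data theft":         {"inherent": (4, 4), "residual": (3, 3), "target": (3, 2)},
}

def score(impact: int, likelihood: int) -> int:
    """Heat map position: higher score = closer to the top right-hand corner."""
    return impact * likelihood

for name, levels in risks.items():
    inherent, residual, target = (score(*levels[k])
                                  for k in ("inherent", "residual", "target"))
    print(f"{name}: inherent {inherent} -> residual {residual} -> target {target}")
```

Tracking these three numbers per risk over time is exactly what a formalised risk management process makes possible.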

The main takeaway here is that a formalised risk management approach (with accompanying processes and policies) is needed to ensure all risks are identified and tracked over time, and that appropriate resources and effort are spent on the top-priority risks.