Artificial intelligence and cyber security: attacking and defending

Cyber security is a manpower-constrained market, so the opportunities for AI automation are vast. AI is frequently used to make defensive aspects of cyber security more wide-reaching and effective: combating spam and detecting malware are prime examples. On the offensive side, there are many incentives to use AI when attacking vulnerable systems belonging to others: the speed of attack, low costs and the difficulty of attracting skilled staff in an already constrained market.

Current research in the public domain is limited to white hat hackers employing machine learning to identify vulnerabilities and suggest fixes. At the speed AI is developing, however, it won’t be long before we see attackers using these capabilities at mass scale, if they don’t already.

How do we know for sure? The fact is, it is quite hard to attribute a botnet or a phishing campaign to AI rather than a human. Industry practitioners, however, believe that we will see an AI-powered cyber attack within a year: 62% of surveyed Black Hat conference participants are convinced of the possibility.

Many believe that AI is already being deployed for malicious purposes by highly motivated and sophisticated attackers. This is not at all surprising, given that AI systems make an adversary’s job much easier. Why? Resource efficiency aside, they introduce psychological distance between an attacker and their victim. Many offensive techniques have traditionally involved engaging with others and being present, which limited the attacker’s anonymity. AI increases both the anonymity and the distance. Autonomous weapons are a case in point: attackers are no longer required to pull the trigger and observe the impact of their actions.

It doesn’t have to be about human life either. Let’s explore a less severe application of AI for malicious purposes: cybercrime.

Social engineering remains one of the most common attack vectors. How often is malware introduced into systems because someone simply clicks on an innocent-looking link?

The fact is, enticing the victim to click on that link takes quite a bit of effort. Historically, crafting a believable phishing email has been labour-intensive: days, and sometimes weeks, of research and the right opportunity were required to carry out such an attack successfully. Things are changing with the advent of AI in cyber security.

Analysing large data sets helps attackers prioritise their victims based on online behaviour and estimated wealth. Predictive models can go further, determining a victim’s willingness to pay a ransom based on historical data and even adjusting the size of the demand to maximise the chances of payment, and therefore the revenue, for cyber criminals.

Imagine that all the data available in the public domain, as well as secrets previously leaked through various data breaches, can now be combined into the ultimate victim profile in a matter of seconds, with no human effort.

Once the victim is selected, AI can be used to create and tailor the emails and sites they are most likely to click on, based on the crunched data. Trust is built by engaging people in longer dialogues over extended periods on social media, and this no longer requires human effort: chatbots are now capable of maintaining such interactions and even impersonating real contacts by mimicking their writing style.

Machine learning used for victim identification and reconnaissance greatly reduces the attacker’s resource investment. Indeed, there is no longer even a need to speak the victim’s language! This inevitably leads to an increase in the scale and frequency of highly targeted spear phishing attacks.

The sophistication of such attacks can also increase. Exceeding human capabilities of deception, AI can mimic a person’s voice thanks to rapid developments in speech synthesis. These systems can create realistic voice recordings from existing samples, elevating social engineering to the next level through impersonation. This, combined with the other techniques discussed above, paints a rather grim picture.

So what do we do?

Let’s outline some potential defence strategies that we should be thinking about already.

Firstly and rather obviously, increasing the use of AI for cyber defence is not such a bad option. A combination of supervised and unsupervised learning approaches is already being employed to predict new threats and malware based on existing patterns.
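
As a minimal sketch of how these two approaches complement each other (not any particular vendor’s implementation), the example below trains a supervised classifier on labelled samples alongside an unsupervised anomaly detector; the three features and all data are invented placeholders:

```python
# Minimal sketch: combining supervised and unsupervised learning to triage
# samples. All features and data are invented placeholders; a real pipeline
# would extract them from binaries or network telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(42)

# Hypothetical feature vectors (e.g. file entropy, imported API count, size).
X_train = rng.normal(size=(500, 3))
y_train = rng.integers(0, 2, size=500)  # 0 = benign, 1 = malicious (labelled history)

# Supervised model: recognises patterns seen in previously labelled malware.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Unsupervised model: flags samples unlike anything in the baseline.
iso = IsolationForest(contamination=0.05, random_state=0).fit(X_train)

def assess(sample: np.ndarray) -> str:
    """Flag a sample if it matches known-bad patterns OR looks novel."""
    known_bad = clf.predict(sample.reshape(1, -1))[0] == 1
    novel = iso.predict(sample.reshape(1, -1))[0] == -1  # -1 marks an outlier
    return "flag for review" if known_bad or novel else "allow"

print(assess(rng.normal(size=3)))
```

The design point is that the supervised model catches variants of malware we have already labelled, while the anomaly detector can surface genuinely novel threats that no label yet describes.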

Behaviour analytics is another avenue to explore. Machine learning techniques can be used to monitor system and human activity to detect potential malicious deviations.
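
To make the idea concrete, here is a deliberately simple sketch, with invented data and an arbitrary threshold, that baselines a user’s typical login hours and flags sharp deviations; production systems model far richer behavioural signals:

```python
# Minimal sketch of behaviour analytics: baseline a user's typical login hours
# and flag logins that deviate sharply. Data and threshold are illustrative.
from statistics import mean, stdev

historical_login_hours = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]  # hypothetical history

mu, sigma = mean(historical_login_hours), stdev(historical_login_hours)

def is_anomalous(login_hour: int, z_threshold: float = 3.0) -> bool:
    """Flag a login whose hour is more than z_threshold std devs from baseline."""
    return abs(login_hour - mu) / sigma > z_threshold

print(is_anomalous(9))   # False: consistent with the usual pattern
print(is_anomalous(3))   # True: a 3 a.m. login is far outside the baseline
```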

Importantly, though, when using AI for defence, we should assume that attackers anticipate it. We must also keep track of AI developments and their applications in cyber security so that we can credibly predict malicious uses.

In order to achieve this, a collaboration between industry practitioners, academic researchers and policymakers is essential. Legislators must account for potential use of AI and refresh some of the definitions of ‘hacking’. Researchers should carefully consider malicious application of their work. Patching and vulnerability management programs should be given due attention in the corporate world.

Finally, awareness should be raised among users: teaching them to recognise social engineering attacks, discouraging password re-use and advocating two-factor authentication wherever possible.
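
For illustration, the sketch below shows the time-based one-time password (TOTP) mechanism that underpins most authenticator apps, using the open-source pyotp library; the account name and issuer are hypothetical:

```python
# Minimal sketch of time-based one-time passwords (TOTP), the mechanism behind
# most authenticator apps. Uses the pyotp library (pip install pyotp).
import pyotp

# Generated once at enrolment and shared with the user's authenticator app,
# typically via a QR code built from the provisioning URI below.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# At login, the server verifies the 6-digit code the user types in.
code = totp.now()         # in reality, supplied by the user's device
print(totp.verify(code))  # True while the code is within its time window
```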

References

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, 2018.

Cummings, M. L. 2004. “Creating Moral Buffers in Weapon Control Interface Design,” IEEE Technology and Society Magazine (Fall 2004), 29–30.

Seymour, J. and Tully, P. 2016. “Weaponizing Data Science for Social Engineering: Automated E2E Spear Phishing on Twitter,” Black Hat conference.

Allen, G. and Chan, T. 2017. “Artificial Intelligence and National Security,” Harvard Kennedy School Belfer Center for Science and International Affairs.

Yampolskiy, R. 2017. “AI Is the Future of Cybersecurity, for Better and for Worse,” Harvard Business Review, May 8, 2017.


My book has been translated into Persian


My book has been translated into Persian by Dr. Mohammad Reza Taghva from Allame Tabatabaee University and Mr. Saeed Kazem Pourian from Shahed University. Please get in touch if you would like to learn more.


Augusta University’s Cyber Institute adopts my book


Just received some great news from my publisher: my book has been accepted for use on a course at Augusta University. Here’s some feedback from the course director:

Augusta University’s Cyber Institute adopted the book “The Psychology of Information Security” as part of our Masters in Information Security Management program because we feel that the human factor plays an important role in securing and defending an organisation. Understanding behavioural aspects of the human element is important for many information security managerial functions, such as developing security policies and awareness training. Therefore, we want our students to not only understand technical and managerial aspects of security, but psychological aspects as well.


Presenting at SANS European Security Awareness Summit

It’s been a pleasure delivering a talk on the psychology of information security culture at the SANS European Security Awareness Summit 2016. It was my first time attending and presenting at this event, and I certainly hope it won’t be the last.

The summit has a great community feel to it, and Lance Spitzner did a great job organising it and bringing people together. It was an opportunity for me not only to share my knowledge, but also to learn from others during a number of interactive sessions and workshops. The participants were keen to share tips and tricks for improving security awareness in their companies, as well as war stories of what worked and what didn’t.

It was humbling to find out that my book was quite popular in this community and I even managed to sign a couple of copies.

All speakers’ presentation slides (including from past and future events) can be accessed here.


User Experience Design


Here’s a collection of courses designed to further your knowledge in user experience design. Happy learning!



The Psychology of Information Security book reviews


I wrote about my book in a previous post. Here I would like to share what others have to say about it.

So often information security is viewed as a technical discipline – a world of firewalls, anti-virus software, access controls and encryption. An opaque and enigmatic discipline which defies understanding, with a priesthood who often protect their profession with complex concepts, language and most of all secrecy.

Leron takes a practical, pragmatic and no-holds-barred approach to demystifying the topic. He reminds us that ultimately security depends on people – and that we all act in what we see as our rational self-interest – sometimes ill-informed, ill-judged, even downright perverse.

No approach to security can ever succeed without considering people – and as a profession we need to look beyond our computers to understand the business, the culture of the organisation – and most of all, how we can create a security environment which helps people feel free to actually do their job.
David Ferbrache OBE, FBCS
Technical Director, Cyber Security
KPMG UK

This is an easy-to-read, accessible and simple introduction to information security. The style is straightforward and calls on a range of anecdotes to help the reader through what is often a complicated and hard-to-penetrate subject. Leron approaches the subject from a psychological angle and will appeal to those from both non-technical and technical backgrounds.
Dr David King
Visiting Fellow of Kellogg College
University of Oxford



Productive Security


The majority of employees within an organisation are hired to execute specific jobs, such as marketing, managing projects, manufacturing goods or overseeing financial investment. Their main – sometimes only – priority will be to efficiently complete their core business activity, so information security will usually only be a secondary consideration. Consequently, employees will be reluctant to invest more than a limited amount of effort and time on such a secondary task that they rarely understand, and from which they perceive no benefit.

Research[1] suggests that when security mechanisms cause additional work, employees will favour non-compliant behaviour in order to complete their primary tasks quickly.

There is a lack of awareness among security managers[2] about the burden that security mechanisms impose on employees, because it is assumed that the users can easily accommodate the effort that security compliance requires. In reality, employees tend to experience a negative impact on their performance because they feel that these cumbersome security mechanisms drain both their time and their effort. The risk mitigation achieved through compliance, from their perspective, is not worth the disruption to their productivity. In extreme cases, the more urgent the delivery of the primary task is, the more appealing and justifiable non-compliance becomes, regardless of employees’ awareness of the risks.

When security mechanisms hinder or significantly slow down employees’ performance, they will cut corners, and reorganise and adjust their primary tasks in order to avoid them. This seems to be particularly prevalent in file sharing, especially when users are restricted by permissions, by data storage or transfer allowance, and by time-consuming protocols. People will usually work around the security mechanisms and resort to the readily available commercial alternatives, which may be insecure. From the employee’s perspective, the consequences of not completing a primary task are severe, as opposed to the ‘potential’ consequences of the risk associated with breaching security policies.

If organisations continue to set equally high goals for both security and business productivity, they are essentially leaving it up to their employees to resolve potential conflicts between them. Employees will focus most of their time and effort on carrying out their primary tasks efficiently and in a timely manner, which means that their target will be to maximise their own benefit, as opposed to the company’s. It is therefore vital for organisations to find a balance between security and productivity, because when they fail to do so, they lead – or even force – their employees to resort to non-compliant behaviour. When companies are unable to recognise and correct security mechanisms and policies that affect performance, and when they exclusively reward their employees for productivity rather than security, they are effectively enabling and reinforcing non-compliant decision-making on the part of their employees.

Employees will only comply with security policies if they are motivated to do so: they must have the perception that compliant behaviour results in personal gain. People must be given the tools and the means to understand the potential risks associated with their roles, as well as the benefits of compliant behaviour, both to themselves and to the organisation. Once they are equipped with this information and awareness, they must be trusted to make their own decisions that can serve to mitigate risks at the organisational level.

References:

[1] Iacovos Kirlappos, Adam Beautement and M. Angela Sasse, “‘Comply or Die’ Is Dead: Long Live Security-Aware Principal Agents”, in Financial Cryptography and Data Security, Springer, 2013, 70–82.

[2] Leron Zinatullin, “The Psychology of Information Security”, IT Governance Publishing, 2016.

Photo by Nick Carter https://www.flickr.com/photos/8323834@N07/500995147/