How to uplift your data analytics capability

Source: adapted from Davenport and Harris (2017)

Data strategy begins with an understanding of your business goals. What capabilities do you need to develop to realise your strategic objectives? In this blog post I continue to build on earlier data analytics concepts and outline how to improve the analytics capability in your organisation.


Generative AI acceptable use policy

Encouraging the use of Generative AI technology at work can enhance productivity and streamline tasks. Generative AI can provide valuable support in various areas, from customer service and problem-solving to research and data analysis.

By leveraging the power of Generative AI, we can improve our workflows, reduce time spent on manual tasks, and ultimately achieve better results. However, we should also recognise the importance of using Generative AI responsibly and in accordance with company policies and guidelines. By doing so, we can maximise the benefits of Generative AI while protecting sensitive information and intellectual property. 


Collaborating with the Optus Macquarie University Cyber Security Hub

I recently had a chance to collaborate with researchers at the Optus Macquarie University Cyber Security Hub. Their interdisciplinary approach brings together industry practitioners and academics from a variety of backgrounds to tackle the most pressing cyber security challenges our society and businesses face today.

Both academia and industry practitioners can and should learn from each other. Industry can guide problem definition and provide access to data, while also learning to apply the scientific method and test its hypotheses. We often assume the solutions we implement lead to risk reduction, but how this is measured is not always clear. Designing experiments and using research techniques can help bring the necessary rigour when delivering and assessing outcomes.

I had an opportunity to work on some exciting projects: helping to build an AI-powered cyber resilience simulator, developing phone scam detection capability and investigating the role of human psychology in improving authentication protocols. I deepened my understanding of modern machine learning techniques, such as topic extraction and emotion analysis, and how they can be applied to solve real-world problems. I also had the privilege of contributing to a research publication presenting our findings, so watch this space for some updates next year.
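
To make the idea of topic extraction a little more concrete, here is a minimal sketch of how one might surface recurring themes in scam-call transcripts using scikit-learn's TF-IDF vectoriser and non-negative matrix factorisation. The transcripts below are entirely made up; this illustrates the general technique only, not the Hub's actual data or pipeline.

```python
# A minimal sketch of topic extraction over (hypothetical) scam-call transcripts.
# Illustrates the general technique only -- not the Hub's pipeline or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Placeholder transcripts; real work would use a large corpus of call recordings.
transcripts = [
    "your account has been compromised, please confirm your card number",
    "you owe unpaid tax and will be arrested unless you pay with gift cards",
    "congratulations, you have won a prize, just pay a small release fee",
    "this is your bank's security team, we need your one-time passcode",
]

vectoriser = TfidfVectorizer(stop_words="english")
tfidf = vectoriser.fit_transform(transcripts)

# Factorise the document-term matrix into two latent "topics".
nmf = NMF(n_components=2, random_state=0)
doc_topics = nmf.fit_transform(tfidf)

# Print the top terms per topic -- these often map to recognisable scam themes.
terms = vectoriser.get_feature_names_out()
for topic_idx, weights in enumerate(nmf.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")
```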

Can AI help improve security culture?

I’ve been exploring the current application of machine learning techniques to cybersecurity. Although there are some strong use cases in areas such as log analysis and malware detection, I couldn’t find the same quantity of research on applying AI to the human side of cybersecurity.

Can AI be used to support the decision-making process when developing cyber threat prevention mechanisms in organisations and influence user behaviour towards safer choices? Can modelling adversarial scenarios help us better understand and protect against social engineering attacks?

To answer these questions, a multidisciplinary perspective should be adopted, with technologists and psychologists working together with industry and government partners.

When designing such mechanisms, consideration should be given to the fact that many interventions can be perceived by users as negatively impacting their productivity, as they demand additional effort on security and privacy activities not necessarily related to users’ primary tasks [1, 2].

A number of researchers use principles from behavioural economics to design cyber security “nudges” (e.g. [3], [4]) or visualisations [5, 6]. This approach helps users make better decisions while minimising perceived effort by moving them away from their default position. The method is being applied in the privacy area, for example to reduce Facebook over-sharing [7] and to improve smartphone privacy settings [8]. Additionally, there is growing use of such nudges as interventions, particularly during the installation of mobile applications [9].

The proposed socio-technical approach to reducing cyber threats aims to support the development of responsible and trustworthy people-centred AI solutions that can use data whilst maintaining personal privacy.

A combination of supervised and unsupervised learning techniques is already being employed to predict new threats and malware based on existing patterns. Machine learning techniques can be used to monitor system and human activity to detect potential malicious deviations.
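
As a toy illustration of the supervised side of that combination, the sketch below classifies URLs as phishing-like or benign from a handful of hand-crafted lexical features. The features, training examples and model choice are my own simplified assumptions, not a production detection system.

```python
# A toy sketch of supervised detection: classify URLs as phishing-like or benign
# from a few lexical features. Features, data and model are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def url_features(url: str) -> list[float]:
    """Very simple lexical features often used as a starting point."""
    return [
        len(url),                        # long URLs are weakly associated with phishing
        url.count("-"),                  # many hyphens in the host
        url.count("."),                  # many subdomains
        float("@" in url),               # '@' can hide the real destination
        float(url.startswith("https")),  # whether HTTPS is used (absence is a weak signal)
    ]

# Tiny made-up training set: 1 = phishing, 0 = benign.
urls = [
    "https://example.com/login",
    "http://secure-update.example-bank.com.verify-account.xyz/@signin",
    "https://university.edu/courses",
    "http://paypa1-security-check.com/confirm.php",
]
labels = [0, 1, 0, 1]

X = np.array([url_features(u) for u in urls])
model = LogisticRegression().fit(X, labels)

test = "http://account-verify.example-payments.com/@login"
prob = model.predict_proba(np.array([url_features(test)]))[0, 1]
print(f"Estimated phishing probability: {prob:.2f}")
```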

Building adversarial models, designing empirical studies and running experiments (e.g. using Amazon’s Mechanical Turk) can help measure the effectiveness of attackers’ techniques and inform better defence mechanisms. I believe there is a need to explore opportunities to use machine learning to aid the human decision-making process, whereby people are supported by, and work together with, AI to better defend against cyber attacks.
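
To illustrate what “measuring effectiveness” might look like in practice, here is a small sketch of analysing a hypothetical two-condition experiment: comparing click-through rates on two simulated phishing lures with a standard chi-squared test. The counts are invented, and a real study would also need power analysis and ethics approval.

```python
# A sketch of analysing a simple two-condition experiment: did lure B get a
# significantly different click-through rate than lure A? Counts are invented.
from scipy.stats import chi2_contingency

clicked_a, shown_a = 34, 200   # condition A: e.g. a generic phishing lure
clicked_b, shown_b = 61, 200   # condition B: e.g. a personalised lure

table = [
    [clicked_a, shown_a - clicked_a],
    [clicked_b, shown_b - clicked_b],
]
chi2, p_value, dof, expected = chi2_contingency(table)

print(f"Click rate A: {clicked_a / shown_a:.1%}, B: {clicked_b / shown_b:.1%}")
print(f"Chi-squared = {chi2:.2f}, p = {p_value:.4f}")
```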

We should draw upon participatory co-design and follow a people-centred approach so that relevant stakeholders are engaged in the process. This can help develop personalised and contextualised solutions, crucial to addressing ethical, legal and social challenges that cannot be solved with AI automation alone.


I’m joining PigeonLine’s Advisory Board

I’ve been asked to join PigeonLine – Research-AI as a Board Advisor for cyber security. I’m excited to be able to contribute to the success of this promising startup.

PigeonLine is a fast-growing AI development and consulting company that builds tools to solve common enterprise problems. Their customers include the UAE Prime Minister’s Office, the Bank of Canada and the London School of Economics, among others.

Building accessible AI tools to empower people should go hand-in-hand with protecting their privacy and preserving the security of their information.

I like the company’s user-centric approach and the fact that data privacy is one of their core values. I’m thrilled to be part of their journey to push the boundaries of human-machine interaction to solve common decision-making problems for enterprises and governments.

Artificial intelligence and cyber security: attacking and defending


Cyber security is a manpower-constrained market, so the opportunities for AI automation are vast. Frequently, AI is used to make certain defensive aspects of cyber security more wide-reaching and effective: combating spam and detecting malware are prime examples. On the opposite side, there are many incentives to use AI when attempting to attack vulnerable systems belonging to others. These incentives include the speed of attack, low costs and the difficulty of attracting skilled staff in an already constrained labour market.

Current research in the public domain is limited to white-hat hackers employing machine learning to identify vulnerabilities and suggest fixes. At the speed AI is developing, however, it won’t be long before we see attackers using these capabilities at mass scale, if they don’t already.

How do we know for sure? The fact is, it is quite hard to attribute a botnet or a phishing campaign to AI rather than a human. Industry practitioners, however, believe that we will see an AI-powered cyber-attack within a year: 62% of surveyed Black Hat conference participants seem convinced of the possibility.

Many believe that AI is already being deployed for malicious purposes by highly motivated and sophisticated attackers. This is not at all surprising, given that AI systems make an adversary’s job much easier. Why? Resource efficiency aside, they introduce psychological distance between an attacker and their victim. Indeed, many offensive techniques have traditionally involved engaging with others and being present, which in turn limited the attacker’s anonymity. AI increases both the anonymity and the distance. Autonomous weapons are a case in point: attackers are no longer required to pull the trigger and observe the impact of their actions.

It doesn’t have to be about human life either. Let’s explore some of the less severe applications of AI for malicious purposes: cybercrime.

Social engineering remains one of the most common attack vectors. How often is malware introduced into systems simply because someone clicks on an innocent-looking link?

The fact is, quite a bit of effort is required to entice a victim to click on that link. Historically, crafting a believable phishing email has been labour-intensive: days and sometimes weeks of research, and the right opportunity, were required to carry out such an attack successfully. Things are changing with the advent of AI in cyber.

Analysing large data sets helps attackers prioritise their victims based on online behaviour and estimated wealth. Predictive models can go further, determining a victim’s willingness to pay a ransom based on historical data and even adjusting the size of the pay-out to maximise the chances of payment, and therefore the revenue, for cyber criminals.

Imagine all the data available in the public domain, as well as secrets previously leaked through various data breaches, combined into the ultimate victim profile in a matter of seconds with no human effort.

Once the victim is selected, AI can be used to create and tailor emails and sites that are most likely to be clicked on, based on the crunched data. Trust is built by engaging people in longer dialogues over extended periods on social media, which requires no human effort: chatbots are now capable of maintaining such interactions and even impersonating real contacts by mimicking their writing style.

Machine learning used for victim identification and reconnaissance greatly reduces the attacker’s resource investment. Indeed, there is no longer even a need to speak the same language as the victim! This inevitably leads to an increase in the scale and frequency of highly targeted spear-phishing attacks.

The sophistication of such attacks can also increase. Exceeding human capabilities of deception, AI can mimic voices thanks to rapid developments in speech synthesis. These systems can create realistic voice recordings from existing data and elevate social engineering to the next level through impersonation. This, combined with the other techniques discussed above, paints a rather grim picture.

So what do we do?

Let’s outline some potential defence strategies that we should be thinking about already.

Firstly and rather obviously, increasing the use of AI for cyber defence is not such a bad option. A combination of supervised and unsupervised learning approaches is already being employed to predict new threats and malware based on existing patterns.

Behaviour analytics is another avenue to explore. Machine learning techniques can be used to monitor system and human activity to detect potential malicious deviations.
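
As a rough sketch of what such behaviour analytics could look like, the example below trains an unsupervised model (an isolation forest) on synthetic login features and flags an event that deviates from the usual pattern. The features and data are illustrative assumptions, not a production user-behaviour analytics system.

```python
# A minimal sketch of behaviour analytics: flag logins that deviate from a user's
# usual pattern with an unsupervised model (IsolationForest). Data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per login event: [hour of day, bytes downloaded (MB), failed attempts]
normal_logins = np.column_stack([
    rng.normal(10, 2, size=500),    # mostly business hours
    rng.normal(50, 15, size=500),   # typical download volume
    rng.poisson(0.2, size=500),     # occasional failed attempt
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

# A suspicious event: 3am login, very large download, repeated failures.
suspicious = np.array([[3.0, 900.0, 6.0]])
print(model.predict(suspicious))            # -1 means flagged as anomalous
print(model.decision_function(suspicious))  # more negative = more anomalous
```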

Importantly though, when using AI for defence, we should assume that attackers anticipate it. We must also keep track of AI development and its application in cyber to be able to credibly predict malicious applications.

To achieve this, collaboration between industry practitioners, academic researchers and policymakers is essential. Legislators must account for the potential use of AI and refresh some of the legal definitions of ‘hacking’. Researchers should carefully consider the potential malicious applications of their work. And patching and vulnerability management programs should be given due attention in the corporate world.

Finally, awareness should be raised among users about preventing social engineering attacks, alongside discouraging password re-use and advocating for two-factor authentication where possible.

References

Brundage, M. et al. 2018. “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.”

Cummings, M. L. 2004. “Creating Moral Buffers in Weapon Control Interface Design.” IEEE Technology and Society Magazine (Fall 2004), 29–30.

Seymour, J. and Tully, P. 2016. “Weaponizing Data Science for Social Engineering: Automated E2E Spear Phishing on Twitter,” Black Hat conference.

Allen, G. and Chan, T. 2017. “Artificial Intelligence and National Security,” Harvard Kennedy School Belfer Center for Science and International Affairs.

Yampolskiy, R. 2017. “AI Is the Future of Cybersecurity, for Better and for Worse,” Harvard Business Review, May 8, 2017.

Image by fdecomite.