I’ve been exploring current applications of machine learning techniques to cybersecurity. Although there are some strong use cases in areas such as log analysis and malware detection, I couldn’t find the same quantity of research on applying AI to the human side of cybersecurity.
Can AI be used to support the decision-making process when developing cyber threat prevention mechanisms in organisations and influence user behaviour towards safer choices? Can modelling adversarial scenarios help us better understand and protect against social engineering attacks?
To answer these questions, we should adopt a multidisciplinary perspective, with technologists and psychologists working together with industry and government partners.
While designing such mechanisms, we should take into account that users often perceive interventions as harming their productivity, because they demand additional effort on security and privacy activities not necessarily related to their primary tasks [1, 2].
A number of researchers use principles from behavioural economics to design cyber security “nudges” (e.g. [3], [4]) or visualisations [5, 6]. These interventions help users make better decisions, and minimise perceived effort, by steering them away from their default position. The approach is being applied in the privacy area, for example to reduce oversharing on Facebook [7] and to improve smartphone privacy settings [8], and nudges are increasingly used as interventions at the point of mobile application installation [5].
The proposed socio-technical approach to reducing cyber threats aims to support the development of responsible, trustworthy, people-centred AI solutions that can use data whilst maintaining personal privacy.
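As one illustration of what privacy-preserving data use could look like in practice (a hypothetical sketch, not a technique prescribed by the work above), an organisation could release aggregate security statistics with differential privacy; the scenario and epsilon value below are invented:

```python
# Hypothetical sketch: a differentially private count via the Laplace mechanism,
# letting an organisation publish aggregate behavioural statistics without
# exposing any individual employee's actions. Epsilon is an arbitrary choice.
import numpy as np

def dp_count(true_count: int, epsilon: float = 0.5) -> float:
    # A counting query has sensitivity 1, so the Laplace noise scale is 1/epsilon.
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# e.g. report how many employees clicked a simulated phishing link
print(dp_count(true_count=42))
```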
A combination of supervised and unsupervised learning techniques is already employed to predict new threats and malware from existing patterns, and machine learning can also be used to monitor system and human activity to detect potentially malicious deviations.
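As a minimal sketch of this kind of monitoring (assuming scikit-learn; the activity features and numbers are illustrative, not taken from any cited study), an unsupervised model can be trained on normal behaviour and used to flag deviations:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" sessions: office-hours logins, modest uploads, few failures.
normal = np.column_stack([
    rng.normal(10, 2, 500),   # login hour of day
    rng.normal(50, 15, 500),  # MB uploaded per session
    rng.poisson(0.2, 500),    # failed login attempts
])

# A few suspicious sessions: a night-time bulk upload, and brute-force attempts.
suspicious = np.array([
    [3.0, 900.0, 0.0],
    [2.5, 40.0, 12.0],
])

# Train only on normal behaviour, then score new sessions for deviation.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

print(model.decision_function(suspicious))  # negative scores -> anomalous
print(model.predict(suspicious))            # -1 flags a potential malicious deviation
```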
Building adversarial models, designing empirical studies and running experiments (e.g. using Amazon’s Mechanical Turk) can help better measure the effectiveness of attackers’ techniques and develop better defence mechanisms. I believe there is a need to explore opportunities to utilise machine learning to aid the human decision-making process whereby people are supported by, and work together with, AI to better defend against cyber attacks.
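For example, the results of such an experiment could be analysed with a simple two-condition comparison; the framings and click counts below are entirely invented for illustration, assuming SciPy is available:

```python
from scipy.stats import chi2_contingency

# Invented click counts for two phishing framings, 500 participants each.
clicked  = [132, 97]   # "authority" framing vs "urgency" framing
resisted = [368, 403]

chi2, p, dof, expected = chi2_contingency([clicked, resisted])

rates = [c / (c + r) for c, r in zip(clicked, resisted)]
print(f"click-through: authority {rates[0]:.1%}, urgency {rates[1]:.1%}")
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")  # small p -> framings differ in effectiveness
```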
We should draw upon participatory co-design and follow a people-centred approach so that relevant stakeholders are engaged in the process. This can help develop personalised and contextualised solutions, crucial to addressing ethical, legal and social challenges that cannot be solved with AI automation alone.
References
[1] Herley, C. 2009. So long, and no thanks for the externalities: the rational rejection of security advice by users. In Proceedings of the 2009 New Security Paradigms Workshop (NSPW ’09), 133–144.
[2] Sasse, M.A., Brostoff, S. and Weirich, D. 2001. Transforming the “weakest link” — a human/computer interaction approach to usable and effective security. BT Technology Journal. 19, 3 (2001), 122–131.
[3] Thaler, R.H. and Sunstein, C.R. 2009. Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.
[4] Nicholson, J., Coventry, L. and Briggs, P. 2017. Can We Fight Social Engineering Attacks By Social Means? Assessing Social Salience as a Means to Improve Phish Detection. In Proceedings of the 13th Symposium on Usable Privacy and Security (SOUPS 2017), USENIX.
[5] Chen, J., Gates, C.S., Li, N. and Proctor, R.W. 2015. Influence of Risk/Safety Information Framing on Android App-Installation Decisions. Journal of Cognitive Engineering and Decision Making. 9, 2 (Jun. 2015), 149–168.
[6] Choe, E.K., Jung, J., Lee, B. and Fisher, K. 2013. Nudging People Away from Privacy-Invasive Mobile Apps through Visual Framing. Springer Berlin Heidelberg, 74–91.
[7] Wang, Y., Leon, P.G., Acquisti, A., Cranor, L.F., Forget, A. and Sadeh, N. 2014. A field trial of privacy nudges for Facebook. In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems (CHI ’14) (New York, New York, USA, 2014), 2367–2376.
[8] Almuhimedi, H., Schaub, F., Sadeh, N., Adjerid, I., Acquisti, A., Gluck, J., Cranor, L.F. and Agarwal, Y. 2015. Your Location has been Shared 5,398 Times! In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15) (New York, New York, USA, 2015), 787–796.