I’ve completed the train‑the‑trainer workshop on AI skills organised by the CyberPeace Institute, equipping me with the knowledge to help not‑for‑profits harness the power of AI for good.
I look forward to supporting not‑for‑profits in building their AI capabilities, from foundational training on responsible use of AI to hands‑on guidance on transforming data into actionable insights.
Navigating the intersection between AI and cybersecurity can be tricky. If you’re looking to elevate your AI skills, or if you’re curious about how AI can amplify your mission, please reach out!
Cyber security is a relentless race to keep pace with evolving threats, where staying ahead isn’t always possible. Advancing cyber maturity demands more than just reactive measures—it requires proactive strategies, cultural alignment, and a deep understanding of emerging risks.
I had an opportunity to speak with Corinium’s Maddie Abe about staying informed about threats, defining cyber maturity, and aligning security metrics with business goals, ahead of my appearance as a speaker at CISO Sydney next month.
Data strategy begins with an understanding of your business goals. What capabilities do you need to develop to realise your strategic objectives? In this blog, I continue building on data analytics concepts to outline how to improve the analytics capability of your organisation.
I completed the Data Analytics and Decision Making course as part of my Executive MBA. In this blog, I summarise some of the insights and learnings that you can apply in your work too.
Encouraging the use of Generative AI technology at work can enhance productivity and streamline tasks. Generative AI can provide valuable support in various areas, from customer service and problem-solving to research and data analysis.
By leveraging the power of Generative AI, we can improve our workflows, reduce time spent on manual tasks, and ultimately achieve better results. However, we should also recognise the importance of using Generative AI responsibly and in accordance with company policies and guidelines. By doing so, we can maximise the benefits of Generative AI while protecting sensitive information and intellectual property.
I recently had a chance to collaborate with researchers at The Optus Macquarie University Cyber Security Hub. Their interdisciplinary approach brings together industry practitioners and academics from a variety of backgrounds to tackle the most pressing cyber security challenges our society and businesses face today.
Both academia and industry practitioners can and should learn from each other. Industry can guide problem definition and provide access to data, while also learning to apply the scientific method and test hypotheses. We often assume the solutions we implement lead to risk reduction, but how this is measured is not always clear. Designing experiments and using research techniques can help bring the necessary rigour when delivering and assessing outcomes.
I had an opportunity to work on some exciting projects: helping build an AI-powered cyber resilience simulator, developing phone scam detection capability, and investigating the role of human psychology in improving authentication protocols. I deepened my understanding of modern machine learning techniques like topic extraction and emotion analysis and how they can be applied to solve real-world problems. I also had the privilege of contributing to a research publication presenting our findings, so watch this space for updates next year.
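To make a technique like topic extraction more concrete, here is a minimal sketch using scikit-learn’s LatentDirichletAllocation on a handful of invented scam-style transcripts. The transcripts, the topic count and the top-word inspection are illustrative assumptions for this example, not the actual pipelines from the projects above.

```python
# A minimal sketch of topic extraction with scikit-learn's LDA.
# The transcripts below are invented examples for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

transcripts = [
    "your bank account has been compromised, confirm your card number now",
    "this is the tax office, you owe a penalty and must pay immediately",
    "congratulations, you have won a prize, just cover the shipping fee",
    "your computer has a virus, let me remote in and fix it for a fee",
]

# Convert call transcripts into a bag-of-words matrix.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(transcripts)

# Fit a small LDA model; n_components is a tunable assumption.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the top words per topic, a common way to inspect recurring themes.
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {idx}: {', '.join(top)}")
```

In practice, topics recovered this way can surface recurring scam themes, such as urgency or payment demands, that a detection model or a human analyst can then act on.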
I’ve been exploring the current application of machine learning techniques to cybersecurity. Although there are some strong use cases in the areas of log analysis and malware detection, I couldn’t find the same quantity of research on applying AI to the human side of cybersecurity.
Can AI be used to support the decision-making process when developing cyber threat prevention mechanisms in organisations and influence user behaviour towards safer choices? Can modelling adversarial scenarios help us better understand and protect against social engineering attacks?
To answer these questions, a multidisciplinary perspective should be adopted, with technologists and psychologists working together with industry and government partners.
While designing such mechanisms, consideration should be given to the fact that users often perceive interventions as hurting their productivity, since they demand additional effort on security and privacy tasks not necessarily related to their primary work [1, 2].
A number of researchers use principles from behavioural economics to identify cyber security “nudges” (e.g. [3], [4]) or visualisations [5, 6]. These help users make better decisions and minimise perceived effort by moving them away from their default position. The approach is being applied in the privacy area, for example to reduce oversharing on Facebook [7] and to improve smartphone privacy settings [8]. Such interventions are also increasingly used at the point of installing mobile applications [9].
The proposed socio-technical approach to the reduction of cyber threats aims to account for the development of responsible and trustworthy people-centred AI solutions that can use data whilst maintaining personal privacy.
A combination of supervised and unsupervised learning is already being employed to predict new threats and malware based on existing patterns. Machine learning can also monitor system and human activity to detect potentially malicious deviations.
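As a hedged illustration of the unsupervised side, the sketch below trains scikit-learn’s IsolationForest on simulated “normal” activity and flags a deviating event. The features (login hour, data transferred, failed logins) and all the numbers are assumptions for the example only.

```python
# A sketch of unsupervised anomaly detection on user activity,
# using scikit-learn's IsolationForest. Features and values are
# illustrative assumptions, not real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" behaviour: daytime logins, modest transfers.
normal = np.column_stack([
    rng.normal(13, 2, 500),   # login hour
    rng.normal(50, 15, 500),  # MB transferred
    rng.poisson(0.2, 500),    # failed login attempts
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A 3 a.m. login with a large transfer and repeated failures
# should score as anomalous.
suspicious = np.array([[3, 400, 6]])
print(model.predict(suspicious))  # -1 flags a potentially malicious deviation
```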
Building adversarial models, designing empirical studies and running experiments (e.g. using Amazon’s Mechanical Turk) can help measure the effectiveness of attackers’ techniques more accurately and inform stronger defence mechanisms. I believe there is a need to explore opportunities to utilise machine learning to aid human decision-making, whereby people are supported by, and work together with, AI to better defend against cyber attacks.
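For the experimental side, here is a minimal sketch of the kind of analysis such a study might run: a two-proportion z-test comparing phishing-simulation click rates between a control group and a group shown a security nudge. The counts are fabricated purely for illustration; a real experiment would collect them from participants, for example recruited via Mechanical Turk.

```python
# A sketch of analysing a nudge experiment with a two-proportion z-test.
# The counts below are fabricated for illustration only.
from statsmodels.stats.proportion import proportions_ztest

clicks = [48, 29]          # participants who clicked the phishing link
participants = [200, 200]  # group sizes: [control, nudge]

# One-sided test: is the control click rate higher than the nudge group's?
stat, p_value = proportions_ztest(clicks, participants, alternative="larger")
print(f"z = {stat:.2f}, p = {p_value:.4f}")
```

A significant result here would quantify the risk reduction from the nudge, which is exactly the kind of measurement rigour I argued for above.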
We should draw upon participatory co-design and follow a people-centred approach so that relevant stakeholders are engaged in the process. This can help develop personalised and contextualised solutions, crucial to addressing ethical, legal and social challenges that cannot be solved with AI automation alone.
I recently completed this AWS Machine Learning course on Coursera (it’s free!). Besides covering the basic theory behind machine learning, it discusses common use cases and how AWS services can be applied to them. Overall, it’s quick and interesting, and doesn’t require deep technical skills.