Author of the month for January 2019

IT Governance Publishing named me the author of the month and kindly provided a 20% discount on my book.

There’s an interview available in the form of a podcast, in which I discuss the most significant challenges related to change management and organisational culture, the common causes of a poor security culture, and my advice for improving the information security culture in your organisation.

ITGP also made one of the chapters of the audio version of my book available for free – I hope you enjoy it!


Videos for InfoSec Awareness

It was another fantastic event by SANS. This time, apart from the regular line-up of great speakers, there were some interactive workshops.

Javvad Malik facilitated one of them and challenged the participants to create their own awareness videos.

It felt like we covered the entire production cycle in under two hours: we talked about brainstorming, scripting, filming styles, editing and much more! But the most important part was putting the ideas into practice: we actually got to create our own security awareness videos.

The audience was split into several groups, each tasked with producing an engaging clip with only one requirement: it shouldn’t be boring.

Javvad’s tips certainly helped, and with a bit of humour, my team’s video won first prize!

If you would like to learn more, check out the Summit Archives for presentation slides from this and past events, including Javvad’s workshop deck.


The Psychology of Information Security is now an audiobook too!

Thanks to my publisher, my book is now available in audio format. It’s narrated by Peter Silverleaf, who has done a great job as always.

If you would rather listen to an audiobook while driving, exercising or commuting, this version is for you. The book has intentionally been kept to the point, which means you can finish the audio in slightly over two hours. The fact that it costs the equivalent of two cups of coffee is an added benefit.

You can get it for free on Audible as part of their introductory offer (you can listen to a sample there too), through Apple iTunes, or as an MP3 download from my publisher’s website.

I know I’m slightly biased here, but I highly recommend it!


Cyber Security: Law and Guidance

I’m proud to be one of the contributors to the newly published Cyber Security: Law and Guidance book.

Although the primary focus of this book is on cyber security law and data protection, no discussion is complete without mentioning who all these measures aim to protect: the people.

I draw on my research and practical experience to present a case for a new approach to cyber security and data protection that places people at its core.

Check it out!


Behavioural science in cyber security

Why your staff ignore security policies and what to do about it.

Dale Carnegie’s 1936 bestselling self-help book How to Win Friends and Influence People is one of those titles that sits unloved and unread on most people’s bookshelves. But dust off its cover and crack open its spine, and you’ll find lessons and anecdotes that are relevant to the challenges associated with shaping people’s behaviour when it comes to cyber security.

In one chapter, Carnegie tells the story of George B. Johnson, from Oklahoma, who worked for a local engineering company. Johnson’s role required him to ensure that other employees abided by the organisation’s health and safety policies. Among other things, he was responsible for making sure they wore their hard hats when working on the factory floor.

His strategy was as follows: if he spotted someone not following the company’s policy, he would approach them, admonish them, quote the regulation at them and insist on compliance. And it worked – albeit briefly. The employee would put on their hard hat, and as soon as Johnson left the room, they would just as quickly remove it. So he tried something different: empathy. Rather than addressing his colleagues from a position of authority, Johnson spoke to them almost as though he were their friend, expressing a genuine interest in their comfort. He wanted to know whether the hats were uncomfortable to wear, and whether that was why people didn’t wear them on the job.

Instead of simply reciting the rules chapter and verse, he mentioned that it was in the employees’ best interest to wear their helmets, because they were designed to prevent workplace injuries.

This shift in approach bore fruit, and workers felt more inclined to comply with the rules. Moreover, Johnson observed that employees were less resentful of management.

The parallels between cyber security and George B. Johnson’s battle to ensure health-and-safety compliance are immediately obvious. Our jobs require us to adequately address the security risks that threaten the organisations we work for. To be successful at this, it’s important to ensure that everyone appreciates the value of security – not just engineers, developers, security specialists and others in related roles.

This isn’t easy. On the one hand, failing to implement security controls can result in an organisation facing significant losses. On the other, badly implemented security mechanisms can be worse: they can obstruct employee productivity or foster a culture in which security is resented.

To ensure widespread adoption of secure behaviour, security policy and control implementations not only have to accommodate the needs of those that use them, but they also must be economically attractive to the organisation. To realise this, there are three factors we need to consider: motivation, design, and culture.


Artificial intelligence and cyber security: attacking and defending

Cyber security is a manpower-constrained market – therefore, the opportunities for AI automation are vast. Frequently, AI is used to make certain defensive aspects of cyber security more wide-reaching and effective: combating spam and detecting malware are prime examples. On the offensive side, there are many incentives to use AI when attempting to attack vulnerable systems belonging to others. These incentives include the speed of attack, low costs and the difficulty of attracting skilled staff in an already constrained environment.

Current research in the public domain is limited to white-hat hackers employing machine learning to identify vulnerabilities and suggest fixes. At the speed AI is developing, however, it won’t be long before we see attackers using these capabilities at mass scale, if they don’t already.

How do we know for sure? The fact is, it is quite hard to attribute a botnet or a phishing campaign to AI rather than a human. Industry practitioners, however, believe that we will see an AI-powered cyber-attack within a year: 62% of surveyed Black Hat conference participants seem convinced of such a possibility.

Many believe that AI is already being deployed for malicious purposes by highly motivated and sophisticated attackers. It’s not at all surprising, given that AI systems make an adversary’s job much easier. Why? Resource efficiency aside, they introduce psychological distance between an attacker and their victim. Indeed, many offensive techniques traditionally involved engaging with others and being present, which in turn limited the attacker’s anonymity. AI increases both the anonymity and the distance. Autonomous weapons are a case in point: attackers are no longer required to pull the trigger and observe the impact of their actions.

It doesn’t have to be about human life either. Let’s explore some of the less severe applications of AI for malicious purposes: cybercrime.

Social engineering remains one of the most common attack vectors. How often is malware introduced into systems when someone just clicks on an innocent-looking link?

The fact is, quite a bit of effort is required to entice the victim to click on that link. Historically, crafting a believable phishing email has been labour-intensive: days and sometimes weeks of research, plus the right opportunity, were needed to carry out such an attack successfully. Things are changing with the advent of AI in cyber security.

Analysing large data sets helps attackers prioritise their victims based on online behaviour and estimated wealth. Predictive models can go further, determining a victim’s willingness to pay a ransom based on historical data and even adjusting the size of the pay-out to maximise the chances of payment and, therefore, the criminals’ revenue.

Imagine that all the data available in the public domain, as well as secrets previously leaked through various data breaches, could be combined for the ultimate victim profile in a matter of seconds, with no human effort.

Once the victim is selected, AI can be used to create and tailor the emails and sites they are most likely to click on, based on the crunched data. Trust is built by engaging people in longer dialogues over extended periods on social media, which requires no human effort – chatbots are now capable of maintaining such interactions and can even impersonate real contacts by mimicking their writing style.

Machine learning used for victim identification and reconnaissance greatly reduces the attacker’s resource investment. Indeed, there is no longer even a need to speak the same language! This inevitably leads to an increase in the scale and frequency of highly targeted spear-phishing attacks.

The sophistication of such attacks can also increase. Exceeding human capabilities of deception, AI can mimic voices thanks to rapid developments in speech synthesis. These systems can create realistic voice recordings based on existing data, elevating social engineering to the next level through impersonation. This, combined with the other techniques discussed above, paints a rather grim picture.

So what do we do?

Let’s outline some potential defence strategies that we should be thinking about already.

Firstly, and rather obviously, increasing the use of AI for cyber defence is not such a bad option. A combination of supervised and unsupervised learning approaches is already being employed to predict new threats and malware based on existing patterns.
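
As a toy illustration of how the two approaches complement each other, here is a minimal sketch in Python using scikit-learn. All feature values are synthetic stand-ins for real telemetry (file entropy, API-call counts, connection rates and so on), and the model choices are assumptions for the example, not a recommendation:

```python
# Minimal sketch: pairing supervised and unsupervised learning for
# threat detection. Synthetic features stand in for real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(42)

# Supervised: learn from historically labelled samples.
benign = rng.normal(loc=0.3, scale=0.1, size=(500, 4))
malicious = rng.normal(loc=0.7, scale=0.1, size=(500, 4))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)           # 0 = benign, 1 = malicious
clf = RandomForestClassifier(n_estimators=100).fit(X, y)

# Unsupervised: flag anything unlike normal activity, no labels needed.
detector = IsolationForest(contamination=0.01, random_state=0).fit(benign)

sample = rng.normal(loc=0.9, scale=0.05, size=(1, 4))  # novel-looking event
print("known-threat probability:", clf.predict_proba(sample)[0, 1])
print("anomaly verdict (-1 = outlier):", detector.predict(sample)[0])
```

The pairing is the point: the classifier covers patterns we have seen and labelled before, while the isolation forest has a chance of catching the ones we haven’t.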

Behavioural analytics is another avenue to explore. Machine learning techniques can be used to monitor system and human activity to detect potentially malicious deviations.
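
What such baselining might look like is sketched below, with invented session data and a simple three-sigma rule standing in for a real model; a production system would obviously use far richer features:

```python
# Sketch of behavioural baselining: profile a user's normal activity
# and flag sessions that deviate sharply. Data and threshold are invented.
import statistics

# Hypothetical history per session: (login hour, megabytes transferred)
history = [(9, 40), (10, 55), (9, 35), (11, 60), (10, 50), (9, 45)]
hours, volumes = zip(*history)
hour_mu, hour_sd = statistics.mean(hours), statistics.stdev(hours)
vol_mu, vol_sd = statistics.mean(volumes), statistics.stdev(volumes)

def is_suspicious(hour: int, mb: float, sigmas: float = 3.0) -> bool:
    """Flag a session far outside this user's usual pattern."""
    return (abs(hour - hour_mu) > sigmas * hour_sd
            or abs(mb - vol_mu) > sigmas * vol_sd)

print(is_suspicious(10, 48))   # typical mid-morning session -> False
print(is_suspicious(3, 900))   # 3 a.m. bulk download -> True
```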

Importantly, though, when using AI for defence, we should assume that attackers anticipate it. We must also keep track of AI development and its application in cyber security to be able to credibly predict malicious applications.

To achieve this, collaboration between industry practitioners, academic researchers and policymakers is essential. Legislators must account for the potential use of AI and refresh some of the definitions of ‘hacking’. Researchers should carefully consider the malicious applications of their work. Patching and vulnerability management programmes should be given due attention in the corporate world.

Finally, awareness should be raised among users about preventing social engineering attacks, discouraging password re-use and advocating two-factor authentication where possible.
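
On the two-factor point, time-based one-time passwords (TOTP) are among the easiest mechanisms to advocate. A minimal sketch using the pyotp library follows; in a real deployment the secret would be provisioned once per user rather than generated on the fly:

```python
# TOTP sketch using pyotp (pip install pyotp). The secret is generated
# here purely for illustration; real systems provision and store it
# per user (often shown to the user once as a QR code).
import pyotp

secret = pyotp.random_base32()   # shared secret between server and user
totp = pyotp.TOTP(secret)

code = totp.now()                # what the user's authenticator app shows
print("valid code accepted:", totp.verify(code))       # True
print("wrong code rejected:", totp.verify("000000"))   # False (almost always)
```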

References

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, 2018.

Cummings, M. L. 2004. “Creating Moral Buffers in Weapon Control Interface Design,” IEEE Technology and Society Magazine (Fall 2004), 29–30.

Seymour, J. and Tully, P. 2016. “Weaponizing Data Science for Social Engineering: Automated E2E Spear Phishing on Twitter,” Black Hat conference.

Allen, G. and Chan, T. 2017. “Artificial Intelligence and National Security,” Harvard Kennedy School Belfer Center for Science and International Affairs.

Yampolskiy, R. 2017. “AI Is the Future of Cybersecurity, for Better and for Worse,” Harvard Business Review, May 8, 2017.


Of people and security: To build a working security culture, focus first on empathy and communication

A security department may sometimes be referred to by executives as the ‘Business Prevention Department’. Cyber security professionals, eager to minimise potential risks, can put controls in place that may stifle productivity and innovation.

Cyber security professionals are often too aware of what the business shouldn’t do and forget to mention what it should be doing instead. OK, USB ports are now blocked, but have we provided people with an alternative way to share files securely? Yes, we might have mitigated the risk of introducing malware through a flash drive, but have we considered the wider impact on employees’ ability to perform their core business activities and, in turn, on the overall profitability of the company?

Instead of saying ‘No’ to everything, let’s try to understand the business context of what we are trying to protect and why. Because that’s what actually matters and is absolutely key when designing security solutions that work.

People often think that security is the opposite of usability. In reality, the two can reinforce each other: design and security coexist by defining constructive and destructive behaviours – what people should and shouldn’t do. Effective design, therefore, streamlines constructive behaviours while making risky ones harder to accomplish.
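
As a hypothetical illustration of that principle, consider an internal file-sharing helper where the secure path is the zero-effort default and the risky path is still possible but deliberately noisy. All names here are invented for the sketch:

```python
# Hypothetical helper: streamline the constructive behaviour (encrypted
# sharing by default), make the destructive one explicit and auditable.
import warnings

def share_file(path: str, recipient: str, *, encrypted: bool = True) -> None:
    """Share a file with a colleague; encryption is the paved road."""
    if not encrypted:
        # Still possible, but deliberate, visible and recorded –
        # a conscious exception rather than an accident.
        warnings.warn(f"Unencrypted share of {path} to {recipient} is audited")
    transport = "TLS + at-rest encryption" if encrypted else "plaintext"
    print(f"Sending {path} to {recipient} via {transport}")

share_file("report.xlsx", "alice@example.com")                 # easy and safe
share_file("report.xlsx", "bob@example.com", encrypted=False)  # possible, noisy
```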

To do this effectively, security has to be a vocal influence in the design process, not an afterthought. But it can only regain this influence if its value to people and the business is demonstrated first.

Wondering why your security policies don’t work? Ask your staff! Empathy, communication and collaboration are vital to building a culture of security. Security professionals need to shift their role from that of a policeman enforcing policy top-down through sanctions to that of someone who is empathetic to the business’s needs and takes the time to understand them.

Security mechanisms should be shaped around the day-to-day working lives of employees, not the other way around. The best way to do this is to engage with employees and to factor their unique experiences and insights into the design process. The aim should be to correct the misconceptions, misunderstandings and faulty decision-making processes that result in non-compliant behaviour.

Changing culture is not easy and will take time, but it is possible. Check out my book to find out more about developing an effective business-oriented security programme and improving the security culture in your organisation.