Cyber Security: Law and Guidance


I’m proud to be one of the contributors to the newly published Cyber Security: Law and Guidance book.

Although the primary focus of this book is cyber security law and data protection, no discussion is complete without mentioning who all these measures aim to protect: people.

I draw on my research and practical experience to make the case for a new approach to cyber security and data protection, one that places people at its core.

Check it out!


Internet of Toys Security


To support my firm’s corporate and social responsibility efforts, I volunteered to help NSPCC, a charity working in child protection, understand the Internet of Toys and its security and privacy implications.

I hope the efforts in this area will result in better policymaking and raise awareness among children and parents about the risks and threats posed by connected devices.

Toys are different from other connected devices not only because of how they are normally used, but also because of who uses them.

For example, children may tell secrets to their toys, sharing particularly sensitive information with them. This, combined with often insufficient security considerations by the manufacturers, may be a cause for concern.

Apart from helping the NSPCC create campaign materials and educating its staff on the threat landscape, we were able to suggest a high-level framework for assessing the security of a connected toy, covering parental controls, privacy and technology security considerations.



An open source modelling toolkit for enterprise architects


Telling stories is one of the best ways to get your ideas across, especially when your audience is not technical. Therefore, as an architect, you might want to communicate in a way that can be easily understood by others.

TOGAF, for example, encourages enterprise architects to develop Business Scenarios. But what if you want to represent your concepts visually? The solution might lie in using a modelling language that meets this requirement.

ArchiMate is an open standard for such a language: it supports enterprise architects in documenting and analysing architecture. Full alignment with the aforementioned TOGAF is an added bonus.

ArchiMate mimics the constructs of the English language: it has a subject, an object and a verb, corresponding to the active, passive and behavioural (action) aspects respectively. It employs these constructs to model business architecture.

To illustrate this, let’s model a specific business process using ArchiMate. Following an example described in one of the whitepapers, let’s consider a stock trader registering an order on the exchange as part of the overall Place Order process.

Thinking back to the English language parallel, what does this sentence tell us? In other words, who is doing what to what?

In this scenario, a Trader (subject) places (verb) the order (object).

The diagram below illustrates what this might look like when modelled in ArchiMate.

[Diagram: the Place Order scenario modelled in ArchiMate]

‘Trader’, being an active element, is modelled as a Business Role; ‘Place Order’, being a behavioural (action) element, is represented as a Business Process; and the passive ‘Order’ itself is modelled as a Business Object.

The relationships between elements carry meaning in ArchiMate too. In our example, the Assignment relation is used to model the ‘Trader’ performing the ‘Place Order’ action. By contrast, the interaction between ‘Place Order’ and ‘Order’ is modelled using the Access relation, to illustrate that the Business Process creates the Business Object.
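If you like to work with models programmatically as well as visually, the same structure can be captured in a few lines of code. Below is a minimal Python sketch; the element and relationship names follow ArchiMate terminology, but the classes themselves are my own illustration rather than part of any ArchiMate tooling.

```python
# A toy, hand-rolled representation of the 'Place Order' model.
# Element and relationship types mirror ArchiMate terminology;
# the classes are illustrative, not an official ArchiMate API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    name: str
    element_type: str  # e.g. "BusinessRole", "BusinessProcess", "BusinessObject"

@dataclass(frozen=True)
class Relationship:
    source: Element
    target: Element
    relationship_type: str  # e.g. "Assignment", "Access"

trader = Element("Trader", "BusinessRole")               # active element (subject)
place_order = Element("Place Order", "BusinessProcess")  # behaviour (verb)
order = Element("Order", "BusinessObject")               # passive element (object)

model = [
    Relationship(trader, place_order, "Assignment"),  # Trader performs Place Order
    Relationship(place_order, order, "Access"),       # Place Order creates Order
]

for r in model:
    print(f"{r.source.name} -[{r.relationship_type}]-> {r.target.name}")
```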

To put all of this into practice, you can use the Archi modelling toolkit. It’s free, open source and supports multiple platforms.

In fact, I used it to illustrate the scenario above, but it can do much more. For example, I talk about modelling SABSA architecture using ArchiMate in my other blog.


Behavioural science in cyber security

Why your staff ignore security policies and what to do about it.               

Dale Carnegie’s 1936 bestselling self-help book How To Win Friends And Influence People is one of those titles that sits unloved and unread on most people’s bookshelves. But dust off its cover and crack open its spine, and you’ll find lessons and anecdotes that are relevant to the challenges associated with shaping people’s behaviour when it comes to cyber security.

In one chapter, Carnegie tells the story of George B. Johnson, from Oklahoma, who worked for a local engineering company. Johnson’s role required him to ensure that other employees abided by the organisation’s health and safety policies. Among other things, he was responsible for making sure they wore their hard hats when working on the factory floor.

His strategy was as follows: if he spotted someone not following the company’s policy, he would approach them, admonish them, quote the regulation at them, and insist on compliance. And it worked — albeit briefly. The employee would put on their hard hat, and as soon as Johnson left the room, they would just as quickly remove it. So he tried something different: empathy. Rather than addressing them from a position of authority, Johnson spoke to his colleagues almost as though he were their friend, and expressed a genuine interest in their comfort. He wanted to know whether the hats were uncomfortable to wear, and whether that was why employees didn’t wear them on the job.

Instead of simply reciting the rules chapter and verse, he mentioned that it was in the employees’ best interest to wear their helmets, because they were designed to prevent workplace injuries.

This shift in approach bore fruit, and workers felt more inclined to comply with the rules. Moreover, Johnson observed that employees were less resentful of management.

The parallels between cyber security and George B. Johnson’s battle to ensure health-and-safety compliance are immediately obvious. Our jobs require us to adequately address the security risks that threaten the organisations we work for. To be successful at this, it’s important to ensure that everyone appreciates the value of security — not just engineers, developers, security specialists, and other related roles.

This isn’t easy. On the one hand, failing to implement security controls can result in an organisation facing significant losses. On the other, badly implemented security mechanisms can be worse, either by obstructing employee productivity or by fostering a culture where security is resented.

To ensure widespread adoption of secure behaviour, security policies and control implementations not only have to accommodate the needs of those who use them, but must also be economically attractive to the organisation. To realise this, there are three factors we need to consider: motivation, design, and culture.



Security architecture: how to

When building a house, you would not consider starting the planning, and certainly not the build itself, without the guidance of an architect. Throughout the process you would use a number of experts, such as plumbers, electricians and carpenters. If each individual expert were given a blank piece of paper to design and implement their aspect of the property, with no collaboration with the other specialists and no architectural blueprint, the house would likely be difficult and costly to maintain, look unattractive and be hard to live in. It is also highly probable that the various installations would be out of step with one another, causing problems at a later stage when, for example, the plastering is completed before the wiring is finished.

This analogy applies to security architecture too: many companies implement different systems at different times, with little consideration of how other experts will implement their ideas, and often without realising they are doing it. Like the house build, this impacts the overarching effectiveness of the security strategy, which in turn affects employees, clients and the success of the company.

In both cases, an understanding of the baseline requirements, how these may change in the future, and the overall framework is essential for a successful project. Over time, building regulations and practices have evolved to help the house-building process, and we see the same in the security domain, with industry standards being developed and shared to help overcome some of these challenges.

The approach I use when helping clients with their security architecture is outlined below.

Approach

I begin by understanding the business, gathering requirements and analysing risks. Defining the current and target states leads to assessing the gaps between them and developing a roadmap that aims to close those gaps.

I prefer to start the security architecture development cycle from the top, by defining the security strategy and outlining how the lower levels of the architecture support it, linking them to business objectives. This approach is then adjusted to each client’s specific needs.
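To make the gap-assessment step more concrete, here is a minimal Python sketch. The capability names and the 0–5 maturity scale are hypothetical examples for illustration, not a prescribed standard.

```python
# Toy gap assessment: compare current vs target maturity (0-5 scale)
# for a few hypothetical security capabilities.
current = {"identity & access": 2, "logging & monitoring": 1, "vulnerability mgmt": 3}
target = {"identity & access": 4, "logging & monitoring": 3, "vulnerability mgmt": 4}

# A naive roadmap simply prioritises the largest gaps first.
gaps = {capability: target[capability] - current[capability] for capability in target}
roadmap = sorted(gaps.items(), key=lambda item: item[1], reverse=True)

for capability, gap in roadmap:
    print(f"{capability}: current {current[capability]}, "
          f"target {target[capability]}, gap {gap}")
```

In practice the prioritisation would also weigh risk, cost and business objectives, but the principle of comparing current and target states holds.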



Amsterdam

This is one of those blog posts with no real content. I just wanted to share some pictures from one of the coolest cities I’ve had the privilege to live and work in over the past few months.


Artificial intelligence and cyber security: attacking and defending

Cyber security is a manpower-constrained market, so the opportunities for AI automation are vast. Frequently, AI is used to make certain defensive aspects of cyber security more wide-reaching and effective: combating spam and detecting malware are prime examples. On the offensive side, there are many incentives to use AI when attempting to attack vulnerable systems belonging to others: the speed of attack, low costs and the difficulty of attracting skilled staff in an already constrained market.

Current research in the public domain is largely limited to white-hat hackers employing machine learning to identify vulnerabilities and suggest fixes. At the speed AI is developing, however, it won’t be long before we see attackers using these capabilities at mass scale, if they don’t already.

How would we know for sure? The fact is, it is quite hard to attribute a botnet or a phishing campaign to AI rather than to a human. Industry practitioners, however, believe that we will see an AI-powered cyber attack within a year: 62% of surveyed Black Hat conference participants seem convinced of the possibility.

Many believe that AI is already being deployed for malicious purposes by highly motivated and sophisticated attackers. This is not at all surprising, given that AI systems make an adversary’s job much easier. Why? Resource efficiency aside, they introduce psychological distance between an attacker and their victim. Many offensive techniques have traditionally involved engaging with others and being present, which limited an attacker’s anonymity. AI increases both the anonymity and the distance. Autonomous weapons are the case in point: attackers are no longer required to pull the trigger and observe the impact of their actions.

It doesn’t have to be about human life either. Let’s explore some of the less severe applications of AI for malicious purposes: cybercrime.

Social engineering remains one of the most common attack vectors. How often is malware introduced into systems because someone simply clicked on an innocent-looking link?

The fact is, quite a bit of effort is required to entice a victim to click on that link. Historically, crafting a believable phishing email has been labour-intensive: days and sometimes weeks of research, and the right opportunity, were needed to carry out such an attack successfully. Things are changing with the advent of AI in cyber security.

Analysing large data sets helps attackers prioritise their victims based on online behaviour and estimated wealth. Predictive models can go further, determining a victim’s willingness to pay a ransom based on historical data, and even adjusting the size of the pay-out to maximise the chance of payment and therefore the revenue for cyber criminals.
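To see why this is plausible, consider how little code a toy version of such a model needs. The sketch below trains a classifier on entirely synthetic, made-up features; it illustrates the concept for defenders rather than providing anything operational.

```python
# Toy illustration: predicting "willingness to pay" from synthetic features.
from sklearn.linear_model import LogisticRegression

# Hypothetical features per victim: [estimated_wealth, online_activity, breach_exposure]
X = [[0.9, 0.8, 1], [0.2, 0.3, 0], [0.7, 0.9, 1],
     [0.1, 0.5, 0], [0.8, 0.4, 1], [0.3, 0.2, 0]]
y = [1, 0, 1, 0, 1, 0]  # 1 = paid in the (invented) historical data

model = LogisticRegression().fit(X, y)

# Score new profiles and rank them by predicted payment probability.
candidates = [[0.85, 0.7, 1], [0.15, 0.4, 0]]
for features, p in zip(candidates, model.predict_proba(candidates)[:, 1]):
    print(f"features={features} -> payment probability {p:.2f}")
```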

Imagine that all the data available in the public domain, as well as secrets previously leaked through various data breaches, can now be combined for the ultimate victim profiling, in a matter of seconds and with no human effort.

Once the victim is selected, AI can be used to create and tailor the emails and sites they are most likely to click on, based on the crunched data. Trust can be built by engaging people in long dialogues over extended periods on social media, with no human effort required: chatbots are now capable of maintaining such interactions and can even impersonate a victim’s real contacts by mimicking their writing style.
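Even crude style mimicry takes surprisingly little code. The toy first-order Markov chain below (with an invented text sample) is a far cry from a modern chatbot, but it shows the principle of generating text that statistically resembles its source.

```python
import random
from collections import defaultdict

# Invented writing sample; a real attack would scrape far more text.
sample = ("thanks again for the report will review tonight "
          "thanks for the intro will follow up tomorrow")
words = sample.split()

# Build a first-order Markov chain of word-to-word transitions.
chain = defaultdict(list)
for current_word, next_word in zip(words, words[1:]):
    chain[current_word].append(next_word)

# Generate a short message in a (very roughly) similar style.
word, output = "thanks", ["thanks"]
for _ in range(8):
    if word not in chain:
        break
    word = random.choice(chain[word])
    output.append(word)
print(" ".join(output))
```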

Machine learning used for victim identification and reconnaissance greatly reduces an attacker’s resource investment. Indeed, there is no longer even a need to speak the victim’s language! This inevitably leads to an increase in the scale and frequency of highly targeted spear-phishing attacks.

The sophistication of such attacks can also increase. Exceeding human capabilities of deception, AI can mimic voices thanks to rapid developments in speech synthesis. These systems can create realistic voice recordings from existing samples, elevating social engineering to the next level through impersonation. Combined with the other techniques discussed above, this paints a rather grim picture.

So what do we do?

Let’s outline some potential defence strategies that we should be thinking about already.

Firstly and rather obviously, increasing the use of AI for cyber defence is not such a bad option. A combination of supervised and unsupervised learning approaches is already being employed to predict new threats and malware based on existing patterns.
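As a flavour of what this looks like, the sketch below pairs a supervised classifier, trained on labelled samples, with an unsupervised anomaly detector for patterns that no label yet exists for. The feature vectors and labels are synthetic placeholders, not real malware telemetry.

```python
from sklearn.ensemble import IsolationForest, RandomForestClassifier

# Synthetic feature vectors, e.g. [entropy, imported_api_count, packed_flag]
known_samples = [[7.8, 120, 1], [3.1, 40, 0], [7.5, 200, 1], [2.9, 35, 0]]
labels = [1, 0, 1, 0]  # 1 = malware, 0 = benign (from analyst triage)

# Supervised: learn from what analysts have already labelled.
classifier = RandomForestClassifier(random_state=0).fit(known_samples, labels)

# Unsupervised: flag samples that look unlike anything seen before.
detector = IsolationForest(random_state=0).fit(known_samples)

new_sample = [[7.9, 180, 1]]
print("malware probability:", classifier.predict_proba(new_sample)[0][1])
print("anomaly flag (-1 = outlier):", detector.predict(new_sample)[0])
```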

Behaviour analytics is another avenue to explore. Machine learning techniques can be used to monitor system and human activity to detect potential malicious deviations.
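A minimal sketch of the idea: establish a per-user baseline and flag sessions that deviate from it. The login-hour and download-volume features below are hypothetical; real deployments model far richer signals.

```python
import numpy as np

# Hypothetical per-session features for one user: [login_hour, mb_downloaded]
history = np.array([[9, 120], [10, 95], [9, 110], [11, 130], [10, 105]])
mean, std = history.mean(axis=0), history.std(axis=0)

def is_anomalous(session, threshold=3.0):
    """Flag a session if any feature deviates more than `threshold`
    standard deviations from the user's baseline."""
    z_scores = np.abs((np.array(session) - mean) / std)
    return bool((z_scores > threshold).any())

print(is_anomalous([10, 115]))  # typical working pattern -> False
print(is_anomalous([3, 5000]))  # 3 a.m. session pulling 5 GB -> True
```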

Importantly though, when using AI for defence, we should assume that attackers anticipate it. We must also keep track of AI development and its application in cyber to be able to credibly predict malicious applications.

In order to achieve this, collaboration between industry practitioners, academic researchers and policymakers is essential. Legislators must account for the potential use of AI and refresh some of the legal definitions of ‘hacking’. Researchers should carefully consider the malicious applications of their work. Patching and vulnerability management programmes should be given due attention in the corporate world.

Finally, awareness should be raised among users about preventing social engineering attacks, discouraging password re-use and advocating two-factor authentication where possible.

References

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. 2018.

Cummings, M. L. 2004. “Creating Moral Buffers in Weapon Control Interface Design.” IEEE Technology and Society Magazine (Fall 2004), 29–30.

Seymour, J. and Tully, P. 2016. “Weaponizing Data Science for Social Engineering: Automated E2E Spear Phishing on Twitter.” Black Hat conference.

Allen, G. and Chan, T. 2017. “Artificial Intelligence and National Security.” Harvard Kennedy School Belfer Center for Science and International Affairs.

Yampolskiy, R. 2017. “AI Is the Future of Cybersecurity, for Better and for Worse.” Harvard Business Review, May 8, 2017.