Behavioural science in cyber security

Why your staff ignore security policies and what to do about it.               

Dale Carnegie’s 1936 bestselling self-help book How To Win Friends And Influence People is one of those titles that sits unloved and unread on most people’s bookshelves. But dust off its cover and crack open its spine, and you’ll find lessons and anecdotes that are relevant to the challenges associated with shaping people’s behaviour when it comes to cyber security.

In one chapter, Carnegie tells the story of George B. Johnson, from Oklahoma, who worked for a local engineering company. Johnson’s role required him to ensure that other employees abide by the organisation’s health and safety policies. Among other things, he was responsible for making sure other employees wore their hard hats when working on the factory floor.

His strategy was as follows: if he spotted someone not following the company’s policy, he would approach them, admonish them, quote the regulation at them, and insist on compliance. And it worked — albeit briefly. The employee would put on their hard hat, and as soon as Johnson left the room, they would just as quickly remove it.  So he tried something different: empathy. Rather than addressing them from a position of authority, Johnson spoke to his colleagues almost as though he was their friend, and expressed a genuine interest in their comfort. He wanted to know if the hats were uncomfortable to wear, and that’s why they didn’t wear them when on the job.

Instead of simply reciting the rules as chapter-and-verse, he merely mentioned it was in the best interest of the employee to wear their helmets, because they were designed to prevent workplace injuries.

This shift in approach bore fruit, and workers felt more inclined to comply with the rules. Moreover, Johnson observed that employees were less resentful of management.

The parallels between cyber security and George B. Johnson’s battle to ensure health-and-safety compliance are immediately obvious. Our jobs require us to adequately address the security risks that threaten the organisations we work for. To be successful at this, it’s important to ensure that everyone appreciates the value of security — not just engineers, developers, security specialists, and other related roles.

This isn’t easy. On one hand, failing to implement security controls can result in an organisation facing significant losses. However, badly-implemented security mechanisms can be worse: either by obstructing employee productivity or by fostering a culture where security is resented.

To ensure widespread adoption of secure behaviour, security policy and control implementations not only have to accommodate the needs of those that use them, but they also must be economically attractive to the organisation. To realise this, there are three factors we need to consider: motivation, design, and culture.

Understanding the motivation

Understanding motivation begins with understating why people don’t comply with information security policies. Three common reasons include:

  • There is no obvious reason to comply
  • Compliance comes at a steep cost to workers
  • Employees are simply unable to comply

There is no obvious reason to comply

Risk and threat are part of cyber security specialists’ everyday lives, and they have a universal appreciation for what they entail. But regular employees seldom have an accurate concept of what information security actually is, and what it is trying to protect.

Employees are hazy about the rules themselves, and tend to lack a crystallised understanding of what certain security policies forbid and allow, which results in so-called “security myths.” Furthermore, even in the rare cases where employees are aware of a particular security policy and interpret it correctly, the motivation to comply isn’t there. They’ll do the right thing, but their heart isn’t really in it.

People seldom feel that their actions have any bearing on the overall information security of an organisation. As the poet Stanisław Jerzy Lec once said, “No snowflake in an avalanche ever feels responsible.” This is troubling because if adhering to a policy involves a certain amount of effort, and there is no perceived immediate threat, non-compliant behaviour can appear to be the more attractive and comfortable option.

Compliance comes at a steep cost to workers

All people within an organisation have their own duties and responsibilities to execute. A marketing director is responsible for PR and communications; a project manager is responsible for ensuring tasks remain on track; a financial analyst is helping an organisation decide which stocks and shares to buy. For most of these employees, their main concern — if not their sole concern — is ensuring their jobs get done. Anything secondary, like information security, falls to the wayside especially if employees perceive it to be arduous or unimportant.

The evidence shows that if security mechanisms create additional work for employees, they will tend to err on the side of non-compliant behaviour, in order to concentrate on executing their primary tasks efficiently.

There is a troubling lack of concern among security managers about the burden security mechanisms impose on employees. Many assume that employees can simply adjust to new shifting security requirements without much extra effort. This belief is often mistaken, as employees regard new security mechanisms as arduous and cumbersome, draining both their time and effort. From their perspective, reduced risk to the organisation as a consequence of their compliance is seen as not a worthwhile trade-off for the disruption to their productivity.

And in extreme cases — for example, when an individual is faced with an impending deadline — employees may find it fit to cut corners and fail to comply with established security procedure, regardless of being aware of the risks.

An example of this is file sharing. Many organisations enact punishing restrictions regarding the exchange of digital files, in an effort to prevent the organisation from data exfiltration or phishing attempts. This often takes the form of strict permissions, by storage or transfer limits, or by time-consuming protocols. If pressed for time, an employee may resort to an unapproved alternative — like Dropbox, Google Drive, or Box. Shadow IT is a major security concern for enterprises, and is often a consequence of cumbersome security protocols. And from the perspective of an employee they can justify it, as failing to complete their primary tasks holds more immediate consequences for them, especially compared to the potential and unclear risk associated with security non-compliance.

Employees are simply unable to comply

In rare and extreme cases, compliance — whether enforced or voluntary — fails to be an option for employees, no matter how much time or effort they are willing to commit. In these cases, the most frequent scenario is that the security protocols imposed do not match their basic work requirements.

An example of this would be an organisation that distributed encrypted USB flash drives with an insufficient amount of storage. Employees who frequently need to transfer large files — such as those working with audio-visual assets — would be forced to rely on unauthorised mechanisms, like online file sharing services, or larger, non-encrypted external hard drives. It is also common to see users copy files onto their laptops from secure locations, either because the company’s remote access doesn’t work well, or because they’ve been allocated an insufficient amount of storage on their network drives.

Password complexity rules often force employees to break established security codes of conduct. When forced to memorise different, profoundly complex passwords, employees will try and find a shortcut by writing them down — either physically, or electronically.

In these situations, the employees are cognisant of the fact that they’re breaking the rules, but they justify it by saying their employer had failed to offer them a workable technical implementation. They assume the company would be more comfortable with a failure to adhere by security rules than the failing to perform their primary duties. This assumption is often reinforced by non-security managerial staff.

The end result is that poorly implemented security protocols create a chasm between the security function and the rest of the organisation, creating a “them-and-us” scenario, where they are perceived as “out of touch” to the needs of the rest of the organisation. Information security — and information security professionals — become resented, and the wider organisation responds to security enforcers with scepticism or derision. These reinforced perspectives can result in resistance to security measures, regardless of how well-designed or seamlessly implemented they are.

How people make decisions

The price of overly complicated security mechanisms is productivity; the tougher compliance is, the more it’ll interfere with the day-to-day running of the organisation. It’s not uncommon to see the business-critical parts of an organisation engaging heavily in non-compliant behaviour, because they value productivity over security and don’t perceive an immediate risk.

And although employees will often make a sincere effort to comply with an organisation’s policies, their predominant concern is getting their work done. When they violate a rule, it’s usually not due to deliberately malicious behaviour, but rather because of poor control implementation that pays scant attention to their needs.

On the other hand the more employee-centred a security policy is, the better it incentivises employees to comply, and strengthens the overall security culture. This requires empathy, and actually listening to those users downstream. Crucially, it requires remembering that employee behaviour is primarily driven by meeting goals and key performance indicators. This is often in contrast to the security world, which emphasises managing risks and proactively responding to threats that may or may not emerge, and is often seen by outsiders as abstract and lacking context.

That’s why developing a security programme that works requires an understanding of the human decision-making process.

How individuals make decisions is a subject of interest for psychologists and economists, who have traditionally viewed human behaviour as regular and highly predictable. This framework let researchers build models that allowed them to comprehend social and economic behaviour almost like clockwork, where it can be deconstructed and observed how the moving parts fit together.

But people are unique, and therefore, complicated. There is no one-size-fits-all paradigm for humanity. People have behaviour that can be irrational, disordered, and prone to spur-of-the-moment thinking, reflecting the dynamic and ever-changing working environment. Research in psychology and economics later pivoted to understand the drivers behind certain actions. This research is relevant to the information security field.

Among the theories pertaining to human behaviour is the theory of rational choice, which explains how people aim to maximise their benefits and minimise their costs. Self-interest is the main motivator, with people making decisions based on personal benefit, as well as the cost of the outcome.

This can also explain how employees make decisions about what institutional information security rules they choose to obey. According to the theory of rational choice, it may be rational for users to fail to adhere to a security policy because the effort vastly outweighs the perceived benefit — in this case, a reduction in risk.

University students, for example, have been observed to frequently engage in unsafe computer security practices, like sharing credentials, downloading attachments without taking safe precautions, and failing to back up their data. Although students — being digital natives — were familiar with the principles of safe computing behaviour, they still continued to exhibit risky practices. Researchers who have looked into this field believe that simple recommendations aren’t enough to ensure compliance; educational institutions may need to impose secure behaviour through more forceful means.

This brings us onto the theory of general deterrence, which states that users will fail to comply with the rules if they know that there will be no consequences. In the absence of a punishment, users feel compelled to behave as they feel fit.

Two terms vital to understanding this theory are ‘intrinsic motivation’ and ‘extrinsic motivation.’ As the name suggests, intrinsic motivations come from within, and usually lead to actions that are personally rewarding. The main mover here is one’s own desires. Extrinsic motivations, on the other hand, derive from the hope of gaining a reward or avoiding a punishment.

Research into the application of the theory of general deterrence within the context of information security awareness suggests that the perception of consequences is far more effective in deterring unsafe behaviour than actually imposing sanctions. These findings came after examining the behaviour of a sample of 269 employees from eight different companies who had received security training and were aware of the existence of user-monitoring software on their computers.

But there isn’t necessarily a consensus on this. A criticism of the aforementioned theory is that it’s based solely on extrinsic motivations. This lacks the consideration of intrinsic motivation, which is a defining and driving facet of the human character. An analysis of a sample of 602 employees showed that approaches which address intrinsic motivations lead to a significant increase in compliant employee behaviour, rather than ones rooted in punishment and reward. In short, the so-called “carrot and stick” method might not be particularly effective.

The value of intrinsic motivations is supported by the cognitive evaluation theory, which can be used to predict the impact that rewards have on intrinsic motivations. So, if an effort is recognised by an external factor, such as with an award or prize, the individual will be more likely to adhere to the organisation’s security policies.

However, if rewards are seen as a “carrot” to control behaviour, they have a negative impact on intrinsic motivation. This is due to the fact that a recipient’s sense of individual autonomy and self-determination will diminish when they feel as though they’re being controlled.

The cognitive evaluation theory also explains why non-tangible rewards — like praise — also have positive impacts on intrinsic motivation. Verbal rewards boost an employee’s sense of self-esteem and self-worth, and reinforces the view that they’re skilled at a particular task, and their performance is well-regarded by their superiors. However, for non-tangible rewards to be effective, they must not appear to be coercive.

Focusing on ensuring greater compliance within an information security context, this theory recommends adoption of a positive, non-tangible reward system that recognises positive efforts in order to ensure constructive behaviour regarding security policy compliance.

And ultimately, the above theories show that in order to effectively protect an institution, security policies shouldn’t merely ensure formal compliance with legal and regulatory requirements, but also pay respect to the motivations and attitudes of the employees that must live and work under them.

Designing security that works

A fundamental aspect of ensuring compliance is providing employees with the tools and working environments they need, so they don’t feel compelled to use insecure, unauthorised third-party alternatives. For example, an enterprise could issue encrypted USB flash drives and provide a remotely-accessible network drive, so employees can save and access their documents as required. Therefore employees aren’t tempted to use Dropbox or Google Drive; however these options must have enough storage capacity for employees to do their work.

Additionally, these network drives can be augmented with auto-archiving systems, allowing administrators to ensure staffers do not travel with highly-sensitive documents. If employees must travel with their laptops, their internal storage drives can be encrypted, so that even if they leave them in a restaurant or train, there is scant possibility that the contents will be accessed by an unauthorised third-party.

Other steps taken could include the use of remote desktop systems, meaning that no files are actually stored on the device, or single-sign-on systems, so that employees aren’t forced to remember, or worse, write down, several unique and complex passwords. Ultimately, whatever security steps taken must align with the needs of employees and the realities of their day-to-day jobs.

People’s resources are limited. This doesn’t just refer to time, but also to energy. Individuals often find decision making to be hard when fatigued.  This concept was highlighted by a psychological experiment, where two sets of people had to memorise a different number. One was a simple, two-digit number, while the other was a longer seven-digit number. The participants were offered a reward for correctly reciting the number; but had to walk to another part of the building to collect it.

On the way, they were intercepted with a second pair of researchers who offered them a snack, which could only be collected after the conclusion of the experiment. The participants were offered a choice between a healthy option and chocolate. Those presented with the easier number tended to err towards the healthy option, while those tasked with remembering the seven digit number predominantly selected chocolate.

Another prominent study examines the behaviour of judges during different times of the day. It found that in the mornings and after lunch, judges had more energy, and were better able to consider the merits of an individual case. This resulted in more grants of parole. Those seen before a judge in the evenings were denied parole more frequently. This is believed to be because they simply ran out of mental energy, and defaulted to what they perceived to be the safest option: refusal.

So how do these studies apply to an information security context? Those working in the field should reflect on the individual circumstances of those in the organisation. If people are tired or engaged in activities requiring high concentration, they get fatigued, which affects their ability or willingness to maintain compliance. This makes security breaches a real possibility.

But compliance efforts don’t need to contribute to mental depletion. When people perform tasks that work with their mental models (defined as the way they view the world and expect it to work), the activities are less mentally tiring than those that divert from the aforementioned models. If people can apply their previous knowledge and expertise to a problem, less energy is required to solve it in a secure manner.

This is exemplified by a piece of research that highlights the importance of secure file removal, which highlighted that merely emptying the Recycle Bin is insufficient, and files can easily be recovered through trivial forensic means. However, there are software products that exploit the “mental models” from the physical world. One uses a “shredding” analogy to highlight that files are being destroyed securely. If you shred a physical file, it is extremely challenging to piece it together, and this is what is happening on the computer, and echoes a common workplace task. This interface design might lighten the cognitive burden on users.

Another example of ensuring user design resembles existing experiences refers to the desktop metaphor introduced by researchers at Xerox in the 1980s, where people were presented with a graphical experience, rather than a text-driven command line. Users could manipulate objects much like they would in the real world (i.e. drag and drop, move files to the recycle bin, and organise files in visual folder-based hierarchies).  Building on the way people think makes it significantly easier for individuals to accept ways of working and new technologies. However, it’s important to remember that cultural differences can make this hard. Not everything is universal. The original Apple Macintosh trash icon, for example, puzzled users in Japan, where metallic bins were unheard of.

Good interface design isn’t just great for users; it makes things easier for those responsible for cyber security. This contradicts the established thinking that security is antithetical to good design. In reality, design and security can coexist by defining constructive and destructive behaviours. Effective design should streamline constructive behaviours, while making damaging ones hard to accomplish. To do this, security has to be a vocal influence in the design process, and not an afterthought.

Designers can involve security specialists in a variety of ways. One way is iterative design, where design is performed in cycles followed by testing, evaluation, and criticism. The other is participatory design, which ensures that all key stakeholders – especially those working in security – are presented with an opportunity to share their perspective.

Of course, this isn’t a panacea. The involvement of security professionals isn’t a cast-iron guarantee that security-based usability problems won’t crop up later. These problems are categorised as ‘wicked’.  A wicked problem is defined as one that is arduous, if not entirely impossible, to solve. This is often due to vague, inaccurate, changing or missing requirements from stakeholders.  Wicked problems cannot be solved through traditional means. It requires creative and novel thinking, such as the application of design thinking techniques. This includes performing situational analysis, interviewing stakeholders, creating user profiles, examining how others faced with a similar problem solved it, creating prototypes, and mind-mapping.

Design thinking is summed up by four different rules. The first is “the human rule,” which states that all design activity is “ultimately social in nature.” The ambiguity rule states that “design thinkers must preserve ambiguity.” The redesign rule says that “all design is redesign,” while the tangibility rule mandates that “making ideas tangible always facilitates communication”.

Security professionals should learn these rules and use them in order to design security mechanisms that don’t merely work, but are fundamentally usable. To do this, it’s important they escape their bubbles, and engage with those who actually use them. This can be done by utilising existing solutions and creating prototypes that can demonstrate the application of security concepts within a working environment.

The Achilles heel of design thinking is that while it enables the design of fundamentally better controls, it doesn’t highlight why existing ones fail.

When things go awry, we tend to look at the symptoms and not the cause. Tailichi Ohno, the Japanese industrialist who created the Toyota Production System (which inspired Lean Manufacturing), developed a technique known as “Five Whys” as a systematic problem-solving tool.

One example, given by Ohno in one of his books, shows this technique in action when trying to diagnose a faulty machine:

  1. Why did the machine stop? There was an overload and the fuse blew
  2. Why was there an overload? The bearing was not sufficiently lubricated.
  3. Why was it not lubricated sufficiently? The lubrication pump was not pumping sufficiently
  4. Why was it not pumping sufficiently? The shaft of the pump was worn and rattling
  5. Why was the shaft worn out? There was no strainer attached and metal scrap got in.

Rather than focus on the first issue, Ohno drilled down through a myriad of issues, which together culminated into a “perfect storm,” resulting in the machine failure. As security professionals, continuing to ask “why” can help us determine why a mechanism failed.

In the example, Ohno pointed out that the root cause was a human failure (namely, a failure to apply a strainer) rather than technical. This is something most security professionals can relate to. As Eric Reis said in his 2011 book The Lean Startup, “the root of every seemingly technical problem is actually a human problem”.

Creating a culture of security

Culture is ephemeral, and often hard to define. Yet, it can be the defining factor of whether a security programme fails or succeeds. Once employees’ primary tasks are identified and aligned with a seamless and considerate set of security controls, it’s vital to demonstrate that information security exists for a purpose, and not to needlessly inconvenience them.  Therefore it is also vital we understand the root causes of poor security culture.

The first step is to recognise is that bad habits and behaviours tend to be contagious. As highlighted by Canadian psychologist Malcolm Gladwell in his book The Tipping Point, there are certain conditions that allow some ideas or behaviours to spread virally. Gladwell refers specifically to the broken window theory to highlight the importance and power of context. This was originally used in law enforcement, and argued that stopping smaller crimes (like vandalism, hence the “broken window” link) is vital in stopping larger crimes (like murder). If a broken window is left for several days in a neighbourhood, more vandalism would inevitably ensue. This shows that crime will effectively go unpunished, leading to bigger and more harmful crimes.

The broken window theory is subject to a fierce debate. Some argue that it led to a dramatic crime reduction in the 1990’s. Other attribute the drop in crime to other factors, like the elimination of leaded petrol. Regardless of what argument is right, it’s worth recognising that the broken window theory can be applied in an information security context, and addressing smaller infractions can reduce the risk of larger, more damaging infractions.

Moving forward, it’s worth recognising that people are unmoved to behave in a compliant way because they do not see the financial consequences of violating it.

In The Honest Truth about Dishonesty, Dan Ariely tries to understand what motivates people to break the rules. Ariely describes a survey of golf players, which tries to find the conditions on which they might be tempted to move the ball into a more advantageous position, and how they would go about it. The golfers were presented with three options: using their club, their foot, or picking up the ball with their hands.

All of these are considered cheating, and are major no-nos. However, the survey is presented in a way where one is psychologically more acceptable than the others. Predictably, the players said that they would move the ball with their club. Second and third respectably were moving the ball with their foot, and picking up with their hand. The survey shows that by psychologically distancing themselves from the act of dishonesty – in this case, by using a tool actually used in the game of golf to cheat – the act of dishonesty becomes more acceptable, and people become more likely to behave in such a fashion.  It’s worth mentioning that the “distance” in this experiment is merely psychological. Moving the ball with the club is just as wrong as picking it up. The nature of the action isn’t changed.

In a security context, the vast majority of employees are unlikely to steal confidential information or sabotage equipment, much like professional golfers are unlikely to pick up the ball. However, employees might download a peer-to-peer application, like Gnutella, in order to download music to listen to at work. This could expose an organisation to data exfiltration, much like if someone left the office with a flash drive full of documents that they shouldn’t have. The motivation may be different, but the impact is the same.

This can be used to remind employees that their actions have consequences. Breaking security policy doesn’t seem to have a direct financial cost to the company – at least at first – making it easier for employees to rationalise behaving in a non-compliant way. Policy violations, however, can lead to a security breaches. Regulation like GDPR with fines of up to €20 million or four per cent of a firm’s global turnover makes this connection clearer and could help employees understand the consequences of acting improperly.

Another study relates tangentially to the broader discussion of breaking security policies and cheating. Participants were asked to solve 20 simple math problems, and promised 50 cents for each correct answer. Crucially, the researchers made it technically possible to cheat, by allowing participants to check their work against a sheet containing the correct answers. Participants could shred the sheet, leaving no evidence of cheating.

Compared to controlled conditions, where cheating wasn’t possible, participants with access to the answer sheet answered on average five more problems correctly.

The researchers looked at how a peer might influence behaviour in such circumstances. They introduced an individual, who answered all the problems correctly in a seemingly-impossible amount of time. Since such behaviour remained unchallenged, this had a marked effect on the other participants, who answered roughly eight more problems correctly than those working under conditions where cheating wasn’t possible.

Much like the broken window theory, this reinforces the idea that cheating is contagious and he same can be said of the workplace. If people see others violating security polices, like using unauthorised tools and services to conduct work business, they may be inclined to exhibit the same behaviour. Non-compliance becomes normalised, and above all, socially acceptable. This normalisation is why poor security behaviour exists.

Fortunately, the inverse is also true. If employees see others acting in a virtuous manner, they’ll be less inclined to break the rules. This is why, when it comes to security campaigns, it’s important that senior leadership set a positive example, and become role models for the rest of the company. If the CEO takes security policy seriously, it’s more likely the rank-and-file foot soldiers of the company will too.

One of the examples of this is given in the book The Power of Habit, where journalist Charles Duhigg discusses the story of Paul O’Neill, then CEO of the Aluminium Company of America (Alcoa), who aimed to make his company the safest in the nation to work for. Initially he experienced resistance, as stakeholders were concerned that his primary priority wasn’t merely margins and other finance-related performance indicators. They failed to see the connection between his aim for zero workplace injuries, and the company’s financial performance.  And yet Alcoa’s profits reached an all-time record high within a year of his announcement, and when he retired, the company’s annual income was five times than it was before he arrived. Moreover, it became one of the safest industrial companies in the world.

Duhigg attributes this to the “keystone habit.” O’Neill identified safety as such a habit, and fervently focused on it. He wanted to change the company, but this couldn’t be done by merely telling people to change his behaviour, explaining: “… That’s not how the brain works. So I decided I was going to start by focusing on one thing. If I could start disrupting the habits around one thing, it would spread throughout the entire company.”

In the book, O’Neill discusses an incident when a worker died trying to fix a piece of equipment in a way that violated the established security procedures and warning signs. The CEO issued an emergency meeting to understand the cause of the event, and took personal responsibility for the worker’s death.  He also pinpointed several inadequacies with workplace safety education, specifically that the fact that training material didn’t highlight that employees wouldn’t be sanctioned for hardware failure, and that they shouldn’t commence repair before first consulting a manager.

In the aftermath, Alcoa safety policies were updated and employees were encouraged to engage with management in drafting new policies. This engagement led workers to take a step further and suggest improvements to how the business could be run. By talking about safety, the company was able to improve communication and innovation, which lead to a marked improvement in the company’s financial performance.

Timothy D. Wilson, Professor of Psychology at the University of Virginia says that behaviour change precedes changes in sentiment – not the other way around. Those responsible for security should realise that there is no silver bullet, and changing culture requires an atmosphere of constant vigilance, where virtuous behaviour is constantly reinforced in order to create and sustain positive habits.

The goal isn’t to teach one-off tricks, but rather to create a culture that is accepted by everyone without resistance, and is understood. To do this, messages need to cater to each type of employee, and eschew the idea that a one-size-fits-all campaign could work. Questions that must be answered include: What are the benefits? Why should I bother? What are the impacts of my actions?

Tone is important. Campaigns must avoid scare tactics, such as threatening employees with punishment in the case of breaches or non-compliances. These can be dismissed as scaremongering. In the same breath, they should acknowledge the damage caused by non-compliant employee behaviour and recognise that employee error can result in risk to the organisation. They should acknowledge the aims and values of the user, as well as the values of the organisation, like professionalism and timely delivery of projects. The campaign should recognise that everyone has a role to play.

Above all, a campaign should emphasise the value that information security brings to the business. This reframes the conversation around security from being about imposing limits on user behaviour, and deflects the idea that security can be a barrier from employees doing their job.

Security campaigns targeted to specific groups enable better flexibility, and allow information security professionals to be more effective at communicating risk to more employees, which is crucial for creating behavioural change. When everyone in the organisation is aware of security risks and procedures, the organisation can identify chinks in the communal knowledge, and respond by providing further education.

From this point onwards, role-specific education can be offered. So, if an employee has access to a company laptop and external storage drive, they could be offered guidance on keeping company data secure when out of the office. Additionally, employees should have a library of reference materials to consult on procedure, should they need to reinforce their knowledge later on.

Security professionals should understand the importance of the collective in order to build a vibrant and thriving security culture. Above all, they should remember that as described in the broken windows theory, addressing minor infractions can result in better behaviour across the board.

Conclusion

Companies want to have their cake and eat it. On one hand, they want their employees to be productive; that is obvious as productivity is directly linked to the performance of the business. On the other hand, they are wary of facing security breaches, which can result in financial penalties from regulators, costs associated with remediation and restitution, as well as negative publicity.

As we have seen, employees are concerned primarily with doing their day-to-day jobs in a timely and effective manner. Anything else is secondary and as far as compliance goes, for many employees, the ends justify the means. Therefore, it’s vital that productivity and security be reconciled. When companies fail to do so, they effectively force employees’ hands into breaking policy, and heightening risk for the organisation.

Employees will only comply with security policy if they feel motivated to do so. They must see a link between compliance and personal benefit. They must be empowered to adhere to security policy. To do this, they have to be given the tools and means to comprehend risks facing the organisation, and to see how their actions play into this. Once they are sufficiently equipped, they must be trusted to act unhindered to make decisions that mitigate risk at the organisational level.

Crucially, it’s important that front-line information security workers shift their role from that of a policeman enforcing policy from the top-down through sanctions and hand-wringing. This traditional approach no longer works, especially when you consider that today’s businesses are geographically distributed, and often consist of legions of remote workers.

It’s vital that we shift from identikit, one-size-fits-all frameworks. They fail to take advantage of context, both situational and local. Flexibility and adaptability are key mechanisms to use when faced with conflicts between tasks and established security codes of conduct.

Security mechanisms should be shaped around the day-to-day working lives of employees, and not the other way around. The best way to do this is to engage with employees, and to factor in their unique experiences and insights into the design process. The aim should be to correct the misconceptions, misunderstandings, and faulty decision-making processes that result in non-compliant behaviour. To effectively protect your company’s assets from cyber-attacks, focus on the most important asset – your people.

References

Dale Carnegie, How to Win Friends and Influence People. Simon and Schuster, 2010.

Iacovos Kirlappos, Adam Beautement and M. Angela Sasse, “‘Comply or Die’ Is Dead: Long Live Security-Aware Principal Agents”, in Financial Cryptography and Data Security, Springer, 2013, pages 70–82.

Leron Zinatullin, The Psychology of Information Security: Resolving conflicts between security compliance and human behaviour. IT Governance Ltd, 2016

Kregg Aytes and Terry Connolly, “Computer and Risky Computing Practices: A Rational Choice Perspective”, Journal of Organizational End User Computing, 16(2), 2004, 22–40

John D’Arcy, Anat Hovav and Dennis Galletta, “User Awareness of Security Countermeasures and Its Impact on Information Systems Misuse: A Deterrence Approach”, Information Systems Research, 17(1), 2009, 79–98

Jai-Yeol, Son “Out of Fear or Desire? Toward a Better Understanding of Employees’ Motivation to Follow IS Security Policies”, Information &. Management, 48(7), 2011, 296–302

Baba Shiv and Alexander Fedorikhin, “Heart and Mind in Conflict: The Interplay of Affect and Cognition in Consumer Decision Making”, Journal of Consumer Research, 1999, 278–292

Shai Danziger, Jonathan Levav and Liora Avnaim-Pesso, “Extraneous Factors in Judicial Decisions”, Proceedings of the National Academy of Sciences, 108(17), 2011, 6889–6892

Simson. L. Garfinkel and Abhi Shelat, “Remembrance of Data Passed: A Study of Disk Sanitization Practices”, IEEE Security & Privacy, 1, 2003, 17–27.

John Maeda, The Laws of Simplicity, MIT Press, 2006.

Horst W. J. Rittel and Melvin M. Webber, “Dilemmas in a General Theory of Planning”, Policy Sciences, 4, 1973, 155–169.  

Hasso Plattner, Christoph Meinel and Larry J. Leifer, eds., Design Thinking: Understand–Improve–Apply, Springer Science & Business Media, 2010

Taiichi Ohno, Toyota Production System: Beyond Large-Scale Production, Productivity Press, 1988.

Eric Reis, The Lean Startup, Crown Business, 2011

Malcolm Gladwell, The Tipping Point: How Little Things Can Make a Big Difference, Little, Brown, 2006

Dan Ariely, The Honest Truth about Dishonesty, Harper, 2013

Francesca Gino, Shahar Ayal and Dan Ariely, “Contagion and Differentiation in Unethical Behavior: The Effect of One Bad Apple on the Barrel”, Psychological Science, 20(3), 2009, pages 393–398

Charles Duhigg, The Power of Habit: Why We Do What We Do and How to Change, Random House, 2013

Timothy Wilson, Strangers to Ourselves, Harvard University Press, 2004, 212

Advertisements

Governance Models – Cloud

Your company has decided to adopt Cloud. Or maybe it was among the ones that relied on virtualised environments before it was even a thing? In either case, cloud security has to be managed. How do you go about that?

Before checking out vendor marketing materials in search of the perfect technology solution, let’s step back and think of it from a governance perspective. In an enterprise like yours, there are a number of business functions and departments with various level of autonomy. Do you trust them to manage business process-specific risk or choose to relieve them from this burden by setting security control objectives and standards centrally? Or maybe something in-between?

Centralised model

1

Managing security centrally allows you to uniformly project your security strategy and guiding policy across all departments. This is especially useful when aiming to achieve alignment across business functions. It helps when your customers, products or services are similar across the company, but even if not, centralised governance and clear accountability may reduce duplication of work through streamlining the processes and cost-effective use of people and technology (if organised in a central pool).

If one of the departments is struggling financially or is less profitable, the centralised approach ensures that overall risk is still managed appropriately and security is not neglected.  This point is especially important when considering a security incident (e.g. due to misconfigured access permissions) that may affect the whole company.

Responding to incidents in general may be simplified not only from the reporting perspective, but also by making sure due process is followed with appropriate oversight.

There are, of course, some drawbacks. In the effort to come up with a uniform policy, you may end up in a situation where it loses its appeal. It’s now perceived as too high-level and out of touch with real business unit needs. The buy-in from the business stakeholders, therefore, might be challenging to achieve.

Let’s explore the alternative; the decentralised model.

Decentralised model

2

This approach is best applied when your company’s departments have different customers, varied needs and business models. This situation naturally calls for more granular security requirements preferably set at the business unit level.

In this scenario, every department is empowered to develop their own set of policies and controls. These policies should be aligned with the specific business need relevant to that team. This allows for local adjustments and increased levels of autonomy. For example, upstream and downstream operations of an oil company have vastly different needs due to the nature of activities they are involved in. Drilling and extracting raw materials from the ground is not the same as operating a petrol station, which can feel more like a retail business rather than one dominated by industrial control systems.

Another example might be a company that grew through a series of mergers and acquisitions where acquired companies retained a level of individuality and operate as an enterprise under the umbrella of a parent corporation.

With this degree of decentralisation, resource allocation is no longer managed centrally and, combined with increased buy-in, allows for greater ownership of the security programme.

This model naturally has limitations. These have been highlighted when identifying the benefits of the centralised approach: potential duplication of effort, inconsistent policy framework, challenges while responding to the enterprise-wide incident, etc. But is there a way to combine the best of both worlds? Let’s explore what a hybrid model might look like.

Hybrid model

3

The middle ground can be achieved through establishing a governance body setting goals and objectives for the company overall, and allowing departments to choose the ways to achieve these targets. What are the examples of such centrally defined security outcomes? Maintaining compliance with relevant laws and regulations is an obvious one but this point is more subtle.

The aim here is to make sure security is supporting the business objectives and strategy. Every department in the hybrid model in turn decides how their security efforts contribute to the overall risk reduction and better security posture.

This means setting a baseline of security controls and communicating it to all business units and then gradually rolling out training, updating policies and setting risk, assurance and audit processes to match. While developing this baseline, however, input from various departments should be considered, as it is essential to ensure adoption.

When an overall control framework is developed, departments are asked to come up with a specific set of controls that meet their business requirements and take distinctive business unit characteristics into account. This should be followed up by gap assessment, understanding potential inconsistencies with the baseline framework.

In the context of the Cloud, decentralised and hybrid models might allow different business units to choose different cloud providers based on individual needs and cost-benefit analysis.  They can go further and focus on different solution types such as SaaS over IaaS.

As mentioned above, business units are free to decide on implementation methods of security controls providing they align with the overall policy. Compliance monitoring responsibilities, however, are best shared. Business units can manage the implemented controls but link in with the central function for reporting to agree consistent metrics and remove potential bias. This approach is similar to the Three Lines of Defence employed in many organisations to effectively manage risk. This model suggests that departments themselves own and manage risk in the first instance with security and audit and assurance functions forming second and third lines of defence respectively.

What next?

We’ve looked at three different governance models and discussed their pros and cons in relation to Cloud. Depending on the organisation the choice can be fairly obvious. It might be emerging naturally from the way the company is running its operations. All you need to do is fit in the organisational culture and adopt the approach to cloud governance accordingly.

The point of this article, however, is to encourage you to consider security in the business context. Don’t just select a governance model based on what “sounds good” or what you’ve done in the past. Instead, analyse the company, talk to people, see what works and be ready to adjust the course of action.

If the governance structure chosen is wrong or, worse still, undefined, this can stifle the business instead of enabling it. And believe me, that’s the last thing you want to do.

Be prepared to listen: the decision to choose one of the above models doesn’t have to be final. It can be adjusted as part of the continuous improvement and feedback cycle. It always, however, has to be aligned with business needs.

Summary

Centralised model Decentralised model Hybrid model
A single function responsible for all aspects of a Cloud security: people, process, technology, governance, operations, etc. Strategic direction is set centrally, while all other capabilities are left up to existing teams to define. Strategy, policy, governance and vendors are managed by the Cloud security team; other capabilities remain outside the Cloud security initiative.
Advantages Advantages Advantages
  • Central insight and visibility across entire cloud security initiative
  • High degree of consistency in process execution
  • More streamlined with a single body for accountability
  • Quick results due to reduced dependencies on other teams

 

  • High level of independence amongst departments for decision-making and implementation
  • Easier to obtain stakeholder buy-in
  • Less impact on existing organisation structures and teams
  • Increased adoption due to incremental change
  • High degree of alignment to existing functions
  • High-priority Cloud security capabilities addressed first
  • Maintains centralised management for core Cloud security requirements
  • Allows decentralised decision-making and flexibility for some capabilities

 

Disadvantages Disadvantages Disadvantages
  • Requires dedicated and additional financial support from leadership
  • Makes customisation more time consuming
  • Getting buy-in from all departments is problematic
  • Might be perceived as not relevant and slow in adoption

 

  • Less control to enforce Cloud security requirements
  • Potential duplicate solutions, higher cost, and less effective control operations
  • Delayed results due to conflicting priorities
  • Potential for slower, less coordinated development of required capabilities
  • Lack of insight across non-integrated cloud infrastructure and services
  • Gives up some control of Cloud security capability implementation and operations to existing functions
  • Some organisation change is still required (impacting existing functions)

Security architecture: how to

When building a house you would not consider starting the planning, and certainly not the build itself, without the guidance of an architect. Throughout this process you would use a number of experts such as plumbers, electricians and carpenters.  If each individual expert was given a blank piece of paper to design and implement their aspect of the property with no collaboration with the other specialists and no architectural blueprint, then it’s likely the house would be difficult and costly to maintain, look unattractive and not be easy to live in.  It’s highly probable that the installation of such aspects would not be in time with each other, therefore causing problems at a later stage when, for example, the plastering has been completed before the wiring is complete.

This analogy can be applied  to security architecture, with many companies implementing different systems at different times with little consideration of how other experts will implement their ideas, often without realising they are doing it.  This, like the house build, will impact on the overarching effectiveness of the security strategy and will in turn impact employees, clients and the success of the company.

For both of the above, an understanding of the baseline requirements, how these may change in the future and overall framework is essential for a successful project. Over time, building regulations and practices have evolved to help the house building process and we see the same in the security domain; with industry standards being developed and shared to help overcome some of these challenges.

The approach I use when helping clients with their security architecture is outlined below.

Approach

I begin by understanding the business, gathering requirements and analysing risks. Defining current and target states leads to assessing the gaps between them and developing the roadmap that aims to close these gaps.

I prefer to start the security architecture development cycle from the top by defining security strategy and outlining how lower levels of the architecture support it, linking them to business objectives. But this approach is adjusted based on the specific needs.

Read the rest of this entry »


Amsterdam

This is one of these blog posts with no content. I just really wanted to share some pics from one of the coolest cities I had a privilege to live and work in for the past few months.


Artificial intelligence and cyber security: attacking and defending

Cyber security is a manpower constrained market – therefore the opportunities for AI automation are vast.  Frequently, AI is used to make certain defensive aspects of cyber security more wide reaching and effective: combating spam and detecting malware are prime examples.  On the opposite side there are many incentives to use AI when attempting to attack vulnerable systems belonging to others.  These incentives could include the speed of attack, low costs and difficulties attracting skilled staff in an already constrained environment.

Current research in the public domain is limited to white hat hackers employing machine learning to identify vulnerabilities and suggest fixes.  At the speed AI is developing, however, it won’t be long before we see attackers using these capabilities on mass scale, if they don’t already.

How do we know for sure? The fact is, it is quite hard to attribute a botnet or a phishing campaign to AI rather than a human. Industry practitioners, however, believe that we will see an AI-powered cyber-attack within a year: 62% of surveyed Black Hat conference participants seem to be convinced in such a possibility.

Many believe that AI is already being deployed for malicious purposes by highly motivated and sophisticated attackers. It’s not at all surprising given the fact that AI systems make an adversary’s job much easier. Why? Resource efficiency point aside, they introduce psychological distance between an attacker and their victim. Indeed, many offensive techniques traditionally involved engaging with others and being present, which in turn limited attacker’s anonymity. AI increases the anonymity and distance. Autonomous weapons is the case in point; attackers are no longer required to pull the trigger and observe the impact of their actions.

It doesn’t have to be about human life either. Let’s explore some of the less severe applications of AI for malicious purposes: cybercrime.

Social engineering remains one of the most common attack vectors. How often is malware introduced in systems when someone just clicks on an innocent-looking link?

The fact is, in order to entice the victim to click on that link, quite a bit of effort is required. Historically it’s been labour-intensive to craft a believable phishing email. Days and sometimes weeks of research and the right opportunity were required to successfully carry out such an attack.  Things are changing with the advent of AI in cyber.

Analysing large data sets helps attackers prioritise their victims based on online behaviour and estimated wealth. Predictive models can go further and determine the willingness to pay the ransom based on historical data and even adjust the size of pay-out to maximise the chances and therefore revenue for cyber criminals.

Imagine all the data available in the public domain as well as previously leaked secrets through various data breaches are now combined for the ultimate victim profiling in a matter of seconds with no human effort.

When the victim is selected, AI can be used to create and tailor emails and sites that would be most likely clicked on based on crunched data. Trust is built by engaging people in longer dialogues over extensive periods of time on social media which require no human effort – chatbots are now capable of maintaining such interaction and even impersonate the real contacts by mimicking their writing style.

Machine learning used for victim identification and reconnaissance greatly reduces attacker’s resource investments. Indeed, there is even no need to speak the same language anymore! This inevitably leads to an increase in scale and frequency of highly targeted spear phishing attacks.

Sophistication of such attacks can also go up. Exceeding human capabilities of deception, AI can mimic voice thanks to rapid development in speech synthesis. These systems can create realistic voice recordings based on existing data and elevate social engineering to the next level through impersonation. This, combined with other techniques discussed above, paints a rather grim picture.

So what do we do?

Let’s outline some potential defence strategies that we should be thinking about already.

Firstly and rather obviously, increasing the use of AI for cyber defence is not such a bad option. A combination of supervised and unsupervised learning approaches is already being employed to predict new threats and malware based on existing patterns.

Behaviour analytics is another avenue to explore. Machine learning techniques can be used to monitor system and human activity to detect potential malicious deviations.

Importantly though, when using AI for defence, we should assume that attackers anticipate it. We must also keep track of AI development and its use in cyber security so that we can credibly predict malicious applications.

In order to achieve this, collaboration between industry practitioners, academic researchers and policymakers is essential. Legislators must account for the potential use of AI and refresh some of the definitions of ‘hacking’. Researchers should carefully consider malicious applications of their work. Patching and vulnerability management programmes should be given due attention in the corporate world.

Finally, awareness should be raised among users about preventing social engineering attacks, discouraging password re-use and advocating two-factor authentication where possible.

References

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, 2018.

Cummings, M. L. 2004. “Creating Moral Buffers in Weapon Control Interface Design,” IEEE Technology and Society Magazine (Fall 2004), 29–30.

Seymour, J. and Tully, P. 2016. “Weaponizing data science for social engineering: Automated E2E spear phishing on Twitter,” Black Hat conference.

Allen, G. and Chan, T. 2017. “Artificial Intelligence and National Security,” Harvard Kennedy School Belfer Center for Science and International Affairs.

Yampolskiy, R. 2017. “AI Is the Future of Cybersecurity, for Better and for Worse,” Harvard Business Review, May 8, 2017.


SABSA architecture and design case study

Let’s talk about applying the SABSA framework to design an architecture that solves a specific business problem. In this blog post I’ll be using a fictitious example of a public sector entity aiming to roll out an accommodation booking service for tourists visiting the country.

To ensure that security meets the needs of the business, we’re going to work through the layers of the SABSA architecture from top to bottom.

[Figure: SABSA architecture layers]

Start by reading your company’s business strategy, goals and values, and have a look at the annual report. Getting the business-level attributes from these documents should be straightforward. There’s no need to invent anything new – business stakeholders have already defined what’s important to them.

Contextual architecture

Every single word in these documents has been reviewed and changed potentially hundreds of times. Therefore, there’s usually a good level of buy-in on the vision. Simply use the same language for your business level attributes.

After analysing the strategy of my fictitious public sector client, I’m going to settle on the following attributes: Stable, Respected, Trusted, Reputable, Sustainable, Competitive. Detailed definitions for these attributes are agreed with the business stakeholders.

Conceptual architecture

The next step is to link these to the broader objectives for technology. Your CIO or CTO might be able to assist with this. In my example, the Technology department has already done the hard job of translating high-level business requirements into a set of IT objectives. Your task is simply to distil these into attributes:

[Figure: Objectives]

Now it’s up to you to define security attributes based on the Technology and Infrastructure attributes above. Examples might include attributes like Available, Confidential, Access-Controlled and so on.

Requirements traceability

The next step would be to highlight or define relationships between attributes on each level:

[Figure: Attributes]

These attributes show how security supports the business and allow for two-way traceability of requirements. This mapping can be used for risk management, assurance and architecture projects.
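
As a minimal sketch of what this two-way traceability can look like when captured as data (the attribute names and the links between them are assumptions drawn from this fictitious example, not values mandated by SABSA), the mapping can be held in a simple structure and queried in either direction:

```python
# A minimal sketch of two-way attribute traceability for the fictitious
# booking service. Attribute names and links are illustrative assumptions.
business_to_security = {
    "Trusted":   ["Access-Controlled", "Confidential"],
    "Reputable": ["Available", "Confidential"],
    "Stable":    ["Available"],
}

# Derive the reverse view: which business attributes each security attribute supports.
security_to_business: dict[str, list[str]] = {}
for business_attr, security_attrs in business_to_security.items():
    for sec_attr in security_attrs:
        security_to_business.setdefault(sec_attr, []).append(business_attr)

# Trace downwards (business to security) and upwards (security to business).
print(business_to_security["Trusted"])       # ['Access-Controlled', 'Confidential']
print(security_to_business["Confidential"])  # ['Trusted', 'Reputable']
```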

Back to our case study. Let’s consider a specific example: developing the hotel booking application for the public sector client we started out with. To simplify the scenario, we will limit the application’s functional requirements to the following list:

| ID | Name | Purpose |
| --- | --- | --- |
| P001 | Register Accommodation | Enable the registration of available temporary accommodation |
| P002 | Update Availability | Enable accommodation managers to update availability status |
| P003 | Search Availability | Allow international travellers to search for and identify available accommodation |
| P004 | Book Accommodation | Allow international travellers to book accommodation |
| P005 | Link to other departments | Allow international travellers to link to other departments and agencies, such as immigration or security services (re-direct) |

And here is how the process map would look:

[Figure: Process map]

There are a number of government stakeholders involved in serving international travellers’ requests. Tourists can access Immigration Services for information on visa requirements and Security Services for safety advice. The application itself is owned by the Ministry of Tourism, which acts as the “face” of this interaction and provides access to Tourist Board approved options. External accommodation providers (e.g. hotel chains) register and update their offers on the government’s website.

The infrastructure is outsourced to an external cloud service provider and there are mobile applications available, but these details are irrelevant for the current abstraction level.

Trust modelling

From the Trust Modelling perspective, the relationship will look like this:

[Figure: Trust model]

Subdomain policy is derived from, and compliant with, the superdomain policy, but has a specialised local interpretation authorised by the superdomain authority. The government bodies act as Policy Authorities (PAs), owning the overall risk of the interaction.
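
As a rough sketch of this derivation (the domain names and policy entries below are assumptions invented for the fictitious scenario, not part of the SABSA specification), a subdomain policy can be modelled as the superdomain policy plus an authorised local interpretation:

```python
# A minimal sketch of super-/subdomain policy derivation. Domain names and
# policy entries are illustrative assumptions for the fictitious case study.
from dataclasses import dataclass, field

@dataclass
class PolicyDomain:
    name: str
    authority: str                       # the Policy Authority (PA) owning the risk
    policy: dict = field(default_factory=dict)

    def derive_subdomain(self, name: str, local_interpretation: dict) -> "PolicyDomain":
        # The subdomain inherits the superdomain policy; the local interpretation
        # is layered on top and is assumed to be authorised by the superdomain PA.
        return PolicyDomain(
            name=name,
            authority=self.authority,
            policy={**self.policy, **local_interpretation},
        )

ministry = PolicyDomain(
    name="Ministry of Tourism",
    authority="Government PA",
    policy={"data_classification": "OFFICIAL", "incident_reporting": "mandatory"},
)
booking_service = ministry.derive_subdomain(
    "Accommodation Booking Service",
    {"session_timeout_minutes": 15},     # specialised local interpretation
)
print(booking_service.policy)
```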

At this stage we might want to revisit some of the attributes we defined previously and narrow them down to only those applicable to the process flows in scope. We will focus on making sure the transactions are trusted:

[Figure: New attributes]

Let’s overlay the applicable attributes on the process flows to understand the security requirements:

[Figure: Process flows and attributes]

Logical architecture

Now it’s time to go down a level and step into the more detailed Designer’s View. Remember requirement “P004 – Book Accommodation” mentioned above? Below is the information flow for this transaction. In most cases, someone else will have drawn these for you.

[Figure: Flow 1 – information flow]

With security attributes applied (the direction of the orange arrows indicates where each attribute is expected to be met):

[Figure: Flow 2 – information flow with security attributes]

These are the exact attributes we identified as relevant for this transaction on the business process map above. It’s ok if you uncover additional security attributes at this stage. If that’s the case, feel free to add them retrospectively to your business process map at the Conceptual Architecture level.

Physical architecture

After the exercise above is completed for each interaction, it’s time to go down to the Physical Architecture level and define specific security services for each attribute for every transaction:

[Figure: Security services]

Component architecture

At the Component Architecture level, it’s important to define solution-specific mechanisms, components and activities for each security service above. Here is a simplified example for confidentiality and integrity protection of data at rest and in transit:

| Service | Physical mechanism | Component brands, tools, products or technical standards | Service management activities required to manage the solution through-life |
| --- | --- | --- | --- |
| Message confidentiality protection | Message encryption | IPsec VPN | Key management, configuration management, change management |
| Stored data confidentiality protection | Data encryption | AES-256 disk encryption | Key management, configuration management, change management |
| Message integrity protection | Checksum | SHA-256 hash | Key management, configuration management, change management |
| Stored data integrity protection | Checksum | SHA-256 hash | Key management, configuration management, change management |
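
To make the table above a little more concrete, here is a minimal sketch of the stored-data mechanisms (a SHA-256 integrity checksum and AES-256 encryption) using Python’s standard hashlib and the third-party cryptography package. The record content is invented, and key management, configuration management and change management are assumed to happen elsewhere; for the message-protection rows, the IPsec VPN components provide the equivalent functions.

```python
# A minimal sketch of stored-data integrity (SHA-256 checksum) and
# confidentiality (AES-256) protection. Illustrative only.
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

record = b"Booking P004: traveller, accommodation, dates"

# Integrity: store the checksum alongside the record and re-compute it on read.
checksum = hashlib.sha256(record).hexdigest()

# Confidentiality: AES-256 in GCM mode (which also provides built-in integrity).
key = AESGCM.generate_key(bit_length=256)   # key management handled elsewhere
nonce = os.urandom(12)                      # never reuse a nonce with the same key
ciphertext = AESGCM(key).encrypt(nonce, record, None)

# On retrieval: decrypt, then verify the stored checksum.
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
assert hashlib.sha256(plaintext).hexdigest() == checksum
```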

As you can see, every specific security mechanism and component is now directly and traceably linked to business requirements. And that’s one of the ways you can demonstrate the value of security using the SABSA framework.


NIS Directive: are you ready?


Governments across Europe have recognised that, with increased interconnectedness, a cyber incident can affect multiple entities spanning a number of countries. Moreover, the impact and frequency of cyber attacks are at an all-time high, with recent examples including:

  • 2017 WannaCry ransomware attack
  • 2016 attacks on US water utilities
  • 2015 attack on Ukraine’s electricity network

In order to manage cyber risk, the European Union introduced the Network and Information Systems (NIS) Directive, which requires all Member States to protect their critical national infrastructure by implementing cyber security legislation.

Each Member State is required to set its own rules on financial penalties and must take the necessary measures to ensure that they are implemented. For example, in the UK, fines can be up to £17 million.

And yes, in case you are wondering, the UK government has confirmed that the Directive will apply irrespective of Brexit (the NIS Regulations come into effect before the UK leaves the EU).

Who does the NIS Directive apply to?

The law applies to:

  • Operators of Essential Services that are established in the EU
  • Digital Service Providers that offer services to persons within the EU

The sectors affected by the NIS Directive are:

  • Water
  • Health (hospitals, private clinics)
  • Energy (gas, oil, electricity)
  • Transport (rail, road, maritime, air)
  • Digital infrastructure and service providers (e.g. DNS service providers)
  • Financial Services (only in certain Member States e.g. Germany)

NIS Directive objectives

In the UK the NIS Regulations will be implemented in the form of outcome-focused principles rather than prescriptive rules.

The National Cyber Security Centre (NCSC) is the UK’s single point of contact for the legislation. It has published top-level objectives with underlying security principles.

Objective A – Managing security risk

  • A1. Governance
  • A2. Risk management
  • A3. Asset management
  • A4. Supply chain

Objective B – Protecting against cyber attack

  • B1. Service protection policies and processes
  • B2. Identity and access control
  • B3. Data security
  • B4. System security
  • B5. Resilient networks and systems
  • B6. Staff awareness

Objective C – Detecting cyber security events

  • C1. Security monitoring
  • C2. Proactive security event discovery

Objective D – Minimising the impact of cyber security incidents

  • D1. Response and recovery planning
  • D2. Lessons learned

A table view of the principles and related guidance is also available on the NCSC website.

Cyber Assessment Framework

The implementation of the NIS Directive can only be successful if Competent Authorities can adequately assess the cyber security of organisations in scope. To assist with this, the NCSC developed the Cyber Assessment Framework (CAF).

The Framework is based on the 14 outcome-focused principles of the NIS Regulations outlined above. Adherence to each principle is determined by how well the associated outcomes are met. See below for an example:

[Figure: CAF example]

Each outcome is assessed based upon Indicators of Good Practice (IGPs), which are statements that can either be true or false for a particular organisation.
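
As a rough illustration of how such an assessment might be recorded (the outcome name, the IGP statements and the simple ‘all true means achieved’ rule below are simplifying assumptions, not the NCSC’s official scoring method):

```python
# A minimal sketch of recording IGP answers for one CAF outcome.
# The outcome, statements and the "all true = achieved" rule are
# illustrative assumptions, not the official CAF assessment logic.
from dataclasses import dataclass

@dataclass
class Indicator:
    statement: str
    true_for_org: bool

def outcome_status(indicators: list[Indicator]) -> str:
    if all(i.true_for_org for i in indicators):
        return "Achieved"
    if any(i.true_for_org for i in indicators):
        return "Partially achieved"
    return "Not achieved"

b2_igps = [
    Indicator("Users are authenticated before accessing essential systems", True),
    Indicator("Privileged access requires multi-factor authentication", False),
    Indicator("Access rights are reviewed on a regular basis", True),
]
print("B2 Identity and access control:", outcome_status(b2_igps))  # Partially achieved
```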

What’s next?

If your organisation is in the scope of the NIS Directive, it is useful to conduct an initial self-assessment using the CAF described above as a starting point. Remember, a formal self-assessment will be required by your Competent Authority, so it is better not to delay this crucial step.

Establishing an early dialogue with the Competent Authority is essential as this will not only help you establish the scope of the assessment (critical assets), but also allow you to receive additional guidance from them.

The initial self-assessment will most probably highlight some gaps. It is important to outline a plan to address these gaps and share it with your Competent Authority. Make sure you keep incident response in mind at all times: the process has to be well defined to allow you to report NIS-specific incidents to your Competent Authority within 72 hours.

Remediate the findings within the agreed time frames and monitor ongoing compliance and potential changes in requirements, maintaining the dialogue with the Competent Authority.