Customers are becoming increasingly aware of their rights when it comes to data privacy, and they expect companies to safeguard the data entrusted to them. With the introduction of the GDPR, many companies had to think about privacy for the first time.
I’ve been invited to share my views on innovating in the age of GDPR as part of the Cloud and Cyber Security Expo in London.
When I was preparing for this panel, I tried to understand why this was even a topic to begin with. Why should innovation stop? If your business model is threatened by the GDPR, then you are clearly doing something wrong: it means your business model relied on exploiting consumers.
But when I thought about it a bit more, I realised that there are costs to demonstrating compliance to the regulator that a company also has to account for. This is arguably easier for bigger companies with established compliance teams than for smaller upstarts, so it can serve as a barrier to entry. Geography also plays a role here. What if a tech firm starts in the US or India, for example, where the regulatory regime is more relaxed when it comes to protecting customer data, and then expands to Europe when it can afford it? At least at first glance, companies starting up in Europe are at a disadvantage, as they face potential regulatory scrutiny from day one.
How big of a problem is this? I’ve been reading about people complaining that you need fancy lawyers who understand technology to address this challenge. I would argue, however, that fancy lawyers are only required when you are doing shady stuff with customer data. Smaller companies that are just starting up have another advantage on their side: they are new. This means they don’t have to go and retrospectively purge legacy systems of data they have been collecting over the years, potentially breaking the business logic in interdependent systems. Instead, they start with a clean slate and have an opportunity to build privacy into their product and core business processes (privacy by design).
Risk may increase as the company grows and collects more data, but I find that this risk-based approach is often missing. Implementation of your privacy programme will depend on your risk profile and appetite. The level of risk will vary depending on the type and amount of data you collect. For example, a bank can receive thousands of subject access requests per month, while a small B2B company can receive one a year. Implementation of privacy programmes will therefore be vastly different: the bank might look into technology-enabled automation, while a small company might look into outsourcing its subject request processes. It is important to note, however, that risk can’t be fully outsourced, as the company still owns it at the end of the day.
The market is moving towards technology-enabled privacy processes: automating privacy impact assessments, responding to customer requests, managing and responding to incidents, etc.
I also see the focus shifting from regulatory-driven privacy compliance to a broader data strategy. Companies are increasingly interested in understanding how they can use data as an asset rather than a liability. They are looking for ways to effectively manage marketing consents and opt-outs, and to give power and control back to the customer, for example by creating preference centres.
Privacy is more about the philosophy of handling personal data than about specific technology tricks. This mindset in itself can lead to innovation rather than stifling it. How can you solve a customer’s problem by collecting the minimum amount of personal data? Can it be anonymised? Think of personal data like toxic waste: sure, it can be handled, but with extreme care.
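As a small illustration of this minimisation mindset, the sketch below stores a salted hash of an email address instead of the address itself, so a returning user can be recognised without holding a directly identifying field. All names here are hypothetical, and note the caveat in the comments: salted hashing is pseudonymisation, not full anonymisation, because whoever holds the salt could re-link the data.

```python
import hashlib

def pseudonymise_email(email: str, salt: bytes) -> str:
    """Derive a stable pseudonymous ID from an email address.

    Caveat: this is pseudonymisation, not anonymisation -- the
    holder of the salt can still re-derive the link, so the output
    remains personal data under the GDPR.
    """
    return hashlib.sha256(salt + email.lower().encode()).hexdigest()

# Collect only what the feature actually needs: the signup form may
# capture several fields, but only the pseudonymous ID is retained.
signup = {"email": "alice@example.com", "name": "Alice", "dob": "1990-01-01"}
record = {"user_id": pseudonymise_email(signup["email"], salt=b"app-secret-salt")}
```

The same hash is produced for a returning user, so features like deduplication or preference lookup keep working without the raw address ever being stored.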
I’m proud to be one of the contributors to the newly published Cyber Security: Law and Guidance book.
Although the primary focus of this book is on the cyber security laws and data protection, no discussion is complete without mentioning who all these measures aim to protect: the people.
I draw on my research and practical experience to present a case for the new approach to cyber security and data protection placing people in its core.
Check it out!
What is GDPR?
The General Data Protection Regulation (GDPR) is a new European legislation intended to strengthen personal data protection for European citizens and harmonise personal data protection rules within the European Union. GDPR replaces the 1998 EU Data Protection Directive and the national laws that implemented this Directive. GDPR becomes the law in all EU Member States without the need for further legislation, though in some areas, Member States are allowed to adopt further specific laws on certain topics, for example, in relation to biometric data and employment data.
What is personal data?
Personal data is defined as any information relating to an identified or identifiable living individual. For example, your name, date of birth, home address, personal email address, your tax identification number, fingerprints, phone number, performance data and medical information are all personal data, but it can also be any combination of data that can identify you.
What rights do individuals have?
The GDPR provides the following rights for individuals:
- The right to be informed
- The right of access
- The right to rectification
- The right to erasure
- The right to restrict processing
- The right to data portability
- The right to object
- Rights in relation to automated decision making and profiling.
You can find out more on the ICO website. Companies receive the majority of requests in relation to the right of access and the right to be forgotten.
What is the Right of Access?
A data subject access request is when an individual requests to have access to their personal data stored by the company. The purpose of the right to access personal data is to enable individuals to be in control of their own personal data (e.g. understand what personal data is processed and verify the lawfulness of processing).
All personal data which is being processed will need to be provided to the data subject, with a few exceptions to protect the data rights of other individuals and commercial secrets. In some cases, where the relevant systems provide for this, the right of access can be complied with by self-service by the data subject.
What is the Right to be Forgotten?
A data subject may make a request for the right to erasure, also known as the right to be forgotten. The right to be forgotten applies when the individual has withdrawn consent, the data was processed unlawfully, or the data must be erased to comply with a legal obligation. Only those data items are forgotten for which the company has no legal basis (e.g. tax, accounting, employment, legal) or business purpose to retain them.
The extent to which data can be erased depends on the nature of the personal data. For example, an employee cannot request that the fact that he or she worked at the company be deleted. When a data subject enacts their right to be forgotten, their personal data needs to be either deleted or anonymised such that it can no longer be linked back to the individual.
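The delete-or-anonymise decision described above can be sketched in a few lines. This is an illustrative example, not the author's implementation: the schema, the `IDENTIFYING_FIELDS` set and the `forget` helper are all assumptions made up for the sketch.

```python
# Fields treated as direct identifiers in this hypothetical schema.
IDENTIFYING_FIELDS = {"name", "email", "phone", "address"}

def forget(record, retention_required):
    """Handle a right-to-be-forgotten request for one record.

    With no legal or business basis to retain the record, delete it
    outright; otherwise strip the direct identifiers so the record
    can no longer be linked back to the individual.
    """
    if not retention_required:
        return None  # full deletion
    return {k: ("REDACTED" if k in IDENTIFYING_FIELDS else v)
            for k, v in record.items()}

# An invoice must be kept for tax purposes, so it is anonymised
# rather than deleted: the amounts survive, the identifiers do not.
invoice = {"name": "Alice", "email": "a@example.com",
           "amount": 99.0, "tax_year": 2018}
anonymised = forget(invoice, retention_required=True)
```

In a real system the retention decision would come from a documented retention schedule per record type, not a boolean passed in by the caller.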
How to automate responding to data subject requests
Below is a high-level diagram of the solution that automates the processes that need to be carried out to comply with the regulation.
This includes collecting data from different systems in order to fulfill a Subject Access Request and instructing systems to delete/anonymise data as part of a Right to be Forgotten request.
Process automation requires that asset inventories and data flows are first documented and personal data processing systems are identified.
The solution then integrates with system APIs and orchestrates data subject requests. It allows the operator (data privacy team) to generate a consumable report and carry out necessary identity verification checks before responding to the request. It also enables the operator to customise the report if needed.
This approach ensures personal data is collected or removed from all the systems in scope and accelerates the process of responding to the requestor within the 30-day period.
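The orchestration step can be sketched as a fan-out over the documented system inventory. Everything here is hypothetical (the `SYSTEMS` registry and the fetcher functions stand in for real API integrations); the point is the shape: verify identity first, then collect from every in-scope system into one report the privacy team can review and customise.

```python
def fetch_crm(subject_id):
    # Stand-in for a real CRM API call.
    return {"name": "Alice", "email": "alice@example.com"}

def fetch_billing(subject_id):
    # Stand-in for a real billing-system API call.
    return {"invoices": [{"id": 1, "amount": 99.0}]}

# Registry of in-scope systems, derived from the documented
# asset inventory and data flows.
SYSTEMS = {"crm": fetch_crm, "billing": fetch_billing}

def build_sar_report(subject_id, verified):
    """Fan a subject access request out to every in-scope system
    and compile the results into a single reviewable report."""
    if not verified:
        # The operator must complete identity checks before any
        # personal data is released to the requestor.
        raise PermissionError("identity verification required")
    return {name: fetch(subject_id) for name, fetch in SYSTEMS.items()}

report = build_sar_report("subject-42", verified=True)
```

A right-to-be-forgotten request would walk the same registry but instruct each system to delete or anonymise instead of fetch, which is what keeps both request types within the response deadline as the number of systems grows.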
The Oxford dictionary defines gamification as “the application of typical elements of game playing (e.g. point scoring, competition with others, rules of play) to other areas of activity to encourage engagement with a product or service”.
Bringing in an element of fun helps to achieve lasting change in human behaviour, as demonstrated by The Fun Theory project. Here are some videos to get an idea of how gamification can drive behavioural change to address social and business challenges:
Gamification can also be a powerful learning tool when applied to information security.
For example, CyberCIEGE enhances information assurance and cyber security education and training through the use of computer gaming techniques such as those employed in SimCity™. In the CyberCIEGE virtual world, users spend virtual money to operate and defend their networks, and can watch the consequences of their choices, while under attack.
In its interactive environment, CyberCIEGE covers significant aspects of computer and network security and defense. Players of this video game purchase and configure workstations, servers, operating systems, applications, and network devices. They make trade-offs as they struggle to maintain a balance between budget, productivity, and security. In its longer scenarios, users advance through a series of stages and must protect increasingly valuable corporate assets against escalating attacks.
CyberCIEGE includes configurable firewalls, VPNs, link encryptors and access control mechanisms. It includes identity management components such as biometric scanners and authentication servers. Attack types include corrupt insiders, trap doors, Trojan horses, viruses, denial of service, and exploitation of weakly configured systems. Attacker motives to compromise assets differ by asset and scenario, thereby supporting scenarios ranging from e-mail attachment awareness to cyber warfare.
Cybersecure: Your Medical Practice is another example of using gamification to educate people, this time in the context of HIPAA compliance.
This web-based security training module uses a game format that requires users to respond to privacy and security challenges often faced in a typical small medical practice. Users choosing the right response earn points and see their virtual medical practices flourish. But users making the wrong security decisions can hurt their virtual practices. In this version, the wrong decisions lead to floods, server outages, fire damage and other poor outcomes related to a lack of contingency planning.
Gamification can also be applied in user awareness training to change the behaviour of users in the organisation. One instance of this might be helping to recognize phishing links.
Anti-Phishing Phil is an interactive game that teaches users how to identify phishing URLs, where to look for cues in web browsers, and how to use search engines to find legitimate sites.
User studies have found that user education can help prevent people from falling for phishing attacks. However, it is hard to get users to read security tutorials, and many of the available online training materials make users aware of the phishing threat but do not provide them with enough information to protect themselves. Studies demonstrate that Anti-Phishing Phil is an effective approach to user education.
There is a free online course on gamification available. This course will teach you the mechanisms of gamification, why it has such tremendous potential, and how to use it effectively.
In the face of cyber attacks managing to breach industries as diverse as multimedia giants, global retailers and online social networks, the importance of securing our personal information has never been more in the spotlight. The growing demand to address these risks has been recognized across the information security field, and I was recently given the opportunity to participate in the launch of my firm’s own global privacy service line.
During this launch, I was lucky enough to meet many experienced privacy practitioners from all over the world, including New Zealand, South Africa, Japan and the USA. These security professionals generously shared their insights with me, based on their diverse experiences and individual challenges. Interestingly, I discovered that although privacy legislation varies country-by-country, the basic principles remain the same.
I was able to attend multiple interactive workshops, in which I learned how to perform privacy impact and maturity assessments. The week concluded with the IAPP Foundation certification, among others.
The experience I gained with data protection laws and the knowledge I obtained during these training sessions helped me to successfully obtain the Certified Information Privacy Manager and Certified Information Privacy Technologist credentials. These certifications will allow me to demonstrate my knowledge and skills and bring value to this truly exciting security arena.
Imagine a fridge that can tell when the food inside it is going off, or an oven that can cook food automatically. A world of everyday items, all smart, all connected – that’s the Internet of Things.
But is this a force for good – or for evil? Do the sacrifices we’ll have to make in terms of privacy and security outweigh the potential benefits?
I shared my view in the KPMG SLAT video.
Javvad Malik: One of the biggest challenges that companies are facing is securing at the same rate of innovation
Posted: April 3, 2014
Could you start by telling us about yourself?
My first proper job, during the work placement year of my degree, was as an IT security administrator at NatWest Bank; to be honest, I had no idea what the job was about. Actually, very few people knew what it was. But as a student doing a degree in Business Information Systems, I needed to specialise in something, so I took the job to see if I could make any sense of the field. I figured that the bank was a huge company, and if things didn’t work out in IT security, I could always explore opportunities in other departments.
Back in the day, there were around seven people in the security operations team for the whole bank, and only three in the monitoring team, with whom we had only intermittent communication. NatWest was then acquired by RBS and I remained in IT security for the next five years, during which I moved more to the project side of security, as opposed to the operations side. I had more interactions with the internal consultancy team, and their job appealed to me because they didn’t seem to need to keep so up to date with all the latest technologies from a hands-on perspective, and they made more money. I was unable to make an internal move, so I decided to get into contracting and stayed within financial services, where the majority of my roles involved arguing with auditors, resolving issues through internal consulting, being the middle-man between the business and pen-testers, project reviews, and the like.
On the side, I got very interested in blogging. Blogs were the fantastic new boom: readily accessible and cheap for everybody. Suddenly everybody with a blog felt like a professional writer, which I enjoyed, but I found it a difficult area in which to differentiate yourself or bring a unique perspective. I then tried video blogging, which I discovered was bloody hard, because it takes a lot of skill to look like a professional instead of an idiot most of the time. But because I was among the first to get into this delivery mode, my profile rose quite quickly within the security community, and perhaps beyond it. One of the advantages of video blogging that I uncovered is that people who watch you can somehow relate to you better than if they just read your work: they can see your body language, hear your voice, your tone, everything. The result is quite funny, because it often happens that when I go to a conference, somebody will greet me as if I’m their best friend. Because they see me so often on YouTube, they feel like they know me. It’s very nice when people acknowledge you like that, and it goes to show the impact the delivery channel really has.
So because of this impact, one day, Wendy, the research director at 451 Research, asked me if I would be interested in becoming an analyst. In reality I had no idea what an analyst did. She said that I would have to speak to vendors and write about them, which sounded a lot like blogging to me. She immediately said, “yes, it is pretty much like blogging,” to which I then replied, “well, I have my demands. I do video blogging, I’d like to attend and speak at conferences and I don’t want any restrictions here, because I know that many companies impose restrictions around this kind of activity.”
Currently I’ve been an analyst for the past two years, which I have enjoyed very much and has allowed me to broaden my skillset; not to mention give me the opportunity to meet a ton of extremely talented people.
Where do you predict the security field will go?
When I was starting in the field, nobody really knew what security was. Then came the perception that it was all about hackers working from their mums’ basements. Then, they were assumed to be IT specialists, and then specialists who didn’t necessarily know much about IT but who knew more about risk and/or had a government background, and now everyone is just confused.
Security itself is very broad. It is kind of like medicine: you have GPs who know a little bit about everything, which is the base level of knowledge. For complex cases they will refer you to other doctors who specialise in, say, blood, heart, eyes, ears, and other specific body parts. The same applies to security. You will have some broad generalists and others who are technical experts or those who are more into security development and can tell you how to use code more securely. You then have non-technical security people, who know more about understanding the business, the risk, and how to implement security into it. You also get product or technology specific experts who are only there to maybe tune your SIEMs for you, forensics experts, incident-response specialists, and so on. You will find specialists with overlapping skills, just as you will find those who possess unique abilities as well. Security has exploded “sideways” like that. So you can call lots of people “security experts” but in reality they are very different from each other, which means that they are not necessarily interchangeable. You can’t, obviously, switch a non-technical person for a technical one. I believe that one of the signs of immaturity within the industry is that people still don’t recognize these differences, which often leads to lots of finger-pointing in situations like: “you don’t know how to code, how can you call yourself a security professional? You don’t understand what the business does. You’ll never be a security professional.” These kinds of things, I think, are the natural growing pains of this and any industry.
What will probably happen going forward is that as things become increasingly interconnected and people’s whole lives move more and more online, security will become more and more visible. Additionally, we will see the need to extend capabilities outside of the enterprise into the consumer space. We are already seeing an overlap between personal and corporate devices. So I think that everything will kind of bleed into everything else: some areas will become operationalised, others will be commoditised, but there will continuously be a need for security. What that will look like will probably be different to what we see today.
What kind of challenges do you think companies will face in the future in terms of security?
One of the biggest challenges that companies are facing is securing at the same rate as they innovate. Every company wants to be the first to develop a new way to hook in with its customers, whether that is being the first with a new app that lets consumers do banking, payments and inter-payments, and so on, which sometimes comes at the cost of security. Balancing the business case between the perceived benefits and the security risks can be very challenging. The speed at which businesses want and need to innovate, because that’s what the market is forcing them to do, is making security cost-prohibitive.
The other challenge is that the business model for many companies lies almost exclusively in advertising revenue. Nearly every mobile app, social media site or other free online service typically generates either its primary or supplementary revenue by selling user information. With so many companies trying to grab data and sell it to the highest bidder, we have a big challenge in educating users about where the security risks lie, as well as in trying to enforce good security practices within the vendor space without breaking business models.
How would you say companies should then approach this challenge in the first place?
The way that companies typically “solve” this challenge is by burying their head in the sand and outsourcing the problem. So they will go out to another company and ask them: “can you offer us a secure platform to do it?” To which they answer, “of course we can. Just give us your money.” The challenge is that companies and individuals don’t appreciate that poor security choices made today may have an impact that will not be immediately felt, but perhaps in a few months’ or years’ time. Sadly, by then, it’s usually too late. So this is what both companies and individuals need to be careful about.
Returning to the point about security professionals being very diverse, what’s the role of security professionals from the risk governance and compliance perspective? Can you elaborate more on the security culture within a company and how can it be developed?
Security culture is a very difficult thing: it is not impossible, but it relies on understanding human behaviour more than technical aspects. Understanding human behaviour means understanding personality types and how they respond to different environments and stimuli, which can be more challenging than understanding technical aspects.
The general observation that I can make about human behaviour, regardless of personality type, is that people don’t tend to be aware of what they are giving up. The best and most prevalent example would be how much in demand mobile apps are and how insecure they are, because people unknowingly give away lots of data in order to have access to them. Chris Eng from Veracode makes an excellent analogy by saying that “people usually don’t care what they are agreeing to as long as they can still fling birds against pigs.” This is the crux of it. People don’t think it makes much of a difference if they give their email address away, or if they let an app access their GPS data or their contacts, because they can’t perceive a direct impact. The problem is that this impact might not be felt for ten years. So if you are giving data to Facebook, Instagram and Whatsapp, for example, you can’t really predict what will happen later on. In the last year Facebook acquired both Instagram and Whatsapp. So now you have a single company that holds all of your photo data that you maybe didn’t want on Facebook, along with all the stats on your behaviour that you’ve been feeding to Facebook, along with the people you are chatting to, and so on. So now Facebook has an incredible amount of information about you and can target and market a lot better. Someone could also use all this data for any purpose. I’m not saying that Facebook or other companies gather users’ personal data for malicious purposes, but it reminds me of the saying, “The path to hell is paved with good intentions.”
How can you make people change their behaviour?
You have to make it real and personal for them. You have to make that personal connection. In security we tend to say: “we have 50,000 phishing emails that come through every day, and people click on them.” But to the individual user, that doesn’t really have much of an impact. Are we making this information personal? The communication methods and techniques needed to change behaviour already exist; we don’t need to reinvent them, least of all with security people who don’t necessarily understand how communication works or who are not the best communicators to begin with.
We can remember how 15-20 years ago, nobody cared about recycling, because nobody really cared about the environment. It was just a few people in Greenpeace with long hair and who smelled a bit funny who were trying to stop the oil companies from drilling into the sea, for example. Now, you go into any office and you find 10 bins for every different type of recycling material, which everybody now uses. It’s been a long-term campaign which finally created that social change, and which now makes it unacceptable for people to behave in another way. As you walk on the street, you will see that very few people, if any, throw wrappers on the floor. They usually hold onto them until they get to a bin and then they dispose of them. We need to adopt the same practices to change behaviour in security and in many cases that means actually letting people who know how to market and communicate do that for us instead of trying to do it all ourselves.