How to set up a bug bounty program

Bug bounty programmes are becoming the norm in larger software organisations, but you don’t have to be Google or Facebook to run one for continuous security testing and engagement with the security community.

Setting one up can be easier than you might think, as there are platforms like HackerOne and Bugcrowd that can help with centralised management. They also offer the option of introducing the programme gradually, starting with private participation before opening it to the whole world.

At a minimum, you can have a dedicated email address (e.g. security@yourexamplecompanyname) that security researchers can use to report security issues. A page transparently explaining the participation terms, scope and payout rates also helps, as does appropriate tooling to track issues and verify fixes.
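
One lightweight convention worth knowing here is the security.txt standard (RFC 9116): a small file served at /.well-known/security.txt that tells researchers where to send reports. A minimal sketch, with placeholder contact address and URLs:

```text
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Policy: https://example.com/security-policy
Preferred-Languages: en
```

The Contact and Expires fields are required by the RFC; Policy is a convenient place to link the participation terms mentioned above.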

Even if you have none of the above, security researchers can still find vulnerabilities in your product and report them to you responsibly. You effectively get free testing, but you can exercise only limited control over it, so it’s a good idea to have a process in place to keep researchers happy enough that they don’t disclose issues publicly.

There is probably nothing more frustrating for a security researcher than receiving no response (apart, perhaps, from being threatened with legal action), so communication is key. At the very least, thank them and request more information to help verify the finding while you kick off the investigation internally. Bonus points for keeping them in the loop on remediation plans, where appropriate.

There are some prerequisites for setting up a bug bounty programme, though. Beyond the obvious budget for paying researchers for the vulnerabilities they discover, there is a broader need for engineering resources to be available to analyse reported issues and work on improving the security of your products and services. What’s the point of setting up a bug bounty programme if no one is looking at the findings?

Many companies might therefore feel they are not ready for a bug bounty programme. They may have too many known issues already and fear being overwhelmed with duplicate submissions. These can indeed be hard to manage, so such organisations are better off focusing their efforts on remediating known vulnerabilities and implementing measures to prevent them (e.g. setting up a Content Security Policy).
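
As an illustration, a Content Security Policy is delivered as an HTTP response header. The directive values below are a common restrictive starting point, not a universal recommendation; they would need tailoring to the scripts and embeds your site actually uses:

```text
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; frame-ancestors 'none'; base-uri 'self'
```

Deploying it first as Content-Security-Policy-Report-Only lets you observe what would break before enforcing it.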

They could also consider introducing security tests into the delivery pipeline, as this will help catch potential vulnerabilities much earlier in the process; the bug bounty programme then becomes a fallback mechanism rather than the primary way of identifying security issues.
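
As a sketch of what security tests in the pipeline can look like, here is a hypothetical GitHub Actions job running two widely used open-source scanners, Bandit (static analysis of Python code) and pip-audit (known-vulnerability checks on dependencies). The job, path and tool choices are illustrative assumptions; adapt them to your own stack and CI system:

```yaml
# Hypothetical CI job: fail the build when scanners find security issues.
name: security-checks
on: [push, pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install scanners
        run: pip install bandit pip-audit
      - name: Static analysis of application code
        run: bandit -r src/
      - name: Check dependencies for known vulnerabilities
        run: pip-audit -r requirements.txt
```

A failing scan blocks the merge, which is exactly the "much earlier in the process" feedback loop described above.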

Threat modelling 101

Using abstractions to think about risk is a useful technique for identifying the ways an attacker could compromise a system.

There are various approaches to threat modelling, but at its core it’s about understanding what we are building, what can go wrong with it, and what we are going to do about it.
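
To make those core questions concrete, here is a minimal sketch (the entries are made up for a hypothetical web application, not a real model) that records threats against that frame and sorts them for review:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    component: str           # what are we building?
    what_can_go_wrong: str   # what can go wrong with it?
    mitigation: str          # what are we going to do about it?
    risk: int                # 1 (low) to 5 (high), a rough prioritisation aid

# Illustrative entries for a hypothetical web application.
model = [
    Threat("login form", "credential stuffing with leaked passwords",
           "rate limiting and breached-password checks", 4),
    Threat("file upload", "malicious file executed on the server",
           "store outside web root, validate type, scan", 5),
    Threat("admin API", "missing authorisation on internal endpoints",
           "enforce role checks on every route", 3),
]

# Review the highest-risk items first.
for t in sorted(model, key=lambda t: t.risk, reverse=True):
    print(f"[{t.risk}] {t.component}: {t.what_can_go_wrong} -> {t.mitigation}")
```

Even a simple table like this forces the three questions to be answered explicitly for each component.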

Here is a good video by SAFECode introducing the concept:

Cyber incident readiness

As many organisations are recognising and experiencing first-hand, cyber-attacks are no longer a matter of if, but when. Recent cyber breaches at major corporations highlight the increasing sophistication, stealth, and persistence of cyber-attacks that organisations are facing today. These breaches are resulting in increased regulatory and business impact.


Understanding your threat landscape

Identifying applicable threats is a good step to take before defining the security controls your organisation should put in place. There are various techniques to help you with threat modelling, but I wanted to give you some high-level pointers in this blog to get you started. Of course, all of these should be tailored to your specific business.

I find it useful to think about potential attacks in three broad categories:

1. Commoditised attacks. Usually not targeted and involve off-the-shelf malware. Examples include:

2. Tailored attacks. As the name suggests, these are tailored and can vary in degree of sophistication. Examples include:

3. Accidental. Not every data breach is triggered by a malicious actor, so it is important to recognise that mistakes happen. Unfortunately, they sometimes lead to undesired consequences, like the below:

Information security professionals can use the above examples in communications with their business stakeholders not to spread fear, but to present certain security challenges in context.

It’s often helpful to make it a bit more personal by defining specific threat actors, their targets, motivations and impact on the business. Again, the table below serves as an example and can be used as a starting point for you to define your own.

Threat actor    | Description                  | Motivation                          | Target                                                | Impact on business
Organised crime | International hacking groups | Financial gain                      | Commercial data; personal data for identity fraud     | Reputational damage; regulatory fines; loss of customer trust
Insider         | Intentional or unintentional | Human error; grudge; financial gain | Intellectual property; commercial data                | Destruction or alteration of information; theft of information; reputational damage; regulatory fines
Competitors     | Espionage and sabotage       | Competitive advantage               | Intellectual property; commercial information         | Disruption or destruction; theft of information; reputational damage; loss of customers
State-sponsored | Espionage                    | Political                           | Intellectual property; commercial data; personal data | Theft of information; reputational damage

You can then use your understanding of assets and threats relevant to your company to identify security risks. For instance:

  • Failure to comply with relevant regulation – revenue loss and reputational damage due to fines and unwanted media attention as a result of non-compliance with GDPR, PCI DSS, etc.
  • Breach of personal data – regulatory fines, potential litigation and loss of customer trust due to accidental mishandling, external system compromise or insider threat leading to exposure of personal data of customers
  • Disruption of operations – decreased productivity or inability to trade due to compromise of IT systems by malicious actor, denial of service attacks, sabotage or employee error

Again, feel free to use these as examples, but always tailor them based on what’s important to your business. It’s also worth remembering that this is not a one-off exercise: tracking your assets, threats and risks should be part of your security management function and be incorporated into operational risk management and continuous improvement cycles.

This will allow you to demonstrate the value of security through pragmatic and prioritised security controls, focusing on protecting the most important assets, ensuring alignment to business strategy and embedding security into the business.

Online Safety and Security


We live in the developed world, where it is now finally safe to walk the city streets. Police and security guards are there to protect us in the physical world. But who is watching out for us when we are online?

Issues:

  1. Cybercrime and state-sponsored attacks are becoming more and more common. Hackers are shifting their focus from companies to individuals. Cars, airplanes, smart homes and other connected devices, along with personal phones, can be exploited by malicious attackers.
  2. Online reputation is becoming increasingly important. Potential business partners conduct thorough research prior to signing deals. A bad online reputation dramatically decreases your chances of succeeding in business and other areas of life.
  3. Children’s safety online is at risk from cyber-bullying and identity theft; with the rapid development of mobile technology and geolocation, tracking the whereabouts of your children is easier than ever, opening opportunities for kidnappers or worse.

Solution:

A one-stop-shop for end-to-end protection of online identity and reputation for you and your children.

A platform of personalised and continuous online threat monitoring secures you, your connections, applications and devices and ensures safety and security online.

Image courtesy of winnond / FreeDigitalPhotos.net

Daniel Schatz: It is generally appreciated if security professionals understand that they are supposed to support the strategy of an organisation

Interview with Daniel Schatz – Director for Threat & Vulnerability Management


Let’s first discuss how you ended up doing threat and vulnerability management. What is your story?

I actually started off as a banker at Deutsche Bank in Germany but was looking for a more technical role, so I hired on with Thomson Reuters as a Senior Support Engineer. I continued on to other roles in the enterprise support and architecture space with an increasing focus on information security (as that was one of my strong interests), so it was just logical for me to move into that area. I particularly liked to spend my time understanding the developing threat landscape and the existing vulnerabilities with the potential to impact the organisation, which naturally led me to become part of that team.

What are you working on at the moment and what challenges are you facing?

On a day-to-day basis I’m busy trying to optimise the way vulnerability management is done and providing advice on current and potential threats relevant to the organisation. One of the challenges in my space is finding a balance between getting the attention of the right people, so I can notify them of concerning developments and situations, and doing so in a non-alarmist way. It is very easy to deplete people’s security goodwill, especially if they have many other things to worry about (like budgets, project deadlines, customer expectations, etc.). On the other hand, they may be worried about things they picked up in the news which they shouldn’t waste time on, so providing guidance on what they can put aside for now is also important. Other than that, there are the usual issues that any security professional will face – limited resources, competing priorities with other initiatives, etc.

Can you share your opinion on the current security trends?

I think it is less valuable to look at current security trends, as they tend to be defined by the media and press and reinforced by vendors to suit their own strategies. Take nation-state cyber activity, for example: it has been going on for at least a decade, yet we now perceive it as a trend because of the massive reporting on it. I believe it is more sensible to spend time anticipating where the relevant threat landscape will be in a few months’ or years’ time and plan against that, instead of trying to catch up with today’s threats by buying the latest gadget. Initiatives like the ISF Threat Horizon are good ways to start with this, or you can follow a DIY approach like the one I describe in my article.

What is the role of the users in security?

I think this is the wrong way to ask the question, to be honest. Culture and mindset are two of the most important factors in security, so the question should frame the relationship between users and security the right way. To borrow a phrase from JFK: do not ask what users can do for security, ask what security can do for your users.

What does a good security culture look like?

One description of culture I like defines it as ‘an emotional environment shared by members of the organisation; it reflects how staff feel about themselves, about the people for whom and with whom they work, and about their jobs.’ In this context it implies that security is part of the fabric of an organisation, naturally woven into every process and interaction without being perceived as a burden. We see this at work in the Health & Safety area, but that didn’t happen overnight either.

How can one develop it in their company?

There is no cookie-cutter approach, but talking to the Health & Safety colleagues would not be the worst idea. I also think it is generally appreciated if security professionals understand that they are supposed to support the strategy of an organisation and recognise how their piece of the puzzle fits in. Pushing for security measures that would drive the firm out of a competitive market through increased cost or lost flexibility is not a good way to go about it.

What are the main reasons for users’ insecure behaviour?

Inconvenience is probably the main driver of such behaviour. Everyone is unconsciously and constantly doing a cost/benefit calculation; if a user’s expected utility from opening the ‘Cute bunnies’ attachment exceeds the cost of ignoring all those warning messages, a reasonable decision was made, albeit an insecure one.

What is the solution?

Either raise the cost or lower the benefit. While it will be difficult to teach your staff to dislike cute bunnies, raising the cost may work. Sticking with the previous example, this could be done by imposing draconian punishments for opening malicious attachments or by deploying technology to help users stay compliant. There is an operational and economic perspective to this, of course: if employees are scared to open attachments because of the potential punishment, it will likely have a chilling effect on your business communications.

Some will probably point to ‘security awareness training’ as the answer here; while I think there is a place for such training, its direct impact is low in my view. If security awareness training aims to change an organisation’s culture, you’re on the right track, but trying to train away users’ utility decisions will fail.

Thank you Daniel!

Konrads Smelkovs: Very few insiders develop overnight

Interview with Konrads Smelkovs – Incident Response


Could you please tell us a little bit about your background?

I work at KPMG as a manager, and I started working with security when I was around thirteen years old. I used to go to my mother’s work, because there was nobody to look after me. There was an admin there who ran early versions of Linux, which I found rather exciting. I begged him to give me an account on his Linux box, but I didn’t know much about it, so I started searching for information on AltaVista. The only things you could find there were how to hack into Unix, and there were no books at the time I could buy. I downloaded some scripts off the internet and started running them. Some university then complained that my scripts were hacking them, though I didn’t really understand much of what I was doing. So my account got suspended for about half a year, but I got hooked, found it rather interesting and exciting, and developed an aspiration in this direction. I then did all sorts of jobs, but I wanted a job in this field. So I saw an ad in the newspaper and applied for a job at KPMG back in Latvia, six or seven years ago. I was asked what it was I could do, and I explained the sort of things I had done in terms of programming: “a little bit of this, a little bit of that…”. I did some reading about security before the interview, and they then asked me if I could do penetration testing. I had a vague idea of what it entailed, because I understood web applications quite well. So I said, “yeah, sure. I can go ahead and do that because I understand these things quite well.”

What are you working on at the moment?

In the past I used to focus mainly on break-ins. Now people come to me for advice on how to detect ongoing intrusions, which takes up a large portion of my time at the moment, but more at a senior level. I do threat modelling for corporations: I have to know how to break in in order to give them reasonable advice, but the work is mainly in the form of PowerPoint presentations and meetings.

When you develop threat models for corporations, how do you factor in insider threats as well as the human aspect of security?

I believe the industry oscillates from one extreme to the other. People spoke a lot about “risk” but understood very little about what that risk entailed. They then spoke about IT risks, but it was more of a blanket message. Then it all became very entangled, and there was talk of vulnerability thinking: “you have to patch everything.” But then people realised that there is no way to patch everything, and started talking about defence strategies, which pretty much everybody misunderstands, and so they started ignoring vulnerabilities. This happened especially because we all had firewalls, but we know those don’t help either. So what we are trying to do here is to spread common sense in one go. When we talk about threat models, we have to talk about who is attacking, what they are after, and how they will do it. The “who” will obviously have a lot of different properties: which industry they operate in, why they are doing it, what their restrictions and actions are, and so on. Despite the popular belief in the press (The Financial Times, CNN, and so on), where everybody talks about APTs, these amazing hackers hacking everything, the day-to-day reality is quite different. There are two main things people are concerned about. One is insider threats, because insiders have legitimate access and just want to elevate that access by copying or destroying information. The second is malware, which is such a prevalent thing. Most malware is deployed by criminals who are not specifically after you, but after some of your resources: you are not special to them. There are very few industries where nation-state hacking or competitive hacking is common. So when we talk about threat models, we mainly talk about insider threats within specific business units and how they work. This is what I think people are most afraid of: the exploitation of trust.

How do you normally advise executives in organisations about proper information security? Do you focus on building a proper security culture, on awareness training, on technological/architectural means, or what do you consider the most important thing they should keep in mind?

We need to implement lots of things. I believe a lot of information security awareness training is misguided. It is not about teaching people how to recognise phishing and that sort of thing; it is about explaining to them why security is important and how they play a part in it.

Very few insiders develop overnight, and I believe there is a pattern; even then, insiders are rare. Most of the time you have admins who are trying to make themselves important or who, out of vengeance, try to destroy things. So whenever you have destruction of information, you have to look at what kind of privileged access there is. Sometimes people copy things in bulk when they leave the company, to pass them to the company’s competitors.

So let’s say you develop a threat model and present it to the company, whose executives accept it and use it to develop a policy which they then implement and enforce. Sometimes these policies may clash with end users’ productivity and affect the way business is done within the company, and users might resist new controls because privileges get taken away. How would you factor in this human aspect, in order to avoid this unwanted result?

Many companies impose new restrictions on their employees without analysing the unwanted results they may lead to. For example, if a company doesn’t facilitate a method for sharing large files, employees might resort to Dropbox, which could represent a potential threat. Smart companies learn that it is important to offer alternatives to the privileges they remove from their employees.

How do you go by identifying what the users need?

They will often tell you what they need, and they might even have a solution in mind. It’s really about offering their solutions securely. It is rarely the case that you have to tell them that what they want is very stupid and that they simply should not do it.

Finally, apart from sharp technical skills, what other skills would you say security professionals need in order to qualify for a job?

You have to know the difference between imposing security and learning how to make others collaborate with security. Having good interpersonal skills is very important: you need to know how to convince people to change their behaviour.

Thank you Konrads.

Preventing Insider Attacks

An insider attack is one of the biggest threats faced by modern enterprises, and even a good working culture might not be sufficient to prevent it. Companies implement sophisticated technology to monitor their employees, but it is not always easy to distinguish between an insider and an outside attack.

Those who target and plan attacks from the outside might create strategies for obtaining insider knowledge and access by either resorting to an existing employee, or by making one of their own an insider.


According to CERT, a malicious insider is ‘a current or former employee, contractor, or business partner who has or had authorised access to an organisation’s network, system, or data and intentionally exceeded or misused that access in a manner that negatively affected the confidentiality, integrity, or availability of the organisation’s information.’ Furthermore, CERT splits insider crimes into three categories:

  • Insider IT sabotage – the use of IT to direct specific harm at an organisation or an individual.
  • Insider theft of intellectual property – the use of IT to steal proprietary information from an organisation.
  • Insider fraud – the use of IT to add, modify and/or delete an organisation’s data in an unauthorised manner for personal gain; it also includes the theft of information needed for identity crime.

But how can companies detect and prevent such attacks?

In his paper Framework for Understanding and Predicting Insider Attacks, Eugene Schultz suggests that insiders make human errors which, when spotted, can help in preventing such threats. Therefore, constant monitoring, especially focused on low-level employees, is one of the basic measures for preventing insider attacks and gathering evidence.

There are a number of precursors of insider attacks that can help to identify and prevent them:

  • Deliberate markers – These are signs which attackers leave intentionally. They can be very obvious or very subtle, but they all aim to make a statement. Being able to identify the smaller, less obvious markers can help prevent the “big attack.”
  • Meaningful errors – Skilled attackers tend to try and cover their tracks by deleting log files but error logs are often overlooked.
  • Preparatory behaviour – Collecting information, such as testing countermeasures or permissions, is the starting point of any social engineering attack.
  • Correlated usage patterns – It is worthwhile to invest in investigating the patterns of computer usage across different systems. This can reveal a systematic attempt to collect information or test boundaries.
  • Verbal behaviour – Collecting information or voicing dissatisfaction about the current working conditions may be considered one of the precursors of an insider attack.
  • Personality traits – A history of rule violation, drug or alcohol addiction, or inappropriate social skills may contribute to the propensity of committing an insider attack.
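
The “correlated usage patterns” precursor can be illustrated with a toy log-analysis sketch; the event data, system names and threshold below are made up for the example. The idea is simply to flag any user who touches an unusually large number of distinct systems within a review window:

```python
from collections import defaultdict

# Toy access log: (user, system) events collected over a review window.
events = [
    ("alice", "crm"), ("alice", "crm"),
    ("bob", "crm"), ("bob", "hr"), ("bob", "finance"),
    ("bob", "source-control"), ("bob", "backups"),
    ("carol", "hr"),
]

def flag_broad_access(events, threshold=4):
    """Return users who accessed at least `threshold` distinct systems."""
    systems_by_user = defaultdict(set)
    for user, system in events:
        systems_by_user[user].add(system)
    return sorted(u for u, s in systems_by_user.items() if len(s) >= threshold)

print(flag_broad_access(events))  # bob touched five distinct systems
```

A real deployment would correlate far richer signals, but even this shape of query can reveal a systematic attempt to collect information or test boundaries.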

Some insider attackers are merely pawns for another inside or outside mastermind. Such a pawn is usually persuaded or trained to perpetrate or facilitate the attack, alone or in collusion with other (outside) agents, motivated by the expectation of personal gain.

Organisations may unknowingly make themselves vulnerable to insider attacks by not screening newcomers properly during recruitment, not performing threat analyses, or failing to monitor the company thoroughly. Perhaps the most important thing they overlook is keeping everybody’s morale high by communicating to employees that they are valued and trusted.

Understanding the Attackers


When defining attack vectors, it is useful to know who the attackers are. One should understand that attackers are people too, who differ in resources, motivation, ability and risk propensity. According to Bruce Schneier, author of Beyond Fear, the categories of attackers are:

Opportunists

The most common type of attacker. As the category indicates, they spot and seize an “opportunity” and are convinced that they will not get caught. It is easy to deter such attackers via cursory countermeasures.

Emotional attackers

They may accept a high level of risk and usually want to make a statement through their attack. The most common motivation is revenge against an organisation for an actual or perceived injustice. Although emotional attackers feel powerful when causing harm, they sometimes “hope to get caught” as a way of resolving the issues they were unhappy with but unable to change.

Cold intellectual attackers

Skilled and resourceful professionals who attack for their own gain or are employed to do so. They target information, not the system, and often use insiders to get it. Unlike opportunists, cold intellectual attackers are not discouraged by cursory countermeasures.

Terrorists

They accept high risk to gain visibility and make a statement. They are not only hard to deter by cursory countermeasures, but can even see them as a thrill.

Friends and relations

They may introduce a problem to both individuals (in the form of financial fraud, for example) and companies (by abusing authorization credentials provided to legitimate employees). In this scenario, a victim and an attacker are sharing physical space, which makes it very easy to gain login and other sensitive information.