I was asked to deliver a keynote in Germany at the Security Transparent conference. Of course, I agreed. Transparency in security is one of the topics that is very close to my heart and I wish professionals in the industry not only talked about it more, but also applied it in practice.
Back in the old days, security through obscurity was one of the many defence layers security professionals employed to protect against attackers. On the surface, it’s hard to argue with such logic: the less the adversary knows about our systems, the less likely they are to find an exploitable vulnerability.
There are some disadvantages to this approach, however. For one, you now need to tightly control access to the restricted information about the system to limit the possibility of leaking sensitive details of its design. But this also limits the scope for testing: if only a handful of people are allowed to inspect the system for security flaws, the chances of actually discovering them are greatly reduced, especially when it comes to complex systems. Cryptographers were among the first to realise this. One of Kerckhoffs’s principles states that “a cryptosystem should be secure even if everything about the system, except the key, is public knowledge”.
Modern encryption algorithms are not only completely open to the public, exposing them to intense scrutiny, but have often been developed through open competition, as is the case, for example, with the Advanced Encryption Standard (AES). If a vendor boasts about using their own proprietary encryption algorithm, I suggest giving them a wide berth.
Cryptography aside, you can approach transparency from many different angles: the way you handle personal data, respond to a security incident or work with your partners and suppliers. All of these and many more deserve the attention of the security community. We need to move away from ambiguous privacy policies and the desire to save face by not disclosing a security breach affecting our customers or downplaying its impact.
The way you communicate internally and externally while enacting these changes within an organisation matters a lot, which is why I focused on this communication element while presenting at Security Transparent 2019. I also talked about friction between security and productivity and the need for better alignment between security and the business.
I shared some stories from behavioural economics, criminology and social psychology to demonstrate that challenges we are facing in information security are not always unique – we can often look at other seemingly unrelated fields to borrow and adjust what works for them. Applying lessons learned from other disciplines when it comes to transparency and understanding people is essential when designing security that works, especially if your aim is to move beyond compliance and be an enabler to the business.
Remember, people are employed to do a particular job: unless you’re hired as an information security specialist, your job is not to be an expert in security. In fact, badly designed and implemented security controls can prevent you from doing your job effectively by reducing your productivity.
After all, even Kerckhoffs recognised the importance of context and the burden that security can place on people. One of his lesser-known principles states that “given the circumstances in which it is to be used, the system must be easy to use and should not be stressful to use or require its users to know and comply with a long list of rules”. He was a wise man indeed.
JSON Web Tokens (JWTs) are quickly becoming a popular way to implement information exchange and authorisation in single sign-on scenarios.
As with many things, this technology can be quite secure or very insecure, and a lot depends on the implementation. This opens a number of possibilities for attackers to exploit vulnerabilities if the standard is poorly implemented or outdated libraries are used.
Here are some of the possible attack scenarios:
- Attackers can change the token’s algorithm to ‘none’, declaring it unsigned; a server that honours this value will skip signature verification entirely and accept the token as valid
- Attackers can change the algorithm from ‘RS256’ to ‘HS256’ and use the server’s public key to generate an HMAC signature for the token. If the server trusts the algorithm declared in the JWT header instead of validating the algorithm it actually used to issue the token, it will verify the token as HS256, using its public key as the HMAC secret, and accept the forged signature
- JWTs signed with the HS256 algorithm are susceptible to secret key recovery when weak keys are used. Attackers can conduct offline brute-force or dictionary attacks against the signature, since no interaction with the server is needed to test candidate keys once a token has been issued
- Sensitive information (e.g. internal IP addresses) can be revealed, as everything inside the JWT payload is merely Base64-encoded, not encrypted
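To make the first scenario concrete, here is a minimal sketch, using only Python’s standard library, of how an attacker could forge an unsigned token. The header and payload values are made up for illustration; a correctly configured server would reject this token outright.

```python
import base64
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe Base64 for each segment
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Forged token: the header declares the 'none' algorithm,
# and the signature segment is simply left empty.
header = {"alg": "none", "typ": "JWT"}
payload = {"sub": "1234567890", "admin": True}
forged = (b64url(json.dumps(header).encode()) + "."
          + b64url(json.dumps(payload).encode()) + ".")

print(forged)
```

Note how trivially this decodes back to readable JSON, which also illustrates the last point above: nothing in the payload is hidden from anyone who obtains the token.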
I recommend the following steps to address the concerns above:
- Reject tokens that declare the ‘none’ algorithm when a signing key was used to issue them
- Use an appropriate key length (e.g. 256-bit) to protect against brute-force attacks
- Adjust the JWT validity period to the required security level (e.g. from a few minutes up to an hour). For extra security, consider using reference tokens if you need the ability to revoke or invalidate them
- Use HTTPS/TLS to ensure JWTs are encrypted in transit between client and server, reducing the risk of man-in-the-middle attacks
- Overall, follow the best practices for implementing them, use only up-to-date and well-maintained libraries, and choose the right algorithm for your requirements
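As a rough illustration of the first recommendations, here is a stdlib-only sketch of HS256 signing and verification that pins the expected algorithm, compares signatures in constant time, and checks the expiry claim. The function names and example secret are my own; in production you would use a maintained library (e.g. PyJWT) rather than rolling your own.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_hs256(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (b64url(json.dumps(header).encode()) + "."
                     + b64url(json.dumps(payload).encode()))
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def verify_hs256(token: str, secret: bytes) -> dict:
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    # Pin the algorithm: never let the token's header choose the verifier,
    # which defeats both the 'none' and the RS256->HS256 confusion attacks
    if header.get("alg") != "HS256":
        raise ValueError("unexpected algorithm")
    expected = hmac.new(secret, (header_b64 + "." + payload_b64).encode(),
                        hashlib.sha256).digest()
    # Constant-time comparison avoids leaking signature bytes via timing
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    payload = json.loads(b64url_decode(payload_b64))
    if "exp" in payload and payload["exp"] < time.time():
        raise ValueError("token expired")
    return payload

secret = b"use-a-random-256-bit-key-in-practice!!"
token = sign_hs256({"sub": "42", "exp": int(time.time()) + 600}, secret)
claims = verify_hs256(token, secret)
```

The key design point is that the server decides which algorithm is acceptable up front; the token merely has to match it.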
OWASP have more detailed recommendations with Java code samples alongside other noteworthy material for common vulnerabilities and secure coding practices, so I encourage you to check it out if you need more information.
For everyone interested in the history of information security, I highly recommend visiting Bletchley Park. Among other things, visitors can explore the legendary British WW2 codebreaking huts and learn more about cryptography, and the Enigma machine in particular.
There is even a computer simulation available that explains in simple terms the basic principles behind the device.
Some interesting facts about Alan Turing and the more modern exhibitions will definitely spark the curiosity of any visitor.
I delivered a 1.5-day Information Security Concepts course at KPMG UK.
We covered a wide range of topics, including information security risk management, access control, threat and vulnerability management, etc.
According to the feedback I received after the course, the participants were able to understand the core security concepts much better and, more importantly, apply their knowledge in practice.
Leron is very engaging and interesting to listen to
Leron has the knowledge and he’s very effective making simple delivery of a complex topic
Leron is an effective communicator and explained everything that he was instructing on in a clear and concise manner
There will be continuous collaboration with the Learning and Development team to deliver this course to all new joiners to the Information Protection and Business Resilience team at KPMG.
I participated in the UK Cyber Security Challenge.
Our university team won the competition.
It was an interesting experience and through teamwork we solved all the challenging puzzles other universities had submitted.
Try to crack Christmas Cipher 2012 to practice for upcoming UK Cyber Security challenges.
Enigma Machine Spreadsheet
You can try encrypting your own messages using this spreadsheet
How AES encryption works – a flash clip demonstrating Rijndael cipher in action
Public Key Cryptography: Diffie-Hellman Key Exchange