Aligning the OWASP Application Security Verification Standard with the SABSA architecture framework.
The OWASP Application Security Verification Standard (the Standard) is used at one of my clients to help develop and maintain secure applications. It has been used as a blueprint to create a secure coding checklist specific to the organisation and the applications in use.
Below is an excerpt from the Standard related to the authentication verification requirements:
The Standard provides guidance on specific security requirements corresponding to the Physical layer of the SABSA architecture.
As there’s no clear link to the business requirements, let’s align the two frameworks.
The first step is to gain an understanding of Contextual and Conceptual architectures.
From analysing the company’s corporate strategy I was able to derive multiple business attributes relevant to the shareholders:
After a workshop with the CIO and IT managers in various business units, I’ve defined the following IT attributes supporting the main business attributes and the relationships between them:
How does the security function support the wider IT objectives and corresponding attributes? After a number of workshops and analysis of the security strategy document I’ve created a number of security attributes. Below is an example correlating to the business and IT attributes in scope:
The next step is to develop the security attribute mapping to the requirements of an in-house application security policy, based on the OWASP Application Security Verification Standard:
|A1||Application must use established corporate directory services for authentication, e.g. LDAP.|
|A2||Authentication must be performed over a secure connection such as TLS to prevent credential sniffing.|
|A3||Authentication mechanism must not store sensitive credentials on the client.|
|A4||Authentication credentials must not be submitted using URL parameters to avoid sniffing.|
|A5||Authentication failure response must never indicate which part of the authentication data was incorrect to prevent username enumeration.|
|A6||Password entry fields must enforce a combination of upper-case, lower-case, special-chars, numbers and a minimum length for secure password.|
|A7||All authentication controls must be enforced on the server side because it’s easy to tamper with data on the client side.|
|A8||Authentication credentials must be salted and stored using industry defined and proven hashing techniques.|
|A9||Forgot password and other recovery paths must send a link including a time-limited activation instead of the password itself.|
|A10||There must be no secret question and answer mechanism for resetting the password to thwart social engineering attacks.|
|A11||Forgot password functionality must not disable the login for valid users, to ensure valid users don’t get locked out.|
|A12||Authentication mechanism must define an account lockout policy after a small number of failed login attempts (e.g. 3 to 5) to prevent brute-force attacks.|
|B1||Access Control process must be defined, agreed upon, and effectively implemented. The process must cover user authorization management, roles and responsibilities, and access revocation and expiry.|
|B2||The authorization business roles must be defined and documented.|
|B3||Authorization controls must be designed using the Principle of Least Privilege, i.e. a user should not be granted privileges beyond those required to perform their job.|
|B4||Business roles must never have authorization to perform application administration functions.|
|B5||An application must utilize a central component for authorization.|
|B6||Only trusted system objects must be used for making authorization decisions.|
|B7||Authorization check must be performed at every entry point of an application.|
|B8||Authorization controls must fail securely; all access must be denied if the application cannot access its security configuration information.|
|C1||The application must use the session management mechanism provided by the framework or server instead of a custom solution. Session management provided by the framework or server is thoroughly tested for security and hence safe to use.|
|C2||Session creation and management must be done on a trusted system and never on the client side.|
|C3||A new session must be established upon log-in and re-authentication to prevent session-fixation attacks.|
|C4||User must be forced to re-authenticate when attempting to access a function that requires elevated privileges.|
|C5||An idle session timeout and an absolute session timeout regardless of activity must be set.|
|C6||The logout function must explicitly terminate the user session and destroy all session related data.|
|C7||Session ids must have high entropy to avoid session id guessing attacks.|
|C8||The Session ID is sensitive; it must be protected and never displayed except in cookie headers.|
|C9||Session ID cookies must be marked as HttpOnly to avoid an XSS flaw from gaining access to it.|
|C10||User provided session ids must never be accepted.|
|C11||Concurrent session for the same user must not be allowed.|
|D1||All user input coming from drop-downs, text fields, value lists and other UI components must be validated. By default, user input should be considered malicious.|
|D2||Special characters in input, such as (but not limited to) <, >, &, ' and ", must be HTML-escaped when used in output to make them safe for their context.|
|D3||Input validation and encoding must be performed on the server side.|
|D4||Queries passed to SQL databases must be parameterized or stored procedures should be used to prevent SQL injection attacks.|
|D5||All input validation failures must be logged so that attacks can be detected.|
|D6||Input validation failure message should not display any system or configuration information to the user. It may assist the attacker in profiling the system.|
|D7||Input validation failures must result in input rejection.|
|D8||File upload functions must be implemented securely. Uploaded files should be scanned for viruses and other malicious content.|
|E1||Error messages must not display any technical information about the application or the underlying infrastructure to thwart any attempt to profile the system.|
|E2||Application design must perform proper exception handling and must not solely rely on the underlying framework or infrastructure for handling errors.|
|E3||System resources that are no longer needed upon occurrence of an exception must be explicitly released.|
|E4||Application design must deny access by default.|
|F1||Logging controls must be implemented on a trusted system.|
|F2||Access to logs must be strictly controlled.|
|F3||Sensitive information must not be stored in system logs.|
|F4||Successful and unsuccessful login attempts must be logged.|
|F5||Attempts to access unauthorized sensitive transactions must be logged.|
|F6||Sensitive transactions and administrative actions must be identified and their usage logged.|
|F7||Unexpected application exceptions must be logged.|
|F8||Centralized monitoring of logs must be set up for critical applications.|
|F9||A log retention, access, and review process must be defined and effectively implemented.|
|G1||All sensitive data consumed and produced by an application must be classified and there should be a clear policy for access control to these data.|
|G2||The retention period must be obtained or determined for any data stored by an application.|
|G3||All sensitive information must be transmitted using the latest version of SSL/TLS.|
|G4||Only industry-accepted encryption and hashing algorithms must be used. Outdated and weak hashing algorithms such as MD5 should never be used.|
|G5||Data at rest must be protected by means of access control and, where required, by encryption.|
|G6||Sensitive information must not be transmitted in GET query strings or URL parameters.|
|G7||User credentials must never be hard-coded or stored in config/plaintext files.|
|G8||An application must not reveal technical information about system components or underlying infrastructure.|
|G9||Unnecessary application code, documentation and components must be removed before deployment to production environment.|
|G10||Cookies sent over HTTPS must be marked as secure.|
|G11||Extranet applications must protect against Clickjacking attacks by restricting framing, e.g. with the X-Frame-Options: SAMEORIGIN response header or the Content-Security-Policy frame-ancestors directive.|
|G12||Client-side caching of sensitive information should be disabled using appropriate headers or meta tags, such as Cache-Control: no-store, no-cache.|
|G13||Temporary data or files must be invalidated after session termination.|
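A few of the requirements above can be made concrete with a short sketch. The example below illustrates A8 (salted hashing with a proven KDF), D4 (parameterized queries) and D2 (output escaping); the function names and data are my own illustration, not taken from the Standard.

```python
# Illustrative sketch of requirements A8, D4 and D2 from the policy above.
# Names and data are hypothetical, chosen only to demonstrate the controls.
import hashlib
import hmac
import html
import os
import sqlite3


def hash_password(password: str) -> tuple[bytes, bytes]:
    # A8: salt the credential and hash it with a proven, slow KDF (PBKDF2 here).
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest


def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)


def find_user(conn: sqlite3.Connection, username: str):
    # D4: user input is bound as a parameter, never concatenated into the query.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()


def render_comment(comment: str) -> str:
    # D2: HTML-escape special characters before they reach the output.
    return html.escape(comment, quote=True)


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")
salt, digest = hash_password("correct horse battery staple")
```

With these helpers, a classic injection payload such as `alice' OR '1'='1` is treated as an ordinary (non-matching) username rather than executable SQL, and stored credentials are never recoverable as plaintext.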
Further aligning the application security policy with the SABSA framework, let’s establish the link between the SABSA layers, developing the Logical, Physical and Component architectures:
Note that the list of security requirements and control objectives is not exhaustive and presented for illustrative purposes only.
The Standard defines three security verification levels, with each level increasing in depth. Each ASVS level contains a list of security requirements. Each of these requirements can also be mapped to security-specific features and capabilities that must be built into software by developers.
Let’s map risk level for every security attribute and corresponding components to the Standard verification levels:
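The mapping itself is mechanical once the risk ratings are agreed. The sketch below shows one way to derive the target ASVS verification level from an attribute’s assessed risk; the attribute names and ratings are illustrative, not the client’s actual assessment.

```python
# Hypothetical mapping: a higher-risk security attribute demands a deeper
# ASVS verification level. Ratings below are illustrative only.
RISK_TO_ASVS_LEVEL = {"low": 1, "medium": 2, "high": 3}

ATTRIBUTE_RISK = {
    "Authenticated": "high",
    "Authorised": "medium",
    "Auditable": "low",
}


def required_verification_level(attribute: str) -> int:
    # Look up the attribute's risk rating, then the ASVS level it implies.
    return RISK_TO_ASVS_LEVEL[ATTRIBUTE_RISK[attribute]]
```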
The combined framework can now be used for high-level risk reporting:
The OWASP framework is designed as a set of requirements to measure point-in-time compliance with the Standard. I propose widening the scope of the framework to include activities I performed above as part of the following lifecycle:
Controls and control objectives are clearly defined in the Standard in order to address the risks, yet the focus on enablers to exploit potential opportunities is lacking. Below is an example of what a balanced approach might look like for the attribute Authenticated.
|Attribute||Enablement Objectives||Control Objectives|
|Authenticated||1. Ensure that all essential enterprise information uses centralized authentication and access control mechanisms.
2. Encourage information creators and users to leverage centralized content repositories for all information essential to performing their day-to-day activities.
3. Where possible, use automatic assignment of information and data ownership and permissions to ensure information elements are accessed by only those individuals who have a legitimate business need.
4. Where possible, do not require users to explicitly manage individual access control permissions and authorizations.
|Ensure that a verified application satisfies the following high level requirements:
1. Verifies the digital identity of the sender of a communication.
2. Ensures that only those authorised are able to authenticate and credentials are transported in a secure manner.
There is also a lack of metrics and performance targets in the OWASP Standard. To address this, I’ve developed a set of metrics and measurement approaches for security attributes with corresponding primary and secondary risk indicators. Below is an example for the attribute Authenticated.
|Attribute||Metric||Measurement Approach||Category||Primary KRI||Secondary KRI|
|Authenticated||% of information elements stored in repositories, locations or on devices requiring authentication of some kind before the information can be accessed||Audit of information repositories and storage locations.|
|Authenticated||% of applications using established corporate directory services for authentication||Audit of web applications||Completeness||80%||90%|
|Authenticated||% of applications performing authentication over SSL/TLS||Audit of web applications||Completeness||80%||90%|
|Authenticated||% of applications not storing authentication credentials on the client||Audit of web applications||Completeness||80%||90%|
|Authenticated||% of applications enforcing password requirements||Audit of web applications||Completeness||80%||90%|
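A RAG status for each metric can be derived mechanically from the KRI thresholds. In the sketch below, the 80%/90% thresholds follow the table above, while the green/amber/red convention and the audit figures are my own illustration.

```python
# Evaluate a metric against its primary and secondary KRIs. Thresholds match
# the table above; the status labels and sample audit figures are illustrative.
def kri_status(value_pct: float, primary: float = 80.0, secondary: float = 90.0) -> str:
    if value_pct >= secondary:
        return "green"   # secondary KRI met: on target
    if value_pct >= primary:
        return "amber"   # primary KRI met, secondary missed: watch
    return "red"         # primary KRI breached: act


# e.g. a hypothetical audit finds 34 of 40 applications using
# established corporate directory services for authentication
pct_compliant = 100 * 34 / 40  # 85.0
```

At 85% the metric sits between the two KRIs, so it would report amber under this convention.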
How do you demonstrate benefits of the combined framework to the stakeholders?
The following stakeholders have been chosen to demonstrate features, advantages and benefits of the combined OWASP-SABSA framework as they are directly impacted by this change:
- Chief Executive Officer (CEO) and the Board of Directors
- Chief Risk Officer (CRO)
- Chief Information Officer (CIO)
- Chief Information Security Officer (CISO)
The table below provides a summary of benefits of the aligned framework.
|Feature||Advantage||CEO and Board||CRO||CIO||CISO|
|Business-driven||Value-assured||Protects corporate reputation||Enables flexible fit with industry regulations||Enables secure adoption of digital business model||Facilitates alignment of application security efforts with business goals|
|Transparent||Two-way traceability||Ensures return on investment||Enables effective compliance measurement approach||Encourages integrated people – process – technology solutions||Provides traceability of implementation of business-aligned security requirements|
|Auditable||Demonstrates compliance to relevant authorities||Demonstrates compliance to regulators and external auditors||Ensures that compliance risk is effectively managed||Facilitates effective internal information systems audits||Supports application security and risk review processes|
- OWASP Application Security Verification Standard
- OWASP Top Ten: The OWASP (Open Web Application Security Project) Top Ten List represents a broad consensus about what the most critical web application security flaws are.
- CWE/SANS Top 25
Building on the connection between breaking security policies and cheating, let’s look at a study that asked participants to solve 20 simple maths problems and promised 50 cents for each correct answer.
The participants were allowed to check their own answers and then shred the answer sheet, leaving no evidence of any potential cheating. The results demonstrated that participants reported solving, on average, five more problems than under conditions where cheating was not possible (i.e. controlled conditions).
The researchers then introduced David – a student who was tasked to raise his hand shortly after the experiment began and proclaim that he had solved all the problems. Other participants were obviously shocked by such a statement. It was clearly impossible to solve all the problems in only a few minutes. The experimenter, however, didn’t question his integrity and suggested that David should shred the answer sheet and take all the money from the envelope.
Interestingly, other participants’ behaviour adapted as a result. They reported solving on average eight more problems than under controlled conditions.
Much like the broken windows theory mentioned in my previous blog, this demonstrates that unethical behaviour is contagious, as are acts of non-compliance. If employees in a company witness other people breaking security policies and not being punished, they are tempted to do the same. It becomes socially acceptable and normal. This is the root cause of poor security culture.
The good news is that the opposite holds true as well. That’s why security culture has to have strong senior management support. Leading by example is the key to changing the perception of security in the company: if employees see that the leadership team takes security seriously, they will follow.
So, security professionals should focus on how security is perceived. This point is outlined in three basic steps in the book The Social Animal, by David Brooks:
- People perceive a situation.
- People estimate if the action is in their long-term interest.
- People use willpower to take action.
He claims that, historically, people were mostly focused on the last two steps of this process. In the previous blog I argued that relying solely on willpower has a limited effect. Willpower can be exercised like a muscle, but it is also prone to atrophy.
In regard to the second step of the decision-making process, one might expect that reminding people of the potential negative consequences would stop them from taking the action. Brooks, however, points to ineffective HIV/AIDS awareness campaigns, which focused only on the negative consequences and ultimately failed to change people’s behaviour.
He also suggests that most diets fail because willpower and reason are not strong enough to confront impulsive desires: “You can tell people not to eat the French fry. You can give them pamphlets about the risks of obesity … In their nonhungry state, most people will vow not to eat it. But when their hungry self rises, their well-intentioned self fades, and they eat the French fry”.
This doesn’t only apply to dieting: when people want to get their job done and security gets in the way, they will circumvent it, regardless of the degree of risk they might expose the company to.
That is the reason for perception being the cornerstone of the decision-making process. Employees have to be taught to see security violations in a particular way that minimises the temptation to break policies.
In ‘Strangers to Ourselves’, Timothy Wilson claims, “One of the most enduring lessons of social psychology is that behaviour change often precedes changes in attitudes and feelings”.
Security professionals should understand that there is no single event that alters users’ behaviour – changing security culture requires regular reinforcement, creating and sustaining habits.
Charles Duhigg, in his book The Power of Habit, tells a story about Paul O’Neill, the CEO of the Aluminum Company of America (Alcoa), who was determined to make his enterprise the safest in the country. At first, people were confused that the newly appointed executive was not talking about profit margins or other finance-related metrics. They didn’t see the link between his ‘zero-injuries’ goal and the company’s performance. Despite that, Alcoa’s profits reached a historical high within a year of his announcement. When O’Neill retired, the company’s annual income was five times greater than it had been before his arrival. Moreover, it became one of the safest companies in the world.
Duhigg explains this phenomenon by highlighting the importance of the “keystone habit”. Alcoa’s CEO identified safety as such a habit and focused solely on it.
O’Neill had a challenging goal to transform the company, but he couldn’t just tell people to change their behaviour. He said, “that’s not how the brain works. So I decided I was going to start by focusing on one thing. If I could start disrupting the habits around one thing, it would spread throughout the entire company.”
He recalled an incident when one of his workers died trying to fix a machine despite the safety procedures and warning signs. The CEO called an emergency meeting to understand what had caused this tragic event.
He took personal responsibility for the worker’s death, identifying numerous shortcomings in safety education. For example, the training programme didn’t highlight the fact that employees wouldn’t be blamed for machinery failure or the fact that they shouldn’t commence repair work before finding a manager.
As a result, the policies were updated and the employees were encouraged to suggest safety improvements. Workers, however, went a step further and started suggesting business improvements as well. Changing their behaviour around safety led to some innovative solutions, enhanced communication and increased profits for the company.
Security professionals should understand the importance of group dynamics and influences to build an effective security culture.
They should also remember that just as ‘broken windows’ encourage policy violations, changing one security habit can encourage better behaviour across the board.
 Francesca Gino, Shahar Ayal and Dan Ariely, “Contagion and Differentiation in Unethical Behavior: The Effect of One Bad Apple on the Barrel”, Psychological Science, 20(3), 2009, 393–398.
 David Brooks, The Social Animal: The Hidden Sources of Love, Character, and Achievement, Random House, 2011.
 Timothy Wilson, Strangers to Ourselves, Harvard University Press, 2004, 212.
 Charles Duhigg, The Power of Habit: Why We Do What We Do and How to Change, Random House, 2013.
Many employees find information security secondary to their normal day-to-day work, often leaving their organisation vulnerable to cyber attacks, particularly if they are stressed or tired. Leron Zinatullin, the author of The Psychology of Information Security, looks at the opportunities available to prevent such cognitive depletion.
When users perform tasks that comply with their own mental models (i.e. the way that they view the world and how they expect it to work), the activities present less of a cognitive challenge than those that work against these models.
If people can apply their previous knowledge and experience to a problem, less energy is required to solve it in a secure manner and they are less mentally depleted by the end of the day.
For example, a piece of research on disk sanitisation highlighted the importance of secure file removal from the hard disk. It is not clear to users that emptying the ‘Recycle Bin’ is insufficient and that files can easily be recovered. However, there are software products available that exploit users’ mental models. They employ a ‘shredding’ analogy to indicate that files are being removed securely, which echoes an activity they would perform at work. Such an interface design might help lighten the burden on users.
Security professionals should pay attention to the usability of security mechanisms, aligning them with users’ existing mental models.
In The Laws of Simplicity, John Maeda supports the importance of making design more user-friendly by relating it to an existing experience. He refers to an example of the desktop metaphor introduced by Xerox researchers in the 1980s. People were able to relate to the graphical computer interface as opposed to the command line. They could manipulate objects similarly to the way they did with a physical desk: storing and categorising files in folders, as well as moving or renaming them, or deleting them by placing them in the recycle bin.
Using mental models
Building on existing mental models makes it easier for people to adopt new technologies and ways of working. However, such mappings must take cultural background into consideration. The metaphor might not work if it is not part of the existing mental model. For instance, Apple Macintosh’s original trash icon was impossible to recognise in Japan, where users were not accustomed to metallic bins of this kind.
Good interface design not only lightens the burden on users but can also complement security. Traditionally, it has been assumed that security and usability always contradict each other – that security makes things more complicated, while usability aims to improve the user experience. In reality, they can support each other by defining constructive and destructive activities. Effective design should make constructive activities simple to perform while hindering destructive ones.
This can be achieved by incorporating security activities into the natural workflow of productive tasks, which requires the involvement of security professionals early in the design process. Security and usability shouldn’t be extra features introduced as an afterthought once the system has been developed, but an integral part of the design from the beginning.
Security professionals can provide input into the design process via several methods such as iterative or participatory design. The iterative method consists of each development cycle being followed by testing and evaluation and the participatory method ensures that key stakeholders, including security professionals, have an opportunity to be involved.
 S. L. Garfinkel and A. Shelat, “Remembrance of Data Passed: A Study of Disk Sanitization Practices”, IEEE Security & Privacy, 1, 2003, 17–27.
 John Maeda, The Laws of Simplicity, MIT Press, 2006.
 For iterative design see J. Nielsen, “Iterative User Interface Design”, IEEE Computer, 26(11) (1993), 32–41; for participatory design see D. Schuler and A. Namioka, Participatory Design: Principles and Practices, CRC Press, 1993.
Image by Rosenfeld Media https://www.flickr.com/photos/rosenfeldmedia/2141071329/
Information security can often be a secondary consideration for many employees, which leaves their company vulnerable to cyber attacks. Leron Zinatullin, author of The Psychology of Information Security, discusses how organisations can address this.
First, security professionals should understand that people’s resources are limited. Moreover, people tend to struggle with making effective decisions when they are tired.
To test the validity of this argument, psychologists designed an experiment in which they divided participants into two groups: the first group was asked to memorise a two-digit number (e.g. 54) and the second was asked to remember a seven-digit number (e.g. 4509672). They then asked the participants to go down the hall to another room to collect their reward for participating. This payment, however, could be only received if the number was recalled correctly.
While they were making their way down the corridor, the participants encountered another experimenter, who offered them either fruit or chocolate. They were told that they could collect their chosen snack after they finished the experiment, but they had to make a decision there and then.
The results demonstrated that people who were given the easier task of remembering a two-digit number mostly chose the healthy option, while people overburdened by the more challenging task of recalling a longer string of digits succumbed to the more gratifying chocolate.
The implications of these findings, however, are not limited to dieting. A study looked at the decision-making patterns that can be observed in the behaviour of judges when considering inmates for parole during different stages of the day.
Despite the default position being to reject parole, judges had more cognitive capacity and energy to fully consider the details of the case and make an informed decision in the mornings and after lunch, resulting in more frequently granted paroles. In the evenings, judges tended to reject parole far more frequently, which is believed to be due to the mental strain they endure throughout the day. They simply ran out of energy and defaulted to the safest option.
How can this be applied to the information security context?
Security professionals should bear in mind that when people are stressed at work, making difficult decisions and performing demanding tasks, they get tired. This might affect their ability or willingness to maintain compliance. In a corporate context, this cognitive depletion may result in staff defaulting to core business activities at the expense of secondary security tasks.
Security mechanisms must be aligned with individual primary tasks in order to ensure effective implementation, by factoring in an individual’s perspective, knowledge and awareness, and a modern, flexible and adaptable information security approach. The aim should therefore be to correct employee misunderstandings and misconceptions that result in non-compliant behaviour, because, in the end, people are a company’s best asset.
 B. Shiv and A. Fedorikhin, “Heart and Mind in Conflict: The Interplay of Affect and Cognition in Consumer Decision Making”, Journal of Consumer Research, 1999, 278–292.
 Shai Danziger, Jonathan Levav and Liora Avnaim-Pesso, “Extraneous Factors in Judicial Decisions”, Proceedings of the National Academy of Sciences, 108(17), 2011, 6889–6892.
Photo by CrossfitPaleoDietFitnessClasses https://www.flickr.com/photos/crossfitpaleodietfitnessclasses/8205162689
The Psychology of Information Security – Resolving conflicts between security compliance and human behaviour
Posted: November 26, 2015
In today’s corporations, information security professionals have a lot on their plate. In the face of constantly evolving cyber threats they must comply with numerous laws and regulations, protect their company’s assets and mitigate risks to the furthest extent possible.
Security professionals can often be ignorant of the impact that implementing security policies in a vacuum can have on the end users’ core business activities. These end users are, in turn, often unaware of the risk they are exposing the organisation to. They may even feel justified in finding workarounds because they believe that the organisation values productivity over security. The end result is a conflict between the security team and the rest of the business, and increased, rather than reduced, risk.
This can be addressed by factoring in an individual’s perspective, knowledge and awareness, and a modern, flexible and adaptable information security approach. The aim of the security practice should be to correct employee misconceptions by understanding their motivations and working with the users rather than against them – after all, people are a company’s best assets.
I just finished writing a book with IT Governance Publishing on this topic. This book draws on the experience of industry experts and related academic research to:
- Gain insight into information security issues related to human behaviour, from both end users’ and security professionals’ perspectives.
- Provide a set of recommendations to support the security professional’s decision-making process, and to improve the culture and find the balance between security and productivity.
- Give advice on aligning a security programme with wider organisational objectives.
- Manage and communicate these changes within an organisation.
Based on insights gained from academic research as well as interviews with UK-based security professionals from various sectors, The Psychology of Information Security – Resolving conflicts between security compliance and human behaviour explains the importance of careful risk management and how to align a security programme with wider business objectives, providing methods and techniques to engage stakeholders and encourage buy-in.
The Psychology of Information Security redresses the balance by considering information security from both viewpoints in order to gain insight into security issues relating to human behaviour, helping security professionals understand how a security culture that puts risk into context promotes compliance.
To help quickly determine where a company stands in terms of the maturity of its security programme, I developed the questionnaire below.
|1.||Information security policy|
|1.1||Is there an information security policy that is appropriate to the purpose of the organisation, gives a framework for setting objectives, and demonstrates commitment to meeting requirements and for continual improvement?|
|1.2||Is the policy documented and communicated to employees within the organisation and available to interested parties, as appropriate?|
|1.3||Is there an established ISMS policy that ensures the integration of the information security management system requirements into the organisation’s processes?|
|2.||Information security risk assessment and treatment|
|2.1||Has an information security risk assessment process been defined and applied?|
|2.2||Is there an information security risk treatment process to select appropriate risk treatment options for the results of the information security risk assessment, and are controls determined to implement the risk treatment option chosen?|
|3.||Planning and measuring|
|3.1||Are measurable information security objectives and targets established, documented and communicated throughout the organisation?|
|3.2||Does the organisation determine what needs to be done, when and by whom, in setting its objectives?|
|4.1||Does the organisation conduct internal audits at planned intervals to provide information on whether the information security management system conforms to requirements?|
|5.1||Does the leadership undertake a periodic review of the information security processes and controls, and ISMS?|
|6.||Corrective action and continual improvement|
|6.1||Does the organisation react to the nonconformity and continually improve the suitability, adequacy and effectiveness of the information security management system?|
|7.1||What security laws and data protection legislation apply to the organisation?|
Download the full Questionnaire (with instructions)
Image courtesy Pong / FreeDigitalPhotos.net
Have you seen security controls being implemented just to comply with legal and regulatory requirements? Just like this fence. I’m sure it will pass all the audits: it is functioning as designed, it blocks the path (at least on paper) and it has a bright yellow colour just as specified in the documentation. But is it fit for purpose?
It turns out that many security problems arise from this eager drive to comply: if the regulator needs a fence – it will be added!
Sometimes controls are introduced later, when the project is well past the design stage. It might be the case that they simply no longer align with the real world.
Safety measures, unfortunately, are no exception. Sometimes the solution is poorly designed, but more often safety requirements are added later on, with an implementation that is not fit for purpose.
The same holds for privacy. Privacy professionals encourage organisations to adopt the Privacy by Design principle. Is it considered in the image below?