Friday 14 June 2013

My alternative OWASP Top10

With the release of the 2013 OWASP Top10 Project, businesses around the world have a security bar against which they can measure themselves.  This OWASP project, more than any other, has the ability to influence business, due in no small part to it being referenced in a variety of compliance standards e.g. PCI.  So businesses are listening, and the question then becomes: is the right message being sent?

My concern with the OWASP Top10 is that it treats the symptoms and not the disease.  It essentially lists specific vulnerabilities (or attacks) and my question is - might it not be better to list the factors that lead to these vulnerabilities?  If we can address the disease, then the symptoms will also disappear.

I would clarify that there are other OWASP projects that treat the disease, such as OpenSAMM, the Development Guide, or the ASVS, but these projects don't have the reach and influence of the Top10.  So in that sense there is potential benefit in having a Top10 that addresses the cause of the symptoms.

So my alternative top ten are:
  1. Legacy code
  2. Undue reliance on application frameworks
  3. Security work is not allocated enough time
  4. Insufficient security architecture/design
  5. Insufficient security code review
  6. Undue reliance on security libraries
  7. Supporting legacy browsers
  8. Inadequate authorisation
  9. Ineffective authentication
  10. Lack of security testing
If your business has minimised the risk of all of the above issues, across all your applications, then your application security is probably in reasonable shape.

Below I have provided a bit more detail and listed ways in which you might not be adequately securing your applications.  If you answer in the affirmative to any of the points under an entry, then that issue may affect you and you may need to address the risk it poses.

1. Legacy Code
The definition of 'legacy' is going to be relative, but I'll suggest it covers any code that isn't actively maintained, is sufficiently old, or is based on a different technology from the one you use for new code.  Legacy code could be a problem if:
- existing code is treated as sacrosanct so nobody wants to change it
- there is code that no current employee wrote, that is undocumented, that works by magic, and that no one wants to touch
- there is code being used no one knows about
- there is code being used that was written before 'security' was an issue
- there is code being used that was written for a different set of requirements
- there is code being used that was written for a different operational situation
- there is no budget to change or update legacy code
- the risk of changing code is perceived as greater than the security risk it poses

2. Undue reliance on application frameworks
Most applications have an underlying framework on which they are built.  Often this framework is developed externally, but the same applies if it is written in-house.  Application frameworks can be a problem if:
- it's assumed the framework solves all security issues
- it's not understood how your framework protects against common vulnerabilities
- the framework is relied on for security it doesn't provide
- its security features are used in the wrong way
- it isn't realised that the security features the framework provides are only a partial solution (see the sketch after this list)
- the framework has plugins that extend security
- the security plugins are numerous and it's difficult to know which to use
- security plugins only partially solve the problem they target
- the framework/plugins are not routinely tested for security (by white hats at least)
- the framework is not secure by default
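
To illustrate the partial-solution point, here's a minimal sketch in TypeScript.  The hand-rolled escapeHtml below is a stand-in for a typical framework's default output escaping; the point is that escaping which is correct for an HTML body context does nothing for a script context:

```typescript
// A stand-in for a framework's default HTML output escaping.
const escapeHtml = (s: string): string =>
  s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;")
   .replace(/"/g, "&quot;").replace(/'/g, "&#39;");

function renderProfile(name: string): string {
  // Fine: HTML escaping matches the HTML body context.
  const heading = `<h1>Hello ${escapeHtml(name)}</h1>`;

  // Not fine: HTML escaping is the wrong defence for a script context.
  // name = "];alert(document.cookie);//" passes through untouched, because
  // it needs none of the characters escapeHtml handles, and executes.
  const script = `<script>var tags = [${escapeHtml(name)}];</script>`;

  return heading + script;
}
```

The framework (here, the stand-in function) did exactly what it promised; the vulnerability comes from relying on it for a protection it never claimed to provide.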

3. Security work is not allocated enough time
There is never enough time for most things, but too often there is no time explicitly dedicated to security.  It's fairly well understood that the performance of your application depends on the amount of time you dedicate to performance coding and analysis - well the same is true for security.  You might not have enough time dedicated to security if:
- there are security requirements, but no allocated time in the project plan to ensure they are met
- there is no security sprint
- there is no security testing time allocated
- security work is lumped in with time for other non-functional requirements
- code review has no dedicated time for security

4. Insufficient security architecture/design
Software has a tendency to evolve, but failing to keep track of the bigger picture, especially when it comes to security, can be the source of vulnerabilities without easy solutions.  You may not be investing enough in security architecture/design if:
- security solutions are ad-hoc
- ad-hoc security solutions are reused without re-evaluation
- multiple applications use differing security solutions
- a single application uses differing security solutions
- new mitigations are bolted onto existing solutions
- security solutions are not documented
- incompatible security solutions have led to hacks for interoperability
- security solution knowledge is held by a few key developers
- teams aren't aware of existing security solutions
- teams re-invent the security wheel

5. Insufficient security code review
Code needs to be reviewed specifically for security vulnerabilities, either by the security team or by experienced developers.  You might not be dedicating enough resources to security code review if:
- the ratio of the security team to developers makes it impractical to review all code
- the security-relevant code is inline with the rest of the code, meaning every line has to be examined just to find the most obvious issues e.g. inline input validation (see the sketch after this list)
- the code does not lend itself to security static analysis tools
- no metrics are used to measure the effectiveness of security code review
- your code review capabilities do not scale
- you don't automate review using static analysis
- only major changes get a security code review
- your IDE does not have real time security review functionality
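
As an example of making code reviewable, here's a sketch of pulling validation out of business logic into one place; the rule names and updateEmail handler are invented for illustration:

```typescript
// One module of named rules the security team can review in isolation.
const rules = {
  email: (v: string) => /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(v),
  userId: (v: string) => /^[0-9a-f]{32}$/.test(v),
};

function validate(name: keyof typeof rules, value: string): string {
  if (!rules[name](value)) throw new Error(`invalid ${name}`);
  return value;
}

// Handlers contain no ad-hoc checks; a reviewer greps for `validate`
// instead of reading every line of business logic.
function updateEmail(userId: string, email: string): void {
  validate("userId", userId);
  validate("email", email);
  // ... business logic only from here on ...
}
```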

6. Undue reliance on security libraries
We try to make security easier by providing libraries that expose common security functionality.  But security is complicated and fragile, and it's impossible to ensure all developers understand the nuances of using these libraries.  Security libraries may be a cause for concern if:
- there are no wrappers around cryptographic libraries to hide cryptographic details or complexity (a sketch of such a wrapper follows this list)
- use of security libraries is optional
- there is no auditing of the use of security libraries
- there is no updating of security libraries
- security libraries are misconfigured
- there is no abstraction layer over security libraries so they can be replaced or complexities hidden
- security libraries are used to solve problems that should be solved at the architectural or design level
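
Here's what a thin wrapper might look like, as a sketch using Node's built-in crypto module; the function names and the choice of AES-256-GCM are illustrative assumptions, not a prescription:

```typescript
import * as crypto from "crypto";

const ALGO = "aes-256-gcm";

export function seal(plaintext: string, key: Buffer): string {
  const iv = crypto.randomBytes(12); // fresh IV per message
  const cipher = crypto.createCipheriv(ALGO, key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // IV and auth tag travel with the ciphertext; callers see one opaque string.
  return Buffer.concat([iv, tag, ct]).toString("base64");
}

export function open(sealed: string, key: Buffer): string {
  const buf = Buffer.from(sealed, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ct = buf.subarray(28);
  const decipher = crypto.createDecipheriv(ALGO, key, iv);
  decipher.setAuthTag(tag); // tampering makes final() throw
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

Application code calls seal and open and never sees algorithms, IVs or auth tags, which also gives you the abstraction layer needed to swap the underlying primitives later.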

7. Supporting legacy browsers
We all want our software to reach as many users as possible and we don't want to stop supporting existing customers, but at some point the focus must be more on the future than the past.  Catering for legacy browsers might be a problem if you are:
- supporting IE6
- rejecting new security solutions because they don't work in legacy browsers
- not leveraging new security features of current versions of browsers (see the sketch after this list)
- not restricting your application's functionality if you must be compatible with legacy browsers
- not applying the principles of Unobtrusive JavaScript
- not applying the principles of Progressive Enhancement
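
As a sketch of the 'new security features' point: response headers like these are silently ignored by legacy browsers, so sending them protects users on current browsers at no cost to old clients.  The example assumes Node's built-in http module, and the policy values are illustrative:

```typescript
import * as http from "http";

function addSecurityHeaders(res: http.ServerResponse): void {
  // Only load scripts and other resources from our own origin.
  res.setHeader("Content-Security-Policy", "default-src 'self'");
  // Only talk to us over HTTPS for the next year.
  res.setHeader("Strict-Transport-Security", "max-age=31536000");
  // Refuse to be framed, defeating most clickjacking.
  res.setHeader("X-Frame-Options", "DENY");
}

http.createServer((req, res) => {
  addSecurityHeaders(res);
  res.end("hello");
}).listen(8080);
```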

8. Inadequate authorisation
Authorisation is a fundamental part of the security of any application.  It is also one of the most difficult to implement, maintain and manage.  Your authorisation may need to be re-evaluated if you:
- don't have a framework for authorisation
- rely on developers to include checks for permissions in business logic
- are unable to audit authorisation
- fail open
- don't apply authorisation to each stage of multi-stage functionality
- don't tie requests for resources with unique IDs to the user associated with the resource, e.g. in SQL (see the sketch after this list)
- are making managing permissions so difficult that implementing 'least privilege' is impractical
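
To illustrate tying resource IDs to users, here's a sketch; the db client, table and column names are invented for illustration:

```typescript
// The `db` interface stands in for whatever database client you use.
interface Db {
  query(sql: string, params: unknown[]): Promise<unknown[]>;
}

// Vulnerable version (for contrast): any authenticated user can read any
// invoice just by guessing IDs.
//   SELECT * FROM invoices WHERE id = ?

// Safer: the WHERE clause ties the resource to the user, so the query
// returns nothing unless the invoice belongs to the logged-in user.
async function getInvoice(db: Db, userId: string, invoiceId: string) {
  const rows = await db.query(
    "SELECT * FROM invoices WHERE id = ? AND owner_id = ?",
    [invoiceId, userId], // userId comes from the session, never the request
  );
  if (rows.length === 0) throw new Error("not found"); // fail closed
  return rows[0];
}
```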

9. Ineffective authentication
Authentication is a pre-requisite for authorisation, and so it's important that it is effective.  Many applications fail to appreciate that the various authentication mechanisms they employ have largely the same security requirements.  Your current authentication mechanisms may not be effective if you are:
- not storing passwords securely (see the sketch after this list)
- thinking a session token needs any less protection than a password
- sharing session tokens with 3rd party integrators
- sharing user credentials with 3rd party integrators
- designing a system so users have to share their credentials with 3rd party integrators
- using OAuth
- not telling users to use a password manager
- not enforcing password complexity
- not requiring 2FA for resetting a password
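
For the password storage point, here's a minimal sketch using Node's built-in crypto module; the iteration count and output sizes are illustrative and should be tuned to your own hardware and threat model:

```typescript
import * as crypto from "crypto";

const ITERATIONS = 100000;
const KEY_LENGTH = 32;

export function hashPassword(password: string): string {
  const salt = crypto.randomBytes(16); // per-user salt defeats rainbow tables
  const hash = crypto.pbkdf2Sync(password, salt, ITERATIONS, KEY_LENGTH, "sha256");
  // The salt is stored alongside the hash; it is not a secret.
  return `${salt.toString("hex")}:${hash.toString("hex")}`;
}

export function verifyPassword(password: string, stored: string): boolean {
  const [saltHex, hashHex] = stored.split(":");
  const candidate = crypto.pbkdf2Sync(
    password, Buffer.from(saltHex, "hex"), ITERATIONS, KEY_LENGTH, "sha256");
  // Constant-time comparison; both buffers are KEY_LENGTH bytes.
  return crypto.timingSafeEqual(candidate, Buffer.from(hashHex, "hex"));
}
```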

10. Lack of security testing
Vulnerabilities are inevitable, but security testing will minimise the number that you ship.  The amount of security testing should be proportional to the risks the application faces.  You might not be doing enough security testing if you are:
- not doing any security testing
- not using a web application security scanner
- not training testers to try negative test cases (see the sketch after this list)
- not getting periodic 3rd party security testing
- not insisting your service providers prove they do security testing
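
As a sketch of negative test cases, the kind of input testers should be trained to try: assert that hostile input is rejected, not just that friendly input is accepted.  createUser here is a stand-in for whatever function is under test:

```typescript
import * as assert from "assert";

// Stand-in for the real function under test (hypothetical).
async function createUser(name: string): Promise<void> {
  if (!/^[\w .'-]{1,64}$/.test(name)) throw new Error("invalid name");
  // ... persist the user ...
}

const hostileNames = [
  "<script>alert(1)</script>", // XSS probe
  "Robert'); DROP TABLE users;--", // SQL injection probe
  "A".repeat(10000), // length/overflow probe
];

async function negativeTests(): Promise<void> {
  for (const name of hostileNames) {
    await assert.rejects(() => createUser(name), /invalid/i,
      `createUser accepted hostile input: ${name}`);
  }
}

negativeTests().then(() => console.log("all hostile inputs rejected"));
```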

Tuesday 26 March 2013

Categories of XSS

So I was reading an article recently that talked about finding XSS where the malicious script was injected via a cookie.  This got me thinking about which of the standard categories of XSS - Stored (or Persistent), Reflected (or Non-persistent) and DOM-based - it fell into.  It wasn't immediately clear: the attack was persistent, but stored on the client, and the fix would need to be in the JavaScript, which makes it similar to DOM-based XSS.  This got me thinking about the basis for the standard categories, or types, of XSS.

I think the first thing to point out is that at some time in history someone, or more likely a bunch of different people, discovered XSS and thought of it as a single type of attack.  It's an interesting thought experiment to imagine you find that you can inject JavaScript into a web page, and that as far as you know no one has done it before.  I imagine I would think of it as a code injection attack, where the code just happens to be JavaScript, and the novelty would be that I proxied the attack via the web server and that the victims were other users of the web server, rather than the server itself.  Potentially I might not even think it was that interesting running arbitrary JavaScript on a client, as it wouldn't have been comparable to the usual arbitrary code execution attacks that existed already.  But then if I could have predicted how popular the web would become, and what it would be used for, well I would have <insert genius idea> and profited.

I'm going to make the completely unfounded assumption that Stored XSS was discovered first, as it's considered the more serious attack (which for now I'll state without justification), and that Reflected XSS was then considered a variant type of attack, different to Stored XSS because it required user interaction.  But really the whole user interaction thing is a fairly weak argument these days: with the proliferation of ad-networks inserting arbitrary HTML into popular websites, and 'watering hole' type attacks, a user just surfin' the net is far more likely than ever to hit upon a malicious iframe that exploits a reflected XSS attack.  But granted, the user does have to be logged in to the web site, so the window of opportunity for the attacker is smaller in a Reflected XSS attack.

Compare that to a Stored XSS attack: the user is most likely already logged in to the web site, so the attacker's window of opportunity is much larger.  However, the attacker faces a second issue: the victim still has to navigate to the page with the injected script, so the attack is location limited.  It's interesting to note that Reflected XSS is not location limited, since the attacker will direct the victim to precisely the page with the vulnerability.  But then it doesn't actually make sense to say Stored XSS is location limited if Reflected XSS isn't, because whatever vector of attack Reflected XSS can use, so can Stored XSS i.e. a malicious iframe can point to a web page with Stored XSS.

It would seem then that we should categorise XSS by the vectors of attack that can be used by an attacker to exploit a victim.

Possible types of XSS when the attacker manipulates ...

| XSS Type      | the victim's response | the victim's request | the victim's DOM properties |
|---------------|-----------------------|----------------------|-----------------------------|
| Stored XSS    | Y                     | Y                    | N                           |
| Reflected XSS | N                     | Y                    | N                           |
| DOM-based XSS | N                     | Y                    | Y                           |

It might not be clear what I mean by "the victim's response": the attacker does not control the victim's request, but is able to include data of their choosing in the response to that request.  This is the standard Stored XSS scenario, where the data comes from the DB for instance.  Reflected XSS isn't possible here because the attacker doesn't control the request, and DOM-based isn't possible because the vulnerable DOM property is either set by the server, or set by the client with data from the server.

When the attacker manipulates "the victim's request", Stored, Reflected and DOM-based XSS are all possible.  Stored XSS because the attacker can direct the victim to the infected page.  Reflected XSS because the attacker controls the parameters of the request.  Finally, DOM-based XSS because the attacker can set DOM properties that aren't sent to the server e.g. the fragment identifier if the attacker can specify the URL, or window.name if the attacker makes a request from script (see the sketch below).
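
To make the fragment identifier point concrete, here's a minimal sketch of client-side code vulnerable to DOM-based XSS via the fragment; the element ID is invented for illustration:

```typescript
// Vulnerable: the fragment identifier flows straight into innerHTML.  A
// victim lured to https://example.com/page#<img src=x onerror=alert(1)>
// runs the attacker's script, yet the browser never transmits the fragment,
// so the server only ever sees a request for /page.
const section = decodeURIComponent(location.hash.slice(1));
document.getElementById("content")!.innerHTML = `<h2>${section}</h2>`;

// Safer: treat the fragment as data, not markup.
document.getElementById("content")!.textContent = section;
```

No server-side defence can catch that payload, because it is never sent to the server; the fix has to be in the JavaScript.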

In the case of "the victim's DOM properties", what I mean is a DOM object property that is set client-side and not set by (or sent to) the server.  In this case only DOM-based XSS is possible: since the DOM property never touches the server, Stored and Reflected XSS aren't possible.  This is in fact a special case, or a subset, of the "victim's request" scenario.

I've taken a fairly strict definition of DOM-based XSS here, as I think this is required to truly separate it out from Reflected XSS.  My logic is that if the malicious data comes from, or goes via, the server, then the attack can be considered to come via the server in the same way as Stored or Reflected XSS respectively, but if the malicious data never actually gets sent to the server (e.g. it's in the fragment identifier, window.name, see domxsswiki for more examples) then that is quite plainly a different attack vector.  Note that for DOM-based XSS a request still needs to be made to the server (most likely), so it shares that in common with the other attack vectors, but there is no malicious data sent as part of that request, unlike the other attack vectors.

There are other definitions of DOM-based XSS that are looser, that include any DOM property, even if coming via the server, and that's fine for most practical purposes.  For the purpose of categorisation though I think being stricter helps understand how the different attack vectors for XSS fit together.

So what about the cookie-based XSS?  Well, if an attacker finds a way to set a victim's cookie such that client-side JavaScript reads the malicious cookie value, resulting in code injection, I would say that the value of the cookie goes via the server, so that is Reflected XSS.  Whilst I think that is probably the most common scenario, I can imagine scenarios where you would call it Stored XSS and others where it would be DOM-based XSS.

So there you have it, my opinion on the different categories of XSS, some people will agree and some will disagree.  Generally speaking the security industry has settled on some loose categories of XSS, and that's fine, it's not clear we need strict category definitions, but it can be an informative exercise to consider them.

NB: I refuse to use Type-0, Type-1, or Type-2 names for the different types of XSS, they are meaningless names that only serve to confuse.