Saturday 28 April 2012

Why being a defender is twice as hard as being an attacker

So it occurred to me that being a defender is twice as hard as being an attacker (at least).  I don't mean that in an absolute or measurable sense, of course, just in a sense that will become obvious.  I'll also limit the claim to applications, although it may apply to other areas of security as well.

The goal of an attacker is to find vulnerabilities in an application.  An application is protected by defenders, who design vulnerability mitigations, and developers, who implement functionality.  Of course an attacker only needs to find a single weakness while a defender needs to protect against all attacks, which by itself would probably support my claim, but that's not the point I'm going to make.

Conversely, a defender's goal is to minimise the number of vulnerabilities in an application.  Defenders attempt to realise this goal by designing defences that both limit what the attacker can do and limit the flexibility the developer has.  However, it is not only attackers who will hack away at a defender's defences; it's also the developers.  The point of this blog post is that developers show surprisingly similar characteristics to attackers when they create novel ways to circumvent the defence mechanisms defenders put in place.  After all, developers have the goal of implementing functionality with the minimum effort possible, and defences often make that more difficult (even if only marginally so).

Clearly the motivations of attackers and developers are entirely different, but at the end of the day defenders are being attacked on two fronts: by the attackers looking to get in and by the developers looking to break out.

Monday 23 April 2012

CVSS doesn't measure up.

I was doing some basic research into software metrics the other day and I came across something that I was probably taught once but had long since forgotten.  It was to do with the way we measure things and is covered in the Wikipedia article on Level of Measurement.

Basically there are four different scales available for measuring things:
  • Nominal scale - Assigning data to named categories or levels.
  • Ordinal scale - A Nominal scale but the levels have a defined order.
  • Interval scale - An Ordinal scale where the difference between levels (the unit) is well defined.
  • Ratio scale - An Interval scale but with a non-arbitrary zero-point.
These scales are interesting because only certain types of math can be applied, and therefore only certain conclusions can be drawn, depending on which scale the measurements belong to.  For instance, we can order the finishing places of a horse race into 1st, 2nd, 3rd etc. (an Ordinal scale), but we can't meaningfully say what the average finishing place of a horse is, as there is no magnitude associated with the difference between the levels.  If, on the other hand, the races were over the same distance, we could measure the time the horse took to complete each race (a Ratio scale) and calculate its average time.

Sometimes we have an Ordinal scale that looks like an Interval or Ratio scale, for instance when we assign a numeric value to the levels, e.g. asking people how much they like something on a scale of 1 to 5.  But this is still an Ordinal scale, and although we can assume that the difference between each level is a constant amount, nothing actually makes that true.  Thus the average amount that people like something, e.g. 2.2, is often a meaningless number, as the sketch below illustrates.
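
To make that concrete, here's a small sketch (plain Python, with invented ratings) of the underlying problem: the only structure an Ordinal scale guarantees is the order of its levels, so any order-preserving relabelling of the levels is equally valid, yet it can flip which of two groups has the higher "average".

```python
# Two groups of 1-to-5 ratings. An ordinal scale only guarantees the
# ORDER of the levels, so any order-preserving relabelling is equally
# legitimate -- yet it changes the verdict.

ratings_a = [1, 1, 5, 5, 5]   # polarised: some hate it, most love it
ratings_b = [3, 3, 4, 4, 4]   # consistently "quite good"

def mean(xs):
    return sum(xs) / len(xs)

# With the original labels 1..5, B looks better:
print(mean(ratings_a), mean(ratings_b))        # 3.4 vs 3.6

# An equally valid order-preserving relabelling (5 -> 10):
relabel = {1: 1, 2: 2, 3: 3, 4: 4, 5: 10}
print(mean([relabel[r] for r in ratings_a]),   # 6.4
      mean([relabel[r] for r in ratings_b]))   # 3.6 -- now A looks better
```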

When reading about this I was reminded of the way vulnerabilities are categorised, and how we would so dearly like to be able to assign numbers to them so we can do some math and reach some greater insight into the nature of the vulnerabilities we have to deal with.  The Common Vulnerability Scoring System (CVSS) essentially suffers from this problem: vulnerabilities are assigned attributes from certain (ordered) categories, and then a complicated formula is used to derive a number in a range from 0 to 10.  It is basically optimistic to think that a complicated formula can bridge the theoretical problem of doing math on values from an Ordinal scale.  I wouldn't necessarily go to the other extreme and say this makes CVSS totally without merit - just that it's not the metric you likely wish it was.
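
To see where this bites, here is a sketch of the CVSS version 2 base-score calculation (the version current as I write this); the constants and formula follow the published v2 base equation, but the code itself is just my illustration, not anything official.  Notice that every input is an ordinal category quietly mapped to a constant with no claim to real-world magnitude, after which the arithmetic treats the numbers as if they sat on a Ratio scale.

```python
# Sketch of the CVSS v2 base equation. Each ordinal category is mapped
# to a numeric constant, then combined as if the numbers had magnitude.

ACCESS_VECTOR     = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
ACCESS_COMPLEXITY = {"high": 0.35, "medium": 0.61, "low": 0.71}
AUTHENTICATION    = {"multiple": 0.45, "single": 0.56, "none": 0.704}
IMPACT            = {"none": 0.0, "partial": 0.275, "complete": 0.660}

def cvss2_base(av, ac, au, conf, integ, avail):
    impact = 10.41 * (1 - (1 - IMPACT[conf])
                        * (1 - IMPACT[integ])
                        * (1 - IMPACT[avail]))
    exploitability = (20 * ACCESS_VECTOR[av]
                         * ACCESS_COMPLEXITY[ac]
                         * AUTHENTICATION[au])
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# A remotely exploitable, unauthenticated, total-compromise vulnerability:
print(cvss2_base("network", "low", "none",
                 "complete", "complete", "complete"))  # 10.0
```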

Sunday 15 April 2012

An idea for adding privacy to social media

It would be great if the media that people upload to social media sites were still under the control of the owner, so we didn't have to place complete trust in the social media site.  Initially I thought OAuth might offer part of a solution, but really it isn't designed to solve this problem; it just allows you to share a resource on one web site with another web site.  What we really want is to allow a user, the resource owner, to retain some control over their resources and not delegate all security to the social media site (to protect the innocent I'll reference a fictitious social media site called PinFaceTwitPlus).

Of course one way to protect a photo or video (a resource) that you upload is to encrypt it, but then you have two problems: where to put the encryption key, and how to let the people you share with decrypt the resource.  Obviously PinFaceTwitPlus can't have the key or we haven't gained anything, and you can't store the key yourself since that would mean running your own server.  So the solution would seem to be another service that is responsible for handing out decryption keys to those privileged few you have authorised to view your resource.  Let's call this service PrivacyPoint.

Here is how I see this working from the point of view of adding a new resource to PinFaceTwitPlus.  You go to PrivacyPoint in your browser, select a locally stored photo and ask PrivacyPoint to protect it.  In your browser, PrivacyPoint encrypts the photo and embeds both an encrypted key that can be used to decrypt it (the key is randomly generated and encrypted to a key unique to you) and a reference back to PrivacyPoint.  You then upload the encrypted photo to PinFaceTwitPlus and share it with whoever amongst your friends is worthy.
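
To pin down what I mean, here's a rough sketch of that client-side protection step.  I haven't prescribed algorithms, so assume AES-GCM envelope encryption; the PrivacyPoint URL and the function names are invented for illustration.

```python
# A minimal sketch of client-side protection: a fresh content key
# encrypts the photo, the content key is wrapped under a key unique to
# the user (managed via PrivacyPoint), and everything is bundled with a
# pointer back to PrivacyPoint. Only the resulting blob is uploaded.

import os, json, base64
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect_photo(photo_bytes: bytes, user_wrapping_key: bytes) -> bytes:
    content_key = AESGCM.generate_key(bit_length=256)  # random per-photo key
    nonce = os.urandom(12)
    ciphertext = AESGCM(content_key).encrypt(nonce, photo_bytes, None)

    # Wrap the content key under the user's key.
    wrap_nonce = os.urandom(12)
    wrapped_key = AESGCM(user_wrapping_key).encrypt(wrap_nonce, content_key, None)

    b64 = lambda b: base64.b64encode(b).decode()
    envelope = {
        "key_service": "https://privacypoint.example/unwrap",  # hypothetical
        "wrapped_key": b64(wrapped_key),
        "wrap_nonce": b64(wrap_nonce),
        "nonce": b64(nonce),
        "ciphertext": b64(ciphertext),
    }
    return json.dumps(envelope).encode()  # this blob goes to PinFaceTwitPlus
```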

From a friend's point of view it works like this: when PinFaceTwitPlus wants to show them the photo, the encrypted version gets downloaded to their browser, and the page determines that it needs to decrypt the photo.  Using the embedded link to PrivacyPoint, it sends an identifier for the friend and the encrypted key to PrivacyPoint.  Using the friend's identifier, PrivacyPoint asks PinFaceTwitPlus if the friend is allowed to view the photo; if they are, PrivacyPoint decrypts the key and returns it to the friend, whose browser can now locally decrypt the photo and display it.
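
And a matching sketch of PrivacyPoint's side of the exchange, under the same assumptions as above.  The PinFaceTwitPlus authorisation call and the audit log are hypothetical placeholders; the point is the shape of the flow, in which PrivacyPoint only ever handles the wrapped key, never the photo.

```python
import base64
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def pinfacetwitplus_authorises(friend_id: str, resource_id: str) -> bool:
    # Placeholder for a call to PinFaceTwitPlus's (hypothetical)
    # authorisation API: "is this friend allowed to view this resource?"
    return True

def audit_log(friend_id: str, resource_id: str) -> None:
    # Placeholder: append to an access log the owner can later review.
    print(f"key released to {friend_id} for {resource_id}")

def release_key(friend_id: str, resource_id: str,
                wrapped_key_b64: str, wrap_nonce_b64: str,
                owner_wrapping_key: bytes) -> bytes:
    # 1. Defer the access decision to PinFaceTwitPlus.
    if not pinfacetwitplus_authorises(friend_id, resource_id):
        raise PermissionError("friend not authorised for this resource")
    # 2. Record the request so the owner can audit who asked for what.
    audit_log(friend_id, resource_id)
    # 3. Unwrap the content key and return it; the friend's browser then
    #    decrypts the photo locally. The photo itself never comes here.
    wrapped_key = base64.b64decode(wrapped_key_b64)
    wrap_nonce = base64.b64decode(wrap_nonce_b64)
    return AESGCM(owner_wrapping_key).decrypt(wrap_nonce, wrapped_key, None)
```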

The desirable properties of this system are these:
  • PinFaceTwitPlus does not have (direct) access to your resources - a vulnerability in PinFaceTwitPlus would not immediately leak user resources.
  • Your resources do not pass through PrivacyPoint, even though PrivacyPoint could decrypt them - a vulnerability in PrivacyPoint would not immediately leak user resources.
This system is not perfect for the following reasons:
  • PinFaceTwitPlus is acting as the decision point for which of your friends can see which of your resources.  This means it could add itself to your list of friends and grant itself access to your resources.  What makes this issue potentially tolerable is that to access your resources it must ask PrivacyPoint for the decryption key, which means PrivacyPoint has a record of it asking; a record that you could view to audit who accesses your resources.
  • PinFaceTwitPlus can impersonate any of your friends though, so in any access audit log it would just appear as a friend viewing your resource.  I don't see a (sensible) technical solution to this, but I suspect there is pressure on PinFaceTwitPlus not to allow its employees to impersonate its users due to the negative business impact.
From a design point of view the goal has been to separate the resource from the social media site, and this has been done by distributing the trust between two services: PinFaceTwitPlus to store the encrypted resource and decide who has access, and PrivacyPoint to hold the decryption keys and enforce the resource access decision.

Clearly I have left out vast amounts of detail (lots of which would need to be thought through more thoroughly) as it's difficult to express it all in a blog post.

To be honest I wouldn't give this design high practicality points, as it requires a fairly substantial infrastructure in order to work.  Still, many practical systems start out as something that's possible and evolve into something that's probable.

Monday 9 April 2012

Good password paper

So I have just been reading A Research Agenda Acknowledging the Persistence of Passwords (I actually read it in the Jan/Feb 2012 issue of IEEE Security & Privacy) and I wanted to make a quick post about it so I would have a record to reference in the future, and because I thought it was an excellent read that challenges the received wisdom of "passwords are bad, mmmkay".  The point of the article is that it doesn't stand up and say passwords are either good or bad, but that we currently lack the information to know the answer, and that since there is no clear solution emerging on the horizon, we have to take a more pragmatic approach to how we use passwords as a security solution.

Sunday 1 April 2012

OWASP Web Defence Presentation

I went to the OWASP London meeting last Thursday where I watched the excellent Jim Manico give a presentation on the state of the art in web defences (presentation available here).

The talk was largely focused on the most common web app vulnerabilities and how to defend against them.  If I were to over-simplify the message of the presentation, I would say that it focused on training developers and giving them libraries to help mitigate the threats.

Whilst I agreed with everything that was said, and I fundamentally believe that training developers is an essential part of any SDLC, I would have liked to see the main emphasis be on developing frameworks that either virtually eliminate a vulnerability or make it easy to audit that developers are doing the right thing.

I had a chat with Jim in the pub after the presentation and asked him about the focus of his talk; it turns out we pretty much agree that web defences need to be part of a framework (and he was clearly better informed than I am about the state of the art in that department).  His talk, it seems, was focused on the practical things that can be done today.

If I were going to give a similar presentation I think I would feel the need to focus on where we need to get to.  To my mind, vulnerabilities of all types, in any piece of software, are basically impossible to eliminate via training.  This does not make training useless; training is still necessary, I just don't think it's sufficient.

We need to make our web applications secure by design, secure by default and security auditable.  The first two principles are commonly understood; the last is something I don't see people talking about, and I think it is the direction we need to move in.  Any framework will have a way to bypass the default and do something insecure, and that's fine, as that kind of flexibility is usually essential.  What we need is the ability to easily find deviations from security best practice and focus our review efforts on those areas; the sketch below shows the kind of API shape I mean.  Unless we can identify the weak links in our security chain we will never be able to get the kind of security assurance we want.
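
As a sketch of what "security auditable" could look like in a templating API (the names here are invented for illustration, not taken from any real framework): output is HTML-escaped by default, and the only way to emit raw markup is an explicit wrapper that a reviewer can grep for, so review effort lands exactly on the deviations.

```python
# Secure by default, with a greppable escape hatch. Grepping the code
# base for "RawHtml" finds every deliberate bypass of the default.

import html

class RawHtml:
    """Deliberate, searchable bypass of the secure default."""
    def __init__(self, markup: str):
        self.markup = markup

def render(value) -> str:
    if isinstance(value, RawHtml):
        return value.markup           # insecure path: explicit and auditable
    return html.escape(str(value))    # secure default: everything is escaped

print(render("<script>alert(1)</script>"))           # escaped automatically
print(render(RawHtml("<b>trusted markup only</b>"))) # reviewed exception
```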