Saturday 13 October 2012

The horrible asymmetry of processing HTML

So I've got my developer hat on for the purpose of this blog post.

If you were designing a client/server system where you had to transport a representation of an object from the server to the client, I'm almost certain that the code that serialises and de-serialises that object would be the same on both the client and the server.  As an example, imagine a web service: the request and response objects are defined in XML Schema and instantiated as XML on the wire, but both client and server code only ever deal with objects and never directly with XML, i.e. the serialisation and de-serialisation is handled automatically.
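
To make that concrete, here's a minimal sketch in Java of the symmetric case, using JAXB (which shipped with the JDK at the time); the OrderRequest class and its fields are invented purely for illustration:

    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.annotation.XmlRootElement;
    import java.io.StringReader;
    import java.io.StringWriter;

    public class SymmetricSerialisation {

        @XmlRootElement
        public static class OrderRequest {
            public String productId;
            public int quantity;
        }

        public static void main(String[] args) throws Exception {
            JAXBContext ctx = JAXBContext.newInstance(OrderRequest.class);

            // Server side: work with the object and let the framework serialise it.
            OrderRequest request = new OrderRequest();
            request.productId = "ABC-123";
            request.quantity = 2;
            StringWriter wire = new StringWriter();
            ctx.createMarshaller().marshal(request, wire);          // object -> XML

            // Client side: the mirror image, XML -> object.
            // Neither side ever edits the XML text by hand.
            OrderRequest received = (OrderRequest) ctx.createUnmarshaller()
                    .unmarshal(new StringReader(wire.toString()));
            System.out.println(received.productId + " x " + received.quantity);
        }
    }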

So I had a kind of "Huh, that's weird" moment when thinking about how the majority of web application frameworks handle HTML.  Most frameworks have an HTML template on the server side, with some scripting language interspersed in the template to customise it for each request.  So that's how an HTML representation of a web page is generated on the server side, but how is that representation processed on the client side?  Well, as we all know, the browser (or User Agent (UA)) creates a Document Object Model (DOM) from the HTML and displays the web page to the user from that internal object representation.
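
In other words, the typical server-side approach boils down to something like the following rough sketch (the template and names are invented, not taken from any particular framework): the page already exists as HTML text, and the framework splices request-specific values straight into that text.

    public class TemplateExample {
        // The page already exists in its serialised form: HTML text with a placeholder.
        private static final String TEMPLATE =
                "<html><body><h1>Hello, ${name}!</h1></body></html>";

        static String render(String name) {
            // Customising the page means editing the serialised representation directly.
            return TEMPLATE.replace("${name}", name);
        }

        public static void main(String[] args) {
            // The browser will parse this text back into a DOM at the other end.
            System.out.println(render("Alice"));
        }
    }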

So the client-side UA receives a serialised representation of the web page object (in HTML) and de-serialises it to a DOM (an object representation) for processing.  The server side, however, starts with a serialised version of the web page object (in HTML) and makes direct edits to that serialised form (i.e. the HTML) before sending it to the client.

The lack of symmetry hurts that part of me that likes nice design.

So either the way web application frameworks work is particularly clever, and those of us who have been serialising and de-serialising objects symmetrically in every other system we build have really missed a trick.  Or web application frameworks have evolved from the days of static HTML pages into a system where serialised objects are edited directly in their serialised form, a design that would be universally mocked in any other system.

Now, to be fair, web application frameworks have evolved into convenient and efficient systems, so it could be argued that this justifies the current design.  I would worry, though, that that is an institutionalised point of view, since making changes to HTML (the serialised web page object) directly is all we have ever known.  Of course it's the right way to do it, because it's the only way we've ever done it!

I'll be the first to raise my hand and say I'm not sure exactly how you might go about generating web pages in a convenient and efficient way in code, before serialising them as HTML, but I certainly don't think it's impossible.  I accept that any initial design would be less convenient and efficient than what we already have, but that seems inevitable and would change as we gained experience with the new system.
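
As a very rough sketch of the sort of thing I mean, the JDK's XML DOM can stand in for an HTML DOM here: the page is built and manipulated as an object graph, and serialisation happens exactly once, at the very end.  This is only an illustration, not a proposal for a real framework.

    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import java.io.StringWriter;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;

    public class DomFirstExample {
        public static void main(String[] args) throws Exception {
            // Build the page as an object graph, much as the browser will see it.
            Document page = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element html = page.createElement("html");
            Element body = page.createElement("body");
            Element heading = page.createElement("h1");
            heading.setTextContent("Hello, Alice!");
            body.appendChild(heading);
            html.appendChild(body);
            page.appendChild(html);

            // Serialise once, right at the end, when the object is complete.
            Transformer serialiser = TransformerFactory.newInstance().newTransformer();
            serialiser.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
            StringWriter out = new StringWriter();
            serialiser.transform(new DOMSource(page), new StreamResult(out));
            System.out.println(out.toString());
        }
    }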

Now for all I know there may be great reasons why we generate the web pages the way we do, but if there are I'm guessing they're not widely known.  I can certainly think of some advantages to manipulating a web page as an object on the server before serialising it and sending it to the client, and if you think about it for a bit, perhaps you can too?

Sunday 7 October 2012

Scaling Defences

In defending against vulnerabilities in code there is a concept probably best summed up by this quote:
We only need to be lucky once. You need to be lucky every time. 
IRA to Margaret Thatcher 
The concept is basically that an attacker needs to find just one exploitable instance of a vulnerability, whereas a defender needs to mitigate every instance.  If you consider this in the sense of a single vulnerability type, e.g. buffer overflows, then I don't think it's true.  The reason I don't think it's true is that the actual problem is the way we create our defences, that is, the security controls we put in place to mitigate the vulnerability.

Take the buffer overflow example.  If a developer tries to defend against this vulnerability by ensuring that, at each place where he writes to a buffer, his code never writes beyond the boundaries of that buffer, then failing to do this correctly in just one instance might leave a vulnerability for an attacker to find and exploit.  But what if that developer is programming in C#?  There is no need for the developer to be right everywhere in his code, as C# is (effectively) not vulnerable to buffer overflow attacks.  So if we can choose the right defence, we don't need to mitigate every instance.
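
As a toy illustration (Java rather than C#, but the point is the same), an out-of-bounds write in a memory-safe language cannot silently corrupt adjacent memory; the runtime checks every array write, so the developer doesn't have to be right at every single one.

    public class BoundsExample {
        public static void main(String[] args) {
            byte[] buffer = new byte[8];
            byte[] input = "longer than eight bytes".getBytes();

            // No manual bounds check here.  In C this loop could silently
            // overwrite adjacent memory; in Java the runtime checks each write.
            try {
                for (int i = 0; i < input.length; i++) {
                    buffer[i] = input[i];   // throws once i reaches 8
                }
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("Overflow stopped by the runtime: " + e.getMessage());
            }
        }
    }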

For me the next question is: what differentiates defences that are 'right'?  I would argue that one important criterion, often overlooked, is scale.  Taking the buffer overflow example again, if the developer has to check his bounds every time he writes to a buffer, then the number of potential vulnerabilities scales with the number of times buffers are written to.  That's a problem that scales linearly, that is, if there are Y buffers referenced in code, then the number of places you have to check for vulnerabilities is X = aY, where a is the average number of times a buffer is written to.  Other common compensating security controls we put in place to make sure the developer doesn't introduce a vulnerability also tend to scale linearly: code reviewers, penetration tests, security tests, etc.  By this I mean that if you have D developers and C code reviewers, then if you increase to 2D developers you will likely need 2C code reviewers.

If you choose the 'right' defence though, for example using C# or Java, then you don't need either the developer or the compensating controls worrying about buffer overflows (realistically you would probably keep some compensating controls for such things as unsafe code).  Note, I'm not suggesting that changing programming language is a practical solution; I'm just trying to give an example of a control that completely mitigates a vulnerability.

Below is a graphical representation of how the costs of certain types of security controls (defences) scale with the number of possible locations of a vulnerability.

The red line shows the cost of a security control that scales linearly with the number of possible locations of a vulnerability.  The blue line is the cost of the 'right' security control, including an initial up-front cost.

The purple line is for another type of security control, for situations where we do not have a 'right' control that mitigates the vulnerability by design.  This type of control is one where we make accessible, at each location where the vulnerability might exist, some relevant information about the control; for example, annotating input data with their data types (which are then automatically validated).  If this information is then available to an auditing tool where it can be reviewed, then the cost of this type of control scales in a manageable way.
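
A hand-rolled sketch of the idea, with an invented @InputPattern annotation standing in for the data-type declaration: the rule sits right next to the data it protects, one generic routine enforces it, and an auditing tool only has to hunt for fields that lack an annotation.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.lang.reflect.Field;

    // Hypothetical annotation: the expected form of an input is declared where
    // the input is defined, and a single validator reads those declarations.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface InputPattern {
        String value();
    }

    class SearchRequest {
        @InputPattern("[A-Za-z0-9 ]{1,50}")   // the rule travels with the field
        public String query;
    }

    public class AnnotationValidation {
        // One generic check, written once, applied to every annotated field.
        static void validate(Object input) throws IllegalAccessException {
            for (Field field : input.getClass().getFields()) {
                InputPattern rule = field.getAnnotation(InputPattern.class);
                if (rule != null) {
                    String value = (String) field.get(input);
                    if (value == null || !value.matches(rule.value())) {
                        throw new IllegalArgumentException(field.getName() + " failed validation");
                    }
                }
            }
        }

        public static void main(String[] args) throws Exception {
            SearchRequest ok = new SearchRequest();
            ok.query = "hello world";
            validate(ok);                               // passes quietly

            SearchRequest bad = new SearchRequest();
            bad.query = "<script>alert(1)</script>";
            try {
                validate(bad);                          // the rule is checked automatically
            } catch (IllegalArgumentException e) {
                System.out.println(e.getMessage());
            }
        }
    }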

What is also interesting to note from the graph is that the red line initially has a lower cost than the blue line, until they intersect.  This implies that until there is a sufficient number of possible locations for a vulnerability, it is not worth the initial cost overhead of implementing a control that mitigates the vulnerability automatically.  This perhaps goes some way to explaining why we use controls that don't scale: they are just the 'status quo' from when the software was much smaller and it made sense to use them.
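
To put some entirely made-up numbers on that break-even point (the costs below are illustrative, not measured):

    public class BreakEven {
        public static void main(String[] args) {
            double perLocationCost = 10.0;   // red line: cost per possible vulnerability location
            double rightControlSlope = 1.0;  // blue line: residual cost per location
            double upFrontCost = 500.0;      // blue line: initial investment in the 'right' control

            // Red:  cost = perLocationCost * n
            // Blue: cost = upFrontCost + rightControlSlope * n
            // The lines cross where the two costs are equal.
            double breakEven = upFrontCost / (perLocationCost - rightControlSlope);
            System.out.printf("The 'right' control pays off beyond ~%.0f locations%n", breakEven);
        }
    }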

My main point is this: when we design or choose security controls we must factor in how the cost of that control will scale with the number of possible instances of the vulnerability.