Sunday 7 October 2012

Scaling Defences

In defending against vulnerabilities in code there is a concept that is probably best summed up by this quote:
We only need to be lucky once. You need to be lucky every time. 
IRA to Margaret Thatcher 
The concept is basically that an attacker needs to find just one vulnerability amongst all possible instances of that vulnerability, whereas a defender needs to mitigate every instance. If you consider this in the sense of a single vulnerability type, e.g. buffer overflows, then I don't think it's true. The reason I don't think it's true is that the real problem is the way we create our defences, that is, the security controls we put in place to mitigate the vulnerability.

Take the buffer overflow example. If a developer tries to defend against this vulnerability by ensuring, at each place where he writes to a buffer, that his code never writes beyond the boundaries of that buffer, then failing to do this correctly in just one instance might allow an attacker to find and exploit that vulnerability. But what if that developer is programming in C#? There is no need for the developer to be right everywhere in his code, as C# is (effectively) not vulnerable to buffer overflow attacks. So if we can choose the right defence, we don't need to mitigate every instance.
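As a rough sketch of that difference (the class and values below are invented purely for illustration), in Java, as in C#, an out-of-bounds write is trapped by the runtime, so the developer does not have to get a manual check right at every single write site:

    public class BoundsExample {

        // The 'manual' style of defence: correctness depends on the developer
        // writing (and getting right) a check like this at every write site.
        static void writeChecked(byte[] buffer, int index, byte value) {
            if (index < 0 || index >= buffer.length) {
                throw new IllegalArgumentException("index out of range: " + index);
            }
            buffer[index] = value;
        }

        public static void main(String[] args) {
            byte[] buffer = new byte[16];

            // Even with no manual check at all, the runtime refuses the write:
            // this throws ArrayIndexOutOfBoundsException rather than silently
            // corrupting whatever sits next to the buffer in memory.
            try {
                buffer[32] = (byte) 0x41;
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("Out-of-bounds write rejected: " + e);
            }
        }
    }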

For me the next question is: what differentiates defences that are 'right'? I would argue that one important criterion, often overlooked, is scale. Taking the buffer overflow example again, if the developer has to check his bounds every time he writes to a buffer, then the number of potential vulnerabilities scales with the number of times he writes to his buffers. That's a problem that scales linearly: if there are Y buffers referenced in code, then the number of places you have to check for vulnerabilities is X = aY, where a is the average number of times a buffer is written to. Other common compensating security controls we put in place to make sure the developer doesn't introduce a vulnerability also tend to scale linearly: code reviewers, penetration tests, security tests, etc. By this I mean that if you have D developers and C code reviewers, then when you increase to 2D developers you will likely need 2C code reviewers.
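To put some entirely made-up numbers on that linear scaling:

    public class LinearScaling {
        public static void main(String[] args) {
            int buffers = 1_000;            // Y: buffers referenced in the code base
            int writesPerBuffer = 5;        // a: average writes per buffer
            int checkSites = writesPerBuffer * buffers;   // X = aY places to get right

            int developers = 40;             // D
            int reviewers = developers / 10; // C: ratio assumed purely for illustration

            System.out.println("Places to bounds-check: " + checkSites);
            System.out.println("Reviewers needed: " + reviewers
                    + " (roughly doubling to " + (2 * reviewers) + " if developers double)");
        }
    }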

If you choose the 'right' defence though, for example using C# or Java, then neither the developer nor the compensating controls need to worry about buffer overflows (realistically you would probably keep some compensating controls for such things as unsafe code). Note, I'm not suggesting that changing programming languages is a practical solution; I'm just trying to give an example of a control that completely mitigates a vulnerability.

Below is a graphical representation of how the costs of certain types of security controls (defences) scale with the number of possible locations of a vulnerability.

The red line shows the cost of a security control that scales linearly with the number of possible locations of a vulnerability. The blue line shows the cost of the 'right' security control, which includes an initial up-front cost.

The purple line is for another type of security control, for situations where we do not have a 'right' security control that mitigates a vulnerability by design.  This type of security control is one where we make accessible, at the locations where the vulnerability might exist, some relevant information about the control.  For example, annotating input data with their data types (which are then automatically validated).  If this information is then available to an auditing tool where it can be reviewed, then the cost of this type of control scales in a manageable way.
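As a sketch of what I mean (the annotation and validator below are invented for the example, not taken from any particular framework), each input field carries a declared pattern and a single central routine enforces it, so an auditor only needs to review the annotations and that one routine rather than every place the data is used:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.lang.reflect.Field;

    // Hypothetical annotation: declares, at the point where input lands,
    // what that data is supposed to look like.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface InputPattern {
        String value();
    }

    class LoginRequest {
        @InputPattern("[A-Za-z0-9_]{1,32}")   // username: word characters only
        String username;

        @InputPattern("[0-9]{4,8}")           // pin: digits only
        String pin;

        LoginRequest(String username, String pin) {
            this.username = username;
            this.pin = pin;
        }
    }

    public class AnnotationValidation {

        // One central routine enforces every annotation; an auditing tool
        // (or a reviewer) only needs to look here and at the annotations.
        static void validate(Object input) throws IllegalAccessException {
            for (Field field : input.getClass().getDeclaredFields()) {
                InputPattern rule = field.getAnnotation(InputPattern.class);
                if (rule == null) continue;
                field.setAccessible(true);
                Object value = field.get(input);
                if (value == null || !value.toString().matches(rule.value())) {
                    throw new IllegalArgumentException(
                            field.getName() + " failed validation against " + rule.value());
                }
            }
        }

        public static void main(String[] args) throws IllegalAccessException {
            validate(new LoginRequest("alice_01", "1234"));       // passes
            validate(new LoginRequest("alice'; DROP--", "1234")); // throws
        }
    }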

What is also interesting to note from the graph is that the red line has a lower cost than the blue line initially, until they intersect. This implies that until there is a sufficient number of possible locations for a vulnerability, it is not worth the initial cost overhead to implement a control that mitigates the vulnerability automatically. This perhaps goes some way to explaining why we use controls that don't scale: the controls are just the 'status quo' from when the software was much smaller and it made sense to use them.
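A back-of-the-envelope version of that intersection (all figures below are invented units, just to show the shape of the argument): if the per-instance control costs a per vulnerable location and the 'right' control costs a one-off c0 plus a much smaller b per location, the two lines cross at roughly n = c0 / (a - b) locations.

    public class Crossover {
        public static void main(String[] args) {
            double perSiteManual = 2.0;  // a: cost per vulnerable location, manual control
            double upFront = 500.0;      // c0: one-off cost of the 'right' control
            double perSiteRight = 0.1;   // b: residual per-location cost (e.g. unsafe-code review)

            // Linear control:  cost(n) = a * n
            // 'Right' control: cost(n) = c0 + b * n
            // They cross where a * n = c0 + b * n, i.e. n = c0 / (a - b)
            double breakEven = upFront / (perSiteManual - perSiteRight);
            System.out.printf("The two controls cost the same at roughly %.0f locations%n", breakEven);
        }
    }

Below that break-even point the linearly scaling control is the cheaper choice, which is exactly the 'status quo' situation described above.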

My main point is this: when we design or choose security controls, we must factor in how the cost of that control will scale with the number of possible instances of the vulnerability.
