Tuesday 1 April 2014

Revisiting Fundamentals: Defence in Depth

I think it is a worthwhile exercise to revisit fundamental ideas occasionally.  We are constantly learning new things, and this new knowledge can affect the way we understand fundamental concepts or assumptions, sometimes challenging them and sometimes offering new insights or facets to appreciate.  I recently learned how Defence in Depth was used in some historic military battles, and it really challenged my understanding of how the principle can be applied to security.

So what is Defence in Depth (DiD) and what is it good for?  My understanding is that DiD originated as a military tactic, but it is more commonly understood in its engineering sense.  It turns out the term means different things in these different disciplines.  I am mostly going to focus on what each meaning implies for application security (as this is what is relevant to me).

Engineering
In engineering, DiD is a design tactic that protects an asset with layers of controls, so that if one control fails (is bypassed by an attacker) another control is still protecting the asset.  By incorporating DiD into an application's security design we add (passive) redundant controls that provide a margin of safety against any one control being bypassed.  The controls we add may sit at different trust boundaries, or act only as partially redundant controls for others, e.g. input validation at the perimeter gives partial protection against XSS, SQL injection, etc.  A sketch of this layering follows below.
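
To make the layering concrete, here is a minimal sketch in Python (the function, table and column names are my own invention, not from any particular application) of three partially redundant controls around a single database lookup: perimeter input validation, a parameterised query, and output encoding.  Any one control can fail and the others still stand.

    import html
    import re
    import sqlite3

    USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{1,32}$")  # perimeter allow-list

    def lookup_user(conn, username):
        # Control 1: input validation at the trust boundary.  Partially
        # redundant: it narrows SQLi and XSS payloads but is not
        # sufficient on its own.
        if not USERNAME_RE.match(username):
            raise ValueError("invalid username")

        # Control 2: parameterised query.  Even if validation is
        # bypassed, the input is never interpreted as SQL.
        row = conn.execute(
            "SELECT display_name FROM users WHERE username = ?",
            (username,),
        ).fetchone()
        if row is None:
            raise LookupError("no such user")

        # Control 3: output encoding.  Even if a malicious value reached
        # the database, it is rendered inert in the HTML response.
        return html.escape(row[0])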

I would say that DiD in the engineering sense is essential to the security design of any application.  Security controls should be expected to fail, so accounting for this by architecting in redundancy to achieve a margin of safety makes sense.  That said, some disadvantages of redundancy have been identified that likely apply to DiD as well.  Briefly:
  • Redundancy can result in a more complex system.  Complex systems are harder to secure.
  • The redundancy is used to justify weaker security controls in downstream systems.
  • The redundancy is used as an excuse to implement risky functionality, which reduces the margin of safety.  Multiple independent features relying on the same margin can have an accumulated effect that reduces or removes it.

Military
DiD in military terms is a layered use of defensive forces and capabilities designed to expend the resources of an attacker trying to penetrate them, e.g. the Battle of Kursk.  What is crucial to appreciate, I think, is that in the physical world the attacker's resources (e.g. troops, guns, bombs, time) are usually in limited supply.  In this way DiD grinds down an enemy, and the defender wins if the attacker's cost/benefit analysis changes so that they either stop or focus on another target.  Defensive forces are also a limited resource, and can be ground down by an attacker.  It is possible for an attack to end in stalemate as well, which may be considered a defensive win.

Taking this point of view then, is protecting an application analogous to protecting a physical resource?  It needs to be analogous in some way, otherwise using DiD (in the military sense) might not be an appropriate tactic for defending applications.

Well, one of the key points in the physical arena is the consumption of resources, but in the application arena it seems less accurate (to me) to say that computing resources are consumed in the same way.  If an attacker uses a computer to attack an application, the computer is not "consumed" in the process, nor is it any less able to be used again.  The same is true of the application's defences.

So the analogy does not hold for the consumption of physical resources.  There are other types of resources though, non-physical ones.  I can think of two attacker resources that are limited and applicable to application security:
  • time - if a defensive position can increase the time an attacker needs to spend performing an attack, the attacker's cost/benefit analysis may change; the "opportunity cost" of that time may become too high.
  • resolve - in the sense that an attacker will focus on a target only for as long as they believe that attacking it is possible and practical.  If a defensive position can make the attacker believe that attacking it is impractical, the attacker will focus their efforts elsewhere.
There is an irony in 'resolve' being a required resource.  The folks who have the skills to attack applications are a funny bunch, I reckon, as they are exactly the type of people who are very persistent; after all, if someone can master the art of reverse engineering or blind SQL injection, they are likely one persistent SOB.  In a sense they are drawn to the problem of (application) security because it requires the very resource they have in abundance.

As an aside, this could be why Advanced Persistent Threat (APT) attacks are considered so dangerous: it's not so much the 'advanced' bit (I rarely read about genuinely advanced attacks) but the 'persistent' bit; the persistent threat is a reflection of the persistence (or resolve) of the attacker.  The risk is that most defences are unlikely to break that resolve.

So if those are the attacker's resources, then that gives us a way to measure how effective our DiD controls are: we need to measure how much they increase time or weaken resolve.

Layered controls work well at increasing the time an attacker has to spend.  The more dead ends they go down, and the more controls they encounter that they cannot bypass, the more time it all takes.  Also, attackers tend to have a standard approach, usually built around a set of tools, so any control that limits the effectiveness of those tools and forces them to attack the application manually will consume more of their time (a sketch of one such anti-automation control follows below).  Ironically though, consuming time could have the opposite effect on the attacker's resolve: there is likely a strong belief that a weakness exists somewhere, and the "sunk cost effect" means the attacker may become even more resolved to find a way in.
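
As an illustration of a control aimed squarely at off-the-shelf tools, here is a sketch of a fresh-per-form token, much like an anti-CSRF token; the helper names and the plain-dict session store are my own simplifications.  Scanners that blindly replay captured requests fail the check and are forced into slower, manual work.

    import hmac
    import secrets

    # Simplified per-session token store; a real application would use
    # its session framework rather than a module-level dict.
    _session_tokens = {}

    def issue_form_token(session_id):
        # Embed this as a hidden field in each rendered form; a fresh
        # token per form breaks tools that replay old requests.
        token = secrets.token_urlsafe(32)
        _session_tokens[session_id] = token
        return token

    def check_form_token(session_id, submitted):
        # Single-use: pop the token so replaying a submission also fails.
        expected = _session_tokens.pop(session_id, None)
        if expected is None:
            return False
        # Constant-time comparison so the check leaks no timing signal.
        return hmac.compare_digest(expected.encode(), submitted.encode())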

I'm not a psychologist, so I can't speak with authority on what would wear down resolve.  I suspect it would involve making the attack frustrating.  I will suggest that any control that makes the application inconsistent or inefficient to attack will help to wear down resolve.

I did a bit of brainstorming to think of (mostly pre-existing) controls or designs an application could implement to consume 'time' and 'resolve' resources:
  • Simplicity.  Attackers are drawn to complex functionality because complexity is the enemy of security; complexity is where the vulnerabilities will be.  The simpler the design and interface (a.k.a. a minimal attack surface), the less opportunity the attacker will perceive.
  • Defensive counter controls.  Any control that reacts to block or limit attacks (or tools); see the first sketch after this list.
    • Rapid Incident Response.  Any detected partial attack should be met with the tightening of existing controls or with attack-specific controls.  If you can fix things faster than the attacker can turn a weakness into a vulnerability, they may lose hope.  I do not pretend this would be easy.
    • Random CAPTCHAs.  Use a CAPTCHA (a good one that requires a human to solve) and make it appear randomly on requests, especially if an attack might be under way against you.
  • Offensive counter controls.  Any control that misleads the attacker (or their tools); see the second sketch after this list.
    • Random error responses.  Replace real error responses with random (but valid-looking) ones; the goal is to make the attacker think they have found a weakness when in reality they haven't.  This could have a detrimental effect on genuine troubleshooting though.
    • Random time delays.  Vary the response time of some requests, especially those that hit the database.  Occasionally adding a one-second delay won't be an issue for most users but could frustrate an attacker's timing attacks.
    • Hang responses.  If you think you are being attacked, you could hang responses, or deliver very slow responses so the connection doesn't time out.
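
First, a minimal sketch of the random-CAPTCHA idea, written as WSGI middleware.  The assumptions are mine: serve_captcha_challenge is a hypothetical WSGI callable that renders and verifies the challenge, and under_attack would be flipped by whatever monitoring you have.

    import random

    class RandomCaptchaMiddleware:
        # Randomly interpose a CAPTCHA: a low base rate when quiet, a
        # raised rate while an attack is suspected.
        def __init__(self, app, serve_captcha_challenge,
                     base_rate=0.01, attack_rate=0.25):
            self.app = app
            self.serve_captcha_challenge = serve_captcha_challenge
            self.base_rate = base_rate
            self.attack_rate = attack_rate
            self.under_attack = False  # set by monitoring (hypothetical)

        def __call__(self, environ, start_response):
            rate = self.attack_rate if self.under_attack else self.base_rate
            if random.random() < rate:
                return self.serve_captcha_challenge(environ, start_response)
            return self.app(environ, start_response)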
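
Second, a sketch of the offensive controls in the same style: random one-second jitter on responses plus, while an attack is suspected, the occasional fake but plausible-looking error.  The error strings and the rates are illustrative, not taken from any real system.

    import random
    import time

    FAKE_ERRORS = [
        b"java.sql.SQLException: syntax error near 'WHERE'",
        b"ORA-00933: SQL command not properly ended",
    ]

    class ActiveDefenceMiddleware:
        # Jitter response times and, while an attack is suspected,
        # occasionally return a plausible-looking but fake error page.
        def __init__(self, app, delay_rate=0.05, fake_error_rate=0.1):
            self.app = app
            self.delay_rate = delay_rate
            self.fake_error_rate = fake_error_rate
            self.under_attack = False  # set by monitoring (hypothetical)

        def __call__(self, environ, start_response):
            if random.random() < self.delay_rate:
                # A one-second delay is barely noticed by real users but
                # poisons an attacker's timing measurements.
                time.sleep(1)
            if self.under_attack and random.random() < self.fake_error_rate:
                start_response("500 Internal Server Error",
                               [("Content-Type", "text/plain")])
                return [random.choice(FAKE_ERRORS)]
            return self.app(environ, start_response)
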
I'm certainly not the first to suggest this approach; it is known as "active defence".  There are even some tools and a book about it (which I cannot vouch for), though the emphasis there may be more on network defensive controls than on the application controls I have focused on.

TL;DR
Defence in Depth in application security should involve redundant controls but may also include active defences.
