Sunday 9 September 2012

The Model Developer

In security we spend a lot of time focusing on attackers and imagining all the possible ways they might be able to compromise an application or system.  While I think we are currently immature in our ability to model attackers, the industry does seem to spend some time thinking about this, and generally ends up assuming attackers are very well resourced.

I come from a cryptographic background, and in crypto you tend to define an adversary in a more mathematical way.  When designing a new crypto algorithm the adversary is effectively modelled as another algorithm whose only limitations are that it is not computationally unbounded and that it has no direct access to the secrets of the crypto algorithm.  Apart from that no real assumptions are made, and it is most certainly expected that the adversary is much smarter than you.

For all the time we spend thinking about what attackers can do, I wonder if we should also spend some of that time modelling what developers can do.  Developers, after all, are an essential part of the systems we build.  Let's try and model a developer:
  • Knowledge.  Developers understand how to code securely.
  • Experience.  Developers have experience in coding securely.
  • Time.  Developers are given sufficient time to write secure code.
  • Priority.  Developers prioritise security over functionality.
  • Consistency.  Developers code security the same way every time.
  • Reviewed.  Developer code is thoroughly reviewed for security.
  • Tested.  Developer code is thoroughly tested for security.
[We are actually modelling more than just a developer here, but also the environment or support structures in which they develop, as that directly affects security too.]

How accurate does that model seem to you?  It would be great for the people who design systems and their security if developers could be modelled this way; it would make their jobs a lot easier.  Unfortunately, people who suggest security controls for vulnerabilities are sometimes making an implicit assumption about developers: they have modelled the developer in a certain way without even realising it, and that model is often fairly similar to the one given above.

My favourite example of this is when people say the solution to XSS is to output encode (manually, i.e. all data written to a page is individually escaped).  When this is suggested as a solution it implicitly models the developer as: knowledgeable about how to output encode, experienced in output encoding, having the time to write the extra code, willing to make it a priority, completely consistent and never forgetting to output encode anywhere, and having their code thoroughly reviewed and tested.  Don't misunderstand me, some of these assumptions might be perfectly reasonable for your developers, but all of them?  Consider yourself fortunate if you can model a developer this way.
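To make the consistency assumption concrete, here is a minimal TypeScript sketch (the helper and the page function are hypothetical, not any particular framework's API) of what manual output encoding asks of a developer.  Every single interpolation point must remember the escaping call; the one forgotten call below is an XSS vulnerability:

```typescript
// Minimal HTML escaping helper: neutralises the characters that can
// change the parsing context of an HTML document. The '&' must be
// replaced first, otherwise the entities produced by the later
// replacements would themselves be re-escaped.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// A hypothetical page-rendering function. The control is only as
// strong as the developer's consistency at every single write.
function renderProfile(name: string, bio: string): string {
  return `
    <h1>${escapeHtml(name)}</h1>
    <p>${bio}</p>
  `;
  // The second interpolation forgot escapeHtml: one missed call,
  // anywhere in the codebase, and the page is injectable.
}
```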

Much in the same way that we model an attacker to be as powerful as we can (within reason) when designing systems, I think we also need to model the developers of our system to be as limited as possible (within reason).  It's not that I want people to treat developers as idiots, because they are clearly not; it's that I'd like to see the design of security controls have an implicit (or explicit) model of a developer that is practical.

The Content Security Policy (CSP) is an example of a control that I think comes pretty close to having a practical model of developers: the developer requires knowledge about how to completely separate HTML and JavaScript and how to configure the CSP settings, needs some experience using CSP, and has to take the time to write separate HTML and JavaScript, but doesn't need to prioritise security, doesn't need to try to be consistent (CSP enforces consistency), does require their CSP settings to be reviewed, and does not require extra security testing.  The CSP solution does model a developer as someone that needs to understand CSP and code to accommodate it, which could be argued is a reasonable model for a developer.
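For readers unfamiliar with CSP, here is a minimal sketch (using Node's built-in http module from TypeScript; the policy string is just one plausible example) of what the control looks like.  Once scripts may only be loaded from the site's own origin, injected inline script is refused by the browser regardless of what the page-rendering code forgot to escape:

```typescript
import { createServer } from "http";

// An example policy: all content, including scripts, may only come
// from our own origin, and inline <script> blocks are not executed.
const policy = "default-src 'self'; script-src 'self'";

const html = `<!DOCTYPE html>
<html>
  <head><script src="/app.js"></script></head>
  <body>
    <!-- Even if an attacker injects <script>alert(1)</script> here,
         the browser refuses to run it under this policy. -->
  </body>
</html>`;

createServer((req, res) => {
  res.setHeader("Content-Security-Policy", policy);
  res.setHeader("Content-Type", "text/html; charset=utf-8");
  res.end(html);
}).listen(8080);
```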

Ideally of course we want to model developers like this:
  • Knowledge.  Developers don't understand how to code securely.
  • Experience.  Developers don't have experience in coding securely.
  • Time.  Developers have no time to write secure code.
  • Priority.  Developers prioritise only functionality.
  • Consistency.  Developers code security inconsistently.
  • Reviewed.  Developer code isn't reviewed for security.
  • Tested.  Developer code isn't tested for security.
If our security controls worked even when a developer gave no thought to security at all, then in my opinion that's a great security control.  I can't think of many current controls in the web application space that have this model of the developer.  In the native application world we have managed platforms like .Net and Java whose protections against buffer overflows model the developer this way, as developers on these platforms don't even have to think about buffer overflows.  You might be thinking that's not a great example, as developers are able to write code with a buffer overflow in .Net or Java, i.e. in unsafe code, however I think we have to model developers to be as limited as possible, within reason, and the reality is that unsafe code is a sufficiently rare corner case that we can treat it like the exception it is.
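The same property holds in any memory-safe language.  As a tiny illustration (in TypeScript, to stay consistent with the other sketches here; in Java the equivalent access would throw an ArrayIndexOutOfBoundsException), the runtime bounds-checks every array access, so an out-of-range index simply cannot corrupt adjacent memory, no matter how little the developer thinks about it:

```typescript
const buffer: number[] = [1, 2, 3];

// Reading past the end yields undefined rather than leaking whatever
// happened to sit in adjacent memory.
console.log(buffer[10]); // undefined

// Writing past the end grows the array (leaving holes) rather than
// overwriting a return address or a neighbouring variable.
buffer[10] = 42;
console.log(buffer.length); // 11
```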

Modelling developers in a way that accounts for the practical limitations they face leads me to believe that creating frameworks for developers to work in, a sand-boxed environment if you will, allows security controls to be implemented out of view of developers, enabling them to focus on business functionality.  A framework allows a developer to be modelled as requiring some knowledge, experience, and testing, but minimal time, priority, and consistency.  A framework does still have substantial demands for review though (although I think automating reviews is the key to making this manageable).
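For instance, an auto-escaping template layer (a hypothetical sketch below, not any real framework) applies the output-encoding control behind the scenes: the developer writes ordinary rendering code, and consistency is no longer their problem:

```typescript
// A hypothetical auto-escaping template tag: every interpolated value
// is HTML-escaped whether or not the developer thinks about it.
function html(strings: TemplateStringsArray, ...values: string[]): string {
  const escape = (v: string) =>
    v.replace(/&/g, "&amp;").replace(/</g, "&lt;")
     .replace(/>/g, "&gt;").replace(/"/g, "&quot;");
  return strings.reduce(
    (out, chunk, i) => out + chunk + (i < values.length ? escape(values[i]) : ""),
    ""
  );
}

// The developer writes this as if security didn't exist; the control
// is applied out of view, on every value, every time.
const userName = '<script>alert(1)</script>';
const page = html`<h1>${userName}</h1>`;
console.log(page); // <h1>&lt;script&gt;alert(1)&lt;/script&gt;</h1>
```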

If we can start being explicit about the model we use for developers when we create new security controls (or evaluate existing ones) we can hopefully better judge the benefits and effectiveness of those controls and move closer to developing more secure applications.

Drupal 7 security notes

So I just put together a page on Drupal 7 Security.  It doesn't require a lot of existing knowledge about Drupal, but some appreciation would probably help: at least knowing that Drupal is extendable via Modules and customisable via Hooks.

The notes were created so I could give some advice on securing Drupal 7, and since I didn't have any prior knowledge of Drupal security, the goal of the notes is to bring someone up to speed on what mitigations or approaches Drupal makes available to address certain security threats.

Here are the topics I cover:
  • The Basics
  • Sessions
  • User Login
  • Mixed Mode HTTP/HTTPS
  • CSRF
  • Access Control
  • Dynamic Code Execution
  • Output Encoding
  • Cookies
  • Headers
  • Redirects

What is interesting, after you understand what Drupal offers, is to think about the things it does not offer.  I worry a lot about validating input, and if you use the Drupal Form API then you get a good framework for validation as well; similarly for the Menu system.  However, for other types of input (GET request parameters, Cookies, Headers, etc.) you are on your own.  There are a variety of 3rd party modules that implement various security solutions, e.g. security-related headers, but it would be good if these were part of Drupal Core, as security should never just be an add-on.
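To illustrate the pattern I mean (a hypothetical sketch in TypeScript rather than Drupal's PHP APIs, since I'm describing the idea of framework-owned validation, not Drupal's implementation): input that arrives through a declared schema is checked centrally before any application code runs, while everything else reaches the developer unchecked:

```typescript
// A hypothetical framework-owned validation layer: handlers declare
// what they accept, and the framework enforces it in one place.
type Validator = (value: string) => boolean;

const querySchema: Record<string, Validator> = {
  page: (v) => /^\d+$/.test(v),             // numeric page index
  sort: (v) => v === "asc" || v === "desc", // fixed vocabulary
};

function validateQuery(params: Record<string, string>): boolean {
  // Unknown parameters and invalid values are rejected before any
  // application code ever sees them.
  return Object.entries(params).every(
    ([name, value]) => querySchema[name]?.(value) ?? false
  );
}

console.log(validateQuery({ page: "2", sort: "asc" }));  // true
console.log(validateQuery({ page: "2; DROP TABLE x" })); // false
```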