I come from a cryptographic background, and in crypto you tend to define an adversary in a more mathematical way. When designing a new crypto algorithm, the adversary is effectively modelled as another algorithm, limited only in that it is not computationally unbounded and does not have direct access to the secrets of the crypto algorithm. Apart from that, no real assumptions are made, and it is most certainly expected that the adversary is much smarter than you.
For all the time we spend thinking about what attackers can do, I wonder if we should also spend some of that time modelling what developers can do. Developers, after all, are an essential part of the systems we build. Let's try to model a developer:
- Knowledge. Developers understand how to code securely.
- Experience. Developers have experience in coding securely.
- Time. Developers are given sufficient time to write secure code.
- Priority. Developers prioritise security over functionality.
- Consistency. Developers code security the same way every time.
- Reviewed. Developer code is thoroughly reviewed for security.
- Tested. Developer code is thoroughly tested for security.
How accurate does that model seem to you? It would be great for the people who design systems and their security if developers could be modelled this way; it would make their jobs a lot easier. Unfortunately, people who suggest security controls for vulnerabilities are sometimes making an implicit assumption about developers: they have modelled the developer in a certain way without even realising it, and that model is often fairly similar to the one given above.
My favourite example of this is when people say the solution to XSS is to output encode manually, i.e. every piece of data written to a page is individually escaped. When this is suggested as a solution, it implicitly models the developer as: knowledgeable about how to output encode, experienced in output encoding, given the time to write the extra code, willing to make it a priority, completely consistent (never forgetting to output encode anywhere), and having their code thoroughly reviewed and tested. Don't misunderstand me, some of these assumptions might be perfectly reasonable for your developers, but all of them? Consider yourself fortunate if you can model a developer this way.
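To make the consistency assumption concrete, here is a minimal Python sketch (the function names and the use of the standard library's `html.escape` are my own illustration, not anything prescribed above). Two page fragments are rendered; the developer remembered to encode in one and forgot in the other, which is all it takes:

```python
import html

def render_greeting(name: str) -> str:
    # The developer remembered to escape this value before writing it out.
    return "<p>Hello, " + html.escape(name) + "!</p>"

def render_search(query: str) -> str:
    # The developer forgot the escape here; one missed call is enough
    # for attacker-controlled input to land in the page unencoded.
    return "<p>Results for: " + query + "</p>"

payload = "<script>alert(1)</script>"
print(render_greeting(payload))  # script tag arrives encoded, harmless
print(render_search(payload))    # raw script tag: an XSS hole
```

The point is that manual output encoding makes security correctness depend on the developer getting every single output site right, every time.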
Much in the same way that we model an attacker to be as powerful as we can (within reason) when designing systems, I think we also need to model the developers of our system to be as limited as possible (within reason). It's not that I want people to treat developers as idiots, because they are clearly not; it's that I'd like to see the design of security controls rest on an implicit (or explicit) model of a developer that is practical.
Ideally, of course, we want to model developers like this:
- Knowledge. Developers don't understand how to code securely.
- Experience. Developers don't have experience in coding securely.
- Time. Developers have no time to write secure code.
- Priority. Developers prioritise only functionality.
- Consistency. Developers code security inconsistently.
- Reviewed. Developer code isn't reviewed for security.
- Tested. Developer code isn't tested for security.
Modelling developers in a way that accounts for the practical limitations they face leads me to believe that creating frameworks for developers to work in, a sand-boxed environment if you will, allows security controls to be implemented out of view of developers, enabling them to focus on business functionality. A framework allows a developer to be modelled as requiring some knowledge, experience, and testing, but minimal time, priority, and consistency. A framework still places substantial demands on review, though (although I think automating reviews is the key to making this manageable).
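As a sketch of what "out of view of developers" could mean for the XSS case, here is a toy escape-by-default template renderer (the `render` function and its `{name}` placeholder syntax are invented for illustration; real frameworks such as auto-escaping template engines do this far more robustly). The framework escapes every substituted value, so the developer has nothing to remember:

```python
import html
import re

def render(template: str, **values) -> str:
    # The framework escapes every substituted value; the developer
    # writing the template cannot forget to output encode.
    def substitute(match: re.Match) -> str:
        return html.escape(str(values[match.group(1)]))
    return re.sub(r"\{(\w+)\}", substitute, template)

page = render("<p>Hello, {name}!</p>", name="<script>alert(1)</script>")
print(page)  # <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;!</p>
```

Under this design the model of the developer only needs to assume they use the framework's substitution mechanism, not that they know about, prioritise, or consistently apply output encoding.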
If we can start being explicit about the model we use for developers when we create new security controls (or evaluate existing ones) we can hopefully better judge the benefits and effectiveness of those controls and move closer to developing more secure applications.