Samadhic Security<br />
Meandering down the road to security enlightenment...<br />
Dave Soldera<br />
<br />
<h2>Introducing threatware (2022-08-16)</h2>
<p>I'd like to share a tool that I have been working on that helps security and engineering teams to threat model. I'd also like to share the threat modelling process it was designed to support. Both are meant as contributions to make threat modelling easier (at least for some people).</p><p>I'm not going to go over the process in detail; that is captured in this <a href="https://threatware.readthedocs.io/en/main/create/overview.html">overview</a> in the documentation. I will give some context about the origin of the process and the tooling though.</p><p>I have been doing some form of threat modelling for many years. Originally the process was to use my experience as a penetration tester to ask questions and focus on the areas where vulnerabilities usually live. The output was only ever a list of the potential vulnerabilities that were discovered. As I did more architecture and design work, and needed to create generic frameworks to mitigate threats and vulnerabilities, I found myself more and more modelling systems as a means of designing solutions. That modelling then changed my approach to threat modelling, as I sought a more systematic approach, one that could be consistent and lend itself to giving some assurance.</p><p>Of course there were existing threat modelling tools to try to leverage for a more systematic approach. I was not impressed. The tooling I discovered needed me to invest a lot of time and effort, and often the result was a plethora of threats that I knew were mostly noise and offered little value. Many of the tools required complete indoctrination into the tool's methodology, and if your use case wasn't in the sweet spot for the tool, you were out of luck. These problems are by no means unique to threat modelling tools, or security tools, or tools in general.</p><p>So I started developing my own approach. I began to define a template of the information that needed to be gathered in order to determine the threats against a system. That template went through many iterations. Initially it demanded so much information that it was difficult to populate, but the goal was not to miss any threats! Through repeated use it became clear that the balance between the burden of data collection and finding all the threats needed to be refined. That became the process: refine the template, evaluate the findings, and determine what could be stripped out because it offered little value. Eventually the template began to require fewer changes, and I gained confidence that it found an acceptable number of threats for the effort required.</p><p>I would say it was a success! But with success come different problems. The teams I worked with could acknowledge that the template was doing a good job, but they still could not populate it without a lot of hand-holding. As more teams wanted to benefit from using the template, the problem of scale emerged: I was having difficulty keeping up with demand, to say nothing of repeating the same guidance to each team.
The next step was to produce detailed documentation, so teams didn't need to come to me as the first step in populating the template. This helped, but I was still reviewing the output, and the feedback loop for mistakes was too long.</p><p>Could tooling help solve this? The problem was that the template was just a document; I wasn't capturing information in an app where it would be simpler to detect incorrect input. I should write my own app! Wow did I discover that was going to take a long time and a lot of work, and would likely just result in the same inflexibility that frustrated me about other tooling I had used. I didn't want teams to have to learn a new app, and I knew the business didn't (and shouldn't) want to commit to a process tied to an app. I wanted teams to have the flexibility and familiarity that a document provides. In fact, one of the key features that many document solutions provide is inline comments, and these had proved invaluable for reviewing and communicating with teams asynchronously. Replicating that in an app seemed well beyond what a part-time tool development effort could achieve.</p><p>But I could still build some basic tooling: automation that highlighted the really common errors teams were making, which were the truly time-intensive parts of reviewing threat models. So scripts were written. The format of the document was assumed, information was extracted and verification was applied. With something working, I then moved it to an AWS Lambda function so teams could benefit from just a simple request in the browser. Now whenever someone created a new threat model and asked me to review it, the first thing I told them was to run the tool and fix the errors. This was a huge help to me. What surprised me was how positive the reaction was from teams: they loved that they could use a tool to help validate their threat model!</p><p>There was a real 'golden age' for a while as I refined the tool to deliver more value to both me and the teams, and it enabled even more threat models to be created and updated. Unfortunately, with success, scaling becomes a problem that won't go away, and it was clear more changes were needed to support demand, but the tooling I had was creaking at the seams.</p><p>Being a victim of its own success aside, and forgive me for thinking so, what I had created seemed like an approach to threat modelling that others in the industry might find useful, and the seeds were planted for how I could share it with the world. In its current state it was not shareable; it needed to be re-written from scratch (and independently of the company I was working for). There was also the danger of hubris over the template I had created, as it would not suit everyone's situation.</p><p>So I made the decision to build something that could support a flexible format for the template but still be able to provide validation. I would provide an example template that others can use if they so choose, but if someone wants to do something completely different they can do that too. I also decided that the tooling needed to support the management of threat models, as I had first-hand experience of a successful threat modelling process, and managing all those threat models required processes as well.</p><p>And that is how threatware was born.
The <a href="https://threatware.readthedocs.io/en/main/index.html">documentation</a> hopefully gives all the detail you need, and while I use it daily, I would still only consider it in beta, as there are occasional issues that need to be resolved.</p><p>If you need to do threat modelling, you might find it useful. I threat model and it helps me a lot.</p>

<h2>Establishing identity within a group of services (2018-12-27)</h2>
How can a group of services determine each other's identity?<br />
<br />
For the purpose of this discussion, the services are connected (indirectly) to each other over a network, and they are capable of storing secrets, which can be securely distributed to them. A central service can also exist, with which all other services can authenticate, i.e. to which they have a trusted connection.<br />
<br />
If the central service establishes itself as the registration authority and issues credentials (e.g. a secret key, password, private key, etc.), then we have a group where every service that registers can authenticate itself back to the central service in the future. That doesn't immediately allow services to identify themselves to each other though.<br />
<br />
We usually have additional constraints though, such as:<br />
- service-to-service identification should not have to involve the central service every time services communicate, but bootstrapping via trusted centralised information, or occasionally talking to the central service, is acceptable<br />
- the overhead of proving identity should not be a burden<br />
- direct connections between services don't exist e.g. there are intermediate network devices<br />
- replay attacks should not work<br />
- stealing long-term credentials is not in our threat model, but stealing ephemeral credentials should not lead to permanent compromise of an identity<br />
<br />
Secret key pairs<br />
A central service can hand out a key unique to every pair of services; that way each service can sign (MAC) a message, and the receiving service will know it must have originated from its peer, because only the two of them share the key. For N services that requires managing O(N^2) keys. Keys would need an identifier and a lifetime, and would need to be rotated, all without causing any downtime or miscommunication. Creating signatures would be simple and the signatures would be small.<br />
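<br />
As a rough illustration of the pairwise-key option, here is a minimal sketch in Python. This is my own example rather than any particular framework's; the envelope fields (key id, timestamp, tag) are assumptions about what a message might carry, with the timestamp helping to reject replays (a real design would also track nonces within the freshness window):<br />
<pre>
import hashlib
import hmac
import json
import time

def sign_message(pair_key: bytes, key_id: str, body: dict) -> dict:
    """Wrap a message with a key id, timestamp and HMAC tag."""
    envelope = {"key_id": key_id, "ts": int(time.time()), "body": body}
    payload = json.dumps(envelope, sort_keys=True).encode()
    envelope["tag"] = hmac.new(pair_key, payload, hashlib.sha256).hexdigest()
    return envelope

def verify_message(pair_key: bytes, envelope: dict, max_age: int = 60) -> bool:
    """Check the HMAC tag and reject old (possibly replayed) messages."""
    envelope = dict(envelope)
    tag = envelope.pop("tag")
    payload = json.dumps(envelope, sort_keys=True).encode()
    expected = hmac.new(pair_key, payload, hashlib.sha256).hexdigest()
    fresh = time.time() - envelope["ts"] <= max_age
    return hmac.compare_digest(tag, expected) and fresh
</pre>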
<br />
PKI<br />
Each service can get an asymmetric key pair, sign a message with its private key, and the receiving service will verify the signature via the public key, which it obtains from a central trusted location; it will then know the message must have originated from the signer, because only they have the private key. For N services that requires managing O(N) keys. Keys would need an identifier and a lifetime, and would need to be rotated, all without causing any downtime or miscommunication. The signatures that would be sent would be somewhat large, ~1-2KB.<br />
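<br />
A sketch of the verification side, assuming the Python cryptography package, RSA keys, and that the signer's public key has already been fetched from the trusted central location:<br />
<pre>
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify(public_key_pem: bytes, message: bytes, signature: bytes) -> bool:
    """Return True if the signature over the message checks out."""
    public_key = serialization.load_pem_public_key(public_key_pem)
    try:
        public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
</pre>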
<br />
Mutual-TLS<br />
This implies that either the TLS is terminated at each service, or, if it is terminated at some intermediate network point, the client certificate is transmitted to the service as an HTTP header. The same constraints as for PKI apply.<br />
<br />
OIDC/OAuth2<br />
Each service (client) gets an ID Token for each other service (Relying Party) it needs to talk to (the ID Token audience would be the Relying Party service). The lifetime of the ID Token controls how often each service needs to talk to the central service. For N services that requires managing O(N) secrets. The tokens would need to be regularly refreshed at the central service.<br />
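<br />
On the Relying Party side, validating such a token might look like the following sketch. It assumes the PyJWT library, an RS256-signed token and an already-retrieved signing key; the audience and issuer values are placeholders:<br />
<pre>
import jwt  # PyJWT

def validate_id_token(id_token: str, signing_key: str) -> dict:
    """Decode and validate an ID Token addressed to this service."""
    return jwt.decode(
        id_token,
        signing_key,
        algorithms=["RS256"],          # never let the token choose its own algorithm
        audience="service-b",          # this service's identifier (placeholder)
        issuer="https://idp.example",  # the central service (placeholder)
    )
</pre>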
<br />
Kerberos<br />
Designed to solve this problem, but difficult to understand and get assurance of security.<br />
<br />
There are likely others I am missing.<br />

<h2>Shall we play a game? (2018-03-30)</h2>
A while ago I watched a very interesting lecture series on <a href="https://www.thegreatcourses.com/courses/games-people-play-game-theory-in-life-business-and-beyond.html">Game Theory</a> and I wanted to share some of the insights that I got out of it.<br />
<br />
The reason I watched a lecture series on Game Theory is that it shares a lot in common with security. Generally speaking, Game Theory studies how two (or more) interacting players should behave to get their desired result. It certainly isn't about studying "games"; games are just tools to enable analysis. It is perhaps better understood as the study of "strategic decision making".<br />
<br />
One of the first things to understand about looking at a game is the idea of a payoff. It's the reason the players are playing: they want to maximise their payoff. Which leads to the first insight I got from the course:<br />
<br />
<blockquote class="tr_bq"><b>In general, the payoffs for different players cannot be directly compared.</b></blockquote>
What this means is really interesting: the strategy you choose as a player is one that maximises your payoff, and so is every other player's, but all those strategies might be different, because each player is maximising a different payoff. Relating this to security, an example might be a company choosing security controls to minimise the chance of losing their intellectual property, while the attacker is totally focused on gaining control of machines to use for mining cryptocurrency.<br />
<br />
To gain more insights it is helpful to know there are different types of games:<br />
<br />
<ul>
<li>finite games vs infinite games</li>
<li>zero sum games vs non-zero sum games</li>
<li>games of pure strategy vs games of mixed strategy</li>
<li>sequential games vs simultaneous games</li>
<li>games of complete information vs games of incomplete information</li>
<li>cooperative games vs non-cooperative games</li>
</ul>
<br />
From a security perspective if you are trying to choose a strategy to protect an asset against a malicious attacker, then you are playing an infinite, non-zero sum, mixed strategy, simultaneous, incomplete information, non-cooperative game. And yes, people actually study this type of game to determine what an optimal strategy might be (generally given more constraints).<br />
<br />
An important type from the list above is the mixed strategy game. That means that a player will not choose a single strategy, but will choose from multiple strategies that they will play with certain probabilities. The payoff from a mixed strategy game for a player is the sum of the payoffs multiplied by the probability of playing the strategy with that payoff (like calculating an expected value in statistics).<br />
<br />
Now whereas the payoffs for each strategy are fixed, the flexibility comes from a player assigning different probabilities to each strategy. Of course every other player is doing the same thing, and will choose the probabilities with which they play each strategy based on the probabilities they think the other players will choose.<br />
<br />
Relating this back to security, if a strategy has a probability and payoff, then the probability is the likelihood of a certain strategy being chosen, and the payoff is the effect of that strategy, or impact we could say. Then the expected payoff for any strategy is likelihood multiplied by impact, which in security we recognise as how risk is calculated.<br />
<br />
Now for the interesting bit if you are into security. Game Theory has determined that there is a "stable" set of strategies that all players can reach, something called the Nash Equilibrium:<br />
<br />
<blockquote class="tr_bq"><b>A Nash equilibrium is a set of strategies where no player benefits by unilaterally changing his or her strategy in the profile.</b></blockquote>
<br />
This means there is an approach to security where we can construct a range of controls in such a way as to know exactly what our losses will be, and know that the attacker cannot come up with a better approach (set of strategies), as any other approach will lead to the attacker getting a reduced payoff. It's important to point out that a Nash equilibrium does not necessarily lead to the best payoffs for each player, or all players. What's also interesting is that for the type of game we play in security, at least one Nash equilibrium is guaranteed to exist.<br />
<span style="font-family: inherit; white-space: pre-wrap;"><br /></span>
Of course the problem is we don't know how to find the Nash equilibrium. Even if we knew all the possible strategies of all the players, and the payoffs of all the strategies, there isn't a practical way to calculate it. Not to mention that, clearly, we don't know all the strategies or all the payoffs.<br />
<span style="font-family: inherit; white-space: pre-wrap;"><br /></span>
So useless, right? Sure, largely that is right for a quantitative analysis. But there are still useful insights. For instance, in the types of games where we do know how to find the Nash equilibrium (zero-sum or constant-sum games), a player finds it by choosing probabilities for their own strategies that give the other player the same payoff regardless of which strategy the other player chooses (the minimax strategy). A small worked example follows.<br />
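<br />
Here is a toy sketch of that calculation for a 2x2 zero-sum game; the payoff numbers are invented purely for illustration (row player = defender, payoffs are the defender's losses as negative numbers):<br />
<pre>
def equalising_mix(a: float, b: float, c: float, d: float) -> float:
    """For a 2x2 zero-sum game with row-player payoffs [[a, b], [c, d]],
    return the probability p of playing row 1 that gives the opponent
    the same expected result whichever column they pick:
        p*a + (1-p)*c == p*b + (1-p)*d
    """
    return (d - c) / (a - b - c + d)

# Invented payoffs: two defensive postures vs two attack strategies.
a, b, c, d = -1.0, -4.0, -5.0, -2.0
p = equalising_mix(a, b, c, d)   # 0.5 here
expected = p * a + (1 - p) * c   # -3.0, whichever column the attacker picks
print(f"play row 1 with p={p:.2f}, fixed expected payoff {expected:.2f}")
</pre>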
<span style="font-family: inherit; white-space: pre-wrap;"><br /></span>
So what does that mean in practical terms? It means this:<br />
<span style="font-family: inherit; white-space: pre-wrap;"><br /></span>
<blockquote class="tr_bq"><b>Choose your security controls based on the payoffs to the attacker.</b></blockquote>
<span style="font-family: inherit; white-space: pre-wrap;"><br /></span>
Don't choose your security controls based on the impact the attacker's strategy has on you. The goal is to force the attacker into a constant-payoff strategy no matter what they do. This approach doesn't mean you won't suffer losses to an attacker, it just means those losses will be fixed.<br />
<span style="font-family: inherit; white-space: pre-wrap;"><br /></span>
<span style="font-family: inherit; white-space: pre-wrap;">Naturally this approach has a problem, what if the impact (or payoff) to you is unacceptable from a business perspective? Well that leads to another important insight:</span><br />
<span style="font-family: inherit; white-space: pre-wrap;"><br /></span>
<blockquote class="tr_bq"><b>Game Theory can help you determine if you are playing the wrong game.</b></blockquote>
You can change a game in multiple ways, but ultimately it comes down to changing the payoffs for the other players. In practical terms this means changing the amount of value an attacker can derive from the assets of interest to them e.g. no longer store the credit cards that an attacker wants access to.<br />
<br />
What's interesting is that this approach runs counter to the general approach of risk management today, which is to figure out what threats would have the most impact on your business, figure out how likely those threats are, and try to minimise either the impact or the likelihood. Game Theory isn't saying that approach isn't sensible, just that it might not be the best strategy. The simple reason comes back to the very first insight: if you optimise to protect what is of most value to you, you might fail to protect the real target of an attacker, which ultimately might cost you more. Put another way, investing heavily in protecting something that is valuable to the business is a waste of time if no attackers are actually interested in it. For instance, if you have IP that is core to your business, attackers won't care about it unless they can monetise it easily, especially if you have other assets they can monetise more easily.<br />
<br />
The last insight that I wanted to mention is this quote:<br />
<br />
<blockquote class="tr_bq"><b>"The power to constrain an adversary depends on the power to bind oneself" - Thomas Schelling</b></blockquote>
My interpretation of "bind oneself" is the processes you put in place to ensure you have adequate security controls across your business. In appsec this would be your Secure SDLC. The quote is all about the benefits of strictly enforcing controls on your business, so that an adversary is also constrained.<br />
<br />
Game Theory isn't going to solve all the problems we have in security, but it is a very complementary field with many practical insights to offer. If you are looking to broaden your understanding of security, I would thoroughly recommend taking a closer look.

<h2>Website specific password manager (2014-04-16)</h2>
Following on from my <a href="http://www.samadhicsecurity.com/2014/04/removing-usernames-and-enforcing-unique.html">post</a> about the benefits of websites enforcing unique passwords for users, I thought I would try to come up with a better scheme, one that avoided the trade-off between users' ability to remember passwords and password complexity.<br />
<br />
My goal was to devise a way a website could allow users to choose a password they could remember, while gaining the same benefits as if users chose unique passwords. Additionally, the security of the scheme should ideally be better than, but at least no worse than, a (well designed) password authentication scheme today.<br />
<br />
The basic idea is the same as the way a password manager works. Password managers allow a user to choose a strong local password that is used on their client machine to protect a password for a certain website. When a user enters their local password in their password manager, it decrypts the website password, and then that is sent to the website to authenticate the user. Usually the encrypted password(s) are stored online somewhere so the password manager can be used on multiple devices.<br />
<br />
The problem (I use this term loosely) with password managers is that they require users to go to the effort of using one, or perhaps to understand what one is and trust it with their passwords. It would be preferable to get some of the benefits of a password manager on a website without requiring the user to know, or even be aware, one was being used.<br />
<br />
So what I describe below is effectively a website specific password manager (that manages just 1 password, the password of the website). The password management will happen behind the scenes and the user will be none the wiser. The basic idea is that when a user registers with a website and chooses a password, that password will actually be a local password used to encrypt the actual website password. The actual website password will be chosen by the website, be very strong and be unique (to the website). When a user tries to log on, the following would happen:<br />
<ul>
<li>The user will be prompted for their username and (local) password,</li>
<li>The entered username will be sent to the website and used to recover the encrypted password store (encrypted with the local password and containing the actual website password) for that user, which is then sent to the client</li>
<li>On the client the local password is used to decrypt the encrypted password store to recover the website password</li>
<li>The website password is sent to the website to authenticate the user.</li>
</ul>
Ignoring security for a second, we can improve the efficiency of the scheme by caching the encrypted password store on the client. In fact the only reason the encrypted password store is stored on the server is to accommodate login from multiple devices. It's an assumed requirement that login from multiple devices is necessary and that registering a device with a website doesn't scale sufficiently for it to be an option.<br />
<br />
To examine the security of the scheme we need more details, so I provide those below. But first I will say what security threats we are NOT looking to mitigate. This scheme doesn't concern itself with transport security (it's assumed HTTPS is used for communications), doesn't protect against XSS attacks (attacker script in the client will be able to read passwords when they are entered), and doesn't protect against online password guessing attacks (it's assumed the website limits the number of password guesses).<br />
<br />
The threats we are concerned about though are:<br />
<ul>
<li>An attacker that gets an encrypted password store for a user should not be able to recover the user's local password in an offline attack.</li>
<li>An attacker that gets a DB dump of password hashes should not be able to recover either the website password or the local password.</li>
</ul>
So let's get into the detail of the scheme. Here is a description of the algorithms and parameters involved.<br />
<ul>
<li>Hash(), a password hashing algorithm like PBKDF2, bcrypt or scrypt. Its output size is 256 bits.</li>
<li>HMAC_K(), this is HMAC-SHA256 using a secret key K.</li>
<li>K, a 256 bit secret key used by the website solely for password functionality, it is not per-user, it is the same for all users.</li>
<li>xor, this means the binary exclusive-or of two values.</li>
<li>PwdW, the website password, a 256 bit strong random value.</li>
<li>PwdL, the local password, chosen by the user.</li>
</ul>
<div>
Let's start with the registration process, a new user signs up:</div>
<div>
<ul>
<li>A user chooses to register with the website.</li>
<li>The website generates a new PwdW and returns this to the client.</li>
<li>The user chooses a password (and username if appropriate), this is PwdL.</li>
<li>The client sends to the website PwdW xor Hash(PwdL).</li>
<li>The website stores (against the username) PwdW xor Hash(PwdL), and also PwdW xor HMAC_K(PwdW).</li>
</ul>
<div>
I've skipped over some of the implementation details here (like the Hash parameters etc.), but hopefully provided enough detail for analysis. One important point is that the website does not store the value PwdW anywhere; it only stores the 2 values in the last step, namely:</div>
</div>
<div>
<ul>
<li>PwdW xor Hash(PwdL) (this is the encrypted password store from the description above)</li>
<li>PwdW xor HMAC_K(PwdW) (this is the equivalent of the password hash in most schemes, it is used to verify that any PwdW submitted by a client is correct)</li>
</ul>
</div>
<div>
Now let's see what happens during login (a code sketch of the full scheme follows the list):</div>
<div>
<ol>
<li>User visits the website and enters their username and PwdL</li>
<li>The client sends the username to the website, which looks up the encrypted password store, PwdW xor Hash(PwdL) and returns it to the client</li>
<li>The client generates Hash(PwdL) and uses this to recover PwdW by calculating (PwdW xor Hash(PwdL)) xor Hash(PwdL) = PwdW</li>
<li>The client sends PwdW to the website</li>
<li>The website recovers HMAC_K(PwdW) by calculating (PwdW xor HMAC_K(PwdW)) xor PwdW = HMAC_K(PwdW) (let's call this Recovered_HMAC).</li>
<li>The website calculates HMAC_K(PwdW) using the received value of PwdW from the client (let's call this Calculated_HMAC).</li>
<li>If Recovered_HMAC = Calculated_HMAC the user is successfully authenticated.</li>
</ol>
</div>
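<br />
Here is a minimal Python sketch of both flows, with the client and website collapsed into one process purely for illustration. The PBKDF2 parameters and the constant Hash() salt are assumptions I have made to keep the sketch short; they are not part of the scheme as described:<br />
<pre>
import hashlib
import hmac
import os

K = os.urandom(32)  # website-wide secret key (ideally held in an HSM)

def Hash(pwd_l: str) -> bytes:
    # Placeholder parameters; a real deployment would tune these and
    # use a proper salt rather than a constant.
    return hashlib.pbkdf2_hmac("sha256", pwd_l.encode(), b"site-constant", 100_000)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def register(pwd_l: str) -> tuple[bytes, bytes]:
    """Return the two values the website stores; PwdW itself is never stored."""
    pwd_w = os.urandom(32)           # website password, chosen server-side
    store = xor(pwd_w, Hash(pwd_l))  # the encrypted password store
    check = xor(pwd_w, hmac.new(K, pwd_w, hashlib.sha256).digest())
    return store, check

def client_login(pwd_l: str, store: bytes) -> bytes:
    """Client side: recover PwdW from the encrypted password store."""
    return xor(store, Hash(pwd_l))

def website_verify(pwd_w: bytes, check: bytes) -> bool:
    """Website side: recover the stored HMAC and compare with a fresh one."""
    recovered = xor(check, pwd_w)
    calculated = hmac.new(K, pwd_w, hashlib.sha256).digest()
    return hmac.compare_digest(recovered, calculated)

store, check = register("correct horse battery staple")
assert website_verify(client_login("correct horse battery staple", store), check)
assert not website_verify(client_login("wrong guess", store), check)
</pre>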
So let's address the threats we are concerned about. The first is "An attacker that gets an encrypted password store for a user should not be able to recover the user's local password in an offline attack." Let's assume an attacker knows the username they want to target, and that they download the encrypted password store (PwdW xor Hash(PwdL)) for that user. If we accept that PwdW is a random 256-bit value, then the attacker is unable to determine the value of Hash(PwdL), because there is a value of PwdW that gives every possible hash value of every possible local password that could be chosen. This is the security property of the one-time pad. Of course an attacker could instead generate Hash(PwdL) (by guessing PwdL) and calculate PwdW, but they would have no idea if they were correct and would have to submit it to the website to find out; however, we are assuming there are controls in place that limit the number of online password guessing attempts.<br />
<br />
The other threat we consider is "An attacker that gets a DB dump of password hashes should not be able to recover either the website password or the local password." There are 2 scenarios we want to consider, when the attacker does not have access to the secret key K, and when they do.<br />
<br />
If the attacker does not have access to K, then the attacker has access to (PwdW xor Hash(PwdL)) and (PwdW xor HMAC_K(PwdW)). If the attacker tries to guess PwdW offline, they have no way to determine if a guess is correct, because they cannot verify the HMAC without the key K. They can of course guess PwdW online by submitting it to the website, but we assume controls that mitigate this attack. If the attacker tries to guess PwdL, they have exactly the same problem of being unable to verify offline whether the calculated PwdW is correct.<br />
<br />
If the attacker does have access to K then they have the same information that the website has and so can perform an offline (dictionary) attack against the local password PwdL. How successful this attack is depends on the parameters of the password hashing scheme (Hash()) that were chosen and the complexity of the user chosen password PwdL. For the purposes of this scheme though the attacker in this scenario has the same chance of success as they would have against the current industry best practice of storing a password hash in the DB (in a scenario where those hashes were extracted by the attacker and attacked offline).<br />
<br />
So that's a quick and dirty description and security analysis of a novel password authentication and protection scheme for a website. The benefits of this scheme are:<br />
<ul>
<li>The website never gets to see the local password chosen by the user. This is not absolute though; it assumes a trusted website (and trusted 3rd party resources the website requests). This is a benefit because:</li>
<ul>
<li>Websites cannot accidentally expose the user's local password, either internally or externally.</li>
<li>A malicious insider working for the website would have a much more difficult task in obtaining local passwords with an intent of using those passwords on other websites.</li>
</ul>
<li>A successful SQL Injection attack that dumped out the password hash DB table would not allow an attacker to compromise any accounts.</li>
<li>If the encryption key K was stored and used in an HSM, then it would be impractical under any circumstances for an attacker to mount an offline attack (short of physical access to the user!)</li>
</ul>
But let's also be clear about the problems:<br />
<ul>
<li>You probably shouldn't be implementing any scheme you read about in a blog. Such things need to be reviewed by experts and their acceptance needs to reach a level of consensus in the security community. Anyone can create a security system that they themselves cannot break.</li>
<li>Lack of implementation details. Even if the theory is sound there are several pitfalls to avoid in implementing something like this. This means it's going to be difficult to implement securely unless you are very knowledgeable about such things.</li>
<li>If you are using a salted password-hash scheme now and went through and encrypted those hashes (as another layer of security), you basically get the same level of security as this scheme proposes. The benefit of this scheme is that the user's password is not sent to the website.</li>
<li>Password changing. It may be more difficult to mandate or enforce that users change their local password, because the change must be done on the client side.</li>
<li>JavaScript crypto. The Hash() algorithm would need to be implemented in JavaScript. The first problem is implementing crypto algorithms in JavaScript in the browser raises <a href="http://www.matasano.com/articles/javascript-cryptography/">several security concerns</a>. The second is the Hash algorithm is computationally expensive which means performance might negatively affect usability, perhaps more so when implemented in JavaScript, and perhaps even more so on certain devices e.g. mobile.</li>
<li>Lastly, I didn't quite achieve my goal of allowing users to choose weaker passwords as in the worst case attack (attacker has access to the secret key), the use of weak passwords would make recovering those passwords relatively simple.</li>
</ul>
But hey, it's a novel (as far as I know) approach, and it might be useful to someone else designing a solution for website authentication.

<h2>Removing usernames and enforcing unique passwords (2014-04-04)</h2>
I often have "ideas" that, in the haystack of my mind, are usually straws rather than that elusive needle. One recent one was whether a username is necessary to log in to a website. Usually the Internet is excellent at helping identify straws, and this idea is no exception, see http://blog.codinghorror.com/why-do-login-dialogs-have-a-user-field/. But let's follow through on the idea and see where it takes us.<br />
<br />
So the basic idea is that when a user wants to authenticate to a website, all they do is provide their password and the website will use that information to identify them.<br />
<br />
The draw of removing usernames for me is really an efficiency one, as it's not immediately clear that usernames are required to authenticate to a website.<br />
<br />
The obvious problem though is if someone tries to register with the same password as an existing user. Since we can't have 2 users with the same password (more on unique passwords later), we must reject the registration, which reveals that the chosen password is a valid credential for another user. We can overcome this problem by forcing the existing user with that password to change their password (using an assumed pre-existing out-of-band password reset mechanism, e.g. email). This highlights one benefit of authenticating with the password only: it creates an incentive for the user to choose a strong password (as otherwise they will continually have to change their password when new users attempt to register with the same one).<br />
<br />
From the website's perspective there is a problem though: how do you identify the user? If passwords are being stored using a salted password-hashing algorithm, then it would be too inefficient for the website to, for each row in the password hash DB table, fetch the salt for that row, generate the password hash (using the password of the user trying to authenticate), and then compare it against the stored password hash in that row. That approach simply does not scale. We certainly don't want to use a password hash without a salt, or with a fixed salt (as this makes dictionary attacks on the password hash DB table much quicker in the event the table is exposed).<br />
<br />
One option is to encrypt the password and use that as the salt for the password hash. To verify a login attempt the website would encrypt the password to create the salt, calculate the password hash (using the salt) and compare with the list of stored password hashes (potentially sorted for efficient searching). It's important to point out this encrypted password, which we are using as the salt, is not stored, but calculated during the login attempt.<br />
<br />
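As a sketch of this option (my own construction, with a keyed HMAC standing in as the 'encryption' of the password, and an invented iteration count):<br />
<pre>
import hashlib
import hmac

K = bytes.fromhex("00" * 32)  # placeholder; keep this key out of the DB (ideally in an HSM)

def derived_salt(password: str) -> bytes:
    # The "encrypted password" used as the salt; computed, never stored.
    return hmac.new(K, password.encode(), hashlib.sha256).digest()

def password_hash(password: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(),
                               derived_salt(password), 100_000)

def login(password: str, stored_hashes: set[bytes]) -> bool:
    # One hash computation, then a single lookup; no per-row salt fetching.
    return password_hash(password) in stored_hashes
</pre>
<br />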
If the password hash store was ever compromised (and we always assume it will be), then it will be impossible to brute-force the passwords without the encryption key as well (as the salt will not be known and will not be practical to guess). Thus the security of this approach relies on protecting the encryption key. The key should not be stored in the DB, as the likely extraction method for the password hash DB table is SQL Injection, meaning if the key was in the DB it too could be extracted. The key should be stored somewhere the DB cannot access (the goal is to require the attacker to find a separate exploit to obtain the key). It could be stored in configuration files, but the best option would be an HSM, with encryption done on-board. By the stage an attacker has code executing that can talk to the HSM, it's game over anyway. If the encryption key was obtained by an attacker, then they could perform a dictionary attack on the password hash DB table more efficiently than against traditional salted password hashes.<br />
<br />
We can make another optimisation for security as well. We should keep track of the passwords that more than one user has chosen in the past i.e. password collisions discovered during user registration. After all, we don't want to force a user to change their weak password, only for that password to become available for use again! This way we will avoid the re-use of weak passwords e.g. 123456. Now imagine we had a list of passwords that we know more than one person had thought of. Now imagine we publicly shared that list (so other websites could avoid letting users choose those passwords as well).<br />
<br />
So let's imagine we now have an application that doesn't require usernames for authentication and all users have unique passwords. What are the threats? Well a distributed dictionary attack against our application is a problem because every password guess is a possible match for every user. Annoyingly the more users we have the better the chances of the guess being right. Additionally, limiting the number of guesses is more difficult since authentication attempts are not user specific. This makes clear the benefit of having usernames; they make online authentication attacks much harder.<br />
<br />
So my conclusion was that although usernames might not be strictly necessary, they do offer significant security benefits. From a user perspective as well there is minimal burden in using usernames as browsers (or other user agents) often remember the username for convenience.<br />
<br />
But what about the benefits of unique passwords! What about the incentive we gained when users were forced to choose strong passwords? Well what if we keep usernames AND still forced passwords to be unique? Could this be the best of both worlds? Might I have just pricked my finger on a needle?<br />
<br />
The 'sticking point' for me is the user acceptability of forcing unique passwords. It may drive the uptake of password managers or strong passwords, or it might annoy the hell out of people. Perhaps for higher security situations it could be justified.<br />
<h2>Revisiting Fundamentals: Defence in Depth (2014-04-01)</h2>
I think it is a worthwhile exercise to revisit fundamental ideas occasionally. We are constantly learning new things, and this new knowledge can affect the way we understand fundamental concepts or assumptions, sometimes challenging them and sometimes offering new insights or facets to appreciate. I recently learned how Defence in Depth was used in some historic military battles, and it really challenged my understanding of how the principle can be applied to security.<br />
<br />
So what is Defence in Depth (DiD) and what is it good for? My understanding is that DiD originated as a military tactic, but is more commonly understood in its engineering sense. It turns out the term has different meanings in these different disciplines. Largely I am going to focus on the meanings with regard to application security (as this is relevant to me).<br />
<br />
<span style="font-size: large;">Engineering</span><br />
In engineering, DiD is a design tactic to protect a resource by using layers of controls, so that if one control fails (is bypassed by an attacker) there is another control still protecting the asset. By incorporating DiD into an application's security design we add (passive) redundant controls that provide a margin of safety against a control being bypassed. The controls we add may be at different trust boundaries, or act as only a partially redundant control for other controls e.g. input validation at the perimeter partially helps protect against XSS, SQLi, etc.<br />
<br />
I would say that adding DiD in the engineering sense is essential to any security design of an application. Security controls should be expected to fail, and so accounting for this by architecting in redundancy to achieve a margin of safety makes sense. I will say that some <a href="http://en.wikipedia.org/wiki/Redundancy_%28engineering%29#Disadvantages">disadvantages to redundancy</a> have been identified that likely apply to DiD as well. Briefly:<br />
<ul>
<li>Redundancy can result in a more complex system. Complex systems are harder to secure.</li>
<li>The redundancy is used to justify weaker security controls in downstream systems.</li>
<li>The redundancy is used as an excuse to implement risky functionality which reduces the margin of safety. Multiple independent functionality relying on the margin can have an accumulated effect that reduces or removes the margin of safety.</li>
</ul>
<br />
<span style="font-size: large;">Military</span><br />
DiD in military terms is a layered use of defensive forces and capabilities that are designed to expend the resources of the attacker trying to penetrate them e.g. <a href="http://en.wikipedia.org/wiki/Battle_of_Kursk">Battle of Kursk</a>. What's crucial to appreciate, I think, is that in the physical world the attacker's resources (e.g. troops, guns, bombs, time) are usually limited in supply. In this way DiD grinds down an enemy and the defender wins if the cost/benefit analysis of the attacker changes so they either stop or focus on another target. Additionally, defensive forces are also a limited resource and can be ground down by an attacker. It is possible for an attack to result in stalemate as well, which may be considered a defensive win.<br />
<div>
<br /></div>
<div>
Taking this point of view then, is protecting an application analogous to protecting a physical resource? We need it to be analogous in some way otherwise using DiD (in the military sense) might not be an appropriate tactic to use for defending applications.<br />
<br />
Well one of the key points in the physical arena is the consumption of resources, but in the application arena it seems less accurate (to me) to say that computing resources are consumed in the same way. If an attacker uses a computer to attack an application, the computer is not "consumed" in the process, or less able to be used again. The same is true for the application defences.</div>
<div>
<br /></div>
<div>
So physical resources are not consumed in the same way in the physical and application security arenas. There are other types of resources though, non-physical resources. I can think of 2 resources of an attacker that are limited and applicable to application security:</div>
<div>
<ul>
<li>time - as a resource, if a defensive position can increase the time an attacker needs to spend performing an attack, the cost/benefit analysis of the attacker may change, the "opportunity cost" of that time may be too high.</li>
<li>resolve - in the sense that an attacker will focus on a target for as long as they believe that attacking that target is possible and practical. If a defensive position can make the attacker believe that attacking it is impractical, then the attacker will focus their efforts elsewhere.</li>
</ul>
</div>
<div>
There is an irony in 'resolve' being a required resource. The folks who have the appropriate skills to attack applications are a funny bunch, I reckon, as they are exactly the type of people who are very persistent; after all, if someone can master the art of reverse engineering or blind SQL injection, they are likely one persistent SOB. In a sense they are <i>drawn</i> to the problem of (application) security because it requires the very resource they have in abundance.</div>
<div>
<br /></div>
As an aside, this could be why APT attacks are considered so dangerous, it's not so much the advanced bit (I rarely read about advanced attacks), but the 'persistent' bit; the persistent threat is a reflection of the persistence (or resolve) of the attacker. The risk being that most defences are going to be unlikely to break that resolve.<br />
<br />
So if those are the resources of the attacker then that gives us a way to measure how effective our DiD controls are; we need to measure how they increase time or weaken resolve.<br />
<br />
Layered controls work well in increasing the time the attacker has to spend. The more dead ends they go down, and the more controls they encounter that they cannot bypass, the more time it all takes. Also, attackers tend to have a standard approach, usually a set of tools they use, so any controls that limit the effectiveness of those tools and force them to attack the application manually will consume more of their time. Ironically though, consuming time could have the opposite effect on the resolve of the attacker, since there is likely a strong belief that a weakness exists somewhere, and also the "<a href="http://www.skepdic.com/sunkcost.html">sunk cost effect</a>", meaning the attacker may become more resolved to find a way in.<br />
<br />
I'm not a psychologist, so I can't speak with authority on what would wear down resolve. I suspect it would involve making the attack frustrating though. I will suggest that any control that makes the application inconsistent or inefficient to attack will help to wear down resolve.<br />
<br />
I did a bit of brain-storming to think of (mostly pre-existing) controls or designs an application could implement to consume 'time' and 'resolve' resources:<br />
<ul>
<li>Simplicity. Attackers are drawn to complex functionality because complexity is the enemy of security. Complexity is where the vulnerabilities will be. The simpler the design and interface (a.k.a minimal attack surface area) the less opportunity the attacker will perceive.</li>
<li>Defensive counter controls. Any controls that react to block or limit attacks (or tools). </li>
<ul>
<li>Rapid Incident Response. Any detection of partial attacks should be met with the tightening of existing controls or attack-specific controls. If you can fix things faster than the attacker can turn a weakness into a vulnerability, then they may lose hope. I do not pretend this would be easy.</li>
<li>Random CAPTCHAs. Use a CAPTCHA (a good one that requires a human to solve), make these randomly appear on requests, especially if an attack might be under way against you.</li>
</ul>
<li>Offensive counter controls. Any control that misleads the attacker (or their tools).</li>
<ul>
<li>Random error response. Replace real error responses with random (but valid looking) error responses; the goal is to make the attacker think they have found a weakness when in reality they haven't. This could have a detrimental effect on actual troubleshooting though.</li>
</ul>
<ul>
<li>Random time delays. Vary the response time of some requests, especially those that hit the database. Occasionally adding a 1 second delay won't be an issue for most users but could frustrate an attacker's timing attacks (a sketch of this follows the list).</li>
<li>Hang responses. If you think you are being attacked, you could hang responses, or deliver very slow responses so the connection doesn't time out.</li>
</ul>
</ul>
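As an illustration of the random-delay idea, here is a minimal WSGI middleware sketch in Python (the probability and delay bounds are arbitrary numbers I picked):<br />
<pre>
import random
import time

def random_delay_middleware(app, probability=0.05, max_delay=1.0):
    """Wrap a WSGI app so a small fraction of requests are randomly delayed."""
    def wrapper(environ, start_response):
        if random.random() < probability:
            time.sleep(random.uniform(0.1, max_delay))  # jitter to muddy timing signals
        return app(environ, start_response)
    return wrapper

# Usage: app = random_delay_middleware(app)
</pre>
<br />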
I'm certainly not the first to suggest this approach, it is known as "active defence". There are even some <a href="http://sourceforge.net/p/adhd/wiki/browse_pages/">tools</a> and a <a href="http://www.amazon.com/Offensive-Countermeasures-The-Active-Defense-ebook/dp/B00DQSQ7QY">book</a> about it (which I cannot vouch for). The emphasis may be more on network defensive controls rather than the application controls that my focus has been on.<br />
<div>
<br />
<span style="font-size: large;">TL;DR</span><br />
Defence in Depth in application security should involve redundant controls but may also include active defences.</div>
<h2>Making Java RIAs MIA - Enterprise Style (2014-03-25)</h2>
This post is all about how to disable Java in the browser, or alternatively, making Java Rich Internet Applications (RIAs) missing-in-action. There is a lot of information out there on how to do this, but I couldn't find a good source that covered it from an Enterprise point of view, that is, how to disable Java in the browser at scale (rather than just on your own machine using a GUI), across multiple browsers and OSes.<br />
<br />
First let's cover the obvious: why do we want to do this? See my page on <a href="http://www.samadhicsecurity.com/p/java-vulnerabilities.html">Java Vulnerabilities</a>. Basically, if Java is enabled in your browsers then you are opening yourself up to a world of pain. If you have Java enabled in your browsers then you have malware somewhere in your Enterprise.<br />
<br />
What we want is to understand the options for disabling Java in the browser, and the settings we need to configure.<br />
<br />
<b><span style="font-size: x-large;">Browser Agnostic</span></b><br />
<span style="font-size: large;">Uninstall Java</span><br />
<div>
This is obviously a good option as it removes the risk of Java. However, depending on your environment, users may just re-install Java, so unless you are actively scanning for Java installations across your Enterprise, this approach might not be as effective as you'd like it to be.</div>
<div>
<br /></div>
<div>
For that reason I'm not going to focus on this solution too much, but for non-Windows machines see <a href="http://www.java.com/en/download/help/linux_uninstall.xml">Oracle's advice</a> on uninstalling and this <a href="http://askubuntu.com/questions/84483/how-to-completely-uninstall-java">post</a>.</div>
<div>
<br /></div>
<div>
For Windows you can detect the installed versions of Java using:</div>
<div class="Code">wmic product where "name like 'Java%'" get name</div>
<div>
and uninstall these via:</div>
<div class="Code">wmic product where "name like 'Java%'" call uninstall</div>
<div>
<br /></div>
<div>
<span style="font-size: large;">Java Deployment Options</span></div>
<div>
From Java version 7 update 10, a <a href="http://docs.oracle.com/javase/6/docs/technotes/guides/deployment/deployment-guide/properties.html">deployment option</a> exists to disable Java content in <b>all</b> browsers (these are the same options as available in the Java Control Panel). This option is a feature of Java and so is available on both Windows and non-Windows OSes.</div>
<div>
<br /></div>
<div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;">The option is stored in a
</span><span style="font-family: Courier New, Courier, monospace;">deployment.properties</span><span style="font-family: inherit;"> file. The user has
their own version of this file stored at (on Windows and non-Windows
respectively):</span><o:p></o:p></span></div>
<div class="Code">
<span lang="EN-US" style="background-color: #cccccc;"><span style="font-family: Courier New, Courier, monospace;"><user_profile>\AppData\LocalLow\Sun\Java\Deployment\deployment.properties<o:p></o:p></span></span></div>
<div class="Code">
<span lang="EN-US"><span style="background-color: #cccccc; font-family: Courier New, Courier, monospace;"><user home>/.java/deployment/deployment.properties</span><o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;"><br /></span></span>
<span lang="EN-US"><span style="font-family: inherit;">A system version, containing the default
values for all users, can be created and stored at (on Windows and non-Windows
respectively):<o:p></o:p></span></span></div>
<div class="Code">
<span lang="EN-US" style="background-color: #cccccc;"><span style="font-family: Courier New, Courier, monospace;">%WINDIR%\Sun\Java\Deployment\deployment.properties<o:p></o:p></span></span></div>
<div class="Code">
<span lang="EN-US"><span style="font-family: Courier New, Courier, monospace;"><span style="background-color: #cccccc;">/etc/.java/deployment/deployment.properties</span><o:p></o:p></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;"><br /></span></span>
<span lang="EN-US"><span style="font-family: inherit;">This system version is only used by Java if a
deployment.config file exists at (on Windows and non-Windows respectively): <o:p></o:p></span></span></div>
<div class="Code">
<span lang="EN-US" style="background-color: #cccccc;"><span style="font-family: Courier New, Courier, monospace;">%WINDIR%\Sun\Java\Deployment\deployment.config<o:p></o:p></span></span></div>
<div class="Code">
<span lang="EN-US"><span style="font-family: Courier New, Courier, monospace;"><span style="background-color: #cccccc;">/etc/.java/deployment/deployment.config</span><o:p></o:p></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;">and contains an option that specifies the
location of the system deployment.properties to use e.g. on Windows:<o:p></o:p></span></span></div>
<div class="Code">
<span lang="EN-US" style="background-color: #cccccc;"><span style="font-family: Courier New, Courier, monospace;">deployment.system.config=file\:C:/WINDOWS/Sun/Java/Deployment/deployment.properties<o:p></o:p></span></span></div>
<div class="Code">
<span lang="EN-US"><span style="font-family: Courier New, Courier, monospace;"><span style="background-color: #cccccc;">deployment.system.config.mandatory=true</span><o:p></o:p></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;"><br /></span></span>
<span lang="EN-US"><span style="font-family: inherit;">The system version contains the default
values that the user version will use, but the user can override these. That is unless the system version indicates
that the option should be locked (which means the user cannot change the option).<o:p></o:p></span></span><br />
<span lang="EN-US"><span style="font-family: inherit;"><br /></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;">So to disable the plugin across all
browsers and users the system deployment.properties should contain:<o:p></o:p></span></span></div>
<div class="Code">
<span lang="EN-US" style="background-color: #cccccc;"><span style="font-family: Courier New, Courier, monospace;">deployment.webjava.enabled=false<o:p></o:p></span></span></div>
<div class="Code">
<span lang="EN-US"><span style="font-family: Courier New, Courier, monospace;"><span style="background-color: #cccccc;">deployment.webjava.enabled.locked</span><o:p></o:p></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;"><br /></span></span>
<span lang="EN-US"><span style="font-family: inherit;">Both the system deployment.config and
deployment.properties should be ACL’ed so that the user cannot edit them (but
can read them).<o:p></o:p></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;"><br /></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;">Although this option only applies from Java
version 7 update 10, there is no issue in settings this option for earlier
versions of Java, as the setting is ignored.
The benefit is that should Java be upgraded then this setting will be
adopted.</span><o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;"><br /></span></span></div>
<span lang="EN-US"><span style="font-family: inherit; font-size: x-large;">Browser Specific</span></span></div>
<span lang="EN-US"><span style="font-family: inherit; font-size: large;">Internet Explorer (IE)</span></span><br />
<div>
<span lang="EN-US"></span><br />
<div class="MsoNormal">
<span lang="EN-US"><span lang="EN-US"><span style="font-family: inherit;">In response to the threat posed by Java in
the browser Microsoft released a <a href="http://support.microsoft.com/fixit/">FixIt</a>
solution that disables Java in IE - </span></span><a href="http://support.microsoft.com/kb/2751647" style="font-family: inherit;">http://support.microsoft.com/kb/2751647</a></span></div>
<div class="MsoNormal">
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">The executable disables Java in the browser
and appears as “IE Java Block 32bit Shim” in the list of installed
programs. Presumably the FixIt installs
a 64bit equivalent on 64bit versions of IE.</span></div>
<div class="MsoNormal">
<span lang="EN-US"><span lang="EN-US"><span style="font-family: inherit;"><br /></span></span>
<span style="font-family: inherit;">We can detect if the Microsoft FixIt is
installed by confirming that the following registry exists</span></span></div>
<span lang="EN-US">
</span>
<div class="Code">
<span lang="EN-US"><span lang="EN-US" style="background-color: #cccccc;"><span style="font-family: Courier New, Courier, monospace;">HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{01cf069a-f8a1-4067-adc4-5ef7e922733c}.sdb<o:p></o:p></span></span></span></div>
<span lang="EN-US">
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;">If the Windows OS is 64-bit and 32-bit IE
is installed then the key will be under:<o:p></o:p></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;">
</span></span></div>
<div class="Code">
<span lang="EN-US"><span style="background-color: #cccccc; font-family: Courier New, Courier, monospace;">HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall</span><o:p></o:p></span><br />
<br />
<span lang="EN-US" style="font-family: inherit;">There is some debate about the effectiveness of this FixIt. <a href="http://www.kb.cert.org/vuls/id/636312">Advice from CERT</a> states that it doesn't completely prevent RIAs from being invoked by IE. Alternative methods to disable Java exist that may be more reliable but seem to involve updating a list of blocked ActiveX controls with every release of Java. You'll have to decide on the risk yourself, but on Windows perhaps unnstalling Java or using the Java deployment options are the safest options.</span></div>
<div style="font-family: 'Times New Roman'; margin: 0px;">
<span lang="EN-US"><span style="font-family: inherit;"><br /></span></span></div>
<span lang="EN-US"><span style="background-color: white; font-family: inherit; font-size: large;">Firefox</span></span><br />
<div>
<span lang="EN-US"></span><br />
<div class="MsoNormal">
<span lang="EN-US"><span lang="EN-US"><span style="font-family: inherit;">Firefox has a configuration option that
controls whether the plugin is enabled.
The option can be viewed by going to about:config and is called “plugin.state.java”. It can take one of three values:</span></span></span></div>
<ul>
<li><span style="font-family: inherit;">0 = disabled</span></li>
<span lang="EN-US">
<li><span lang="EN-US" style="font-family: inherit;">1 = click to play</span></li>
<li><span lang="EN-US" style="font-family: inherit;">2 = enabled</span></li>
</span></ul>
<span lang="EN-US"><div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;">The value of this option is stored in a
file called “prefs.js” in the user’s Firefox profile directory. If a user was to set this option via
about:config their selection would be saved in this file.<o:p></o:p></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;"><br /></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;">It’s preferably to enforce all users (and
all user profiles) to disable the plugin.
Firefox supports a ‘lock’ on the preference so that the user cannot
(easily) change the option and enable the plugin (see <a href="http://kb.mozillazine.org/Locking_preferences">Locking preferences</a>). By creating a file called “mozilla.cfg” in
the <i>installation</i> directory of Firefox
with the entry:<o:p></o:p></span></span></div>
<div class="Code" style="margin-bottom: .0001pt; margin-bottom: 0cm;">
<span lang="EN-US" style="background-color: #cccccc;"><span style="font-family: Courier New, Courier, monospace;">//<o:p></o:p></span></span></div>
<div class="Code" style="margin-top: 0cm;">
<span lang="EN-US" style="background-color: #cccccc;"><span style="font-family: Courier New, Courier, monospace;">lockPref("plugin.state.java",
0);<o:p></o:p></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;"><br /></span></span>
<span lang="EN-US"><span style="font-family: inherit;">Firefox needs to be instructed to load this
lock file and this is done by creating a “local-settings.js” file in the
“defaults\pref” sub-directory of the Firefox installation directory with:<o:p></o:p></span></span></div>
<div class="Code" style="margin-bottom: .0001pt; margin-bottom: 0cm;">
<span lang="EN-US" style="background-color: #cccccc;"><span style="font-family: Courier New, Courier, monospace;">pref("general.config.obscure_value",
0);<o:p></o:p></span></span></div>
<div class="Code" style="margin-bottom: .0001pt; margin: 0cm;">
<span lang="EN-US" style="background-color: #cccccc;"><span style="font-family: Courier New, Courier, monospace;">pref("general.config.filename",
"mozilla.cfg");<o:p></o:p></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;"><br /></span></span>
<span lang="EN-US"><span style="font-family: inherit;">With these configuration files and options
in place the option plugin.state.java in about:config will be italicized and greyed
out.<o:p></o:p></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;"><br /></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;">To prevent a user from changing “mozilla.cfg”
and “local-settings.js” these files should be ACL’ed so that the user cannot
edit them (but can read them).<o:p></o:p></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;"><br /></span></span>
<span lang="EN-US"><span style="font-family: inherit;">I think I am not sure of is in what version of Firefox the plugin.state.java configuration was introduced, or even the ability to lock configuration options. If your Enterprise is using a very old version of Firefox then this approach might not be available.</span></span><br />
<span lang="EN-US"><span style="font-family: inherit;"><br /></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;">Reference - </span></span><span style="font-family: inherit;"><span lang="EN-US"><span style="font-size: 7pt;"> </span></span></span><span lang="EN-US"><span lang="X-NONE"><span style="font-family: inherit;"><a href="http://stealthpuppy.com/prepare-mozilla-firefox-for-enterprise-deployment-and-virtualization/">http://stealthpuppy.com/prepare-mozilla-firefox-for-enterprise-deployment-and-virtualization/</a></span></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span lang="X-NONE"><br /></span></span></div>
<br />
<span style="font-size: large;">Chrome</span><br />
<div>
<div class="MsoNormal">
<span lang="EN-US">This method applies to Google Chrome and
Chromium (with minor differences). </span>Chrome supports configuration via <a href="https://support.google.com/chrome/a/answer/187202?hl=en">Policy</a> (see chrome://policy) and the policy value we want
to set is called <a href="http://www.chromium.org/administrators/policy-list-3#DisabledPlugins">DisabledPlugins</a>
(see list of plugins via chrome://plugins/).</div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span lang="EN-US">The DisabledPlugins policy can be overridden by the <a href="http://www.chromium.org/administrators/policy-list-3#EnabledPlugins">EnabledPlugins</a>
and <a href="http://www.chromium.org/administrators/policy-list-3#DisabledPluginsExceptions">DisabledPluginsExceptions</a> policies though,
so these policies also need to be set to ensure the Java plugin is disabled. <o:p></o:p></span></div>
<br />
<b>Windows</b><br />
<div class="MsoNormal">
<span lang="EN-US">Chrome policy can be enforced via Group
Policy. Google provides ADM/ADMX <a href="http://dl.google.com/dl/edgedl/chrome/policy/policy_templates.zip">templates</a>. The GPO should be configured at the "Computer Configuration" level:<o:p></o:p></span></div>
<div class="MsoListParagraphCxSpFirst">
</div>
<ul>
<li>To set the DisabledPlugins
policy, the policy entry “Specify a list of disabled plug-ins” should be set to “Enabled” and an
entry of “Java(TM)” should be added.</li>
<li><span lang="EN-US">To set the EnabledPlugins
policy, the policy entry “Specify a list of enabled plug-ins” should be set to a random value. (1)</span></li>
<li><span lang="EN-US">The set the DisabledPluginsExceptions
policy, the policy entry “Specify a list of plug-ins that the user can enable or disable” should
be set to a random value. (1)</span></li>
</ul>
<br />
<div class="MsoNormal">
<span lang="EN-US">Chrome historically supported policy
configuration through the registry but this ability was <a href="http://code.google.com/p/chromium/issues/detail?id=259236">removed in
version 28</a>. The current policy is still written to the registry though, so this provides a mechanism to verify the policy exists on a machine.<o:p></o:p></span></div>
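<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span lang="EN-US">For example (a sketch; I believe list policies such as DisabledPlugins appear in the registry as a key of numbered string values), the presence of the policy can be checked with:</span></div>
<div class="Code">
<span lang="EN-US" style="background-color: #cccccc;"><span style="font-family: Courier New, Courier, monospace;">reg query "HKLM\SOFTWARE\Policies\Google\Chrome\DisabledPlugins"</span></span></div>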
<br />
<b>Non-Windows</b><br />
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;">Chrome also supports configuration via <a href="http://www.chromium.org/administrators/linux-quick-start">policy on
Non-Windows machines</a> via a local JSON file containing the policy
configuration. The policy files live under /etc/opt/chrome for
Google Chrome (and /etc/chromium for Chromium).<o:p></o:p></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;"><br /></span></span></div>
<div class="MsoNormal">
<span lang="EN-US"><span style="font-family: inherit;"><span style="font-family: inherit;">Two types of policies exist; "managed", which
are mandatory policies, and "recommended" which are not mandatory. These policies are respectively located
under:</span><o:p></o:p></span></span></div>
<div class="MsoNormal">
<span style="background-color: #cccccc; font-family: Courier New, Courier, monospace;">/etc/opt/chrome/policies/managed/<o:p></o:p></span></div>
<div class="MsoNormal">
<span style="background-color: #cccccc; font-family: Courier New, Courier, monospace;">/etc/opt/chrome/policies/recommended/<o:p></o:p></span></div>
<div class="MsoNormal">
<span style="font-family: inherit;"><br /></span></div>
<div class="MsoNormal">
<span style="font-family: inherit;">To disable the Java
plugin create a file “policy.json” in
/etc/opt/chrome/policies/managed/ with the contents:<o:p></o:p></span></div>
<div class="Code">
<span style="background-color: #cccccc; font-family: Courier New, Courier, monospace;">{<o:p></o:p></span></div>
<div class="Code">
<span lang="EN-US" style="background-color: #cccccc;"><span style="font-family: Courier New, Courier, monospace;">“DisabledPlugins”:["Java(TM)"],<o:p></o:p></span></span></div>
<div class="Code">
<span lang="EN-US" style="background-color: #cccccc;"><span style="font-family: Courier New, Courier, monospace;">“DisabledPluginsExceptions”:[" "],<o:p></o:p></span></span></div>
<div class="Code">
<span lang="EN-US" style="background-color: #cccccc;"><span style="font-family: Courier New, Courier, monospace;">“EnabledPlugins”:[" "],<o:p></o:p></span></span></div>
<div class="Code">
<span style="background-color: #cccccc; font-family: Courier New, Courier, monospace;">}<o:p></o:p></span></div>
<div class="MsoNormal">
<span style="font-family: inherit;"><br /></span></div>
<div class="MsoNormal">
<span style="font-family: inherit;">The policy file should have ACLs applied to
it so that users cannot edit (but can read) it.</span></div>
<div>
<br />
<div id="ftn1">
</div>
</div>
</div>
<div>
(1) So why do we assign the EnabledPlugins and DisabledPluginsExceptions random values (or a space character)? Turns out if we set these policies to "Not configured" or "Disabled" then a user can enable Java in the browser by configuring these policies in the "User Configuration" policy section. If this seems like a security issue to you, well it seemed like one to me as well, that's why <a href="http://code.google.com/p/chromium/issues/detail?id=347987">I opened a bug with Google about it</a>. They didn't really agree, but I concede it's complicated (and potentially it's a bug in Microsoft's Group Policy functionality).</div>
</span></div>
</span></div>
<div class="MsoNormal">
<span lang="EN-US"><o:p></o:p></span></div>
Dave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com0tag:blogger.com,1999:blog-190261611237936759.post-64759388170127191482014-03-04T10:02:00.000+00:002014-03-04T10:02:08.742+00:00Running Java applications without installing JavaAlthough it seems to be a dying breed, there are still some Java client-side applications that you may want or need to use e.g. the excellent <a href="http://portswigger.net/">Burp Suite</a>. But this means having Java installed on your workstation - cue dramatic music.<br />
<br />
This is less like handing over the keys to your computer than it used to be. Java can be configured so that <a href="http://www.java.com/en/download/help/disable_browser.xml">Java content is disabled in all browsers</a> (at least from version 7 update 10 and higher). But if the application requires an old version, or you just have little faith in the Swiss cheese that is Java's security controls, then it would be nice to isolate your Java installation.<br />
<br />
There are several ways to isolate Java, from hardening its configuration to only running it in a virtual machine. What I'm going to suggest here is another option, using Java without installing Java.<br />
<br />
This approach is already really common on the server side, where applications often install themselves with their own copy of the Java runtime, as this means there is no danger of compatibility issues with the current shared install of Java on the server or any updates to it. So why not take the same approach for client-side applications, and benefit from isolating Java to the one application that needs it.<br />
<br />
So here's what to do (using Windows notation, but it's similar for non-Windows):<br />
<ol>
<li>Obtain the <a href="http://www.java.com/en/download/manual.jsp">offline Java installation .exe</a></li>
<li>Follow these <a href="http://www.java.com/en/download/help/msi_install.xml">instructions</a> to get access to the Data1.cab file</li>
<li>Unzip the Data1.cab to get the core.zip file</li>
<li>Unzip the core.zip file to get the JRE files</li>
<li>In the .\lib folder there are several .pack files, you need to unpack these to their .jar equivalents using <a href="http://docs.oracle.com/javase/7/docs/technotes/tools/share/unpack200.html">unpack200.exe</a> (in the .\bin folder) e.g. "..\bin\unpack200.exe -r rt.pack rt.jar" (a loop to convert them all is sketched after this list). <a href="http://stackoverflow.com/questions/11808829/jre-1-7-returns-java-lang-noclassdeffounderror-java-lang-object-in-opensolaris">Hat tip</a>.</li>
<li>(Optional) You can <a href="http://www.oracle.com/technetwork/java/javase/jre-7-readme-430162.html">remove several files</a> as they are usually unnecessary (importantly the browser plugin is one of them). This makes the total size smaller, but not by much.</li>
<li>Run your client-side Java application by directly invoking .\bin\java.exe</li>
</ol>
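As a sketch of step 5, assuming you run it from the root of the unpacked JRE, a single command can convert all the .pack files under .\lib (in a .bat file the % signs would need doubling):<br />
<span style="background-color: #cccccc; font-family: Courier New, Courier, monospace;">for %f in (lib\*.pack) do bin\unpack200.exe -r "%f" "lib\%~nf.jar"</span><br />
<br />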
The JRE files come in at about 100MB (for version 7 update 51), so I wouldn't necessarily be doing this separately for lots of applications.<br />
<br />
I got this working for Burp Suite, but I didn't regression test all of its functionality, so whilst the method seems to work, I can't give any guarantees it does. YMMV.<br />
<br />
DaveDave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com1tag:blogger.com,1999:blog-190261611237936759.post-86248765996525113372014-02-27T12:32:00.001+00:002014-02-27T12:33:10.337+00:00Java by numbersA lot has been written about how bad Java is from a security perspective. Specifically, if you have Java enabled in your browser, well, you're going to get pwned (unless you have some compensating controls). Assuming your business requires Java be enabled in browsers for certain applications to work, then whilst it's not the only compensating control that should be used, having the latest version installed is a good start.<br />
<br />
I've been looking at Java recently and wanted to know more precisely how much risk it posed. So I pulled together the number of vulnerabilities related to applets and web start applications (i.e. Rich Internet Applications) that have been discovered over the last couple years. You can see the numbers on my page <a href="http://www.samadhicsecurity.com/p/java-vulnerabilities.html">Java Vulnerabilities</a>. It makes for scary reading.Dave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com0tag:blogger.com,1999:blog-190261611237936759.post-4518362543331417462014-02-21T09:57:00.002+00:002014-02-21T09:57:24.537+00:00Is contextual output escaping too hard?So whilst writing my post on <a href="http://www.samadhicsecurity.com/2014/02/unobtrusive-javascript-and-json-data.html" target="_blank">Unobtrusive Javascript and JSON Data Islands</a> I got to thinking about how successful contextual output escaping (or encoding) is as a tactic to mitigate XSS vulnerabilities.<br />
<br />
I think there is an argument that it hasn't been very successful, and I don't think it is too hard to imagine the reasons for this:<br />
<ul>
<li>As a mitigation to XSS it involves a task that is essentially difficult. Knowing the context (HTML, HTML attribute, JavaScript, CSS, etc.) where you are binding data into a template either involves writing an HTML parser (to do it properly), or having developers manually specify the context for each binding site. That is not a solution so much as a whole new problem.</li>
<li>There is a reason why there is no "contextual SQL escaping" mitigation promoted by the security community, and why parameterised queries are the mandated mitigation to SQL injection. It's because history has shown us that people are terrible at combining data and code and that the best option is to let the component that executes the code (and hence has the best code parser) do the combining for us. XSS is the same problem, combining data with code (HTML), which suggests that the browser should be combining the data and code(1) and we shouldn't be suggesting people do this themselves (see the sketch after this list).</li>
</ul>
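<div>
As a minimal sketch of what letting the browser do the combining can look like (the /api/user endpoint and 'name' element ID are hypothetical), the server sends pure data and the client binds it strictly as text:</div>
<div class="Code">
<span style="background-color: #cccccc; font-family: Courier New, Courier, monospace;">var xhr = new XMLHttpRequest();<br />xhr.open('GET', '/api/user');<br />xhr.onload = function () {<br />&nbsp;&nbsp;var user = JSON.parse(xhr.responseText);<br />&nbsp;&nbsp;// textContent treats the value strictly as data, never as markup<br />&nbsp;&nbsp;document.getElementById('name').textContent = user.name;<br />};<br />xhr.send();</span></div>
<div>
<br /></div>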
<div>
Of course it's all very well saying that people should architect their web applications to have the browser combine code and data, but developing web applications is a complex problem and there are lots of considerations to take into account when architecting the application, not just security. My impression is that the main considerations are performance and convenience.</div>
<div>
<br /></div>
<div>
Performance is tough to comment on, and depends on many things, but potentially moving more work to the client-side could improve server performance (you could argue that client-side performance might suffer, and thus user experience, but JavaScript performance only improves). Though there is no doubt that some businesses are very sensitive to page load times.</div>
<div>
<br /></div>
<div>
Convenience isn't a good reason in my opinion. It might not be convenient but that doesn't mean it <i>can't</i> be convenient. If a web application framework was designed from the ground up to let the browser combine code and data then likely they would (eventually) do it in a way that was convenient. Of course relying on frameworks means getting popular frameworks(2) to change, and I think the security community has a role to play there.</div>
<div>
<br /></div>
<div>
I will say that I've spent a good deal of time telling businesses to use contextual output escaping, and it is always difficult to challenge a long-held belief, but there is evidence and reasoning to suggest that parameterising HTML is the better mitigation. The jury is still out in my mind, but I would be keen to hear other points of view.</div>
<div>
<br />
<span style="font-size: x-small;">(1) See also "Insane in the IFRAME - The case for client-side HTML sanitisation" (<a href="http://www.youtube.com/watch?v=n18Hwaxycwc" target="_blank">video</a> - <a href="http://www.slideshare.net/404aspx/insane-in-the-iframe" target="_blank">slides</a>) from OWASP AppSecEU 2013.</span></div>
<div>
<span style="font-size: x-small;">(2) That's not to say there aren't some web application frameworks that have contextual output escaping built-in or support data binding on the client i.e. <a href="https://developer.mozilla.org/en-US/docs/JavaScript_templates" target="_blank">JavaScript templating engines</a> (although I'm not sure the browser does the escaping), but I think (but am not an expert) that most don't. There are a lot of web application frameworks out there.</span></div>
Dave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com0tag:blogger.com,1999:blog-190261611237936759.post-3730841815631246882014-02-20T09:40:00.001+00:002014-02-20T09:40:26.917+00:00Unobtrusive JavaScript and JSON Data IslandsI saw a tweet the other day from Jim Manico<br />
<blockquote class="twitter-tweet" lang="en">
Hey <a href="https://twitter.com/ndm">@ndm</a>, <a href="https://twitter.com/adambarth">@adambarth</a> was right all along about next generation XSS defense architectures. <a href="http://t.co/fXf1sydKvJ">http://t.co/fXf1sydKvJ</a>
Rock on Adam!<br />
— Jim Manico (@manicode) <a href="https://twitter.com/manicode/statuses/433626781250486272">February 12, 2014</a></blockquote>
<script async="" charset="utf-8" src="//platform.twitter.com/widgets.js"></script>
That link is to <a href="http://www.educatedguesswork.org/2011/08/guest_post_adam_barth_on_three.html">http://www.educatedguesswork.org/2011/08/guest_post_adam_barth_on_three.html</a>.<br />
<br />
The advice is basically to have more or less static HTML and JavaScript and all dynamic data should come from a JSON request. Not bad advice.<br />
<br />
Reading the article reminded me of a similar approach that I once proposed, using Unobtrusive JavaScript. Unobtrusive JavaScript is an approach to developing a web application that clearly separates HTML from JavaScript from CSS. Now if you are a developer then rest assured this is not an approach created by a security guy who prioritises security above all else, this is an approach suggested by people who actually know what they are talking about. <a href="http://en.wikipedia.org/wiki/Unobtrusive_JavaScript" target="_blank">The Wikipedia page has the details</a>, and interestingly it doesn't even mention the word 'security'.<br />
<br />
So why is Unobtrusive JavaScript good for security? Well the best approach to preventing XSS is to use contextual output encoding, however the hard part of that is the 'contextual' bit. It seems that figuring out on the server side how the data being bound to a template is going to be interpreted by the browser, as either HTML, JS or CSS, is often quite difficult. But if all the HTML, JS and CSS reside in different files, then the context couldn't be easier to determine, which makes output encoding a much simpler task. Additionally, this separation can be enforced by using the Content Security Policy (CSP) so you don't get any naughty developers slipping in JS where they shouldn't.<br />
<br />
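For example (a sketch; the exact directives would depend on the application), a CSP response header like the following disallows inline script entirely, so any JS that sneaks into the HTML simply won't run:<br />
<span style="background-color: #cccccc; font-family: Courier New, Courier, monospace;">Content-Security-Policy: default-src 'self'; script-src 'self'</span><br />
<br />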
Of course if you are going to be suggesting Unobtrusive JavaScript then I wouldn't even mention security, there are so many other benefits that make it a good idea that you probably don't want any security related ulterior motives to be an issue ;)<br />
<br />
So there are obvious similarities to what Adam Barth suggested, but a difference that I proposed was an alternative to using AJAX requests. So instead of making a separate network request for the JSON data another option is to put the JSON data inside the HTML, in a 'JSON data island' (sadly it seems someone else beat me to coining this phrase). The idea comes from <a href="http://ajaxpatterns.org/XML_Data_Island" target="_blank">XML Data Islands</a>, but obviously if the data is going to be used by JS it makes more sense for it to be JSON than XML.<br />
<br />
The JSON data island could be implemented as a hidden &lt;div&gt; element or a &lt;script&gt; element with type="application/json".<br />
<br />
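As a rough sketch of the script-element variant (the ID and data are illustrative, and in keeping with Unobtrusive JavaScript the parsing code would live in an external file):<br />
<span style="background-color: #cccccc; font-family: Courier New, Courier, monospace;">&lt;script id="user-data" type="application/json"&gt;<br />&nbsp;&nbsp;{"name": "Alice"}<br />&lt;/script&gt;<br /><br />// the browser never executes type="application/json" content,<br />// so the data stays inert until it is explicitly parsed<br />var data = JSON.parse(document.getElementById('user-data').textContent);</span><br />
<br />
Note that a literal &lt;/script&gt; inside the data would still need escaping, which is the encoding discussed next.<br />
<br />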
The basic advantage is that it saves a network request. Admittedly this makes more sense for HTML pages that wouldn't be cached, so its appropriateness does depend on how the web application is architected (in terms of caching optimisations made). The data island also needs its own output encoding: the appropriate approach is for the server to JS-encode the data going into the JSON and then HTML-encode the entire data island contents.<br />
<br />
Sadly my proposal represented too much of a change to the web application I suggested it for, so if you like the idea, I hope you have better luck than me.Dave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com0tag:blogger.com,1999:blog-190261611237936759.post-42002208577850908562014-02-02T15:29:00.000+00:002014-02-02T15:29:03.238+00:00Risky BusinessThere are a lot of <a href="http://www.tripwire.com/state-of-security/security-data-protection/32-of-the-best-and-worst-infosec-analogies/" target="_blank">metaphors for security</a>, which may mean we are in a technical field or it may mean that security folk often struggle to explain their point of view. One I have been thinking about recently is the health of a human body being a metaphor for the security of a business. I am not the first to use this analogy, see this <a href="http://prod.sandia.gov/techlib/access-control.cgi/2008/085381.pdf" target="_blank">paper</a> (the section "Cyber Wellness") or this <a href="https://blogs.rsa.com/changing-security-metaphors-%E2%80%93-from-war-to-medicine/" target="_blank">blog entry</a>.<br />
<br />
The part of the metaphor that I want to highlight is that you are responsible for the health of your own body i.e. you are responsible for a business or part of it, then you get to decide how to look after your body. Now there are a lot of people out there trying to tell you the best way to look after yourself, whether it be a product to buy or a lifestyle to adopt. Some of this will be good advice and some of it will be bad. Likely if you go to a doctor you will believe the doctor's advice, and they are likely to give you good advice on how to be healthy, or just become healthier.<br />
<br />
But not everyone follows their doctor's advice all the time. Often there are circumstances that don't allow you to put your health first, whether you want to or not - sitting all day in front of a computer for years on end is a risk to your health, but if it's part of your job then you do it; flying in a plane is a risk that most people are prepared to take; driving a car is the same. Sometimes you simply choose the unhealthy option because the opportunity cost of not doing it is much greater - flying to space is very risky but try convincing an astronaut to stop wanting to do it. And there are of course all the small unhealthy things we do out of necessity (e.g. crossing the street), convenience (e.g. fast food) or laziness (e.g. not brushing your teeth 3 times a day).<br />
<br />
A life lived without doing some unhealthy things, without taking some risk, is a life not lived at all.<br />
<br />
Similarly in business: a business that takes no risk is a very risky business. In this sense, all businesses are in the business of taking risk.<br />
<br />
So the job of the security professional is to help the business understand the risk it is taking and help the business to minimise that risk, if that is what the business wants to do. Of course the key concern here is getting the business to understand the risk (using security metaphors for instance!), and ensure the business is not taking risks it doesn't understand. Doing so is not a technical skill but a business and communication skill, which should go to reminding security people that technical abilities alone are not the Holy Grail in being effective at our jobs.Dave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com0tag:blogger.com,1999:blog-190261611237936759.post-26110605742708804322013-06-14T13:47:00.000+01:002013-06-14T14:53:32.258+01:00My alternative OWASP Top10With the release of the 2013 <a href="https://www.owasp.org/index.php/Top10" target="_blank">OWASP Top10 Project</a> businesses around the world have a security bar against which they can measure themselves. This OWASP project, more than any other, has the ability to influence business, due in no small part to it being referenced in a variety of compliance standards e.g. PCI. So this means businesses are listening and the question then becomes, is the right message being sent?<br />
<br />
My concern with the OWASP Top10 is that it treats the symptoms and not the disease. It essentially lists specific vulnerabilities (or attacks) and my question is - might it not be better to list the factors that lead to these vulnerabilities? If we can address the disease, then the symptoms will also disappear. <br />
<br />
I would clarify that there are other OWASP projects to treat the disease, such as the <a href="https://www.owasp.org/index.php/Category:OWASP_Guide_Project" target="_blank">OpenSAMM</a>, <a href="https://www.owasp.org/index.php/Category:OWASP_Guide_Project" target="_blank">Development Guide</a>, or <a href="https://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project" target="_blank">ASVS</a>, but these projects don't have the reach and influence of the Top10. So in that sense there is potential benefit in having a Top10 that addresses the cause of the symptoms.<br />
<br />
So my alternative top ten are:<br />
<ol>
<li>Legacy code</li>
<li>Undue reliance on application frameworks</li>
<li>Security work is not allocated enough time</li>
<li>Insufficient security architecture/design</li>
<li>Insufficient security code review</li>
<li>Undue reliance on security libraries</li>
<li>Supporting legacy browsers</li>
<li>Inadequate authorisation</li>
<li>Ineffective authentication</li>
<li>Lack of security testing</li>
</ol>
<div>
If your business has minimised the risk of all of the above issues, across all your applications, then your application security is probably in reasonable shape.<br />
<div>
<br /></div>
Below I have provided a bit more detail and listed ways in which you might not be adequately securing your applications. If you answer in the affirmative to the points following each entry below, then the issue may affect you and you may need to address the risk it poses.</div>
<div>
<br /></div>
<div>
1. Legacy Code</div>
<div>
The definition of 'legacy' is going to be relative, but I'll suggest any code that isn't actively maintained, is sufficiently old or is based on a different technology than you are using for new code. Legacy code could be a problem if:</div>
<div>
- existing code is treated as sacrosanct so nobody wants to change it</div>
<div>
- there is code that no existing employee wrote, is un-documented, is magic, and no one wants to touch it</div>
<div>
- there is code being used no one knows about</div>
<div>
- there is code being used that was written before 'security' was an issue</div>
<div>
- there is code being used that was written for a different set of requirements</div>
<div>
- there is code being used that was written for a different operational situation</div>
<div>
- there is no budget to change or update legacy code</div>
<div>
- the risk of changing code is perceived greater than the security risk it poses</div>
<div>
<br /></div>
<div>
2. Undue reliance on application frameworks</div>
<div>
Most applications have an underlying framework on which they are built. Often this framework is developed externally, but the same applies if it is written in-house. Application frameworks can be a problem if:</div>
<div>
- it's assumed the framework solves all security issues</div>
<div>
- it's not understood how your framework protects against common vulnerabilities</div>
<div>
- the framework is relied on for security it doesn't provide</div>
<div>
- its security features are used in the wrong way</div>
<div>
- it isn't realised the security features the framework provides are only a partial solution</div>
<div>
- the framework has plugins that extend security</div>
<div>
- the security plugins are numerous and it's difficult to know which to use</div>
<div>
- security plugins only provide a partial solution to the security problem they are trying to solve</div>
<div>
- the framework/plugins are not routinely tested for security (by white hats at least)</div>
<div>
- the framework is not secure by default</div>
<div>
<br /></div>
<div>
3. Security work is not allocated enough time</div>
<div>
There is never enough time for most things, but too often there is no time explicitly dedicated to security. It's fairly well understood that the performance of your application depends on the amount of time you dedicate to performance coding and analysis - well the same is true for security. You might not have enough time dedicated to security if:</div>
<div>
- there are security requirements, but no allocated time in the project plan to ensure they are met</div>
<div>
- there is no security sprint</div>
<div>
- there is no security testing time allocated</div>
<div>
- security work is lumped together with other non-functional requirement time</div>
<div>
- code review has no dedicated time for security</div>
<div>
<br /></div>
<div>
4. Insufficient security architecture/design</div>
<div>
Software has a tendency to evolve, but failing to keep track of the bigger picture, especially when it comes to security, can be the source of vulnerabilities without easy solutions. You may not be investing enough in security architecture/design if:</div>
<div>
- security solutions are ad-hoc<br />
- ad-hoc security solutions are reused without re-evaluation<br />
- multiple applications use differing security solutions<br />
- a single application uses differing security solutions<br />
- new mitigations are bolted onto existing solutions<br />
- security solutions are not documented<br />
- incompatible security solutions has led to hacks for interoperability<br />
- security solution knowledge is held by a few key developers<br />
- teams aren't aware of existing security solutions<br />
- teams re-invent the security wheel<br />
<br />
5. Insufficient security code review<br />
Code needs to be reviewed specifically for security vulnerabilities, either by the security team or experienced developers. You might not be dedicating enough code review resources to security code review if:<br />
- the ratio of the security team to developers makes it impractical to review all code<br />
- the security relevant code is inline with the rest of the code, meaning every line has to be examined just to find the most obvious issues e.g. inline input validation<br />
- the code does not lend itself to security static analysis tools<br />
- no metrics are used to measure the effectiveness of security code review<br />
- your code review capabilities do not scale<br />
- you don't automate review using static analysis<br />
- only major changes get a security code review<br />
- your IDE does not have real time security review functionality<br />
<br />
6. Undue reliance on security libraries<br />
We try to make using security easier by providing libraries that expose common security functionality. But security is complicated and fragile and it's impossible to ensure all developers understand the nuances of using libraries. Security libraries may be a cause of concern if:<br />
- there are no wrappers around cryptographic libraries to hide cryptographic details or complexity<br />
- use of security libraries is optional<br />
- there is no auditing of the use of security libraries<br />
- there is no updating of security libraries<br />
- security libraries are misconfigured<br />
- there is no abstraction layer over security libraries so they can be replaced or complexities hidden<br />
- security libraries are used to solve problems that should be solved at the architectural or design level<br />
<br />
7. Supporting legacy browsers<br />
We all want our software to reach as many users as possible and we don't want to stop supporting existing customers, but at some point the focus must be more on the future than the past. Catering for legacy browsers might be a problem if you are:<br />
- supporting IE6<br />
- rejecting new security solutions because they don't work in legacy browsers<br />
- not leveraging new security features of current versions of browsers<br />
- not restricting your application's functionality if you must be compatible with legacy browsers<br />
- not applying the principles of Unobtrusive JavaScript<br />
- not applying the principles of Progressive Enhancement<br />
<br />
8. Inadequate authorisation<br />
Authorisation is a fundamental part of the security of any application. It is also one of the most difficult to implement, maintain and manage. Your authorisation may need to be re-evaluated if you:<br />
- don't have a framework for authorisation<br />
- rely on developers to include checks for permissions in business logic<br />
- are unable to audit authorisation<br />
- fail open<br />
- don't apply authorisation to each stage of multi-stage functionality<br />
- don't tie resource requests with unique IDs to the user associated with the resource e.g. in SQL (a sketch follows this list)<br />
- are making managing permissions so difficult that implementing 'least privilege' is impractical<br />
<br />
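As a hypothetical sketch of that SQL point (the table and column names are invented), binding both the resource ID and the authenticated user's ID means a guessed ID belonging to someone else simply returns no rows:<br />
<span style="background-color: #cccccc; font-family: Courier New, Courier, monospace;">SELECT * FROM invoices WHERE invoice_id = ? AND owner_user_id = ?</span><br />
<br />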
9. Ineffective authentication<br />
Authentication is a pre-requisite for authorisation and so it's important it is effective. Many applications fail to appreciate the various authentication mechanisms they employ have largely the same security requirements. Your current authentication mechanisms may not be effective if you are:<br />
- not storing passwords securely<br />
- thinking a session token needs any less protection than a password<br />
- sharing session tokens with 3rd party integrators<br />
- sharing user credentials with 3rd party integrators<br />
- designing a system so users have to share their credentials with 3rd party integrators<br />
- using OAuth<br />
- not telling users to use a password manager<br />
- not enforcing password complexity<br />
- not requiring 2FA for resetting a password<br />
<br />
10. Lack of security testing<br />
Vulnerabilities are inevitable, but security testing will minimise the amount that you ship. The amount of security testing should be proportional to the risks the application faces. You might not be doing enough security testing if you are:<br />
- not doing any security testing<br />
- not using a web application security scanner<br />
- not training testers to try negative test cases<br />
- not getting periodic 3rd party security testing<br />
- not insisting your service providers prove they do security testing<br />
<br /></div>
Dave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com1tag:blogger.com,1999:blog-190261611237936759.post-90023609650267908372013-03-26T16:59:00.001+00:002013-03-26T16:59:40.638+00:00Categories of XSSSo I was reading an article recently that talked about finding XSS where the malicious script was injected via a cookie. This got me thinking about which of the standard categories of XSS - Stored (or Persistent), Reflected (or Non-persistent) and DOM-based - that fell into. It wasn't immediately clear as it was persistent, but stored on the client, but the fix would need to be in the JavaScript which makes it similar to DOM-based XSS. This got me thinking about the basis for the standard categories or types of XSS.<br />
<br />
I think the first thing to point out is that at some time in history someone, or more likely a bunch of different people, discovered XSS and thought of it as a single type of attack. It's an interesting thought experiment to imagine you find that you can inject JavaScript into a web page, and that as far as you know no one has done it before. I imagine I would think of it as a code injection attack, where the code just happens to be JavaScript, and the novelty would be that I proxied the attack via the web server and that the victims were other users of the web server, rather than the server itself. Potentially I might not even think it was that interesting running arbitrary JavaScript on a client, as it wouldn't have been comparable to the usual arbitrary code execution attacks that existed already. But then if I could have predicted how popular the web would become, and what it would be used for, well I would have &lt;insert genius idea&gt; and profited.<br />
<br />
I'm going to make the completely unfounded assumption that Stored XSS was discovered first, as it's considered the more serious attack (which for now I'll state without justification) and then Reflected XSS was considered a variant type of attack, but different to Stored XSS because it required user interaction. But really the whole user interaction thing is a fairly weak argument these days, with the proliferation of ad-networks inserting arbitrary HTML in popular websites and 'watering hole' type attacks, a user just surfin' the net is far more likely than ever to hit upon a malicious iframe that exploits a reflected XSS attack. But granted, the user does have to be logged in to the web site so the window of opportunity for the attacker is smaller in a Reflected XSS attack.<br />
<br />
Comparing to a Stored XSS attack, well the user is most likely already logged in to the web site so the attacker's window of opportunity is much larger, however the attacker could face a second issue: the victim still has to navigate to the page with the injected script, so the attack is location limited. It's interesting to note that Reflected XSS is not location limited, since the attacker will direct the victim to precisely the page with the vulnerability. But then it doesn't actually make sense to say Stored XSS is location limited if Reflected XSS isn't, because whatever vector of attack Reflected XSS can use, so can Stored XSS i.e. a malicious iframe can point to a web page with Stored XSS.<br />
<br />
It would seem then that we should categorise XSS by the vectors of attack that can be used by an attacker to exploit a victim.<br />
<br />
<div class="nobrtable">
<table border="2" bordercolor="#0033FF" cellpadding="3" cellspacing="0">
<caption><b>Possible types of XSS when the attacker manipulates ...</b></caption>
<tbody>
<tr style="background-color: #0033ff; color: white; padding-bottom: 4px; padding-top: 5px;">
<th>XSS Type</th>
<th>the victim's response</th>
<th>the victim's request</th>
<th>the victim's DOM properties</th>
</tr>
<tr align="center" class="”alt”">
<td>Stored XSS</td>
<td>Y</td>
<td>Y</td>
<td>N</td>
</tr>
<tr align="center" class="”alt”">
<td>Reflected XSS</td>
<td>N</td>
<td>Y</td>
<td>N</td>
</tr>
<tr align="center" class="”alt”">
<td>DOM-based XSS</td>
<td>N</td>
<td>Y</td>
<td>Y</td>
</tr>
</tbody></table>
</div>
<br />
So it might not be clear what I mean by "the victim's response", what I mean is that the attacker does not control the victim's request, but is able to include data of their choosing in the response to the victim's request. This is the standard Stored XSS scenario where the data comes from the DB for instance. However Reflected XSS isn't possible because the attacker doesn't control the request and DOM-based isn't possible because either the vulnerable DOM property is set by the server, or set by the client with data from the server. <br />
<br />
When the attacker manipulates "the victim's request", Stored, Reflected and DOM-based XSS is possible. Stored XSS because the attacker can direct the victim to the infected page. Reflected XSS because the attacker controls the parameters of the request. Finally, DOM-based XSS because the attacker can set DOM properties that aren't sent to the server e.g. fragment identifier if the attacker can specify the URL, window.name if the attacker makes a request from script.<br />
<br />
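To make that concrete, here is a minimal sketch of a DOM-based sink fed by the fragment identifier (the element ID is invented); everything after the '#' never reaches the server:<br />
<span style="background-color: #cccccc; font-family: Courier New, Courier, monospace;">// e.g. http://example.com/page#&lt;img src=x onerror=alert(1)&gt;<br />var name = decodeURIComponent(location.hash.substring(1));<br />document.getElementById('greeting').innerHTML = 'Hello ' + name; // vulnerable sink</span><br />
<br />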
In the case of "the victim's DOM property", what I mean is a DOM object property that is set client-side and not set by (or sent to) the server. In this case only DOM-based is possible, as the DOM property is not set by the server so Stored or Reflected XSS isn't possible. This is in fact a special case, or a subset, of the "the victim's request" scenario.<br />
<br />
I've taken a fairly strict definition of DOM-based XSS here, as I think this is required to truly separate it out from Reflected XSS. My logic is that if the malicious data comes from, or goes via, the server, then the attack can be considered to come via the server in the same way as Stored or Reflected XSS respectively, but if the malicious data never actually gets sent to the server (e.g. it's in the fragment identifier, window.name, see <a href="https://code.google.com/p/domxsswiki/" target="_blank">domxsswiki</a> for more examples) then that is quite plainly a different attack vector. Note that for DOM-based XSS a request still needs to be made to the server (most likely), so it shares that in common with the other attack vectors, but there is no malicious data sent as part of that request, unlike the other attack vectors.<br />
<br />
There are other definitions of DOM-based XSS that are looser, that include any DOM property, even if coming via the server, and that's fine for most practical purposes. For the purpose of categorisation though I think being stricter helps understand how the different attack vectors for XSS fit together.<br />
<br />
So what about the cookie-based XSS? Well if an attacker finds a way to set a victim's cookie such that client-side JavaScript reads the malicious cookie value resulting in code injection, I would say that the value of the cookie goes via the server so that is Reflected XSS. Whilst I think that is probably the most common scenario, I can imagine scenarios where you would call it Stored XSS and other scenarios where it would be DOM-based XSS.<br />
<br />
So there you have it, my opinion on the different categories of XSS, some people will agree and some will disagree. Generally speaking the security industry has settled on some loose categories of XSS, and that's fine, it's not clear we need strict category definitions, but it can be an informative exercise to consider them.<br />
<br />
NB: I refuse to use Type-0, Type-1, or Type-2 names for the different types of XSS, they are meaningless names that only serve to confuse.<br />
<br />
<br />Dave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com0tag:blogger.com,1999:blog-190261611237936759.post-31857002192744056072012-10-13T16:01:00.000+01:002012-10-13T16:01:00.279+01:00The horrible asymmetry of processing HTMLSo I've got my developer hat on for the purpose of this blog post.<br />
<br />
If you were designing a client/server system where you had to transport a representation of an object from the server to the client, I'm almost certain that the code you have that serialises and de-serialises that object would be identical on both the client and server. As an example imagine a web service, the representation of a request and response object defined in XML Schema and instantiated in XML, but both client and server code would only ever deal with objects and not directly with XML i.e. the serialisation and de-serialisation is handled automatically.<br />
<br />
So I had a kind of "Huh?, that's weird" moment when thinking about how the majority of web application frameworks handle HTML. Most frameworks have an HTML template on the server side and then some scripting language interspersed in the template that customises the template for that request. So that's how an HTML representation of a web page is generated on the server side, but how is that representation processed on the client side? Well, as we all know, the browser (or User Agent (UA)) creates a Document Object Model (DOM) from the HTML and from this internal object representation displays the web page to the user.<br />
<br />
So the client side UA receives a serialised representation of the web page object (in HTML) and de-serialises that to a DOM (object representation) for processing. However, the server side starts with a serialised version of the web page object (in HTML) and makes direct edits to the serialised version of the object (i.e. the HTML) before sending it to the client.<br />
<br />
The lack of symmetry hurts that part of me that likes nice design.<br />
<br />
So either the way web application frameworks work is particularly clever and those of us that have been serialising and de-serialising objects symmetrically in every other system we build have really missed a trick. Or, web application frameworks have evolved from the days of static HTML pages into a system where serialised objects are edited directly in their serialised form, a design that would be universally mocked in any other system.<br />
<br />
Now to be fair web application frameworks have evolved into convenient and efficient systems so it could be argued that this justifies the current design. I would worry though that that is an institutionalised point of view, since making changes to HTML (the serialised web page object) directly is all we have ever known. Of course it's the right way to do it, because it's the only way we've ever done it!<br />
<br />
I'll be the first to raise my hand and say I'm not sure exactly how you might go about generating web pages in a convenient and efficient way in code, before serialising it in HTML, but I certainly don't think it's impossible and I accept that any initial design would be less convenient and efficient than what we already have (but that seems inevitable and would change as we get experience of the new system).<br />
<br />
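As a hint of what that might look like, the browser's own DOM API already builds pages this way; a server-side equivalent would construct the page as an object tree and serialise it once at the end (a sketch, with userComment standing in for some request data):<br />
<span style="background-color: #cccccc; font-family: Courier New, Courier, monospace;">// build the page as objects; data is attached as data, never spliced into markup<br />var p = document.createElement('p');<br />p.textContent = userComment; // no escaping decisions needed here<br />document.body.appendChild(p);<br />// serialising the finished tree yields the HTML to send<br />var html = document.documentElement.outerHTML;</span><br />
<br />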
Now for all I know there may be great reasons why we generate the web pages the way we do, but if there are I'm guessing they're not widely known. I can certainly think of some advantages to manipulating a web page as an object on the server before serialising it and sending it to the client, and if you think about it for a bit, perhaps you can too?Dave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com0tag:blogger.com,1999:blog-190261611237936759.post-57874458581121850192012-10-07T11:24:00.000+01:002012-10-07T11:24:57.439+01:00Scaling DefencesIn defending against vulnerabilities in code there is concept probably best summed up by this quote:<br />
<blockquote class="tr_bq">
We only need to be lucky once. You need to be lucky every time. </blockquote>
<blockquote class="tr_bq" style="text-align: right;">
IRA to Margaret Thatcher </blockquote>
The concept is basically that an attacker needs to find just one vulnerability amongst all possible instances of that vulnerability, whereas a defender needs to mitigate every instance. If you consider this in the sense of a single vulnerability type e.g. buffer overflows, then I don't think it's true. The reason I don't think it's true is that the actual problem is the way we create our defences, that is the security controls we put in place to mitigate the vulnerability.<br />
<br />
Take the buffer overflow example. If a developer tries to defend against this vulnerability by ensuring, at each place he writes to a buffer, that his code never writes beyond the buffer's boundaries, then if he fails to do this correctly in just one instance, it might be possible for an attacker to find and exploit that vulnerability. But what if that developer is programming in C#? There is no need for the developer to be right everywhere in his code, as C# is (effectively) not vulnerable to buffer overflow attacks. So if we can choose the right defence, we don't need to mitigate every instance.<br />
<br />
For me the next question is what differentiates defences that are 'right'? I would argue that one important criteria, often overlooked, is scale. Taking the buffer overflow example again, if the developer has to check his bounds every time he writes to a buffer, then the number of vulnerabilities scales with the number of times the developer writes to their buffers. That's a problem that scales linearly, that is, if there are Y buffers referenced in code, then the number of places you have to check for vulnerabilities is X = aY, where a is the average number of times a buffer is written to. Other common compensating security controls we put in place to make sure the developer doesn't introduce a vulnerability also tend to scale linearly; code reviewers, penetration tests, security tests, etc. By this I mean if you have D developers and C code reviewers, then if you increase to 2D developers you will likely need 2C code reviewers. <br />
<br />
If you choose the 'right' defence though, so for example using C# or Java, then you don't need either the developer or compensating controls worrying about buffer overflows (realistically you would probably have some compensating controls for such things as unsafe code). Note, I'm not suggesting changing programming languages is a practical solution, I'm just trying to give an example of a control that completely mitigates a vulnerability.<br />
<br />
Below is a graphical representation of how the costs of certain types of security controls (defences) scale with the number of possible locations of a vulnerability.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-OSzf5oOXDeY/UHFGDmuqWJI/AAAAAAAAAAY/8awG6uaPcuU/s1600/Scaling+Defences.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-OSzf5oOXDeY/UHFGDmuqWJI/AAAAAAAAAAY/8awG6uaPcuU/s1600/Scaling+Defences.jpg" /></a></div>
The red line is showing the cost of a security control that scales linearly with number of possible locations of a vulnerability. The blue line is the cost of the 'right' security control, including an initial up-front cost.<br />
<br />
The purple line is for another type of security control, for situations where we do not have a 'right' security control that mitigates a vulnerability by design. This type of security control is one where we make accessible, at the locations where the vulnerability might exist, some relevant information about the control. For example, annotating input data with their data types (which are then automatically validated). If this information is then available to an auditing tool where it can be reviewed, then the cost of this type of control scales in a manageable way.<br />
<br />
What is also interesting to note from the graph is that the red line has a lower cost than the blue line initially, until they intersect. This implies that until there is a sufficient number of possible locations for a vulnerability, it is not worth the initial cost overhead to implement a control that mitigates the vulnerability automatically. This perhaps goes some way to explaining why we use controls that don't scale: the controls are just the 'status quo' from when the software was much smaller and it made sense to use them.<br />
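<br />
As a rough worked example of that intersection point (with hypothetical numbers of my own; the graph itself gives no figures): if the linearly scaling control costs a per location, and the 'right' control costs an up-front C plus a much smaller b per location, the break-even is where aN = C + bN.<br />
<pre># Hypothetical costs, chosen purely to illustrate the break-even point.
a = 10.0   # cost per vulnerability location, linearly scaling control (red line)
C = 500.0  # up-front cost of the 'right' control (blue line)
b = 0.5    # small residual per-location cost of the 'right' control

# Solve a*N = C + b*N for N:
n_break_even = C / (a - b)
print(f"the 'right' control pays off beyond ~{n_break_even:.0f} locations")</pre>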
<br />
My main point is this: when we design or choose security controls, we must factor in how the cost of that control will scale with the number of possible instances of the vulnerability.Dave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com0tag:blogger.com,1999:blog-190261611237936759.post-29488245516660360572012-09-09T10:11:00.000+01:002012-09-09T10:14:46.606+01:00The Model DeveloperIn security we spend a lot of time focusing on attackers and imagining all the possible ways they might be able to compromise an application or system. While I think we are currently immature in our ability to model attackers, the industry does seem to spend some time thinking about this, and generally ends up assuming attackers are very well resourced.<br />
<br />
I come from a cryptographic background, and in crypto you tend to define an adversary in a more mathematical way. When designing a new crypto algorithm the adversary is effectively modelled as another algorithm that is only limited in the sense it cannot be 'computationally unbounded' and it does not have direct access to the secrets of the crypto algorithm. Apart from that no real assumptions are made and it is most certainly expected that the adversary is much smarter than you.<br />
<br />
For all the time we spend thinking about what attackers can do, I wonder if we should also spend some of that time modelling what developers can do. Developers, after all, are an essential part of the systems we build. Let's try and model a developer:<br />
<ul>
<li>Knowledge. Developers understand how to code securely.</li>
<li>Experience. Developers have experience in coding securely. </li>
<li>Time. Developers are given sufficient time to write secure code.</li>
<li>Priority. Developers prioritise security over functionality.</li>
<li>Consistency. Developers code security the same way every time.</li>
<li>Reviewed. Developer code is thoroughly reviewed for security.</li>
<li>Tested. Developer code is thoroughly tested for security.</li>
</ul>
[We are actually modelling more than just a developer here, but also the environment and support structures in which they develop, as those directly affect security too.]<br />
<br />
How accurate does that model seem to you? It would be great for people who design systems and their security if developers could be modelled in this way; it would make their jobs a lot easier. Unfortunately, people who suggest security controls for vulnerabilities sometimes make an implicit assumption about developers: they have modelled the developer in a certain way without even realising it, and that model is often fairly similar to the one given above.<br />
<br />
My favourite example of this is when people say the solution to XSS is to output encode (manually, i.e. all data written to a page is individually escaped). When this is suggested as a solution it is implicitly modelling that developer as: knowledgeable about how to output encode, experienced in output encoding, having the time to write the extra code, making it a priority, being completely consistent and not forgetting to output encode anywhere, and having their code thoroughly reviewed and tested. Don't misunderstand me, some of these assumptions might be perfectly reasonable for your developers, but all of them? Consider yourself fortunate if you can model a developer this way.<br />
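<br />
To see what that implicit model demands in practice, here is a small sketch of my own (in Python, with a hypothetical render_profile function; no particular web stack is assumed): every sink must be escaped individually, and any forgotten sink is an XSS bug.<br />
<pre>import html

TEMPLATE = "[h1]{0}[/h1][p]{1}[/p]"  # stand-in markup to keep the sketch short

def render_profile(name, bio):
    # Manual output encoding: each value written to the page is escaped by hand.
    safe_name = html.escape(name)  # the developer remembered this one...
    safe_bio = html.escape(bio)    # ...and this one
    # return TEMPLATE.format(name, safe_bio)  # but forget just one sink and
    #                                         # that sink is exploitable
    return TEMPLATE.format(safe_name, safe_bio)</pre>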
<br />
Much in the same way that we model an attacker to be as powerful as we can (within reason) when designing systems, I think we also need to model the developers of our system to be as limited as possible (within reason). It's not that I want people to treat developers as idiots, because they are clearly not; it's that I'd like to see the design of security controls have an implicit (or explicit) model of a developer that is practical.<br />
<br />
The Content Security Policy (CSP) is an example of a control that I think comes pretty close to having a practical model for developers, since the developer: requires knowledge about how to completely separate HTML and JavaScript and about how to configure the CSP settings, needs some experience using the CSP, has to take time to write separate HTML and JavaScript, doesn't need to prioritise security, doesn't need to try to be consistent (CSP enforces consistency), does require their CSP settings to be reviewed, and does not require extra security testing. The CSP solution does model a developer as someone who needs to understand the CSP solution and code to accommodate it, which could be argued is a reasonable model for a developer.<br />
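<br />
For reference, the kind of policy being described can be delivered in a single response header. Below is a minimal sketch of my own (Flask is assumed purely for illustration; a real policy would need tuning per site):<br />
<pre>from flask import Flask

app = Flask(__name__)

@app.after_request
def add_csp(response):
    # Only same-origin scripts may run and inline script is disallowed, so the
    # browser enforces HTML/JavaScript separation on every page - the developer
    # does not have to be consistent, because the policy is.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; script-src 'self'")
    return response</pre>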
<br />
Ideally of course we want to model developers like this:<br />
<ul>
<li>Knowledge. Developers don't understand how to code securely.</li>
<li>Experience. Developers don't have experience in coding securely. </li>
<li>Time. Developers have no time to write secure code.</li>
<li>Priority. Developers prioritise only functionality.</li>
<li>Consistency. Developers code security inconsistently.</li>
<li>Reviewed. Developer code isn't reviewed for security.</li>
<li>Tested. Developer code isn't tested for security.</li>
</ul>
If our security controls worked even when a developer gave no thought to security at all, then in my opinion that's a great security control. I can't think of many current controls in the web application space that have this model of the developer. In the native application world we have platforms like .Net and Java that have controls for buffer overflows that model the developer this way, as developers on these platforms don't even have to think about buffer overflows. You might be thinking that's not a great example, as developers are able to write code with a buffer overflow in .Net or Java, i.e. in unsafe code; however, I think we have to model developers to be as limited as possible, <i>within reason</i>, and the reality is unsafe code is a sufficiently corner-case scenario that we can treat it like the exception it is.<br />
<br />
Modelling developers in a way that accounts for the practical limitations they face leads me to believe that creating frameworks for developers to work in, a sand-boxed environment if you will, allows for security controls to be implemented out of view of developers, enabling them to focus on business functionality. A framework allows a developer to be modelled as requiring some knowledge, experience, and testing, but minimal time, priority and consistency. A framework does still have substantial demands for review though (although I think automating reviews is the key to making this manageable).<br />
<br />
If we can start being explicit about the model we use for developers when we create new security controls (or evaluate existing ones) we can hopefully better judge the benefits and effectiveness of those controls and move closer to developing more secure applications.Dave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com0tag:blogger.com,1999:blog-190261611237936759.post-80599954033464133902012-09-09T08:20:00.000+01:002012-09-09T08:20:21.076+01:00Drupal 7 security notesSo I just put together a page on <a href="http://www.samadhicsecurity.com/p/drupal-7-security.html" target="_blank">Drupal 7 Security</a>. It doesn't require a lot of existing knowledge about Drupal, but some appreciation would probably help - at least knowing that Drupal is extendable via Modules and customisable via Hooks.<br />
<br />
The notes were created so I could give some advice on securing Drupal 7, and since I started without any knowledge of Drupal security, the goal of the notes is to bring someone up to speed on what mitigations or approaches Drupal makes available against certain security threats.<br />
<br />
Here are the topics I cover:<br />
<a href="http://www.samadhicsecurity.com/p/drupal-7-security.html#TheBasics">The Basics</a><br />
<a href="http://www.samadhicsecurity.com/p/drupal-7-security.html#Sessions">Sessions</a><br />
<a href="http://www.samadhicsecurity.com/p/drupal-7-security.html#UserLogin">User Login</a><br />
<a href="http://www.samadhicsecurity.com/p/drupal-7-security.html#MixedModeHTTPHTTPS">Mixed Mode HTTP/HTTPS</a><br />
<a href="http://www.samadhicsecurity.com/p/drupal-7-security.html#CSRF">CSRF</a><br />
<a href="http://www.samadhicsecurity.com/p/drupal-7-security.html#AccessControl">Access Control</a><br />
<a href="http://www.samadhicsecurity.com/p/drupal-7-security.html#DynamicCodeExecution">Dynamic Code Execution</a><br />
<a href="http://www.samadhicsecurity.com/p/drupal-7-security.html#OutputEncoding">Output Encoding</a><br />
<a href="http://www.samadhicsecurity.com/p/drupal-7-security.html#Cookies">Cookies</a><br />
<a href="http://www.samadhicsecurity.com/p/drupal-7-security.html#Headers">Headers</a><br />
<a href="http://www.samadhicsecurity.com/p/drupal-7-security.html#Redirects">Redirects</a><br />
<br />
What is interesting, after you understand what Drupal offers, is to think about the things it does not offer. I worry a lot about validating input, and if you use the Drupal Form API then you get a good framework for validation as well, similarly for the Menu system. However for other types of input - GET request parameters, Cookies, Headers etc. - you are on your own. There are a variety of 3rd party modules that implement various security solutions, e.g. security related headers etc., but it would be good if these were part of Drupal Core, as security should never just be an add-on.Dave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com0tag:blogger.com,1999:blog-190261611237936759.post-13544721627387557312012-04-28T09:39:00.001+01:002012-04-28T09:39:12.087+01:00Why being a defender is twice as hard as being an attackerSo it occurred to me that being a defender is twice as hard as being an attacker (at least). I don't mean that in an absolute or measurable sense of course, just in some sense that will become obvious. I will also limit the context of that claim to applications, although it may apply to other areas of security as well.<br />
<br />
The goal of an attacker is to find vulnerabilities in an application. An application is protected by defenders, who design vulnerability mitigations, and built by developers, who implement functionality. Of course an attacker only needs to find a single weakness while a defender needs to try to protect against all attacks, which itself would probably support my claim, but that's not what my point is going to be.<br />
<br />
Conversely, a defender's goal is to minimise the number of vulnerabilities in an application. Defenders attempt to realise this goal by designing defences that both limit what the attacker can do and limit the flexibility the developer has. However, it is not only attackers that will hack away at a defender's defences, it's also the developers. The point of this blog post is that developers show surprisingly similar characteristics to attackers when they create novel ways to circumvent the defence mechanisms defenders put in place. After all, developers have the goal of implementing functionality with the minimum amount of effort possible, and defences often make that more difficult (even if only marginally so). <br />
<br />
Clearly the motivations are entirely different in the attacker's and the developer's cases, but at the end of the day the defenders are being attacked on twin fronts: by the attackers looking to get in and by the developers looking to break out.Dave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com0tag:blogger.com,1999:blog-190261611237936759.post-39721748467029213422012-04-23T20:02:00.003+01:002012-04-23T20:02:46.103+01:00CVSS doesn't measure up.I was doing some basic research into software metrics the other day and I came across something that I was probably taught once but had long since forgotten. It was to do with the way we measure things and is covered in the Wikipedia article on <a href="http://en.wikipedia.org/wiki/Level_of_measurement" target="_blank">Level of Measurement</a>.<br />
<br />
Basically there are 4 different scales which are available to measure things:<br />
<ul>
<li>Nominal scale - Assigning data to named categories or levels.</li>
<li>Ordinal scale - A Nominal scale but the levels have a defined order.</li>
<li>Interval scale - An Ordinal scale where the differences between levels (the units) are well defined.</li>
<li>Ratio scale - An Interval scale but with a non-arbitrary zero-point.</li>
</ul>
Why these scales are interesting is that only certain types of math can be applied, and therefore only certain conclusions drawn, from what you measure, depending on which scale the measurements belong to. For instance we can order the finishing places in a horse race into 1st, 2nd, 3rd etc. (an Ordinal scale), but we can't meaningfully say what the average finishing place of a horse is, as there is no magnitude associated with the difference between the levels. If on the other hand the races were over the same distance, we could measure the time the horse took to complete each race (a Ratio scale) and calculate its average time.<br />
<br />
Sometimes we have an Ordinal scale that looks like an Interval or Ratio scale, for instance when we assign a numeric value to the levels, e.g. asking people how much they like something on a scale of 1 to 5. But this is still an Ordinal scale, and although we can assume that the difference between each level is a constant amount, nothing actually makes that true. Thus the calculated average amount that people like something, e.g. 2.2, is often a meaningless number.<br />
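<br />
A toy illustration of the trap (mine, not from the Wikipedia article): the arithmetic runs happily on ordinal data, it just doesn't mean anything.<br />
<pre># Likert-style responses on an ordinal 1-to-5 scale.
responses = [1, 1, 2, 5, 5]

# Computable, but the result assumes equal gaps between levels,
# which an ordinal scale never guarantees.
mean = sum(responses) / len(responses)            # 2.8

# Only uses the ordering of the levels, so it remains meaningful.
median = sorted(responses)[len(responses) // 2]   # 2

print(mean, median)</pre>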
<br />
When reading about this I was reminded of the way vulnerabilities are categorised, and how we would so dearly like to be able to assign numbers to them so we can do some math and reach some greater insight into the nature of the vulnerabilities we have to deal with. The Common Vulnerability Scoring System (<a href="http://nvd.nist.gov/cvss.cfm?calculator&adv&version=2" target="_blank">CVSS</a>) suffers from essentially this problem: vulnerabilities are assigned attributes from certain (ordered) categories, and then a complicated formula is used to derive a number in a range from 0 to 10. It is basically optimistic to think that a complicated formula can bridge the theoretical problem of doing math on values from an Ordinal scale. I wouldn't necessarily go to the other extreme and say this makes CVSS totally without merit - just that it's not the metric you likely wish it was.Dave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com0tag:blogger.com,1999:blog-190261611237936759.post-76922887387593449072012-04-15T20:39:00.003+01:002012-04-15T20:39:52.679+01:00An idea for adding privacy to social mediaIt would be great if all the media that people upload to social media sites was still under the control of the owner and we didn't have to place complete trust in the social media site. Initially I thought that maybe OAuth might offer part of a solution, but really it isn't designed to solve this problem; it just allows you to share a resource on one web site with another web site. What we really want is to allow a user, the resource owner, to retain some control over their resources and not delegate all security to the social media site (to protect the innocent I'll reference a fictitious social media site called PinFaceTwitPlus).<br />
<br />
Of course one way to protect a photo or video (a resource) that you upload is to encrypt it, but then you have two problems: where to put the encryption key, and how to let the people you share with decrypt the resource. Obviously PinFaceTwitPlus can't have the key or we haven't gained anything, and you can't store the key yourself since you would have to run your own server. So the solution would seem to be another service that is responsible for handing out decryption keys to those privileged few you have authorised to view your resource. Let's call this service PrivacyPoint.<br />
<br />
Here is how I see this working from the point of view of adding a new resource to PinFaceTwitPlus. You go to PrivacyPoint in your browser, you select a locally stored photo and ask PrivacyPoint to protect it. In your browser PrivacyPoint encrypts the photo and embeds an encrypted key that can be used to decrypt the photo (the key is randomly generated and encrypted to a key unique to you), and also a reference to PrivacyPoint. You then upload the encrypted photo to PinFaceTwitPlus and share it with whoever amongst your friends is worthy.<br />
<br />
How this works from a friend's point of view is that when PinFaceTwitPlus wants to show them the photo, the encrypted version gets downloaded to their browser and the page determines that it needs to decrypt the photo. So, using the embedded link to PrivacyPoint, it sends an identifier for the friend and the encrypted key to PrivacyPoint. Using the friend's identifier, PrivacyPoint asks PinFaceTwitPlus if the friend is allowed to view the photo; if they are, PrivacyPoint decrypts the key and returns it to the friend, whose browser can now locally decrypt the photo and display it.<br />
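<br />
The scheme being described is essentially envelope encryption. Here is a minimal sketch of the client-side 'protect' step (my own; the post names no algorithms, so AES-GCM via the Python cryptography library is assumed purely for illustration):<br />
<pre>import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A key unique to the owner, held by PrivacyPoint (hypothetical storage).
owner_key = AESGCM.generate_key(bit_length=256)

def protect(photo: bytes):
    # Encrypt the photo under a fresh random content key...
    content_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    blob = AESGCM(content_key).encrypt(nonce, photo, None)
    # ...then wrap the content key so only PrivacyPoint can recover it later.
    wrap_nonce = os.urandom(12)
    wrapped_key = AESGCM(owner_key).encrypt(wrap_nonce, content_key, None)
    # The encrypted photo, wrapped key and nonces are what get uploaded to
    # PinFaceTwitPlus; the photo itself never passes through PrivacyPoint.
    return blob, nonce, wrapped_key, wrap_nonce</pre>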
<br />
The desirable properties of this system are these:<br />
<ul>
<li>PinFaceTwitPlus does not have (direct) access to your resources - a vulnerability in PinFaceTwitPlus would not immediately leak user resources.</li>
<li>Your resources do not pass through PrivacyPoint, even though PrivacyPoint could decrypt them - a vulnerability in PrivacyPoint would not immediately leak user resources.</li>
</ul>
This system is not perfect for the following reasons:<br />
<ul>
<li>PinFaceTwitPlus is acting as a decision point for which of your friends can see which of your resources. This means it could add itself to your list of friends and grant itself access to your resources. What makes this issue potentially tolerable is that to access your resources it must ask PrivacyPoint for the decryption key, which means PrivacyPoint has a record of it asking; a record that would be viewable by you to audit who accesses your resources.</li>
<li>PinFaceTwitPlus can impersonate any of your friends though, so in any access audit log it would just appear as if a friend viewed your resource. I don't see a (sensible) technical solution to this, but I suspect there is pressure on PinFaceTwitPlus to not allow its employees to impersonate its users due to the negative business impact.</li>
</ul>
From a design point of view the goal has been to separate the resource from the social media site, and this has been done by distributing the trust between two services: PinFaceTwitPlus to store the encrypted resource and decide who has access, and PrivacyPoint to hold the decryption keys and enforce the resource access decision.<br />
<br />
Clearly I have left out vast amounts of detail (lots of which would need to be thought through more thoroughly) as it's difficult to express it all in a blog post.<br />
<br />
To be honest I wouldn't give this design high practicality points, as it requires a fairly substantial infrastructure in order to work. Still, any practical system often starts out as something that's possible and evolves into something that's probable.Dave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com0tag:blogger.com,1999:blog-190261611237936759.post-48445281942172831102012-04-09T13:24:00.000+01:002012-04-09T13:24:55.330+01:00Good password paperSo I have just been reading <a href="http://research.microsoft.com/apps/pubs/?id=154077" target="_blank">A Research Agenda Acknowledging the Persistence of Passwords</a> (I actually read it in the Jan/Feb 2012 issue of IEEE Security & Privacy) and I wanted to make a quick post about it so I would have a record to reference in the future, and because I thought it was an excellent read that challenges the perceived wisdom of "<a href="http://www.southparkstudios.com/clips/176013/drugs-are-bad-mkay" target="_blank">passwords are bad, mmmkay</a>". The point of the article is that it doesn't stand up and say passwords are either good or bad, but that we currently lack the information to know the answer, and that since there is no clear emerging solution on the horizon we have to take a more pragmatic approach to how we should be using passwords as a security solution.Dave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com0tag:blogger.com,1999:blog-190261611237936759.post-2591888430580882932012-04-01T10:04:00.001+01:002012-04-01T10:04:12.007+01:00OWASP Web Defence PresentationI went to the OWASP London meeting last Thursday where I watched the excellent <a href="https://www.owasp.org/index.php/User:Jmanico" target="_blank">Jim Manico</a> give a presentation on the state of the art in web defences (presentation available <a href="https://www.owasp.org/index.php/London" target="_blank">here</a>).<br />
<br />
The talk was largely focused on the most common web app vulnerabilities and how to defend against them. If I was to over-simplify the message of the presentation I would say that it focused on training developers and giving them libraries to help mitigate the threats.<br />
<br />
Whilst I agreed with everything that was said, and I fundamentally believe that training developers is an essential part of any SDLC, I would have liked to see the main emphasis be on developing frameworks that either virtually eliminate the vulnerability or allow for ease of auditing that developers are doing the right thing.<br />
<br />
I actually had a chat with Jim in the pub after the presentation and asked him about the focus of his talk; it turns out we pretty much agree that web defences need to be part of a framework (and he was clearly better informed than I am about the state of the art in that department). His talk, it seems, was focused on the practical things that can be done today.<br />
<br />
If I was going to give a similar presentation I think I would feel the need to focus on where we need to get to. To my mind vulnerabilities of all types, in any piece of software, are basically impossible to eliminate via training. This does not make training useless; training is still necessary, I just don't think it's sufficient.<br />
<br />
We need to make our web applications secure by design, secure by default and security auditable. The first two principles are commonly understood; the last is something I don't see people talking about, and I think it is the direction we need to move in. Any framework will have the ability to bypass the default and do something insecure, and that's fine as that kind of flexibility is usually essential. What we need is the ability to easily find deviations from security best practice and focus our review efforts on those areas. Unless we can identify the weak links in our security chain we will never be able to get the kind of security assurance we want.Dave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com0tag:blogger.com,1999:blog-190261611237936759.post-22483793259443847902012-03-14T20:17:00.000+00:002012-03-14T20:17:34.160+00:00How to become a Certificate AuthorityCAs - there are certainly a lot of them, but where do they come from? How are they made? First of all, don't believe anything I have to say on the matter; I have never gone through the process and anything you read here is speculation and incomplete information.<br />
<br />
However, if you were to start looking around the internet for how to become a CA, here is some of the information you may find, some of which you may find interesting.<br />
<br />
So any business can become a CA but it's pointless unless you get your root certificate into a browser. To do this you first need to ensure you meet the requirements of the various browser certificate programs. These are:<br />
<br />
<a href="http://technet.microsoft.com/en-us/library/cc751157.aspx">Microsoft Root Certificate Program</a><br />
<br />
<a href="http://www.mozilla.org/projects/security/certs/policy/index.html">Mozilla CA Certificate Policy (Version 2.0)</a><br />
<br />
<a href="http://www.apple.com/certificateauthority/ca_program.html">Apple Root Certificate Program </a><br />
<br />
Google appears to use the underlying OS certificate store. So on Windows this will be the same as what IE uses; however on Linux, the different distributions seem to use the ca-certificates package. It is interesting to note its description:<br />
<br />
<pre>$apt-cache show ca-certificates
Package: ca-certificates
...
Description-en: Common CA certificates
This package includes PEM files of CA certificates to allow SSL-based
applications to check for the authenticity of SSL connections.
.
It includes, among others, certificate authorities used by the Debian
infrastructure and those shipped with Mozilla's browsers.
.
Please note that certificate authorities whose certificates are
included in this package are not in any way audited for
trustworthiness and RFC 3647 compliance, and that full responsibility
to assess them belongs to the local system administrator.
...</pre>
<br />
The ca-certificates package seems to include mostly Mozilla certificates, although a few others as well. It also seems to power the Java key store.
<br />
<br />
According to the above certificate programs, meeting any of the following standards is sufficient to convince the browsers (depending on the browser maker) to include you as a CA.<br />
<br />
<table border="1">
<tbody>
<tr>
<td></td>
<td>Microsoft</td>
<td>Mozilla</td>
<td>Apple*</td>
</tr>
<tr>
<td><a href="http://www.x9.org/standards/search/details?standard_key=3f63731c02c1074686aa37e0c1b43e72556d008f">ANSI X9.79-1:2001</a></td>
<td>N</td>
<td>Y</td>
<td>N</td>
</tr>
<tr>
<td><a href="http://pda.etsi.org/pda/home.asp?wki_id=vRB.0b.A2uoprrwvH-WyI">ETSI TS 101 456 V1.2.1 (2002-04) or later version</a></td>
<td>Y</td>
<td>Y</td>
<td>N</td>
</tr>
<tr>
<td><a href="http://www.etsi.org/deliver/etsi_ts/102000_102099/102042">ETSI TS 102 042 V2.1.2 (2010-04) or later version</a></td>
<td>Y</td>
<td>Y</td>
<td>N</td>
</tr>
<tr>
<td><a href="http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=35707">ISO 21188:2006</a></td>
<td>Y</td>
<td>Y</td>
<td>N</td>
</tr>
<tr>
<td><a href="http://www.webtrust.org/homepage-documents/item54279.pdf">WebTrust Principles and Criteria for Certification Authorities</a></td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
</tr>
<tr>
<td><a href="http://www.webtrust.org/homepage-documents/item54281.pdf">WebTrust for Certification Authorities—Extended Validation Audit Criteria</a></td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
</tr>
</tbody></table>
*Actually Apple may accept more, but you have to prove equivalence.<br />
<br />
The ANSI and ISO organisations should be familiar to most people, but what about ETSI and WebTrust? Well ETSI is the European Telecommunications Standards Institute, and WebTrust is made up of the American Institute of Certified Public Accountants (AICPA) and the Canadian Institute of Chartered Accountants (CICA).<br />
<br />
The 2002 ETSI standard actually only applies to CAs that want to create certificates for digital signatures.<br />
<blockquote class="tr_bq">
"These policy requirements are specifically aimed at qualified certificates issued to the public, and used in support of qualified electronic signatures ..." (Section 1)</blockquote>
As a standard it is based on:<br />
<blockquote class="tr_bq">
"The present document makes use of the principles defined in IETF RFC 2527 [2] and the framework defined in ANSI X9.79 (see bibliography). The aim of the present document is to achieve best possible harmonization with the principles and requirements of those documents." (section 5.1 NOTE 2)</blockquote>
The later ETSI standard (the version linked above is from 2011-12) is based on the earlier 2002 ETSI standard and covers the full range of CA activities. The later standard also references RFC 3647 (from 2003), which obsoleted RFC 2527 (1999). RFC 3647 is itself based on an American Bar Association document, "PKI Assessment Guidelines, v0.30, Public Draft for Comment, June 2001", which references the WebTrust standard of the time (v1.0, issued in August 2000 as a licensed document).<br />
<br />
The version 2.0 WebTrust (AICPA/CICA) standards (linked above) were created by some folk from the big accountancy firms (Deloitte & Touche, Ernst & Young, KPMG) but declare that they are based on ISO 21188, and are consistent with the ANSI and IETF (the RFCs) standards. <br />
<br />
<blockquote class="tr_bq">
"This document was developed by a CICA/AICPA Task Force using ISO 21188 “Public Key Policy and Practices Framework” and Version 1.0 of the AICPA/CICA WebTrust Program for Certification Authorities."</blockquote>
<blockquote class="tr_bq">
"The Principles and Criteria for Certification Authorities are consistent with standards developed by the American National Standards Institute (ANSI), International Organization for Standardization (ISO), and Internet Engineering Task Force (IETF). The Principles and Criteria are also consistent with the practices established by the CA Browser Forum (see www.cabforum.org)." (Page 6)</blockquote>
<br />
Although I didn't have access to the ISO 21188 (2006) document, I found a couple of references that implied it was based on the ANSI X9.79 standard (from either 2000, or the X9.79-1 from 2001). Again, I didn't have access to the ANSI document, but I found <a href="http://www.mail-archive.com/mozilla-crypto@mozilla.org/msg06378.html">this</a> Mozilla mail thread from 2005, which seems to indicate that the ANSI document shared a lot in common with the v1.0 WebTrust standard (from August 2000).<br />
<br />
The ANSI X9.79 standard, according to these presentations, <a href="http://www.oasis-pki.org/pdfs/CA_Trust.pdf" target="_blank">here</a> and <a href="http://csrc.nist.gov/archive/pki-twg/Archive/y1999/presentations/twg-99-70.pdf" target="_blank">here</a>, draws the criteria that CAs must meet from several other ANSI, BS, FIPS, IETF and ISO standards. It does seem to be the granddaddy of all the other CA standards.<br />
<br />
Standards are one thing, but all the CA certificate programs call for an independent audit against one of the standards. It seems that WebTrust is the most used standard nowadays, and it even comes with a <a href="http://www.cabforum.org/WebTrustAuditGuidelines.pdf" target="_blank">draft report</a> that auditors can give to a prospective CA indicating that it passed the audit. A choice quote from that draft is (where ABC is the name of the CA):<br />
<blockquote class="tr_bq">
"ABC-CA’s management is responsible for its assertion. Our responsibility is to express an opinion on management’s assertion based on our examination."</blockquote>
So basically the CA is audited against what it asserts, not against a list of mandated requirements. This is not necessarily a bad thing, but it does make you wonder about the relationship between auditors and auditees when the latter is paying the former. Admittedly that is a problem for auditors regardless of what they are auditing against.<br />
<br />
So how does this change your perception of how a CA becomes a CA? Well for me there aren't a lot of real surprises here, but I would be very interested to see the actual audit reports produced by the independent auditors. I think there is a real argument that, since the public is putting so much trust in the CAs, they should make their audit reports publicly available, or at least release substantial parts of them.Dave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com0tag:blogger.com,1999:blog-190261611237936759.post-88518202300127067702012-03-01T17:29:00.000+00:002012-03-01T17:29:30.033+00:00Trust CAs? Yes. No. Probably.So <a href="http://www.darkreading.com/authentication/167901072/security/news/232601224/five-schemes-for-redeeming-trust-in-ssl.html">this</a> was an interesting read on the various proposals to change the way CA based PKI works. This led me to want to learn more about some of the proposals, including Convergence by Moxie Marlinspike (youtube talk <a href="https://www.youtube.com/watch?v=Z7Wl2FW2TcA">here</a>)<br />
<br />The Moxie talk was interesting as he talks a lot about where the trust in the CA system is and where it should be. I think we can go even further in the analysis of the trust.<br />
<br />
For starters, the root of trust is not just the CAs, but also where the CA root certificates are stored: the browser/OS certificate store. We also need to trust how they get into the store. This is interesting, as the root CA certificates would be updated (I assume) over an HTTPS connection, which is ironic since we are relying on the trustworthiness of the CA system in order to update the trustworthiness of the CA system. This is fine, unless there are issues with the CA system, which I think the current zeitgeist indicates there are.<br />
<br />
There are also the browsers themselves: if they were compromised in any way then the trust in the CA system would be broken. I don't think people consider this to be high risk, but let's not forget that companies like Microsoft or Google are not immune from political pressure, and certainly not economic pressure, especially when applied by a nation state.<br />
<br />
To me that is the major design criterion the CA system needs to achieve: protect people from the most powerful adversary, the government of their country. There are other powerful adversaries, but they seem to have balancing forces: organised crime has international police forces; untrustworthy CAs have the browser vendors and economic pressures. The government of your country does not have a balancing force (this is less true in a democratic country, but still not sufficiently balancing in my opinion).<br />
<br />
If you really wanted to be paranoid about trust you could also question the design and implementation of the cryptographic algorithms in the code. However it's fairly easy to test that they work as expected, and they can also be reverse engineered.<br />
<br />
Moxie also mentions that any authenticity system needs to worry about who you need to trust and for how long. I think the current CA system is not terribly broken in this respect. We can and do revoke root CA certificates, so we don't trust CAs forever, and although people can argue that we shouldn't be trusting them at all, we have for the past 20 years and the Internet hasn't broken; by and large it all works fairly well. It's unreasonable to expect that CAs will never go bad, so as long as we have balancing forces (browser and economic pressures) the current system works well enough. That said, I would like to see the problem of any CA being able to certify any domain solved; I think that is a glaring vulnerability in the system.<br />
<br />
Fundamentally, users are not in a position to make a trust decision, and allowing them to choose might feel like empowering them, but in the end they will always choose the path of least resistance. This just leaves the option of having watchers watch the system and react to problems, which obviously makes it a reactive rather than a proactive system. So there will always be a certain amount of fraud or insecurity as a result. Until that escalates to a point where the cost outweighs the benefits, the current solution of a CA-based PKI is likely to remain (largely) unchanged.Dave Solderahttp://www.blogger.com/profile/10764821366252176901noreply@blogger.com0