Wednesday 16 April 2014

Website specific password manager

Following on from my post about the benefits of websites enforcing unique passwords for users, I thought I would try to come up with a better scheme that avoided the trade-off between passwords users can remember and password complexity.

My goal was to devise a way a website could allow users to choose a password they could remember and still gain the same benefits as if users chose unique passwords.  Additionally, the security of the scheme should ideally be better than, but at least no worse than, a well-designed password authentication scheme today.

The basic idea is the same as the way a password manager works.  Password managers allow a user to choose a strong local password that is used on their client machine to protect a password for a certain website.  When a user enters their local password in their password manager, it decrypts the website password, and then that is sent to the website to authenticate the user.  Usually the encrypted password(s) are stored online somewhere so the password manager can be used on multiple devices.

The problem (I use this term loosely) with password managers is that they require users to go to the effort of using one, or perhaps to understand what it is and trust it with their passwords.  It would be preferable to get some of the benefits of a password manager on a website without requiring the user to know, or even be aware, that one was being used.

So what I describe below is effectively a website specific password manager (that manages just 1 password, the password of the website).  The password management will happen behind the scenes and the user will be none the wiser.  The basic idea is that when a user registers with a website and chooses a password, that password will actually be a local password used to encrypt the actual website password.  The actual website password will be chosen by the website, be very strong and be unique (to the website).  When a user tries to log on, the following would happen:
  • The user will be prompted for their username and (local) password.
  • The entered username will be sent to the website and used to look up the encrypted password store (encrypted with the local password and containing the actual website password) for that user, which is then sent to the client.
  • On the client, the local password is used to decrypt the encrypted password store and recover the website password.
  • The website password is sent to the website to authenticate the user.
Ignoring security for a second, we can improve the efficiency of the scheme by caching the encrypted password store on the client.  In fact the only reason the encrypted password store is stored on the server is to accommodate login from multiple devices.  It's an assumed requirement that login from multiple devices is necessary and that registering a device with a website doesn't scale sufficiently for it to be an option.

To examine the security of the scheme we need more details, so I provide those below.  But first I will say what security threats we are NOT looking to mitigate.  This scheme doesn't concern itself with transport security (it's assumed HTTPS is used for all communications), doesn't protect against XSS attacks (attacker script in the client will be able to read passwords as they are entered), and doesn't protect against online password guessing attacks (it's assumed the website limits the number of password guesses).

The threats we are concerned about though are:
  • An attacker that gets an encrypted password store for a user should not be able to recover the user's local password in an offline attack.
  • An attacker that gets a DB dump of password hashes should not be able to recover either the website password or the local password.
So let's get into the detail of the scheme.  Here is a description of the algorithms and parameters involved.
  • Hash(), a password hashing algorithm like PBKDF2, bcrypt or scrypt.  Its output size is 256 bits.
  • HMAC_K(), this is HMAC-SHA256 using a secret key K.
  • K, a 256 bit secret key used by the website solely for password functionality, it is not per-user, it is the same for all users.
  • xor, this means the binary exclusive-or of two values.
  • PwdW, the website password, a 256 bit strong random value.
  • PwdL, the local password, chosen by the user.
Let's start with the registration process, a new user signs up:
  • A user chooses to register with the website.
  • The website generates a new PwdW and returns this to the client.
  • The user chooses a password (and username if appropriate), this is PwdL.
  • The client sends to the website, PwdW xor Hash(PwdL).
  • The website stores (against the username), PwdW xor Hash(PwdL) and also PwdW xor HMAC_K(PwdW).
I've skipped over some of the implementation details here (like the Hash parameters etc), but hopefully provided enough details for analysis.  One important point is that the website does not store the value PwdW anywhere, it only stores the 2 values in the last step, namely:
  • PwdW xor Hash(PwdL)      (this is the encrypted password store from the description above)
  • PwdW xor HMAC_K(PwdW)  (this is the equivalent of the password hash in most schemes, it is used to verify that any PwdW submitted by a client is correct)
Now let's see what happens during login:
  1. User visits the website and enters their username and PwdL
  2. The client sends the username to the website, which looks up the encrypted password store, PwdW xor Hash(PwdL) and returns it to the client
  3. The client generates Hash(PwdL) and uses this to recover PwdW by calculating (PwdW xor Hash(PwdL)) xor Hash(PwdL) = PwdW
  4. The client sends PwdW to the website
  5. The website recovers HMAC_K(PwdW) by calculating (PwdW xor HMAC_K(PwdW)) xor PwdW = HMAC_K(PwdW) (let's call this Recovered_HMAC).
  6. The website calculates HMAC_K(PwdW) using the received value of PwdW from the client (let's call this Calculated_HMAC).
  7. If Recovered_HMAC = Calculated_HMAC the user is successfully authenticated.
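To make the scheme concrete, here is a minimal sketch of registration and login in Python (in a real deployment the client-side parts would run as JavaScript in the browser).  The helper names, the salt handling and the PBKDF2 parameters are my own illustrative assumptions; the post deliberately leaves those implementation details open:
import hashlib
import hmac
import os

LEN = 32  # all values are 256 bits

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def Hash(pwd_l: str, salt: bytes) -> bytes:
    # Stand-in for the password hashing algorithm (PBKDF2 here)
    return hashlib.pbkdf2_hmac("sha256", pwd_l.encode(), salt, 100_000, dklen=LEN)

K = os.urandom(LEN)          # site-wide secret key K, held by the website (ideally in an HSM)
SALT = b"illustrative-salt"  # salt handling is an assumed detail, not specified in the post

# Registration
pwd_w = os.urandom(LEN)                  # website generates PwdW and sends it to the client
pwd_l = "correct horse battery staple"   # user chooses PwdL on the client
encrypted_store = xor(pwd_w, Hash(pwd_l, SALT))  # client sends PwdW xor Hash(PwdL) to the website
check_value = xor(pwd_w, hmac.new(K, pwd_w, hashlib.sha256).digest())  # PwdW xor HMAC_K(PwdW)
# The website stores (encrypted_store, check_value) against the username; PwdW itself is discarded.

# Login
submitted_pwd_w = xor(encrypted_store, Hash("correct horse battery staple", SALT))  # client recovers PwdW
recovered_hmac = xor(check_value, submitted_pwd_w)                        # (PwdW xor HMAC_K(PwdW)) xor PwdW
calculated_hmac = hmac.new(K, submitted_pwd_w, hashlib.sha256).digest()   # HMAC_K of the submitted value
print(hmac.compare_digest(recovered_hmac, calculated_hmac))               # True for the correct PwdL
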
So let's address the threats we are concerned about.  The first is "An attacker that gets an encrypted password store for a user should not be able to recover the user's local password in an offline attack."  Let's assume an attacker knows the username they want to target, and that they download the encrypted password store (PwdW xor Hash(PwdL)) for that user.  If we accept that PwdW is a random 256-bit value, then an attacker is unable to determine the value of Hash(PwdL), because for every possible hash of every possible local password there is a value of PwdW consistent with the stored value.  This is the security property of the one-time pad.  Of course an attacker could instead generate Hash(PwdL) (by guessing PwdL) and calculate a candidate PwdW, but they have no way of knowing whether it is correct without submitting it to the website, and we are assuming there are controls in place that limit the number of online password guessing attempts.

The other threat we consider is "An attacker that gets a DB dump of password hashes should not be able to recover either the website password or the local password."  There are 2 scenarios we want to consider, when the attacker does not have access to the secret key K, and when they do.

If the attacker does not have access to K, then they have access to (PwdW xor Hash(PwdL)) and (PwdW xor HMAC_K(PwdW)).  If the attacker tries to guess PwdW offline they have the problem of having no way to determine if a guess is correct, because they cannot verify the HMAC without the key K.  They can of course guess PwdW online by submitting it to the website, but we assume controls that mitigate this attack.  If the attacker tries to guess PwdL, they have exactly the same problem of being unable to verify offline whether the calculated PwdW is correct.

If the attacker does have access to K then they have the same information that the website has and so can perform an offline (dictionary) attack against the local password PwdL.  How successful this attack is depends on the parameters of the password hashing scheme (Hash()) that were chosen and the complexity of the user chosen password PwdL.  For the purposes of this scheme though the attacker in this scenario has the same chance of success as they would have against the current industry best practice of storing a password hash in the DB (in a scenario where those hashes were extracted by the attacker and attacked offline).
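
To illustrate why possession of K enables an offline attack, here is a small self-contained sketch of the dictionary attack, using the same illustrative PBKDF2 parameters as the earlier sketch.  The attacker is assumed to hold K, the encrypted password store and the check value for the target user:
import hashlib
import hmac

LEN = 32

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def offline_attack(K, encrypted_store, check_value, salt, dictionary):
    for guess in dictionary:
        h = hashlib.pbkdf2_hmac("sha256", guess.encode(), salt, 100_000, dklen=LEN)
        candidate_pwd_w = xor(encrypted_store, h)      # the PwdW this guess implies
        candidate_check = xor(candidate_pwd_w, hmac.new(K, candidate_pwd_w, hashlib.sha256).digest())
        if hmac.compare_digest(candidate_check, check_value):
            return guess                               # guess verified offline; PwdL and PwdW recovered
    return None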

So that's a quick and dirty description and security analysis of a novel password authentication and protection scheme for a website.  The benefits of this scheme are:
  • The website never gets to see the local password chosen by the user.  This is not absolute though; it assumes a trusted website (and trusted 3rd party resources the website requests).  This is a benefit because:
    • Websites cannot accidentally expose the user's local password, either internally or externally.
    • A malicious insider working for the website would have a much more difficult task in obtaining local passwords with an intent of using those passwords on other websites.
  • A successful SQL Injection attack that dumped out the password hash DB table would not allow an attacker to compromise any accounts (provided the secret key K was not also compromised).
  • If the secret key K was stored and used in an HSM, then it would be impractical under any circumstances for an attacker to mount an offline attack (short of physical access to the user!)
But let's also be clear about the problems:
  • You probably shouldn't be implementing any scheme you read about in a blog.  Such things need to be reviewed by experts and their acceptance needs to reach a level of consensus in the security community.  Anyone can create a security system that they themselves cannot break.
  • Lack of implementation details.  Even if the theory is sound there are several pitfalls to avoid in implementing something like this.  This means it's going to be difficult to implement securely unless you are very knowledgeable about such things.
  • If you are using a salted password-hash scheme now and went through and encrypted those hashes (as another layer of security), you basically get the same level of security as this scheme proposes. The benefit of this scheme is that the user's password is not sent to the website.
  • Password changing.  It may be more difficult to mandate or enforce users changing their local password, because the change must be done on the client side.
  • JavaScript crypto.  The Hash() algorithm would need to be implemented in JavaScript.  The first problem is that implementing crypto algorithms in JavaScript in the browser raises several security concerns.  The second is that the Hash algorithm is computationally expensive, which means performance might negatively affect usability, perhaps more so when implemented in JavaScript, and perhaps even more so on certain devices e.g. mobile.
  • Lastly, I didn't quite achieve my goal of allowing users to choose weaker passwords as in the worst case attack (attacker has access to the secret key), the use of weak passwords would make recovering those passwords relatively simple.
But hey, it's a novel (as far as I know) approach, and it might be useful to someone else designing a solution for website authentication.

Friday 4 April 2014

Removing usernames and enforcing unique passwords

I often have "ideas" that, in the haystack of my mind, are usually straws rather than that elusive needle.  One recent one was whether the username is necessary to login to a website.  Usually the Internet is excellent at helping identify straws, and this idea is no exception, see http://blog.codinghorror.com/why-do-login-dialogs-have-a-user-field/.  But let's follow through on the idea and see where it takes us.

So the basic idea is that when a user wants to authenticate to a website, all they do is provide their password and the website will use that information to identify them.

The draw of removing usernames for me is really an efficiency one, as it's not immediately clear that usernames are required to authenticate to a website.

The obvious problem though is if someone tries to register with the same password as an existing user.  Since we can't have 2 users with the same password (more on unique passwords later), we must reject the registration and reveal that the chosen password is a valid credential for another user.  We can overcome this problem by forcing the existing user with that password to change their password (using (an assumed pre-existing) out-of-band password reset mechanism e.g. email).  This highlights one benefit of authenticating with the password only: it creates an incentive for the user to choose a strong password (as otherwise they will continually have to change their password when new users attempt to register with the same password).

From the website's perspective there is a problem though: how do you identify the user?  If passwords are being stored using a salted password-hashing algorithm, then it would be too inefficient for the website to, for each row in the password hash DB table, fetch the salt for that row, generate the password hash (using the password of the user trying to authenticate), and then compare it against the stored password hash in that row.  That approach simply does not scale.  We certainly don't want to use a password hash without a salt or with a fixed salt (as this makes dictionary attacks on the password hash DB table much quicker in the event the table is exposed).

One option is to encrypt the password and use that as the salt for the password hash.  To verify a login attempt the website would encrypt the password to create the salt, calculate the password hash (using the salt) and compare with the list of stored password hashes (potentially sorted for efficient searching).  It's important to point out this encrypted password, which we are using as the salt, is not stored, but calculated during the login attempt.
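
As a rough sketch of how this could look (Python; note that HMAC under the secret key stands in here for the "encrypt the password" step described above, purely for illustration, and the PBKDF2 parameters are arbitrary):
import hashlib
import hmac

def derive_salt(key: bytes, password: str) -> bytes:
    # Keyed transformation of the password; recomputed at login, never stored.
    return hmac.new(key, password.encode(), hashlib.sha256).digest()

def password_record(key: bytes, password: str) -> bytes:
    salt = derive_salt(key, password)
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def login(key: bytes, password: str, stored_hashes: set) -> bool:
    # Identify the user from the password alone: one hash computation,
    # then a lookup against the table of all users' password hashes.
    return password_record(key, password) in stored_hashes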

If the password hash store was ever compromised (and we always assume it will be) then it will be impossible to brute-force the passwords without the encryption key as well (as the salt will not be known and will not be practical to guess).  Thus the security of this approach relies on protecting the encryption key.  The key should not be stored in the DB, as the likely extraction method of the password hash DB table is SQL Injection, meaning if the key was in the DB it too could be extracted.  The key should be stored somewhere the DB cannot access (the goal is to require the attacker to find a separate exploit to obtain the key).  It could be stored in configuration files, but the best option would be an HSM, with encryption done on-board.  Once an attacker has code executing that can talk to the HSM, it's game over anyway.  If the encryption key was obtained by an attacker then they could perform a dictionary attack on the password hash DB table more efficiently than attacking traditional salted password hashes.

We can make another optimisation for security as well.  We should keep track of the passwords that more than one user has chosen in the past i.e. password collisions discovered during user registration.  After all, we don't want to force a user to change their weak password, only for that password to become available for use again!  This way we will avoid the re-use of weak passwords e.g. 123456.  Now imagine we had a list of passwords that we know more than one person had thought of.  Now imagine we publicly shared that list (so other websites could avoid letting users choose those passwords as well).

So let's imagine we now have an application that doesn't require usernames for authentication and all users have unique passwords.  What are the threats?  Well a distributed dictionary attack against our application is a problem because every password guess is a possible match for every user.  Annoyingly the more users we have the better the chances of the guess being right.  Additionally, limiting the number of guesses is more difficult since authentication attempts are not user specific.  This makes clear the benefit of having usernames; they make online authentication attacks much harder.

So my conclusion was that although usernames might not be strictly necessary, they do offer significant security benefits.  From a user perspective as well there is minimal burden in using usernames as browsers (or other user agents) often remember the username for convenience.

But what about the benefits of unique passwords!  What about the incentive we gained when users were forced to choose strong passwords?  Well what if we keep usernames AND still forced passwords to be unique?  Could this be the best of both worlds?  Might I have just pricked my finger on a needle?

The 'sticking point' for me is the user acceptability of forcing unique passwords.  It may drive the uptake of password managers or strong passwords, or it might annoy the hell out of people.  Perhaps for higher security situations it could be justified.

Tuesday 1 April 2014

Revisiting Fundamentals: Defence in Depth

I think it is a worthwhile exercise to revisit fundamental ideas occasionally.  We are constantly learning new things and this new knowledge can have an effect on the way we understand fundamental concepts or assumptions, sometimes challenging them and sometimes offering new insights or facets to appreciate.  I recently learned about how Defence in Depth was used in some historic military battles and it really challenged my understanding of how the principle can be applied to security.

So what is Defence in Depth (DiD) and what is it good for?  My understanding is that DiD originated as a military tactic, but is more commonly understood in its engineering sense.  It turns out the term has different meanings in these different disciplines.  Largely I am going to focus on the meanings with regard to application security (as this is relevant to me).

Engineering
In engineering, DiD is a design tactic to protect a resource by using layers of controls, so if one control fails (or is bypassed by an attacker) there is another control still protecting the asset.  By incorporating DiD into an application's security design we add (passive) redundant controls that provide a margin of safety should a control be bypassed.  The controls we add may be at different trust boundaries or act as only a partially redundant control for other controls e.g. input validation at the perimeter partially helps protect against XSS, SQLi, etc.

I would say that adding DiD in the engineering sense is essential to any security design of an application.  Security controls should be expected to fail, and so accounting for this by architecting in redundancy to achieve a margin of safety makes sense.  I will say that some disadvantages to redundancy have been identified that likely apply to DiD as well.  Briefly:
  • Redundancy can result in a more complex system.  Complex systems are harder to secure.
  • The redundancy is used to justify weaker security controls in downstream systems.
  • The redundancy is used as an excuse to implement risky functionality which reduces the margin of safety.  Multiple independent functionality relying on the margin can have an accumulated effect that reduces or removes the margin of safety.

Military
DiD in military terms is a layered use of defensive forces and capabilities that are designed to expend the resources of the attacker trying to penetrate them e.g. Battle of Kursk.  What's crucial to appreciate, I think, is that in the physical world the attacker's resources (e.g. troops, guns, bombs, time) are usually limited in supply.  In this way DiD grinds down an enemy and the defender wins if the cost/benefit analysis of the attacker changes so they either stop or focus on another target.  Additionally, defensive forces are also a limited resource and can be ground down by an attacker.  It is possible for an attack to result in stalemate as well, which may be considered a defensive win.

Taking this point of view then, is protecting an application analogous to protecting a physical resource?  We need it to be analogous in some way otherwise using DiD (in the military sense) might not be an appropriate tactic to use for defending applications.

Well one of the key points in the physical arena is the consumption of resources, but in the application arena it seems less accurate (to me) to say that computing resources are consumed in the same way.  If an attacker uses a computer to attack an application, the computer is not "consumed" in the process, nor is it any less able to be used again.  The same is true for the application's defences.

So the consumption of physical resources doesn't carry over directly from the physical arena to the application security arena.  There are other types of resources though, non-physical resources.  I can think of 2 resources of an attacker that are limited and are applicable to application security:
  • time - as a resource, if a defensive position can increase the time an attacker needs to spend performing an attack, the cost/benefit analysis of the attacker may change, the "opportunity cost" of that time may be too high.
  • resolve - in the sense that an attacker will focus on a target for as long as they believe that attacking that target is possible and practical.  If a defensive position can make the attacker believe that attacking it is impractical, then the attacker will focus their efforts elsewhere.
There is an irony in 'resolve' being a required resource.  The folks who have the appropriate skills to attack applications are a funny bunch, I reckon, as they are exactly the type of people who are very persistent; after all, if someone can master the art of reverse engineering or blind SQL injection, they are likely one persistent SOB.  In a sense they are drawn to the problem of (application) security because it requires the very resource they have in abundance.

As an aside, this could be why APT attacks are considered so dangerous, it's not so much the advanced bit (I rarely read about advanced attacks), but the 'persistent' bit; the persistent threat is a reflection of the persistence (or resolve) of the attacker.  The risk being that most defences are going to be unlikely to break that resolve.

So if those are the resources of the attacker then that gives us a way to measure how effective our DiD controls are; we need to measure how they increase time or weaken resolve.

Layered controls work well in increasing the time the attacker has to spend.  The more dead ends they go down, the more controls they encounter that they cannot bypass, all that takes time.  Also, attackers tend to have a standard approach, usually a set of tools they use, so any controls that limit the effectiveness of those tools and force them to manually attack the application will cause a larger use of time.  Ironically though, consuming time could have an opposite effect on the resolve of the attacker, since there is likely a strong belief that a weakness exists somewhere, and also the "sunk cost effect",  meaning the attacker may become more resolved to find a way in.

I'm not a psychologist so I can't speak with authority on what would wear down resolve.  I suspect it would involve making the attack frustrating though.  I will suggest that any control that makes the application inconsistent or inefficient to attack, will help to wear down resolve.

I did a bit of brain-storming to think of (mostly pre-existing) controls or designs an application could implement to consume 'time' and 'resolve' resources:
  • Simplicity.  Attackers are drawn to complex functionality because complexity is the enemy of security.  Complexity is where the vulnerabilities will be.  The simpler the design and interface (a.k.a. a minimal attack surface), the less opportunity the attacker will perceive.
  • Defensive counter controls.  Any controls that react to block or limit attacks (or tools). 
    • Rapid Incident Response.  Any detection of partial attacks should be met with the tightening of existing controls or attack-specific controls.  If you can fix things faster than the attacker can turn a weakness into a vulnerability, then they may lose hope.  I do not pretend this would be easy.
    • Random CAPTCHAs.  Use a CAPTCHA (a good one that requires a human to solve), make these randomly appear on requests, especially if an attack might be under way against you.
  • Offensive counter controls.  Any control that misleads the attacker (or their tools).
    • Random error responses.  Replace real error responses with random (but valid looking) error responses; the goal is to make the attacker think they have found a weakness when in reality they haven't.  This could have a detrimental effect on actual troubleshooting though.
    • Random time delays. Vary the response time of some requests, especially those that hit the database.  Occasionally adding a 1 second delay won't be an issue for most users but could frustrate attacker timing attacks.
    • Hang responses.  If you think you are being attacked, you could hang responses, or deliver very slow responses so the connection doesn't time out.
I'm certainly not the first to suggest this approach, it is known as "active defence".  There are even some tools and a book about it (which I cannot vouch for).  The emphasis may be more on network defensive controls rather than the application controls that my focus has been on.
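
As a flavour of what an application-level active defence control might look like, here is a sketch of a random-delay decorator (framework-agnostic Python; the trigger probability and delay range are arbitrary illustrative choices):
import functools
import random
import time

def random_delay(probability=0.05, max_seconds=1.0):
    # Occasionally delay a request handler to frustrate timing-based probing.
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(*args, **kwargs):
            if random.random() < probability:
                time.sleep(random.uniform(0.1, max_seconds))
            return handler(*args, **kwargs)
        return wrapper
    return decorator

@random_delay(probability=0.05, max_seconds=1.0)
def search_products(query):
    # Placeholder for a handler that hits the database.
    return "results for " + query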

TL;DR
Defence in Depth in application security should involve redundant controls but may also include active defences.

Tuesday 25 March 2014

Making Java RIAs MIA - Enterprise Style

This post is all about how to disable Java in the browser, or alternatively, how to make Java Rich Internet Applications (RIAs) missing-in-action.  There is a lot of information out there on how to do this, but I couldn't find a good source that covered it from an Enterprise point of view, that is, how to disable Java in the browser at scale (rather than just doing it on your own machine using a GUI), across multiple browsers and OSes.

First let's cover the obvious: why do we want to do this?  See my page on Java Vulnerabilities.  Basically, if Java is enabled in your browsers then you are opening yourself up to a world of pain.  If you have Java enabled in your browsers then you have malware somewhere in your Enterprise.

What we want is to understand the options we have for disabling Java in the browser and understand what the settings are that we need to configure.

Browser Agnostic
Uninstall Java
This is obviously a good option as it removes the risk of Java entirely.  However, depending on your environment, users may just re-install Java, so unless you are actively scanning for Java installations across your Enterprise this approach might not be as effective as you'd like it to be.

For that reason I'm not going to focus on this solution too much, but for non-Windows machines see Oracle's advice on uninstalling and this post.

For Windows you can detect the installed versions of Java using
wmic product where "name like 'Java%'" get name
and uninstall these via
wmic product where "name like 'Java%'" call uninstall


Java Deployment Options
From Java version 7 update 10 a deployment option exists to disable Java content in all browsers (these are the same options as available in the Java Control Panel).  This option is a feature of Java and so is available on both Windows and non-Windows OSes.

The option is stored in a deployment.properties file.  The user has their own version of this file stored at (on Windows and non-Windows respectively):
<user_profile>\AppData\LocalLow\Sun\Java\Deployment\deployment.properties
<user home>/.java/deployment/deployment.properties

A system version, containing the default values for all users, can be created and stored at (on Windows and non-Windows respectively):
%WINDIR%\Sun\Java\Deployment\deployment.properties
/etc/.java/deployment/deployment.properties

This system version is only used by Java if a deployment.config file exists at (on Windows and non-Windows respectively):
%WINDIR%\Sun\Java\Deployment\deployment.config
/etc/.java/deployment/deployment.config
and contains an option that specifies the location of the system deployment.properties to use e.g. on Windows:
deployment.system.config=file\:C:/WINDOWS/Sun/Java/Deployment/deployment.properties
deployment.system.config.mandatory=true

The system version contains the default values that the user version will use, but the user can override these.  That is unless the system version indicates that the option should be locked (which means the user cannot change the option).

So to disable the plugin across all browsers and users the system deployment.properties should contain:
deployment.webjava.enabled=false
deployment.webjava.enabled.locked

Both the system deployment.config and deployment.properties should be ACL’ed so that the user cannot edit them (but can read them).

Although this option only applies from Java version 7 update 10, there is no issue in setting this option for earlier versions of Java, as the setting is simply ignored.  The benefit is that should Java be upgraded then this setting will be adopted.
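
For rolling this out at scale, a small script can drop the two files in place.  The sketch below (Python, Windows paths) simply writes the settings shown above; it is an illustration rather than a tested deployment tool, and the ACLs would still need to be applied separately:
import os
from pathlib import Path

deploy_dir = Path(os.environ["WINDIR"]) / "Sun" / "Java" / "Deployment"
deploy_dir.mkdir(parents=True, exist_ok=True)

properties = deploy_dir / "deployment.properties"
properties.write_text(
    "deployment.webjava.enabled=false\n"
    "deployment.webjava.enabled.locked\n"
)

config = deploy_dir / "deployment.config"
config.write_text(
    # Note the escaped colon required by the properties file format.
    "deployment.system.config=file\\:" + properties.as_posix() + "\n"
    "deployment.system.config.mandatory=true\n"
)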

Browser Specific
Internet Explorer (IE)

In response to the threat posed by Java in the browser Microsoft released a FixIt solution that disables Java in IE - http://support.microsoft.com/kb/2751647

The executable disables Java in the browser and appears as “IE Java Block 32bit Shim” in the list of installed programs.  Presumably the FixIt installs a 64bit equivalent on 64bit versions of IE.

We can detect if the Microsoft FixIt is installed by confirming that the following registry key exists:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{01cf069a-f8a1-4067-adc4-5ef7e922733c}.sdb
If the Windows OS is 64-bit and 32-bit IE is installed then the key will be under:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall
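
A sketch of how that check might be automated (Python on Windows, using the key names above and checking both the native and Wow6432Node locations):
import winreg

FIXIT_KEY = r"{01cf069a-f8a1-4067-adc4-5ef7e922733c}.sdb"
UNINSTALL_PATHS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

def fixit_installed():
    for path in UNINSTALL_PATHS:
        try:
            # Raises OSError if the key does not exist.
            winreg.CloseKey(winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path + "\\" + FIXIT_KEY))
            return True
        except OSError:
            continue
    return False

print("IE Java Block shim installed:", fixit_installed())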

There is some debate about the effectiveness of this FixIt.  Advice from CERT states that it doesn't completely prevent RIAs from being invoked by IE.  Alternative methods to disable Java exist that may be more reliable, but they seem to involve updating a list of blocked ActiveX controls with every release of Java.  You'll have to decide on the risk yourself, but on Windows perhaps uninstalling Java or using the Java deployment options are the safest options.

Firefox

Firefox has a configuration option that controls whether the plugin is enabled.  The option can be viewed by going to about:config and is called “plugin.state.java”.  It can take one of three values:
  • 0 = disabled
  • 1 = click to play
  • 2 = enabled
The value of this option is stored in a file called “prefs.js” in the user’s Firefox profile directory.  If a user was to set this option via about:config their selection would be saved in this file.

It’s preferable to enforce the disabling of the plugin for all users (and all user profiles).  Firefox supports a ‘lock’ on the preference so that the user cannot (easily) change the option and enable the plugin (see Locking preferences).  Create a file called “mozilla.cfg” in the installation directory of Firefox with the entry:
//
lockPref("plugin.state.java", 0);

Firefox needs to be instructed to load this lock file and this is done by creating a “local-settings.js” file in the “defaults\pref” sub-directory of the Firefox installation directory with:
pref("general.config.obscure_value", 0);
pref("general.config.filename", "mozilla.cfg");

With these configuration files and options in place the option plugin.state.java in about:config will be italicized and greyed out.

To prevent a user from changing “mozilla.cfg” and “local-settings.js” these files should be ACL’ed so that the user cannot edit them (but can read them).

One thing I am not sure of is in what version of Firefox the plugin.state.java configuration option was introduced, or even the ability to lock configuration options.  If your Enterprise is using a very old version of Firefox then this approach might not be available.



Chrome
This method applies to Google Chrome and Chromium (with minor differences).  Chrome supports configuration via Policy  (see chrome://policy) and the policy value we want to set is called DisabledPlugins (see list of plugins via chrome://plugins/).

The DisabledPlugins policy can be overridden by the EnabledPlugins and DisabledPluginsExceptions policies though, so these policies also need to be set to ensure the Java plugin is disabled.

Windows
Chrome policy can be enforced via Group Policy.  Google provides ADM/ADMX templates.  The GPO should be configured at the "Computer Configuration" level:
  • To set the DisabledPlugins policy, the policy entry “Specify a list of disabled plug-ins” should be set to “Enabled” and an entry of “Java(TM)” should be added.
  • To set the EnabledPlugins policy, the policy entry “Specify a list of enabled plug-ins” should be set to a random value. (1)
  • To set the DisabledPluginsExceptions policy, the policy entry “Specify a list of plug-ins that the user can enable or disable” should be set to a random value. (1)

Chrome historically supported policy configuration through the registry but this ability was removed in version 28.  The current policy is still written to the registry though, so this provides a mechanism to verify the policy exists on a machine.
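
A sketch of such a verification (Python on Windows).  The HKLM\SOFTWARE\Policies\Google\Chrome location is the usual place Group Policy managed Chrome policies end up, but treat the exact path as an assumption to confirm for your Chrome version:
import winreg

POLICY_KEY = r"SOFTWARE\Policies\Google\Chrome\DisabledPlugins"

def java_plugin_disabled_by_policy():
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY) as key:
            index = 0
            while True:
                try:
                    _, value, _ = winreg.EnumValue(key, index)  # list entries are numbered values
                except OSError:
                    return False
                if "Java" in str(value):
                    return True
                index += 1
    except OSError:
        return False

print("Java disabled via Chrome policy:", java_plugin_disabled_by_policy())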

Non-Windows
Chrome also supports configuration via policy on Non-Windows machines via a local JSON file containing the policy configuration.  The policy files live under /etc/opt/chrome for Google Chrome (and /etc/chromium for Chromium).

Two types of policies exist; "managed", which are mandatory policies, and "recommended" which are not mandatory.  These policies are respectively located under:
/etc/opt/chrome/policies/managed/
/etc/opt/chrome/policies/recommended/

To disable the Java plugin create a file “policy.json” in /etc/opt/chrome/policies/managed/ with the contents:
{
  "DisabledPlugins": ["Java(TM)"],
  "DisabledPluginsExceptions": [" "],
  "EnabledPlugins": [" "]
}

The policy file should have ACLs applied to it so that users cannot edit (but can read) it.

(1) So why do we assign the EnabledPlugins and DisabledPluginsExceptions random values (or a space character)?  Turns out if we set these policies to "Not configured" or "Disabled" then a user can enable Java in the browser by configuring these policies in the "User Configuration" policy section.  If this seems like a security issue to you, well it seemed like one to me as well, that's why I opened a bug with Google about it.  They didn't really agree, but I concede it's complicated (and potentially it's a bug in Microsoft's Group Policy functionality).

Tuesday 4 March 2014

Running Java applications without installing Java

Although it seems to be a dying breed, there are still some Java client-side applications that you may want or need to use e.g. the excellent Burp Suite.  But this means having Java installed on your workstation - cue dramatic music.

This is less like handing over the keys to your computer than it used to be.  Java can be configured so that Java content is disabled in all browsers (at least from version 7 update 10 and higher).  But if the application requires an old version, or you just have little faith in the Swiss cheese that is Java's security controls, then it would be nice to isolate your Java installation.

There are several ways to isolate Java, from hardening its configuration to only running it in a virtual machine. What I'm going to suggest here is another option, using Java without installing Java.

This approach is already really common with server-side applications.  It is common for server-side applications to install themselves with their own copy of the Java runtime, as this means there is no danger of compatibility issues with the current shared install of Java on the server or any updates to it.  So why not take the same approach for client-side applications, and benefit from isolating Java to the one application that needs it.

So here's what to do (using Windows notation, but it's similar for non-Windows):
  1. Obtain the offline Java installation .exe
  2. Follow these instructions to get access to the Data1.cab file
  3. Unzip the Data1.cab to get the core.zip file
  4. Unzip the core.zip file to get the JRE files
  5. In the .\lib folder there are several .pack files, you need to unpack these to their .jar equivalents using unpack200.exe (in the .\bin folder) e.g. "..\bin\unpack200.exe -r rt.pack rt.jar".  Hat tip.
  6. (Optional) You can remove several files as they are usually unnecessary (importantly the browser plugin is one of them).  This makes the total size smaller, but not by much.
  7. Run your client-side Java application by directly invoking .\bin\java.exe
The JRE files come in at about 100Mb (for version 7 update 51) so I wouldn't necessarily be doing this separately for lots of applications.
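
Step 5 can be scripted; here is a minimal sketch (Python, run from the extracted JRE directory) that converts every .pack file under lib to its .jar equivalent using the bundled unpack200.exe:
import subprocess
from pathlib import Path

jre = Path(".")                       # the extracted JRE directory
unpack = jre / "bin" / "unpack200.exe"

for pack in (jre / "lib").rglob("*.pack"):
    jar = pack.with_suffix(".jar")
    # -r removes the .pack file after a successful unpack
    subprocess.run([str(unpack), "-r", str(pack), str(jar)], check=True)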

I got this working for Burp Suite, but I didn't regression test all of its functionality, so whilst the method seems to work, I can't give any guarantees it does. YMMV.

Dave

Thursday 27 February 2014

Java by numbers

A lot has been written about how bad Java is from a security perspective.  Specifically, if you have Java enabled in your browser, well, you're going to get pwned (unless you have some compensating controls). Assuming your business requires Java be enabled in browsers for certain applications to work, then whilst it's not the only compensating control that should be used, having the latest version installed is a good start.

I've been looking at Java recently and wanted to know more precisely how much risk it posed.  So I pulled together the number of vulnerabilities related to applets and web start applications (i.e. Rich Internet Applications) that have been discovered over the last couple years.  You can see the numbers on my page Java Vulnerabilities.  It makes for scary reading.

Friday 21 February 2014

Is contextual output escaping too hard?

So whilst writing my post on Unobtrusive Javascript and JSON Data Islands I got to thinking about how successful contextual output escaping (or encoding) is as a tactic to mitigate XSS vulnerabilities.

I think there is an argument that it hasn't been very successful, and I don't think it is too hard to imagine the reasons for this:
  • As a mitigation to XSS it involves a task that is essentially difficult.  Knowing the context (HTML, HTML attribute, JavaScript, CSS, etc.) where you are binding data into a template either involves writing an HTML parser (to do it properly), or having developers manually specify the context for each binding site.  That is not a solution as much as it is a whole new problem.
  • There is a reason why there is no "contextual SQL escaping" mitigation promoted by the security community, and why parameterised queries are the mandated mitigation to SQL injection.  It's because history has shown us that people are terrible at combining data and code, and that the best option is to let the component that executes the code (and hence has the best code parser) do the combining for us.  XSS is the same problem, combining data with code (HTML), which suggests that the browser should be combining the data and code(1) and we shouldn't be suggesting people do this themselves.
Of course it's all very well saying that people should architect their web applications to have the browser combine code and data, but developing web applications is a complex problem and there are lots of considerations to take into account when architecting the application, not just security.  My impression is that the main considerations are performance and convenience.

Performance is tough to comment on, and depends on many things, but potentially moving more work to the client-side could improve server performance (you could argue that client-side performance might suffer, and thus user experience, but JavaScript performance only improves).  Though there is no doubt that some businesses are very sensitive to page load times.

Convenience isn't a good reason in my opinion.  It might not be convenient now, but that doesn't mean it can't be made convenient.  If a web application framework was designed from the ground up to let the browser combine code and data then it would likely (eventually) do it in a way that was convenient.  Of course relying on frameworks means getting popular frameworks(2) to change, and I think the security community has a role to play there.

I will say that I've spent a good deal of time telling businesses to use contextual output escaping, and it is always difficult to challenge a long-held belief, but there is evidence and reasoning to suggest that parameterising HTML is the better mitigation.  The jury is still out in my mind, but I would be keen to hear other points of view.

(1) See also "Insane in the IFRAME - The case for client-side HTML sanitisation" (video - slides) from OWASP AppSecEU 2013.
(2) That's not to say there aren't some web application frameworks that have contextual output escaping built-in or support data binding on the client i.e. JavaScript templating engines (although I'm not sure the browser does the escaping), but I think (but am not an expert) that most don't.  There are a lot of web application frameworks out there.

Thursday 20 February 2014

Unobtrusive JavaScript and JSON Data Islands

I saw a tweet the other day from Jim Manico linking to http://www.educatedguesswork.org/2011/08/guest_post_adam_barth_on_three.html.

The advice is basically to have more or less static HTML and JavaScript, with all dynamic data coming from a JSON request.  Not bad advice.

Reading the article reminded me of a similar approach that I once proposed, using Unobtrusive JavaScript.  Unobtrusive JavaScript is an approach to developing a web application that clearly separates HTML from JavaScript from CSS.  Now if you are a developer then rest assured this is not an approach created by a security guy who prioritises security above all else; this is an approach suggested by people who actually know what they are talking about.  The Wikipedia page has the details, and interestingly it doesn't even mention the word 'security'.

So why is Unobtrusive JavaScript good for security?  Well, the best approach to preventing XSS is to use contextual output encoding, however the hard part of that is the 'contextual' bit.  It seems that figuring out on the server side how the data being bound to a template is going to be interpreted by the browser, as either HTML, JS or CSS, is often quite difficult.  But if all the HTML, JS and CSS reside in different files, then the context couldn't be easier to determine, which makes output encoding a much simpler task.  Additionally, this separation can be enforced by using Content Security Policy (CSP) so you don't get any naughty developers slipping in JS where they shouldn't.

Of course if you are going to be suggesting Unobtrusive JavaScript then I wouldn't even mention security, there are so many other benefits that make it a good idea that you probably don't want any security related ulterior motives to be an issue ;)

So there are obvious similarities to what Adam Barth suggested, but a difference in what I proposed was an alternative to using AJAX requests.  Instead of making a separate network request for the JSON data, another option is to put the JSON data inside the HTML, in a 'JSON data island' (sadly it seems someone else beat me to coining this phrase).  The idea comes from XML Data Islands, but obviously if the data is going to be used by JS it makes more sense for it to be JSON rather than XML.

The JSON data island could be implemented as a hidden <div> element or a <script> element with type="application/json".

The basic advantage is that it saves a network request.  Admittedly this makes more sense for HTML pages that wouldn't be cached, so its appropriateness does depend on how the web application is architected (in terms of the caching optimisations made).  The data island would also need its own output encoding: the server should JS encode the data going into the JSON and then HTML encode the entire data island contents.
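
A minimal sketch of that encoding order on the server side (Python; html.escape is a generic stand-in for whatever HTML encoder your framework provides, and the element id is arbitrary), suitable for the hidden <div> variant:
import html
import json

def render_data_island(data):
    payload = json.dumps(data)                     # 1. JS/JSON encode the dynamic data
    escaped = html.escape(payload)                 # 2. HTML encode the entire island contents
    return '<div id="page-data" hidden>' + escaped + '</div>'

# On the client, JSON.parse(element.textContent) recovers the data.
print(render_data_island({"user": "alice", "msg": "<script>alert(1)</script>"}))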

Sadly my proposal represented too much of a change to the web application I suggested it for, so if you like the idea, I hope you have better luck than me.

Sunday 2 February 2014

Risky Business

There are a lot of metaphors for security, which may mean we are in a technical field or it may mean that security folk often struggle to explain their point of view.  One I have been thinking about recently is the health of a human body being a metaphor for the security of a business.  I am not the first to use this analogy, see this paper (the section "Cyber Wellness") or this blog entry.

The part of the metaphor that I want to highlight is that you are responsible for the health of your own body; i.e. if you are responsible for a business, or part of one, then you get to decide how to look after that body.  Now there are a lot of people out there trying to tell you the best way to look after yourself, whether it be a product to buy or a lifestyle to adopt.  Some of this will be good advice and some of it will be bad.  Likely if you go to a doctor you will believe the doctor's advice, and they are likely to give you good advice on how to be healthy, or at least become healthier.

But not everyone follows their doctor's advice all the time.  Often there are circumstances that don't allow you to put your health first, whether you want to or not - sitting all day in front of a computer for years on end is a risk to your health, but if it's part of your job then you do it; flying in a plane is a risk that most people are prepared to take; driving a car is the same.  Sometimes you simply choose the unhealthy option because the opportunity cost of not doing it is much greater - flying to space is very risky but try convincing an astronaut to stop wanting to do it.  And there are of course all the small unhealthy things we do out of necessity (e.g. crossing the street), convenience (e.g. fast food) or laziness (e.g. not brushing your teeth 3 times a day).

A life lived without doing some unhealthy things, without taking some risk, is a life not lived at all.

It is similar in business: a business that takes no risk is a very risky business.  In this sense, all businesses are in the business of taking risk.

So the job of the security professional is to help the business understand the risk it is taking and to help the business minimise that risk, if that is what the business wants to do.  Of course the key concern here is getting the business to understand the risk (using security metaphors, for instance!), and ensuring the business is not taking risks it doesn't understand.  Doing so is not a technical skill but a business and communication skill, which should remind security people that technical abilities alone are not the Holy Grail of being effective at our jobs.