Thursday, 20 October 2011

Fake it 'til you break it

So my Dad and I have the same initials (and obviously the same last name).  It was never really an issue until I got older and started receiving mail of my own.  It was common for me to open mail that I thought was addressed to me but was actually for him, and vice versa.  Fortunately it was usually fairly clear from the context who the mail was actually for, but the fact remained that it was impossible to tell just by looking at the outside.

The same problem can occur when you use cryptography to protect information.  Encryption or signatures are often used to protect data so that an application can be confident that data that leaves its control can't be altered.  However, the keys used for this protection may very well be used by other parts of an application, for a very similar purpose.  So what happens if someone replays some protected data from one part of an application to a different part of that application?  After all if the data is protected in the same way it will pass any signature checks and decrypt successfully.

Initially it might be tempting to say, "use different keys!".  That would work, but it could also drastically increase the number of keys you have to manage.  Hopefully you know that the real problem with using cryptography is usually how to securely manage the keys.  For this reason keys are often re-used.

So what could go wrong if an attacker uses data that is impersonating other data in an attack?  Who knows?!  It clearly is totally dependent on the data that is protected and the implications of it being trusted by the wrong application.

There is a simple solution though: wherever your part of an application uses the data, provide a way of identifying that the data belongs to you.  This could be as simple as adding a unique fixed prefix to the data you protect, and ensuring that when you recover the data you confirm the prefix is correct.
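As a sketch of what that prefix check might look like (the `protect`/`verify` names, the key, and the context labels here are all illustrative, not from any particular library):

```python
import hmac, hashlib

def protect(key: bytes, context: bytes, data: bytes) -> bytes:
    # Prefix the data with a fixed, unique context label before MACing,
    # so a token minted for one part of the app fails verification elsewhere.
    payload = context + b"|" + data
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return tag + payload

def verify(key: bytes, context: bytes, token: bytes) -> bytes:
    tag, payload = token[:32], token[32:]
    if not hmac.compare_digest(tag, hmac.new(key, payload, hashlib.sha256).digest()):
        raise ValueError("bad signature")
    prefix = context + b"|"
    if not payload.startswith(prefix):
        raise ValueError("wrong context")  # valid MAC, but minted for another subsystem
    return payload[len(prefix):]

key = b"shared-application-key"
token = protect(key, b"password-reset", b"user=42")
assert verify(key, b"password-reset", token) == b"user=42"
# Replaying the same token against another part of the app fails the prefix check:
try:
    verify(key, b"session-cookie", token)
except ValueError:
    pass
```

The same idea works for encrypted data: prepend the context label to the plaintext before encrypting, and check it after decrypting.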

It's a bit of an odd attack, but I can guarantee that any attacker that suspects some data is encrypted will be injecting that encrypted data into any other place where encrypted data is being used.

Pick 'n' Mix Mitigations

I was just perusing the July/August 2011 IEEE Security & Privacy magazine and I came across an interesting article "Developer-Driven Threat Modeling: Lessons Learned in the Trenches" (pay-walled unless you are a member of the IEEE) by Danny Dhillon, that talks about EMC's experience implementing a threat modeling strategy in their organisation.

What I found particularly interesting was the section on how they mitigate risks that have been identified by their threat models.  From the article:
The software security field has matured over the past decade, and a wealth of information on how to mitigate many common issues is widely available—but the quality and consistency of that information varies.
And in reference to the approach EMC came up with:
The mitigation strategies include changes that developers can make during the design phase as well as downstream coding, documentation, and maintenance considerations. Where appropriate, the guidance includes sample code and references to recommended toolkits and frameworks. It also includes alternative mitigations along with their implications and when they should or shouldn’t be considered.
From the brief description of their approach it seems to me that it nicely complements the MMM ideas I have been espousing, which fundamentally are based on the same concerns (the immature mitigations that are readily available on the web).

They definitely do not have a "one size fits all" approach to their mitigations, and you could almost refer to it as Pick 'n' Mix (albeit a bit flippantly).  The point (and I'm reading between the lines here) is that they have different solutions for different situations, solutions tailored to their needs, and the solutions are not just technical but address a range of activities from the development life-cycle.

It's good to see a company taking a well-rounded approach to their mitigation strategies, let's hope more companies can learn from them.

Tuesday, 27 September 2011


I read an interesting article in the latest IEEE Computer magazine called "Security Vulnerabilities in the Same-Origin Policy: Implications and Alternatives" (sadly you have to pay for the article).  Actually the article itself was basically an overview, so the majority of it covered well known information, however one of the 'alternatives' mentioned was something I hadn't come across before so I thought I would share it.

Escudo (link to original paper) is a web browser protection model positioned as an alternative to the Same Origin Policy.  I thought it had some interesting ideas, and it's always good to read about alternatives as it tends to increase your understanding of a topic.  I have to say though that I thought their solution was not going to be particularly developer friendly, which may lead to more issues than it solves.

Have a read and make up your own mind.

Sunday, 24 July 2011

Password Sodium

So on food labels they often show the amount of Sodium in the food, which is useful, but less useful than if they showed the actual amount of Salt, which is what you need to watch out for (apparently the conversion is Salt = Sodium x 2.5).  Putting Sodium instead of Salt is a trick by manufacturers to make high in Salt products look like there is less Salt in the food than there actually is.

So I have been thinking about better ways to authenticate via passwords recently, which has nothing to do with food labels, but everything to do with hiding useful information.  I have been thinking about ways to make an authentication scheme secure even if an attacker gets access to the password hash.  So far the problem as I see it is that an attacker is always able to use a dictionary of passwords to try and get a match with a password hash.  We can make this computationally expensive, but we can't stop it without removing some vital piece of information.  I still believe there is scope for a creative solution here, but it occurred to me we could also transform the problem into something we are better at solving.

If we encrypt the password salt, then if the attacker gets hold of the password hashes they will be of no use whatsoever, because it is computationally infeasible to guess the salt.  I am going to call the encrypted password salt ... the password sodium.

This suggestion, as I mentioned, merely transforms the problem for the attacker from one of gaining access to the password hashes, to gaining access to the password hashes and the salt encryption key.  A practical example of this would be to say that instead of a SQL Injection vulnerability being enough to get the password hashes and begin cracking them, the attacker needs to compromise the machine as well (depending on where the encryption key is stored).

The hope here is that we have made the task more difficult for the attacker.  That hope depends on how and where we store our encryption keys, but that certainly is not a new problem and many companies have standard solutions.
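To make the idea concrete, here is a minimal sketch.  The `toy_encrypt` function is a stand-in for a real cipher (in practice you would use something like AES-GCM, with the master key held in an HSM or separate config store); all names and parameters are illustrative:

```python
import hashlib, hmac, os

def toy_encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Stand-in for a real cipher: XOR with an HMAC-derived keystream.
    # Illustrative only, NOT real crypto.
    stream = hmac.new(key, nonce, hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # an XOR stream cipher is its own inverse

def enroll(master_key: bytes, password: str) -> dict:
    salt, nonce = os.urandom(16), os.urandom(16)
    pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    sodium = toy_encrypt(master_key, nonce, salt)   # the "password sodium"
    # Note: the plaintext salt is NOT stored alongside the hash.
    return {"hash": pw_hash, "sodium": sodium, "nonce": nonce}

def check(master_key: bytes, record: dict, password: str) -> bool:
    salt = toy_decrypt(master_key, record["nonce"], record["sodium"])
    pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(pw_hash, record["hash"])

master_key = b"key-held-outside-the-database"
record = enroll(master_key, "correct horse")
assert check(master_key, record, "correct horse")
assert not check(master_key, record, "wrong guess")
```

An attacker with only the database contents cannot recompute any candidate hashes without first recovering the master key.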

Certainly with the numerous high-profile password hash disclosures that have happened over the last couple of months adding a little defense in depth by encrypting password salts should help protect people's passwords and minimise the damage of password hash disclosures.

Tuesday, 19 July 2011

Zero Knowledge Password schemes

So in my previous post I was lamenting that as it stands today, for the majority of the Internet, when I authenticate to a website I have to send them my password.  It's obvious that if that wasn't a requirement then this would be better.

This caused me to read up more on Zero-Knowledge Password authentication schemes, and I came across the Secure Remote Password (SRP) Protocol (very nicely summarised here).  However, in the security analysis (given on the linked page) of SRP there was the following property:

"If the host's password file is captured and the intruder learns the value of v, it should still not allow the intruder to impersonate the user without an expensive dictionary search."

To me, this is not good enough.  Or at least if I am giving my list of all criteria I would like my authentication protocol to meet, one of them would be that "even if the password file is captured an attacker cannot recover the password".  This sounds like an impossible dream, but I hope it is not. 

Initially I imagined a change to SRP that I thought might do the job.  If we had 2 generators for the group, g1 and g2 (so in the current scheme replace references to g with g1), and 2 passwords, P1 and P2, then we could construct v as:

  1. x1 = H(s1, P1)
  2. x2 = H(s2, P2)
  3. v = g1^x1 . g2^x2
The rest of the protocol could be minimally altered to accommodate this.  Hopefully you can see that an attacker with access to v, s1 and s2 could in fact hit upon multiple P1 and P2 values that create the same v value.  So even if the attacker was able to try all possible P1 and P2 values, they would not be able to tell which were the actual password values.
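Under the same assumptions, and with an unrealistically small toy group so the numbers are easy to follow (real SRP uses a large safe prime), the construction of v can be sketched as:

```python
import hashlib

# Toy group for illustration only.
p = 23
g1 = 5               # a generator of the group
g2 = pow(g1, 3, p)   # a second generator: g2 = g1^3 = 10 here

def H(salt: bytes, password: str) -> int:
    return int.from_bytes(hashlib.sha256(salt + password.encode()).digest(), "big")

def verifier(s1: bytes, P1: str, s2: bytes, P2: str) -> int:
    x1 = H(s1, P1) % (p - 1)
    x2 = H(s2, P2) % (p - 1)
    return (pow(g1, x1, p) * pow(g2, x2, p)) % p

# Many (x1, x2) pairs yield the same v: with g2 = g1^3, v depends only on
# x1 + 3*x2 (mod the group order), e.g. (4, 1) and (1, 2) both give exponent 7:
assert (pow(g1, 4, p) * pow(g2, 1, p)) % p == (pow(g1, 1, p) * pow(g2, 2, p)) % p
```

This is essentially a Pedersen-style commitment to the two exponents, which is why a single v value is consistent with many (x1, x2) pairs.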

But the suggestion has a drawback.  Well it has a couple.  Firstly it requires the use of 2 independent passwords.  Secondly, if an attacker got the password files for 2 different sites using the same g1 and g2 (and using the same group) then they could solve for P1 and P2 (from a list of candidates after a brute-force attack).  Lastly, my analysis is optimistic in saying that multiple solutions exist because the range of possible passwords is so much smaller than the range of possible x values.

Still, it's fun to play with these ideas, and you never know when you might hit on the right combination :)

Sunday, 17 July 2011

Passwords - the worst kept secret

So I'm writing some training material and I'm making a note that we don't use HTTP Authentication on the Internet anymore because it has loads of disadvantages.  Well, it is supposed to have them anyway; I really couldn't find much detail on exactly what those disadvantages are.  I mean, sure, if there is a MITM then with Basic Auth the MITM will get your password, and with Digest Auth a MITM can downgrade to Basic Auth, or perhaps mount a dictionary attack on your password.

Still, since the vast majority of web sites use forms-based authentication, what is the real difference (assuming you are using SSL/TLS)?  You are still sending your password over the inter-tubes.  Maybe I am missing something, I don't know.

To me though both Basic and Digest Auth had one advantage, the browser asked for my password and not the web page.  Why is that important?  Well there are a lot of spoofing attacks out there that try to fool users into entering their passwords into fake pages.

Entering passwords into fake pages is clearly a problem but really it isn't the fundamental problem.  The fundamental problem is that I am entering my password into a web page and that my password is being sent to the server at all.  Authentication is based on something you know, have or are, and so passwords fall into something you know, which means they should be a secret.  It's much easier to keep a secret if you never have to enter it into a web page and send it across the Internet and have it processed and stored (in some form) by someone else.

The 'what you know' requirement is merely proving that you know something, and while telling someone what you know does prove it, there are other ways to prove you know something without giving it away (see zero-knowledge password proof).

This does of course assume that a user and a site have agreed on a password already, and although I couldn't dig up much on schemes that don't have this requirement, I don't see why it shouldn't be possible.  A user could register f(password) (a function of their password) and then authenticate at any time by providing g(password), where the site can convert f to g.  Note, I say possible, not easy; I haven't really given this much thought.

If the browser could be relied upon to do this securely on behalf of the user then it would eliminate many authentication concerns plaguing the Internet today, such as: spoofing authentication pages, MITM authentication attacks, server-side password or password-hash disclosure, and user password re-use.

Seems like it is definitely worth some more investigation ...

Monday, 4 July 2011

SQL Injection MMM

So I introduced the concept of a Mitigation Maturity Model (MMM) in a previous post, and I created a model for Cross-Site Scripting.  In this post I would like to do the same thing for SQL Injection.

SQL Injection is both an easy and a difficult attack to create a maturity model for; on the one hand the mitigations for SQL Injection are well known, on the other hand ranking them is less straightforward.  Nevertheless, let's have a go:
  1. None.  No specific protection.  Input validation may incidentally block some attacks, but no effort is made to specifically protect against SQL Injection.
  2. Basic.  Strip SQL syntax characters.  Depending on the string delimiter being used for SQL, either single or double-quote characters are stripped.
  3. Intermediate.  Parameterised Queries (a.k.a. Prepared Statements).  All SQL commands are made using parameterised queries.  Any stored procedures are rigorously reviewed for potential SQL Injection flaws.
  4. Advanced.  Object-Relational Mappings (ORM).  All SQL commands are replaced with ORM calls.  Any stored procedures are rigorously reviewed for potential SQL Injection flaws.
So the model I have provided here contains no surprises, but I think I have to justify why I rank ORM as a more mature mitigation than parameterised queries.  I actually think both are perfectly valid mitigations in their own right, and I struggled to decide if I should choose one as more mature than the other.  In the end what tipped ORM ahead was simply that parameterised queries still take a string (the SQL query with placeholders), and short of reviewing all these strings, there is no way to know whether a string was dynamically constructed.  For this reason I believe you can have more confidence in ORM, since there is no way for a developer to abuse it (short of using direct queries in the ORM API, but these can easily be audited).  I doubt everyone will agree with my logic on this, but I think it's a reasonable argument.
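To illustrate the difference between concatenating SQL and using a parameterised query, here is a minimal sketch using Python's built-in sqlite3 module (the table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Attacker-controlled input that would break out of a concatenated query:
name = "alice' OR '1'='1"

# Parameterised query: the driver sends the value separately from the SQL,
# so the quote characters are data, not syntax.
safe_rows = conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()
assert safe_rows == []   # the malicious string matches no user

# The unsafe, string-built equivalent matches everything:
unsafe_sql = "SELECT role FROM users WHERE name = '%s'" % name
assert conn.execute(unsafe_sql).fetchall() == [("admin",)]
```

Note that the parameterised version still takes a query string with placeholders, which is exactly the residual review burden described above.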

So SQL Injection is probably one of the easier attacks to put in a MMM since the mitigations are so well understood.  The real test of how useful the idea of a MMM will be is determining if it can be applied to the majority of high-risk attacks applications must face.

Sunday, 19 June 2011

XSS (Part 4) - A New Hope

So this is my last post in my series on XSS and I hope to follow up on my previous 3 posts by offering more detail on how you might implement a solution to XSS by progressing through the stages of the XSS Mitigation Maturity Model (MMM).

There isn't much point in offering any advice for the None and Basic levels; there is nothing to do for the None level, and the Basic level is what people currently do if they follow the advice online.

The purpose of the Intermediate level is to automate output encoding.  The advantage over the Basic level is that it takes the burden off developers to manually apply the correct output encoding everywhere in the application.  Relying on developers to do this perfectly is unreasonable and, to be honest, not really where you want them focusing their efforts.

If you are fortunate you may be using a web development framework that offers automated contextual output encoding; however, I am not familiar with enough different frameworks to know how broadly supported such a capability is, so I'm going to assume not very.  Implementing this yourself involves extending the part of your framework that writes data and code to web pages so that it understands the context in which the data or code is being written.  Typically this will have mixed results: it will work well for simple pages, but not so well for complex pages or pages where a combination of data and code is written simultaneously.  This limitation is acceptable though, as we can use it to isolate those areas where we cannot use automated contextual output encoding and make those areas the focus of XSS testing.  For testing purposes your security team should be able to come up with a suitable range of tests or tools that cover the basic XSS attack vectors.
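A bare-bones sketch of what such a contextual encoder might look like (the `encode_for` name and the two contexts shown are illustrative; a real framework extension would track the context automatically as it renders each page and would cover many more contexts):

```python
import html

def encode_for(context: str, value: str) -> str:
    # Pick the encoding from where the value is about to be written.
    if context == "html":
        return html.escape(value, quote=True)
    if context == "js_string":
        # Escape for use inside a quoted Javascript string literal.
        return (value.replace("\\", "\\\\").replace('"', '\\"')
                     .replace("'", "\\'")
                     .replace("<", "\\u003c").replace(">", "\\u003e"))
    # Failing closed is the important design decision: unknown context
    # means we refuse to emit the data unencoded.
    raise ValueError("unknown context: refuse to emit unencoded data")

payload = '<script>alert(1)</script>'
assert encode_for("html", payload) == "&lt;script&gt;alert(1)&lt;/script&gt;"
assert "<" not in encode_for("js_string", payload)
```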

Whereas the Intermediate maturity level can be applied to existing code with a manageable amount of effort, the Advanced maturity level implies creating a web page that is designed to be secure against XSS.  It's important to understand that the design of a web page should never just be about mitigating XSS, that wouldn't be very practical, but we can design pages so that speed, size and functionality are not the only considerations; we also consider security.

At the beginning of my first post in this series I talked about Firefox 4 as offering exciting new defences against XSS; I was referring to its implementation of the proposed Content Security Policy (CSP) standard.  CSP allows a browser to restrict the scripts that run in a page to a white list provided by the server (as an HTTP header).

The CSP does impose constraints though: namely, the page is not allowed any in-line script; instead all script must be sourced via HTML script tags.  Whilst this may seem like an unrealistic burden, it is just a design criterion, and in fact it is one that is satisfied by following the design guidelines of Unobtrusive Javascript.  The Unobtrusive Javascript approach offers numerous advantages (including making it easy to optimise for speed and size, and separating functionality from presentation), but it does need to be designed into the construction of web pages and be understood by the developers who implement it.
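As a sketch, a server might emit the policy like this (shown as a minimal WSGI app; the handler and paths are made up, and note that Firefox 4 actually shipped this as the experimental `X-Content-Security-Policy` header, whereas the standardised name is used below):

```python
def app(environ, start_response):
    # Whitelist scripts to our own origin; no in-line script will execute,
    # which is exactly the constraint that forces the Unobtrusive
    # Javascript style.
    headers = [
        ("Content-Type", "text/html"),
        ("Content-Security-Policy", "default-src 'self'; script-src 'self'"),
    ]
    start_response("200 OK", headers)
    # All script arrives via a script tag; none is written in-line.
    return [b'<html><head><script src="/static/app.js"></script></head>'
            b'<body>Hello</body></html>']
```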

While implementing CSP in your web app will be invaluable in defending against XSS, contextual output encoding still needs to be applied.  However, the constraints of CSP and Unobtrusive Javascript make accurate automated output encoding of data and code much easier, as the context is trivial to determine: all Javascript will be in Javascript files, all CSS will be in CSS files, and CSP will stop any Javascript or CSS from being written to HTML files.  To be fair, it's Unobtrusive Javascript that helps the most, as even without CSP, contextually applying output encoding will eliminate most XSS (except perhaps in HTML attributes and uses of the "javascript:" protocol handler).

As with the Intermediate level, any exceptions to designing security into a web page need to be rigorously tested.

So there you have it.  It wasn't that painful was it?  Clearly I have only touched upon the topics involved and you would have to do some research yourself in order to flesh out a detailed technical solution at each maturity level.  But hopefully this post has given you some ideas about where to start.

Saturday, 11 June 2011

XSS (Part 3) - XSS defences grow up.

My previous posts laid out the technical solution to XSS but raised the issue that knowing the technical solution isn't enough to remove the XSS vulnerabilities in your application.  The purpose of this post is to sound out some ideas on a systematic approach to mitigating XSS.

When it comes to integrating security into a larger business process we fortunately have some prior art to draw on, namely the several Security Development Lifecycle frameworks that exist today, such as Microsoft's SDL, BSIMM, etc.  These frameworks lay out a strategy and well defined goals that can be integrated into your software development process.  Some of them also use a maturity model so that security can be integrated in stages, rather than trying to shift the mountain that is your 'current practice' all in one go.

I really like this approach.  I think this approach can be applied to mitigating threats as well.  Even better, I'm going to give it a catchy name, a Mitigation Maturity Model (MMM).  Alliteration fans go wild.

The reason why I think a maturity approach works well for mitigations (admittedly not all of them), is that it is a business reality that the ideal technical solution cannot magically be applied to any large application.

To make this more concrete I have come up with an XSS MMM.
  1. (None) - No specific mitigation.  Input validation may prevent some XSS.
  2. (Basic) - Manual mitigation.  Specific output encoding is being applied on a 'best efforts' basis.  Developers have been trained in the problem and the solution and perhaps an encoding library has been developed to simplify the process.  There is no testing for XSS.
  3. (Intermediate) - Automated mitigation.  Automatic contextual output encoding is being done based on a 'best effort' at detecting where code/data is being written to HTML/JS/CSS.  There are likely some exceptions to this in certain places where the complexity makes automating the process difficult, but most pages get the benefit.  Developers receive mandatory training on the dangers.  Testing is focused on those areas that do not get automated encoding.
  4. (Advanced) - Mitigation by design.  Rigorous automated contextual output encoding is applied, made possible by the complete separation of HTML from JS from CSS.  A handful of exceptions exist but these are rigorously tested.
As much as it pains me to say it, the security community could probably come up with a better XSS MMM than I have.  To be honest I hope they do.  There are some purposeful omissions as well; things like penetration testing haven't been explicitly called out, since that should be part of a larger framework.  I have also been vague from a technical point of view and focused more on 'what' needs to be done than 'how' you would actually do it.  The reason for that is simply that there is unlikely to be one solution at each maturity level that would suit everyone.  However, in my next post I hope to provide some concrete technical solutions at each maturity level that should be applicable to many situations.

I also believe that the MMM can actually be applied separately to existing code and new code; they needn't be at the same level.  It makes sense that new code should aim for a higher maturity level than existing code.  The effort to mature existing code can be considerable, and it's always going to be a business decision to spend time refactoring old code.

Finally, one last parting shot at the XSS mitigation advice out on the Internet today.  If we look at the XSS MMM as I have laid it out, at what maturity level would the standard advice of contextual output encoding sit?  Basic.  It's no wonder then that XSS is still such a massive problem on the Internet if that is the maturity level of the advice the security community is offering.

Saturday, 4 June 2011

XSS (Part 2) - Down the rabbit hole

So my last post was all about how the common advice on solving XSS is to use input validation and output encoding.  Then I pulled the rug from underneath you and said I disagreed.  Does my arrogance know no limits!

Well, perhaps.  But I think I have an argument here.  Everyone's situation is different, but let's imagine a scenario where you have a large web application with numerous developers, and you are forced by PCI compliance to (sorry, nobly decide to) rid your application of XSS.

Following the standard advice would involve sending an email to the development team telling them to use contextual output encoding (I'll assume input validation is already in place).  You may provide the technical details on how to do this and even a library for them to use.  You then relax back in your chair with a certain satisfaction that is akin to saving the world from some great evil.

Evil, on the other hand, is not so easily defeated.  The development team gets back to you with some questions to clarify a couple of points:
 - do they have to output encode everything in all existing pages?
 - who is going to tell the business of the delays this will cause and/or pay for all the extra work?
 - who is going to do the regression testing on the changes to existing pages?
 - how do we test?
 - will developers get any training? (some of the new guys and graduates don't understand anything about security)
 - what will happen if we don't do output encoding and just ignore you?
 - do you really expect, with all the other pressures we have, for us to get this 100% right, all the time on every page?

Um, err ... Google!  Help me Google!  ... Google?  Didn't I just do some epic research on Google about XSS?  Why were none of these issues raised?

These issues weren't raised because output encoding is the correct technical solution to XSS; it just isn't the real problem you have to solve when you need to mitigate XSS.

A solution like this, like many technical solutions, needs to be delivered by introducing process into your business.  This is where the security community has really let down the program managers, technical leads and developers of the world.  I'll limit that criticism to XSS for now, although I have no doubt it applies to numerous other problems.

My next post will contain some suggestions to solve XSS through process and not (just) technology.

Thursday, 26 May 2011

XSS (Part 1) - Reconnaissance is rarely wasted

With the release of Firefox 4.0 there has been a giant step forward in the ability to defend against XSS attacks.  Whilst I will discuss this in more detail in a future post, I thought it would be interesting to take another look at the received wisdom for mitigating XSS attacks that exists on the Internet today.

So if I was to put on the shoes of someone who was taking their first steps at finding a solution to XSS in my web application where would I start?  Google, obviously.  And what is the first hit in Google, Wikipedia, obviously.

So mitigation numero uno for XSS in Wikipedia is "Contextual Output Encoding/Escaping of String Input".  The other recommendations include input validation and then some fairly limited advice involving cookies, disabling scripts and emerging technologies.

It seems sensible to verify some of these mitigation suggestions via other sources.  Back to Google.  The next relevant result asks how vendors should protect themselves: "This is a simple answer. Never trust user input and always filter metacharacters."

Next up is OWASP, which I believe is considered by many, and quite rightly so, to be the de facto standard source of web application security information.  They have a cheat sheet that recommends output encoding (albeit in several different ways depending on context) and input validation.

Although there are a lot more results on Google, to cover our asses (I mean, be on the safe side) we should look beyond Google as well.  Where else can we look?  Well, what about books?  Off to Amazon.  Topping the list is "XSS Attacks: Cross Site Scripting Exploits and Defense", and it is authored by some fairly well known security folk, so that gives us some confidence.  Their advice boils down to input validation and output encoding.  I have to say I think it was a bold move to use the word 'Defense' in the title of the book and then dedicate a whopping 14 pages out of 400 to it, only half of which contain advice for web application developers.  It doesn't really fill me with confidence that they acknowledge it's a difficult problem and then only manage to squeeze out 7 pages discussing solutions.

Another favourite source of web application security information is "The Web Application Hacker's Handbook".  That too suggests input validation and output encoding, in no less than 5 pages.

Clearly you could spend a lot more time looking at solutions, but what I have described represents the basic standard line you find most everywhere.  There is of course some really poor advice out there as well, but that comes as no surprise.

It might seem that the purpose of this post is to say that input validation and output encoding mitigate XSS; but I actually disagree with that.

To me, saying input validation and output encoding is the solution to XSS is like saying feeding people is the solution to world hunger.  Of course feeding people would stop them from being hungry, but that's not the real problem, the real problem is how do you go from a world full of hungry people to food in peoples' mouths.  The problem is one of organisation and process.

This will be the topic of my next post.

Wednesday, 30 March 2011

The ghost of password future

I was reading about Pico and it raised an excellent point regarding the recommended length of passwords that got me thinking about the long term feasibility of passwords.  For as long as I have been in the security industry (just over a decade) an 8 character password (involving 3 character sets) has been considered, in rough terms, 'good enough'.  The reason that 8 characters is 'good enough' is if you do the math on how long it would take to brute force then the resulting time it would take is 'long enough'.

So what about Moore's Law?  Moore's Law says that we will double the number of computations every 18 months (the actual law is transistor count every 2 years), and these days the law relies on multiple processor cores to hold true (which is fine for this discussion, since brute-forcing passwords can be done in parallel).

So according to Moore's Law, we need to increase the length of passwords by 1 bit every 18 months to keep the time it would take to brute force a password constant.  Unfortunately we use 8-bit bytes to represent characters in passwords (actually only 7 bits are used).  So this means every 10.5 years (7 bits x 1.5 years), call it a decade, we need to add another character to our passwords in order to protect them from brute-forcing.

So in 2050 the recommended password length will be 13 characters.
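That back-of-the-envelope arithmetic can be checked with a quick calculation.  The baseline year and character count below are assumptions; depending on where you anchor the baseline and how you round, the 2050 figure lands at around 12 or 13 characters:

```python
def recommended_length(year: int, baseline_year: int = 2000,
                       baseline_chars: int = 8) -> int:
    # Attackers gain ~1 bit of brute-force capacity every 18 months;
    # at ~7 usable bits per character that is one extra character
    # roughly every 10.5 years.
    extra_bits = (year - baseline_year) / 1.5
    return baseline_chars + int(extra_bits // 7)

assert recommended_length(2000) == 8
assert recommended_length(2050) == 12   # ~13 with a slightly earlier baseline
```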

A quick Google did not reveal a definitive answer on the length of password that people can typically remember, but I don't think it's a stretch to go with: shorter is easier, longer is harder.

So what does the future hold for passwords?  It looks pretty grim.  The length of passwords will need to go up, but our ability to remember long strings of characters is unlikely to change.  People will do what they always do and find a way to satisfy the password length requirement by creating a password that is easy to remember but which does not necessarily gain any security benefit from being longer.

[I have glossed over many of the technical details of brute-forcing, Moore's Law, processor power, character sets, etc. because I wanted to focus the point of the post on the fact that password lengths seemingly must get longer.]

Saturday, 12 March 2011

Password changing needs to change

Companies tell people they need to change their passwords but I think companies are the ones who need to change. 

In many enterprises users must change their passwords every 3 months or so.  When was the last time a website asked you to change your password?  What about your banking website, how often do they tell you to change your password?  It's funny isn't it, there seems to be a real double standard about changing passwords.

So why do companies tell people to change their passwords?  Well, the thinking is that if you change your password regularly then, if it has been exposed at some stage, you have minimised the opportunity to brute force it, and if it has been cracked then you have minimised the opportunity to use it.  You can't really argue with the logic; those are sensible reasons to change your password.

What you can argue with is why the burden is on users to mitigate the risks of their password being exposed.  The answer is clearly that we don't have any other good mitigations for those risks besides users changing passwords.  Hold on ... we just established that websites don't make users change their passwords, so websites are either mitigating the risk some other way or accepting it.  So who is right, the companies or the websites?

Well, as much as I would like to argue the merits of the security of either approach, or have a detailed discussion of risk as a function of the assets the passwords are protecting, I think I would be missing the point.  Websites don't ask users to change passwords because it's a usability issue: it would put too much burden on the user, and that might drive the user to a competitor's website.  Companies have a captive audience, and they aren't trying to make users happy, they are trying to secure business assets.

For some people this might be a perfectly reasonable justification, and clearly it is not without merit, but all decisions have consequences.  One of the consequences, and this is pretty common knowledge, is that when users have to change passwords they often choose related passwords, marginally different from their old one, so it is easy for them to remember.  The irony is that this circumvents the mitigation of changing passwords, as changing the password doesn't require the attacker to start from scratch again.

So what's the solution?  Well clearly there isn't a well known one otherwise more companies would be using it.  I think there is a lot that companies can learn from the web experience though:
  • Get users to use a local password manager.  Since their password never leaves their machine the risk of exposure is substantially minimised, and the enterprise can still enforce a rule of changing passwords.  Moreover the quality of the passwords used can be increased dramatically.  I'm not up to speed on password managers for the OS, but if they don't exist then someone out there should create the market for them.
  • Use mechanisms to identify the fraudulent use of passwords.  I'm not sure how mature this market is for systems within an intranet, but web sites have mechanisms, so there is at least some framework for new solutions.
  • Use Single Sign-On.  I'd be the first to admit that this can create more security issues if not implemented correctly, but done well it definitely helps minimise risk and is the ultimate in convenience for users.
With most companies' intranets resembling a mini-Internet of web applications, it makes sense for companies to start incorporating some of the hard-fought lessons web sites have learnt when it comes to the balance between usability and security.

Sunday, 6 March 2011


I have just uploaded a project to Google Code that I have been working on.  It is called XDP, which stands for eXtended Data Protection.  It is like the Microsoft DPAPI, except better!

Why is it better?  Well the DPAPI only lets you encrypt data locally to yourself or the machine, whereas XDP allows you to encrypt data to another local machine user or group, or if you are part of a Domain, you can encrypt to other Domain users and Domain groups.  What makes XDP different to other technologies like PGP or GPG is that, like the DPAPI, it does not require the user to worry about key management, as it uses the DPAPI to manage keys.  It is the only non-certificate method I know of to encrypt data to other people in a Domain environment.

Check it out at

Feedback welcome!