How can a group of services determine each other's identity?

For the purpose of this discussion, the services are connected (indirectly) to each other over a network, and the services are capable of storing secrets, which can be securely distributed to them. A central service can also exist with which all other services can authenticate, i.e. have a trusted connection to.

If the central service establishes itself as the registration authority and issues credentials (e.g. a secret key, password, or private key), then we have a group where every service that registers can authenticate itself back to the central service in the future. That doesn't immediately allow services to identify themselves to other services, though.

We usually have additional constraints though, such as

- service-to-service identification should not have to involve the central service every time services communicate, but bootstrapping via trusted centralised information, or occasionally talking to the central component, is acceptable

- the overhead of proving identity should not be a burden

- direct connections between services don't exist, e.g. there are intermediate network devices

- replay attacks should not work

- stealing long-term credentials is not in our threat model, but stealing ephemeral credentials should not lead to permanent compromise of an identity

Secret Key pairs

A central service can hand out a key unique to every pair of services; that way a service can sign a message, and the receiving service will know it must have originated from its peer, because only the two of them have the key. For N services that requires managing O(N^2) keys. Keys would need an identifier and a lifetime, and would need to be rotated, all without causing any downtime or miscommunication. Creating signatures would be simple and the signatures would be small.
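As a sketch of what the pairwise-key scheme looks like in practice (the key material and message here are invented for illustration), signing and verifying is just an HMAC over the message:

```python
import hashlib
import hmac

# Illustrative only: the central service would issue one key per pair of
# services, so service A and service B share pair_key and nobody else has it.
pair_key = b"key-issued-for-the-A-B-pair"

def sign(message: bytes, key: bytes) -> str:
    # Small, cheap "signature": an HMAC-SHA256 tag over the message.
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str, key: bytes) -> bool:
    # Constant-time comparison avoids leaking the tag via timing.
    return hmac.compare_digest(sign(message, key), tag)

tag = sign(b"GET /orders from service-A", pair_key)
assert verify(b"GET /orders from service-A", tag, pair_key)
assert not verify(b"tampered message", tag, pair_key)
```

Note that a bare tag like this does nothing against replay; a timestamp and nonce would have to be included in the signed payload to satisfy the replay constraint above.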

PKI

Each component can get an asymmetric key pair and sign a message with its private key; the receiving component verifies the signature via the public key, which it obtains from a central trusted location, and then knows the message must have originated from the signer, because only they have the private key. For N services that requires managing O(N) key pairs. Keys would need an identifier and a lifetime, and would need to be rotated, all without causing any downtime or miscommunication. What is sent with each message (the signature, plus any certificate material needed to verify it) would be somewhat large, ~1-2 KB.
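The key-management difference between the two schemes is easy to make concrete: for N services, pairwise secret keys grow quadratically while per-service key pairs grow linearly.

```python
def pairwise_keys(n: int) -> int:
    # one shared secret per unordered pair of services: n choose 2
    return n * (n - 1) // 2

def pki_keys(n: int) -> int:
    # one asymmetric key pair per service
    return n

for n in (10, 100, 1000):
    print(f"{n} services: {pairwise_keys(n)} pairwise keys vs {pki_keys(n)} key pairs")
```

At 1000 services that is 499,500 pairwise keys to issue and rotate, versus 1,000 key pairs.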

Mutual-TLS

This implies that either TLS is terminated at each component, or, if it is terminated at some intermediate network point, that the client certificate is forwarded to the component (e.g. as an HTTP header). The same key-management constraints as for PKI apply.

OIDC/OAuth2

Each service (client) gets an ID Token for each other service (Relying Party) it needs to talk to; the ID Token's audience would be the Relying Party service. The lifetime of the ID Token controls how often each component needs to talk to the central component. For N services that requires managing O(N) secrets. The tokens would need to be regularly refreshed at the central service.
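As a sketch of the audience check that makes this work (this hand-rolls an HS256-style token with the standard library purely for illustration; the key and service names are invented, and a real deployment would use a proper JWT/OIDC library, typically with asymmetric signing):

```python
import base64
import hashlib
import hmac
import json
import time

# Illustrative: shared between the central service and the relying party.
CENTRAL_KEY = b"key-known-to-central-service-and-relying-party"

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(subject, audience, ttl=300):
    """Central service mints a short-lived token scoped to one relying party."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    claims = b64url(json.dumps({"sub": subject, "aud": audience,
                                "exp": int(time.time()) + ttl}).encode())
    signing_input = f"{header}.{claims}".encode()
    sig = b64url(hmac.new(CENTRAL_KEY, signing_input, hashlib.sha256).digest())
    return f"{header}.{claims}.{sig}"

def verify_token(token, expected_aud):
    """Relying party checks signature, expiry and that the token is for it."""
    header, claims, sig = token.split(".")
    signing_input = f"{header}.{claims}".encode()
    expected = b64url(hmac.new(CENTRAL_KEY, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(expected, sig):
        return None
    body = json.loads(base64.urlsafe_b64decode(claims + "=" * (-len(claims) % 4)))
    if body["aud"] != expected_aud or body["exp"] < time.time():
        return None  # minted for a different service, or expired
    return body["sub"]

token = issue_token("service-A", audience="service-B")
assert verify_token(token, "service-B") == "service-A"
assert verify_token(token, "service-C") is None  # wrong audience is rejected
```

The audience claim is what stops service B replaying A's token at service C: C rejects any token not minted for it.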

Kerberos

Kerberos was designed to solve exactly this problem, but it is difficult to understand and to get assurance of its security.

There are likely others I am missing.

## Thursday, 27 December 2018

## Friday, 30 March 2018

### Shall we play a game?

A while ago I watched a very interesting lecture series on Game Theory and I wanted to share some of the insights that I got out of it.

The reason I watched a lecture series on Game Theory is that it shares a lot in common with security. Generally speaking, Game Theory studies how two (or more) interacting players should behave to get their desired result. It certainly isn't about studying "games"; games are just tools to enable analysis. It is perhaps better understood as the study of "strategic decision making".

One of the first things to understand when looking at a game is the idea of a payoff. It's the reason the players are playing: they want to maximise their payoff. This leads to the first insight I got from the course:

> In general, the payoffs for different players cannot be directly compared.

What this means is really interesting, as it says that the strategy you choose as a player is a strategy that maximises your payoff, and so does every other player, but all those strategies might be different because each player is maximising for a different payoff. Relating this to security, an example might be a company choosing security controls to minimise the chance of losing their intellectual property, but an attacker totally focused on gaining control of machines to use for mining cryptocurrency.

To gain more insights it is helpful to know that there are different types of games:

- finite games vs infinite games
- zero-sum games vs non-zero-sum games
- games of pure strategy vs games of mixed strategy
- sequential games vs simultaneous games
- games of perfect information vs games of incomplete information
- cooperative games vs non-cooperative games

From a security perspective if you are trying to choose a strategy to protect an asset against a malicious attacker, then you are playing an infinite, non-zero sum, mixed strategy, simultaneous, incomplete information, non-cooperative game. And yes, people actually study this type of game to determine what an optimal strategy might be (generally given more constraints).

An important type from the list above is the mixed strategy game. That means that a player will not choose a single strategy, but will choose from multiple strategies that they will play with certain probabilities. The payoff from a mixed strategy game for a player is the sum of the payoffs multiplied by the probability of playing the strategy with that payoff (like calculating an expected value in statistics).

Now whereas the payoffs for each strategy are fixed, the flexibility comes from a player assigning different probabilities to each strategy. Of course every other player is doing the same thing, and will certainly choose the probabilities they play a certain strategy based on the probabilities they think other players will choose.

Relating this back to security, if a strategy has a probability and payoff, then the probability is the likelihood of a certain strategy being chosen, and the payoff is the effect of that strategy, or impact we could say. Then the expected payoff for any strategy is likelihood multiplied by impact, which in security we recognise as how risk is calculated.
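The expected payoff of a mixed strategy is just a probability-weighted sum, exactly like the likelihood-times-impact risk calculation; a toy example with invented numbers:

```python
# (strategy, probability of playing it, payoff if played) - numbers invented
strategies = [
    ("patch aggressively", 0.5, -10),
    ("monitor only",       0.3, -25),
    ("do nothing",         0.2, -60),
]

# Expected payoff = sum over strategies of probability * payoff,
# i.e. likelihood * impact, summed - the familiar risk calculation.
expected_payoff = sum(prob * payoff for _, prob, payoff in strategies)
print(expected_payoff)  # 0.5*-10 + 0.3*-25 + 0.2*-60 = -24.5
```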

Now for the interesting bit if you are into security. Game Theory has determined that there is a "stable" set of strategies that all players can reach, something called the Nash Equilibrium:

> A Nash equilibrium is a set of strategies where no player benefits by unilaterally changing his or her strategy in the profile.

This means there is an approach to security where we can construct a range of controls in such a way as to know exactly what our losses will be, and know that the attacker cannot come up with a better approach (set of strategies), as any other approach will lead to the attacker getting a reduced payoff. It's important to point out that a Nash equilibrium does not necessarily lead to the best payoffs for each player, or for all players. What's also interesting is that for the type of game we play in security, at least one Nash equilibrium is guaranteed to exist.

Of course the problem is we don't know how to find the Nash equilibrium. Even if we knew all the possible strategies of all the players and the payoffs of all the strategies, there isn't a way to calculate it. Not to mention that clearly we don't know all the strategies or all the payoffs.

So, useless right? For a quantitative analysis, largely yes. But there are still insights that can be useful. For instance, in the types of games where we do know how to find the Nash equilibrium (zero-sum or constant-sum games), a player finds it by choosing probabilities for their own strategies such that the other player receives the same payoff regardless of which strategy that player chooses (the minimax strategy).
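For a 2x2 zero-sum game this equalisation can even be done in closed form; a sketch with an invented payoff matrix (entries are the row player's payoffs, the column player gets the negation):

```python
#                 col plays X   col plays Y
a, b = 4, 1     # row plays A
c, d = 2, 3     # row plays B

# Pick p = P(row plays A) so the column player sees the same payoff
# whichever column they choose: p*a + (1-p)*c == p*b + (1-p)*d
p = (d - c) / ((a - b) + (d - c))

value_if_X = p * a + (1 - p) * c   # row's expected payoff when column plays X
value_if_Y = p * b + (1 - p) * d   # row's expected payoff when column plays Y
print(p, value_if_X, value_if_Y)   # 0.25 2.5 2.5
```

Whichever strategy the column player picks, their expected payoff is the same: they have been forced into a constant payoff.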

So what does that mean in practical terms? It means this:

> Choose your security controls based on the payoffs to the attacker.

Don't choose your security controls based on the impact of the attacker's strategy on you. The goal is to force the attacker into a strategy with a constant payoff no matter what they do. This approach doesn't mean you won't suffer losses to an attacker, it just means those losses will be fixed.

Naturally this approach has a problem: what if the impact (or payoff) to you is unacceptable from a business perspective? Well, that leads to another important insight:

> Game Theory can help you determine if you are playing the wrong game.

You can change a game in multiple ways, but ultimately it comes down to changing the payoffs for the other players. In practical terms this means changing the amount of value an attacker can derive from the assets of interest to them, e.g. no longer storing the credit cards that an attacker wants access to.

What's interesting is that this approach runs counter to the general approach of risk management today, which is to figure out what threats would have the most impact on your business, figure out how likely those threats are, and try to minimise either the impact or the likelihood. Game Theory isn't saying that isn't sensible, just that it might not be the best strategy. The simple reason comes back to the very first insight: if you optimise to protect what is of most value to you, you might fail to protect the real target of an attacker, which ultimately might cost you more. Put another way, investing heavily in protecting something that is valuable to the business is a waste of time if no attackers are actually interested in it. For instance, if you have IP that is core to your business, attackers won't care about it unless they can monetise it easily, especially if you have other assets they can monetise more easily.

The last insight that I wanted to mention is this quote:

> “The power to constrain an adversary depends on the power to bind oneself” - Thomas Schelling

My interpretation of "bind oneself" is the processes you put in place to ensure you have adequate security controls across your business. In appsec this would be your Secure SDLC. The quote is all about the benefits of strictly enforcing controls on your business, so that an adversary is also constrained.

Game Theory won't solve all the problems we have in security, but it seems a very complementary field that has many practical insights to offer. If you are looking to broaden your understanding of security, I would thoroughly recommend taking a closer look.

Labels: application security, game theory, risk
