This makes me curious whether the likelihood of an attack can be expressed as the number of links that need to be broken.
It seems probable that the likelihood of a successful attack decreases with the number of vulnerabilities the attack requires. Estimating the skill or effort needed to exploit any particular vulnerability is hard, because it depends on the skill and motivation of the attacker, which are themselves difficult to gauge. And it often seems that claims that an attack is too difficult to pull off do nothing but encourage certain communities to prove otherwise.
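The intuition that likelihood drops with chain length can be sketched with a toy model: if each stage succeeds independently with some probability, the chain succeeds only if every stage does, so the overall probability is the product. This is a simplification I'm introducing for illustration (the independence assumption and the probabilities are mine, not a real measurement):

```python
from math import prod

def chained_probability(stage_probs):
    """Probability that an attack chain succeeds, assuming each
    stage succeeds independently with the given probability."""
    return prod(stage_probs)

# A hypothetical 3-stage chain where each stage succeeds half the time:
print(chained_probability([0.5, 0.5, 0.5]))  # 0.125
```

Even with generous per-stage odds, each extra link cuts the overall likelihood sharply, which is the effect the qualitative scale below tries to capture.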
It's important to clarify that I'm not talking about the number of steps it takes to exploit a single vulnerability, like the hoops you need to jump through to exploit a buffer overflow with DEP and ASLR enabled. I am talking about the number of discrete vulnerabilities that must be chained together for successful exploitation.
At one end of the spectrum we have vulnerabilities like standard SQL injection; this is a 1-stage attack if the goal is to recover information from a database. Reflected Cross-Site Scripting is a 2-stage attack: stage 1 is to create the payload (script) and attack vector (email, website), and stage 2 is to lure the target into following a link (I'm going to say that people always count as a discrete vulnerability). I can't think of any simple 3-stage attacks, except where specific mitigations are in place for defence-in-depth, such as a bank enforcing a maximum withdrawal limit.
With our new metric for the likelihood of an attack succeeding, I'm just going to convert those numbers directly into qualitative likelihood measures:
- 1-stage = High likelihood
- 2-stage = Medium likelihood
- 3 or more stages = Low likelihood
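The mapping above is simple enough to express as a tiny helper; this is just a direct transcription of the scale (the function name is mine):

```python
def likelihood(stages):
    """Map the number of chained vulnerabilities in an attack
    to a qualitative likelihood, per the scale above."""
    if stages <= 1:
        return "High"
    if stages == 2:
        return "Medium"
    return "Low"  # 3 or more stages

print(likelihood(1), likelihood(2), likelihood(4))  # High Medium Low
```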