What is the right price for a security vulnerability?
TL;DR: Vendors should focus on vulnerabilities, not on exploits. Vulnerabilities should be priced based on how difficult they are to find, not just on their exploitability.
I've been searching for an answer to this question for a while now, and this blog post is my attempt at answering it. It reflects my personal opinion.
The first answer is the economics from the security researchers' perspective. Given that vendors run bug bounties as a way to interact with and give back to the security community, the rewards are mostly aimed at compensation and appreciation. As a result, for these researchers, getting 5k USD for a few days of work done as a research project or personal challenge is pretty neat.
In contrast, in the "grey market", where buyers are looking for vulnerabilities in order to exploit them (let's call them "exploiters"), the priorities center on a vulnerability's reliability and stability.
As an "exploiter", you want good, simple, dumb, reliable bugs. For bug hunters, finding these *consistently* is difficult. It's not about giving yourself a challenge to find a bug in Chrome this month, but rather you seek to be able to create a pipeline of new bugs every month and if possible, even grow the pipeline over time. This is way more expensive than "bug hunting for fun".
Now, of course, there is an obvious profit opportunity here. Why not buy the bugs from the security researchers who find them in their spare time for fun, and resell them to "exploiters" for 10x the price? Well, that happens! Bug brokers do precisely that. The prices these "bug brokers" charge are limited only by how much the "exploiters" are willing to pay (which is a lot; more on that below).
However, and very importantly, we haven't discussed the cost of going from vulnerability to exploit. Depending on the vulnerability type, that might be trivial (for some design/logic flaws) or very difficult (for some memory corruption issues).
Now, surprisingly, this key difference might give vendors a fighting chance. Software vendors, in their mission to make their software better, actually don't care (or at least shouldn't care) about the difficulty of writing a reliable exploit. Vendors want the vulnerability so they can fix it, learn from it, and find ways to prevent it from happening again.
This means that a software vendor can extract value from a vulnerability immediately, while turning it into an exploit to sell to those who want to exploit it costs a significant amount of additional research and effort whenever there are many mitigations along the way (sandboxes, isolation, etc.).
So, it seems that the vendor's best chance in the "vendor" vs. "exploiter" competition is twofold: (1) to focus on making exploits harder and more expensive to write, and (2) to focus on making vulnerabilities as profitable to find and to report as possible. With the goal that eventually the cost of "weaponizing" a vulnerability is higher than the cost for finding the next bug.
The second answer to this question is the economics from the perspective of the "exploiters" and the vendors.
For the vendors, software engineering is so imperfect that any large application will have a lot of bugs, and the more you code, the more you introduce.
So for software vendors, learning about a lot of vulnerabilities isn't as valuable as preventing them from happening in the first place. In other words, being notified of a vulnerability is only useful if that knowledge is used to prevent the next one.
Prices, then (for vendors), should first of all be set to match the volume of reports the vendor can actually handle, counting not just the response but the corresponding remediation work. So if the vendor has 2 full-time engineers staffed to respond to security vulnerabilities, the reward budget should be sized to approximately those 2 engineers' time.
And then, on top of that, as many engineering resources as possible should be focused on prevention (to make vulnerabilities harder to introduce), optimizing processes (to be able to handle a larger number of reports), and finally making exploits harder to write (to make finding the next bug cheaper than writing an exploit).
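As a back-of-the-envelope sketch of the sizing idea above: tie the yearly reward pool to the cost of the response engineers, and derive an average reward from the remediation throughput. All figures here are hypothetical assumptions for illustration, not real bounty data.

```python
# Illustrative sketch: sizing a reward pool to remediation capacity.
# Every number below is an assumption, not real data.

FTE_COST_PER_YEAR = 250_000      # assumed fully-loaded cost of one engineer
RESPONSE_ENGINEERS = 2           # engineers staffed on vulnerability response
REPORTS_HANDLED_PER_YEAR = 100   # assumed remediation throughput

# The yearly reward pool roughly matches what the vendor can act on.
reward_pool = FTE_COST_PER_YEAR * RESPONSE_ENGINEERS

# Average reward per actionable report, given that throughput.
average_reward = reward_pool / REPORTS_HANDLED_PER_YEAR

print(f"Yearly pool: ${reward_pool:,}")
print(f"Average reward: ${average_reward:,.0f}")
```

With these made-up inputs, the pool comes out to $500,000 a year and an average reward of $5,000 per report, which is in the ballpark of the "5k USD for a few days of work" figure mentioned earlier.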
For the "exploiters", if they didn't have these vulnerabilities, their alternative would be to do dangerous and expensive human intelligence operations. Bribing, spying, interrogating, sabotaging etc.. all of these expensive and limited by the amount of people you can train, the amount of assets you can turn, and the amount of money they will ask for, and then your ability to keep all of this secret. Human intelligence is really very expensive.
On the other hand, they could use these security holes: super attractive capabilities that allow them to spy on those they want to spy on, reducing (or sometimes eliminating) the need for any of the human intelligence operations and their associated risks. How can it get better than that?
So they have the budget and the ability to pay large sums of money. However, the vulnerability market isn't efficient enough for those larger prices to matter as much as they could.
What this market inefficiency means is that if someone can make $X00,000 a year just by finding vulnerabilities (and skipping the exploit writing), then spending a month or two writing a reliable exploit carries an opportunity cost: the vulnerabilities that would have been found in that time. Vendors should be able to take advantage of this.
In conclusion, it seems to me like the optimal way to price vulnerabilities for vendors is to do so based on:
(1) Identifying the vulnerabilities in the critical path of an exploit.
(2) Ignoring mitigations as much as possible when making vulnerability reward decisions.
And that will only have the intended effect if:
(a) Vendors properly invest in remediation, prevention, and mitigation, as otherwise one doesn't get any value out of buying these vulnerabilities.
(b) Our reliance on requiring full PoCs from security researchers changes, if we want to get vulnerabilities in order to learn from them.
Thank you for reading, and please comment below or on Twitter if you disagree with anything or have any comments.