Imagine a world where you, a security researcher, could make money from your open source contributions and your expertise in the security of any software, without the intervention of the vendor, and without having to sell vulnerabilities to shady (and not-so-shady) third parties.
That is, on top of bug bounty programs or the 0-day gray markets, you could make money while following any disclosure policy you wish.
That's exactly what Rainer Böhme imagined over 10 years ago with what he called "Exploit Derivatives", which is what I want to draw attention to today.
Hopefully, by the end of this post you'll be convinced that such a market would be good for the internet, and that it can be made possible with today's technology and infrastructure, either officially (backed by financial institutions) or unofficially (backed by cryptocurrencies).
The advantages to software users
First of all, let me explain where the money would come from, and why.
Imagine Carlos, a Webmaster and SysAdmin for about 100 small businesses (dentists, restaurants, pet stores, and so on) in Argentina. Most of Carlos' clients just need email and a simple Wordpress installation.
Now, Carlos' business grew slowly but consistently, since more and more businesses wanted to go online, and word of Carlos' exceptional customer service spread. But as his client pool grew, the time he had to spend on maintenance increased significantly. Since each client had a slightly customized installation for one reason or another, something broke every time an upgrade was scheduled, and attacks ranging from DoS to simple malware probing for SQL injection bugs left Carlos with less and less time to onboard new customers.
Users that depend on the security of a given piece of software would fund security research and development indirectly. They would do this by taking part in an open market to protect (or "hedge") against the risk of a security vulnerability in the software they depend on.
The way they would "hedge" is by participating in a bet: users would bet that the software they use will have a bug. Essentially, they would be betting against their own interests. This might sound strange, but it's not. Let me give you an example.
Imagine that you went to a pub to watch a soccer game between your favorite team and some obscure new team. There's a 90% chance your favorite team will win, but if your team happened to lose, you would feel awful (nobody wants to lose to the small new team). You don't want to feel awful, so you bet against your own team (that is, you bet that your favorite team will lose) with the guy sitting next to you. Since the odds are 9:1 in favor of your favorite team, you would pay 1 USD to get 10 USD back (making a 9 USD profit), which you can use to buy a pint of beer for you and your friend.
This way, in the likely case your favorite team wins, you forfeit 1 USD, but you feel great, because your team won! And in the unlikely case your favorite team loses, you get 10 USD and buy a pint of beer for you and your friend. While in reality loyalty might prevent most soccer fans from ever doing this, there's nothing stopping you from "betting" against your interests to do some damage control, and that's what is called "hedging" in finance.
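To make the payoff arithmetic concrete, here's a minimal sketch in Python (the stake, payout and probability are just the numbers from the pub example):

```python
# Hedging the pub bet: 9:1 odds in favor of your team, so a
# 1 USD bet against it returns 10 USD if your team loses.
stake = 1.0     # what you pay for the hedge
payout = 10.0   # what you receive if your team loses
p_loss = 0.10   # probability your team loses (the 9:1 odds)

outcome_win = -stake           # team wins: forfeit 1 USD, feel great
outcome_loss = payout - stake  # team loses: pocket 9 USD for beers

expected_value = (1 - p_loss) * outcome_win + p_loss * outcome_loss
print(expected_value)  # 0.0 -- at fair odds the hedge costs nothing on average
```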
This "hedge" could be used by
Carlos in our example by betting that there will be a vulnerability in case he has to do an out-of-schedule upgrade on all his clients. If it doesn't happen, then good! He loses the bet, but doesn't have to do an upgrade. If he wins the bet, then he would get some money.
Companies usually do this because it reduces their volatility (e.g., the quarter-to-quarter differences in returns), which makes them more stable, which in turn makes them more predictable, which means they become more valuable as a company.
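To illustrate why, here's a toy model (all the numbers are made up for illustration) showing that a hedge can leave the expected cost unchanged while removing the quarter-to-quarter variance entirely:

```python
import statistics

# 100 hypothetical quarters for Carlos: in 10 of them a bug forces
# an out-of-schedule upgrade costing 100 USD.
emergency_cost = 100.0  # cost of an emergency upgrade
premium = 10.0          # price of a "there will be a bug" bet
payout = 100.0          # what the bet pays if a bug appears

unhedged = [0.0] * 90 + [-emergency_cost] * 10
hedged = [-premium] * 90 + [payout - premium - emergency_cost] * 10

print(statistics.mean(unhedged), statistics.pstdev(unhedged))  # -10.0 30.0
print(statistics.mean(hedged), statistics.pstdev(hedged))      # -10.0 0.0
```

Same average cost, zero volatility: that's the whole value proposition of a hedge.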
Bug hunters joining the financial market
OK, this must sound like a ridiculous idea, and I must admit I worded it like this to make it sound crazy, but bear with me for a second. Bug hunters are the ones with the most to gain in this market, and I'll explain why.
Today's security researchers have a few ways to earn money off their expertise:
- By contracting out their expertise to software vendors and users through a pentesting firm.
- By reporting bugs to the vendor, who might issue a reward in return (commonly called bug bounty programs).
- By reporting bugs in any way or form, and getting paid by interested parties (such as the internet bug bounty).
- And to a lesser extent, by hardening code and claiming bounties for patches to OSS (via open source bounties or patch reward programs).
Böhme's proposal adds a new source of revenue for bug hunters, as a financial instrument with the following properties:
- A one-time effort continues to yield returns in the long term
- Not finding bugs is also profitable (almost as much as finding them)
- Vulnerability price is driven by software security and popularity
- It creates an incentive to harden software and make it more secure
These properties make this market extremely attractive for bug hunters, especially those able to find hard-to-find bugs, as explained below.
Introducing exploit derivatives
The idea is simple: allow bug hunters to trade with each other based on their confidence in the security of a given piece of software, and allow users to hedge their risk in such a market.
Mallory reviews some piece of software used by Carlos, say a Wordpress Vimeo plugin. Mallory thinks that while it's very badly written, there are no vulnerabilities in the code as currently written (say, out of luck, or simply because old exploitable bugs were fixed as they were found). As a result, Mallory "bets" that there won't be a bug in the Wordpress Vimeo plugin within the next year, and is willing to stake 100 USD on it against anyone willing to give her 10 USD. If there's a bug, Mallory has to pay 100 USD, but if there isn't one, Mallory keeps the 10 USD. In other words, Mallory would get a 10% ROI (return on investment).
Carlos, as we explained above, has over a hundred customers, and deploying an update to all of them would cost him at least one extra hour of work (on average). So, Carlos decides to take Mallory up on the bet. If he loses the bet, no problem, he's out 10 USD; but if he wins, he gets some money in exchange for spending an hour on maintenance.
By allowing users to trade "bets" (or, as they are called in finance, binary options) with bug hunters, the market satisfies the needs of both sides: the user "hedges" his risk in the market, and the bug hunter earns money. If there happens to be a bug, Mallory has to pay, sure, and she might lose money, but on average Mallory would earn more money, since she is the one setting her minimum price.
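As a rough sketch, the payoffs of this trade look like the following (Mallory's 5% belief is my assumption for illustration, not a number from the example):

```python
# The binary option between Mallory (seller) and Carlos (buyer).
premium = 10.0  # Carlos pays Mallory up front
payout = 100.0  # Mallory pays Carlos if a bug is published

def mallory_pnl(bug_found: bool) -> float:
    return premium - (payout if bug_found else 0.0)

def carlos_pnl(bug_found: bool) -> float:
    return -mallory_pnl(bug_found)  # zero-sum: one side's loss is the other's gain

# If Mallory believes there's only a 5% chance of a bug, her
# expected profit at this price is positive:
p_bug = 0.05
ev = (1 - p_bug) * mallory_pnl(False) + p_bug * mallory_pnl(True)
print(ev)  # 5.0 -- which is why she is willing to sell at 10 USD
```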
Trading contracts with other bug hunters
Fig. 1 from Böhme 2006 [*]
An important aspect of this market is the ability for contracts to be traded among bug hunters. That is because the understanding of a given piece of code changes over time, as bug hunters get more time to review large code bases with more scrutiny.
As a result, it's important to allow contracts to be exchanged and traded for a higher value (if confidence in the software increased) or a lower one (if confidence decreased).
Eve, on the other side of the world, did a penetration test of the Wordpress Vimeo plugin. Eve reaches a similar conclusion to Mallory's; however, Eve is a lot more confident in the security of the plugin. She has actually found many bugs in the plugin in the past, all of which were patched by Eve herself. As a result, she is very confident in its security, and offers a bet similar to Mallory's, but a bit cheaper: she is willing to stake 100 USD that there won't be a bug against anyone willing to give her 5 USD!
At this point, Mallory can make money by simply buying this bet from Eve, unless Mallory is extremely confident there will not be a bug in the plugin. This is because if Mallory buys the bet, she locks in a 5 USD profit no matter what happens: if there is a bug and Mallory loses her bet with Carlos, she gets 100 USD from Eve and hands the 100 USD over to Carlos; if there is no bug, she simply keeps the difference between the 10 USD she received and the 5 USD she paid. This is what is known in finance as an "arbitrage opportunity".
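Here's the arbitrage spelled out, using the numbers from the example; Mallory's profit is the same whether or not a bug ever shows up:

```python
# Mallory sold Carlos a bet for 10 USD and buys the same bet
# from Eve for 5 USD; the 100 USD payouts cancel each other out.
received_from_carlos = 10.0
paid_to_eve = 5.0
payout = 100.0

def mallory_net(bug_found: bool) -> float:
    flows = received_from_carlos - paid_to_eve
    if bug_found:
        flows += payout  # Eve pays Mallory...
        flows -= payout  # ...and Mallory pays Carlos
    return flows

print(mallory_net(True), mallory_net(False))  # 5.0 5.0 -- risk-free profit
```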
And obviously, this allows speculators (people who buy and sell these contracts looking to make a profit, but who are neither users nor bug hunters) and market makers to join the market. Speculators and market makers provide an important service: "liquidity". They supply more contracts than users and bug hunters alone would (liquidity is a financial term meaning it's very easy to buy and sell things in a market).
Market makers - what? who? why?
The "market makers" are people that buy or sell financial instruments in bulk at a guaranteed price. They are essential to the market, not just because they make buying and selling contracts easier, but also because they make the price a lot more stable and predictable. Usually, the market makers are the ones that end up profiting the most off the market (but they also face a lot of risks in the process).
However, in the exploit derivatives market, profit isn't the only incentive to become a market maker. Ultimately, the market price will reflect the probability of an exploit being published before a given date. Market makers can make the exchange of a specific type of exploit derivative a lot more accessible to participants (both bug hunters and users), making these predictions a lot more accurate.
To give an example, imagine a market maker buys 1,000 options for "there will be a bug" and 1,000 options for "there won't be a bug" (following the same example of Carlos, Eve and Mallory). Without the market maker, Carlos would have to find Eve, and Eve would have to find Mallory, which might be difficult. With a market maker, they just need to find it at the exchange and buy from or sell to it.
As long as the market maker sells as many options on each side, its risk is minimized. The market maker would then purchase as many options as it needs to balance its risk, or adjust the price as needed (for instance, if everyone wants to buy "there will be a bug", it has to increase that price accordingly).
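As a deliberately naive sketch of that behavior (the starting price and step size are arbitrary assumptions), a market maker might nudge its quote toward whichever side is in demand:

```python
# A toy market maker quoting the "there will be a bug" side as a
# price between 0 and 1; one-sided demand pushes the quote up,
# keeping its book of contracts roughly balanced over time.
class MarketMaker:
    def __init__(self, price: float = 0.10, step: float = 0.01):
        self.price = price  # current quote for the "bug" side
        self.step = step    # how much each trade moves the quote

    def sell(self, side: str) -> float:
        if side == "bug":
            self.price = min(0.99, self.price + self.step)
        else:
            self.price = max(0.01, self.price - self.step)
        return self.price

mm = MarketMaker()
for _ in range(5):          # a run of buyers who all expect a bug...
    mm.sell("bug")
print(round(mm.price, 2))   # 0.15 -- ...raises the quoted probability
```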
And in case it wasn't obvious by now, the more "liquidity" in the market, the more accurate the prices become. And the more accurate they become, the more value there is in exploit derivatives as a "prediction" market. Note that another service provided by this is the decentralized, aggregated knowledge of the security community on whether a vulnerability exists or not, which provides value to the community in and of itself [*].
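In fact, under this scheme a contract's price directly encodes the market's probability estimate. A quick sketch using Mallory's and Eve's prices from earlier:

```python
# A seller who collects `premium` and risks `payout` breaks even
# exactly when P(bug) == premium / payout, so liquid prices can be
# read as the market's probability that a bug will be published.
def implied_p_bug(premium: float, payout: float) -> float:
    return premium / payout

print(implied_p_bug(10.0, 100.0))  # 0.1  -- Mallory's price: P(bug) ~ 10%
print(implied_p_bug(5.0, 100.0))   # 0.05 -- Eve's price:     P(bug) ~ 5%
```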
What happens if there's a bug?
When someone finds a bug, the finder can make a lot of money! And that's the whole point of this blog post. This creates a strong financial incentive to do security research and to profit from it in a more distributed manner, one that is compatible with existing vulnerability reward programs.
The way this would work is: when you find a bug, you buy as many "bets" on "there will be a bug" as you can, and then you report it to the vendor, or post it to full disclosure... whatever works for you. This essentially guarantees you will win the bet and make a profit.
What's most interesting about this is that the profit you can make is defined by the market itself, not the vendor. This means that the software the market believes to be most secure, as well as the most popular software, would yield the most money. And this actually incentivizes bug hunters to look at the most important pieces of internet infrastructure at any given point in time.
This is also likely to provide a higher price for bugs than existing security reward programs, and since the two aren't incompatible (as currently defined), there's no issue with collecting both a reward and returns from the market.
Note that this is actually symmetric with not finding a bug, except that not finding a bug yields profits from those trying to limit their exposure to the risk of a vulnerability, while finding one yields profits from those that didn't find the bug.
Funding hardening and secure development
One of the best things about this market is that it would create a financial incentive to write secure software and to harden/patch existing software, something our industry hasn't done a particularly great job at.
There are two ways to fund secure development with exploit derivatives:
- By bug hunters that hold a long bet on the security of a product and want to limit their liability
- By donations to software vendors contingent on having no vulnerabilities for some time
Let me explain. If a bug hunter has a one-year bet that there won't be a new bug, then unless the project has very little development and a history of being very secure, the bug hunter has a strong incentive to make sure new bugs are hard to introduce. The bug hunter could do this by refactoring APIs to be inherently secure, improving test coverage, and improving the project's overall engineering practices.
The other way it funds secure development is by making donations contingent on the absence of vulnerabilities. It would be possible to donate, say, 50 USD to an open source project, and pledge an extra 50 USD that is paid only if there are no vulnerabilities for a year (and if there are, the donor gets that money back).
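A minimal sketch of the cash flows for such a conditional donation (the 50/50 split is just the example's numbers):

```python
# Conditional donation: 50 USD up front, plus 50 USD in escrow
# that the project receives only if no qualifying vulnerability
# is published within a year.
def settle_donation(upfront: float, escrow: float, bug_found: bool):
    to_project = upfront + (0.0 if bug_found else escrow)
    refund_to_donor = escrow if bug_found else 0.0
    return to_project, refund_to_donor

print(settle_donation(50.0, 50.0, bug_found=False))  # (100.0, 0.0)
print(settle_donation(50.0, 50.0, bug_found=True))   # (50.0, 50.0)
```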
Who decides what's a valid bug?
Who would decide what is and isn't a bug? In the end, these people would have a lot of power in such a market. And in my experience on the vendor side of vulnerability reward programs, what is and isn't a bug is often difficult to decide objectively.
To compensate for this risk, as with any financial decision, one must "diversify". In this case, diversifying means choosing as many "referees" (or "bug deciders") as possible.
In this day and age, CERTs provide such a service, as do vendors (through security advisories). As a result, one can make a bet that settles based on those pieces of information (e.g., a CVSSv3 score above 7.0, or a bounty above $3,000). However, these signals might not always mean what the user wants them to represent, and for those cases it would be better to create an incentive for market-specific deciders to make the call.
One way to solve this problem, as pointed out by Böhme, is to make the conditions as clear as possible (which could make arbitration very cheap). One could imagine a preconfigured VM or Docker image that includes some "flags", such that a specific attack scenario (e.g., same network, remote attacker, minimal user interaction) might result in the compromise of a "flag". This is akin to a CTF competition, and would be quite unambiguous in the majority of cases.
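To sketch what such cheap, CTF-style arbitration could look like (everything here, from the flag plumbing to the exploit interface, is hypothetical):

```python
# Settlement by demonstration: the contract ships a reference
# image with a random canary "flag" planted in it; a bet on
# "there will be a bug" settles as won iff a claimed exploit,
# run under the agreed attack scenario, can print that flag.
import secrets
import subprocess

FLAG = secrets.token_hex(16)  # assume this gets planted in the target image

def exploit_wins(exploit_cmd: list, timeout: int = 60) -> bool:
    """Run the claimed exploit and check whether the flag leaked."""
    try:
        result = subprocess.run(exploit_cmd, capture_output=True,
                                text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return False  # no flag within the time limit: claim rejected
    return FLAG in result.stdout

# e.g. exploit_wins(["./exploit", "--target", "10.0.0.2"])
```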
For more complex vulnerabilities or exploits, one might want human judgment to decide on the validity of an exploit. For example, the recent CVE-2015-7547 (glibc getaddrinfo stack-based buffer overflow) has specific requirements that are clearly exploitable for most users, but might seem ridiculous to others. Having a trusted referee with sufficient expertise to decide and have the final say would go a long way toward creating a stable market.
And this actually creates another profit opportunity for widely known and trusted bug hunters. If referees get a fee for their services, they can make a profit simply for deciding whether a piece of software has a known valid vulnerability or not. The most cool-headed, impartial referees would eventually earn higher fees based on their reputation in the community.
That said, human-based decisions introduce ambiguity, and the more judgment you give the referees, the more expensive they become (essentially, to outweigh the incentive to take bribes). While reputation might deter misbehavior by referees, in the long term it would be better for the market (and a lot cheaper) to use CTF-style decisions rather than depending on someone's judgment.
Vulnerability insurance and exploit derivatives
It's worth noting that while exploit derivatives look similar to insurance, they are quite different. Some important differences are:
- With insurance, you usually only get money for a loss you can demonstrate. Here, money changes hands even when there is no loss! In other words, if a user buys insurance to cover losses from getting hacked, they will get their costs covered up to a specific amount; but with a hedge, you get the money even if there was no loss (this essentially makes hedging a lot more expensive than insurance).
- Another important difference is that insurers won't write a policy for just anyone, while anyone can trade a binary option as long as they have the funds to do so. This is because your personal risk of loss is irrelevant to the market: when you are trading options, your identity doesn't matter.
That said, one could easily build an insurance product on top of exploit derivatives. Insurers would reduce the price of a policy by paying only for actual demonstrable losses, and would adjust the price of each policy depending on the risk its holder bears.
Note, however, that this is not the only way to do it; there is a lot of research on the subject of hedging risks with "information security financial instruments", led by Pankaj Pandey and Einar Arthur Snekkenes, who explain how several different incentives and markets can be built for this purpose [1][2][3][4].
Incentives, ethics and law
Exploit derivatives might create new incentives, some of which might look eerie, mostly related to different types of market manipulation:
- Preventing vulnerability disclosure
- Transfer risk from vendors to the community
- Spreading lies and rumors
- Legality and insider trading
- Introducing vulnerabilities on purpose
I think it's interesting to discuss them, mostly because while some raise valid concerns, others seem to be simple misunderstandings, which we might be able to clarify. Think of this as a threat model.
Preventing vulnerability disclosure
An interesting incentive that Micah Schwalb pointed out in his 2007 paper on the subject ("Exploit Derivatives and National Security") is that software vendors in the United States might have several resources at their disposal to prevent the public disclosure of security vulnerabilities. Specifically, he pointed out how several laws (the DMCA, trade secrets, and critical infrastructure statutes) have been used, or could be used, by vendors to keep vulnerabilities secret forever.
It's clear that if vulnerabilities are never made public, it's very difficult to create a market like Exploit Derivatives, and that was the main argument Schwalb presented. However, we live in a different world now, where public disclosure of vulnerabilities is not only tolerated by software vendors, but celebrated and remunerated via vulnerability reward programs.
The concerns Schwalb presented would still be valid for vendors that attempt to silence security research; fortunately for customers, however, many vendors now take a more open and transparent approach, where public discussion of (fixed) vulnerabilities is supported by our industry.
Transfer risk from vendors to the community
One interesting consequence of a market like this, as pointed out by David, is that it would transfer the financial risk of having a security vulnerability away from the software vendor and push it toward the security community. That is because if the vendor is not involved, the vendor can't lose any money in the market for writing insecure software.
However, the most important thing to notice here is that the vendor is part of the market in a way, even if it doesn't want to be, because the vendor is the only one that can make the software better (for non-free software). If the software's "security" trades for a very low price, then there's a clear financial incentive for the vendor to improve its security (for example, if the market is so pessimistic that a bet that there won't be a bug pays 9:1, the vendor can earn 10x its stake by taking that bet and then making sure there are no bugs left to find). So yes, the vendor doesn't have to absorb any risk from having a vulnerability, but it gets an incentive to improve the software instead.
Eventually, the vendor would stop investing in security as soon as the market stops demanding it, leading to software that is as secure as its users want it to be (or more, if the vendor wants, for any other reason).
Spreading lies and rumors
Real markets have historically been vulnerable to social engineering. In fact, there are countless cases where media misunderstandings or mistranslations have had dire consequences for the economy.
Additionally, things like ZDI's upcoming advisories list would suddenly become extremely profitable if someone were to hack or MITM that page at the right time. The same goes for "side channels" for security vulnerabilities, or embargoes of vulnerabilities in popular software.
But also, as lcamtuf pointed out, some of these channels might not even require a targeted attack. Things like an early heads-up from a trusted maintainer, which could be spoofed, or wiki pages used to make security announcements, which anyone can edit, could easily be used to manipulate the market.
If anything, the media tends to exaggerate news stories. A small vulnerability is always amplified a hundred times by the media, which could cause chaos in the market. At that point, referees wouldn't even have to be involved, and trading would be based on pure speculation.
But this is a perfect arbitrage opportunity for those that get a chance to notice it. In this case, one could check the headers of the email, double-check the statement with the maintainer, or just see who made the update to the wiki. If something looks fishy, a speculator would be able to capitalize on the market uncertainty and balance the price back to normal. This is essentially how the free market deals with this risk in other markets.
Finally, this would incentivize software maintainers to improve their security patch release coordination. Perhaps it would make CERT more popular, or at least create an incentive for CERT to act as a trusted third party for the release and scoring of software vulnerabilities, which would also be good for the market.
Insider trading and vulnerabilities
If you thought of insider trading when reading about creating a vulnerability market, you are not alone. But insider trading is a commonly misunderstood concept, and it doesn't exactly apply to most freelance bug hunters trading exploit derivatives (although it might apply to developers and employees of some companies). Let me explain.
Insider trading is essentially when someone with a "fiduciary duty" (explained below) and access to non-public information makes a trade in the market. You might think this means that someone who finds a bug would be committing insider trading by "betting" that there will be a bug, but for most bug hunters this really shouldn't be the case.
First of all, for open source software, where no binary analysis or reverse engineering is required, the bug hunter is acting on public information (the source code) when making the trade. The fact that the bug hunter was able to spot a vulnerability is equivalent to a financial analyst being able to spot an arbitrage opportunity in the market.
Another point is that the majority of bug hunters don't have a "fiduciary duty" to other market participants, so there shouldn't be a conflict of interest there. Here's the definition of fiduciary duty:
A fiduciary duty is a legal duty to act solely in another party's interests. Parties owing this duty are called fiduciaries. The individuals to whom they owe a duty are called principals. Fiduciaries may not profit from their relationship with their principals unless they have the principals' express informed consent.
https://www.law.cornell.edu/wex/fiduciary_duty
Employees (developers, bug hunters, or otherwise) of companies that use the software, however, might be restricted from engaging in such trades, depending on how and when they got the information [*]. For a final say on the subject, participants would have to get advice from a lawyer with experience in securities and corporate law.
Note, however, that insider trading wouldn't necessarily be bad for this market [1][2], and this isn't even something one would need to think about until a market like this is regulated (if it is regulated at all). Either way, that's a discussion better left to economists and lawyers, not security researchers :)
On the other hand, consider the benefits of having companies participate in this market:
- Employers could give exploit derivatives as compensation to employees, in bundles that pay out if there are no public exploits, as a way to incentivize security. That would be good for both the users and the market. Equity is already a big part of compensation at most companies, especially startups, so adding a new instrument as compensation would give developers a more concrete mechanism to influence its value (by writing more secure software).
- Victims of 0-day vulnerabilities would have a financial incentive to try to recover the exploit used to attack them (via forensics, reverse engineering, or otherwise); today there's very little incentive for victims to speak out and share technical details. This could of course be done anonymously, as the community would only care about the technical details, and less so about the identity of the victim.
Introducing vulnerabilities on purpose
The most dangerous incentive that a market like this could create is that of introducing vulnerabilities on purpose. Fraud like this is also a danger to the trustworthiness of the market, and because of the distributed nature of open source software, it would be very hard to bring civil or criminal charges against someone who did this. However, the market should address this in two ways:
- Price - Software that accepts third-party contributions with little oversight, or with a large development team that might not be as trustworthy, would yield low returns for backdoors or bugs introduced on purpose. And while introducing a vulnerability without being noticed is definitely possible, software with strict engineering practices and stringent code review processes would end up being the highest valued in the market.
- Competition - Participants in the market with long-term investments in the security of software will compete with those trying to introduce bugs. Practically, the stakes are the same on both sides, so it would be as easy to make money by making software safer as by doing the opposite (and without the risk incurred by introducing bugs on purpose).
Finally, this could also be mitigated by donations that pay maintainers only if there are no vulnerabilities. Contracts could be structured in ways that limit the risk (say, by requiring code to be in the code base for a long time) and make this avenue less profitable than making the software more secure.
In the end, we would incentivize open source software maintainers to be more careful with the patches they accept and to employ safer engineering practices. We would also create more oversight by interested third parties, and hopefully, by way of it, make software safer as a result.
Conclusions
If you've made it this far down the blog post, you might either be as excited about this as I am, or you are waiting for me to pitch you my new startup! Well, fear not... there is no startup. But I am most definitely ready to abuse your enthusiasm and encourage you to help make this a reality! Either by sharing your thoughts, sharing this with your friends, or reviewing its design.
Exploit derivatives enjoyed some popularity in the few years after the idea was published, but it never got wide enough attention in the security community (although in 2011, projects like BeeWise and Consensus Point briefly experimented with it).
It seems that in 2006 our industry had little experience with vulnerability markets, and in 2011 it was too complex and expensive to set up a prediction market, which probably made the upfront cost of such a market extremely high. But fast-forward to 2016: prediction markets abound [1][2][3] for topics as varied as politics, economics, and entertainment, and creating decentralized markets with cryptocurrency is not only possible and accessible, but also already heavily invested in by the security and open source community (although, surprisingly, not for software security!).
I think experimenting with this market using a cryptocurrency would allow us to test the idea very quickly. For example, we could use Augur (despite its controversy), a recently launched project (currently in beta, so no money is being exchanged yet) that creates a decentralized prediction market which could be used for exploit derivatives among other things (and, most importantly, has an API that could be used to build a security vulnerability prediction market on top of it).
Anyway, to wrap up this blog post, I have one last request. If you are interested in discussing exploit derivatives, I just created a mailing list for it here:
There's nothing on it yet, but I would love to hear your feedback, positive and negative! Especially if you or someone you know has experience in security research, financial markets, or security economics.
I would be particularly interested to hear from you if you think this might have other positive or negative incentives in the security or open-source community.
Thank you for reading, and thanks to Rainer, Pankaj, Michal, David, Miki, Dionysis and Mario for their comments.
Note that this is a personal blog. What I say in this blog does not imply in any form an endorsement from my employer, nor necessarily reflects my employer's views or opinions on the subject.