CORS is stupid

CORS and the browser's same-origin policy are two things that are widely misunderstood. Below I'll explain what they are and why it's time to stop worrying about them.

Note: I'm going to treat CORS and the same-origin policy as one thing, so from here on I'll often use the terms interchangeably. They are essentially parts of the same system, working together to decide what you can do with cross-origin resources. In short: if your requests cross origins, you are dealing with CORS rules, policies and mechanisms.

First of all, let me point out that CORS is a huge hack intended to reduce the impact of mistakes baked into legacy code. It provides protection on an opt-out basis, partially mitigating XSRF attacks against unprotected or unmodified sites, and on an opt-in basis, so that a site can actively defend itself. Neither measure is enough to fully solve the problem. If your site uses cookies, you are obliged to take active care of its security. (Okay, that doesn't apply to literally every site, but better safe than sorry. Take the time to audit your site properly, or just follow the simple steps below. Even with the most sensible patterns you can still end up exposed to XSRF vulnerabilities.)

Problem

The key issue here is how the web handles implicit credentials, such as cookies. Historically, browsers handled this catastrophically: implicit credentials were attached to cross-origin requests, which opened up the following attack vector.

1. Log in to https://your-bank.example.

2. Go to https://fun-games.example.

3. Then https://fun-games.example performs fetch("https://your-bank.example/profile"), which hands the attacker sensitive information about you, such as your address or your account balance. This worked because when you logged in to your bank's website, the bank set a cookie that grants access to your account details. fun-games.example can't steal that cookie outright, but it can make its own requests to your bank's API, and the browser would happily attach the cookie so that those requests were authenticated as you.

Solution

This is where CORS comes in. It governs how cross-origin requests can be made and how their results can be used. It is very flexible, and at the same time hopelessly incomplete.

By default, the policy allows requests to be made but not their results to be read. So fun-games.example can't read your address from https://your-bank.example/profile. Some information still leaks indirectly, though: for example through timing, or by observing whether the request succeeded or failed.

But this manages to annoy everyone without actually solving the problem! Yes, fun-games.example can't read the result, but the request is still sent. That means its script can POST https://your-bank.example/transfer?to=fungames&amount=1000000000 and transfer a billion dollars to its owner's account.

This is probably one of the most serious security holes ever preserved in the name of backwards compatibility. The cross-origin protection you get automatically is effectively useless in practice: every single site that uses cookies must handle cross-site requests explicitly.

Yes, every single one of them.

How is this problem actually solved?

The key to protecting against these cross-site attacks is to ensure that implicitly passed credentials are not used inappropriately. A good place to start is to simply ignore all such data for cross-site requests, and then add specific exceptions to this policy as needed.

Attention: there is no combination of Access-Control-Allow-* headers that solves the problem for "simple" requests. Those are sent long before any policy is checked, so you will have to handle them differently; don't try to fix the situation by setting a CORS policy.

The best solution is server-side middleware that ignores implicit credentials on all cross-site requests. The example below strips cookies, but if you use HTTP authentication or client TLS certificates, be sure to ignore those as well. Fortunately, all modern browsers send the Sec-Fetch-* headers, which make cross-site requests easy to identify.

def no_cross_origin_cookies(req):
    # Sec-Fetch-* headers are missing in very old clients; a missing header
    # falls through to the safe default of stripping the cookie.
    if req.headers.get("sec-fetch-site") == "same-origin":
        # Same origin, OK.
        return

    if req.headers.get("sec-fetch-mode") == "navigate" and req.method == "GET":
        # GET requests shouldn't change state, so this is safe.
        return

    req.headers.delete("cookie")

This is a solid baseline. If needed, you can add targeted exceptions for endpoints that are specifically prepared to handle implicitly authenticated cross-site requests; I strongly advise against broad exceptions.
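For illustration, such an exception could be expressed as an explicit allowlist of paths, in the same framework-neutral pseudocode style as above. The path and the req object's API are assumptions made up for this sketch, not a real framework:

# Endpoints deliberately built to accept implicitly authenticated
# cross-site requests (e.g. they perform their own XSRF checks).
# The path below is made up for this example.
CROSS_SITE_COOKIE_EXCEPTIONS = {"/webhooks/payment-callback"}

def no_cross_origin_cookies(req):
    if req.headers.get("sec-fetch-site") == "same-origin":
        return  # same origin, keep cookies

    if req.headers.get("sec-fetch-mode") == "navigate" and req.method == "GET":
        return  # top-level GET navigation; safe if GETs don't change state

    if req.path in CROSS_SITE_COOKIE_EXCEPTIONS:
        return  # this endpoint has explicitly opted in

    req.headers.delete("cookie")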

More about protection

Explicit credentials

One of the best ways to avoid this problem altogether is to avoid implicit credentials entirely. If all authentication is done with explicit credentials, you never have to worry about the browser attaching some unexpected cookie. Explicit credentials might be an API token the user generates or the result of an OAuth flow; either way, the crucial property is that logging in to one site gives other sites nothing they can reuse. The natural way to do this is to pass the token in the Authorization header.

Authorization: Bearer chiik5TieeDoh0af

Using the Authorization header is standard practice and is well supported by tooling; for example, this header will likely be excluded from most logs by default.

Most importantly, it has to be explicitly set by every client. That not only solves the XSRF problem, it also makes supporting multiple accounts much easier.
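As a minimal sketch, assuming the same kind of framework-neutral req object used above and a hypothetical look_up_token() helper, server-side handling of explicit credentials might look like this:

def authenticate(req):
    # Only trust the explicit Authorization header; never fall back to
    # cookies, so nothing the browser attaches implicitly can log a user in.
    auth = req.headers.get("authorization", "")
    if not auth.startswith("Bearer "):
        return None  # treat the request as anonymous

    token = auth[len("Bearer "):]
    # look_up_token() is a stand-in for however your app validates tokens.
    return look_up_token(token)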

The main drawback of explicit credentials is that they don't work for server-rendered sites, because they aren't included in top-level navigations. And since server-side rendering is often great for performance, this approach is frequently not an option.

SameSite cookies

Even though our server now ignores cookies on cross-site requests, it's better to keep them out of those requests in the first place where possible. Set the SameSite=Lax attribute on all your cookies and the browser will omit them from cross-site requests.

Note: by "top-level" navigation I mean the URL that appears in the browser's address bar. So if fun-games.example is loaded in the address bar and the page makes a request to your-bank.example, then fun-games.example is the top-level site.

Keep in mind that cookies are still included in top-level GET navigations. You can avoid that with SameSite=Strict, but then the user will appear logged out on the first page they land on after following a cross-site link (since that request carries no cookies).

With SameSite cookies, cross-site form submissions also stop working, and there is no way to selectively opt back in for a few specific endpoints. Fortunately this case is rare in practice, and it's usually fine simply not to support it. I definitely recommend making this attribute your default and reaching for other mechanisms only where they are clearly required.
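As a concrete illustration, the attribute is just part of the Set-Cookie header. Here is a minimal sketch using Python's standard http.cookies module; the cookie name and value are made up for the example:

from http.cookies import SimpleCookie

# Build a session cookie that browsers will omit from cross-site requests.
cookie = SimpleCookie()
cookie["session"] = "chiik5TieeDoh0af"  # example value only
cookie["session"]["samesite"] = "Lax"   # or "Strict", with the caveat above
cookie["session"]["secure"] = True
cookie["session"]["httponly"] = True

# -> session=chiik5TieeDoh0af; HttpOnly; SameSite=Lax; Secure
print(cookie["session"].OutputString())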

CORS policy

Here's a simple policy you can copy and paste:

Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: *

That's it, it's done.

Attention: this policy does not simply reflect the Origin header into Access-Control-Allow-Origin. The point of * is that it isn't just a wildcard, it also disables implicit credentials. That guarantees authenticated requests can't be made from other origins except through an explicit flow, such as including the Authorization header.

With this policy in place, other sites can only make anonymous requests to yours. In other words, the security is the same as if they were routing those requests through a CORS proxy.
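In the same middleware spirit as the cookie example earlier, applying that policy is just a matter of adding the two headers to every response. The resp object's API here is an assumption for the sketch:

def add_cors_headers(resp):
    # "*" is not just a wildcard: it also disables implicit credentials,
    # so cookies attached by the browser are never exposed cross-origin.
    resp.headers["Access-Control-Allow-Origin"] = "*"
    resp.headers["Access-Control-Allow-Methods"] = "*"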

Don't you need to be more specific?

Probably not. There are a couple of reasons for that:

  1. It can create a false sense of security. Just because a well-behaved browser won't let another web page make certain requests doesn't mean those requests can't be made at all. CORS proxies, for example, are extremely common.

  2. It blocks read-only access to your site, which can be useful for things like link previews, feed readers and other features. The result is that people reach for CORS proxies instead, which hurts not only performance but also user privacy.

Remember, CORS is not about blocking access, it's about preventing implicit credentials from being accidentally reused.

My rant

So why do I need to know all of this? Why isn't the web secure by default? Why am I stuck with a default policy that's annoying enough to get in the way without actually solving the real problem?

Yes, I know, it's infuriating. I think most of the problems described here come down to backwards compatibility: sites have built features directly on top of these security holes, and browsers try to patch the holes as much as they can without breaking existing sites.

Fortunately, there are encouraging signs on the horizon: browsers are now actually willing to break some sites for the benefit of users. The major browsers are moving toward isolating state by top-level site. This goes by different names: Firefox calls it State Partitioning, Safari calls it Tracking Prevention, and Google prefers to talk about blocking "cross-site tracking cookies"; Chrome also implements CHIPS, a system that requires an explicit opt-in.

The main problem is that these measures are framed as privacy features rather than security features, so they can't be relied upon: the heuristics they use sometimes still allow implicit credentials to be shared across sites. CHIPS is better in this respect, since it behaves reliably in browsers that support it, but it only covers cookies.

So browsers do seem to be moving away from cookies that span top-level contexts, but for now it is an uncoordinated shuffle, and it's unclear which mechanism will win out: blocking third-party cookies (Safari) or partitioning them (Firefox, CHIPS).
