How I got Netflix to work on Asahi Linux

I bought a MacBook a year ago. Six months ago, macOS on it threw up its hands and started acting up. I decided not to reinstall the system but to try Asahi Linux instead, and so far I have not regretted it. One thing was still annoying, though: Netflix and the official Spotify application did not work.

To be honest, I don’t really need Netflix – BitTorrent has a much better UX these days. But I am very attached to Spotify, and I prefer the interface of the official client, strange as that may seem to many. And there is no official Spotify client for Linux on aarch64 yet.

There is, of course, a web version. Or rather, there would be, if not for this error:

Playback of protected content is not enabled.

In other words, the Widevine DRM module is not installed. For the same reason, Netflix does not work either.

So let’s start our “try not to violate the DMCA 2023” challenge! Our task is to figure out how to watch Netflix on Asahi Linux without bypassing or breaking DRM. (Without that condition, the solution would fit in 280 characters.)

Widevine installation

Unfortunately, you can’t simply download and install Widevine. The only officially supported configuration is Chrome + Linux + x86_64. An attentive reader will, of course, immediately have questions: why does it work in Firefox then? Why does it work on Android – isn’t that also Linux on aarch64? Why does it work on the Raspberry Pi?

Let’s take it in order.

Why does it work in Firefox + Linux + x86_64?

Web pages access DRM modules through the Encrypted Media Extensions API. Chrome itself does not implement DRM; it delegates that to a CDM, or Content Decryption Module, library. In the case of Chrome + Linux + x86_64, that library is libwidevinecdm.so – a proprietary blob we are not allowed to look inside.

Fortunately, we know how to talk to this blob: the C++ header files are available as part of the Chromium project. This lets Firefox use exactly the same proprietary libwidevinecdm.so, taken in binary form straight from Chrome. Unfortunately, the same trick does not work for Asahi Linux – there is no ready-made library for Chrome + Linux + aarch64.

Why does it work on Android + aarch64?

In short, DRM on Android works quite differently. The APIs are very different, so simply taking a compiled Widevine module for Android won’t work, and the DMCA gets in the way of disassembling it.

Why does it work on Raspberry Pi?

As I said, Chrome + Linux + aarch64 is not officially supported.

I lied.

Chromebooks. Chromebooks run Chrome on something that is more or less Linux, and many of them are aarch64. Sooner or later people realized this and wrote a utility to pull libwidevinecdm.so out of Chromebook recovery images. As far as I know, that is exactly how the Raspberry Pi gets its Widevine build, even packaged as a .deb.

Unfortunately, there is a catch. Although Chromebooks have aarch64 processors and aarch64 Linux kernels, their entire userspace is still compiled for 32-bit armv7l. That is not a problem for the Raspberry Pi, but Apple Silicon simply cannot digest 32-bit code. A problem…

…not a problem! Or rather, no longer a problem.

A few months ago, when I first tried to get Widevine running on Asahi, that was still the case. But a couple of weeks ago the 21st century finally arrived somewhere inside Google, and on new Chromebooks userspace is compiled for aarch64. That means libwidevinecdm.so for Linux + aarch64 can now be pulled out of ChromeOS recovery images – which the Pi Foundation has already done.

So, everything is ready for…

Widevine for Arch Linux on ARM

Of course, it is not that simple. ChromeOS is not exactly Linux; among other things, its glibc carries patches that are incompatible with regular Linux. If you just drop in libwidevinecdm.so, you get a segfault somewhere in the bowels of glibc.

The glibc-widevine package solves this problem: it patches glibc specifically for compatibility with Widevine. Raspbian’s glibc has similar patches, except there they come out of the box.

You also need to rebuild Chromium with Widevine support – this is not officially supported on Linux + aarch64, so it is disabled in the standard build. There is a patch for that, too.

To recap:

  • Google publishes a ChromeOS image for aarch64 (including userspace);

  • The Pi Foundation, or someone else, uses a script to extract a Widevine blob from there;

  • the blob is packed into a .deb package for Raspbian;

  • compatibility patches are applied to glibc;

  • a patched Chromium for ARM is built with Widevine enabled;

  • ???

Problem

Asahi Linux is built with 16K memory pages. The Widevine blob only supports 4K. Of course, you could rebuild the kernel for a different page size, but right now that requires crutches and a lot of time. And you can’t just disassemble and fix a proprietary blob.

The programmer looked inside the Widevine blob. Photo in color.
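By the way, checking which page size your own kernel uses is a one-liner in Python (a quick check, not part of the patching itself):

import resource

# 4096 on a typical x86_64 distro, 16384 (16K) on Asahi Linux
print(resource.getpagesize())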

To understand exactly what the problem is, let’s look at how libwidevinecdm.so gets loaded into memory. Like other .so libraries, it is an ELF file – Executable and Linkable Format – which is parsed by the loader – the kernel or ld.so – and tells it exactly how to load the code and data into memory and prepare them for execution.

Inside an ELF file there is a Program Header Table – a table of headers describing the program’s segments. For segments of type PT_LOAD it says how to load the segment into memory and whether that memory should be readable, writable, or executable.
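To make this concrete, here is a minimal Python sketch of mine (not the actual patching script) that walks the Program Header Table and lists the PT_LOAD segments, assuming a 64-bit little-endian ELF and using the field offsets from the ELF64 spec:

import struct

PT_LOAD = 1

def load_segments(path):
    # Read the whole library into memory
    with open(path, "rb") as f:
        data = f.read()

    # ELF64 header: e_phoff at offset 0x20, e_phentsize at 0x36, e_phnum at 0x38
    e_phoff = struct.unpack_from("<Q", data, 0x20)[0]
    e_phentsize, e_phnum = struct.unpack_from("<HH", data, 0x36)

    segments = []
    for i in range(e_phnum):
        # Elf64_Phdr: p_type, p_flags, p_offset, p_vaddr, p_paddr, p_filesz, p_memsz, p_align
        entry = struct.unpack_from("<IIQQQQQQ", data, e_phoff + i * e_phentsize)
        p_type, _, p_offset, p_vaddr, _, p_filesz, p_memsz, _ = entry
        if p_type == PT_LOAD:
            segments.append((p_offset, p_vaddr, p_filesz, p_memsz))
    return segments

for p_offset, p_vaddr, *_ in load_segments("libwidevinecdm.so"):
    print(f"PT_LOAD offset={p_offset:#010x} vaddr={p_vaddr:#010x}")

(readelf -lW will show you the same table, of course.)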

There is a problem with the alignment of these segments. They are loaded into memory with mmap() calls, which require:

  • so that the segment offset from the beginning of the file is a multiple of the memory page size;

  • so that the address in memory where the segment is loaded is aligned to the page boundary.

The loader checks these constraints:

case PT_LOAD:
    /* A load command tells us to map in part of the file.
       We record the load commands and process them all later.  */
    if (__glibc_unlikely (((ph->p_vaddr - ph->p_offset)
         & (GLRO(dl_pagesize) - 1)) != 0))
      {
        errstring
    = N_("ELF load command address/offset not page-aligned");
        goto lose;
      }

In order not to become a goto loser, we need to make sure that (vaddr - offset) % pagesize == 0, where vaddr – the Virtual (memory) Address – is the address in memory where the segment gets loaded, and offset is the offset of its data in the library file.

Here is the Program Header Table of my copy of libwidevinecdm.so:

Type             Offset     VAddr      FileSize   MemSize    Align      Prot
PT_PHDR          0x00000040 0x00000040 0x00000230 0x00000230 0x00000008 r--

PT_LOAD          0x00000000 0x00000000 0x00904290 0x00904290 0x00001000 r-x
PT_LOAD          0x00904290 0x00905290 0x00007500 0x00007500 0x00001000 rw-
PT_LOAD          0x0090b790 0x0090d790 0x00000df0 0x00c36698 0x00001000 rw-

PT_TLS           0x00904290 0x00905290 0x00000018 0x00000018 0x00000008 r--
PT_DYNAMIC       0x00909618 0x0090a618 0x00000220 0x00000220 0x00000008 rw-
PT_GNU_RELRO     0x00904290 0x00905290 0x00007500 0x00007d70 0x00000001 r--
PT_GNU_EH_FRAME  0x00524a24 0x00524a24 0x000010fc 0x000010fc 0x00000004 r--
PT_GNU_STACK     0x00000000 0x00000000 0x00000000 0x00000000 0x00000000 rw-
PT_NOTE          0x00000270 0x00000270 0x00000024 0x00000024 0x00000004 r--

I set the three PT_LOAD segments apart with blank lines.

If pagesize == 0x1000 (4 KB), the constraints hold for all segments. But as soon as pagesize grows to 0x4000 (16 KB), as in Asahi Linux, the second and third PT_LOAD segments break them. For the other segments this does not matter much – they are not loaded directly via mmap().
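You can double-check this by plugging the offsets and virtual addresses of the three PT_LOAD segments from the table above into the loader’s condition:

# (p_offset, p_vaddr) of the three PT_LOAD segments from the table above
pt_load = [
    (0x00000000, 0x00000000),
    (0x00904290, 0x00905290),
    (0x0090b790, 0x0090d790),
]

for pagesize in (0x1000, 0x4000):          # 4K vs 16K pages
    for n, (offset, vaddr) in enumerate(pt_load, 1):
        ok = (vaddr - offset) % pagesize == 0
        print(f"pagesize={pagesize:#06x}  PT_LOAD #{n}: {'ok' if ok else 'not page-aligned'}")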

Solution

You cannot change the position of segments relative to each other in memory – that would break the relative offsets between segments in the code. Moreover, this is a DRM library – it gets angry at any modification of itself in memory. And the DMCA gets angry at anyone digging into its code.

Let’s look again at our ill-fated condition (vaddr - offset) % pagesize == 0. We cannot change vaddr for the reasons above. But we can change offset, if we move the segments within the library file itself.

For the first PT_LOAD nothing needs to be done, but for the second we get vaddr - offset = 0x00905290 - 0x00904290 = 0x1000. Let’s fix this by adding 0x1000 padding bytes between the first and second segments in the file, not forgetting to correct offset. Now vaddr - offset == 0x00905290 - 0x00905290 == 0. We do the same with the third segment.

When adding padding to an ELF file, a few other fields need to be corrected as well. But we are only changing the file itself – the code loaded into memory will be identical to the original running on a system with 4K memory pages. So the library’s self-checks won’t suspect a thing and won’t get angry.
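The real patching script has to touch more than one field, but the core of the trick looks roughly like this – a simplified sketch of mine, not the actual widevine-aarch64 script, with pad_segment being a name I made up:

import struct

def pad_segment(data: bytearray, file_pos: int, pad: int) -> bytearray:
    """Insert `pad` zero bytes at `file_pos` and shift the p_offset of every
    program header that starts at or after that position. Simplified: the real
    patch also has to adjust e_shoff, section header offsets, and so on, and it
    assumes the Program Header Table itself lies before file_pos."""
    e_phoff = struct.unpack_from("<Q", data, 0x20)[0]
    e_phentsize, e_phnum = struct.unpack_from("<HH", data, 0x36)

    # Splice the padding into the file image
    patched = bytearray(data[:file_pos] + b"\x00" * pad + data[file_pos:])

    for i in range(e_phnum):
        entry = e_phoff + i * e_phentsize
        p_offset = struct.unpack_from("<Q", patched, entry + 8)[0]  # p_offset is 8 bytes in
        if p_offset >= file_pos:
            struct.pack_into("<Q", patched, entry + 8, p_offset + pad)
    return patched

# e.g. add 0x1000 bytes right before the second PT_LOAD (file offset 0x00904290);
# repeat for the third PT_LOAD at its new, shifted file offset
with open("libwidevinecdm.so", "rb") as f:
    blob = bytearray(f.read())
patched = pad_segment(blob, 0x00904290, 0x1000)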

Granularity of permissions

On systems with 4K pages, each 4 KB of memory can have its own set of read/write/execute permissions. The library was compiled with that in mind. But on systems with 16K pages, such permissions have 16K granularity. This creates two problems.

First, some .text sections – executable code – and .data sections – data in memory – now share pages. The former need execute permission, the latter read and write. It is possible to grant both, but that is a potential security hole. So far I haven’t found a way around it.
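You can see the overlap directly in the numbers from the table above: the executable segment ends at virtual address 0x00904290 and the writable one begins at 0x00905290, so with 16K pages they land on the same page. A quick check of mine:

# End of the r-x segment and start of the rw- segment, from the table above
code_end, data_start = 0x00904290, 0x00905290

for pagesize in (0x1000, 0x4000):
    last_code_page = (code_end - 1) // pagesize
    first_data_page = data_start // pagesize
    print(f"pagesize={pagesize:#06x}: shared page = {last_code_page == first_data_page}")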

Second, for the same reason, I had to disable RELRO – Relocation Read-Only – another hardening measure that marks certain sections read-only after loading.

These are not vulnerabilities in themselves, but they do weaken protection against any that may exist. In theory, an attacker could write arbitrary code into such a page and then execute it. In practice, they would first need to find a vulnerability in the browser. If that scares you, you can use a separate browser just for Netflix.

ELF patching

First I tried to use LIEF, but whether because of its bugs or my own clumsiness, it didn’t work out. In the end, in a caffeine trance, I fired up hexedit and fixed everything by hand. To my surprise, it worked!

I’m not sure I can legally distribute a patched ELF, so instead I wrote a Python script you can use to patch it yourself. Just run the script, and you have a libwidevinecdm.so that Firefox on Asahi Linux can load!

Final touches

Due to the ChromeOS glibc oddities I mentioned earlier, I had to write a small library with missing functions such as __aarch64_swp4_acq_rel, which I loaded via LD_PRELOAD. That didn’t look very elegant, so I started thinking about how to embed these functions into libwidevinecdm.so itself.

Remember those 0x1000 padding bytes we added for alignment? They end up in executable memory, so that’s where I put the functions! I was afraid the library wouldn’t like it, but its checks don’t seem to notice them. The program gets the functions’ addresses through the Global Offset Table – a table of addresses that the loader fills in from data in the ELF itself. I changed that data so it points to the place where I added the new functions.

I folded all of this into my Python script, which, with the maintainer’s approval, was added to the widevine-aarch64 package. Now it is enough to install widevine-aarch64 from the AUR, and Widevine on Asahi Linux is ready to go!

Netflix Features

Spotify now worked for me, but Netflix still refused to show anything. The culprit was a User-Agent check. In the end, I changed mine to one taken from ChromeOS:

Mozilla/5.0 (X11; CrOS aarch64 15236.80.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.5414.125 Safari/537.36

The Widevine version we end up with is called L3 — the least protected level. Higher levels of protection require hardware support. Apple Silicon has the necessary hardware, but the library is not native, so that support is unavailable.

Most services will not serve 4K content at this protection level – 1080p at most. But Netflix outdid everyone here: by default it gives such clients 720p and enables 1080p only if you ask for it in a special way at the protocol level. There are browser extensions for that. I’m not sure why they did this; perhaps some clients had trouble because the L3 version lacks hardware video decoding?

Conclusion

It amuses me that I did all this not to bypass DRM but, on the contrary, to finally make it work properly. That is not right! In a sane world, legally watching content I paid for should not require a detective story like this one!

Dear Google, please add at least Ubuntu on aarch64 to your build matrix. I know it’s not difficult for you.
