In defense of DEP-9

Hello, dear reader. The topic of this post is DEP-9 and its defense. DEP-9 is the “RFC” for async in django. Admittedly, this RFC was never implemented in its entirety, so I will only defend the part that has been done: you never know how the rest might have turned out!

I will answer the question I posed in the preface right away – maintaining intrigue is not my style. The question concerns the following common belief: since django has not managed to get rid of blocking I/O in most of its ORM codebase, when you use django in an asynchronous application there is nothing left to do but call the ORM functions in a separate thread. Such leapfrogging between synchronous and asynchronous threads, the belief goes, cannot help but hurt performance, and in any case cannot compare with native asynchrony.

And here is my answer. Django does use blocking I/O when talking to the database, while some other frameworks use async. These are equivalent options; performance is the same, plus or minus – including when working with key-value stores, queues, and so on. However, there is one class of problem that has become quite common with the rise of microservices. Can you guess which?

It is a call to a third-party service, over HTTP for example. It doesn’t last so long that the user can’t wait. But it is unreasonably long in the sense that one of our threads sits idle for nothing. Worse, the response time of that third-party service is not guaranteed. It is to solve this problem that all these adapters and “run it in a different thread” tricks exist. As for performance – everything is, and remains, the same plus or minus. There are no particular problems with this approach. That is the short version; the nuances follow, so read on.

All the most interesting parts are in the nuances. We mentioned adapters. By adapters I mean things like sync_to_async (the name is not mine; as far as I know, the authors of these utilities introduced it). They make it possible to have “blotches” of blocking code inside asynchronous views – the pieces that get executed in another thread. For the purposes of this article, I’ll also ask the reader to imagine the opposite arrangement: blocking functions with “interspersed” asynchronous code. Just for variety.
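To make the “blotches” concrete, here is a minimal sketch of the pattern. I’m using the standard library’s asyncio.to_thread as a stand-in for asgiref’s sync_to_async (the mechanics are the same: the blocking callable is handed to a worker thread and awaited); the function names here are mine, for illustration:

```python
import asyncio
import threading

def blocking_orm_call(pk):
    # stand-in for a blocking ORM query; also report which thread ran it
    return {"pk": pk, "thread": threading.current_thread().name}

async def myview():
    # the view itself runs on the event-loop thread; the blocking
    # "blotch" is shipped off to a worker thread and awaited,
    # much as sync_to_async(blocking_orm_call)(1) would do
    row = await asyncio.to_thread(blocking_orm_call, 1)
    return row

row = asyncio.run(myview())
# the blocking call ran on an executor thread, not the loop thread
```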

With the reader’s permission, I will use my own new “syntax” to illustrate – because there can’t be an article on Habr without something exotic. In short, everything under the asynchronous context manager io is executed in another thread:

async def myview(request):
    # blocking code
    async with io:
        # this goes to separate thread
        async with httpx.AsyncClient() as client:
            response = await client.get(url)
    # blocking code

“io” – as in “with asyncio”. Don’t pay attention to the async keyword before the function: the function is, in fact, blocking; here “async” just means the function is a generator. Yes, it’s a silly syntax. If you are interested, you can read about it in my New Year’s article. Within this article the reader has every moral right to disagree with it: the syntax is not important, and we could just as well use a separate function for the asynchronous code.

What happens in this example? Our view contains a request to a third-party service – the same kind as before, long and with no guaranteed response time. We solve the problem by running it in a different thread, splitting the view into 3 sections: blocking, asynchronous, and blocking again. Call it “large-block asynchrony.” You can think of these as 3 different functions; generally speaking, all 3 can run on different threads.
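If we drop my exotic syntax, the three sections can literally be written as three functions and stitched together by hand. A minimal sketch, where the section names are mine and asyncio.sleep stands in for the long HTTP call:

```python
import asyncio

def section1():
    # 1st blocking section: e.g. read request data via the ORM
    return "params"

async def section2(params):
    # asynchronous section: the long call to the third-party service
    await asyncio.sleep(0.01)
    return f"response({params})"

def section3(response):
    # 3rd blocking section: e.g. save results, render the template
    return f"rendered {response}"

async def run_view():
    loop = asyncio.get_running_loop()
    # each blocking section goes to the thread pool;
    # the async section runs right here, on the event loop
    params = await loop.run_in_executor(None, section1)
    response = await section2(params)
    return await loop.run_in_executor(None, section3, response)

result = asyncio.run(run_view())
```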

How are they executed? Most likely, we will have a pool of worker threads that execute the blocking code – say 2 or 3 of them. We also need a thread that executes the asynchronous code, with an event loop and coroutines. For now let’s set the existing standards aside: we won’t limit ourselves to WSGI, ASGI, or anything else.

  1. The 1st blocking section is executed and starts the asynchronous task represented by the 2nd section.

  2. The worker thread that executed the 1st section is now free and picks up the blocking sections of other views.

  3. Meanwhile, the asynchronous 2nd section completes and queues the 3rd section for execution in the thread pool.

So the 3rd section sits in the thread pool’s queue, waiting for a free worker – it has to wait a bit. Within the blocking approach this doesn’t bother us much: what matters is that the workers are loaded as fully as possible and that the load is divided into sufficiently small parts. In that case we can expect decent performance in the end.
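The three steps above can be sketched with stdlib primitives: an event loop running in its own thread, a small worker pool, and a done-callback that queues the final blocking section back into the pool. Everything here (names, pool size, the Future used to observe the result) is illustrative, not Django’s actual machinery:

```python
import asyncio
import concurrent.futures
import threading

# the "async" thread: an event loop running forever in the background
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

# 2 workers for the blocking sections
pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)
done = concurrent.futures.Future()  # only so we can observe the result

async def section2():
    await asyncio.sleep(0.01)  # the long third-party call
    return "response"

def section3(response):
    # 3rd (blocking) section, queued back into the pool in step 3
    done.set_result(f"handled {response}")

def section1():
    # 1st (blocking) section: hand section2 to the loop thread, then
    # return immediately -- this worker is now free for other views
    fut = asyncio.run_coroutine_threadsafe(section2(), loop)
    fut.add_done_callback(lambda f: pool.submit(section3, f.result()))

pool.submit(section1)
result = done.result(timeout=5)  # "handled response"
loop.call_soon_threadsafe(loop.stop)
```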

Given that the reason we split a function into multiple sections at all is the presence of long operations (the HTTP request, in our example), the time spent “switching” between threads can be neglected. But pay attention: the number of blocking sections inside a function matters, because it is before executing a blocking section that we wait for a worker to free up. So if you have the opportunity to eliminate some blocking section, do it – that is advice that applies to “real life,” i.e. your django project.

Let’s get back to reality and remember that we usually deal with ASGI applications and asynchronous views. What does that change? First of all, asynchronous views start and end with an asynchronous section, which means there are extra sections. That by itself is a small problem: we said you only need to economize on blocking sections. But wait – now we notice one extra blocking section! Guess which one? The first – the one that came first in our earlier example. Now we have to queue it and wait until a worker is free, whereas before we started executing it right away.
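The cost of that extra blocking section is precisely the time spent waiting for a free worker. Here is a toy measurement with a deliberately tiny pool of one worker, already occupied by “someone else’s view” (the timings and names are illustrative):

```python
import concurrent.futures
import time

pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def someone_elses_section():
    time.sleep(0.2)  # another view's blocking section hogging the worker

def first_blocking_section():
    return time.monotonic()

pool.submit(someone_elses_section)       # the only worker is now busy
queued_at = time.monotonic()
started_at = pool.submit(first_blocking_section).result()
waited = started_at - queued_at          # roughly the 0.2 s sleep
```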

Well, the approach django uses is not perfect – nothing in this world is. There is, however, room for improvement.

The approach DEP-9 takes is adequate. It does not conflict with asynchronous frameworks in any way – in particular, with my greenlet-based project, fibers. Speaking of greenlets, by the way, there is also a misconception that because greenlets are lightweight and threads are heavy, a greenlet-based version must perform much better. Here, for example, is what the author of SQLAlchemy wrote in correspondence with me (at that time I was not planning to use greenlets and was arguing to him that greenlets are bad):

greenlets are extremely lightweight and are in practical terms not even measurable regarding performance overhead. The performance hit is when you are using threadpools, which I recall seeing that Django was using right now for asyncio. That’s a huge hit. the single greenlet context switch, not at all.

To be fair, zzzeek is not a django expert, so this was just his guess – and it is wrong. Greenlets really are lightweight, but we cannot compare greenlets in SQLAlchemy with threads in django: that’s apples and oranges. In the greenlet case, we “mix” asynchronous code with code that contains no I/O at all; in the django case, we mix asynchronous code with blocking code – two very different things. So the real point lies elsewhere: SQLAlchemy with greenlets uses asynchronous I/O, while django uses blocking I/O.

In conclusion, I think asynchronous I/O will appear in django too, and soon. Of course, I’m talking about my fiber project: nothing else is visible on the horizon. It may even become the more popular option. There is every reason to think so:

  1. Asynchronous services are already popular, and FastAPI is async-only. “Even django has become asynchronous” sounds like yet another argument in their favor.

  2. Commercial projects gravitate toward uniformity. Why deploy half the application as WSGI and half as ASGI when you can deploy only ASGI and everything works?

  3. When it comes to a stack of libraries or frameworks, it makes sense to support either only blocking or only asynchronous I/O, but not both – otherwise every library would have to ship in two versions. And of course people will choose asynchronous, because “it’s sometimes needed anyway.”

If we were talking about some other framework, I would assume that once an asynchronous version appears, it will quickly slide into async-only. In django’s case, though, I honestly find that hard to imagine. In short, my prediction is that both kinds of I/O will be well supported – to the benefit of the community.

I hope my article has brought a little more clarity to how “async” works in django, and that there will be less arguing about it within your team. Or maybe more? Be sure to write in the comments.
