Scaling GraphQL Subscriptions in Go Using Epoll and Event-Driven Architecture

“Make it work, make it right, make it fast.” You've probably heard this mantra before. It's a good one to keep you from over-engineering a solution. What I've found is that making it right is usually enough: if you make it right, it's usually also fast.

When we started implementing GraphQL subscriptions in Cosmo Router a few months ago, we focused on making it work. That was good enough for the first iteration; it allowed us to get feedback from our users and better understand the problem space.

In the process of making it right, we reduced the number of goroutines by 99% and the memory consumption by 90% without sacrificing performance. In this article, I'll explain how we achieved this. Using Epoll/Kqueue played a big role, but so did rethinking the architecture to be more event-driven.

Let's take a step back so we're all on the same page.

What are GraphQL Subscriptions?

GraphQL Subscriptions are a way to subscribe to events happening in your application. For example, you can subscribe to the creation of new comments: when a new comment is posted, the server pushes a notification to you. This is a very powerful feature that lets you build real-time applications.

How do GraphQL subscriptions work?

GraphQL subscriptions are typically implemented using WebSockets, but they can also operate over HTTP/2 using Server-Sent Events (SSE).

The client opens a WebSocket connection to the server by sending an HTTP upgrade request. The server upgrades the connection and negotiates the GraphQL subscription protocol. There are two main protocols currently in use: graphql-ws and graphql-transport-ws.

Once the GraphQL subscription protocol is negotiated, the client sends a message to start the subscription, and the server sends a message to the client whenever new data is available. To end the subscription, either the client or the server sends a message to stop it.
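To make this concrete, here is roughly what the exchange looks like with the graphql-transport-ws protocol (the subscription query and payloads are made up for illustration; --> marks client-to-server frames, <-- server-to-client):

--> {"type":"connection_init"}
<-- {"type":"connection_ack"}
--> {"id":"1","type":"subscribe","payload":{"query":"subscription { commentAdded { id body } }"}}
<-- {"id":"1","type":"next","payload":{"data":{"commentAdded":{"id":"42","body":"Nice post!"}}}}
--> {"id":"1","type":"complete"}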

How do GraphQL federated subscriptions work?

Federated GraphQL subscriptions are a bit more complex than regular GraphQL subscriptions. There is not only a client and a server, but also a gateway or router between them. However, the data flow is very similar.

The client opens a WebSocket connection to the gateway. The gateway then opens a WebSocket connection to the origin server. The gateway then forwards all messages between the client and the origin server.

Instead of configuring and negotiating one GraphQL subscription protocol, the gateway configures and negotiates two.

What are the limitations of using classic GraphQL subscriptions with federation?

The idea behind GraphQL federation is that the resolution of an entity's fields can be split across multiple services. An entity is a type defined in a GraphQL schema. Entities carry a @key directive, which defines the fields used to identify the entity; for example, the entity User might have the directive @key(fields: "id"). Keys are used to resolve (join) entity fields across services.

The problem with entities and subscriptions is that a subscription must have a single root field, which ties the “cancellation” of the subscription to a single service. So if multiple services contribute fields to our User entity, they must coordinate with each other to cancel the subscription.

This creates dependencies between subgraphs, which we want to avoid for two reasons. First, it means we have to implement some kind of coordination mechanism between subgraphs. Second, it means the teams owning those subgraphs can no longer move independently and must coordinate deployments, etc. Both of these defeat the purpose of using GraphQL federation.

Introduction to Event-Driven Federated Subscriptions (EDFS)

To address the limitations of classic federated GraphQL subscriptions, we introduced Event-Driven Federated Subscriptions (EDFS). In short, EDFS allows you to drive subscriptions from an event stream such as Kafka or NATS. This decouples the root subscription field from the subgraphs, solving the limitations above. You can read the full EDFS announcement here.

Setting up EDFS correctly

When we started implementing EDFS in Cosmo Router, we focused on making it work. Our “naive” implementation was very simple:

  1. The client opens a WebSocket connection to the router

  2. The client and the router negotiate the GraphQL subscription protocol

  3. The client sends a message to start the subscription

  4. The router subscribes to the event stream

  5. The router sends a message to the client whenever it receives a new event

We made it work, but there were a few problems with this approach. To quantify them, we set up a benchmark that connects 10,000 clients and measured some statistics with pprof (a sketch of such a benchmark client follows below). We configured our router to expose pprof's HTTP server and started benchmarking.
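The actual benchmark setup isn't part of this article, but a minimal client sketch might look like the following. It assumes the router listens on localhost:3002/graphql and speaks graphql-transport-ws; the endpoint and the subscription query are illustrative, not taken from the original setup.

package main

import (
	"log"

	"github.com/gorilla/websocket"
)

func main() {
	const numClients = 10000
	dialer := websocket.Dialer{
		Subprotocols: []string{"graphql-transport-ws"},
	}
	conns := make([]*websocket.Conn, 0, numClients)
	for i := 0; i < numClients; i++ {
		conn, _, err := dialer.Dial("ws://localhost:3002/graphql", nil)
		if err != nil {
			log.Fatalf("dial client %d: %v", i, err)
		}
		// Negotiate the protocol ...
		if err := conn.WriteJSON(map[string]any{"type": "connection_init"}); err != nil {
			log.Fatalf("init client %d: %v", i, err)
		}
		var ack map[string]any
		if err := conn.ReadJSON(&ack); err != nil {
			log.Fatalf("ack client %d: %v", i, err)
		}
		// ... then start one subscription per client.
		if err := conn.WriteJSON(map[string]any{
			"id":   "1",
			"type": "subscribe",
			"payload": map[string]any{
				"query": "subscription { employeeUpdated(employeeID: 1) { id } }",
			},
		}); err != nil {
			log.Fatalf("subscribe client %d: %v", i, err)
		}
		conns = append(conns, conn)
	}
	log.Printf("connected %d clients, ready to profile", len(conns))
	select {} // keep the connections open while we measure with pprof
}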

Using pprof to measure a running Go program

To measure a running Go program, we can use pprof's HTTP handlers. You can enable them like this:

//go:build pprof

package profile

import (
	"flag"
	"log"
	"net/http"
	"net/http/pprof"
	"strconv"
)

var (
	pprofPort = flag.Int("pprof-port", 6060, "Port for pprof server, set to zero to disable")
)

func initPprofHandlers() {
	// Allow compiling in pprof but still disabling it at runtime
	if *pprofPort == 0 {
		return
	}
	mux := http.NewServeMux()
	mux.HandleFunc("/debug/pprof/", pprof.Index)
	mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
	mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
	mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
	mux.HandleFunc("/debug/pprof/trace", pprof.Trace)

	server := &http.Server{
		Addr:    ":" + strconv.Itoa(*pprofPort),
		Handler: mux, // serve the pprof routes registered on our mux
	}
	log.Printf("starting pprof server on port %d - do not use this in production, it is a security risk", *pprofPort)
	go func() {
		if err := server.ListenAndServe(); err != nil {
			log.Fatal("error starting pprof server", err)
		}
	}()
}

Now we can run our program, connect 10,000 clients and run the following command to measure the number of goroutines:

$ go tool pprof http://localhost:6060/debug/pprof/goroutine

Additionally, we can measure heap allocation/memory consumption:

$ go tool pprof http://localhost:6060/debug/pprof/heap
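Tip: pprof can also serve an interactive web UI, which is often more convenient than the console:

$ go tool pprof -http=:8080 http://localhost:6060/debug/pprof/heap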

Let's look at the results!

Goroutines (naive EDFS implementation)

First, let's look at the number of goroutines created.

[figure: goroutine profile of the naive implementation]

That's a lot of goroutines indeed! It looks like we are creating 4 goroutines per client and subscription. Let's take a closer look at what's going on.

All 4 goroutines call runtime.gopark, which means they are waiting for something. 3 of them call runtime.selectgo, meaning they are waiting to receive from a channel. The remaining one calls runtime.netpollblock, which means it is waiting for a network event.

Of the 3 goroutines waiting on channels, the first calls core.(*wsConnectionWrapper).ReadJSON, so it is waiting for the client to send a message. The second calls resolve.(*Resolver).ResolveGraphQLSubscription, so it is waiting on a channel for the next event to resolve. The third calls pubsub.(*natsPubSub).Subscribe, which means it is waiting for a message from the event stream.

There's a lot of waiting going on here, if you ask me. You may have heard that goroutines are cheap and that you can create millions of them. You can indeed create a lot of goroutines, but they are not free: 10,000 clients at 4 goroutines each is 40,000 goroutines, and at Go's minimum stack size of 2 KB that is already around 80 MB of stacks, before counting anything those goroutines keep alive. Let's look at memory consumption to see how the number of goroutines affects it.

Heap allocation/memory consumption (naive EDFS implementation)

[figure: heap profile of the naive implementation]

The heap is almost 2 GB, which means we are requesting about 3.5 GB of memory from the OS (this can be verified with top). Let's look at the allocations to see where all this memory goes: 92% of it is allocated by the function resolve.NewResolvable, which is called by resolve.(*Resolver).ResolveGraphQLSubscription, which in turn is called by core.(*GraphQLHandler).ServeHTTP. The rest is insignificant.

Next, let's compare this to an optimized EDFS implementation.

Goroutines (optimized EDFS implementation)

[figure: goroutine profile of the optimized implementation]

We now have only 42 goroutines left, a 99% reduction! How is it possible to do the same work with 99% fewer goroutines? We'll come back to that a little later. Let's look at memory consumption first.

Heap Allocation/Memory Consumption (Optimized EDFS Implementation)

[figure: heap profile of the optimized implementation]

The heap is down to 200 MB, a 90% reduction! The main contributors now are bufio.NewReaderSize and bufio.NewWriterSize, which are tied to http.(*conn).serve. It's worth noting that these allocations were barely visible before, because they were dwarfed by the others.

Root cause analysis: How to reduce the number of goroutines and memory consumption for GraphQL subscriptions?

We need to answer two main questions:

  1. Why do we create 4 goroutines per client and subscription?

  2. Why does resolve.NewResolvable allocate so much memory?

Let's tackle them one by one, starting with the simplest.

Don't block in the ServeHTTP method

Here is the code of the WebsocketHandler's ServeHTTP method:

func (h *WebsocketHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	if isWsUpgradeRequest(r) {
		upgrader := websocket.Upgrader{
			HandshakeTimeout: 5 * time.Second,
			// TODO: WriteBufferPool,
			EnableCompression: true,
			Subprotocols:      wsproto.Subprotocols(),
			CheckOrigin: func(_ *http.Request) bool {
				// Allow any origin to subscribe via WS
				return true
			},
		}
		c, err := upgrader.Upgrade(w, r, nil)
		if err != nil {
			// Upgrade() sends an error response already, just log the error
			h.logger.Warn("upgrading websocket", zap.Error(err))
			return
		}
		// Wrap the connection and negotiate the subscription protocol.
		// (Both helpers are elided in this snippet; the wrapper is the same
		// one used in the optimized version below, without a buffered reader.)
		conn := newWSConnectionWrapper(c, nil)
		protocol, err := wsproto.NewProtocol(c.Subprotocol(), conn)
		if err != nil {
			h.logger.Warn("negotiating websocket protocol", zap.Error(err))
			_ = c.Close()
			return
		}
		connectionHandler := NewWebsocketConnectionHandler(h.ctx, WebSocketConnectionHandlerOptions{
			IDs:            h.ids,
			Parser:         h.parser,
			Planner:        h.planner,
			GraphQLHandler: h.graphqlHandler,
			Metrics:        h.metrics,
			ResponseWriter: w,
			Request:        r,
			Connection:     conn,
			Protocol:       protocol,
			Logger:         h.logger,
		})
		defer connectionHandler.Close()
		connectionHandler.Serve()
		return
	}
	h.next.ServeHTTP(w, r)
}

The ServeHTTP method blocks until connectionHandler.Serve() returns. This was convenient because it allowed us to use a defer statement to close the connection, and we could use r.Context() to propagate the context, since it is not canceled until ServeHTTP returns.

The problem with this approach is that it keeps a goroutine alive for the entire duration of the subscription. Since Go's net/http package creates a new goroutine for each request, this means we keep one goroutine per connected client, even though we have already hijacked (upgraded) the connection, so this is completely unnecessary.

But instead of just not blocking, we can do one better and eliminate yet another goroutine.

Don't read from a connection when you don't need to

Do we really need one goroutine per connected client blocking on reads? And how is it that single-threaded servers like nginx or Node.js can handle thousands of concurrent connections?

The answer is that these servers are event-driven. They don't block on the connection; instead, they wait for an event to occur. This is usually done using Epoll/Kqueue on Linux/BSD and IOCP on Windows.

With Epoll/Kqueue we can delegate the wait for an event to the operating system (OS). We can tell the OS to notify us when there is data to read from the connection.
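On Linux, the underlying mechanism looks roughly like the following sketch using golang.org/x/sys/unix. The router actually uses a small library that wraps these calls, so this is only meant to illustrate the idea:

package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

// pollLoop registers socket file descriptors with epoll, then lets the OS
// tell us which connections are readable, all on a single goroutine.
func pollLoop(fds []int) error {
	epfd, err := unix.EpollCreate1(0)
	if err != nil {
		return err
	}
	defer unix.Close(epfd)

	for _, fd := range fds {
		event := unix.EpollEvent{Events: unix.EPOLLIN, Fd: int32(fd)}
		if err := unix.EpollCtl(epfd, unix.EPOLL_CTL_ADD, fd, &event); err != nil {
			return err
		}
	}

	events := make([]unix.EpollEvent, 128)
	for {
		// Blocks until at least one registered fd is readable (-1 = no timeout).
		n, err := unix.EpollWait(epfd, events, -1)
		if err != nil {
			return err
		}
		for i := 0; i < n; i++ {
			fmt.Printf("fd %d is ready to read\n", events[i].Fd)
		}
	}
}

func main() {
	// Pass the fds of accepted connections here.
	log.Fatal(pollLoop(nil))
}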

The typical pattern for WebSocket connections with GraphQL subscriptions is that the client initiates the connection, sends a message to start the subscription, and then waits for the server to send messages. So there isn't much data being exchanged, which makes this a perfect fit for Epoll/Kqueue.

Let's see how we can use Epoll/Kqueue to manage our WebSocket connections:

func (h *WebsocketHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	var subProtocol string
	upgrader := ws.HTTPUpgrader{
		Timeout: time.Second * 5,
		Protocol: func(s string) bool {
			if wsproto.IsSupportedSubprotocol(s) {
				subProtocol = s
				return true
			}
			return false
		},
	}
	c, rw, _, err := upgrader.Upgrade(r, w)
	if err != nil {
		requestLogger.Warn("Websocket upgrade", zap.Error(err))
		_ = c.Close()
		return
	}

	// After successful upgrade, we can't write to the response writer anymore
	// because it's hijacked by the websocket connection

	conn := newWSConnectionWrapper(c, rw)
	protocol, err := wsproto.NewProtocol(subProtocol, conn)
	if err != nil {
		requestLogger.Error("Create websocket protocol", zap.Error(err))
		_ = c.Close()
		return
	}

	handler := NewWebsocketConnectionHandler(h.ctx, WebSocketConnectionHandlerOptions{
		Parser:         h.parser,
		Planner:        h.planner,
		GraphQLHandler: h.graphqlHandler,
		Metrics:        h.metrics,
		ResponseWriter: w,
		Request:        r,
		Connection:     conn,
		Protocol:       protocol,
		Logger:         h.logger,
		Stats:          h.stats,
		ConnectionID:   h.connectionIDs.Inc(),
		ClientInfo:     clientInfo,
		InitRequestID:  requestID,
	})
	err = handler.Initialize()
	if err != nil {
		requestLogger.Error("Initializing websocket connection", zap.Error(err))
		handler.Close()
		return
	}

	// Only when epoll is available. On Windows, epoll is not available
	if h.epoll != nil {
		err = h.addConnection(c, handler)
		if err != nil {
			requestLogger.Error("Adding connection to epoll", zap.Error(err))
			handler.Close()
		}
		return
	}

	// Handle messages sync when epoll is not available

	go h.handleConnectionSync(handler)
}

If epoll is available, we add the connection to the epoll instance and return. Otherwise, as a fallback, we handle the connection with blocking reads in its own goroutine; but at least we no longer block the ServeHTTP method.

Here is the code to use the epoll instance:

func (h *WebsocketHandler) runPoller() {
	done := h.ctx.Done()
	defer func() {
		h.connectionsMu.Lock()
		_ = h.epoll.Close(true)
		h.connectionsMu.Unlock()
	}()
	for {
		select {
		case <-done:
			return
		default:
			connections, err := h.epoll.Wait(128)
			if err != nil {
				h.logger.Warn("Epoll wait", zap.Error(err))
				continue
			}
			for i := 0; i < len(connections); i++ {
				if connections[i] == nil {
					continue
				}
				conn := connections[i].(epoller.ConnImpl)
				// check if the connection is still valid
				fd := socketFd(conn)
				h.connectionsMu.RLock()
				handler, exists := h.connections[fd]
				h.connectionsMu.RUnlock()
				if !exists {
					continue
				}

				err = handler.conn.conn.SetReadDeadline(time.Now().Add(h.readTimeout))
				if err != nil {
					h.logger.Debug("Setting read deadline", zap.Error(err))
					h.removeConnection(conn, handler, fd)
					continue
				}

				msg, err := handler.protocol.ReadMessage()
				if err != nil {
					if isReadTimeout(err) {
						continue
					}
					h.logger.Debug("Client closed connection")
					h.removeConnection(conn, handler, fd)
					continue
				}

				err = h.HandleMessage(handler, msg)
				if err != nil {
					h.logger.Debug("Handling websocket message", zap.Error(err))
					if errors.Is(err, errClientTerminatedConnection) {
						h.removeConnection(conn, handler, fd)
						return
					}
				}
			}
		}
	}
}

We use one single goroutine to listen for events from the epoll instance. When a connection has an event, we check whether the connection is still valid; if so, we read the message and process it. We use a thread pool behind the scenes to process messages so that the epoll goroutine doesn't block for too long. One detail worth calling out is the socketFd helper, sketched below. The full implementation can be found on GitHub.
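socketFd isn't shown above. One way to obtain the file descriptor backing a net.Conn, and roughly what the epoll wrapper libraries do, is via the syscall.Conn interface; this is a sketch, not the router's exact helper:

import (
	"net"
	"syscall"
)

// socketFd extracts the OS file descriptor behind a net.Conn.
// It returns -1 if the connection doesn't expose one.
func socketFd(conn net.Conn) int {
	sc, ok := conn.(syscall.Conn)
	if !ok {
		return -1
	}
	raw, err := sc.SyscallConn()
	if err != nil {
		return -1
	}
	fd := -1
	_ = raw.Control(func(f uintptr) {
		fd = int(f)
	})
	return fd
}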

With this approach, we have already reduced the number of goroutines by 50%. We are now down to 2 goroutines per client and subscription: we no longer block in ServeHTTP and no longer block reading from the connection.

This leaves 3 problems: we need to eliminate the 2 remaining goroutines per subscription, and we need to reduce the memory consumed by resolve.NewResolvable. As it turns out, all of these problems are connected.

Blocking reads versus event-based architecture

Let's look at the naive implementation of ResolveGraphQLSubscription:

func (r *Resolver) ResolveGraphQLSubscription(ctx *Context, subscription *GraphQLSubscription, writer FlushWriter) (err error) {

	buf := pool.BytesBuffer.Get()
	defer pool.BytesBuffer.Put(buf)
	if err := subscription.Trigger.InputTemplate.Render(ctx, nil, buf); err != nil {
		return err
	}
	rendered := buf.Bytes()
	subscriptionInput := make([]byte, len(rendered))
	copy(subscriptionInput, rendered)

	if len(ctx.InitialPayload) > 0 {
		subscriptionInput, err = jsonparser.Set(subscriptionInput, ctx.InitialPayload, "initial_payload")
		if err != nil {
			return err
		}
	}

	if ctx.Extensions != nil {
		subscriptionInput, err = jsonparser.Set(subscriptionInput, ctx.Extensions, "body", "extensions")
		if err != nil {
			return err
		}
	}

	c, cancel := context.WithCancel(ctx.Context())
	defer cancel()
	resolverDone := r.ctx.Done()

	next := make(chan []byte)

	cancellableContext := ctx.WithContext(c)

	if err := subscription.Trigger.Source.Start(cancellableContext, subscriptionInput, next); err != nil {
		if errors.Is(err, ErrUnableToResolve) {
			msg := []byte(`{"errors":[{"message":"unable to resolve"}]}`)
			return writeAndFlush(writer, msg)
		}
		return err
	}

	t := r.getTools()
	defer r.putTools(t)

	for {
		select {
		case <-resolverDone:
			return nil
		case data, ok := <-next:
			if !ok {
				return nil
			}
			t.resolvable.Reset()
			if err := t.resolvable.InitSubscription(ctx, data, subscription.Trigger.PostProcessing); err != nil {
				return err
			}
			if err := t.loader.LoadGraphQLResponseData(ctx, subscription.Response, t.resolvable); err != nil {
				return err
			}
			if err := t.resolvable.Resolve(ctx.ctx, subscription.Response.Data, writer); err != nil {
				return err
			}
			writer.Flush()
		}
	}
}

There are several problems with this implementation:

  1. It blocks

  2. It holds on to a buffer for the entire duration of the subscription, even when there is no data to send

  3. We create one trigger per subscription, even though the same trigger could serve multiple subscriptions

  4. We block reading from the trigger

  5. The getTools function allocates a lot of memory. Since the parent function blocks for the duration of the subscription, this memory is not freed until the subscription ends, and this is where most of the memory is allocated

To solve these problems, we need to remember Rob Pike's famous words about Go:

Don't communicate by sharing memory, share memory by communicating.

Instead of one goroutine blocking on a channel and another blocking on the trigger, with all that memory being allocated while we block, we can have a single goroutine that waits for events such as: client subscribes, client unsubscribes, trigger has data, trigger is done, etc.

This one goroutine manages all events in a single loop, which is actually quite simple to implement and maintain. Additionally, we can use a thread pool to handle the events so that we don't block the main loop for too long. This is very similar to the epoll approach we used for the WebSocket connections, isn't it? The events flowing through this loop look roughly like the sketch below.
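Pieced together from the snippets that follow, the event types consumed by the loop have roughly this shape; this is an inferred sketch, not the exact implementation:

type subscriptionEventKind int

const (
	subscriptionEventKindUnknown subscriptionEventKind = iota
	subscriptionEventKindAddSubscription
	subscriptionEventKindRemoveSubscription
	subscriptionEventKindRemoveClient
	subscriptionEventKindTriggerUpdate
	subscriptionEventKindTriggerDone
)

// subscriptionEvent is what the resolver's main loop consumes; only the
// fields relevant to its kind are set.
type subscriptionEvent struct {
	triggerID       uint64
	kind            subscriptionEventKind
	data            []byte                 // payload of a trigger update
	id              SubscriptionIdentifier // subscription/client to remove
	addSubscription *addSubscription       // details of a new subscriber
}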

Let's look at the optimized implementation, AsyncResolveGraphQLSubscription:

func (r *Resolver) AsyncResolveGraphQLSubscription(ctx *Context, subscription *GraphQLSubscription, writer SubscriptionResponseWriter, id SubscriptionIdentifier) (err error) {
	if subscription.Trigger.Source == nil {
		return errors.New("no data source found")
	}
	input, err := r.subscriptionInput(ctx, subscription)
	if err != nil {
		msg := []byte(`{"errors":[{"message":"invalid input"}]}`)
		return writeFlushComplete(writer, msg)
	}
	xxh := pool.Hash64.Get()
	defer pool.Hash64.Put(xxh)
	err = subscription.Trigger.Source.UniqueRequestID(ctx, input, xxh)
	if err != nil {
		msg := []byte(`{"errors":[{"message":"unable to resolve"}]}`)
		return writeFlushComplete(writer, msg)
	}
	uniqueID := xxh.Sum64()
	select {
	case <-r.ctx.Done():
		return ErrResolverClosed
	case r.events <- subscriptionEvent{
		triggerID: uniqueID,
		kind:      subscriptionEventKindAddSubscription,
		addSubscription: &addSubscription{
			ctx:     ctx,
			input:   input,
			resolve: subscription,
			writer:  writer,
			id:      id,
		},
	}:
	}
	return nil
}

We added a function to the Trigger interface to generate a unique ID, which is used to uniquely identify the trigger. Internally, this function takes into account the input, the request context, headers, extra fields, etc., to ensure that we don't accidentally share the same trigger between subscriptions that shouldn't share one.

Once we have the unique ID for the trigger, we send an event to the main loop to “subscribe” to that trigger. That's all we do in this function: we no longer block, and there are no heavy allocations.

Next, let's look at the main loop:

func (r *Resolver) handleEvents() {
	done := r.ctx.Done()
	for {
		select {
		case <-done:
			r.handleShutdown()
			return
		case event := <-r.events:
			r.handleEvent(event)
		}
	}
}

func (r *Resolver) handleEvent(event subscriptionEvent) {
	switch event.kind {
	case subscriptionEventKindAddSubscription:
		r.handleAddSubscription(event.triggerID, event.addSubscription)
	case subscriptionEventKindRemoveSubscription:
		r.handleRemoveSubscription(event.id)
	case subscriptionEventKindRemoveClient:
		r.handleRemoveClient(event.id.ConnectionID)
	case subscriptionEventKindTriggerUpdate:
		r.handleTriggerUpdate(event.triggerID, event.data)
	case subscriptionEventKindTriggerDone:
		r.handleTriggerDone(event.triggerID)
	case subscriptionEventKindUnknown:
		panic("unknown event")
	}
}

This is a simple loop that runs in a single goroutine, waiting for events until the context is canceled. When an event is received, it is processed by calling the appropriate handler function.

There is something powerful about this pattern that may not be obvious at first glance: if we run this loop in a single goroutine, we don't need any locks to synchronize access to the triggers. For example, when we add a subscriber to a trigger or remove one, we don't need a lock because we always do it on the same goroutine. A sketch of adding a subscriber in this style follows.
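For illustration, the add-subscription handler might look like the following sketch; the types and field names are inferred from the snippets in this article, not the exact implementation:

// Runs only on the event-loop goroutine, so the triggers map and the
// per-trigger subscription maps can be mutated without any locking.
func (r *Resolver) handleAddSubscription(triggerID uint64, add *addSubscription) {
	trig, ok := r.triggers[triggerID]
	if !ok {
		// First subscriber: create the trigger and start the underlying
		// event-stream subscription (NATS, Kafka, ...) exactly once.
		trig = &trigger{subscriptions: map[*Context]*sub{}}
		r.triggers[triggerID] = trig
	}
	trig.subscriptions[add.ctx] = &sub{
		id:      add.id,
		resolve: add.resolve,
		writer:  add.writer,
	}
}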

Let's see how we handle a trigger update:

func (r *Resolver) handleTriggerUpdate(id uint64, data []byte) {
	trig, ok := r.triggers[id]
	if !ok {
		return
	}
	if r.options.Debug {
		fmt.Printf("resolver:trigger:update:%d\n", id)
	}
	wg := &sync.WaitGroup{}
	wg.Add(len(trig.subscriptions))
	trig.inFlight = wg
	for c, s := range trig.subscriptions {
		c, s := c, s
		r.triggerUpdatePool.Submit(func() {
			r.executeSubscriptionUpdate(c, s, data)
			wg.Done()
		})
	}
}

func (r *Resolver) executeSubscriptionUpdate(ctx *Context, sub *sub, sharedInput []byte) {
	sub.mux.Lock()
	sub.pendingUpdates++
	sub.mux.Unlock()
	if r.options.Debug {
		fmt.Printf("resolver:trigger:subscription:update:%d\n", sub.id.SubscriptionID)
		defer fmt.Printf("resolver:trigger:subscription:update:done:%d\n", sub.id.SubscriptionID)
	}
	t := r.getTools()
	defer r.putTools(t)
	input := make([]byte, len(sharedInput))
	copy(input, sharedInput)
	if err := t.resolvable.InitSubscription(ctx, input, sub.resolve.Trigger.PostProcessing); err != nil {
		return
	}
	if err := t.loader.LoadGraphQLResponseData(ctx, sub.resolve.Response, t.resolvable); err != nil {
		return
	}
	sub.mux.Lock()
	sub.pendingUpdates--
	defer sub.mux.Unlock()
	if sub.writer == nil {
		return // subscription was already closed by the client
	}
	if err := t.resolvable.Resolve(ctx.ctx, sub.resolve.Response.Data, sub.writer); err != nil {
		return
	}
	sub.writer.Flush()
	if r.reporter != nil {
		r.reporter.SubscriptionUpdateSent()
	}
}

In the first function, you can see how we mutate the trigger and subscription structs. Remember, all of this still happens on the main loop goroutine, so it's safe to do without locks.

We create a wait group to prevent the trigger from being closed before all subscribers have been notified of the update. It is used in another function, when we shut the trigger down.

Next, you can see that we submit the actual resolution of the update for each subscriber to the thread pool. This is the only place where we use concurrency in the event handling, and it has two advantages. First, and most obviously, we don't block the main loop while updates are being resolved. Second, but equally important, we can limit the number of updates being resolved at any one time. This is very important because, as you'll recall, the previous implementation allocated a lot of memory in getTools precisely because we didn't limit it.

Note that we call getTools only in the executeSubscriptionUpdate function, when we actually resolve an update. That function is very short-lived, and since we use a thread pool for execution and sync.Pool for the tools, we can reuse the tools effectively and thus reduce overall memory consumption.
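For reference, the getTools / putTools pair is backed by a sync.Pool, roughly like this sketch (the tools struct and the pool initialization are simplified assumptions, not the exact implementation):

// tools bundles the per-update allocations that previously lived for the
// whole duration of a subscription in the naive implementation.
type tools struct {
	resolvable *Resolvable
	loader     *Loader
}

func (r *Resolver) getTools() *tools {
	return r.toolsPool.Get().(*tools)
}

func (r *Resolver) putTools(t *tools) {
	t.resolvable.Reset() // don't leak data between updates
	r.toolsPool.Put(t)
}

// The pool would be initialized once per resolver, e.g.:
//
//	r.toolsPool = sync.Pool{
//		New: func() any {
//			return &tools{resolvable: NewResolvable(), loader: &Loader{}}
//		},
//	}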

If you are interested in the full implementation of the resolver, you can find it on GitHub.

Summary

We started with a naive EDFS implementation that worked fine, but we realized it had some limitations. With the initial implementation in place, we were able to define a set of tests to “lock in” the expected behavior of the system.

We then identified the key issues with our initial implementation:

  1. We created 4 goroutines per client and subscription

  2. We allocated a lot of memory in resolve.NewResolvable

  3. We blocked in the ServeHTTP method

  4. We blocked reading from the connection

We solved these problems as follows:

  1. Using Epoll/Kqueue to avoid blocking on the connection

  2. Using an event-driven architecture to avoid blocking on subscriptions

  3. Using a thread pool to avoid blocking the main loop while resolving subscription updates (and to limit the number of concurrent updates)

  4. Using sync.Pool to reuse the tools when resolving subscription updates

With these changes, we have reduced the number of goroutines by 99% and memory consumption by 90% without sacrificing performance.

We didn't fish in the dark: we used pprof to analyze exactly what was happening and where the bottlenecks were, and we used pprof again to measure the impact of our changes.

Thanks to our test suite, we were able to make these changes without breaking anything.

Final Thoughts

Perhaps we could reduce memory consumption even further, since the bufio allocations are still quite visible in the heap profile. However, we know that premature optimization is the root of all evil, so we are deferring further optimizations until we actually need them.

There is a spectrum between “fast code” and “code that is fast to understand”. The more you optimize, the more complex the code becomes. So far we are happy with the results and confident that we can maintain the code effectively.

If you are interested in learning more about EDFS, you can read the full announcement here. There is also documentation available on Cosmo Docs. If you prefer watching videos to reading, you can also watch the EDFS demo on YouTube.

I hope you enjoyed this blog post and learned something new. If you have any questions or feedback, feel free to leave a comment or reach out to me on Twitter.

Acknowledgments

I took inspiration from the 1M-go-websockets project. Thanks to Eran Yanay for creating it and sharing his knowledge with the community.
