HTTP or gRPC

The VK Cloud team has translated an article with a detailed technical comparison of two types of API: HTTP and gRPC. The author draws on his work experience and describes the nuances, advantages, and disadvantages of each technology.


HTTP-based API development

Most modern REST APIs use the request-response model of client-server interaction and are built on top of HTTP version 1.1. This protocol has been around for a long time and is widely supported by SDKs and browsers. But there is one subtlety: HTTP 1.1 relies solely on transactional communication.

As an analogy, consider a situation where you need to ask a friend a few questions over the phone. With HTTP 1.1, we call a friend, ask one question, get an answer, and end the conversation. Then we dial the number again, ask the next question and hang up again. And so on, until we get all the information we need.

Developers actively use REST APIs and understand their underlying principles well. REST maps cleanly onto HTTP and its request methods, so it is easy for developers to understand exactly what needs to be done during development.

REST APIs are also easy to implement thanks to a relatively gentle learning curve: the very popular JSON format is typically used for data transfer. Many programming languages serialize and deserialize JSON quickly, which helps adoption. Other formats can also be used, such as XML or BSON, each with its own advantages and disadvantages.
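As a rough sketch of what this looks like in practice, here is a hypothetical REST call in TypeScript; the api.example.com host, the /api/v1/users endpoint, and the User fields are made up for illustration.

```typescript
// Hypothetical REST call: the HTTP method (GET) and the URL identify the resource,
// and the JSON response body is deserialized into a plain object.
interface User {
  id: string;
  name: string;
  email: string;
}

async function getUser(id: string): Promise<User> {
  const res = await fetch(`https://api.example.com/api/v1/users/${id}`, {
    headers: { Accept: "application/json" },
  });
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status}`);
  }
  // JSON deserialization is built into virtually every language and runtime.
  return (await res.json()) as User;
}
```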

REST gives companies the flexibility to design their APIs as they see fit. However, REST design principles and standards are often not followed, so there is a shortage of tools for preparing documentation or code samples. This is where Postman comes to the rescue: together with the OpenAPI specification, it makes it easy to generate documentation and code samples by defining the structure, endpoints, and responses of the API. Developers planning to use an HTTP API should understand what its endpoints are and how to use them correctly.

When using a REST API, responses with large payloads are common, especially when dealing with resources that are inherently complex. Developers then worry that they are sending too much data, or, conversely, too little. Because of this problem of finding the sweet spot in API design, there has been a lot of interest in GraphQL, which lets developers specify exactly which resources and attributes they want included in the response. REST is also not well suited to low-powered clients and mobile devices with limited bandwidth, so where data limits matter it is not ideal.

GraphQL APIs, which are also based on HTTP 1.1 and use the POST method, eliminate many of the "golden mean" problems inherent in REST APIs: you should send neither "too much" data (which the client does not need) nor "too little" (because then the client will have to ask for information several more times). GraphQL was conceived as a solution in which the client makes a request by name and tells the server which data fields it wants to receive in response. The catch is that you have to understand what information can be queried: developers need to know the structure of the data, and the interaction is still carried out mostly in text formats such as JSON.
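A minimal sketch of such a request, assuming a hypothetical /graphql endpoint and a user query with name and email fields (none of these names come from the article):

```typescript
// Hypothetical GraphQL request over HTTP 1.1: a single POST in which the client
// lists exactly the fields it wants back.
const query = `
  query {
    user(id: "42") {
      name
      email
    }
  }
`;

async function fetchUserNameAndEmail() {
  const res = await fetch("https://api.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data, errors } = await res.json();
  if (errors) {
    throw new Error(`GraphQL errors: ${JSON.stringify(errors)}`);
  }
  return data.user; // only { name, email }, nothing more
}
```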

API development based on gRPC

In 2016, Google developed and released gRPC as a high-performance solution for interaction between microservices. In its design approach, gRPC builds on RPC, a concept created back in the 1970s. The solution runs on top of HTTP 2.0 and typically uses the Protocol Buffers (Protobuf) data description language, which provides a strictly binary format for interaction between the client and the server.

The basic concepts behind creating an RPC API and a REST API are very similar. Developers define the rules for interaction between the client and the server, as well as the methods to be used. Clients communicate with servers by calling methods with arguments. However, unlike a REST API, which uses HTTP request methods such as GET and POST to determine the desired action, an RPC API names the method to call in the URL itself, and the request parameters carry the arguments for it.
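As a rough illustration of calling methods by name rather than by HTTP verb and resource URL, here is a hypothetical unary call using the Node.js packages @grpc/grpc-js and @grpc/proto-loader; the users.proto file, the Users service, and its GetUser method are assumptions made for this example.

```typescript
// Hypothetical example: invoking a gRPC method by name instead of an HTTP verb + URL.
// Assumes a local users.proto that defines `service Users { rpc GetUser(...) ... }`.
import * as grpc from "@grpc/grpc-js";
import * as protoLoader from "@grpc/proto-loader";

// Load the service definition at runtime (no protoc code generation needed for this sketch).
const packageDefinition = protoLoader.loadSync("users.proto", {
  keepCase: true,
  defaults: true,
});
const proto = grpc.loadPackageDefinition(packageDefinition) as any;

// The client calls the method directly; the transport details (HTTP/2 frames,
// binary Protobuf payloads) are hidden behind the stub.
const client = new proto.users.Users(
  "localhost:50051",
  grpc.credentials.createInsecure()
);

client.GetUser({ id: "42" }, (err: grpc.ServiceError | null, user: unknown) => {
  if (err) {
    console.error("GetUser failed:", err.message);
    return;
  }
  console.log("GetUser returned:", user);
});
```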

gRPC implements four types of service methods for data transfer, which generally provide flexibility in use:

  1. Unary: the client makes a single request, and the server sends a single response, as is the case with REST over HTTP 1.1.
  2. Streaming from server to client: the client sends an initial request, notifying the server that it is ready to receive a data stream. The server sends multiple responses, the last of which confirms that the streaming has completed.
  3. Streaming from client to server: the client can send multiple requests to the server, the last one signalling that the streaming has ended. The server sends a single response to the whole stream.
  4. Bidirectional streaming: after the initial unary connection between client and server, both parties can send streams of information.


Types of gRPC requests in Postman
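To make the four method types more concrete, here is a sketch of a hypothetical service definition and the corresponding client-side call shapes with @grpc/grpc-js; the chat.proto service, its methods, and the message fields are assumptions made for illustration.

```typescript
// Hypothetical chat.proto service showing the four gRPC method types:
//
//   service Chat {
//     rpc Ask      (Question)        returns (Answer);         // 1. unary
//     rpc Listen   (Topic)           returns (stream Answer);  // 2. server streaming
//     rpc Upload   (stream Question) returns (Summary);        // 3. client streaming
//     rpc Converse (stream Question) returns (stream Answer);  // 4. bidirectional
//   }
//
// Each type maps to a different call shape on the client:
import * as grpc from "@grpc/grpc-js";
import * as protoLoader from "@grpc/proto-loader";

const def = protoLoader.loadSync("chat.proto");
const chat = (grpc.loadPackageDefinition(def) as any).chat;
const client = new chat.Chat("localhost:50051", grpc.credentials.createInsecure());

// 2. Server streaming: one request in, a stream of responses out.
const listen = client.Listen({ name: "grpc" });
listen.on("data", (answer: unknown) => console.log("answer:", answer));
listen.on("end", () => console.log("server finished streaming"));

// 4. Bidirectional streaming: both sides read and write independently.
const conversation = client.Converse();
conversation.on("data", (answer: unknown) => console.log("reply:", answer));
conversation.write({ text: "first question" });
conversation.write({ text: "second question" });
conversation.end(); // the client signals it has no more questions
```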

Protocol Buffers provides a type-safe interface and is considered a "lightweight" communication format: it is a highly compact binary format that converts directly into the native data structures of the client and server programming languages, which can be chosen independently of each other.

Granted, JSON and other text formats can also be compressed before sending in many client-server systems that support compression algorithms such as gzip.
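As a small illustration of that point, here is a sketch that gzip-compresses a JSON payload in Node.js before sending and restores it on the other side; the payload itself is made up.

```typescript
// Illustrative only: text formats like JSON can be gzip-compressed before sending.
import { gzipSync, gunzipSync } from "node:zlib";

const payload = JSON.stringify({
  users: Array.from({ length: 100 }, (_, i) => ({ id: i, name: `user-${i}` })),
});

const compressed = gzipSync(Buffer.from(payload, "utf8"));
console.log(`raw JSON: ${Buffer.byteLength(payload)} bytes`);
console.log(`gzipped:  ${compressed.byteLength} bytes`);

// A server would send `compressed` with `Content-Encoding: gzip`;
// the receiving side reverses it:
const restored = JSON.parse(gunzipSync(compressed).toString("utf8"));
console.log(`round-trip ok: ${restored.users.length === 100}`);
```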

The main benefit of HTTP 2.0 and Protocol Buffers is speed. According to software architect Ruwan Fernando, gRPC APIs are overall 7-10 times faster than REST APIs. It is important to emphasize that the speed gain comes not only from gRPC itself but also from the network layer and the way data is transferred. Let's go back to the phone-call analogy: HTTP 2.0 allows you to call a friend once, read out the whole list of questions, get back a list of answers, and end the call. This eliminates the reconnection overhead for each question.

gRPC allows a client to invoke "remote" (server-side) procedures as if the server were part of the local system. Because less serialization and deserialization work is required between systems, gRPC is beneficial for low-power clients such as mobile and IoT devices.

Another benefit of gRPC is discoverability: gRPC servers can optionally expose the list of methods they support to clients on demand. This "server reflection" is of great value when working with a gRPC API. It is certainly not a complete replacement for documentation, but a list of supported methods makes it easier to work with the API. In addition, gRPC has built-in code generation via the protoc compiler.

gRPC APIs are harder to design and build, so gRPC is more difficult to adopt. Configuration and debugging problems arise because Protocol Buffers messages are binary and not meant to be read by humans. And while browsers natively support RESTful interaction and formats like JSON, bridging between HTTP 1.1 browser traffic and HTTP 2.0 gRPC requires third-party libraries such as gRPC-Web.

Support for gRPC in third-party tools and libraries has appeared only recently, so development teams tend to focus on internal gRPC-based APIs rather than external, customer-facing ones. As a result, only a few gRPC APIs can be found in the Postman Public API Network, such as Wechaty and NOVA Security.

A Visual Comparison of HTTP and gRPC for Building an API

An API creator has a lot to choose from. REST conforms to HTTP standards and enjoys universal support, so for many development teams it is a simple and convenient solution. The amount of data transferred is larger, but it is human-readable, which makes software easier to develop and debug.

The gRPC architectural style is a great choice for applications and microservices written in multiple languages. With bidirectional streaming, it works well for systems whose communication patterns go beyond a single question-and-answer exchange. Its binary data format is much more efficient than RESTful text transfer, and its use of HTTP 2.0 reduces the need for repeated connections to the server.

Developing your own APIs requires resources, and VK Cloud provides them to companies in the cloud. Cloud Servers help simplify the configuration and maintenance of IT projects, ensuring high availability and performance for applications of any complexity. New users get a bonus of 3,000 rubles for testing.
