You don't need a ready-made API to start writing the front-end, or a brief overview of off-the-shelf solutions for data mocking
The idea for this article came to me on a perfectly ordinary working day, when a new task landed on me from the managers in Jira. The task itself was filled out properly – there was a detailed description, a link to the documentation with business requirements, and mock-ups were attached.
However, its status was not “Ready for development”. You could also see that the task was waiting on another one – developing an API to supply the data. That is where my questions began, along with the urge to explain to the managers, once again, that there were no critical blockers for this task.
Over my years as a front-end developer, I can say with confidence that one of the most common problems in web development teams is poorly structured communication between front-end and back-end developers: areas of responsibility that are not clearly divided between them and, in general, letting things take their course – “maybe they will work it out somehow, the main thing is that a decent product comes out in the end.” While intra-team practices between developers are generally well established in most companies, far less time is devoted to inter-team interaction, which is strange, considering they often work on the same product.
One specific facet of this general problem is that in most of the places where I have worked, the work of backend and frontend developers was not really parallelized. Even if the layouts are ready and the feature has been analyzed inside and out by system analysts – that is, it is already clear what the output should be – the front-end team still waits for a working API before starting. Moreover, this situation has not changed over the years: the absence of a finished server side for a feature is treated by default as a blocker for the client development team.
I have even run into absurd situations. At one startup, because the backend developers were overloaded and missing deadlines, the manager effectively gave the front-end team a couple of days off, saying there was no point in writing the front while there was no back – to my great joy at first, of course. The joy quickly evaporated when the same manager announced that, because of those lost days, we would have to come in on Saturday: the back end was now ready, while the front-end tasks had not even been started. Of course, over the next few weeks and months I grumbled that everyone was doing everything wrong, and that a front-end developer doesn't need any API to write a working front-end.
Yes, if by the time you start working on the client logic you already have a ready-made REST API with all the data you need, that is the ideal, pitfall-free scenario. But its absence is by no means a blocker.
I would especially like to highlight this point if you have not a monolith but, for example, an SPA – an application that goes to an external API for its data. In this case, your client application and the API it calls are formally two different programs, and in principle there should be no tight coupling between them. That means you should design your SPA from the start so that moving to another version of the API is not particularly painful, and the data layer is separated from the presentation layer.
Data mocking – the use of stub data that imitates real data – is the technique that will help you parallelize client and server development. It is most widely used in testing, but it is just as useful in development.
We'll look at specific mocking tools a little later; for now, let's move on to organizing the front-end development process when there is no real data, or no access to it.
How to organize work on a task when the data isn't ready yet
A lot depends on how the work processes in your team are structured. The best option is when you already know in advance what the endpoint with your data will look like and what it will return.
This happens in cases where:
There is already a technical specification or documentation for the backend, with a clear description of how the API works: the request URL and the returned data, with types and key names, are fully described. This often happens in teams with system analysts who describe in detail how the application looks and should work. This is the ideal option, a “write and forget” approach – you don't even have to interact with the backenders, everything follows the spec.
You have agreed on a contract with the backend developer. You met or got on a call, agreed on what would be in the response to the request, recorded it in writing somewhere – in a chat, Confluence, or Postman – and you're done.
If for some reason the structure, content, and shape of the data API are unknown to you, and there is no way to agree on them in advance, that is still not a reason to postpone development of the client application.
If you have a design mock-up and business requirements for the feature from the analysts, the contents of the API can usually be inferred from the mock-up of the feature being developed and its description. The only caveat is that once the backend team finishes the API, you will need to adapt the code written against mocked data to the real data. An important point: since we do not know the exact contents of the API, we isolate all work with it in an adapter (a controller designed specifically for this purpose) that always returns the same set of data – both while we work against the mocked API, before the contents of the real one are known, and after we get the production endpoint in our hands. This keeps raw response data from being scattered throughout the application, which would make migrating to another API much more labor-intensive. A minimal sketch of such an adapter is shown below.
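For illustration, here is a minimal TypeScript sketch of such an adapter; the `/api/users` endpoint, the DTO fields, and the `User` view model are all made-up assumptions, not a real contract.

```typescript
// userAdapter.ts – hypothetical example; the endpoint and field names are assumptions.

// The shape the UI works with – it stays the same no matter what the API actually returns.
export interface User {
  id: string;
  fullName: string;
  email: string;
}

// The raw response shape we *expect* from the future real API; adjust once the contract is known.
interface ApiUserDto {
  id: number;
  first_name: string;
  last_name: string;
  email: string;
}

// Point this at the mock server during development and at the real API later.
const API_URL = process.env.API_URL ?? "http://localhost:3001";

// The only place in the app that knows about the transport and the raw DTO.
export async function fetchUsers(): Promise<User[]> {
  const response = await fetch(`${API_URL}/api/users`);
  if (!response.ok) {
    throw new Error(`Failed to fetch users: ${response.status}`);
  }
  const dtos: ApiUserDto[] = await response.json();
  return dtos.map((dto) => ({
    id: String(dto.id),
    fullName: `${dto.first_name} ${dto.last_name}`,
    email: dto.email,
  }));
}
```

When the real API arrives, only the DTO type and the mapping inside `fetchUsers` change; the components that consume `User[]` are untouched.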
Let's now take a quick look at the main tools for data mocking.
Tools and methods for data mocking
Tools that integrate directly into your front-end application.
These are tools that are installed as npm dependencies of your front-end application and configured/customized within it.
MirageJS.
A library built on top of PretenderJS. Pretender works by monkey-patching the browser implementations of XMLHttpRequest and fetch, so network calls made in the browser end up in the route handlers you define, from which the mocked data is returned. MirageJS itself adds a fairly extensive API on top of that for working with the data layer – for example, an in-memory database, an ORM, and data models to solve the problem of related data. Keep in mind that the database is reset when the page is reloaded, and it will not work on the server if your project happens to use SSR. There is no GraphQL support out of the box and no WebSocket support. Among the downsides I would also note that requests do not show up in the Network tab of the browser dev tools, and that the library itself gets rather little support from its authors. A minimal route definition looks roughly like the sketch below.
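For illustration, a minimal MirageJS setup, assuming a hypothetical `/api/users` endpoint; this is a sketch, not a full configuration.

```typescript
// mirage.ts – a hypothetical mock of GET /api/users
import { createServer, Model } from "miragejs";

export function makeMockServer() {
  return createServer({
    models: {
      user: Model,
    },
    seeds(server) {
      // Pre-populate the in-memory database (reset on every page reload).
      server.create("user", { name: "Alice" });
      server.create("user", { name: "Bob" });
    },
    routes() {
      this.namespace = "api";
      // Requests to /api/users are intercepted and answered from the in-memory DB.
      this.get("/users", (schema) => schema.all("user"));
    },
  });
}
```

Calling `makeMockServer()` from the app entry point (usually only in development builds) is enough for `fetch("/api/users")` to start returning the seeded data.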
MSW (Mock Service Worker).
This API mocking tool is based on service workers. From the browser's point of view, requests look native (at the very least, they show up in the Network tab). The library can also intercept requests on the server side, and there is built-in GraphQL support. There is no WebSocket support. There is no built-in API for data models like in MirageJS, but that functionality can be added with additional libraries from the same authors. A minimal handler sketch is shown below.
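For illustration, a minimal handler for a hypothetical `/api/users` endpoint. Note that the handler syntax differs between major versions: the snippet below uses the MSW 2.x API (`http`/`HttpResponse`), while 1.x used `rest.get` with a `res(ctx.json(...))` resolver.

```typescript
// handlers.ts – hypothetical endpoint and data, MSW 2.x syntax
import { http, HttpResponse } from "msw";
import { setupWorker } from "msw/browser";

const handlers = [
  http.get("/api/users", () =>
    HttpResponse.json([
      { id: 1, first_name: "Alice", last_name: "Smith", email: "alice@example.com" },
      { id: 2, first_name: "Bob", last_name: "Jones", email: "bob@example.com" },
    ])
  ),
];

// The service worker intercepts matching requests; start it from the app entry point,
// typically only in development builds: `await worker.start();`
export const worker = setupWorker(...handlers);
```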
JSON Server.
Unlike the previous tools, this one spins up a real server on the developer's machine. JSON Server offers quite powerful functionality out of the box: a fully working REST API with reading, adding, and deleting records from a database backed by JSON files, as well as built-in filtering, pagination, and sorting. But while JSON Server provides plenty out of the box, it offers little in the way of customization, so it is not well suited as a stand-in for your production API. It is great, though, for prototypes, training materials, and other situations where you quickly need a server with the most basic functionality and no need for extension or customization. A typical setup is shown below.
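For illustration, a minimal setup: a hypothetical `db.json` like the one below is all JSON Server needs to expose full REST resources.

```json
{
  "users": [
    { "id": 1, "name": "Alice" },
    { "id": 2, "name": "Bob" }
  ],
  "posts": [
    { "id": 1, "userId": 1, "title": "Hello" }
  ]
}
```

Started with something like `npx json-server --watch db.json --port 3001` (the exact flags depend on the version), it serves GET/POST/PUT/PATCH/DELETE on `/users` and `/posts`, plus query-string filtering such as `/posts?userId=1`.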
I have tried all of the above tools, and they handle their main job well. What bothered me, though, was the approach itself – specifically, the fact that they are installed as dependencies of your project and touch its sources. Yes, they are dev dependencies. Still, in my practice I ran into problems with the information security department at the company where I worked, because of a potential npm vulnerability in one of these tools when we tried to add it as part of the next release.
Postman.
Postman is an indispensable tool nowadays, and not only for backend developers and testers. Front-enders use it as documentation and to call API endpoints. Postman has a ready-made feature for setting up a mock API server in the cloud, which you can call from any client and get back the stubs you wrote there. Swagger can also be put in the same category of documentation tools that can spin up a mock server, but I haven't had a chance to try that in a real project.
BFF.
If your project uses the Backend-For-Frontend approach, the BFF is an ideal place to return hardcoded data according to predefined logic. However, since it is still a production server rather than an auxiliary service for development needs, it has its limitations, and I would not recommend doing anything more elaborate there than returning hardcoded JSON.
Self-written solution.
For me personally, the most capable and at the same time simplest option has been my own Node.js server based on Express or Koa. Firstly, it is a real server, not an imitation, and it gives you at least some separation between the fake-data functionality and your application's business logic. Secondly, there are no restrictions – you can implement everything you need: caching, proxying to the production API, a lightweight database, working with both WebSockets and HTTP requests, serving static images, generating images on request – basically anything your imagination stretches to. A minimal Express-based sketch is shown below.
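For illustration, a minimal Express-based sketch in TypeScript; the routes and the stub data are made up and would mirror whatever contract you expect from the real API.

```typescript
// mock-server.ts – a hand-rolled mock server sketch; routes and data are assumptions.
import express from "express";

const app = express();
const PORT = 3001;

// Hardcoded stub data instead of a real database.
const users = [
  { id: 1, first_name: "Alice", last_name: "Smith", email: "alice@example.com" },
  { id: 2, first_name: "Bob", last_name: "Jones", email: "bob@example.com" },
];

app.use(express.json());

// REST endpoints returning the stubs.
app.get("/api/users", (_req, res) => res.json(users));
app.get("/api/users/:id", (req, res) => {
  const user = users.find((u) => u.id === Number(req.params.id));
  return user ? res.json(user) : res.status(404).json({ error: "Not found" });
});

app.listen(PORT, () => console.log(`Mock API listening on http://localhost:${PORT}`));
```

From here it is easy to bolt on whatever you need – a delay middleware to simulate slow responses, a proxy to the real API for the endpoints that already exist, and so on.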
It was precisely the need to generate images (with a caption, a background, and a given size) that I ran into when, during the lockdown, the security team cut remote workers off from all the static-content servers. What really saved us was that we already had a ready-made local service for returning mocked data whose functionality was easy to extend.
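As an illustration of how little such an extension can take, here is a sketch of a route that generates a placeholder image on request as an SVG, with the size, background, and caption taken from the URL. This is just one possible approach, not the exact solution we used; the route and parameter names are assumptions.

```typescript
// placeholder.ts – hypothetical route: GET /img/300/150?bg=%23cccccc&text=Avatar
import express from "express";

const app = express();

app.get("/img/:width/:height", (req, res) => {
  const width = Number(req.params.width) || 100;
  const height = Number(req.params.height) || 100;
  const bg = String(req.query.bg ?? "#cccccc");
  const text = String(req.query.text ?? `${width}x${height}`);

  // Build the image as an SVG string – no image-processing libraries required.
  const svg = `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">
    <rect width="100%" height="100%" fill="${bg}"/>
    <text x="50%" y="50%" dominant-baseline="middle" text-anchor="middle"
          font-family="sans-serif" font-size="${Math.floor(height / 6)}">${text}</text>
  </svg>`;

  res.type("image/svg+xml").send(svg);
});

app.listen(3002, () => console.log("Placeholder image service on http://localhost:3002"));
```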
What tools do you use for data mocking? I will be glad to hear additions and discuss them in the comments.
Conclusion
We have looked at the main approaches to data mocking that I have been using successfully in my work for years, and surveyed the variety of tools that can help with it.
The main goal of this article is for as many developers and managers as possible to realize and accept as a fact that front-end and back-end development can (and sometimes should) proceed in parallel, without waiting for each other. If your team still builds the front-end only against production data, be sure to tell your managers and teammates that the option of not waiting for a finished back-end to develop the client side EXISTS. This will help management plan future features, and it will help you start on tasks earlier and work at a calm pace instead of sweating through a last-minute rush.
I will be glad to receive feedback in the comments, thanks in advance.