Next.js + Playwright. How we started writing automated tests and what came of it

Hi! My name is Danila, I am a frontend team lead at KTS.

In this article, I will share our experience of introducing automated tests on one of our production projects: what tasks we faced, why we decided to take up automated testing, and what results it brought.

A little about the project

It is a server-side rendered application written in Next.js 12. The site consists of many content pages, a testing module, and an administrative panel with a visual content builder. The project started with very tight deadlines and has been living and actively developing for several years now.

As the production load is balanced, developers join and leave the project, which can lead to bugs during development and refactoring. Of course, the project has manual testers, but the human factor is still there, and running a full regression for every task takes a long time. We wanted to learn about bugs as early as possible, so to keep up the quality of development we added automated tests.

What tests will we write?

There are several types of tests (unit, integration, e2e). Of course, ideally you should use them all, but it is very time-consuming. On this project, we decided to write integration tests.

Let me explain why. Unit tests point to the exact place where something is broken, but because of their sheer number you have to spend a lot of time writing and maintaining them. It was important for us to cover more user scenarios while spending less time. We decided that it is enough for us to know that something is broken, without getting a pointer to the specific broken line of code.

The backend already had its own tests, so full-fledged e2e tests were not that attractive to us either: they take a long time to run and are harder to write. We wanted to run tests on every merge request, so they had to pass quickly. To speed things up, we decided to simply mock requests to the backend and focus on testing the frontend.

Our main criteria before implementing automated tests: cover real user scenarios, run quickly, execute on every merge request, and not require a real backend.

Why Playwright?

On other projects we have used Cypress or React Testing Library for similar tasks, but after a little research we decided to try Playwright, a solution from Microsoft, and did not regret it. The tool offers a lot of functionality out of the box. The main features of Playwright, in my opinion: tests run in real browsers, test contexts are isolated and run in parallel by default, and there are fixtures, a webServer setting, a UI mode for debugging, sharding, and good documentation with CI examples.

Setup

Installation

To install Playwright, just run one line:

npm init playwright@latest

Next, you will need to select the necessary settings, and your project will have a structure that looks something like this:

playwright.config.ts
package.json
package-lock.json
tests/
  example.spec.ts
tests-examples/
  demo-todo-app.spec.ts

That's basically it: you can now run tests with the npx playwright test or npx playwright test --ui commands.
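The generated tests/example.spec.ts gives a feel for the syntax; it looks roughly like this (the exact contents may differ between Playwright versions):

// tests/example.spec.ts
import { test, expect } from '@playwright/test';

test('has title', async ({ page }) => {
  await page.goto('https://playwright.dev/');

  // Expect the page title to contain the substring "Playwright"
  await expect(page).toHaveTitle(/Playwright/);
});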

Mocking Backend Responses

As I wrote above, to speed up test execution and avoid spinning up a real backend with all its services and databases, we will mock the backend responses.

Single Page Application

If we had a simple SPA, this could be done with Playwright's built-in tools. Since all requests are made in the browser, we would only need the standard page.route function from Playwright:

test("mocks a fruit and doesn't call api", async ({ page }) => {
  await page.route('*/**/api/v1/fruits', async route => {
    const json = [{ name: 'Strawberry', id: 21 }];
    await route.fulfill({ json });
  });

There would also be no obstacles to parallel test execution, since Playwright isolates test execution contexts by itself (you can read more about this in the documentation). Playwright also has a convenient webServer setting that lets you configure how your application is launched: Playwright starts the application, waits for it to be ready, runs the tests, and then terminates all processes.
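For reference, a minimal webServer configuration might look like this (the start command, port, and URL are assumptions that depend on how your application is launched):

// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  webServer: {
    // Assumed start command and port; adjust to your application
    command: 'npm run start',
    url: 'http://127.0.0.1:3000',
    reuseExistingServer: !process.env.CI,
  },
});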

Server Side Rendering

In our case, the application has SSR, and this adds some problems.

Let's take it step by step. To mock SSR requests, we need to set up our own mock server and point all server-side requests to it via environment variables. A Next.js Custom Server is ideal here. Before running a test, we will fill the mock server with the data needed for that test, run the test, then clear the mock server and move on to the next one.

To run tests in parallel, we need to launch several instances of our application together with the mock server. Playwright has a built-in mechanism for this: worker processes.
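The number of workers, and therefore the number of application instances we need, can be pinned in the Playwright config. A minimal sketch (the values here are illustrative):

// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Run tests in parallel; each worker will get its own application instance
  fullyParallel: true,
  workers: process.env.CI ? 2 : 4,
});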

Now we can move on to the implementation. For ease of understanding, I left only the essentials and comments in the code. Before running the tests, we need to start our application together with the mock server, and after all tests have finished, shut the processes down.

Let's specify in the Playwright config the paths to our scripts that will be launched before and after the tests are executed:

// playwright.config.ts
export default defineConfig({
  globalSetup: require.resolve('./global.setup'),
  globalTeardown: require.resolve('./global.teardown'),
  ...
});

Let's look at the global.setup.ts script.

In the function parameters we receive information about the workers. For each worker we generate a unique port and call the startServer function, which creates and launches a separate Node.js process with our Custom Server. Through environment variables we set the backend API address; in our case it is the address of the application itself, so requests from the application are handled by the application at the server level. We then store the process pid in a global variable; we will need it after all tests have finished, to clean up.

We also subscribe to the application's output and wait for the substring «> Ready on» to appear in the logs; this is how we know the application has started successfully.

So, we wait until all application instances are launched and proceed to running tests.

// global.setup.ts
const HOSTNAME = '127.0.0.1'; // Worker address
const WORKER_START_PORT = 9000; // Port for the first worker; the second gets 9001, and so on

function startServer(port: number) {
  return new Promise((resolve) => {
    const name = String(port);
    // Start a separate process with the required environment variables
    const child = spawn('node', ['./server.js', String(port), HOSTNAME], {
      env: {
        ...process.env,
        API_URL: `http://${HOSTNAME}:${port}`,
      },
    });

    // Remember the process pid so that it can be killed later
    global[`SERVER_${name.toUpperCase()}_PID`] = child.pid;

    
    child.stdout.on('data', (data) => {
      // Log the process output
      console.log(`[${name}]:`, data.toString());

      // Wait until the application has started
      if (data.toString().indexOf('> Ready on') === 0) {
        resolve(child);
      }
    });

    // Log errors
    child.stderr.on('data', (data) => {
      console.log(`[${name}]:`, data.toString());
    });
    
    // Log when the application process closes
    child.on('close', (code) => {
      console.log(`[${name}][CLOSED]:`, code?.toString());
    });
  });
}

export default async (params: { workers: number }) => {
  const promises = [];

  for (let i = 0; i < params.workers; i += 1) {
    promises.push(startServer(WORKER_START_PORT + i));
  }

  await Promise.all(promises);

  console.log('All servers started');
};
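The globalTeardown script is not shown here, but its job is exactly the cleanup described above: kill the processes whose pids were saved in global.setup.ts. A minimal sketch of what it could look like, assuming the same constants and global variable names:

// global.teardown.ts
const WORKER_START_PORT = 9000; // Must match the value in global.setup.ts

export default async (params: { workers: number }) => {
  for (let i = 0; i < params.workers; i += 1) {
    const name = String(WORKER_START_PORT + i);
    const pid = global[`SERVER_${name.toUpperCase()}_PID`];

    if (pid) {
      try {
        // Terminate the application process started in global.setup.ts
        process.kill(pid);
      } catch (e) {
        console.log(`[${name}][TEARDOWN]:`, e);
      }
    }
  }

  console.log('All servers stopped');
};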

Now let's look at the server.js script itself. We read the port and hostname from the script's launch arguments, create a MockData variable to store the mocks, and then simply wrap our Next.js application in a plain Node.js http server. We add two endpoints to manage the mocks: /set-mock to save a mock and /clear-mocks to reset them.

Next, we intercept all requests that start with /api/ (these are requests to the backend). When such a request arrives, we check whether a mock has been saved for it: if so, we return the mock; otherwise we pass the request on to Next.js. This way we do not change our application code at all, but act one level above it.

// server.js
const http = require('http');
const next = require('next');

const PORT = parseInt(process.argv[2], 10);
const HOSTNAME = process.argv[3];

// Mock storage
const MockData = new Map();

// Helper function for reading the request body
function getBody(request) {
  return new Promise((resolve) => {
    const bodyParts = [];
    let body;
    request
      .on('data', (chunk) => {
        bodyParts.push(chunk);
      })
      .on('end', () => {
        body = Buffer.concat(bodyParts).toString();
        resolve(JSON.parse(body));
      });
  });
}

async function start() {
  // Start the Next.js application (it must be built beforehand)
  const app = next({
    dev: false,
    hostname: HOSTNAME,
    port: PORT,
  });

  const handleNextRequests = app.getRequestHandler();
  await app.prepare();

  // Start the http server
  const MockServer = new http.Server(async (req, res) => {
    const route = req.url;

    // Reserve a route for saving mocks
    if (route === '/set-mock') {
      const {
        data: { status, body, endpoint },
      } = await getBody(req);

      MockData.set(endpoint, {
        body,
        status,
      });

      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end();
      return;
    } 
    
    // Reserve a route for clearing mocks
    else if (route === '/clear-mocks') {
      MockData.clear();
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end();
      return;
    }

    // Substitute backend responses with saved mocks
    if (route.indexOf('/api/') === 0) {
      if (MockData.has(route)) {
        const { status, body } = MockData.get(route);
        res.writeHead(status, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify(body));
        return;
      } else {
        res.writeHead(404, { 'Content-Type': 'application/json' });
        res.end();
        return;
      }
    }

    // Pass all other requests to the Next.js request handler
    return handleNextRequests(req, res);
  });

  MockServer.listen(PORT, () => {
    console.log(`> Ready on http://${HOSTNAME}:${PORT}`);
  });
}

start();

Great, now before running tests, each worker will have its own application instance launched in a separate process.

Now we need helpers for interacting with the mock server. Let's use Playwright fixtures. They are an analogue of beforeEach and afterEach that let us run our own code before and after each test. Inside a fixture we can also get information about the current test and pass helper functions into it.

Let's look at the code of next-fixture.ts.

// next-fixture.ts
import { test as base, Request, Route } from '@playwright/test';
import axios from 'axios';

import { HOSTNAME, WORKER_START_PORT } from '../../config';

export type NextContext = {
  mockApi: {
    setStatus200: (endpoint: string, body: Record<string, any>) => Promise<void>;
  };
  baseUrl: string;
};

const test = base.extend<{
  nextContext: NextContext;
}>({
  nextContext: [
    async ({ page }, use, testInfo) => {
      // Before running the test, determine the worker's address
      const port = WORKER_START_PORT + testInfo.parallelIndex;

      const baseUrl = `http://${HOSTNAME}:${port}`;

      // Run the test and pass in a helper function for setting a mock
      await use({
        mockApi: {
          setStatus200: async (endpoint: string, body: Record<string, any>): Promise<void> => {
            await axios.post(`${baseUrl}/set-mock`, {
              data: {
                status: 200,
                endpoint,
                body,
              },
            });
          },
        },
        baseUrl,
      });

      // After the test has finished, clear the mocks
      await axios.post(`${baseUrl}/clear-mocks`);
    },
    {
      scope: 'test',
    },
  ],
});

export default test;

Using the fixture in a test looks like this:

import { expect } from '@playwright/test';

import test from 'playwright/fixtures/next-fixture';

import __MOCK_MAIN_SLIDER__ from '../__mocks/slider/success.json';

test.describe('Main page', () => {
  test('The page displays a slider', async ({
    page,
    nextContext,
  }) => {
    // Mock the slider request
    await nextContext.mockApi.setStatus200('/api/main.slider', __MOCK_MAIN_SLIDER__);

    await page.goto(`${nextContext.baseUrl}/`);
    ...
  });
});

And it would seem that's it, case closed. Pages that use getServerSideProps are mocked just fine, but Next.js also has powerful functionality in getStaticProps, which allows pages to be cached. This method runs on the server at build time, and you can also set a cache invalidation interval. Since we have a limited number of application instances, the cache left over from previous tests gets in the way. To solve this, you could launch a separate application for every test, but that takes more time and does not work very reliably, so I started looking for another way.
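For context, here is a hypothetical, simplified page using getStaticProps (not the project's real code): the result is cached and re-generated at most once per revalidate interval, and this is exactly the cache that interferes between tests.

// pages/index.tsx — a hypothetical example
import type { GetStaticProps } from 'next';

type Props = { slider: unknown };

export const getStaticProps: GetStaticProps<Props> = async () => {
  // Runs on the server; the rendered page is cached
  const res = await fetch(`${process.env.API_URL}/api/main.slider`);
  const slider = await res.json();

  return {
    props: { slider },
    revalidate: 60, // the cached page is considered fresh for 60 seconds
  };
};

export default function Home({ slider }: Props) {
  return <pre>{JSON.stringify(slider, null, 2)}</pre>;
}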

And a solution was found: Preview Mode. This is built-in Next.js functionality. Its essence is simple: the mode lets you view a page without caching, that is, it forces getStaticProps to run on every page request. To activate it, you create a separate endpoint in your Next.js application that sets cookies via the dedicated setPreviewData method. To protect the endpoint, it is worth guarding it with a secret token from an environment variable or some other mechanism.

// pages/api/preview.ts
import { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (!process.env.PREVIEW_TOKEN || req.query.secret_preview !== process.env.PREVIEW_TOKEN) {
    return res.status(401).json({ message: 'Invalid data' });
  }

  res.setPreviewData(process.env.PREVIEW_TOKEN);
  return res.json({ preview: true });
}

Since the endpoint has the /api prefix, let's teach our mock server to ignore it:

// server.js
...
 if (route.indexOf('/api/') === 0 && !route.includes('/api/preview')) {
      ...
 }
...

Now let's add Preview Mode activation to the fixture so that it runs before each test.

// next-fixture.ts 
nextContext: [ 
    async ({ page }, use, testInfo) => { 
      ...

      // Activate Preview Mode so getStaticProps runs on every request
      await page.goto(`${baseUrl}/api/preview/?secret_preview=${PREVIEW_TOKEN}`);

      await use(...); 
      ...

Hooray! Now all SSR requests are successfully mocked.

Continuous Integration

One of the key goals of introducing automated tests was to have them run on every merge request. This lets us detect problems early and fix them in time. At KTS we use our own GitLab instance to store project repositories, and the Playwright documentation contains examples of integration with various CI solutions, which significantly simplified adding the tests to the pipeline.
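Our pipeline configuration is project-specific, but a simplified job based on the examples from the Playwright documentation could look roughly like this (the job name, image tag, and npm scripts are assumptions):

# .gitlab-ci.yml
playwright:
  stage: test
  # Official Playwright image with browsers preinstalled; pin the tag to your Playwright version
  image: mcr.microsoft.com/playwright:v1.42.1-jammy
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  script:
    - npm ci
    - npm run build
    - npx playwright test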

As the number of tests grows, so does the time it takes to run them. For this case Playwright has a sharding option, but at the moment our tests' execution time is not critical, so we have not used this mechanism yet; still, it is nice to know the tool supports it.
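For the record, sharding splits the test suite across machines via the --shard flag; in GitLab this pairs naturally with the parallel keyword. A sketch, assuming the same job as above:

# .gitlab-ci.yml (fragment)
playwright:
  parallel: 4
  script:
    - npm ci
    - npm run build
    - npx playwright test --shard=$CI_NODE_INDEX/$CI_NODE_TOTAL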

Let's move on to the results.

Pros

  1. Early detection of errors: Automated tests have repeatedly helped to detect problems early in development and prevent bugs from reaching production.

  2. Expertise: The acquired skills and experience allowed us to more quickly implement automated testing on other company projects.

  3. Ability to refactor and update libraries more safely: Having tests allowed us to make changes to the codebase with confidence, knowing that the core functionality was covered and would keep working.

  4. Running tests in a real browser: Tests are performed in conditions as close to real ones as possible, which makes them more accurate.

Cons

  1. Time costs: Initially, it took the team some time to get comfortable with writing and maintaining tests. This slowed down the development process in the short term until the team adapted and there were more examples to reuse.

If you are interested in learning more about the patterns we use when writing tests and how we optimize this process, write in the comments.

I also recommend that you read our other articles on web development:

• CMS for 0 rubles: how we started using Strapi

• Connect the library to the project using npm/yarn link

• Flying Santa and dancing bullfinches: experience of CSS animation implementation and optimization

• How to format a letter so that it reaches the recipient as intended
