Once again about ASP.NET Core integration testing with TestServer and Testcontainers

This scheme is not original; I would hardly be mistaken in saying that you can find this approach in a great many projects whose teams are still shaping their practices. For all its obvious advantages (it is easy to set up, it is clear and transparent for its users, and the environment is multi-purpose: you can run autotests on it and also use it for demos), the approach has significant drawbacks:

  • developers drift away from the testing process, because whatever happens on the test bench is seen as QA's concern
  • shared dependencies (DB, Redis, brokers) accumulate leftover data from previous test cases, which bloats them or, in the worst case, causes unwanted side effects
  • QA spend a lot of effort keeping the test bench up to date (when your application consists of more than 40 components with intricate relationships between them, and more than one team works on it, this becomes a genuinely hard task)

All this increases the number of false positives, which ultimately means that in hot moments some releases can go to production without proper testing.

Well, enough words, let's get down to business. Let's try to make integration testing fun again.

The object of today's research is a simple ASP.NET Core Web API application with a single controller exposing a set of CRUD methods. As an out-of-process dependency that would be hard to mock away, we will use a database (we say database, we mean Postgres).
The demo application differs from the template produced by dotnet new webapi only in the presence of EF Core, so I will not reproduce the whole listing here; the version of the application before any tests is tagged v0 in the repository.
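
To make the later snippets easier to follow, here is a rough sketch of the parts the tests will touch, reconstructed from the test code below; the entity and controller shapes, including the Guid id type, are assumptions rather than the repository listing:

public class UserData
{
    // The id type is an assumption; the tests only compare it with the value the API returns.
    public Guid Id { get; set; }
    public string Data { get; set; } = string.Empty;
}

[ApiController]
[Route("data")]
public class DataController : ControllerBase
{
    private readonly DataContext _context;

    public DataController(DataContext context) => _context = context;

    [HttpPost]
    public async Task<ActionResult<Guid>> Create([FromBody] string data)
    {
        var entity = new UserData { Id = Guid.NewGuid(), Data = data };
        _context.Add(entity);
        await _context.SaveChangesAsync();
        return Ok(entity.Id); // the tests expect the id of the new record in the response
    }
}
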
In the following sections, I will go from a naive approach to integration tests to stateless tests with real dependencies running in containers.

“Naive” approach

From the very beginning, from the moment I decided to write this note, I wanted to call this approach "naive" or "head-on". I have to admit it did not quite work out: I spent about two hours forcing an open door because I could not get the tests to run. Every call to a controller failed with a 404. For the application to find its controllers, I had to change the DI setup in a way that makes you question the reasonableness of the whole exercise. So, for the sake of accuracy, I will keep the word "naive" in quotation marks.
The essence of the approach: the application is launched inside the test runner "as is" and uses a real database reachable from the CI agent (this can simply be the test bench database or a dedicated instance set up specifically for CI). The other out-of-process dependencies are also the real ones from the test bench.
Because during a test run the executable assembly is the one with the tests, not the one with the application, you have to tell DI explicitly that controllers should be looked up in the application assembly:

// Point MVC at the application assembly so controllers are discovered
// even though the test assembly is the process entry point.
builder.Services.AddMvc()
    .AddApplicationPart(typeof(Program).Assembly)
    .AddControllersAsServices();
builder.Services.AddControllers();

Now we can write some tests. To do any testing, the application must be started; that gives us access to its DI container, from which we can pull out the DbContext used to check side effects (changes in the database state).

private WebApplication _app = null!;
private DataContext _context = null!;
private HttpClient _client = null!;
private IDataClient _refitClient = null!;
private IServiceScope _scope = null!;

[SetUp] public async Task Setup()
{
    // ConfigureServices and CreateApplication are the demo app's own setup extensions,
    // reused here so the test host is assembled the same way as the real one.
    var builder = WebApplication.CreateBuilder()
        .ConfigureServices();
    _app = builder.CreateApplication();
    _app.Urls.Add("http://*:8080");
    await _app.StartAsync();

    // A scope to resolve the DbContext for checking side effects on the database.
    _scope = _app.Services.CreateScope();
    _context = _scope.ServiceProvider.GetRequiredService<DataContext>();

    // A plain HttpClient pointed at the locally started app, and a typed refit client on top of it.
    _client = new HttpClient { BaseAddress = new Uri("http://localhost:8080") };
    _refitClient = RestService.For<IDataClient>(_client);
}

Also, during test class setup, an HTTP client (_client) pointed at the local application is created, along with a typed refit client (_refitClient) for calling the controllers directly.
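
The typed client itself is just a refit interface describing the controller's surface. Judging by the calls made in the tests, it might look something like this (a sketch; the return type mirrors the assumed Guid id from the earlier sketch):

public interface IDataClient
{
    // Maps to POST /data; refit serializes the string body as JSON.
    [Post("/data")]
    Task<Guid> Create([Body] string data);
}
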
After the tests have completed, the application must be stopped and the resources allocated to the HTTP client released:

[TearDown] public async Task TearDown()
{
    _scope.Dispose();
    await _app.StopAsync();
    _client.Dispose();
}

When the entire infrastructure for tests is up, you can make requests and check the functioning of the application logic:

[Test] public async Task PostData_WhenCalled_Returns200()
{
    //act
    var response = await _client.PostAsJsonAsync(new Uri("data", UriKind.Relative), "test");
    //assert
    response.StatusCode.Should().Be(HttpStatusCode.OK);
}

[Test] public async Task PostData_WhenCalled_ReturnsIdOfAddedRecord()
{
    //arrange
    var cntBefore = await _context.Set<UserData>().CountAsync();
    //act
    var id = await _refitClient.Create("test creation");
    //assert
    _context.Set<UserData>().Count().Should().BeGreaterThan(cntBefore);
    _context.Set<UserData>().Any(x => x.Id == id).Should().BeTrue();
    _context.Set<UserData>().Single(x => x.Id == id).Data.Should().Be("test creation");
}

So the goal is achieved: the tests pass, real HTTP requests can be made, the database gets populated, and it is accessible from the test classes. But there are difficulties as well:

  • a real database is used, and its credentials have to be stored in the repository or injected at the testing stage (see the sketch after this list)
  • you also have to make sure the database is seeded with initial data and cleaned up after all the tests complete
  • you need to be sure that the application port used on the test runner will be available
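
A common way to deal with the first point is to read the connection string from the CI agent's environment instead of committing credentials; a minimal sketch (the variable name is hypothetical):

// Hypothetical variable name, set by the CI pipeline; falls back to a local default.
var connectionString = Environment.GetEnvironmentVariable("TEST_DB_CONNECTION")
    ?? "Host=localhost;Database=test_ci_db;Username=postgres;Password=;";
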
The application code with these tests is tagged v1.

TestServer

The next step is to use the power of ASP.NET Core to perform integration testing. Let’s replace kestrel with test server!
For convenient access to DI and for manipulating the dependency injection container, let's create a descendant of WebApplicationFactory<>:

public class CustomAppFactory : WebApplicationFactory<Program>
{
    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureTestServices(services =>
        {
            // Remove the registered DataContext
            var descriptor = services.SingleOrDefault(d => d.ServiceType == typeof(DbContextOptions<DataContext>));
            if (descriptor != null)
                services.Remove(descriptor);

            // Register it again, pointing at the test database
            services.AddDbContextPool<DataContext>(opts => opts.UseNpgsql("Host=localhost;Database=test_ci_db;Username=postgres;Password=;"));

            // Make sure the database is created
            var serviceProvider = services.BuildServiceProvider();
            using var scope = serviceProvider.CreateScope();
            var scopedServices = scope.ServiceProvider;
            var context = scopedServices.GetRequiredService<DataContext>();
            context.Database.EnsureDeleted();
            context.Database.EnsureCreated();
            // Here you could run code that seeds the database with test data...
        });
    }
}

This factory takes care of launching the application. The ConfigureTestServices method is called after the DI setup performed by the application itself, so it can override the registrations and point the application at the specific DB server instance used for test runs in CI.
The test code becomes somewhat simpler. When the test class is created, an application factory is instantiated; from it you can both get services out of the DI container and obtain a ready-made HTTP client pointed at the application under test:

private CustomAppFactory _factory = new();
private DataContext _context = null!;
private HttpClient _client = null!;
private IDataClient _refitClient = null!;
private IServiceScope _scope = null!;

[SetUp] public void Setup()
{
    _scope = _factory.Services.CreateScope();
    _context = _scope.ServiceProvider.GetRequiredService<DataContext>();
    _client = _factory.CreateClient();
    _refitClient = RestService.For<IDataClient>(_client);
}

[TearDown] public void TearDown()
{
    _scope.Dispose();
    _client.Dispose();
}

The code of the tests themselves does not change. This stage of the test application's development is tagged v2 in the repository.
What we got at the current stage:

  • Kestrel is no longer started
  • we control which database is used
  • we control the application's services, so external dependencies can be replaced with mocks (see the sketch below)
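
For example, an outbound dependency can be swapped for a stub right inside ConfigureTestServices; a minimal sketch, where IEmailSender and FakeEmailSender are hypothetical stand-ins rather than parts of the demo app:

builder.ConfigureTestServices(services =>
{
    // RemoveAll comes from Microsoft.Extensions.DependencyInjection.Extensions;
    // IEmailSender / FakeEmailSender stand for any external dependency you want to stub.
    services.RemoveAll<IEmailSender>();
    services.AddSingleton<IEmailSender, FakeEmailSender>();
});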

However, the tests still require an external database and other out-of-process dependencies, so execution in CI is still not fully autonomous.

Testcontainers

Well, there is a solution for this problem: the Testcontainers project, which provides, in its own words, lightweight, throwaway instances of external dependencies. The library is built on top of the Docker remote API and effectively lets you run containers from any image for use in tests.

In order not to fight over ports and hostnames, let's pass them into the application factory as parameters:

public class CustomAppFactory : WebApplicationFactory<Program>
{
    private readonly string _dbConnStr;

    public CustomAppFactory(string host, int port, string password)
    {
        var sb = new NpgsqlConnectionStringBuilder
        {
            Host = host, Port = port, Database = "test_ci_database", Username = "postgres", Password = password
        };
        _dbConnStr = sb.ConnectionString;
    }

    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureTestServices(services =>
        {
            // ...
            services.AddDbContextPool<DataContext>(opts => opts.UseNpgsql(_dbConnStr));
            // ...
        });
    }
}

And in the test class, create and run a container with postgres:

[OneTimeSetUp] public async Task SetupContainer()
{
    const string postgresPwd = "pgpwd";

    _pgContainer = new ContainerBuilder()
        .WithName(Guid.NewGuid().ToString("N"))
        .WithImage("postgres:15")
        .WithHostname(Guid.NewGuid().ToString("N"))
        .WithExposedPort(5432)
        .WithPortBinding(5432, true)
        .WithEnvironment("POSTGRES_PASSWORD", postgresPwd)
        .WithEnvironment("PGDATA", "/pgdata")
        .WithTmpfsMount("/pgdata")
        .WithWaitStrategy(Wait.ForUnixContainer().UntilCommandIsCompleted("psql -U postgres -c \"select 1\""))
        .Build();
    await _pgContainer.StartAsync();

    _factory = new(_pgContainer.Hostname, _pgContainer.GetMappedPublicPort(5432), postgresPwd);
}
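
For completeness, the fields referenced above could be declared like this (a sketch; IContainer is the interface that ContainerBuilder.Build() returns in recent Testcontainers releases):

private IContainer _pgContainer = null!;
private CustomAppFactory _factory = null!;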

A few details: the container name and hostname are chosen randomly (as random as Guid.NewGuid() can be), and the container port is bound to a random host port. All of this is done to avoid clashes with other application instances and other test runs on the same machine.
The generated names and ports are easy to extract and pass to the factory to configure the SUT.
I will also point out a life hack: .WithEnvironment("PGDATA", "/pgdata") tells the DBMS to store its data under the path /pgdata, which is mapped into memory by .WithTmpfsMount("/pgdata"). So even if there are a lot of tests, or heavy test data is involved, disk space will not suffer: the database exists only in memory.
The second life hack: before running the tests you have to wait until Postgres is fully up and initialized. You can achieve this by writing healthchecks into a custom Dockerfile, or you can use what testcontainers offers: .WithWaitStrategy(Wait.ForUnixContainer().UntilCommandIsCompleted("psql -U postgres -c \"select 1\"")). Here the calling application waits until the database is fully alive and the select 1 command succeeds, which means the database is ready for our subsequent requests.
After the tests in the test class have completed, the container must be thrown away:

[OneTimeTearDown] public async Task DisposeContainer() =>
        await _pgContainer.DisposeAsync();
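
Since the factory created in SetupContainer owns the in-memory test server, it is reasonable to dispose of it in the same place; a possible variant of the teardown (WebApplicationFactory implements IAsyncDisposable):

[OneTimeTearDown] public async Task DisposeContainer()
{
    await _factory.DisposeAsync();      // shut down the test server first
    await _pgContainer.DisposeAsync();  // then throw away the container
}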

Now the application is tested in a completely stateless manner, and no environment setup is required. All you need to run the tests is the dotnet SDK and Docker.
The code at this stage is available under the tag v3.

Launch in CI

Up to this point, all the talk about CI assumed agents that run in a controlled environment we can modify. GitHub Actions is not like that. It is free (within reason) and popular, but its agents live somewhere far away, and there is no way to bring up extra infrastructure (at the very least, a database) next to our application.
With test containers, this is not a problem!
Let's add a template GitHub Actions workflow:

name: .NET

on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]
  workflow_dispatch:

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v3
    - name: Setup .NET
      uses: actions/setup-dotnet@v3
      with:
        dotnet-version: 7.0.x
    - name: Restore dependencies
      run: dotnet restore
    - name: Build
      run: dotnet build --no-restore
    - name: Test
      run: dotnet test --no-build --verbosity normal

And that's it: the build passes, the tests are green, and we have run real integration tests in an environment that we cannot shape ourselves.

This state is tagged v3.1.

Instead of a conclusion

Well, now you have seen for yourselves: you can write full-fledged integration tests for ASP.NET Core applications using real databases and real external dependencies, without resorting to YAML magic and without significant changes to the usual CI/CD pipeline. I hope this note was a good illustration and will help someone start using integration tests in their daily work.
