Using docker multi-stage builds to create Windows images

Hello everyone! My name is Andrey, and I work as a DevOps engineer on a development team at Exness. My main activity involves building, deploying, and supporting applications in docker under the Linux operating system (hereinafter referred to as the OS). Not long ago I got a task with the same activities, but this time the target OS of the project was Windows Server, and the codebase was a set of C++ projects. For me, this was the first close interaction with docker containers under Windows and with C++ applications in general. Thanks to this, I gained interesting experience and learned about some of the intricacies of containerizing applications on Windows.

In this article I want to tell you what difficulties I faced and how they were solved. I hope this will be useful for your current and future tasks. Enjoy reading!

Why containers?

The company already has an infrastructure built around the Hashicorp Nomad container orchestrator and its related components, Consul and Vault. Containerization was therefore chosen as a unified method for delivering applications as a turnkey solution. Since the project infrastructure includes docker hosts running both Windows Server Core 1803 and 1809, we have to build separate versions of the docker images for 1803 and 1809. With version 1803, it is important to remember that the revision number of the build docker host must match the revision number of the base docker image and of the host where the container from this image will be launched. Version 1809 does not have this limitation. You can read more here.

Why multi-stage?

Engineers on the development teams either have no access to the build hosts or have very limited access, so there is no way to quickly manage the set of components used to build the application on these hosts, for example, to install an additional toolset or workload for Visual Studio. Therefore, we made a decision: install all the components necessary for building the application into the build docker image. If something needs to change, you can quickly edit just the dockerfile and start the pipeline that creates this image.

From theory to business

In an ideal docker multi-stage build, preparing the environment for building the application takes place in the same dockerfile script as the build of the application itself. In our case, however, an intermediate link was added, namely a step that creates, in advance, a docker image with everything necessary for building the application. This was done so that the docker layer cache could be used to reduce the installation time of all the dependencies.

Let’s look at the main points of the dockerfile script to form this image.

To produce images for different versions of the OS, you can define a build argument in the dockerfile through which the version number is passed at build time; it also serves as the tag of the base image.

A complete list of Microsoft Windows Server image tags can be found here.

ARG WINDOWS_OS_VERSION=1809
FROM mcr.microsoft.com/windows/servercore:$WINDOWS_OS_VERSION
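With this argument in place, the same dockerfile can produce an image for either OS version. A minimal sketch of the build commands (the image name buildtools is illustrative):

```shell
# The --build-arg value selects the tag of the base image,
# so one dockerfile covers both supported OS versions.
docker build --build-arg WINDOWS_OS_VERSION=1809 -t buildtools:1809 .
docker build --build-arg WINDOWS_OS_VERSION=1803 -t buildtools:1803 .
```

Note that each command must run on a docker host whose OS version matches the image being built.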

By default, commands in RUN instructions inside a dockerfile on Windows run in the cmd.exe console. For convenience of writing scripts and to extend the functionality of the commands used, switch the command shell to PowerShell via the SHELL instruction.

SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]

The next step is to install the chocolatey package manager and the necessary packages:

COPY chocolatey.pkg.config .
RUN Set-ExecutionPolicy Bypass -Scope Process -Force ; \
    [System.Net.ServicePointManager]::SecurityProtocol = \
      [System.Net.ServicePointManager]::SecurityProtocol -bor 3072 ; \
    $env:chocolateyUseWindowsCompression = 'true' ; \
    iex ((New-Object System.Net.WebClient).DownloadString( \
      'https://chocolatey.org/install.ps1')) ; \
    choco install chocolatey.pkg.config -y --ignore-detected-reboot ; \
    if ( @(0, 1605, 1614, 1641, 3010) -contains $LASTEXITCODE ) { \
      refreshenv; } else { exit $LASTEXITCODE; } ; \
    Remove-Item 'chocolatey.pkg.config'

To install packages with chocolatey, you can simply pass them as a list, or install them one at a time if you need to pass unique parameters to each package. In our situation, we used a manifest file in XML format (a chocolatey packages.config) that lists the required packages and their parameters. Its contents look like this (the exact package ids depend on the project; the ones below are illustrative, with nuget.commandline needed for the restore step later):

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="nuget.commandline" />
  <package id="git" />
  <package id="7zip" />
</packages>
Next, we install the application's build environment, namely MS Build Tools 2019, a lightweight version of Visual Studio 2019 that contains the minimum set of components needed to compile code.
To fully work with our C++ project, we need additional components, namely:

  • The C++ build tools workload
  • The v141 toolset
  • Windows 10 SDK (10.0.17134.0)

You can install this extended set of tools unattended using a configuration file in JSON format. A complete list of available components can be found in the Microsoft Visual Studio documentation. The contents of the configuration file:

{
  "version": "1.0",
  "components": [
    "Microsoft.Component.MSBuild",
    "Microsoft.VisualStudio.Workload.VCTools;includeRecommended",
    "Microsoft.VisualStudio.Component.VC.v141.x86.x64",
    "Microsoft.VisualStudio.Component.Windows10SDK.17134"
  ]
}

The installation script is executed in the dockerfile, and for convenience the path to the build tools executables is added to the PATH environment variable. It is also advisable to delete unnecessary files and directories in order to reduce the size of the image.

COPY buildtools.config.json .
RUN Invoke-WebRequest 'https://aka.ms/vs/16/release/vs_BuildTools.exe' \
      -OutFile '.\vs_buildtools.exe' -UseBasicParsing ; \
    Start-Process -FilePath '.\vs_buildtools.exe' -Wait -ArgumentList \
      '--quiet --norestart --nocache --config C:\buildtools.config.json' ; \
    Remove-Item '.\vs_buildtools.exe' ; \
    Remove-Item '.\buildtools.config.json' ; \
    Remove-Item -Force -Recurse \
      'C:\Program Files (x86)\Microsoft Visual Studio\Installer' ; \
    $env:PATH = 'C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\MSBuild\Current\Bin;' + $env:PATH ; \
    [Environment]::SetEnvironmentVariable('PATH', $env:PATH, \
      [EnvironmentVariableTarget]::Machine)

At this stage, our image for compiling a C++ application is ready, and we can proceed directly to the docker multi-stage build of the application itself.
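Before using the image, it is worth a quick sanity check that the build tools ended up on PATH (the image name buildtools:1809 is the illustrative tag used earlier):

```shell
# If PATH was set correctly in the image, this prints the MSBuild version.
docker run --rm buildtools:1809 msbuild -version
```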

Multi-stage in action

As the build image, we will use the image we just created, with all the tools on board. As in the previous dockerfile script, we add the ability to set the version/tag of the image dynamically, for easy code reuse. It is important to give the build image an alias, builder, in the FROM instruction.

ARG WINDOWS_OS_VERSION=1809
FROM buildtools:$WINDOWS_OS_VERSION as builder

Now it is time to build the application. Everything is quite simple here: copy the source code and everything related to it, and start the compilation.

COPY myapp .
RUN nuget restore myapp.sln ; \
    msbuild myapp.sln /t:myapp /p:Configuration=Release

The final step in producing the final image is to specify the base image for the application, into which all the build artifacts and configuration files will be placed. To copy the compiled files from the intermediate build image, specify the --from=builder parameter in the COPY instruction.

FROM mcr.microsoft.com/windows/servercore:$WINDOWS_OS_VERSION

COPY --from=builder C:/x64/Release/myapp/ ./
COPY ./configs ./

Now it remains to add the dependencies our application needs at runtime and to specify the start command via the ENTRYPOINT or CMD instruction.
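For example, the final stage might end like this (myapp.exe is a placeholder for the actual binary name, located in the working directory where the artifacts were copied):

```dockerfile
# Exec form avoids an extra cmd.exe wrapper around the process.
ENTRYPOINT ["myapp.exe"]
```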

Conclusion

In this article, I showed how to create a full-fledged compilation environment for C++ applications inside a container under Windows, and how to use docker multi-stage builds to produce complete images of our application.
