GitHub Actions for a startup

The main appeal of working at a startup is the variety of tasks that have to be solved in very little time on a minimal budget. Those constraints push you to find and invent interesting solutions that larger companies would never accept as standard.

Today we’ll talk about how you can automate test environment updates, build testing and database backups so cheaply that it’s actually free.

GitHub Actions logo

Hello, Habr readers!

At some point we got tired of manually merging branches onto the test server’s virtual machine and typing out backup commands for every database. And for the sake of a small hotfix, nobody wanted to bring Docker up, restart the services, rebuild the container and re-run the tests by hand. We did our research, not only in our own mind palaces but even on the second page of the search results, and arrived at a solution that suited us both in price and in capabilities: GitHub Actions.

So, here is what we have: a solution with a microservice architecture (where would we be without one these days), consisting of 10 containers. Among them are PostgreSQL databases, backend services on .NET Core, an Nginx service, a small Telegram bot for quickly collecting data about possible problems in the solution, and one container with a static page, which we want to move not just out of the architecture but off the virtual machine entirely before release. Naturally, all of this runs inside Docker. There is also a test project that we use to verify all the main modules of the solution.

Running autotests

Let’s start by creating a small workflow to run our tests on GitHub. We found plenty of examples of running unit tests for .NET, but unit tests alone will not get you far: most of our solution is covered by integration tests. So we build our main services and run the tests directly on GitHub.

Here is an example of such a workflow, with comments:

# Workflow name
name: .NET Core

# When the workflow runs (triggers)
on:
  push:
    # on push to master
    branches: [ master ]

  pull_request:
    # when a pull request targeting master is opened
    branches: [ master ]

# What we are going to do (jobs)
jobs:
  # Job name, anything you like
  integration-tests:
    # Which OS the virtual machine runs on
    # Ubuntu, Windows Server or macOS are available
    runs-on: ubuntu-latest
    # Job steps
    steps:
      # Step 1: build the services in test mode
      - uses: actions/checkout@v2
      - name: Build the stack
        run: docker-compose -f docker-compose.prod.yml -f docker-compose.test.yml up -d --build
 
      # Step 2: build the test project
      - name: Build tests
        run: dotnet build LOT.Tests

      # Step 3: run the tests with some verbosity
      - name: Run tests
        run: dotnet test LOT.Tests --verbosity normal

After saving the file in the repository at the path RepositoryName/.github/workflows/FileName.yml, we can go to the “Actions” tab and find our new workflow there:

.NET Core is our new workflow

If everything is configured correctly, then on a pull request to master or a commit directly to master (for the latter there is a dedicated circle of hell), GitHub launches the workflow, which walks through the steps: first it builds and starts your containers, then builds and runs the test project, streaming everything from the virtual machine’s console straight to your browser. If you like, you can attach a GitHub bot that reports test problems, re-runs the workflow when a test fails, or notifies you in Slack / Telegram. You are limited only by imagination and time. Tests are good, but what about delivery automation?
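As a sketch of such a notification, here is a hedged example of a failure-reporting step using the third-party appleboy/telegram-action from the marketplace (the secret names TELEGRAM_TO and TELEGRAM_TOKEN are our own convention, not built-ins, and you would store them as repository secrets):

```yaml
      # Notify Telegram only when a previous step in the job has failed
      - name: Notify about failed tests
        if: failure()
        uses: appleboy/telegram-action@master
        with:
          to: ${{ secrets.TELEGRAM_TO }}
          token: ${{ secrets.TELEGRAM_TOKEN }}
          message: "Tests failed in ${{ github.repository }} (${{ github.sha }})"
```

The `if: failure()` condition is what keeps the step silent on green runs.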

Delivery automation

GitHub Actions has everything you need here as well, but to fully understand the setup it is worth a short tour of GitHub’s capabilities:

  • GitHub Packages – stores build artifacts; in our case we use it as an external Docker registry. Details can be found here: https://github.com/features/packages

  • Secrets – “secrets” at the repository or organization level. They let you entrust GitHub with valuables such as logins, passwords and tokens. The advantage of secrets is that even if access to the account is lost, no one gains access to them: GitHub shows only their names, and the values themselves become invisible to everyone once you click “Save”. More about secrets: https://docs.github.com/en/actions/reference/encrypted-secrets

  • Third-party GitHub Actions. The marketplace of actions is growing very quickly: every month brings hundreds of new integrations and possibilities for developers – Jira, Azure, Telegram, Slack, and so on. At the time of writing, the marketplace held over 9,000 actions.

We will use all of these features to implement a full-fledged CI pipeline that updates the test environment.

Finding suitable actions

We will need actions to log in to GitHub Packages, build the microservice images, and publish those images to GitHub Packages. We also need an action that can open an SSH connection to our virtual machine and execute arbitrary commands.

After a short search of the marketplace and a look at the documentation, we settled on the following list of actions:

  • docker/setup-qemu-action@v1 – add-on for virtualization

  • docker/setup-buildx-action@v1 – Docker module for building images

  • docker/login-action@v1 – action for login to Docker Registry (in our case – Github Packages)

  • docker/build-push-action@v2 – an action to build and publish an image

  • appleboy/ssh-action@master – an action to initialize an SSH connection and execute a script

Setting up secrets

One of the few cases where you can show a screenshot from a private production repository.

“Secrets” of the repository

Small explanations:

REGISTRY_TOKEN – a token for authorization in GitHub Packages. You can create one here: https://github.com/settings/tokens

SERVER_HOST – Server IP address in the form “192.168.100.100”.

SERVER_KEY – PEM key to connect to the server. Usually issued by your virtual machine provider.

SERVER_PORT – server port for SSH connection. The default is 22.

SERVER_USERNAME – the name of the user under which we log in to the virtual machine.

As described above, there is no way for us to see what any given secret equals: GitHub does not even offer a button to view the values.

It’s time to prepare the environment: let’s write a docker-compose file and log in to GitHub Packages. After the images are built and published to GitHub Packages, we need to connect to the remote machine, pull the new builds and start them. For that we will use almost the same docker-compose file as for the local build and run, with two edits. First, as the “image” we will use the GitHub Packages link where each build is published. Second, we will remove everything build-related from the docker-compose file: the server is no longer responsible for building, so those parameters are simply not needed there.

As a result, we got something like the following file (some parameters are replaced with “template” values):

version: "3.4"
services:
  template-server:
    image: ghcr.io/template-inc/prod-template-server:latest
    environment:
      DB_CONNECTION_STRING: "${DB_CONNECTION_STRING}"
      DOCKER: "true"
      EMAIL_SERVER: "http://template-email-server"
    restart: always
    links:
      - template-db
    networks:
      template-network: 

  template-db:
    image: postgres:11
    volumes:
      - db-volume:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    restart: always
    networks:
      template-network:

  template-nginx:
    image: ghcr.io/template-inc/prod-template-nginx:latest
    restart: always
    networks:
      template-network:
    depends_on:
      - template-server
    ports:
      - "90:80" 

networks:
  template-network:

volumes:
  db-volume:

A small note: the image link must not contain any capital letters. Even if the repository name or login contains capitals, lower-case them in the link; otherwise you will run into problems.
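If your GitHub user or organization name does contain capitals, you can lower-case it on the fly instead of hard-coding the image name. A minimal shell sketch (the `Template-Inc` owner name here is a made-up example):

```shell
#!/bin/bash
# Example owner name; in a workflow you could take it from
# the GITHUB_REPOSITORY_OWNER environment variable instead.
OWNER="Template-Inc"

# tr maps every upper-case character to its lower-case counterpart,
# producing a registry path that ghcr.io will accept.
IMAGE="ghcr.io/$(echo "$OWNER" | tr '[:upper:]' '[:lower:]')/prod-template-server:latest"

echo "$IMAGE"
```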

Let’s put this file directly in the home directory of the virtual machine, so the startup script stays simple. For this example, call it “docker-compose.prod-ci.yml”. And since we are already on the machine, let’s log in to GitHub Packages right away:

docker login ghcr.io --username yourGithubUsername --password yourGithubToken

It is important to use the username, not the email. For some reason this matters to GitHub, and we got burned by it the first time, even though we received a message about successful authorization. (Docker itself will also warn that passing the password on the command line is insecure; piping the token in via --password-stdin avoids that.) This completes the work on the server.

Writing a workflow

We simply gather everything we learned while reading the documentation of the corresponding actions and aggregate it in one file:

name: 'build and deploy test server'
on:
  release:
    types: [published]
  workflow_dispatch:
jobs:
  build:
    name: 'Build & Publish'
    runs-on: ubuntu-latest
    steps:
      - name: "Checkout repository"
        uses: actions/checkout@v2
        
      - name: "Set up QEMU"
        uses: docker/setup-qemu-action@v1
        
      - name: "Set up Docker Buildx"
        uses: docker/setup-buildx-action@v1

      - name: "Login to GitHub Registry"
        uses: docker/login-action@v1 
        with:
          registry: ghcr.io
          username: "yourGithubUsername"
          password: ${{ secrets.REGISTRY_TOKEN }}
          
      - name: "Build&Deploy template-server"
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: |
            ghcr.io/${{ github.repository_owner }}/prod-template-server:${{ github.event.release.tag_name }}
            ghcr.io/${{ github.repository_owner }}/prod-template-server:latest
          secrets: |
            "ASPNETCORE_ENVIRONMENT=Release"
          build-args: |
            build_mode=Release
            
      - name: "Build&Deploy template-nginx"
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: |
            ghcr.io/${{ github.repository_owner }}/prod-template-nginx:${{ github.event.release.tag_name }}
            ghcr.io/${{ github.repository_owner }}/prod-template-nginx:latest
          build-args: |
            build_mode=Release

      - name: "Run deploy on server"
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: ${{ secrets.SERVER_USERNAME }}
          key: ${{ secrets.SERVER_KEY }}
          port: ${{ secrets.SERVER_PORT }}
          script: |
            sudo docker-compose -f docker-compose.prod-ci.yml -p prod pull
            sudo docker-compose -f docker-compose.prod-ci.yml -p prod up -d

Let’s save this file in the repository at the path .github/workflows/test-server-ci.yml

This completes the setup. Go to the “Actions” tab and you will see the new “build and deploy test server” workflow. It is triggered by two events: publishing a new release and a manual launch.

You can launch the workflow manually straight from its page: click “Run workflow”, select the branch to build from, and run it.

Launching our actions

To fire the release trigger, go to the “Releases” section of the repository and create a new release by clicking “Draft a new release”:

New release button

Let’s fill in the basic information about the new release:

New release description form

Immediately after you click “Publish release”, the trigger fires and all the steps we described begin: the GitHub virtual machine clones the repository, logs in to GitHub Packages, installs the tools Docker needs, builds the images and publishes them, then connects to our virtual machine and executes a script that logs in to GitHub Packages, pulls the latest service builds and deploys them. And you can watch the whole process live in the run console on GitHub. Isn’t that wonderful?

Database backups

What are the chances of never losing your data if you ship 2-3 updates a week, many of which migrate the database?

Spoiler alert: not much chance

So almost immediately we faced the question of automating database backups, especially since we have several databases, and the ready-made management systems range from expensive to very expensive. Once again, GitHub Actions to the rescue!

In the previous example we looked at connecting to a server and executing scripts. That seems to be exactly what we need here, because the database ports are closed and connecting to them from the outside is not a good idea.

Let’s start by laying out directories on the server: at the root, create a db_backup directory, and inside it template-db and template-identity-db.

In the db_backup directory, let’s create a small script:

#!/bin/bash

# db username
dbUsername="<your db username>"

# container name prefix and suffix
containerPrefix="prod_"
containerPostfix="_1"

# container names
templateDb="template-db"
templateIdentityDb="template-identity-db"

# create template-db backup
cd $templateDb || exit 1
docker exec -t $containerPrefix$templateDb$containerPostfix pg_dumpall -c -U $dbUsername > dump_$(date +%Y-%m-%d_%H_%M_%S).sql
echo "Backup of" $templateDb "created successfully"
cd ../

# create template-identity-db backup
cd $templateIdentityDb || exit 1
docker exec -t $containerPrefix$templateIdentityDb$containerPostfix pg_dumpall -c -U $dbUsername > dump_$(date +%Y-%m-%d_%H_%M_%S).sql
echo "Backup of" $templateIdentityDb "created successfully"
cd ../

echo "All database backups created successfully!"

This is the most we managed to achieve in 15 minutes of googling the general syntax of shell scripts, but it is enough for us. Let’s save the file as create_backups.sh
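A backup that silently comes out empty is worse than no backup, so a small sanity check is worth having. This helper is our own addition, not part of the original script; it defines a function you could append to create_backups.sh:

```shell
#!/bin/bash
# check_latest_backup <dir>: succeeds if the newest dump_*.sql file
# in <dir> exists and is non-empty, fails otherwise.
check_latest_backup() {
    local latest
    # ls -t sorts by modification time, newest first
    latest=$(ls -t "$1"/dump_*.sql 2>/dev/null | head -n 1)
    if [ -z "$latest" ] || [ ! -s "$latest" ]; then
        echo "Backup check FAILED in $1"
        return 1
    fi
    echo "Latest backup $latest looks OK"
}
```

It could be called as, for example, `check_latest_backup template-db` right after each dump is written.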

Now let’s go to the repository and create a new workflow similar to the one above:

name: 'Create prod database backups'
on:
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
  
  # Run action every day
  schedule:
  - cron: "0 0 * * *"

jobs:
  build:
    name: 'Run backups'
    runs-on: ubuntu-latest
    steps:
      - name: "Run backup script on server"
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: ${{ secrets.SERVER_USERNAME }}
          key: ${{ secrets.SERVER_KEY }}
          port: ${{ secrets.SERVER_PORT }}
          script: |
            cd db_backup/
            sudo bash create_backups.sh

By the way, this material introduces a new trigger type: schedule. Such triggers run on a schedule, with the launch time described in the “cron” format (https://ru.wikipedia.org/wiki/Cron): hated by many, yet an excellent format for expressing time rules.
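A few example schedule expressions, for orientation (note that GitHub evaluates cron schedules in UTC and runs them only against the default branch):

```yaml
on:
  schedule:
    - cron: "0 0 * * *"    # every day at 00:00 UTC (what we use)
    - cron: "30 5 * * 1-5" # weekdays at 05:30 UTC
    - cron: "0 */6 * * *"  # every 6 hours
```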

So, what we end up with: once a day (at midnight UTC) a trigger fires on GitHub and starts an action, which in turn establishes a connection to our server and runs the backup script. The script walks through the database containers and creates backups in the corresponding directories. We can also launch backups manually from the page of the corresponding workflow. All that remains is to figure out where to ship the backups for long-term storage.
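Until that long-term storage exists, a simple retention rule can keep the directories from growing without bound. This is a sketch of our own, not part of the article's original scripts; the 14-day window is an arbitrary choice, and the script assumes db_backup sits in the login user's home directory, as in the workflow above:

```shell
#!/bin/bash
# Directory that holds the per-database backup folders
backupRoot="$HOME/db_backup"

if [ -d "$backupRoot" ]; then
    # Delete dumps not modified for more than 14 full days.
    # -name limits the deletion to our dump naming pattern,
    # so nothing else in the directory tree is touched.
    find "$backupRoot" -name "dump_*.sql" -mtime +14 -delete
fi
```

It could run as an extra line in the scheduled workflow's script block, right after create_backups.sh.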

By the way, automated backup creation from 5 containers takes us 13-15 seconds, including the connection to the server. Not bad, is it?

Conclusion

Today we talked about automation options for cheaply and quickly implementing fairly simple scenarios: running integration tests, updating a test environment, creating backups. Of course, these workflows and scripts can and should be improved, but I hope that Habr readers can use this material as a small starting example for automating the routine.
