Building a Web Server


Foreword

I happen to work on a large number of sites, and the tasks vary widely – from server configuration to "make up a form". On one project, a task arose: upgrade to the current version of PHP (8.1 at the time of writing), upgrade to the current version of the CMS (1C-Bitrix), and, in general, tidy everything up.
Since the project had accumulated a significant amount of functionality not directly related to the site itself (incremental and full scheduled backups with upload to the cloud, dictionary compilation, synchronization with different providers), and work is carried out in three environments (local, staging, and production), I decided this was a good opportunity to move the entire infrastructure to Docker containers.
Since the technology is well established, I expected to find a ready-made server template that would suit our needs out of the box. But after searching, no complete solution turned up – each option had some nuance that made it a poor fit. As a result, we assembled our own server for a site running on 1C-Bitrix. Everything specific to that CMS has since been stripped out, so the server can now be used for other projects without restrictions.

The code is available on GitHub.

Server Components

For the full operation of the server, we need the following components:

  • database (MySQL);

  • PHP;

  • NGINX;

  • proxy for sending mail (msmtp);

  • composer;

  • letsencrypt certificates;

  • backup and recovery;

  • optionally – a cloud for storing backups.

We also need to schedule various actions to run. For this, crontab will be used on the host, not in the containers.

Before starting work

On the server, we need docker-compose; see the official Docker documentation for installation instructions.

We will also need access to an SMTP mail service and, optionally, S3 storage for backups.

About Gmail SMTP

In June 2022, Google suspended access for "less secure apps" (those that authorize with only the account password). To keep using Gmail SMTP, you need to enable two-factor authentication in your account settings, create a separate app password for the site, and use that instead. Google provides detailed instructions.

Services and environments

For flexibility in setting up the server, we create four separate compose-*.yml files:

  • compose-app.yml – the main services of our application (database, php, nginx, composer);

  • compose-https.yml – for site operation via https protocol. Includes certbot, as well as http to https redirect rules for nginx;

  • compose-cloud.yml – for storing backups in cold storage;

  • compose-production.yml – overrides restart rules for all containers.

compose-app.yml
version: '3'
services:
  db:
    image: mysql
    container_name: database
    restart: unless-stopped
    tty: true
    environment:
      MYSQL_DATABASE: ${DB_DATABASE}
      MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
      MYSQL_USER: ${DB_USER}
      MYSQL_PASSWORD: ${DB_USER_PASSWORD}
    volumes:
      - ./.backups:/var/www/.backups
      - ./.docker/mysql/my.cnf:/etc/mysql/my.cnf
      - database:/var/lib/mysql
    networks:
      - backend

  app:
    image: php:8.1-fpm
    container_name: application
    build:
      context: .
      dockerfile: Dockerfile
      args:
        GID: ${SYSTEM_GROUP_ID}
        UID: ${SYSTEM_USER_ID}
        SMTP_HOST: ${MAIL_SMTP_HOST}
        SMTP_PORT: ${MAIL_SMTP_PORT}
        SMTP_EMAIL: ${MAIL_SMTP_USER}
        SMTP_PASSWORD: ${MAIL_SMTP_PASSWORD}
    restart: unless-stopped
    tty: true
    working_dir: /var/www/app
    volumes:
      - ./app:/var/www/app
      - ./log:/var/www/log
      - ./.docker/php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - backend
    links:
      - "webserver:${APP_NAME}"

  composer:
    build:
      context: .
    image: composer
    container_name: composer
    working_dir: /var/www/app
    command: "composer install"
    restart: "no"
    depends_on:
      - app
    volumes:
      - ./app:/var/www/app

  webserver:
    image: nginx:stable-alpine
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./app/public:/var/www/app/public
      - ./log:/var/www/log
      - ./.docker/nginx/default.conf:/etc/nginx/includes/default.conf
      - ./.docker/nginx/templates/http.conf.template:/etc/nginx/templates/website.conf.template
    environment:
      - APP_NAME=${APP_NAME}
    networks:
      - frontend
      - backend

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge

volumes:
  database:
compose-https.yml
version: '3'
services:
  webserver:
    volumes:
      - ./.docker/certbot/conf:/etc/letsencrypt
      - ./.docker/certbot/www:/var/www/.docker/certbot/www
      - ./.docker/nginx/templates/https.conf.template:/etc/nginx/templates/website.conf.template

  certbot:
    image: certbot/certbot
    container_name: certbot
    restart: "no"
    volumes:
      - ./log/letsencrypt:/var/www/log/letsencrypt
      - ./.docker/certbot/conf:/etc/letsencrypt
      - ./.docker/certbot/www:/var/www/.docker/certbot/www
compose-cloud.yml
version: '3'
services:
  cloudStorage:
    image: efrecon/s3fs
    container_name: cloudStorage
    restart: unless-stopped
    cap_add:
      - SYS_ADMIN
    security_opt:
      - 'apparmor:unconfined'
    devices:
      - /dev/fuse
    environment:
      AWS_S3_BUCKET: ${AWS_S3_BUCKET}
      AWS_S3_ACCESS_KEY_ID: ${AWS_S3_ACCESS_KEY_ID}
      AWS_S3_SECRET_ACCESS_KEY: ${AWS_S3_SECRET_ACCESS_KEY}
      AWS_S3_URL: ${AWS_S3_URL}
      AWS_S3_MOUNT: '/opt/s3fs/bucket'
      S3FS_ARGS: -o use_path_request_style
      GID: ${SYSTEM_GROUP_ID}
      UID: ${SYSTEM_USER_ID}
    volumes:
      - ${AWS_S3_LOCAL_MOUNT_POINT}:/opt/s3fs/bucket:rshared
compose-production.yml
version: '3'
services:
  db:
    restart: always

  app:
    restart: always

  webserver:
    restart: always

  cloudStorage:
    restart: always

And we define the environment settings in the .env file:

.env
COMPOSE_FILE=compose-app.yml:compose-cloud.yml:compose-https.yml:compose-production.yml
SYSTEM_GROUP_ID=1000
SYSTEM_USER_ID=1000

APP_NAME=example.local
ADMINISTRATOR_EMAIL=example@gmail.com

DB_HOST=db
DB_DATABASE=example_db
DB_USER=example
DB_USER_PASSWORD=example
DB_ROOT_PASSWORD=example

AWS_S3_URL=http://storage.example.net
AWS_S3_BUCKET=storage
AWS_S3_ACCESS_KEY_ID=#YOU_KEY#
AWS_S3_SECRET_ACCESS_KEY=#YOU_KEY_SECRET#
AWS_S3_LOCAL_MOUNT_POINT=/mnt/s3backups

MAIL_SMTP_HOST=smtp.gmail.com
MAIL_SMTP_PORT=587
MAIL_SMTP_USER=example@gmail.com
MAIL_SMTP_PASSWORD=example

Depending on which set of services we need in a particular environment, we list the corresponding compose-*.yml files in the COMPOSE_FILE variable.
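The helper scripts later in this article rely on the same variable to detect which services are active. A minimal sketch of that check (the COMPOSE_FILE value here is an example matching the .env above, not read from a real file):

```shell
#!/bin/bash
# Example value, as it would be loaded from .env via `source`
COMPOSE_FILE="compose-app.yml:compose-cloud.yml:compose-https.yml"

# The backup and crontab scripts use a bash substring match like this
# to decide whether cloud storage is part of the current environment.
if [[ $COMPOSE_FILE == *"compose-cloud.yml"* ]]; then
    echo "cloud storage enabled"
else
    echo "cloud storage disabled"
fi
```

Because the check is a plain substring match, adding or removing one file name in .env is enough to toggle the related behavior in every script.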

In the .docker/ directory we store the settings for all services used by the application. Two of them are worth noting:

  • For nginx we use a rules file .docker/nginx/default.conf and two templates (.docker/nginx/templates/http.conf.template and .docker/nginx/templates/https.conf.template). Depending on which protocol we are serving, the appropriate nginx settings are used. Templates are described in detail on the nginx image page;

  • For msmtp, the file .docker/msmtp/msmtprc contains placeholders of the form #PASSWORD#, which are replaced when the image is built.

.docker/msmtp/msmtprc
# Set default values for all following accounts.
defaults
auth           on
tls            on
logfile        /var/www/log/msmtp/msmtp.log
timeout        5

account        docker
host           #HOST#
port           #PORT#
from           #EMAIL#
user           #EMAIL#
password       #PASSWORD#

# Set a default account
account default : docker

Create a Dockerfile in which we describe the build specifics and, as mentioned earlier, set the msmtp connection parameters from environment variables:

Dockerfile
FROM php:8.1-fpm

ARG GID
ARG UID
ARG SMTP_HOST
ARG SMTP_PORT
ARG SMTP_EMAIL
ARG SMTP_PASSWORD

USER root

WORKDIR /var/www

RUN apt-get update -y \
    && apt-get autoremove -y \
    && apt-get install -y --no-install-recommends \
    msmtp \
    zip \
    unzip \
    && rm -rf /var/lib/apt/lists/*

COPY ./.docker/msmtp/msmtprc /etc/msmtprc

RUN sed -i "s/#HOST#/$SMTP_HOST/" /etc/msmtprc \
        && sed -i "s/#PORT#/$SMTP_PORT/" /etc/msmtprc \
        && sed -i "s/#EMAIL#/$SMTP_EMAIL/" /etc/msmtprc \
        && sed -i "s/#PASSWORD#/$SMTP_PASSWORD/" /etc/msmtprc

COPY --from=composer /usr/bin/composer /usr/bin/composer

RUN (getent group www || groupadd -g $GID www) \
    && (getent passwd $UID || useradd -u $UID -m -s /bin/bash -g www www)

USER www

EXPOSE 9000

CMD ["php-fpm"]
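The placeholder substitution that the Dockerfile performs on /etc/msmtprc can be tried in isolation. A standalone sketch of the same sed calls, run against a temporary copy of the config (the SMTP values here are the example ones from .env, not real credentials):

```shell
#!/bin/bash
# Build a throwaway config with the same placeholders as msmtprc
tmpdir=$(mktemp -d)
cat > "$tmpdir/msmtprc" <<'EOF'
host     #HOST#
port     #PORT#
user     #EMAIL#
password #PASSWORD#
EOF

# Example values, as they would arrive via ARG at build time
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_EMAIL=example@gmail.com
SMTP_PASSWORD=secret

# Same substitutions as in the Dockerfile, applied to the temp file
sed -i "s/#HOST#/$SMTP_HOST/" "$tmpdir/msmtprc"
sed -i "s/#PORT#/$SMTP_PORT/" "$tmpdir/msmtprc"
sed -i "s/#EMAIL#/$SMTP_EMAIL/" "$tmpdir/msmtprc"
sed -i "s/#PASSWORD#/$SMTP_PASSWORD/" "$tmpdir/msmtprc"

cat "$tmpdir/msmtprc"
```

Note that because the password is passed as a build ARG, it ends up in the image history (`docker history`); for production you may prefer runtime secrets, but that is outside the scope of this article.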

Backup

A backup consists of two parts: an archive with the files and a database dump. We can store them locally or send them to the cloud. Backups are created by the cgi-bin/create-backup.sh script and restored by cgi-bin/restore-backup.sh. If cloud storage is connected, the restore script will offer to restore from it:

create-backup.sh
#!/bin/bash

BASEDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")/../" &> /dev/null && pwd)

source "$BASEDIR/.env"

cd "$BASEDIR/"

# If the script is run with --local, don't send the backup to remote storage
moveToCloud="Y"
while [ $# -gt 0 ] ; do
    case $1 in
        --local) moveToCloud="N";;
    esac
    shift
done

# If cloud storage is not enabled, store backups locally regardless
if ! [[ $COMPOSE_FILE == *"compose-cloud.yml"* ]]; then
    moveToCloud="N"
fi

# Current date, 2022-01-25_16-10
timestamp=`date +"%Y-%m-%d_%H-%M"`
backups_local_folder="$BASEDIR/.backups/local"
backups_cloud_folder="$AWS_S3_LOCAL_MOUNT_POINT"

# Creating local folder for backups
mkdir -p "$backups_local_folder"

# Creating backup of application
tar \
    --exclude="vendor" \
    -czvf $backups_local_folder/"$timestamp"_app.tar.gz \
    -C $BASEDIR "app"

# Creating backup of database
docker exec database sh -c "exec mysqldump -u root -h $DB_HOST -p$DB_ROOT_PASSWORD $DB_DATABASE" > $backups_local_folder/"$timestamp"_database.sql
gzip $backups_local_folder/"$timestamp"_database.sql

# If required, then move current backup to cloud storage
if [ $moveToCloud == "Y" ]; then
    mv $backups_local_folder/"$timestamp"_database.sql.gz $backups_cloud_folder/"$timestamp"_database.sql.gz
    mv $backups_local_folder/"$timestamp"_app.tar.gz $backups_cloud_folder/"$timestamp"_app.tar.gz
fi

# If we already moved backup to cloud, then remove old backups (older than 30 days) from cloud storage
if [ $moveToCloud == "Y" ]; then
    /usr/bin/find $backups_cloud_folder/ -type f -mtime +30 -exec rm {} \;
fi
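The archive half of create-backup.sh can be exercised on its own. A minimal sketch of the pack/unpack round trip, using temporary example directories rather than the real project layout (assumes GNU tar, whose `--exclude` pattern matches any path component):

```shell
#!/bin/bash
# Build a tiny fake project: application code plus a vendor/ directory
workdir=$(mktemp -d)
mkdir -p "$workdir/app/public" "$workdir/app/vendor" "$workdir/.backups/local"
echo "hello" > "$workdir/app/public/index.php"
echo "dep"   > "$workdir/app/vendor/lib.php"

# Same tar invocation shape as create-backup.sh: timestamped name,
# vendor/ excluded (composer can always reinstall it)
timestamp=$(date +"%Y-%m-%d_%H-%M")
tar --exclude="vendor" \
    -czf "$workdir/.backups/local/${timestamp}_app.tar.gz" \
    -C "$workdir" "app"

# Restore into a fresh directory, as restore-backup.sh does with -C "$BASEDIR"
restore=$(mktemp -d)
tar -xzf "$workdir/.backups/local/${timestamp}_app.tar.gz" -C "$restore"
```

Excluding vendor/ keeps archives small; after a restore, the composer service from compose-app.yml rebuilds it with `composer install`.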
restore-backup.sh
#!/bin/bash

BASEDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")/../" &> /dev/null && pwd)

source "$BASEDIR/.env"

cd "$BASEDIR/"

backupsDestination="$BASEDIR/.backups/local"

# If backups storage is mounted, ask, from where will restore backups
if [[ $COMPOSE_FILE == *"compose-cloud.yml"* ]]; then
    while true
    do
        reset
        echo "Select backups destination:"
        echo "1. Local;"
        echo "2. Cloud;"
        echo "---------"
        echo "0. Exit"

        read -r choice

        case $choice in
            "0")
                exit
                ;;
            "1")
                break
                ;;
            "2")
                backupsDestination="$AWS_S3_LOCAL_MOUNT_POINT"
                break
                ;;
            *)
                ;;
        esac
    done
fi
reset

# Select backup for restore
echo "Available backups:"
find "$backupsDestination"/*.gz  -printf "%f\n"
echo "------------"
echo "Enter backup path:"

read -i "$backupsDestination"/ -e backup_name

if ! [ -f "$backup_name" ]; then
    echo "Wrong backup path."
    exit 1
fi


backup_mode="unknown"
if [[ $backup_name == *"app.tar.gz"* ]]; then
    backup_mode="app"
elif [[ $backup_name == *"database.sql.gz"* ]]; then
    backup_mode="database"
fi

if [ $backup_mode == "unknown" ]; then
    echo "Unknown backup type"
    exit 1
fi

reset

if [ $backup_mode == "app" ]; then
    mkdir -p "$BASEDIR"/.backups/tmp
    cp "$backup_name" "$BASEDIR"/.backups/tmp/app_tmp.tar.gz

    tar -xvf "$BASEDIR"/.backups/tmp/app_tmp.tar.gz -C "$BASEDIR"

    rm -rf "$BASEDIR"/.backups/tmp
fi

if [ $backup_mode == "database" ]; then
    mkdir -p "$BASEDIR"/.backups/tmp
    cp "$backup_name" "$BASEDIR"/.backups/tmp/database_tmp.sql.gz

    gunzip "$BASEDIR"/.backups/tmp/database_tmp.sql.gz

    if ! [ -f "$BASEDIR"/.backups/tmp/database_tmp.sql ]; then
        echo "Error in database unpack process"
        exit 1
    fi

    docker-compose exec db bash -c "exec mysql -u root -p$DB_ROOT_PASSWORD $DB_DATABASE < /var/www/.backups/tmp/database_tmp.sql"

    rm -rf "$BASEDIR"/.backups/tmp
fi

Crontab

Scheduled launches are done on the host side. Initialization is handled by the cgi-bin/prepare-crontab.sh script. During execution, it collects all files from the .crontab directory, replaces the #APP_PATH# placeholder in them with the current application path, and adds them to the crontab on the host.

prepare-crontab.sh
#!/bin/bash

BASEDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")/../" &> /dev/null && pwd)

# Load environment variables
source "$BASEDIR"/.env

# Create temporary directory
mkdir -p "$BASEDIR"/.crontab_tmp/

# Copy all crontab files to temporary directory
cp "$BASEDIR"/.crontab/* "$BASEDIR"/.crontab_tmp/

# Set actual app path in crontab files
find "$BASEDIR"/.crontab_tmp/ -name "*.cron" -exec sed -i "s|#APP_PATH#|$BASEDIR|g" {} +

# Set crontab
if [[ $COMPOSE_FILE == *"compose-https.yml"* ]]; then
    find "$BASEDIR"/.crontab_tmp/ -name '*.cron' -exec cat {} \; | crontab -
else
    find "$BASEDIR"/.crontab_tmp/ -name '*.cron' -not -name 'certbot-renew.cron' -exec cat {} \; | crontab -
fi

# Remove temporary directory
rm -rf "$BASEDIR"/.crontab_tmp/
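The files in .crontab/ are plain crontab fragments with a #APP_PATH# placeholder. A hypothetical example (the file names and schedules here are illustrative, not from the repository; per the script above, files must use the .cron extension, and certbot-renew.cron is skipped when https is disabled):

```
# .crontab/create-backup.cron - daily backup at 03:00, run on the host
0 3 * * * /bin/bash #APP_PATH#/cgi-bin/create-backup.sh >> #APP_PATH#/log/cron.log 2>&1

# .crontab/certbot-renew.cron - attempt certificate renewal twice a day
0 0,12 * * * /bin/bash #APP_PATH#/cgi-bin/certbot-renew.sh >> #APP_PATH#/log/cron.log 2>&1
```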

Certbot

If https is not needed in a given environment, skip this step.
We use certbot to obtain SSL certificates. There is one catch: to confirm ownership of the domain, nginx must be running, but nginx will not start without certificates – a chicken-and-egg problem. The cgi-bin/prepare-certbot.sh script solves it: it creates stub certificates, starts nginx, requests real certificates, installs them, and restarts nginx.
To renew the certificates, we create cgi-bin/certbot-renew.sh, which runs on a schedule.
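The "stub certificate" trick boils down to one openssl call: generate a short-lived self-signed pair so nginx has something to load. A standalone sketch of that step, writing to a temporary directory instead of the real letsencrypt path:

```shell
#!/bin/bash
# Generate a 1-day self-signed certificate, mirroring the openssl
# invocation inside prepare-certbot.sh (key size reduced for speed)
certdir=$(mktemp -d)
openssl req -x509 -nodes -newkey rsa:2048 -days 1 \
    -keyout "$certdir/privkey.pem" \
    -out "$certdir/fullchain.pem" \
    -subj "/CN=localhost" 2>/dev/null
```

Once nginx is up with this stub, certbot can answer the webroot challenge, after which the stub is deleted and replaced by the real certificate.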

prepare-certbot.sh
#!/bin/bash

BASEDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")/../" &> /dev/null && pwd)

source "$BASEDIR/.env"

cd "$BASEDIR/"

if ! [ -x "$(command -v docker-compose)" ]; then
  echo 'Error: docker-compose is not installed.' >&2
  exit 1
fi

domains=($APP_NAME www.$APP_NAME)
rsa_key_size=4096
data_path="$BASEDIR/.docker/certbot"
email=$ADMINISTRATOR_EMAIL
staging=0 # Set to 1 if you're testing your setup to avoid hitting request limits

if [ -d "$data_path" ]; then
  read -p "Existing data found for $domains. Continue and replace existing certificate? (y/N) " decision
  if [ "$decision" != "Y" ] && [ "$decision" != "y" ]; then
    exit
  fi
fi


if [ ! -e "$data_path/conf/options-ssl-nginx.conf" ] || [ ! -e "$data_path/conf/ssl-dhparams.pem" ]; then
  echo "### Downloading recommended TLS parameters ..."
  mkdir -p "$data_path/conf"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/_internal/tls_configs/options-ssl-nginx.conf > "$data_path/conf/options-ssl-nginx.conf"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot/certbot/ssl-dhparams.pem > "$data_path/conf/ssl-dhparams.pem"
  echo
fi

echo "### Creating dummy certificate for $domains ..."
path="/etc/letsencrypt/live/$domains"
mkdir -p "$data_path/conf/live/$domains"
docker-compose run --rm --entrypoint "\
  openssl req -x509 -nodes -newkey rsa:$rsa_key_size -days 1\
    -keyout '$path/privkey.pem' \
    -out '$path/fullchain.pem' \
    -subj '/CN=localhost'" certbot
echo


echo "### Starting nginx ..."
docker-compose up --force-recreate -d webserver
echo

echo "### Deleting dummy certificate for $domains ..."
docker-compose run --rm --entrypoint "\
  rm -Rf /etc/letsencrypt/live/$domains && \
  rm -Rf /etc/letsencrypt/archive/$domains && \
  rm -Rf /etc/letsencrypt/renewal/$domains.conf" certbot
echo


echo "### Requesting Let's Encrypt certificate for $domains ..."
domain_args=""
for domain in "${domains[@]}"; do
  domain_args="$domain_args -d $domain"
done

case "$email" in
  "") email_arg="--register-unsafely-without-email" ;;
  *) email_arg="--email $email" ;;
esac

if [ $staging != "0" ]; then staging_arg="--staging"; fi

docker-compose run --rm --entrypoint "\
  certbot certonly --webroot -w /var/www/.docker/certbot/www \
    $staging_arg \
    $email_arg \
    $domain_args \
    --rsa-key-size $rsa_key_size \
    --agree-tos \
    --force-renewal" certbot
echo

echo "### Reloading nginx ..."
docker-compose exec webserver nginx -s reload
certbot-renew.sh
#!/bin/bash

BASEDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")/../" &> /dev/null && pwd)

cd "$BASEDIR/"

docker-compose run --rm certbot renew && docker-compose kill -s SIGHUP webserver
docker system prune -af

At this stage, the site is up and you can continue working on it.

A step-by-step installation guide and a description of the variables are available on GitHub.
