A CentOS-based Linux LiveCD and techniques for using it with PXE boot via Foreman

Getting a Managed LiveCD Build and Delivery System

Linux distribution makers offer user-friendly operating system images that can be run without installation, but generic builds are not well suited to hosting tasks. Let’s talk about how we at HOSTKEY created our own CentOS-based LiveCD.

Without a so-called live Linux system, a hosting company cannot handle its routine technical tasks: LiveCDs reduce the load on engineers, improve the stability of service delivery and simplify changes to the deployment process. Unfortunately, the universal images available on the Web did not suit our needs well, so we built our own, called SpeedTest. At first we used it to measure machine performance during decommissioning, but its functionality was later expanded to handle other tasks on a wide variety of hardware.

As our needs grew, the shortcomings of an image with baked-in static scripts became apparent. The main one was that developing the product was neither simple nor convenient: we had no build system of our own, no way to add support for new (or old) hardware, and no way to change the behavior of the same image under different launch conditions.

Managing the software composition of the image

Since our infrastructure mostly ran CentOS (version 7 at the time), we based our regular Jenkins image builds on this distribution. Image building on RHEL/CentOS is easily automated with Anaconda Kickstart. The kickstart structure is covered in detail in the RHEL documentation, so we will not retell it here, but a few points are worth highlighting.

The header part of the kickstart file is standard, except for the description of the repositories from which the software for the image will be pulled. This block contains directives such as:

repo --name=elrepo --baseurl=http://elrepo.reloumirrors.net/elrepo/el8/x86_64/
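
For reference, the rest of the header consists of ordinary kickstart directives; a minimal sketch (all values here are illustrative, not our production settings) might look like this:

lang en_US.UTF-8
keyboard us
timezone Etc/UTC
selinux --disabled
firewall --disabled
network --bootproto=dhcp --device=link --activate
rootpw --plaintext changeme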

In the %packages block we add the --excludedocs directive and, to keep the image small, base it on @core and list the packages to exclude:

%packages --excludedocs
@core
vim
-audit
-man-db
-plymouth
-selinux-policy-targeted
-tuned
-alsa-firmware
-iwl1000-firmware
-iwl105-firmware
-iwl100-firmware
-iwl135-firmware
-iwl2000-firmware
-iwl2030-firmware
-iwl5000-firmware
-iwl3160-firmware
-iwl5150-firmware
-iwl6000-firmware
-iwl6050-firmware
-iwl6000g2a-firmware
-iwl7260-firmware

The image from the example above will include the @core group plus the vim package with its dependencies, while a number of unnecessary packages are excluded. In the %post and %post --nochroot steps the configuration is fine-tuned by scripts, and the files that should end up in the image live in the repository next to the kickstart.

The build itself is done with livecd-creator, a utility from the standard CentOS repository, and produces a squashfs image (here is part of the script executed in Jenkins):

        echo -e "\\nSpeedtest release ver ${BUILD_NUMBER}\\n" >> motd
        sudo livecd-creator --verbose -f speedtest-${BUILD_NUMBER} speedtest.ks
        7z x speedtest-${BUILD_NUMBER}.iso -oisoroot
        mv isoroot/LiveOS/squashfs.img ./speedtest-${BUILD_NUMBER}.squashfs

This fragment is worth dwelling on: be sure to number the images and put the build number into the motd file (adding it to the image has to be described in the kickstart). This makes it clear which build you are working with and lets you track changes while debugging. Hardware support and additional software are handled through our own RPM repository, which holds packages that are missing from the regular repositories or have been modified by our specialists.
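
As an illustration only (our actual kickstart differs), the motd file prepared in the Jenkins workspace can be copied into the image and the helper services enabled roughly like this; $INSTALL_ROOT is the image root that livecd-creator exposes to --nochroot scripts:

%post
# enable the services the live system relies on (illustrative)
systemctl enable sshd.service
systemctl enable lininstall.service
%end

%post --nochroot
# copy files lying next to the kickstart into the image, e.g. the numbered motd
cp motd $INSTALL_ROOT/etc/motd
%end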

Non-obvious system startup problems

  1. The kernel and its dependencies enter the system through the @core group, so every new build picks up the latest available package versions. Accordingly, we need that exact kernel and an initramfs built for it.

  2. Building the initramfs requires root privileges, and the machine where it is built must have the same kernel build installed as the one that ends up in the squashfs.

Our advice: to avoid security issues and broken scripts, build in an isolated environment. Doing this on the Jenkins master server is strongly discouraged.

Here is the initramfs build step from the job, in Jenkins DSL format:

   shell (
        '''
        set -x
        echo -e '\\n'ENVIRONMENT INJECTION'\\n'

        if [ $KERNELVERSION ];then
          echo "KERNEL=$KERNELVERSION" >> $WORKSPACE/env
        else
          echo "KERNEL=$(uname -r)" >> $WORKSPACE/env
        fi

        short_branch=$(echo $long_branch | awk -F/ '{print $3}')

        cat <<EOF>> $WORKSPACE/env
        WEBPATH=live-${short_branch}
        BUILDSPATH=live-${short_branch}/builds/${JOB_BASE_NAME}
        FTPSERVER=repo-app01a.infra.hostkey.ru
        EOF
        '''.stripIndent().trim()
        )

        environmentVariables { propertiesFile('env') }

        shell (
        '''
        echo -e '\\n'STARTING INITRAMFS GENERATION'\\n'
        yum install kernel-${KERNEL} -y
        dracut --xz --add "livenet dmsquash-live bash network rescue kernel-modules ssh-client base" --omit plymouth --add-drivers "e1000 e1000e" --no-hostonly --verbose --install "lspci lsmod" --include /usr/share/hwdata/pci.ids /usr/share/hwdata/pci.ids -f initrd-${KERNEL}-${BUILD_NUMBER}.img $KERNEL
        '''.stripIndent().trim()
        )
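
A quick sanity check we find useful (not part of the job above, just an illustration): dracut’s lsinitrd can confirm that the live-boot modules actually made it into the generated initramfs:

# list the contents of the freshly built initramfs and look for the live modules
lsinitrd initrd-${KERNEL}-${BUILD_NUMBER}.img | grep -E 'dmsquash-live|livenet'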

So we now have a squashfs image, an initramfs and the latest kernel. These components are enough to boot the system over PXE.

Delivery and rotation of images

For image delivery we use a setup that is worth describing in more detail. There is a central repository: an internal server on a private network that accepts uploads over several protocols (FTP, RSYNC and so on) and serves content over HTTPS through nginx.

The following directory structure was created on the server:

├── builds
│   ├── build_dracut_speedtest_el8.dsl
│   │   ├── initrd-${VERSION}.img
│   │   └── vmlinuz-${VERSION}
│   └── build_iso_speedtest_el8.dsl
│       ├── speedtest-${BUILDNUMBER}.iso
│       └── speedtest-${BUILDNUMBER}.squashfs
├── initrd -> builds/build_dracut_speedtest_el8.dsl/initrd-${VERSION}.img
├── speedtest.iso -> builds/build_iso_speedtest_el8.dsl/speedtest-${BUILDNUMBER}.iso
├── speedtest.squashfs -> builds/build_iso_speedtest_el8.dsl/speedtest-${BUILDNUMBER}.squashfs
└── vmlinuz -> builds/build_dracut_speedtest_el8.dsl/vmlinuz-${VERSION}

In the builds directory, whose subdirectories are named after the build jobs, we keep the last three successful builds, while the root directory holds version-less symbolic links to the latest build (these are what clients work with). If we ever need to roll back, we can quickly repoint the links by hand.
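
For example, rolling back to a previous build on the central repository server comes down to repointing the links (the build number here is made up):

# repoint the version-less links to the previous ISO/squashfs pair
ln -sfn builds/build_iso_speedtest_el8.dsl/speedtest-122.squashfs speedtest.squashfs
ln -sfn builds/build_iso_speedtest_el8.dsl/speedtest-122.iso speedtest.iso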

Uploading to the server and managing the links are part of the Jenkins build job: ncftp is used as the client and proftpd as the server (data is transferred over FTP). The latter matters because we need a client-server pair that can work with symlinks. Clients never talk to the central repository directly: they connect to mirrors tied to geographic locations. This reduces traffic and speeds up deployment.
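
A rough sketch of such an upload step, reusing the FTPSERVER and BUILDSPATH variables injected earlier; the builder account and FTP_PASS are placeholders, and the real job also repoints the symlinks on the server (proftpd offers SITE SYMLINK via mod_site_misc for that):

# push the fresh artifacts into the per-job build directory over FTP
ncftpput -u builder -p "$FTP_PASS" "$FTPSERVER" "/$BUILDSPATH" \
    speedtest-${BUILD_NUMBER}.iso speedtest-${BUILD_NUMBER}.squashfs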

Distribution to the mirrors is organized in an interesting way too: an nginx configuration with proxying and the proxy_store directive:

    location ^~ /livecd {
        try_files $uri @central-repo;
    }

    location @central-repo {
        proxy_pass https://centralrepourl.infra.hostkey.ru;
        proxy_store /mnt/storage$uri;
    }

Thanks to this directive, a copy of an image is stored on the mirror after the first client download. The scheme needs no extra scripting, the latest build becomes available in every location as soon as it is published, and rolling back is just as instant.

Modifying Image Behavior via Foreman

Systems are deployed through Foreman, which means we have an API and can pass variables into the PXE bootloader configuration files. With this approach it is easy to make a single image serve a whole range of tasks:

  1. for booting on hardware and investigating hardware problems;

  2. to install the OS (see our previous article);

  3. for automatic testing of hardware;

  4. to decommission hardware and completely wipe the drives after a client gives up the server.

Obviously, we cannot sew all of these tasks into the image and run them all at once. Instead, we added systemd services and the scripts they launch to the build; each script and its service share the same name (as an example, here is the unit that starts a Linux installation):

lininstall.service
[Unit]
Description=Linux installation script
After=getty@tty1.service
Requires=sshd.service

[Service]
Type=forking
RemainAfterExit=yes
ExecStartPre=/usr/bin/bash -c "if [[ $SPEEDISO == lininstall ]];then exit 0;else exit 1;fi"
ExecStart=/usr/bin/bash -c "/usr/local/bin/lininstall.sh | tee /dev/console 2>&1"
TimeoutSec=900

[Install]
WantedBy=multi-user.target

The service starts the task only if the SPEEDISO environment variable exists and contains the value lininstall.
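
On a booted live system this is easy to verify (purely illustrative commands): the kernel command line carries the systemd.setenv= entry, and the variable shows up in the systemd manager environment:

# check that the variable was passed on the kernel command line
grep -o 'systemd.setenv=[^ ]*' /proc/cmdline

# and that the systemd manager picked it up
systemctl show-environment | grep SPEEDISO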

Now we need to pass this variable to the image, which is easy to do through the kernel command line in the bootloader. The example below is for PXELINUX, but the solution is not tied to a particular bootloader, since all we need is the kernel command line:

LABEL <%= @host.operatingsystem.name %>
    KERNEL <%= @kernel %>
    MENU LABEL Default install Hostkey BV image <%= @host.operatingsystem.name %>
    APPEND initrd=<%= @initrd %> <%= mainparams %> root=live:<%= host_param('livesystem_url') %>/<%= host_param('live_squash') %> systemd.setenv=SPEEDISO=<%= host_param('hk_speedtest_autotask') %> <%= modprobe %> noeject
    IPAPPEND 2

The hk_speedtest_autotask variable should contain lininstall; in that case the service of the same name is launched when the system starts. If the variable is missing or holds some arbitrary value, the image simply boots into a system that can be reached over SSH (provided the sshd service was enabled via kickstart when the image was built).
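
Since the value comes from a Foreman host parameter, it can be flipped over the API; a hypothetical example (the host ID, Foreman URL and credentials are placeholders):

# create the host parameter that selects the autotask for host 42
curl -s -u admin:changeme -H 'Content-Type: application/json' \
     -X POST https://foreman.example.com/api/hosts/42/parameters \
     -d '{"parameter": {"name": "hk_speedtest_autotask", "value": "lininstall"}}'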

Results

After spending some time on development, we ended up with a managed LiveCD build and delivery system that gracefully handles hardware support and software updates in the image. With it we can quickly roll back changes, alter the behavior of an image through the Foreman API, save traffic and keep the service largely autonomous at each site. The geographically distributed mirrors hold the latest successful builds of all the images and repositories we use, and the system has proven convenient and reliable, helping us out more than once over three years of operation.

Read here about how we at HOSTKEY automated what used to be manual OS installation on servers.

_________

By the way, at HOSTKEY you can use all the features of our technologically advanced API for quickly ordering and managing servers. Select the network settings and operating system and get any server within 15 minutes. You can also assemble a custom server configuration, including professional GPU cards.

We can already add the new NVIDIA A5500.
