A quick guide to LXC on Elbrus OS

Not so long ago, Rostelecom announced the creation of a competence center for developing software solutions for domestic processors. The first task we had to solve was how to partition server resources among the employees involved in porting and development, set up demo stands, and, as the load grows, also manage resources such as memory and CPU time.

Traditionally this task is solved with virtualization, but unfortunately our processors do not support it yet. An alternative is containerization. Having studied the options that can be implemented on Elbrus OS, we settled on LXC as a stable solution. In this article I want to show how to use LXC on Elbrus OS.

What else we considered
  • chroot – a workable option, but we wanted an analogue of virtual machines, which gives us more flexibility.

  • docker – available only as an experimental build; we may come back to it, but I want to focus on the work itself rather than on figuring out why everything broke.

  • Elbrus OS ships a prebuilt Bochs, but it cannot emulate the e2k architecture, and that is exactly our development target.

  • QEMU – the 2.6 kernel for Elbrus OS used to run QEMU in paravirtualization mode, but it was then withdrawn for rework. The package is still present in Elbrus OS, but I have not tested whether it works (we will definitely report back once we check; a quick way to see what is shipped follows this list).
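By the way, before relying on QEMU it is easy to check what the distribution actually ships. A quick look, assuming the usual dpkg/apt tooling (the same tooling the installation step below uses):

# list installed and available QEMU-related packages
dpkg -l | grep -i qemu
apt-cache search qemu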

Initial data

So, to begin with, we have:

  • server based on Elbrus-8C processors;

  • Elbrus OS version 6.0.1 installed on the server;

  • LXC version 2.0.8;

  • as the rootfs backend in LXC we will use a plain directory (the option lxc.rootfs.backend = dir); this gives us extra possibilities that I will cover below (a sample config excerpt follows this list).
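For reference, with the dir backend the relevant part of a container's config looks roughly like this (a sketch; /var/lib/lxc is the default LXC path and <container name> is a placeholder):

# excerpt from /var/lib/lxc/<container name>/config
lxc.rootfs = /var/lib/lxc/<container name>/rootfs
lxc.rootfs.backend = dir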

Server preparation and configuration

Install the lxc and lxcfs packages with the command:

sudo apt install lxc lxcfs

Now we set up networking for the containers. To do this, create the file /etc/default/lxc-net with the following content:

USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="192.168.103.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="192.168.103.0/24"
LXC_DHCP_RANGE="192.168.103.2,192.168.103.254"
LXC_DHCP_MAX="253"
#LXC_DHCP_CONFILE=""
LXC_DOMAIN=""

So that the network settings are applied to newly created containers, edit the file /etc/lxc/default.conf:

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx

To create the network bridge, start the lxc-net service:

sudo service lxc-net start
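To make sure the bridge actually came up, inspect it with the usual iproute2 tools:

# lxcbr0 should be up and hold the 192.168.103.1 address
ip addr show lxcbr0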

Finally, we enable the necessary services to start automatically at boot:

sudo chkconfig lxc-net on
sudo chkconfig lxc on

Creating the first container

Elbrus OS ships with an osl template that creates a container from an installation disk image. To create our first container, copy the ISO image to the server (in my case I put it in /opt/iso) and mount it:

sudo mkdir -p /mnt/cdrom
sudo mount -t iso9660 -o loop /opt/iso/el-6.0.1-e8c-boot.iso /mnt/cdrom

Note that you must mount it exactly at /mnt/cdrom, since this path is hardcoded in the template script.

We create a container with the command:

sudo lxc-create -t osl -n osl-test

Wait for the deployment to finish, then start the container and attach to it:

sudo lxc-start -n osl-test
sudo lxc-attach -n osl-test
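At any point you can check the state of containers from the host with standard LXC tools:

# list all containers along with their state and IP addresses
sudo lxc-ls -f

# detailed information about a single container
sudo lxc-info -n osl-test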

We now have a working container that can already be used as a replacement for a virtual machine. You will still need to configure it and install the packages your work requires.

If you do not plan to deploy new containers often, this method is enough; but our plans are ambitious, so we decided to build a template that deploys new containers from several pre-configured tarballs.

A template for creating containers from a tarball

As I wrote earlier, we use a directory as the rootfs backend, which means we can create a container the first way, configure it, and pack it into a tarball. To create a new container from that tarball, we only need to unpack its contents into the rootfs directory of the new container and add the required parameters to its config file.
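In essence, the template automates roughly the following manual steps (a sketch with <container name> as a placeholder; the real template also fills in the config file):

# unpack a prepared tarball into the rootfs of a new container
sudo mkdir -p /var/lib/lxc/<container name>/rootfs
sudo tar -xzf osl-template.tar.gz -C /var/lib/lxc/<container name>/rootfs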

The new template is based on a similar template from SaltStack, with minor modifications for the specifics of Elbrus OS.

The first edit concerns setting the hostname: in Elbrus OS the hostname is set in the file /etc/sysconfig/network, so at the end of the deploy_tar() function in the template file we add:

if [ -f "${rootfs_path}/etc/sysconfig/network" ]; then
    OLD_HOSTNAME=$(grep HOSTNAME ${rootfs_path}/etc/sysconfig/network)
    sed -i "s/$OLD_HOSTNAME/HOSTNAME=$name/" ${rootfs_path}/etc/sysconfig/network
fi

We do not plan to configure the network from the template, so we remove the lines responsible for the network settings:

lxc_network_type="veth"
lxc_network_link="br0"
...
-t|--network_type)  lxc_network_type=${2}; shift 2;;
-l|--network_link)  lxc_network_link=${2}; shift 2;;
-r|--root_passwd)   root_passwd=${2}; shift 2;;
...
if [ ! -e /sys/class/net/${lxc_network_link} ]; then
    echo "network link interface does not exist"
    exit 1
fi

We give the file a new name: in LXC, template files are named lxc-<template name>, where the <template name> part is then used in the lxc-create command. We make the file executable and copy it to /usr/share/lxc/templates/. I named my template osl-img:

mv salt_tarball lxc-osl-img
chmod +x lxc-osl-img
sudo chown root:root lxc-osl-img
sudo mv lxc-osl-img /usr/share/lxc/templates/

Create a templated rootfs image

First of all, we create a container following the first part of this guide and configure it.

Fixing errors when running the chkconfig command

These actions can be performed without starting the container, by editing the files directly. By default LXC creates new containers in /var/lib/lxc/<your container name> (the paths below are given relative to this directory).

We need to edit the file rootfs/etc/init.d/sysklogd, adding after the line

### BEGIN INIT INFO

the line:

# Provides:          sysklogd
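If you prefer to script this edit instead of doing it by hand, a one-liner does the same thing (assuming GNU sed and running from /var/lib/lxc/<your container name>):

# insert the Provides line right after the BEGIN INIT INFO marker
sudo sed -i '/^### BEGIN INIT INFO/a # Provides:          sysklogd' rootfs/etc/init.d/sysklogd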

Delete the file rootfs/etc/rcsysinit.d/S05mknod and create a file rootfs/etc/init.d/mknod with the following content:

#!/bin/sh
### BEGIN INIT INFO
# Provides:          mknod
# Required-Start:    mountkernfs
# Required-Stop:     mountkernfs
# Default-Start:     S
# Default-Stop:      0 6
# Short-Description: Create loop device nodes
# Description:       Creates /dev/loop0../dev/loop3, which are not
#                    created automatically inside the container.
### END INIT INFO

. /etc/sysconfig/rc
. ${rc_functions}

case "${1}" in
    start)
        # create the loop block devices, remembering if any mknod fails
        failed=0
        mknod -m 660 /dev/loop0 b 7 0 || failed=1
        mknod -m 660 /dev/loop1 b 7 1 || failed=1
        mknod -m 660 /dev/loop2 b 7 2 || failed=1
        mknod -m 660 /dev/loop3 b 7 3 || failed=1

        # hand the accumulated status to evaluate_retval for reporting
        (exit ${failed})
        evaluate_retval
        ;;
    *)
        echo "Usage: ${0} {start}"
        exit 1
        ;;
esac
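Do not forget to make the new script executable, otherwise the init system will simply skip it (run on the host, from /var/lib/lxc/<your container name>):

# init scripts must be executable to run at boot
sudo chmod +x rootfs/etc/init.d/mknod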

Network configuration

In my tarballs I use DHCP address assignment; anything else is configured per container. In Elbrus OS, network interface settings live in /etc/sysconfig/network-devices/ifconfig.<interface name>/ipv4, so for the eth0 interface we create the file rootfs/etc/sysconfig/network-devices/ifconfig.eth0/ipv4 with the following content:

BOOTPROTO=dhcp
ONBOOT=yes
SERVICE=dhclient

Create a service user

I am opposed to working as root, so we will create a service user under which further configuration can be done.

Start the container with lxc-start -n <container name> and attach to it with lxc-attach -n <container name>. Then create the user admin:

useradd -m admin
passwd admin
usermod -a -G wheel admin

Edit the file /etc/sudoers to allow members of the wheel group to execute commands via sudo. To do this, uncomment the line:

%wheel ALL=(ALL) ALL
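A quick way to verify the result is to switch to the new user and ask sudo what it permits (standard sudo behavior):

# list the commands the current user may run via sudo
su - admin
sudo -l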

It will also do no harm to set a new password for root:

passwd root

Autostart services

For the container to work correctly we need to enable autostart for several services; to do this, run the following commands inside the container:

chkconfig devpts on
chkconfig network on
chkconfig mknod on

Packing the rootfs into a tarball

Stop the container with lxc-stop -n <container name> and run the command:

tar -cvzf osl-template.tar.gz -C /var/lib/lxc/<container name>/rootfs/ .
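Before moving the archive anywhere, it is worth a quick sanity check that it really contains a filesystem tree rooted at . (plain tar usage):

# list the first entries of the archive
tar -tzf osl-template.tar.gz | head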

Move the resulting archive to a convenient storage location.

How to use it?

I named my template osl-img, so it will appear under that name in the examples. To deploy a new container, run:

sudo lxc-create -t osl-img -n tarball-test -- --imgtar /path/to/osl-template.tar.gz 

If everything was done correctly, we get a new container, and a working one at that.

Access to containers via SSH

Connecting to a container via lxc-attach is certainly fine, but not very convenient, especially since some IDEs can build on a remote host over SSH, which is very handy when porting and developing for Elbrus.

I have tried two methods. The first works with the bridged setup described above: the host system serves as a jump host and the SSH client is configured to tunnel through it. To do this, add the following lines to the file ~/.ssh/config:

Host e2k-proxy
    HostName <domain name or IP of your host>
    ForwardAgent yes

Host 192.168.103.*
    ProxyCommand ssh e2k-proxy -W [%h]:%p

After these settings, all connections to hosts with addresses in 192.168.103.0/24 (the network we configured earlier in /etc/default/lxc-net) will go through the host system.
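After that, connecting to a container becomes transparent. For example, for a container that received the address 192.168.103.2 (an illustrative address from our DHCP range) under the admin user we created earlier:

# the ProxyCommand rule routes this through the e2k-proxy jump host automatically
ssh admin@192.168.103.2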

The second option is to expose the container directly on the external network interface. This is possible if we use macvlan instead of veth; to do so, make the following changes in the container configuration (/var/lib/lxc/<container name>/config):

lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.link = eth0

If the container was running, restart it, then attach and configure the network interface. A sample ipv4 file for a static address looks like this:

IP=192.168.103.2
GATEWAY=192.168.103.1
PREFIX=24
CHECK_LINK=yes
ONBOOT=yes
TYPE=Ethernet
SERVICE=ipv4-static

Instead of a conclusion

This article does not touch on many aspects of working with LXC; I tried to focus on the specifics of Elbrus OS. As you can see, working with LXC on Elbrus differs little from other Linux distributions and processor architectures.

Based on the first results of our colleagues' work, we already have plans for improving the current setup:

  • run alternative operating systems in containers (for Elbrus there are at least Alt and Astra Linux)

  • create containers with a GUI for cases when you need to see how things will look for the user

  • Well, we have already begun work on automation; that is a whole unplowed field, and a very interesting one.
