My backup server

I finally got around to building a personal backup server that won't live at my home. I'd been thinking about it for several years and made several attempts, but the stars finally aligned: I have the hardware for it, enough disks, and I've more or less settled on an OS.

As the hardware, I chose an old HP Microserver Gen7. I didn't see much point in selling it, or in putting it into production anywhere either – the CPU pegs at 100% at the slightest sneeze, whether you run a few services on it or just copy files in several streams over a gigabit network. But it can still handle simple file storage.

This is not a step-by-step guide; it's more of a short report along the lines of "how it can be done" – maybe someone will pick up an idea or two.

If you turn a blind eye to the processor, the microserver makes a pretty decent NAS: four or five bays for 3.5″ drives, plus you can tuck an SSD inside the case, since there are five SATA ports and an external eSATA port, which I routed back inside.

The first time I tried to approach the task more or less seriously was at the beginning of the year, when I installed TrueNAS Scale on it. But I had a hodgepodge of disks from 3 to 6 terabytes, and the largest ones were SMR. And TrueNAS means ZFS. And ZFS is the enemy of shingled (SMR) disks – or SMR disks are the enemy of ZFS. Besides, even if all the disks had been CMR, building ZFS arrays out of mismatched disks is irrational: either you waste a lot of capacity, or you have to partition them and accept the lost space and the headache of eventually replacing disks.

So I wiped TrueNAS and set the server aside. At the beginning of summer I came back to it, this time deciding to install OpenMediaVault 7.

I pooled the disks with mergerfs and wanted to get redundancy with SnapRAID (a sketch of that setup follows below). I played with all this for a while and realized that I like today's OMV even less than TrueNAS. The plugin system has been all but abandoned, and proper Docker support still hasn't arrived. And I never really liked its web interface. What you can't take away from OMV is that it runs on anything, no matter how weak – it is not at all demanding of resources. But otherwise I get the feeling that the developers themselves don't yet know what they want the end result to be. So for me, OMV is a distribution for the simplest file dump on weak hardware, for when you want at least some kind of web interface and don't want ZFS. If the hardware allows, it's better to look at other options.
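For reference, roughly what that setup amounts to on a plain Debian-ish system – a minimal sketch with hypothetical mount points (/srv/disk1 and friends), not my exact config:

    # Pool three data disks into one mount point with mergerfs
    # (mfs = create new files on the branch with the most free space).
    sudo mergerfs -o defaults,allow_other,category.create=mfs \
        /srv/disk1:/srv/disk2:/srv/disk3 /srv/pool

    # /etc/snapraid.conf – minimal example: one parity disk covering the data disks.
    #
    #   parity /srv/parity/snapraid.parity
    #   content /var/snapraid.content
    #   data d1 /srv/disk1
    #   data d2 /srv/disk2
    #   data d3 /srv/disk3

    sudo snapraid sync    # compute parity; re-run (e.g. from cron) after data changes

Unlike ZFS, this happily tolerates mismatched disk sizes, which is exactly why it suited the hodgepodge I had at the time.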

But then I finally acquired a batch of disks of the same size, and non-shingled ones at that. So I decided to give TrueNAS a second chance, especially since the latest beta (Electric Eel) finally added more or less proper Docker support, switching to it from Kubernetes.
This coincided with the death of the TrueCharts application catalog, which had previously been the primary source of containers for TrueNAS Scale.

But for me – someone who only started dealing with Docker a couple of years ago and knew of Kubernetes only as "some kind of containers for enterprise" – this coincidence is nothing but a win.

So in the end I got the following hardware:

HP Microserver Gen7 with an AMD Turion(tm) II Neo N40L processor – 1.5GHz, 2 cores (performance-wise, roughly a Raspberry Pi 4).
16 GB RAM (regular, non-ECC)
Four 6TB disks, one 4TB disk and a 500GB SSD.
Additionally, I installed a USB controller and a second network card (gigabit for now, but I'll replace it with 2.5GbE later).

OS — TrueNAS SCALE 24.10

I won't write about the installation; everything is simple there – boot from the CD, install the OS, log in through the web interface, use it.
Apart from a few little things:

1. As the boot disk I chose a 32GB Sandisk Extreme USB flash drive. As I understand it, this is not particularly encouraged, but Sandisk Extreme drives are very durable, unlike the various Ultras. And I had nowhere else to use a 32GB stick – the capacity is too small – so it came in handy. Conveniently, the server has an internal USB port.

2. When the installer asked about EFI, I decided that my system was old and a legacy bootloader would be better. As it turned out, this was a mistake – the installation aborted with the message "can't find sda3". I fiddled with the configs for a while (the error is not that rare; the usual advice is to insert a sleep 20 into the installation script), but in the end everything was solved by simply answering "Yes" to the EFI question.

3. Initially I installed the stable version, 24.04. After the installation I learned that the TrueCharts catalog had died and that TrueNAS Scale itself was moving to Docker, so I simply updated to 24.10 via the built-in updater – I didn't even have to reinstall.

Disks

I assembled the four six-terabyte drives into a raidz2 array, deciding it would be more reliable than something like raid10, if less performant. However, after the first reboot I was greeted by a fallen-apart array.
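TrueNAS builds the pool from the web UI, but in plain ZFS terms the layout is roughly the following (the pool name and device names are placeholders, not my actual setup):

    # Four disks in raidz2: any two may die without data loss,
    # at the cost of two disks' worth of capacity going to parity.
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    zpool status tank    # check that the pool is ONLINE and healthy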

I plan to use the SSD for installing applications.

And the remaining 4TB disk is for something like an archive and backups of backups. Yes, it will run without redundancy, but even if it dies, it only holds copies of copies and generally not-very-important data of the "I'd rather not lose it, but it's not important enough to shove onto the main array" variety.

Server tasks

The main one is offsite backup for the home server. Not of everything, of course – I have 30 terabytes of space at home – but there are about 3-4 terabytes of data I really don't want to lose. For now those backups go to OneDrive (encrypted), but the subscription ends in a year and I have no particular desire to renew it: I already have enough terabyte-sized cloud storage, and Office 2010 and 2013, for which I have licenses, suit my needs no worse than Office 365.

Secondary — a fallback server for some of the services that run on the home server. It can be useful to have a second instance of a service when the main one is unreachable, even if that instance's database is slightly stale. Examples: an RSS reader (FreshRSS), Vaultwarden (password manager), uptime monitoring (Uptime Kuma) and other little things that don't need much CPU.

Installing applications

Since this TrueNAS version is still in beta, the application catalog is rather limited, although it is gradually expanding – a few days ago there were 96 applications in it, now there are 102. And it is not immediately obvious how to add your own applications: there used to be an "add container" button, but now it's nowhere to be seen. You can, of course, drop into the console and type docker commands there, but that's not what I left OMV for.

The catalog does contain Portainer and Dockge. Personally, I find Portainer too complicated for the simple task of "install a container" – it can do too much, and your eyes glaze over. So at most I installed it to monitor running containers, while writing the configs and installing everything from the console.
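Installing from the console boils down to ordinary docker run commands. A hedged sketch – the image is the real Uptime Kuma one, but the host path is a placeholder for an actual dataset:

    # Run Uptime Kuma by hand, persisting its data to a dataset on the pool.
    docker run -d \
      --name uptime-kuma \
      --restart unless-stopped \
      -p 3001:3001 \
      -v /mnt/tank/apps/uptime-kuma:/app/data \
      louislam/uptime-kuma:1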

But Dockge is simple and concise when the task is just launching a container. You paste the text of a yml file into the field and launch it.
Or you feed it a docker run command – it converts it into a compose config; you read it, edit as needed, and run it.

The same command line, just in the browser. On the stack's page you can stop, start and update the container, enter its shell, read the logs…
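Roughly what that conversion gives you – an illustrative example (FreshRSS is a real image; the port and host path are my placeholders):

    # Feeding Dockge this command:
    #
    #   docker run -d -p 8080:80 \
    #     -v /mnt/tank/apps/freshrss:/var/www/FreshRSS/data \
    #     freshrss/freshrss
    #
    # produces approximately this compose.yaml for the stack:
    #
    #   services:
    #     freshrss:
    #       image: freshrss/freshrss
    #       ports:
    #         - "8080:80"
    #       volumes:
    #         - /mnt/tank/apps/freshrss:/var/www/FreshRSS/data

    docker compose up -d    # what Dockge's start button runs under the hood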

At the same time, Dockge sees "foreign" containers (the ones TrueNAS installed) but manages only its own. That strikes me as a convenient compromise: whatever can be installed from the TrueNAS admin panel gets installed there; whatever can't, goes through Dockge. When that something is added to TrueNAS – or when full-fledged container management returns – it can be carefully migrated.

Access to the server from the outside world

I did not expose the server to the internet: it only faces the provider's network and has a private IP, which in this case suits me perfectly – at home I'm connected to the same provider, so there's no need to think about port forwarding and the like.

On the other hand, if I'm going to run fallback instances here, I want access to some services from outside. I could have set up port forwarding from my VPS, for example, but in this case it was easier to use Cloudflare Tunnels (cloudflared). Yes, that's yet another dependence on a third party, which I've been sort of moving away from, but here it isn't critical. If Cloudflare falls over, nothing stops me from going back to a VPN, port forwarding, or simply buying a real IP.

Technically it all looks like this:

In the Cloudflare admin panel, in the Zero Trust section, you create a tunnel, and it gives you the tunnel ID.

On your server (or on any machine) you install the cloudflared daemon and point it at that tunnel ID in its settings.
Then, on the server side, you specify which subdomain should lead to the service and the local port that service listens on.
The only restriction is that the domain must be managed by Cloudflare. You don't have to buy it there (although I bought a separate domain specifically for this), but its DNS must be hosted there.
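For reference, the locally-managed variant of the same thing, sketched with placeholder names (backup-tunnel, speed.example.com); the dashboard-managed flow does the equivalent through the Zero Trust UI:

    cloudflared tunnel login                   # authorize against your Cloudflare account
    cloudflared tunnel create backup-tunnel    # prints the tunnel ID, saves a credentials file
    cloudflared tunnel route dns backup-tunnel speed.example.com

    # ~/.cloudflared/config.yml then maps the hostname to the local service:
    #
    #   tunnel: backup-tunnel
    #   credentials-file: /root/.cloudflared/<tunnel-id>.json
    #   ingress:
    #     - hostname: speed.example.com
    #       service: http://localhost:8080
    #     - service: http_status:404   # mandatory catch-all rule

    cloudflared tunnel run backup-tunnel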

I have two tunnels here – one to the admin panel and one to the container with librespeed.

The speed seems to get cut roughly threefold, but it's hard for me to say who's to blame – whether the tunnel is that slow or the microserver can't keep up.
Still, for me alone such a tunnel is quite enough to reach small services like the uptime monitor or Vaultwarden.

If you measure the speed through the above-mentioned librespeed, a direct connection gives almost the advertised 100 megabits.

Through the tunnel, though, 40 megabits is the most I managed to get; more often it hovers around 25-30. Bearable, as long as you aren't moving large amounts of data.

If Cloudflare breaks, or I want more speed, I'll solve the problem then – fortunately there are plenty of ways; this was simply the easiest.

The backups themselves

You can use whatever you like, but for now I've settled on two methods.

The biggest and most important thing I need to preserve is many years' worth of photos, already about two and a half terabytes. Previously I copied them to the home server via Syncthing, and from there uploaded them to OneDrive with Duplicati. So simply adding another Syncthing instance seemed perfectly logical. Most of the further work will probably also be organized around Syncthing, though other protocols can be used when needed.

Syncthing is in the application catalog and installs without any questions. You do, of course, need to mount the folder where the backups will live into the container. At first I watched how it behaves through global discovery and relays – not great. The top speed I saw was about 50 megabits, and mostly it hovered around 10-20. I had no desire to pull two or three terabytes that way.
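The container side, for reference – the essential part is mounting the backup dataset. A sketch with placeholder paths (in the TrueNAS catalog this is configured through the app's storage settings; the image is the official one):

    # Web UI on 8384, sync protocol on 22000; the dataset with the photos
    # is mounted into the container as a folder Syncthing can then share.
    docker run -d \
      --name syncthing \
      -p 8384:8384 \
      -p 22000:22000 \
      -v /mnt/tank/backups/photos:/var/syncthing/photos \
      syncthing/syncthing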

Relay speeds being what they were, I set up a direct connection with my home machine by exposing the Syncthing port on the home router. That got me full speed – 100 megabits; the provider doesn't offer more. CPU load during synchronization hovers around 70%.

The second method is copying data from Linux servers via rsync. Everything is simple here, as long as there is SSH access to the server. TrueNAS has rsync tasks in its settings: add what to copy and from where, set up the schedule – and off it goes.
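Under the hood, such a task boils down to something like this (the host and paths are hypothetical):

    # Pull backups from the home server over SSH, mirroring deletions.
    rsync -az --delete \
      user@homeserver:/var/backups/ \
      /mnt/tank/backups/homeserver/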

Versioning is done with ZFS snapshots. I create a separate dataset for each backup folder, so snapshots can be scheduled quite flexibly – hourly for some, weekly for others.
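TrueNAS drives this from periodic snapshot tasks in the UI; in plain ZFS commands the idea looks like this (dataset names are placeholders):

    zfs create tank/backups/photos                          # one dataset per backup folder
    zfs snapshot tank/backups/photos@$(date +%Y%m%d-%H%M)   # a cheap point-in-time version
    zfs list -t snapshot -r tank/backups/photos             # list accumulated versions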

Plans for the future

From the hardware:
I'll try to replace the memory with ECC – after the adventures with the home server I've started taking that more seriously. But I balk at paying 7000 per module for new memory, and Ali is littered with registered memory, which the microserver doesn't understand. So I'll have to watch the flea markets, and that's not a quick matter.

I'll replace the gigabit network card with a 2.5-gigabit one. This is for "internal" traffic: at work, besides the microserver, I have a small web server that I'll repurpose into a Proxmox node – backups will need to be pulled from it too, so let them run at a higher speed. I'll also add a 2.5GbE card to my work computer and plug it into the same switch.

Maybe I'll remove the USB 3 controller and install an NVMe adapter in its place – for a cache for the array.

As for the microserver itself, I can't recommend it if you're building a server from scratch. The only useful thing about the Gen7 is the case with its 4-5 drive bays, and it only works well with the stock motherboard: if you want to upgrade, you'll have to put serious effort into modifying the case for mini-ITX. It's easier to spend money on some Chinese case like a Jonsbo and build a system on normal hardware. I used the microserver simply because I already had it and its performance was more or less adequate. Otherwise I would have taken the Jonsbo N2 – a very nice case, though not exactly budget-friendly. Or the N1.

From the software:
I'll wait for the TrueNAS 24.10 release.

I'll hook up backups of my clouds (OneDrive, Mail, Yandex).

I'll set up backups of mail from the same Gmails and Yandexes.

Beyond that, I don't see any serious directions for development. The main thing is done; what remains is ordinary use. All the experiments and tricky tasks will go to the second server (the Proxmox one). But that's a separate story, and one not worth its own article – the web is full of "how I installed Proxmox" write-ups from people who understand it better than I do.
