Assembling network-attached storage (NAS) with XPenology

Hello! This is Alexander again, a DevOps engineer from Banki.ru. My previous article, “Home server based on Proxmox”, sparked community interest and a lively discussion in the comments.

Today I will continue the topic and talk about building a NAS (Network Attached Storage) with my own hands from currently available hardware. I will cover the selection process, the purchases, and the approximate price of the whole build at the time of writing.

What is NAS and why do you need such network storage?

A NAS is network data storage: a device that connects to a network to share files and data with multiple users or devices. Unlike general-purpose servers, a NAS is designed solely for storing and managing data. Such devices are typically easy to set up, support various file access protocols (for example, SMB, NFS, FTP) and can be used both for personal needs (for example, a home media server) and in corporate environments for storing backups, archives or shared files.

I'll list what you might need a NAS for:

  • Make a personal analogue of Google Drive, with synchronization to PCs and mobile devices.

  • Deploy Harbor, a private Docker image registry (which we successfully use for work tasks at Banki.ru). Again, remember the Docker Hub precedent.

  • Store files: video, audio, photo archive.

  • Set up a home torrent downloader: add a torrent, schedule it to download overnight, get the files you need in the morning.

  • Media center. Downloaded and stored photo/video/audio files can be streamed to a TV connected to your home network.

  • NFS for Proxmox/Kubernetes. The OS I chose for the NAS can also serve S3-compatible storage via MinIO practically out of the box, which can eventually be hooked up to Kubernetes (a small sketch of talking to such an S3 endpoint follows this list).
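To illustrate the S3 use case, here is a minimal, hypothetical sketch of talking to a MinIO instance running on the NAS using the MinIO client (mc). The endpoint address, access keys and bucket name are placeholders for illustration, not values from my setup:

# assumes mc (MinIO client) is installed on your workstation
mc alias set nas http://192.168.0.50:9000 ACCESS_KEY SECRET_KEY   # register the NAS S3 endpoint (hypothetical address and keys)
mc mb nas/k8s-backups                                             # create a bucket
mc cp ./etcd-snapshot.db nas/k8s-backups/                         # upload a file into it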

How I came to the decision to build my own NAS

So, a little background. After Docker Hub was blocked and sanctions were announced on September 12, I seriously thought about purchasing a NAS to store personal files. I use OneDrive from Microsoft quite actively (my notes from Obsidian are synchronized through it), plus I store a collection of documentation and books for work in it.

After looking through the options available on the Russian market, I came across the TerraMaster F4-424 (4 cores/4 threads, 8 gigabytes of RAM and 4 HDD slots). The approximate price at the time of writing was ~625 dollars (about 60 thousand rubles).

A little upset by the price, I continued my search. At some point the marketplace algorithms realized I was looking for something NAS-related and suggested just the thing: a Jonsbo N2 case with room for five 3.5-inch drives and one 2.5-inch drive.

JONSBO N2 Case ITX NAS Server Home Office Storage 5+1 Hard Drive Drawer Hot Plug for PC Gaming Aluminum Mini Computer | AliExpress

At that moment the traditional thought flashed through my mind: “Why not assemble everything myself?” And so began the search for hardware.

I'll note right away that I initially wanted a compact NAS with more or less modern hardware, so what is described below is my particular build. If you wish, you can put together something similar on old office hardware.

Selecting the parts and the approximate cost of the “construction kit”

For the base I took that same Jonsbo case. Next I needed to choose a motherboard. My priority was a balance between the hardware's characteristics and its cost. My choice fell on the BKHD 1264 NAS, at an estimated cost of 12 thousand rubles (here is a link to it and its specifications on the manufacturer's website), with the following characteristics:

  • Intel N100 processor: 4 cores, 4 threads.

  • 6 SATA ports and 2 M.2 connectors.

  • A PCI-e slot that allows you to add expansion cards such as network cards, and maybe even video cards.

  • 4 network adapters with a speed of 2.5 Gb/s.

  • Power input via a standard 24-pin connector (that is, from a regular ATX power supply).

  • Form factor: mITX, which fits the case.

Motherboard SZBOX 1264-NAS N100 4*I226 2.5G LAN DDR5 4800MHz max. support 16G dual display M2 SSD NVME/NGFF PCIE1X + 6*SATA | AliExpress

Having ordered the motherboard, I started looking for a power supply. The Jonsbo N2 requires an SFX (small form factor) power supply with two MOLEX connectors. This is important, since the backplane in the case that the HDDs plug into is powered through two MOLEX connectors.

The backplane board itself looks like this:

I chose a 450-watt power supply from Chieftec (BFX-450BS SFX), which has exactly 2 MOLEX connectors and an 80 Plus Bronze certificate (yes, 450 W is perhaps overkill, but in my opinion this unit is a good price-quality compromise).

In fact, the only things missing for the first launch are RAM and storage. I found SO-DIMM DDR5 RAM on Avito.

“Why Avito?” you may ask. It's simple. New laptops now mostly come with 16 gigabytes of DDR5 memory (2 sticks of 8 GB each). If the laptop has integrated graphics, that memory becomes insufficient in games and graphics-heavy work, since part of it is also allocated as video memory. As a result, owners of new laptops often upgrade to a 32 GB kit or more, so practically new 8 GB sticks are regularly sold on Avito at “just take it away” prices. I bought an 8 GB SO-DIMM DDR5 stick for a ridiculous 1250 rubles.

That left the question of the drive. Many NAS build guides use a flash drive to hold the bootloader and the operating system; my motherboard even has an internal USB 2.0 port for exactly that. But booting from a flash drive is slower, and that option seemed unreliable to me. Since the case has 5 slots for 3.5-inch HDDs plus 1 for a 2.5-inch SSD/HDD, and the board has 6 SATA ports, I went a different way and bought a 120 GB Apacer SATA SSD. The SSD gives fast boot times, and the price of a good flash drive is comparable to the whole disk anyway.

So, I’ll summarize the cost of hardware for such an assembly without taking into account the HDD:

  • Jonsbo N2 case: 9861 rubles.

  • BKHD 1264 NAS mITX motherboard: 11654 rubles.

  • Chieftec BFX-450BS SFX power supply: 4667 rubles.

  • Samsung SO-DIMM DDR5 8 GB RAM: 1250 rubles.

  • Apacer AS340X SSD: 1450 rubles (from DNS).

The total came to 27,432 rubles – roughly half the price of the TerraMaster F4-424 with similar characteristics. Moreover, my version doesn't use any exotic hardware, and any component can easily be replaced.

Building and installing XPenology

Having received all the components, I began the assembly. There is nothing special about it, and instructions can be found on any video hosting site by searching for the case name. In my opinion, a foreign colleague who assembles a similar NAS gives very good, detailed instructions.

I’ll add my comments on the case and assembly:

First point. The board I purchased has no headers for the case's front-panel USB Type-C and USB 3.0 ports. So the choice is: either tuck away the front-panel wires and leave them unconnected, using only the motherboard's own ports, or look for adapters from the USB 2.0 header to USB-C and USB 3.0. I ended up buying a USB 2.0-to-USB 3.0 adapter for 272 rubles; it is slow, but it makes the front-panel USB port usable, while the USB-C port stays inoperative.

Second point. Unfortunately, the included 120 mm fan that cools the HDD section is quite loud. I tried to replace it with the quiet ID-Cooling TF-12015-W, but the experiment was not very successful: yes, it became quieter, but keeping the NAS right next to you on the desktop is still out of the question. The noise comes from the fan being enclosed by mesh on both sides – the air passing through it is what makes the noise. So I replaced the mesh with open grilles for 120 mm fans. Another solution I found online is replacing the fan with an expensive Noctua of a suitable size, but with the whole configuration costing around 27k, paying another 3–4k for a fan seemed illogical to me. One more point: the fan can be installed not inside the case, as it comes from the factory, but flipped and screwed onto the standard mounts from the outside. It will stick out a little, but that's not critical. You can also place something soft under the NAS itself to reduce noise from resonance with the surface.

The hardware is assembled, now we will install XPenology.

XPenology is a way to run DSM, the operating system from Synology NAS devices, on regular computer hardware.

  1. Download the XPenology bootloader – the ARC loader. Go to its GitHub page and download the file that ends in *.img.zip.

  2. Remember this code – we will need it to unlock the bootloader functionality: 4ME3P7. This code is unique and only applies to bootloader version 24.7.14, which I linked above. If you are downloading a newer version, then welcome to the developer Discord, where by following the instructions you can get a fresh unlock code for the latest version of the bootloader. As far as I understand, each new release gets a new unlock code.

  3. Unpack the archive and write the image to the SSD. For this we will use Rufus. With the SSD option, at this stage you will need a SATA-USB adapter to connect the SSD to the PC. Connect your chosen media to the PC, launch Rufus, select the unpacked bootloader file and start writing the image to the media (a dd-based alternative for Linux is sketched just below).

For me it looks like this:
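If you are writing the image from Linux rather than Windows, something along these lines should work instead of Rufus. This is a minimal sketch: the archive and image file names are assumptions and may differ, and /dev/sdX is a placeholder – double-check the target device with lsblk, since dd will overwrite it without asking:

unzip arc-*.img.zip                                               # unpack the downloaded bootloader archive
lsblk                                                             # identify the connected SSD, e.g. /dev/sdb
sudo dd if=arc.img of=/dev/sdX bs=4M status=progress conv=fsync   # write the image (replace /dev/sdX with your device!)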

  4. Remove the media and connect it to the motherboard. I connect via SATA. In the flash drive variant, there is a USB 2.0 port on the board itself near the SATA ports – you can connect there.

I am attaching a photo taken before closing the lid. The M.2 drive from Samsung will come in handy later. I'll note that because of the small case it is very difficult to route the cables neatly and practically; in the end you will still end up with sloppy noodles, no matter how hard you try.

One of the requirements of the ARC loader is the presence of an HDD in one of the slots – without it the operating system will not start. I had a 500 GB WD Black lying around. In the photo it already has the special rubber dampers from Jonsbo (included with the case) fitted.

Additionally, I added an old 1TB HDD from Seagate and a 512GB SSD that I had on hand.

In the photo we can see the problem: the 2.5-inch SSD essentially hangs loose in the SATA slot. The way to fix this is to buy a special 2.5-to-3.5-inch adapter bracket.

I eventually found such a bracket on AliExpress; it looks like this:

  5. Connect power and a patch cord to our NAS and start it up. The motherboard automatically boots from the available bootable media, so you don't have to go into the BIOS. Once the bootloader starts from the media, it launches a web server and the installation can be continued through the browser, which lets us install XPenology without connecting a monitor.

It is very important that the network equipment the NAS is connected to issues an IP address via DHCP. Look up the issued address on your router.

  • Next we go back to the main menu and select item 1, Choose Model. We are looking for a Synology model that suits our hardware. I chose the DS1520+, since it is close in characteristics to ours.

  • After selecting the model, we get to the menu for selecting the OS version for the NAS. I chose 7.2 (the latest).

  • In the next menu we can unlock the extended functionality – this is what we entered the code for earlier. Select the option that controls the CPU frequency.

Descriptions of all the items can be found in the project's wiki.

The final assembly of the bootloader on the SSD took me literally 10 seconds, after which I was asked whether to start the system, and started it. The difference between a flash drive and an SSD is noticeable: I booted from the disk in literally 30 seconds, while booting from a flash drive took 3–4 minutes.

To find the device on the network, use the website finds.synology.com – after a while the NAS will be discovered on the home network.
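If finds.synology.com is unavailable or you simply prefer not to use it, the NAS can also be located by scanning the home subnet from another machine. A hedged sketch: it assumes nmap is installed, your LAN is 192.168.0.0/24, and DSM (and its installation Web Assistant) is listening on the default port 5000:

nmap -p 5000 --open 192.168.0.0/24      # hosts answering on port 5000 are DSM/XPenology candidates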

  • Next come the questions about the license agreement and privacy (and, apparently, a system reboot: the network was unavailable for about two minutes), after which we land in the system setup menu, where we agree to install DSM (the NAS operating system) and format the installed disks. After this, we see a circle with the installation percentage, and then the system reboots.

Remembering recent incidents with automatic updates, I preferred to be notified about updates only, rather than have them installed automatically. I already have a Synology account, so I'll jump straight to the next setup step; if you don't have one, you can skip this step.

Creating QuickConnect is also at your discretion. QuickConnect is a functionality for accessing your NAS from anywhere in the world, provided the device is connected to the Internet.

After installation, we get to the desktop of our NAS.

We are immediately offered to install the basic software, and for that we need to create a storage pool. So we agree and go off to create the first array.

Since I currently have 2 HDDs of different sizes, I will temporarily select RAID0.

Next, we select the installed HDDs and tick the “do not check disks” checkbox – the check can take quite a long time depending on disk size, and the OS passively monitors disk health anyway and will warn about a poor state. We select the maximum volume size and are offered a choice of file system. I selected “Btrfs as default” and “don't encrypt disk”.

We complete the creation of the resource pool and see the following:

Next we are interested in the Package Center. The screenshot below shows what I currently have installed.

During my operating experience, I identified the following interesting packages for myself:

  • Container Manager. Essentially a GUI for docker and docker-compose. A very useful thing, somewhat reminiscent of Portainer.

  • Download Station is a file and torrent downloader.

  • Media Server – a convenient package that lets you play audio/video files and view photos over the network. For example, you can easily download a few TV series via Download Station and watch them on a Smart TV straight from the NAS.

  • Synology Photos and Synology Drive Server. Exactly those analogues of Google Photos and Google Drive. There are Android and Windows clients for file synchronization.

  • Cloud Sync. At the time of writing, OneDrive and Google Drive still work, which means you can connect to them through the Cloud Sync application. Essentially, Cloud Sync connects to your cloud and keeps a backup of it. You can connect several different clouds, with several accounts in each (there should be a joke here about how the free 15 gigabytes of Google Drive, with proper management of accounts and folders, can be expanded to a couple of terabytes through this application).

Troubleshooting problems with the Web Station and Virtual Machine Manager packages

  • Web Station. For some reason, nginx-proxy-manager from docker-compose did not work correctly (a quick installation guide via Container Manager is below). I managed to launch it and obtain an SSL certificate, but when proxying to other hosts on the network (in particular to the Proxmox host) I was constantly thrown back to the default page of the NAS itself. This was cured simply by uninstalling and reinstalling the Web Station package.

  • Virtual Machine Manager. The package's interface works fine; the problems begin when you start a virtual machine. The VM is terribly slow and laggy – the ls command in an Alpine guest took about 5 seconds to complete. In the end, I decided not to torture the storage and to run virtual machines in Proxmox with an attached NFS disk instead. Perhaps I selected the wrong add-ons in the ARC loader during installation. Overall, the loss is not huge – Container Manager works great.

During testing, I bought two more 500-gigabyte WD Blacks, which let me build a RAID 5 out of the Blacks. The disks were used; the asking price was 3 thousand rubles. The terabyte disk was repurposed as a file dump and got a 256-gigabyte SSD cache. In the future I plan to buy a couple of NVMe disks for the cache, since that will enable read-write caching (right now the cache works for reads only). As a result, the disks in the Storage Manager look like this:

What ended up on the disks:

  • Pool 1. 3 WD Black disks of 500 gigabytes each are used as a RAID5 array for personal files (family photos/videos, synchronizing important files from a PC, synchronizing photos from smartphones).

  • Pool 2. SATA SSD 500 GB for fast NFS shares for virtual machines on Proxmox.

  • Pool 3. A 1-terabyte SATA Seagate used for NFS, torrents and storage of unimportant files. An NVMe SSD is attached to this pool as a read cache, since the media server kept cutting out on the TV; adding the cache eliminated the frequent interruptions.

Linking NAS with Proxmox

So, we’ve sorted out the assembly, installation, configuration and use. Now let's connect our NAS with Proxmox.

  • To do this, go to the NAS Control Panel and create a new shared folder on one of the available volumes, and allow read/write for your user (for home use you can also tick read/write for the guest user).

  • After creating the folder, edit its parameters and add an NFS permission rule (the address 192.168.0.100 is the address of my Proxmox server).

(In the screenshot are test LXC containers for future articles. 🙂)

Overall, this is a simple, basic setup of NFS shares for Proxmox. On the NAS itself you can limit the size of the NFS folders, add users and restrict their rights, add more NFS folders on the HDDs, and so on.
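On the Proxmox side the exported folder can be attached either through the web UI (Datacenter → Storage → Add → NFS) or from the shell. Below is a minimal sketch of the shell variant; the NAS address, storage ID and export path are placeholders for illustration, not my actual values:

showmount -e 192.168.0.50                 # list the NFS exports published by the NAS (if showmount is installed)
pvesm add nfs nas-nfs \
  --server 192.168.0.50 \
  --export /volume1/proxmox \
  --content images,iso,backup             # register the export as Proxmox storage
pvesm status                              # check that the new storage is online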

Bonus for those who read to the end

I mentioned above about trying to run Nginx Proxy Manager in a container on XPenology. Since I was unable to use the NAS as a hypervisor, we will use it as a platform for running Docker containers.

So, quick instructions for running NPM in XPenology:

sudo docker network create -d macvlan \
  --subnet=192.168.0.0/24 \
  --gateway=192.168.0.1 \
  -o parent=ovs_eth0 macvlan_network

This command creates a macvlan network that connects containers directly to your local network. Change the subnet and gateway to suit your setup. The parent network interface can be found via ip/ifconfig, but usually it is ovs_eth0.
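A quick way to double-check the interface name over SSH before creating the network (a small sketch; the exact commands available on stock DSM may vary slightly):

ip addr show                  # the interface that holds your LAN IP (192.168.0.x) is the one to pass as parent
sudo docker network ls        # afterwards, verify that macvlan_network appears in the list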

We insert the following docker-compose file, the contents of which, I think, most will understand without additional comments:

version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    networks:
      macvlan_net:
        ipv4_address: 192.168.0.99
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt

networks:
  macvlan_net:
    external: true
    name: macvlan_network

We skip the web portal setup step and create the project.
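If you prefer the command line to the Container Manager GUI, the same project can also be started over SSH. A rough sketch, assuming SSH is enabled in the DSM Control Panel and the compose file above is saved as /volume1/docker/npm/docker-compose.yml (a hypothetical path):

cd /volume1/docker/npm
sudo docker-compose up -d      # newer Container Manager builds may use "docker compose up -d" instead
sudo docker ps                 # make sure the nginx-proxy-manager container is running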

If everything worked out, the NPM portal will open at the IP specified in the compose file (for me it is 192.168.0.99:81). You can read about further use and configuration of Nginx Proxy Manager in my previous article, “Home server based on Proxmox”.

Why macvlan? The DSM web portal already runs on the NAS address and occupies ports 80 and 443, which NPM needs. So I put the container directly onto my home network with its own address and got a working setup with free ports.
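A quick way to confirm from another machine on the LAN that the container answers on its own macvlan address (a trivial check, assuming curl is available; note that, as a macvlan quirk, the NAS host itself typically cannot reach the container's address directly):

curl -I http://192.168.0.99:81    # expect an HTTP response from the NPM admin portal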

