A little about the assembly

There is nothing special to write about assembling a PC. 95% of all such reports boil down to "I went to the store, bought the parts, and put a computer together out of them."

So I won't describe here how I screwed in the motherboard or which thermal paste I spread on the processor. Instead, I'll go over a few questions of hardware compatibility, cooling, power consumption, and mounting a couple of things in the case that it wasn't designed for.

CPU

Although the Ryzen 7 1700 did start up after I rolled the BIOS back from the store-flashed version to the previous one, it ran unstably, freezing at random moments. Most often this happened during long rsync runs. I could copy files in mc or Krusader for ten hours, or upload them over the network, and nothing would happen. But as soon as I started an rsync copy between local disks, the computer would silently hang after three or four hours.

Perhaps it was incomplete compatibility between the processor and the motherboard (it is not officially supported, after all), or perhaps the well-known problem of early Ryzens under Linux. If it was the latter, power settings in the BIOS partially helped, but the issue never went away completely.
But the problem was definitely in the processor: I moved the Ryzen 5 3600X over from my desktop, and the computer ran without a single hiccup.

But the 3600X is still a fairly hot processor, and a six-core one at that, while I was already aiming for eight cores. So I managed to arrange a swap of the 1700 for a 3700; under Windows, on a motherboard that officially supports it, that 1700 still works without problems. As does my 3700. And the 3600X will go back to my home desktop.

Cooling

I wrote earlier that some tower coolers (including mine) have to be installed on this motherboard so that the fan blows air toward the top of the case rather than toward the back wall. But since the case is almost entirely made of ventilation holes and lets you hang fans anywhere, I simply mounted a 140mm fan on top so that it picks up air from the CPU cooler and exhausts it through the holes in the ceiling.

Although, in general, the processor barely warms up anyway. I disabled automatic boost and fixed it at 3600 MHz. At that frequency I could not get it above 50 degrees even with the AIDA64 stress test, and that one heats processors up well. Maybe not the heaviest load, but not far behind prime95 or OCCT.

For the hard drives, I've hung two fans so far: a 120mm on the front wall and a 140mm on the back. Drive temperatures under load stay within 40 degrees, but summer hasn't started yet. If it gets hot, I'll add another 120mm to the front wall. Or maybe I'll just install it whenever a spare one turns up.

All of the stock case fans were three-pin, so I pulled them out and replaced them with four-pin ones. I don't use the fan speed switch built into the case.

Power consumption

I roughly measured power draw at the wall. Roughly, because this is the basic configuration; more hard drives, a video card, and so on will be added later. For now: Ryzen 3700 @ 3.6GHz / 32GB RAM / 2x SSD / 6x HDD.

Computer off, plugged into the outlet, IPMI and the switch running – 4-6 watts.
Peak draw at power-on – 120 watts.
After boot, nothing running except the OS itself, all hard drives spinning – 70-73 watts.
Array started, one virtual machine, a couple of containers, idle with no active work – 80-83 watts.
Parity check (reads on all hard drives) – 90-95 watts.
100% load on all cores, AIDA stress test plus a parity check – 120-125 watts.
Idle with all drives spun down – 42-43 watts.
Three disks spun up (most likely the typical state) – about 50-55 watts.

For comparison, the MicroServer with a Xeon E3-1265L consumed about 70 watts at idle (with the disks spinning, though there were only four of them plus SSDs) and around 110 under full load.

Of course, this is no ARM NAS or six-watt Celeron, but I knew what I was getting into. Power consumption will grow further when I add a couple more disks, a video card, and 10-gigabit networking; at peak I think it could grow by half. But real consumption will be significantly lower, because the system rarely runs at 100% load.

For ease of calculation, I take the server's average consumption as 100 watts. At current rates that comes out to about 300 rubles a month. That is, of course, a noticeable part of the family electricity bill – I pay about 1,000 rubles a month. But within the total "IT expenses" – internet, mobile phones, all kinds of subscriptions, clouds and the like – those 300 rubles are not such a large share. At least it's not the first thing I'd cut if I needed to save.
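As a sanity check on that figure, here is the same arithmetic in a few lines of Python. The 100-watt average and the roughly 300 rubles a month are the numbers above; the per-kWh tariff is my assumption, chosen only to make the calculation explicit.

```python
# Back-of-the-envelope monthly electricity cost for a 24/7 server.
# The 100 W average and ~300 rubles/month come from the text above;
# the tariff is an assumed value that roughly matches those numbers.

AVG_POWER_W = 100            # assumed average draw, watts
HOURS_PER_MONTH = 24 * 30    # the server runs around the clock
TARIFF_RUB_PER_KWH = 4.2     # assumed tariff, rubles per kWh

energy_kwh = AVG_POWER_W / 1000 * HOURS_PER_MONTH  # ~72 kWh
cost_rub = energy_kwh * TARIFF_RUB_PER_KWH         # ~302 rubles

print(f"~{energy_kwh:.0f} kWh per month, roughly {cost_rub:.0f} rubles")
```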

All sorts of add-ons and DIY hacks

SATA controller

I didn't want to spend one of the three PCI-E slots on a controller, so when I learned that SATA controllers exist in M.2 form, I immediately ordered one. Since I don't need NVMe speeds yet and can get by entirely with SATA SSDs, trading one of the two M.2 slots for five SATA ports was an easy decision.

I won't go into detail about the controller here; for those interested, I wrote a separate review on mysku.

In short, it works. The claimed maximum transfer rate of 1600-1700 MB/s is only reached in a gen3 x2 slot; in a gen2 x4 slot it runs at half speed. All five ports work, no drivers are required, you can boot from the controller, TRIM works on SSDs, it doesn't heat up noticeably under load, and the LEDs are green. So it's perfectly fine to buy and use if you need something like this.

At home, I ended up putting it in the gen2 x4 slot anyway, where it works at half speed – up to 900 MB/s. But I only plan to connect the hard drives of the second cage to it, and that speed is enough for four HDDs with room to spare. The fifth port is kept in reserve. The first cage and the SSDs are connected directly to the motherboard.
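To make the "room to spare" claim concrete, here is a rough estimate. The 900 MB/s figure is the measured throughput mentioned above; the per-drive sequential speed is my assumption for a typical 3.5-inch HDD.

```python
# Rough check that ~900 MB/s from the M.2 SATA controller covers four HDDs.
# 900 MB/s is the measured figure from the text; the per-drive sequential
# speed is an assumed typical value for a modern 3.5" hard drive.

CONTROLLER_MBPS = 900   # measured throughput with the controller in the gen2 slot
HDD_SEQ_MBPS = 200      # assumed peak sequential speed of one drive
DRIVES = 4

worst_case = HDD_SEQ_MBPS * DRIVES        # 800 MB/s if all four stream at once
headroom = CONTROLLER_MBPS - worst_case   # ~100 MB/s left over

print(f"worst-case demand {worst_case} MB/s, headroom {headroom} MB/s")
```

In practice all four drives rarely stream at full speed at the same time, so the real headroom is even larger.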

Network

Where the server lives there is no separate switch, only a five-port router. Two ports go to the providers, one port goes to the TV, one goes to the switch at my workplace, and the server was connected to the last one. Although the MicroServer had two network cards, I only used one, and I configured iLO to work through the main network controller rather than through its own dedicated port.

The current motherboard can't do that – it wants a separate cable for IPMI. But I didn't want to set up another switch next to the router, so I took a small five-port switch, screwed it inside the case, powered it from the server's power supply, connected the motherboard to it with three short patch cords, and ran the fourth cable to a free port on the router.

In the future I'll redo this setup – I'll buy a bigger router or put a separate switch on the mezzanine – but for now it works like this. I'm not losing any bandwidth: either way, the server would hang off a single port on the router.

USB

The motherboard has only four USB ports: two on the rear panel and two on an internal header meant for the front panel. Since unraid boots only from a flash drive, one of those ports would have to be "spent" on it, but I wanted to keep all of the external ports free. So I bought a couple of header adapters on Ali and a simple hub, mounted the hub inside the case, and connected the front-panel ports to it. The boot flash drive went into the first adapter, plugged directly into the motherboard header.

Adapter:

The hub. A fan is planned here in the future, but I think the wires can be routed so they don't get in its way.

Yes, I now have four ports hanging off a single one, but I doubt I'll ever need full USB bandwidth on all of them at once. The hub is attached near the hard drive compartment so it can be reached if necessary; it still has two free ports, so some kind of dongle can be plugged in.

Of course, it would be ideal to have an internal hub, such as Sabrent's:

But those are nowhere to be found, the price is quite high, and I didn't find anything similar from the Chinese sellers – the only comparable thing I've seen is a USB 2.0 one from NZXT.

Cable management

At first I was embarrassed to show it, but in the end I decided to. The case, by its design, isn't really suited to beautiful cable management, but I didn't want to twist everything into a ball and shove it under the hard drive cage either.

That's why I bought a couple of 40cm four-connector SATA power cables for the modular PSU. The stock cables are a meter long, and there are only two of them: 2x SATA + Molex + FDD and 3x SATA + Molex. The new ones fit the cages perfectly, and the length is still enough to safely pull the cages out of the case.

For the same hard drives I also bought half-meter SATA cables on Ali that serve four devices each – they take up much less space than eight conventional cables. For the SSDs on the front panel I use two conventional cables.

The motherboard compartment. The 140mm fan sits close to the memory without blocking it. At the bottom there are a couple more mounting spots for hard drives, and I have three free SATA ports, so sooner or later something will go there. The motherboard power cable turned out to be almost taut, but just barely – there is enough slack to connect and disconnect it safely. Of course, it could have been routed through the large hole on the side, but I preferred it this way; it seems neater and doesn't get in the way of the SATA cables.

The compartment where the hard drives are connected. The SATA cables for the cages hang down; diagonally along the back wall runs the eight-pin cable to the motherboard, with the switch's power lead attached to it. The switch is powered from one of the PSU's stock cables (the 3x SATA + Molex one), which also feeds the SSDs on the front panel – fortunately that cable is long. I coiled the excess and fixed it at the bottom with one of the stock ties (the second tie holds the USB hub).

The hard drive compartment. Not perfectly neat, but much better than the tangle I had with the temporary (non-modular) power supply and ordinary SATA cables.

The hard drives above the power supply. From the reviews I expected the SATA cables to kink there, but it turned out quite well.

Of course, it could have been done more neatly, but I needed to keep the ability to pull out the drive cages and generally dig around in the case without cutting a bunch of ties every time.

Finances

I won't go into detail, but counting all the wires and fans, the server cost me about 65 thousand rubles against a planned budget of 50 thousand.

Counting the sale of the MicroServer and a discount at a store where I know people, I paid 10-15 thousand of real money, which I consider a very good deal for moving from a Xeon E3-1265L / 16GB to a Ryzen 7 3700 / 32GB. Especially since nothing more could be squeezed out of the Xeon, while the Ryzen still has plenty of room for further expansion. I'm not counting the cost of unraid here, because that's a separate expense item – any other OS could just as well be put on this hardware.

The next part will be about using unraid – exactly how I replaced the old server's functionality, and with which programs.
