27 July 2021

Re-deploying My Old Supermicro X7DBE Server

So, I was originally moving from "real" servers to NAS units in order to cut down on my power consumption a little and make managing dumb storage a lot easier. To that end, I bought a Buffalo LinkStation 441e 4-bay diskless NAS, with the original intention of putting four 6 TB HGST SATA 6 Gbps 7200 rpm HDDs in it. When that didn't work, the NAS unit was relegated to four 3 TB drives instead.

Fast forward three years, and I guess I just got tired of the fact that the Buffalo LinkStation 441e couldn't read or write data at anything more than about 20 MB/s with my Windows clients. So I decided to re-deploy my old server as an actual server for dumb file storage and data-serving tasks.

Hardware specs:
Supermicro SC826 12-bay, 2U, SATA rackmount chassis
Supermicro X7DBE dual Socket 771 motherboard
2x Intel Xeon E5310 (4 cores, 1.6 GHz stock, no HyperThreading available)
8x 2 GB DDR2-533 ECC Registered RAM
2x LSI MegaRAID SAS 6 Gbps 9240-8i (SAS2008)
Supermicro SIMLP IPMI card (it came with the board, but I think the card is dead)
4x HGST 3TB SATA 6 Gbps 7200 rpm HDDs

OSes that I tried:
1) TrueNAS Core 12.0 U1.1
2) Solaris 10 1/13 (U11)
3) CentOS 7.7.1908
4) TrueNAS Core 12.0 U1.1

I first tested the server with four HGST 1 TB SATA 3 Gbps 7200 rpm HDDs in order to test the resiliency of ZFS in TrueNAS Core 12 (FreeBSD), by randomly yanking drives and plugging them back into random bays to see what ZFS would do about it.

Of course, with a raidz pool, it was only fault tolerant to one drive (out of the four), which meant that, as expected, pulling out two drives killed the pool (took it permanently offline, such that even after I plugged the drives back in, it still reported the pool as failed).
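For reference, the setup amounts to something like the following. This is just a command-line sketch (TrueNAS does all of this through the web GUI), and the FreeBSD device names da0 through da3 and the pool name "testpool" are placeholders, not what the system actually assigned:

    # raidz (raidz1) keeps one drive's worth of parity across the vdev,
    # so the pool survives exactly one missing or failed drive
    zpool create testpool raidz da0 da1 da2 da3
    # with one drive pulled, the pool reports DEGRADED; with two pulled,
    # it faults (UNAVAIL), which is what killed the pool in my test
    zpool status testpool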

I copied a little bit of data onto the pool to see what it would do.

And then I figured "well, since I was going to use ZFS, why not use the OS that started the whole ZFS thing in the first place?" - Solaris.

After anywhere from a day-and-a-half to three days, I finally got Solaris onto the system.

But then I wasn't able to get Samba up and running. (It still surprises me that the native Samba package that ships with Solaris 10 never worked out-of-the-box, even with all of the updates over the Solaris 10 release cadence.) I found a Samba package from OpenCSW, but that only supported SMB 1.0, which newer versions of Windows disable by default due to security vulnerabilities.
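(As an aside, a quick way to check what a Samba server will actually negotiate, from any machine with smbclient installed, is to force a newer minimum protocol and see whether the connection is refused. This is only a sketch; the hostname "solarisbox" is a placeholder:)

    # list shares while refusing to fall back to SMB1; if this fails but
    # a plain "smbclient -L //solarisbox -N" works, the server is SMB1-only
    smbclient -L //solarisbox -N --option='client min protocol=SMB2'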

So, I wasn't able to get that up and running and quickly abandoned it.

Next, I tried to use CentOS.

I use CentOS for my micro cluster and its headnode, so I have some experience with it. But again, because of how my backplane is wired, three of the 3 TB drives were connected to one of the HW RAID HBAs and the fourth drive to the second one, which meant I would have needed mdadm to tie them together into one array. I was a little wary of that, on the off chance that said md array would fail.
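For what it's worth, the mdadm route would have looked roughly like the following. This is only a sketch, with placeholder device names (sdb through sde) and a placeholder mount point, not something I actually ran:

    # build one software RAID 5 array across all four drives, regardless
    # of which HBA each drive hangs off of
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # put a filesystem on the array and mount it
    mkfs.xfs /dev/md0
    mount /dev/md0 /srv/storage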

At least with ZFS, there appear to be more resources available for help should I need it, since ZFS seems to be all the rage in the Linux storage space right now. That's despite the fact that for years I have been telling people it's not my favourite filesystem to work with, because there are ZERO external data recovery tools for data sitting on hard drives that belonged to a ZFS pool. (A point which is still true today: if your ZFS pool dies, you can't do a bit-level read of a drive to salvage whatever information you can off the platters and try to reconstruct your files, whereas with at least a single NTFS drive, you CAN do a bit-level read and pick off whatever data is still recoverable.)

So, CentOS didn't really work like I thought it would've/could've.

So that sent me back to TrueNAS Core 12.0, because at least it has a nice, web-based GUI. (I'm so done and tired of looking up commands to copy-and-paste into a command prompt, terminal, or ssh session just to get a system set up.)

I did consider Unraid, but the problem with Unraid is that it fills up one drive, then the next, then the next, which means the write speed of a single drive can quickly become the bottleneck for your entire server.

It's too bad that Qnap doesn't publish/sell its QTS software so that you can install it on any hardware, because I really like Qnap's software. It's also too bad that the only way to get the Qnap software is to buy their hardware as well, which can be quite expensive for what you're getting, and it doesn't necessarily support some of the other nice-to-haves that I might want from my future storage server(s), like being able to install a Mellanox/Nvidia (I'm old school; I still call it Mellanox because that's what it is) ConnectX-4 100 Gbps InfiniBand network card.
 
So TrueNAS Core 12.0 became the selected candidate, and I proceeded to get everything set up and running.
 
As it stands, CIFS/SMB and NFS are both running, although I AM running into a permissions issue between the two protocols right now. (Windows clients connect over CIFS/SMB, while my Qnap Linux-based NAS units connect over NFS, because for some reason they fail to log in to the TrueNAS Core 12.0 system over CIFS/SMB.) I posted the question in the forums, and it seems that nobody has identified a cause or a fix for this yet.
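If you're hitting the same thing, the first thing worth comparing is the numeric UIDs/GIDs each side sees, since SMB authenticates by username while NFSv3 just passes raw numeric IDs across the wire. This is a rough sketch only; the export path, mount point, and username below are placeholders:

    # on the NFS client (the Qnap): mount the export and look at the raw
    # numeric ownership instead of resolved names
    mount -t nfs truenas:/mnt/tank/share /mnt/test
    ls -ln /mnt/test
    # on the TrueNAS box: check what UID/GID the SMB user actually has,
    # then compare with the numbers the NFS client reported above
    id myuser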

Luckily, most of the time, it's the Windows clients that use this newly re-deployed server rather than my Qnap NAS units.

But this partially documents and chronicles my journey with TrueNAS Core 12.0 and also my old Supermicro server.

It took almost four days to get the server back up and running, but it's humming along quite nicely now.

The only downside is that I think it's consuming somewhere around 205 W of power, versus the Buffalo LinkStation 441e, which only had a 90 W AC adapter.

That, and the weight of having to move/lug the server around before I finally put it on the rack. (It's just sitting on top of stuff; I don't know if I actually have rails for it.)