23 February 2022

Vastly differing results in WSL2 between 5950X and 12900K in Windows 10 21H2

A little while ago, I came across this video, which talked about how you can run Linux graphical applications natively in Windows (more specifically, in Windows 11).

However, at the time when I watched said original video, I didn't have any hardware that could actually run that properly. The newest system that I had was an Intel Core i7-6700K and, as far as I know, it didn't have a Trusted Platform Module (TPM) anywhere, whether as an external add-on dongle or integrated into the motherboard firmware/BIOS.
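(As an aside, if you want to check whether a Windows machine has a TPM at all, one quick way - this is just a sketch, assuming an elevated PowerShell prompt - is:

> Get-Tpm

or you can run tpm.msc to bring up the TPM management console.)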

So, I didn't really make much of it back then.

But since then, I've built both my AMD Ryzen 9 5950X system and also my Intel Core i9-12900K system, and with some of the work that I needed the systems to be doing now over with, I figured that I had a little bit of time to do some more testing with them.

So I grabbed two extra HGST 1 TB SATA 6 Gbps 7200 rpm HDDs (one per system), threw Windows 10 21H2 on them, and proceeded with the instructions on how to install and configure Windows Subsystem for Linux 2 (WSL2). I installed Ubuntu 20.04 LTS (which really turned out to be 20.04.4 LTS), and proceeded to try and install the graphical layer/elements on it.
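(For anybody following along, the short version of those instructions - assuming a reasonably up-to-date build of Windows 10 with virtualisation enabled in the BIOS - is something along the lines of running, from an administrator command prompt or PowerShell:

> wsl --install -d Ubuntu-20.04
> wsl -l -v

The first command enables the WSL components and pulls down the Ubuntu 20.04 distribution; the second just confirms that the distribution is actually registered as version 2 rather than version 1. The exact steps may vary a bit depending on your Windows build.)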

So that's all fine and dandy. (Well, not really, because in both instances, neither of the systems was able to start the display, and I can't tell if it is because I have older video cards in the systems (an Nvidia GeForce GTX 980 and a GTX 660, respectively - when these were CentOS 7.7.1908 systems, it didn't really matter what I had in there since I was going to remote in over VNC anyways).)

But since I had it installed, AND since, by some miracle, Windows 10 picked up on the Mellanox ConnectX-4 dual port VPI 100 Gbps InfiniBand cards automatically, I just had to manually give each card in each system an IPv4 address so that it could talk to my cluster headnode (which was still running CentOS along with OpenSM), and connect up to the network shares that I had set up. (SELinux is a PITA. But I got Samba going on said CentOS system, so that on the Linux side, clients can connect up to the RAID arrays using NFS-over-RDMA, whilst in Windows, it's just through "normal" Samba (i.e. NOT SMB Direct).)
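(For reference, the NFS-over-RDMA mount on the Linux clients looks something like the following - the hostname, export path, and mount point here are just placeholders, and this assumes that the headnode has the RDMA transport enabled on the standard port 20049:

$ sudo mount -t nfs -o rdma,port=20049 headnode:/export/raid /mnt/raid

On the Windows side, it's just a regular mapped network drive, as described below.)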

So, I figured I might as well benchmark the systems to see how fast they would be able to write and read a 1024*1024*10240 byte (10 GiB) file.

And for fun, I also installed Cygwin on both of the systems as well, so that I can compare the two together.

Since both systems were able to pick up the Mellanox ConnectX-4 card right away (I didn't have to do anything special, install the Mellanox drivers, etc.), I was able to connect up to my cluster headnode, and the Samba shares were visible immediately. As a result, I was able to right-click on each of those shared folders and map it to a network drive directly and automatically.
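(If you'd rather do the mapping from the command line, the equivalent is something like the following, where the server and share names are just placeholders for whatever your headnode is actually exporting:

> net use V: \\headnode\share /persistent:yes

That assigns the share to the V: drive letter, which is what gets handed to WSL2 in the next step.)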

Now, in WSL2, I had to mount the mapped network drive using the command:

$ sudo mount -t drvfs V: /mnt/V

(Source: https://superuser.com/questions/1128634/how-to-access-mounted-network-drive-on-windows-linux-subsystem)
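(One thing to note is that the command above assumes that the mount point already exists; if it doesn't, create it first:

$ sudo mkdir -p /mnt/V
$ sudo mount -t drvfs V: /mnt/V

Also, the mount doesn't persist across WSL2 restarts unless you add an entry for it to /etc/fstab or remount it each time.)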

And then once that was done, I was able to run the following commands in both Ubuntu on WSL2 and also in Cygwin:

Write test:
$ time -p dd if=/dev/zero of=10Gfile bs=1024k count=10240

Read test:
$ time -p dd if=10Gfile of=/dev/null bs=1024k
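(dd itself prints an average transfer rate when it finishes, but with time -p you can also just work it out by hand: the file is 10240 MiB, so if the "real" time came back as, say, 25.0 seconds - a made-up number purely for illustration - the throughput would be 10240 MiB / 25.0 s ≈ 410 MiB/s.)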

Here are the results:

Huh. Interrresting.

I have absolutely NO clue why WSL2 on the 5950X is so much slower compared to WSL2 on the 12900K.

But what is interesting is that under Cygwin, the speeds are close, with the 5950X being a little bit faster than the 12900K.

I decided to blog about this because, for those who might be working with WSL2, the hardware that you pick MAY have an adverse impact on performance.

I'm not sure who, if anybody, has done a cross-platform comparison like this before, but to be honest, I haven't really bothered to look for it either, because you might reasonably have expected that a performance difference like this wouldn't exist. The results, however, clearly show that there is a difference - and a rather significant one at that.

Please be aware of this, and do your own testing for your own workload/use case if you get a chance to do so.

22 February 2022

Why is Intel keeping the overall physical dimensions of their Intel 670p Series 2 TB SSD a secret?

I recently submitted my order for a Minis Forum HX90 (specs). I am looking to use it to replace my very hot Intel NUC that I had previously written about (it's back up to 100 C nominal now), and I might also offload all of the virtualisation duties from my Intel Core i7-6700K system onto this new system instead. I didn't know if said new system would support RAID0 with my two existing Samsung 850 EVO 1 TB SATA 6 Gbps SSDs that are no longer currently deployed in a system, so I figured that I would get a 2 TB NVMe SSD just to be safe, and I landed on this - an Intel 670p Series 2 TB NVMe 3.0 x4 SSD (specs).

Whilst browsing through YouTube, I stumbled upon a video talking about NVMe SSDs and putting heatsinks on them, and how they would thermally throttle their performance if said NVMe SSD got too hot whilst it was under load.

So, that got me thinking - should I start looking into getting an NVMe SSD heatsink of my own for this drive?

So, I reached out to customer support at Minis Forum (based out of Hong Kong, which is interesting because their first email back to me was written entirely in Traditional Chinese) and asked them about an SSD heatsink (because some of the review units that they've sent to other tech YouTubers included an NVMe SSD with a heatsink pre-installed in the system), and they told me that the total height that the HX90 can take, INCLUDING the NVMe SSD, is 7 mm.

So, ok. No problem, right? If I can find out the overall height of the Intel 670p Series 2 TB NVMe 3.0 x4 SSD, then I can figure out the maximum height of a heatsink the HX90 can accept, and then I can start looking into my purchasing options.

So, then I reached out to Intel's customer support, because of course, lo and behold, the overall height of the Intel 670p Series 2 TB NVMe 3.0 x4 SSD isn't listed on their spec page.


Huh. No overall physical dimensions listed on Intel's website.

I asked them this basic question and also told them why: the manufacturer of the computer had told me what the maximum height of the combined SSD and heatsink can be, so I needed the drive's height in order to properly size and purchase said heatsink. Their customer service rep said that they understood why I was asking for this information, that they would need to do further research on this topic, and that they would get back to me. Okay. Not a big deal.

Well earlier today, I got an email from said customer service rep stating quote:


Why would Intel keep the overall physical dimensions of their product under an NDA?

So, at this point, it seemed awfully suspicious.

I told them that I was not asking on behalf of the company where I work, and therefore, I have no idea whether it has a signed NDA with Intel or not. (And frankly, that shouldn't matter, because a customer should be able to ask for the overall physical dimensions of a product (and not just the overall dimensions of the box/packaging that the product gets shipped in, either).)

I then told them that I would just measure my drive when it arrives and that, as such, I would not be signing an NDA in regard to this.

Well, about 3 hours later, my drive arrived.

So, for those that are interested in knowing, the overall physical dimensions of the Intel 670p Series 2 TB NVMe 3.0 x4 SSD are:





Overall length: 80.12 mm
Overall width: 22.05 mm
Overall height: 2.0525 mm (average of 2.09 mm, 2.06 mm, 1.97 mm, and 2.09 mm)

So, in case you're out shopping for an NVMe heatsink and you're trying to use it in a small form factor (SFF) or ultra compact form factor (UCFF) build, now you know the height of the drive and can work out the tallest heatsink you can fit. In the case of the HX90, with its 7 mm total height limit, that leaves roughly 7 mm - 2.05 mm ≈ 4.95 mm for the heatsink itself.

08 February 2022

A friendly reminder to periodically clean your NUC

I have an Intel BOXNUC8i7BEH (specs) and I have been using it to run a VM and also as a host system/unit.

Lately, it's been having issues where, even when I tried to run it without the chassis (i.e. running it in an "open case" configuration), the temps were still hitting a peak of 100 C whilst downloading something in the VM and also with 12 Firefox tabs open on the host itself.
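(If you want to keep an eye on the CPU temperature yourself, one way to do it - just a sketch, assuming a Linux host with the lm-sensors package; on Windows, a tool like HWiNFO will show the same kind of readings - is:

$ sudo apt install lm-sensors
$ sudo sensors-detect
$ sensors

The last command prints the current readings from the CPU's thermal sensors.)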

So, given that it was still running so hot, even with it running out of the case/in the "open case" configuration, I figured that I would shut the unit down, wait for it to cool off a bit, and proceed with further disassembling the unit.

Once I took the fan off, there was a LOT of dust that had been trapped where the inlet to the copper heatsink was, so I was able to clean that off with damp tissue paper.

And I also figured that, since I had some Thermal Grizzly Kryonaut sitting around, I might as well also remove the plate that the heatpipes are connected to, clean off the old thermal paste that's on the CPU, and give it some new thermal paste whilst I was at it.

Lo and behold, the system, still doing exactly what it was doing before (picking up from where it left off when I powered it down), is now sitting at a cooler 85 C or so.

Yay!

Moral of the story: remember to periodically clean your NUC!