The 2020 version of Mellanox's (now Nvidia Networking, but I'm old school, so I still call it Mellanox) IB Adapter Product Brochure states that their InfiniBand network adapters support NFSoRDMA (NFS over RDMA).
Page 4 from the Mellanox IB Adapter Product Brochure
But when I actually purchased said Mellanox ConnectX-4 dual-port 100 Gbps VPI cards (MCX456A-ECAT) and tried to use Mellanox's OFED driver for Linux (MLNX_OFED_LINUX-4.6-1.0.1.1-rhel7.6-x86_64.iso), NFSoRDMA wasn't working. So I posted on the Mellanox community forums to try and get some help, because sometimes it can very well be that I didn't read or execute the installation instructions correctly, that I'm not understanding something, or that there was another step I needed to do that wasn't documented in the installation instructions.
After fiddling around with it for a little while, I ended up reverting to the "inbox" driver, i.e. the driver that ships with the OS (originally, in my case, CentOS 7.6, before I eventually moved up to CentOS 7.7.1908), because for some strange reason, NFSoRDMA was working with the "inbox" driver but not with the Mellanox OFED driver (a quick way to check which transport a mount is actually using is sketched below).
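Here is a minimal sketch of that check (my own, not anything from Mellanox or Red Hat): it reads /proc/mounts and looks at the proto= option on the NFS mount, which shows up as proto=rdma when the mount is actually going over RDMA instead of quietly falling back to TCP. The mount point in it is a hypothetical placeholder.

```python
#!/usr/bin/env python3
# Minimal sketch: confirm whether an NFS mount is actually using the RDMA
# transport by inspecting its proto= option in /proc/mounts.
# The mount point below is a hypothetical placeholder -- substitute your own.

MOUNT_POINT = "/mnt/scratch"   # hypothetical NFSoRDMA mount point


def nfs_transport(mount_point):
    """Return the value of the proto= mount option for an NFS mount, else None."""
    with open("/proc/mounts") as mounts:
        for line in mounts:
            fields = line.split()
            mnt, fstype, options = fields[1], fields[2], fields[3]
            if mnt == mount_point and fstype.startswith("nfs"):
                for opt in options.split(","):
                    if opt.startswith("proto="):
                        return opt.split("=", 1)[1]
    return None


if __name__ == "__main__":
    proto = nfs_transport(MOUNT_POINT)
    if proto == "rdma":
        print("%s: NFS is using RDMA" % MOUNT_POINT)
    elif proto is None:
        print("%s: no NFS mount found here" % MOUNT_POINT)
    else:
        print("%s: NFS is using %s, not RDMA" % (MOUNT_POINT, proto))
```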
Turns out, the NFSoRDMA feature had been disabled in that version of the Linux driver (and may actually have been disabled in some versions prior to that as well).
Oh really?
So with that admission, I was able to get Mellanox to go on record stating that their own driver at the time did NOT actually do what their advertising material said their products could do, which constitutes false advertising (which would be illegal).
Since then (I'm not sure exactly at what point or in which version), their latest driver has had NFSoRDMA re-enabled.
Source: https://docs.mellanox.com/display/MLNXOFEDv541030/General+Support
I am writing about this now because this was quite the adventure in trying to get NFSoRDMA up and running on my micro cluster.
Because Mellanox (Nvidia Networking) has taken away a feature/functionality once before, I no longer trust that they won't do it again.
As a result, I currently still only use the "inbox" drivers that ship with the OS, because at least I know that they'll work.
The ONLY time this may not hold is if I were to start using InfiniBand with the Windows platform (on the server side and/or the client side); then I might have to use the Mellanox drivers. But for Linux, I can vouch for certain that the "inbox" driver that ships with 'InfiniBand Support' for CentOS 7.6.1810 and 7.7.1908 works like a charm!
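For reference, here is a rough sketch of the client-side steps I mean, wrapped in Python purely for illustration; they're really just ordinary shell commands run as root against the stock CentOS 7 packages. The server name, export path, and mount point are hypothetical placeholders.

```python
#!/usr/bin/env python3
# Rough sketch of bringing up an NFSoRDMA client with the CentOS 7 "inbox"
# RDMA stack. These are plain shell commands run as root; the server name,
# export path, and mount point are hypothetical placeholders.
import subprocess

SERVER = "node01"              # hypothetical NFS server reachable over IPoIB
EXPORT = "/export/scratch"     # hypothetical export on that server
MOUNT_POINT = "/mnt/scratch"   # hypothetical local mount point


def run(cmd):
    """Echo a command, run it, and stop if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    # Pull in the stock RDMA/InfiniBand stack that ships with the distribution.
    run(["yum", "-y", "groupinstall", "InfiniBand Support"])
    # Load the client-side NFS/RDMA transport module.
    run(["modprobe", "xprtrdma"])
    # Mount over RDMA; 20049 is the standard NFSoRDMA port.
    run(["mount", "-t", "nfs", "-o", "rdma,port=20049",
         "{}:{}".format(SERVER, EXPORT), MOUNT_POINT])
```

On the server side, if memory serves, the corresponding inbox-driver steps are loading the svcrdma module and adding "rdma 20049" to /proc/fs/nfsd/portlist after the NFS server is running, as described in the RHEL 7 storage administration documentation.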