First, a high-level rundown
01 March 2020
My homelab serves a variety of purposes.
I’ve got a custom DHCP and DNS server (Pi-hole) running atop a Raspberry Pi 3 Model B+. I needed a router that could handle a multi-WAN environment, and pfSense handles that quite well. It runs as a VM atop a Dell R210ii running ESXi, along with three other “mission-critical” VMs: an Ubuntu VM acting as an NFS server, a second Ubuntu VM running Nginx as a reverse proxy and web server for a few self-hosted websites, and a third running the vCenter Server Appliance to manage the entire cluster of ESXi hosts.
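For a sense of what the reverse proxy side looks like, a minimal Nginx server block along these lines does the job (the domain and upstream address here are placeholders, not my actual config):

    # Hypothetical reverse proxy entry for one self-hosted site
    server {
        listen 80;
        server_name blog.example.com;            # placeholder domain

        location / {
            proxy_pass http://10.0.0.42:8080;    # internal VM hosting the site
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }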
A second R210ii serves as a stable development ESXi host. An Ubuntu Server VM runs a few dev tools and lets me work on school projects remotely, from any computer, using VS Code Remote. A Windows Server VM runs an Active Directory domain for ease of login on other Windows machines. Several other VMs see sporadic use, including a RHEL VM for playing around with RHEL/CentOS packages and a GNS3 VM for experimenting with virtual network topologies.
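VS Code Remote just rides on SSH, so the only setup on the client side is an entry like this in ~/.ssh/config (the host name and address are made up for illustration):

    # Hypothetical entry for the dev VM
    Host devbox
        HostName 10.0.0.50
        User dev
        IdentityFile ~/.ssh/id_ed25519

With that in place, the Remote - SSH extension lists “devbox” as a one-click target.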
The first whitebox I built is compute-optimized, running two E5 v4 processors, plenty of RAM, and a couple of GPUs. An Ubuntu VM and a Windows 10 VM run atop this box, with the GPUs passed through directly to the VMs. I used to run a FreeNAS VM with an HBA card passed through, but that has since migrated to a new whitebox.
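GPU passthrough on ESXi is mostly point-and-click in the vSphere UI, but a few .vmx entries tend to be needed for GPUs with large BARs; a sketch of the usual suspects, with illustrative values (the right MMIO size depends on the GPU):

    pciPassthru.use64bitMMIO = "TRUE"     # map large GPU BARs above 4 GB
    pciPassthru.64bitMMIOSizeGB = "64"    # illustrative; size to fit the GPUs
    hypervisor.cpuid.v0 = "FALSE"         # hide the hypervisor from guest drivers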
My NAS build also runs ESXi, hosting FreeNAS as a VM with a SATA HBA and a set of NVMe SSDs passed through directly.
All four servers are connected with 10GbE, my latest project.
Over the past quarter, I’ve been working primarily on my NAS and a recent 10GbE deployment. Although the NAS build was aimed primarily at raw storage capacity and fault tolerance, the motherboard I chose also featured a few 10GBASE-T ports. I subsequently upgraded my main switch to a Mikrotik CRS326-24S+2Q+RM to bring 10GbE throughout the apartment. Intel X520 NICs were retrofitted throughout the cluster, and multimode fiber with SR transceivers at either end replaced Cat6 to cut power draw. Between Ubuntu VMs, iPerf easily hit 10 Gbps on a single TCP connection. On FreeBSD and macOS, however, I could only reach 3-5 Gbps. On FreeBSD, I had to set a few tunables to increase TCP buffering and enable TSO and LRO offloads; on macOS, all that was required was enabling jumbo frames (which then had to be enabled on every other host and switch as well).
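For the curious, the FreeBSD knobs look roughly like this; the buffer sizes are illustrative rather than tuned recommendations:

    # /etc/sysctl.conf: let TCP socket buffers grow to fill a 10GbE pipe
    kern.ipc.maxsockbuf=16777216
    net.inet.tcp.sendbuf_max=16777216
    net.inet.tcp.recvbuf_max=16777216

    # Enable TSO/LRO and jumbo frames on the X520 (ix driver)
    ifconfig ix0 tso lro mtu 9000

Re-running iperf between hosts after each change (e.g. iperf3 -c <server>) made it easy to see which knob actually moved the needle.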
Optimizing FreeNAS for the added bandwidth required the same network tunables, plus tuning of the ARC and SLOG so throughput to the disks could keep pace. I added a pair of NVMe SSDs, one as an L2ARC for read caching and one as a SLOG to accelerate synchronous writes.
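The ZFS side of that is pleasantly simple; assuming a pool named tank and FreeBSD’s NVMe device names, it comes down to something like:

    # L2ARC (read cache) on one NVMe SSD, SLOG (sync-write log) on the other
    zpool add tank cache nvd0
    zpool add tank log nvd1

A mirrored SLOG (zpool add tank log mirror nvd1 nvd2) is the more cautious choice, since the log device sits in the synchronous write path.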
An interesting limitation exists in the Mikrotik CRS326: it only sustains 10 Gbps when acting on pure L2 features. Enable L3 routing and the MIPS CPU suffers greatly, reducing throughput to 300 Mbps.
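In RouterOS terms, staying fast means keeping ports on a hardware-offloaded bridge so the switch chip, not the CPU, does the forwarding; roughly (interface names are placeholders):

    /interface bridge add name=bridge1
    /interface bridge port add bridge=bridge1 interface=sfp-sfpplus1 hw=yes
    /interface bridge port print
    # ports flagged "H" in the print output are hardware-offloaded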
On the network administration side, I’m looking to implement VLANs, testing the waters on whether this crushes the Mikrotik’s performance.
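The plan, sketched here with placeholder ports and VLAN IDs, is plain bridge VLAN filtering, which the CRS3xx line is supposed to handle in hardware:

    /interface bridge set bridge1 vlan-filtering=yes
    /interface bridge vlan add bridge=bridge1 tagged=bridge1,sfp-sfpplus1 vlan-ids=20
    /interface bridge port set [find interface=sfp-sfpplus2] pvid=20

Whether the switch chip keeps up with filtering enabled is exactly what I want to measure.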
On the hardware side, I just ordered a few used Samsung 983 DCTs along with a few Mellanox ConnectX-3 40GbE NICs. So if I thought optimizing for 10GbE was difficult, I’m in for a real treat of a challenge.