First, a bit of back story
During the fall 2018 semester, I started exploring the world of enterprise hardware. At my internship that summer, I had seen enterprise networking hardware firsthand: racks and racks of network switches, neatly laid out and well optimized.
"> First, a bit of back storyDuring the fall 2018 semester, I started exploring the world of enterprise hardware. During my internship that summer, I saw the world of enterprise networking hardware–racks and racks of network switches, neatly laid out and well optimized.
"> First, a bit of back storyDuring the fall 2018 semester, I started exploring the world of enterprise hardware. During my internship that summer, I saw the world of enterprise networking hardware–racks and racks of network switches, neatly laid out and well optimized.
" />Enterprise hardware in a non-production environment
During the fall 2018 semester, I started exploring the world of enterprise hardware. During my internship that summer, I saw the world of enterprise networking hardware–racks and racks of network switches, neatly laid out and well optimized.
While that marked the beginning of my current homelab, here at my apartment in Ithaca, my first server was an old 2008 Xserve 2,1 that I bought on eBay in high school. It came preinstalled with Mac OS X Server 10.5; I promptly upgraded to OS X Lion Server, and later found a hacky way to run releases up through OS X Mavericks Server with a GPU upgrade and some firmware reflashing. It was a modest server: a four-core Intel Harpertown processor with 10GB of DDR2 RAM. I enjoyed learning about hot-swap disks with the ever-so-elegant Apple disk trays. As a 1U server, it sounded like a jet engine even at idle, but it landed a spot under my bed and kept my room about 5 degrees warmer than the rest of the house. I (or rather, my mom) also discovered how power-hungry server hardware is compared to consumer chips when our home energy usage (and the subsequent monthly bill) shot up rather dramatically.
I also played around with some of OS X Server’s software features, implementing RADIUS for our home network, an Active Directory domain for my own enjoyment, and even hosting some NetBoot and NetInstall images for quick troubleshooting of our family’s Macs. It wasn’t long until I got a “dumb” switch and ran a cat5e cable from the router downstairs to my bedroom upstairs and around the corner. I’ll be honest: I didn’t really understand how the networking worked then, other than that wired Ethernet was significantly more reliable and that I could get a consistent 30 Mbps down and 5 Mbps up on our Comcast broadband connection. I knew a bit about 802.11 configuration and radio types, but definitely didn’t have a handle on a multi-WAP setup or configuration.
Fast forward four years to my off-campus college apartment, and I was starting to build my lab again. I couldn’t bring my trusty ol’ Xserve along, but I was finally able to invest in a short-depth rack. I started with 8U, which quickly grew to 12U (thanks to Amazon’s generous return policy), and when I move into my first apartment after college, I’ll likely grow that to a full half-rack.
Most of the motivation for building this lab was implementing a multi-WAN setup for my apartment’s internet connection. I found out that our building had upgraded to 1 Gbps symmetrical fiber, but put everyone in the building behind a single NAT (complete with client isolation and, presumably, some VLANs). I wanted to harness that speed for day-to-day use, but we were already locked into a year-long contract with Spectrum for a measly 120/20 Mbps over DOCSIS. That said, Spectrum at least provided me with a public-facing IP, so I wanted to hold on to that.
After some exploration, I landed on pfSense, a FreeBSD-based firewall/routing operating system. I also found a short-depth, much quieter 1U server (a Dell R210ii) with decent performance at a reasonable price ($150). I was also able to grab an academic license for VMware’s ESXi and vSphere suite. One R210ii soon turned into two, and the mostly empty rack filled up.
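pfSense expresses this kind of split through gateway groups and policy-routing firewall rules in its GUI, which it compiles down to pf rules under the hood. As a rough, hand-written pf.conf sketch of the idea (the interface names and gateway addresses here are made up for illustration, not pulled from my actual config):

```
# Hypothetical pf.conf sketch of a multi-WAN policy like the one pfSense
# builds from gateway groups; interfaces and gateway IPs are illustrative.
lan        = "em0"
wan_docsis = "em1"          # Spectrum DOCSIS link holding the public IP
wan_fiber  = "em2"          # building fiber, behind the landlord's NAT
fiber_gw   = "10.0.0.1"
docsis_gw  = "203.0.113.1"

# Route day-to-day LAN traffic out the fast fiber uplink
pass in on $lan route-to ($wan_fiber $fiber_gw) \
    from $lan:network to any keep state

# Replies to inbound connections on the DOCSIS public IP leave the same way
pass in on $wan_docsis reply-to ($wan_docsis $docsis_gw) \
    proto tcp to ($wan_docsis) port { 80 443 } keep state
```

The `route-to` rule steers outbound traffic to the fast link, while `reply-to` keeps response traffic for inbound services pinned to the WAN that still has the public IP.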
I grabbed a managed 16-port switch, which turned into a 24-port switch, and later an Arista 7050S-64 and a MikroTik CRS326-24S+2Q-RM. Gray bundles of cat6a strung around the apartment turned into a mix of blue and yellow SMF and MMF with an assortment of LR and SR transceivers (whatever I could get ahold of).
I had an itch to build some whitebox servers, so I ended up with a dual-socket X10-based heavy-duty compute server that did double duty as a FreeNAS box. The FreeNAS VM grew into a dedicated FreeNAS machine (still virtualized for ease of administration) with a dedicated 8-bay SATA backplane. An NVMe-based pool is on the horizon, as are some 40GbE upgrades.
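FreeNAS builds its storage on ZFS, so an 8-bay backplane maps naturally onto a couple of redundant vdevs. A hypothetical sketch of such a pool (device names are FreeBSD-style placeholders, not my actual disks, and FreeNAS normally handles this through its web UI):

```
# Hypothetical 8-bay layout: two 4-disk RAIDZ1 vdevs striped into one pool.
# da0..da7 are illustrative FreeBSD device names.
zpool create tank \
    raidz1 da0 da1 da2 da3 \
    raidz1 da4 da5 da6 da7

# Verify the layout and health of the new pool
zpool status tank
```

Two RAIDZ1 vdevs trade a little capacity for per-vdev redundancy and better rebuild times than one wide 8-disk vdev.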
For a more complete description of what I’ve been up to with this ongoing project, check out my “State of the Rack” posts here.
Hypervisor/Management
Operating Systems
Applications/Packages
Servers
Networking