
Let me kick this off by saying that two months ago this home lab did not exist. Building out a lab on physical hardware was always something I wanted to do, but the cost always kept me from pulling the trigger. I usually just ran nested VMware Fusion ESXi VMs on my laptop and ran other VMs on top of that. The old Inception dilemma – a dream, within a dream, within a dream. The problem with this approach is that getting more than a handful of VMs running is always a challenge… there just aren't enough resources. Customers would ask, “Steve, why does it sound like your laptop is going to explode?” Finally, I decided that the benefits of building a proper lab (education, customer enablement, product integration testing, etc.) outweighed the monetary cost.
So the search began… what components should I buy? A LOT of research went into it, and I finally determined that the lab I really wanted was going to cost upwards of $10k. Certainly, I would not get approval from the CFO (wife). So I decided on a different approach: buy “ok” components up front that won’t cost a small fortune and upgrade pieces over time. Everyone knows it’s easier to fund a project if you can purchase smaller bits here and there rather than all at once. Luckily, a great opportunity presented itself… one of my customers (and a good friend) was selling the “older” home lab hosts he had replaced with a newer, more powerful Supermicro server. His new server is a beast and will likely be the next step in my home lab evolution. In the end, I purchased both of his older hosts – HP MicroServer Gen8s.
These little suckers pack quite a punch. The original processors were upgraded to quad-core (8 threads with Hyper-Threading) E3-1265L v2s, which run at 2.5 GHz. These hosts are super quiet too… no louder than a desktop.
They each have an 8GB Micro SD card which is perfect for running ESXi.
Each host was already maxed out at 16GB of RAM; not a lot, but certainly an upgrade from running things in Fusion on my laptop!
The only problem is that the previous owner used an iSCSI/NFS NAS (a Synology, I believe), so the hosts came with no internal drives, which meant I had no storage. Each host has 4 drive bays, so I purchased (4) Samsung 850 EVO 250GB SSDs. I put (2) SSDs in each host and set them up in a RAID 0 configuration. I know, I know… ZERO REDUNDANCY. But hey, I ended up with ~500GB of space on each host (more than enough with thin provisioning) and the performance is excellent. It’s a home lab after all, so it shouldn’t be too catastrophic if a drive fails. My strategy is that someday I will purchase a NAS device and populate it with these already-purchased SSDs, which would give me shared storage and redundancy.
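Since the trade-off here is all capacity versus all redundancy, here is a quick back-of-the-envelope sketch of what two SSDs per host in RAID 0 buys (and risks). The per-drive annual failure rate used below is an illustrative assumption for the math, not a Samsung spec:

```python
# Back-of-the-envelope math for (2) 250GB SSDs per host.
# The 2% per-drive annual failure rate (AFR) is an assumed
# illustrative figure, not a manufacturer number.

def raid_capacity_gb(drives: int, size_gb: int, level: int) -> int:
    """Usable capacity for the two simple RAID levels compared here."""
    if level == 0:       # striping: full capacity, zero redundancy
        return drives * size_gb
    if level == 1:       # mirroring: half the raw capacity
        return drives * size_gb // 2
    raise ValueError("unsupported RAID level")

def raid0_afr(drives: int, drive_afr: float) -> float:
    """RAID 0 loses everything if ANY drive fails (independent failures)."""
    return 1 - (1 - drive_afr) ** drives

print(raid_capacity_gb(2, 250, 0))    # 500 GB usable per host (RAID 0)
print(raid_capacity_gb(2, 250, 1))    # 250 GB if mirrored instead
print(round(raid0_afr(2, 0.02), 4))   # array AFR roughly doubles: 0.0396
```

So striping doubles the usable space but also roughly doubles the odds of losing the datastore in a given year – an acceptable bet for a home lab, much less so for anything holding real data.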
So what’s missing? The network of course! I initially started looking for switches with both 1GbE and 10GbE ports (future-proofing). But since my hosts do not have 10GbE NICs and the costs were pretty steep, I ended up purchasing a 24-port gigabit EdgeSwitch Lite from Ubiquiti. I was so impressed with Ubiquiti that I also purchased an EdgeRouter Lite and a UniFi AP AC PRO access point to go with it. The Ubiquiti products offer enterprise capabilities at a fraction of the price!
So that is a high-level overview of the hardware components of my new home lab. I can already tell that the lack of memory is going to be a constraint, but hey, it’s a good start! Keep an eye out for Part II of this blog series, where I dig further into the network and vSphere configuration.
Like this post, and I’m totally with you on Ubiquiti products. I got an EdgeMax Lite years ago and have slowly moved to more of their products.
Right!?! Their stuff is amazing! Next phase is to get NSX up and running so I can stretch a layer 2 network across a WAN to a colleague’s lab…
Great post! I’ll definitely follow your journey in hopes of collecting ideas to build my own lab (without taking out a 2nd mortgage). Looking forward to Part Deux.