Home Lab Introduction – Part II – Install/Config


Welcome to Part II of my Home Lab Introduction blog series, where I will focus on the installation and configuration of the VMware environment.  If you missed Part I, where I reviewed the lab from a hardware perspective, it can be found here.  I will not be covering the install of ESXi itself; however, I will mention that I leveraged the HP-specific build, which can be located in the my.vmware portal.

ESXi Custom ISOs
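
As a side note, if you ever need to confirm which build a host is actually running, you can query the installed image profile from the ESXi shell.  A minimal sketch (the profile name shown will vary with the image you used):

  # Display the image profile the host was installed from (e.g., the HP custom profile)
  esxcli software profile get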

Network

Let’s start by taking a look at the network configuration before we jump into the vSphere configuration.  My hosts each have (2) 1GbE NICs plus (1) dedicated 1GbE NIC for iLO.  Below is a diagram that outlines the network connections.

Home Lab Network Connections

Since I only have two NICs on each host, I set them up in a port channel, or Link Aggregation Group (LAG), which requires the “Route based on IP hash” load balancing algorithm on the vSwitch.

LAGs

vSwitch Load Balancing
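
If you prefer the command line, the same policy can be applied from the ESXi shell.  A minimal sketch, assuming the switch is named vSwitch0 (adjust for your environment).  Note that the vSS only supports a static LAG, so LACP is off the table until the vDS upgrade:

  # Set "Route based on IP hash" load balancing on vSwitch0 (required for a static LAG)
  esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash
  # Verify the active policy
  esxcli network vswitch standard policy failover get -v vSwitch0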

My initial setup leverages the vSphere Standard Switch (vSS).  In the near future I will be upgrading to the vSphere Distributed Switch (vDS), which is a requirement for NSX (routing/switching/edge services).  I set up three additional VLANs beyond the default VLAN and trunked them down to my hosts.  Since I am only leveraging internal storage at this time, I did not set up an IP storage VLAN for either iSCSI or NFS.  I will most likely be leveraging vSAN when I upgrade my hosts… hopefully in the near future.  I will also need to add a VLAN for the NSX VXLAN transport, but will tackle this in a future blog post.

VLANs

vSwitch Topology
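
The tagged port groups can also be created from the ESXi shell.  A quick sketch with a hypothetical port group name and VLAN ID (my actual names and IDs differ):

  # Create a port group on vSwitch0 and tag it with VLAN 20
  esxcli network vswitch standard portgroup add -p VM-Network-20 -v vSwitch0
  esxcli network vswitch standard portgroup set -p VM-Network-20 --vlan-id 20
  # Confirm the port groups and their VLAN assignments
  esxcli network vswitch standard portgroup list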

Storage

Now let’s focus a bit on storage.  If you recall from Part I, I mentioned that the two local SSDs on each host were set up in a RAID 0 configuration to maximize capacity and performance.  Again, no redundancy, but I like to live life on the edge!

I also like to live dangerously

Logical Volume

Everything was going well up to this point; however, once I powered on just a single VM (Windows Server 2016) I began experiencing terrible performance that appeared to be disk-related.  The average datastore latency was around 10ms (spiking up to 65ms), and the amount of time it took to RDP into the VM and launch Server Manager was 2 minutes 34 seconds!

Disk Latency
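
If you want to watch the latency in real time rather than through the performance charts, esxtop is handy.  A rough sketch from an SSH session (the output path below is just an example):

  # Interactive: press 'u' for the disk device view and watch
  # DAVG/cmd (device latency) and GAVG/cmd (guest-observed latency)
  esxtop
  # Or capture samples to a CSV in batch mode for offline review
  esxtop -b -d 10 -n 2 > /tmp/esxtop-latency.csv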

After doing some research I discovered that other HP ProLiant MicroServer Gen8 users were having similar issues after upgrading from ESXi 5.5 or 6.0 to 6.5.  I decided to try downgrading the driver for the “HP Dynamic Smart Array B120i RAID Controller”.  The driver included in the HP image for ESXi 6.5 was 5.5.0.102-1 (17 Nov 2016) (scsi-hpvsa-5.5.0.102-1OEM.550.0.0.1331820.x86_64.vib).  I replaced this version with 5.5.0-88 (9 Sep 2014) (scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820.x86_64.vib) using the steps outlined below:

  1. Shut down any VMs running on the host.
  2. Place host in maintenance mode.
  3. Copy the new driver (VIB file) to a local datastore.
  4. Enable SSH on the host and log in.
  5. Browse to the datastore where the VIB was placed.
  6. Copy the VIB to the /var/log/vmware/ directory.
    • cp scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820.x86_64.vib /var/log/vmware/
  7. Remove the old driver.
    • esxcli software vib remove -n scsi-hpvsa -f
  8. Install the new driver.
    • esxcli software vib install -v file:/var/log/vmware/scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820.x86_64.vib --force --no-sig-check --maintenance-mode
  9. Reboot the host.
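
Before and after the swap, it is worth confirming which scsi-hpvsa version is actually installed; a quick check from the same SSH session:

  # Show the currently installed scsi-hpvsa driver and its version
  esxcli software vib list | grep scsi-hpvsa
  # Or query the VIB directly for full details
  esxcli software vib get -n scsi-hpvsa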

I also recorded a video (here) if you want to watch me walk through the process.

Once I changed the driver, the latency dropped to almost nothing, and the same RDP login/Server Manager launch test took only 17 seconds!

No Disk Latency

VM Configuration

Once the storage issue was resolved, I built out the remainder of the environment (11 VMs total).  I resized the Software-Defined Data Center (SDDC) components to be a little bit smaller than their defaults… definitely NOT BEST PRACTICE in a production environment!

  • Domain Controller – (2) vCPUs & (2) GB of RAM
  • Platform Services Controller (PSC) – (2) vCPUs & (2) GB of RAM
  • vCenter Server Appliance (vCSA) – (2) vCPUs & (4) GB of RAM
  • NSX Manager – (2) vCPUs & (4) GB of RAM
  • vRealize Log Insight (vRLI) – (2) vCPUs & (4) GB of RAM
  • vRealize Operations Manager (vROps) – (2) vCPUs & (6) GB of RAM
  • vRealize Orchestrator (vRO) – (1) vCPU & (2) GB of RAM
  • VMware Identity Manager (vIDM) – (2) vCPUs & (2) GB of RAM
  • View Connection Server – (2) vCPUs & (2) GB of RAM
  • Unified Access Gateway (UAG) – (1) vCPU & (1) GB of RAM
  • Remote Desktop Session Host (RDSH) – (2) vCPUs & (2) GB of RAM
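
For reference, that sizing adds up to 20 vCPUs and 31 GB of RAM across the 11 VMs, so it is worth sanity-checking the totals against each host’s physical capacity.  A quick way to pull those numbers from the ESXi shell:

  # Report the host's physical memory
  esxcli hardware memory get
  # Report CPU packages, cores, and threads
  esxcli hardware cpu global get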

This configuration provides terrific performance.  Next, I will be diving deeper into all the specific components of the SDDC.
