Welcome to Part II of my Home Lab Introduction blog series, where I will focus on the installation and configuration of the VMware environment. If you missed Part I, where I reviewed the lab from a hardware perspective, it can be found here. I will not be covering the install of ESXi itself; however, I will mention that I leveraged the HP-specific build, which can be found in the my.vmware portal.
Let’s start by taking a look at the network configuration before we jump into the vSphere configuration. My hosts each have (2) 1GbE NICs plus (1) 1GbE NIC dedicated to iLO. Below is a diagram that outlines the network connections.
Since I only have two NICs on each host, I set them up in a port channel, or Link Aggregation Group (LAG), which requires the “Route based on IP hash” load-balancing policy on the vSwitch. Note that the vSphere Standard Switch only supports a static LAG; LACP requires the Distributed Switch.
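A minimal sketch of that vSwitch policy change from the ESXi shell — `vSwitch0`, `vmnic0`, and `vmnic1` are assumed names here; substitute your own vSwitch and uplinks:

```shell
# Set the load-balancing policy on the standard vSwitch to IP hash
# (required when the physical switch ports form a static LAG).
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 \
    --load-balancing=iphash \
    --active-uplinks=vmnic0,vmnic1

# Verify the policy took effect.
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
```

Both uplinks must be active (not standby) for IP hash to work correctly.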
My initial setup leverages the vSphere Standard Switch (vSS). In the near future I will be upgrading to the vSphere Distributed Switch (vDS), which is a requirement for NSX (routing/switching/edge services). I created three VLANs in addition to the default VLAN and trunked them down to my hosts. Since I am only leveraging internal storage at this time, I did not set up an IP storage VLAN for either iSCSI or NFS. I will most likely be leveraging vSAN when I upgrade my hosts… hopefully in the near future. I will also need to add a VLAN for the NSX VXLAN transport, but will tackle that in a future blog post.
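As an illustration, tagging those VLANs on the vSS amounts to creating one port group per VLAN. The port group names and VLAN IDs below are hypothetical placeholders, not my actual numbering:

```shell
# Create a port group on the standard vSwitch and tag it with a VLAN ID.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=Management
esxcli network vswitch standard portgroup set --portgroup-name=Management --vlan-id=10

# Repeat for each trunked VLAN, e.g. a VM network.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=VM-Network
esxcli network vswitch standard portgroup set --portgroup-name=VM-Network --vlan-id=20

# Confirm the port groups and their VLAN tags.
esxcli network vswitch standard portgroup list
```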
Now let’s focus a bit on storage. If you recall from Part I, I mentioned that the two local SSDs on each host were set up in a RAID 0 configuration to maximize space and performance. Again, no redundancy, but I like to live life on the edge!
Everything was going well to this point; however, once I powered on just a single VM (Windows Server 2016) I began experiencing terrible performance which appeared to be related to the disk. The average datastore latency was around 10ms (spiking up to 65ms) and the amount of time it took to RDP into the VM and launch Server Manager was 2 minutes 34 seconds!
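If you want to reproduce this kind of measurement yourself, esxtop on the host is the usual tool. The thresholds in the comments are general rules of thumb, not values specific to my environment:

```shell
# From an SSH session on the host, launch esxtop and switch to the disk views:
#   press 'd' for disk adapter stats, 'u' for disk device (datastore) stats.
# Key latency columns:
#   DAVG/cmd - average latency from the device/storage stack (ms)
#   KAVG/cmd - latency added by the VMkernel (ms)
#   GAVG/cmd - total latency seen by the guest (roughly DAVG + KAVG)
# Sustained DAVG in the tens of milliseconds generally points at a storage problem.
esxtop
```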
After doing some research I discovered that other HP ProLiant MicroServer Gen8 users were having similar issues after upgrading from ESXi 5.5 or 6.0 to 6.5. I decided to try downgrading the driver for the “HP Dynamic Smart Array B120i RAID Controller”. The scsi-hpvsa driver included in the HP image for ESXi 6.5 (17 Nov 2016 build) was the culprit; I replaced it with the older 5.5.0-88 release (9 Sep 2014, scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820.x86_64.vib) using the steps outlined below:
- Shutdown any VMs running on the host.
- Place host in maintenance mode.
- Copy the new driver (VIB file) to a local datastore.
- Enable SSH on the host and login.
- Browse to the datastore where the VIB was placed.
- Copy the VIB to the /var/log/vmware/ directory.
cp scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820.x86_64.vib /var/log/vmware/
- Remove the old driver.
esxcli software vib remove -n scsi-hpvsa -f
- Install the new driver.
esxcli software vib install -v file:scsi-hpvsa-5.5.0-88OEM.550.0.0.1331820.x86_64.vib --force --no-sig-check --maintenance-mode
- Reboot the host.
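After the reboot, it’s worth confirming that the downgraded driver is the one actually installed — a quick check from the same SSH session:

```shell
# List installed VIBs and filter for the hpvsa driver; the version column
# should now show 5.5.0-88 rather than the driver bundled with the 6.5 image.
esxcli software vib list | grep hpvsa
```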
I also recorded a video (here) if you want to watch me walk through the process.
Once I changed the driver the latency dropped down to almost nothing and the same RDP login/Server Manager launch test only took 17 seconds!
Once the storage issue was resolved, I built out the remainder of the environment (11 VMs total). I resized the Software-Defined Data Center (SDDC) components to be a bit smaller than their defaults… definitely NOT BEST PRACTICE in a production environment!
- Domain Controller – (2) vCPUs & (2) GB of RAM
- Platform Services Controller (PSC) – (2) vCPUs & (2) GB of RAM
- vCenter Server Appliance (vCSA) – (2) vCPUs & (4) GB of RAM
- NSX Manager – (2) vCPUs & (4) GB of RAM
- vRealize Log Insight (vRLI) – (2) vCPUs & (4) GB of RAM
- vRealize Operations Manager (vROPs) – (2) vCPUs & (6) GB of RAM
- vRealize Orchestrator (vRO) – (1) vCPU & (2) GB of RAM
- vRealize Identity Manager (vIDM) – (2) vCPUs & (2) GB of RAM
- View Connection Server – (2) vCPUs & (2) GB of RAM
- Unified Access Gateway (UAG) – (1) vCPU & (1) GB of RAM
- Remote Desktop Session Host (RDSH) – (2) vCPUs & (2) GB of RAM
This configuration provides terrific performance. Next, I will be diving deeper into all the specific components of the SDDC.