VMware vSAN™ is the storage component of VMware®’s hyper-converged infrastructure offering. It can be deployed (albeit with its own licensing) as part of a VMware vSphere® cluster. vSAN releases are generally tied to specific vSphere releases, so the vSphere 6.5 Update 1 release includes vSAN 6.6.1.
Testing VMware vSphere in a lab can be desirable for many reasons, such as training or development. However, it can be quite hardware-intensive, and adding vSAN complicates matters further. To properly test vSAN, we need multiple nodes and good networking (10GbE for all-flash). However, it’s possible to use VMware Workstation (or VMware Fusion® for the Apple fans) to test this in a nested environment, where a set of virtual machines running VMware vSphere is hosted on one physical system.
One of the key features in VMware Workstation 14 (and Fusion 10) is the addition of virtual machine hardware version 14. This is significant for this exercise, as one of the features of this version is the ability to add virtual NVMe drives to virtual machines. This is also useful as we don’t need to mess about with the host VM or guest vSphere configuration to spoof the software into thinking it has SSDs.
What’s on the Shopping List?
Well, you’re going to need a machine with a decent CPU (an Intel i7 or similar), 32GB of RAM and some fast disk (SSD). It’s worth remembering that you’re going to be standing up quite a bit if you squeeze this onto one machine. From my experience, you will need:
A Windows Domain Controller (for authentication, DNS, etc.). I keep this pretty lean, with 2GB RAM and a single vCPU
A VMware vCenter 6.5U1 appliance, trimmed back to 6GB RAM (Another feature of Workstation 14 is the ability to import the OVA for the appliance straight in and configure it, which saves much messing around with VMX files or having to nest the appliance in an ESXi host VM)
Three ESXi host VMs, all configured with the following:
- 6GB RAM
- 2 vCPU
- 15GB disk (for vSphere ESXi binaries)
- 2 NICs
- For vSAN, I added a 5GB NVMe drive as my cache layer and a pair of 20GB NVMe drives as my data tier
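The disk layout above can be expressed directly in each host VM’s .vmx file. A minimal sketch of the relevant entries, assuming hardware version 14 and an NVMe controller (the file names and adapter numbering here are illustrative, not taken from a real lab build):

```
virtualHW.version = "14"                 # hardware version 14 enables virtual NVMe
nvme0.present = "TRUE"                   # NVMe controller for the vSAN disks
nvme0:0.present = "TRUE"
nvme0:0.fileName = "vsan-cache.vmdk"     # 5GB cache-tier disk
nvme0:1.present = "TRUE"
nvme0:1.fileName = "vsan-data1.vmdk"     # 20GB capacity-tier disk
nvme0:2.present = "TRUE"
nvme0:2.fileName = "vsan-data2.vmdk"     # 20GB capacity-tier disk
```

In practice the Workstation UI writes these entries for you when you add NVMe disks; the snippet is just to show what ends up on disk.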
Using this toolkit, I can build out my own vSphere cluster. In addition to this, I also use a Distributed Switch with a Management port group and a VM port group (more for tidiness than anything else). Each host needs a VMkernel port for management (adding vMotion to this also works for efficiency), and I added a separate one purely for vSAN.
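If you prefer the command line for the vSAN networking step, the VMkernel tagging can also be done per host with esxcli. A sketch, assuming vmk1 is the vSAN-dedicated VMkernel port created above:

```
# List VMkernel interfaces to confirm which one is for vSAN (vmk1 here is an assumption)
esxcli network ip interface list

# Tag vmk1 for vSAN traffic
esxcli vsan network ip add -i vmk1

# Verify the vSAN network configuration
esxcli vsan network list
```

Doing this on each host up front means the wizard’s network validation has nothing to complain about.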
Enabling vSAN
This is pretty simple – browse through the vSphere Web Client to the cluster, go to the Configure tab, select vSAN > General and hit Configure. This launches a wizard that takes you through the process of enabling vSAN.
Feature-wise, to keep the load down I didn’t enable deduplication and compression or encryption. As long as vSAN is enabled on a VMkernel port on each host, the network validation should pass.
The disk claiming is spot on: the NVMe disks are recognised correctly as flash devices and selected for the relevant tiers based on size.
This completes the configuration and you’ll have a working vSAN datastore.
Fixing Some Warnings…
There will be a few amber warnings that require attention.
For the simpler warnings, you’ll want to enable the Performance Service and update the vSAN HCL database. If you’re not connected to the internet, you can download the database and apply it manually using ‘Update from file’. The file can be downloaded from https://partnerweb.vmware.com/service/vsan/all.json
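For the offline case, the JSON can be fetched on any internet-connected machine and carried across to the lab, for example:

```
# Download the vSAN HCL database for offline import via 'Update from file'
curl -o all.json https://partnerweb.vmware.com/service/vsan/all.json
```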
The remaining amber warnings are trickier. The first is that the hardware isn’t on the vSAN HCL (hardly a surprise, given we’re running this in Workstation).
The second is the vSAN Build Recommendation Engine Health component. This requires an internet connection, and for the service to be logged into my.vmware.com with your credentials. For a disposable lab, this is unlikely to be desirable.
These warnings can be silenced via the Ruby vSphere Console (RVC). Via either SSH or the console, log on to the vCenter appliance as root and access the shell, then log on to RVC using vCenter credentials directed at the vCenter FQDN.
Mark a shortcut to the vSphere cluster with vSAN (the name given to mark becomes the ~vsan shortcut used below):
mark vsan (vcenter FQDN)/(datacenter object)/computers/(Cluster)
To silence the given service:
vsan.health.silent_health_check_configure -a (service) ~vsan
The following services will need attention:
- vumconfig – vSAN Build Recommendation Engine Health check
- controllerdiskmode – Controller Disk Group Mode
- controllerdriver – Controller Driver is VMware Certified
- controllerfirmware – Controller firmware is VMware certified
- controllerreleasesupport – Controller is VMware certified for ESXi release
- controlleronhcl – SCSI Controller is VMware certified
For example, to silence the vSAN Build Recommendation Engine Health check, pass vumconfig as the service. You can then use the command vsan.health.silent_health_check_status ~vsan to check the status; the -a switch can be used for specific entries.
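Putting it together, a full RVC session that silences all six checks might look like this (the FQDN, datacenter and cluster names are placeholders):

```
# Inside RVC on the vCenter appliance
mark vsan vcsa.lab.local/Lab-DC/computers/Lab-Cluster

# Silence each noisy check against the marked cluster
vsan.health.silent_health_check_configure -a vumconfig ~vsan
vsan.health.silent_health_check_configure -a controllerdiskmode ~vsan
vsan.health.silent_health_check_configure -a controllerdriver ~vsan
vsan.health.silent_health_check_configure -a controllerfirmware ~vsan
vsan.health.silent_health_check_configure -a controllerreleasesupport ~vsan
vsan.health.silent_health_check_configure -a controlleronhcl ~vsan

# Confirm which checks are now silenced
vsan.health.silent_health_check_status ~vsan
```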
Expanding the Cluster
If you want more nodes, you will need to add them as additional hosts. This allows you to test RAID-5 and RAID-6 erasure coding (which need four and six hosts respectively) and adds capacity and performance, but you’ll need more tin for this.
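When sizing the extra capacity disks, it helps to know the raw-to-usable overheads: RAID-1 with FTT=1 keeps two full replicas (2x), RAID-5 stripes three data components plus one parity (~1.33x), and RAID-6 stripes four data plus two parity (1.5x). A quick sketch of the arithmetic (the function name is illustrative, not part of vSAN’s tooling):

```python
# Raw capacity required for a given usable size under each vSAN policy.
# Overhead factors: RAID-1 FTT=1 = 2x, RAID-5 = 4/3x, RAID-6 = 1.5x.
def raw_capacity_needed(usable_gb, policy):
    overhead = {"raid1": 2.0, "raid5": 4 / 3, "raid6": 1.5}
    return usable_gb * overhead[policy]

for policy in ("raid1", "raid5", "raid6"):
    print(f"{policy}: {raw_capacity_needed(120, policy):.0f} GB raw "
          f"for 120 GB usable")
```

So the three-host lab above (two 20GB capacity disks per host, 120GB raw in total) yields roughly 60GB usable under the default RAID-1 policy, before overheads such as swap objects.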
To shut down the lab, you should simply:
- Check that all resync operations are complete
- Place the hosts in Maintenance Mode with ‘No data evacuation’ selected, without moving powered-off VMs
- Power off the hosts
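The maintenance-mode and power-off steps can also be scripted per host over SSH. A sketch, assuming shell access to each nested ESXi host:

```
# Enter maintenance mode without evacuating vSAN data (fine for a whole-lab shutdown)
esxcli system maintenanceMode set --enable true --vsanmode noAction

# Then power the host off
esxcli system shutdown poweroff --reason "lab shutdown"
```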
With respect to setting up a vSAN lab, it’s “job done” – go celebrate!
If you’re interested in exploring the ways in which you can modernise your datacentre but not sure where to start, please contact Xtravirt, and we’d be happy to use our wealth of knowledge and experience to assist you.