VMware VVols with HPE 3PAR StoreServ

On one of my more recent projects, I have been involved in helping a customer transition from traditional LUN-based VMFS storage on HPE 3PAR over to VMware Virtual Volumes (VVols). There are plenty of articles on the internet about what VVol-based storage is, so I won’t attempt to reinvent the wheel. However, one thing I have always found a little lacking is an explanation of how VVols differ between storage vendors and, to confuse matters further, between the firmware versions the storage arrays are running.

VVols work by the vCenter server calling APIs exposed by the storage array, known as the vSphere APIs for Storage Awareness (VASA). In simple terms, the vCenter server asks the storage system about its capabilities; from there we can build storage policies within vCenter and apply them to entire VMs or to individual virtual disks. Rather than a VM residing on a VMFS datastore on, say, LUN 14, the array creates a virtual LUN in the background for each storage object (data disk, swap, config and so on) and presents these virtual LUNs to the host(s).
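To make the capability-matching idea concrete, here is a minimal Python sketch. The CPG names and capability keys are hypothetical illustrations, not the real VASA API: the array advertises capabilities per CPG, and a storage policy matches whichever CPGs satisfy every rule in the policy.

```python
# Capabilities a 3PAR might advertise via VASA, keyed by CPG name.
# (Names and keys are illustrative, not actual VASA schema.)
advertised = {
    "SSD_r1": {"devtype": "SSD", "raid": "RAID1"},
    "FC_r5":  {"devtype": "FC",  "raid": "RAID5"},
}

def matching_cpgs(policy_rules, capabilities=advertised):
    """Return the CPGs whose advertised capabilities satisfy every policy rule."""
    return [
        cpg for cpg, caps in capabilities.items()
        if all(caps.get(key) == value for key, value in policy_rules.items())
    ]

print(matching_cpgs({"devtype": "SSD"}))                  # ['SSD_r1']
print(matching_cpgs({"devtype": "FC", "raid": "RAID5"}))  # ['FC_r5']
```

A policy asking for something no CPG advertises simply matches nothing, which is the situation where a VM ends up non-compliant.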

HPE have an excellent white paper on how to implement VVols on 3PAR located here. If the hyperlink fails in the future please tweet me and I will update it.

What I have found is that the next steps aren’t very clear, and that is the purpose of this post: great, VVols is working, now what do we do?

The way the 3PAR works is that you create a Common Provisioning Group (CPG) based on the class of storage you want to use. So, for example, you might have two CPGs for an array with a combination of flash (SSD) and Fast Class (FC) storage. These CPGs are carved up into RAID sets, which the array distributes over the physical disks in small data chunklets of 1 GB in size. You can create different RAID sets depending on your use cases, so you may have RAID 6 over 12 FC disks with two parity disks per set, or RAID 1 mirrored sets over the SSDs. These CPGs are created by the storage administrators to meet your particular application and environmental requirements.
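As a rough illustration of the chunklet arithmetic, here is a back-of-the-envelope Python sketch only: it ignores spare chunklets and metadata that the array also reserves. Usable capacity is approximately the raw chunklets multiplied by the data-to-total ratio of the RAID set.

```python
CHUNKLET_GB = 1  # 3PAR carves physical disks into 1 GB chunklets

def usable_gb(raw_gb, data_disks, parity_disks):
    """Approximate usable capacity for a given RAID set layout.
    Deliberately simplified: spares and metadata overhead are ignored."""
    chunklets = raw_gb // CHUNKLET_GB
    return chunklets * data_disks // (data_disks + parity_disks)

print(usable_gb(12_000, 10, 2))  # RAID 6 (10+2) over 12 x 1TB FC disks -> 10000
print(usable_gb(5_000, 1, 1))    # RAID 1 mirror over SSDs -> 2500
```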

With VMFS-based storage, you would create, as an example, 10x 8TB 3PAR virtual volumes (not to be confused with VMware Virtual Volumes) and present them over LUNs 1 through 10 to your hosts. Each virtual volume resides on a single CPG, so some may be on the SSDs and some on the FC disks. Another option is to turn on Adaptive Optimisation, where the array actively moves hot data onto the faster disks and cold data onto the slower disks. Some organisations want a finer-grained way of controlling this, so a VM which requires fast storage, such as a SQL database server, has its database disks on the RAID 1 SSDs while its OS disk resides on RAID 5 FC storage.

If you have been following the link above, you’ll have presented one single VVol datastore. If you don’t create any storage policies, the VMs on this datastore will be placed however the array decides to distribute the data. However, as mentioned above, we might want an element of control over how the data is stored on the disks.

As you can see from the screenshot above, all six of the 3PAR’s configured CPGs are listed. It might be tempting to stick with a single VVol datastore, create a policy for each of the CPGs on the 3PAR and go from there. However, depending on the version of the 3PAR firmware you’re running, you may hit issues, because you need to be on a particular 3PAR OS version before you can change a VM’s CPG on the fly. As an example, with an older 3PAR firmware, if you created a storage policy with FC RAID 5 and tried to migrate the VM to SSD RAID 1, nothing would happen apart from the VM becoming non-compliant with its assigned storage policy. The reason for this is that the 3PAR cannot move the data between the CPGs until it is upgraded.

What you have to do to overcome this is create a separate VVol datastore for each CPG on the 3PAR. Going back to the example of two CPGs (one SSD and one FC), you’d set the default storage policy on the VVol datastore intended to consume the SSDs to match that CPG, and the default policy on the FC datastore to match the FC CPG. There is another method, Storage vMotioning to VMFS and back to VVol, but since VVols are supposed to make managing storage in vSphere easier, this seemed a step sideways to me. Rather than reading endless 3PAR release notes to find the version required to overcome this, I reached out to HPE support, and they provided an excellent response. In short, to have the ability to change a CPG on the fly, you have to be running 3PAR OS 3.3.1 MU4 or above.

Once you are running this OS, you need only one VVol datastore: you can change the policies against VMs and/or their virtual hard disks, and the 3PAR itself will do the heavy lifting of moving the data without you having to perform a Storage vMotion.
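The decision above boils down to a simple version check. Here is a hedged Python sketch; the helper names are my own and the version strings are illustrative, not an HPE tool. It encodes the rule from HPE support that on-the-fly CPG changes need 3PAR OS 3.3.1 MU4 or above.

```python
def parse_os(version):
    """Turn a string like '3.3.1 MU4' into a comparable tuple (3, 3, 1, 4).
    A version with no MU suffix is treated as MU0."""
    release, _, mu = version.partition(" MU")
    return tuple(int(part) for part in release.split(".")) + (int(mu or 0),)

# Minimum 3PAR OS for changing a VM's CPG on the fly (per HPE support).
MIN_ON_THE_FLY = parse_os("3.3.1 MU4")

def cpg_change_in_place(current_os):
    """True if the array can move data between CPGs without a Storage vMotion."""
    return parse_os(current_os) >= MIN_ON_THE_FLY

print(cpg_change_in_place("3.3.1 MU2"))  # False: the VM just goes non-compliant
print(cpg_change_in_place("3.3.1 MU4"))  # True
```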

You may now be tempted to have a single VVol datastore in your environment; however, I personally still prefer having one or two regular VMFS datastores for things like VIBs, ISO files, Content Libraries and other administrative functions. Also, don’t forget that VVols come with their own limitations, such as no support for RDMs (at present) and no shared hard disks (vSphere 6.5 and earlier), so check the documentation for the particular versions you are running.

I personally think VVols are great and should be adopted where possible, as once fully set up they provide a much simpler, more streamlined process for managing storage. That said, they can create issues of their own. My advice is to speak to your storage vendor, ask what their particular array can offer, and decide from there.

As a footnote, this blog post is mainly focussed on 3PAR CPGs, and not other capabilities such as thin provisioning and deduplication settings, which you can change on the fly depending on the storage subsystem’s capabilities (for example, you can’t have deduplication on Fast Class disks). With that said, another great VVol capability is being able to change the 3PAR provisioning method on the fly; before VVols, this had to be set at the 3PAR virtual volume level, and a Storage vMotion was needed to change the setting. With VVols, it’s a click of a button.

I hope this post helps some people out; when I was looking into this, I found very little information.