HCX 4.10 has just been released and with it comes a whole host of excellent new features. You can read the release notes here: https://docs.vmware.com/en/VMware-HCX/4.10/rn/vmware-hcx-410-release-notes/index.html
In this post, I want to concentrate on a feature that will be welcome to customers who operate their own private datacentres and have no need for encryption. Historically, the traffic between fleet appliances (IX, NE, and the Sentinel) was always encrypted. This was by design, as we had customers migrating from on-premises to cloud environments over the public internet. However, if you are migrating within your own datacentre, or over a private circuit such as AWS Direct Connect that you already consider secure, you may want to disable this encryption, as it adds overhead in several ways:
- It increases CPU usage on the appliances
- It reduces the available MTU (by 28 bytes, if you're interested; there's a quick calculation after this list)
- It decreases Network Extension throughput
- It decreases migration throughput (although not as severely as Network Extension traffic)
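To put that 28-byte figure in context, here is a rough back-of-the-envelope calculation of what it costs per packet on a 1500-byte MTU. It assumes standard 20-byte IPv4 and TCP headers on the inner packet and ignores any other encapsulation overhead the tunnel adds, so treat it as an illustration rather than an exact accounting:

```python
# Rough effect of the extra 28 bytes of per-packet encryption overhead.
# Assumes standard 20-byte IPv4 + 20-byte TCP inner headers and ignores
# HCX's own (non-encryption) encapsulation overhead.

uplink_mtu = 1500            # bytes, as used in the lab further down
encryption_overhead = 28     # bytes, the figure quoted above
ip_tcp_headers = 20 + 20     # inner IPv4 + TCP headers

payload_without = uplink_mtu - ip_tcp_headers
payload_with = uplink_mtu - encryption_overhead - ip_tcp_headers

print(f"TCP payload per packet without encryption overhead: {payload_without} bytes")
print(f"TCP payload per packet with encryption overhead:    {payload_with} bytes")
print(f"Per-packet payload lost to encryption: "
      f"{100 * encryption_overhead / payload_without:.1f}%")
```

The per-packet payload cost is clearly small (roughly 2%), which suggests most of the throughput difference you'll see in the results further down comes from the other overheads, such as the extra CPU work per packet.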
Having the option to disable encryption is a welcome addition for customers who know that the whole network path is secure. There are a couple of requirements which must be met before encryption can be disabled; the reason for this is to stop customers accidentally disabling encryption. All of this detail is in the excellent HCX User Guide, but here is a quick recap (with a small sketch of the precondition logic after the list):
- Application Path Resiliency must be enabled on the Service Mesh (this will require a redeploy)
- The backing Compute Profile on both HCX managers must have the ‘Is Secure’ option checked
- The Service Mesh must be edited and the checkbox for encrypted tunnels must be cleared
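If it helps to see the guardrails spelled out, the sketch below expresses the precondition logic in Python. The class and field names are hypothetical and this is not the HCX API; it simply mirrors the three requirements above:

```python
from dataclasses import dataclass

# Hypothetical model of the settings involved; these are NOT real HCX API
# objects, just a way to express the precondition logic described above.

@dataclass
class ComputeProfile:
    is_secure: bool                 # the 'Is Secure' checkbox on the Compute Profile

@dataclass
class ServiceMesh:
    app_path_resiliency: bool       # Application Path Resiliency enabled (needs a redeploy)
    tunnels_encrypted: bool         # the encrypted-tunnels checkbox on the Service Mesh

def can_disable_encryption(source_cp: ComputeProfile,
                           destination_cp: ComputeProfile,
                           mesh: ServiceMesh) -> bool:
    """Encryption may only be switched off when every guardrail is satisfied."""
    return (mesh.app_path_resiliency
            and source_cp.is_secure
            and destination_cp.is_secure)

# Example: both Compute Profiles marked secure and resiliency enabled -> allowed.
mesh = ServiceMesh(app_path_resiliency=True, tunnels_encrypted=True)
if can_disable_encryption(ComputeProfile(True), ComputeProfile(True), mesh):
    mesh.tunnels_encrypted = False   # equivalent to clearing the checkbox in the UI
print(mesh)
```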
In addition to being able to disable encryption, there is a new option called GRO (Generic Receive Offload). This increases NE traffic throughput by reassembling smaller packets into larger ones, and you can read more about it here. GRO helps whether encryption is enabled or not, so it is something that customers who must use encryption should look to enable.
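If you want an intuition for what GRO does, the toy sketch below coalesces consecutive segments of the same flow into larger ones, so the receive path handles fewer, bigger packets. It is a conceptual illustration only; real GRO lives in the NIC driver and kernel and checks far more than this:

```python
# Toy illustration of Generic Receive Offload: merge consecutive segments that
# belong to the same flow into one larger segment so the stack above processes
# fewer packets. Real GRO also checks sequence numbers, flags, timestamps, etc.

def gro_coalesce(segments, max_size=64 * 1024):
    """segments: list of (flow_id, payload_bytes) tuples in arrival order."""
    merged = []
    for flow_id, size in segments:
        if merged and merged[-1][0] == flow_id and merged[-1][1] + size <= max_size:
            merged[-1] = (flow_id, merged[-1][1] + size)   # grow the current super-packet
        else:
            merged.append((flow_id, size))                 # start a new one
    return merged

incoming = [("flow-A", 1460)] * 20 + [("flow-B", 1460)] * 20
print(f"packets before GRO: {len(incoming)}, after: {len(gro_coalesce(incoming))}")
# Fewer packets per byte delivered means less per-packet CPU work on the appliance.
```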
Although I had seen internal data with the test results for Network Extension traffic, I wanted to test it myself. Before I go into it, I just want to highlight that simulated tests != real-world performance. Whilst you would definitely see an improvement, there are several external factors which can affect the performance of the product:
- End to end network underlay
- Number of VMs in migration waves
- Amount of VM churn, which is the VM memory and/or disk change rate
- Host CPU/Memory/Storage capabilities
- HCX configuration
The lab I used in the test:
- Source host: Intel NUC Extreme (NUC11BTMi7), 64 GB RAM, 25 GbE pNIC
- Destination host: Custom-built Xeon W-1290, 128 GB RAM, 25 GbE pNIC
- iSCSI storage backed by SATA SSDs
- iPerf3 on Ubuntu 22.04 LTS, using 1, 8, 16, 24, and 32 threads (a sketch of the test sweep appears below)
- Uplink MTU set to 1500 bytes – I see this as the most common setting in deployments
- TCP Flow Conditioning enabled
- vSphere 8U3
Effectively, 10th- and 11th-generation Intel CPUs with a 3.2 GHz base clock and 25 Gbps of networking available.
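For anyone wanting to repeat the exercise, below is a minimal sketch of how such an iPerf3 sweep can be scripted. The server address is a placeholder, and this is just one way of driving iPerf3 at each thread count:

```python
import json
import subprocess

# Minimal sketch of an iPerf3 sweep across parallel stream counts.
# Assumes an iperf3 server is already running on the far side of the
# extended network (the address below is a placeholder).

SERVER = "10.0.0.50"                 # placeholder: VM on the far side of the NE tunnel
STREAM_COUNTS = [1, 8, 16, 24, 32]   # the thread counts used in the table below

for streams in STREAM_COUNTS:
    # -c client mode, -P parallel streams, -t duration in seconds, -J JSON output
    result = subprocess.run(
        ["iperf3", "-c", SERVER, "-P", str(streams), "-t", "60", "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    gbps = report["end"]["sum_received"]["bits_per_second"] / 1e9
    print(f"{streams:>2} streams: {gbps:.1f} Gbps")
```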
Rather than show iPerf3 outputs and multiple screenshots, I thought I would just show a table of the results. As you can see, there are significant improvements to be had!
| Number of threads | Encryption enabled, GRO disabled | Encryption enabled, GRO enabled | Encryption disabled, GRO disabled | Encryption disabled, GRO enabled |
| --- | --- | --- | --- | --- |
| 1 | 1.6 Gbps | 2.8 Gbps | 3.6 Gbps | 7 Gbps |
| 8 | 4.4 Gbps | 7.2 Gbps | 7 Gbps | 14 Gbps |
| 16 | 5.5 Gbps | 8 Gbps | 7.2 Gbps | 14 Gbps |
| 24 | 6.2 Gbps | 8 Gbps | 7.2 Gbps | 14 Gbps |
| 32 | 6.6 Gbps | 8 Gbps | 7.2 Gbps | 14 Gbps |
In short, enabling GRO alone increased throughput from 1.6 Gbps to 2.8 Gbps for a single flow, and from 6.6 Gbps to 8 Gbps for multiple flows. With encryption disabled as well, this increases to 7 Gbps and 14 Gbps respectively. I want to reiterate that these are lab results, and what you achieve may vary.
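To make the relative gains explicit, here is a quick calculation of the speedups taken straight from the table above:

```python
# Speedups computed from the table above (throughput in Gbps).
baseline       = {"1 thread": 1.6, "32 threads": 6.6}   # encryption enabled, GRO disabled
gro_only       = {"1 thread": 2.8, "32 threads": 8.0}   # encryption enabled, GRO enabled
no_enc_and_gro = {"1 thread": 7.0, "32 threads": 14.0}  # encryption disabled, GRO enabled

for case in baseline:
    print(f"{case}: GRO alone {gro_only[case] / baseline[case]:.1f}x, "
          f"encryption off + GRO {no_enc_and_gro[case] / baseline[case]:.1f}x")
```

That works out to roughly a 4.4x gain for a single flow with both changes, and around double the throughput for a heavily parallel workload.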
I will be writing a new post on all of the extra features within 4.10 soon. I personally think 4.10 is a huge release.
Thanks for reading.