Deploying NSX part 2 – Cluster deployment, Host preparation and Logical Switch setup

Now that the NSX Manager appliance is installed and registered with vCenter, it's time to deploy the controller cluster and prepare the hosts. But before we do this, we need to define IP Pools for the controller cluster and the hosts to use.

Head over to Groups and Tags within the Networking and Security dashboard, then IP Pools, and click Add.

Enter the details as appropriate to your environment.

Repeat the above for a pool for the hosts you're running NSX on. This will be used by the VTEP vmkernel (vmk) interface that each host uses to perform VXLAN encapsulation and decapsulation.
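If you'd rather script the pools than click through the UI, they can also be created via the NSX Manager REST API. This is just a rough sketch against the NSX-V /api/2.0/services/ipam/pools endpoint, with made-up addressing and a made-up NSX Manager FQDN; double-check the element names against the API guide for your version.

# create an IP pool for the VTEPs (example addressing and hostname)
curl -k -u 'admin:PASSWORD' -H 'Content-Type: application/xml' \
  -X POST 'https://nsx-manager.lab.local/api/2.0/services/ipam/pools/scope/globalroot-0' \
  -d '<ipamAddressPool>
        <name>VTEP-Pool</name>
        <prefixLength>24</prefixLength>
        <gateway>172.16.254.1</gateway>
        <ipRanges>
          <ipRangeDto>
            <startAddress>172.16.254.10</startAddress>
            <endAddress>172.16.254.50</endAddress>
          </ipRangeDto>
        </ipRanges>
      </ipamAddressPool>'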

Once the IP pools have been created, the next stage is to deploy the controller cluster. Head to Installation and Upgrade, then NSX Controller Nodes.

Now choose add.
You must choose a strong password conforming to the following:
The password must be between 12 and 255 characters long. It must contain at least one uppercase character, one lowercase character, one number and at least one special character. It must not contain the username (admin) as a substring, and no character may repeat three or more times consecutively.
Give the controller a name and ensure it is set to the appropriate Port Group. You can see the IP Pool I created earlier.

The controller will take about 5 minutes to deploy, depending on your environment. Repeat the process for the two additional controllers.
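Before moving on, it's worth checking that the cluster has actually formed. Assuming you can SSH to the controllers with the admin account, log on to any one of them and check the cluster from its CLI:

# any one of the three controllers will do
ssh admin@<controller-ip>
show control-cluster status

All three nodes should report the join as complete and show themselves as connected to the cluster majority.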

Once all three controllers have deployed, head to Host Preparation.

Choose Install NSX and then Yes at the confirmation dialogue.

If you receive a license error, go to vSphere licensing, add the NSX license, and then try again. The install will take a few minutes.
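If you want to confirm the preparation actually landed on a host, the NSX kernel modules are delivered as VIBs and can be checked from an SSH session on the ESXi host. The VIB names vary by version (older releases ship esx-vsip and esx-vxlan, newer ones consolidate them into a single esx-nsxv VIB), so treat the pattern below as a rough guide:

# check the NSX VIBs are installed on the host
esxcli software vib list | grep -E 'esx-vsip|esx-vxlan|esx-nsxv'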


Once the process finishes, configure VXLAN.
The switch itself trunks from my underlay switch, so no VLAN is required. I'm choosing an MTU of 9000 as my switch supports jumbo frames; if you are using 10 Gbit and want line-rate performance between logical networks, this is important. As with the controller cluster, you should have an IP pool ready.
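Once VXLAN is configured, each prepared host gets a VTEP vmkernel interface on a dedicated vxlan TCP/IP stack, and it's worth proving jumbo frames work end to end before going any further. Something along these lines from an ESXi SSH session should do it; the 8972-byte payload allows for the IP and ICMP headers within a 9000-byte frame, and the target address here is just an example of another host's VTEP:

# list the VTEP vmkernel interfaces on the vxlan netstack
esxcli network ip interface list --netstack=vxlan

# ping another host's VTEP (example address) with don't-fragment set and a jumbo payload
vmkping ++netstack=vxlan -d -s 8972 172.16.254.11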

Next we need to create a transport zone. A transport zone governs which hosts can talk to one another over their VTEPs. A single host can be part of multiple transport zones. For my homelab, I'm initially going to have a single transport zone that every host is part of.

Go to Installation and Upgrade, Logical Network Settings and Transport Zones, then click Add.

I've called mine Global-TZ so I know exactly what it is doing from the description.
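If you want to sanity check it programmatically (or are scripting the build), transport zones can be pulled back from the NSX Manager API as well; assuming the NSX-V /api/2.0/vdn/scopes endpoint and the same example FQDN as before, the new Global-TZ should show up in the output of:

# list transport zones (nsx-manager.lab.local is an example FQDN)
curl -k -u 'admin:PASSWORD' 'https://nsx-manager.lab.local/api/2.0/vdn/scopes'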

Once the TZ has been created, we need to allocate some segment IDs. Think of a segment ID as a VLAN for a logical switch. The VTEP vmk uses the segment ID (carried as the VNI in the VXLAN header) to work out which logical switch to deliver traffic to when it receives it from the underlay network.

Under Installation and Upgrade, Logical Network Settings, select Edit next to Segment IDs.

I’m never going to use 1000 logical switches but I like round numbers!

Next, head to Logical Switches and click Add.

In my homelab I'm going to create the web tier first. Note that I've added the logical switch to a transport zone. For the sake of simplicity, use Unicast replication, as this requires no multicast configuration on the physical network for NSX to communicate.
Here we can see our new logical switch. On the Logical Switch view, select the new logical switch and then Add VM.

I already have a web VM built. Press Next, select the network adaptor for the VM, and then complete the wizard.
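With a VM attached, the host should now have an active VXLAN network for that segment ID, which you can check from the host itself. Assuming the vxlan esxcli namespace that the NSX VIBs add, something like this lists the VNIs (substitute your own distributed switch name):

# list active VNIs on this host (Lab-DVS is an example dvSwitch name)
esxcli network vswitch dvs vmware vxlan network list --vds-name Lab-DVS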

I only have two VLANs at home: the native VLAN, which my PC, phone, etc. connect to, and VLAN 2, which is used for my homelab. One of the major advantages of NSX is that you can route between two subnets without having to use a router or layer 3 switch. Unless you have bags of money, these devices are extremely expensive at 10 Gbit, and all routed traffic hairpins in and out of your homelab environment, which is not ideal. I'm going to use a couple of 172.16.x.x subnets for the three tiers amongst other things, so if I don't have these subnets defined at the physical level, how do they communicate? The answer is an NSX Distributed Logical Router (DLR) and some basic routing.

I use a UniFi USG firewall at home, and since I haven’t looked into setting up OSPF on it, I’m just going to keep it simple and use some static routes.

The first route is so I can log into my VDSL modem and look at stats; the others are for my NSX environment. You can see the next hop is 10.0.0.253, which is the uplink interface of the DLR I am about to deploy. I'm not going to show that the Photon web VM can't talk to the network right now; it should be fairly self explanatory, as it's completely isolated.
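For anyone else on a USG (or anything EdgeOS-based) who prefers the CLI to the controller UI, the routes look roughly like this. Only the first subnet is my actual web tier; the other two are placeholders for the app and DB tiers, and bear in mind that changes made directly on a USG get wiped the next time the controller provisions it, so they really belong in config.gateway.json:

configure
set protocols static route 172.16.0.0/24 next-hop 10.0.0.253
set protocols static route 172.16.1.0/24 next-hop 10.0.0.253
set protocols static route 172.16.2.0/24 next-hop 10.0.0.253
commit
save
exit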

Head to Networking and Security, then NSX Edges. Choose Add, and then select Distributed Logical Router. Give the DLR a suitable name and press Next. Since we're only using static routing and no other Edge services, we do not need a Control VM.

Give the new VM a password. I like to enable SSH, and I also drop the logging level down to Emergency only. It'll want a complex password, just as the controller cluster does. Press Next and go to step 4, Interfaces.

We need two interfaces for the time being: one Uplink (to the Management port group, for external traffic) and one Internal, connected to the new logical switch that has been created. Firstly, here is my uplink; you can see the .253 address that my UniFi USG uses as its next hop. Remember to use 9000 as the MTU if you're trying to reach line-rate performance.

Press OK when it's configured correctly and then do the same for the Internal interface. We already know what IP range we want, as it's defined on my UniFi USG. I like to use the last usable address for a gateway, but in reality it's not important at all.

Once both interfaces are configured, the DLR will need a default gateway to send traffic to for subnets which aren't directly connected. That would be my UniFi USG; this will allow the VMs to access networks not defined in NSX, such as the Internet.

Once done, review and then deploy the DLR. If everything is working fine, you should be able to ping the DLR's internal interface, which I can do from my PC sitting on the native VLAN.

PS C:\Users\Chris> ping 172.16.0.254

Pinging 172.16.0.254 with 32 bytes of data:
Reply from 172.16.0.254: bytes=32 time=1ms TTL=63
Reply from 172.16.0.254: bytes=32 time<1ms TTL=63
Reply from 172.16.0.254: bytes=32 time<1ms TTL=63
Reply from 172.16.0.254: bytes=32 time<1ms TTL=63

Ping statistics for 172.16.0.254:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 1ms, Average = 0ms
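
You can also confirm what a host's copy of the DLR thinks its routes are; the NSX VIBs add a net-vdr utility to every prepared ESXi host. The instance name below is just an example, so list the instances first and use whatever yours is called:

# list the DLR instances on this host
net-vdr --instance -l

# show the route table for a given instance (example name)
net-vdr --route -l default+edge-1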

The web VM didn't have an IP address set, and without anything acting as a DHCP server, I need to do that manually. Rename the file 99-dhcp-en.network to 99-static-en.network in /etc/systemd/network/ and set the addressing appropriately.
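For reference, the renamed file ends up looking something like this; the VM address and DNS server are just examples from my lab, and I'm assuming the NIC shows up as eth0:

# example static config - adjust Name, Address, Gateway and DNS to suit
[Match]
Name=eth0

[Network]
Address=172.16.0.10/24
Gateway=172.16.0.254
DNS=10.0.0.1

A systemctl restart systemd-networkd will also apply it if you'd rather not reboot.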

Reboot the VM and voila, you should have full connectivity.

That’s it. Repeat the process for the application and database VMs, ensuring that you add new interfaces to the DLR so that they can all communicate with one another.