In hyperconverged setups the servers usually have a very limited number of physical network interfaces. So when using your ESXi hypervisor hosts as NSX-T transport nodes you often can’t use dedicated vmnic devices as VTEPs.
This post shows how you can use the same physical adapters for VTEP traffic and for VMkernel adapters (e.g. for vSAN or vMotion) by migrating them to an N-vDS switch while configuring the hosts for NSX-T.
The starting point in this example is a host with two network cards, one quad-port 10 GbE card and one dual-port 100 GbE card, resulting in six available ports. The first two ports are used by a vSphere Distributed Switch, which contains a port group for the management VMkernel adapter (vmk0). The next two ports are reserved for future use (e.g. iSCSI), so the last two ports are supposed to function as uplinks for our N-vDS. Both ports will be used as active uplinks with the teaming policy “LOADBALANCE_SRCID”.
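In NSX-T this teaming policy is defined in an uplink profile that is later assigned to the N-vDS of the transport nodes. The following PowerShell sketch shows how such a profile could be created via the NSX-T Manager REST API; the manager address, credentials, uplink names and transport VLAN are assumptions you need to adapt to your environment.

# Sketch: create an uplink profile with two active uplinks and LOADBALANCE_SRCID teaming.
# $nsxManager and $cred are placeholders – use your NSX-T Manager FQDN and admin credentials.
# (With PowerShell 7 you may need to add -Authentication Basic and -SkipCertificateCheck.)
$nsxManager = "nsx.lab.local"
$cred       = Get-Credential

$uplinkProfile = @{
    resource_type  = "UplinkHostSwitchProfile"
    display_name   = "nvds-uplink-profile"
    transport_vlan = 0
    teaming        = @{
        policy      = "LOADBALANCE_SRCID"
        active_list = @(
            @{ uplink_name = "uplink-1"; uplink_type = "PNIC" },
            @{ uplink_name = "uplink-2"; uplink_type = "PNIC" }
        )
    }
}

Invoke-RestMethod -Method Post -Uri "https://$nsxManager/api/v1/host-switch-profiles" `
    -Credential $cred -ContentType "application/json" `
    -Body ($uplinkProfile | ConvertTo-Json -Depth 5)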
To be able to migrate the vSAN and vMotion VMkernel adapters, they need to be created first.
If you are using PowerCLI you can use this command:
New-VMHostNetworkAdapter
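A more complete sketch for both adapters could look like this; the host name, the temporary switch and port group names and the IP settings are placeholders for your environment. To my knowledge the cmdlet creates the adapter on the default TCP/IP stack, so moving the vMotion adapter to the custom vMotion stack (as recommended below) would still have to be done separately.

# Sketch: create the vSAN and vMotion VMkernel adapters with PowerCLI.
# Host, switch, port group and IP values are assumptions – adjust them to your design.
$vmhost = Get-VMHost -Name "esx01.lab.local"

# vSAN VMkernel adapter
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch "vSwitch0" -PortGroup "vSAN" `
    -IP "192.168.10.11" -SubnetMask "255.255.255.0" -VsanTrafficEnabled $true

# vMotion VMkernel adapter
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch "vSwitch0" -PortGroup "vMotion" `
    -IP "192.168.20.11" -SubnetMask "255.255.255.0" -VMotionEnabled $true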
In the vSphere Client open the Configure/VMkernel adapters view and click on “Add Networking…”:
As the port group is going to be replaced by a logical switch anyway it does not matter which network is selected:
Set up the port settings depending on its purpose:
Configure the IP address settings according to your design:
Repeat the steps for the vMotion VMkernel adapter. The use of the custom vMotion TCP/IP stack is recommended:
Finally our two additional adapters are created:
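If you prefer the command line, the result can also be verified with PowerCLI (the host name is a placeholder):

# Sketch: list the VMkernel adapters of the host and the services enabled on them.
Get-VMHostNetworkAdapter -VMHost "esx01.lab.local" -VMKernel |
    Select-Object Name, PortGroupName, IP, VsanTrafficEnabled, VMotionEnabled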
In the NSX-T GUI there are three different ways to migrate VMkernel adapters to the N-vDS, depending on how you configure your host transport nodes.
If the host is not part of a cluster which has a Transport Node Profile assigned, it can be configured manually as shown here:
After configuring details such as the transport zones, the VMkernel migration can be set up by clicking on “Add Mapping”:
Add a mapping for each vmk-adapter:
Select which logical switch should provide connectivity for each vmk-adapter:
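Behind the scenes these mappings end up in the vmk_install_migration list of the host switch section in the transport node configuration (created or updated via /api/v1/transport-nodes). The fragment below is only a sketch of that section; the switch name, vmnic names and logical switch IDs are placeholders:

# Sketch: the relevant part of host_switch_spec.host_switches for a transport node,
# expressed as a PowerShell hashtable. All names and IDs are assumptions.
$hostSwitch = @{
    host_switch_name      = "nvds-overlay"
    pnics                 = @(
        @{ device_name = "vmnic4"; uplink_name = "uplink-1" },
        @{ device_name = "vmnic5"; uplink_name = "uplink-2" }
    )
    vmk_install_migration = @(
        @{ device_name = "vmk1"; destination_network = "<ID of the vSAN logical switch>" },
        @{ device_name = "vmk2"; destination_network = "<ID of the vMotion logical switch>" }
    )
}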
In the second case a transport node is already configured for NSX-T, but the mappings shown above have not been added yet. Select the host transport node and click on the “Migrate ESX VMkernel and Physical Adapters” entry in the “Actions” menu:
The third way is to create a Transport Node Profile which contains “Network Mappings for Install” as shown above.
When the profile is attached to a cluster as shown below, any host added to that cluster in vSphere is automatically configured for NSX-T accordingly, including the vmk-adapter mappings:
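The attachment itself can also be scripted: a Transport Node Profile applied to a cluster is represented as a transport node collection in the NSX-T API. The sketch below reuses the manager address and credentials from before and contains placeholder IDs that you would look up first (e.g. via /api/v1/fabric/compute-collections and /api/v1/transport-node-profiles):

# Sketch: attach an existing Transport Node Profile to a vSphere cluster (compute collection).
# The two IDs are placeholders – look them up in your environment first.
$attach = @{
    resource_type             = "TransportNodeCollection"
    display_name              = "hci-cluster-tnc"
    compute_collection_id     = "<compute collection ID of the cluster>"
    transport_node_profile_id = "<transport node profile ID>"
}

Invoke-RestMethod -Method Post -Uri "https://$nsxManager/api/v1/transport-node-collections" `
    -Credential $cred -ContentType "application/json" `
    -Body ($attach | ConvertTo-Json)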
A green checkmark next to the attached profile is shown for the cluster when NSX-T has finished configuring all hosts:
In the vSphere client you can verify whether the correct logical switches are used for the migrated VMkernel adapters:
The physical adapters used as uplinks for the N-vDS are also visible in the vSphere Client:
If your hardware only has two physical interfaces you can migrate the management VMkernel adapter (usually vmk0) to the N-vDS as well. The NSX-T product documentation shows this in a diagram and offers some additional considerations, e.g. that the DVS port group type should be set to Ephemeral when reverting back from an N-vDS.