Migrating VMkernel adapters to logical switches through NSX-T N-vDS

In hyperconverged setups, servers usually have a very limited number of physical network interfaces. So when using your ESXi hypervisor hosts as NSX-T transport nodes, you often can’t use dedicated vmnic devices as VTEPs.
This post shows how you can use the same physical adapters for VTEP traffic and for VMkernel adapters (e.g. for vSAN or vMotion) by migrating them to an N-vDS switch while configuring the hosts for NSX-T.

The starting point in this example is a host with two network cards, one quad-port 10 GbE card and one dual-port 100 GbE card, resulting in six available ports. The first two are used by a vSphere Distributed Switch, which contains a port group for the management VMkernel adapter (vmk0). The next two ports are reserved for future use (e.g. iSCSI), so the last two ports are supposed to function as uplinks for our N-vDS. Both ports will be used as active uplinks with the teaming policy “LOADBALANCE_SRCID”.
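Before starting, you can check with PowerCLI which vmnic devices are still unused; a quick sketch, assuming the host is named esx01.lab.local:

# List the host's physical NICs with MAC address and link speed (Mbit/s)
Get-VMHostNetworkAdapter -VMHost "esx01.lab.local" -Physical |
    Select-Object Name, Mac, BitRatePerSec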

vSphere Client – Physical adapters before migration

To be able to migrate the vSAN and vMotion VMkernel adapters, they need to be created first.
If you are using PowerCLI, you can use this cmdlet:

New-VMHostNetworkAdapter 
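A minimal sketch using this cmdlet for both adapters, assuming a host named esx01.lab.local, a distributed switch vDS01 with temporary port groups pg-vsan and pg-vmotion, and placeholder IP addresses that need to be adapted to your design:

$esx = Get-VMHost -Name "esx01.lab.local"
$vds = Get-VDSwitch -Name "vDS01"

# vSAN VMkernel adapter, tagged for vSAN traffic
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vds -PortGroup "pg-vsan" `
    -IP "192.168.10.11" -SubnetMask "255.255.255.0" -VsanTrafficEnabled $true

# vMotion VMkernel adapter (on the default TCP/IP stack;
# see the esxcli sketch further below for the custom vMotion stack)
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vds -PortGroup "pg-vmotion" `
    -IP "192.168.20.11" -SubnetMask "255.255.255.0" -VMotionEnabled $true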

In the vSphere Client, open the Configure/VMkernel adapters view and click “Add Networking…”:

vSphere Client – Adding VMkernel adapters

As the port group is going to be replaced by a logical switch anyway, it does not matter which network is selected:

vSphere Client – Adding VMkernel adapters, Select target device

Set up the port properties depending on the adapter’s purpose:

vSphere Client – Adding VMkernel adapters, Port properties vSAN

Configure the IP address settings according to your design:

vSphere Client – Adding VMkernel adapters, IPv4 settings

Repeat the steps for the vMotion VMkernel adapter. Using the custom vMotion TCP/IP stack is recommended:

vSphere Client – Adding VMkernel adapters, Port properties vMotion
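Note that New-VMHostNetworkAdapter does not offer a parameter for the TCP/IP stack, so it cannot place the new vmk on the custom vMotion stack directly. One workaround is the esxcli interface exposed through Get-EsxCli; a sketch, assuming a standard switch port group named pg-vmotion (on a distributed switch the dvsname/dvportid arguments would be used instead of portgroupname):

$esxcli = Get-EsxCli -VMHost $esx -V2

# Create vmk2 on the dedicated 'vmotion' TCP/IP stack
$add = $esxcli.network.ip.interface.add.CreateArgs()
$add.interfacename = "vmk2"
$add.portgroupname = "pg-vmotion"
$add.netstack      = "vmotion"
$esxcli.network.ip.interface.add.Invoke($add)

# Assign a static IPv4 address to the new interface
$ip = $esxcli.network.ip.interface.ipv4.set.CreateArgs()
$ip.interfacename = "vmk2"
$ip.ipv4          = "192.168.20.11"
$ip.netmask       = "255.255.255.0"
$ip.type          = "static"
$esxcli.network.ip.interface.ipv4.set.Invoke($ip)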

Finally, our two additional adapters are created:

vSphere Client – VMkernel adapters before migration

In the NSX-T GUI, you can migrate VMkernel adapters to an N-vDS in three different ways, depending on how you configure your host transport nodes.
If the host is not part of a cluster that has a Transport Node Profile assigned, it can be configured manually as shown here:

NSX-T – Fabric/Nodes/Host Transport Nodes

After configuring details such as transport zones, the VMkernel migration can be set up by clicking “Add Mapping”:

NSX-T – Fabric/Nodes/Host Transport Nodes, Configure NSX

Add a mapping for each vmk-adapter:

NSX-T – Fabric/Nodes/Host Transport Nodes, Configure NSX – Add Network Mappings for Install

Select the logical switch that should provide connectivity for each vmk-adapter:

NSX-T – Fabric/Nodes/Host Transport Nodes, Configure NSX – Network Mappings for Install
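For automation, the same mappings can be set through the NSX-T REST API, where they appear as vmk_install_migration entries on the transport node’s host switch. A sketch, assuming basic authentication, a trusted NSX Manager certificate, and placeholder values for $nodeId, $cred and the logical switch IDs:

# Read the transport node, add the vmk mappings, and write it back
$uri  = "https://nsx.lab.local/api/v1/transport-nodes/$nodeId"
$node = Invoke-RestMethod -Uri $uri -Credential $cred

$mappings = @(
    @{ device_name = "vmk1"; destination_network = "<vSAN logical switch ID>" },
    @{ device_name = "vmk2"; destination_network = "<vMotion logical switch ID>" }
)
$node.host_switch_spec.host_switches[0] |
    Add-Member -NotePropertyName vmk_install_migration -NotePropertyValue $mappings -Force

Invoke-RestMethod -Uri $uri -Method Put -Credential $cred `
    -ContentType "application/json" -Body ($node | ConvertTo-Json -Depth 10)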

In the second case, a transport node is already configured for NSX-T, but no mappings have been added as shown above. Select the host transport node and click the “Migrate ESX VMkernel and Physical Adapters” entry in the “Actions” menu:

NSX-T – Fabric/Nodes/Host Transport Nodes, Migrate ESX VMkernel and Physical Adapters

The third way is to create a Transport Node Profile that contains “Network Mappings for Install” as shown above:

NSX-T – Fabric/Profiles/Transport Node Profiles

When the profile is attached to a cluster as shown below, any host added to that cluster in vSphere is automatically configured for NSX-T accordingly (including the vmk-adapter mappings):

NSX-T – Fabric/Nodes/Host Transport Nodes, Configure NSX for a cluster

A green checkmark next to the attached profile is shown for the cluster when NSX-T has finished configuring all hosts:

NSX-T – Fabric/Nodes/Host Transport Nodes, Transport Node Profile attached
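The configuration progress per host can also be polled via the API’s transport node state endpoint; a sketch with the same placeholder values as above:

# "success" indicates that NSX-T has finished configuring this host
$state = Invoke-RestMethod -Uri "https://nsx.lab.local/api/v1/transport-nodes/$nodeId/state" -Credential $cred
$state.state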

In the vSphere Client, you can verify whether the correct logical switches are used for the migrated VMkernel adapters:

vSphere Client – VMkernel adapters after migration

The physical adapters used as uplinks for the N-vDS are also visible in the vSphere Client:

vSphere Client – Physical adapters after migration
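Both checks can be scripted with PowerCLI as well; a short sketch (for vmk-adapters backed by an N-vDS, the logical switch name should show up in the PortGroupName column):

# VMkernel adapters and the networks they are now connected to
Get-VMHostNetworkAdapter -VMHost $esx -VMKernel |
    Select-Object Name, IP, PortGroupName

# Physical NICs; the last two ports should now serve as N-vDS uplinks
Get-VMHostNetworkAdapter -VMHost $esx -Physical |
    Select-Object Name, Mac, BitRatePerSec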

If your hardware only has two physical interfaces, you can migrate the management VMkernel adapter (usually vmk0) to the N-vDS as well. The NSX-T product documentation shows this in a diagram and offers some additional considerations, e.g. that the DVS port group type should be set to Ephemeral when reverting back from an N-vDS.
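Such an ephemeral port group could be prepared with PowerCLI, for example (switch and port group names are placeholders):

# Ephemeral binding lets the vmk be attached directly through the host,
# which matters when the management interface itself is being moved back
New-VDPortgroup -VDSwitch (Get-VDSwitch -Name "vDS01") -Name "pg-mgmt-revert" -PortBinding Ephemeral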