VMworld Europe 2019 – Day 1 recap

Pre VMworld VMUG Event

Russel O’Connor and the other nice guys from the VMUG Barcelona team once more organised an event on Monday morning, which I decided to start the day with. The event, themed “Kubernetes in the spotlight”, was again held in a conference room at the Hotel Porta Fira, which was crowded, probably thanks to some prominent presenters. Besides some engineers from OVHcloud (the main sponsor of the event), Scott Lowe and Cormac Hogan talked about Kubernetes (“K8s”) in general and its interaction with VMware products today, e.g. Cluster API used to deploy and manage K8s, either directly or under the hood by Tanzu Mission Control. Cormac explained how the Cloud Native Storage introduced with vSAN 6.7u3 can be used for stateful applications, which usually consume Persistent Volume Claims or NFS shares.

Pre VMworld VMUG Event – Kubernetes in the spotlight

Kicking off VMworld

Afterwards it was time to head over to the “Fira” and pick up my badge at one of the many registration desks. All in all everything was very well organised, down to small things like a coat and baggage drop area with a short queue most of the time, especially considering that nearly 13,000 people were attending.

VMworld 2019 Europe – Hall 8

First things first: the obligatory photograph at the oversized VMworld sign with a couple of mates, this time Jörn Rusch, Christoph Villnow and Niclas Sieveneck:

At this time the activities in the VMvillage started with lots of fun things to do and people to meet.

New to the VMvillage was the VMware Champions booth, which offered snacks, games, prizes and giveaways for people already participating in the program or for new “recruits”. The program is based on an app called Advocate Hub and offers challenges like taking part in the community, reading news articles, providing feedback or referring new members.
It was fun to chat with the guys at the booth and take part in the challenge to collect the most points during VMworld, with the winner receiving a longboard hand-signed by Pat Gelsinger. After I sat at the top of the leaderboard for a while, in the end the vCommunity’s own Andy won.

VMworld 2019 Europe – Champions booth social media wall

Most regular sessions ran from Tuesday to Thursday as Monday is TAM and partner day, but some interesting 3-4 hour workshops, like “Running Kubernetes on vSphere”, “vSAN operations best practices”, “VMware cloud on AWS” or “Operating the Ultimate Hybrid Cloud with VMware Cloud Foundation”, took place in parallel and, according to other attendees, were all packed.

Partner Forum – General Session

Held especially for VMware’s ecosystem of partners, the Partner Forum in the afternoon gave a glimpse of the company’s vision and strategy (explained more thoroughly in the keynotes over the next two days) and presented Partner Connect, the new program defining how the company will interact with its partners. It aims to bring the different parts of the current partner network, like resellers, solution providers or service providers, under one cloud-services-oriented umbrella. Participating partners are going to demonstrate their expertise either with Solution Competencies or Master Solution Competencies (requiring several certified employees and reference projects) in these focus areas:

  • Data Center Virtualization
  • Cloud Management and Automation
  • VMware Cloud on AWS
  • VMware PKS
  • Network Virtualization
  • Digital Workspace

Partner Forum – General Session – Introduction by Pat Gelsinger

Rubrik Party at Pacha

A nice way to end the first day was a party sponsored by Rubrik taking place at the famous nightclub Pacha, directly located on the beach:

Rubrik Party at Pacha – Dancefloor
Rubrik Party at Pacha – Terrace

VMworld Europe 2019 – Day 0 recap

VMUG leader gathering

My second VMworld experience started with the VMUG leader gathering on Sunday, 3rd November 2019. It is an informal gathering of various representatives of the VMUG program and leaders from all around the world. It was a great chance to get to know the VMUG President Steve and Brad from the VMUG leadership, as well as other leaders who have been in the business a bit longer than our newly created chapter “Rhein-Ruhr“.
The gathering was held in a small but very stylish bar with a comfy atmosphere. Big thanks for the team organizing it!

VMworld Europe 2019 – VMUG leader gathering

vRockstar Party

Another great event, organized by Patrick Redknap, Marco Broeken, Michael Letschin and a few others for the 8th time to kick off VMworld Europe. It gave everyone a great chance to do some networking while enjoying some drinks at the ‘Cabaret – The Barcelona EDITION’ club, this year’s venue. Cohesity, Comdivision, Kemp, Veeam, VMUG and Zerto were kind enough to sponsor everything.
In between, a small panel was held to introduce the sponsors and to discuss what to expect from VMworld and the time to come:

VMworld Europe 2019 – vRockstar Party panel

Getting ready for VMworld Europe 2019

vCommunity

Many people put a lot of effort into empowering people who are not active in the vCommunity or aren’t even aware of it. I am myself quite new to this network of people sharing an interest in virtualization and cloud technology and what is possible with it, but can already say the support both virtually (mostly on Twitter and Slack) and in person (e.g. at VMUGs and conferences) is amazing.
One of these people is Yadin Porter de Leon, who founded the Level Up Project. The project’s goal is to make it easier for “newbies” to join the vCommunity and make the most of this network, be it for learning, networking or advancing careers.
The Level Up Project’s contributors provided a guide with tons of information called the “vTrail Map”, which was distributed during last year’s VMworld conferences in the US.
This year a couple of volunteers, called ambassadors, were invited to bring the vTrail Map to VMworld 2019 Europe in Barcelona, which is about to start Monday. The ambassadors were instructed how to distribute the map in various areas of VMworld (recording online) so it reaches everybody.
It is also available online.

News & Resources

A good place to start reading about VMworld is the official blog page.
VMware Champions is an app which features gamification (collecting points) to distribute news and gather feedback. There are currently several “challenges” online to inform about VMworld.
For specific and up-to-date news follow the official Twitter account and the hashtags #VMworld2019, #VMworld or #VMworldEU.
At the end of next week there will be a collection of articles written by a couple of vExperts who were selected for a so-called blogger pass. A preview of this with articles from VMworld US 2019 can be found here.

Parties & Events

There are plenty of events happening before and during VMworld, but most of them are invite-only.
Here is an unofficial VMworld 2019 EMEA Parties and Events list giving an overview of everything that is happening.
Andreas Lesslhumer and Manfred Hofer already wrote about this extensively. (Big thank you!)

Swag

On top of the complimentary VMworld backpack (this year co-sponsored by Rubrik), which every visitor with a full pass receives after checking in, there are plenty of ways to get free promotional items. At the booths in the hall crawl you have good chances to get free stuff from sponsors and exhibitors of the VMware ecosystem if you engage in talks, attend mini presentations or enter raffles.
The VMworld blog also lists some more ways to win or obtain interesting giveaways, e.g. an Oculus Quest VR set.

Deploying a vRealize Network Insight 4.2 Collector/Proxy to receive NetFlow data from OPNsense routers

vRealize Network Insight [vRNI] supports receiving and processing flow information from a variety of network equipment from different vendors out of the box, but also offers the possibility to ingest NetFlow/IPFIX data from third-party devices, e.g. physical routers.

Assuming you already have your vRNI instance deployed, head to the Settings page of the vRNI web GUI and click on “Accounts and Data Sources” to add such data sources.
If not, you can deploy vRNI quickly using the vRealize Suite Lifecycle Manager as described in this older blog post. It shows an older version of vRNI (4.1), but the process is the same for 4.2.

vRealize Network Insight – Accounts and Data Sources

The button “Add Source” brings you to a list of all supported sources: (The option “Physical Flow Collector” is only available if you have an Enterprise license registered in vRNI)

vRealize Network Insight – Accounts and Data Sources, Add Data Source

The minimum deployment of vRNI has a platform VM, which you use to administer and use the tool, and a collector VM (formerly called proxy), which can be selected as the target for most data sources.
To be able to receive NetFlow data from a physical device, however, you need an additional dedicated collector VM. If one was not created earlier, the screen informs you that no collector VM is available:

vRealize Network Insight – Accounts and Data Sources, Add Physical Data Source, No Collector VM available

It does however offer you a button “Add Collector VM” to help create one.
When clicking the button a shared secret is displayed in a popup, which should be stored, as it is needed later on:

vRealize Network Insight – Accounts and Data Sources, Add Physical Data Source, Add Collector VM

Download the “vRealize Network Insight – Proxy OVA file” (7 GB) from my.vmware.com and either deploy it via command line (see further below) or the vSphere WebGUI:

vSphere Client – Deploy OVF Template

Enter the shared secret from before in the step “Customize template”:

vSphere Client – Deploy OVF Template, Customize template

An alternative to deploying the OVA via the WebGUI is VMware’s OVF Tool, which allows you to deploy virtual appliances from the command line of your operating system (Windows, Linux or macOS). The virtual appliances are distributed as file bundles, which usually contain the description (.ovf), the virtual disks (.vmdk in case of VMware environments) and a manifest (.mf) file containing hashes of the other files. For easier handling, a tar archive with the file extension .ova is created, containing these files.
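Since an .ova file is just a tar archive, you can list its contents before deploying it, for example from a PowerShell prompt. A minimal sketch; it assumes the tar utility is available in your PATH and uses the file name from the ovftool example further below:

# An .ova is a plain tar archive; listing it should show the .ovf descriptor,
# the .mf manifest and one or more .vmdk disk files
tar -tf .\VMware-vRealize-Network-Insight-4.2.0.1562947515-proxy.ova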

To use the OVF Tool first download the current version (as of writing this post 4.3.0 U2) from VMware {code} and install it.
Then you can deploy the OVA directly to your vCenter with the following command: (modify datastore name, VM folder, VM name, port group name, download path, credentials, data center and cluster names according to your environment and enter the shared secret from before in the placeholder xxxxxx)

/Applications/VMware\ OVF\ Tool/ovftool -dm=thin -ds="vSAN xyz" --vmFolder="Management VMs" --acceptAllEulas --allowAllExtraConfig --name=vrni-collector2 --deploymentOption=large --net:"VM Network"="vRack-DPortGroup-vRealize" --prop:Proxy_Shared_Secret=xxxxxx /home/user/Downloads/VMware-vRealize-Network-Insight-4.2.0.1562947515-proxy.ova vi://username:password@vcenter.rainpole.local/Datacenter/host/Cluster/

The prefix “/Applications/VMware\ OVF\ Tool” is only needed if you are running macOS and did not add the directory where the OVF Tool was installed to the $PATH environment variable.
Select one of the deployment options, depending on your expected system load:

Deployment Options:
                medium: vCPUs: 4, Memory: 12GB.
                large: vCPUs: 8, Memory: 16GB.
                extra_large: vCPUs: 8, Memory: 24GB.

After a while the deployment should succeed with the following messages:

Opening OVA source: VMware-vRealize-Network-Insight-4.2.0.1562947515-proxy.ova
The manifest validates
Opening VI target: vi://username@vcenter.rainpole.local/Datacenter/host/Cluster/
Deploying to VI: vi://username@vcenter.rainpole.local/Datacenter/host/Cluster/
Transfer Completed
Completed successfully 

If you forgot to supply the shared secret as an argument you will receive the following error upon trying to power up the VM:

vSphere Client – Collector VM power on failed

You can still enter the shared secret or, if you entered false information earlier, correct it in the vApp Options properties as shown below:

vSphere Client – vApp properties of Collector VM

Upon clicking the edit button this popup allows adjusting the value:

vSphere Client – vApp properties of Collector VM, set value

After powering it up the appliance needs to be initially configured via the VM console. Log in with the presented credentials (consoleuser / ark1nc0ns0l3) and enter “setup”:

vSphere Client – Collector VM setup start in VM console

Follow the wizard and enter the configuration options according to your environment:

vSphere Client – Collector VM setup finished in VM console

After finishing the configuration of the collector (formerly called proxy) you can select it from the drop-down list when adding a new physical NetFlow source at the “Accounts and Data Sources” page as shown at the beginning of the post. Don’t forget to give it a nickname: (e.g. the name of the collector VM or Netflow_collector)

vRealize Network Insight – Accounts and Data Sources, Add Physical Data Source, Collector VM available

Now you can send NetFlow information from physical sources to port 2055 of the collector VM’s IP address. NetFlow versions 5, 7, 9 and IPFIX are supported by vRNI, but keep in mind that version 5 does not support IPv6.

To test the deployment I used the free open-source firewall distribution OPNsense, based on FreeBSD.
As described in the OPNsense wiki, NetFlow destinations and capture details can be configured in the “Reporting” section:

OPNsense configuration, Reporting: NetFlow

After a while vRNI should have received some flows, visible in the “Accounts and Data Sources” page:

vRealize Network Insight – Accounts and Data Sources, Flow count

A quick test can be done with the following query suggested by Martijn Smit’s blog:

flow where Flow Type = 'Source is Physical' and Flow Type = 'Destination is Internet'

Further configuration of the NetFlow source or mapping in vRNI may be needed, e.g. regarding DNS or VLAN, which is both mentioned in Martijn Smit’s blog article.

Upgrading the VCSA via SSH

Two days ago the VMware vCenter Server Appliance 6.7 Update 2c patch (build 14070457) was released to resolve minor issues and update the Photon OS kernel to version 4.4.182, fixing a couple of security issues (release notes). So it was time to update a couple of VCSA appliances I had set up for a client. After verifying the backup schedule was still working as intended and taking a snapshot, I decided to start the upgrade via the CLI and not via the vCenter Server Appliance Management Interface (VAMI).
To be able to use the “software-packages” utility required for this, we first need to change the default shell of the root user back to the appliance shell. The SSH session usually looks like this while the default shell is still Bash:

Using username "root".
Pre-authentication banner message from server:
|
| VMware vCenter Server Appliance 6.7.0.31000
|
| Type: vCenter Server with an embedded Platform Services Controller
|
End of banner message from server
root@vcenter[ ~ ]#

Enter the following commands to make the change:

chsh -s /bin/appliancesh root
logout

After reconnecting the prompt should now look like this:

Using username "root".

Pre-authentication banner message from server:
|
| VMware vCenter Server Appliance 6.7.0.31000
|
| Type: vCenter Server with an embedded Platform Services Controller
|
End of banner message from server

Keyboard-interactive authentication prompts from server:

End of keyboard-interactive prompts from server

Connected to service

* List APIs: "help api list"
* List Plugins: "help pi list"
* Launch BASH: "shell"

Command>

Now connect the patch ISO to the VCSA VM (via PowerShell or the vSphere Client) and start the upgrade with these commands:

software-packages stage --iso
software-packages list --staged
software-packages install --staged
reboot
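If you prefer PowerCLI over the vSphere Client to attach the patch ISO mentioned above, a minimal sketch could look like this (vCenter name, VM name and the datastore path of the ISO are placeholders for your environment):

# Connect to the vCenter managing the VCSA VM (placeholder names)
Connect-VIServer -Server vcenter.rainpole.local
# Attach the downloaded patch ISO from a datastore to the appliance's CD drive
Get-VM -Name "vcenter" | Get-CDDrive |
    Set-CDDrive -IsoPath "[datastore1] ISO/VCSA-6.7U2c-patch.iso" -Connected:$true -Confirm:$false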

If everything works as intended, the output of the software-packages commands should look like this:

VMware vCenter Server Appliance upgrade process in SSH session

To speed up the process, or when the upgrade ISO is provided via a URL instead of local storage, replace the first command with one of these lines:

software-packages stage --iso --acceptEulas
software-packages stage --url --acceptEulas

Creating workload domains in VMware Cloud Foundation 3.7.2 with NSX-T and vSAN

In VMware Cloud Foundation (VCF) workloads are usually deployed in one or more dedicated virtual infrastructure (VI) workload domains. During the VCF deployment (as shown in my earlier posts) the management workload domain (MWLD) is created with a minimum of four hosts. The MWLD contains, among other components, the management vCenter and the SDDC manager.
For each VI workload domain (WLD) created using the SDDC manager a separate vCenter is deployed in the MWLD. The vCenters manage the WLD’s hosts and use vSphere Linked Mode. As only fifteen vCenters can be linked per the current configuration maximums, up to 14 VI WLDs are currently supported.
Before the SDDC manager can create a WLD, enough hosts (minimum three per WLD) need to be commissioned. Click on the button “Commission hosts” either in the Dashboard or the Inventory/Hosts view:

VMware Cloud Foundation – SDDC Manager, Commission hosts

The hosts need to be prepared similarly to the initial VCF deployment. This includes the ESXi version, hardware configuration and network settings (e.g. DNS resolution), as shown in the checklist below. In a later post I will provide some helpful PowerCLI snippets to accelerate the host preparation.
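Until then, here is a minimal PowerCLI sketch of such a preparation check, connecting directly to a host that is about to be commissioned (host name, credentials and NTP server are placeholders):

# Connect directly to the ESXi host to be commissioned (placeholder values)
Connect-VIServer -Server esx01.rainpole.local -User root -Password 'VMware123!'

# Check the DNS settings, which must allow resolution of the host's FQDN
Get-VMHost | Get-VMHostNetwork | Select-Object HostName, DomainName, DnsAddress

# Configure NTP and start the service so the host time is in sync before commissioning
Get-VMHost | Add-VMHostNtpServer -NtpServer ntp.rainpole.local
Get-VMHost | Get-VMHostService | Where-Object {$_.Key -eq 'ntpd'} | Set-VMHostService -Policy On
Get-VMHost | Get-VMHostService | Where-Object {$_.Key -eq 'ntpd'} | Start-VMHostService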

VMware Cloud Foundation – SDDC Manager, Commission hosts, Checklist

After clicking on “Proceed” the details of the hosts need to be provided. Either add each individual host manually (Select “Add new”) or perform a bulk commission by preparing and uploading a JSON file:

VMware Cloud Foundation – SDDC Manager, Commission hosts, host addition

The JSON template provided looks like this:

{
    "hostsSpec": [
        {
            "hostfqdn": "Fully qual. domain name goes here",
            "username": "User Name goes here",
            "storageType": "VSAN/NFS",
            "password": "Password goes here",
            "networkPoolName": "Network Pool Name goes here"
        },
        {
            "hostfqdn": "Fully qual. domain name goes here",
            "username": "User Name goes here",
            "storageType": "VSAN/NFS",
            "password": "Password goes here",
            "networkPoolName": "Network Pool Name goes here"
        }
    ]
}
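If you need to commission more than a handful of hosts, the JSON file can also be generated instead of edited by hand. Here is a minimal PowerShell sketch, assuming all hosts share the same credentials and network pool (all values are placeholders):

# Host FQDNs to commission (placeholder values)
$hostNames = 'esx01.rainpole.local', 'esx02.rainpole.local', 'esx03.rainpole.local'

# Build the hostsSpec structure expected by the bulk commission upload
$spec = @{
    hostsSpec = @(
        foreach ($h in $hostNames) {
            @{
                hostfqdn        = $h
                username        = 'root'
                password        = 'VMware123!'
                storageType     = 'VSAN'
                networkPoolName = 'wld01-pool'
            }
        }
    )
}

# Write the JSON file to be uploaded in the "Commission Hosts" dialog
$spec | ConvertTo-Json -Depth 4 | Set-Content -Path .\commission-hosts.json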

Not only do the host’s details (FQDN, credentials) and the storage type (preferably vSAN) need to be provided, but also the network pool to be used. Later on license keys are required as well: a total of three license keys, for vSphere, vSAN and NSX, should be entered in the “Administration/License” screen of the SDDC manager.
Network pools are created in the “Administration/Network settings” screen. In this case VLAN IDs and subnets for vMotion and vSAN separate from the default pool (used by the MWLD) are used:

VMware Cloud Foundation – SDDC Manager, Network pools

After the hosts are commissioned they show up in the “Unassigned hosts” tab:

VMware Cloud Foundation – SDDC Manager, Inventory/Hosts

Click on a host to show its details, e.g. manufacturer, model and storage capacity:

VMware Cloud Foundation – SDDC Manager, Inventory/Hosts, Host details

To create a new WLD use the “+ workload domain” button in the inventory:

VMware Cloud Foundation – SDDC Manager, Workload Domains

Select your storage in the next dialog box. vSAN and NFS are fully supported out of the box (Fibre Channel can be added later on manually, but must be managed independently):

VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Step 0

In the first step of the VI configuration wizard enter names for the WLD, the first cluster and the organization the domain is intended for:

VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Step 1

Then enter a free IP address in the management subnet, an FQDN configured on your DNS servers and a root password for the WLD’s vCenter:

VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Step 2

The most interesting part, if you are enthusiastic about VMware’s SDN portfolio, is the networking screen, which allows you to choose between the legacy product NSX-V and NSX-T 2.4, released in 2019.
In both cases FQDNs, IP addresses and root/admin passwords for the NSX managers must be entered, as well as a VLAN ID used for the overlay transport (VXLAN for NSX-V, Geneve for NSX-T):

VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Step 3

If you selected vSAN as the primary storage provider in the first step you need to enter the PFTT (primary level of failures to tolerate) parameter in step four. “One failure to tolerate” means each data set is replicated once, similar to RAID 1. This means that any of the three required hosts can fail at any point in time without data loss.
If you have at least five hosts you can select PFTT=2, which means data is replicated twice, so two hosts may fail simultaneously (in general, mirroring requires 2n+1 hosts for PFTT=n). This is only the default setting, however; PFTT can also be set per object via storage policies later on.

VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Step 4

In the next step select the hosts which shall be used for the initial WLD creation. Further hosts can be added to the WLD later. The host selection screen previews the accumulated resources of the selected hosts:

VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Step 5

In the License step select the license keys entered before from the drop-down menus. Each license should provide enough capacity for each product (e.g. a sufficient CPU socket count) and not be expired:

VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Step 6

The last two screens show a review of all entered parameters and a preview of the component names which will be created:

VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Step 7
VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Step 8

After finishing the wizard the creation progress can be tracked in the Tasks view at the bottom of the SDDC manager. If you click on the task, all of its subtasks and their status are shown below:

VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Subtasks 1
VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Subtasks 2

After some time the WLD creation tasks should succeed:

VMware Cloud Foundation – SDDC Manager, Workload Domains, Creating VI WLD succeeded

Open the overview of the newly created WLD under “Inventory/Workload Domains” to show its status. The “Services” tab features links to the vCenter and the NSX-T manager GUIs:

VMware Cloud Foundation – SDDC Manager, Workload Domains, Details of WLD

After hosts are removed from a workload domain or the entire WLD is deleted, they are found under the “Unassigned hosts” tab again, but their state shows “Need Cleanup”:

VMware Cloud Foundation – SDDC Manager, Inventory/Hosts, Decommissioning

First select the checkbox on the left of each host needing cleanup and click on the button “Decommission selected hosts”.

Then log in to the SDDC manager using SSH (e.g. “ssh vcf@sddc-mgr.local”) and prepare a JSON file containing the hosts and their management credentials as follows:

[
   {
     "host1.local":
       {
         "root_user": "root",
         "root_password": "VMware123!"
       }
   },
   {
     "host2.local":
       {
         "root_user": "root",
         "root_password": "VMware123!"
       }
   }
 ] 

Now run the following commands found in the VCF documentation to commence the cleanup:

su 
cd /opt/vmware/sddc-support
./sos --cleanup-decommissioned-host /tmp/dirty_hosts.json

VMware Cloud Foundation – SDDC Manager, Host cleanup script

Afterwards, however, there is still the task of the network cleanup, which requires access to the Direct Console User Interface (DCUI).
If the network cleanup is not performed, you will be presented with errors as shown below when trying to re-commission the hosts:

VMware Cloud Foundation – SDDC Manager, Host addition of partly cleaned up hosts, Error 1
VMware Cloud Foundation – SDDC Manager, Host addition of partly cleaned up hosts, Error 2

When logging into the ESXi management GUI in your browser you can see the left-over distributed virtual switch and its port groups from the previous WLD:

VMware ESXi, Network settings

Perform the network cleanup by logging into the DCUI with the root user and then select “Network Restore Options”:

VMware ESXi, DCUI, Network Restore Options

Then select the “Restore Network Settings” option, which resets all network settings and devices to the defaults:

VMware ESXi, DCUI, Network Restore Settings
VMware ESXi, DCUI, Network Restore Settings, Done

Re-configuration of management network settings like IP address, subnet mask, default gateway and VLAN is needed afterwards.
Now the cleaned hosts are ready to be re-commissioned, which works as shown at the beginning of this post.

Migrating VMkernel adapters to logical switches through NSX-T N-vDS

In hyperconverged setups the servers usually have a very limited number of physical network interfaces. So when using your ESXi hypervisor hosts as NSX-T transport nodes you often can’t use dedicated vmnic devices as VTEPs.
This post shows how you can use the same physical adapters for VTEP traffic and for VMkernel adapters (e.g. for vSAN or vMotion) by migrating them to an N-vDS switch while configuring the hosts for NSX-T.

The starting point in this example is a host with two network cards, one quad-port 10 GbE card and one dual-port 100 GbE card, resulting in six available ports. The first two are used by a vSphere Distributed Switch, which contains a port group for the management VMkernel adapter (vmk0). The next two ports are reserved for future use (e.g. iSCSI), so the last two ports are supposed to function as uplinks for our N-vDS. Both ports will be used as active uplinks with the teaming policy “LOADBALANCE_SRCID”.

vSphere Client – Physical adapters before migration

To be able to migrate the vSAN and vMotion VMkernel adapters they need to be created first.
If you are using PowerCLI you can use this command:

New-VMHostNetworkAdapter 
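A slightly more complete sketch for both adapters could look like this (host name, switch and port group names, IP addresses and MTU are placeholders; it assumes standard vSwitch port groups that already exist, and the dedicated vMotion TCP/IP stack can still be selected in the vSphere Client afterwards):

# Create the vSAN VMkernel adapter (the port group will be replaced by a logical
# switch during the N-vDS migration anyway)
New-VMHostNetworkAdapter -VMHost esx01.rainpole.local -VirtualSwitch vSwitch0 `
    -PortGroup 'vsan-temp' -IP 192.168.50.11 -SubnetMask 255.255.255.0 `
    -VsanTrafficEnabled $true -Mtu 9000

# Create the vMotion VMkernel adapter, here tagged for vMotion on the default stack
New-VMHostNetworkAdapter -VMHost esx01.rainpole.local -VirtualSwitch vSwitch0 `
    -PortGroup 'vmotion-temp' -IP 192.168.60.11 -SubnetMask 255.255.255.0 `
    -VMotionEnabled $true -Mtu 9000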

In the vSphere Client open the Configure/VMkernel adapters view and click on “Add Networking…”:

vSphere Client – Adding VMkernel adapters

As the port group is going to be replaced by a logical switch anyway, it does not matter which network is selected:

vSphere Client – Adding VMkernel adapters, Select target device

Set up the port settings depending on its purpose:

vSphere Client – Adding VMkernel adapters, Port properties vSAN

Configure the IP address settings according to your design:

vSphere Client – Adding VMkernel adapters, IPv4 settings

Repeat the steps for the vMotion VMkernel adapter. The use of the dedicated vMotion TCP/IP stack is recommended:

vSphere Client – Adding VMkernel adapters, Port properties vMotion

Finally our two additional adapters are created:

vSphere Client – VMkernel adapters before migration

In the NSX-T GUI you can accomplish the migration of VMkernel adapters to the N-vDS in three different ways, depending on how you configure your host transport nodes.
If the host is not part of a cluster which has a Transport Node Profile assigned, it can be configured manually as shown here:

NSX-T – Fabric/Nodes/Host Transport Nodes

After configuring details like transport zones, the VMkernel migration can be set up by clicking on “Add Mapping”:

NSX-T – Fabric/Nodes/Host Transport Nodes, Configure NSX

Add a mapping for each vmk-adapter:

NSX-T – Fabric/Nodes/Host Transport Nodes, Configure NSX – Add Network Mappings for Install

Select which logical switch should be used for connectivity for each vmk-adapter:

NSX-T – Fabric/Nodes/Host Transport Nodes, Configure NSX – Network Mappings for Install

In the second case a transport node is already configured for NSX, but no mappings have been added as shown above. Select the host transport node and click on the “Migrate ESX VMkernel and Physical Adapters” entry in the “Actions” menu:

NSX-T – Fabric/Nodes/Host Transport Nodes, Migrate ESX VMkernel and Physical Adapters

The third way is to create a Transport Node Profile which contains “Network Mappings for Install” as shown above.

NSX-T – Fabric/Profiles/Transport Node Profiles

When the profile is attached to a cluster as shown below, any host added to that cluster in vSphere is automatically configured for NSX-T (including the vmk-adapter mappings) accordingly:

NSX-T – Fabric/Nodes/Host Transport Nodes, Configure NSX for a cluster

A green checkmark next to the attached profile is shown for the cluster when NSX-T has finished configuring all hosts:

NSX-T – Fabric/Nodes/Host Transport Nodes, Transport Node Profile attached

In the vSphere client you can verify whether the correct logical switches are used for the migrated VMkernel adapters:

vSphere Client – VMkernel adapters after migration
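The same can be cross-checked quickly with PowerCLI by listing the VMkernel adapters and their settings (the host name is a placeholder; the network name shown for an adapter attached to an N-vDS may differ from a regular port group name):

# List all VMkernel adapters of the host with their IPs and attached networks
Get-VMHost -Name esx01.rainpole.local |
    Get-VMHostNetworkAdapter -VMKernel |
    Select-Object Name, IP, SubnetMask, Mtu, PortGroupName, VMotionEnabled, VsanTrafficEnabled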

Also the physical adapters used as uplinks for the N-vDS are visible in the vSphere client:

vSphere Client – Physical adapters after migration

If your hardware only has two physical interfaces you can migrate the management VMkernel adapter (usually vmk0) to the N-vDS as well. The NSX-T product documentation shows this in a diagram and offers some additional considerations, e.g. that the DVS port group type should be set to Ephemeral when reverting back from an N-vDS.

Upgrading vRealize Network Insight v.4.1.1 with vRLCM

Recently, when checking the vRealize Suite Lifecycle Manager GUI in the lab I am working on, I noticed a new notification (red dot at the bell symbol in the upper right corner). Further inspection of the notifications showed the availability of the Product Support Pack 2 (Content Version 2.1.0.4), as shown in the lower entry in the screenshot below.
It is also mentioned on the vRealize LCM release page on “VMware Docs”.

vRealize Suite Lifecycle Manager – Notifications

Comparing the supported product versions of this new version with its predecessor (Version 2.1.0.2) reveals that vRealize Network Insight 4.1.1 is now supported: (highlighted in blue)

vRealize Suite Lifecycle Manager – Settings/Update

The release notes show all fixed issues, which are mostly focused on performance and stability.

After applying the new version a new entry in the Product Support section appears. As usual, start the download in the “Actions” column.
If your My VMware credentials are not configured in the Lifecycle Manager or your deployment is at a dark site, you can always download the product binaries manually, upload them via SCP and map them yourself, as shown in my previous post.

vRealize Suite Lifecycle Manager – Settings/Product Support

After the product binaries are available you can either deploy a fresh vRNI instance or upgrade existing environments as shown in the screenshot below. You can also import existing vRNI deployments which were not created by this (or any) Lifecycle Manager into an LCM environment.

vRealize Suite Lifecycle Manager – Environments

Follow the wizard by clicking on “Next” or on “Check compatibility matrix” to make sure the products used in your environment are supported:

vRealize Suite Lifecycle Manager – Environments/Upgrade

vRealize Network Insight 4.1.1 supports all recent VMware products, like NSX, vCenter Server & vRealize Log Insight, as shown in the compatibility matrix: (NSX-T is not mentioned, but is supported as well)

vRealize Suite Lifecycle Manager – vRNI 4.1.1 compatibility matrix

Before upgrading you should run the pre-check validations. If any items do not show the “Successful” status, you should follow the recommendations before proceeding:

vRealize Suite Lifecycle Manager – Environments/Upgrade/Precheck

Once the upgrade request is submitted you can check the status in the “Requests” section:

vRealize Suite Lifecycle Manager – Requests (In progress)

Depending on the specifications of your environment, e.g. cluster size, computing power etc., the upgrade process will take some time to complete. In this lab it took almost 50 minutes.

vRealize Suite Lifecycle Manager – Requests (Completed)

To verify the successful upgrade log into your vRNI GUI and open the “About” page in the “Settings” section. The version string should show the following:

vRealize Network Insight – Settings/About

Deploying vRealize Network Insight 4.1.0 with vRSLCM

At the beginning of May vRealize Network Insight 4.1 [vRNI] was released with a lot of interesting new features and enhancements described in the release notes.

It is getting more and more popular to use the vRealize Suite Lifecycle Manager appliance to deploy vRealize components like vRNI. In earlier posts I described how to deploy and update this tool to the current version, as shown in the screenshot below:

vRealize Suite Lifecycle Manager Version 2.1.0 Patch 1

In that version, however, support for vRNI 4.1.0 does not come out of the box. Instead you have to install a product support pack available in the VMware Marketplace / Solution Exchange first.

Download page for vRealize Network Insight 4.1.0 product support pack for vRealize Suite Lifecycle Manager

After installing the .pak file in the vRSLCM GUI under the “Settings/System Administration” page, the new version needs to be activated by clicking on the “Apply version” button:

vRealize Suite Lifecycle Manager – Installing a product support pack

You can check which products are supported by your deployment at any time by clicking on the user name in the top right corner and then on “Products”, which opens a pop-up window.
The message “Policy successfully refreshed” confirms the new version is applied correctly:

vRealize Suite Lifecycle Manager – Applying an installed product support pack

Of course vRSLCM needs access to the product binaries. If the appliance has internet access and you have provided your my.vmware.com credentials, it can download the .ova files directly.
For dark sites you can download both the “proxy” and “platform” .ova files on your workstation and upload them using SCP/SFTP: (screenshot shows WinSCP)

Uploading .ova files to vRealize Suite Lifecycle Manager using WinSCP

You need to add the product binaries to the product binary repository by entering the base location where you uploaded the .ova files earlier and then clicking on the “Discover” button. Finally select the added binaries and click “Add”:

vRealize Suite Lifecycle Manager – Adding product binaries

It takes a while until the product binaries are mapped and show up in the list:

vRealize Suite Lifecycle Manager – Adding product binaries in progress

Now you can deploy vRNI using vRSLCM by adding it to an existing environment or by creating a new environment. You have two deployment options for vRNI: Standard (1 Platform VM and 1 Collector VM) or Cluster (3 Platform VMs and 1 Collector VM). If you select “Cluster” only large nodes will be deployed, otherwise you can choose from “Standard” or “Large”.

This blog post shows all the required steps in between (providing certificate information, network details like IP addresses, subnet mask, gateway, port group and so on). Although the post is based on older versions of both vRealize Suite Lifecycle Manager and Network Insight, the steps are mostly the same.

After entering all the details for creating a new environment you should run the pre-check validations:

vRealize Suite Lifecycle Manager – Pre-checks for deploying vRealize Network Insight in progress

If the validation succeeds you can commence the environment creation:

vRealize Suite Lifecycle Manager – Pre-checks for deploying vRealize Network Insight successful

During the environment creation you can track the progress under the corresponding “In progress” request:

vRealize Suite Lifecycle Manager – Deploying vRealize Network Insight in progress

Once the request completes the deployment is ready to use:

vRealize Suite Lifecycle Manager – Deploying vRealize Network Insight successful

You can access the vRNI GUI via HTTPS on the configured address. Use the default admin user “admin@local” and the password you selected:

vRealize Network Insight login page

After the first login the main features are explained in four separate screens:

vRealize Network Insight welcome page 1/4
vRealize Network Insight welcome page 2/4
vRealize Network Insight welcome page 3/4
vRealize Network Insight welcome page 4/4

You can use the self-service wizard, which helps you configure and learn about your vRNI deployment. Among the first steps it suggests adding data sources like vCenters and NSX managers:

vRealize Network Insight – Self Service

Apart from physical devices like routers and switches, a whole variety of transport and infrastructure components can be added as data sources:

vRealize Network Insight – Adding accounts and data sources

After some time to record flow information vRealize Network Insight is ready to display the first example path, in this case how a VM, which is attached to a logical switch (NSX-T 2.4 segment), connects to the Internet. The path from the T1 distributed router on the same host as the VM (cyan background) to the service router on the Edge Transport Node (purple background) is visible. As the physical switches and routers behind the NSX-T edges have not been configured as data sources (yet), no further topology information is available between the service router and the Internet.

vRealize Network Insight – First packet flow/path

Upgrading VMware NSX-T to version 2.4.1

One week ago NSX-T version 2.4.1 (Build 13716575) was released. Dozens of resolved issues are listed in the release notes. The process of upgrading a deployment is depicted in this post.

The first step is to download the 7.5 GB upgrade bundle file and upload it in the first screen of the NSX-T GUI’s Upgrade section:

VMware NSX-T 2.4.1 upgrade: Upgrade bundle upload

After the upload is complete the bundle is extracted and its compatibility matrix is checked. Afterwards the upgrade process can be started:

VMware NSX-T 2.4.1 upgrade: Upgrade bundle upload completed

The obligatory End User License Agreement has to be accepted as usual:

VMware NSX-T 2.4.1 upgrade: Upgrade step 1

The first step in the upgrade process is to upgrade the “Upgrade Coordinator” component:

VMware NSX-T 2.4.1 upgrade: Upgrade step 2

When this step is completed three boxes with the current and new versions for the hosts, edges and management nodes are displayed:

VMware NSX-T 2.4.1 upgrade: Upgrade step 3

It is recommended to run the pre-checks first, which verify whether the environment is correctly configured for the further upgrade steps, e.g. whether the vSphere clusters are configured for DRS:

VMware NSX-T 2.4.1 upgrade: Upgrade step 4 (Pre-checks)

When the pre-checks are completed successfully you can proceed to the second step of the upgrade process, which is upgrading the hosts. All of the hosts known to NSX via Fabric/Nodes are displayed and grouped according to their clusters in vCenter. The order of the hosts in each group can be changed, as can the upgrade order (parallel or one after the other). The upgrade mode “Maintenance” is recommended for productive environments; it evacuates (vMotions the VMs off) each host while placing it in maintenance mode before installing the new NSX VIBs.
For test deployments the “In-place” upgrade mode can be selected, which might lead to service interruptions of the network functions offered by NSX to the running VMs.

VMware NSX-T 2.4.1 upgrade: Upgrade step 5 (Host groups)

The overall group upgrade order defines whether the host groups should be upgraded simultaneously:

VMware NSX-T 2.4.1 upgrade: Upgrade step 6 (In progress)

During the upgrade the individual status of each group can be observed by clicking on it:

VMware NSX-T 2.4.1 upgrade: Upgrade step 7

When all hosts are upgraded you can continue to the next step by clicking on “Next”:

VMware NSX-T 2.4.1 upgrade: Upgrade step 7 (Completed)

All edge VMs have to be part of an edge cluster, as those correspond to the edge groups by which the edges are upgraded. During the upgrade the status reveals that a new operating system is installed on them:

VMware NSX-T 2.4.1 upgrade: Upgrade step 8 (Edges)

When all edges are upgraded you can continue to the next step by clicking on “Next”:

VMware NSX-T 2.4.1 upgrade: Upgrade step 8 (Completed)

With the NSX-T 2.4 upgrade the controller functionality was moved from the dedicated controller VMs to the manager, which in turn was changed from a single VM to a cluster. Therefore the fourth step is obsolete and can be skipped by clicking on “Next”:

VMware NSX-T 2.4.1 upgrade: Upgrade step 9

The upgrade of the NSX-T manager cluster should be communicated to the concerned parties (e.g. network admins), as the management functionality will not be available during the maintenance window:

VMware NSX-T 2.4.1 upgrade: Upgrade step 10

The three manager VMs are upgraded in parallel:

VMware NSX-T 2.4.1 upgrade: Upgrade step 10 (In progress)

By clicking on “More information” the detailed upgrade logs are displayed:

VMware NSX-T 2.4.1 upgrade: Upgrade step 10 (Recent logs)

After completing the upgrade the manager VMs are rebooted. Until the services are available again this message is displayed:

VMware NSX-T 2.4.1 upgrade: Upgrade step 11

With the management nodes being upgraded successfully the upgrade process is completed:

VMware NSX-T 2.4.1 upgrade: Upgrade completed

The upgrade history can be tracked by clicking on “Show Upgrade History”:

VMware NSX-T 2.4.1 upgrade: Upgrade history