Migrating VMkernel adapters to logical switches through NSX-T N-vDS

In hyperconverged setups the servers usually have a very limited number of physical network interfaces, so when using your ESXi hypervisor hosts as NSX-T transport nodes you often cannot use dedicated vmnic devices as VTEPs.
This post shows how you can use the same physical adapters for VTEP traffic and for VMkernel adapters (e.g. for vSAN or vMotion) by migrating them to an N-vDS switch while configuring the hosts for NSX-T.

The starting point in this example is a host with two network cards, one quad-port 10 GbE card and one dual-port 100 GbE card, resulting in six available ports. The first two are used by a vSphere Distributed Switch, which contains a port group for the management VMkernel adapter (vmk0). The next two ports are reserved for future use (e.g. iSCSI), so the last two ports are supposed to function as uplinks for our N-vDS. Both ports will be used as active uplinks with the teaming policy “LOADBALANCE_SRCID”.

vSphere Client – Physical adapters before migration

To be able to migrate the vSAN and vMotion VMkernel adapters, they need to be created first.
If you are using PowerCLI you can use this command:

New-VMHostNetworkAdapter 
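
For example, creating both adapters could look like this; a minimal sketch, where the host name, switch and port group names, and IP settings are placeholders for your environment:

# Placeholders: adjust host name, switch/port group names and IP settings
$vmhost = Get-VMHost -Name "esx01.lab.local"

# vSAN VMkernel adapter
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch "DSwitch" -PortGroup "vSAN" `
    -IP "192.168.10.11" -SubnetMask "255.255.255.0" -VsanTrafficEnabled $true

# vMotion VMkernel adapter
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch "DSwitch" -PortGroup "vMotion" `
    -IP "192.168.20.11" -SubnetMask "255.255.255.0" -VMotionEnabled $true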

In the vSphere Client open the Configure/VMkernel adapters view and click on “Add Networking…”:

vSphere Client – Adding VMkernel adapters

As the port group is going to be replaced by a logical switch anyway, it does not matter which network is selected:

vSphere Client – Adding VMkernel adapters, Select target device

Set up the port settings depending on the adapter’s purpose:

vSphere Client – Adding VMkernel adapters, Port properties vSAN

Configure the IP address settings according to your design:

vSphere Client – Adding VMkernel adapters, IPv4 settings

Repeat the steps for the vMotion VMkernel adapter. The use of the custom vMotion TCP/IP stack is recommended:

vSphere Client – Adding VMkernel adapters, Port properties vMotion
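
Note that, as far as I know, New-VMHostNetworkAdapter has no parameter for selecting the TCP/IP stack. If you want to script this step as well, one option is the esxcli equivalent via Get-EsxCli; the argument names below are assumptions derived from the esxcli option names, and vmk2 plus the port group name are placeholders:

# Equivalent to: esxcli network ip interface add -i vmk2 -p vMotion -N vmotion
$esxcli = Get-EsxCli -VMHost $vmhost -V2
$esxcli.network.ip.interface.add.Invoke(@{interfacename = "vmk2"; portgroupname = "vMotion"; netstack = "vmotion"})
# The IP settings would then be configured separately (esxcli network ip interface ipv4 set)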

Finally our two additional adapters are created:

vSphere Client – VMkernel adapters before migration

In the NSX-T GUI you can migrate VMkernel adapters to an N-vDS in three different ways, depending on how you configure your host transport nodes.
If the host is not part of a cluster which has a Transport Node Profile assigned, it can be configured manually as shown here:

NSX-T – Fabric/Nodes/Host Transport Nodes

After configuring details such as transport zones, the VMkernel migration can be set up by clicking on “Add Mapping”:

NSX-T – Fabric/Nodes/Host Transport Nodes, Configure NSX

Add a mapping for each vmk-adapter:

NSX-T – Fabric/Nodes/Host Transport Nodes, Configure NSX – Add Network Mappings for Install

Select the logical switch that should provide connectivity for each vmk-adapter:

NSX-T – Fabric/Nodes/Host Transport Nodes, Configure NSX – Network Mappings for Install

In the second case a transport node is already configured for NSX, but no mappings have been added as shown above. Select the host transport node and click on the “Migrate ESX VMkernel and Physical Adapters” entry in the “Actions” menu:

NSX-T – Fabric/Nodes/Host Transport Nodes, Migrate ESX VMkernel and Physical Adapters

The third way is to create a Transport Node Profile which contains “Network Mappings for Install” as shown above.

NSX-T – Fabric/Profiles/Transport Node Profiles

When the profile is attached to a cluster as shown below, any host added to that cluster in vSphere is automatically configured for NSX-T (including the vmk-adapter mappings):

NSX-T – Fabric/Nodes/Host Transport Nodes, Configure NSX for a cluster

A green checkmark next to the attached profile is shown for the cluster when NSX-T has finished configuring all hosts:

NSX-T – Fabric/Nodes/Host Transport Nodes, Transport Node Profile attached

In the vSphere client you can verify whether the correct logical switches are used for the migrated VMkernel adapters:

vSphere Client – VMkernel adapters after migration

The physical adapters used as uplinks for the N-vDS are also visible in the vSphere Client:

vSphere Client – Physical adapters after migration
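
The same verification also works from PowerCLI; the host name is a placeholder, and after the migration the logical switch name should appear as the VMkernel adapter’s network:

# VMkernel adapters with their current network and IP configuration
Get-VMHostNetworkAdapter -VMHost "esx01.lab.local" -VMKernel |
    Select-Object Name, PortGroupName, IP, Mtu

# Physical NICs, e.g. to confirm the N-vDS uplinks are in use
Get-VMHostNetworkAdapter -VMHost "esx01.lab.local" -Physical |
    Select-Object Name, BitRatePerSec, FullDuplex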

If your hardware only has two physical interfaces you can migrate the management VMkernel adapter (usually vmk0) to the N-vDS as well. The NSX-T product documentation shows this in a diagram and offers some additional considerations, e.g. that the DVS port group type should be set to “Ephemeral” when reverting back from an N-vDS.

Upgrading vRealize Network Insight v.4.1.1 with vRLCM

Recently, when checking the vRealize Suite Lifecycle Manager GUI in the lab I am working on, I noticed a new notification (red dot at the bell symbol in the upper right corner). Further inspection of the notifications showed the availability of Product Support Pack 2 (Content Version 2.1.0.4), as shown in the lower entry of the screenshot below.
It is also mentioned on the vRealize LCM release page at “VMware docs”.

vRealize Suite Lifecycle Manager – Notifications

Comparing the supported product versions of this new version with its predecessor (Version 2.1.0.2) reveals that vRealize Network Insight 4.1.1 is now supported (highlighted in blue):

vRealize Suite Lifecycle Manager – Settings/Update

The release notes show all fixed issues, which are mostly focused on performance and stability.

After applying the new version, a new entry appears in the Product Support section. As usual, start the download via the “Actions” column.
If your My VMware credentials are not configured in the Lifecycle Manager or your deployment is at a dark site, you can always download the product binaries manually, upload them via SCP and map them yourself, as shown in my previous post.

vRealize Suite Lifecycle Manager – Settings/Product Support

After the product binaries are available you can either deploy a fresh vRNI instance or upgrade existing environments, as shown in the screenshot below. You can also import existing vRNI deployments into an LCM environment, even if they were created by a different LCM instance or without one.

vRealize Suite Lifecycle Manager – Environments

Follow the wizard by clicking on “Next”, or click on “Check compatibility matrix” to make sure the products used in your environment are supported:

vRealize Suite Lifecycle Manager – Environments/Upgrade

vRealize Network Insight 4.1.1 supports all recent VMware products, like NSX, vCenter Server & vRealize Log Insight, as shown in the compatibility matrix (NSX-T is not mentioned explicitly, but is supported as well):

vRealize Suite Lifecycle Manager – vRNI 4.1.1 compatibility matrix

Before upgrading you should run the pre-check validations. If any items do not show the “Successful” status, you should follow the recommendations before proceeding:

vRealize Suite Lifecycle Manager – Environments/Upgrade/Precheck

Once the upgrade request is submitted you can check the status in the “Requests” section:

vRealize Suite Lifecycle Manager – Requests (In progress)

Depending on the specifications of your environment (cluster size, computing power etc.), the upgrade process will take some time to complete. In this lab it took almost 50 minutes.

vRealize Suite Lifecycle Manager – Requests (Completed)

To verify the successful upgrade log into your vRNI GUI and open the “About” page in the “Settings” section. The version string should show the following:

vRealize Network Insight – Settings/About

Deploying vRealize Network Insight 4.1.0 with vRSLCM

At the beginning of May, vRealize Network Insight (vRNI) 4.1 was released with a lot of interesting new features and enhancements, described in the release notes.

It is getting more and more popular to use the vRealize Suite Lifecycle Manager appliance to deploy vRealize components like vRNI. In earlier posts I described how to deploy and update this tool to the current version, as shown in the screenshot below:

vRealize Suite Lifecycle Manager Version 2.1.0 Patch 1

In that version, however, support for vRNI 4.1.0 does not come out of the box. You first have to install a product support pack, available in the VMware Marketplace / Solution Exchange.

Download page for vRealize Network Insight 4.1.0 product support pack for vRealize Suite Lifecycle Manager

After installing the .pak file in the vRSLCM GUI under the “Settings/System Administration” page, the new version needs to be activated by clicking on the “Apply version” button:

vRealize Suite Lifecycle Manager – Installing a product support pack

You can check which products are supported by your deployment at any time by clicking on the user name in the top right corner and then on “Products”, which opens a pop-up window.
The message “Policy successfully refreshed” confirms the new version is applied correctly:

vRealize Suite Lifecycle Manager – Applying an installed product support pack

Of course vRSLCM needs access to the product binaries. If the appliance has internet access and you provide your my.vmware.com credentials, it can download the .ova files directly.
For dark sites you can download both the “proxy” and “platform” .ova files on your workstation and upload them using SCP/SFTP (the screenshot shows WinSCP):

Uploading .ova files to vRealize Suite Lifecycle Manager using WinSCP

You need to add the product binaries to the product binary repository by entering the base location where you uploaded the .ova files earlier and then clicking on the “Discover” button. Finally select the added binaries and click “Add”:

vRealize Suite Lifecycle Manager – Adding product binaries

It takes a while until the product binaries are mapped and show up in the list:

vRealize Suite Lifecycle Manager – Adding product binaries in progress

Now you can deploy vRNI using vRSLCM by adding it to an existing environment or by creating a new environment. You have two deployment options for vRNI: Standard (1 Platform VM and 1 Proxy VM) or Cluster (3 Platform VMs and 1 Proxy VM). If you select “Cluster” only large nodes will be deployed, otherwise you can choose from “Standard” or “Large”.

This blog post shows all the required steps in between (providing certificate information, network details like IP addresses, subnet mask, gateway, port group and so on). Although the post is based on older versions of both vRealize Suite Lifecycle Manager and Network Insight, the steps are mostly the same.

After entering all the details for creating a new environment you should run the pre-check validations:

vRealize Suite Lifecycle Manager – Pre-checks for deploying vRealize Network Insight in progress

If the validation succeeds you can commence the environment creation:

vRealize Suite Lifecycle Manager – Pre-checks for deploying vRealize Network Insight successful

During the environment creation you can track the progress under the corresponding “In progress” request:

vRealize Suite Lifecycle Manager – Deploying vRealize Network Insight in progress

Once the request completes the deployment is ready to use:

vRealize Suite Lifecycle Manager – Deploying vRealize Network Insight successful

You can access the vRNI GUI via HTTPS on the configured address. Use the default admin user “admin@local” and the password you selected:

vRealize Network Insight login page

After first login the main features are explained in four separate screens:

vRealize Network Insight welcome page 1/4
vRealize Network Insight welcome page 2/4
vRealize Network Insight welcome page 3/4
vRealize Network Insight welcome page 4/4

You can use the self-service wizard, which helps you configure and learn about your vRNI deployment. Among the first steps it suggests adding data sources like vCenters and NSX Managers:

vRealize Network Insight – Self Service

Apart from physical devices like routers and switches, a whole variety of transport and infrastructure components can be added as data sources:

vRealize Network Insight – Adding accounts and data sources

After some time of recording flow information, vRealize Network Insight is ready to display the first example path, in this case how a VM attached to a logical switch (NSX-T 2.4 segment) connects to the Internet. The path from the T1 distributed router on the same host as the VM (cyan background) to the service router on the Edge Transport Node (purple background) is visible. As the physical switches and routers behind the NSX-T edges have not been configured as data sources (yet), no further topology information is available between the service router and the Internet.

vRealize Network Insight – First packet flow/path

Upgrading VMware NSX-T to version 2.4.1

One week ago NSX-T version 2.4.1 (Build 13716575) was released. Dozens of resolved issues are listed in the release notes. The process of upgrading a deployment is depicted in this post.

The first step is to download the 7.5 GB upgrade bundle file and upload it in the first screen of the NSX-T GUI’s Upgrade section:

VMware NSX-T 2.4.1 upgrade: Upgrade bundle upload

After the upload is complete the bundle is extracted and its compatibility matrix is checked. Afterwards the upgrade process can be started:

VMware NSX-T 2.4.1 upgrade: Upgrade bundle upload completed

The obligatory End User License Agreement has to be accepted as usual:

VMware NSX-T 2.4.1 upgrade: Upgrade step 1

The first step in the upgrade process is to upgrade the “Upgrade Coordinator” component:

VMware NSX-T 2.4.1 upgrade: Upgrade step 2

When this step is completed three boxes with the current and new versions for the hosts, edges and management nodes are displayed:

VMware NSX-T 2.4.1 upgrade: Upgrade step 3

It is recommended to run the pre-checks first, which verify that the environment is correctly configured for the further upgrade steps, e.g. whether the vSphere clusters are configured for DRS:

VMware NSX-T 2.4.1 upgrade: Upgrade step 4 (Pre-checks)

When the pre-checks are completed successfully you can proceed to the second step of the upgrade process, which is upgrading the hosts. All of the hosts known to NSX via Fabric/Nodes are displayed and grouped according to their clusters in vCenter. The order of the hosts in each group can be changed, as can the upgrade order (parallel or one after the other). The upgrade mode “Maintenance” is recommended for production environments; it evacuates each host (vMotion) and places it in maintenance mode before installing the new NSX VIBs.
For test deployments the “In-place” upgrade mode can be selected, which might lead to service interruptions of the network functions offered by NSX to the running VMs.

VMware NSX-T 2.4.1 upgrade: Upgrade step 5 (Host groups)

The overall group upgrade order defines whether the host groups should be upgraded simultaneously:

VMware NSX-T 2.4.1 upgrade: Upgrade step 6 (In progress)

During the upgrade the individual status of each group can be observed by clicking on it:

VMware NSX-T 2.4.1 upgrade: Upgrade step 7

When all hosts are upgraded you can continue to the next step by clicking on “Next”:

VMware NSX-T 2.4.1 upgrade: Upgrade step 7 (Completed)

All edge VMs have to be part of an edge cluster, as those clusters correspond to the edge groups by which the edges are upgraded. During the upgrade the status reveals that a new operating system is installed on them:

VMware NSX-T 2.4.1 upgrade: Upgrade step 8 (Edges)

When all edges are upgraded you can continue to the next step by clicking on “Next”:

VMware NSX-T 2.4.1 upgrade: Upgrade step 8 (Completed)

Since with the NSX-T 2.4 upgrade the controller functionality was moved from the dedicated controller VMs to the manager, which in turn was changed from a single VM to a cluster, the fourth step is obsolete and can be skipped by clicking on “Next”:

VMware NSX-T 2.4.1 upgrade: Upgrade step 9

The upgrade of the NSX-T manager cluster should be communicated to concerned parties (e.g. network admins), as its functionality will not be available during the maintenance window:

VMware NSX-T 2.4.1 upgrade: Upgrade step 10

The three manager VMs are upgraded in parallel:

VMware NSX-T 2.4.1 upgrade: Upgrade step 10 (In progress)

By clicking on “More information” the detailed upgrade logs are displayed:

VMware NSX-T 2.4.1 upgrade: Upgrade step 10 (Recent logs)

After completing the upgrade the manager VMs are rebooted. Until the services are available again this message is displayed:

VMware NSX-T 2.4.1 upgrade: Upgrade step 11

With the management nodes upgraded successfully, the upgrade process is completed:

VMware NSX-T 2.4.1 upgrade: Upgrade completed

The upgrade history can be tracked by clicking on “Show Upgrade History”:

VMware NSX-T 2.4.1 upgrade: Upgrade history

Installing vRealize Suite Lifecycle Manager 2.1.0 Patch 1

In the middle of May the first patch for the current version of everyone’s favorite tool to deploy and manage vRealize components was released.
The 12 resolved issues are listed in this KB article.

To install it first download the patch file from my.vmware.com:

vRSLCM 2.1.0 Patch 1 download

Then open up the “System Administration” page in the management GUI:

vRSLCM 2.1.0: System Administration

After clicking on the “Install Patch” button, select the file downloaded previously and wait for it to be uploaded:

vRSLCM 2.1.0 Patch 1 Installation Step 1

Click on “Next” and review the details before finishing with the “Install” button:

vRSLCM 2.1.0 Patch 1 Installation Step 2

As with every update of a VMware product taking a snapshot and/or backing up the configuration before proceeding is recommended:

vRSLCM 2.1.0 Patch 1 Installation Step 3

After a short while the new build version is visible in the GUI:

vRSLCM 2.1.0 Patch 1 installed

Updating VMware Cloud Foundation dark site deployment from 3.5.1 to 3.7

Two weeks ago the latest and greatest in VMware’s SDDC came out: VMware Cloud Foundation (VCF) version 3.7.
Apart from including the current security patches (e.g. ESXi 6.7 EP 06 / build number 11675023), a couple of new automation features have been added, as you can see in the release notes.
Also the Cloud Builder appliances for setting up greenfield Cloud Foundation and VMware Validated Design deployments have been merged. As you can see in my tweet from a while back, these used to be two separate OVA files.

VCF 3.7 can be installed as a new deployment or upgraded from the previous version (3.5.1), which is what I did for a dark site I am maintaining.
The process is the same as in my previous posts:

VMware Cloud Foundation update version 3.7 – Bundle download

After downloading the bundle files (around 21 GB) on a PC with internet access and importing them into the SDDC manager, you can trigger the first phase of the update process, which is updating the SDDC manager itself:

VMware Cloud Foundation update version 3.7 – Update in Progress

This took less than 22 minutes on current Dell EMC hardware:

VMware Cloud Foundation update version 3.7 – Finished phase 1 update

The new build numbers are 12695026 / 12695044 (UI).

VMware Cloud Foundation update version 3.7 – Build numbers after phase 1 update

After triggering the next update phase the vCenter and PSC instances are bumped from build number 10244745 to 11726888, which is the most current security update available:

VMware Cloud Foundation update version 3.7 – Build numbers after phase 2 update

The last step is upgrading the ESXi hosts to build number 11675023, which was released on 01/17/2019. Only recently (03/28/2019) a more current security patch was released, which will presumably be included in one of the future VCF upgrades.
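
A quick PowerCLI one-liner (assuming an existing Connect-VIServer session to the management vCenter) shows whether all hosts ended up on the expected build:

# Version and build number per host; 11675023 is the expected value after this upgrade
Get-VMHost | Select-Object Name, Version, Build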

VMware Cloud Foundation update version 3.7 – Build numbers after final update

Having all VCF 3.7 patches installed is confirmed by the displayed text “There is no update available”.

VMware Cloud Foundation: Install custom SSL Certificates with XCA

If you set up a VMware Cloud Foundation (VCF) deployment you will notice all components (SDDC manager, vCenter, Platform Service Controllers, NSX manager & vRealize Log Insight) are using self-signed SSL certificates for their web services.
If you have a Microsoft Active Directory server or cluster you can use their Certificate Authority (CA) functionality to generate trusted certificates as described in the official documentation.
However there is an alternative if you are not willing to set up Microsoft servers or pay their license fees: you can create your own certificates with your internally trusted CA and let the SDDC manager do the work of distributing them among the various VCF components.

In this example, based on the corresponding documentation page, I will use XCA, a freeware graphical frontend for creating and managing X.509 certificates. It is available for Windows, macOS and Linux.

When you have downloaded and installed the software and open it for the first time, you need to create a new database (see “File” menu) as a starting point. It will ask you for a filename and a password, which you need to enter each time you access the database. You should also set the default hash algorithm to “SHA256” in the options menu, as “SHA 1” is deprecated.

In the simplest case you would create a CA by hitting “New Certificate”, selecting the “CA” template (followed by “Apply all”), giving it at least a name (Internal name, commonName) and generating a private key for it.

In my case however I already had a CA up and running elsewhere, which I used to create an intermediate CA called “xca”. To be able to use it for creating certificates in the XCA tool I first had to import its private key:

XCA – Private Keys

Then I imported the certificates of both the root and the intermediate CA:

XCA – CA certificates

If you are not using a self-created CA as described above, you need to select the externally created root CA and click on “Trust” in the context menu (the intermediate CA is then trusted automatically):

XCA – Trust root CA

Now it was time to generate the certificate signing requests using the SDDC manager interface. Select all resources whose certificates you want to replace and click on the “Generate CSR” button (found under the “Security” tab of your workload/management domain):

VMware Cloud Foundation – Generate CSR

This will let you download a tar.gz archive named after your workload domain; for the management domain it is called “MGMT.tar.gz”. Extract that archive with your favorite tool, e.g. using “tar -xzf MGMT.tar.gz” on *nix. For Windows desktops 7-Zip works fine, although you might need to extract in two steps (.tar.gz -> .tar -> extract contents).
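
As a side note, Windows 10 version 1803 and later ship with a built-in bsdtar, so the same tar commands work in a PowerShell or cmd window and the two-step 7-Zip procedure becomes optional:

# Extract the CSR archive (Windows 10 1803+, or any *nix)
tar -xzf .\MGMT.tar.gz
# Re-packing later works the same way in the reverse direction:
tar -czf .\MGMT.tar.gz MGMT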

After extraction you should have a folder also named after your workload domain, with sub-directories named after the hostnames of your VCF components, each containing a .csr file. Import those in the “Certificate signing requests” tab in XCA using the “Import” button:

XCA – Import CSRs

Pick a CSR, open the context menu and click on “Sign”:

XCA – Sign CSR

The following window will appear. Make sure that the correct root or intermediate CA is selected under “Use this Certificate for signing” and that a supported hash algorithm like “SHA 256” is selected (ignore the Template selection):

XCA – Generate certificate – Source

In the next tab you can enter the time range for which the certificate will be valid. After entering a number you need to hit the “Apply” button. As all other important settings are already filled in from the CSR, no further modifications are needed. It might, however, be a good idea to fill out the “X509v3 Subject Alternative Name” (SAN) field with the respective FQDNs and IP addresses (I will explain why later on).

XCA – Generate certificate – Extensions

After repeating the signing procedure for all CSRs, the “Certificates” tab of XCA should look like the next screenshot.
Here you need to export the created certificates to the same folders you imported the CSRs from, with the same filenames (but with file extension “.crt”). Also make sure the export format is set to “PEM”:

XCA – Export signed certificates to individual sub-directories

You also need to export a certificate chain of the trusted CAs to a file called “rootca.crt”, placed in the extracted directory where the other sub-directories are located. This can be done with XCA as shown below:

XCA – CA chain export

For the SDDC manager to be able to import the certificate structure (including the previously exported CSRs), the folder structure needs to be packed into a tar.gz archive once again. You might need to delete the old archive downloaded previously, as the same name is used.
On *nix use “tar -czf MGMT.tar.gz MGMT/”. Using 7-Zip it is again a two-step procedure. First add the folder to a tar archive like this:

7-Zip create tar archive

Then add the tar archive to a gzip archive using the default settings:

7-Zip – Create tar.gz archive

The resulting tar.gz file can then be uploaded in the menu that opens after clicking on “Upload and install”:

VMware Cloud Foundation – Upload and Install Certificates

If everything is done correctly the result should look like this:

VMware Cloud Foundation – Upload and Install Certificates successful

All services except for the SDDC manager are restarted automatically, but you may need to close browser sessions if you still have old ones open, or even clear your browser cache.
If you do not want to reboot your SDDC manager, use SSH and the “vcf” user to log into it and run the following commands:

su
sh /opt/vmware/vcf/operationsmanager/scripts/cli/sddcmanager_restart_services.sh

Of course you still need to import your locally created CA into the trusted store of your browser of choice so that the HTTPS connection shows as valid. This how-to should help you accomplish this. In the end it should look like this:

VMware vCenter with secure HTTPS connection
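
On a Windows workstation this step can also be scripted: the PKI module’s Import-Certificate cmdlet places the CA in the machine store (used by Chrome and Edge; Firefox keeps its own store). This assumes an elevated PowerShell session and the rootca.crt exported earlier:

# Import the CA into the trusted root store (requires admin rights)
Import-Certificate -FilePath .\rootca.crt -CertStoreLocation Cert:\LocalMachine\Root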

One issue I found was that the connection between vCenter and NSX manager was no longer working with the new certificates. Searching for the symptoms (vCenter displaying “No NSX Managers available. Verify current user has role assigned on NSX Manager.”) in the VMware knowledge base led me to check the lookup/registration page of the NSX manager appliance. It appears that Cloud Foundation sets up both URLs using IP addresses. After changing both to the PSC/vCenter FQDNs (as shown in the screenshot below) and restarting the VMs, everything was working again:

VMware NSX manager

Another way to solve this could be to add the IP addresses of each VCF component to the individual “SAN” field when creating the certificates, as described above, so that the HTTPS connection is trusted both ways.

Updating VMware Cloud Foundation from 3.5 to 3.5.1

Business as usual to keep a dark site VCF deployment up to date…

  • Generating marker file on the SDDC manager
  • Using a workstation with internet access to download the update bundle(s) (in this case “bundle-8203.tar”, almost 7 GB in size) and the delta file
  • Uploading and importing them into the SDDC manager
  • Fixing file permissions
  • Installing the update using the web interface of the SDDC manager:
VMware Cloud Foundation update version 3.5.1 – Import successful
VMware Cloud Foundation update version 3.5.1 – Update initialization successful

A countdown timer appears at the top of the SDDC manager admin page when you use the “Schedule Update” function:

VMware Cloud Foundation update version 3.5.1 – Update about to start
VMware Cloud Foundation update version 3.5.1 – Update in Progress
VMware Cloud Foundation update version 3.5.1 – Update Updated 🙂

Everything went smoothly as expected in less than 21 minutes.

The release notes list what’s new:

  • Multi-Cluster NSX-T Support – Enables deployment of multiple clusters in a NSX-T based Workload Domain.
  • Custom ISO Support for Lifecycle Management – Enables customer-specified ISOs in place of VMware stock images for ESXi upgrades.
  • Miscellaneous Bug Fixes – Includes multiple bug fixes from the Cloud Foundation 3.5 release.

Installing PowerCLI on macOS Mojave

After just getting started with PowerCLI on my company Windows 10 notebook I read that, since last year, you can also run it on Linux and macOS systems. As I just started to like the functionality (which took some time, being only accustomed to Bash and Python before), I wanted to give it a try on my private MacBook Pro, so here are the steps I took:

PowerShell stable release download page on Github

First download the latest stable release for macOS (shown above), currently “powershell-6.1.2-osx-x64.pkg”, and install it.
Then open a shell, either by clicking on “PowerShell” in the Launchpad or by opening a Terminal window and entering “pwsh”.

PowerCLI installation in PowerShell on MacOS

Then simply enter the following:

Set-PSRepository -Name PSGallery -InstallationPolicy Trusted
Install-Module -Name VMware.PowerCLI -Scope CurrentUser

If you skip the first line the PSGallery repository, which hosts the PowerCLI packages, is not trusted, resulting in the following warning:

Untrusted repository
 You are installing the modules from an untrusted repository. If you trust this repository, change its InstallationPolicy value by running the Set-PSRepository cmdlet. Are you
  sure you want to install the modules from 'PSGallery'?
 [Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "N"):

Updating to a new version works as follows:

Update-Module -Name VMware.PowerCLI
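
To check that the modules actually work end to end, a minimal smoke test could look like this; the vCenter name is a placeholder, and ignoring invalid certificates is only sensible in labs with self-signed certificates:

# Accept self-signed certificates (lab use only)
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Scope User

# Connect to a vCenter and list a few VMs
Connect-VIServer -Server vcsa.lab.local
Get-VM | Select-Object Name, PowerState -First 5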

Cleaning up after the VMware Cloud Foundation 3.5 update

When a VMware Cloud Foundation deployment has been updated to the current version, as described previously, a few tasks should be done afterwards.
First the vSAN datastore disk format version might need an upgrade. To check this, head to the “Configure” tab of your cluster in vCenter and click on “vSAN/Disk Management”:


vCenter cluster overview after VMware Cloud Foundation Update 3.5

Of course you should run the pre-check first by clicking the corresponding button. If everything is working as it should, it looks like this:

vCenter cluster overview after VMware Cloud Foundation Update 3.5 (vSAN upgrade pre-check)

Now you can click the “Upgrade” button, which informs you that this can take a while. You should also back up your data/VMs elsewhere, especially if you select “Allow Reduced Redundancy”, which speeds up the process:

vCenter cluster overview after VMware Cloud Foundation Update 3.5 (vSAN upgrade)

As you can see, the disk format version has now changed from “5” to “7”:

vCenter cluster overview after VMware Cloud Foundation Update 3.5 (vSAN upgraded)

However, some vSAN issues are still displayed:

vCenter cluster overview after VMware Cloud Foundation Update 3.5 (vSAN issues)

As this deployment is a “dark site”, meaning no internet access is available, the HCL database and Release catalog have to be updated manually.

vCenter cluster overview after VMware Cloud Foundation Update 3.5 (vSAN Update)

The URL to download the 14.7 MB file can be found in a post from William Lam from 2015 or in this KB article. The release catalog’s URL is taken from another KB article. This file is less than 8 KB in size.
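
If you want to script the download on the machine with internet access, something like this works; the HCL database URL is the one referenced in William Lam’s post and the KB article at the time of writing, while the release catalog URL is a placeholder to be taken from its own KB article:

# Download the vSAN HCL database (~14.7 MB)
Invoke-WebRequest -Uri "https://partnerweb.vmware.com/service/vsan/all.json" -OutFile ".\all.json"
# Replace <catalog-url> with the URL from the release catalog KB article (file is <8 KB)
Invoke-WebRequest -Uri "<catalog-url>" -OutFile ".\vsan-release-catalog.json"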
After uploading both using the corresponding “Update from file” buttons the screen should look like this:

vCenter cluster overview after VMware Cloud Foundation Update 3.5 (vSAN updated)

The last remaining issue in this case was that the firmware version of the host bus adapter connecting the vSAN datastore devices could not be retrieved (“N/A”):

vCenter cluster overview after VMware Cloud Foundation Update 3.5 (vSAN Health)

Since the firmware version listed in the host’s iDRAC (see next screenshot) matches one of the “Recommended firmwares” from above, I decided to hit “Silence alert” instead. Eventually one could look for an updated VIB file allowing the ESXi host to retrieve the firmware version from the controller.

iDRAC overview of storage controllers

One more effect of the upgrade from 3.0.1.1 to 3.5 is the appearance of three more VMs in vCenter. These are the old (6.5.x) instances of the platform service controllers and the vCenter. New instances with version 6.7.x were deployed during the upgrade. After all settings had been imported from the old instances, they were apparently powered off and kept in case something went wrong.
After a period of time, and after confirming everything works as expected, those three VMs may be deleted from the datastore:

vCenter VM overview showing old PSCs and vCenter instances