VMworld Europe 2019 – Day 2 recap

vBreakfast

Fred Hofer ‘imported’ the idea of a joint breakfast, where vCommunity members can start their day together with coffee and snacks, from Shane Williford, who introduced this nice tradition at VMworld US. The first Barcelona edition took place in 2015 with only three people and the now world-famous ‘grumpy waiter’, who by now has his own Twitter hashtag. Over the years the event grew bigger and is now usually sponsored by Runecast (this year for the third time in a row). It was a fun experience and a great way to have a chat or even meet new people.

vBreakfast Europe 2019

Tuesday General Session keynote

After a brief introduction by Jean-Pierre Brulard (Senior Vice President & General Manager, EMEA), VMware CEO Pat Gelsinger took the stage of the general session, supported by Principal Engineer Joe Beda (back at VMware after the acquisition of Heptio, where he was CTO) and COO Sanjay Poonen, plus a couple of guests, providing insights into how VMware has changed and will continue to change the areas of cloud, mobility, networking and security.
If you don’t have time to watch the recording of the session, here are the topics in short form:

  • VMware strategy: Customers should be able to run any app using any cloud on any device with intrinsic security. This strategy has been around for about six years, but the portfolio is broader than ever. It is now organised into five blocks: build, run, manage, connect and protect.
  • Force for Good initiative: VMware’s mission is that technology should impact humanity in a positive way (i.e. sustainability, education). For example, they bring education to less-developed regions, such as parts of Africa.
  • VMware Tanzu: The cloud-native portfolio consists of various new products focussed around Kubernetes:
    • Build Modern Applications with a Modern Software Supply Chain: Pivotal, Spring, broad ISV ecosystem, pre-tested application catalogs (Bitnami and the newly announced enterprise quality Project Galleon)
    • Run Modern Apps: vSphere with Native Kubernetes and App-focused Management (Project Pacific)
    • Manage Multi-cloud, Multi-cluster Infrastructure: Tanzu Mission Control (supporting all Kubernetes platforms, like AKS, GKE, OpenShift etc.; currently in closed beta) and Velero (a backup tool, formerly called Heptio Ark)
  • Project Pacific performance: vSphere with native Kubernetes can be 30% faster than Kubernetes on KVM and up to 8% faster than Kubernetes on bare-metal Linux servers because of better CPU/NUMA scheduling.
  • Cloud computing analysis: CloudHealth (now part of VMware) offers cost management, governance, automation, security and performance reporting in a SaaS model.
  • Hybrid cloud: VMware introduced Cloud Foundation as a consistent SDDC platform available both on-premises and from CSPs
    • With ‘Cloud Director Service’ partners will be able to consume various infrastructure providers with a single portal.
    • The global presence of cloud data centers using VMware technology has reached over 10,000 (with AWS having the biggest footprint).
    • No expensive refactoring of legacy applications is needed when these already run on a VMware-based platform and are migrated to a VCF-based cloud. Customers can then modernise their architectures at their own pace.
    • Thomas Saueressig (board member at SAP) talked about his company’s journey together with VMware, e.g. how their business model has changed: SAP products are now more often run on cloud infrastructure than in on-premises data centers. Their next goals include containerising their software, hyper-individualisation and Industry 4.0.
    • When running workloads in a VMware-based cloud, CSP-specific services like DBaaS (e.g. AWS RDS) can be used as well. The services offered vary between the CSPs (IBM, Azure, GCP, AWS etc.), so many customers mix and match multiple CSPs, resulting in a multi-cloud strategy.
    • Another hybrid cloud trend is bringing CSP-specific services to private clouds as well, e.g. AWS Outposts.
    • Sari Granat (EVP, IHS Markit) shared insights and lessons learned from the company’s cloud journey, which heavily involves VMware Cloud on AWS (VMC), with the long-term goal of reducing their own data centers. As an early adopter of VMC with plenty of VMs (accumulated through company acquisitions), they have already migrated about 30% of their workloads to AWS.
  • VMware and Microsoft cooperation: After Azure VMware Solutions (which allows you to run VMware workloads natively on Azure, supported by HCX) was introduced earlier this year, the following were presented: Azure SQL 2019 as an on-premises service, VMware SD-WAN (VeloCloud) working together with Microsoft Azure Virtual WAN (e.g. for Azure IoT Edge), and Workspace One integration with Microsoft Endpoint Manager for Windows 10.
  • DC-as-a-Service: With VMware Cloud on Dell EMC (based on VCF & VxRail), Dell EMC offers enterprise services (e.g. Hardware-as-a-Service) to simplify the operation of on-premises data centers.
  • Hybrid cloud services: Several vRealize products can be consumed directly out of the cloud instead of running them locally to monitor and manage deployments. Other services offered by VMware include HCX, DRaaS, Appliances and Cloud Marketplaces, Wavefront, Horizon DaaS, Data protection.
  • Edge computing: VMware is defining use cases where running applications in the edge cloud makes sense and groups its offerings into thick, medium and thin edge (e.g. Pulse IoT Center).
  • 5G: The upcoming standard for mobile broadband internet will be affecting everything from daily life (e.g. retail experience) to manufacturing & automotive. How this will affect VMware’s product and ecosystem remains to be seen.
  • Telco cloud: With Project Maestro VMware plans to extend its engagement with telecommunication providers. It is an orchestration and operations framework leveraging a consistent cloud infrastructure to deliver services to (currently) over 100 ISPs.
    Another offering for mobile network operators is Uhana (AI-powered predictive analytics for radio access networks).
  • NSX advancements: VMware’s SDN portfolio consists of NSX Service mesh (Microservice interoperability), SD-WAN (Velocloud) and NSX Datacenter, which will be extended by the following:
    • Advanced load balancing (Avi Networks, acquired earlier this year)
    • NSX Distributed IDS/IPS
  • NSX opportunities: By replacing dedicated network and security appliances with NSX solutions CAPEX and OPEX have been reduced by more than 50%.
    • A customer perspective on this was given by Pauline Flament (Global Network Director of IT at Michelin), who explained how the company first introduced micro-segmentation using NSX and then SD-WAN for their branches using VeloCloud.
  • Intrinsic security: According to VMware, IT security in general is “broken”: the landscape of infrastructure components supposed to deliver security is too complicated for most customers and often not effective. Security needs to be built into the entire stack and enhanced with intelligent components (interpreting information from network, workload, endpoint, identity and cloud platform to provide analytics).
  • Cloud endpoint security: VMware aims to bring next-generation antivirus, rogue device detection and compliance reporting with products resulting from the recent Carbon Black acquisition, in addition to what AppDefense already does today (introspection of application behaviour in guest VMs to detect anomalies). Some of the features on VMware’s roadmap in this context are:
    • Agentless workload security for vSphere (even for antivirus)
    • Unified workspace security for Workspace One (client side)
    • Embedded network threat analytics (NSX)
    • Integrated cloud security solution (Secure State)
    • Managed security devices (trusted devices by Dell: Secureworks)
  • Endpoint Management: According to VMware, it is the only company offering management of all popular devices (macOS, Windows, Android, iOS) using Workspace One in a simple and secure (enterprise-grade) way.
  • Digital employee experience: Onboarding new employees and using day-to-day tools and processes shouldn’t be annoying. A good digital workspace should increase the employee’s engagement and productivity. The new AI powered ‘virtual assistant’ and ‘intelligent hub’ offered by Workspace One can help with this.
Tuesday General Session keynote – Announcement of NSX Distributed IDS/IPS

Inner Circle Panel & Luncheon

The ‘Inner Circle’ is a platform by VMware which invites participants to give feedback on the company’s products and services. The goal is to improve quality in various areas, like UI design, licensing models, interaction between solutions etc.
During VMworld five leadership representatives from different areas were available for a Q&A in a panel.
Afterwards, during lunch, they joined the guests at different tables, each of which had a topic assigned, to have discussions regarding that topic.

Inner Circle Panel & Luncheon

Odyssey Hands-on Lab Competition

The guys behind the VMware Hands-on Labs introduced a new variant of the platform, called Odyssey, which lets you test drive products in a browser-based VDI setup. The idea behind Odyssey is a competition where contestants have to solve the goals of a Hands-on Lab as fast as possible. The labs are, however, modified so that the step-by-step instructions usually displayed in a lab are not available.
During VMworld several rounds of this competition in various knowledge areas were held. I took part in a round with other bloggers/vExperts with tasks focussed on vSphere performance optimisations. Over the course of the week an elimination tournament with small teams was held, called the Odyssey Cup.
I think this is a great idea and a fun way to compare your skills with others, and a much better way to give away goodies than e.g. a raffle.
Odyssey will be touring across vForums and VMUG UserCons around the world.

Hands-on Labs Tour

A large portion of the VMvillage floor space is dedicated to the Hands-on Labs. Here you can take a lab on your own or schedule a classroom-like experience, where usually one of the creators of the specific lab is eager to help and explain the contents. You can also take brand-new labs during VMworld, which are then released to the public only a short while after. This time the new additions were focussed on “Project Pacific”, “NSX-T” and the bleeding-edge versions of the vRealize Suite.

Hands-on Labs

In addition to taking a lab, you could also get a glimpse behind the curtain and meet the people and the technology that bring the Hands-on Labs to life. At the entrance of the tour you were given a wireless headset so the tour guides didn’t have to shout.
The first stations we visited were the monitoring stations, where admins make sure the experience is as expected. This is especially interesting when you know that all labs are hosted in cloud environments, operated either by VMware or by a partner like AWS. A certain number of lab instances is always pre-provisioned, so that users don’t have to wait for resources to be provisioned when they start a new session. Used bandwidth, latency (measured using ThousandEyes) and the availability of the individual links and providers are some of the KPIs being monitored.
Another stop on the tour was the help desk, where lab users’ issues are solved and hardware (e.g. MacBooks) is handed out to people interested in a demo of VMware Workspace One’s BYOD provisioning service.

Hands-on Lab Tour

The tour finished at the Command Center, a wall of displays showing various parameters of the technology components involved in delivering the labs, like bandwidth or the amount of resources used in parallel and in total. All dashboards make use of VMware’s own products.
Ibrahim Quraishi recorded a video explaining these in more detail.

Hands-on Labs Command Center

vExpert NSX Briefing

vExperts who were also selected for the Network Virtualization sub-program were invited to a special briefing, where upcoming news and VMware’s SDN portfolio were teased. The leadership of the NSBU (Networking & Security Business Unit) also tried to answer questions from the vCommunity members as well as they could (future release timelines and feature details are of course kept secret).

vExpert NSX Briefing

vExpert Celebration Party

This was my second VMworld, but my first one in Barcelona as a vExpert, so the ‘vExpert Celebration Party’ was a premiere for me. It was held at a nice little restaurant/bar on the beach directly underneath the famous W Hotel and was attended by about 70 vExperts in total, including ‘Mr. vCommunity’ himself: Corey Romero.

vExpert Celebration Party – Christoph, me, Corey & Andy

It was great to meet all of the people I mostly knew from Twitter, and I had a great evening chatting. To put the cherry on top, Pat Gelsinger later showed up unannounced, sang Happy Birthday for Yves Sandfort and took the time to talk to everybody. Of course we took the opportunity to take a selfie with him to capture this special moment…

vExpert Celebration Party – Selfie with Pat

Veeam Party

To finish off a great day, one more highlight took place in a venue not far from the Plaça d’Espanya: the annual party sponsored and organised by backup & recovery software vendor Veeam.

Veeam Party – Outside

The party had plenty of finger food and drinks and, music-wise, offered a good mix of a DJ duo and a fantastic cover band, which would return at VMworld Fest two days later.

Veeam Party – Inside

VMworld Europe 2019 – Day 1 recap

Pre VMworld VMUG Event

Russel O’Connor and the other nice guys from the VMUG Barcelona team once more organised an event on Monday morning, which I decided to start the day with. The event, themed “Kubernetes in the spotlight”, was again held in a conference room at the Hotel Porta Fira and was crowded, probably because of some prominent presenters. Besides some engineers from OVHcloud (the main sponsor of the event), Scott Lowe and Cormac Hogan talked about Kubernetes (“K8s”) in general and its interaction with VMware products today, e.g. the Cluster API used to deploy and manage K8s, either directly or under the hood by Tanzu Mission Control. Cormac explained how the Cloud Native Storage introduced with vSAN 6.7u3 can be used for stateful applications, which usually use Persistent Volume Claims or NFS shares.

Pre VMworld VMUG Event – Kubernetes in the spotlight

Kicking off VMworld

Afterwards it was time to head over to the “Fira” and receive my badge at one of the many registration desks. All in all everything was very well organised, even small things like a coat and baggage drop area, which had a short queue most of the time, especially considering nearly 13,000 people were attending.

VMworld 2019 Europe – Hall 8

First things first: The obligatory photograph at the oversize VMworld sign with a couple of mates, this time Jörn Rusch, Christoph Villnow and Niclas Sieveneck:

At this time the activities in the VMvillage started with lots of fun things to do and people to meet.

New to the VMvillage was the VMware Champions booth, which offered snacks, games, prizes and giveaways for people already participating in the program or for new “recruits”. The program is based on an app called Advocate Hub and offers challenges like taking part in the community, reading news articles, providing feedback or referring new members.
It was fun to chat with the guys at the booth and take part in the challenge to collect the most points during VMworld, the winner of which would receive a longboard hand-signed by Pat Gelsinger. After I had been at the top of the leaderboard for a while, in the end the vCommunity’s own Andy won.

VMworld 2019 Europe – Champions booth social media wall

Most regular sessions ran from Tuesday to Thursday, as Monday is TAM and partner day, but some interesting 3–4-hour workshops, like “Running Kubernetes on vSphere”, “vSAN operations best practices”, “VMware cloud on AWS” or “Operating the Ultimate Hybrid Cloud with VMware Cloud Foundation”, took place in parallel and, according to other attendees, were all packed.

Partner Forum – General Session

Held especially for VMware’s ecosystem of partners, the Partner Forum in the afternoon gave a glimpse of the company’s vision and strategy (explained more thoroughly in the keynotes of the next two days) and presented Partner Connect, the new program for how the company will interact with its partners. It aims to bring the different parts of the current partner network, like resellers, solution providers and service providers, under one cloud-services-oriented umbrella. Participating partners will demonstrate their expertise with Solution Competencies or Master Solution Competencies (requiring several certified employees and reference projects) in these focus areas:

  • Data Center Virtualization
  • Cloud Management and Automation
  • VMware Cloud on AWS
  • VMware PKS
  • Network Virtualization
  • Digital Workspace
Partner Forum – General Session – Introduction by Pat Gelsinger

Rubrik Party at Pacha

A nice way to end the first day was a party sponsored by Rubrik taking place at the famous nightclub Pacha, directly located on the beach:

Rubrik Party at Pacha – Dancefloor
Rubrik Party at Pacha – Terrace

VMworld Europe 2019 – Day 0 recap

VMUG leader gathering

My second VMworld experience started with the VMUG leader gathering on Sunday, 3rd November 2019. It is an informal gathering of various representatives of the VMUG program and leaders from all around the world. It was a great chance to get to know the VMUG President Steve and Brad from VMUG leadership, as well as other leaders who have been in the business a bit longer than our newly created chapter “Rhein-Ruhr”.
The gathering was held in a small but very stylish bar with a comfy atmosphere. Big thanks for the team organizing it!

VMworld Europe 2019 – VMUG leader gathering

vRockstar Party

Another great event, organized by Patrick Redknap, Marco Broeken, Michael Letschin and a few others for the 8th time to kick off VMworld Europe. It gave everyone a great chance to do some networking while enjoying some drinks at this year’s venue, the ‘Cabaret – The Barcelona EDITION’ club. Cohesity, Comdivision, Kemp, Veeam, VMUG and Zerto were kind enough to sponsor everything.
In between a small panel was held to introduce the sponsors and to discuss what to expect of VMworld and the time to come:

VMworld Europe 2019 – vRockstar Party panel

Getting ready for VMworld Europe 2019

vCommunity

Many people put a lot of effort into empowering people who are not active in the vCommunity or aren’t even aware of it. I am myself quite new to this network of people sharing an interest in virtualization and cloud technology and what is possible with it, but I can already say the support both virtually (mostly on Twitter and Slack) and in person (e.g. at VMUGs and conferences) is amazing.
One such person is Yadin Porter de Leon, who founded the Level Up Project. The project’s goal is to make it easier for “newbies” to join the vCommunity and make the most of this network, be it for learning, networking or advancing their careers.
The Level Up Project’s contributors provided a guide with tons of information, called the “vTrail Map”, which was distributed during last year’s VMworld conferences in the US.
This year a couple of volunteers, called ambassadors, were invited to bring the vTrail Map to VMworld 2019 Europe in Barcelona, which is about to start on Monday. The ambassadors were instructed how to distribute the map in various areas of VMworld (recording online) so it reaches everybody.
It also is available online.

News & Resources

A good place to start reading about VMworld is the official blog page.
VMware Champions is an app which uses gamification (collecting points) to distribute news and gather feedback. There are currently several “challenges” online to inform about VMworld.
For specific and up-to-date news follow the official Twitter account and the hashtags #VMworld2019, #VMworld and #VMworldEU.
At the end of next week there will be a collection of articles written by a couple of vExperts who were selected for a so-called Blogger Pass. A preview of this with articles from VMworld US 2019 can be found here.

Parties & Events

There are plenty of events happening before and during VMworld, but most of them are invite-only.
Here is an unofficial VMworld 2019 EMEA Parties and Events list giving an overview of everything that is happening.
Andreas Lesslhumer and Manfred Hofer have already written about this extensively. (Big thank you!)

Swag

On top of the complimentary VMworld backpack (this year co-sponsored by Rubrik), which every visitor with a full pass receives after checking in, there are plenty of ways to get free promotional items. At the booths in the hall crawl you have a good chance of getting free stuff from sponsors and exhibitors of the VMware ecosystem if you engage in talks, attend mini presentations or enter raffles.
Also, the VMworld blog lists some more ways to win or obtain interesting giveaways, e.g. an Oculus Quest VR set.

Deploying a vRealize Network Insight 4.2 Collector/Proxy to receive NetFlow data from OPNsense routers

vRealize Network Insight [vRNI] supports receiving and processing flow information from a variety of network equipment from different vendors out of the box, but also offers the possibility to ingest NetFlow/IPFIX data from third-party devices, e.g. physical routers.

Assuming you already have your vRNI instance deployed, head to the Settings page of the vRNI WebGUI and click on “Accounts and Data Sources” to add such data sources.
If not, you can deploy vRNI quickly using the vRealize Suite Lifecycle Manager as described in this older blog post. It shows an older version of vRNI (4.1), but the process is the same for 4.2.

vRealize Network Insight – Accounts and Data Sources

The button “Add Source” brings you to a list of all supported sources (the option “Physical Flow Collector” is only available if you have an Enterprise license registered in vRNI):

vRealize Network Insight – Accounts and Data Sources, Add Data Source

The minimum deployment of vRNI consists of a platform VM, which you use to administer and work with the tool, and a collector VM (formerly called proxy), which can be selected as the target for most data sources.
To receive NetFlow data from a physical device, however, you need another, dedicated collector VM. If none was created earlier, the screen informs you that no collector VM is available:

vRealize Network Insight – Accounts and Data Sources, Add Physical Data Source, No Collector VM available

It does however offer you a button “Add Collector VM” to help create one.
When clicking the button a shared secret is displayed in a popup, which should be stored, as it is needed later on:

vRealize Network Insight – Accounts and Data Sources, Add Physical Data Source, Add Collector VM

Download the “vRealize Network Insight – Proxy OVA file” (7 GB) from my.vmware.com and deploy it either via the command line (see further below) or via the vSphere WebGUI:

vSphere Client – Deploy OVF Template

Enter the shared secret from before in the step “Customize template”:

vSphere Client – Deploy OVF Template, Customize template

An alternative to deploying the OVA via the WebGUI is VMware’s OVF Tool, which allows you to deploy virtual appliances from the command line of your operating system (Windows, Linux or macOS). Virtual appliances are distributed as file bundles, which usually contain a descriptor (.ovf), the virtual disks (.vmdk in the case of VMware environments) and a manifest (.mf) file containing hashes of the other files. For easier handling, a tar archive with the file extension .ova is created containing these files.
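As a quick illustration of this layout, a dummy bundle (with placeholder file names and contents, not the actual vRNI appliance) can be built and inspected with standard tar tools:

```shell
# Build a dummy appliance bundle to illustrate the OVA structure.
mkdir -p /tmp/ova-demo && cd /tmp/ova-demo
echo '<Envelope/>' > appliance.ovf          # descriptor (placeholder)
echo 'disk-data'   > appliance-disk1.vmdk   # virtual disk (placeholder)
# The manifest lists a hash per file, e.g. "SHA256(appliance.ovf)= <hash>":
sha256sum appliance.ovf appliance-disk1.vmdk | \
  awk '{ print "SHA256(" $2 ")= " $1 }' > appliance.mf
# An .ova is simply a tar archive of these files (descriptor first):
tar -cf appliance.ova appliance.ovf appliance.mf appliance-disk1.vmdk
tar -tf appliance.ova   # lists the bundled files
```

The same `tar -tf` invocation works on a real OVA like the vRNI proxy download if you want to peek inside before deploying.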

To use the OVF Tool first download the current version (as of writing this post 4.3.0 U2) from VMware {code} and install it.
Then you can deploy the OVA directly to your vCenter with the following command (modify the datastore name, VM folder, VM name, port group name, download path, credentials, data center and cluster names according to your environment, and replace the placeholder xxxxxx with the shared secret from before):

/Applications/VMware\ OVF\ Tool/ovftool -dm=thin -ds="vSAN xyz" --vmFolder="Management VMs" --acceptAllEulas --allowAllExtraConfig --name=vrni-collector2 --deploymentOption=large --net:"VM Network"="vRack-DPortGroup-vRealize" --prop:Proxy_Shared_Secret=xxxxxx /home/user/Downloads/VMware-vRealize-Network-Insight-4.2.0.1562947515-proxy.ova vi://username:password@vcenter.rainpole.local/Datacenter/host/Cluster/

The prefix “/Applications/VMware\ OVF\ Tool” is only needed if you are running macOS and did not add the directory where the OVF Tool was installed to the $PATH environment variable.
Select one of the deployment options, depending on your expected system load:

Deployment Options:
                medium: vCPUs: 4, Memory: 12GB.
                large: vCPUs: 8, Memory: 16GB.
                extra_large: vCPUs: 8, Memory: 24GB.

After a while the deployment should succeed with the following messages:

Opening OVA source: VMware-vRealize-Network-Insight-4.2.0.1562947515-proxy.ova
The manifest validates
Opening VI target: vi://username@vcenter.rainpole.local/Datacenter/host/Cluster/
Deploying to VI: vi://username@vcenter.rainpole.local/Datacenter/host/Cluster/
Transfer Completed
Completed successfully 

If you forgot to supply the shared secret as an argument you will receive the following error upon trying to power up the VM:

vSphere Client – Collector VM power on failed

You can still enter the shared secret or, if you entered false information earlier, correct it in the vApp Options properties as shown below:

vSphere Client – vApp properties of Collector VM

Upon clicking the edit button this popup allows adjusting the value:

vSphere Client – vApp properties of Collector VM, set value

After powering it up, the appliance needs to be initially configured via the VM console. Log in with the presented credentials (consoleuser / ark1nc0ns0l3) and enter “setup”:

vSphere Client – Collector VM setup start in VM console

Follow the wizard and enter the configuration options according to your environment:

vSphere Client – Collector VM setup finished in VM console

After finishing the configuration of the collector (formerly called proxy), you can select it from the drop-down list when adding a new physical NetFlow source on the “Accounts and Data Sources” page, as shown at the beginning of the post. Don’t forget to give it a nickname (e.g. the name of the collector VM or Netflow_collector):

vRealize Network Insight – Accounts and Data Sources, Add Physical Data Source, Collector VM available

Now you can send NetFlow information from physical sources to port 2055 of the collector VM’s IP address. NetFlow versions 5, 7, 9 and IPFIX are supported by vRNI, but keep in mind that version 5 does not support IPv6.

To test the deployment I used the free open source firewall distribution OPNsense, based on FreeBSD.
As described in the OPNsense wiki, NetFlow destinations and capture details can be configured in the “Reporting” section:

OPNsense configuration, Reporting: NetFlow

After a while vRNI should have received some flows, visible in the “Accounts and Data Sources” page:

vRealize Network Insight – Accounts and Data Sources, Flow count

A quick test can be done with the following query, suggested on Martijn Smit’s blog:

flow where Flow Type = 'Source is Physical' and Flow Type = 'Destination is Internet'

Further configuration of the NetFlow source or mapping in vRNI may be needed, e.g. regarding DNS or VLAN, which is both mentioned in Martijn Smit’s blog article.

Upgrading the VCSA via SSH

As the VMware vCenter Server Appliance 6.7 Update 2c patch (build 14070457) was released two days ago to resolve minor issues and update the Photon OS kernel to version 4.4.182, fixing a couple of security issues (release notes), it was time to update a couple of VCSA appliances I had set up for a client. After verifying that the backup schedule was still working as intended and taking a snapshot, I decided to start the upgrade via the CLI instead of the vCenter Server Appliance Management Interface (VAMI).
To be able to use the “software-packages” binary required for this, we first need to change the default shell of the root user; the SSH session usually looks like this before the change:

Using username "root".
Pre-authentication banner message from server:
|
| VMware vCenter Server Appliance 6.7.0.31000
|
| Type: vCenter Server with an embedded Platform Services Controller
|
End of banner message from server
root@vcenter[ ~ ]#

Enter the following commands to do the change:

chsh -s /bin/appliancesh root
logout

After reconnecting the prompt should now look like this:

Using username "root".

Pre-authentication banner message from server:
|
| VMware vCenter Server Appliance 6.7.0.31000
|
| Type: vCenter Server with an embedded Platform Services Controller
|
End of banner message from server

Keyboard-interactive authentication prompts from server:

End of keyboard-interactive prompts from server

Connected to service

* List APIs: "help api list"
* List Plugins: "help pi list"
* Launch BASH: "shell"

Command>

Now connect the patch ISO to the VCSA VM (via PowerShell or the vSphere Client) and start the upgrade with these commands:

software-packages stage --iso
software-packages list --staged
software-packages install --staged
reboot

If everything works as intended the result would look like this:

VMware vCenter Server Appliance upgrade process in SSH session

To skip the EULA prompt and speed up the process, or to stage the patches from a repository URL instead of the attached ISO, replace the first command with one of these lines:

software-packages stage --iso --acceptEulas
software-packages stage --url --acceptEulas

Creating workload domains in VMware Cloud Foundation 3.7.2 with NSX-T and vSAN

In VMware Cloud Foundation (VCF), workloads are usually deployed in one or more dedicated virtual infrastructure (VI) workload domains. During the VCF deployment (as shown in my earlier posts) the management workload domain (MWLD) is created with a minimum of four hosts. It contains, among other components, the management vCenter and the SDDC manager.
For each VI workload domain (WLD) created using the SDDC manager, a separate vCenter is deployed in the MWLD. These vCenters manage the WLDs’ hosts and are joined in vSphere linked mode. As only fifteen vCenters can be linked per the current configuration maximums, up to 14 WLDs are currently supported (the management vCenter counts as one of the fifteen).
Before the SDDC manager can create a WLD, enough hosts (minimum three per WLD) need to be commissioned. Click on the button “Commission hosts” either in the Dashboard or in the Inventory/Hosts view:

VMware Cloud Foundation – SDDC Manager, Commission hosts

The hosts need to be prepared similarly to the VCF deployment. This includes the ESXi version, hardware configuration and network settings (e.g. DNS resolution), as shown in the checklist below. In a later post I will provide some helpful PowerCLI snippets to accelerate the host preparation.

VMware Cloud Foundation – SDDC Manager, Commission hosts, Checklist

After clicking on “Proceed”, the details of the hosts need to be provided. Either add each host individually (select “Add new”) or perform a bulk commission by preparing and uploading a JSON file:

VMware Cloud Foundation – SDDC Manager, Commission hosts, host addition

The JSON template provided looks like this:

{
    "hostsSpec": [
        {
            "hostfqdn": "Fully qual. domain name goes here",
            "username": "User Name goes here",
            "storageType": "VSAN/NFS",
            "password": "Password goes here",
            "networkPoolName": "Network Pool Name goes here"
        },
        {
            "hostfqdn": "Fully qual. domain name goes here",
            "username": "User Name goes here",
            "storageType": "VSAN/NFS",
            "password": "Password goes here",
            "networkPoolName": "Network Pool Name goes here"
        }
    ]
}
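For a larger number of hosts, the bulk-commission JSON can be generated from a host list rather than edited by hand. The following Python sketch builds the structure of the template above; the host names, credentials and network pool name are placeholder values:

```python
import json

def build_commission_spec(hosts, network_pool, storage_type="VSAN"):
    """Build the bulk-commission JSON for the SDDC Manager.

    hosts: list of (fqdn, username, password) tuples.
    Field names follow the template shown above.
    """
    return {
        "hostsSpec": [
            {
                "hostfqdn": fqdn,
                "username": user,
                "storageType": storage_type,
                "password": pwd,
                "networkPoolName": network_pool,
            }
            for fqdn, user, pwd in hosts
        ]
    }

spec = build_commission_spec(
    [("esx01.lab.local", "root", "VMware123!"),
     ("esx02.lab.local", "root", "VMware123!")],
    network_pool="wld01-pool",
)
print(json.dumps(spec, indent=4))
```

Save the output to a file and upload it in the “Commission hosts” dialog.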

Besides each host’s details (FQDN, credentials) and the storage type (preferably vSAN), the network pool to be used must be provided. Later on, license keys are required as well: a total of three license keys, for vSphere, vSAN and NSX, should be entered in the “Administration/License” screen of the SDDC Manager.
Network pools are created in the “Administration/Network settings” screen. In this case, VLAN IDs and subnets for vMotion and vSAN separate from the default pool (used by the MWLD) are used:

VMware Cloud Foundation – SDDC Manager, Network pools

After the hosts are commissioned they show up in the “Unassigned hosts” tab:

VMware Cloud Foundation – SDDC Manager, Inventory/Hosts

Click on a host to show its details, e.g. manufacturer, model and storage capacity:

VMware Cloud Foundation – SDDC Manager, Inventory/Hosts, Host details

To create a new WLD use the “+ workload domain” button in the inventory:

VMware Cloud Foundation – SDDC Manager, Workload Domains

Select your storage in the next dialog box. vSAN and NFS are fully supported out of the box (Fibre Channel can be added later on manually, but must be managed independently):

VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Step 0

In the first step of the VI configuration wizard enter names for the WLD, the first cluster and the organization the domain is intended for:

VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Step 1

Then enter a free IP address in the management subnet, an FQDN configured in your DNS servers and a root password for the WLD’s vCenter:

VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Step 2

The most interesting part, if you are enthusiastic about VMware’s SDN portfolio, is the networking screen, which allows you to choose between the legacy product NSX-V and NSX-T version 2.4, released in 2019.
In both cases FQDNs, IP addresses and root/admin password for the NSX managers must be entered, as well as a VLAN ID used for the overlay transport (VXLAN for NSX-V; Geneve for NSX-T):

VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Step 3

If you selected vSAN as the primary storage provider in the first step, you need to enter the PFTT (primary failures to tolerate) parameter in step four. “One failure to tolerate” means each data set is replicated once, similar to RAID 1, so any one of the three required hosts can fail at any point in time without data loss.
If you have at least five hosts you can select PFTT=2, which means data is replicated twice, so two hosts may fail simultaneously. This is only the default setting, however; PFTT can also be set per object via storage policies later on.

VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Step 4
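The host counts above follow from vSAN’s RAID-1 mirroring: PFTT+1 data replicas plus PFTT witness components must each land on a separate host. A quick sketch of that rule:

```python
def min_hosts_raid1(pftt):
    # RAID-1 mirroring keeps pftt + 1 data replicas plus pftt witness
    # components, each on its own host: 2 * pftt + 1 hosts in total.
    return 2 * pftt + 1

assert min_hosts_raid1(1) == 3  # default: three hosts, any one may fail
assert min_hosts_raid1(2) == 5  # five hosts tolerate two simultaneous failures
```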

In the next step, select the hosts which shall be used for the initial WLD creation. Further hosts can be added to the WLD later. The host selection screen previews the accumulated resources of the selected hosts:

VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Step 5

In the License step, select the license keys entered before from the drop-down menus. Each license should provide enough capacity for the respective product (e.g. a sufficient CPU socket count) and must not be expired:

VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Step 6

The last two screens show a review of all entered parameters and a preview of the component names which will be created:

VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Step 7
VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Step 8

After finishing the wizard, the creation progress can be tracked in the Tasks view at the bottom of the SDDC Manager. If you click on the task, all of its subtasks and their status are shown below it:

VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Subtasks 1
VMware Cloud Foundation – SDDC Manager, Workload Domains, Add VI WLD, Subtasks 2

After some time the WLD creation tasks should succeed:

VMware Cloud Foundation – SDDC Manager, Workload Domains, Creating VI WLD succeeded

Open the overview of the newly created WLD under “Inventory/Workload Domains” to show its status. The “Services” tab features links to the vCenter and NSX-T Manager GUIs:

VMware Cloud Foundation – SDDC Manager, Workload Domains, Details of WLD

After a host is removed from a workload domain, or the entire WLD is deleted, the hosts are found under the “Unassigned hosts” tab again, but their state shows “Need Cleanup”:

VMware Cloud Foundation – SDDC Manager, Inventory/Hosts, Decommissioning

First, select the checkbox to the left of each host needing cleanup and click on the button “Decommission selected hosts”.

Then log in to the SDDC Manager using SSH (e.g. “ssh vcf@sddc-mgr.local”) and prepare a JSON file containing the hosts and their management credentials as follows:

[
   {
     "host1.local":
       {
         "root_user": "root",
         "root_password": "VMware123!"
       }
   },
   {
     "host2.local":
       {
         "root_user": "root",
         "root_password": "VMware123!"
       }
   }
 ] 
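If several hosts need cleaning up, this JSON can also be generated with a few lines of Python (the host names and passwords are the placeholder values from the example above):

```python
import json

def build_cleanup_spec(hosts):
    """Build the dirty-hosts JSON for the sos cleanup script.

    hosts: mapping of host FQDN to its root password; the field
    names follow the example above.
    """
    return [
        {fqdn: {"root_user": "root", "root_password": pwd}}
        for fqdn, pwd in hosts.items()
    ]

spec = build_cleanup_spec({
    "host1.local": "VMware123!",
    "host2.local": "VMware123!",
})
print(json.dumps(spec, indent=2))  # save e.g. as /tmp/dirty_hosts.json
```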

Now run the following commands found in the VCF documentation to commence the cleanup:

su 
cd /opt/vmware/sddc-support
./sos --cleanup-decommissioned-host /tmp/dirty_hosts.json
VMware Cloud Foundation – SDDC Manager, Host cleanup script

Afterwards, however, the network cleanup still remains, which requires access to the Direct Console User Interface (DCUI).
If the network cleanup is not performed, you will be presented with errors as shown below when trying to re-commission the hosts:

VMware Cloud Foundation – SDDC Manager, Host addition of partly cleaned up hosts, Error 1
VMware Cloud Foundation – SDDC Manager, Host addition of partly cleaned up hosts, Error 2

When logging into the ESXi management GUI in your browser you can see the leftover distributed virtual switch and its port groups from the previous WLD:

VMware ESXi, Network settings

Perform the network cleanup by logging into the DCUI with the root user and then select “Network Restore Options”:

VMware ESXi, DCUI, Network Restore Options

Then select the “Restore Network Settings” option, which resets all network settings and devices to the defaults:

VMware ESXi, DCUI, Network Restore Settings
VMware ESXi, DCUI, Network Restore Settings, Done

Re-configuration of management network settings like IP address, subnet mask, default gateway and VLAN is needed afterwards.
Now the cleaned-up hosts are ready to be re-commissioned, which works as shown at the beginning of this post.

Migrating VMkernel adapters to logical switches through NSX-T N-vDS

In hyperconverged setups the servers usually have a very limited number of physical network interfaces. So when using your ESXi hypervisor hosts as NSX-T transport nodes, you often can’t use dedicated vmnic devices as VTEPs.
This post shows how you can use the same physical adapters for VTEP traffic and for VMkernel adapters (e.g. for vSAN or vMotion) by migrating them to an N-vDS switch while configuring the hosts for NSX-T.

The starting point in this example is a host with two network cards, one quad-port 10 GbE card and one dual-port 100 GbE card, resulting in six available ports. The first two are used by a vSphere Distributed Switch, which contains a port group for the management VMkernel adapter (vmk0). The next two ports are reserved for future use (e.g. iSCSI), so the last two ports are supposed to function as uplinks for our N-vDS. Both ports will be used as active uplinks with the teaming policy “LOADBALANCE_SRCID”.

vSphere Client – Physical adapters before migration

To be able to migrate the vSAN and vMotion VMkernel adapters, they need to be created first.
If you are using PowerCLI you can use the New-VMHostNetworkAdapter cmdlet, for example (host name, port group and IP details are placeholders):

New-VMHostNetworkAdapter -VMHost esx01.lab.local -PortGroup "vSAN" -VirtualSwitch vds01 -IP 192.168.10.11 -SubnetMask 255.255.255.0 -VsanTrafficEnabled $true

In the vSphere Client open the Configure/VMkernel adapters view and click on “Add Networking…”:

vSphere Client – Adding VMkernel adapters

As the port group is going to be replaced by a logical switch anyway it does not matter which network is selected:

vSphere Client – Adding VMkernel adapters, Select target device

Set up the port settings depending on its purpose:

vSphere Client – Adding VMkernel adapters, Port properties vSAN

Configure the IP address settings according to your design:

vSphere Client – Adding VMkernel adapters, IPv4 settings

Repeat the steps for the vMotion VMkernel adapter. The use of the custom vMotion TCP/IP stack is recommended:

vSphere Client – Adding VMkernel adapters, Port properties vMotion

Finally our two additional adapters are created:

vSphere Client – VMkernel adapters before migration

In the NSX-T GUI you can migrate VMkernel adapters to an N-vDS in three different ways, depending on how you configure your host transport nodes.
If the host is not part of a cluster which has a Transport Node Profile assigned, it can be configured manually as shown here:

NSX-T – Fabric/Nodes/Host Transport Nodes

After configuring the details like transport zones etc. the VMkernel migration can be set up after clicking on “Add Mapping”:

NSX-T – Fabric/Nodes/Host Transport Nodes, Configure NSX

Add a mapping for each vmk-adapter:

NSX-T – Fabric/Nodes/Host Transport Nodes, Configure NSX – Add Network Mappings for Install

Select which logical switch should be used for connectivity for each vmk-adapter:

NSX-T – Fabric/Nodes/Host Transport Nodes, Configure NSX – Network Mappings for Install
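The same mappings can also be supplied through the NSX-T transport node API, where the host switch specification carries a “vmk_install_migration” list of device-to-network pairs. A minimal sketch of that payload fragment; the N-vDS and segment names are placeholder values:

```python
def vmk_mappings(pairs):
    """pairs: list of (vmk device, destination logical switch) tuples."""
    return [
        {"device_name": vmk, "destination_network": segment}
        for vmk, segment in pairs
    ]

# Fragment of a transport node host switch spec; "nvds-overlay",
# "ls-vsan" and "ls-vmotion" are placeholder names.
host_switch = {
    "host_switch_name": "nvds-overlay",
    "vmk_install_migration": vmk_mappings(
        [("vmk1", "ls-vsan"), ("vmk2", "ls-vmotion")]
    ),
}
```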

In the second case a transport node is already configured for NSX, but no mappings have been added as shown above. Select the host transport node and click on the “Migrate ESX VMkernel and Physical Adapters” entry in the “Actions” menu:

NSX-T – Fabric/Nodes/Host Transport Nodes, Migrate ESX VMkernel and Physical Adapters

The third way is to create a Transport Node Profile which contains “Network Mappings for Install” as shown above.

NSX-T – Fabric/Profiles/Transport Node Profiles

When the profile is attached to a cluster as shown below, any host added to that cluster in vSphere is automatically configured for NSX-T (including the vmk-adapter mappings):

NSX-T – Fabric/Nodes/Host Transport Nodes, Configure NSX for a cluster

A green checkmark next to the attached profile is shown for the cluster when NSX-T has finished configuring all hosts:

NSX-T – Fabric/Nodes/Host Transport Nodes, Transport Node Profile attached

In the vSphere client you can verify whether the correct logical switches are used for the migrated VMkernel adapters:

vSphere Client – VMkernel adapters after migration

The physical adapters used as uplinks for the N-vDS are also visible in the vSphere Client:

vSphere Client – Physical adapters after migration

If your hardware only has two physical interfaces, you can migrate the management VMkernel adapter (usually vmk0) to the N-vDS as well. The NSX-T product documentation shows this in a diagram and offers some additional considerations, e.g. that the DVS port group type should be set to Ephemeral when reverting back from an N-vDS.

Upgrading vRealize Network Insight v.4.1.1 with vRLCM

Recently, when checking the vRealize Suite Lifecycle Manager GUI in the lab I am working on, I noticed a new notification (a red dot at the bell symbol in the upper right corner). Further inspection of the notifications showed the availability of Product Support Pack 2 (Content Version 2.1.0.4), as shown in the lower entry in the screenshot below.
It is also mentioned at the vRealize LCM release page at “VMware docs”.

vRealize Suite Lifecycle Manager – Notifications

Comparing the supported product versions of this new version with its predecessor (Version 2.1.0.2) reveals that vRealize Network Insight 4.1.1 is now supported (highlighted in blue):

vRealize Suite Lifecycle Manager – Settings/Update

The release notes show all fixed issues, which are mostly focused on performance and stability.

After applying the new version, a new entry appears in the Product Support section. As usual, start the download in the “Actions” column.
If your My VMware credentials are not configured in the Lifecycle Manager, or your deployment is at a dark site, you can always download the product binaries manually, upload them via SCP and map them yourself, as shown in my previous post.

vRealize Suite Lifecycle Manager – Settings/Product Support

After the product binaries are available, you can either deploy a fresh vRNI instance or upgrade existing environments, as shown in the screenshot below. You can also import existing vRNI deployments into an LCM environment, even if they were created without an LCM or by a different LCM.

vRealize Suite Lifecycle Manager – Environments

Follow the wizard by clicking on “Next” or on “Check compatibility matrix” to make sure the products used in your environment are supported:

vRealize Suite Lifecycle Manager – Environments/Upgrade

vRealize Network Insight 4.1.1 supports all recent VMware products, like NSX, vCenter Server & vRealize Log Insight, as shown in the compatibility matrix (NSX-T is not mentioned, but is supported as well):

vRealize Suite Lifecycle Manager – vRNI 4.1.1 compatibility matrix

Before upgrading you should run the pre-check validations. If any items do not show the “Successful” status, you should follow the recommendations before proceeding:

vRealize Suite Lifecycle Manager – Environments/Upgrade/Precheck

Once the upgrade request is submitted, you can check the status in the “Requests” section:

vRealize Suite Lifecycle Manager – Requests (In progress)

Depending on the specifications of your environment (e.g. cluster size, computing power etc.), the upgrade process will take some time to complete. In this lab it took almost 50 minutes.

vRealize Suite Lifecycle Manager – Requests (Completed)

To verify the successful upgrade log into your vRNI GUI and open the “About” page in the “Settings” section. The version string should show the following:

vRealize Network Insight – Settings/About

Deploying vRealize Network Insight 4.1.0 with vRSLCM

At the beginning of May, vRealize Network Insight (vRNI) 4.1 was released with a lot of interesting new features and enhancements described in the release notes.

It is getting more and more popular to use the vRealize Suite Lifecycle Manager appliance to deploy vRealize components like vRNI. In earlier posts I described how to deploy and update this tool to the current version, as shown in the screenshot below:

vRealize Suite Lifecycle Manager Version 2.1.0 Patch 1

In that version, however, support for vRNI 4.1.0 does not come out of the box. Instead, you first have to install a product support pack available in the VMware Marketplace / Solution Exchange.

Download page for vRealize Network Insight 4.1.0 product support pack for vRealize Suite Lifecycle Manager

After installing the .pak file in the vRSLCM GUI under the “Settings/System Administration” page, the new version needs to be activated by clicking on the “Apply version” button:

vRealize Suite Lifecycle Manager – Installing a product support pack

You can check which products are supported by your deployment at any time by clicking on the user name in the top right corner and then on “Products”, which opens a pop-up window.
The message “Policy successfully refreshed” confirms the new version is applied correctly:

vRealize Suite Lifecycle Manager – Applying an installed product support pack

Of course, vRSLCM needs access to the product binaries. If the appliance has internet access and you provide your my.vmware.com credentials, it can download the .ova files directly.
For dark sites you can download both the “proxy” and “platform” .ova files on your workstation and upload them using SCP/SFTP (the screenshot shows WinSCP):

Uploading .ova files to vRealize Suite Lifecycle Manager using WinSCP

Add the product binaries to the product binary repository by entering the base location where you uploaded the .ova files earlier and then clicking on the “Discover” button. Finally, select the added binaries and click “Add”:

vRealize Suite Lifecycle Manager – Adding product binaries

It takes a while until the product binaries are mapped and show up in the list:

vRealize Suite Lifecycle Manager – Adding product binaries in progress

Now you can deploy vRNI using vRSLCM by adding it to an existing environment or by creating a new environment. You have two deployment options for vRNI: Standard (1 platform VM and 1 collector VM) or Cluster (3 platform VMs and 1 collector VM). If you select “Cluster”, only large nodes will be deployed; otherwise you can choose between “Standard” and “Large”.

This blog post shows all the required steps in between (providing certificate information, network details like IP addresses, subnet mask, gateway, port group and so on). Although the post is based on older versions of both vRealize Suite Lifecycle Manager and Network Insight, the steps are mostly the same.

After entering all the details for creating a new environment you should run the pre-check validations:

vRealize Suite Lifecycle Manager – Pre-checks for deploying vRealize Network Insight in progress

If the validation succeeds you can commence the environment creation:

vRealize Suite Lifecycle Manager – Pre-checks for deploying vRealize Network Insight successful

During the environment creation you can track the progress under the corresponding “In progress” request:

vRealize Suite Lifecycle Manager – Deploying vRealize Network Insight in progress

Once the request completes the deployment is ready to use:

vRealize Suite Lifecycle Manager – Deploying vRealize Network Insight successful

You can access the vRNI GUI via HTTPS on the configured address. Use the default admin user “admin@local” and the password you selected:

vRealize Network Insight login page

After first login the main features are explained in four separate screens:

vRealize Network Insight welcome page 1/4
vRealize Network Insight welcome page 2/4
vRealize Network Insight welcome page 3/4
vRealize Network Insight welcome page 4/4

You can use the self-service wizard, which helps you configure and learn about your vRNI deployment. Among the first steps, it suggests adding data sources like vCenter Servers and NSX Managers:

vRealize Network Insight – Self Service

Apart from physical devices like routers and switches, a whole variety of transport and infrastructure components can be added as data sources:

vRealize Network Insight – Adding accounts and data sources

After some time recording flow information, vRealize Network Insight is ready to display a first example path, in this case how a VM attached to a logical switch (an NSX-T 2.4 segment) connects to the Internet. The path from the T1 distributed router on the same host as the VM (cyan background) to the service router on the Edge transport node (purple background) is visible. As the physical switches and routers behind the NSX-T edges have not been configured as data sources (yet), no further topology information is available between the service router and the Internet.

vRealize Network Insight – First packet flow/path