After a VMware Cloud Foundation deployment has been updated to the current version, as described previously, a few follow-up tasks remain. First, the vSAN datastore's disk format version might need an upgrade. To check this, head to the “Configure” tab of your cluster in vCenter and click on “vSAN / Disk Management”:
Of course you should first run the pre-check by clicking the corresponding button. If everything is in order, it will look like this:
Now you can click the “Upgrade” button, which warns you that this can take a while. You should also back up your data/VMs elsewhere, especially if you select “Allow Reduced Redundancy”, which speeds up the process:
As you can see, the disk format version has now changed from “5” to “7”:
However, some vSAN issues are still displayed:
As this deployment is a “dark site”, meaning no internet access is available, the HCL database and Release catalog have to be updated manually.
The URL to download the 14.7 MB file can be found in a post by William Lam from 2015 or in this KB article. The release catalog's URL is taken from another KB article; this file is less than 8 KB in size. After uploading both using the corresponding “Update from file” buttons, the screen should look like this:
The last remaining issue in this case was that the firmware version of the host bus adapter connecting the vSAN datastore devices could not be retrieved (“N/A”):
Since the firmware version listed in the host's iDRAC (see next screenshot) matches one of the “Recommended firmwares” from above, I decided to simply hit “Silence alert”. Alternatively, one could look for an updated VIB file allowing the ESXi host to retrieve the firmware version from the controller.
One more effect of the upgrade from 3.0.1.1 to 3.5 is the appearance of three more VMs in vCenter. These are the old (6.5.x) instances of the Platform Services Controllers and the vCenter. New instances running version 6.7.x were deployed during the upgrade. After all settings had been imported from the old instances, these were powered off and kept in case something went wrong. After some time, once everything has been confirmed to work as expected, those three VMs may be deleted from the datastore:
In this post I would like to show you the process of updating a VCF deployment at a customer site to the current version, which was released in mid-December. The pictures show only the update of the management workload domain, as that is the only one currently available there. If you have multiple VI/VDI workload domains, you still have to update the management domain first, followed by the individual workload domains.
The steps necessary are the same as in previous updates. For example, if your environment is isolated from the internet, you can use a laptop to download the bundle files based on a delta file provided by the SDDC Manager and import them afterwards, as described in one of my previous posts. The update itself can be scheduled or started immediately. The process is the same as before, but consists of multiple phases.
The first phase updates the VCF services themselves, including the Domain Manager, the SDDC Manager UI and the LCM:
Afterwards the NSX components are updated to version 6.4.4, as shown in the screenshot below:
In the next phase the Platform Services Controllers (typically two for the management domain) and the vCenter are updated to version 6.7. Sadly, the first release of the VCF 3.5 update bundles contained a bug resulting in an error in the stage “vCenter upgrade create input spec”:
The SDDC Manager's log file “/var/log/vmware/vcf/lcm/lcm-debug.log” only showed a “java.lang.NullPointerException” error at the component “com.vmware.evo.sddc.lcm.orch.PrimitiveService”, which didn't help me much, so after an unsuccessful Google search I contacted VMware's support. Upon opening a support case on my.vmware.com, a very friendly Senior Technical Support Engineer got back to me within minutes and pointed my attention to this knowledge base article. Apparently the issue cannot be fixed in place, but a new update bundle is available that replaces the buggy one. If your SDDC Manager has internet access it can download the bundle automatically, but at a “dark site” you first need to get rid of the faulty bundle's ID by running the following Python script:
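The script itself is not reproduced here; its exact path and name come from the KB article. Purely as a hypothetical sketch of what such an invocation might look like (script path and bundle ID are placeholders, not taken from the KB article):

```
# Hypothetical sketch — substitute the actual script and bundle ID from the KB article
python /opt/vmware/vcf/lcm/<cleanup-script>.py <faulty-bundle-id>
```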
Afterwards a new marker file has to be created and transferred to a workstation with internet access, where the updated bundles are downloaded (same procedure as described before):
This screen shows the successful import of the previously downloaded bundles after copying them back to the SDDC Manager:
Finally we can retry phase 3. As you can see here, a new screen appears now:
As vCenter appliances cannot be upgraded from 6.5 to 6.7 directly, a new appliance has to be deployed, which then imports all settings from the old one. To complete this process the SDDC Manager needs a temporary IP address for the new appliance in the same range as the vCenter/PSC:
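The "same range" requirement simply means the temporary address must fall into the same subnet as the existing vCenter/PSC. A minimal Python sketch for sanity-checking a candidate address before starting the wizard (function name and sample addresses are illustrative, not from the original post):

```python
import ipaddress

def same_subnet(candidate_ip, vcenter_ip, prefix_len):
    """Return True if candidate_ip lies in the same subnet as vcenter_ip.

    candidate_ip / vcenter_ip are dotted-quad strings; prefix_len is the
    subnet prefix length, e.g. 24 for a /24 management network.
    """
    net = ipaddress.ip_network(f"{vcenter_ip}/{prefix_len}", strict=False)
    return ipaddress.ip_address(candidate_ip) in net
```

For example, with the vCenter on 10.0.0.10/24, a temporary address of 10.0.0.50 would pass the check while 10.0.1.50 would not.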
Check the review screen to confirm the temporary IP settings and hit “Finish” to start the update:
Hooray! The update process did not fail at the stage it did before:
After a little more than an hour all three appliances are up-to-date:
As we can see in the domain's overview screen, all components are updated except for the ESXi hosts:
This means the fourth and final phase can be started: the update of the ESXi hosts to build 1076412:
This concludes the update to VCF 3.5 as all components now have the current build numbers:
The next screenshot of the Update history section shows the update from 3.0.1 to 3.0.1.1 and the four updates from above:
After deploying vROPS using the vRSLCM yesterday, today's task was to deploy two separate instances of vRealize Log Insight. Both instances should consist of a cluster of one master and three workers (deployment type “Medium with HA”) and be placed on different hypervisor clusters, each managed by its own vCenter and separated by a third-party firewall. Finally, the “outer” vRLI cluster would forward its received telemetry to the “inner” cluster, which will function as part of a central SIEM platform.
The first step is to deploy both of the clusters. Again the “Create Environment” screen is used:
After entering all the deployment parameters, the pre-check is performed, but it failed. Allegedly the IP addresses provided could not be resolved. Correctly configured Active Directory servers with the corresponding A and (reverse) PTR records were set up and reachable, so the warnings were ignored:
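Since the pre-check blamed name resolution, it can be worth verifying the A and PTR records independently before ignoring such warnings. A minimal Python sketch (function names are illustrative, not part of the vRSLCM tooling):

```python
import socket

def check_forward_dns(hostname, expected_ip):
    """Resolve hostname via an A lookup and compare with the expected IP."""
    try:
        resolved = socket.gethostbyname(hostname)
    except socket.gaierror:
        return False
    return resolved == expected_ip

def check_reverse_dns(ip):
    """Return the PTR name for ip, or None if no reverse record exists."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return None
```

Running both checks against every node's FQDN and IP from the deployment parameters quickly shows whether the warning is spurious.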
The environment creation is initiated:
After deploying the master the three workers are deployed in parallel:
After deploying the three workers the LCM fails to configure the supplied NTP servers for some reason:
At this point you have two options. The first is deleting the environment (including the VMs, via the checkbox below) and starting over, e.g. if you actually made a mistake:
The other option is to resume the request (the arrow on the right disappeared right after clicking it, so I drew one where it was):
This time the step, and eventually the entire request, finished successfully. From the vCenter perspective the result looks like this:
This process is repeated for the second cluster / environment, leaving us with two environments, each with a vRealize Log Insight cluster:
The next step is to set up message forwarding, so that the “inner” cluster also receives the messages from the devices logging to the “outer” cluster, while the firewall between the clusters only allows SSL-secured traffic from one cluster to the other. Before configuring the two vRLI clusters, we first need to export the certificate of the “inner” cluster, which was created separately using the vRSLCM. (If the same certificate is used for both environments, e.g. with subject alternative name = *.”parent.domain”, you can skip this:)
Optionally, the receiving (“inner”) cluster can be configured to accept only SSL-encrypted traffic:
Finally, the FQDN for the virtual IP of the “inner” cluster is added as an event forwarding destination in the configuration page of the “outer” cluster. The protocol drop-down should be left on “Ingestion API”, as changing it to “Syslog” would overwrite the original source IPs of the log entries. After checking the “Use SSL” box, verify the connection by using the “Test” button:
If no filters are added here all events received by that vRLI cluster will also be available on the other one.
For testing the setup I configured an NSX-T Manager, placed in the “inner” management cluster, to log directly to the “inner” cluster, and a couple of edge VMs, which were deployed to the “outer” edge cluster, as described here.
In my previous post I described how to deploy vRealize Lifecycle Manager 2.0 and import product binaries and patches. Now it is time to make use of it and deploy the first vRealize product: vRealize Operations Manager. There are some steps you need to complete first, like generating a certificate or certificate signing request, and some optional tasks, like adding an identity manager or Active Directory association. As they are described quite well in the official documentation, I will skip them here.
Before you can add an environment (the term used for deploying vRealize products) a vCenter has to be added. The documentation states how to add a user with only the necessary roles, but for testing purposes you can also use the default administrator SSO account.
If you have an isolated environment the request to add a vCenter will look like the above screenshot, as it can’t get patches from the internet, but it will still work. In the “Create Environment” screen you can select which products you want to deploy. For each product you need to select the version and the deployment type:
Next to the deployment type each product has a small “info” icon. Upon clicking that the details to each type are displayed:
After selecting your desired products you have to accept the license agreements and fill in details like license keys, deployment options, IP addresses, host names etc.
After putting in all necessary information a pre-check is performed:
The pre-check verifies the availability of your DNS servers, datastores and so on:
After submitting the LCM creates the environment according to your input:
As I made a mistake in the DNS server configuration the request failed.
Upon clicking “View Request Details” a more detailed view is presented (see screenshot below). Before deleting the environment and giving it another shot after fixing the mistake, you should export the configuration. Two options are offered: Simple or Advanced. I picked Simple, which lets you download most of the parameters you entered as a JSON file.
The red info icon in the lower left corner gives even more details. In my case the successfully deployed master node was not reachable because of the DNS misconfiguration mentioned above.
In the “Create Environment” screen you can paste the contents of the saved JSON file (see above) to speed up the process. This brings you directly to the pre-check step. However you still need to go back one step and select your NTP servers – this doesn’t seem to be included in the JSON configuration. While the environment creation request is in progress you can also see details:
Finally the request finished successfully. Some steps were left out, probably because this is a single node deployment and not a “real” cluster…
After the environment is created you can (and should) enable health checks via the menu which opens when you click the three dots in the upper right corner of the request box. This menu also lets you download logs and export the configuration, as done before.
The first task I am going to do with the newly deployed vROPS is to install the HF3 security fix imported earlier:
Just select the patch, click “Next” to review and install:
You can monitor the patch installation progress:
To be able to use the integrated Content Management you have to configure the environment as an endpoint. Just click the “Edit” link which appears when clicking the three dots next to the list element:
First confirm or modify the credentials entered earlier and test the connection:
Finally, you have four checkboxes to select your desired Policy Settings:
I will pick up the Content Management section in another blog post. Until then, the vROPS deployed using the vRealize Suite LCM can be used as usual by opening the web GUI. It asks you to set your currency (which can't be modified later on!) and is ready to fill its dashboards with data as soon as you configure the parameters and credentials for the solutions you want to monitor, e.g. vCenter:
Sometimes when a storage device (i.e. SSD or HDD) has been used for a previous vSAN deployment or has other leftovers, it cannot be re-used right away (either for vSAN or a local VMFS datastore). When you try to format the drive as shown below, the error message “Cannot change the host configuration” appears:
The easiest way is to change the partition scheme from GPT to MSDOS via the CLI (and back via the GUI), as has been described in the community before.
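As a rough sketch of that CLI step (the device name below is a placeholder; identify your actual device first):

```
ls /vmfs/devices/disks                                  # identify the affected device
partedUtil mklabel /vmfs/devices/disks/naa.xxxxxxxx msdos   # relabel from GPT to MSDOS
```

Afterwards the drive can be re-partitioned via the GUI, which writes a fresh GPT label again.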
However, even that may fail, e.g. with the error “Read-only file system during write”. This can occur if the ESXi hypervisor finds traces of old vSAN deployments on the drive and refuses to overwrite them. In that case you first have to delete those traces manually. Log into the host in question as the root user and issue the required vSAN commands. For deleting an SSD the command looks like this:
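The original command listing is not reproduced here; as a hedged sketch, removing a vSAN-claimed SSD is typically done with esxcli (the device name is a placeholder):

```
esxcli vsan storage list                        # list vSAN-claimed devices and find the SSD's name
esxcli vsan storage remove --ssd=naa.xxxxxxxx   # remove the SSD (and its disk group) from vSAN
```

Note that removing a cache-tier SSD tears down the whole disk group it belongs to, so only do this on a host whose vSAN data is evacuated or disposable.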
In my company's lab I found a couple of quite old x86 servers which were not in use anymore. The rack servers are in fact so old that the original manufacturer (Sun) doesn't exist anymore. The model is named “X4270 M2” and has been labeled end-of-life by Oracle for a while now. They are equipped with Intel Xeon processors released in 2011 (!), code name Westmere EP. That is in fact the oldest dual-socket CPU generation by Intel supported by ESXi 6.7 (needed soon for the VCF 3.5 upgrade). I found some more servers, but those are equipped with Nehalem CPUs, so not hypervisor material; one possibility to give them a new purpose could be as bare-metal NSX-T edges…
The main concerns about whether VCF could be successfully deployed on old hardware like that (from a time when vSAN Ready Nodes, as required by VCF, were not even a thing yet) were compatibility with VMware's HCL (especially HDDs, SSDs & RAID controller), the lack of 10 GbE adapters and not enough RAM. Preparing the five servers (four for the management domain and one for the Cloud Builder VM) with ESXi was by the book, except for a well-known workaround needed on old Sun servers. For NTP, DNS and DHCP the OPNsense distribution was used once more. After uploading the filled-out Deployment Parameter Sheet, the Cloud Builder VM started its validation, resulting in only one warning/error regarding the cache/capacity tier ratio, which can be acknowledged. In fact the same message was displayed at a customer's site with Dell PowerEdge R640 nodes with 4 TB/800 GB SSDs. This seems to be related to a known issue.
However, after hitting Retry another error was displayed, saying that no SSDs available for vSAN were found. This could be confirmed by logging into any of the hosts' ESXi interfaces. The Intel SSDs were marked as hard disks and could not be marked as flash via the GUI. The reason for this is the LSI RAID controller, which does not have a SATA bypass mode, meaning you have to create a RAID 0 virtual disk for each pass-through drive, so the hypervisor has no clue which hardware device lies underneath. Upon investigating further in VMware's KB, a storage filter for local devices can be added via the CLI so that after a reboot the device will be marked correctly as an SSD:
esxcli storage core device list
[Find the SSD which is supposed to be marked as such, e.g. "naa.600605b00411be5021404f8240529589"]
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device naa.600605b00411be5021404f8240529589 --option "enable_local enable_ssd"
Finally the Cloud Foundation Bring-Up Process could be initiated. Still no luck however, as an error deploying the NSX manager was displayed:
As the error occurred at that stage, the Platform Services Controllers, SDDC Manager and vCenter had already been successfully deployed and were reachable. After logging into the latter it was clear that all VMs had been placed on the first host, so no more RAM was available for the NSX Manager.
The first attempt to fix this problem was to migrate all VMs deployed so far onto the other three hosts. Afterwards the Bring-Up process could be resumed by hitting Retry, but eventually the same error came up again. It became apparent that the four hosts were not equipped with a sufficient amount of RAM (24 GB) after all. After shutting down the hosts in the correct order, more RAM was added (however, still less than the documented required minimum: 72 GB vs. 192 GB) and the hosts were started up again.
Now the Bring-Up went through, resulting in an up-to-date, automatically deployed private cloud SDDC on more than 8-year-old hardware…
Of course this setup is only valid for lab tests, as disregarding VMware's minimum requirements and design recommendations is unsupported and not suited for production.
If your VCF SDDC deployment does not have internet connectivity, you can manually download update bundles on another machine and import them afterwards. Here are the necessary steps on a Windows workstation.
First use PuTTY to connect to the SDDC Manager as the user “vcf” with the password set in the Cloud Foundation deployment parameter spreadsheet (red circle in the image above) and run the following commands:
cd /opt/vmware/vcf/lcm/
su [enter root password; see green circle in top image]
mkdir bundleimport
chown vcf:vcf bundleimport
exit
cd lcm-tools/bin/
./lcm-bundle-transfer-util --generateMarker
Create a folder on your Windows machine (e.g. “C:\…\bundleupdate”) and copy the remote files “markerFile” and “markerFile.md5” from “/home/vcf/”, as well as the entire “/opt/vmware/vcf/lcm/lcm-tools/” directory structure, using WinSCP. In that folder create another subfolder; in my case I called it “downloadedBundles”. Make sure you have a current version of Java (JRE) installed. Open a command prompt and run the following commands (when asked, enter your my.vmware.com password):
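The exact command listing is not reproduced here; as a hedged sketch based on the bundle transfer utility's options in VCF 3.x (all paths are placeholders, and option names may differ between versions):

```
cd <your-folder>\lcm-tools\bin
lcm-bundle-transfer-util -download -outputDirectory <your-folder>\downloadedBundles -depotUser <my.vmware.com-username> -markerFile <your-folder>\markerFile -markerMd5File <your-folder>\markerFile.md5
```

The marker files tell the utility which bundles your SDDC Manager already has, so only the missing delta is downloaded.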
After the download is complete, unplug your internet cord and connect to your VCF deployment once more. Using WinSCP, copy the contents of your local folder “C:\…\bundleupdate\downloadedBundles” to “/opt/vmware/vcf/lcm/bundleimport”. Then use PuTTY again to run these commands:
cd /opt/vmware/vcf/lcm/lcm-tools/bin
chmod -R 777 ../../bundleimport
./lcm-bundle-transfer-util -upload -bundleDirectory /opt/vmware/vcf/lcm/bundleimport
Another customer, another project – and again the need to deploy a couple of vRealize components (Log Insight, Network Insight, Operations Manager, Automation & more). Why not use the same helper tool VMware Cloud Foundation uses to deploy “vROPS” and “vRA”?
VMware describes this management appliance as follows:
vRealize Suite Lifecycle Manager automates install, configuration, upgrade, patch, configuration management, drift remediation and health from within a single pane of glass, thereby freeing IT Managers/Cloud admin resources to focus on business-critical initiatives, while improving time to value (TTV), reliability and consistency. Automates Day 0 to Day 2 operations of the entire vRealize Suite, enabling simplified operational experience for customers.
Downloading and deploying the appliance's OVA file is pretty straightforward, as with most of VMware's current products. After starting the newly created VM in the vCenter client you can log in with the default credentials “admin@localhost” / “vmware”, as described in the documentation.
Some patches are available and can be downloaded from my.vmware.com and applied to the VM via the web GUI pretty easily.
To be able to use the current versions of “vRA” and “vRLI” you also need to install a product support pack, available on the VMware Marketplace. To download it you need to click the “Try” button on the right-hand side. The screenshot there shows how to install the “.pspak” file. After the pack is applied, the product versions shown in the following screenshots are supported:
The vRealize Suite LCM first needs to import the binaries of the products that are to be deployed. If you are at a site with internet access you can use the integrated “My VMware downloads” option. At an isolated site, however, the easiest way for me was to upload the required OVA files into the LCM VM, e.g. with WinSCP. After connecting as the “root” user (you need to set a password first), change into the “/data” folder, create a new directory (e.g. called “binary_import”) and copy everything there. Afterwards import the binaries from the web GUI as described in the documentation (local location type, base location = “/data/binary_import”, discover, add). When the LCM has finished discovering and mapping the product binaries and importing the patches, the GUI should look like this:
After the holiday break the next steps will be to deploy and manage the vRealize Suite components needed…
I am currently helping a customer build an infrastructure platform to run a couple of virtualized applications. The decision to use VMware products had already been made before I joined the project, but at that stage (the middle of the year) it was still uncertain whether the deployment / networking would be “old school” (setting up everything by hand / VLANs separated by physical firewalls) or whether new approaches should be applied. My experience with NSX and some articles I read about a new way of deploying VMware-based SDDCs, namely VMware Cloud Foundation (VCF), laid out the foundation (see what I did there…) for our new private cloud.
After continuing to dive into the VCF stack and its ideas (this free fundamentals course is great for starters), it quickly became clear that this could drastically reduce the resources spent on deploying and operating the project's infrastructure and also prevent human errors, as entire batches of tasks are automated, following the VMware Validated Designs.
While planning the environment, the latest VMware Cloud Foundation version available was 2.3.2. For this version the hardware compatibility list (both compute and networking equipment) was rather short, so Dell components were chosen. Quite some time passed until more workshops had been conducted and the boxes finally arrived, so a lot happened in the meantime…
At VMworld US 2018 the new version 3.0 was announced, and it was released shortly after. The big difference introduced in this major update was the focus on VMware's own products. While pre-3.0 versions also included the networking stack, supporting only certain models from a handful of vendors (Cisco, Juniper, QCT, Dell), now any underlay network supporting 1600-byte MTUs and 10 Gbps Ethernet, and all vSAN Ready Nodes (more than 20 vendors) meeting the required/supported minimums, can be used, making even brown-field scenarios possible.
Nothing more than a test deployment of the 3.0 Cloud Builder VM, to download the deployment parameter spreadsheet and prerequisite checklist, saw the light of day in the project, as by the time the hardware was installed, 3.0.1 was already available for download. This minor version jump featured some bug fixes and improvements. For example, it was no longer necessary to convert the Excel spreadsheet containing the deployment parameters (IP addresses/networks, license details, passwords) into JSON format with the included Python script on your own. The 3.0.1 Cloud Builder VM web GUI accepts the Excel file directly. Very nice!
The entire VCF 3.0.1 deployment took less than two hours from uploading the parameter spreadsheet to finishing the bring-up, leaving us with a ready-to-use environment with vCenter, two Platform Services Controllers, vSAN, NSX, a vRealize Log Insight cluster and, of course, the new SDDC Manager. The preparation of our hosts (Dell PowerEdge vSAN Ready Nodes) with ESXi 6.5 was pretty easy. For DHCP (VXLAN transport VLAN), DNS & NTP I set up an HA cluster of OPNsense gateways. Some pictures from the deployment process will follow in a separate post.
Shortly after this another new version came out (3.0.1.1). As it only contains the current security patches for ESXi 6.5, there is only an update bundle, not an OVA download.
Last week the next long-awaited major release was published: 3.5. Again available via upgrade or fresh OVA deployment, it includes a lot of changes. These were already announced at this year's VMworld Europe, which I had the fortune to attend for the first time. Besides more bug fixes, the jump to the current 6.7 releases of ESXi, vCenter & vSAN is the biggest news (finally no need for the Flash client – long live HTML5!), along with NSX 6.4.4 and updated versions of vRLI, SDDC Manager and so on. Also included now is NSX-T 2.3.0, but only for workload domains – the management domain continues to rely on NSX(-V). This is supposed to pave the road for container-based workloads like PKS/Kubernetes.
After the holidays I will continue the story with both the results of upgrading the customer's 3.0.1.1 site to 3.5 and of deploying 3.5 in my company's lab on older hardware, so stay tuned…