After just getting started with PowerCLI on my company Windows 10 notebook, I read that since last year you can also run it on Linux and macOS systems. As I had just started to like the functionality (which took some time when you are only used to Bash and Python), I wanted to give it a try on my private MacBook Pro, so here are the steps I took:
First download the latest stable release for macOS (shown above), currently "powershell-6.1.2-osx-x64.pkg", and install it. Then open a shell, either by clicking on "PowerShell" in the Launchpad or by opening a Terminal window and entering "pwsh".
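The PowerCLI module itself is then installed from the PowerShell Gallery; the two lines should look roughly like this (standard cmdlets, your scope preference may differ):
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted
Install-Module -Name VMware.PowerCLI -Scope CurrentUser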
If you skip the first line the PSGallery repository, which hosts the PowerCLI packages, is not trusted, resulting in the following warning:
You are installing the modules from an untrusted repository. If you trust this repository, change its InstallationPolicy value by running the Set-PSRepository cmdlet. Are you
sure you want to install the modules from 'PSGallery'?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "N"):
After a VMware Cloud Foundation deployment has been updated to the current version, as described previously, a few tasks should be done afterwards. First, the vSAN datastore disk format version might need an upgrade. To check this, head to the "Configure" tab of your DC in vCenter and click on "vSAN / Disk Management":
Of course you should run the pre-check by clicking the button on the right. If everything is working as it should, it will look like this:
Now you can click the "Upgrade" button, which informs you that this can take a while. You should also back up your data/VMs elsewhere, especially if you select "Allow Reduced Redundancy", which speeds up the process:
As you can see, the disk format version has now changed from "5" to "7":
However, some vSAN issues are still displayed:
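If you prefer the CLI, the on-disk format version can also be checked per host, for example like this (run on any ESXi host contributing storage to the vSAN datastore):
esxcli vsan storage list | grep -i "format version"   [shows the on-disk format version for every claimed disk]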
As this deployment is a “dark site”, meaning no internet access is available, the HCL database and Release catalog have to be updated manually.
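On a machine with internet access the HCL database file can be fetched for example like this (URL taken from the sources referenced below, it may have changed in the meantime):
curl -o all.json https://partnerweb.vmware.com/service/vsan/all.json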
The URL to download the 14.7 MB file can be found in a 2015 post by William Lam or in this KB article. The release catalog's URL is taken from another KB article; this file is less than 8 KB in size. After uploading both using the corresponding "Update from file" buttons, the screen should look like this:
The last remaining issue in this case was that the firmware version of the host bus adapter connecting the vSAN datastore devices could not be retrieved ("N/A"):
Since the firmware version listed in the host's iDRAC (see next screenshot) matches one of the "Recommended firmwares" from above, I decided to simply hit "Silence alert". Alternatively, one could look for an updated VIB file that allows the ESXi host to retrieve the firmware version from the controller.
One more effect of the upgrade from 3.0.1.1 to 3.5 is the appearance of three more VMs in vCenter. These are the old (6.5.x) instances of the platform services controllers and the vCenter. New instances with version 6.7.x were deployed during the upgrade. After all settings had been imported from the old ones, they were apparently powered off and kept in case something had gone wrong. After some time, and after confirming everything works as expected, those three VMs may be deleted from the datastore:
In this post I would like to show you the process of updating a VCF deployment at a customer site to the current version, which was released in mid-December. The pictures show only the update of the management workload domain, as that is the only one currently available there. If you have multiple VI/VDI workload domains, you still have to update the management domain first and then the individual workload domains.
The necessary steps are the same as in previous updates. For example, if you are in an environment isolated from the internet, you can use a laptop to download the bundle files based on a delta file provided by the SDDC manager and import them afterwards, as described in one of my previous posts. The update itself can be scheduled or started immediately. The process is the same as before, but consists of multiple phases.
The first phase updates the VCF services themselves, including the domain manager, the SDDC manager UI and the LCM:
Afterwards the NSX components are updated to version 6.4.4, as shown in the screenshot below:
In the next phase the platform services controllers (typically two for the management domain) and the vCenter are updated to version 6.7. Sadly, in the first release of the VCF 3.5 update bundles there was a bug resulting in an error in the stage "vCenter upgrade create input spec":
The SDDC manager's log file "/var/log/vmware/vcf/lcm/lcm-debug.log" only showed a "java.lang.NullPointerException" error at the component "com.vmware.evo.sddc.lcm.orch.PrimitiveService", which didn't help me much, so after an unsuccessful Google search I contacted VMware's support. Upon opening a support case on my.vmware.com, a very friendly Senior Technical Support Engineer got back to me within minutes and pointed me to this knowledge base article. Apparently the issue cannot be fixed in place, but a new update bundle is available that replaces the buggy one. If your SDDC manager has internet access it can download the bundle automatically, but if you are at a "dark site" you first need to get rid of the faulty bundle's id by running the following python script:
Afterwards a new marker file has to be created and transferred to a workstation with internet access, where the updated bundles are downloaded (same procedure as described before):
This screen shows the successful import of the newly downloaded bundles after copying them back to the SDDC manager:
Finally we can retry phase 3. As you can see, a new screen now appears:
As vCenter appliances cannot be upgraded from 6.5 to 6.7 in place, a new appliance has to be deployed, which then imports all settings from the old one. To be able to complete this process, the SDDC manager needs a temporary IP address for the new appliance in the same range as the vCenter/PSC:
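As a quick reminder, the marker file is generated on the SDDC manager with the bundle transfer utility, roughly like this (paths as in my earlier post on offline bundle downloads):
cd /opt/vmware/vcf/lcm/lcm-tools/bin
./lcm-bundle-transfer-util --generateMarker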
Check the review screen to confirm the temporary IP settings and hit “Finish” to start the update:
Hooray! The update process did not fail at the stage it did before:
After a little more than an hour all three appliances are up-to-date:
As we can see in the overview screen of the domain all components are updated, except for the ESXi hosts:
This means the fourth and final phase can be started: the update of the ESXi hosts to build 10764712:
This concludes the update to VCF 3.5 as all components now have the current build numbers:
The next screenshot of the Update history section shows the update from 3.0.1 to 3.0.1.1 and the four updates from above:
After deploying vROPS using the vRSLCM yesterday, today's task was to deploy two separate instances of vRealize Log Insight. Both instances should consist of a cluster of one master and three workers (deployment type "Medium with HA") and be placed on different hypervisor clusters, each managed by its own vCenter and separated by a third-party firewall. Finally, the "outer" vRLI cluster will forward its received events to the "inner" cluster, which will function as part of a central SIEM platform.
The first step is to deploy both of the clusters. Again the “Create Environment” screen is used:
After entering all the deployment parameters, the pre-check is performed, but it failed: allegedly the IP addresses provided could not be resolved. Correctly configured Active Directory servers with the corresponding A and (reverse) PTR records were set up and reachable, so the warnings were ignored:
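If you want to rule out an actual DNS problem, forward and reverse resolution can be checked quickly from any machine in the management network; the names and addresses below are just placeholders:
nslookup vrli-master.lab.local 10.0.0.53   [forward lookup against the AD DNS server]
nslookup 10.0.0.71 10.0.0.53   [reverse lookup for the node's IP]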
The environment creation is initiated:
After deploying the master the three workers are deployed in parallel:
After deploying the three workers the LCM fails to configure the supplied NTP servers for some reason:
At this point you have two options. The first is to delete the environment (including the VMs, via the checkbox below) and start over, e.g. if you actually made a mistake:
The other option is to resume the request (the arrow on the right had already disappeared after clicking, so I drew one where it was):
This time the step and eventually the entire request finished successfully. From the vCenter perspective the result will look like this:
This process is repeated for the second cluster / environment, leaving us with two environments, each with a vRealize Log Insight cluster:
The next step is to set up message forwarding, so that the "inner" cluster also receives the messages from the devices logging to the "outer" cluster, while the firewall between the clusters only allows SSL-secured traffic from one cluster to the other. Before configuring the two vRLI clusters we first need to export the certificate of the "inner" cluster, which was created separately using the vRSLCM (if the same certificate is used for both environments, e.g. with subject alternative name=*."parent.domain", you can skip this):
The receiving ("inner") cluster can optionally be configured to accept only SSL-encrypted traffic:
Finally, the FQDN of the virtual IP of the "inner" cluster is added as an event forwarding destination on the configuration page of the "outer" cluster. The protocol drop-down should be left on "Ingestion API", as changing it to "Syslog" would overwrite the original source IPs of the log entries. After checking the "Use SSL" box, verify the connection by using the "Test" button:
If no filters are added here all events received by that vRLI cluster will also be available on the other one.
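Since the forwarding uses the Ingestion API over SSL, the firewall in between has to allow the corresponding port (9543 by default, as far as I know). A quick way to verify reachability and the presented certificate from the "outer" side, with a hypothetical FQDN:
openssl s_client -connect vrli-inner.parent.domain:9543 </dev/null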
For testing the setup, I configured an NSX-T manager, placed in the "inner" management cluster, to log directly to the "inner" vRLI cluster, and a couple of edge VMs, which were deployed to the "outer" edge cluster, as described here.
In my previous post I described how to deploy the vRealize Lifecycle Manager 2.0 and import product binaries and patches. Now it is time to make use of it to deploy the first vRealize product: vRealize Operations Manager. There are a few more steps you need to complete first, like generating a certificate or a certificate signing request, and some optional tasks, like adding an identity manager or an Active Directory association. As they are described quite well in the official documentation, I will skip them here.
Before you can add an environment (the term used for deploying vRealize products) a vCenter has to be added. The documentation states how to add a user with only the necessary roles, but for testing purposes you can also use the default administrator SSO account.
If you have an isolated environment the request to add a vCenter will look like the above screenshot, as it can’t get patches from the internet, but it will still work. In the “Create Environment” screen you can select which products you want to deploy. For each product you need to select the version and the deployment type:
Next to the deployment type each product has a small "info" icon. Upon clicking it, the details of each type are displayed:
After selecting your desired products you have to accept the license agreements and fill in details like license keys, deployment options, IP addresses, host names etc.
After putting in all necessary information a pre-check is performed:
The pre-check verifies the availability of your DNS servers, datastores and so on:
After submitting the LCM creates the environment according to your input:
As I made a mistake in the DNS server configuration the request failed.
Upon clicking "View Request Details" a more detailed view is presented (see screenshot below). Before deleting the environment and giving it another shot after fixing the mistake, you should export the configuration. Two options are offered: Simple or Advanced. I picked Simple, which lets you download most of the parameters you entered as a JSON file.
The red info icon in the lower left corner gives even more details. In my case the successfully deployed master node was not reachable because of the DNS misconfiguration mentioned above.
In the “Create Environment” screen you can paste the contents of the saved JSON file (see above) to speed up the process. This brings you directly to the pre-check step. However you still need to go back one step and select your NTP servers – this doesn’t seem to be included in the JSON configuration. While the environment creation request is in progress you can also see details:
Finally the request finished successfully. Some steps were left out, probably because this is a single node deployment and not a “real” cluster…
After the environment is created you can (and should) enable health checks via the menu which opens when you click the three dots in the upper right corner of the request box. This menu also lets you download logs and export the configuration, as done before.
The first task I am going to do with the newly deployed vROPS is to install the HF3 security fix imported earlier:
Just select the patch, click “Next” to review and install:
You can monitor the patch installation progress:
To be able to use the integrated Content Management you have to configure the environment as an endpoint. Just click the link “Edit” which appears when clicking on the three dots next to the list element:
First confirm or modify the credentials entered earlier and test the connection:
Finally you have four checkboxes to select your desired Policy Settings:
I will pick up the Content Management section in another blog post. Up until then the vROPS deployed using the vRealize Suite LCM can be used as usual by opening the web GUI. It asks you to set your currency (can’t be modified later on!) and is ready to fill its dashboards with data as soon as you configure the parameters and credentials for the solutions you want to monitor, e.g. vCenter:
Sometimes, when a storage device (i.e. an SSD or HDD) has been used for a previous vSAN deployment or has other leftovers, it cannot be re-used right away (either for vSAN or for a local VMFS datastore). When you try to format the drive as shown below, the error message "Cannot change the host configuration" appears:
The easiest way is to change the partition scheme from GPT to MSDOS via the CLI (and back via the GUI); this has been described in the community before.
However, even that may fail, e.g. because of the error "Read-only file system during write". This can occur if the ESXi hypervisor finds traces of old vSAN deployments on the drive and refuses to overwrite them. In that case you first have to delete those traces manually. Log into the host in question as the root user and issue the necessary vSAN commands. These are the commands for listing all known vSAN disks and for removing an SSD (cache device) and a (capacity) disk:
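A minimal sketch of that relabeling step, assuming you already know which device is affected (the device name below is just an example):
ls /vmfs/devices/disks/   [identify the device in question]
partedUtil mklabel /vmfs/devices/disks/naa.500a07510f86d6b3 msdos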
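The commands should be roughly the following (device names are placeholders):
esxcli vsan storage list   [lists all disks claimed by vSAN]
esxcli vsan storage remove -s naa.500a07510f86d6b3   [removes a cache SSD and thereby its whole disk group]
esxcli vsan storage remove -d naa.600605b00411be5021404f8240529589   [removes a single capacity disk]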
In my company's lab I found a couple of quite old x86 servers which were not in use anymore. The rack servers are in fact so old that the original manufacturer (Sun) doesn't exist anymore. The model is named "X4270 M2" and has been labeled end-of-life by Oracle for a while now. They are equipped with Intel Xeon processors released in 2011 (!), code name Westmere EP. That is in fact the oldest dual-socket CPU generation by Intel that is supported by ESXi 6.7 (needed soon for the VCF 3.5 upgrade). I found some more servers, but those are equipped with Nehalem CPUs, so not hypervisor material; one possibility to give them a new purpose could be as a bare-metal NSX-T edge…
The main concerns about whether VCF could be successfully deployed on such old hardware (from a time when vSAN Ready Nodes, as required by VCF, were not even a thing yet) were compatibility with VMware's HCL (especially HDDs, SSDs & RAID controller), the lack of 10 GbE adapters and not enough RAM. Preparing the five servers (four for the management domain and another one for the Cloud Builder VM) with ESXi was by the book, except for a well-known workaround needed on old Sun servers. For NTP, DNS and DHCP the OPNsense distribution was used once more. After uploading the filled-out Deployment Parameter Sheet, the Cloud Builder VM started its validation, resulting in only one warning/error regarding the cache/capacity tier ratio, which can be acknowledged. In fact the same message was displayed at a customer's site with Dell PowerEdge R640 nodes with 4TB/800GB SSDs. This seems to be related to a known issue.
However, after hitting Retry another error was displayed saying that no SSDs available for vSAN were found. This could be confirmed by logging into any of the hosts' ESXi interface: the Intel SSDs were marked as hard disks and could not be marked as flash via the GUI. The reason for this is the LSI RAID controller, which does not have a SATA bypass mode, meaning you have to create a RAID 0 virtual disk for each pass-through drive, so the hypervisor has no clue which hardware device lies underneath. After investigating further in VMware's KB, a SATP claim rule for local devices can be added via the CLI so that after a reboot the device will be marked correctly as SSD:
esxcli storage core device list   [find the SSD which is supposed to be marked as such, e.g. "naa.600605b00411be5021404f8240529589"]
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device naa.600605b00411be5021404f8240529589 --option "enable_local enable_ssd"
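As far as I know, a reboot can presumably also be avoided by reclaiming the device so that the new rule takes effect immediately:
esxcli storage core claiming reclaim -d naa.600605b00411be5021404f8240529589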
Finally the Cloud Foundation Bring-Up Process could be initiated. Still no luck however, as an error deploying the NSX manager was displayed:
The fact that the error occurred at that stage meant that the platform services controllers, the SDDC manager and the vCenter had already been successfully deployed and were reachable. After logging into the latter it was clear that all VMs had been placed on the first host, so no more RAM was available for the NSX manager.
The first attempt to fix this problem was to migrate all VMs deployed so far onto the other three hosts. Afterwards the bring-up process could be resumed by hitting Retry, but eventually the same error came up again. It became apparent that the four hosts were not equipped with a sufficient amount of RAM (24 GB) after all. After shutting down the hosts in the correct order, more RAM was added (however still less than the documented required minimum: 72 GB vs. 192 GB), and the hosts were started up again.
Now the bring-up went through, resulting in an up-to-date, automatically deployed private cloud SDDC on more than 8-year-old hardware…
Of course this setup is only valid for lab tests, as ignoring VMware's minimum requirements and design recommendations is unsupported and not suited for production.
If your VCF SDDC deployment does not have Internet connectivity you can manually download update bundles on another machine and import them afterwards. Here are the necessary steps using a Windows workstation.
First use PuTTY to connect to the SDDC manager as user "vcf" with the password set in the Cloud Foundation deployment parameter spreadsheet (red circle in the image above) and run the following commands:
cd /opt/vmware/vcf/lcm/
su   [enter root password; see green circle in top image]
mkdir bundleimport
chown vcf:vcf bundleimport
exit
cd lcm-tools/bin/
./lcm-bundle-transfer-util --generateMarker
Create a folder on your Windows machine (e.g. "C:\...\bundleupdate") and copy the remote files "markerFile" and "markerFile.md5" from "/home/vcf/", as well as the entire "/opt/vmware/vcf/lcm/lcm-tools/" directory structure, using WinSCP. In that folder create another subfolder; in my case I called it "downloadedBundles". Make sure you have a current version of Java (JRE) installed. Open a command prompt and run the following commands (when asked, enter your my.vmware.com password):
After the download has completed, unplug your internet cord and connect to your VCF deployment once more. Using WinSCP, copy the contents of your local folder "C:\...\bundleupdate\downloadedBundles" to "/opt/vmware/vcf/lcm/bundleimport". Then use PuTTY again to run these commands:
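The download itself is started with the bundle transfer utility copied earlier; the call should look roughly like this (single-dash options as used by the tool, folder names as above, the depot user placeholder stands for your my.vmware.com account):
cd C:\...\bundleupdate\lcm-tools\bin
lcm-bundle-transfer-util -download -outputDirectory C:\...\bundleupdate\downloadedBundles -depotUser user@example.com -markerFile C:\...\bundleupdate\markerFile -markerMd5File C:\...\bundleupdate\markerFile.md5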
cd /opt/vmware/vcf/lcm/lcm-tools/bin
chmod -R 777 ../../bundleimport
./lcm-bundle-transfer-util -upload -bundleDirectory /opt/vmware/vcf/lcm/bundleimport