vRealize Network Insight (vRNI) supports receiving and processing flow information from a variety of network equipment from different vendors out of the box, but also offers the possibility to ingest NetFlow/IPFIX data from third-party devices, e.g. physical routers.
Assuming you already have your vRNI instance deployed, head to the Settings page of the vRNI WebGUI and click “Accounts and Data Sources” to add such data sources.
If not, you can deploy vRNI quickly using the vRealize Suite Lifecycle Manager as described in this older blog post. It shows an older version of vRNI (4.1), but the process is the same for 4.2.
The button “Add Source” brings you to a list of all supported sources. (The option “Physical Flow Collector” is only available if you have an Enterprise license registered in vRNI.)
The minimum deployment of vRNI consists of a platform VM, which you use to administer and operate the tool, and a collector VM (formerly called proxy), which can be selected as the target for most data sources.
To be able to receive NetFlow data from a physical device, however, you need another, dedicated collector VM. If one was not created earlier, the screen informs you that no collector VM is available:
It does, however, offer a button “Add Collector VM” to help you create one.
When clicking the button, a shared secret is displayed in a popup. Store it, as it is needed later on:
Download the “vRealize Network Insight – Proxy OVA file” (7 GB) from my.vmware.com and either deploy it via command line (see further below) or the vSphere WebGUI:
Enter the shared secret from before in the step “Customize template”:
An alternative to deploying the OVA via the WebGUI is VMware's OVF Tool, which allows you to deploy virtual appliances from the command line of your operating system (Windows, Linux or macOS). Virtual appliances are distributed as file bundles, which usually contain the description (.ovf), the virtual disks (.vmdk in the case of VMware environments) and a manifest (.mf) file containing hashes of the other files. For easier handling, these files are packed into a tar archive with the file extension .ova.
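The bundle layout described above can be illustrated with a quick shell sketch. All file names here are made up for the demonstration; real appliance bundles use vendor-specific names:

```shell
# Illustration only: build a mock appliance bundle to show the .ova layout.
# File names are placeholders; a real bundle contains the same three parts.
mkdir -p /tmp/ova-demo && cd /tmp/ova-demo
printf '<Envelope/>\n' > appliance.ovf          # descriptor (.ovf)
head -c 1024 /dev/zero > appliance.vmdk         # virtual disk placeholder (.vmdk)
# The manifest (.mf) lists one hash line per file, e.g. "SHA256(file)= <hex>"
for f in appliance.ovf appliance.vmdk; do
  printf 'SHA256(%s)= %s\n' "$f" "$(sha256sum "$f" | cut -d' ' -f1)"
done > appliance.mf
tar -cf appliance.ova appliance.ovf appliance.vmdk appliance.mf
tar -tf appliance.ova                           # list the bundle contents
```

Tools like the OVF Tool check these manifest hashes before deployment, which is where the “The manifest validates” message in its output comes from.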
To use the OVF Tool first download the current version (as of writing this post 4.3.0 U2) from VMware {code} and install it.
Then you can deploy the OVA directly to your vCenter with the following command. (Modify datastore name, VM folder, VM name, port group name, download path, credentials, data center and cluster names according to your environment, and replace the placeholder xxxxxx with the shared secret from before.)
/Applications/VMware\ OVF\ Tool/ovftool -dm=thin -ds="vSAN xyz" \
  --vmFolder="Management VMs" --acceptAllEulas --allowAllExtraConfig \
  --name=vrni-collector2 --deploymentOption=large \
  --net:"VM Network"="vRack-DPortGroup-vRealize" \
  --prop:Proxy_Shared_Secret=xxxxxx \
  /home/user/Downloads/VMware-vRealize-Network-Insight-4.2.0.1562947515-proxy.ova \
  vi://username:[email protected]/Datacenter/host/Cluster/
The prefix “/Applications/VMware\ OVF\ Tool/” is only needed if you are running macOS and did not add the directory the OVF Tool was installed to to the $PATH environment variable.
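If you prefer calling ovftool by name, you can extend $PATH accordingly. The directories below are the usual default install locations, but this is an assumption; adjust OVFTOOL_DIR to wherever your installer actually placed the tool:

```shell
# Sketch: make ovftool callable by name. Typical default locations (adjust):
# Linux: /usr/lib/vmware-ovftool, macOS: "/Applications/VMware OVF Tool"
OVFTOOL_DIR="/usr/lib/vmware-ovftool"
export PATH="$PATH:$OVFTOOL_DIR"
command -v ovftool || echo "ovftool not found - check OVFTOOL_DIR"
```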
Select one of the deployment options, depending on your expected system load:
Deployment options:
medium: 4 vCPUs, 12 GB memory
large: 8 vCPUs, 16 GB memory
extra_large: 8 vCPUs, 24 GB memory
After a while the deployment should succeed with the following messages:
Opening OVA source: VMware-vRealize-Network-Insight-4.2.0.1562947515-proxy.ova
The manifest validates
Opening VI target: vi://[email protected]/Datacenter/host/Cluster/
Deploying to VI: vi://[email protected]/Datacenter/host/Cluster/
Transfer Completed
Completed successfully
If you forgot to supply the shared secret as an argument you will receive the following error upon trying to power up the VM:
You can still enter the shared secret or, if you entered wrong information earlier, correct it in the vApp Options properties as shown below:
Upon clicking the edit button this popup allows adjusting the value:
After powering it up, the appliance needs to be configured initially via the VM console. Log in with the presented credentials (consoleuser / ark1nc0ns0l3) and enter “setup”:
Follow the wizard and enter the configuration options according to your environment:
After finishing the configuration of the collector (formerly called proxy), you can select it from the drop-down list when adding a new physical NetFlow source on the “Accounts and Data Sources” page as shown at the beginning of the post. Don’t forget to give it a nickname (e.g. the name of the collector VM or Netflow_collector).
Now you can send NetFlow information from physical sources to port 2055 of the collector VM's IP address. NetFlow versions 5, 7 and 9 as well as IPFIX are supported by vRNI, but keep in mind that version 5 does not support IPv6.
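Before pointing a production device at the collector, a quick smoke test can confirm that datagrams reach UDP port 2055 at all. The snippet below is only a sketch using bash's /dev/udp pseudo-device; 192.0.2.10 is a documentation placeholder, so substitute your collector VM's IP. It verifies only that the packet can be handed off, not that vRNI parsed a flow (check the “Accounts and Data Sources” page for that):

```shell
# Smoke test (bash only): hand one datagram to the collector's NetFlow port.
# COLLECTOR_IP is a placeholder address - replace it with your collector VM's IP.
COLLECTOR_IP=192.0.2.10
echo "test" > "/dev/udp/${COLLECTOR_IP}/2055" && echo "datagram sent to ${COLLECTOR_IP}:2055"
```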
To test the deployment I used the free open source firewall distribution OPNsense, based on FreeBSD.
As described in the OPNsense Wiki, NetFlow destinations and capture details can be configured in the “Reporting” section:
After a while vRNI should have received some flows, visible in the “Accounts and Data Sources” page:
A quick test can be done with the following query suggested by Martijn Smit's blog:
flow where Flow Type = 'Source is Physical' and Flow Type = 'Destination is Internet'
Further configuration of the NetFlow source or mapping in vRNI may be needed, e.g. regarding DNS or VLANs, both of which are mentioned in Martijn Smit's blog article.