Contents
1 Create a new directory on Versa Analytics
2 Configure a local collector on Versa Analytics
3 Configuration on the controller
3.1 Configure the ADC server
3.2 Configure a new server pool
3.3 Configure a new virtual service
4 Configuration on the branch
4.1 Configure the template
4.2 Configure a new collector
4.3 Collector group
4.4 Logging profile
4.5 Associate the new logging profile
5 Change from R2S9
6 Restoring flow logs
7 Archiving flow logs
8 Contact Support
Purpose
Versa FlexVNF can be configured to send flow logs for various services/features to Versa Analytics, where they can later be viewed on the Versa Analytics GUI. However, flow logs are usually very large, and this can cause disk-full issues on Versa Analytics. We therefore recommend enabling flow logging only when it is required, for a short duration, and disabling it once the requirement is fulfilled.
In some cases, these logs are needed at all times for various reasons (compliance/business). In such cases, we recommend configuring a separate log collector for flow logs and keeping them for "archive only" by storing them in a separate directory.
1. Create a new directory on Versa Analytics
We need to create a separate directory on Versa Analytics where these logs can be kept separate, so that they are not directly ingested into the database. Any log collected under /var/tmp/log is processed and ingested into the DB for display on the GUI.
Note: A new directory needs to be created on all the nodes in Analytics.
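As a minimal sketch, the directory can be created from the Analytics shell on each node (the path /var/tmp/flow-log is an example used in this guide; any directory outside the ingested /var/tmp/log tree will do):

```shell
# Run on every node in the Analytics cluster.
# /var/tmp/flow-log is an example path; pick any directory that is
# NOT under /var/tmp/log, so the logs are not ingested into the DB.
FLOW_DIR=/var/tmp/flow-log
mkdir -p "$FLOW_DIR"
ls -ld "$FLOW_DIR"
```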
2. Configure a local collector on Versa Analytics
Below are the details of the parameters used:
Address - IP address of the local collector; this is the southbound/control network IP of the Analytics node.
Port - Port number of the local collector. The default collector uses port 1234, so we need to choose a different one (1235 in this example).
Storage has two sub-parameters:
directory - Directory in which the flow logs will be stored; use the new directory created above.
format - Format used for storing the logs; in our case, the syslog format.
Note: Collectors need to be configured on all the nodes in the cluster, towards all the controllers in the network.
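A hedged sketch of this configuration from the Analytics CLI follows; the collector name, the 10.1.1.5 address, and the exact parameter hierarchy are assumptions for illustration, so verify the syntax against your release before committing:

```
> cli
> configure
% set log-collector-exporter local collectors flow-coll address 10.1.1.5 port 1235
% set log-collector-exporter local collectors flow-coll storage directory /var/tmp/flow-log format syslog
% commit
```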
3. Configuration on controller
From R1S2 onwards, we use ADC for load-balancing the TCP connections. The default ADC listens on the default port 1234, so we need to configure another ADC identical to the existing one, but with the port changed to the new port (1235 in this example).
3.1 Configure ADC server
3.2 Configure a new server pool
Configure a new server pool and make the new ADC server a member of it.
3.3 Configure new virtual service
Configure a new VIP; all the configuration remains the same as the default ADC VIP, except the port number.
Verify the same using the CLI on the controller.
At this point, the ADC server and VIP should come up.
Note: If two controllers are present, this needs to be performed on both controllers. The port remains the same; only the IPs change accordingly.
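To confirm that the new VIP and port are reachable before moving on to the branch configuration, a quick TCP probe helps. This is a generic sketch, not a Versa tool; the 192.0.2.10 address and port 1235 are placeholders for the new ADC VIP and port:

```python
import socket

def check_tcp(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder values: replace with the new ADC VIP and the new port.
    print(check_tcp("192.0.2.10", 1235))  # expect True once the VIP is up
```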
4. Configuration on Branch
At the branch, we need to configure a new logging profile. It is a copy of the existing logging profile configuration, except for the port.
4.1 Configure the template
The Default LEF template can also be used.
4.2 Configure new collector
Here we must use the new port number, 1235 in this example.
4.3 Collector group
Configure a new collector group and associate previously configured collector here.
4.4 Logging Profile
Create a new logging profile and associate the new collector group here.
The new LEF collector should now come up and show as established.
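The collector, collector group, and logging profile steps above can be sketched from the device CLI roughly as follows; the org name Tenant1, the object names, the 192.0.2.10 VIP address, and the exact hierarchy are assumptions for illustration, so mirror your existing default-collector configuration and change only the port:

```
% set orgs org-services Tenant1 lef collectors flow-coll destination-address 192.0.2.10 destination-port 1235 transport tcp
% set orgs org-services Tenant1 lef collector-groups flow-cg collectors [ flow-coll ]
% set orgs org-services Tenant1 lef profiles flow-profile collector-group flow-cg
% commit
```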
4.5 Associating new logging profile
Once the flow collector is created, go to the appliance in question and, under Next Generation Firewall > Security > Policies, click the rule for which you do not want to send logs to the database; then, in the Enforce tab, select the flow collector profile.
Check that:
Logs have started arriving in the new directory on Analytics.
Analytics is listening on the new port and the TCP connection is established.
The log collector is up on Analytics and data is being received.
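The first two checks can be run quickly from the Analytics node shell; a minimal sketch, using the example port 1235 and directory /var/tmp/flow-log from this guide:

```shell
# Is the collector listening on the new port?
ss -tln | grep ':1235' || echo "port 1235 not listening yet"
# Are flow logs landing in the new directory?
ls -lt /var/tmp/flow-log 2>/dev/null | head -5
```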
5. Change from R2S9
There is a change in the configuration from R2S9 onwards to simplify things. A config knob has been added to the collector that lets us send flow logs to a different directory within the same collector. This removes the requirement for a configuration change on the controller.
Note: With this configuration, no changes are needed on the controller; all the other steps remain the same.
6. Restoring flow logs
If flow logs need to be analysed, they can be restored by running the following script on the Versa Analytics shell:
# sudo /opt/versa/scripts/van-scripts/log-restore.py
usage: --src <src-path> --dst <dst-path>
optional: --start-date <yyyy/mm/dd>
optional: --end-date <yyyy/mm/dd>
optional: --tenant <tenant>
optional: --appliance <appliance>
Example:
sudo /opt/versa/scripts/van-scripts/log-restore.py --src /var/tmp/flow-archive/ --dst /var/tmp/log --tenant tenant-Customer1 --appliance VSN0-Branch1 --start-date 2020/02/12 --end-date 2020/03/13
7. Archiving flow log
Logs under /var/tmp/flow-logs will not be stored in the database. To archive the flow logs from the active directory, we need to run a cron job as follows.
You can change the archive cron job interval/directory by running the following script on the Versa Analytics shell:

Copy the log-archive-start script to log-archive-start-active:
# sudo cp /opt/versa/scripts/van-scripts/log-archive-start /opt/versa/scripts/van-scripts/log-archive-start-active
Edit the log-archive-start-active file and rename the cron job file name from /tmp/log-archive to /tmp/log-archive-active.
Create the archive directory:
# mkdir /var/tmp/flow-archive
Start the archive job:
# sudo /opt/versa/scripts/van-scripts/log-archive-start /var/tmp/flow-log /var/tmp/flow-archive hourly active
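Once the hourly job has run at least once, archived files should start accumulating under the destination directory; a quick check, using the /var/tmp/flow-archive path from the steps above:

```shell
# List the newest archived flow logs (empty output means the job
# has not produced any archives yet).
ls -lt /var/tmp/flow-archive 2>/dev/null | head -5
```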
8. Contact Support
Capture logs of all the above outputs and contact Versa Support.
NOTE: Log the PuTTY/terminal session to capture all outputs when performing the requested steps; this will be helpful when engaging Support.