Table of Contents


Purpose    

Upgrading DSE from 4.5 to 4.8

Upgrading OSSPACK

Upgrading the Package to 20.2

Post Upgrade Checks

DSE to FUSION migration



 

Purpose 

The purpose of this document is to provide a procedure to upgrade Analytics (standalone or cluster) running a 16.1R2 package to a 20.2 package. After the package upgrade, the Analytics database can be moved from the DSE schema to the Fusion schema. (Note: in a greenfield installation of 20.2 the database schema is Fusion by default; however, when upgrading from 16.1R2 to 20.2 the database schema is not updated to Fusion automatically and has to be migrated manually.)


Upgrading DSE from 4.5 to 4.8 (if applicable)


Before upgrading the Analytics nodes to a 20.2 release, it is mandatory to first upgrade the DSE version to 4.8 if the current version is 4.5.

To check the DSE version, execute the command below.
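For example (the output shown is illustrative; the exact version string will vary):

dse -v        (prints the installed DSE version, e.g. 4.8.x)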

In the above instance, the DSE version is 4.8, so the Analytics node is eligible to be upgraded. However, if you find that the DSE version is 4.5, you will first need to upgrade DSE using the procedure described in the KB article below.

https://support.versa-networks.com/a/solutions/articles/23000019690

Once DSE is upgraded on the standalone node, or on all the nodes in a cluster, confirm that all the services are up (verify via "vsh status") and that the node bindings are active (look for the "UN" state in the "nodetool status" output) as shown below.
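For example (run on each node; output not reproduced here):

vsh status           (all services should be reported as running)
nodetool status      (each node should show the "UN" state)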

 

[A few important points to note]


- Prior to upgrading nodes that are on DSE 4.5, it is mandatory to first migrate them to DSE 4.8.

- When executing the DSE migration script, make sure that the cluster-name in the script matches the cluster-name present in the config file below for the cluster you are migrating (you can check this file on any one of the nodes; see the example after this list of notes).

/opt/versa/scripts/van-scripts/vansetup.conf

- After the successful completion of the DSE 4.8 migration, wait for a few hours and then execute the upgrade sstables command below (only on the Analytics personality nodes).

nohup sudo nodetool upgradesstables &       (press "enter" - this will run the task in the background, even after you close the terminal)

(If the process exits with an error when you execute the above, run "sudo service dse restart" once, wait for 30 minutes, and then try the command again.)

After 24 hours, check the following:

sudo su -c "find /var/lib/cassandra/data -name '*jb*.db' | wc -l" root 

The count should be 6. If the count is greater than 6, wait another 24 hours and check again (and so on, until the count shows 6). Upgrading the sstables can sometimes take a few days, so please allow yourself that time.

Once the count shows 6, you are ready to upgrade the node and execute the Fusion migration.
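As mentioned in the notes above, a quick way to view the cluster-name configured on a node is to search the vansetup.conf file (the grep pattern below is only an illustration; you can also open the file in an editor):

grep -i cluster /opt/versa/scripts/van-scripts/vansetup.conf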


Upgrading the OSSpack

Download the latest OSSPACK for Analytics from the location below. Though the release notes do not specifically mandate an osspack upgrade, it is recommended to install the latest available osspack before upgrading to the 20.2 release (whether for Director, Controller or Analytics).

https://versanetworks.app.box.com/v/osspack

 

 

You can download the osspack image to the /home/versa directory.

 

Make it executable and install it as shown below.
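A minimal sketch, assuming the osspack was downloaded to /home/versa and using an illustrative file name (substitute the actual file you downloaded):

cd /home/versa
chmod a+x <osspack-file>
sudo ./<osspack-file>        (runs the installer; use sudo if your login requires elevated privileges)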

The installation can take a few minutes, and when it completes you will be returned to the shell prompt as seen below.

 

Upgrading the Package

If you are running Analytics as a VM, consider taking a snapshot of the VM before proceeding with the upgrade so that you have a backup in case of any unforeseen failures during the process.

Download the 20.2 image to /home/versa/packages on all the nodes in the cluster (or on the standalone node). One way of doing this is shown below, using scp from a server to the Analytics node.
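For example, from the server holding the image (the file name, user and IP address are illustrative):

scp <versa-analytics-20.2-package-file> <user>@<analytics-node-ip>:/home/versa/packages/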

Verify the md5sum of the downloaded file and, once verified, make the file executable via chmod a+x as shown below.
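For example (the file name is illustrative):

md5sum /home/versa/packages/<20.2-package-file>        (compare the hash against the published value)
chmod a+x /home/versa/packages/<20.2-package-file>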

You can now start the upgrade from the CLI using the "request system package upgrade" command as shown below. Upgrade the "analytics personality" nodes first, one by one (if there is more than one), followed by the "search personality" nodes.
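A minimal sketch of the CLI step (the file name is illustrative and the exact argument form may vary slightly by release):

request system package upgrade <20.2-package-file>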

 

 

The upgrade can take a few minutes, and once done it prints "upgrade task ended" as seen above.


Post Upgrade Checks 

Execute the checks below after the upgrade to confirm that the services are healthy.

Check 1:

Check the status of the services as below and ensure that all of them are in the running state.
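For example:

vsh status        (every service should be reported as running)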

If any of the services is found to be in a "stopped" state, you can try a "vsh stop" followed by a "vsh start" to check whether the restart helps. If not, please raise a ticket with Versa TAC.

Check 2:

Execute the commands below on all the nodes to ensure that the database process is running fine.
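For example (these commands are assumed from the rest of this document; output is not reproduced here):

dse -v                     (still returns a DSE version at this stage, confirming the DSE schema is in use)
sudo service dse status    (the DSE service should be reported as running)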

As you can see, after the upgrade to 20.2 the database still follows the DSE schema. We will need to execute a procedure to change the schema from DSE to Fusion.

Check 3:

Check the node association on each node; every node should be in the "UN" state in the "nodetool status" output.

Check 4:

Check web UI access to all the Analytics nodes (on port 8080 or 8443, as the case may be), and also check Analytics access from the Director's "Analytics" tab as shown below.

 

Go through the various site data (such as link usage, access-circuit usage, VRF, QoS, SLA metrics, etc.), and check various time ranges (1 hour, 15 minutes, 5 minutes, 1 day and custom range) to ensure that all the data is available and is being plotted properly.

Also check the "Logs" section (which pertains to the search nodes) for the various alarm logs, firewall logs, etc., to ensure that all the logs are showing up normally and that new log events are being displayed without any issues. If not, please get in touch with Versa TAC.

 

DSE to Fusion Migration

The recommendation is to move all Analytics databases to the Fusion schema, as DSE support will gradually be phased out. When you install Analytics nodes on 20.2 directly (greenfield), the databases follow the Fusion model by default. However, when upgrading Analytics from 16.1R2 to 20.2, the databases have to be migrated to Fusion manually using the procedure below.

Download the Fusion migration script from one of the locations below; the script is updated periodically, and the latest copy is always available at these sites.

https://versanetworks.app.box.com/s/8pdi9ppyjzfq8cx53s10l3zbwt6k2kbw or 

https://download.versa-networks.com/index.php/s/uZZVI0Wo5wU6xJE?path=%2FFusion-Migration


NOTE ADDED (on 03/02/2021): Please ensure that you download the latest version of the Fusion migration script; the current version is v2.2.


Download the tar file to the /home/admin directory (or for that matter any user directory) on the “master” Versa Director.

 

 

Untar the file as below
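For example (the archive name is illustrative; use the actual file you downloaded):

tar -xvf <fusion-migration-archive>.tar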

Under the directory "fusion_migration", you will find fusion_migration.conf and fusion_migration.py. We first need to fill in the required data in fusion_migration.conf.

You just need to provide the management and internal IP details (the IP address that you see in the "nodetool status" output), along with the username/hostname details, for all the Analytics/search nodes involved, as shown below.

 Note: you can open fusion_migration.conf using vi or nano, or any other editor of your choice

 

 

 

Note: you can also refer to the below document in case of any doubts that you may have regarding this script

Once the required details have been filled into the fusion_migration.conf file, save the file and proceed to execute the script: "python fusion_migration.py".


Note Added on 02/02/2021 - In v1.8 of the Fusion migration script package you will find an additional Python script called migration_precheck.py, as shown below.



Execute migration_precheck.py and validate that the script executes successfully (without generating any ERROR logs, as shown below) before proceeding with the execution of the main script, fusion_migration.py.
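For example, from within the fusion_migration directory:

python migration_precheck.py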



Important note: Sometimes the pre-check can show the below error. If the count is 6 or less, you can ignore this error and continue with the migration


 

After migration_precheck.py executes successfully, you can proceed with the execution of fusion_migration.py.

 

 

The migration can take several minutes, and ideally the script should run through to completion without any errors, ending with the INFO message "Fusion migration completed on Analytics & Search nodes".

 

Note: if you see an "ERROR" message towards the end citing "Fusion migration not completed on Search & Analytics nodes", please capture all the session logs and open a ticket with Versa TAC. Make sure you enable session logging on the terminal where the script is being run, so that all the logs are captured; this will be an important reference for TAC to troubleshoot the issue in case the script fails for any reason.
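One generic way to capture the session logs (this uses the standard Linux "script" utility and is only a suggestion, not a Versa-specific requirement):

script fusion_migration_session.log        (starts recording everything printed to the terminal)
python fusion_migration.py
exit                                       (stops the recording; the output is saved in fusion_migration_session.log)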

 

If the migration is successful, please execute the same checks as listed under the "Post Upgrade Checks" section above; however, instead of using "nodetool status" you will need to use "vsh dbstatus". Also, "dse -v" will return the output below since DSE is no longer running.
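For example (output is not reproduced here):

vsh dbstatus      (replaces "nodetool status" for the post-migration checks)
dse -v            (no longer returns a DSE version, since DSE is not running after the migration)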

               

 

On the Analytics personality nodes, the "vsh dbstatus" output will be as below under normal conditions.

 

 

On the search personality nodes, the "vsh dbstatus" output will be as below.

 

The above "vsh dbstatus" output on the search nodes should show "Found 1 Solr nodes" and "Collections: 1" irrespective of the number of nodes in the cluster. The "liveNodes" value should be equal to the number of search nodes in the cluster.

You can raise a support ticket with Versa TAC in case you encounter any error or failure during the package upgrade or during the DSE to Fusion migration.