By default, core 0 is used for all the control processes (rtd, dhcp, infmgr, ipsec, control-data, and vsmd), and a few tasks are also pinned to core 1 (which is additionally used for worker/data traffic). You can check per-core utilization using top -H (press 1 to expand the per-core view).
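The per-core counters that top -H (with "1" pressed) displays interactively can also be read directly from /proc/stat. The awk snippet below is an illustrative snapshot of cumulative per-core busy time since boot, not a Versa command:

```shell
# Rough per-core utilization since boot, derived from /proc/stat.
# Each cpuN line lists jiffy counters; field 5 is the idle counter,
# so busy% = (total - idle) / total.
awk '/^cpu[0-9]/ {
    total = 0
    for (i = 2; i <= NF; i++) total += $i
    idle = $5
    printf "%s: %.1f%% busy since boot\n", $1, 100 * (total - idle) / total
}' /proc/stat
```

On a box with the default pinning, you would expect core 0 (and to a lesser extent core 1) to show noticeably higher busy percentages when control tasks are active.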

 

On a regular branch/spoke/hub, minimal control-plane processing is needed: there are only a few BGP sessions and post-staging IPsec tunnels (mostly towards the controllers). rtd (the routing daemon) is the only control-intensive task, and even it is not heavily invoked in a stable environment. So the default settings above work fine for control-plane processing.

 

On a controller, the amount of BGP control-data, rtd, and IPsec processing is quite high, hence the recommendation to dedicate more cores to these control tasks; 8 is considered optimal for more than 500 CPEs. This is especially required to handle the high amount of processing during restarts, reboots, and WAN-link flaps.

 

https://docs.versa-networks.com/Getting_Started/Deployment_and_Initial_Configuration/Headend_Deployment/Headend_Basics/Hardware_and_Software_Requirements_for_Headend#General_Hardware_and_VM_Requirements

When you set the isolcpu using the above command, it prompts for a reboot. After the reboot, you can verify the setting by running:


request system isolate-cpu status


You can also check the following from the shell:


cat /proc/cmdline
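
When isolation is active, the kernel command line contains an isolcpus= parameter listing the isolated cores (the exact list depends on the num-control-cpu value configured). A quick shell check might look like:

```shell
# Report the isolated core list from the kernel boot parameters.
# isolcpus= is the standard Linux kernel boot parameter; the core
# list shown depends on the num-control-cpu value that was set.
if grep -q 'isolcpus=' /proc/cmdline; then
    echo "Isolated cores: $(grep -o 'isolcpus=[^ ]*' /proc/cmdline | cut -d= -f2)"
else
    echo "No isolcpus parameter found - CPU isolation is not active"
fi
```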


For example, below are some lab outputs from a setup where "isolcpu enable num-control-cpu 3" was executed:



Please note that in the above output, "show vsm cpu-info" shows that cores 0, 1, and 2 have been removed from "Used CPUs", as these cores are now isolated for control tasks and are no longer used by vsmd. The control-cpus you see in that output are the ones used by the vsmd task itself; they are not the cores that have been isolated for control tasks.
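
To confirm which cores an individual task is actually allowed to run on, you can inspect its CPU affinity from /proc. The snippet below uses the current shell's own PID as a stand-in; on a real appliance you would substitute the PID of vsmd or another control task:

```shell
# Show which cores a process may be scheduled on.
# On a real system, replace $$ with the PID of the task of
# interest (e.g. pid=$(pidof vsmd)).
pid=$$
grep Cpus_allowed_list /proc/$pid/status
```

With "isolcpu enable num-control-cpu 3" applied, a control task's Cpus_allowed_list would be expected to fall within the isolated range 0-2, while vsmd's worker threads would show the remaining cores.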