How do Dom0 hypervisor settings affect performance?
Generally, on hypervisors with low CPU resources (e.g. 8 CPU cores), the settings recommended in our install guide can be set in the /onapp/onapp-hv.conf file.
On hypervisors with large CPU resources (32 cores or more, multiple CPU sockets, and so on), however, these options may make Xen unstable. For more information on Xen CPU throttling, refer to http://wiki.xen.org/wiki/Credit_Scheduler
In particular, pay attention to the following options:
sched_ratelimit_us: "This feature can be disabled by setting the ratelimit to 0. One could imagine, on a computation-heavy workload, setting this to something higher, like 5ms or 10ms; or if you have a particularly latency-sensitive workload, bringing it down to 500us or even 100us."
tslice_ms: "Interesting values you might give a try are 10ms, 5ms, and 1ms. One millisecond might be a good choice for particularly latency-sensitive workloads; but beware that reducing the timeslice also increases the overhead from context switching and reduces the effectiveness of the CPU cache..."
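As a sketch of how these knobs are typically passed to Xen at boot on a GRUB2-based distribution (the exact file and variable name vary by distro; note that tslice_ms is spelled sched_credit_tslice_ms on the Xen command line, and the values below are illustrative, not recommendations):

```shell
# /etc/default/grub -- hypothetical values for a latency-sensitive workload:
# a 1 ms timeslice and a 500 us ratelimit, per the Credit scheduler wiki page.
GRUB_CMDLINE_XEN_DEFAULT="sched_credit_tslice_ms=1 sched_ratelimit_us=500"
```

After editing, regenerate the GRUB configuration (e.g. with update-grub or grub2-mkconfig) and reboot the hypervisor for the new Xen command line to take effect.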
Using these options may not be effective on hypervisors with a huge amount of CPU resources (32 or more cores, several sockets, many VSs, etc.) because of the "overhead from context switching" mentioned above.
At the same time, using the option 'dom0_max_vcpus=2' (which strictly sets the number of CPUs for Dom0) turns off CPU throttling for Dom0: Xen then uses a different, more stable algorithm for Dom0 CPU sharing, the hypervisor does not crash, and the system stays very stable even with Dom0 reduced to 2 or 4 cores.
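A minimal sketch of this alternative, again assuming a GRUB2-based distribution (the variable name and file location vary; dom0_vcpus_pin is an optional related Xen boot parameter that additionally pins each Dom0 vCPU to a physical CPU):

```shell
# /etc/default/grub -- strictly cap Dom0 at 2 vCPUs and pin them,
# instead of relying on credit-scheduler throttling for Dom0.
GRUB_CMDLINE_XEN_DEFAULT="dom0_max_vcpus=2 dom0_vcpus_pin"
```

As with the scheduler options, regenerate the GRUB configuration and reboot for the change to apply.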
So, if a hypervisor runs only a few VSs and we want to share unused CPU resources between the VSs and Dom0, we should use and tune the options mentioned above. Otherwise, we should strictly set the number of CPUs for Dom0.
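The decision rule above can be sketched as a small hypothetical helper script; the 32-core threshold mirrors the guideline in this article and should be adjusted for your hardware and workload:

```shell
#!/bin/sh
# Hypothetical helper: suggest a Dom0 CPU strategy from the host's core count.
# Threshold of 32 cores follows the guideline above; tune it for your setup.
cores=$(nproc)
if [ "$cores" -ge 32 ]; then
    # Large host: cap Dom0 vCPUs instead of relying on scheduler throttling.
    echo "$cores cores: set dom0_max_vcpus strictly (e.g. dom0_max_vcpus=2)"
else
    # Small host: share CPUs and tune sched_ratelimit_us / tslice_ms instead.
    echo "$cores cores: tune sched_ratelimit_us and tslice_ms"
fi
```

This only prints a suggestion; the actual change is still made in the Xen boot parameters as described above.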