Hyper-V Series: Configure CPU Ratio

Hyper-V Server

A new chapter in the Hyper-V Series, the articles dedicated to Hyper-V configuration. In this article we will talk again about the CPU, focusing in particular on the Ratio.

CPU Ratio

The Ratio is the relationship between physical CPUs (pCPU) and virtual CPUs (vCPU) and is an important configuration detail when we create a new virtual machine, because it can impact the entire host.

Each processor contains a specific number of Cores, which varies by category, power and price: from 4 in a classic consumer CPU up to 24 in the Intel Xeon E7 family.

This is the metric we should consider when we build a new virtual infrastructure: if a server has 2 pCPU with 12 Cores/CPU, we have 24 Cores for the virtual machines. Can we also count Hyper-Threading? The answer is no! HT is fine for a lab environment, but it should not be counted in production and Microsoft discourages relying on it for sizing.

Relationship Between VM and Host

The Ratio is not a fixed value: it depends on the applications running inside the virtual machine and also on the number of vCPU we will assign. A File Server with a mid-level workload should have at least 2 vCPU, a SQL Server should have 4 vCPU, while SQL Server Express can use only 1 vCPU. This means that you must know what kind of applications will be deployed into the virtual environment to configure the right solution for your company.

CPU Management in Hyper-V

The virtual machine CPU value can be changed from the Hyper-V console, as shown in figure 1. Right now, these changes can be made only when the VM is turned off.

Figure 1 – Configuration vCPU

The maximum number of vCPU that you can assign depends on the number of pCPU present in the host; this means that you cannot create a virtual machine with 64 vCPU if you have a machine with only 4 pCPU.

To calculate the maximum value, use this formula: (Number of processors) * (Number of cores) * (Number of threads per core)

Check this example from my machine – figure 2.

Figure 2 – CPU Parameters

The formula gives: 1 (processor) * 2 (cores) * 2 (threads) = 4; so, I could create a virtual machine with a maximum of 4 vCPU. This threshold cannot be exceeded because Hyper-V is not able to manage this kind of Overcommit. Keep in mind that my machine has Hyper-Threading enabled, and this is the reason why there are 4 pCPU.
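If you prefer to see the calculation as code, here is a minimal Python sketch of the same formula (the values are simply the ones from figure 2):

```python
def max_vcpu(processors: int, cores_per_processor: int, threads_per_core: int) -> int:
    """Maximum vCPU assignable to a single VM: it cannot exceed
    the number of logical processors exposed by the host."""
    return processors * cores_per_processor * threads_per_core

# Values from figure 2: 1 processor, 2 cores per processor, 2 threads per core.
print(max_vcpu(1, 2, 2))  # -> 4, so a single VM can have at most 4 vCPU on this host
```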

Application Ratio

Having 4 vCPU available doesn't mean that you can create only 4 virtual machines (with 1 vCPU each): you can create more, but this can impact performance. Even this statement is only half true, because everything depends on what kind of application you need to run; this is called the Application Ratio, the partitioning value allowed for each CPU. Before going ahead, check this example:

  • Exchange Server: 1:1
  • SQL Server: 1:1 up to 4:1 (for a low workload)
  • SharePoint Server: 1:1 (8:1 in lab environment)
  • VDI: 12:1

The formula to calculate the value is: (Number of processors) * (Number of cores) * (Number of threads per core) * (ratio)

What do these numbers mean? For Exchange Server we can associate one vCPU with each pCPU available; my machine could run 2 VMs with 2 vCPU each.

For SQL Server, with a low workload, I could adopt a Ratio of 2:1, so the formula gives: 1 (processor) * 2 (cores) * 2 (threads) * 2 (ratio) = 8

This means that I can create 2 VMs with 4 vCPU each.

The lighter the workload, the larger the Ratio can become, as can be seen in VDI environments where we can reach 48 VMs with 1 vCPU each.
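To make the arithmetic explicit, here is a small Python sketch of the ratio formula applied to the examples above (the ratios are the best-practice values from the list, not hard limits):

```python
def vcpu_budget(processors: int, cores: int, threads: int, ratio: int) -> int:
    """Total vCPU that can be scheduled on the host for a given Application Ratio."""
    return processors * cores * threads * ratio

# Host from figure 2: 1 processor, 2 cores, 2 threads per core.
print(vcpu_budget(1, 2, 2, 1))   # Exchange Server, 1:1  -> 4 vCPU  (2 VMs x 2 vCPU)
print(vcpu_budget(1, 2, 2, 2))   # SQL Server, low load, 2:1 -> 8 vCPU  (2 VMs x 4 vCPU)
print(vcpu_budget(1, 2, 2, 12))  # VDI, 12:1 -> 48 vCPU (48 VMs x 1 vCPU)
```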

These values come from best practices: they are not requirements for running the applications, but abusing or exceeding them can create many problems for the Hyper-V server. If we have a server with 2 pCPU and 48 Cores and run 3 Exchange Servers (with 4 vCPU), 2 SQL Servers (with 4 vCPU), plus Domain Controllers, many Web Servers and other VMs, it's sure that performance will go down.
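One way to sanity-check a planned inventory against these ratios is to weight each vCPU by its Application Ratio and compare the total with the host's cores. The sketch below is only a rough, hypothetical model, and the inventory numbers are invented for illustration:

```python
# Hypothetical capacity check: a vCPU at ratio N:1 consumes roughly 1/N of a
# pCPU, so we sum the weighted demand and compare it with the host's cores.
# Hyper-Threading is deliberately not counted, per the guidance above.
HOST_CORES = 2 * 24  # 2 pCPU x 24 Cores/CPU

workloads = [
    # (name, number of VMs, vCPU per VM, application ratio)
    ("Exchange Server", 3, 4, 1),
    ("SQL Server",      2, 4, 2),
    ("Other VMs (DCs, web servers, ...)", 40, 2, 4),
]

demand = sum(vms * vcpu / ratio for _, vms, vcpu, ratio in workloads)
print(f"Weighted pCPU demand: {demand:.1f} of {HOST_CORES} cores")
if demand > HOST_CORES:
    print("Warning: the host is overcommitted beyond the suggested ratios.")
```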

Default Ratio Value

Formulas and calculations cannot answer the main question: what is the default Application Ratio? There is no default rule, but in most cases, for the classic Windows Roles, we can adopt a ratio of 4:1; when the workloads are mid-level or intensive, like a File Server or a Web Server, a ratio of 2:1 or 1:1 could be used. The final decision is in your hands, because you must keep the environment under control, test every change and detect performance issues.

Performance Issues

How can I detect whether everything works fine and I did my job well? It's not easy, but if you detect slow performance in more than one VM, this could be the first alarm bell. For example, when you try to open a folder on a File Server, or when you open a new RDP session and the response time is too high, check the VM Task Manager to see if CPU usage is over 90%; if it is, adding another vCPU could resolve the issue, but if you have already exceeded the right Ratio, this will only generate new performance problems for the other VMs.
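If you want to automate that first check from inside a VM, here is a minimal Python sketch using the psutil library; the 90% threshold and the one-minute observation window are assumptions taken from the rule of thumb above, not fixed values:

```python
import psutil  # pip install psutil

THRESHOLD = 90.0  # percent, the rule-of-thumb limit mentioned above
SAMPLES = 12      # 12 samples x 5 seconds = one minute of observation
INTERVAL = 5      # seconds between samples

# Flag sustained saturation rather than a single spike: warn only if every
# sample in the window is above the threshold.
readings = [psutil.cpu_percent(interval=INTERVAL) for _ in range(SAMPLES)]
if all(r > THRESHOLD for r in readings):
    print(f"CPU above {THRESHOLD}% for the whole window: consider another vCPU, "
          "but check the host Ratio before adding it.")
else:
    print(f"Peak {max(readings):.0f}%, average {sum(readings) / len(readings):.0f}%: "
          "no sustained saturation.")
```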

Server Core and Nano Server

We know that a portion of resources is assigned to Windows services, so reducing the number of running tasks helps, not only for security but also to dedicate more resources to the applications.

Two good solutions to achieve this goal are Server Core and Nano Server: the first has been available since Windows Server 2008 and is a great starting point to have all the Windows Roles and Features without the UI; the second is a new SKU introduced in Windows Server 2016 that uses a very small footprint (256MB of memory and 600MB of disk).

The bad news about Nano Server is that not all Roles are supported and, most importantly, that it requires a Software Assurance agreement.

Monitoring and Prevention

Unless you want to spend your time manually checking your environment, it's necessary to have a monitoring platform that traces the evolution of your infrastructure and sends alerts when something does not work properly. Operations Management Suite is the cloud platform to monitor, analyze and manage information from your physical, virtual and cloud environments; with the Capacity & Performance solution, figure 3, the virtual machines are kept under control with a great UX.

Figure 3 – OMS Capacity and Performance

What I really love about this product is the possibility to generate alerts when a threshold is exceeded for a long time, and it's also really interesting to be able to run a script through Azure Automation (for example to migrate a VM to another Hyper-V host, or to change the configuration plan if the machine runs on Azure).

Conclusion

As you can see, configuring a virtual environment is not easy, and every single item can impact the entire infrastructure. Before starting a new project, evaluate what kind of workloads will be necessary and read the documentation to understand what kind of hardware you really need.

#DBS