I have often been asked about memory reservations in Horizon deployments. People want to know how much they should reserve and what the impacts are, so I thought I'd write something up to clarify this.
First, let's talk about what memory reservations are: they are the amount of physical host memory that the VM is guaranteed at any time. There is a common misconception that a VM will always consume its entire memory allocation, but this isn't entirely true. As per the vSphere Resource Management Guide (https://pubs.vmware.com/vsphere-51/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-51-resource-management-guide.pdf):
Until the virtual machine accesses its full reservation, VMkernel can allocate any unused portion of its reservation to other virtual machines. However, after the guest's workload increases and it consumes its full reservation, it is allowed to keep this memory.
For server workloads that are rarely power cycled this is likely to have little impact, but desktops are different: depending on the power policy, machines may be powered off or rebooted with some frequency, which means their full reservation may never be allocated. The next thing we need to look at is the effect of memory reservations on the VSWAP file.
The VSWAP (strictly, .vswp) file is a file that vSphere creates with each VM, allowing it to swap memory pages stored in the host's physical RAM out to pages on physical disk. The size of the VSWAP file is easy to determine; the formula is:
VSWAP = Allocated vMEM – Memory Reservation
So a VM with 4GB of vMEM and a 2GB memory reservation would have a 2GB VSWAP file. (There is also a separate VSWAP file for the virtual machine overhead, which, while important for capacity planning, is not part of this discussion.) By having a VSWAP file available to each VM for the unreserved portion of its memory, vSphere can overallocate memory; if the host becomes memory constrained, pages can be swapped to disk. This can have a major impact on performance, so we need to understand when it will occur.
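The formula above can be sketched in a few lines of Python (the 4GB/2GB figures are from the example; the function name is my own, and overhead swap is ignored for simplicity):

```python
def vswap_size_gb(allocated_vmem_gb, reservation_gb):
    """Size of a VM's .vswp file: the unreserved portion of its memory."""
    if reservation_gb > allocated_vmem_gb:
        raise ValueError("reservation cannot exceed allocated vMEM")
    return allocated_vmem_gb - reservation_gb

# The example from the text: 4GB vMEM with a 2GB reservation -> 2GB VSWAP
print(vswap_size_gb(4, 2))   # 2
# A 100% reservation eliminates the VSWAP file entirely
print(vswap_size_gb(4, 4))   # 0
```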
vSphere has a number of memory management techniques available to it that vary in their performance impact on the guests, including transparent page sharing (TPS), memory compression, ballooning and vswapping. Vswapping is considered the most expensive form of memory management because the hypervisor has no knowledge of which pages an application needs quick access to, so it will not use the VSWAP file until it is under extreme memory pressure.
The decision on what to set your memory reservation to has a few considerations. First comes memory sizing: if you are sizing your hosts to have enough memory for all of the VMs they host, then there is no reason (unless you want to buy more disk) not to use 100% memory reservations. So if you have 4GB of vMEM per VM, you're planning to run 100 VMs on each host, and your hosts have more than 400GB of RAM, use a 100% reservation. If, however, you are assuming that the VMs will only use on average 70% of their allocated 4GB, you may choose to size your hosts with less physical memory. In that case a 100% memory reservation would not be suitable and you would want to start lower. I find that a 50% reservation is typically a good balance; if you don't have 50% of the memory available for the planned number of VMs, then review your parent image sizing!
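That sizing check can be sketched as follows (a rough illustration only; the 512GB and 384GB host sizes are hypothetical numbers I've chosen, and overhead memory is ignored):

```python
def can_fully_reserve(vms_per_host, vmem_gb_per_vm, host_ram_gb):
    """True if a host has enough physical RAM to back a 100% reservation
    for every planned VM (ignoring per-VM overhead for simplicity)."""
    return vms_per_host * vmem_gb_per_vm <= host_ram_gb

# 100 VMs x 4GB = 400GB needed: a 512GB host can use 100% reservations...
print(can_fully_reserve(100, 4, 512))   # True
# ...while a 384GB host (sized assuming ~70% average usage) cannot,
# so you would start lower, e.g. at a 50% reservation.
print(can_fully_reserve(100, 4, 384))   # False
```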
In a VDI environment, VSWAP made sense when RAM was very expensive and storage much cheaper. In today's market, however, many architects are choosing All-Flash solutions, where allocating 400GB of disk space for 100 VMs to VSWAP files that will never be used is just not cost-effective, and servers are shipping with much more RAM than they used to, allowing less reliance on memory management techniques such as TPS and vswapping. Start with a minimum of 50% memory reservation to reduce your storage utilisation, then adjust from there based on the physical memory sizing of your hosts.
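To put numbers on that storage trade-off, the total VSWAP footprint on a datastore scales with the unreserved fraction (a sketch using the article's 100-VM, 4GB example; the function name is mine):

```python
def total_vswap_gb(vm_count, vmem_gb, reservation_pct):
    """Datastore space consumed by VSWAP files across a pool of VMs."""
    unreserved_fraction = 1 - reservation_pct / 100
    return vm_count * vmem_gb * unreserved_fraction

# 100 VMs x 4GB with no reservation: 400GB of (likely idle) flash
print(total_vswap_gb(100, 4, 0))    # 400.0
# The suggested 50% starting point halves that
print(total_vswap_gb(100, 4, 50))   # 200.0
# A 100% reservation removes the VSWAP footprint entirely
print(total_vswap_gb(100, 4, 100))  # 0.0
```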
Interesting post, but I'd argue that if you're not overcommitting your RAM, then there's no point to reservations at all, as reservations are used to guarantee resources in the event of contention, which will not happen without overcommitment. Additionally, I'd mention that you can lower the performance cost of swapping out to disk by provisioning space on an SSD for swap, perhaps even having local SSDs in the ESXi host just for that purpose.
Hi Ken, thanks for the feedback. I considered talking more about locations for SWAP space but the article was getting a bit too big. I agree if you’re not overcommitting then there is no contention so the benefit of reservations is not relevant, but you still end up with a VSWAP file. The trend I am seeing now is architects are not overcommitting RAM but using fast, expensive storage like all-flash arrays, so the reservation saves you from needing more expensive disk space.