StarWind is a hyperconverged (HCI) vendor with focus on Enterprise ROBO, SMB & Edge

Thick and Thin Provisioning: What’s the performance difference in the context of Hyper-V

  • March 7, 2024
  • 11 min read
StarWind Solutions Engineer. Diana possesses comprehensive technical knowledge of various storage types and expertise in building and optimizing virtualized environments.

Many hypervisors on the market offer special features that set them apart from the competition. Still, most share the same or very similar settings when it comes to the basics, including storage provisioning. Hyper-V is one such popular virtualization platform, often considered a VMware alternative, and here we will discuss the different disk provisioning options this software offers.

Thick vs Thin Provisioning

Hyper-V is a type 1 hypervisor, which means that it sits directly on top of the physical hardware and virtualizes all of the compute and storage resources of a given server. This allows you to run multiple virtual machines with different operating systems and compute specifications at the same time. To use Hyper-V, you need Windows Server 2008 or later, or a 64-bit version of Windows 10 Pro, Enterprise, or Education. Once the Hyper-V feature is enabled, you can start creating VMs with set parameters for CPU, RAM, network adapters, and storage.

Understanding Performance in the Context of Hyper-V

Since virtualization is about distributing resources and balancing workload, the performance of your environment will depend on how fine-tuned your VM settings are. There are a number of Microsoft best practices to follow regarding CPU usage, RAM, and network optimization. Naturally, the same is true for storage. The choice of virtual controller that exposes your virtual disks, the disk image format (the older VHD or the newer VHDX), block and sector sizes, and more advanced settings such as multiple communication channels between the guest device and the storage stack will all affect your storage I/O performance. Finally, there are also the thick vs. thin disk provisioning options to consider, which we focus on here.

What is thick provisioning

Thick-provisioned disks have a certain amount of storage provisioned to them in advance. In Hyper-V environments, thick-provisioned disks are called “fixed”, meaning the disk size will not change regardless of how much actual disk space is consumed. Fixed disks reserve the specified amount on the actual underlying physical storage.
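On a host with the Hyper-V PowerShell module available, creating a fixed disk can be sketched as follows (the path and size here are hypothetical examples, not values from the article):

```powershell
# Create a 100 GB fixed VHDX: the full 100 GB is allocated (and
# zeroed) on the underlying physical volume immediately.
New-VHD -Path 'D:\VHDs\app01-data.vhdx' -SizeBytes 100GB -Fixed

# The file's on-disk size matches the provisioned size right away.
Get-VHD -Path 'D:\VHDs\app01-data.vhdx' | Select-Object VhdType, FileSize, Size
```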

Benefits and drawbacks of thick provisioning

The nice thing about fixed disks is that the VM’s access to storage is quite straightforward: since the space is pre-allocated ahead of time and the VHDX file never needs to grow, you avoid the allocation overhead that thin-provisioned disks incur when new blocks are claimed on first write. However, fixed disks take more time to create, since all of the space has to be zeroed out in advance.

Another drawback is that, in contrast to thin-provisioned disks, if you later decide you need a bigger VHD or VHDX, you have to manually perform a couple of maneuvers to expand the disk on each VM. Moreover, you cannot reallocate unused space on a fixed VHDX to any other VM, due to the nature of the reservation.
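As a rough sketch of those maneuvers (the VM name and path are hypothetical), expanding a disk from PowerShell looks like this:

```powershell
# The disk cannot be resized while attached to a running VM
# (online resize is supported only for VHDX on a SCSI controller).
Stop-VM -Name 'app01'

# Grow the virtual disk to its new capacity.
Resize-VHD -Path 'D:\VHDs\app01-data.vhdx' -SizeBytes 200GB

Start-VM -Name 'app01'
# Inside the guest, the partition still has to be extended
# (e.g., via Disk Management or Resize-Partition) to use the new space.
```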

What is thin provisioning

Thin-provisioned disks are disks that are not allocated a fixed amount of storage. In Hyper-V, these disks are called dynamic. Although you have to specify a maximum storage amount, this limit does not actually indicate the size of the disk. Dynamic disks do not reserve the physical space of the specified storage capacity upon creation. A dynamic disk is always created as a tiny file, well under 1 GB, that grows according to demand. This is different from a thick-provisioned or fixed disk, where the specified amount reflects the actual size of the VHDX file.
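Assuming the same Hyper-V PowerShell module, a dynamic disk is created with the `-Dynamic` switch (the path is again hypothetical); comparing `FileSize` to `Size` shows how little physical space the new disk consumes:

```powershell
# Create a 100 GB dynamic VHDX: only the 100 GB ceiling is recorded;
# almost no physical space is consumed at creation time.
New-VHD -Path 'D:\VHDs\app02-data.vhdx' -SizeBytes 100GB -Dynamic

# FileSize (actual bytes on disk) starts tiny; Size is the 100 GB limit.
Get-VHD -Path 'D:\VHDs\app02-data.vhdx' | Select-Object VhdType, FileSize, Size
```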

Benefits and drawbacks of thin provisioning

Thin-provisioned or dynamic disks are easier on the overall storage pool because you do not need to reserve storage in advance. This eliminates the extra, unused storage that sits around in over-provisioned VMs. You also do not need to make extensive calculations about how much storage to allocate per VM.

However, dynamic disks require more vigilance and monitoring, since thin provisioning lets you assign VMs storage limits whose cumulative total exceeds what is actually available in the underlying storage pool. Therefore, if you do not add the required physical storage in time to support your VMs’ growth, the affected VMs will fail once the pool runs out of space.

Previously, dynamic disks were known to be prone to fragmentation, which slowed storage performance: whenever you deleted chunks of data from a dynamic disk, the VHDX would not shrink back, leaving unused fragments scattered across the disk. Starting with Windows Server 2012, the automatic trim (unmap) feature has remedied this issue by releasing unused storage sectors. Regardless, dynamic disks are still slower than fixed disks, which we touch on in more detail in the performance comparison.
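On top of the automatic unmap, a dynamic VHDX can also be compacted explicitly to return freed space to the pool; a minimal sketch with a hypothetical path:

```powershell
# Optimize-VHD requires the disk to be detached or mounted read-only.
Mount-VHD -Path 'D:\VHDs\app02-data.vhdx' -ReadOnly
Optimize-VHD -Path 'D:\VHDs\app02-data.vhdx' -Mode Full
Dismount-VHD -Path 'D:\VHDs\app02-data.vhdx'
```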

Performance Comparison of Thick and Thin provisioning

Dynamic disks offer slower performance than fixed disks because of how they allocate space: the hypervisor maintains an additional layer of bookkeeping (a block allocation table) that it must consult, and update as the file grows. For this reason, fixed disks have generally been the recommended option for production use. However, with the widespread availability of SSDs, this difference has become less noticeable: the dynamic allocation process still involves overhead, but it is not as pronounced as with HDDs. Lastly, as mentioned earlier, fixed disks take more time to create than dynamic ones, which could become an issue if the initial set-up must happen very quickly for whatever reason.


As we discussed, multiple considerations determine the speed and reliability of your storage stack. Note that there are also two other VHDX options in Hyper-V in a somewhat different category that deserve a separate mention: differencing disks and pass-through disks. Here, we have focused on the two main flavors of disk provisioning and their respective use cases. If storage space is a concern, dynamic disks may be the optimal solution; especially on SSDs, their storage efficiency also helps where fault-tolerance schemes multiply the stored data. If the opposite is true and no specific RAID or other redundancy settings are required, fixed disks are the more reliable, better-performing option once the initial creation is done. If unsure, bear in mind that disks can be converted between dynamic and fixed after the fact, so the initial choice is not permanent.
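For completeness, converting a dynamic disk to a fixed one is a copy-based operation; a sketch with hypothetical paths (the source disk must not be in use by a running VM):

```powershell
# Convert-VHD writes a new file of the requested type alongside
# the original, which you can remove after verifying the result.
Convert-VHD -Path 'D:\VHDs\app02-data.vhdx' `
            -DestinationPath 'D:\VHDs\app02-data-fixed.vhdx' `
            -VHDType Fixed
```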

This material has been prepared in collaboration with Asah Syxtus Mbuo, Technical Writer at StarWind.

Hey! Found Diana’s article helpful? Looking to deploy a new, easy-to-manage, and cost-effective hyperconverged infrastructure?
Alex Bykovskyi, StarWind Virtual HCI Appliance Product Manager
Well, we can help you with this one! Building a new hyperconverged environment is a breeze with StarWind Virtual HCI Appliance (VHCA). It’s a complete hyperconverged infrastructure solution that combines hypervisor (vSphere, Hyper-V, Proxmox, or our custom version of KVM), software-defined storage (StarWind VSAN), and streamlined management tools. Interested in diving deeper into VHCA’s capabilities and features? Book your StarWind Virtual HCI Appliance demo today!