Understanding Windows NTFS Block Size

I have been doing Veeam backup sizing for a Hyper-V cluster recently, and one thing that caught my interest is how block size affects application performance.

Please refer to the following study that I did.

Basic Concept

Storage is ultimately just ones and zeros, but the operating system doesn't read or write them one by one; it groups them into units called blocks and reads/writes a whole block at once.

These blocks (called clusters or allocation units in Microsoft terminology) are combined to form files, which are tracked by the Master File Table (MFT). All information about a file, including its size, time and date stamps, permissions, and data content, is stored either in MFT entries or in space outside the MFT that is described by MFT entries.

Understanding the MFT

When you format a volume with NTFS, Windows Server 2003 creates an MFT and metadata files on the partition. The MFT is a relational database that consists of rows of file records and columns of file attributes. Because the MFT stores information about itself, NTFS reserves the first 16 records of the MFT (approximately 16 KB) for metadata files.
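On an existing NTFS volume, you can inspect these details yourself with the built-in fsutil utility. A minimal example, assuming an elevated prompt and C: as the target drive:

```powershell
# Run from an elevated prompt; C: is just an example drive letter.
# The output includes "Bytes Per Cluster" (the allocation unit size)
# and MFT details such as the MFT valid data length and MFT zone extents.
fsutil fsinfo ntfsinfo C:
```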

MFT Zone

To prevent the MFT from becoming fragmented, NTFS reserves 12.5 percent of the volume by default for the exclusive use of the MFT. This space, known as the MFT zone, is not used to store data unless the remainder of the volume becomes full.
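The current MFT zone reservation level can be checked (and changed) with fsutil as well; this is a sketch assuming an elevated prompt:

```powershell
# Query the system-wide MFT zone reservation setting.
# 1 = 12.5% of the volume (the default), 2 = 25%, 3 = 37.5%, 4 = 50%.
fsutil behavior query mftzone
```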


Relationship Between Cluster Size (4K Default in Windows) and File Size

If you're writing a reasonably large file (say 100 GB) in 4 KB blocks, that's about 26 million blocks making up that one file, and 26 million block records to keep track of. If the same file is written in 64 KB blocks, though, it drops to about 1.6 million blocks.
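The arithmetic is easy to verify in PowerShell, which understands KB/GB size suffixes natively:

```powershell
# Number of clusters needed for a 100 GB file at each cluster size.
100GB / 4KB    # 26,214,400 blocks (~26 million) at a 4 KB cluster size
100GB / 64KB   # 1,638,400 blocks (~1.6 million) at a 64 KB cluster size
```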


I prepared the following diagram to make this concept easier to understand.


PowerShell to add a newly inserted disk and format it with 64K Cluster Size 
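A minimal sketch of such a script, assuming the new disk shows up as disk number 1 and that drive letter V and the label "VeeamRepo" are free to use; adjust all three for your environment:

```powershell
# Bring a newly inserted (RAW) disk online and format it as NTFS with a
# 64 KB allocation unit size. Disk number 1, drive letter V, and the
# volume label are assumptions; change them to match your environment.
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter V |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 64KB `
        -NewFileSystemLabel "VeeamRepo" -Confirm:$false
```

The key parameter is -AllocationUnitSize 64KB (65536 bytes), which sets the NTFS cluster size at format time; it cannot be changed later without reformatting the volume.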

Reference link 

  1. http://www.mirazon.com/veeam-repository-best-practices/
  2. http://www.mirazon.com/properly-configuring-your-filesystem-block-size/