Below, we will guide you step by step through creating a nested ESXi host where you can enable memory tiering, even if you don't physically have NVMe disks installed on the host.
Disclaimer: the instructions below are NOT for production use; they are for learning purposes only.
Let's walk through:
1. Create a Nested ESXi (version 9.0)
First, let's create our nested environment by creating a new virtual machine (1) ...
Let's give the VM a name, for instance "ESXi-NVMe" (2), set the parameters as below (3), and click NEXT (4).
| Compatibility | ESXi 9.0 virtual machine |
| Guest OS family | Other |
| Guest OS version | VMware ESXi 9.0 or later |
Select the right Storage (5) with enough space and click NEXT (6).
Set up as many vCPUs as you like, in my case 10 vCPUs (2 cores per socket) (7), then enable both Hardware Virtualization and IOMMU (8) as shown in the image below and scroll down ...
In my case, for testing purposes, I configure 16 GB of RAM (9) and a 32 GB hard disk (10) in thin provisioning, add 3 more network adapters (11), and connect (12) the ISO image (ISO/VMware-VMvisor-Installer-9.0.2.0.25148076.x86_64.iso) (13) for the installation, then confirm by pressing SELECT (14).
Review and confirm by clicking FINISH (15).
The VM has been created; let's proceed with the installation by turning it on > Power ON (16).
Hit Enter to Continue, then (F11) Accept and Continue (17) ...
... select the disk (18) and Continue (19) ...
... select the right keyboard layout (Italian in my case) and hit Enter to continue (21) ...
... enter your own password and Continue (23) ...
Continue by pressing Enter (24) to accept the warning about the unsupported CPU...
... Enter (25) again to force installation ...
... then (F11) Install (26).
Once the installation is complete, reboot by pressing Enter (27).
Once ESXi has rebooted, log in and customize the settings to suit your needs.
Customizing the settings isn't part of this guide, but remember to enable SSH before shutting down the nested ESXi (it will be useful later).
I log in ..
I confirm the IP address by setting it to static.
Enable SSH (29).
Press (F12) (30) to open the shutdown menu, then (F2) Shut Down ... and wait until the VM is completely powered off.
2. Add the NVMe controller
Now, we add the NVMe controller to the nested ESXi host to simulate an NVMe disk attached to it.
To do so, select the VM (in my case "ESXi-NVMe") (30) > Actions (31) > Edit Settings (32).
Click on Add other device (33) > NVMe controller (34)
Click again on Add other device (35) > SCSI controller (36) ... we will use this controller to add a capacity disk to place the VMs on.
We add the disks by clicking on Add hard disk > New standard hard disk (37) ...
Let's configure the first disk to be used to enable memory tiering in the following way (see image below) ...
| Maximum Size | 20 GB (38) |
| Disk Provisioning | Thin Provisioned (39) |
| Controller location | NVMe controller 0 (40) |
We add an additional disk (repeating step (37)) to use as a capacity disk and configure it as shown in the image below ...
| Maximum Size | 100 GB (41) |
| Disk Provisioning | Thin Provisioned (42) |
| Controller location | SCSI controller 1 (43) |
... SAVE (44).
3. Activating Memory Tiering
Here we configure Memory Tiering using ESXCLI commands at the host level, following the official documentation "How do I Activate and Deactivate Memory Tiering in vSphere". Note that Memory Tiering can also be activated at the cluster, ESX host, or virtual machine level using the vSphere Client, ESXCLI commands, or PowerCLI scripts.
To Activate Memory Tiering, first power on (45) the VM.
We connect to the host via SSH, put it into maintenance mode, and list the devices attached to the host's storage adapters by running the two commands below (46).
It is important that no changes are made to partitions on NVMe devices used as tiered memory while VMs are operational.
[root@Host-005:~] esxcli system maintenanceMode set --enable true
[root@Host-005:~] esxcli storage core adapter device list
We list the disks with the command below (47) to choose the NVMe device to use as tiered memory, and note the NVMe device path (in our case /vmfs/devices/disks/t10.NVMe____VMware_Virtual_NVMe_Disk________________VMware_NVME_0000____00000001 -> eui.6aee49956e658ca4000c29618ae19141 (48)).
[root@Host-005:~] ls -ltrh /vmfs/devices/disks/
It is possible to check that directly at ESXi host > Storage > Devices > selecting the Local NVMe disk (Capacity 20 GB) ...
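To avoid eyeballing the full listing, the whole-device NVMe entries can be filtered out with a small helper; a sketch (the function name is mine), assuming the virtual NVMe disk uses VMware's usual t10.NVMe naming and that partition entries carry a ":n" suffix:

```shell
# Keep only whole NVMe devices from a /vmfs/devices/disks listing:
# match VMware's t10.NVMe naming and drop ":n" partition entries.
list_nvme_devices() {
  grep '^t10\.NVMe' | grep -v ':'
}

# On the ESXi host you would pipe the real listing through it:
#   ls /vmfs/devices/disks/ | list_nvme_devices
```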
We should list and then delete any existing partitions, but that isn't necessary in our case, because the disk we'll use is new and not yet initialized.
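If the disk had been used before, the cleanup could be scripted with partedUtil; a destructive sketch (the helper name is mine), assuming the usual getptbl output of label, then geometry, then one partition per line:

```shell
# Sketch: delete every existing partition on a device before tiering it.
# DESTRUCTIVE - double-check the device path before running anything like this.
wipe_partitions() {
  dev="$1"
  # partedUtil getptbl prints the label, the geometry, and then one line
  # per partition whose first field is the partition number.
  partedUtil getptbl "$dev" | awk 'NR > 2 { print $1 }' | while read -r part; do
    partedUtil delete "$dev" "$part"
  done
}
```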
Let's create a tier partition on the NVMe device and then check that the tier partition has been created, using the following commands ...
[root@Host-005:~] esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____VMware_Virtual_NVMe_Disk________________VMware_NVME_0000____00000001
[root@Host-005:~] esxcli system tierdevice list
... and check the current memory seen by the host with the command ...
[root@Host-005:~] esxcli hardware memory get
Let's verify the partition creation with the command ...
[root@Host-005:~] ls -ltrh /vmfs/devices/disks/
We activate Memory Tiering at the kernel level (running the command below) and reboot the host ...
[root@Host-005:~] esxcli system settings kernel set -s MemoryTiering -v TRUE
[root@Host-005:~] reboot
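The CLI steps above could also be wrapped into a single helper for repeat runs; a sketch, not official tooling (the function name is mine), taking the NVMe device path as its argument:

```shell
# Sketch: stage memory tiering on an NVMe device in one shot.
# Mirrors the manual steps: maintenance mode, tier partition, kernel switch.
activate_memory_tiering() {
  dev="$1"
  esxcli system maintenanceMode set --enable true || return 1
  # Carve the device into a tier partition.
  esxcli system tierdevice create -d "$dev" || return 1
  # Flip the kernel setting; it only takes effect after a reboot.
  esxcli system settings kernel set -s MemoryTiering -v TRUE || return 1
  echo "Memory tiering staged on $dev - reboot the host to apply."
}
```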
4. Post-reboot checks
After rebooting, we check if the NVMe device has a tier partition created.
What we immediately notice after the reboot is that the VM is configured with 16 GB of memory (50), yet the ESXi console shows 36 GB (51) of memory: the original 16 GB plus the 20 GB of the NVMe disk.
We log in via SSH and check the memory by running the command ...
[root@Host-005:~] esxcli hardware memory get
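Since esxcli hardware memory get reports the total in bytes, converting it to GiB makes the 36 GB figure easy to confirm; a sketch (the helper name is mine), assuming the usual "Physical Memory: n Bytes" output line:

```shell
# Convert the "Physical Memory" line of `esxcli hardware memory get`
# from bytes to whole GiB.
memory_gib() {
  awk '/Physical Memory/ { printf "%d\n", $3 / (1024 * 1024 * 1024) }'
}

# On the host: esxcli hardware memory get | memory_gib
```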
5. Configuring the DRAM to NVMe Ratio
By default, hosts are configured to use a DRAM to NVMe ratio of 4:1. This can be configured per host, to evaluate performance with different ratios, by changing the value of the Mem.TierNvmePct parameter.
The host advanced setting for Mem.TierNvmePct sets the amount of NVMe to be used as tiered memory using a percentage equivalent of the total amount of DRAM. A host reboot is required for any changes to this setting to take effect.
For example, setting the value to 25 configures an amount of NVMe tiered memory equivalent to 25% of the total DRAM; this is the DRAM to NVMe ratio of 4:1. For a host with 1 TB of DRAM, the settings work out as follows:
| Mem.TierNvmePct | DRAM to NVMe ratio | NVMe used as tiered memory |
| 25 | 4:1 | 256 GB |
| 50 | 2:1 | 512 GB |
| 100 | 1:1 | 1 TB |
It is recommended that the amount of NVMe configured as tiered memory does not exceed the total amount of DRAM.
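The arithmetic behind these examples is simple enough to sanity-check in the shell; a small sketch (the function name is mine):

```shell
# NVMe tiered memory implied by a Mem.TierNvmePct value:
# nvme_gb = dram_gb * pct / 100 (integer arithmetic).
tier_nvme_gb() {
  dram_gb="$1"
  pct="$2"
  echo $(( dram_gb * pct / 100 ))
}
```

For 1024 GB of DRAM, `tier_nvme_gb 1024 25` gives 256 (the 4:1 default), `tier_nvme_gb 1024 50` gives 512 (2:1), and `tier_nvme_gb 1024 100` gives 1024 (1:1), matching the examples above.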
Using the vSphere Client to configure the DRAM to NVMe ratio requires the following steps:
Log in to the vSphere Client.
Navigate to Host (52) > Manage (52) > System (53) > Advanced settings (54)
Filter or find the option Mem.TierNvmePct (55)
Edit the option Mem.TierNvmePct (56) and set it to a value between 1 and 400. This value is the percentage of NVMe to be used relative to the total DRAM capacity. The default value is 25.
We set the new value to 400 (57), SAVE (58) the settings, and reboot the host to make the change effective.
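The same change can presumably be made over SSH as well; a sketch (the helper name is mine, and the /Mem/TierNvmePct option path is an assumption to verify on your build with the list command first):

```shell
# Sketch: set Mem.TierNvmePct from the CLI instead of the vSphere Client.
# Assumption: verify the exact option path first with
#   esxcli system settings advanced list -o /Mem/TierNvmePct
set_tier_nvme_pct() {
  pct="$1"
  # A reboot is still required for the new value to take effect.
  esxcli system settings advanced set -o /Mem/TierNvmePct -i "$pct"
}
```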
That's all for now; in a future post we'll run some tests with workload VMs.
Useful links:
– KB memory Tiering:
https://knowledge.broadcom.com/external/article?legacyId=95944
– Techdocs Broadcom:
https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/9-0/how-do-i-activate-memory-tiering-in-vsphere-.html
– Using partedUtil command line:
https://knowledge.broadcom.com/external/article?legacyId=1036609
– Memory Tiering tech preview blogpost:
https://blogs.vmware.com/cloud-foundation/2024/07/18/vsphere-memory-tiering-tech-preview-in-vsphere-8-0u3/
– VCF 9 Memory Tiering GA:
https://blogs.vmware.com/cloud-foundation/2025/06/19/advanced-memory-tiering-now-available/
That's it.