Friday, April 3, 2026

[VCF 9.0] Enable/Configure LDAP in HoloRouter to be used as Identity Source in VCF 9.0

Disclaimer:

The steps described below are fine for your own lab, but they do not meet the security criteria required for a production deployment (e.g., no LDAPS/TLS by default, simplified RBAC, etc.).

As the title suggests, below I'll show you the steps to enable and configure LDAP as an Identity Source, leveraging the Kubernetes cluster instance that runs inside the HoloRouter appliance. The resulting directory can then be used in your internal lab by the VCF 9.0 embedded identity broker.

We will then create an LDAP server instance on the HoloRouter appliance, with the directory structure laid out as follows (the complete structure can be viewed and modified in the ldap-vcf.yaml file at the bottom of the page).

1. The Base DN

The root of our directory is dc=vcf,dc=lab. This is the entry point VCF will use to begin its search for identities.


2. Organizational Units (OUs)

We have segmented the directory into two logical containers to keep things clean:

  • ou=Users,dc=vcf,dc=lab: This is where all individual user accounts reside.
  • ou=Groups,dc=vcf,dc=lab: This contains the group definitions used for Role-Based Access Control (RBAC).

3. Users and Identities

The lab environment currently features four distinct users:

  • Administrative Accounts: admin and admin.vcf (used for management tasks).
  • Standard Users: lorenzo and test.

Note: All users utilize the inetOrgPerson object class, which is a standard requirement for most modern identity brokers.


4. Group Memberships

The directory uses the groupOfNames object class, which is the most common way to handle memberships in LDAP. We have defined two main groups:

  • cn=Administrators: Contains the two admin accounts. They can be associated with administrative roles in VCF 9.
  • cn=Users: Contains the standard lab users (lorenzo and test).

In VCF 9, use the following mapping based on our structure:

  • Base DN: dc=vcf,dc=lab
  • User Search Base: ou=Users,dc=vcf,dc=lab
  • Group Search Base: ou=Groups,dc=vcf,dc=lab
  • Common Name Identifier: cn or uid
  • Group Member Attribute: member
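To sanity-check this mapping before pointing VCF at the directory, we can run a quick lookup against the lab LDAP server. This is a hedged sketch: 10.106.209.20 is the address used in the tests later in this post, so adjust it to your own environment.

```shell
# Look up one user by uid under the User Search Base, returning only its DN.
# The base DNs below mirror the mapping list above; the IP is a lab-specific assumption.
BASE_DN='dc=vcf,dc=lab'
USER_BASE="ou=Users,${BASE_DN}"
ldapsearch -x -H ldap://10.106.209.20 \
  -D "cn=admin,${BASE_DN}" -w 'VMware123!VMware123!' \
  -b "${USER_BASE}" '(uid=lorenzo)' dn
```

If the mapping is correct, the search returns exactly one entry, cn=lorenzo,ou=Users,dc=vcf,dc=lab.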


However, before creating the LDAP structure and exposing it on an external port, we should create a persistent volume (as indicated in the ldap-pv.yaml file at the bottom of the page). This ensures that even if we upgrade the container image or restart the cluster/appliance, our lab users remain intact. In our case, on the HoloRouter, we'll create (if it doesn't already exist) and use the local directory "/opt/ldap-data".



Let's see the step-by-step procedure below:

1. Download the appliance, deploy, and run it.

First of all, download the HoloRouter appliance from this link, deploy it, and power it on. Detailed steps on how to deploy an OVA are beyond the scope of this post.

2. Log in and verify that the services have started correctly.

As a second step, once the appliance is up and running, we log in and verify the services by running the following command:

kubectl get pods,service,deployment,configmap,pv -A
If all is well, the pods should show a STATUS of Running, as in the image below.

3. Create/Copy .yaml files

Now let's create the manifest files needed to deploy the Kubernetes resources.
For convenience, I'll create a "vcf" directory (/root/vcf), where I'll create the following files: ldap-pv.yaml (for creating the persistent volume) and ldap-vcf.yaml (for instantiating the LDAP service and populating it as described above).

root@holorouter [ ~ ]# mkdir -p ./vcf
root@holorouter [ ~ ]# cd vcf
root@holorouter [ ~/vcf ]# vi ldap-pv.yaml
Copy and paste the lines from the file "ldap-pv.yaml" at the bottom of the page and save the file. Do the same for the file "ldap-vcf.yaml".
Note: The password used for all users is "VMware123!VMware123!".
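If you'd rather not keep the default password, a quick way to swap it consistently (bind password, LDAP_ADMIN_PASSWORD, and the userPassword attributes alike) is a search-and-replace across the manifest before applying it. This is a sketch: NEW_PW is a placeholder of your choosing, and it assumes the default string appears in ldap-vcf.yaml only where the lab password is intended.

```shell
# Replace the default lab password everywhere in the manifest.
NEW_PW='MyNewPass123!'   # placeholder: pick your own value
sed -i 's/VMware123!VMware123!/'"${NEW_PW}"'/g' ldap-vcf.yaml
grep -c "${NEW_PW}" ldap-vcf.yaml   # count of occurrences replaced
```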

4. Create the Persistent Volume

To create and verify the creation of the persistent volume, we run the following commands in sequence:

root@holorouter [ ~/vcf ]# kubectl apply -f ldap-pv.yaml
....
root@holorouter [ ~/vcf ]# kubectl get pv -A 
As you can see from the image above, the PV has been created successfully, so let's proceed to the next step...

5. OpenLDAP Framework Deployment

One of the last steps is to deploy the LDAP server itself. We do this by running the following commands:

root@holorouter [ ~/vcf ]# kubectl apply -f ldap-vcf.yaml
....
root@holorouter [ ~/vcf ]# kubectl get pods -A 
If everything went as it should, the openldap pod's status should be Running ... Note: "osixia/openldap" was used as the Kubernetes image for LDAP.

6. Tests

Let's check now that everything is working correctly (see image below)....

kubectl get pods,service,deployment,configmap,pv -A
...since everything seems to be working fine, let's make an LDAP request, testing a standard user ("lorenzo") login by running the following command ...
ldapwhoami -x -H ldap://10.106.209.20 -D "cn=lorenzo,ou=Users,dc=vcf,dc=lab" -w 'VMware123!VMware123!'
If the query was successful, it should show what is shown in the image above.
This time, let's run the same test from an external machine, using the HoloRouter's public IP ... We can use the following command to perform a complete discovery of the newly configured LDAP directory ...
ldapsearch -x -H ldap://192.168.1.246 -D "cn=admin,dc=vcf,dc=lab" -w 'VMware123!VMware123!' -b "dc=vcf,dc=lab"
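Since VCF's RBAC will key off group membership, it can also be useful to pull back just the groups and their member attributes. This is a hedged variant of the command above, using the same lab bind credentials and public IP:

```shell
# Return only groupOfNames entries with their cn and member attributes.
LDAP_URI='ldap://192.168.1.246'   # HoloRouter public IP (lab-specific)
BIND_DN='cn=admin,dc=vcf,dc=lab'
GROUP_BASE='ou=Groups,dc=vcf,dc=lab'
ldapsearch -x -H "${LDAP_URI}" -D "${BIND_DN}" -w 'VMware123!VMware123!' \
  -b "${GROUP_BASE}" '(objectClass=groupOfNames)' cn member
```

The output should show cn=Administrators with the two admin DNs and cn=Users with lorenzo and test.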

As a next step, we just need to see how to use/integrate the newly configured LDAP server in VCF 9. This will be described in a future post.



Below are the YAML files ...

"ldap-pv.yaml"

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ldap-data-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/opt/ldap-data" # The data will be saved in this physical folder of the node
    type: DirectoryOrCreate # If the folder does not exist, it creates it automatically.


"ldap-vcf.yaml"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ldap-data-pvc
  labels:
    app: openldap
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ldap-bootstrap-ldif
  labels:
    app: openldap
data:
  01-struttura.ldif: |-
    dn: ou=Users,dc=vcf,dc=lab
    objectClass: organizationalUnit
    ou: Users
    
    dn: ou=Groups,dc=vcf,dc=lab
    objectClass: organizationalUnit
    ou: Groups
    
    dn: cn=admin,ou=Users,dc=vcf,dc=lab
    objectClass: top
    objectClass: person
    objectClass: organizationalPerson
    objectClass: inetOrgPerson
    cn: admin
    sn: Administrator
    uid: admin
    userPassword: VMware123!VMware123!
    
    dn: cn=admin.vcf,ou=Users,dc=vcf,dc=lab
    objectClass: top
    objectClass: person
    objectClass: organizationalPerson
    objectClass: inetOrgPerson
    cn: admin.vcf
    sn: AdminVCF
    uid: admin.vcf
    userPassword: VMware123!VMware123!
    
    dn: cn=lorenzo,ou=Users,dc=vcf,dc=lab
    objectClass: top
    objectClass: person
    objectClass: organizationalPerson
    objectClass: inetOrgPerson
    cn: lorenzo
    sn: Lorenzo
    uid: lorenzo
    userPassword: VMware123!VMware123!
    
    dn: cn=test,ou=Users,dc=vcf,dc=lab
    objectClass: top
    objectClass: person
    objectClass: organizationalPerson
    objectClass: inetOrgPerson
    cn: test
    sn: TestUser
    uid: test
    userPassword: VMware123!VMware123!
    
    dn: cn=Administrators,ou=Groups,dc=vcf,dc=lab
    objectClass: top
    objectClass: groupOfNames
    cn: Administrators
    member: cn=admin,ou=Users,dc=vcf,dc=lab
    member: cn=admin.vcf,ou=Users,dc=vcf,dc=lab
    
    dn: cn=Users,ou=Groups,dc=vcf,dc=lab
    objectClass: top
    objectClass: groupOfNames
    cn: Users
    member: cn=lorenzo,ou=Users,dc=vcf,dc=lab
    member: cn=test,ou=Users,dc=vcf,dc=lab

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openldap
  labels:
    app: openldap
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openldap
  template:
    metadata:
      labels:
        app: openldap
    spec:
      containers:
      - name: openldap
        image: osixia/openldap:1.5.0
        ports:
        - containerPort: 389
          hostPort: 389 # <-- Directly exposes 389 on K8s node IP
          name: ldap
        env:
        - name: LDAP_ORGANISATION
          value: "VCF Lab"
        - name: LDAP_DOMAIN
          value: "vcf.lab"
        - name: LDAP_ADMIN_PASSWORD
          value: "VMware123!VMware123!"
        volumeMounts:
        # Read-only ConfigMap for the initial injection
        - name: configmap-volume
          mountPath: /custom-ldif
        # Mount for data persistence (database)
        - name: ldap-data
          mountPath: /var/lib/ldap
          subPath: database
        # Mount for configuration persistence
        - name: ldap-data
          mountPath: /etc/ldap/slapd.d
          subPath: config
        
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh
              - -c
              - |
                while ! ldapsearch -x -H ldap://localhost -D "cn=admin,dc=vcf,dc=lab" -w 'VMware123!VMware123!' -b "dc=vcf,dc=lab" > /dev/null 2>&1; do sleep 2; done;
                # The command will fail silently if the users already exist in the PVC, thanks to "|| true"
                ldapadd -x -H ldap://localhost -D "cn=admin,dc=vcf,dc=lab" -w 'VMware123!VMware123!' -f /custom-ldif/01-struttura.ldif || true
                
      volumes:
      - name: configmap-volume
        configMap:
          name: ldap-bootstrap-ldif
      - name: ldap-data
        persistentVolumeClaim:
          claimName: ldap-data-pvc

---
apiVersion: v1
kind: Service
metadata:
  name: openldap-service
  labels:
    app: openldap
spec:
  type: ClusterIP
  ports:
  - port: 389
    targetPort: 389
    protocol: TCP
    name: ldap
  selector:
    app: openldap

That's it.

Wednesday, April 1, 2026

[HomeLAB] Playing with "Memory Tiering" in a nested environment

Although nested virtualization is NOT officially supported, many users (myself included) rely on nested ESXi for testing, development, and learning purposes. The purpose of this article is therefore to get familiar with the process of enabling and using "Memory Tiering" on an ESXi host.

Below, we will guide you step by step through creating a nested ESXi host on which you can enable memory tiering, even if you don't physically have NVMe disks installed in the host.

Disclaimer: The instructions below are NOT for production use; they are for learning purposes only.

Let's walk through:

1. Create a Nested ESXi (version 9.0)

First, let's create our nested environment by creating a new virtual machine (1) ...

Let's give the VM a name, for instance "ESXi-NVMe" (2), set the following parameters as shown below (3), and click NEXT (4).

  • Compatibility: ESXi 9.0 virtual machine
  • Guest OS family: Other
  • Guest OS version: VMware ESXi 9.0 or later

Select the right Storage (5) with enough space and click NEXT (6).

Set up as many CPUs as you like, in my case 10 vCPUs (2 cores per socket) (7), then enable both Hardware Virtualization and IOMMU (8) as shown in the image below, and scroll down ...

In my case, for testing purposes, I configure 16 GB of RAM (9) and a 32 GB hard disk (10) in thin provisioning, add 3 more network adapters (11), and connect (12) the ISO image (ISO/VMware-VMvisor-Installer-9.0.2.0.25148076.x86_64.iso) (13) for the installation; then I confirm by pressing SELECT (14).

Review and confirm by clicking FINISH (15).

The VM has been created; let's proceed with the installation by turning it on: Power ON (16).

Hit enter to Continue > (F11) Accept and Continue (17) ...

... select the disk (18) and Continue (19) ...

... select the right keyboard layout (Italian in my case) and hit (Enter) to continue (21) ...

... insert your own password and Continue (23) ...

Continue by pressing Enter (24) to accept the warning about the unsupported CPU...

... Enter (25) again to force installation ...

... then (F11) Install (26).

Once the installation is complete, reboot by pressing Enter (27).

Once ESXi has rebooted, log in and customize the settings to suit your needs.
Customizing settings isn't part of this guide, but remember to enable SSH before shutting down the nested ESXi (which will be useful later).

I log in ..

I set the IP to static and confirm it.

Enable SSH (29).

Press (F12) for the shutdown options (30) > (F2) to shut down ... and wait until it is completely powered off.





2. Add the NVMe controller

Now, we add the NVMe controller to the nested ESXi host to simulate an NVMe disk attached to it.
To do so, select the VM (in my case "ESXi-NVMe") (30) > Actions (31) > Edit Settings (32).

Click on Add other device (33) > NVMe controller (34)

Click again on Add other device (35) > SCSI controller (36) ... we will use this to add a capacity disk to place the VMs on.

We add the disks by clicking on Add hard disk > New standard hard disk (37) ...

Let's configure the first disk to be used to enable memory tiering in the following way (see image below) ...

  • Maximum Size: 20 GB (38)
  • Disk Provisioning: Thin Provisioned (39)
  • Controller location: NVMe controller 0 (40)

We add an additional disk (task (37)) to use as a capacity disk and configure it as shown in the image below...

  • Maximum Size: 100 GB (41)
  • Disk Provisioning: Thin Provisioned (42)
  • Controller location: SCSI controller 1 (43)

... SAVE (44).





3. Activating Memory Tiering

Here, we configure Memory Tiering using ESXCLI commands at the host level, following the official documentation "How do I Activate and Deactivate Memory Tiering in vSphere". However, Memory Tiering can be activated at the cluster, ESX host, or virtual machine level using the vSphere Client, ESXCLI commands, or PowerCLI scripts.

To Activate Memory Tiering, first power on (45) the VM.

We connect to the host via SSH, put the host into maintenance mode, and list the storage adapters and devices on the host by running the two commands below (46).
It is important that no changes are made to partitions on NVMe devices used as tiered memory while VMs are operational.

[root@Host-005:~] esxcli system maintenanceMode set --enable true
[root@Host-005:~] esxcli storage core adapter device list

We list the disks with the command below (47) to choose the NVMe device to use as tiered memory, and we note the NVMe device path (in our case /vmfs/devices/disks/t10.NVMe____VMware_Virtual_NVMe_Disk________________VMware_NVME_0000____00000001 -> eui.6aee49956e658ca4000c29618ae19141 (48)).

[root@Host-005:~] ls -ltrh /vmfs/devices/disks/
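To avoid copying the long t10.NVMe device name by hand, it can also be picked out programmatically. This is a sketch that assumes exactly one virtual NVMe disk is attached, as in this lab:

```shell
# Select the first t10.NVMe* entry under /vmfs/devices/disks (lab assumption:
# only one virtual NVMe disk is attached to the nested host).
NVME_DEV="/vmfs/devices/disks/$(ls /vmfs/devices/disks/ | grep '^t10\.NVMe' | head -n 1)"
echo "${NVME_DEV}"
```

The resulting path can then be passed to the esxcli system tierdevice create command in the next step.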

It is possible to check this directly in the ESXi host client under Storage > Devices, selecting the local NVMe disk (Capacity 20 GB) ...

Normally, we should list and then delete each existing partition on the device; however, that isn't necessary in our case, because the disk we'll use is new and not yet initialized.

Let's create a tiered partition on the NVMe device and then verify that the tiered partition has been created, using the following commands ...

[root@Host-005:~] esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____VMware_Virtual_NVMe_Disk________________VMware_NVME_0000____00000001 
[root@Host-005:~] esxcli system tierdevice list

...and check the current memory seen by the host with the command ...

[root@Host-005:~] esxcli hardware memory get

Let's verify the partition creation with the command ...

[root@Host-005:~] ls -ltrh /vmfs/devices/disks/

We activate Memory Tiering at the kernel level (running the command below) and reboot the host ...

[root@Host-005:~] esxcli system settings kernel set -s MemoryTiering -v TRUE
[root@Host-005:~] 
[root@Host-005:~] reboot





4. Checks post reboot

After rebooting, we check if the NVMe device has a tier partition created.

What we immediately notice after the reboot is that, although the VM is configured with 16 GB of memory (50), the ESXi console shows 36 GB (51) of memory: the original 16 GB plus the 20 GB NVMe tier device.
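The arithmetic behind those two numbers is simply the VM's DRAM plus the size of the NVMe tier device:

```shell
# VM-configured DRAM plus the NVMe tier device created earlier.
dram_gb=16
nvme_tier_gb=20
echo "Total memory reported: $(( dram_gb + nvme_tier_gb )) GB"   # prints 36 GB
```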

We log in via SSH and check the memory by running the command ...

[root@Host-005:~] esxcli hardware memory get





5. Configuring the DRAM to NVMe Ratio

By default, hosts are configured to use a DRAM-to-NVMe ratio of 4:1. This can be configured per host, to evaluate performance with different ratios, by changing the value of the Mem.TierNvmePct parameter.

The host advanced setting for Mem.TierNvmePct sets the amount of NVMe to be used as tiered memory using a percentage equivalent of the total amount of DRAM. A host reboot is required for any changes to this setting to take effect.

For example, setting the value to 25 configures an amount of NVMe as tiered memory equivalent to 25% of the total amount of DRAM. This is known as a DRAM-to-NVMe ratio of 4:1. A host with 1 TB of DRAM would use 256 GB of NVMe as tiered memory.

Likewise, setting the value to 50 configures an amount of NVMe equivalent to 50% of the total DRAM, a DRAM-to-NVMe ratio of 2:1. A host with 1 TB of DRAM would use 512 GB of NVMe as tiered memory.

Finally, setting the value to 100 configures an amount of NVMe equivalent to 100% of the total DRAM, a DRAM-to-NVMe ratio of 1:1. A host with 1 TB of DRAM would use 1 TB of NVMe as tiered memory.

It is recommended that the amount of NVMe configured as tiered memory does not exceed the total amount of DRAM.
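Those worked examples boil down to a simple percentage calculation, which can be sketched in shell (illustrative values only, not read from a real host):

```shell
# Mem.TierNvmePct -> NVMe tier size, for a host with 1 TB (1024 GB) of DRAM.
dram_gb=1024
for pct in 25 50 100; do
  nvme_gb=$(( dram_gb * pct / 100 ))
  echo "Mem.TierNvmePct=${pct}: ${nvme_gb} GB NVMe (ratio $(( 100 / pct )):1)"
done
```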

Using the vSphere Client to configure the DRAM-to-NVMe ratio requires the following steps:

  • Log in to the vSphere Client for the cluster.
  • Navigate to Host (52) > Manage (52) > System (53) > Advanced settings (54).
  • Filter or find the option Mem.TierNvmePct (55).

Edit the option Mem.TierNvmePct (56) and set this option to a value between 1 and 400. This value is the percentage of NVMe to be used out of the Total Memory Capacity. The default value is 25.

We set the new value to 400 (57), SAVE (58) the settings, and reboot the host to make the changes effective.

That's all for now; we'll see some tests with workload VMs in a future post.



Useful links:

– KB memory Tiering:
https://knowledge.broadcom.com/external/article?legacyId=95944

– Techdocs Broadcom:
https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/9-0/how-do-i-activate-memory-tiering-in-vsphere-.html

– Using partedUtil command line:
https://knowledge.broadcom.com/external/article?legacyId=1036609

– Memory Tiering tech preview blogpost:
https://blogs.vmware.com/cloud-foundation/2024/07/18/vsphere-memory-tiering-tech-preview-in-vsphere-8-0u3/

– VCF 9 Memory Tiering GA:
https://blogs.vmware.com/cloud-foundation/2025/06/19/advanced-memory-tiering-now-available/



That's it.