Disclaimer:
The steps described below are fine for a lab environment, but they do not meet the security requirements of a production deployment (e.g., no LDAPS/TLS by default, simplified RBAC, etc.).

As the title suggests, below I'll show you the steps to enable and configure LDAP as an identity source, leveraging the Kubernetes cluster instance configured inside the HoloRouter appliance; it can then be consumed in your internal lab by the VCF 9.0 embedded identity broker.
We will create an LDAP server instance on the HoloRouter appliance, with the directory structure laid out as follows (the complete structure can be viewed and modified in the ldap-vcf.yaml file at the bottom of the page).
1. The Base DN
The root of our directory is dc=vcf,dc=lab. This is the entry point VCF will use to begin its search for identities.
2. Organizational Units (OUs)
We have segmented the directory into two logical containers to keep things clean:
- ou=Users,dc=vcf,dc=lab: This is where all individual user accounts reside.
- ou=Groups,dc=vcf,dc=lab: This contains the group definitions used for Role-Based Access Control (RBAC).
3. Users and Identities
The lab environment currently features four distinct users:
- Administrative Accounts:
  admin and admin.vcf (used for management tasks).
- Standard Users:
  lorenzo and test.
Note: All users utilize the inetOrgPerson object class, which is a standard requirement for most modern identity brokers.
4. Group Memberships
The directory uses the groupOfNames object class, which is the most common way to handle memberships in LDAP. We have defined two main groups:
- cn=Administrators: Contains the two admin accounts. They can be associated with administrative roles in VCF 9.
- cn=Users: Contains the standard lab users (lorenzo and test).
In VCF 9, use the following mapping based on our structure:
| Field | Value |
|---|---|
| Base DN | dc=vcf,dc=lab |
| User Search Base | ou=Users,dc=vcf,dc=lab |
| Group Search Base | ou=Groups,dc=vcf,dc=lab |
| Common Name Identifier | cn or uid |
| Group Member Attribute | member |
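To see how these fields fit together, here is a small offline sketch (plain shell, no LDAP server needed) that extracts a user and their group memberships from a subset of the bootstrap LDIF, mimicking the lookups the identity broker performs against the User Search Base and the member attribute:

```shell
# A subset of the lab LDIF (from ldap-vcf.yaml), embedded for the demo.
LDIF='dn: cn=lorenzo,ou=Users,dc=vcf,dc=lab
uid: lorenzo

dn: cn=Users,ou=Groups,dc=vcf,dc=lab
member: cn=lorenzo,ou=Users,dc=vcf,dc=lab
member: cn=test,ou=Users,dc=vcf,dc=lab'

# User lookup: match on the Common Name Identifier (uid or cn)
echo "$LDIF" | grep -i '^uid: lorenzo'
# → uid: lorenzo

# Group resolution: find group entries listing the user's DN as a member
echo "$LDIF" | grep -i '^member: cn=lorenzo,'
# → member: cn=lorenzo,ou=Users,dc=vcf,dc=lab
```

This is only an illustration of the attribute relationships; the broker issues real LDAP filter queries rather than text matches.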
However, before creating the LDAP structure and exposing it on an external port, we should create a persistent volume (as defined in the ldap-pv.yaml file at the bottom of the page). This ensures that even if we upgrade the container image or restart the cluster/appliance, our lab users remain intact. In our case, on the HoloRouter we'll create (if it doesn't already exist) and use the local directory "/opt/ldap-data".
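Once the pod has started for the first time, that hostPath directory will contain two subdirectories, created by the subPath mounts defined in the Deployment (database for /var/lib/ldap and config for /etc/ldap/slapd.d). A quick local sketch of that layout, simulated in a temporary directory rather than the real /opt/ldap-data:

```shell
# Simulate the layout the hostPath volume ends up with on the node.
# (The real /opt/ldap-data is created automatically by DirectoryOrCreate.)
DATA_DIR=$(mktemp -d)              # stand-in for /opt/ldap-data
mkdir -p "$DATA_DIR/database"      # subPath: database -> /var/lib/ldap
mkdir -p "$DATA_DIR/config"        # subPath: config   -> /etc/ldap/slapd.d
ls "$DATA_DIR"
```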
Let's see the step-by-step procedure below:
1. Download the appliance, deploy, and run it.
First of all, download the HoloRouter appliance from this link, deploy it, and power it on. Detailed steps on how to deploy an OVA are beyond the scope of this post.
2. Log in and verify that the services have started correctly.
As a second step, once the appliance is up and running, we log in and verify the services by running the following command:
kubectl get pods,service,deployment,configmap,pv -A
If everything went well, the pods should show a STATUS of Running, as in the image below.
3. Create/Copy .yaml files
Now let's create the manifest files needed to deploy the Kubernetes objects.
For convenience, I'll create a "vcf" directory (in /root/vcf), where I'll create the following files: ldap-pv.yaml (for creating the persistent volume) and ldap-vcf.yaml (for instantiating the LDAP service and populating it as described above).
root@holorouter [ ~ ]# mkdir -p ./vcf
root@holorouter [ ~ ]# cd vcf
root@holorouter [ ~/vcf ]# vi ldap-pv.yaml
Copy and paste the contents of the file "ldap-pv.yaml" from the bottom of the page and save the file.
Do the same for the file "ldap-vcf.yaml".
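Before applying anything, both manifests can be validated client-side with kubectl's dry-run mode, without touching the cluster. A small sketch (it skips cleanly if kubectl or the files are not present on the machine where you run it):

```shell
# Validate the manifests without creating anything on the cluster.
if command -v kubectl >/dev/null 2>&1 && [ -f ldap-pv.yaml ] && [ -f ldap-vcf.yaml ]; then
  kubectl apply --dry-run=client -f ldap-pv.yaml -f ldap-vcf.yaml
else
  echo "kubectl or manifests not present; skipping validation"
fi
```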
Note: The password used for all the users is "VMware123!VMware123!"
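If you prefer not to keep the password in clear text inside the LDIF, you can generate a salted hash with slappasswd (assuming the OpenLDAP server utilities are installed) and use the resulting {SSHA} string as the userPassword value instead:

```shell
# Generate an {SSHA} hash for userPassword (requires the slapd utilities).
if command -v slappasswd >/dev/null 2>&1; then
  slappasswd -s 'VMware123!VMware123!'
else
  echo "slappasswd not installed; skipping"
fi
```

For a throwaway lab the clear-text value is fine; this is just an option for tidier hygiene.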
4. Create the Persistent Volume
To create and verify the creation of the persistent volume, we run the following commands in sequence:
root@holorouter [ ~/vcf ]# kubectl apply -f ldap-pv.yaml
....
root@holorouter [ ~/vcf ]# kubectl get pv -A
As you can see from the image above, the PV has been successfully created, so let's proceed to the next step...
5. OpenLDAP Framework Deployment
One of the last steps is to create the LDAP instance itself. We do this by running the following commands:
root@holorouter [ ~/vcf ]# kubectl apply -f ldap-vcf.yaml
....
root@holorouter [ ~/vcf ]# kubectl get pods -A
If everything went as it should, the openldap pod status should be Running.
Note: "osixia/openldap" was used as the container image for LDAP.
6. Tests
Let's now check that everything is working correctly (see image below):
kubectl get pods,service,deployment,configmap,pv -A
Since everything seems to be working fine, let's make an LDAP request, testing a standard user ("lorenzo") login with the following command:
ldapwhoami -x -H ldap://10.106.209.20 -D "cn=lorenzo,ou=Users,dc=vcf,dc=lab" -w 'VMware123!VMware123!'
If the bind was successful, ldapwhoami returns the authenticated DN, as shown in the image above.
This time, let's run the same test from an external machine, using the HoloRouter's public IP.
We can use the following command to perform a complete discovery of the newly configured LDAP directory:
ldapsearch -x -H ldap://192.168.1.246 -D "cn=admin,dc=vcf,dc=lab" -w 'VMware123!VMware123!' -b "dc=vcf,dc=lab"
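Beyond the full dump, a more targeted query is often useful, for example resolving which groups a user belongs to (the same lookup the identity broker performs through the member attribute). The sketch below only assembles and prints the command; 192.168.1.246 is the appliance address used in the example above, so adjust it to your environment before running the printed line:

```shell
# Build a group-membership query for user "lorenzo" (illustrative only).
LDAP_URI="ldap://192.168.1.246"                      # HoloRouter public IP (adjust)
USER_DN="cn=lorenzo,ou=Users,dc=vcf,dc=lab"
FILTER="(&(objectClass=groupOfNames)(member=$USER_DN))"

echo "ldapsearch -x -H $LDAP_URI -D \"cn=admin,dc=vcf,dc=lab\" -w 'VMware123!VMware123!' -b \"ou=Groups,dc=vcf,dc=lab\" \"$FILTER\" cn"
```

Against the lab directory, that query should return cn=Users, since lorenzo is listed as a member of that group.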
As a next step, we just need to see how to use/integrate the newly configured LDAP in VCF 9. This will be covered in a future post.
Below are the YAML files ...
"ldap-pv.yaml"
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ldap-data-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/opt/ldap-data"    # The data will be saved in this physical folder on the node
    type: DirectoryOrCreate   # If the folder does not exist, it is created automatically
"ldap-vcf.yaml"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ldap-data-pvc
  labels:
    app: openldap
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ldap-bootstrap-ldif
  labels:
    app: openldap
data:
  01-struttura.ldif: |-
    dn: ou=Users,dc=vcf,dc=lab
    objectClass: organizationalUnit
    ou: Users

    dn: ou=Groups,dc=vcf,dc=lab
    objectClass: organizationalUnit
    ou: Groups

    dn: cn=admin,ou=Users,dc=vcf,dc=lab
    objectClass: top
    objectClass: person
    objectClass: organizationalPerson
    objectClass: inetOrgPerson
    cn: admin
    sn: Administrator
    uid: admin
    userPassword: VMware123!VMware123!

    dn: cn=admin.vcf,ou=Users,dc=vcf,dc=lab
    objectClass: top
    objectClass: person
    objectClass: organizationalPerson
    objectClass: inetOrgPerson
    cn: admin.vcf
    sn: AdminVCF
    uid: admin.vcf
    userPassword: VMware123!VMware123!

    dn: cn=lorenzo,ou=Users,dc=vcf,dc=lab
    objectClass: top
    objectClass: person
    objectClass: organizationalPerson
    objectClass: inetOrgPerson
    cn: lorenzo
    sn: Lorenzo
    uid: lorenzo
    userPassword: VMware123!VMware123!

    dn: cn=test,ou=Users,dc=vcf,dc=lab
    objectClass: top
    objectClass: person
    objectClass: organizationalPerson
    objectClass: inetOrgPerson
    cn: test
    sn: TestUser
    uid: test
    userPassword: VMware123!VMware123!

    dn: cn=Administrators,ou=Groups,dc=vcf,dc=lab
    objectClass: top
    objectClass: groupOfNames
    cn: Administrators
    member: cn=admin,ou=Users,dc=vcf,dc=lab
    member: cn=admin.vcf,ou=Users,dc=vcf,dc=lab

    dn: cn=Users,ou=Groups,dc=vcf,dc=lab
    objectClass: top
    objectClass: groupOfNames
    cn: Users
    member: cn=lorenzo,ou=Users,dc=vcf,dc=lab
    member: cn=test,ou=Users,dc=vcf,dc=lab
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openldap
  labels:
    app: openldap
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openldap
  template:
    metadata:
      labels:
        app: openldap
    spec:
      containers:
        - name: openldap
          image: osixia/openldap:1.5.0
          ports:
            - containerPort: 389
              hostPort: 389   # <-- Directly exposes 389 on the K8s node IP
              name: ldap
          env:
            - name: LDAP_ORGANISATION
              value: "VCF Lab"
            - name: LDAP_DOMAIN
              value: "vcf.lab"
            - name: LDAP_ADMIN_PASSWORD
              value: "VMware123!VMware123!"
          volumeMounts:
            # Read-only ConfigMap for the initial injection
            - name: configmap-volume
              mountPath: /custom-ldif
            # Mount for data persistence (database)
            - name: ldap-data
              mountPath: /var/lib/ldap
              subPath: database
            # Mount for configuration persistence
            - name: ldap-data
              mountPath: /etc/ldap/slapd.d
              subPath: config
          lifecycle:
            postStart:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - |
                    while ! ldapsearch -x -H ldap://localhost -D "cn=admin,dc=vcf,dc=lab" -w 'VMware123!VMware123!' -b "dc=vcf,dc=lab" > /dev/null 2>&1; do sleep 2; done;
                    # The command fails silently if the users already exist in the PVC, thanks to "|| true"
                    ldapadd -x -H ldap://localhost -D "cn=admin,dc=vcf,dc=lab" -w 'VMware123!VMware123!' -f /custom-ldif/01-struttura.ldif || true
      volumes:
        - name: configmap-volume
          configMap:
            name: ldap-bootstrap-ldif
        - name: ldap-data
          persistentVolumeClaim:
            claimName: ldap-data-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: openldap-service
  labels:
    app: openldap
spec:
  type: ClusterIP
  ports:
    - port: 389
      targetPort: 389
      protocol: TCP
      name: ldap
  selector:
    app: openldap
That's it.