Tuesday, April 23, 2024

[vCenter] - How to connect to vCenter via REST API

Issue


Recently I had to create a script that grabs some information from vCenter Server via REST API calls. To do so, I had to write a few lines of code to authenticate on the vCenter, obtain a session, and use it for the subsequent calls. Let's see below how it works ...

Solution


The script must run on an Ubuntu machine, so I decided to write a bash script and use cURL. Information regarding the API calls can be found at the following link: https://developer.vmware.com/apis
More specific information regarding, for example, how to list the VMs already present in the inventory, can be found here.

First of all we have to create a session with the API. This is the equivalent of a login: the operation exchanges the user credentials supplied in the security context for a session token that is used to authenticate subsequent calls. Clients are expected to include the session token in every subsequent call; for REST API calls, the HTTP vmware-api-session-id header field is used for this.

The call looks like this:
curl -sk -u username:password -X POST https://{vCenter}/api/session
The authentication can also be passed as a header parameter containing the Base64-encoded value of username:password, which can be generated like this:
echo -n 'username:password' | base64
Or, putting it all in a single line of code:
curl -ks -H "Authorization: Basic `echo -n 'username:password' | base64`" -X POST https://{vCenter}/api/session
For the subsequent REST API calls we can use the returned vmware-api-session-id.

Assembling everything together, the script looks like the following:
#!/bin/bash

VC=192.168.1.90
ADMIN=administrator@vsphere.local
PASSWORD=VMware1!

# Create a session and capture the returned token (a JSON-quoted string)
Session_ID=`curl -sk -u ${ADMIN}:${PASSWORD} -X POST https://${VC}/api/session`

# Request sent through the session ID (stripping the surrounding quotes from the token)
curl -ks -H "vmware-api-session-id: ${Session_ID:1:-1}" https://${VC}/api/vcenter/vm
The output (the list of VMs in JSON format) is shown below.
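
A couple of optional lines I usually append to the script, as a minimal sketch: pretty-printing the result with jq (assuming jq is installed on the Ubuntu machine) and closing the session when done, since the same /api/session endpoint also accepts a DELETE call to log out.

# Pretty-print the list of VMs (requires jq)
curl -ks -H "vmware-api-session-id: ${Session_ID:1:-1}" https://${VC}/api/vcenter/vm | jq .

# Log out by deleting the session once we are done
curl -ks -H "vmware-api-session-id: ${Session_ID:1:-1}" -X DELETE https://${VC}/api/session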

That's it.

Thursday, April 11, 2024

[MS Windows] - How to extend the trial version

Issue


Working within a LAB and/or in a nested environment often requires the deployment of Windows Server machines (for instance, for local services such as Active Directory, DNS, NTP and so on).
It may happen that the tests have not been completed yet, but the Windows trial period has already expired.
Below I'm reporting a few steps to extend the Windows trial period for another 180 days.

Solution


It is possible to "rearm" MS Windows (in my case Server 2022), extending the trial period by 180 days each time, up to 6 times. Let's see how to do that ...

Taking a look at the desktop, we can see the countdown in the bottom-right corner. In my case, the period has already expired.
Let's open a PowerShell window as administrator and run ...

slmgr -dlv
... as we can see above, we still have 6 shots left (Remaining Windows rearm count).

... then we rearm Windows, running the command below and verifying that it completes successfully ...

slmgr -rearm
... we must restart the server, either running "Restart-Computer" in PowerShell or ...

shutdown /f /t 0 /r

When the server is up and running again, let's check that everything worked fine by running the following commands ...
slmgr -dli
slmgr -ato
slmgr -dlv
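
A small optional tweak: slmgr on its own pops up dialog boxes through wscript, so if we prefer to read the output directly in the PowerShell window we can invoke the underlying script with cscript (same switches, console output):

cscript //nologo C:\Windows\System32\slmgr.vbs /dlv
cscript //nologo C:\Windows\System32\slmgr.vbs /dli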

That's it.

[Nested ESXi] - How to properly configure it

Issue


Working within a LAB and/or in a nested environment often requires the deployment of new ESXi hosts. The fastest way to deploy new ESXi hosts is to clone them from a master VM.
Below are a few steps to follow to properly create a working VM clone in a nested environment.

Solution


  • First of all, we do a normal installation of our ESXi master VM in the nested environment, giving it minimal resources, such as:
    - 2 vCPU
    - 8 GB of RAM
    - 20GB of Disk (Thin Provision)
    - 4 vNIC

  • When it has started, we connect via SSH and configure the VMkernel interface to follow the MAC address of the virtual NIC (so each clone gets its own vmk0 MAC address) by running the following command:

    esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1

  • To have a unique UUID on each host, we need to delete the "/system/uuid" entry stored in /etc/vmware/esx.conf. To do this we can edit the file and delete the corresponding line, or run the command below, which replaces the corresponding line with an empty line.

    sed -i 's/^\(\/system\/uuid\).*//' /etc/vmware/esx.conf

  • It is possible to shut down the ESXi master VM and convert it to a template, or leave it as is, to be cloned later when needed.

  • When needed, we can clone the master VM to deploy our ESXi nodes. At the first boot of each clone a new UUID will be generated.
    Once the ESXi VM clones are powered on, we can change the network settings on them (IP addresses, hostnames, etc.), as shown in the sketch after this list.

  • Let's generate new certificates by running the following command:

    /sbin/generate-certificates

  • Let's restart the following services, in order to make the host ready for our LAB.

    /etc/init.d/hostd restart && /etc/init.d/vpxa restart
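
For reference, below is a minimal sketch of the post-clone network customization via esxcli. The hostname and IP addresses are just example values for a hypothetical second node and must be adapted to your LAB addressing.

    # Set the hostname and domain of the cloned node
    esxcli system hostname set --host=esxi-02 --domain=lab.local

    # Assign a static IP address to the management VMkernel interface
    esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.1.92 -N 255.255.255.0

    # Set the default gateway and the DNS server
    esxcli network ip route ipv4 add -n default -g 192.168.1.1
    esxcli network ip dns server add --server=192.168.1.10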

That's it.

Friday, February 23, 2024

NSX UI does not load information

Issue


NSX UI does not load for one manager node holding the VIP
NSX Version 4.1.2.1.0.22667794

Error message:
Feb 8, 2024, 3:22:39 PM : Error: Failed to fetch System details. Please contact the administrator. Error: 400 : "{
  "details" : "SEARCH_FRAMEWORK_INITIALIZATION_FAILED, params: [manager]",
  "httpStatus" : "BAD_REQUEST",
  "error_code" : 60525,
  "module_name" : "nsx-search",
  "error_message" : "Search framework initialization failed, please restart the service via 'restart service manager'."
}" (Error code: 513002)

Solution


In my case the solution was quite simple. I restarted the manager service on the NSX Manager appliance holding the VIP, as per the image below ...

> restart service manager
... and it worked
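
To double-check the state of the service before and after the restart, the NSX CLI on the manager node can also be queried as below (a quick sketch; the output format may vary between versions):

> get service manager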

That's it.

Friday, February 9, 2024

Adding a Static Route to macOS

Issue


A quick post just to remind myself how to add a static route to macOS when I need it.

Solution


sudo route -n add -net X.X.X.X/Z Y.Y.Y.Y
Symbol legend:
X.X.X.X is the network that we want to reach
Z is the subnet mask in CIDR notation
   255.0.0.0 = 8
   255.255.0.0 = 16
   255.255.255.0 = 24
Y.Y.Y.Y is the IP address (the gateway) behind which we find the subnet we want to reach


Examples
If we want to reach the subnet 172.16.11.0/24 and we know that it is behind the IP 192.168.1.45 (which is not our default gateway), we have to add the route as follows:

sudo route -n add -net 172.16.11.0/24 192.168.1.45
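
A couple of extra commands I find handy, as a small sketch based on the example above: verifying that the route has been installed and removing it when it is no longer needed.

# Check that the route is in the routing table (192.168.1.45 should appear as the gateway)
netstat -rn | grep 172.16.11

# Remove the route when it is no longer needed
sudo route -n delete -net 172.16.11.0/24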

That's it.

Thursday, September 7, 2023

[NAPP] Helm pull chart operation failed.

Issue


Yesterday I was trying to deploy the NSX Application Platform (NAPP) in an automated way. Below is my environment:

• NSX-T version 3.2.3.0.0.21703624
• NAPP version 4.0.1-0.0-20606727

During the deployment I received the following error message:

status Code is 400, body: {"httpStatus" : "BAD REQUEST", "error_code" : 46013, "module_name" : "NAPP", "error_message" : "Helm pull chart operation failed. Error: failed to fetch https://projects.registry.vmware.com/chartrepo/nsx_application_platform/charts/nsxi-platform-standard-4.0.1-0.0-20606727.tgz : 404 Not Found\\n"}

Then I tried to deploy it manually, but I received the following error message (very similar to the previous one):

Error: Helm pull chart operation failed. Error: failed to fetch provenance https://projects.registry.vmware.com/chartrepo/nsx_application_platform/charts/nsxi-platform-standard-4.0.1-0.0-20606727.tgz.prov\n (Error code: 46013)


Before looking at the solution, here is a brief introduction to what NAPP is.

The NSX Application Platform is a modern microservices platform that hosts the following NSX features, which collect, ingest, and correlate network traffic data in your NSX environment:
  • VMware NSX® Intelligence™
  • VMware NSX® Network Detection and Response™
  • VMware NSX® Malware Prevention
  • VMware NSX® Metrics

NAPP is a microservices application platform based on Kubernetes and can be installed in two ways:
  • manually
  • automated

By choosing an automated NAPP installation, the customer does not need to be concerned with the installation and maintenance of the individual NAPP platform components including TKGs (Kubernetes).
Further information on getting started with the NSX Application Platform (NAPP) can be found here.

Solution


Asking VMware GSS for help, I was told the following:

"Due to an upgrade of the VMware Public Harbor registry to version 2.8.x ChartMuseum support has been deprecated and removed. And OCI is now the only supported access method. This unfortunately impacts NAPP deployment using NSX version 3.2.x which relies on ChartMuseum.

Option - 1 - Upgrade the environment to 3.2.3.1 and proceed with OCI URLs. Alternatively, any NSX 4.x release will also work.

Option - 2 - Wait for patches.

Once the environment is upgraded use the following URLs

Helm Repository - oci://projects.registry.vmware.com/nsx_application_platform/helm-charts
Docker Registry - projects.registry.vmware.com/nsx_application_platform/clustering"
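
Once on a release that supports OCI, a quick way to verify from any machine with Helm 3.8+ that the chart version is reachable is shown below. This is just a sketch: the chart name nsxi-platform-standard and the version are inferred from the error message above.

helm show chart oci://projects.registry.vmware.com/nsx_application_platform/helm-charts/nsxi-platform-standard --version 4.0.1-0.0-20606727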

That's it.

Monday, August 14, 2023

[NAPP] Deployment gets stuck on "Create Guest Cluster"

Issue


The deployment of NAPP gets stuck on "Create Guest Cluster - Waiting for Guest cluster napp-cluster-01 to be available for login ..."
Looking at the vCenter, we can see that the SupervisorControlPlaneVM(s) have been created correctly, as well as the namespace and the napp-cluster-01-control-plane VM.
What we don't see here are the worker VMs.

Solution


To investigate and troubleshoot the issue we connect via SSH to the SupervisorControlPlaneVM. I will explain in another post how to get the credentials to access the Supervisor Control Plane (SV CP).

Describing the NAPP TKC ...
# kubectl describe tkc napp-cluster-01 -n nsx-01
we found 2 errors:

Message:          2 errors occurred:
                         * failed to configure DNS for /, Kind= nsx-01/napp-cluster-01: unable to reconcile kubeadm ConfigMap's CoreDNS info: unable to retrieve kubeadm Configmap from the guest cluster: configmaps "kubeadm-config" not found
                         * failed to configure kube-proxy for /, Kind= nsx-01/napp-cluster-01: unable to retrieve kube-proxy daemonset from the guest cluster: daemonsets.apps "kube-proxy" not found


Looking at the deployment state of the worker nodes ...
# kubectl get wcpmachine,machine,kcp,vm -n nsx-01

.. we saw that the worker nodes were still in the Pending state. We then described the worker node ...
# kubectl describe wcpmachine.infrastructure.cluster.vmware.com/napp-cluster-01-workers-qlpm6-7h2qr -n nsx-01
We also debugged Kubernetes with the crictl command, looking inside the logs
... and so on.
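
For reference, these are the kind of crictl checks we ran on the node, as a rough sketch (the container ID below is just a placeholder):

# List all containers on the node, including the exited ones
crictl ps -a

# Tail the logs of a suspicious container
crictl logs --tail 100 <container-id>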

We also tried to ping from the Supervisor cluster to the TKC VIP:
# kubectl get svc -A | grep -i napp-cluster-01
nsx-01                                      napp-cluster-01-control-plane-service                           LoadBalancer   10.96.1.25    192.168.100.25   6443:32296
In the end, we discovered that we were unable to:
  • ping from SupervisorControlPlane to Tanzu Kubernetes Cluster VIP
  • ping from TKC CP to Supervisor CP
After allowing the connections on the firewall from the SV CP to the TKC VIP and from the TKC CP to the Supervisor CP, the error no longer appeared, but the state was still Pending.
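
As a quick sketch of the connectivity checks we ran from a SupervisorControlPlaneVM (the VIP 192.168.100.25 and port 6443 come from the service output above):

# Basic reachability of the TKC VIP
ping -c 3 192.168.100.25

# Check that the TKC API endpoint answers on the load-balanced port
curl -k https://192.168.100.25:6443/version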

So, we removed the namespace and re-deployed; now the control plane and worker nodes are up and running and we can continue with the NAPP installation.

That's it.