Thursday, April 11, 2024

[MS Windows] - How to extend trial version

Issue


Working within a LAB and/or in a nested environment often requires the deployment of Windows Server machines (for instance, for local services such as Active Directory, DNS, NTP and so on).
It may happen that the tests have not been completed yet, but the Windows trial period has already expired.
Below are a few steps to extend the Windows trial period for another 180 days.

Solution


It is possible to "rearm" MS Windows (in my case Server 2022), extending the trial period by 180 days at a time, up to 6 times. Let's see how to do that ...

Taking a look at the desktop, we can see the countdown in the bottom-right corner. In my case, the period has expired.
Let's run PowerShell as administrator and execute ....

slmgr -dlv
.. as we can see above, we still have 6 rearms available (Remaining Windows rearm count).

.. then we rearm Windows by running the command below and verifying that it completes successfully ..

slmgr -rearm
.. we must restart the server, either by running "Restart-Computer" in PowerShell or ..

shutdown /f /t 0 /r

When the server is back up and running, let's check that everything worked fine by running the following commands (-dli shows the basic license information, -ato triggers activation, and -dlv shows the detailed license information, including the remaining rearm count)...
slmgr -dli
slmgr -ato
slmgr -dlv

That's it.

[Nested ESXi] - How to properly configure it

Issue


Working within a LAB and/or in a nested environment often requires the deployment of new ESXi hosts. The fastest way to deploy new ESXi hosts is to clone them from a master VM.
Below are a few steps to follow to properly create a working VM clone in a nested environment, plus a quick verification sketch at the end.

Solution


  • First of all, we do a normal installation of our ESXi master VM in a nested environment, giving it minimal resources, such as:
    - 2 vCPU
    - 8 GB of RAM
    - 20GB of Disk (Thin Provision)
    - 4 vNIC

  • Once the VM has started, we connect via SSH and make the VMkernel interface follow the hardware (vNIC) MAC address, so that each clone gets its own VMkernel MAC, by running the following command:

    esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1

  • To have a unique UUID on each host, we need to delete the "/system/uuid" entry stored in /etc/vmware/esx.conf. To do this we can edit the file and delete the corresponding line, or run the command below, which replaces the corresponding line with an empty line.

    sed -i 's/^\(\/system\/uuid\).*//' /etc/vmware/esx.conf

  • It is possible to shut down the ESXi master VM and convert it to a template, or leave it as is to be cloned later when needed.

  • When needed, we can clone the master VM to deploy our ESXi nodes. At the first boot of each clone, a new UUID will be generated.
    Once the ESXi VM clones are powered on, we can change their network settings (IP addresses, hostnames, etc.).

  • Let's generate a new certificate by running the following command:

    /sbin/generate-certificates

  • Let's restart the following services, in order to make the host ready for our LAB.

    /etc/init.d/hostd restart && /etc/init.d/vpxa restart
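
Once a clone is back up, a quick sanity check confirms that the previous steps took effect. Below is a small sketch, assuming the defaults used in this post:

# confirm that the VMkernel interface follows the hardware (vNIC) MAC address
esxcli system settings advanced list -o /Net/FollowHardwareMac
# confirm that the clone has its own, newly generated host UUID
esxcli system uuid get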

That's it.

Friday, February 23, 2024

NSX UI does not load information

Issue


NSX UI does not load for one manager node holding the VIP
NSX Version 4.1.2.1.0.22667794

Error message:
Feb 8, 2024, 3:22:39 PM : Error: Failed to fetch System details. Please contact the administrator. Error: 400 : "{
  "details" : "SEARCH_FRAMEWORK_INITIALIZATION_FAILED, params: [manager]",
  "httpStatus" : "BAD_REQUEST",
  "error_code" : 60525,
  "module_name" : "nsx-search",
  "error_message" : "Search framework initialization failed, please restart the service via 'restart service manager'."
}" (Error code: 513002)

Solution


In my case the solution was quite simple. I restarted the manager service on the NSX Manager appliance holding the VIP, as shown in the image below...

> restart service manager
... and it worked
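
To confirm that the service has come back up before retrying the UI, its status can be checked from the same NSX Manager CLI session (assuming the standard manager CLI syntax):

> get service manager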

That's it.

Friday, February 9, 2024

Adding a Static Route to macOS

Issue


A quick post just to remind myself how to add a static route to macOS when I need it.

Solution


sudo route -n add -net X.X.X.X/Z Y.Y.Y.Y
Symbol legend:
X.X.X.X is the destination network we want to reach
Z is the subnet mask in CIDR notation
   255.0.0.0 = 8
   255.255.0.0 = 16
   255.255.255.0 = 24
Y.Y.Y.Y is the next-hop (gateway) IP address through which the destination subnet is reachable


Examples
If we want to reach the subnet 172.16.11.0/24 and we know that it is behind the IP 192.168.1.45 (which is not our default gateway), we have to add the route as follows:

sudo route -n add -net 172.16.11.0/24 192.168.1.45
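
To verify that the route has been installed, and to remove it once it is no longer needed, the standard macOS tools can be used (a quick sketch based on the example above):

# check that the new route is present in the routing table
netstat -rn | grep 172.16.11
# remove the static route when it is no longer needed
sudo route -n delete -net 172.16.11.0/24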

That's it.

Thursday, September 7, 2023

[NAPP] Helm pull chart operation failed.

Issue


Yesterday I was trying to deploy the NSX Application Platform (NAPP) in an automated way. Below is my environment:

• NSX-T version 3.2.3.0.0.21703624
• NAPP version 4.0.1-0.0-20606727

when I received the following error message:

status Code is 400, body: {"httpStatus" : "BAD REQUEST", "error_code" : 46013, "module_name" : "NAPP", "error_message" : "Helm pull chart operation failed. Error: failed to fetch https://projects.registry.vmware.com/chartrepo/nsx_application_platform/charts/nsxi-platform-standard-4.0.1-0.0-20606727.tgz : 404 Not Found\\n"}

Then I tried to deploy it manually, but I received the following error message (very similar to the previous one):

Error: Helm pull chart operation failed. Error: failed to fetch provenance https://projects.registry.vmware.com/chartrepo/nsx_application_platform/charts/nsxi-platform-standard-4.0.1-0.0-20606727.tgz.prov\n (Error code: 46013)


Before looking at the solution, here is a brief introduction to what NAPP is.

The NSX Application Platform is a modern microservices platform that hosts the following NSX features that collect, ingest, and correlate network traffic data in your NSX environment.
  • VMware NSX® Intelligence™
  • VMware NSX® Network Detection and Response™
  • VMware NSX® Malware Prevention
  • VMware NSX® Metrics

NAPP is a microservices application platform based on Kubernetes and can be installed in two ways:
  • manually
  • automated

By choosing an automated NAPP installation, the customer does not need to be concerned with the installation and maintenance of the individual NAPP platform components including TKGs (Kubernetes).
Further information can be found in "Getting Started with NSX Application Platform (NAPP)", available here.

Solution


Asking VMware GSS for help, I was told the following:

"Due to an upgrade of the VMware Public Harbor registry to version 2.8.x ChartMuseum support has been deprecated and removed. And OCI is now the only supported access method. This unfortunately impacts NAPP deployment using NSX version 3.2.x which relies on ChartMuseum.

Option - 1 - Upgrade the environment to 3.2.3.1 and proceed with OCI URLs. Alternatively, any NSX 4.x release will also work.

Option - 2 - Wait for patches.

Once the environment is upgraded use the following URLs

Helm Repository - oci://projects.registry.vmware.com/nsx_application_platform/helm-charts
Docker Registry - projects.registry.vmware.com/nsx_application_platform/clustering"
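
After upgrading, a quick way to verify that the chart is reachable over OCI is to pull it manually with the Helm CLI. This is only a sketch: it assumes Helm 3.8 or later (native OCI support) and that the chart keeps the same name/version pattern seen in the old ChartMuseum URL.

helm pull oci://projects.registry.vmware.com/nsx_application_platform/helm-charts/nsxi-platform-standard --version 4.0.1-0.0-20606727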

That's it.

Monday, August 14, 2023

[NAPP] Deployment gets stuck on "Create Guest Cluster"

Issue


Deployment of NAPP gets stuck on "Create Guest Cluster - Waiting for Guest cluster napp-cluster-01 to be available for login ..."
Looking at the vCenter, we can see that the SupervisorControlPlaneVM(s) have been created correctly, as well as the namespace and the napp-cluster-01-control-plane VM.
What we don't see are the worker VMs.

Solution


To investigate and troubleshoot the issue, we connect via SSH to the SupervisorControlPlaneVM. I will explain in another post how to get the credentials to access the SV CP.

Describing the NAPP TKC ...
# kubectl describe tkc napp-cluster-01 -n nsx-01
we found 2 errors:

Message:          2 errors occurred:
                         * failed to configure DNS for /, Kind= nsx-01/napp-cluster-01: unable to reconcile kubeadm ConfigMap's CoreDNS info: unable to retrieve kubeadm Configmap from the guest cluster: configmaps "kubeadm-config" not found
                         * failed to configure kube-proxy for /, Kind= nsx-01/napp-cluster-01: unable to retrieve kube-proxy daemonset from the guest cluster: daemonsets.apps "kube-proxy" not found


Looking at the deployment state of the worker nodes..
# kubectl get wcpmachine,machine,kcp,vm -n nsx-01

.. we saw that the worker nodes were still in Pending state. We describe the worker node ...
# kubectl describe wcpmachine.infrastructure.cluster.vmware.com/napp-cluster-01-workers-qlpm6-7h2qr -n nsx-01
We also debugged Kubernetes with the crictl command, looking inside the logs
... and so on.

Tried to ping from the Supervisor cluster to the TKC VIP:
# kubectl get svc -A | grep -i napp-cluster-01
nsx-01                                      napp-cluster-01-control-plane-service                           LoadBalancer   10.96.1.25    192.168.100.25   6443:32296
At the end, we discovered that we were unable to:
  • ping from the SupervisorControlPlane to the Tanzu Kubernetes Cluster VIP
  • ping from the TKC CP to the Supervisor CP
After allowing these connections on the firewall (from SV CP to TKC VIP and from TKC CP to Supervisor CP), we never saw the error any more, but the state was still Pending.
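
For reference, the reachability test from the Supervisor Control Plane can also be scripted as below. This is only a sketch: it assumes kubectl access on the SV CP and that curl is available there.

# extract the TKC VIP from the LoadBalancer service created for the guest cluster control plane
TKC_VIP=$(kubectl get svc napp-cluster-01-control-plane-service -n nsx-01 -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# probe the guest cluster API server on TCP 6443; any TLS/HTTP answer (even 401/403) proves the VIP is reachable
curl -k --connect-timeout 5 https://${TKC_VIP}:6443/version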

So, we removed the namespace and re-deployed; now the control plane and worker nodes are up and running and we can continue with the NAPP installation.

That's it.

Wednesday, August 9, 2023

[NSX-T] Stale logical-port(s) still connected in NSX-T 3.x

Issue


I was cleaning up a customer's NSX-T configuration to apply some changes when I noticed a lot of logical ports still connected (more than a hundred), even though the corresponding VMs were no longer present in vCenter.

Solution


I immediately thought of creating a script with REST API calls to remove the logical ports from NSX-T Manager. It is possible to find all the NSX-T API calls here.

For the REST API calls within the bash script I will be using cURL, with the suggestions provided here.

First, let's see the REST APIs to use:
  1. to retrieve the IDs of the Logical Ports
  2. to delete the connection

To get the list of Logical-Ports:
GET /api/v1/logical-ports

Below is how the command line looks ...
lorenzo@ubuntu:~$ curl -ksn -X GET https://{NSX-T MANAGER IP}/api/v1/logical-ports
... combining the previous line with the jq and sed commands, we can extract only the ID field of interest.
lorenzo@ubuntu:~$ curl -ksn -X GET https://{NSX-T MANAGER IP}/api/v1/logical-ports | jq '.results[] | .id' | sed 's/"//g'
Outcome in the image below.


To delete a Logical-Port:
DELETE /api/v1/logical-ports/<LogicalPort-ID>?detach=true

We now have all the elements to build the bash script, which looks like the one below...

WARNING: It is provided without warranty. Use it at your own risk and only if you are aware of what you are doing. Note that, as written, the script detaches and deletes every logical port returned by the GET call.

#!/bin/bash
# retrieve all logical port IDs and delete each one of them
curl -ksn -X GET "https://{NSX-T MANAGER IP}/api/v1/logical-ports" | jq '.results[] | .id' | sed 's/"//g' | while read -r LP_ID
do
 # delete the logical port; detach=true forcibly detaches any attachment before deletion
 curl -ksn -X DELETE "https://{NSX-T MANAGER IP}/api/v1/logical-ports/${LP_ID}?detach=true"
 echo " -> ${LP_ID} removed"
done
... launch the script as below ...
lorenzo@ubuntu:~$ bash remove_all_logical_port.sh 
... the result is the following. All Logical-Ports have been removed.
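
As a final double-check, the list call can be run again and the result_count field (part of the standard NSX-T list response) inspected; after the cleanup it should drop to 0:

curl -ksn -X GET "https://{NSX-T MANAGER IP}/api/v1/logical-ports" | jq '.result_count'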


That's it.