Harbor Registry - Automating LDAP/S Configuration - Part 1

2024-11-01 4 min read Cloud Native Harbor Kubernetes Tanzu

The Harbor Registry features in many of my Kubernetes implementations in the field, and in almost every one I am asked about options for configuring LDAP/S authentication for the registry. Unfortunately, neither the community Helm chart nor the Tanzu Harbor package provides native inputs for this setup. Fortunately, the Harbor REST API makes it possible to configure LDAP programmatically. Automating this step ensures consistency across environments, speeds up deployments, and reduces the chance of human error.
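To give a sense of the approach, Harbor exposes a configurations endpoint that accepts LDAP settings as JSON. The following is only a sketch; the hostname, DNs, and values are placeholders, not the configuration from the post.

$ curl -sk -u "admin:${HARBOR_ADMIN_PASSWORD}" \
    -X PUT "https://harbor.example.com/api/v2.0/configurations" \
    -H "Content-Type: application/json" \
    -d '{
      "auth_mode": "ldap_auth",
      "ldap_url": "ldaps://ldap.example.com:636",
      "ldap_search_dn": "cn=harbor-bind,ou=service-accounts,dc=example,dc=com",
      "ldap_search_password": "<bind-password>",
      "ldap_base_dn": "ou=users,dc=example,dc=com",
      "ldap_uid": "sAMAccountName",
      "ldap_verify_cert": true
    }'

A GET against the same endpoint can then confirm the applied values.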

Continue reading

MinIO on vSphere - Automated Deployment and Onboarding

In the world of Kubernetes, reliable S3-compliant object storage is essential for tasks like storing backups. However, not everyone has access to a native S3-compatible solution, and setting one up can feel like a daunting task. MinIO, an open-source object storage solution, is a popular choice to fill this gap. Its lightweight, high-performance architecture makes it an excellent option for Kubernetes users seeking quick and reliable storage.

That popularity is no accident: MinIO's simplicity and S3 compatibility make it a strong fit for Kubernetes environments that need a reliable, scalable storage layer for backups, logs, or other data.
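To illustrate that S3 compatibility, once a MinIO deployment is reachable, the standard mc client can target it like any other S3 endpoint. The alias, endpoint, and credentials below are placeholders:

$ mc alias set homelab https://minio.example.com:9000 <access-key> <secret-key>
$ mc mb homelab/velero-backups    # create a bucket for cluster backups
$ mc ls homelab                   # verify connectivity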

Continue reading

Fixing Missing TKRs in Existing TKGS Deployments

2024-05-01 4 min read Cloud Native Kubernetes Tanzu TKG

I regularly check the Tanzu Kubernetes Releases (TKR) release notes page for new updates. Yesterday, a new TKR was released with support for Kubernetes 1.28.8. While attempting to test this new version in my TKGS environment, I realized the TKR was not present and started wondering why. Normally, when new TKRs are released, they become available for deployment immediately, since vCenter is subscribed to the VMware public content library where all the TKRs are hosted. This time, that was not the case, so I started investigating.
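For reference, the quickest way to check which TKRs an environment actually has is to list them from the Supervisor context:

$ kubectl get tanzukubernetesreleases    # short name: tkr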

Continue reading

HashiCorp Vault Intermediate CA Setup with Cert-Manager and Microsoft Root CA

In this post, we’ll explore how to set up HashiCorp Vault as an Intermediate Certificate Authority (CA) on a Kubernetes cluster, using a Microsoft CA as the Root CA. We’ll then integrate this setup with cert-manager, a powerful Kubernetes add-on for automating the management and issuance of TLS certificates.

The following is an architecture diagram for the use case I’ve built.

[Architecture diagram: Microsoft Root CA, Vault intermediate CA on the shared services cluster, cert-manager on the workload cluster]

  • A Microsoft Windows server is used as the Root CA of the environment.
  • A Kubernetes cluster hosting shared/common services, including HashiCorp Vault. This cluster can serve many other purposes and solutions consumed by other clusters. The Vault server deployed on it acts as an intermediate CA under the Microsoft Root CA server.
  • A second Kubernetes cluster hosting the application(s). Cert-Manager is deployed on this cluster, integrated with Vault, and handles the management and issuance of TLS certificates against Vault via the ClusterIssuer resource (a sketch follows this list). A web application, exposed via ingress, runs on this cluster, and its ingress resource consumes a TLS certificate from Vault.
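As a rough sketch of that last piece, a Vault-backed ClusterIssuer looks something like the following. The names, server address, and PKI path are assumptions for illustration, and the authentication method (a Kubernetes service account token here) may differ in your setup:

$ kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vault-issuer
spec:
  vault:
    server: https://vault.shared-services.example.com:8200
    path: pki_int/sign/example-dot-com      # PKI signing role in Vault
    caBundle: <base64-encoded-CA-chain>
    auth:
      kubernetes:
        role: cert-manager
        mountPath: /v1/auth/kubernetes
        secretRef:
          name: cert-manager-vault-token
          key: token
EOF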

Prerequisites

  • At least one running Kubernetes cluster. To follow along fully, you will need two: one serving as the shared services cluster and the other as the workload/application cluster.
  • Access to a Microsoft Root Certificate Authority (CA).
  • The Helm CLI installed.
  • Clone my GitHub repository, which contains all the manifests, files, and configurations needed (see the setup commands after this list).
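The repository URL is linked in the post; as a sketch, the setup commands look like this (the HashiCorp Helm repo is used later to deploy Vault):

$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm repo update
$ git clone <repository-url> && cd <repository-directory>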

Setting Up HashiCorp Vault as Intermediate CA

Deploy, Initialize, and Configure Vault

Install the Vault CLI. The following example uses Ubuntu Linux; if you are using a different operating system, refer to these instructions.
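On Ubuntu, the installation follows HashiCorp's apt repository setup, roughly:

$ wget -O - https://apt.releases.hashicorp.com/gpg | \
    sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
$ echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
    https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
    sudo tee /etc/apt/sources.list.d/hashicorp.list
$ sudo apt update && sudo apt install -y vault
$ vault version    # verify the CLI is available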

Continue reading

Using HashiCorp Vault as Ingress TLS Certificate Issuer in TAP

Tanzu Application Platform (TAP) uses Contour HTTPProxy resources to expose several web components externally via ingress. Some of these components include the API Auto Registration, API Portal, Application Live View, Metadata Store, and TAP GUI. Web workloads deployed through TAP also leverage the same method for their ingress resources. For example:

$ kubectl get httpproxy -A

NAMESPACE               NAME                                                              FQDN                                                     TLS SECRET                                               STATUS   STATUS DESCRIPTION
api-auto-registration   api-auto-registration-controller                                  api-auto-registration.tap.cloudnativeapps.cloud          api-auto-registration-cert                               valid    Valid HTTPProxy
api-portal              api-portal                                                        api-portal.tap.cloudnativeapps.cloud                     api-portal-tls-cert                                      valid    Valid HTTPProxy
app-live-view           appliveview                                                       appliveview.tap.cloudnativeapps.cloud                    appliveview-cert                                         valid    Valid HTTPProxy
metadata-store          amr-cloudevent-handler-ingress                                    amr-cloudevent-handler.tap.cloudnativeapps.cloud         amr-cloudevent-handler-ingress-cert                      valid    Valid HTTPProxy
metadata-store          amr-graphql-ingress                                               amr-graphql.tap.cloudnativeapps.cloud                    amr-ingress-cert                                         valid    Valid HTTPProxy
metadata-store          metadata-store-ingress                                            metadata-store.tap.cloudnativeapps.cloud                 ingress-cert                                             valid    Valid HTTPProxy
tap-demo-01             spring-petclinic-contour-76691bbb1936a7b010ca900ce58a3f57spring   spring-petclinic.tap-demo-01.svc.cluster.local                                                                    valid    Valid HTTPProxy
tap-demo-01             spring-petclinic-contour-88f827fbdc09abbb4ee2b887bba100edspring   spring-petclinic.tap-demo-01.tap.cloudnativeapps.cloud   tap-demo-01/route-a4b7b2c7-0a56-48b9-ad26-6b0e06ca1925   valid    Valid HTTPProxy
tap-demo-01             spring-petclinic-contour-spring-petclinic.tap-demo-01             spring-petclinic.tap-demo-01                                                                                      valid    Valid HTTPProxy
tap-demo-01             spring-petclinic-contour-spring-petclinic.tap-demo-01.svc         spring-petclinic.tap-demo-01.svc                                                                                  valid    Valid HTTPProxy
tap-gui                 tap-gui                                                           tap-gui.tap.cloudnativeapps.cloud                        tap-gui-cert                                             valid    Valid HTTPProxy

TAP uses a shared ingress issuer, a centralized representation of a certificate authority, as the way to set up TLS for the entire platform: all participating components get their ingress certificates issued by it. This is the recommended best practice for issuing ingress certificates on the platform.
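In practice, pointing the platform at a shared issuer is a single entry in tap-values.yaml, assuming a TAP version that supports shared.ingress_issuer; the issuer name below is hypothetical:

shared:
  ingress_issuer: vault-cluster-issuer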

Continue reading

CAPV: Addressing Node Provisioning Issues Due to an Invalid State of ETCD

2023-12-01 7 min read Cloud Native Kubernetes Tanzu TKG

I recently ran into a strange scenario on a Kubernetes cluster after a sudden, unexpected crash caused by an issue in the underlying vSphere environment. In this case, the cluster was a TKG cluster (in fact, it happened to be the TKG management cluster); however, the same situation could occur on any cluster managed by Cluster API Provider vSphere (CAPV).

I have seen clusters crash unexpectedly many times before, and most of the time they came back online once all nodes were up and running. In this case, however, some of the nodes could not boot properly, and Cluster API started attempting to reconcile them.
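For anyone reproducing this, Cluster API's view of the reconciliation is visible from the management cluster, along the lines of:

$ kubectl get machines -A -o wide --watch    # Machine phases as CAPV remediates nodes
$ kubectl get kubeadmcontrolplane -A         # control plane replica/ready status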

Continue reading

CAPV: Fixing and Cleaning Up Idle vCenter Server Sessions

2023-11-01 4 min read Cloud Native Kubernetes Tanzu TKG

I recently ran into an issue causing the vCenter server to crash almost daily. What initially seemed to be a random vCenter issue turned out to be related to CAPV (Cluster API Provider vSphere) running on some of our Kubernetes clusters. It was also an edge case I had not seen before, so I decided to document and share it here.
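As a hint of where the investigation leads, the active sessions on a vCenter server can be inspected with govc, assuming GOVC_URL and credentials are set in the environment:

$ govc session.ls    # lists active vCenter sessions, including idle ones left behind by API clients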

Initially, the issue we were witnessing on the vCenter server was the following:

Continue reading
Older posts