HashiCorp Vault Intermediate CA Setup with Cert-Manager and Microsoft Root CA

In this post, we’ll explore how to set up HashiCorp Vault as an Intermediate Certificate Authority (CA) on a Kubernetes cluster, using a Microsoft CA as the Root CA. We’ll then integrate this setup with cert-manager, a powerful Kubernetes add-on for automating the management and issuance of TLS certificates.

The following is an architecture diagram for the use case I’ve built.

[Architecture diagram: Microsoft Root CA, with Vault running as an intermediate CA on the shared services cluster and cert-manager issuing certificates on the workload cluster]

  • A Microsoft Windows server is used as the Root CA of the environment.
  • A Kubernetes cluster hosting shared/common services, including HashiCorp Vault. This cluster can serve many other purposes and solutions consumed by other clusters. The Vault server deployed on it acts as an intermediate CA under the Microsoft Root CA server.
  • A second Kubernetes cluster hosting the application(s). Cert-Manager is deployed on this cluster, integrated with Vault, and handles the management and issuance of TLS certificates against Vault using the ClusterIssuer resource. A web application, exposed via ingress, is running on this cluster. The ingress resource consumes its TLS certificate from Vault.
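To make the integration concrete, here is a minimal sketch of such a ClusterIssuer. The server URL, PKI path, role, and secret names below are placeholders for illustration, not values from the actual setup:

$ kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vault-issuer
spec:
  vault:
    # Vault endpoint on the shared services cluster (placeholder URL)
    server: https://vault.shared-services.example.com:8200
    # Signing endpoint of the intermediate PKI engine (placeholder mount and role)
    path: pki_int/sign/example-dot-com
    # Base64-encoded CA bundle used to verify Vault's TLS certificate
    caBundle: <base64-encoded CA bundle>
    auth:
      kubernetes:
        role: cert-manager
        mountPath: /v1/auth/kubernetes
        secretRef:
          name: cert-manager-vault-token
          key: token
EOF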

Prerequisites

  • Two running Kubernetes clusters: one serving as the shared services cluster and the other as the workload/application cluster.
  • Access to a Microsoft Root Certificate Authority (CA).
  • The Helm CLI installed.
  • A clone of my GitHub repository, which contains all the manifests, files, and configurations used in this post.

Setting Up HashiCorp Vault as Intermediate CA

Deploy, Initialize, and Configure Vault

Install the Vault CLI. The following example uses Ubuntu Linux; if you are using a different operating system, refer to these instructions.
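On Ubuntu, the installation follows HashiCorp's documented apt repository setup:

$ wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
$ echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
$ sudo apt update && sudo apt install vault

# Verify the CLI is available
$ vault version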

Continue reading

Using HashiCorp Vault as Ingress TLS Certificate Issuer in TAP

Tanzu Application Platform (TAP) uses Contour HTTPProxy resources to expose several web components externally via ingress. These components include the API Auto Registration, API Portal, Application Live View, Metadata Store, and TAP GUI. Web workloads deployed through TAP also leverage the same method for their ingress resources. For example:

$ kubectl get httpproxy -A

NAMESPACE               NAME                                                              FQDN                                                     TLS SECRET                                               STATUS   STATUS DESCRIPTION
api-auto-registration   api-auto-registration-controller                                  api-auto-registration.tap.cloudnativeapps.cloud          api-auto-registration-cert                               valid    Valid HTTPProxy
api-portal              api-portal                                                        api-portal.tap.cloudnativeapps.cloud                     api-portal-tls-cert                                      valid    Valid HTTPProxy
app-live-view           appliveview                                                       appliveview.tap.cloudnativeapps.cloud                    appliveview-cert                                         valid    Valid HTTPProxy
metadata-store          amr-cloudevent-handler-ingress                                    amr-cloudevent-handler.tap.cloudnativeapps.cloud         amr-cloudevent-handler-ingress-cert                      valid    Valid HTTPProxy
metadata-store          amr-graphql-ingress                                               amr-graphql.tap.cloudnativeapps.cloud                    amr-ingress-cert                                         valid    Valid HTTPProxy
metadata-store          metadata-store-ingress                                            metadata-store.tap.cloudnativeapps.cloud                 ingress-cert                                             valid    Valid HTTPProxy
tap-demo-01             spring-petclinic-contour-76691bbb1936a7b010ca900ce58a3f57spring   spring-petclinic.tap-demo-01.svc.cluster.local                                                                    valid    Valid HTTPProxy
tap-demo-01             spring-petclinic-contour-88f827fbdc09abbb4ee2b887bba100edspring   spring-petclinic.tap-demo-01.tap.cloudnativeapps.cloud   tap-demo-01/route-a4b7b2c7-0a56-48b9-ad26-6b0e06ca1925   valid    Valid HTTPProxy
tap-demo-01             spring-petclinic-contour-spring-petclinic.tap-demo-01             spring-petclinic.tap-demo-01                                                                                      valid    Valid HTTPProxy
tap-demo-01             spring-petclinic-contour-spring-petclinic.tap-demo-01.svc         spring-petclinic.tap-demo-01.svc                                                                                  valid    Valid HTTPProxy
tap-gui                 tap-gui                                                           tap-gui.tap.cloudnativeapps.cloud                        tap-gui-cert                                             valid    Valid HTTPProxy

TAP uses a shared ingress issuer, a centralized representation of a certificate authority, as the method for setting up TLS for the entire platform. All participating components get their ingress certificates issued by it, which is the recommended best practice for issuing ingress certificates on the platform.
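For context, the shared issuer is selected through the shared.ingress_issuer key in the platform's values file. A minimal sketch, where the issuer name is an assumption for illustration:

# Excerpt from tap-values.yaml
shared:
  # ClusterIssuer used by all participating components for their ingress certificates
  ingress_issuer: my-vault-cluster-issuer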

Continue reading

TKG: Updating Pinniped Configuration and Addressing Common Issues

2023-06-01 4 min read Cloud Native Kubernetes Tanzu TKG

Most of the TKG engagements I’ve been involved in included Pinniped for Kubernetes authentication. On many occasions, I have seen issues where the configuration provided to Pinniped was incorrect or only partially correct. Common issues relate to the LDAPS integration: many environments use Active Directory as the authentication source, and Pinniped requires the LDAPS certificate, username, and password, which are often specified incorrectly. Since this configuration is not validated during deployment, you can end up with Pinniped in an invalid state on your management cluster.
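A useful first step when troubleshooting is to inspect the values that were actually handed to Pinniped. As a sketch, on a TKG management cluster where the add-on secret follows the usual <cluster-name>-pinniped-addon naming (the cluster name below is a placeholder):

$ kubectl get secret my-mgmt-cluster-pinniped-addon -n tkg-system \
    -o jsonpath='{.data.values\.yaml}' | base64 -d

Comparing the decoded values (LDAPS endpoint, bind credentials, CA certificate) against the directory's actual settings often surfaces the misconfiguration.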

Continue reading

Tanzu Kubernetes Grid GPU Integration

2023-03-01 16 min read Cloud Native Kubernetes Tanzu TKG

I recently had to demonstrate Tanzu Kubernetes Grid and its GPU integration capabilities. Developing a good use case and assembling the demo required some preliminary research.

During my research, I reached out to Jay Vyas, staff engineer at VMware, SIG Windows lead for Kubernetes, a Kubernetes legend, and an awesome guy in general. :) For those who don’t know Jay, he is also one of the authors of the fantastic book Core Kubernetes (look it up!).

Continue reading

Harbor Registry – Automating LDAP/S Configuration – Part 2

This post continues our two-part series on automating LDAP configuration for Harbor Registry. In the previous post, we demonstrated how to achieve this using Ansible, running externally. However, external automation has its challenges, such as firewall restrictions or limited API access in some environments.

Note: be sure to review the previous post, as it provides additional background and clarification on this process, the LDAPS configuration, and more.

Here, we explore an alternative approach using Terraform, running the automation directly inside the Kubernetes cluster hosting Harbor. This method leverages native Kubernetes scheduling to run the configuration job in a fully declarative way, and it requires no network access to Harbor from the machine that launches the job.
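As a taste of the approach, here is a bare-bones sketch of a Kubernetes Job that runs Terraform in-cluster. The image tag and ConfigMap name are assumptions for illustration, state handling and credentials are omitted, and the full setup lives in the accompanying repository:

$ kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: harbor-ldap-config
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: terraform
        image: hashicorp/terraform:1.5   # placeholder tag
        command: ["sh", "-c"]
        # ConfigMap mounts are read-only, so copy the .tf files to a writable dir first
        args:
        - cp /config/*.tf /workspace/ && cd /workspace && terraform init && terraform apply -auto-approve
        volumeMounts:
        - name: tf-config
          mountPath: /config
        - name: workspace
          mountPath: /workspace
      volumes:
      - name: tf-config
        configMap:
          name: harbor-ldap-tf   # holds the Terraform configuration (assumption)
      - name: workspace
        emptyDir: {}
EOF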

Continue reading

Replacing your vCenter server certificate? TKG needs to know about it…

2023-01-01 3 min read Cloud Native Kubernetes Tanzu TKG

I recently ran into an issue where TKGm suddenly failed to connect to the vCenter server.

The issue turned out to be TLS-related, and I noticed that the vCenter server certificate had been replaced…

Due to the certificate issue, Cluster API components failed to communicate with vSphere, causing cluster reconciliation to fail, among other vSphere-related operations.

Since all TKG clusters in the environment were deployed with the VSPHERE_TLS_THUMBPRINT parameter specified, replacing the vCenter certificate breaks the connection to vSphere, as the TLS thumbprint changes as well.
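If you hit this, you can retrieve the new thumbprint with openssl and update the cluster configuration accordingly. A sketch, with a placeholder vCenter address:

$ echo | openssl s_client -connect vcenter.example.com:443 2>/dev/null \
    | openssl x509 -noout -fingerprint -sha1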

Continue reading

Upgrading NSX ALB in a TKG Environment

2022-09-01 8 min read Cloud Native Kubernetes NSX ALB Tanzu TKG

For quite a long time, the highest NSX ALB version that TKG supported was 20.1.6/20.1.3, even though 21.1.x had been available for a while, and I had been wondering when TKG would support it. In the release notes of TKG 1.5.4, I recently noticed a note that was added regarding NSX ALB 21.1.x under the Configuration variables section:

AVI_CONTROLLER_VERSION sets the NSX Advanced Load Balancer (ALB) version for NSX ALB v21.1.x deployments in Tanzu Kubernetes Grid.
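In practice, this means the variable can be set in the management cluster configuration file. A sketch, with placeholder values:

# Management cluster configuration excerpt (values are assumptions)
AVI_CONTROLLER: avi-controller.example.com
AVI_CONTROLLER_VERSION: "21.1.4"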

Continue reading