HashiCorp Vault Intermediate CA Setup with Cert-Manager and Microsoft Root CA

In this post, we’ll explore how to set up HashiCorp Vault as an Intermediate Certificate Authority (CA) on a Kubernetes cluster, using a Microsoft CA as the Root CA. We’ll then integrate this setup with cert-manager, a powerful Kubernetes add-on for automating the management and issuance of TLS certificates.

The following is an architecture diagram for the use case I’ve built.

[Architecture diagram: Microsoft Root CA, a shared services cluster running Vault as an intermediate CA, and a workload cluster running cert-manager and the application]

  • A Microsoft Windows server is used as the Root CA of the environment.
  • A Kubernetes cluster hosting shared/common services, including HashiCorp Vault. This cluster can serve many other purposes and solutions consumed by other clusters. The Vault server deployed on it acts as an intermediate CA under the Microsoft Root CA.
  • A second Kubernetes cluster hosting the application(s). Cert-manager is deployed on this cluster and integrates with Vault, handling the issuance and management of TLS certificates through a ClusterIssuer resource. A web application exposed via ingress runs on this cluster, and its ingress resource consumes a TLS certificate issued by Vault (see the sketch after this list).
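To make the integration concrete, here is a minimal sketch of a cert-manager ClusterIssuer backed by Vault. The server address, PKI path, role, and secret names are assumptions for illustration; the real values come from the Vault configuration described later in this post.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vault-issuer                                         # hypothetical issuer name
spec:
  vault:
    server: https://vault.shared-services.example.com:8200   # assumed Vault address on the shared services cluster
    path: pki_int/sign/example-dot-com                       # assumed intermediate PKI sign path and role
    caBundle: <base64-encoded CA chain>                      # trust bundle for the Vault endpoint
    auth:
      kubernetes:
        mountPath: /v1/auth/kubernetes                       # assumed Kubernetes auth mount in Vault
        role: cert-manager                                   # assumed Vault role bound to cert-manager's service account
        secretRef:
          name: cert-manager-vault-token                     # service account token secret used to authenticate
          key: token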

Prerequisites

  • At least one running Kubernetes cluster. To follow along fully, you will need two clusters: one serving as the shared services cluster and the other as the workload/application cluster.
  • Access to a Microsoft Root Certificate Authority (CA).
  • The Helm CLI installed.
  • Clone my GitHub repository, which contains all of the manifests, files, and configurations used throughout this post.

Setting Up HashiCorp Vault as Intermediate CA

Deploy, Initialize, and Configure Vault

Install the Vault CLI. In the following example, Ubuntu Linux is used. If you are using a different operating system, refer to these instructions.
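As a quick reference, the Ubuntu installation roughly follows HashiCorp's published apt repository steps; verify the exact commands against the linked instructions for your release:

$ wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
$ echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
$ sudo apt update && sudo apt install vault
$ vault version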

Continue reading

Using HashiCorp Vault as Ingress TLS Certificate Issuer in TAP

Tanzu Application Platform (TAP) uses Contour HTTPProxy resources to expose several web components externally via ingress. Some of these components include the API Auto Registration, API Portal, Application Live View, Metadata Store, and TAP GUI. Web workloads deployed through TAP also leverage the same method for their ingress resources. For example:

$ kubectl get httpproxy -A

NAMESPACE               NAME                                                              FQDN                                                     TLS SECRET                                               STATUS   STATUS DESCRIPTION
api-auto-registration   api-auto-registration-controller                                  api-auto-registration.tap.cloudnativeapps.cloud          api-auto-registration-cert                               valid    Valid HTTPProxy
api-portal              api-portal                                                        api-portal.tap.cloudnativeapps.cloud                     api-portal-tls-cert                                      valid    Valid HTTPProxy
app-live-view           appliveview                                                       appliveview.tap.cloudnativeapps.cloud                    appliveview-cert                                         valid    Valid HTTPProxy
metadata-store          amr-cloudevent-handler-ingress                                    amr-cloudevent-handler.tap.cloudnativeapps.cloud         amr-cloudevent-handler-ingress-cert                      valid    Valid HTTPProxy
metadata-store          amr-graphql-ingress                                               amr-graphql.tap.cloudnativeapps.cloud                    amr-ingress-cert                                         valid    Valid HTTPProxy
metadata-store          metadata-store-ingress                                            metadata-store.tap.cloudnativeapps.cloud                 ingress-cert                                             valid    Valid HTTPProxy
tap-demo-01             spring-petclinic-contour-76691bbb1936a7b010ca900ce58a3f57spring   spring-petclinic.tap-demo-01.svc.cluster.local                                                                    valid    Valid HTTPProxy
tap-demo-01             spring-petclinic-contour-88f827fbdc09abbb4ee2b887bba100edspring   spring-petclinic.tap-demo-01.tap.cloudnativeapps.cloud   tap-demo-01/route-a4b7b2c7-0a56-48b9-ad26-6b0e06ca1925   valid    Valid HTTPProxy
tap-demo-01             spring-petclinic-contour-spring-petclinic.tap-demo-01             spring-petclinic.tap-demo-01                                                                                      valid    Valid HTTPProxy
tap-demo-01             spring-petclinic-contour-spring-petclinic.tap-demo-01.svc         spring-petclinic.tap-demo-01.svc                                                                                  valid    Valid HTTPProxy
tap-gui                 tap-gui                                                           tap-gui.tap.cloudnativeapps.cloud                        tap-gui-cert                                             valid    Valid HTTPProxy

TAP uses a shared ingress issuer, a centralized representation of a certificate authority, as the method for setting up TLS across the entire platform. All participating components get their ingress certificates issued by it, and this is the recommended best practice for issuing ingress certificates on the platform.
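In practice, the shared ingress issuer is set through the shared section of the TAP values file and points at a cert-manager ClusterIssuer. A minimal sketch, assuming a Vault-backed ClusterIssuer named vault-issuer (the issuer name is a placeholder; the domain matches the FQDNs shown above):

# Excerpt from tap-values.yaml
shared:
  ingress_domain: tap.cloudnativeapps.cloud   # domain seen in the HTTPProxy FQDNs above
  ingress_issuer: vault-issuer                # cert-manager ClusterIssuer that signs platform ingress certificates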

Continue reading

TKG 2.3: Fixing the Prometheus Data Source in the Grafana Package

With the release of TKG 2.3, the Grafana package was finally updated from version 7.5.x to 9.5.1. If you have deployed the new Grafana package (9.5.1+vmware.2-tkg.1) or upgraded your existing one to this version, you may have run into error messages in your Grafana dashboards.

For example, in the default TKG Kubernetes cluster monitoring dashboard, you may have encountered the "Failed to call resource" error when opening the dashboard and noticed that much of the data is missing.

Continue reading

TKG: Updating Pinniped Configuration and Addressing Common Issues

2023-06-01 · 4 min read · Cloud Native, Kubernetes, Tanzu, TKG

Most of the TKG engagements I've been involved in included Pinniped for Kubernetes authentication. On many occasions, I have seen the configuration provided to Pinniped be incorrect or only partially correct, most commonly around the LDAPS integration. Many of the environments I have seen use Active Directory as the authentication source, and Pinniped requires the LDAPS certificate, username, and password, which are often specified incorrectly. Because this configuration is not validated during deployment, you end up with Pinniped in an invalid state on your management cluster.
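For context, these LDAPS settings originate in the management cluster configuration file and later end up in the Pinniped add-on configuration. Below is a rough sketch of the fields that tend to be misconfigured; the variable names follow the TKG cluster configuration reference, and the values are placeholders, so double-check both against your TKG version:

# Excerpt from a TKG management cluster configuration file (placeholder values)
IDENTITY_MANAGEMENT_TYPE: ldap
LDAP_HOST: ldaps.example.local:636                 # AD server reachable over LDAPS
LDAP_BIND_DN: "cn=pinniped-svc,ou=ServiceAccounts,dc=example,dc=local"
LDAP_BIND_PASSWORD: "<bind-account-password>"
LDAP_ROOT_CA_DATA_B64: "<base64-encoded LDAPS CA certificate>"
LDAP_USER_SEARCH_BASE_DN: "ou=Users,dc=example,dc=local"
LDAP_GROUP_SEARCH_BASE_DN: "ou=Groups,dc=example,dc=local"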

Continue reading

Streamlining and Customizing Windows Image Builder for TKG

2023-03-01 · 11 min read · Cloud Native, Kubernetes, Tanzu, TKG

Tanzu Kubernetes Grid (TKG) is one of the few platforms providing out-of-the-box support and streamlined deployment of Windows Kubernetes clusters. VMware is actively investing in this area and constantly improving the support and capabilities around Windows on Kubernetes.

Unlike Linux-based clusters, for which VMware provides pre-packaged base OS images (typically based on Ubuntu and Photon OS), VMware cannot offer pre-packaged Windows images, primarily, I suppose, due to licensing restrictions. Therefore, building your own Windows base OS image is one of the prerequisites for deploying a TKG Windows workload cluster. Fortunately, VMware leverages the upstream Image Builder project, a fantastic collection of cross-provider Kubernetes virtual machine image-building utilities intended to simplify and streamline the creation of base OS images for Kubernetes.

Continue reading

Tanzu Kubernetes Grid GPU Integration

2023-03-01 · 16 min read · Cloud Native, Kubernetes, Tanzu, TKG

I recently had to demonstrate Tanzu Kubernetes Grid and its GPU integration capabilities. Developing a good use case and assembling the demo required some preliminary research.

During my research, I reached out to Jay Vyas, staff engineer at VMware, SIG Windows lead for Kubernetes, a Kubernetes legend, and an awesome guy in general. :) For those who don’t know Jay, he is also one of the authors of the fantastic book Core Kubernetes (look it up!).

Continue reading

Getting Started with Carvel ytt - Real-World Examples

2023-01-01 · 11 min read · Carvel, Cloud Native, Kubernetes, Tanzu, TAP, TKG

Over the years of working with Tanzu Kubernetes Grid (TKG), one tool has stood out as a game-changer for resource customization: Carvel’s ytt. Whether tailoring cluster manifests, customizing TKG packages, or addressing unique deployment requirements, ytt has consistently been a fundamental part of the workflow. Its flexibility, power, and declarative approach make it an essential tool for anyone working deeply with Kubernetes in a TKG ecosystem.

But what exactly is ytt? Short for YAML Templating Tool, ytt is part of the Carvel suite of tools designed for Kubernetes resource management. It provides a powerful, programmable approach to templating YAML configurations by combining straightforward data values, overlays, and scripting capabilities. Unlike many traditional templating tools, ytt prioritizes structure and intent, making it easier to maintain, validate, and debug configurations—particularly in complex, large-scale Kubernetes environments.
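As a tiny taste of what that looks like, here is a minimal sketch combining a data values file with a template; the file names and values are made up for illustration:

# config.yaml - template rendered by ytt
#@ load("@ytt:data", "data")
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: #@ data.values.app_name + "-config"
  namespace: #@ data.values.namespace
data:
  replicas: #@ str(data.values.replicas)

# values.yaml - default data values, overridable at render time
#@data/values
---
app_name: demo
namespace: tap-demo-01
replicas: 3

Rendering is a single command, for example ytt -f config.yaml -f values.yaml, and individual values can be overridden on the command line with --data-value app_name=petclinic.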

Continue reading