<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hashicorp on Build. Run. Repeat.</title><link>https://buildrunrepeat.com/tags/hashicorp/</link><description>Recent content in Hashicorp on Build. Run. Repeat.</description><generator>Hugo</generator><language>en</language><lastBuildDate>Wed, 01 Jan 2025 09:00:00 -0400</lastBuildDate><atom:link href="https://buildrunrepeat.com/tags/hashicorp/index.xml" rel="self" type="application/rss+xml"/><item><title>HashiCorp Consul Service Mesh on Kubernetes Series - Part 1 - Introduction and Setup</title><link>https://buildrunrepeat.com/posts/hashicorp-consul-k8s-service-mesh-series-01-intro-and-setup/</link><pubDate>Wed, 01 Jan 2025 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/hashicorp-consul-k8s-service-mesh-series-01-intro-and-setup/</guid><description>&lt;p&gt;Modern cloud-native architectures rely heavily on microservices, and Kubernetes has become the go-to platform for deploying, managing, and scaling these distributed applications. As the number of microservices grows, ensuring secure, reliable, and observable service-to-service communication becomes increasingly complex. This is where service mesh solutions, such as HashiCorp Consul, step in to provide a seamless approach to managing these challenges. 
In this blog post, we will delve into the integration of HashiCorp Consul Service Mesh with Kubernetes, exploring its architecture and features, and providing a step-by-step deployment guide.&lt;/p&gt;</description></item><item><title>HashiCorp Consul Service Mesh on Kubernetes Series - Part 2 - Observability</title><link>https://buildrunrepeat.com/posts/hashicorp-consul-k8s-service-mesh-series-02-observability/</link><pubDate>Wed, 01 Jan 2025 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/hashicorp-consul-k8s-service-mesh-series-02-observability/</guid><description>&lt;p&gt;Modern service meshes require robust observability to ensure seamless operations, proactive troubleshooting, and performance optimization. In this section, we explore the observability features of HashiCorp Consul Service Mesh, including visualizing the service mesh, querying metrics, distributed tracing, and logging and auditing.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="visualizing-the-service-mesh"&gt;Visualizing the Service Mesh&lt;/h2&gt;
&lt;p&gt;The Consul UI is used for visualizing the service mesh and its topology.&lt;/p&gt;
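&lt;p&gt;If the UI is not already exposed in your environment, you can reach it with a port-forward. The namespace, service name, and port below assume a default Helm installation of Consul and may differ in your setup:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kubectl -n consul port-forward svc/consul-ui 8500:80&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The UI is then available at &lt;code&gt;http://localhost:8500&lt;/code&gt;.&lt;/p&gt;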
&lt;p&gt;Use the &lt;code&gt;watch&lt;/code&gt; command to send requests to the application continually. Make sure HTTP status code &lt;code&gt;200&lt;/code&gt; is returned in the output.&lt;/p&gt;</description></item><item><title>HashiCorp Consul Service Mesh on Kubernetes Series - Part 3 - Traffic Management</title><link>https://buildrunrepeat.com/posts/hashicorp-consul-k8s-service-mesh-series-03-traffic-mgmt/</link><pubDate>Wed, 01 Jan 2025 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/hashicorp-consul-k8s-service-mesh-series-03-traffic-mgmt/</guid><description>&lt;p&gt;Efficient traffic management is essential for maintaining application reliability, optimizing performance, and implementing advanced deployment strategies in a service mesh. HashiCorp Consul provides powerful traffic management capabilities through service routers, splitters, and resolvers. In this section, we explore request routing, traffic shifting, request timeouts, and circuit breaking.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="request-routing"&gt;Request Routing&lt;/h2&gt;
&lt;p&gt;This section shows you how to route requests dynamically to multiple versions of a microservice.&lt;/p&gt;
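&lt;p&gt;In Consul on Kubernetes, routing behavior is expressed declaratively through config entry CRDs. As a minimal sketch (the service name, subset names, and metadata filter here are assumptions for illustration), a ServiceResolver can define version subsets that routers and splitters then target:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kubectl apply -f - &lt;&lt;EOF
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: reviews
spec:
  defaultSubset: v1
  subsets:
    v1:
      filter: 'Service.Meta.version == v1'
    v2:
      filter: 'Service.Meta.version == v2'
EOF&lt;/code&gt;&lt;/pre&gt;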
&lt;p&gt;The Bookinfo sample consists of four separate microservices, each with multiple versions. Three different versions of one of the microservices, &lt;code&gt;reviews&lt;/code&gt;, have been deployed and are running concurrently. To illustrate the problem this causes, access the Bookinfo app&amp;rsquo;s &lt;code&gt;/productpage&lt;/code&gt; in a browser and refresh several times.&lt;/p&gt;</description></item><item><title>HashiCorp Consul Service Mesh on Kubernetes Series - Part 4 - Security</title><link>https://buildrunrepeat.com/posts/hashicorp-consul-k8s-service-mesh-series-04-security/</link><pubDate>Wed, 01 Jan 2025 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/hashicorp-consul-k8s-service-mesh-series-04-security/</guid><description>&lt;p&gt;Security is a fundamental aspect of any service mesh, ensuring that all service-to-service communication is secure, controlled, and auditable. HashiCorp Consul provides robust security features, including mutual TLS (mTLS), access control, and rate limiting.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="mtls"&gt;mTLS&lt;/h2&gt;
&lt;p&gt;In this section, we will demonstrate mTLS with Consul. Consul enables and strictly enforces mTLS by default. All traffic sent through the Consul Connect Service Mesh is encrypted.&lt;/p&gt;
&lt;p&gt;This section is slightly different from the Istio mTLS section because:&lt;/p&gt;</description></item><item><title>HashiCorp Vault Enterprise - Performance Replication on Kubernetes</title><link>https://buildrunrepeat.com/posts/hashicorp-vault-enterprise-performance-replication-on-k8s/</link><pubDate>Wed, 01 Jan 2025 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/hashicorp-vault-enterprise-performance-replication-on-k8s/</guid><description>&lt;p&gt;This blog post dives into the technical implementation of Vault Enterprise replication within a Kubernetes environment. We’ll explore how to set up performance and disaster recovery replication, overcome common challenges, and ensure smooth synchronization between clusters. Whether you’re aiming for redundancy or better data locality, this guide will equip you with the insights and tools needed to leverage Vault’s enterprise-grade features in Kubernetes effectively.&lt;/p&gt;
&lt;h2 id="architecture"&gt;Architecture&lt;/h2&gt;
&lt;p&gt;
&lt;a href="https://buildrunrepeat.com/posts/hashicorp-vault-enterprise-performance-replication-on-k8s/images/001.png" data-dimbox data-dimbox-caption="Screenshot"&gt;
 &lt;img alt="Screenshot" src="https://buildrunrepeat.com/posts/hashicorp-vault-enterprise-performance-replication-on-k8s/images/001.png"/&gt;
&lt;/a&gt;
&lt;/p&gt;
&lt;h2 id="prerequisites"&gt;Prerequisites&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;2 Kubernetes clusters. Note: for simulation purposes, you can also use a single Kubernetes cluster with multiple namespaces to host both Vault clusters.&lt;/li&gt;
&lt;li&gt;Helm installed&lt;/li&gt;
&lt;li&gt;kubectl installed&lt;/li&gt;
&lt;li&gt;Vault CLI installed&lt;/li&gt;
&lt;li&gt;jq installed&lt;/li&gt;
&lt;li&gt;Vault Enterprise license&lt;/li&gt;
&lt;/ul&gt;
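&lt;p&gt;With the prerequisites in place, each Vault cluster can be installed using the official Helm chart. The namespace name, image tag, and license file path below are placeholders for illustration; repeat the installation for the secondary cluster (or namespace):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
kubectl create namespace vault-primary
kubectl -n vault-primary create secret generic vault-ent-license --from-file=license=vault.hclic
helm install vault hashicorp/vault -n vault-primary \
  --set server.image.repository=hashicorp/vault-enterprise \
  --set server.image.tag=1.16.1-ent \
  --set server.enterpriseLicense.secretName=vault-ent-license&lt;/code&gt;&lt;/pre&gt;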
&lt;p&gt;Note: for this implementation, LoadBalancer services are used on Kubernetes to expose the Vault services (the API/UI and the cluster address for replication). It is highly recommended to use a LoadBalancer rather than an Ingress to expose the cluster address for replication. Vault itself performs the TLS termination, as the TLS certificates are mounted to the Vault pods from Kubernetes. Additionally, note that when enabling the replication, the primary cluster points to the secondary cluster address (port 8201) and not the API/UI address (port 8200). When the secondary cluster applies the replication token, however, it points to the API/UI address (port 8200) to unwrap it and complete the setup of the replication. We will see this in more detail in the implementation section.&lt;/p&gt;</description></item><item><title>HashiCorp Vault Intermediate CA Setup with Cert-Manager and Microsoft Root CA</title><link>https://buildrunrepeat.com/posts/hashicorp-vault-intermediate-ca-setup-with-cert-manager-and-ms-root-ca/</link><pubDate>Mon, 01 Jan 2024 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/hashicorp-vault-intermediate-ca-setup-with-cert-manager-and-ms-root-ca/</guid><description>&lt;p&gt;In this post, we&amp;rsquo;ll explore how to set up HashiCorp Vault as an Intermediate Certificate Authority (CA) on a Kubernetes cluster, using a Microsoft CA as the Root CA. We&amp;rsquo;ll then integrate this setup with cert-manager, a powerful Kubernetes add-on for automating the management and issuance of TLS certificates.&lt;/p&gt;
&lt;p&gt;The following is an architecture diagram for the use case I&amp;rsquo;ve built.&lt;/p&gt;
&lt;p&gt;
&lt;a href="https://buildrunrepeat.com/posts/hashicorp-vault-intermediate-ca-setup-with-cert-manager-and-ms-root-ca/images/019.png" data-dimbox data-dimbox-caption="Screenshot"&gt;
 &lt;img alt="Screenshot" src="https://buildrunrepeat.com/posts/hashicorp-vault-intermediate-ca-setup-with-cert-manager-and-ms-root-ca/images/019.png"/&gt;
&lt;/a&gt;
&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A Microsoft Windows server is used as the Root CA of the environment.&lt;/li&gt;
&lt;li&gt;A Kubernetes cluster hosting shared/common services, including HashiCorp Vault. This is a cluster that can serve many other purposes/solutions, consumed by other clusters. The Vault server is deployed on this cluster and serves as an intermediate CA server, under the Microsoft Root CA server.&lt;/li&gt;
&lt;li&gt;A second Kubernetes cluster hosting the application(s). Cert-Manager is deployed on this cluster, integrated with Vault, and handles the management and issuance of TLS certificates against Vault using the ClusterIssuer resource. A web application, exposed via ingress, is running on this cluster. The ingress resource consumes its TLS certificate from Vault.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="prerequisites"&gt;Prerequisites&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;At least one running Kubernetes cluster. To follow along fully, you will need two Kubernetes clusters: one serving as the shared services cluster and the other as the workload/application cluster.&lt;/li&gt;
&lt;li&gt;Access to a Microsoft Root Certificate Authority (CA).&lt;/li&gt;
&lt;li&gt;The Helm CLI installed.&lt;/li&gt;
&lt;li&gt;Clone my &lt;a href="https://github.com/itaytalmi/k8s-vault-int-ca.git"&gt;GitHub repository&lt;/a&gt;. This repository contains all the manifests, files, and configurations needed.&lt;/li&gt;
&lt;/ul&gt;
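&lt;p&gt;A quick sanity check of the prerequisites might look like the following (cluster contexts and working directories will vary in your environment):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;helm version
kubectl config get-contexts
git clone https://github.com/itaytalmi/k8s-vault-int-ca.git
cd k8s-vault-int-ca&lt;/code&gt;&lt;/pre&gt;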
&lt;h2 id="setting-up-hashicorp-vault-as-intermediate-ca"&gt;Setting Up HashiCorp Vault as Intermediate CA&lt;/h2&gt;
&lt;h3 id="deploy-initialize-and-configure-vault"&gt;Deploy Initialize and Configure Vault&lt;/h3&gt;
&lt;p&gt;Install the Vault CLI. In the following example, Linux Ubuntu is used. If you are using a different operating system, refer to &lt;a href="https://developer.hashicorp.com/vault/install"&gt;these instructions&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Using HashiCorp Vault as Ingress TLS Certificate Issuer in TAP</title><link>https://buildrunrepeat.com/posts/tap-using-hashicorp-vault-as-ingress-tls-certificate-issuer/</link><pubDate>Mon, 01 Jan 2024 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/tap-using-hashicorp-vault-as-ingress-tls-certificate-issuer/</guid><description>&lt;h1 id="using-hashicorp-vault-as-ingress-tls-certificate-issuer-in-tap"&gt;Using HashiCorp Vault as Ingress TLS Certificate Issuer in TAP&lt;/h1&gt;
&lt;p&gt;Tanzu Application Platform (TAP) uses Contour HTTPProxy resources to expose several web components externally via ingress. Some of these components include the API Auto Registration, API Portal, Application Live View, Metadata Store, and TAP GUI. Web workloads deployed through TAP also leverage the same method for their ingress resources. For example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"&gt;&lt;code class="language-text" data-lang="text"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;$ kubectl get httpproxy -A
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;NAMESPACE              NAME                                                              FQDN                                                    TLS SECRET                                              STATUS   STATUS DESCRIPTION
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;api-auto-registration  api-auto-registration-controller                                  api-auto-registration.tap.cloudnativeapps.cloud         api-auto-registration-cert                              valid    Valid HTTPProxy
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;api-portal             api-portal                                                        api-portal.tap.cloudnativeapps.cloud                    api-portal-tls-cert                                     valid    Valid HTTPProxy
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;app-live-view          appliveview                                                       appliveview.tap.cloudnativeapps.cloud                   appliveview-cert                                        valid    Valid HTTPProxy
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;metadata-store         amr-cloudevent-handler-ingress                                    amr-cloudevent-handler.tap.cloudnativeapps.cloud        amr-cloudevent-handler-ingress-cert                     valid    Valid HTTPProxy
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;metadata-store         amr-graphql-ingress                                               amr-graphql.tap.cloudnativeapps.cloud                   amr-ingress-cert                                        valid    Valid HTTPProxy
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;metadata-store         metadata-store-ingress                                            metadata-store.tap.cloudnativeapps.cloud                ingress-cert                                            valid    Valid HTTPProxy
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;tap-demo-01            spring-petclinic-contour-76691bbb1936a7b010ca900ce58a3f57spring   spring-petclinic.tap-demo-01.svc.cluster.local                                                                  valid    Valid HTTPProxy
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;tap-demo-01            spring-petclinic-contour-88f827fbdc09abbb4ee2b887bba100edspring   spring-petclinic.tap-demo-01.tap.cloudnativeapps.cloud  tap-demo-01/route-a4b7b2c7-0a56-48b9-ad26-6b0e06ca1925  valid    Valid HTTPProxy
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;tap-demo-01            spring-petclinic-contour-spring-petclinic.tap-demo-01             spring-petclinic.tap-demo-01                                                                                    valid    Valid HTTPProxy
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;tap-demo-01            spring-petclinic-contour-spring-petclinic.tap-demo-01.svc         spring-petclinic.tap-demo-01.svc                                                                                valid    Valid HTTPProxy
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;tap-gui                tap-gui                                                           tap-gui.tap.cloudnativeapps.cloud                       tap-gui-cert                                            valid    Valid HTTPProxy
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;TAP uses a shared ingress issuer as a centralized certificate authority representation, providing a method to set up TLS for the entire platform. All participating components get their ingress certificates issued by it. This is the recommended best practice for issuing ingress certificates on the platform.&lt;/p&gt;</description></item><item><title>Harbor Registry – Automating LDAP/S Configuration – Part 2</title><link>https://buildrunrepeat.com/posts/harbor-registry-automating-ldap-configuration-part-2/</link><pubDate>Sun, 01 Jan 2023 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/harbor-registry-automating-ldap-configuration-part-2/</guid><description>&lt;p&gt;This post continues our two-part series on automating LDAP configuration for Harbor Registry. In the &lt;a href="https://buildrunrepeat.com/posts/harbor-registry-automating-ldap-configuration-part-1/"&gt;previous post&lt;/a&gt;, we demonstrated how to achieve this using Ansible, running externally. However, external automation has its challenges, such as firewall restrictions or limited API access in some cases/environments.&lt;/p&gt;
&lt;p&gt;Note: make sure you review the previous post, as it provides important additional background and clarification on this process, LDAPS configuration, and more.&lt;/p&gt;
&lt;p&gt;Here, we explore an alternative approach using Terraform, running the automation directly inside the Kubernetes cluster hosting Harbor.
This method leverages native Kubernetes scheduling capabilities to run the configuration job in a fully declarative manner, and it requires no network access to Harbor from the machine that launches the job.&lt;/p&gt;</description></item></channel></rss>