<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Storage on Build. Run. Repeat.</title><link>https://buildrunrepeat.com/tags/storage/</link><description>Recent content in Storage on Build. Run. Repeat.</description><generator>Hugo</generator><language>en</language><lastBuildDate>Wed, 01 Jan 2025 09:00:00 -0400</lastBuildDate><atom:link href="https://buildrunrepeat.com/tags/storage/index.xml" rel="self" type="application/rss+xml"/><item><title>HashiCorp Consul Service Mesh on Kubernetes Series - Part 1 - Introduction and Setup</title><link>https://buildrunrepeat.com/posts/hashicorp-consul-k8s-service-mesh-series-01-intro-and-setup/</link><pubDate>Wed, 01 Jan 2025 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/hashicorp-consul-k8s-service-mesh-series-01-intro-and-setup/</guid><description>&lt;p&gt;Modern cloud-native architectures rely heavily on microservices, and Kubernetes has become the go-to platform for deploying, managing, and scaling these distributed applications. As the number of microservices grows, ensuring secure, reliable, and observable service-to-service communication becomes increasingly complex. This is where service mesh solutions, such as HashiCorp Consul, step in to provide a seamless approach to managing these challenges. 
In this blog post, we will delve into the integration of HashiCorp Consul Service Mesh with Kubernetes, exploring its architecture, features, and step-by-step deployment guide.&lt;/p&gt;</description></item><item><title>HashiCorp Vault Enterprise - Performance Replication on Kubernetes</title><link>https://buildrunrepeat.com/posts/hashicorp-vault-enterprise-performance-replication-on-k8s/</link><pubDate>Wed, 01 Jan 2025 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/hashicorp-vault-enterprise-performance-replication-on-k8s/</guid><description>&lt;p&gt;This blog post dives into the technical implementation of Vault Enterprise replication within a Kubernetes environment. We’ll explore how to set up performance and disaster recovery replication, overcome common challenges, and ensure smooth synchronization between clusters. Whether you’re aiming for redundancy or better data locality, this guide will equip you with the insights and tools needed to leverage Vault’s enterprise-grade features in Kubernetes effectively.&lt;/p&gt;
&lt;h2 id="architecture"&gt;Architecture&lt;/h2&gt;
&lt;p&gt;
&lt;a href="https://buildrunrepeat.com/posts/hashicorp-vault-enterprise-performance-replication-on-k8s/images/001.png" data-dimbox data-dimbox-caption="Screenshot"&gt;
 &lt;img alt="Screenshot" src="https://buildrunrepeat.com/posts/hashicorp-vault-enterprise-performance-replication-on-k8s/images/001.png"/&gt;
&lt;/a&gt;
&lt;/p&gt;
&lt;h2 id="prerequisites"&gt;Prerequisites&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Two Kubernetes clusters. Note: for simulation purposes, you can also use a single Kubernetes cluster with multiple namespaces to host both Vault clusters.&lt;/li&gt;
&lt;li&gt;Helm installed&lt;/li&gt;
&lt;li&gt;kubectl installed&lt;/li&gt;
&lt;li&gt;Vault CLI installed&lt;/li&gt;
&lt;li&gt;jq installed&lt;/li&gt;
&lt;li&gt;Vault Enterprise license&lt;/li&gt;
&lt;/ul&gt;
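&lt;p&gt;At a high level, the replication setup covered in the implementation section follows the standard Vault CLI flow sketched below. This is a rough illustration only: the cluster addresses are placeholders, and the wrapped token is elided.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
```shell
# On the primary cluster: enable performance replication.
# VAULT_ADDR points at the primary's API/UI address (port 8200).
export VAULT_ADDR=https://vault-primary.example.com:8200
vault write -f sys/replication/performance/primary/enable

# Generate a wrapped activation token for a secondary named "secondary-1".
vault write sys/replication/performance/primary/secondary-token id=secondary-1

# On the secondary cluster: activate replication with the wrapped token.
# The secondary unwraps the token against the primary's API address (8200);
# the replication traffic itself flows over the cluster address (8201).
export VAULT_ADDR=https://vault-secondary.example.com:8200
vault write sys/replication/performance/secondary/enable token=&lt;wrapped-token&gt;
```
&lt;/code&gt;&lt;/pre&gt;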
&lt;p&gt;Note: in this implementation, LoadBalancer services are used on Kubernetes to expose the Vault services (the API/UI and the cluster address for replication). It is highly recommended to use a LoadBalancer rather than an ingress to expose the cluster address for replication, since Vault itself performs the TLS termination using certificates mounted to the Vault pods from Kubernetes. Additionally, note that when enabling replication, the primary cluster points to the secondary cluster address (port 8201) rather than the API/UI address (port 8200). When the secondary cluster applies the replication token, however, it points to the API/UI address (port 8200) to unwrap it and complete the replication setup. We will see this in more detail in the implementation section.&lt;/p&gt;</description></item><item><title>MinIO on vSphere - Automated Deployment and Onboarding</title><link>https://buildrunrepeat.com/posts/minio-on-vsphere-automated-deployment-and-onboarding/</link><pubDate>Fri, 01 Nov 2024 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/minio-on-vsphere-automated-deployment-and-onboarding/</guid><description>&lt;p&gt;In the world of Kubernetes, reliable S3-compliant object storage is essential for tasks like storing backups. However, not everyone has access to a native S3-compatible solution, and setting one up can feel like a daunting task. MinIO, an open-source object storage solution, is a popular choice to fill this gap. Its lightweight, high-performance architecture makes it an excellent option for Kubernetes users seeking quick and reliable storage.&lt;/p&gt;
&lt;p&gt;MinIO is also one of the most widely adopted open-source object storage solutions, thanks to its simplicity and S3 compatibility. It’s perfect for Kubernetes environments that need a reliable and scalable storage layer for backups, logs, or other data.&lt;/p&gt;</description></item><item><title>HashiCorp Vault Intermediate CA Setup with Cert-Manager and Microsoft Root CA</title><link>https://buildrunrepeat.com/posts/hashicorp-vault-intermediate-ca-setup-with-cert-manager-and-ms-root-ca/</link><pubDate>Mon, 01 Jan 2024 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/hashicorp-vault-intermediate-ca-setup-with-cert-manager-and-ms-root-ca/</guid><description>&lt;p&gt;In this post, we&amp;rsquo;ll explore how to set up HashiCorp Vault as an Intermediate Certificate Authority (CA) on a Kubernetes cluster, using a Microsoft CA as the Root CA. We&amp;rsquo;ll then integrate this setup with cert-manager, a powerful Kubernetes add-on for automating the management and issuance of TLS certificates.&lt;/p&gt;
&lt;p&gt;The following is an architecture diagram for the use case I&amp;rsquo;ve built.&lt;/p&gt;
&lt;p&gt;
&lt;a href="https://buildrunrepeat.com/posts/hashicorp-vault-intermediate-ca-setup-with-cert-manager-and-ms-root-ca/images/019.png" data-dimbox data-dimbox-caption="Screenshot"&gt;
 &lt;img alt="Screenshot" src="https://buildrunrepeat.com/posts/hashicorp-vault-intermediate-ca-setup-with-cert-manager-and-ms-root-ca/images/019.png"/&gt;
&lt;/a&gt;
&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A Microsoft Windows server is used as the Root CA of the environment.&lt;/li&gt;
&lt;li&gt;A Kubernetes cluster hosting shared/common services, including HashiCorp Vault. This cluster can serve many other purposes/solutions consumed by other clusters. The Vault server is deployed on this cluster and serves as an intermediate CA, subordinate to the Microsoft Root CA server.&lt;/li&gt;
&lt;li&gt;A second Kubernetes cluster hosting the application(s). Cert-Manager is deployed on this cluster, integrated with Vault, and handles the management and issuance of TLS certificates against Vault using the ClusterIssuer resource. A web application, exposed via ingress, is running on this cluster. The ingress resource consumes its TLS certificate from Vault.&lt;/li&gt;
&lt;/ul&gt;
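&lt;p&gt;The cert-manager integration described above is typically wired up with a ClusterIssuer that points at Vault&amp;rsquo;s PKI engine. The following is a minimal sketch only: the server address, PKI mount path, role, and secret names are placeholders, not values from this environment.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
```shell
# Create a ClusterIssuer backed by Vault's intermediate PKI mount.
# All names and addresses below are illustrative placeholders.
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vault-issuer
spec:
  vault:
    server: https://vault.example.com:8200
    # Certificate requests are signed against this PKI mount and role.
    path: pki_int/sign/example-dot-com
    caBundle: <base64-encoded CA chain>
    auth:
      kubernetes:
        role: cert-manager
        mountPath: /v1/auth/kubernetes
        secretRef:
          name: cert-manager-vault-token
          key: token
EOF
```
&lt;/code&gt;&lt;/pre&gt;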
&lt;h2 id="prerequisites"&gt;Prerequisites&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;At least one running Kubernetes cluster. To follow along fully, you will need two: one serving as the shared services cluster and the other as the workload/application cluster.&lt;/li&gt;
&lt;li&gt;Access to a Microsoft Root Certificate Authority (CA).&lt;/li&gt;
&lt;li&gt;The Helm CLI installed.&lt;/li&gt;
&lt;li&gt;Clone my &lt;a href="https://github.com/itaytalmi/k8s-vault-int-ca.git"&gt;GitHub repository&lt;/a&gt;. This repository contains all the manifests, files, and configurations needed.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="setting-up-hashicorp-vault-as-intermediate-ca"&gt;Setting Up HashiCorp Vault as Intermediate CA&lt;/h2&gt;
&lt;h3 id="deploy-initialize-and-configure-vault"&gt;Deploy Initialize and Configure Vault&lt;/h3&gt;
&lt;p&gt;Install the Vault CLI. In the following example, Linux Ubuntu is used. If you are using a different operating system, refer to &lt;a href="https://developer.hashicorp.com/vault/install"&gt;these instructions&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>CAPV: Addressing Node Provisioning Issues Due to an Invalid State of ETCD</title><link>https://buildrunrepeat.com/posts/capv-addressing-node-provisioning-issues-due-to-invalid-state-of-etcd/</link><pubDate>Fri, 01 Dec 2023 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/capv-addressing-node-provisioning-issues-due-to-invalid-state-of-etcd/</guid><description>&lt;p&gt;I recently ran into a strange scenario on a Kubernetes cluster after a sudden and unexpected crash it had experienced due to an issue in the underlying vSphere environment. In this case, the cluster was a TKG cluster (in fact, it happened to be the TKG management cluster), however, the same situation could have occurred on any cluster managed by Cluster API Provider vSphere (CAPV).&lt;/p&gt;
&lt;p&gt;I have seen clusters unexpectedly crash many times before and most of the time, they successfully went back online when all nodes were up and running. In this case, however, some of the nodes could not boot properly, and Cluster API started attempting their reconciliation.&lt;/p&gt;</description></item><item><title>CAPV: Fixing and Cleaning Up Idle vCenter Server Sessions</title><link>https://buildrunrepeat.com/posts/capv-fixing-and-cleaning-up-idle-vcenter-sessions/</link><pubDate>Wed, 01 Nov 2023 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/capv-fixing-and-cleaning-up-idle-vcenter-sessions/</guid><description>&lt;p&gt;I recently ran into an issue causing the vCenter server to crash almost daily. What seemed to be a random vCenter issue initially, turned out to be related to CAPV (Cluster API Provider vSphere), running on some of our Kubernetes clusters. That was also an edge case I had not seen before, so I decided to document and share it here.&lt;/p&gt;
&lt;p&gt;Initially, the issue we were witnessing on the vCenter server was the following:&lt;/p&gt;</description></item><item><title>TKG: Updating Pinniped Configuration and Addressing Common Issues</title><link>https://buildrunrepeat.com/posts/tkg-updating-pinniped-config-and-addressing-common-issues/</link><pubDate>Thu, 01 Jun 2023 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/tkg-updating-pinniped-config-and-addressing-common-issues/</guid><description>&lt;p&gt;Most of the TKG engagements I&amp;rsquo;ve been involved in included Pinniped for Kubernetes authentication.
On many occasions, I have seen issues where the configuration provided to Pinniped was incorrect or only partially correct. Common issues are often related to the LDAPS integration: many environments use Active Directory as the authentication source, and Pinniped requires the LDAPS certificate, username, and password, which are frequently specified incorrectly. Since this configuration is not validated during deployment, you end up with Pinniped in an invalid state on your management cluster.&lt;/p&gt;</description></item><item><title>Streamlining and Customizing Windows Image Builder for TKG</title><link>https://buildrunrepeat.com/posts/streamlining-and-customizing-windows-image-builder-in-tkg/</link><pubDate>Wed, 01 Mar 2023 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/streamlining-and-customizing-windows-image-builder-in-tkg/</guid><description>&lt;p&gt;Tanzu Kubernetes Grid (TKG) is one of the few platforms providing out-of-the-box support and streamlined deployment of Windows Kubernetes clusters. VMware is actively investing in this area and constantly improving the support and capabilities around Windows on Kubernetes.&lt;/p&gt;
&lt;p&gt;Unlike Linux-based clusters, for which VMware provides pre-packaged base OS images (typically based on Ubuntu and Photon OS), VMware cannot offer pre-packaged Windows images, presumably due to licensing restrictions. Therefore, building your own Windows base OS image is one of the prerequisites for deploying a TKG Windows workload cluster.
Fortunately, VMware leverages the &lt;a href="https://github.com/kubernetes-sigs/image-builder"&gt;upstream Image Builder project&lt;/a&gt; - a fantastic collection of cross-provider Kubernetes virtual machine image-building utilities intended to simplify and streamline the creation of base OS images for Kubernetes.&lt;/p&gt;</description></item><item><title>Tanzu Kubernetes Grid GPU Integration</title><link>https://buildrunrepeat.com/posts/tkg-gpu-integration/</link><pubDate>Wed, 01 Mar 2023 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/tkg-gpu-integration/</guid><description>&lt;p&gt;I recently had to demonstrate Tanzu Kubernetes Grid and its GPU integration capabilities.
Developing a good use case and assembling the demo required some preliminary research.&lt;/p&gt;
&lt;p&gt;During my research, I reached out to Jay Vyas, staff engineer at VMware, SIG Windows lead for Kubernetes, a Kubernetes legend, and an awesome guy in general. :) For those who don&amp;rsquo;t know Jay, he is also one of the authors of the fantastic book &lt;code&gt;Core Kubernetes&lt;/code&gt; (look it up!).&lt;/p&gt;</description></item><item><title>Is your TKG cluster name too long, or is it your DHCP Server…?</title><link>https://buildrunrepeat.com/posts/is-your-tkg-cluster-name-too-long-or-is-it-your-dhcp-server/</link><pubDate>Mon, 01 Aug 2022 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/is-your-tkg-cluster-name-too-long-or-is-it-your-dhcp-server/</guid><description>&lt;p&gt;Recently, when working on a TKGm implementation project, I initially ran into an issue that seemed very odd, as I hadn&amp;rsquo;t encountered such behavior in any other implementation before.&lt;/p&gt;
&lt;p&gt;The issue was that a workload cluster deployment hung after deploying the first control plane node. Until then, everything had seemed just fine: the cluster deployment had successfully initialized, and NSX ALB had successfully allocated a control plane VIP. After that, however, the deployment hung completely and looked like it would not proceed.&lt;/p&gt;</description></item><item><title>Kubernetes Data Protection: Getting Started with Kasten (K10)</title><link>https://buildrunrepeat.com/posts/kubernetes-data-protection-getting-started-with-kasten/</link><pubDate>Mon, 01 Aug 2022 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/kubernetes-data-protection-getting-started-with-kasten/</guid><description>&lt;p&gt;In a recent Kubernetes project I was involved in, our team had to conduct an in-depth proof of concept for several Kubernetes data protection solutions. The main highlights of the PoC covered data protection for stateful applications and databases, disaster recovery, and application mobility, including relocating applications across Kubernetes clusters and even different types of Kubernetes clusters (for example, from TKG on-premise to AWS EKS, etc.).&lt;/p&gt;
&lt;p&gt;One of the solutions we evaluated was Kasten (K10), a data management platform for Kubernetes, which is now a part of Veeam. The implementation of Kasten was one of the smoothest we have ever experienced in terms of ease of use, stability, and general clarity around getting things done, as everything is very well documented, which certainly cannot be taken for granted these days. :)&lt;/p&gt;</description></item><item><title>Production-Grade Multi-Cluster TAP Installation Guide</title><link>https://buildrunrepeat.com/posts/production-grade-multi-cluster-tap-installation-guide/</link><pubDate>Mon, 01 Aug 2022 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/production-grade-multi-cluster-tap-installation-guide/</guid><description>&lt;ul&gt;
&lt;li&gt;&lt;a href="#introduction"&gt;Introduction&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#prerequisites"&gt;Prerequisites&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#prepare-your-workstation"&gt;Prepare your Workstation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#relocate-tap-images-to-your-private-registry"&gt;Relocate TAP Images to your Private Registry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#install-tap"&gt;Install TAP&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#view-cluster"&gt;View Cluster&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#set-up-the-installation-namespace"&gt;Set up the Installation Namespace&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#issue-a-tls-certificate-for-tap-gui"&gt;Issue a TLS Certificate for TAP GUI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-a-database-for-tap-gui"&gt;Set up a Database for TAP GUI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-the-tap-gui-catalog-git-repository"&gt;Set up the TAP GUI Catalog Git Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-rbac-for-the-metadata-store"&gt;Set up RBAC for the Metadata Store&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-an-authentication-provider-for-tap-gui"&gt;Set up an Authentication Provider for TAP GUI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-rbac-for-the-build-run-and-iterate-clusters"&gt;Set up RBAC for the Build, Run and Iterate Clusters&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-an-ingress-domain-tap-gui-hostname-and-ca-certificate"&gt;Set an Ingress Domain, TAP GUI Hostname and CA Certificate&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#deploy-the-tap-package"&gt;Deploy the TAP Package&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#build-cluster"&gt;Build Cluster&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#set-up-the-installation-namespace-1"&gt;Set up the Installation Namespace&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-metadata-store-authentication-and-ca-certificate"&gt;Set up Metadata Store Authentication and CA Certificate&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#prepare-a-sample-source-code-git-repository"&gt;Prepare a Sample Source Code Git Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#update-the-tap-values-file"&gt;Update the TAP Values File&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#deploy-the-tap-package-1"&gt;Deploy the TAP Package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#deploy-the-tbs-full-dependencies-package"&gt;Deploy the TBS Full Dependencies Package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-the-developer-namespace-and-deploy-a-workload"&gt;Set up the Developer Namespace and Deploy a Workload&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#run-cluster"&gt;Run Cluster&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#set-up-the-installation-namespace-2"&gt;Set up the Installation Namespace&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#update-the-tap-values-file-1"&gt;Update the TAP Values File&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#deploy-the-tap-package-2"&gt;Deploy the TAP Package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-the-developer-namespace-and-deploy-a-workload-1"&gt;Set up the Developer Namespace and Deploy a Workload&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#iterate-cluster"&gt;Iterate Cluster&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#set-up-the-installation-namespace-3"&gt;Set up the Installation Namespace&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#update-the-tap-values-file-2"&gt;Update the TAP Values File&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#deploy-the-tap-package-3"&gt;Deploy the TAP Package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#deploy-the-tbs-full-dependencies-package-1"&gt;Deploy the TBS Full Dependencies Package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-the-developer-namespace-and-deploy-a-workload-2"&gt;Set up the Developer Namespace and Deploy a Workload&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#iterate-on-your-application"&gt;Iterate on your Application&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#wrap-up"&gt;Wrap Up&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Since my previous posts on &lt;a href="https://buildrunrepeat.com/posts/vmware-tanzu-application-platform-overview/"&gt;TAP Overview&lt;/a&gt; and &lt;a href="https://buildrunrepeat.com/posts/backstage-introduction-kubecon-cloudnativecon-europe-2022/"&gt;Backstage&lt;/a&gt;, I have been diving deeper into TAP, trying to establish the practices around it.&lt;/p&gt;</description></item></channel></rss>