<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Harbor on Build. Run. Repeat.</title><link>https://buildrunrepeat.com/tags/harbor/</link><description>Recent content in Harbor on Build. Run. Repeat.</description><generator>Hugo</generator><language>en</language><lastBuildDate>Fri, 01 Nov 2024 09:00:00 -0400</lastBuildDate><atom:link href="https://buildrunrepeat.com/tags/harbor/index.xml" rel="self" type="application/rss+xml"/><item><title>Harbor Registry - Automating LDAP/S Configuration - Part 1</title><link>https://buildrunrepeat.com/posts/harbor-registry-automating-ldap-configuration-part-1/</link><pubDate>Fri, 01 Nov 2024 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/harbor-registry-automating-ldap-configuration-part-1/</guid><description>&lt;p&gt;The Harbor Registry is involved in many of my Kubernetes implementations in the field, and in almost every implementation I am asked about the options to configure LDAP/S authentication for the registry. Unfortunately, neither the community Helm chart nor the Tanzu Harbor package provides native inputs for this setup. Fortunately, the Harbor REST API enables LDAP configuration programmatically. Automating this process ensures consistency across environments, speeds up deployments, and reduces the chance of human error.&lt;/p&gt;</description></item><item><title>HashiCorp Vault Intermediate CA Setup with Cert-Manager and Microsoft Root CA</title><link>https://buildrunrepeat.com/posts/hashicorp-vault-intermediate-ca-setup-with-cert-manager-and-ms-root-ca/</link><pubDate>Mon, 01 Jan 2024 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/hashicorp-vault-intermediate-ca-setup-with-cert-manager-and-ms-root-ca/</guid><description>&lt;p&gt;In this post, we&amp;rsquo;ll explore how to set up HashiCorp Vault as an Intermediate Certificate Authority (CA) on a Kubernetes cluster, using a Microsoft CA as the Root CA. 
We&amp;rsquo;ll then integrate this setup with cert-manager, a powerful Kubernetes add-on for automating the management and issuance of TLS certificates.&lt;/p&gt;
&lt;p&gt;The following is an architecture diagram for the use case I&amp;rsquo;ve built.&lt;/p&gt;
&lt;p&gt;
&lt;a href="https://buildrunrepeat.com/posts/hashicorp-vault-intermediate-ca-setup-with-cert-manager-and-ms-root-ca/images/019.png" data-dimbox data-dimbox-caption="Screenshot"&gt;
 &lt;img alt="Screenshot" src="https://buildrunrepeat.com/posts/hashicorp-vault-intermediate-ca-setup-with-cert-manager-and-ms-root-ca/images/019.png"/&gt;
&lt;/a&gt;
&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A Microsoft Windows server is used as the Root CA of the environment.&lt;/li&gt;
&lt;li&gt;A Kubernetes cluster hosting shared/common services, including HashiCorp Vault. This cluster can host many other shared services consumed by other clusters. The Vault server is deployed on this cluster and serves as an intermediate CA under the Microsoft Root CA.&lt;/li&gt;
&lt;li&gt;A second Kubernetes cluster hosting the application(s). Cert-Manager is deployed on this cluster, integrated with Vault, and handles the management and issuance of TLS certificates against Vault using the ClusterIssuer resource. A web application, exposed via ingress, is running on this cluster. The ingress resource consumes its TLS certificate from Vault.&lt;/li&gt;
&lt;/ul&gt;
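&lt;p&gt;As a rough sketch of the integration described above, the ClusterIssuer on the workload cluster might look like the following. Note that the issuer name, Vault address, PKI path, and secret reference are placeholder values for illustration, not the exact values used in my repository:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vault-issuer
spec:
  vault:
    # Vault server running on the shared services cluster
    server: https://vault.example.com
    # Sign role under the intermediate PKI secrets engine
    path: pki_int/sign/example-dot-com
    # caBundle may also be required so cert-manager trusts the Vault certificate
    auth:
      # Static token auth for simplicity; Kubernetes auth is also supported
      tokenSecretRef:
        name: vault-token
        key: token
&lt;/code&gt;&lt;/pre&gt;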
&lt;h2 id="prerequisites"&gt;Prerequisites&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;At least one running Kubernetes cluster. To follow along, you will need two: one serving as the shared services cluster and the other as the workload/application cluster.&lt;/li&gt;
&lt;li&gt;Access to a Microsoft Root Certificate Authority (CA).&lt;/li&gt;
&lt;li&gt;The Helm CLI installed.&lt;/li&gt;
&lt;li&gt;Clone my &lt;a href="https://github.com/itaytalmi/k8s-vault-int-ca.git"&gt;GitHub repository&lt;/a&gt;. This repository contains all the manifests, files, and configurations needed.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="setting-up-hashicorp-vault-as-intermediate-ca"&gt;Setting Up HashiCorp Vault as Intermediate CA&lt;/h2&gt;
&lt;h3 id="deploy-initialize-and-configure-vault"&gt;Deploy Initialize and Configure Vault&lt;/h3&gt;
&lt;p&gt;Install the Vault CLI. In the following example, Ubuntu Linux is used. If you are using a different operating system, refer to &lt;a href="https://developer.hashicorp.com/vault/install"&gt;these instructions&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>CAPV: Addressing Node Provisioning Issues Due to an Invalid State of ETCD</title><link>https://buildrunrepeat.com/posts/capv-addressing-node-provisioning-issues-due-to-invalid-state-of-etcd/</link><pubDate>Fri, 01 Dec 2023 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/capv-addressing-node-provisioning-issues-due-to-invalid-state-of-etcd/</guid><description>&lt;p&gt;I recently ran into a strange scenario on a Kubernetes cluster after a sudden, unexpected crash caused by an issue in the underlying vSphere environment. In this case, the cluster was a TKG cluster (in fact, it happened to be the TKG management cluster); however, the same situation could have occurred on any cluster managed by Cluster API Provider vSphere (CAPV).&lt;/p&gt;
&lt;p&gt;I have seen clusters crash unexpectedly many times before, and most of the time they went back online successfully once all nodes were up and running. In this case, however, some of the nodes could not boot properly, and Cluster API started attempting to reconcile them.&lt;/p&gt;</description></item><item><title>TKG 2.3: Fixing the Prometheus Data Source in the Grafana Package</title><link>https://buildrunrepeat.com/posts/tkg-2-3-fixing-the-prometheus-data-source-in-the-grafana-package/</link><pubDate>Fri, 01 Sep 2023 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/tkg-2-3-fixing-the-prometheus-data-source-in-the-grafana-package/</guid><description>&lt;p&gt;With the release of TKG 2.3, the Grafana package was finally updated from version 7.5.x to 9.5.1.
If you have deployed the new Grafana package (&lt;code&gt;9.5.1+vmware.2-tkg.1&lt;/code&gt;) or upgraded your existing one to this version, you may have run into error messages in your Grafana dashboards.&lt;/p&gt;
&lt;p&gt;For example, in the &lt;code&gt;TKG Kubernetes cluster monitoring&lt;/code&gt; default dashboard, you may have run into the &lt;code&gt;Failed to call resource&lt;/code&gt; error when opening the dashboard and noticed that a lot of the data is missing.&lt;/p&gt;</description></item><item><title>Streamlining and Customizing Windows Image Builder for TKG</title><link>https://buildrunrepeat.com/posts/streamlining-and-customizing-windows-image-builder-in-tkg/</link><pubDate>Wed, 01 Mar 2023 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/streamlining-and-customizing-windows-image-builder-in-tkg/</guid><description>&lt;p&gt;Tanzu Kubernetes Grid (TKG) is one of the few platforms providing out-of-the-box support and streamlined deployment of Windows Kubernetes clusters. VMware is actively investing in this area and constantly improving the support and capabilities around Windows on Kubernetes.&lt;/p&gt;
&lt;p&gt;Unlike Linux-based clusters, for which VMware provides pre-packaged base OS images (typically based on Ubuntu and Photon OS), VMware cannot offer pre-packaged Windows images, presumably due to licensing restrictions. Therefore, building your own Windows base OS image is one of the prerequisites for deploying a TKG Windows workload cluster.
Fortunately, VMware leverages the &lt;a href="https://github.com/kubernetes-sigs/image-builder"&gt;upstream Image Builder project&lt;/a&gt; - a fantastic collection of cross-provider Kubernetes virtual machine image-building utilities intended to simplify and streamline the creation of base OS images for Kubernetes.&lt;/p&gt;</description></item><item><title>Tanzu Kubernetes Grid GPU Integration</title><link>https://buildrunrepeat.com/posts/tkg-gpu-integration/</link><pubDate>Wed, 01 Mar 2023 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/tkg-gpu-integration/</guid><description>&lt;p&gt;I recently had to demonstrate Tanzu Kubernetes Grid and its GPU integration capabilities.
Developing a good use case and assembling the demo required some preliminary research.&lt;/p&gt;
&lt;p&gt;During my research, I reached out to Jay Vyas, staff engineer at VMware, SIG Windows lead for Kubernetes, a Kubernetes legend, and an awesome guy in general. :) For those who don&amp;rsquo;t know Jay, he is also one of the authors of the fantastic book &lt;code&gt;Core Kubernetes&lt;/code&gt; (look it up!).&lt;/p&gt;</description></item><item><title>Harbor Registry – Automating LDAP/S Configuration – Part 2</title><link>https://buildrunrepeat.com/posts/harbor-registry-automating-ldap-configuration-part-2/</link><pubDate>Sun, 01 Jan 2023 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/harbor-registry-automating-ldap-configuration-part-2/</guid><description>&lt;p&gt;This post continues our two-part series on automating LDAP configuration for Harbor Registry. In the &lt;a href="https://buildrunrepeat.com/posts/harbor-registry-automating-ldap-configuration-part-1/"&gt;previous post&lt;/a&gt;, we demonstrated how to achieve this using Ansible, running externally. However, external automation has its challenges, such as firewall restrictions or limited API access in some cases/environments.&lt;/p&gt;
&lt;p&gt;Note: make sure you review the previous post, as it provides a lot of additional background and clarification on this process, LDAPS configuration, and more.&lt;/p&gt;
&lt;p&gt;Here, we explore an alternative approach using Terraform, running the automation directly inside the Kubernetes cluster hosting Harbor.
This method leverages native Kubernetes scheduling capabilities, runs the configuration job in a fully declarative manner, and does not require any network access to Harbor from the machine initiating the job.&lt;/p&gt;</description></item><item><title>Upgrading NSX ALB in a TKG Environment</title><link>https://buildrunrepeat.com/posts/upgrading-nsx-alb-in-a-tkg-environment/</link><pubDate>Thu, 01 Sep 2022 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/upgrading-nsx-alb-in-a-tkg-environment/</guid><description>&lt;p&gt;For quite a long time, the highest NSX ALB version supported by TKG was &lt;code&gt;20.1.6/20.1.3&lt;/code&gt;, although &lt;code&gt;21.1.x&lt;/code&gt; had been available for a while, and I had been wondering when TKG would support it.
In the release notes of TKG &lt;code&gt;1.5.4&lt;/code&gt;, I recently noticed a newly added note regarding NSX ALB &lt;code&gt;21.1.x&lt;/code&gt; under the &lt;code&gt;Configuration variables&lt;/code&gt; section:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;code&gt;AVI_CONTROLLER_VERSION&lt;/code&gt; sets the NSX Advanced Load Balancer (ALB) version for NSX ALB v21.1.x deployments in Tanzu Kubernetes Grid.&lt;/em&gt;&lt;/p&gt;</description></item><item><title>Getting Harbor to trust your LDAPS certificate in TKG</title><link>https://buildrunrepeat.com/posts/getting-harbor-to-trust-your-ldaps-certificate-in-tkg/</link><pubDate>Mon, 01 Aug 2022 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/getting-harbor-to-trust-your-ldaps-certificate-in-tkg/</guid><description>&lt;p&gt;In a recent TKG implementation, it was required to configure Harbor with LDAPS rather than LDAP.&lt;/p&gt;
&lt;p&gt;I deployed the Harbor package on the TKG shared services cluster and configured LDAP. However, when testing the connection, I received an error message that was not informative at all:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;Failed to verify LDAP server with error: error: ldap server network timeout.
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;
&lt;a href="https://buildrunrepeat.com/posts/getting-harbor-to-trust-your-ldaps-certificate-in-tkg/images/001.png" data-dimbox data-dimbox-caption="Screenshot"&gt;
 &lt;img alt="Screenshot" src="https://buildrunrepeat.com/posts/getting-harbor-to-trust-your-ldaps-certificate-in-tkg/images/001.png"/&gt;
&lt;/a&gt;
&lt;/p&gt;
&lt;p&gt;Although the error message doesn&amp;rsquo;t explicitly mention a certificate issue, and there is nothing in the &lt;code&gt;harbor-core&lt;/code&gt; container logs, it immediately made sense to me that the &lt;code&gt;harbor-core&lt;/code&gt; container didn&amp;rsquo;t trust my LDAPS/CA certificate. So I started investigating how the certificate could be injected into Harbor. The Harbor package doesn&amp;rsquo;t have any input for the LDAPS/CA certificate in its data values file, so I knew I had to create &lt;a href="https://github.com/itaytalmi/vmware-tkg/blob/main/ytt-overlays/tkg-packages/harbor/ldaps-overlay/overlay-harbor-ldaps-cert.yaml"&gt;my own YTT overlay&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Getting kapp-controller to trust your CA certificates in TKG</title><link>https://buildrunrepeat.com/posts/getting-kapp-controller-to-trust-your-ca-certificates-in-tkg/</link><pubDate>Mon, 01 Aug 2022 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/getting-kapp-controller-to-trust-your-ca-certificates-in-tkg/</guid><description>&lt;p&gt;Have you ever had to deploy a package using kapp-controller from your Harbor private registry?&lt;/p&gt;
&lt;p&gt;I recently deployed the Tanzu RabbitMQ package to a TKGm workload cluster in an air-gapped/internet-restricted environment.&lt;/p&gt;
&lt;p&gt;Doing so in air-gapped environments requires you to push the packages into Harbor, then have kapp-controller deploy the package from Harbor.&lt;/p&gt;
&lt;p&gt;After adding the PackageRepository referencing my Harbor registry, I observed that it couldn&amp;rsquo;t finish reconciling due to a certificate issue.&lt;/p&gt;</description></item><item><title>Harbor Registry: is your LDAP user unique?</title><link>https://buildrunrepeat.com/posts/harbor-registry-is-your-ldap-user-unique/</link><pubDate>Mon, 01 Aug 2022 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/harbor-registry-is-your-ldap-user-unique/</guid><description>&lt;p&gt;A recent project I was working on required granting different levels of permissions for several Active Directory service accounts on the Harbor registry, so that some could only pull images from the registry while others could also push.&lt;/p&gt;
&lt;p&gt;On the Harbor project, I had the following configuration for my users:&lt;/p&gt;
&lt;p&gt;
&lt;a href="https://buildrunrepeat.com/posts/harbor-registry-is-your-ldap-user-unique/images/001.png" data-dimbox data-dimbox-caption="Screenshot"&gt;
 &lt;img alt="Screenshot" src="https://buildrunrepeat.com/posts/harbor-registry-is-your-ldap-user-unique/images/001.png"/&gt;
&lt;/a&gt;
&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;harbor-group-01&lt;/code&gt; group contains an Active Directory user named &lt;code&gt;harbor-user-01&lt;/code&gt; and &lt;code&gt;harbor-group-02&lt;/code&gt; contains &lt;code&gt;harbor-user-02&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;From the command line, I was able to log in to Harbor with &lt;code&gt;harbor-user-01&lt;/code&gt;:&lt;/p&gt;</description></item><item><title>Production-Grade Multi-Cluster TAP Installation Guide</title><link>https://buildrunrepeat.com/posts/production-grade-multi-cluster-tap-installation-guide/</link><pubDate>Mon, 01 Aug 2022 09:00:00 -0400</pubDate><guid>https://buildrunrepeat.com/posts/production-grade-multi-cluster-tap-installation-guide/</guid><description>&lt;ul&gt;
&lt;li&gt;&lt;a href="#introduction"&gt;Introduction&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#prerequisites"&gt;Prerequisites&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#prepare-your-workstation"&gt;Prepare your Workstation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#relocate-tap-images-to-your-private-registry"&gt;Relocate TAP Images to your Private Registry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#install-tap"&gt;Install TAP&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#view-cluster"&gt;View Cluster&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#set-up-the-installation-namespace"&gt;Set up the Installation Namespace&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#issue-a-tls-certificate-for-tap-gui"&gt;Issue a TLS Certificate for TAP GUI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-a-database-for-tap-gui"&gt;Set up a Database for TAP GUI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-the-tap-gui-catalog-git-repository"&gt;Set up the TAP GUI Catalog Git Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-rbac-for-the-metadata-store"&gt;Set up RBAC for the Metadata Store&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-an-authentication-provider-for-tap-gui"&gt;Set up an Authentication Provider for TAP GUI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-rbac-for-the-build-run-and-iterate-clusters"&gt;Set up RBAC for the Build, Run and Iterate Clusters&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-an-ingress-domain-tap-gui-hostname-and-ca-certificate"&gt;Set an Ingress Domain, TAP GUI Hostname and CA Certificate&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#deploy-the-tap-package"&gt;Deploy the TAP Package&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#build-cluster"&gt;Build Cluster&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#set-up-the-installation-namespace-1"&gt;Set up the Installation Namespace&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-metadata-store-authentication-and-ca-certificate"&gt;Set up Metadata Store Authentication and CA Certificate&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#prepare-a-sample-source-code-git-repository"&gt;Prepare a Sample Source Code Git Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#update-the-tap-values-file"&gt;Update the TAP Values File&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#deploy-the-tap-package-1"&gt;Deploy the TAP Package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#deploy-the-tbs-full-dependencies-package"&gt;Deploy the TBS Full Dependencies Package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-the-developer-namespace-and-deploy-a-workload"&gt;Set up the Developer Namespace and Deploy a Workload&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#run-cluster"&gt;Run Cluster&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#set-up-the-installation-namespace-2"&gt;Set up the Installation Namespace&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#update-the-tap-values-file-1"&gt;Update the TAP Values File&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#deploy-the-tap-package-2"&gt;Deploy the TAP Package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-the-developer-namespace-and-deploy-a-workload-1"&gt;Set up the Developer Namespace and Deploy a Workload&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#iterate-cluster"&gt;Iterate Cluster&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#set-up-the-installation-namespace-3"&gt;Set up the Installation Namespace&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#update-the-tap-values-file-2"&gt;Update the TAP Values File&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#deploy-the-tap-package-3"&gt;Deploy the TAP Package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#deploy-the-tbs-full-dependencies-package-1"&gt;Deploy the TBS Full Dependencies Package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-the-developer-namespace-and-deploy-a-workload-2"&gt;Set up the Developer Namespace and Deploy a Workload&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#iterate-on-your-application"&gt;Iterate on your Application&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#wrap-up"&gt;Wrap Up&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Since my previous posts on &lt;a href="https://buildrunrepeat.com/posts/vmware-tanzu-application-platform-overview/"&gt;TAP Overview&lt;/a&gt; and &lt;a href="https://buildrunrepeat.com/posts/backstage-introduction-kubecon-cloudnativecon-europe-2022/"&gt;Backstage&lt;/a&gt;, I have been diving deeper into TAP, trying to establish the practices around it.&lt;/p&gt;</description></item></channel></rss>