Production-Grade Multi-Cluster TAP Installation Guide

Introduction

Since my previous posts on TAP Overview and Backstage, I have been diving deeper into TAP, working to establish best practices around it.

I have to admit I was initially overwhelmed by the number of components, moving parts, and features TAP has to offer, especially when it comes to running it in production, i.e., in a non-naive environment… :) So I decided to document everything I have done and compose this step-by-step guide for anyone looking to get started with TAP in production. This guide covers deploying TAP in a multicluster architecture and takes a very restrictive approach to security and least privilege for image registry permissions, Git permissions, etc.

I used TKG clusters for this guide. However, the procedure is the same for any Kubernetes cluster. The only difference is that for non-TKG clusters, you also have to deploy the Cluster Essentials package.

As of this writing, the guide uses the latest TAP version available - 1.2.1.

Prerequisites

To follow along, you will need the following:

  • 4 Kubernetes clusters - View cluster, Build cluster, Run cluster, and Iterate cluster. In my environment, all worker nodes have 12GB of memory, 12 CPUs and 120GB of disk space.

  • DNS zones - I recommend creating a dedicated DNS zone or sub-zone for TAP, and then a zone per cluster (except for the View cluster) under that main zone. For example, I created a dedicated zone named it-tap.terasky.demo (terasky.demo being the parent domain), and then sub-zones per cluster (e.g., build.it-tap.terasky.demo, run.it-tap.terasky.demo, iterate.it-tap.terasky.demo). If your environment requires several run clusters, serving different environments (for example, production, QA, dev, etc.), you should create a zone per cluster (e.g., qa.it-tap.terasky.demo, dev.it-tap.terasky.demo, and so on). Due to the dynamic nature of the TAP environment, I also use a wildcard certificate covering all DNS zones, which is used by TAP and the applications/workloads deployed by TAP.

  • Clone my TAP GitHub repository.

  • A container image registry. I use Harbor.

  • A Linux machine. I use Ubuntu 22.04.

  • The following must be installed on the machine: yq, jq, relok8s.

  • An account and an API token on Tanzu Network. If you don’t already have an account, create one on the Tanzu Network site. Once you complete the initial registration, log in to your account, hover over your username at the top right, and select Edit Profile. Then, click Request New Refresh Token.

    Screenshot

    Screenshot

    Copy the generated token and keep it somewhere safe.

  • Accept the following EULAs on Tanzu Network as documented here.

    Accept the Tanzu Application Platform EULA:

    Screenshot

    Screenshot

    Accept the Cluster Essentials for VMware Tanzu EULA:

    Screenshot

    Screenshot

    Accept the Tanzu Build Service EULA:

    Screenshot

    Screenshot

    Accept the Tanzu Build Service Dependencies EULA:

    Screenshot

    Screenshot

Prepare your Workstation

The machine you are working on must have the following installed.

Install Pivnet CLI.

You can use the following commands to install the latest version:

python3 -m pip install --upgrade lastversion
PIVNET_CLI_URL=$(lastversion pivotal-cf/pivnet-cli --assets --filter ^pivnet-linux-amd64)
echo "Pivnet CLI latest release URL: $PIVNET_CLI_URL"
sudo wget -O /usr/local/bin/pivnet "$PIVNET_CLI_URL"
sudo chmod +x /usr/local/bin/pivnet
pivnet version

If you are curious about the lastversion pip package used above and want to know more about it, check out my colleague Scott Rosenberg’s blog post “Is that really the latest version?” for more information.

Log in to Tanzu Network using the Pivnet CLI with the API token you generated previously.

pivnet login --api-token=<your_api_token>

Screenshot

Download Tanzu CLI from Tanzu Network.

# Create a temporary directory
TANZU_TMP_DIR=/tmp/tanzu
mkdir -p "$TANZU_TMP_DIR"

# Define variables
TAP_VERSION=1.2.1
PRODUCT_FILE_ID=1246421

# Download bundle
pivnet download-product-files \
--product-slug="tanzu-application-platform" \
--release-version="$TAP_VERSION" \
--product-file-id="$PRODUCT_FILE_ID" \
--download-dir="$TANZU_TMP_DIR"

Example output:

2022/08/24 06:07:51 Downloading 'tanzu-framework-linux-amd64.tar' to '/tmp/tanzu/tanzu-framework-linux-amd64.tar'
 192.45 MiB / 192.45 MiB [=========================================] 100.00% 16s
2022/08/24 06:08:08 Verifying SHA256
2022/08/24 06:08:09 Successfully verified SHA256

If you are installing a different TAP version and need the download link, you can get it directly from Tanzu Network. Every file has an information icon next to it; clicking it reveals the download link. For example:

Screenshot

Screenshot

Extract and install Tanzu CLI and necessary plugins.

tar -xvf "$TANZU_TMP_DIR/tanzu-framework-linux-amd64.tar" -C "$TANZU_TMP_DIR"
export TANZU_CLI_NO_INIT=true
TANZU_FRAMEWORK_VERSION=v0.11.6
sudo install "$TANZU_TMP_DIR/cli/core/$TANZU_FRAMEWORK_VERSION/tanzu-core-linux_amd64" /usr/local/bin/tanzu
tanzu plugin install --local "$TANZU_TMP_DIR/cli" all
tanzu version
tanzu plugin list

Example output:

cli/
cli/core/
cli/core/v0.11.6/
cli/core/v0.11.6/tanzu-core-linux_amd64
cli/core/plugin.yaml
cli/distribution/
cli/distribution/linux/
cli/distribution/linux/amd64/
cli/distribution/linux/amd64/cli/
cli/distribution/linux/amd64/cli/accelerator/
cli/distribution/linux/amd64/cli/accelerator/v1.2.0/
cli/distribution/linux/amd64/cli/accelerator/v1.2.0/tanzu-accelerator-linux_amd64
cli/distribution/linux/amd64/cli/package/
cli/distribution/linux/amd64/cli/package/v0.11.6/
cli/distribution/linux/amd64/cli/package/v0.11.6/tanzu-package-linux_amd64
cli/distribution/linux/amd64/cli/apps/
cli/distribution/linux/amd64/cli/apps/v0.7.0/
cli/distribution/linux/amd64/cli/apps/v0.7.0/tanzu-apps-linux_amd64
cli/distribution/linux/amd64/cli/secret/
cli/distribution/linux/amd64/cli/secret/v0.11.6/
cli/distribution/linux/amd64/cli/secret/v0.11.6/tanzu-secret-linux_amd64
cli/distribution/linux/amd64/cli/insight/
cli/distribution/linux/amd64/cli/insight/v1.2.2/
cli/distribution/linux/amd64/cli/insight/v1.2.2/tanzu-insight-linux_amd64
cli/distribution/linux/amd64/cli/services/
cli/distribution/linux/amd64/cli/services/v0.3.0/
cli/distribution/linux/amd64/cli/services/v0.3.0/tanzu-services-linux_amd64
cli/discovery/
cli/discovery/standalone/
cli/discovery/standalone/apps.yaml
cli/discovery/standalone/services.yaml
cli/discovery/standalone/secret.yaml
cli/discovery/standalone/insight.yaml
cli/discovery/standalone/package.yaml
cli/discovery/standalone/accelerator.yaml
Installing plugin 'accelerator:v1.2.0'
Installing plugin 'apps:v0.7.0'
Installing plugin 'insight:v1.2.2'
Installing plugin 'package:v0.11.6'
Installing plugin 'secret:v0.11.6'
Installing plugin 'services:v0.3.0'
✔  successfully installed 'all' plugin
version: v0.11.6
buildDate: 2022-05-20
sha: 90440e2b
  NAME                DESCRIPTION                                                                                        SCOPE       DISCOVERY                VERSION  STATUS
  cluster             Kubernetes cluster operations                                                                      Context     default-it-tkg-mgmt-cls  v0.11.6  installed
  kubernetes-release  Kubernetes release operations                                                                      Context     default-it-tkg-mgmt-cls  v0.11.6  installed
  login               Login to the platform                                                                              Standalone  default                  v0.11.6  installed
  management-cluster  Kubernetes management-cluster operations                                                           Standalone  default                  v0.11.6  installed
  package             Tanzu package management                                                                           Standalone  default                  v0.11.6  installed
  pinniped-auth       Pinniped authentication operations (usually not directly invoked)                                  Standalone  default                  v0.11.6  installed
  secret              Tanzu secret management                                                                            Standalone  default                  v0.11.6  installed
  apps                Applications on Kubernetes                                                                         Standalone                           v0.7.0   installed
  services            Explore Service Instance Classes, discover claimable Service Instances and manage Resource Claims  Standalone                           v0.3.0   installed
  accelerator         Manage accelerators in a Kubernetes cluster                                                        Standalone                           v1.2.0   installed
  insight             post & query image, package, source, and vulnerability data                                        Standalone                           v1.2.2   installed

Install Carvel tools.

curl -L https://carvel.dev/install.sh | sudo bash
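
The install script places the Carvel binaries under /usr/local/bin. You can quickly verify the tools this guide relies on are available:

ytt version
imgpkg version
kapp version
kbld version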

Relocate TAP Images to your Private Registry

Log in to the Tanzu Network registry.

# Define credentials
TANZU_REGISTRY_USERNAME='your_tanzu_registry_username'
TANZU_REGISTRY_PASSWORD='your_tanzu_registry_password'

# Login
docker login registry.tanzu.vmware.com \
-u "$TANZU_REGISTRY_USERNAME" \
-p "$TANZU_REGISTRY_PASSWORD"

Create a repository for TAP in your private registry. You can use the following commands when using Harbor.

# Define variables
PRIVATE_REGISTRY_HOSTNAME='your_private_registry_fqdn'
PRIVATE_REGISTRY_USERNAME='your_private_registry_username'
PRIVATE_REGISTRY_PASSWORD='your_private_registry_password'
TAP_VERSION=1.2.1
TAP_REPO=tap

# Create project
curl -k -H "Content-Type: application/json" \
-u "$PRIVATE_REGISTRY_USERNAME:$PRIVATE_REGISTRY_PASSWORD" \
-X POST "https://$PRIVATE_REGISTRY_HOSTNAME/api/v2.0/projects" \
-d '{"project_name": '\"${TAP_REPO}\"', "public": false}'

Screenshot

Log in to your private registry.

docker login "$PRIVATE_REGISTRY_HOSTNAME" \
-u "$PRIVATE_REGISTRY_USERNAME" \
-p "$PRIVATE_REGISTRY_PASSWORD"

Relocate the images to your private registry using imgpkg.

imgpkg copy --registry-verify-certs=false \
-b "registry.tanzu.vmware.com/tanzu-application-platform/tap-packages:$TAP_VERSION" \
--to-repo "$PRIVATE_REGISTRY_HOSTNAME/$TAP_REPO/tap-packages"

Example output:

copy | exporting 177 images...
copy | will export registry.tanzu.vmware.com/tanzu-application-platform/tap-packages@sha256:001224ba2c37663a9a412994f1086ddbbe40aaf959b30c465ad06c2a563f2b9f
copy | will export registry.tanzu.vmware.com/tanzu-application-platform/tap-packages@sha256:029a19b8bbce56243c5b1200662cfe57ebdcb42fb6c789d99aa8044df658c023
...
copy | will export registry.tanzu.vmware.com/tanzu-application-platform/tap-packages@sha256:fdf4eca1d730c92a6b81700eb0156ac67dc700ed09d7dcb4dd909a75f0914c0b
copy | exported 177 images
copy | importing 177 images...

6.77 GiB / 6.77 GiB [------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------] 100.00% 113.77 MiB p/s
copy |
copy | done uploading images
copy | Warning: Skipped the followings layer(s) due to it being non-distributable. If you would like to include non-distributable layers, use the --include-non-distributable-layers flag
copy |  - Image: it-tkg-harbor.terasky.demo/tap/tap-packages@sha256:58e3e097dca779281ce4fe2838fee6f2bb3edba8723b0178ad9520ee1c82e4d0
copy |    Layers:
copy |      - sha256:3a78847ea829208edc2d7b320b7e602b9d12e47689499d5180a9cc7790dec4d7
copy |  - Image: it-tkg-harbor.terasky.demo/tap/tap-packages@sha256:80f4981585dc5331386077bb8fb8a3eef53de591c6c70ec44990ed17bfcd6c9c
copy |    Layers:
copy |      - sha256:3a78847ea829208edc2d7b320b7e602b9d12e47689499d5180a9cc7790dec4d7
copy | Tagging images

Succeeded

Next, we have to configure the appropriate permissions on the image registry. For this tutorial, I’ll be following the least privilege approach everywhere. We will need two types of registry credentials:

  • Pulling images – all Kubernetes clusters will require registry credentials for pulling TAP images.
  • Pushing images – Tanzu Build Service (TBS) will require registry credentials for pushing the container images it builds.

In my environment, I created the following:

  • An Active Directory user named tap-harbor-pull associated with an Active Directory group named tap-harbor-pull-users. This group has the Limited Guest role on the tap repository, so this group of users can only pull images from Harbor.
  • An Active Directory user named tap-harbor-tbs associated with an Active Directory group named tap-harbor-tbs-users. This group has the Maintainer role on the tap repository, so this group of users can also push images into Harbor.

If you also use LDAP users on Harbor, make sure they have unique email addresses set. Check out my post “Harbor Registry: is your LDAP user unique?” for more information.

Screenshot

If your TAP repository is public and you use an administrative account for registry authentication, you do not need to configure the above permissions. However, it is generally recommended to follow the least privilege approach whenever possible, especially in production.
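
If you want to sanity-check these permissions, one quick test is to log in as the pull-only user and confirm that pulling works while pushing is denied. A minimal sketch, assuming my user naming and the variables defined earlier:

# Log in as the pull-only user (e.g., tap-harbor-pull); you will be prompted for the password
docker login "$PRIVATE_REGISTRY_HOSTNAME" -u tap-harbor-pull

# Pulling should succeed
docker pull "$PRIVATE_REGISTRY_HOSTNAME/$TAP_REPO/tap-packages:$TAP_VERSION"

# Pushing should be denied for this user
docker tag "$PRIVATE_REGISTRY_HOSTNAME/$TAP_REPO/tap-packages:$TAP_VERSION" "$PRIVATE_REGISTRY_HOSTNAME/$TAP_REPO/push-test:denied"
docker push "$PRIVATE_REGISTRY_HOSTNAME/$TAP_REPO/push-test:denied"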

Install TAP

You should now be set to install TAP.

View Cluster

Set up the Installation Namespace

The installation namespace will contain the TAP package repository, the TAP package, and the secret containing the registry credentials.

Make sure you run the following from the multicluster folder.

Create the tap-install namespace.

kubectl create ns tap-install

Create the tap-registry secret using your registry FQDN and credentials for pulling images (e.g., tap-harbor-pull). The following command will also create a SecretExport resource, which enables us to import these credentials into other namespaces by “cloning” this secret from the installation namespace.

tanzu secret registry add tap-registry \
--server "$PRIVATE_REGISTRY_HOSTNAME" \
--username "$PRIVATE_REGISTRY_USERNAME" \
--password "$PRIVATE_REGISTRY_PASSWORD" \
--export-to-all-namespaces --yes --namespace tap-install
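
Behind the scenes, the --export-to-all-namespaces flag creates a SecretExport resource (from Carvel’s secretgen-controller, which ships with TKG/Cluster Essentials) alongside the secret. You can verify both were created:

kubectl get secrets,secretexports.secretgen.carvel.dev -n tap-install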

Create the TAP package repository and ensure it reconciles successfully.

tanzu package repository add tanzu-tap-repository \
--url "$PRIVATE_REGISTRY_HOSTNAME/$TAP_REPO/tap-packages:$TAP_VERSION" \
--namespace tap-install

Example output:

 Adding package repository 'tanzu-tap-repository'

 Validating provided settings for the package repository

 Creating package repository resource

 Waiting for 'PackageRepository' reconciliation for 'tanzu-tap-repository'

 'PackageRepository' resource install status: Reconciling

 'PackageRepository' resource install status: ReconcileSucceeded

 'PackageRepository' resource successfully reconciled

Added package repository 'tanzu-tap-repository' in namespace 'tap-install'

kapp-controller must trust your CA certificates for the above to work. If you need to add your certificates to kapp-controller, refer to my post on Getting kapp-controller to trust your CA certificates in TKG.

Ensure the TAP packages are now available.

tanzu package available list -n tap-install

Issue a TLS Certificate for TAP GUI

By default, TAP GUI is exposed externally using HTTP. To use HTTPS, you must provide a valid TLS certificate for the TAP GUI hostname (e.g., tap-gui.it-tap.terasky.demo). As mentioned in the prerequisites section, I have a wildcard certificate for my TAP environment, so I’m also using it for TAP GUI.

Create the tap-gui namespace.

kubectl create ns tap-gui

Modify the view-cluster/tap-gui-tls-cert.yaml manifest using your certificate, then apply it.

kubectl apply -f view-cluster/tap-gui-tls-cert.yaml
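
For reference, the manifest is essentially a standard kubernetes.io/tls secret in the tap-gui namespace. If you prefer, you can create it imperatively from your certificate and key files instead; just make sure the secret name matches the one referenced in your tap-values.yaml under tap_gui.tls. A sketch, with tap-gui-cert as a placeholder name:

kubectl create secret tls tap-gui-cert \
--cert tls.crt \
--key tls.key \
-n tap-gui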

Set up a Database for TAP GUI

We have to set up a database for TAP GUI. I currently recommend using Postgres. If you have a license for Tanzu Postgres, it would be a great idea to use it here. You can also use Bitnami’s Postgres Helm chart. For this tutorial, I’ll use the open-source Postgres Operator to deploy my Postgres database on the View cluster. This way, TAP GUI can communicate with the database locally within Kubernetes without exposing the database externally.

Ensure you run the following from the multicluster/view-cluster/postgres-operator directory.

Add the Postgres Operator Helm repository and pull the Helm chart locally.

# Add Helm repository
helm repo add postgres-operator https://opensource.zalando.com/postgres-operator/charts/postgres-operator
helm repo update

# Version-pinning to prevent breaking changes
CHART_VERSION="1.8.2"
helm pull postgres-operator/postgres-operator --untar --version "$CHART_VERSION" --destination /tmp

# Add additional image references we'll need to the chart's values.yaml file
VALUES_FILE=/tmp/postgres-operator/values.yaml
yq -i '.relok8s_custom.busybox.image = "busybox:latest"' "$VALUES_FILE"
yq -i '.relok8s_custom.postgres.image = "postgres:14"' "$VALUES_FILE"

We will now use the relok8s CLI to relocate the Postgres Operator images to your TAP repository on the private registry. relok8s will also create a modified copy of the chart’s values.yaml file containing the paths of the relocated images in your private registry.

Ensure your machine trusts your private registry’s CA certificates as relok8s requires. You can use the following commands to add your certificates:

REG_CA_CERT=$(cat <<EOF
-----BEGIN CERTIFICATE-----
MIIFljCCBH6gAwIBAgITUwAAAUq.... # Your CA certificates/chain
-----END CERTIFICATE-----
EOF
)

echo "$REG_CA_CERT" | sudo tee /usr/local/share/ca-certificates/ca-cert.crt > /dev/null
sudo update-ca-certificates

Relocate the Postgres Operator images to your registry and extract the resulting Helm chart.

relok8s chart move /tmp/postgres-operator \
-i relok8s-image-hints.yaml \
--repo-prefix "$TAP_REPO/postgres-operator" \
--registry "$PRIVATE_REGISTRY_HOSTNAME" \
--out *.tgz -y

tar -zxf postgres-operator-*.tgz

# Cleanup leftovers
rm -rf /tmp/postgres-operator
rm -f postgres-operator-*.tgz

Example output:

Computing relocation...

Image copies:
 registry.opensource.zalan.do/acid/postgres-operator:v1.8.2 => it-tkg-harbor.terasky.demo/tap/postgres-operator/postgres-operator:v1.8.2 (sha256:5bac4b7a0d150dce71581dcdfc93775cef4a5c2f9205e38b64c673aef53426c2)
 registry.opensource.zalan.do/acid/spilo-14:2.1-p6 => it-tkg-harbor.terasky.demo/tap/postgres-operator/spilo-14@sha256:8d79a3fdd0dee672e369b2907c9d0b321792bc49682c7ef777a18570257253f0 (sha256:8d79a3fdd0dee672e369b2907c9d0b321792bc49682c7ef777a18570257253f0)
 registry.opensource.zalan.do/acid/logical-backup:v1.8.0 => it-tkg-harbor.terasky.demo/tap/postgres-operator/logical-backup@sha256:e9e6ebab415b7331b327438581c6431845030f94a0729c3663cdedf9e8b8461e (sha256:e9e6ebab415b7331b327438581c6431845030f94a0729c3663cdedf9e8b8461e)
 registry.opensource.zalan.do/acid/pgbouncer:master-22 => it-tkg-harbor.terasky.demo/tap/postgres-operator/pgbouncer@sha256:eeb694b3089b735ece41f746c21c4725bd969553d7812daa6f3d827a38158393 (sha256:eeb694b3089b735ece41f746c21c4725bd969553d7812daa6f3d827a38158393)
 index.docker.io/library/busybox:latest => it-tkg-harbor.terasky.demo/tap/postgres-operator/busybox@sha256:98de1ad411c6d08e50f26f392f3bc6cd65f686469b7c22a85c7b5fb1b820c154 (sha256:98de1ad411c6d08e50f26f392f3bc6cd65f686469b7c22a85c7b5fb1b820c154)
 index.docker.io/library/postgres:14 => it-tkg-harbor.terasky.demo/tap/postgres-operator/postgres@sha256:8d46fa657b46fb96a707b3dff90ff95014476874a96389f0370c1c2a2846f249 (sha256:8d46fa657b46fb96a707b3dff90ff95014476874a96389f0370c1c2a2846f249)

Changes to be applied to postgres-operator/values.yaml:
  .image.registry: it-tkg-harbor.terasky.demo
  .image.repository: tap/postgres-operator/postgres-operator
  .configGeneral.docker_image: it-tkg-harbor.terasky.demo/tap/postgres-operator/spilo-14@sha256:8d79a3fdd0dee672e369b2907c9d0b321792bc49682c7ef777a18570257253f0
  .configLogicalBackup.logical_backup_docker_image: it-tkg-harbor.terasky.demo/tap/postgres-operator/logical-backup@sha256:e9e6ebab415b7331b327438581c6431845030f94a0729c3663cdedf9e8b8461e
  .configConnectionPooler.connection_pooler_image: it-tkg-harbor.terasky.demo/tap/postgres-operator/pgbouncer@sha256:eeb694b3089b735ece41f746c21c4725bd969553d7812daa6f3d827a38158393
  .relok8s_custom.busybox.image: it-tkg-harbor.terasky.demo/tap/postgres-operator/busybox@sha256:98de1ad411c6d08e50f26f392f3bc6cd65f686469b7c22a85c7b5fb1b820c154
  .relok8s_custom.postgres.image: it-tkg-harbor.terasky.demo/tap/postgres-operator/postgres@sha256:8d46fa657b46fb96a707b3dff90ff95014476874a96389f0370c1c2a2846f249

Relocating postgres-operator@1.8.2...
Done moving postgres-operator-1.8.2.tgz

Create the postgres-system namespace.

kubectl create ns postgres-system

Create the tap-registry secret in the postgres-system namespace. This is the secret containing the registry credentials for pulling images from the TAP repository.

kubectl apply -f tap-registry-creds.yaml

Install the Helm chart.

helm upgrade -i postgres-operator postgres-operator -n postgres-system --create-namespace --set "imagePullSecrets[0].name=tap-registry"

Ensure the Postgres Operator pod is running.

kubectl get pod -n postgres-system

Example output:

NAME                                 READY   STATUS    RESTARTS   AGE
postgres-operator-66565667d6-k5grr   1/1     Running   0          78s

Create the tap-gui-db namespace, then deploy the database.

kubectl create ns tap-gui-db
kubectl apply -f tap-gui-postgres-db-cls.yaml

Monitor the status of the database and wait for it to deploy.

kubectl get pods,postgresqls.acid.zalan.do -n tap-gui-db
NAME                   READY   STATUS    RESTARTS   AGE
pod/tap-gui-db-cls-0   1/1     Running   0          13s

NAME                                      TEAM         VERSION   PODS   VOLUME   CPU-REQUEST   MEMORY-REQUEST   AGE   STATUS
postgresql.acid.zalan.do/tap-gui-db-cls   tap-gui-db   14        1      50Gi     10m           100Mi            13s   Running

Get the database password.

export POSTGRES_PASSWORD=$(kubectl get secret postgres.tap-gui-db-cls.credentials.postgresql.acid.zalan.do -n tap-gui-db -o 'jsonpath={.data.password}' | base64 -d)
echo "$POSTGRES_PASSWORD"

Optionally, use postgres-psql-test.yaml to deploy a test pod to validate the database connectivity.

# Define private registry hostname
export PRIVATE_REGISTRY_HOSTNAME='your-private-registry-hostname'
envsubst < postgres-psql-test.yaml | kubectl apply -f -

The envsubst command above renders postgres-psql-test.yaml, substituting the environment variables we defined (POSTGRES_PASSWORD and PRIVATE_REGISTRY_HOSTNAME). kubectl then applies the resulting manifest.

Inspect the logs of the test pod.

kubectl logs -l app=postgres-psql-test -n tap-gui-db

If you get the following output, the connection is successful.

   Name    |     Owner      | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------------+----------+-------------+-------------+-----------------------
 postgres  | postgres_owner | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres       | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |                |          |             |             | postgres=CTc/postgres
 template1 | postgres       | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |                |          |             |             | postgres=CTc/postgres
(4 rows)

Finally, clean up the test pod.

kubectl delete -f postgres-psql-test.yaml

Modify the multicluster/view-cluster/tap-values.yaml file and set the database password under tap_gui.app_config.backend.database.connection.password.

For example:

Screenshot
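
Since yq is already installed, you can also script this edit instead of opening an editor. A sketch, assuming the key path already exists in the sample file; adjust the file path if you are not at the repository root:

export POSTGRES_PASSWORD
yq -i '.tap_gui.app_config.backend.database.connection.password = strenv(POSTGRES_PASSWORD)' multicluster/view-cluster/tap-values.yaml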

Set up the TAP GUI Catalog Git Repository

The TAP GUI catalog git repository stores the application catalog files for TAP GUI.

Download the blank catalog bundle from Tanzu Network.

# Create a temporary directory
TANZU_TMP_DIR=/tmp/tanzu
mkdir -p "$TANZU_TMP_DIR"

# Define variables
TAP_VERSION=1.2.1
PRODUCT_FILE_ID=1099786

# Download bundle
pivnet download-product-files \
--product-slug="tanzu-application-platform" \
--release-version="$TAP_VERSION" \
--product-file-id="$PRODUCT_FILE_ID" \
--download-dir="$TANZU_TMP_DIR"

Example output:

2022/08/24 07:02:08 Downloading 'tap-gui-blank-catalog.tgz' to '/tmp/tanzu/tap-gui-blank-catalog.tgz'
 25.50 KiB / 25.50 KiB [============================================] 100.00% 0s
2022/08/24 07:02:09 Verifying SHA256
2022/08/24 07:02:09 Successfully verified SHA256

Create a Git repository. I created mine on GitHub, named it tap-gui-catalog, and set it to private.

Screenshot

Clone your repository. For example:

git clone https://github.com/itaytalmi/tap-gui-catalog.git

Extract the bundle into the repository directory. For example:

tar xf "$TANZU_TMP_DIR/tap-gui-blank-catalog.tgz" -C ~/git/tap-gui-catalog

Commit and push your changes.
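
For example, using the same clone path as above:

cd ~/git/tap-gui-catalog
git add .
git commit -m "Add TAP GUI blank catalog"
git push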

Example output:

Screenshot

On GitHub, you should now see the blank catalog files in your repository under the blank directory.

Screenshot

Create a personal access token on GitHub for TAP to connect to this repository. On GitHub, hover over your profile at the top right, go to Settings > Developer Settings > Personal access tokens > Generate new token.

Enter a note for the token (e.g., tap-gui), expiration (I selected No expiration), and select the following scopes:

repo
workflow
read:org
read:user
user:email

Screenshot

Screenshot

Generate the token and keep it somewhere safe.

Screenshot

Modify the multicluster/view-cluster/tap-values.yaml file and set the GitHub token under tap_gui.app_config.integrations.github[].token. Also, set the URL to the catalog-info.yaml file from your tap-gui-catalog Git repository (e.g., https://github.com/itaytalmi/tap-gui-catalog/blob/main/blank/catalog-info.yaml) under tap_gui.app_config.catalog.locations[].target.

For example:

Screenshot
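
If you prefer to script these edits, the same yq approach works here as well. A sketch, assuming the integrations and catalog entries already exist in the sample tap-values.yaml; the token and URL values are placeholders for yours:

export GITHUB_TOKEN='your_github_token'
export CATALOG_URL='https://github.com/your-org/tap-gui-catalog/blob/main/blank/catalog-info.yaml'

yq -i '.tap_gui.app_config.integrations.github[0].token = strenv(GITHUB_TOKEN)' multicluster/view-cluster/tap-values.yaml
yq -i '.tap_gui.app_config.catalog.locations[0].target = strenv(CATALOG_URL)' multicluster/view-cluster/tap-values.yaml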

Set up RBAC for the Metadata Store

To see detailed vulnerability information and scanning results of your supply chains on TAP GUI, you must configure permissions and a token for the metadata store component.

Create the metadata-store namespace.

kubectl create ns metadata-store

Apply the view-cluster/tap-gui-metadata-store-rbac.yaml manifest for the service account and its permissions.

kubectl apply -f view-cluster/tap-gui-metadata-store-rbac.yaml

Extract the service account token.

kubectl get secret $(kubectl get sa -n metadata-store metadata-store-read-client -o json | jq -r '.secrets[0].name') -n metadata-store -o json | jq -r '.data.token' | base64 -d

Copy the extracted token, modify the multicluster/view-cluster/tap-values.yaml file and set the token under tap_gui.app_config.proxy./metadata_store.headers.Authorization.

For example:

Screenshot
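
Note that the header value is the word Bearer followed by a space and the token. Scripted with yq, for example (the /metadata_store key must be quoted because of the leading slash):

export METADATA_STORE_TOKEN='your_extracted_token'

yq -i '.tap_gui.app_config.proxy."/metadata_store".headers.Authorization = "Bearer " + strenv(METADATA_STORE_TOKEN)' multicluster/view-cluster/tap-values.yaml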

Set up an Authentication Provider for TAP GUI

By default, TAP GUI logs you in as a guest user. However, you can (and should) set up an authentication provider for your organization’s users. I use a GitHub-based OAuth App.

To configure an OAuth App on GitHub, go to the Developer Settings page, select OAuth Apps, then Register a new application.

Screenshot

Provide the required information.

  • Application Name: e.g., TAP GUI
  • Homepage URL: Typically, your TAP GUI FQDN. e.g., https://tap-gui.it-tap.terasky.demo
  • Application Description: e.g., Tanzu Application Platform
  • Authorization callback URL: e.g., https://tap-gui.it-tap.terasky.demo/api/auth/github

Then click Register application.

Screenshot

Click Generate a new client secret, then copy the generated secret and the client ID.

Screenshot

Optionally, upload a logo for your application. You can use the view-cluster/tanzu-logo.png image for this.

Screenshot

Set your client ID and client secret in the multicluster/view-cluster/tap-values.yaml file under tap_gui.app_config.auth.providers.github.tap_gui.clientId and tap_gui.app_config.auth.providers.github.tap_gui.clientSecret accordingly.

Screenshot

You can set the environment parameter to an actual environment name (dev, qa, etc.) if you have different authentication providers for different environments. Set allowGuestAccess to false if you want to forbid guest access to TAP GUI.

Set up RBAC for the Build, Run and Iterate Clusters

You must set up RBAC on the Build, Run and Iterate clusters so the View cluster can query their data.

Perform the following steps on the Build, Run and Iterate clusters.

Apply the common/tap-gui-viewer-service-account-rbac.yaml manifest to create the service account and its permissions.

kubectl apply -f common/tap-gui-viewer-service-account-rbac.yaml

Get the cluster URL, service account token, and CA certificates.

CLUSTER_URL=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')

CLUSTER_TOKEN=$(kubectl -n tap-gui get secret $(kubectl -n tap-gui get sa tap-gui-viewer -o=json | jq -r '.secrets[0].name') -o=json | jq -r '.data["token"]' | base64 --decode)

echo ""
echo "CLUSTER_URL: $CLUSTER_URL"
echo ""
echo "CLUSTER_TOKEN: $CLUSTER_TOKEN"
echo ""

Example output:

CLUSTER_URL: https://10.100.154.13:6443

CLUSTER_TOKEN: eyJhbGciOiJSUzI1NiIsImtpZCI6Im8tdkowZXhrNnZKT2E...........

Add each cluster’s information to the multicluster/view-cluster/tap-values.yaml file under tap_gui.app_config.kubernetes.clusterLocatorMethods[].clusters.

For example:

Screenshot
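
You can also append each cluster entry with yq, reusing the CLUSTER_URL and CLUSTER_TOKEN variables captured above. This is a sketch: the field names follow the Backstage Kubernetes plugin convention used by TAP GUI, the cluster name is just a display label, and you should set skipTLSVerify according to your environment:

export CLUSTER_NAME=build-cluster
export CLUSTER_URL CLUSTER_TOKEN

yq -i '.tap_gui.app_config.kubernetes.clusterLocatorMethods[0].clusters += [{"name": strenv(CLUSTER_NAME), "url": strenv(CLUSTER_URL), "authProvider": "serviceAccount", "serviceAccountToken": strenv(CLUSTER_TOKEN), "skipTLSVerify": true}]' multicluster/view-cluster/tap-values.yaml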

Set an Ingress Domain, TAP GUI Hostname and CA Certificate

This section will finalize our configuration.

In the multicluster/view-cluster/tap-values.yaml file under shared.ingress_domain, set your ingress domain for TAP (e.g., it-tap.terasky.demo).

Under shared.ca_cert_data, set your CA certificates/chain.

Screenshot

Set your TAP GUI hostname under tap_gui.app_config.app.baseUrl, tap_gui.app_config.backend.baseUrl, and tap_gui.app_config.backend.cors.origin.

For example:

Screenshot

Important note regarding DNS: I have External DNS deployed on all my clusters. External DNS will create all DNS records as ingress/HTTPProxy resources are deployed on the clusters. Due to the dynamic nature of the TAP environment, it is highly recommended that you deploy External DNS, especially on the Run clusters, where actual workloads will be deployed. However, if you prefer, you can manually create DNS records (or wildcard DNS records for each DNS zone).

If you wish to create your DNS records manually, you will have to extract the Envoy load balancer IP address after the deployment of the TAP package using the following command:

ENVOY_LB_IP=$(kubectl get svc -n tanzu-system-ingress envoy -o jsonpath='{.status.loadBalancer.ingress[].ip}')
echo "$ENVOY_LB_IP"

Then, you can use the following PowerShell commands to create the records:

# Define variables
$DNSServer = "demo-dc-01.terasky.demo"
$DNSZone = "terasky.demo"
$DNSSubDomainRecord = "it-tap"
$EnvoyLBIP = "10.100.154.18"
$NetworkCIDR = "10.100.154.0/24"
$Hostnames = @("appliveview", "metadata-store", "tap-gui")
# Note: if you wish to create a wildcard record, set the Hostnames variable to @("*")

# Create Reverse Lookup Zone
Add-DnsServerPrimaryZone -NetworkId $NetworkCIDR -ReplicationScope "Domain"

# Create DNS Records
$Hostnames | ForEach-Object -Process {
  Add-DnsServerResourceRecord -ComputerName $DNSServer -A -ZoneName $DNSZone -Name "$_.$DNSSubDomainRecord" -IPv4Address $EnvoyLBIP -CreatePtr
}

Deploy the TAP Package

Now that you’ve completed the configuration of the tap-values.yaml file for the View cluster, it’s time to deploy the TAP package.

# Define TAP version
TAP_VERSION=1.2.1

tanzu package install tap \
-n tap-install \
-p tap.tanzu.vmware.com \
-v "$TAP_VERSION" \
--values-file view-cluster/tap-values.yaml

Example output:

 Installing package 'tap.tanzu.vmware.com'

 Getting package metadata for 'tap.tanzu.vmware.com'

 Creating service account 'tap-tap-install-sa'

 Creating cluster admin role 'tap-tap-install-cluster-role'

 Creating cluster role binding 'tap-tap-install-cluster-rolebinding'

 Creating secret 'tap-tap-install-values'

 Creating package resource

 Waiting for 'PackageInstall' reconciliation for 'tap'

 'PackageInstall' resource install status: Reconciling

 'PackageInstall' resource install status: ReconcileSucceeded

 'PackageInstall' resource successfully reconciled

 Added installed package 'tap'

You can view the installed packages.

kubectl get app -n tap-install

Example output:

NAME                       DESCRIPTION           SINCE-DEPLOY   AGE
accelerator                Reconcile succeeded   9m35s          5m6s
api-portal                 Reconcile succeeded   9m26s          7m21s
appliveview                Reconcile succeeded   6m             5m7s
cert-manager               Reconcile succeeded   2m39s          7m22s
contour                    Reconcile succeeded   6m23s          6m16s
fluxcd-source-controller   Reconcile succeeded   8m15s          7m22s
learningcenter             Reconcile succeeded   47s            5m6s
learningcenter-workshops   Reconcile succeeded   8m49s          70s
metadata-store             Reconcile succeeded   6m16s          5m7s
source-controller          Reconcile succeeded   6m6s           6m16s
tap                        Reconcile succeeded   6m10s          7m30s
tap-gui                    Reconcile succeeded   92s            5m7s
tap-telemetry              Reconcile succeeded   8m39s          7m22s

Since External DNS is deployed on my cluster, as I mentioned, I can also see the DNS records it created on my DNS server.

Screenshot

Browse to TAP GUI from a web browser to ensure it is accessible.

Screenshot
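
You can also verify reachability from the CLI (replace the hostname with your TAP GUI FQDN; -k skips certificate verification in case your CA is not in the local trust store):

curl -ks -o /dev/null -w '%{http_code}\n' https://tap-gui.it-tap.terasky.demo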

You can also test your authentication provider, which is GitHub in this case.

Screenshot

Screenshot

Screenshot

And I am now logged in using my GitHub credentials. :)

Screenshot

Build Cluster

The Build cluster is typically the cluster Tanzu Build Service (TBS) resides on, where your container images are built.

There are different types of supply chains that can be used on the Build cluster. I use the testing and scanning supply chain, which is the type I believe should be used in a production-grade deployment of TAP.

Set up the Installation Namespace

The installation namespace will contain the TAP package repository, the TAP package, and the secret containing the registry credentials.

Set these up the same way you did for the View cluster.

Make sure you run the following from the multicluster folder.

Create the tap-install namespace.

kubectl create ns tap-install

Create the tap-registry secret using your registry FQDN and credentials for pulling images (e.g., tap-harbor-pull). The following command will also create a SecretExport resource, which enables us to import these credentials into other namespaces by “cloning” this secret from the installation namespace.

tanzu secret registry add tap-registry \
--server "$PRIVATE_REGISTRY_HOSTNAME" \
--username "$PRIVATE_REGISTRY_USERNAME" \
--password "$PRIVATE_REGISTRY_PASSWORD" \
--export-to-all-namespaces --yes --namespace tap-install

Create the TAP package repository and ensure it reconciles successfully.

tanzu package repository add tanzu-tap-repository \
--url "$PRIVATE_REGISTRY_HOSTNAME/$TAP_REPO/tap-packages:$TAP_VERSION" \
--namespace tap-install

Apply the build-cluster/overlay-cert-injection-webhook-scantemplates.yaml manifest. This is an overlay file I created as a workaround for an issue preventing Grype (used as the image scanner on TAP) from connecting to a private registry using a self-signed certificate. You can skip this step if your registry is configured with a public certificate. Note: if you skip this step, you will also have to remove the overlay reference from your tap-values.yaml file under the package_overlays section.

kubectl apply -f build-cluster/overlay-cert-injection-webhook-scantemplates.yaml

Ensure the TAP packages are now available.

tanzu package available list -n tap-install

Set up Metadata Store Authentication and CA Certificate

We have to create the store-ca-cert and the store-auth-token secrets on the Build cluster for Grype to connect successfully to the metadata store ingress residing on the View cluster.

Get the metadata store ingress CA certificate and service account token from the View cluster.

# CA certificate
export CA_CERT=$(kubectl get secret -n metadata-store ingress-cert -o jsonpath='{.data.ca\.crt}')
echo "$CA_CERT"

# Auth token
export AUTH_TOKEN=$(kubectl get secrets -n metadata-store -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='metadata-store-read-write-client')].data.token}")
echo "$AUTH_TOKEN"

Create the metadata-store-secrets namespace on the Build cluster.

kubectl create ns metadata-store-secrets

Apply the build-cluster/metadata-store-secrets.yaml manifest.

envsubst < build-cluster/metadata-store-secrets.yaml | kubectl apply -f -

Prepare a Sample Source Code Git Repository

We will need a sample Git repository with some source code to test TBS. I recommend cloning the sample repository from https://github.com/sample-accelerators/tanzu-java-web-app.git. On GitHub, there is an import option. I will import the sample repository into a new private repository for this example. If you also use GitHub, you can follow along.

Go to your main GitHub page and click the New button.

Screenshot

Select the Import a repository option.

Screenshot

For the Clone URL field, enter https://github.com/sample-accelerators/tanzu-java-web-app.git. Enter a name for the new repository. For example, tanzu-java-web-app. Set the repository to private. Then click Begin import.

Screenshot

Wait for the import to complete.

Screenshot

Update the TAP Values File

You must update the tap-values.yaml file for the Build cluster to reflect your environment configuration.

In the build-cluster/tap-values.yaml file under shared.ca_cert_data, set your CA certificates/chain.

Screenshot

In the buildservice section, set the following:

  • kp_default_repository – a writable repository in your registry. Tanzu Build Service dependencies are written to this location. If you follow the same naming convention I use throughout this tutorial, you only need to update the registry hostname. For example: it-tkg-harbor.terasky.demo/tap/build-service
  • kp_default_repository_username – the username that can write and push images to kp_default_repository. As mentioned before, I have a service account named tap-harbor-tbs that has Maintainer access to the tap repository on Harbor.
  • kp_default_repository_password – the password for kp_default_repository_username.

Screenshot

In the ootb_supply_chain_testing_scanning section, under registry, you specify the registry and repository for the images built by TBS. I created a dedicated repository on Harbor for storing images built by TBS. Since you may require multiple repositories in the future, creating a repository per team/purpose may be a good idea. For this example, I created a repository named dev-team-01 on Harbor, and my Active Directory group tap-dev-team-01 has Maintainer access to that repository.

Screenshot
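
The two keys under registry are the registry server and the repository name. Scripted with yq, for example (a sketch using my hostnames and repository names; adjust to yours):

yq -i '.ootb_supply_chain_testing_scanning.registry.server = "it-tkg-harbor.terasky.demo"' build-cluster/tap-values.yaml
yq -i '.ootb_supply_chain_testing_scanning.registry.repository = "dev-team-01"' build-cluster/tap-values.yaml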

In the grype section, update the metadataStore.url parameter using the ingress hostname of your metadata store from the View cluster. If you don’t know the hostname, you can run kubectl get httpproxy -n metadata-store and copy the hostname from the output.

Screenshot

Deploy the TAP Package

Now that you’ve completed the configuration of the tap-values.yaml file for the Build cluster, it’s time to deploy the TAP package.

# Define TAP version
TAP_VERSION=1.2.1

tanzu package install tap \
-n tap-install \
-p tap.tanzu.vmware.com \
-v "$TAP_VERSION" \
--values-file build-cluster/tap-values.yaml

Example output:

 Installing package 'tap.tanzu.vmware.com'

 Getting package metadata for 'tap.tanzu.vmware.com'

 Creating service account 'tap-tap-install-sa'

 Creating cluster admin role 'tap-tap-install-cluster-role'

 Creating cluster role binding 'tap-tap-install-cluster-rolebinding'

 Creating secret 'tap-tap-install-values'

 Creating package resource

 Waiting for 'PackageInstall' reconciliation for 'tap'

 'PackageInstall' resource install status: Reconciling

 'PackageInstall' resource install status: ReconcileSucceeded

 'PackageInstall' resource successfully reconciled

 Added installed package 'tap'

You can view the installed packages.

kubectl get app -n tap-install

Example output:

NAME                                 DESCRIPTION           SINCE-DEPLOY   AGE
appliveview-conventions              Reconcile succeeded   10m            2m10s
cartographer                         Reconcile succeeded   6m23s          3m30s
cert-manager                         Reconcile succeeded   3m23s          4m42s
contour                              Reconcile succeeded   9m30s          3m31s
conventions-controller               Reconcile succeeded   4m7s           3m31s
fluxcd-source-controller             Reconcile succeeded   4m17s          4m41s
full-tbs-deps                        Reconcile succeeded   6m29s          4m8s
grype                                Reconcile succeeded   10s            40s
ootb-supply-chain-testing-scanning   Reconcile succeeded   16s            84s
ootb-templates                       Reconcile succeeded   3s             4m42s
scanning                             Reconcile succeeded   3m45s          3m30s
source-controller                    Reconcile succeeded   2m7s           2m10s
spring-boot-conventions              Reconcile succeeded   10m            4m51s
tap                                  Reconcile succeeded   3m34s          4m41s
tap-auth                             Reconcile succeeded   32s            4m41s
tap-telemetry                        Reconcile succeeded   3s             4m41s
tekton-pipelines                     Reconcile succeeded   7m16s          4m42s

Deploy the TBS Full Dependencies Package

Get the latest available version of the buildservice package.

TBS_FULL_DEPS_PKG_NAME=buildservice.tanzu.vmware.com
TBS_FULL_DEPS_PKG_VERSIONS=($(tanzu package available list "$TBS_FULL_DEPS_PKG_NAME" -n tap-install -o json | jq -r ".[].version" | sort -t "." -k1,1n -k2,2n -k3,3n))
TBS_FULL_DEPS_PKG_VERSION=${TBS_FULL_DEPS_PKG_VERSIONS[-1]}
echo "$TBS_FULL_DEPS_PKG_VERSION"

Relocate the TBS full dependencies package to your registry.

imgpkg copy --registry-verify-certs=false \
-b "registry.tanzu.vmware.com/tanzu-application-platform/full-tbs-deps-package-repo:$TBS_FULL_DEPS_PKG_VERSION" \
--to-repo "$PRIVATE_REGISTRY_HOSTNAME/$TAP_REPO/tbs-full-deps"

Example output:

\
copy | exporting 18 images...
copy | will export registry.tanzu.vmware.com/tanzu-application-platform/full-tbs-deps-package-repo@sha256:241fb42142361ba734c21197808c833643b0853a9698abfa383777f010fb6d2a
copy | will export registry.tanzu.vmware.com/tanzu-application-platform/full-tbs-deps-package-repo@sha256:2e07f819720cb3365414524e596a0eeba213ffcc078ee0e48353e73e3efa425d
...
copy | exported 18 images
copy | importing 18 images...

7.13 GiB / 7.13 GiB [------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------] 100.00% 749.05 MiB p/s
copy |
copy | done uploading images
copy | Tagging images

Succeeded

Create the TBS Full Dependencies package repository and ensure it reconciles successfully.

tanzu package repository add tbs-full-deps-repository \
--url "$PRIVATE_REGISTRY_HOSTNAME/$TAP_REPO/tbs-full-deps:$TBS_FULL_DEPS_PKG_VERSION" \
--namespace tap-install

Example output:

 Adding package repository 'tbs-full-deps-repository'

 Validating provided settings for the package repository

 Creating package repository resource

 Waiting for 'PackageRepository' reconciliation for 'tbs-full-deps-repository'

 'PackageRepository' resource install status: Reconciling

 'PackageRepository' resource install status: ReconcileSucceeded

Added package repository 'tbs-full-deps-repository' in namespace 'tap-install'

Install the TBS Full Dependencies package.

tanzu package install full-tbs-deps \
-n tap-install \
-p full-tbs-deps.tanzu.vmware.com \
-v "$TBS_FULL_DEPS_PKG_VERSION"

Example output:

 Installing package 'full-tbs-deps.tanzu.vmware.com'

 Getting package metadata for 'full-tbs-deps.tanzu.vmware.com'

 Creating service account 'full-tbs-deps-tap-install-sa'

 Creating cluster admin role 'full-tbs-deps-tap-install-cluster-role'

 Creating cluster role binding 'full-tbs-deps-tap-install-cluster-rolebinding'

 Creating package resource

 Waiting for 'PackageInstall' reconciliation for 'full-tbs-deps'

 'PackageInstall' resource install status: Reconciling

 'PackageInstall' resource install status: ReconcileSucceeded

 Added installed package 'full-tbs-deps'

If you look at the Tanzu Dependency Updater logs, you can see that the Build Service images are being pushed to your TAP repository.

kubectl logs -l app=dependency-updater -n build-service

Screenshot

The Tanzu Dependency Updater uses the credentials you specified in your tap-values.yaml file for the buildservice section.

In this case, the tap-harbor-tbs user has Maintainer access to my TAP repository so that it can push images, as mentioned before.

Screenshot

Set up the Developer Namespace and Deploy a Workload

Before we can run a workload on the Build cluster, we have to set up the developer namespace.

In your tap-values.yaml file for the Build cluster, under the ootb_supply_chain_testing_scanning section, you specified the registry and repository where TBS can store the images it builds (e.g., dev-team-01).

Screenshot

If you use the Harbor registry, you can refer to the following commands to create your repository.

# Define variables
TAP_DEVELOPER_PROJECT=dev-team-01
PRIVATE_REGISTRY_HOSTNAME=your-private-registry-fqdn
PRIVATE_REGISTRY_USERNAME=your-private-registry-username
PRIVATE_REGISTRY_PASSWORD=your-private-registry-password

curl -k -H "Content-Type: application/json" \
-u "$PRIVATE_REGISTRY_USERNAME:$PRIVATE_REGISTRY_PASSWORD" \
-X POST "https://$PRIVATE_REGISTRY_HOSTNAME/api/v2.0/projects" \
-d '{"project_name": '\"${TAP_DEVELOPER_PROJECT}\"', "public": false}'

Your developer group (e.g., tap-dev-team-01) requires Maintainer access to that repository.

Screenshot

Your developer group also requires Limited Guest access to your TAP repository to pull images.

Screenshot

Create the developer namespace. For example:

kubectl create ns dev-team-01

Create a secret containing the developer’s registry credentials. For example:

# Define variables
PRIVATE_REGISTRY_HOSTNAME=your-private-registry-fqdn
TAP_DEVELOPER_USERNAME=your-tap-developer-username
TAP_DEVELOPER_PASSWORD=your-tap-developer-password

tanzu secret registry add registry-credentials \
--server "$PRIVATE_REGISTRY_HOSTNAME" \
--username "$TAP_DEVELOPER_USERNAME" \
--password "$TAP_DEVELOPER_PASSWORD" \
--namespace dev-team-01

Create a secret containing your Git credentials for the source code repository we cloned previously (tanzu-java-web-app). Refer to the common/developer-git-credentials.yaml manifest as an example. You can modify the manifest using your GitHub username and token, then apply it.

The credentials you specify here will be used to clone the source code, so you could use the same credentials you specified for TAP GUI. However, if you plan to use the pull request feature, you must add the appropriate permissions on GitHub.

Screenshot

kubectl apply -f common/developer-git-credentials.yaml -n dev-team-01
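
For reference, the manifest is essentially a kubernetes.io/basic-auth secret annotated for Tekton so the credentials are attached to Git operations against github.com. A rough sketch, validated here with a client-side dry run; the secret name and credential values below are placeholders - keep the name used by the repository's manifest:

cat <<'EOF' | kubectl apply --dry-run=client -n dev-team-01 -f -
apiVersion: v1
kind: Secret
metadata:
  name: git-credentials
  annotations:
    tekton.dev/git-0: https://github.com
type: kubernetes.io/basic-auth
stringData:
  username: your-github-username
  password: your-github-personal-access-token
EOF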

The sample application we will use is a Java application, so we will also need a container image with a Java build toolchain to run our testing. For this example, I will use the gradle container image, which includes a JDK and can run the project’s Maven wrapper.

Pull the latest gradle image from Docker Hub, then push it to your TAP repository.

docker pull gradle
docker tag gradle:latest "$PRIVATE_REGISTRY_HOSTNAME/$TAP_REPO/gradle:latest"
docker push "$PRIVATE_REGISTRY_HOSTNAME/$TAP_REPO/gradle:latest"

Modify the build-cluster/developer-resources-onboarding.yaml manifest and locate the developer-defined-tekton-pipeline Tekton pipeline, then set your gradle image (e.g., it-tkg-harbor.terasky.demo/tap/gradle).

Screenshot

This manifest is a set of resources containing everything required for the developer environment on the Build cluster, such as the necessary registry secrets, metadata store secrets, service accounts and permissions for various components, a sample scan policy, a sample Tekton pipeline, etc. It can be further customized for specific needs, but I think it is a great starting point for TAP.

kubectl apply -f build-cluster/developer-resources-onboarding.yaml -n dev-team-01

Duplicate the default scan templates from the default namespace, as we also need them in the developer namespace. Note: the following command leverages the kubectl eksporter plugin to clone the existing resources. You can install the eksporter plugin with kubectl krew if it is not already installed on your machine.

kubectl eksporter scantemplates.scanning.apps.tanzu.vmware.com -n default \
--drop metadata.labels,metadata.annotations,metadata.namespace | \
kubectl apply -f - -n dev-team-01
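
If the plugin is missing and you already have krew set up, installing it is a one-liner (eksporter is available in the default krew index as of this writing):

kubectl krew install eksporter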

We should now be able to run a workload. Modify the sample common/tanzu-java-web-app-workload.yaml manifest and set the URL of the source code repository you created previously.

Screenshot

Then, create the workload.

kubectl apply -f common/tanzu-java-web-app-workload.yaml -n dev-team-01

Monitor the status of the workload.

kubectl get workload -n dev-team-01

Example output:

NAME                 SOURCE                                                          SUPPLYCHAIN               READY     REASON               AGE
tanzu-java-web-app   https://github.com/itaytalmi/tanzu-java-web-app.git   source-test-scan-to-url   Unknown   MissingValueAtPath   2m14s
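
You can also use the apps plugin we installed earlier to get a richer status view and stream logs from the supply chain as it progresses:

tanzu apps workload get tanzu-java-web-app --namespace dev-team-01

tanzu apps workload tail tanzu-java-web-app --namespace dev-team-01 --since 1h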

TBS should start building your image in the developer namespace.

kubectl get pod -n dev-team-01

Example output:

NAME                                     READY   STATUS      RESTARTS   AGE
scan-tanzu-java-web-app-h2szn--1-8tp25   0/1     Completed   0          8m35s
scan-tanzu-java-web-app-vvvq9--1-dx9n9   0/1     Completed   0          11m
tanzu-java-web-app-build-1-build-pod     0/1     Completed   0          10m
tanzu-java-web-app-wzpsx-test-pod        0/1     Completed   0          12m

Check the logs of the test pod. For example:

kubectl logs -n dev-team-01 -l app.kubernetes.io/component=test

You can see all the dependencies being downloaded and the unit tests running against the source code.

Example output:

Downloaded from central: https://repo.maven.apache.org/maven2/org/apiguardian/apiguardian-api/1.0.0/apiguardian-api-1.0.0.pom (1.2 kB at 9.9 kB/s)
Downloading from central: https://repo.maven.apache.org/maven2/org/junit/platform/junit-platform-engine/1.3.1/junit-platform-engine-1.3.1.pom
Downloaded from central: https://repo.maven.apache.org/maven2/org/junit/platform/junit-platform-engine/1.3.1/junit-platform-engine-1.3.1.pom (2.4 kB at 20 kB/s)
Downloaded from central: https://repo.maven.apache.org/maven2/org/opentest4j/opentest4j/1.1.1/opentest4j-1.1.1.jar (7.1 kB at 60 kB/s)
...
[INFO]
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running com.example.springboot.HelloControllerTest
07:53:55.327 [main] DEBUG org.springframework.test.context.BootstrapUtils - Instantiating CacheAwareContextLoaderDelegate from class [org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate]
...
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.7.1)

2022-08-18 07:53:55.904  INFO 86 --- [           main] c.e.springboot.HelloControllerTest       : Starting HelloControllerTest using Java 17.0.3 on tanzu-java-web-app-wzpsx-test-pod with PID 86 (started by root in /tmp/tmp.sj3cibwxyo)
2022-08-18 07:53:55.906  INFO 86 --- [           main] c.e.springboot.HelloControllerTest       : No active profile set, falling back to 1 default profile: "default"
2022-08-18 07:53:57.092  INFO 86 --- [           main] o.s.b.t.m.w.SpringBootMockServletContext : Initializing Spring TestDispatcherServlet ''
2022-08-18 07:53:57.093  INFO 86 --- [           main] o.s.t.web.servlet.TestDispatcherServlet  : Initializing Servlet ''
2022-08-18 07:53:57.094  INFO 86 --- [           main] o.s.t.web.servlet.TestDispatcherServlet  : Completed initialization in 1 ms
2022-08-18 07:53:57.116  INFO 86 --- [           main] c.e.springboot.HelloControllerTest       : Started HelloControllerTest in 1.556 seconds (JVM running for 2.367)
Let's inspect the beans provided by Spring Boot:
application
applicationTaskExecutor
basicErrorController
beanNameHandlerMapping
beanNameViewResolver
characterEncodingFilter
...
stringHttpMessageConverter
taskExecutorBuilder
themeResolver
viewControllerHandlerMapping
viewNameTranslator
viewResolver
welcomePageHandlerMapping
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.262 s - in com.example.springboot.HelloControllerTest
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  01:04 min
[INFO] Finished at: 2022-08-18T07:53:57Z
[INFO] ------------------------------------------------------------------------

You can also check the logs of the source code scan pod.

kubectl logs -n dev-team-01 -l app.kubernetes.io/component=source-scan

Example of source code scanning results:

scan:
  scanner:
    name: Grype
    vendor: Anchore
    version: v0.40.1
  reports:
  - /workspace/scan.xml
eval:
  violations: []
store:
  locations:
  - https://metadata-store.it-tap.terasky.demo/api/sources?repo=dev-team-01/tanzu-java-web-app/b54184bbf372890e6e40f38eee80f22dbe3e836a.tar.gz&sha=main/b54184bbf372890e6e40f38eee80f22dbe3e836a&org=gitrepository

As you can see, Grype didn’t find any violations there.

Check the logs of the build pod. You should see a simple Build successful message.

kubectl logs -n dev-team-01 -l app.kubernetes.io/component=build

Example output:

Build successful

Check the logs of the image scan pod.

kubectl logs -n dev-team-01 -l app.kubernetes.io/component=image-scan

Example output:

...
    vendor: Anchore
    version: v0.40.1
  reports:
  - /workspace/scan.xml
eval:
  violations:
  - CVE spring-core CVE-2016-1000027 {"Critical"}
store:
  locations:
  - https://metadata-store.it-tap.terasky.demo/api/images?digest=sha256:9642c9b7f18cec69dd6dccd88853a115b9e25dc261d7bd6a0d9766d8160fb1ba

As you can see, Grype found one critical CVE violation in the image (I will elaborate on that later).

At this point, you should be able to view your image built by TBS on Harbor.

Screenshot

Log in to TAP GUI, navigate to Supply Chains from the left menu, and select your workload.

Screenshot

Click the Image Scanner - Grype box and observe the error message. Your error message may vary depending on the known CVE violations when running the workload. In this example, Grype is complaining about spring-core CVE-2016-1000027, which is a critical vulnerability. That caused our supply chain to fail.

Failed because of 1 violations: CVE spring-core CVE-2016-1000027 {"Critical"}

Screenshot

For testing purposes, you can exclude the CVE in the scan policy on Kubernetes. To do so, copy the CVE ID(s) (e.g., CVE-2016-1000027) and edit the scan policy named scan-policy in your developer namespace.

kubectl edit scanpolicies.scanning.apps.tanzu.vmware.com scan-policy -n dev-team-01

Locate the ignoreCves parameter under spec, and add your CVE ID(s) to the array. For example:

ignoreCves := ["CVE-2016-1000027"]

Screenshot

Save the scan policy.

If you now run kubectl get pods -n dev-team-01, you will immediately notice that a new scan has been triggered. For example:

NAME                                         READY   STATUS      RESTARTS   AGE
scan-tanzu-java-web-app-7ljs8--1-f7phk       0/1     Completed   0          29m
scan-tanzu-java-web-app-7xmrj--1-9z2nk       0/1     Completed   0          16m
scan-tanzu-java-web-app-qnhr4--1-ncdw8       0/1     Init:1/6    0          4s
scan-tanzu-java-web-app-wj6vq--1-99kh8       0/1     Init:2/6    0          4s
tanzu-java-web-app-build-1-build-pod         0/1     Completed   0          4h13m
tanzu-java-web-app-config-writer-82rqj-pod   0/1     Completed   0          27m
tanzu-java-web-app-jr2pc-test-pod            0/1     Completed   0          4h15m

Wait for the scan to complete.

If you go back to TAP GUI, you should see that your supply chain has succeeded.

Screenshot

Run Cluster

The Run cluster is typically the cluster where your applications are deployed. You may have more than one Run cluster, for example, to serve multiple environments (test, development, production, etc.).

Set up the Installation Namespace

The installation namespace will contain the TAP package repository, the TAP package, and the secret containing the registry credentials.

You have to set these up the same way you did for the other clusters.

Make sure you run the following from the multicluster folder.

Create the tap-install namespace.

kubectl create ns tap-install

Create the tap-registry secret using your registry FQDN and credentials for pulling images (e.g., tap-harbor-pull). The following command will also create a SecretExport resource, which enables us to import these credentials into other namespaces by “cloning” this secret from the installation namespace.

tanzu secret registry add tap-registry \
--server "$PRIVATE_REGISTRY_HOSTNAME" \
--username "$PRIVATE_REGISTRY_USERNAME" \
--password "$PRIVATE_REGISTRY_PASSWORD" \
--export-to-all-namespaces --yes --namespace tap-install
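For reference, the SecretExport resource created by the --export-to-all-namespaces flag looks roughly like this (a sketch based on the Carvel secretgen-controller API):

apiVersion: secretgen.carvel.dev/v1alpha1
kind: SecretExport
metadata:
  name: tap-registry   # must match the name of the exported secret
  namespace: tap-install
spec:
  toNamespaces:
  - "*"                # wildcard exports the secret to all namespaces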

Create the TAP package repository and ensure it reconciles successfully.

tanzu package repository add tanzu-tap-repository \
--url "$PRIVATE_REGISTRY_HOSTNAME/$TAP_REPO/tap-packages:$TAP_VERSION" \
--namespace tap-install

Example output:

 Adding package repository 'tanzu-tap-repository'

 Validating provided settings for the package repository

 Creating package repository resource

 Waiting for 'PackageRepository' reconciliation for 'tanzu-tap-repository'

 'PackageRepository' resource install status: Reconciling

 'PackageRepository' resource install status: ReconcileSucceeded

 'PackageRepository' resource successfully reconciled

Added package repository 'tanzu-tap-repository' in namespace 'tap-install'

Ensure the TAP packages are now available.

tanzu package available list -n tap-install

Apply the common/overlay-cnrs.yaml manifest. This is an overlay file my colleague Scott Rosenberg created to customize the Cloud Native Runtimes (CNRS/Knative) configuration in a way that made more sense to us for real-world scenarios: for example, setting the default scheme to HTTPS for deployed applications, redirecting HTTP traffic to HTTPS, enabling scale-to-zero in the autoscaler, and more.

kubectl apply -f common/overlay-cnrs.yaml

Update the TAP Values File

You must update the tap-values.yaml file for the Run cluster to reflect your environment configuration.

In the run-cluster/tap-values.yaml file, under the shared section, set your ingress domain for the Run cluster (e.g., run.it-tap.terasky.demo) and your CA certificates/chain.

Screenshot

In the cnrs section, set the following:

  • domain_name - the domain name for applications deployed by Cloud Native Runtimes. I use the same domain name I specified for the ingress_domain parameter (e.g., run.it-tap.terasky.demo).
  • domain_template - the naming convention for the hostnames of your applications. I set it to {{.Name}}-{{.Namespace}}.{{.Domain}}. This naming convention makes sense and simplifies certificate management, in my opinion, as I use a wildcard certificate that is valid for any hostname under the ingress domain of the Run cluster.

Screenshot

In the appliveview_connector section, set a hostname for the Application Live View connector. For example, appliveview.run.it-tap.terasky.demo.
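Putting these settings together, the relevant sections of run-cluster/tap-values.yaml look roughly like the following (a minimal sketch assuming the TAP 1.2 value keys; the certificate data is a placeholder for your environment):

profile: run
shared:
  ingress_domain: run.it-tap.terasky.demo
  ca_cert_data: |
    -----BEGIN CERTIFICATE-----
    <your CA certificate/chain>
    -----END CERTIFICATE-----
cnrs:
  domain_name: run.it-tap.terasky.demo
  domain_template: "{{.Name}}-{{.Namespace}}.{{.Domain}}"
  default_tls_secret: kube-system/tap-wildcard-cert   # created later in this section
appliveview_connector:
  backend:
    host: appliveview.run.it-tap.terasky.demo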

Deploy the TAP Package

# Define TAP version
TAP_VERSION=1.2.1

tanzu package install tap \
-n tap-install \
-p tap.tanzu.vmware.com \
-v "$TAP_VERSION" \
--values-file run-cluster/tap-values.yaml

Example output:

 Installing package 'tap.tanzu.vmware.com'

 Getting package metadata for 'tap.tanzu.vmware.com'

 Creating service account 'tap-tap-install-sa'

 Creating cluster admin role 'tap-tap-install-cluster-role'

 Creating cluster role binding 'tap-tap-install-cluster-rolebinding'

 Creating secret 'tap-tap-install-values'

 Creating package resource

 Waiting for 'PackageInstall' reconciliation for 'tap'

 'PackageInstall' resource install status: Reconciling

 'PackageInstall' resource install status: ReconcileSucceeded

 'PackageInstall' resource successfully reconciled

 Added installed package 'tap'

You can view the installed packages.

kubectl get app -n tap-install

Example output:

NAME                       DESCRIPTION           SINCE-DEPLOY   AGE
appliveview-connector      Reconcile succeeded   7m29s          3m3s
appsso                     Reconcile succeeded   2m29s          2m24s
cartographer               Reconcile succeeded   6m57s          2m36s
cert-manager               Reconcile succeeded   9m15s          3m34s
cnrs                       Reconcile succeeded   9m17s          2m2s
contour                    Reconcile succeeded   5m6s           4m3s
fluxcd-source-controller   Reconcile succeeded   3m4s           3m36s
image-policy-webhook       Reconcile succeeded   7m12s          2m36s
ootb-delivery-basic        Reconcile succeeded   8m50s          4m40s
ootb-templates             Reconcile succeeded   8m4s           3m40s
policy-controller          Reconcile succeeded   3m10s          3m50s
service-bindings           Reconcile succeeded   2m52s          2m46s
services-toolkit           Reconcile succeeded   7m49s          3m24s
source-controller          Reconcile succeeded   3m33s          3m38s
tap                        Reconcile succeeded   2m56s          2m38s
tap-auth                   Reconcile succeeded   4m23s          2m39s
tap-telemetry              Reconcile succeeded   28s            2m41s

Modify the common/contour-tls-delegation-secret.yaml manifest and set your TLS certificate and key for the deployed applications. This is the secret I mentioned for the cnrs.default_tls_secret parameter in the tap-values.yaml file of the Run cluster. In my environment, I use a wildcard certificate to ensure it is valid for any application deployed on the Run cluster.
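For reference, such a manifest typically pairs a TLS secret of type kubernetes.io/tls with a Contour TLSCertificateDelegation resource, which allows other namespaces to reference the secret. A sketch (the secret name matches the one referenced by cnrs.default_tls_secret; the delegation name is illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: tap-wildcard-cert
  namespace: kube-system
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded wildcard certificate chain>
  tls.key: <base64-encoded private key>
---
apiVersion: projectcontour.io/v1
kind: TLSCertificateDelegation
metadata:
  name: tap-wildcard-cert-delegation   # illustrative name
  namespace: kube-system
spec:
  delegations:
  - secretName: tap-wildcard-cert
    targetNamespaces:
    - "*"                              # delegate to all namespaces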

Create the secret.

kubectl apply -f common/contour-tls-delegation-secret.yaml

Reminder regarding DNS: I have External DNS deployed on all my clusters. External DNS will create all DNS records as ingress/HTTPProxy resources are deployed on the clusters. Due to the dynamic nature of the TAP environment, it is highly recommended that you deploy External DNS, especially on the Run clusters, where actual workloads will be deployed. However, if you prefer, you can manually create DNS records (or wildcard DNS records for each DNS zone).

If you wish to create your DNS records manually, you will have to extract the Envoy load balancer IP address after the deployment of the TAP package using the following command:

ENVOY_LB_IP=$(kubectl get svc -n tanzu-system-ingress envoy -o jsonpath='{.status.loadBalancer.ingress[].ip}')
echo "$ENVOY_LB_IP"

Then, you can use the following PowerShell commands to create the records:

# Define variables
$DNSServer = "demo-dc-01.terasky.demo"
$DNSZone = "terasky.demo"
$DNSSubDomainRecord = "run.it-tap"
$EnvoyLBIP = "10.100.154.10"
$NetworkCIDR = "10.100.154.0/24"
$Hostnames = @("*")

# Create Reverse Lookup Zone
Add-DnsServerPrimaryZone -NetworkId $NetworkCIDR -ReplicationScope "Domain"

# Create DNS Records
$Hostnames | ForEach-Object -Process {
  Add-DnsServerResourceRecord -ComputerName $DNSServer -A -ZoneName $DNSZone -Name "$_.$DNSSubDomainRecord" -IPv4Address $EnvoyLBIP -CreatePtr
}

Set up the Developer Namespace and Deploy a Workload

For the Run cluster, you also have to set up the developer namespace, the same way you did for the Build cluster.

Create the developer namespace. For example:

kubectl create ns dev-team-01

Create a secret containing the developer’s registry credentials. For example:

# Define variables
PRIVATE_REGISTRY_HOSTNAME=your-private-registry-fqdn
TAP_DEVELOPER_USERNAME=your-tap-developer-username
TAP_DEVELOPER_PASSWORD=your-tap-developer-password

tanzu secret registry add registry-credentials \
--server "$PRIVATE_REGISTRY_HOSTNAME" \
--username "$TAP_DEVELOPER_USERNAME" \
--password "$TAP_DEVELOPER_PASSWORD" \
--namespace dev-team-01

Apply the run-cluster/developer-resources-onboarding.yaml manifest.

kubectl apply -f run-cluster/developer-resources-onboarding.yaml -n dev-team-01

You can now deploy a workload on the Run cluster.

Switch to the Build cluster to get the deliverable resource from it, then clean it up using the kubectl neat plugin, and apply it on the Run cluster using the --context flag. For example:

kubectl get deliverables.carto.run tanzu-java-web-app -n dev-team-01 -o yaml | \
kubectl neat | \
kubectl apply -f - -n dev-team-01 --context it-tap-run-cls-admin@it-tap-run-cls
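If the kubectl neat plugin is not installed on your workstation, you can install it via the krew plugin manager (assuming krew itself is already set up):

kubectl krew install neat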

Switch back to the Run cluster and inspect the deliverable.

kubectl get deliverables.carto.run -n dev-team-01

The status of the deliverable should be Ready at the end of the process.

Example output:

NAME                 SOURCE                                                                                                                        DELIVERY         READY   REASON   AGE
tanzu-java-web-app   it-tkg-harbor.terasky.demo/dev-team-01/workloads/tanzu-java-web-app-dev-team-01-bundle:8b7fede6-0b24-4648-919d-7d43518acea2   delivery-basic   True    Ready    5m

Check the status of the deployed package.

kubectl get app -n dev-team-01

Example output:

NAME                 DESCRIPTION           SINCE-DEPLOY   AGE
tanzu-java-web-app   Reconcile succeeded   8m12s          5m

Check the status of the pods.

kubectl get pods -n dev-team-01

Example output:

NAME                                                   READY   STATUS    RESTARTS   AGE
tanzu-java-web-app-00001-deployment-6bc4c849f4-nswnt   2/2     Running   0          13s

In TAP GUI, you can also see that the delivery of your workload has been completed.

Screenshot

Check the status of the HTTPProxy resources.

kubectl get httpproxy -n dev-team-01

Example output:

NAME                                                              FQDN                                                     TLS SECRET                      STATUS   STATUS DESCRIPTION
tanzu-java-web-app-contour-24634cd69118477d3e8282041d086017tanz   tanzu-java-web-app.dev-team-01.svc.cluster.local         kube-system/tap-wildcard-cert   valid    Valid HTTPProxy
tanzu-java-web-app-contour-8151f51867b3d6ebf7b09f0d9bd8f14etanz   tanzu-java-web-app-dev-team-01.run.it-tap.terasky.demo   kube-system/tap-wildcard-cert   valid    Valid HTTPProxy
tanzu-java-web-app-contour-tanzu-java-web-app.dev-team-01         tanzu-java-web-app.dev-team-01                           kube-system/tap-wildcard-cert   valid    Valid HTTPProxy
tanzu-java-web-app-contour-tanzu-java-web-app.dev-team-01.svc     tanzu-java-web-app.dev-team-01.svc                       kube-system/tap-wildcard-cert   valid    Valid HTTPProxy

External DNS has registered the tanzu-java-web-app-dev-team-01.run.it-tap.terasky.demo record in DNS.
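You can quickly verify that the record resolves from your workstation. For example:

nslookup tanzu-java-web-app-dev-team-01.run.it-tap.terasky.demo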

Screenshot

You can now run a curl command to ensure the application is accessible.

curl -k https://tanzu-java-web-app-dev-team-01.run.it-tap.terasky.demo

Example output:

Greetings from Spring Boot + Tanzu!

Or you can also use a web browser.

Screenshot

Iterate Cluster

The Iterate cluster is intended for inner-loop iterative application development.

You will notice that the Iterate cluster configuration somewhat combines several components from the Run and the Build clusters. This is because we want to ensure the developer has everything required for iterating on their application – including building the images and deploying them.

Set up the Installation Namespace

The installation namespace will contain the TAP package repository, the TAP package, and the secret containing the registry credentials.

You have to set these up the same way you did for the other clusters.

Make sure you run the following from the multicluster folder.

Create the tap-install namespace.

kubectl create ns tap-install

Create the tap-registry secret using your registry FQDN and credentials for pulling images (e.g., tap-harbor-pull). The following command will also create a SecretExport resource, which enables us to import these credentials into other namespaces by “cloning” this secret from the installation namespace.

tanzu secret registry add tap-registry \
--server "$PRIVATE_REGISTRY_HOSTNAME" \
--username "$PRIVATE_REGISTRY_USERNAME" \
--password "$PRIVATE_REGISTRY_PASSWORD" \
--export-to-all-namespaces --yes --namespace tap-install

Create the TAP package repository and ensure it reconciles successfully.

tanzu package repository add tanzu-tap-repository \
--url "$PRIVATE_REGISTRY_HOSTNAME/$TAP_REPO/tap-packages:$TAP_VERSION" \
--namespace tap-install

Example output:

 Adding package repository 'tanzu-tap-repository'

 Validating provided settings for the package repository

 Creating package repository resource

 Waiting for 'PackageRepository' reconciliation for 'tanzu-tap-repository'

 'PackageRepository' resource install status: Reconciling

 'PackageRepository' resource install status: ReconcileSucceeded

 'PackageRepository' resource successfully reconciled

Added package repository 'tanzu-tap-repository' in namespace 'tap-install'

Ensure the TAP packages are now available.

tanzu package available list -n tap-install

Apply the common/overlay-cnrs.yaml manifest. This is the same overlay we applied on the Run cluster previously.

kubectl apply -f common/overlay-cnrs.yaml

Update the TAP Values File

You must update the tap-values.yaml file for the Iterate cluster to reflect your environment configuration.

In the iterate-cluster/tap-values.yaml file, under the shared section, set your ingress domain for the Iterate cluster (e.g., iterate.it-tap.terasky.demo) and your CA certificates/chain.

In the cnrs section, set the following:

  • domain_name – the domain name for applications deployed by Cloud Native Runtimes. I use the same domain name I specified for the ingress_domain parameter (e.g., iterate.it-tap.terasky.demo).
  • domain_template – the naming convention for the hostnames of your applications. I set it to {{.Name}}-{{.Namespace}}.{{.Domain}}. This naming convention makes sense and simplifies certificate management, in my opinion, as I use a wildcard certificate that is valid for any hostname under the ingress domain of the Iterate cluster.

Screenshot

In the appliveview_connector section, set a hostname for the Application Live View connector. For example, appliveview.iterate.it-tap.terasky.demo.

For the buildservice and the ootb_supply_chain_basic sections, specify the same configuration as for the Build cluster.

Screenshot
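For reference, these additional sections look roughly like the following (a minimal sketch assuming the TAP 1.2 value keys; the kp_default_repository path and credentials are placeholders, while the registry server and repository match the values used throughout this guide):

buildservice:
  kp_default_repository: it-tkg-harbor.terasky.demo/tap/build-service   # placeholder path
  kp_default_repository_username: <registry-username>
  kp_default_repository_password: <registry-password>
  exclude_dependencies: true   # the full TBS dependencies are installed separately below
ootb_supply_chain_basic:
  registry:
    server: it-tkg-harbor.terasky.demo
    repository: dev-team-01/workloads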

Deploy the TAP Package

Now that you’ve completed the configuration of the tap-values.yaml file for the Iterate cluster, it’s time to deploy the TAP package.

# Define TAP version
TAP_VERSION=1.2.1

tanzu package install tap \
-n tap-install \
-p tap.tanzu.vmware.com \
-v "$TAP_VERSION" \
--values-file iterate-cluster/tap-values.yaml

Example output:

 Installing package 'tap.tanzu.vmware.com'

 Getting package metadata for 'tap.tanzu.vmware.com'

 Creating service account 'tap-tap-install-sa'

 Creating cluster admin role 'tap-tap-install-cluster-role'

 Creating cluster role binding 'tap-tap-install-cluster-rolebinding'

 Creating secret 'tap-tap-install-values'

 Creating package resource

 Waiting for 'PackageInstall' reconciliation for 'tap'

 'PackageInstall' resource install status: Reconciling

 'PackageInstall' resource install status: ReconcileSucceeded

 'PackageInstall' resource successfully reconciled

 Added installed package 'tap'

You can view the installed packages.

kubectl get app -n tap-install

Example output:

NAME                       DESCRIPTION           SINCE-DEPLOY   AGE
appliveview                Reconcile succeeded   103s           2m27s
appliveview-connector      Reconcile succeeded   5m6s           5m59s
appliveview-conventions    Reconcile succeeded   2m23s          3m17s
appsso                     Reconcile succeeded   3m55s          4m2s
buildservice               Reconcile succeeded   5m17s          5m59s
cartographer               Reconcile succeeded   3m18s          4m1s
cert-manager               Reconcile succeeded   4m57s          5m58s
cnrs                       Reconcile succeeded   95s            2m27s
contour                    Reconcile succeeded   3m44s          4m2s
conventions-controller     Reconcile succeeded   3m32s          4m1s
developer-conventions      Reconcile succeeded   2m3s           3m16s
fluxcd-source-controller   Reconcile succeeded   4m37s          5m58s
image-policy-webhook       Reconcile succeeded   2m39s          4m1s
ootb-delivery-basic        Reconcile succeeded   45s            103s
ootb-supply-chain-basic    Reconcile succeeded   45s            103s
ootb-templates             Reconcile succeeded   107s           2m45s
policy-controller          Reconcile succeeded   2m34s          4m
service-bindings           Reconcile succeeded   5m53s          6m1s
services-toolkit           Reconcile succeeded   5m52s          6m1s
source-controller          Reconcile succeeded   3m10s          4m1s
spring-boot-conventions    Reconcile succeeded   119s           3m16s
tap                        Reconcile succeeded   6m2s           6m9s
tap-auth                   Reconcile succeeded   5m33s          6m
tap-telemetry              Reconcile succeeded   5m22s          6m
tekton-pipelines           Reconcile succeeded   5m53s          6m1s

Modify the common/contour-tls-delegation-secret.yaml manifest and set your TLS certificate and key for the deployed applications. This is the secret I mentioned for the cnrs.default_tls_secret parameter in the tap-values.yaml file of the Iterate cluster. In my environment, I use a wildcard certificate to ensure it is valid for any application deployed on the Iterate cluster.

Create the secret.

kubectl apply -f common/contour-tls-delegation-secret.yaml

Reminder regarding DNS: I have External DNS deployed on all my clusters. External DNS will create all DNS records as ingress/HTTPProxy resources are deployed on the clusters. Due to the dynamic nature of the TAP environment, it is highly recommended that you deploy External DNS. However, if you prefer, you can manually create DNS records (or wildcard DNS records for each DNS zone).

If you wish to create your DNS records manually, you will have to extract the Envoy load balancer IP address after the deployment of the TAP package using the following command:

ENVOY_LB_IP=$(kubectl get svc -n tanzu-system-ingress envoy -o jsonpath='{.status.loadBalancer.ingress[].ip}')
echo "$ENVOY_LB_IP"

Then, you can use the following PowerShell commands to create the records:

# Define variables
$DNSServer = "demo-dc-01.terasky.demo"
$DNSZone = "terasky.demo"
$DNSSubDomainRecord = "iterate.it-tap"
$EnvoyLBIP = "10.100.154.10"
$NetworkCIDR = "10.100.154.0/24"
$Hostnames = @("*")

# Create Reverse Lookup Zone
Add-DnsServerPrimaryZone -NetworkId $NetworkCIDR -ReplicationScope "Domain"

# Create DNS Records
$Hostnames | ForEach-Object -Process {
  Add-DnsServerResourceRecord -ComputerName $DNSServer -A -ZoneName $DNSZone -Name "$_.$DNSSubDomainRecord" -IPv4Address $EnvoyLBIP -CreatePtr
}

Deploy the TBS Full Dependencies Package

Get the latest available version of the buildservice package.

TBS_FULL_DEPS_PKG_NAME=buildservice.tanzu.vmware.com
TBS_FULL_DEPS_PKG_VERSIONS=($(tanzu package available list "$TBS_FULL_DEPS_PKG_NAME" -n tap-install -o json | jq -r ".[].version" | sort -t "." -k1,1n -k2,2n -k3,3n))
TBS_FULL_DEPS_PKG_VERSION=${TBS_FULL_DEPS_PKG_VERSIONS[-1]}
echo "$TBS_FULL_DEPS_PKG_VERSION"

Create the TBS Full Dependencies package repository and ensure it reconciles successfully.

tanzu package repository add tbs-full-deps-repository \
--url "$PRIVATE_REGISTRY_HOSTNAME/$TAP_REPO/tbs-full-deps:$TBS_FULL_DEPS_PKG_VERSION" \
--namespace tap-install

Example output:

 Adding package repository 'tbs-full-deps-repository'

 Validating provided settings for the package repository

 Creating package repository resource

 Waiting for 'PackageRepository' reconciliation for 'tbs-full-deps-repository'

 'PackageRepository' resource install status: Reconciling

 'PackageRepository' resource install status: ReconcileSucceeded

Added package repository 'tbs-full-deps-repository' in namespace 'tap-install'

Install the TBS Full Dependencies package.

tanzu package install full-tbs-deps \
-n tap-install \
-p full-tbs-deps.tanzu.vmware.com \
-v "$TBS_FULL_DEPS_PKG_VERSION"

Example output:

 Installing package 'full-tbs-deps.tanzu.vmware.com'

 Getting package metadata for 'full-tbs-deps.tanzu.vmware.com'

 Creating service account 'full-tbs-deps-tap-install-sa'

 Creating cluster admin role 'full-tbs-deps-tap-install-cluster-role'

 Creating cluster role binding 'full-tbs-deps-tap-install-cluster-rolebinding'

 Creating package resource

 Waiting for 'PackageInstall' reconciliation for 'full-tbs-deps'

 'PackageInstall' resource install status: Reconciling

 'PackageInstall' resource install status: ReconcileSucceeded

 Added installed package 'full-tbs-deps'

Set up the Developer Namespace and Deploy a Workload

Create the developer namespace. For example:

kubectl create ns dev-team-01

Create a secret containing the developer’s registry credentials. For example:

# Define variables
PRIVATE_REGISTRY_HOSTNAME=your-private-registry-fqdn
TAP_DEVELOPER_USERNAME=your-tap-developer-username
TAP_DEVELOPER_PASSWORD=your-tap-developer-password

tanzu secret registry add registry-credentials \
--server "$PRIVATE_REGISTRY_HOSTNAME" \
--username "$TAP_DEVELOPER_USERNAME" \
--password "$TAP_DEVELOPER_PASSWORD" \
--namespace dev-team-01

Create a secret containing your Git credentials for the source code repository we cloned previously (tanzu-java-web-app). This is typically the same secret you created for the Build cluster.

kubectl apply -f common/developer-git-credentials.yaml -n dev-team-01
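For reference, a Git credentials secret for an HTTPS-based repository typically looks like this (a sketch; the tekton.dev/git-0 annotation tells the supply chain which Git host the credentials apply to, and the name and values shown here are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: git-credentials          # placeholder name
  annotations:
    tekton.dev/git-0: https://github.com   # the Git host these credentials apply to
type: kubernetes.io/basic-auth
stringData:
  username: <your-git-username>
  password: <your-git-personal-access-token>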

Apply the iterate-cluster/developer-resources-onboarding.yaml manifest.

kubectl apply -f iterate-cluster/developer-resources-onboarding.yaml -n dev-team-01

You can now deploy our sample workload (the same tanzu-java-web-app we used previously). On the Iterate cluster, TAP will perform the entire flow – build the image from your source code and deploy the application on the cluster.

kubectl apply -f common/tanzu-java-web-app-workload.yaml -n dev-team-01
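For reference, the workload manifest looks roughly like this (a sketch; the actual manifest is in my repository, and the labels follow the standard TAP workload conventions):

apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  name: tanzu-java-web-app
  labels:
    apps.tanzu.vmware.com/workload-type: web   # selects the web workload type
    app.kubernetes.io/part-of: tanzu-java-web-app
spec:
  source:
    git:
      url: https://github.com/itaytalmi/tanzu-java-web-app.git
      ref:
        branch: main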

Monitor the status of the workload and wait for the deployment to complete.

kubectl get workload -n dev-team-01

Example output:

NAME                 SOURCE                                                SUPPLYCHAIN     READY   REASON   AGE
tanzu-java-web-app   https://github.com/itaytalmi/tanzu-java-web-app.git   source-to-url   True    Ready    2m

Check the status of the deliverable.

kubectl get deliverables.carto.run -n dev-team-01

The status of the deliverable should be Ready at the end of the process.

Example output:

NAME                 SOURCE                                                                                                                        DELIVERY         READY   REASON   AGE
tanzu-java-web-app   it-tkg-harbor.terasky.demo/dev-team-01/workloads/tanzu-java-web-app-dev-team-01-bundle:a8b302a6-dcee-45cb-8558-aac569afee7c   delivery-basic   True    Ready    2m

Check the status of the deployed package.

kubectl get app -n dev-team-01

Example output:

NAME                 DESCRIPTION           SINCE-DEPLOY   AGE
tanzu-java-web-app   Reconcile succeeded   5m45s          25m

Check the status of the pods.

kubectl get pods -n dev-team-01

Example output:

NAME                                                   READY   STATUS      RESTARTS   AGE
tanzu-java-web-app-00001-deployment-797fcc6fb7-zncpz   2/2     Running     0          26s
tanzu-java-web-app-build-1-build-pod                   0/1     Completed   0          30m
tanzu-java-web-app-config-writer-sml6k-pod             0/1     Completed   0          28m

You can also see that the supply chain has been completed in TAP GUI.

Screenshot

Check the status of the HTTPProxy resources.

kubectl get httpproxy -n dev-team-01

Example output:

NAME                                                              FQDN                                                         TLS SECRET                      STATUS   STATUS DESCRIPTION
tanzu-java-web-app-contour-24634cd69118477d3e8282041d086017tanz   tanzu-java-web-app.dev-team-01.svc.cluster.local             kube-system/tap-wildcard-cert   valid    Valid HTTPProxy
tanzu-java-web-app-contour-b6a7299987efcdce071f502dc18a463ftanz   tanzu-java-web-app-dev-team-01.iterate.it-tap.terasky.demo   kube-system/tap-wildcard-cert   valid    Valid HTTPProxy
tanzu-java-web-app-contour-tanzu-java-web-app.dev-team-01         tanzu-java-web-app.dev-team-01                               kube-system/tap-wildcard-cert   valid    Valid HTTPProxy
tanzu-java-web-app-contour-tanzu-java-web-app.dev-team-01.svc     tanzu-java-web-app.dev-team-01.svc                           kube-system/tap-wildcard-cert   valid    Valid HTTPProxy

External DNS has registered the tanzu-java-web-app-dev-team-01.iterate.it-tap.terasky.demo record in DNS.

Screenshot

You can now run a curl command to ensure the application is accessible.

curl -k https://tanzu-java-web-app-dev-team-01.iterate.it-tap.terasky.demo

Example output:

Greetings from Spring Boot + Tanzu!

Or you can also use a web browser.

Screenshot

Iterate on your Application

We will now go through the process of iterating on our sample application using an IDE. I will also demonstrate how you can perform live updates on the application to view code changes updating live on the Iterate cluster, debug the application, and monitor the running application on the Application Live View UI.

Currently, the supported IDEs are Visual Studio Code and IntelliJ (IntelliJ integration is only supported on macOS). I will use Visual Studio Code for this guide.

First, download the Tanzu Developer Tools and the Tanzu App Accelerator Visual Studio Code extensions from Tanzu Network.

# Create a temporary directory
TANZU_TMP_DIR=/tmp/tanzu
mkdir -p "$TANZU_TMP_DIR"

# Define variables
TAP_VERSION=1.2.1
TANZU_DEV_TOOLS_PRODUCT_FILE_ID=1243450
TANZU_APP_ACCELERATOR_EXT_PRODUCT_FILE_ID=1244288

# Download files
pivnet download-product-files \
--product-slug="tanzu-application-platform" \
--release-version="$TAP_VERSION" \
--product-file-id="$TANZU_DEV_TOOLS_PRODUCT_FILE_ID" \
--download-dir="$TANZU_TMP_DIR"

pivnet download-product-files \
--product-slug="tanzu-application-platform" \
--release-version="$TAP_VERSION" \
--product-file-id="$TANZU_APP_ACCELERATOR_EXT_PRODUCT_FILE_ID" \
--download-dir="$TANZU_TMP_DIR"

Example output:

2022/08/22 08:59:29 Downloading 'tanzu-vscode-extension-0.7.1+build.1.vsix' to '/tmp/tanzu/tanzu-vscode-extension-0.7.1+build.1.vsix'
 60.88 MiB / 60.88 MiB [============================================] 100.00% 7s
2022/08/22 08:59:37 Verifying SHA256
2022/08/22 08:59:37 Successfully verified SHA256
2022/08/22 08:59:40 Downloading 'tanzu-app-accelerator-0.1.2.vsix' to '/tmp/tanzu/tanzu-app-accelerator-0.1.2.vsix'
 200.66 KiB / 200.66 KiB [==========================================] 100.00% 0s
2022/08/22 08:59:41 Verifying SHA256
2022/08/22 08:59:41 Successfully verified SHA256

In Visual Studio Code, go to Extensions, click the ... menu at the top of the Extensions pane, and select Install from VSIX...

Screenshot

Provide the path to the VSIX file, for example /tmp/tanzu/tanzu-vscode-extension-0.7.1+build.1.vsix, and click OK.

Screenshot

Repeat for the second extension (/tmp/tanzu/tanzu-app-accelerator-0.1.2.vsix).

Screenshot
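Alternatively, if you prefer the command line, the code CLI can install the VSIX files directly:

code --install-extension /tmp/tanzu/tanzu-vscode-extension-0.7.1+build.1.vsix
code --install-extension /tmp/tanzu/tanzu-app-accelerator-0.1.2.vsix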

Once both extensions are installed, close and reopen Visual Studio Code.

Screenshot

Ensure both extensions are now on the list of installed extensions.

Screenshot

Install Tilt on your machine.

curl -fsSL https://raw.githubusercontent.com/tilt-dev/tilt/master/scripts/install.sh | sudo bash
echo ""
tilt version

Example output:

+ curl -fsSL https://github.com/tilt-dev/tilt/releases/download/v0.30.7/tilt.0.30.7.linux.x86_64.tar.gz
+ tar -xzv tilt
tilt
+ copy_binary
+ [[ :/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin: == *\:\/\r\o\o\t\/\.\l\o\c\a\l\/\b\i\n\:* ]]
+ echo 'Installing Tilt to /usr/local/bin which is write protected'
Installing Tilt to /usr/local/bin which is write protected
+ echo 'If you'\''d prefer to install Tilt without sudo permissions, add $HOME/.local/bin to your $PATH and rerun the installer'
If you'd prefer to install Tilt without sudo permissions, add $HOME/.local/bin to your $PATH and rerun the installer
+ sudo mv tilt /usr/local/bin/tilt
+ set +x
Tilt installed!
For the latest Tilt news, subscribe: https://tilt.dev/subscribe
Run `tilt up` to start.

v0.30.7, built 2022-08-12

Open your tanzu-java-web-app source code Git repository as a Visual Studio Code project.

Screenshot

From Visual Studio Code, navigate to Settings > Extensions > Tanzu Developer Tools.

Screenshot

Screenshot

Screenshot

In the Local Path field, provide the path to the directory containing the Tanzu Java Web App. Since it defaults to the working directory and I opened the Tanzu Java Web App source code Git repository as a Visual Studio Code project, I can leave it blank.

In the Namespace field, provide your developer namespace.

In the Source Image field, provide the destination image repository to publish an image containing your workload source code. This is typically the same repository we have used throughout this guide for the developer namespace.

Screenshot

You are now ready to iterate on your application.

First, ensure Tilt can deploy the application on a remote cluster (our Iterate cluster).

Edit the Tiltfile file.

Screenshot

Add the following line:

allow_k8s_contexts('your-iterate-cluster-k8s-context')

For example:

allow_k8s_contexts('it-tap-iterate-cls-admin@it-tap-iterate-cls')

Screenshot

Right-click your Tiltfile and select Tanzu: Live Update Start.

Screenshot

Tilt will now initiate the build and deployment of your application on the cluster. Wait for the deployment to complete.

Screenshot

When the Live Update status in the status bar is visible and resolves to Live Update Started, navigate to http://localhost:8080 in your browser, and view your running application.

Screenshot

Screenshot

Although the application is running remotely on the Iterate cluster, you can access it locally!

You can also check the pod deployed by Tilt on your Iterate cluster.

kubectl get pods -n dev-team-01 -l tanzu.app.live.view=true

On Harbor, you can also see the newly created artifacts used by Tilt for this deployment.

Screenshot

While Tilt is still running, let’s change the source code. Open HelloController.java under src > main > java > com > example > springboot and modify the text.

Screenshot

This change will trigger Tilt to deploy the updated application automatically.

Refresh your browser. You should immediately see the updated application!

Screenshot

If you run kubectl get pods -n dev-team-01 -l tanzu.app.live.view=true on your Iterate cluster once again, you will see that Tilt did not have to deploy a new Pod to apply these changes… :)

Once you are done making changes, stop the live update process. Right-click the Tiltfile and select Tanzu: Live Update Stop.

Screenshot

You can also debug your application. Set a breakpoint in your code.

Screenshot

Right-click the workload.yaml file in the config directory and select Tanzu: Java Debug Start.

Screenshot

The workload will now be redeployed with debugging enabled so that you can debug your code.

Screenshot

Wrap Up

Hopefully, this guide gives you an idea of how powerful TAP is. I know it may be overwhelming at first due to the number of components and moving parts. Still, I think it is worth the time and effort and will save you a lot of headaches down the road on your cloud-native journey when deploying microservices at scale.

In future posts, I will dive deeper into specific features and components of TAP.