Getting Started with Carvel ytt - Real-World Examples

2023-01-01 11 min read Carvel Cloud Native Kubernetes Tanzu TAP TKG

Over the years of working with Tanzu Kubernetes Grid (TKG), one tool has stood out as a game-changer for resource customization: Carvel’s ytt. Whether tailoring cluster manifests, customizing TKG packages, or addressing unique deployment requirements, ytt has consistently been a fundamental part of the workflow. Its flexibility, power, and declarative approach make it an essential tool for anyone working deeply with Kubernetes in a TKG ecosystem.

But what exactly is ytt? Short for YAML Templating Tool, ytt is part of the Carvel suite of tools designed for Kubernetes resource management. It provides a powerful, programmable approach to templating YAML configurations by combining straightforward data values, overlays, and scripting capabilities. Unlike many traditional templating tools, ytt prioritizes structure and intent, making it easier to maintain, validate, and debug configurations—particularly in complex, large-scale Kubernetes environments.

However, ytt’s adoption has faced a significant hurdle: it’s not the industry’s mainstream templating tool. Despite its capabilities, many organizations and customers remain unfamiliar with ytt, making it challenging to introduce and integrate effectively into their workflows.

This blog post is my attempt to bridge that gap. Through step-by-step workshops, real-world examples, and actionable references, I’ll guide you through the practical uses of ytt. The post will be divided into three key sections to provide a structured learning experience:

  1. Basic ytt Example – We’ll start with an introductory example to understand the syntax, concepts, and basic features of ytt.
  2. ytt Example for a Kubernetes Manifest – Next, we’ll apply ytt to a real-world scenario, templating a Kubernetes resource manifest.
  3. ytt Example for a TKG Package Overlay – Finally, we’ll tackle a more advanced example by creating a custom overlay for a TKG package using ytt.

Whether you’re completely new to ytt or looking to expand your understanding, this post will provide the foundational knowledge and practical skills you need to make the most of this powerful tool.

Basic ytt Example

Introduction to ytt with YAML Manifests

This example demonstrates how to use ytt to template a simple YAML file. For this example, we’ll work with a Netplan manifest, which is commonly used for configuring network settings on Linux. Netplan configurations often include various options, making them a great candidate to showcase how ytt handles logic and conditions dynamically.

If you’re unfamiliar with ytt, it’s part of the Carvel suite of tools and uses Starlark, a Python-inspired language, for templating YAML. If you’ve worked with Python, you’ll notice many similarities in syntax and structure. This exercise will help you understand the basics of ytt, including overlays, data values, and templating logic.
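To get a feel for the Python-like syntax before diving into Netplan, here is a minimal, self-contained ytt template (a hypothetical hello.yaml, not part of the example files) that defines a variable and a simple function:

```yaml
#@ name = "ytt"

#@ def greeting(who):
#@   return "Hello, " + who + "!"
#@ end

message: #@ greeting(name)
doubled: #@ [x * 2 for x in range(3)]
```

Rendering it with ytt -f hello.yaml produces a plain YAML document with the computed values filled in. Note the #@ end marker: like Python, Starlark blocks are introduced with a colon, but since YAML comments carry the code, ytt needs an explicit end instead of relying on indentation.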

Reading the Base YAML

Our base.yaml file contains the minimal configuration required by Netplan, set up for DHCP with no customizations:

---
network:
  version: 2
  ethernets:
    ens192:
      dhcp4: true

To render this file using ytt, you can run:

ytt -f base.yaml

The output will simply echo the contents of the file. This demonstrates how ytt reads and processes a static YAML manifest.

Adding Logic and Customizations with ytt

While the base.yaml manifest uses DHCP, Netplan also supports static IP configurations. With ytt, we can introduce logic that dynamically applies the appropriate configuration based on user-provided parameters.

Preparing the Overlay

We’ll use an overlay.yaml file to define the templating logic. First, let’s load the required ytt modules:

#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")

  • Overlay Module: This module allows us to modify or combine YAML structures using annotations. We’ll use it to locate and modify specific sections of the YAML file.
  • Data Module: This module lets us read user-provided parameters from a values.yaml file, enabling dynamic customization.

Next, we define a condition to check if static IP configuration is required:

#@ if data.values.ip_address and data.values.cidr and data.values.gateway:

If all parameters (ip_address, cidr, and gateway) are provided, we apply the following logic:

#@ ip_cidr_notation = data.values.ip_address + "/" + data.values.cidr
#@overlay/match by=overlay.subset({"network":{"version": 2}})
---
network:
  ethernets:
    ens192:
      #@overlay/remove
      dhcp4: true
      #@overlay/match missing_ok=True
      addresses: #@ [ip_cidr_notation]
      #@overlay/match missing_ok=True
      routes:
        - to: default
          via: #@ data.values.gateway
#@ end

Explanation:

  1. The ip_cidr_notation variable combines the ip_address and cidr inputs into CIDR format (e.g., 192.168.1.50/24).
  2. overlay/match locates the network section where version is 2.
  3. overlay/remove deletes the dhcp4: true entry since it’s not needed for static IPs.
  4. overlay/match missing_ok=True ensures new sections like addresses and routes are added if they’re not already present.

Defining User Inputs

In values.yaml, we declare the static IP parameters (with their default values) as a ytt schema:

#@data/values-schema
---
ip_address: "192.168.1.50"
cidr: "24"
gateway: "192.168.1.1"
dns_servers: ""
search_paths: ""

To apply the overlay, run:

ytt -f values.yaml -f base.yaml -f overlay.yaml

The output will be:

network:
  version: 2
  ethernets:
    ens192:
      addresses:
      - 192.168.1.50/24
      routes:
      - to: default
        via: 192.168.1.1

Expanding the Example: Adding DNS Configuration

To include DNS servers and search paths, update values.yaml as follows:

#@data/values-schema
---
ip_address: "192.168.1.50"
cidr: "24"
gateway: "192.168.1.1"
dns_servers: "192.168.2.1,192.168.2.2"
search_paths: "mydomain.com,mydomain.io"
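The overlay shown earlier only handles addresses and routes. For the DNS values to take effect, overlay.yaml also needs a block along these lines (a hedged sketch, assuming the comma-separated strings are split into lists with Starlark’s split()):

```yaml
#@ if data.values.dns_servers != "" and data.values.search_paths != "":
#@overlay/match by=overlay.subset({"network": {"version": 2}})
---
network:
  ethernets:
    ens192:
      #@overlay/match missing_ok=True
      nameservers:
        #@overlay/match missing_ok=True
        addresses: #@ data.values.dns_servers.split(",")
        #@overlay/match missing_ok=True
        search: #@ data.values.search_paths.split(",")
#@ end
```

This block reuses the overlay and data modules already loaded at the top of overlay.yaml.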

Then, rerun the ytt command:

ytt -f values.yaml -f base.yaml -f overlay.yaml

The output will now include the DNS settings:

network:
  version: 2
  ethernets:
    ens192:
      addresses:
      - 192.168.1.50/24
      routes:
      - to: default
        via: 192.168.1.1
      nameservers:
        addresses:
        - 192.168.2.1
        - 192.168.2.2
        search:
        - mydomain.com
        - mydomain.io

Best Practices and Tips

  1. Use ytt Comments: In files that ytt processes as templates, plain YAML comments (#) cause errors by default. Use ytt comments (#!) instead, as shown below:

    #! This is a ytt-specific comment
    
  2. CLI Parameters: You can override data values from the CLI using -v flags, similar to Helm’s --set. Note that each value must still be declared (here, in the values.yaml schema file) before it can be set this way. For example:

    ytt -f values.yaml -f base.yaml -f overlay.yaml \
    -v ip_address="192.168.1.50" \
    -v cidr="24" \
    -v gateway="192.168.1.1" \
    -v dns_servers="192.168.2.1,192.168.2.2" \
    -v search_paths="mydomain.com,mydomain.io"
    

Challenge Yourself

Try adding additional Netplan configurations, such as setting an MTU or adding a second Ethernet interface. Use ytt to dynamically generate these configurations based on user input.
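As a starting point for the MTU part of the challenge, here is one possible overlay block (a sketch only; it assumes you also declare an mtu field, e.g. mtu: 0, in the values.yaml schema, and it belongs in overlay.yaml where the overlay and data modules are already loaded):

```yaml
#@ if data.values.mtu > 0:
#@overlay/match by=overlay.subset({"network": {"version": 2}})
---
network:
  ethernets:
    ens192:
      #@overlay/match missing_ok=True
      mtu: #@ data.values.mtu
#@ end
```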

ytt - Kubernetes Deployment Manifests

In this example, we’ll explore how to use ytt to template Kubernetes deployment manifests. We’ll demonstrate a sample scenario for deploying a web application, starting with basic templating and progressing to overlays for customizations like tolerations and node selectors.

Templating and Functions

We begin with the app.yaml manifest, which includes the Kubernetes resources for a namespace, deployment, and service:

#@ load("@ytt:data", "data")

#@ def labels():
app: #@ data.values.app_name
#@ end

---
apiVersion: v1
kind: Namespace
metadata:
  name: #@ data.values.namespace
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: #@ data.values.namespace
  name: #@ data.values.app_name
spec:
  selector:
    matchLabels: #@ labels()
  template:
    metadata:
      labels: #@ labels()
    spec:
      containers:
      - name: #@ data.values.app_name
        image: #@ data.values.app_image
        ports:
        - name: http
          containerPort: #@ data.values.app_port
---
apiVersion: v1
kind: Service
metadata:
  namespace: #@ data.values.namespace
  name: #@ data.values.app_name
  labels: #@ labels()
spec:
  type: #@ data.values.svc_type
  ports:
  - port: #@ data.values.svc_port
    targetPort: http
  selector: #@ labels()

Key Highlights:

  1. Reusable Labels Function:
    • A labels() function is defined to include the app label and its value (e.g., spring-petclinic).
    • This avoids repetition, as labels are used across multiple sections: Deployment matchLabels and metadata.labels, and Service metadata.labels and spec.selector.
  2. Dynamic Data Values:
    • The data module pulls parameters like namespace, app_name, app_image, and ports from the values.yaml file.

Here’s an example values.yaml file:

#@data/values-schema
---
namespace: dev
app_name: spring-petclinic
app_image: springio/petclinic:latest
app_port: 8080
svc_port: 80
svc_type: ClusterIP

To render the templated manifest, run:

ytt -f app.yaml -f values.yaml

Example Output:

apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: dev
  name: spring-petclinic
spec:
  selector:
    matchLabels:
      app: spring-petclinic
  template:
    metadata:
      labels:
        app: spring-petclinic
    spec:
      containers:
      - name: spring-petclinic
        image: springio/petclinic:latest
        ports:
        - name: http
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  namespace: dev
  name: spring-petclinic
  labels:
    app: spring-petclinic
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: http
  selector:
    app: spring-petclinic

Deploying with ytt and CLI Tools

You can directly deploy the generated manifest using Kubernetes tools like kubectl or kapp:

# With kubectl
ytt -f app.yaml -f values.yaml | kubectl apply -f -

# With kapp
ytt -f app.yaml -f values.yaml | kapp deploy -y -a spring-petclinic -f -

Adding an Overlay

In real-world scenarios, you may need to customize your deployment resources further. For instance, let’s add node selectors and tolerations to specify node scheduling rules for the Deployment.

Defining the Overlay

Here’s the overlay defined in overlay-deployment-tolerations-node-selectors.yaml:

#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Deployment"}),expects="1+"
---
spec:
  template:
    spec:
      #@overlay/match missing_ok=True
      nodeSelector:
        #@overlay/match missing_ok=True
        custom.tkg/node-type: infra
      #@overlay/match missing_ok=True
      tolerations:
        #@overlay/append
        - key: custom.tkg/node-type
          value: infra
          operator: Equal
          effect: NoSchedule

Key Concepts:

  1. Overlay Matchers:
    • overlay/match finds the Deployment resource by matching kind: Deployment.
    • The expects="1+" flag ensures at least one Deployment is found.
  2. Custom Logic:
    • A nodeSelector is added to select nodes labeled with custom.tkg/node-type: infra.
    • A toleration is appended to tolerate nodes with the taint custom.tkg/node-type=infra:NoSchedule.
  3. Handling Missing Sections:
    • overlay/match missing_ok=True ensures new sections like nodeSelector and tolerations are added if not already present.

Applying the Overlay

Run the following command to include the overlay in the rendering process:

ytt -f app.yaml -f values.yaml -f overlay-deployment-tolerations-node-selectors.yaml

Example Output:

apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: dev
  name: spring-petclinic
spec:
  selector:
    matchLabels:
      app: spring-petclinic
  template:
    metadata:
      labels:
        app: spring-petclinic
    spec:
      containers:
      - name: spring-petclinic
        image: springio/petclinic:latest
        ports:
        - name: http
          containerPort: 8080
      nodeSelector:
        custom.tkg/node-type: infra
      tolerations:
      - key: custom.tkg/node-type
        value: infra
        operator: Equal
        effect: NoSchedule
---
apiVersion: v1
kind: Service
metadata:
  namespace: dev
  name: spring-petclinic
  labels:
    app: spring-petclinic
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: http
  selector:
    app: spring-petclinic

The node selector and tolerations were successfully applied, enabling custom scheduling for your deployment.

Challenge Yourself

Try adding additional customizations to the Deployment, such as defining resource limits or mounting a volume. Use ytt overlays to append these configurations dynamically.
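As a starting point for the resource-limits part, here is one possible overlay (a sketch only; the CPU and memory figures are placeholder values) that adds a resources section to every container of every matched Deployment, using overlay.all to iterate over the containers list:

```yaml
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Deployment"}), expects="1+"
---
spec:
  template:
    spec:
      containers:
      #@overlay/match by=overlay.all, expects="1+"
      -
        #@overlay/match missing_ok=True
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
```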

ytt Overlays for TKG Packages

In this example, we’ll explore how to use ytt overlays to customize TKG packages. Often, TKG packages require customization for functionalities not provided out of the box. For this demonstration, we’ll use the Cert-Manager package as an example to apply node selectors and tolerations to its deployment resources. Cert-Manager is an ideal example because it contains multiple Deployment resources, making it a relevant and practical use case.

Retrieving the TKG Package Files

Before creating the overlay, it’s crucial to inspect the package files to understand the involved resources. This helps in building effective overlay logic.

Steps to Retrieve and Inspect Package Files:

  1. Get the Latest Package Version:

    PKG_VERSIONS=($(tanzu package available list cert-manager.tanzu.vmware.com -n tkg-system -o json | jq -r ".[].version" | sort -t "." -k1,1n -k2,2n -k3,3n))
    PKG_VERSION=${PKG_VERSIONS[-1]}
    echo "$PKG_VERSION"
    
  2. Retrieve the Package Image URL:

    PKG_IMAGE_URL=$(kubectl -n tkg-system get packages "cert-manager.tanzu.vmware.com.${PKG_VERSION}" -o jsonpath='{.spec.template.spec.fetch[0].imgpkgBundle.image}')
    echo $PKG_IMAGE_URL
    
  3. Pull the Package Files Locally:

    imgpkg pull -b "$PKG_IMAGE_URL" -o ./pkg-files-tmp
    

After running these commands, the package files will be available in the ./pkg-files-tmp/config directory. For this example, the file of interest is: ./pkg-files-tmp/config/_ytt_lib/bundle/config/upstream/cert-manager.yaml.

Inspecting the Package Files:

  • The cert-manager.yaml file contains three Deployment resources.
  • The schema.yaml file lists all configurable values for the package. Note: Node selectors and tolerations are not natively configurable in this schema, so we’ll use a ytt overlay to apply these settings.

Creating the Overlay

We’ll reuse the overlay from the Kubernetes Deployment example to apply the node selectors and tolerations to the Cert-Manager Deployments.

Overlay Contents:

overlay-deployment-tolerations-node-selectors.yaml:

#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Deployment"}),expects="1+"
---
spec:
  template:
    spec:
      #@overlay/match missing_ok=True
      nodeSelector:
        #@overlay/match missing_ok=True
        custom.tkg/node-type: infra
      #@overlay/match missing_ok=True
      tolerations:
        #@overlay/append
        - key: custom.tkg/node-type
          value: infra
          operator: Equal
          effect: NoSchedule

Key Features of the Overlay:

  1. Matcher Logic:
    • Matches all resources of kind: Deployment.
    • Ensures at least one Deployment resource is located (expects="1+").
  2. Custom Additions:
    • Adds a nodeSelector for nodes labeled custom.tkg/node-type: infra.
    • Appends a toleration for nodes with the taint custom.tkg/node-type=infra:NoSchedule.
  3. Handling Missing Sections:
    • #@overlay/match missing_ok=True ensures sections like nodeSelector and tolerations are added if they don’t already exist.

Applying the Overlay

To apply the Cert-Manager package with the overlay, run:

PKG_VERSIONS=($(tanzu package available list cert-manager.tanzu.vmware.com -n tkg-system -o json | jq -r ".[].version" | sort -t "." -k1,1n -k2,2n -k3,3n))
PKG_VERSION=${PKG_VERSIONS[-1]}
echo "$PKG_VERSION"

tanzu package install cert-manager \
--package "cert-manager.tanzu.vmware.com" \
--version "$PKG_VERSION" \
--ytt-overlay-file overlay-deployment-tolerations-node-selectors.yaml \
--namespace tkg-packages

This command installs the Cert-Manager package while applying the overlay to the package resources.

Verifying the Customizations

Once the package is deployed, inspect the Deployment resources to confirm that the node selectors and tolerations have been applied. For example, to verify the cert-manager Deployment:

kubectl get deployment cert-manager -n cert-manager -o yaml | grep -i tolerations: -A4

Example Output:

      tolerations:
      - effect: NoSchedule
        key: custom.tkg/node-type
        operator: Equal
        value: infra

The output confirms that the tolerations were added successfully.

Challenge Yourself

Explore further customizations for TKG packages using ytt overlays:

  1. Add resource limits and requests for the Deployments.
  2. Use overlays to modify other package components like ConfigMaps or Services.
  3. Experiment with dynamic inputs by combining overlays with values.yaml files.

Wrap-Up

In this blog post, we explored how ytt simplifies YAML templating for Kubernetes and TKG. From basic examples to advanced overlays, ytt’s flexibility enables dynamic, reusable configurations tailored to real-world use cases. Whether managing Kubernetes manifests or customizing TKG packages, ytt is a powerful tool for streamlining infrastructure-as-code workflows.