TKG: Updating Pinniped Configuration and Addressing Common Issues
Most of the TKG engagements I’ve been involved in included Pinniped for Kubernetes authentication, and on many occasions the configuration provided to Pinniped was incorrect or only partially correct. LDAPS integration is a common culprit: many of the environments I have seen use Active Directory as the authentication source, and the LDAPS certificate, bind username, and password that Pinniped requires are often specified incorrectly. Since this configuration is not validated during deployment, you end up with Pinniped in an invalid state on your management cluster.
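Before you even touch Pinniped, it is worth validating the LDAPS details on their own. The following is a quick sanity check I like to run from a jumpbox, assuming the openssl and OpenLDAP client tools are available; the hostname, bind DN, password, and search base below are placeholders you would replace with your own values:

# Inspect the certificate presented by the LDAPS endpoint
openssl s_client -connect ldaps.example.com:636 -showcerts </dev/null 2>/dev/null | \
  openssl x509 -noout -subject -issuer -enddate

# Confirm the bind DN and password actually authenticate against the directory
# (you may need to export LDAPTLS_CACERT=<CA_FILE> if the AD CA is not trusted locally)
ldapsearch -H ldaps://ldaps.example.com:636 \
  -D "CN=svc-bind,OU=ServiceAccounts,DC=example,DC=com" \
  -w 'YourBindPassword' \
  -b "DC=example,DC=com" \
  "(sAMAccountName=someuser)" dn

If either command fails, Pinniped will fail with the same values, so it is worth fixing this before updating the configuration.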
Another example is a day-2 scenario: I have seen the self-signed certificates issued by cert-manager for Pinniped expire, and since there is currently no automatic renewal process, Pinniped becomes unavailable for authentication. In such scenarios, you may see error messages similar to the following when attempting to authenticate through Pinniped:
Unprocessable Entity: No upstream providers are configured
The exact error message varies depending on the underlying issue, but this is one I have seen in a variety of situations.
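If you suspect expired certificates, you can check the expiration dates directly on the management cluster. A minimal check, assuming an admin kubeconfig; the namespace and secret names vary between TKG versions, so treat the ones below as placeholders:

# List the cert-manager Certificate resources backing Pinniped and Dex
kubectl get certificates -A | grep -Ei 'pinniped|dex'

# Check the expiration date of a serving certificate
# (replace <TLS_SECRET_NAME> and the namespace with the values from the command above)
kubectl get secret <TLS_SECRET_NAME> -n pinniped-supervisor \
  -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -enddate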
There are also many cases where you simply need to update an existing Pinniped configuration because the environment has changed, for example when the LDAPS certificate mentioned above is rotated or the LDAPS username/password changes.
In TKG, Pinniped is deployed as a package managed by Kapp Controller, and the package values are stored in a Kubernetes secret on the management cluster. Since updating this configuration manually is not straightforward and is quite error-prone, I put together a two-part shell script that simplifies the process.
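If you want to see the raw values the package is currently using before running the scripts, you can inspect that secret directly. A quick look, assuming the secret follows the <TKG_MGMT_CLUSTER_NAME>-pinniped-addon naming convention and lives in the tkg-system namespace (this may differ slightly between TKG versions):

kubectl get secret <TKG_MGMT_CLUSTER_NAME>-pinniped-addon -n tkg-system \
  -o jsonpath='{.data.values\.yaml}' | base64 -d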
Usage Instructions
Clone my GitHub repository.
git clone https://github.com/itaytalmi/vmware-tkg.git
If you are on TKG version 1.x.x, cd into vmware-tkg/helpers/update-pinniped-config/tkg-v1. If you are on TKG version 2.x.x, cd into vmware-tkg/helpers/update-pinniped-config/tkg-v2.
cd vmware-tkg/helpers/update-pinniped-config/tkg-v2
Ensure the scripts are executable.
chmod +x *.sh
Execute the 01-get-pinniped-config.sh script to get the current Pinniped config (and create a backup before modifying it). Use the following syntax:
./01-get-pinniped-config.sh <TKG_MGMT_CLUSTER_NAME>
For example:
./01-get-pinniped-config.sh it-tkg-mgmt-cls
Example output:
Base directory: .
✔ successfully logged in to management cluster using the kubeconfig it-tkg-mgmt-cls
Checking for required plugins...
All required plugins are already installed and up-to-date
Tanzu context it-tkg-mgmt-cls has been set
Setting kubectl context
Switched to context "it-tkg-mgmt-cls-admin@it-tkg-mgmt-cls".
kubectl context it-tkg-mgmt-cls-admin@it-tkg-mgmt-cls has been set
Exporting current Pinniped configuration
Creating a backup of the original Pinniped configuration
Done
If necessary, modify the pinniped-addon-values.yaml manifest and update any parameters you need to change. The parameters most commonly modified in this manifest are under the dex.config.ldap section.
If you do not need to modify any configuration, skip this step.
For example:
...
dex:
  app: dex
  create_namespace: true
  namespace: tanzu-system-auth
  organization: vmware
  commonname: tkg-dex
  config:
    connector: ldap
    frontend:
      theme: tkg
    web:
      https: 0.0.0.0:5556
      tlsCert: /etc/dex/tls/tls.crt
      tlsKey: /etc/dex/tls/tls.key
    expiry:
      signingKeys: 90m
      idTokens: 5m
      authRequests: 90m
      deviceRequests: 5m
    logger:
      level: info
      format: json
    staticClients:
    - id: pinniped
      redirectURIs:
      - https://0.0.0.0/callback
      name: pinniped
      secret: dummyvalue
    ldap:
      host: cloudnativeapps.cloud:636
      insecureNoSSL: false
      startTLS: null
      rootCA: null
      rootCAData: LS0tLS1CRUdJTiBDRVJUSUZJQ0F....
      bindDN: CN=tkg-ldaps,OU=ServiceAccount,OU=cloudnativeapps,DC=cloudnativeapps,DC=cloud
      BIND_PW_ENV_VAR: YourP@ssw0rd!@#
      usernamePrompt: LDAP Username
      insecureSkipVerify: false
      userSearch:
        baseDN: DC=cloudnativeapps,DC=cloud
        filter: (objectClass=person)
        username: sAMAccountName
        idAttr: DN
        emailAttr: DN
        nameAttr: sAMAccountName
        scope: sub
      groupSearch:
        baseDN: DC=cloudnativeapps,DC=cloud
        filter: (objectClass=group)
        nameAttr: cn
        scope: sub
        userMatchers:
        - userAttr: DN
          groupAttr: member
...
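Before moving on, it can be worth confirming the edited file is still valid YAML, since a broken indentation level will break the package values. A minimal check, assuming Python 3 with PyYAML is installed on your workstation (any YAML linter will do):

python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1])); print("YAML is valid")' pinniped-addon-values.yaml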
Once done, execute the 02-update-pinniped-config.sh script to apply the updated configuration and reconcile the Pinniped package. Use the following syntax:
./02-update-pinniped-config.sh <TKG_MGMT_CLUSTER_NAME>
For example:
./02-update-pinniped-config.sh it-tkg-mgmt-cls
Example output:
Base directory: .
✔ successfully logged in to management cluster using the kubeconfig it-tkg-mgmt-cls
Checking for required plugins...
All required plugins are already installed and up-to-date
Tanzu context it-tkg-mgmt-cls has been set
Setting kubectl context
Switched to context "it-tkg-mgmt-cls-admin@it-tkg-mgmt-cls".
kubectl context it-tkg-mgmt-cls-admin@it-tkg-mgmt-cls has been set
Base64-encoding the updated Pinniped configuration file
Patching Pinniped configuration on Kubernetes
secret/it-tkg-mgmt-cls-pinniped-addon patched
Cleaning up old Pinniped Kubernetes deployments
deployment.apps "pinniped-supervisor" deleted
job.batch "pinniped-post-deploy-job" deleted
namespace "tanzu-system-auth" deleted
...
job.batch "pinniped-post-deploy-job" deleted
...
Cleaning up old Pinniped sessions and credentials
Done
The 02-update-pinniped-config.sh script triggers a complete redeployment of the Pinniped package by deleting the existing resources, including the pinniped-supervisor and tanzu-system-auth namespaces, deployments, and related objects. Once the Kapp Controller reconciliation process completes, your Pinniped deployment is a fresh one: new certificates are issued by cert-manager, and if you have modified the configuration as shown above, your changes are applied to the new deployment. Based on my experience in the field, this is the most straightforward way to update a Pinniped deployment and address most of the common issues.
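To follow the reconciliation and confirm the fresh deployment came up cleanly, something like the following should work, assuming the Pinniped App custom resource is named pinniped and lives in the tkg-system namespace on the management cluster:

# Watch the Pinniped package reconcile
kubectl get app pinniped -n tkg-system -w

# Verify the new Pinniped and Dex pods are running
kubectl get pods -n pinniped-supervisor
kubectl get pods -n tanzu-system-auth

Once the package reports a successful reconcile, you can also generate a non-admin kubeconfig using tanzu management-cluster kubeconfig get --export-file <PATH> and attempt a login with it to confirm authentication works end to end.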

