Harbor Registry – Automating LDAP/S Configuration – Part 2
This post continues our two-part series on automating LDAP configuration for Harbor Registry. In the previous post, we demonstrated how to achieve this using Ansible running externally to the cluster. However, external automation comes with challenges of its own, such as firewall restrictions or limited API access in some environments.
Note: make sure you review the previous post, as it provides additional background and clarification on this process, the LDAPS configuration, and more.
Here, we explore an alternative approach using Terraform, running the automation directly inside the Kubernetes cluster hosting Harbor. This method leverages native Kubernetes scheduling capabilities to run the configuration job in a fully declarative manner, and it does not require any network access to Harbor from the machine running the job.
In this approach, Terraform is executed inside a Kubernetes pod. The pod interacts with the Harbor REST API to configure LDAP using the terracurl Terraform provider.
Setup and Deployment
Clone the repository.
git clone https://github.com/itaytalmi/harbor-registry-ldap.git
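Then switch to the terraform-on-k8s folder, which contains everything used in this post:
cd harbor-registry-ldap/terraform-on-k8s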
Using an IDE of your choice, open the harbor-ldap-config-tf-deployment.yaml manifest under the terraform-on-k8s folder. This is a multi-document YAML manifest containing the required Kubernetes resources for the deployment of the job. Let’s break this down:
The harbor-ldap-configurator-tf-vars secret contains the inputs required by the Terraform module. Each key in this secret follows Terraform's TF_VAR_<NAME> naming convention, so the values are picked up automatically as input variables through the environment. Note that this is the only part of the manifest you will have to modify with your specific parameters. Everything else in the manifest is completely generic.
Note that here, the TF_VAR_harbor_url input is set to the internal hostname of the harbor-core Kubernetes service, in the format <HARBOR_CORE_SERVICE>.<HARBOR_NAMESPACE>.svc.cluster.local. If you are deploying the job to the same namespace where Harbor is running, you can also simply specify the harbor-core service name, without the namespace and the svc.cluster.local suffix, for example: http://harbor-core. This is sufficient within the same namespace, as Kubernetes CoreDNS provides name resolution natively.
apiVersion: v1
kind: Secret
metadata:
  name: harbor-ldap-configurator-tf-vars
  namespace: harbor
type: Opaque
stringData:
  TF_VAR_harbor_url: http://harbor-core.harbor.svc.cluster.local
  TF_VAR_admin_username: admin
  TF_VAR_admin_password: Kubernetes1!
  TF_VAR_ldap_config: |
    {
      ldap_url                        = "ldaps://cloudnativeapps.cloud:636"
      ldap_search_dn                  = "CN=k8s-ldaps,OU=ServiceAccounts,OU=cloudnativeapps,DC=cloudnativeapps,DC=cloud"
      ldap_search_password            = "Kubernetes1!"
      ldap_base_dn                    = "DC=cloudnativeapps,DC=cloud"
      ldap_filter                     = "objectclass=person"
      ldap_uid                        = "sAMAccountName"
      ldap_scope                      = 2 # Subtree
      ldap_group_base_dn              = "DC=cloudnativeapps,DC=cloud"
      ldap_group_search_filter        = "objectclass=group"
      ldap_group_attribute_name       = "sAMAccountName"
      ldap_group_admin_dn             = "CN=harbor-admins,OU=Groups,OU=cloudnativeapps,DC=cloudnativeapps,DC=cloud"
      ldap_group_membership_attribute = "memberof"
      ldap_group_search_scope         = 2 # Subtree
      ldap_verify_cert                = true
    }
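Before running the job, you can optionally sanity-check that the TF_VAR_harbor_url value is resolvable and reachable from inside the cluster. A minimal sketch using a throwaway pod (the pod name and curl image are my choice here, not part of the repository):
kubectl run harbor-url-check --rm -it --restart=Never -n harbor \
  --image=curlimages/curl --command -- \
  curl -s -o /dev/null -w '%{http_code}\n' http://harbor-core.harbor.svc.cluster.local
Any HTTP status code in the output confirms that DNS resolution and connectivity to the harbor-core service work.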
The harbor-ldap-configurator-tf-main ConfigMap contains the main Terraform logic (main.tf) as well as the JSON template used as the payload for the LDAP configuration (harbor_ldap_config.json.tpl):
apiVersion: v1
kind: ConfigMap
metadata:
  name: harbor-ldap-configurator-tf-main
  namespace: harbor
data:
  main.tf: |
    terraform {
      required_providers {
        terracurl = {
          source  = "devops-rob/terracurl"
          version = "1.2.1"
        }
      }
      required_version = ">= 1.3.0"
    }

    provider "terracurl" {}

    variable "harbor_url" {
      description = "Harbor hostname"
      type        = string
    }

    variable "admin_username" {
      description = "Harbor admin username"
      type        = string
    }

    variable "admin_password" {
      description = "Harbor admin password"
      type        = string
      sensitive   = true
    }

    variable "ldap_config" {
      description = "Harbor LDAP configurations"
      type        = map(string)
      sensitive   = true
    }

    data "terracurl_request" "ldap_configuration" {
      name            = "harbor_ldap_configuration"
      method          = "PUT"
      response_codes  = [200]
      skip_tls_verify = true
      url             = "${var.harbor_url}/api/v2.0/configurations"

      headers = {
        "Content-Type"  = "application/json"
        "Accept"        = "application/json"
        "Authorization" = "Basic ${base64encode("${var.admin_username}:${var.admin_password}")}"
      }

      request_body = templatefile("${path.module}/harbor_ldap_config.json.tpl", var.ldap_config)
    }

    data "terracurl_request" "ldap_verification" {
      name            = "harbor_ldap_verification"
      method          = "POST"
      response_codes  = [200]
      skip_tls_verify = true
      url             = "${var.harbor_url}/api/v2.0/ldap/ping"

      headers = {
        "Content-Type"  = "application/json"
        "Accept"        = "application/json"
        "Authorization" = "Basic ${base64encode("${var.admin_username}:${var.admin_password}")}"
      }

      request_body = templatefile("${path.module}/harbor_ldap_config.json.tpl", var.ldap_config)
    }

    output "harbor_ldap_connection_result" {
      value = jsondecode(data.terracurl_request.ldap_verification.response)
    }
  harbor_ldap_config.json.tpl: |
    {
      "auth_mode": "ldap_auth",
      "ldap_url": "${ldap_url}",
      "ldap_search_dn": "${ldap_search_dn}",
      "ldap_search_password": "${ldap_search_password}",
      "ldap_base_dn": "${ldap_base_dn}",
      "ldap_filter": "${ldap_filter}",
      "ldap_uid": "${ldap_uid}",
      "ldap_scope": ${ldap_scope},
      "ldap_group_base_dn": "${ldap_group_base_dn}",
      "ldap_group_search_filter": "${ldap_group_search_filter}",
      "ldap_group_attribute_name": "${ldap_group_attribute_name}",
      "ldap_group_admin_dn": "${ldap_group_admin_dn}",
      "ldap_group_membership_attribute": "${ldap_group_membership_attribute}",
      "ldap_group_search_scope": ${ldap_group_search_scope},
      "ldap_verify_cert": ${ldap_verify_cert}
    }
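For clarity, the two terracurl data sources above are roughly equivalent to the following curl calls (a sketch only: harbor_ldap_config.json stands for the rendered template, the credentials and URL come from the secret above, and the in-cluster URL means these would have to run from inside the cluster):
# Apply the LDAP configuration (PUT /api/v2.0/configurations)
curl -u 'admin:Kubernetes1!' -X PUT \
  -H 'Content-Type: application/json' \
  -d @harbor_ldap_config.json \
  http://harbor-core.harbor.svc.cluster.local/api/v2.0/configurations

# Verify LDAP connectivity (POST /api/v2.0/ldap/ping)
curl -u 'admin:Kubernetes1!' -X POST \
  -H 'Content-Type: application/json' \
  -d @harbor_ldap_config.json \
  http://harbor-core.harbor.svc.cluster.local/api/v2.0/ldap/ping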
harbor-ldap-configurator is the Kubernetes Job that executes Terraform. It mounts the aforementioned Secret and ConfigMap into the pod and runs Terraform there. The Job requires only one successful completion, meaning that once a pod completes successfully, the Job stops scheduling new pods.
apiVersion: batch/v1
kind: Job
metadata:
  name: harbor-ldap-configurator
  namespace: harbor
  labels:
    app.kubernetes.io/name: harbor-ldap-configurator
spec:
  backoffLimit: 20 # Number of retries before considering the job failed
  completions: 1   # Total number of successful completions required
  parallelism: 1   # Maximum number of pods to run concurrently
  template:
    metadata:
      labels:
        app.kubernetes.io/name: harbor-ldap-configurator
    spec:
      restartPolicy: OnFailure
      containers:
        - name: harbor-ldap-configurator
          image: hashicorp/terraform:1.9.8
          command: ["/bin/sh"]
          args: ["-c", "terraform init && terraform apply -auto-approve"]
          workingDir: /harbor-ldap-configurator
          envFrom:
            - secretRef:
                name: harbor-ldap-configurator-tf-vars
          volumeMounts:
            - name: harbor-ldap-configurator-tf-main
              mountPath: /harbor-ldap-configurator/main.tf
              subPath: main.tf
              readOnly: true
            - name: harbor-ldap-configurator-tf-main
              mountPath: /harbor-ldap-configurator/harbor_ldap_config.json.tpl
              subPath: harbor_ldap_config.json.tpl
              readOnly: true
          resources:
            requests:
              memory: 500Mi
              cpu: 500m
            limits:
              memory: 500Mi
              cpu: 500m
      volumes:
        - name: harbor-ldap-configurator-tf-main
          configMap:
            name: harbor-ldap-configurator-tf-main
Also note that the harbor namespace is set on all of these resources in the manifest. If Harbor runs in a different namespace, make sure you update it throughout the manifest.
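For example, assuming GNU sed, you can update all occurrences in one go (remember that the namespace also appears in the TF_VAR_harbor_url value, so update that as well):
sed -i 's/namespace: harbor/namespace: <YOUR_NAMESPACE>/' harbor-ldap-config-tf-deployment.yaml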
To deploy the job, run:
kubectl apply -f harbor-ldap-config-tf-deployment.yaml
Example output:
secret/harbor-ldap-configurator-tf-vars created
configmap/harbor-ldap-configurator-tf-main created
job.batch/harbor-ldap-configurator created
The job should immediately schedule a pod. Let’s look at the running pod.
kubectl get pod -l app.kubernetes.io/name=harbor-ldap-configurator -n harbor
Example output:
NAME                             READY   STATUS    RESTARTS   AGE
...
harbor-ldap-configurator-xx8hx   1/1     Running   0          14s
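Instead of repeatedly checking the pod, you can also block until the job completes using kubectl wait:
kubectl wait --for=condition=complete job/harbor-ldap-configurator -n harbor --timeout=5m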
Let’s look at the pod logs:
kubectl logs -l app.kubernetes.io/name=harbor-ldap-configurator -f -n harbor
Example output:
Initializing the backend...
Initializing provider plugins...
- Finding devops-rob/terracurl versions matching "1.2.1"...
- Installing devops-rob/terracurl v1.2.1...
- Installed devops-rob/terracurl v1.2.1 (self-signed, key ID 1D2DFE25F35292AC)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
data.terracurl_request.ldap_configuration: Reading...
data.terracurl_request.ldap_verification: Reading...
data.terracurl_request.ldap_configuration: Read complete after 1s [id=harbor_ldap_configuration]
data.terracurl_request.ldap_verification: Read complete after 1s [id=harbor_ldap_verification]
Changes to Outputs:
  + harbor_ldap_connection_result = {
      + success = true
    }
You can apply this plan to save these new output values to the Terraform
state, without changing any real infrastructure.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
harbor_ldap_connection_result = {
  "success" = true
}
If you have the following entry in the output, you are all set:
harbor_ldap_connection_result = {
  "success" = true
}
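As an additional, optional check, you can query Harbor's configurations endpoint and confirm that the authentication mode has changed. A minimal sketch using a throwaway curl pod (the pod name and image are my choice; the credentials and URL come from the secret above):
kubectl run harbor-config-check --rm -it --restart=Never -n harbor \
  --image=curlimages/curl --command -- \
  curl -s -u 'admin:Kubernetes1!' \
  http://harbor-core.harbor.svc.cluster.local/api/v2.0/configurations
The auth_mode entry in the response should now show ldap_auth.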
Let’s look at the pod again. The status should now be Completed.
kubectl get pod -l app.kubernetes.io/name=harbor-ldap-configurator -n harbor
Example output:
NAME                             READY   STATUS      RESTARTS   AGE
...
harbor-ldap-configurator-xx8hx   0/1     Completed   0          96s
Let’s look at the job. It should be complete as well.
kubectl get job -l app.kubernetes.io/name=harbor-ldap-configurator -n harbor
Example output:
NAME                       STATUS     COMPLETIONS   DURATION   AGE
harbor-ldap-configurator   Complete   1/1           90s        114s
Harbor is now configured for LDAP authentication, and you should be able to authenticate to it using an LDAP user, exactly as described in the previous post.
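Since the harbor-ldap-configurator-tf-vars secret contains credentials, you may want to remove the job resources once everything has completed:
kubectl delete -f harbor-ldap-config-tf-deployment.yaml
Alternatively, you could set ttlSecondsAfterFinished on the Job spec (not set in the manifest above) to have Kubernetes garbage-collect the finished job automatically, though that would not remove the Secret and ConfigMap.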
Wrap Up
In this post, we leveraged native Kubernetes scheduling capabilities to run the configuration job in a fully declarative manner, without requiring any network access to Harbor from the machine running the job. This approach is a great fit if you deploy and manage your Kubernetes clusters and workloads entirely through GitOps and do not want to deal with manual post-deployment tasks. The configuration can easily be deployed to Kubernetes as shown in this post, or even wrapped in a Helm chart.
It is worth mentioning that this approach could also be implemented using Ansible. I personally prefer Terraform for these scenarios, and I also wanted to provide an alternative example. In the same GitHub repository, I've added the Terraform module on its own, under the terraform folder. This module can be run anywhere outside of Kubernetes, just like any other Terraform module (essentially the same approach as in the first part of this series, but written in Terraform).
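A minimal sketch of running the standalone module, assuming you supply the same inputs via TF_VAR_ environment variables (the URL and credentials below are placeholders):
cd harbor-registry-ldap/terraform
export TF_VAR_harbor_url="https://harbor.example.com"   # placeholder
export TF_VAR_admin_username="admin"
export TF_VAR_admin_password="<YOUR_PASSWORD>"          # placeholder
# TF_VAR_ldap_config takes the same HCL map used in the secret earlier
terraform init
terraform apply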
Whichever approach you follow, both methods ensure consistent, automated LDAP configuration for Harbor Registry.
