Deploying Tanzu SQL to Tanzu Kubernetes Grid Clusters
I’ve been reading up on the Tanzu SQL announcements and found a bit of time to have a look, along with some spare capacity on a Tanzu Kubernetes Grid (TKG) cluster deployed via the excellent TKG demo appliance fling.
Tanzu SQL enables us to operate a relational database-as-a-service function running on any cloud. It gives developers the flexibility to install and self-manage Tanzu SQL on any cloud, providing a consistent developer experience and, perhaps more importantly, consistency in disaster recovery strategies across clouds. Tanzu SQL promises that developers will be able to create dedicated, on-demand instances of SQL without operator intervention, in a service that works the same in any cloud, with integration into automation tooling and CI/CD pipelines such as Concourse for VMware Tanzu.
It means deployment of enterprise-configured, approved and mission-ready SQL databases within Kubernetes: no mucking about with the underlying infrastructure, just state what’s needed and it’ll deploy. I’ve included a sample YAML file below to demonstrate.
apiVersion: sql.tanzu.vmware.com/v1
kind: Postgres
metadata:
  name: pg-simon
spec:
  memory: 800Mi
  cpu: "0.8"
  storageClassName: standard
  storageSize: 100M
  pgConfig:
    dbname: pg-simon
    username: pgadmin
  serviceType: LoadBalancer
  highAvailability:
    enabled: false
The instructions below are adapted for TKG from those provided at the VMware Tanzu SQL with Postgres for Kubernetes documentation site.
Prerequisites
- Docker
- Helm
- Image Repository
- Cert Manager
- Kubernetes Cluster with a load-balancer (MetalLB, AVI, NSX-T, HAProxy)
- VMware Tanzu SQL with Postgres for Kubernetes
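A quick sanity check that the client tooling is in place (versions will vary with the appliance build):

docker version --format '{{.Server.Version}}'
helm version --short
kubectl version --short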
With the TKG demo appliance, everything is in place aside from Cert Manager and the VMware Tanzu Postgres images. The TKG demo appliance comes loaded with Docker, Helm and Harbor, and can deploy Kubernetes clusters to VMware Cloud on AWS or vSphere infrastructure. Once the Postgres images have been downloaded (registration required), copy the archive to the TKG appliance; to help keep track of locations I created a /home/simon directory to store the archive.
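As a sketch, copying the archive across could look like the below; the appliance address is a placeholder and the archive filename is an assumption based on the extracted directory name.

# Create the working directory on the appliance, then copy the archive across
ssh root@<tkg-appliance-ip> "mkdir -p /home/simon"
scp postgres-for-kubernetes-v1.0.0.tar.gz root@<tkg-appliance-ip>:/home/simon/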
Installing Cert Manager
kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.0.2 --set installCRDs=true
To verify the installation, look at everything under the cert-manager namespace and check that the pods are running without errors.
kubectl get all --namespace=cert-manager
Add Postgres Images to Docker Registry
Extract the Tanzu Postgres images on the TKG appliance
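Assuming the archive landed in /home/simon as above (the filename again being an assumption based on the extracted directory name):

cd /home/simon
tar xzf postgres-for-kubernetes-v1.0.0.tar.gz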
The extracted files are in the postgres-for-kubernetes-v1.0.0 directory; change into that directory and load the postgres-operator and postgres-instance images into the local Docker registry (remember Docker is already available on the TKG demo appliance).
docker load -i ./images/postgres-instance
docker load -i ./images/postgres-operator
Validation that the images are available can be performed via:
docker images "postgres-*"
Push Docker Images to TKG Harbor
Harbor is available as part of the TKG demo appliance. It can be referenced from within the appliance as ‘registry.rainpole.io’; externally it can be accessed by browsing to the IP of the TKG appliance on port 443. There is a TKG workshop that can be followed, with further information regarding the Harbor instance. The Harbor username is admin and the password is Tanzu1!.
Authenticate to the Harbor image repository
docker login -u admin -p Tanzu1! registry.rainpole.io/library
Once authenticated, the images in the docker registry can be tagged and uploaded to the Harbor registry.
docker tag postgres-operator:v1.0.0 registry.rainpole.io/library/postgres-operator:v1.0.0
docker push registry.rainpole.io/library/postgres-operator:v1.0.0
Do the same for the postgres-instance image, and both images will then be available in Harbor.
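For completeness, the equivalent tag and push commands for the instance image, mirroring those above:

docker tag postgres-instance:v1.0.0 registry.rainpole.io/library/postgres-instance:v1.0.0
docker push registry.rainpole.io/library/postgres-instance:v1.0.0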
If you want to know how to do something similar with the integrated Harbor image registry included with vSphere with Tanzu on VCF, take a look at my write-up from earlier in the year.
Deploying the Postgres Operator
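The Helm chart for the operator is included in the extracted archive, alongside a values.yaml file that needs editing so the operator and instance images are pulled from the Harbor registry rather than the default location. As a sketch, assuming the v1.0.0 field names (check the values.yaml shipped in your archive, as the keys can vary by release), the edits would look something like:

# values.yaml - point the operator at the Harbor registry (field names assumed from v1.0.0)
operatorImageRepository: "registry.rainpole.io/library/postgres-operator"
postgresImageRepository: "registry.rainpole.io/library/postgres-instance"
dockerRegistrySecretName: "regsecret"

The cluster also needs a docker-registry secret (named regsecret here, to match the value above) so it can pull from Harbor:

kubectl create secret docker-registry regsecret --docker-server=registry.rainpole.io --docker-username=admin --docker-password=Tanzu1!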

Create the Postgres Operator
With the information in the values.yaml file edited, the Postgres operator can be deployed via Helm.
helm install postgres-operator operator/
where operator/ is the directory containing the chart and the edited values.yaml file.
Once the Helm install has run, watch kubectl get all to see how the installation progresses.
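For example, assuming the operator lands in the default namespace:

watch kubectl get all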
Deploying a New Postgres Instance
Sample manifest files are included in the extracted VMware Tanzu SQL with Postgres for Kubernetes archive. The sample YAML file is named pg-instance-example.yaml; I’ve included it below for reference.
apiVersion: sql.tanzu.vmware.com/v1
kind: Postgres
metadata:
  name: pg-instance-example
spec:
  memory: 800Mi
  cpu: "0.8"
  storageClassName: standard
  storageSize: 100M
  pgConfig:
    dbname: pg-instance-example
    username: pgadmin
  serviceType: LoadBalancer
  highAvailability:
    enabled: false
  backupLocationSecret:
    name: s3-secret-example
Each element above is fairly intuitive, and for completeness a full reference of the available options and values is included in the Tanzu SQL documentation. Take a copy of the YAML and configure it to meet requirements. For testing I didn’t need anything too groundbreaking, so I changed the names and removed the S3 backup details.
apiVersion: sql.tanzu.vmware.com/v1
kind: Postgres
metadata:
  name: pg-simon
spec:
  memory: 800Mi
  cpu: "0.8"
  storageClassName: standard
  storageSize: 100M
  pgConfig:
    dbname: pg-simon
    username: pgadmin
  serviceType: LoadBalancer
  highAvailability:
    enabled: false
Deployment is via kubectl; I also created a namespace for the new Postgres database.
kubectl create ns pg-simon
kubectl apply -f pg-simon.yaml -n pg-simon
Deployment via the Tanzu SQL operator creates the Postgres resources and a monitor, and because the manifest defines a LoadBalancer service type, an external IP is also configured.
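To confirm what was created and pick up the external IP from the load-balanced service (using the names from the manifest above):

kubectl get all -n pg-simon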
Accessing the DB
The DB can be accessed by connecting interactively to the running pod
kubectl exec -it pg-simon-0 -n pg-simon -- bash -c "psql"
Or, of course, a tool such as pgAdmin can be used to access the DB on the external IP.
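Connecting with the psql command-line client would look something like the below; the <EXTERNAL-IP> placeholder is the load balancer address from the service above, and the username and database are those defined in the manifest (the generated password is held in a Kubernetes secret created alongside the instance).

# Replace <EXTERNAL-IP> with the address shown by kubectl get svc -n pg-simon
psql -h <EXTERNAL-IP> -p 5432 -U pgadmin -d pg-simon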
Depending on where you are connecting from, that might require some additional configuration. By default, access is restricted as per the pg_hba rules, which allow access to the db defined in the manifest, for the user defined in the manifest, from any location with a secure password.
This can be read by running the following commands
# To access psql on the container
kubectl exec -it pg-simon-0 -n pg-simon -- bash -c "psql"
# To display the pg_hba file rules
select * from pg_hba_file_rules;
You can edit this table from the container if needed: connect in with kubectl exec -it pg-simon-0 -n pg-simon -- bash, copy out the pg_hba.conf file somewhere, make the required changes, and write those changes back using cat > /pgsql/data/pg_hba.conf.
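As an alternative sketch, kubectl cp can handle the copy-out and copy-back (the local filename is arbitrary):

# Copy the file out of the pod to edit locally
kubectl cp pg-simon/pg-simon-0:/pgsql/data/pg_hba.conf ./pg_hba.conf
# After editing, copy it back into place
kubectl cp ./pg_hba.conf pg-simon/pg-simon-0:/pgsql/data/pg_hba.conf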
Once changes have been made, re-read the table as above to validate, and reload the database configuration:
select pg_reload_conf();
If that seems complicated, you could also write the changes from within psql itself by following this excellent guide at dbi services. As I wanted to test access from multiple locations and to multiple db objects, I set and tested permissions as defined below.
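As an illustration (a sketch of the intent, not my exact rules), a pg_hba.conf entry granting that access with md5 password authentication would look like:

# TYPE  DATABASE  USER  ADDRESS    METHOD
host    all       all   0.0.0.0/0  md5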
Which means I can connect from anywhere, to any DB, as any trusted user.
Summary
This post has been a quick demonstration of how to get Tanzu SQL configured utilising Tanzu Kubernetes Grid clusters to provide a relational database-as-a-service function. The appeal is in the ability for operators to create the DBaaS function aligned to enterprise requirements, networking and storage configurations, which a developer can then leverage to deploy a database without needing to know the detailed infrastructure configuration. To the developer, the same command syntax and tooling can be used to deploy to vSphere, VMC, AWS, Azure, Google or any other cloud.
No mucking about with the underlying infrastructure or cloud services, just state what’s needed and deploy.
Thanks
Simon