
3rd May 2020

Use an image hosted on Harbor – vSphere image registry

In the last post I added an image to the vSphere image registry, which is great – but how can I actually use that image?

The image I uploaded was for busybox, which provides access to a range of simple UNIX tools.  In this post I'm going to run the image from the image registry, try out some basic commands in the busybox pod and show how I can reconnect to it.

kubectl run busybox --image=10.255.100.3/simon-ns/busybox -i -n simon-ns

The above command will start a busybox pod from the image hosted on the vSphere image registry and keep it in the foreground.  If we wanted to deploy this as an ephemeral object we could add the switch '--restart=Never', which means the pod will not be restarted when we disconnect.
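
For reference, that ephemeral variant of the run command would look something like this (same image path as above):

kubectl run busybox --image=10.255.100.3/simon-ns/busybox -i --restart=Never -n simon-ns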

In the above screenshot you can see I've used the 'kubectl get nodes -o wide' command to view the internal IPs of the Kubernetes supervisor cluster nodes.  I've then deployed the busybox image to a pod and entered it interactively; as an example I've run 'ifconfig' to get the pod IP, sent a ping to one of the supervisor cluster nodes and performed a simple nslookup on the supervisor cluster IP.  Pretty useful, as those IP addresses and DNS names are only resolvable inside the supervisor cluster – they are not exposed externally.
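
If you want to repeat those checks yourself, the in-pod session looks roughly like this – the supervisor node IP is a placeholder, substitute one of the internal IPs returned by 'kubectl get nodes -o wide':

ifconfig
ping -c 3 [supervisor-node-ip]
nslookup [supervisor-node-ip]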

Because I didn't create this deployment with the '--restart=Never' flag, the pod is still available for connection.

Information about the deployment and pod status can be gathered from 'kubectl get deployments' and 'kubectl get pods' respectively.  Interestingly, the 'kubectl get pods' output includes a node field which tells us which ESXi host the pod is running on.  Sure enough, if I check the vSphere client I can see information about the running deployment and pod.
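
Scoped to the namespace used in this post, those commands are:

kubectl get deployments -n simon-ns
kubectl get pods -n simon-ns -o wide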

As the pod is still running, I can connect back into the busybox shell using the name of the deployed pod; the command is simply:

kubectl exec -it busybox-6f8cfd7975-rlcx6 -n simon-ns -- sh

There you have it – a little UNIX command box on the inside of the cluster to help with troubleshooting.

Note, when you want to delete this busybox pod remember to delete the deployment first – otherwise Kubernetes will keep on deploying pods to achieve the desired configuration state!  The deployment and pod deletion commands are 'kubectl delete deployment [deployment] -n [namespace]' and 'kubectl delete pod [podname] -n [namespace]' respectively.
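
With the names used in this post, deleting the deployment (which also removes the pods it manages) would simply be:

kubectl delete deployment busybox -n simon-ns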

Limitations of the vSphere Image Registry

Three limitations that I won't explore in detail in this post: the vSphere image registry does not appear to have integrations with either Notary for signing images or Clair for vulnerability scanning, and it is also not possible to upload Helm charts to it.  I've got to assume that these are features planned for integration into other VMware services (Carbon Black?) or integrations planned for future releases.

As far as I can work out so far, uploaded images are only available to the namespace projects that they are uploaded to.  For example, below I have created a new namespace called busybox-ns and used the same kubectl run command (plus --restart=Never) to deploy the busybox image, and received an error that the pod is not found.
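
For reference, the command in the new namespace is identical apart from the namespace switch:

kubectl run busybox --image=10.255.100.3/simon-ns/busybox -i --restart=Never -n busybox-ns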

A quick look at the pod resources ('kubectl get pods -n busybox-ns -o wide') returns the status 'ErrImagePull'.

Looking through the events, using grep to filter for busybox, the error is clear ('kubectl get events -n busybox-ns | grep busybox'):

Code 400: ErrorType(3) pull access denied, repository does not exist or may require authorisation: server message: insufficient_scope: authorisation failed

So it appears that the vSphere image registry with Harbor is architected to work on a one-to-one namespace-to-repository basis, which could be a limiting factor if you planned on using it as an internal shared registry for multiple development teams or departments to reference.  Of course, there is absolutely nothing stopping customers from deploying the full CNCF deployment of Harbor into a vSphere namespace to create this functionality.

Thanks, hopefully this is useful.

Simon