
24th May 2020

Tanzu Mission Control – Adding a vSphere 7 Tanzu Kubernetes Cluster

One of the great parts of my job is that I get to collaborate and learn from some seriously clever folk in my team and across the company.  Over the past few weeks I’ve been working in the same lab as Dean Lewis.  We’ve been exploring Cloud Foundation 4, Tanzu, Bitnami, Openshift and how those components integrate with the existing VMware SDDC portfolio.

Tanzu Mission Control (TMC) promises to be a big part of that portfolio, stating that it can help “Operate and secure your Kubernetes infrastructure and modern apps across teams and clouds”, so let’s take a quick look.

Policy Based Kubernetes Cluster Management

TMC provides the mechanisms for an administrator to define and enforce policies that control how Kubernetes clusters behave, with definitions at the Organisational, Cluster Group, Cluster, Workspace and TMC-managed Namespace layers.  Access management policies can be defined for all of these objects, while image registry and networking policies are defined at the logical workspace layer.

Cluster Groups

Cluster groups are a logical construct to which clusters are added.  Creation through the web interface is simple and intuitive.  There is the opportunity to add a description to the group along with labels for the object.

Attaching a Cluster

I’m attaching a Kubernetes cluster hosted on a Cloud Foundation 4 workload domain.  For TMC to successfully interrogate namespace and workload objects we need to configure security policies first, by creating the following YAML and then applying it to the cluster.

PSP, Role and RoleBinding

tmc-psp.yaml

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: permissive
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
  hostPorts:
  - min: 100
    max: 100

tmc-clusterrole.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-role-permissive
rules:
- apiGroups:
  - extensions
  resourceNames:
  - permissive
  resources:
  - podsecuritypolicies
  verbs:
  - use

tmc-clusterrolebindings.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-role-binding-permissive
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-role-permissive
subjects:
  - kind: Group
    name: system:serviceaccounts
    namespace: vmware-system-tmc

The namespace referenced in this last YAML file will be created when we attach the cluster.
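
With the three files saved, applying them is a single kubectl step each; a minimal sketch, assuming the filenames above and a kubeconfig context already pointing at the workload cluster:

# Apply the PodSecurityPolicy, ClusterRole and ClusterRoleBinding defined above
kubectl apply -f tmc-psp.yaml
kubectl apply -f tmc-clusterrole.yaml
kubectl apply -f tmc-clusterrolebindings.yaml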

Attach Cluster

Attaching a Kubernetes cluster to TMC is a two-stage process.  First, the logical cluster object is created in TMC; this creates and hosts a YAML file for the installation.  You can either reference this YAML file directly from kubectl, or download it locally and reference it from there.

When creating the logical object in TMC the cluster is linked to a cluster group, and a description and labels can be added.  In the above screenshot I’ve created labels that identify that this cluster is running on vSphere 7 and VCF 4.

Either run the command as specified above, or download the file locally as mentioned.
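
As a rough sketch of that second stage (the URL and any local filename are generated per cluster by TMC, so the values below are only placeholders):

# Apply the TMC-generated attach manifest straight from its hosted URL
kubectl apply -f "<attach-manifest-url-from-tmc>"

# Or download it first and apply the local copy
curl -o attach-manifest.yaml "<attach-manifest-url-from-tmc>"
kubectl apply -f attach-manifest.yaml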

Watch the Installation

The first listed action above is to create the namespace ‘vmware-system-tmc‘.  This can be used to track the installation progress with a simple watch command, ‘watch -n 5 kubectl get all -n vmware-system-tmc‘, which I think is a pretty cool way to track what is happening and to spot any issues early.
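
For reference, that looks like the following; the second command is simply an alternative using kubectl’s own watch flag:

# Refresh the full resource listing in the TMC agent namespace every 5 seconds
watch -n 5 kubectl get all -n vmware-system-tmc

# Alternatively, stream pod changes as the agent extensions roll out
kubectl get pods -n vmware-system-tmc -w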

The final task from TMC is to verify connectivity back to the cluster.

In the above screenshot you can see the cluster object in all its TMC glory, with an Openshift cluster companion that is part of the project Dean is working on.  It is worth stressing that TMC is not an opinionated solution: it does not care where the cluster is running or what version of kubernetes is being run.  Something that aligns with VMware’s vision that can be summed up as “enable delivery of any app on any cloud to any device“.

TMC Views

With connectivity validated, TMC provides an overview of the cluster, including resource utilisation, version, labels, and component, agent and extension health.

Views are extended to show Node, Namespace and Workload information.

Each of the listed resources can be clicked through to dig deeper into your kubernetes environment, right down to the information in individual pods, including the source YAML linked to the object.
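
This is broadly the information you could otherwise pull from the cluster directly; a rough kubectl equivalent, with placeholder names:

# List the workloads in a namespace, then dump the source YAML for a single pod
kubectl get all -n <namespace>
kubectl get pod <pod-name> -n <namespace> -o yaml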

Inspections

The observant amongst you may have noticed the inspection section in the cluster overview above.  TMC provides three inspections that can be performed against a cluster object: Lite, CIS benchmark and Conformance.  Each of these inspections is described in the TMC documentation, reproduced here for convenience;

  • The Conformance inspection validates the binaries running on your cluster and ensures that your cluster is properly installed, configured, and working. You can view the generated report from within Tanzu Mission Control to assess and address any issues that arise. For more information, see the Kubernetes Conformance documentation at https://github.com/cncf/k8s-conformance/tree/master/docs.
  • The CIS benchmark inspection evaluates your cluster against the CIS Benchmark for Kubernetes published by the Center for Internet Security.
  • The Lite inspection is a node conformance test that validates whether nodes meet requirements for Kubernetes. For more information, see Validate node setup in the Kubernetes documentation.

The three inspection types can be performed from the action menu of the cluster object;

Tracking this progress from the kubernetes cluster side, you can see new inspection pods being created in the vmware-system-tmc namespace.
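
A quick way to follow that from the command line (just a sketch; the exact pod names depend on the inspection run):

# Watch pods appear and complete in the TMC agent namespace while the inspection runs
kubectl get pods -n vmware-system-tmc --watch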

Results are accessible from the cluster and can be downloaded as a bundle in JSON output.

As you would expect, each of the findings in the report carries detailed analysis of what has been found and what objects are impacted.

Workspaces

Workspaces are a logical object within TMC that namespaces are added to.  As mentioned above they provide policy enforcement for image registry and network access.  New namespaces can either be created within a workspace (on any TMC managed cluster object)

Or existing namespaces can be attached to a workspace.

This is important to consider, because only namespaces associated with a workspace can have image registry and network policy defined and enforced.  We can see the details for any other namespaces but policy is not TMC managed.

Simply put, the image registry policy defines where a namespace is permitted to download images from, either privately or publicly hosted.  Adding locations to this policy is simple, and the format allows wildcards to be used, as you can see below.

A network policy defines the permitted networking within a workspace, whereby traffic can be blocked or allowed.
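
Conceptually this is the same idea as a native Kubernetes NetworkPolicy applied across the workspace’s namespaces.  As an illustration only (hand-written here, not necessarily the object TMC itself creates), a deny-all-ingress policy for the tkg-veducate namespace used later in this post would look like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: tkg-veducate
spec:
  podSelector: {}
  policyTypes:
  - Ingress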

Policy Enforcement example

Within the namespace tkg-veducate referenced above, I deployed a manifest referencing an object that required an image from a blocked image registry.  Looking in the events for the namespace and filtering for the string ‘failed to match‘, it’s clear that access to the image is being blocked as it does not match an allowed image policy.
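
The filtering itself is just standard kubectl and grep, along these lines:

# Pull the namespace events and filter for the image policy rejection message
kubectl get events -n tkg-veducate | grep 'failed to match'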

This information is also presented from TMC itself;

Mission Control

So there we have it: with Tanzu Mission Control it is simple to manage cluster and workspace policies, to add kubernetes clusters from any provider, and to gain visibility into the performance, resource consumption and issues across the entire kubernetes estate.

When I said any provider I did mean any provider; below we can see Azure Kubernetes Service (AKS) being managed alongside Openshift and Tanzu kubernetes.

For details of how the process works for Openshift and Azure (spoiler: it’s the same), give Dean a follow on twitter and check out his vEducate blog, in particular his fantastic series on Tanzu Mission Control!


Thanks

Simon