vRealize Orchestrator 8.0.1 installation errors
As part of something else that I've been working on I needed to install vRealize Orchestrator (vRO) into the lab. I hadn't installed the latest versions of vRO, but I was aware that the architecture had changed from earlier versions and is now built upon Kubernetes.
vRO Deployment
vRO is deployed from an OVA downloaded from the MyVMware portal; the file for vRO 8.0.1 has the catchy name of "O11N_VA-8.0.1.7355-15296158_OVF10.ova". Anyway, standard practice applies: allocate an IP address ahead of the deployment and configure DNS entries, as if there is one thing guaranteed to cause installation errors it is clashing IPs or missing DNS entries.
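A quick sanity check worth doing before starting the deployment is confirming that both the forward and reverse lookups resolve. A minimal sketch, reusing the vro_FQDN placeholder from later in this post and an illustrative IP of my own:

nslookup vro_FQDN
nslookup 192.168.110.50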
Work through the deployment steps, completing the required information, and take note that when the deployment asks for the hostname it is expecting it in FQDN format if you are using static IPs.
After the OVA deployment is complete, on first boot you will see services come online and the vRO Kubernetes elements start to build out;
When the build finishes you'll be left with a familiar blue login screen on the appliance console, and you can check that the build is complete by browsing to the URL of the appliance, "https://vro_FQDN/vco/".
vRO Install errors
What I was expecting to see was the getting started page for vRO. What I was greeted by instead was this;
When selecting the OVA deployment options I checked the box to enable SSH to the appliance, which allows me to troubleshoot from PuTTY; however, everything that follows is also possible when stood in front of the server.
Logging onto the appliance I wanted to see what was happening from a Kubernetes standpoint, so from the appliance I ran "kubectl get all --all-namespaces | less" to show me all the Kubernetes resource types running on the appliance. Looking at the output it was obvious that there are crashed pods in the 'prelude' namespace. I've changed the focus of the command below so I can better show the data.
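The narrowed-down command only appears as a screenshot in the original, but focusing on just the pods in that namespace is along these lines (a sketch, assuming kubectl is run directly on the appliance):

kubectl get pods -n prelude -o wide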
The next step is to look at the logs of that pod;
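The pod name comes from the previous output; with that substituted in, the log and describe calls are the standard kubectl ones (the placeholder name below is mine, not taken from the original output):

kubectl logs -n prelude <crashing-pod-name>
kubectl describe pod -n prelude <crashing-pod-name>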
Two errors in the above output jump out at me: the DB is not configurable, and there is a problem with the username being used. This is something we've seen before with some appliances where root passwords have expired. Something else I noticed whilst troubleshooting is that there is a tiller pod deployed in the kube-system namespace, which tells me that vRO is being deployed via a Helm chart.
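To confirm that for yourself, you can check for the tiller pod and, assuming the Helm client is available on the appliance's path (an assumption on my part), list the releases it manages:

kubectl get pods -n kube-system | grep tiller
helm list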
Sure enough, under /opt/charts/vco/templates we have a deployment.yaml file. I have no real experience of editing Helm charts, so at this point I went to Google and searched for the path to the deployment.yaml file along with vRealize Orchestrator; the first hit described a similar problem to the one I was experiencing and provided details of how to edit the deployment.yaml to fix the issue.
Using vi to edit the deployment.yaml file I made the suggested changes;
- "sed -i 's/root:.*/root:x:18135:0:99999:7:::/g' /etc/shadow && sed -i 's/vco:.*/vco:x:18135:0:99999:7:::/g' /etc/shadow && /init_run.sh"
For those unfamiliar with vi: '/' allows you to search the file for a string, 'i' switches to insert mode, 'esc' exits insert mode, and ':wq!' writes the file out and quits.
As per the article, change directory to '/opt/scripts' and execute './deploy.sh', which cleans up the existing environment and attempts to redeploy. It's going to run through many steps and the text will fly by. I've captured screenshots from the start of the process and from a notable point where it starts the PostgreSQL deployment below.
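For completeness, that step is simply:

cd /opt/scripts
./deploy.sh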
It's going to loop through and do its thing; the best bet is to step away and make a cup of tea to avoid any temptation to interfere. I just walked away and set up a watch in another terminal against the URL I'm expecting to come online: 'watch -n 5 curl -k https://vro_FQDN/vco/'
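If you'd rather not leave a watch running, a one-off check against the same URL (plus a quick look at the pods) does the same job; both commands here are illustrative rather than taken from the original screenshots:

curl -k -I https://vro_FQDN/vco/
kubectl get pods -n prelude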
Eventually the deploy script will finish, the curl command will connect, and then we know the install is online;
Perfect, vRO is now deployed and ready to use.
Hopefully this is useful to someone
Thanks
Simon