Microclimate Pod CrashLoopBackOff in IBM Cloud Private - ibm-cloud-private

I'm trying to deploy IBM Microclimate to IBM Cloud Private CE 2.1.0.3, as described in the documentation (https://github.com/IBM/charts/blob/master/stable/ibm-microclimate/README.md), but the Microclimate pod status shows CrashLoopBackOff and the Portal is not accessible (it shows a 503 Service Unavailable error in the browser). I tried looking at the logs for the pod, but that is not possible either. Has anyone faced an issue like this one before? Any hints on how to troubleshoot or solve the issue? Thanks!

That's not a lot of information to go on. If you'd like more interactive help, please ask in our Slack channel as per https://microclimate-dev2ops.github.io/community. If you want to debug it here, can you please post the results of: kubectl get pods, kubectl get ing, kubectl describe pods, helm list --tls, kubectl get deployments -o yaml. If you installed to a non-default namespace, please add --namespace [your-mc-ns] to each command.

Adding the command "mount --make-rshared /run" to the Vagrantfile for the ICP CE image solves this issue, and Microclimate then installs successfully. Reference: https://github.com/IBM/deploy-ibm-cloud-private/issues/139
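For reference, if the VM is already up, the same remount can be applied by hand before retrying the install (a quick sketch; it assumes you can SSH into the node with vagrant ssh):
vagrant ssh
sudo mount --make-rshared /run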

Related

DataHub installation on Minikube failing: "no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"" on elasticsearch setup

I'm following the DataHub deployment guide for Kubernetes from the documentation: https://datahubproject.io/docs/deploy/kubernetes
Setting up the local cluster with Minikube, I started by following the prerequisites section of the guide.
At first I tried changing some of the default values to try it locally (I had already installed it successfully on Google Kubernetes Engine, so I was trying different setups),
but on the first step of the installation I received the error:
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: resource mapping not found for name: "elasticsearch-master-pdb" namespace: "" from "": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"
ensure CRDs are installed first
The steps I followed after installing Minikube were exactly the steps presented on the page:
helm repo add datahub https://helm.datahubproject.io/
helm install prerequisites datahub/datahub-prerequisites
The error happens on step 2.
At first I reverted to the default configuration to rule out a mistake in my new values, but the error remained.
I expected that, after following the exact default steps, the installation would succeed locally, just as it did on GKE.
I got help browsing the DataHub Slack community and figured out a way to fix this error.
It was simply a Kubernetes version mismatch; I was able to fix it by forcing Minikube to start with Kubernetes version 1.19.0:
minikube start --kubernetes-version=v1.19.0
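If you want to confirm the API mismatch yourself (a quick check, not part of the DataHub guide), you can list the policy API versions the cluster serves:
kubectl api-versions | grep policy
On Kubernetes 1.19 this includes policy/v1beta1, which is what the elasticsearch-master-pdb manifest expects; on 1.25 and later only policy/v1 is served.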

RabbitMQ as Spring Cloud Bus in Kubernetes for Spring Boot Applications

I have developed Spring Boot applications. I have set up an admin server and RabbitMQ, as well as Spring Cloud Bus. When I hit an application's refresh endpoint, its properties are refreshed.
Can anyone please help me with how to set up RabbitMQ in Kubernetes now? I did some research and found in a few articles that it needs to be deployed as a "StatefulSet" rather than a "Deployment" (https://notallaboutcode.blogspot.de/2017/09/rabbitmq-on-kubernetes-container.html), but I could not work out exactly why. Any useful link on deploying RabbitMQ in Kubernetes would also help.
It depends on what you're looking to do and what tools you have available. I guess your current setup is much like that described in http://www.baeldung.com/spring-cloud-bus. One approach to porting that to Kubernetes might be to get your setup working with docker-compose first, and then port the docker-compose file to Kubernetes deployment descriptors.
A simple way to deploy RabbitMQ in Kubernetes would be to set up a Deployment using a RabbitMQ Docker image. An example of this is https://github.com/Activiti/activiti-cloud-examples/blob/fe732096b5a19de0ad44879a399053f6ae02b095/kubernetes/kubectl/infrastructure.yml#L17. (Notice that file isn't radically different from a docker-compose file, so you could port from one to the other.) But that won't persist data outside of the Pods, so if the cluster or the Pod(s) were to go down you'd lose message data; the persistence is ephemeral.
So to have non-ephemeral persistence you could instead use a StatefulSet, as in the example you point to. Another example is https://wesmorgan.svbtle.com/rabbitmq-cluster-on-kubernetes-with-statefulsets
If you are using Helm (or can use it) then you could use the RabbitMQ Helm chart, which uses a StatefulSet.
But if your only reason for needing the bus is to trigger refreshes when properties change, then there are alternative paths available with Kubernetes. I'm guessing you need the hot reloads, so you could look at using https://github.com/fabric8io/spring-cloud-kubernetes#propertysource-reload. Or if you need the config to come from git specifically, then you could look at http://fabric8.io/guide/develop/configuration.html. (If you didn't need the hot reloads or git, you could consider versioning your ConfigMaps and upgrading them together with your application upgrades, as in https://dzone.com/articles/configuring-java-apps-with-kubernetes-configmaps-a; a rough sketch of that approach follows.)
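A sketch of that ConfigMap-versioning approach, with placeholder names (myapp and myapp-config-v2 are illustrative, not from the articles above):
kubectl create configmap myapp-config-v2 --from-file=application.properties
kubectl set env deployment/myapp --from=configmap/myapp-config-v2
Because the pod template changes, the Deployment rolls out new pods that pick up the new configuration without needing the bus.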
If you have installed Helm in your cluster:
helm install stable/rabbitmq
This will install a RabbitMQ server on your cluster. The following commands obtain the password and the Erlang cookie; replace prodding-wombat-rabbitmq with whatever release name Helm assigns.
kubectl get secret --namespace default prodding-wombat-rabbitmq -o jsonpath="{.data.rabbitmq-password}" | base64 --decode
kubectl get secret --namespace default prodding-wombat-rabbitmq -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode
To connect to the pod:
export POD_NAME=$(kubectl get pods --namespace default -l "app=prodding-wombat-rabbitmq" -o jsonpath="{.items[0].metadata.name}")
Then port-forward to localhost so you can connect in your browser:
kubectl port-forward $POD_NAME 5672:5672 15672:15672
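With the port-forward running, you can sanity-check the setup against the management HTTP API (a sketch; user is a common default username for this chart, but check your values if it differs):
export RABBITMQ_PASSWORD=$(kubectl get secret --namespace default prodding-wombat-rabbitmq -o jsonpath="{.data.rabbitmq-password}" | base64 --decode)
curl -u user:$RABBITMQ_PASSWORD http://localhost:15672/api/overview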

expose OpenShift Online Docker registry

I am looking to push a custom docker image to OpenShift Online 3 to run container instances there. I have seen many instructions on forums / blogs about how to do this, but the first part of the process seems to be eluding me.
This is one of the references I'm using: link
I log in using the oc command:
oc login https://api.starter-us-west-2.openshift.com --token=xxxxxxx
This gets me in, and I can run the command to list the running services (one of which should be the Docker registry):
oc get svc
But the response I get is simply:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test-phil4 172.30.217.192 <none> 8080/TCP 13h
I was expecting to see a line for a Docker registry instance that I could connect to. I think I need to 'expose' this; the command should be:
oc expose service docker-registry
but without seeing the service there in the list of services, I'm not sure how I can do that, and the result is, predictably:
error: services "docker-registry" not found
I feel like this has to do with the permissions on my user: I have currently granted my user 'image-pusher', 'image-builder', 'registry-admin' and 'cluster-status'. There are many more options, most of which I don't seem to be able to apply.
Perhaps this is not possible with the free tier, or perhaps not available within the online version at all? Would anyone know how to go about connecting my existing Docker repo to the OpenShift registry I'm connected to and uploading my custom images?
Thanks,
Phil
OpenShift Online clusters have their registry exposed at registry.<cluster-id>.openshift.com. So, for your example, to log in to the registry for starter-us-west-2, after logging in to the cluster, you would run
docker login registry.starter-us-west-2.openshift.com -u $(oc whoami) -p $(oc whoami -t)
You can then push and pull from your project with
docker push registry.<cluster-id>.openshift.com/<project_name>/<image-name>:<image-tag>
docker pull registry.<cluster-id>.openshift.com/<project_name>/<image-name>:<image-tag>
Note: to docker push, you have to have already tagged your local image as registry.<cluster-id>.openshift.com/<project_name>/<image-name>:<image-tag>
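For example, with placeholder project and image names:
docker tag my-image:latest registry.starter-us-west-2.openshift.com/myproject/my-image:latest
docker push registry.starter-us-west-2.openshift.com/myproject/my-image:latest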

ibm-cloud-private CE - 2.1.0 Catalog - Error Loading Charts

I have just installed ICP CE 2.1.0 on Ubuntu 16.04 (one cluster, one master, one worker node, very basic installation). When opening the 'catalog' page (https://.......... :8443/catalog/), I get the message 'Error loading Charts'.
On the 'Admin > Repositories' page I can see ibm-charts https//blablabla and local-charts https://blablabla/helm-repo/....
The 'Admin > Metering' dashboard displays an error: 'E_DATA_QUERY_ERROR: The query for loginbootstrap failed with the response '500 Internal Server Error''.
I have made very few modifications to the config.yaml (and hosts) files in the cluster directory (I just configured password authentication). Maybe some more custom configuration is required.
I'm still discovering/learning this product; maybe there is an obvious explanation for this kind of behavior to an expert.
Thanks
Regarding the "error loading charts", check the following:
Deployments > helm-api > {click the pod name at the bottom} > logs.
Then in another tab open the Admin > Repositories page, click Sync Repositories, and watch the log in the other tab. Attempt to open the Catalog as well and watch the same log.
If you are seeing any Cloudant-related error, one possible way to resolve it is to delete the helm-api pod; it will reinitialize with the view and the error should go away.
There was possibly an issue connecting to Cloudant when the connection to it was first set up, so the helm-api pod needs a restart in order to add some files to Cloudant now that it has been initialized.
My understanding is that a fix will be going in to help automate this recovery step in the next release.
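A command-line sketch of that restart (the pod name carries a generated suffix, so look it up first; kube-system is assumed as the namespace):
kubectl -n kube-system get pods | grep helm-api
kubectl -n kube-system delete pod <helm-api-pod-name>
The Deployment recreates the pod automatically, which is what re-initializes the Cloudant views.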
As for the 'E_DATA_QUERY_ERROR: The query for loginbootstrap failed with the response '500 Internal Server Error'' message, that was supposedly fixed in the GA release. Are you certain that you have installed the latest ICP CE release from Docker Hub?
https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/installing/install_containers_CE.html
The two problems, the chart loading error and the metering 'loginbootstrap' error, are likely to have the same root cause: a problem communicating with the Cloudant database at the time of first startup, when the databases would be initialized. Restarting the helm-api pod would help the charts, and restarting the metering-server and then the metering-ui pods should resolve the metering error.
Today I saw the same issue on ICP 2.1.0.1 EE when navigating to the Catalog -> Helm charts page. The page loaded for a while and then ended with "error loading charts". The weird thing is I didn't do anything; I just left it, revisited after several hours, and it worked.
Next time, I will first try syncing the repositories (Manage -> Helm Repositories -> Sync repositories), then check the helm-api pod (kubectl here is running on Windows):
kubectl -n kube-system get pods |findstr helm-api
then delete the pod if it is not running, so that it gets recreated.

default fabric8 microservice errors out on integration test - Waiting for container:spring-boot. Reason:CrashLoopBackOff

I deployed fabric8 on Google Container Engine with 12 cores and 45 GB RAM, using gofabric8 0.4.69 to deploy fabric8 on GCE.
I tried to create a microservice, but it fails in the integration testing phase with the following error: "Waiting for container:spring-boot. Reason:CrashLoopBackOff"
Please help to resolve this.
Which quickstart were you trying?
It sounds like the application terminated. I wonder if this shows any output:
kubectl get pod
kubectl logs nameofpod
where nameofpod is the pod that is crashing.
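If the container is stuck in CrashLoopBackOff, the logs of the previous (crashed) run are often the useful ones:
kubectl logs nameofpod --previous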
BTW the new fabric8-maven-plugin version (3.1.45 or later) now has a nicer fabric8:run goal.
If you clone the git repository to your local file system and update the version of fabric8-maven-plugin you should be able to run it via:
mvn fabric8:run
Then you get to see the output of the Spring app in your console, to check whether something fails, etc.
