I have developed Spring Boot applications. I have set up admin and RabbitMQ as well as Spring Cloud Bus. When I hit the refresh endpoint of an application, it refreshes the properties for that application.
Can anyone please help me with how to set up RabbitMQ in Kubernetes now? I did some research and found in a few articles that it needs to be deployed as a "StatefulSet" rather than a "Deployment": https://notallaboutcode.blogspot.de/2017/09/rabbitmq-on-kubernetes-container.html. I could not understand exactly why this needs to be done. Any useful link on deploying RabbitMQ in Kubernetes would also help.
It depends on what you're looking to do and what tools you have available. I guess your current setup is much like that described in http://www.baeldung.com/spring-cloud-bus. One approach to porting that to Kubernetes might be to get your setup working with docker-compose first; you could then port that docker-compose file to Kubernetes deployment descriptors.
A simple way to deploy RabbitMQ in Kubernetes would be to set up a Deployment using a RabbitMQ docker image. An example of this is https://github.com/Activiti/activiti-cloud-examples/blob/fe732096b5a19de0ad44879a399053f6ae02b095/kubernetes/kubectl/infrastructure.yml#L17. (Notice that file isn't radically different from a docker-compose file, so you could port from one to the other.) But that won't persist data outside of the Pods, so if the cluster or the Pod(s) were to go down you'd lose message data; the storage is ephemeral.
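As a rough sketch of that approach (the image tag, names and ports here are assumptions, not taken from the linked file), such a Deployment could look something like this:
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3-management   # official image including the management UI
        ports:
        - containerPort: 5672          # AMQP
        - containerPort: 15672         # management UI
EOF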
So to have non-ephemeral persistence you could instead use a StatefulSet, as in the example you point to. A StatefulSet gives each pod a stable network identity and its own PersistentVolumeClaim, which is what RabbitMQ needs to keep its message store across restarts (and to form clusters reliably). Another example is https://wesmorgan.svbtle.com/rabbitmq-cluster-on-kubernetes-with-statefulsets
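A minimal sketch of the StatefulSet variant (again, the names, image and storage size are assumptions) showing the headless Service and the per-pod volume claim that provide the stable identity and durable storage:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  clusterIP: None                 # headless Service: gives each pod a stable DNS name
  selector:
    app: rabbitmq
  ports:
  - name: amqp
    port: 5672
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: rabbitmq           # ties pod identities to the headless Service
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3-management
        ports:
        - containerPort: 5672
        volumeMounts:
        - name: data
          mountPath: /var/lib/rabbitmq   # RabbitMQ's data directory
  volumeClaimTemplates:                  # one PersistentVolumeClaim per pod, kept across restarts
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
EOF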
If you are using helm (or can use helm) then you could use the rabbitmq helm chart, which uses a StatefulSet.
But if your only reason for needing the bus is to trigger refreshes when property changes happen, then there are alternative paths available with Kubernetes. I'm guessing you need the hot reloads, so you could look at using https://github.com/fabric8io/spring-cloud-kubernetes#propertysource-reload. Or, if you need the config to come from git specifically, you could look at http://fabric8.io/guide/develop/configuration.html. (If you didn't need the hot reloads or git, then you could consider versioning your ConfigMaps and upgrading them with your application upgrades, as sketched below, like in https://dzone.com/articles/configuring-java-apps-with-kubernetes-configmaps-a )
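For the versioned-ConfigMap route, a rough sketch (the names here are made up, not taken from the linked article) would be to ship a new ConfigMap with each release and point the new Deployment revision at it:
kubectl create configmap myapp-config-v2 --from-file=application.properties
# then reference myapp-config-v2 (instead of myapp-config-v1) in the new Deployment revision,
# e.g. as a mounted volume or via envFrom, and roll the Deployment as part of the upgrade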
If you have installed helm in your cluster:
helm install stable/rabbitmq
This will install a RabbitMQ server on your cluster. The following commands obtain the password and Erlang cookie; replace prodding-wombat-rabbitmq with whatever name Helm/Kubernetes decides to give your release.
kubectl get secret --namespace default prodding-wombat-rabbitmq -o jsonpath="{.data.rabbitmq-password}" | base64 --decode
kubectl get secret --namespace default prodding-wombat-rabbitmq -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode
To connect to the pod:
export POD_NAME=$(kubectl get pods --namespace default -l "app=prodding-wombat-rabbitmq" -o jsonpath="{.items[0].metadata.name}")
Then port-forward to localhost so you can connect in your browser:
kubectl port-forward $POD_NAME 5672:5672 15672:15672
I'm using a Kubernetes cluster in Azure running an ingress controller. The ingress controller routes to different services via a given context root.
To add another service and connect it to my ingress I built a simple shell script that looks like this:
kubectl apply -f $1'-svc.yaml'
# some script magic here to add a new route in the hello-world-ingress.json
kubectl apply -f 'hello-world-ingress.json'
I tested the script on my local machine and everything works as expected. Now I want to trigger the script with an HTTP rest call on Azure.
Does anyone have an idea how to do that? So far I know:
I need the Azure CLI with kubectl to run the kubectl commands
I need something to build the HTTP trigger. I tried using Azure Functions, but I wasn't able to install the Azure CLI in Azure Functions on the Azure Portal, and I wasn't able to install the Azure CLI + Azure Functions in a Docker container.
Does anyone have an idea how to trigger my shell script via HTTP in Azure in an environment where the Azure cli exists?
The easiest way, in my opinion, is to set up an Azure instance with kubectl and the Azure CLI configured to talk to your cluster, and on that same server set up something like shell2http. For example:
shell2http -export-all-vars /mybash "yourbash.sh"
shell2http -form /apply "kubectl apply -f $v'-svc.yaml'"
shell2http -export-all-vars /domore "domore.sh"
Where $v above is the name of your deployment.
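Once shell2http is running (it listens on port 8080 by default), the script can then be triggered with a plain HTTP call from anywhere that can reach the instance, for example (the server name below is a placeholder, and exactly how query parameters map to variables depends on the shell2http flags you use):
curl "http://<your-azure-instance>:8080/mybash"
curl "http://<your-azure-instance>:8080/domore"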
I'm developing a POC on IBM Hyperledger blockchain. I have a business network developed and deployed in IBM Cloud. I can generate a working REST API locally, but I cannot make it work in the cloud, on the deployed IP.
I'm following this guide:
https://ibm-blockchain.github.io/interacting/
You just have to execute the following command:
./create/create_composer-rest-server.sh --business-network-card MY_BIZNET_CARD_NAME
But it doesn't deploy anything, and I get the following output (more related to Kubernetes than to blockchain).
Preparing yaml file for create composer-rest-server
Creating composer-rest-server pod
Running: kubectl create -f /Users/sm/jsblock/ibm-container-service/cs-offerings/scripts/../kube-configs/composer-rest-server.yaml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
the server doesn't have a resource type "svc"
Creating composer-rest-server service
Running: kubectl create -f /Users/sm/jsblock/ibm-container-service/cs-offerings/scripts/../kube-configs/composer-rest-server-services-free.yaml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Composer rest server created successfully
Any ideas? Thanks very much.
You need to ensure you have a correct kube config set up. Step 10 in https://ibm-blockchain.github.io/setup/ provides the details to set up KUBECONFIG; the error suggests that it is either not configured or not configured correctly.
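For reference, that step amounts to pointing kubectl at your IBM Cloud cluster, roughly like this (the cluster name, zone and path below are assumptions; the exact export line is printed by the CLI for your own cluster):
bx cs cluster-config <your-cluster-name>
# the command prints an export line similar to:
export KUBECONFIG=/Users/sm/.bluemix/plugins/container-service/clusters/<your-cluster-name>/kube-config-hou02-<your-cluster-name>.yml
kubectl get pods    # should now reach the cluster instead of localhost:8080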
The document you refer to https://ibm-blockchain.github.io/interacting/ is being updated and should be available soon.
When you run the command ./create/create_composer-rest-server.sh --business-network-card MY_BIZNET_CARD_NAME, the card should be the Network Admin card for the network you deployed, NOT the PeerAdmin card, so it will be something like ./create/create_composer-rest-server.sh --business-network-card admin@perishable-network
Looks like it's an issue of access control. You should make sure again that you are running with the Local Admin configuration. It will help you to run queries.
I am looking to push a custom docker image to OpenShift Online 3 to run container instances there. I have seen many instructions on forums / blogs about how to do this, but the first part of the process seems to be eluding me.
This is one of the references I'm using: link
I log in using the oc command:
oc login https://api.starter-us-west-2.openshift.com --token=xxxxxxx
This gets me in and I can run the command to return the running services (one of which should be the docker instance):
oc get svc
But the response I get is simply:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test-phil4 172.30.217.192 <none> 8080/TCP 13h
I was expecting to see lines for a docker registry instance that I could connect to. I think I need to 'expose' this; the command should be:
oc expose service docker-registry
but without seeing the service there in the list of services, I'm not sure how I can do that, and the result is, predictably:
error: services "docker-registry" not found
I feel like this is to do with the permissions on my user - I have currently granted my user 'image-pusher', 'image-builder', 'registry-admin' and 'cluster-status'. There are many more options, most of which I don't seem to be able to apply.
Perhaps this is not possible with the free-tier, or perhaps not available within the online version at all? Would anyone know how to go about connecting my existing docker repo to the OpenShift repo I'm connected to and uploading my custom images?
Thanks,
Phil
OpenShift Online clusters have their registry exposed at registry.<cluster-id>.openshift.com. So, for your example, to login to the registry for starter-us-west-2, after logging in to the cluster, you would run
docker login registry.starter-us-west-2.openshift.com -u $(oc whoami) -p $(oc whoami -t)
You can then push and pull from your project with
docker push registry.<cluster-id>.openshift.com/<project_name>/<image-name>:<image-tag>
docker pull registry.<cluster-id>.openshift.com/<project_name>/<image-name>:<image-tag>
Note: to docker push you have to have already tagged your local image as registry.<cluster-id>.openshift.com/<project_name>/<image-name>:<image-tag>
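Putting it together for the starter-us-west-2 example above (the project and image names below are placeholders):
docker tag my-image:latest registry.starter-us-west-2.openshift.com/myproject/my-image:latest
docker push registry.starter-us-west-2.openshift.com/myproject/my-image:latest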
In IBM Cloud Private when stopping a Docker container, it automatically restarts. How can it be stopped?
Here's a bit more information:
When you work with containers on IBM Cloud Private, you're actually deploying individual Pods or, more likely, Deployments.
When a Pod is managed by a ReplicaSet, DaemonSet, or StatefulSet, there are semantics that cause the Pod to be rescheduled if it fails unexpectedly. Deleting a Pod isn't distinguished from other failures within a Pod (application crashes or worker node failure).
You should be using kubectl to work with pods. You can configure kubectl from User > Configure Client in the top right corner of the web UI. Copy and paste the commands for your environment into your console. Validate that the IP or network address is resolvable from your client machine (control this value in the install cluster/config.yaml with cluster_access_ip).
Example kubectl configure steps (Copy from User > Configure Client in the web UI):
kubectl config set-cluster mycluster.icp --server=https://[NETWORK_ADDRESS]:8001 --insecure-skip-tls-verify=true
kubectl config set-context mycluster.icp-context --cluster=mycluster.icp
kubectl config set-credentials mycluster.icp-user --token=[TOKEN]
kubectl config set-context mycluster.icp-context --user=mycluster.icp-user --namespace=default
kubectl config use-context mycluster.icp-context
Then view running pods:
kubectl get pods [--namespace default]
These pods represent the basic unit of deployment: containers + volumes + labels + links to ConfigMaps and Secrets.
These pods are generally deployed from other management "sets":
kubectl get deployments [--namespace default]
kubectl get daemonsets [--namespace default]
kubectl get statefulsets [--namespace default]
These collections represent policy + pods; behaviors about how to recover are built into each construct.
You probably have a Deployment, so to remove the container:
kubectl get deployments -o wide [--namespace default]
Find the deployment of interest, and delete it:
kubectl delete deployments my-deployment [--namespace default]
Now the deployment will be removed, along with all associated pods.
You need to stop the kubelet first, otherwise it will automatically start up exited containers. You can run "systemctl stop kubelet".
Kubernetes restarts failed containers (Pods); you should scale the deployment to 0 instances or delete the deployment. Both can be achieved with kubectl (kubectl scale --replicas=0 ...) or using the ICP console.
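For example, assuming the deployment is named my-deployment:
kubectl scale deployment my-deployment --replicas=0 [--namespace default]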
You should change the number of replicas to zero.
I have configured a MySQL cluster service and I am using that service name instead of the hostname in the JDBC URL in my application.properties. It is not resolving. But when I use the minikube URL, it connects correctly. Shouldn't DNS resolution happen for the JDBC URL in application.properties for a Java project as well?
Just as @sfgroups mentioned, it is highly likely that the service has not been properly registered. Maybe you are using a different namespace, or the service is simply not available. In order to check that:
Run kubectl get svc and kubectl get endpoints to check whether the service is registered and the mysql pods are selected. It may sound silly, but I advise you to check that the service name you are using is correct.
If it is registered, try kubectl get pods, get the ID of your JDBC pod and launch kubectl exec -ti <ID> nslookup <servicename>. This will give you a hint as to whether DNS resolution is working or not.
If it is not resolving, then check in minikube addons list that dns is enabled. If it is disabled, enable it (you will need to wait a little bit) and try again; see the commands below.
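For example (the addon name and the service/database names below are assumptions and may differ in your setup):
minikube addons list                  # check whether the dns addon shows as enabled
minikube addons enable kube-dns       # the addon may be called "kube-dns" or "dns" depending on the minikube version
# once resolution works, the JDBC URL can use the service's cluster DNS name, e.g.
# jdbc:mysql://mysql.default.svc.cluster.local:3306/mydb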