I have been trying to set up Spring Cloud Data Flow Server for Kubernetes locally using minikube. I have followed the installation instructions in the link here: SCDF Installation Reference
I've been getting the below error for the SCDF server:
11:32:52.095 [main] DEBUG io.fabric8.kubernetes.client.Config - Trying to configure client namespace from Kubernetes service account namespace path...
11:32:52.096 [main] DEBUG io.fabric8.kubernetes.client.Config - Found service account namespace at: [/var/run/secrets/kubernetes.io/serviceaccount/namespace].
2018-04-24 11:33:14.348 WARN 1 --- [ main] o.s.cloud.kubernetes.StandardPodUtils : Failed to get pod with name:[scdf-server-869d56967c-97lsd]. You should look into this if things aren't working as you expect. Are you missing serviceaccount permissions?
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/pods/scdf-server-869d56967c-97lsd. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "scdf-server-869d56967c-97lsd" is forbidden: User "system:serviceaccount:default:default" cannot get pods in the namespace "default".
Below are the version details:
Spring Cloud Data Flow Server : 1.4.0.RELEASE
Kubernetes Local Deployment using minikube
Kubernetes Version : 1.10
The latest release of minikube enabled RBAC by default.
For RBAC-enabled clusters, we have added a note on this matter in the installation section.
"The latest releases of kubernetes have enabled RBAC on the api-server. If your target platform has RBAC enabled you must ask a cluster-admin to create the roles and role-bindings for you before deploying the dataflow server. They associate the dataflow service account with the roles it needs to be run with."
For minikube, however, you can run the following command and retry installing.
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
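To confirm the binding took effect, you can ask the API server whether the service account from the error message can now read pods (a quick sanity check; swap in whichever namespace and service account your server deployment actually uses):
# Should print "yes" once the required role binding is in place
kubectl auth can-i get pods --namespace default --as=system:serviceaccount:default:default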
Alternatively, if you're using the Helm chart, you can disable RBAC and install the chart on minikube with the following:
helm init
helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com
helm repo update
helm install --name my-release --set server.service.type=NodePort --set rbac.create=false incubator/spring-cloud-data-flow
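Once the chart is up, you can look up the NodePort the server was exposed on and open the dashboard. The service name below is an assumption based on the release name my-release; check kubectl get svc for the exact name in your cluster:
# Find the NodePort assigned to the data flow server service and print the dashboard URL
export NODE_PORT=$(kubectl get svc my-release-data-flow-server -o jsonpath='{.spec.ports[0].nodePort}')
echo "http://$(minikube ip):${NODE_PORT}/dashboard"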
From the installation guide, step 7: https://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/1.4.0.RELEASE/reference/htmlsingle/#_deploying_using_kubectl
The latest releases of kubernetes have enabled RBAC on the api-server. If your target platform has RBAC enabled you must ask a cluster-admin to create the roles and role-bindings for you before deploying the dataflow server. They associate the dataflow service account with the roles it needs to be run with.
$ kubectl create -f src/kubernetes/server/server-roles.yaml
$ kubectl create -f src/kubernetes/server/server-rolebinding.yaml
Did you perform those steps?
I am using Spring Cloud Config Server on OpenShift, with a Git repository as the backend. I am getting the below error when I update any property in the Git repo.
2019-12-25 12:10:36.988 WARN 1 --- [nio-8080-exec-1] o.s.cloud.kubernetes.StandardPodUtils : Failed to get pod with name:[config-server-32-66p47]. You should look into this if things aren't working as you expect. Are you missing serviceaccount permissions?
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/ci-dev/pods/config-server-32-66p47. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "ms-config-server-32-66p47" is forbidden: User "system:serviceaccount:ci-dev:default" cannot get resource "pods" in API group "" in the namespace "ci-dev".
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:472) ~[kubernetes-client-3.1.10.jar!/:na]
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:409) ~[kubernetes-client-3.1.10.jar!/:na]
Your app has the spring-cloud-kubernetes dependency on the classpath, and there is no Kubernetes service account configured for this deployment.
That is why it is throwing the error; it is not related to Spring Cloud Config Server itself. Configure a service account, or remove the mentioned dependency if you don't use it, and you will get past this problem and be able to configure the config server.
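If you go the service-account route, one way to clear the Forbidden error is to grant pod read access to the default service account in the ci-dev namespace. This is only a sketch with illustrative role and binding names, and it assumes you (or a cluster admin) are allowed to create roles there:
# Allow the default service account in ci-dev to read pods
kubectl create role pod-reader --verb=get,list,watch --resource=pods --namespace ci-dev
kubectl create rolebinding default-pod-reader --role=pod-reader --serviceaccount=ci-dev:default --namespace ci-dev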
I have developed Spring Boot applications. I have set up admin and RabbitMQ as well as Spring Cloud Bus. When I hit the refresh endpoints of the applications, their properties are refreshed.
Can anyone please help me with how to set up RabbitMQ in Kubernetes now? I did some research and found in a few articles that it needs to be deployed as a "StatefulSet" rather than a "Deployment": https://notallaboutcode.blogspot.de/2017/09/rabbitmq-on-kubernetes-container.html. I could not understand why exactly this needs to be done. Any useful link on deploying RabbitMQ in Kubernetes would also help.
It depends on what you're looking to do and what tools you have available. I guess your current setup is much like the one described in http://www.baeldung.com/spring-cloud-bus. One approach to porting that to Kubernetes might be to get your setup working with docker-compose first, and then port that docker-compose file to Kubernetes deployment descriptors.
A simple way to deploy RabbitMQ in k8s would be to set up a Deployment using a RabbitMQ docker image, as sketched below. An example of this is https://github.com/Activiti/activiti-cloud-examples/blob/fe732096b5a19de0ad44879a399053f6ae02b095/kubernetes/kubectl/infrastructure.yml#L17. (Notice that file isn't radically different from a docker-compose file, so you could port from one to the other.) But that won't persist data outside of the Pods, so if the cluster or the Pod(s) were to go down you'd lose message data. The persistence is ephemeral.
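For illustration, a minimal non-persistent Deployment plus Service might look roughly like the following. The names, labels, and image tag are assumptions rather than anything from the linked example, and as noted above the message data only lives inside the pod:
# Minimal, non-persistent RabbitMQ Deployment + Service (illustrative names)
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3-management
        ports:
        - containerPort: 5672   # AMQP
        - containerPort: 15672  # management UI
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    app: rabbitmq
  ports:
  - name: amqp
    port: 5672
  - name: management
    port: 15672
EOF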
So to have non-ephemeral persistence, you could instead use a StatefulSet as in the example you point to. Another example is https://wesmorgan.svbtle.com/rabbitmq-cluster-on-kubernetes-with-statefulsets
If you are using helm (or can use helm) then you could use the rabbitmq helm chart, which uses a StatefulSet.
But if your only reason for needing the bus is to trigger refreshes when property changes happen, then there are alternative paths available with Kubernetes. I'm guessing you need the hot reloads, so you could look at using https://github.com/fabric8io/spring-cloud-kubernetes#propertysource-reload. Or if you need the config to come from git specifically, then you could look at http://fabric8.io/guide/develop/configuration.html. (If you didn't need the hot reloads or git, you could consider versioning your ConfigMaps and upgrading them with your application upgrades, as in https://dzone.com/articles/configuring-java-apps-with-kubernetes-configmaps-a)
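As a rough sketch of the ConfigMap-based option (the app/ConfigMap name and the property key below are placeholders): spring-cloud-kubernetes can read a ConfigMap named after the application as a PropertySource and, with its reload feature enabled, pick up changes when the ConfigMap is updated.
# Create a ConfigMap holding the app's properties (my-spring-app is a placeholder)
kubectl create configmap my-spring-app --from-literal=application.properties="greeting.message=Hello from ConfigMap"
# Updating the ConfigMap later is what would trigger a reload in the app
kubectl create configmap my-spring-app --from-literal=application.properties="greeting.message=Updated" --dry-run -o yaml | kubectl apply -f -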
If you have installed helm in your cluster
helm install stable/rabbitmq
This will install a RabbitMQ server on your cluster. The following commands obtain the password and the Erlang cookie; replace prodding-wombat-rabbitmq with whatever name Kubernetes decides to give the pod.
kubectl get secret --namespace default prodding-wombat-rabbitmq -o jsonpath="{.data.rabbitmq-password}" | base64 --decode
kubectl get secret --namespace default prodding-wombat-rabbitmq -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode
To connect to the pod:
export POD_NAME=$(kubectl get pods --namespace default -l "app=prodding-wombat-rabbitmq" -o jsonpath="{.items[0].metadata.name}")
Then port-forward to localhost so you can connect in your browser:
kubectl port-forward $POD_NAME 5672:5672 15672:15672
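To wire a Spring Boot / Spring Cloud Bus app inside the cluster to this broker, you can point it at the chart's service with the standard Spring Boot environment variables. The service and secret names follow the Helm release name (prodding-wombat here), the deployment name my-spring-app is a placeholder, and the default username may differ depending on the chart version:
# Point a Spring Boot app at the RabbitMQ service created by the chart
kubectl set env deployment/my-spring-app \
  SPRING_RABBITMQ_HOST=prodding-wombat-rabbitmq \
  SPRING_RABBITMQ_PORT=5672 \
  SPRING_RABBITMQ_USERNAME=user \
  SPRING_RABBITMQ_PASSWORD="$(kubectl get secret --namespace default prodding-wombat-rabbitmq -o jsonpath='{.data.rabbitmq-password}' | base64 --decode)"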
I'm developing a POC on IBM HyperLedger Blockchain. I have a business network developed and deployed in IBM Cloud. I can generate a working local REST API, but cannot make it work in the cloud, on the deployed IP.
I'm following this guide:
https://ibm-blockchain.github.io/interacting/
You just have to execute the following command:
./create/create_composer-rest-server.sh --business-network-card MY_BIZNET_CARD_NAME
But it doesn't deploy anything, and I get the following output (more related to Kubernetes than to blockchain).
Preparing yaml file for create composer-rest-server
Creating composer-rest-server pod
Running: kubectl create -f /Users/sm/jsblock/ibm-container-service/cs-offerings/scripts/../kube-configs/composer-rest-server.yaml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
the server doesn't have a resource type "svc"
Creating composer-rest-server service
Running: kubectl create -f /Users/sm/jsblock/ibm-container-service/cs-offerings/scripts/../kube-configs/composer-rest-server-services-free.yaml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Composer rest server created successfully
Any ideas? Thanks very much.
You need to ensure you have a correct kube config set up. Step 10 in https://ibm-blockchain.github.io/setup/ provides the details for setting up KUBECONFIG; the error suggests that it is either not configured or not configured correctly.
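As a quick check once KUBECONFIG is set (the path below is only a placeholder for the config file produced in that step): kubectl falls back to localhost:8080 when no kubeconfig is found, which is exactly the "connection to the server localhost:8080 was refused" error in your output.
# Placeholder path; use the config file from step 10 of the setup guide
export KUBECONFIG=/path/to/kube-config.yml
kubectl get nodes   # should list the cluster nodes if the config is picked up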
The document you refer to https://ibm-blockchain.github.io/interacting/ is being updated and should be available soon.
When you run the command ./create/create_composer-rest-server.sh --business-network-card MY_BIZNET_CARD_NAME, MY_BIZNET_CARD_NAME should be the name of the Network Admin card for the network you deployed, NOT the PeerAdmin card, so it will be something like ./create/create_composer-rest-server.sh --business-network-card admin@perishable-network
Looks like it's an issue of access control. You should make sure again that you are running with the Local Admin configuration; it will help you to run queries.
In IBM Cloud Private, when I stop a Docker container, it automatically restarts. How can it be stopped?
Here's a bit more information:
When you work with containers on IBM Cloud Private, you're actually deploying individual Pods or, more likely, Deployments.
When a Pod is managed by a ReplicaSet, DaemonSet, or StatefulSet, there are semantics that cause the pod to be rescheduled if it fails unexpectedly. Deleting a Pod isn't distinguished from other failures within a pod (application crashes or worker node failures).
You should be using kubectl to work with pods. You can configure kubectl from User > Configure Client in the top right corner of the web UI. Copy and paste the commands for your environment into your console. Validate that the IP or network address is resolvable from your client machine (control this value in the install cluster/config.yaml with cluster_access_ip).
Example kubectl configure steps (Copy from User > Configure Client in the web UI):
kubectl config set-cluster mycluster.icp --server=https://[NETWORK_ADDRESS]:8001 --insecure-skip-tls-verify=true
kubectl config set-context mycluster.icp-context --cluster=mycluster.icp
kubectl config set-credentials mycluster.icp-user --token=[TOKEN]
kubectl config set-context mycluster.icp-context --user=mycluster.icp-user --namespace=default
kubectl config use-context mycluster.icp-context
Then view running pods:
kubectl get pods [--namespace default]
These pods represent the basic unit of deployment: containers + volumes + labels + links to ConfigMaps and Secrets.
These pods are generally deployed from other management "sets":
kubectl get deployments [--namespace default]
kubectl get daemonsets [--namespace default]
kubectl get statefulsets [--namespace default]
These collections represent policy + pods; behaviors about how to recover are built into each construct.
You probably have a deployment, so to remove the container --
kubectl get deployments -o wide [--namespace default]
Find the deployment of interest, and delete it:
kubectl delete deployments my-deployment [--namespace default]
Now the deployment will be removed, along with all associated pods.
You need to stop the kubelet first, otherwise it will automatically start up exited containers. You can run "systemctl stop kubelet".
Kubernetes restarts failed containers (pods); you should scale the deployment to 0 instances or delete the deployment. Both can be achieved with kubectl (kubectl scale --replicas=0 ...) or using the ICP console.
You should change the number of replicas to zero.
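Concretely, the scale-down looks like this (my-deployment is a placeholder for the Deployment that owns the pod):
# Scale to zero so no replacement pod is created, and back up later if needed
kubectl scale deployment my-deployment --replicas=0 --namespace default
kubectl scale deployment my-deployment --replicas=1 --namespace default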
Deployed fabric8 in Google Container Engine with 12 cores and 45 GB RAM. Used gofabric8 0.4.69 for deploying fabric8 on GCE.
Tried to create a microservice, but it is failing in the integration-testing phase, throwing the following error: "Waiting for container:spring-boot. Reason:CrashLoopBackOff"
Please help to resolve this.
Which quickstart were you trying?
It sounds like the application terminated. I wonder if this shows any output:
kubectl get pod
kubectl logs nameofpod
where nameofpod is the pod that is crashing.
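A couple of extra commands that usually help narrow down a CrashLoopBackOff (nameofpod as above):
kubectl describe pod nameofpod      # shows events, exit codes and restart counts
kubectl logs nameofpod --previous   # logs from the previous, crashed container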
BTW the new fabric8-maven-plugin version (3.1.45 or later) now has a nicer fabric8:run goal.
If you clone the git repository to your local file system and update the version of fabric8-maven-plugin you should be able to run it via:
mvn fabric8:run
Then you can see the output of the Spring app in your console and check whether something fails.