I have been trying to deploy a Spring Boot application on a Kubernetes cluster, but somehow I cannot access the REST endpoint from outside the cluster.
Here are the steps I performed:
Set up the Kubernetes cluster using Kubespray, following the guide - Kubernetes Cluster setup using Kubespray
Pushed the Spring Boot Docker image to Docker Hub
Created a Kubernetes deployment
vagrant@node1:~/spring-boot$ kubectl create deployment demo --image=rahulwagh17/kubernetes:jhooq-k8s-springboot
deployment.apps/demo created
Exposed the deployment with external IP = 1.1.1.1
kubectl expose deployment demo --type=LoadBalancer --name=demo-service --external-ip=1.1.1.1 --port=8080
service/demo-service exposed
This is how my deployment looks
vagrant@node1:~/spring-boot$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
demo 1/1 1 1 24s
This is how my services look
vagrant@node1:~/spring-boot$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-service LoadBalancer 10.233.31.159 1.1.1.1 8080:30099/TCP 13s
kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 23h
I can curl the REST endpoint within the cluster without a problem
vagrant@node1:~/spring-boot$ curl 10.233.31.159:8080/hello
Hello - Jhooq-k8s
Problem I am facing - when I try to curl the REST endpoint from outside the cluster, I cannot:
$ curl http://1.1.1.1:30099/hello
curl: (7) Failed to connect to 1.1.1.1 port 30099: Operation timed out
I am a little new to Kubernetes, so any leads or suggestions are highly appreciated.
Please try the approach below:
Via NodePort - this means NodeIP:NodePort. In this case, get the IP of any node and then run
curl http://$NODE_IP:30099/hello
and you should be able to access your service.
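As a minimal sketch (the node IP below is a placeholder - use whatever kubectl get nodes -o wide reports for your own cluster):
kubectl get nodes -o wide
# suppose one node reports INTERNAL-IP 172.16.0.12, then from outside the cluster:
curl http://172.16.0.12:30099/hello
Note that the --external-ip=1.1.1.1 flag on kubectl expose only helps if traffic to 1.1.1.1 actually reaches one of your nodes; on a bare-metal Kubespray cluster nothing provisions a load balancer for a LoadBalancer service (unless you have installed something like MetalLB), so curling a real node IP on the NodePort 30099 is the reliable way in.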
Related
I have installed a kubernetes elasticsearch (v. 7.0.1) environment with a deployment and service using type NodePort running on minikube. When I hit kubectl get services, I get the relevant line:
elasticsearch NodePort 10.101.5.85 <none> 9200:31066/TCP 27m
If I do
$curl http://$(minikube ip):31066
I get the usual elasticsearch page. If, however, I do
root#webapp-5489d8d6fd-2ml2w:/# curl http://localhost:9200
as root of a webapp pod on the same cluster, I get an error:
curl: (7) Failed to connect to localhost port 9200: Connection refused
Can anyone hint at the reason for my problem?
First of all, your elasticsearch service is NodePort type with ports 9200:31066/TCP.
It means that elasticsearch is using port 9200, and the NodePort is 31066.
1) curl http://$(minikube ip):31066
The Minikube IP is your node IP. You can verify this using $ kubectl describe node. So if you use port 31066, it connects correctly.
2) curl http://localhost:9200
You did not provide any information about other Deployments or pods, so I assume you have an Elasticsearch deployment with a pod.
If you execute $ curl http://localhost:9200 in the elasticsearch container, it will work, because elasticsearch is running locally inside that container.
If you want to curl from another (non-elasticsearch) pod, you have to use the service which you created, with the elasticsearch port.
$ curl elasticsearch:9200 or $ curl 10.101.5.85:9200
From other containers you can also curl using the NodeIP with the NodePort
$ curl $(minikube ip):31066, same as in point 1.
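As a minimal sketch (assuming the webapp pod from your question, webapp-5489d8d6fd-2ml2w, and that both it and the elasticsearch service live in the default namespace):
kubectl exec -it webapp-5489d8d6fd-2ml2w -- curl http://elasticsearch:9200
# or with the fully qualified service name:
kubectl exec -it webapp-5489d8d6fd-2ml2w -- curl http://elasticsearch.default.svc.cluster.local:9200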
Useful links:
https://gardener.cloud/050-tutorials/content/howto/service-access/
Hope it helps!
I cannot access my Cassandra database, deployed in the same namespace in Kubernetes.
My service has no cluster IP but an internal endpoint cassandra.hosting:9042, but whenever I try to connect from an internal Spring application using
spring.data.cassandra.contact-points=cassandra.hosting
it fails with the error All host(s) tried for query failed
How did you configure your endpoint? Generally, all services and pods in a Kubernetes cluster are discoverable through a standard DNS notation. It looks like this:
<service-name>.<namespace>.svc.cluster.local # or
<pod-name>.<namespace>.svc.cluster.local # or
<pod-name>.<subdomain>.<namespace>.svc.cluster.local
If you are within the same namespace this would work too:
<service-name>
<pod-name>
<pod-name>.<subdomain>
I would also check whether core-dns or kube-dns is running and ready:
kubectl -n kube-system get pods | grep dns
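As a minimal sketch (assuming the service is named cassandra in a namespace called hosting, which is how I read the cassandra.hosting endpoint, and that your application runs in a different namespace), the Spring properties could spell out the fully qualified service name and the port explicitly:
spring.data.cassandra.contact-points=cassandra.hosting.svc.cluster.local
spring.data.cassandra.port=9042
You can also verify resolution from inside an application pod with kubectl exec -it <your-app-pod> -- nslookup cassandra.hosting.svc.cluster.local (the pod name is a placeholder, and nslookup must be available in the image).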
I have a K8s cluster, currently running on a single node (master+kubelet, 172.16.100.81). I have a config server image which I will run in a pod. The image talks to another pod named eureka server. Both images are Spring Boot applications, and the eureka server's HTTP address and port are defined by me. I need to pass the eureka server's HTTP address and port to the config pod so that it can talk to the eureka server.
I start the eureka server (pseudo code):
kubectl run eureka-server --image=eureka-server-image --port=8761
kubectl expose deployment eureka-server --type NodePort:31000
Then I use the command "docker pull" to download the config server image and run it as below:
kubectl run config-server --image=config-server-image --port=8888
kubectl expose deployment config-server --type NodePort:31001
With these steps, I did not find a way to pass the eureka-server HTTP address (master IP address 172.16.100.81:31000) to the config server. Are there methods I could use to pass the variable eureka-server=172.16.100.81:31000 to the config server pod? I know I should use an Ingress in K8s networking, but currently I use NodePort.
Generally, you don't need a NodePort when you want two pods to communicate with each other. A simpler ClusterIP is enough.
Whenever you expose a deployment with a service, it becomes internally discoverable through DNS. Both of your exposed services can be reached inside the cluster at:
http://config-server.default:8888 and http://eureka-server.default:8761 (default is the namespace here). Note that the DNS name is used with the service port (8888 and 8761, taken from the container ports you exposed), not with the NodePort.
172.16.100.81:31000 (node IP plus NodePort) is what makes the eureka server accessible from outside the cluster.
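As a minimal sketch of passing the address to the config server (EUREKA_SERVER_URL is a hypothetical variable name your config server would need to read, and 8761 assumes the service port matches the container port from your kubectl run command):
kubectl set env deployment/config-server EUREKA_SERVER_URL=http://eureka-server.default:8761
If the config server is a Spring Cloud Netflix client, the conventional property to point it at Eureka is eureka.client.serviceUrl.defaultZone, e.g. eureka.client.serviceUrl.defaultZone=http://eureka-server.default:8761/eureka/.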
The following containers are not starting after installing IBM Cloud Private. I had previously installed ICP without a Management node and was doing a new install after having done an 'uninstall' and restarting the Docker service on all nodes.
Installed a second time with a Management node defined, Master/Proxy on a single node, and two Worker nodes.
Selecting the menu option Platform / Monitoring gets 502 Bad Gateway.
Event messages from the deployed containers:
Deployment - monitoring-prometheus
TYPE SOURCE COUNT REASON MESSAGE
Warning default-scheduler 2113 FailedScheduling
No nodes are available that match all of the following predicates:: MatchNodeSelector (3), NoVolumeNodeConflict (4).
Deployment - monitoring-grafana
TYPE SOURCE COUNT REASON MESSAGE
Warning default-scheduler 2097 FailedScheduling
No nodes are available that match all of the following predicates:: MatchNodeSelector (3), NoVolumeNodeConflict (4).
Deployment - rootkit-annotator
TYPE SOURCE COUNT REASON MESSAGE
Normal kubelet 169.53.226.142 125 Pulled
Container image "ibmcom/rootkit-annotator:20171011" already present on machine
Normal kubelet 169.53.226.142 125 Created
Created container
Normal kubelet 169.53.226.142 125 Started
Started container
Warning kubelet 169.53.226.142 2770 BackOff
Back-off restarting failed container
Warning kubelet 169.53.226.142 2770 FailedSync
Error syncing pod
The management console sometimes displays a 502 Bad Gateway Error after installation or rebooting the master node. If you recently installed IBM Cloud Private, wait a few minutes and reload the page.
If you rebooted the master node, take the following steps:
Configure the kubectl command line interface. See Accessing your IBM Cloud Private cluster by using the kubectl CLI.
Obtain the IP addresses of the icp-ds pods. Run the following command:
kubectl get pods -o wide -n kube-system | grep "icp-ds"
The output resembles the following text:
icp-ds-0 1/1 Running 0 1d 10.1.231.171 10.10.25.134
In this example, 10.1.231.171 is the IP address of the pod.
In high availability (HA) environments, an icp-ds pod exists for each master node.
From the master node, ping the icp-ds pods. Check the IP address for each icp-ds pod by running the following command for each IP address:
ping 10.1.231.171
If the output resembles the following text, you must delete the pod:
connect: Invalid argument
Delete each pod that you cannot reach:
kubectl delete pods icp-ds-0 -n kube-system
In this example, icp-ds-0 is the name of the unresponsive pod.
In HA installations, you might have to delete the pod for each master node.
Obtain the IP address of the replacement pod or pods. Run the following command:
kubectl get pods -o wide -n kube-system | grep "icp-ds"
The output resembles the following text:
icp-ds-0 1/1 Running 0 1d 10.1.231.172 10.10.2
Ping the pods again. Check the IP address for each icp-ds pod by running the following command for each IP address:
ping 10.1.231.172
If you can reach all icp-ds pods, you can access the IBM Cloud Private management console when that pod enters the available state.
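As a minimal sketch that automates the ping check above (assuming ping and kubectl are available on the master node, and that the pod IP is the sixth column of the wide output, as in the examples):
for ip in $(kubectl get pods -o wide -n kube-system | grep "icp-ds" | awk '{print $6}'); do
  ping -c 1 -W 2 "$ip" > /dev/null 2>&1 || echo "unreachable icp-ds pod IP: $ip"
done
Any IP it prints belongs to a pod you would delete as described above.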
I am new to Kubernetes. I have just followed this guide and have a Vagrant/Kubernetes cluster: https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html
I was interested in viewing the UI, so I followed the instructions here: http://kubernetes.io/docs/user-guide/ui/#deploying-the-dashboard-ui
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
Upon browsing to the above IP:PORT, <h3>Unauthorized</h3> is served. So, I suffix /ui to the URI, and we get:
// 127.0.0.1:8001/ui redirected to http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "no endpoints available for service \"kubernetes-dashboard\"",
"reason": "ServiceUnavailable",
"code": 503
}
Perhaps relevant is:
$ kubectl cluster-info
Kubernetes master is running at https://172.17.4.101:443
Heapster is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.3.0.1 <none> 443/TCP 36m
I saw another SO thread, Kubernetes dashboard keeps pending with message: no endpoints available for service "kubernetes-dashboard", and discovered get pods and describe pod <pod-name> --namespace=kube-system.
So, I ran kubectl describe pod kubernetes-dashboard-3543765157-94gj9 --namespace="kube-system", which yielded: https://gist.github.com/cdaringe/b972bf5a95c9f2a7cb8386ef6bf2252b
Ultimately, my cluster had no nodes, so the UI service had no place to land and run! The API still attempts to proxy to it, which is why it reported "no endpoints": there was no host endpoint serving the content. I still haven't figured out why my Vagrant cluster deployed no nodes. I'm going to guess that the workers never downloaded the kubelet and joined.
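For anyone who lands here with the same symptom, a minimal sketch of the checks that surface it (nothing assumed beyond a working kubectl context):
kubectl get nodes                                                     # no entries means no workers ever registered
kubectl get pods --namespace=kube-system -o wide                      # shows whether the dashboard pod was scheduled anywhere
kubectl get endpoints kubernetes-dashboard --namespace=kube-system    # an empty ENDPOINTS column matches the "no endpoints" error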