Spring application unable to access kafka running in kubernetes minikube - spring-boot

I used bitnami/kafka to deploy Kafka on minikube. A describe of the pod kafka-0 says that the server address is:
KAFKA_CFG_ADVERTISED_LISTENERS:INTERNAL://$(MY_POD_NAME).kafka-headless.default.svc.cluster.local:9093,CLIENT://$(MY_POD_NAME).kafka-headless.default.svc.cluster.local:9092
My kafka address is set like so in Spring config properties:
spring.kafka.bootstrap-servers=["kafka-0.kafka-headless.default.svc.cluster.local:9092"]
But when I try to send a message I get the following error:
Failed to construct kafka producer] with root cause:
org.apache.kafka.common.config.ConfigException:
Invalid url in bootstrap.servers: ["kafka-0.kafka-headless.default.svc.cluster.local:9092"]
Note that this works when I run Kafka locally and set the bootstrap-servers address to localhost:9092.
How do I fix this error? What is the correct Kafka URL to use, and where do I find it? Thanks.

The minikube network is different from the host network, so you need a bridge.
The advertised listener is in the minikube realm and is not reachable from the host.
You could set up a service and an ingress in minikube pointing to your Kafka, then map the advertised hostname to the ingress IP address in your hosts file, as sketched below.
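A minimal sketch of the hosts-file side of that bridge, assuming the ingress ends up on the minikube IP (192.168.49.2 below is illustrative; use minikube ip to find yours) and the hostname advertised in the pod description above:

# /etc/hosts on the host machine (illustrative IP)
192.168.49.2   kafka-0.kafka-headless.default.svc.cluster.local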

spring.kafka.bootstrap-servers needs valid server hostnames along with port numbers, as a comma-separated list:
hostname-1:port,hostname-2:port
["kafka-0.kafka-headless.default.svc.cluster.local:9092"] does not look like one! Drop the brackets and quotes.

Related

Openshift container health probe connection refused

Hi openshift community,
I am currently migrating an app to OpenShift and have encountered failed health probes due to connection refused. What I find strange is that if I ssh into the pod and use
curl localhost:10080/xxx-service/info
it returns HTTP 200, but if I use the IP address it fails with connection refused.
Here are the details:
POD status
Logs in Openshift saying Spring boot started successfully
Openshift events saying probes failed due to connection refused
Tried SSH to pod to check using localhost which works
Not sure why the IP address is not resolving at the pod level. Does anyone know the answer or has encountered this before?
It is hard to say in your case what exactly the issue is, as it is environment specific.
In general, you should avoid using IP addresses when working with Kubernetes, as these will change whenever a Pod is restarted (which may be the root cause for the issue you are seeing).
When defining readiness and liveness probes for your container, I would recommend that you always use the following syntax to define your checks (note that it does not specify the host):
...
readinessProbe:
  httpGet:
    path: /xxx-service/info
    port: 10080
  initialDelaySeconds: 15
  timeoutSeconds: 1
...
See also the Kubernetes or OpenShift documentation for more information:
https://docs.openshift.com/container-platform/3.11/dev_guide/application_health.html
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request
I found the root cause; it turns out it was Spring related.
It was a Spring Boot app packaged as a WAR file and deployed to a Tomcat server. In application.properties, it had this property:
server.address=127.0.0.1
With that setting the server binds only to the loopback interface, so requests to the pod IP are refused. Removing it fixed the issue.
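If an explicit bind address is ever needed, binding to all interfaces also keeps probes against the pod IP working; removing the property has the same effect, since binding to all interfaces is the default:

server.address=0.0.0.0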
https://docs.spring.io/spring-boot/docs/current/reference/html/appendix-application-properties.html#common-application-properties-server

Kubernetes cannot access cassandra database

I cannot access my Cassandra database, deployed in the same namespace in Kubernetes.
My service has no cluster IP, only an internal endpoint cassandra.hosting:9042, but whenever I try to connect from an internal Spring application using
spring.data.cassandra.contact-points=cassandra.hosting
it fails with the error All host(s) tried for query failed.
How did you configure your endpoint? Generally, all services and pods in a Kubernetes cluster are discoverable through a standard DNS notation. It looks like this:
<service-name>.<namespace>.svc.cluster.local # or
<pod-name>.<subdomain>.<namespace>.svc.cluster.local # pod names resolve only via a headless service (the subdomain)
If you are within the same namespace these shorter forms work too:
<service-name>
<pod-name>.<subdomain>
I would also check that core-dns or kube-dns is running and ready:
kubectl -n kube-system get pods | grep dns
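For this case, assuming cassandra is the service name and hosting is the namespace, the Spring properties might look like this (the port is shown explicitly to match the 9042 endpoint above):

spring.data.cassandra.contact-points=cassandra.hosting.svc.cluster.local
spring.data.cassandra.port=9042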

Kubernetes networking: how to transfer a variable to a container

I have a K8s cluster, currently running on a single node (master+kubelet, 172.16.100.81). I have a config server image which I will run in a pod. The image talks to another pod named eureka-server. Both images are Spring Boot applications, and the eureka server's HTTP address and port are defined by me. I need to pass the eureka server's HTTP address and port to the config pod so that it can talk to the eureka server.
I start the eureka server (pseudo code):
kubectl run eureka-server --image=eureka-server-image --port=8761
kubectl expose deployment eureka-server --type NodePort:31000
Then I use "docker pull" to download the config server image and run it as below:
kubectl run config-server --image=config-server-image --port=8888
kubectl expose deployment config-server --type NodePort:31001
With these steps, I did not find a way to pass the eureka-server address (master IP address 172.16.100.81:31000) to the config server. Is there a way to pass the variable eureka-server=172.16.100.81:31000 to the config-server pod? I know I should use an ingress for K8s networking, but currently I use NodePort.
Generally, you don't need a NodePort when you want two pods to communicate with each other. A simpler ClusterIP is enough.
Whenever you expose a deployment with a service, it becomes internally discoverable through DNS. Inside the cluster both of your services can be reached on their service ports: http://config-server.default:8888 and http://eureka-server.default:8761 (default is the namespace here).
172.16.100.81:31000 (the NodePort) is what makes the eureka server accessible from outside the cluster.
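One way to hand that address to the config server is an environment variable on the config-server pod. A minimal sketch, assuming the deployment is written as YAML rather than created with kubectl run, and assuming the config server registers with Eureka via the standard Spring Cloud property eureka.client.serviceUrl.defaultZone (the env-var name below is its relaxed-binding form):

# fragment of the config-server pod spec
containers:
- name: config-server
  image: config-server-image
  ports:
  - containerPort: 8888
  env:
  - name: EUREKA_CLIENT_SERVICEURL_DEFAULTZONE
    value: http://eureka-server.default:8761/eureka/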

How to update Kafka config file with Docker IP address

I am running Kafka inside a Docker container. Kafka requires a connection to Zookeeper, and so I am running Zookeeper in another container. I am running Docker on OSX and so my VM has the IP address: 192.168.99.99.
What I can't figure out is how to update my Kafka Docker installation to point to the Zookeeper instance running inside its own separate Docker container, i.e. with IP address 192.168.99.99 and port 2181.
Kafka has a config file called server.properties with a zookeeper.connect property which I can set, but I want this value to be overridden dynamically rather than hard-coding the IP here. How do I achieve this?
And, as an additional question, I want my Dockerfile to work across OSes, so whatever I do should also work on Linux.
You should not need to set an IP in that config file:
With docker-compose v2 (docker 1.10+), a bridge network is created, which means both containers are on that network and can see each other.
See more at "Networking in Compose".
If Zookeeper exposes its port 2181, the Kafka config file can simply reference Zookeeper by its container name.
And that will work on any docker host (boot2docker on Mac or native docker on Linux).
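A minimal docker-compose sketch of that setup, assuming a Kafka image that maps a KAFKA_ZOOKEEPER_CONNECT environment variable onto zookeeper.connect (many community Kafka images do; the image names are placeholders):

# docker-compose.yml: service names double as hostnames on the compose bridge network
version: "2"
services:
  zookeeper:
    image: zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: your-kafka-image
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    depends_on:
      - zookeeper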

Kubernetes proxy connection

I am trying to play around with Kubernetes, specifically the REST API. The steps to connect to the cluster API are listed here. However, I'm stuck on the first step, i.e. running kubectl proxy.
I try running this:
kubectl --context='vagrant' proxy --port=8080 &
which returns error: couldn't read version from server: Get https://172.17.4.99:443/api: dial tcp 172.17.4.99:443: i/o timeout
What does this mean? How do I overcome it and connect to the API?
Check that your docker, kube-proxy, kube-apiserver, and kube-controller-manager services are running without error. Check their status using systemctl status your-service-name. If a service is loaded but not running, restart it with systemctl restart your-service-name.
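For example (the exact unit names depend on how the cluster was installed, so these are illustrative):

systemctl status kube-apiserver
systemctl restart kube-apiserver

Once kubectl proxy starts without the timeout, the API is reachable locally:

curl http://localhost:8080/api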
