I've written a very basic Spring Boot 2 application that connects to Zookeeper for service discovery (by using spring-cloud-starter-zookeeper-discovery).
The application gets registered at /services/example-service with the following value:
{"name":"example-service","id":"cb14ad15-4d33-4f1c-a420-29980ddf2fa8","address":"bf3fb9191373","port":8080,"sslPort":null,"payload":{"#class":"org.springframework.cloud.zookeeper.discovery.ZookeeperInstance","id":"application-1","name":"example-service","metadata":{}},"registrationTimeUTC":1524120820273,"serviceType":"DYNAMIC","uriSpec":{"parts":[{"value":"scheme","variable":true},{"value":"://","variable":false},{"value":"address","variable":true},{"value":":","variable":false},{"value":"port","variable":true}]}}
The address is a Docker container ID because I've deployed the stack with Docker.
My Prometheus configuration looks like this:
- job_name: 'example-service'
  metrics_path: '/actuator/prometheus'
  serverset_sd_configs:
    - servers:
        - zookeeper:2181
      paths:
        - '/services/example-service'
The service discovery page of Prometheus shows the following discovered labels:
__address__=":0" __meta_serverset_endpoint_host="" __meta_serverset_endpoint_port="0" __meta_serverset_path="/services/example-service/cb14ad15-4d33-4f1c-a420-29980ddf2fa8" __meta_serverset_shard="0" __meta_serverset_status="" __metrics_path__="/actuator/prometheus" __scheme__="http" job="example-service"
Any idea why __address__ is :0?
Serverset discovery is a particular way of using Zookeeper, which your application is not following. In this case you probably want file service discovery.
Serverset entries use a config like the one below:
{"serviceEndpoint":{"host":"localhost","port":9100},"additionalEndpoints":{},"status":"ALIVE"}
Please note: my Prometheus is running in an Ubuntu terminal and my Spring Boot application is running on Windows. It seems my Ubuntu environment is not able to connect to the Windows localhost.
I have created Spring Boot metrics using Actuator, and my metrics are being exposed at "http://localhost:8080/actuator/prometheus".
The application.yml configuration in my Spring Boot application looks like this:
management:
  endpoints:
    web:
      exposure:
        include: prometheus
The Prometheus configuration file, i.e. prometheus.yml, is as below:
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "services"
    static_configs:
      - targets: ["localhost:8080"]
    metrics_path: '/actuator/prometheus'
Despite this configuration, I see the target as down in the Prometheus interface. It says Get "http://localhost:8080/actuator/prometheus": dial tcp 127.0.0.1:8080: connect: connection refused. Why is Prometheus not able to pick up the metrics at localhost?
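Going by the note at the top that the Ubuntu side cannot reach the Windows localhost, a hedged sketch would be to scrape the Windows host's IP address instead of localhost (the IP below is a placeholder, not from the original post):

scrape_configs:
  - job_name: "services"
    metrics_path: '/actuator/prometheus'
    static_configs:
      # Placeholder: the Windows host's IP as reachable from the Ubuntu environment
      - targets: ["192.168.1.10:8080"]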
I am trying to connect my Spring Boot app (running inside minikube) to Kafka on my localhost (i.e., my laptop).
I have tried many things, including headless services, services without selectors, and updating minikube's /etc/hosts, but nothing works yet.
I get an error from Spring Boot saying No resolvable bootstrap urls given in bootstrap.servers.
Can someone please point me to what I am doing wrong?
My Headless Service
apiVersion: v1
kind: Service
metadata:
  name: es-local-kafka
  namespace: demo
spec:
  clusterIP: None
---
apiVersion: v1
kind: Endpoints
metadata:
  name: es-local-kafka
subsets:
  - addresses:
      - ip: "10.0.2.2"
    ports:
      - name: "kafkabroker1"
        port: 9191
      - name: "kafkabroker2"
        port: 9192
      - name: "kafkabroker3"
        port: 9193
My application properties for kafka:
kafka.bootstrap-servers=${LOCALHOST}:9191,${LOCALHOST}:9192,${LOCALHOST}:9193
My Config Map:
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: rr-config
  namespace: demo
data:
  LOCALHOST: es-local-kafka.demo.svc
It is not clear whether you are trying to connect a service running on Minikube or on the local system, and whether you want to leverage Kafka on Minikube.
If your application is running on the local system and Kafka on Minikube, you can connect the application to the Kafka cluster with the IP of Minikube.
Here is a good example: https://github.com/d1egoaz/minikube-kafka-cluster
git clone https://github.com/d1egoaz/minikube-kafka-cluster
cd minikube-kafka-cluster
kubectl apply -f 00-namespace/
kubectl apply -f 01-zookeeper/
kubectl apply -f 02-kafka/
kubectl apply -f 03-yahoo-kafka-manager/
kubectl get svc -n kafka-ca1 (Note the port of kafka 31445)
List the IP of Minikube:
minikube ip
Now from your local system you can connect to the Minikube Kafka setup with http://<minikube-ip>:<port>; you will see the Kafka manager UI in the browser.
If you are running the Spring Boot application on Minikube:
If both services are running in the same namespace, you just have to use the service name to connect.
Use only the service name in Spring Boot; if a port is required you can pass it as well:
es-local-kafka
You can also try passing the full service DNS name (see the sketch below):
<servicename>.<namespace>.svc.cluster.local
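As a sketch, assuming the standard spring.kafka property and Kafka's default port 9092 (both assumptions, not from the original post), the application config could look like this:

spring:
  kafka:
    # Full in-cluster DNS name of the service; port 9092 is the Kafka default, assumed here
    bootstrap-servers: es-local-kafka.demo.svc.cluster.local:9092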
A headless service serves a different purpose, and a service without a selector is odd here; in that case your service won't be able to connect to the Pods.
I eventually got a fix, and it doesn't need all the crazy stuff I was referring to in my question:
You need to make sure your Kafka broker is bound to 0.0.0.0 instead of 127.0.0.1 (localhost). By default, in a single-node Kafka broker setup, localhost is what is used. I went with this due to both the time constraint and the fact that this was just a POC on my local machine (prod will have a specific DNS-resolvable Kafka URL anyway, so no such localhost shenanigans are needed).
In the Kafka URL in your application properties file, instead of localhost, you need to give the IP as the Minikube IP. This is the same IP that you get if you run the command minikube ip :)
Read more about how this works here: https://minikube.sigs.k8s.io/docs/handbook/host-access/
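As a sketch, the ConfigMap from the question could then point at that IP rather than the service name (the IP below is only a placeholder for whatever minikube ip prints):

apiVersion: v1
kind: ConfigMap
metadata:
  name: rr-config
  namespace: demo
data:
  # Placeholder: replace with the output of `minikube ip`
  LOCALHOST: 192.168.49.2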
I have added an NFS volume mount to my Spring Boot container running on Kubernetes. Below is my deployment file for Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ldap
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ldap
  template:
    metadata:
      labels:
        app: ldap
    spec:
      serviceAccountName: xxx-staging-take-poc-admin
      volumes:
        - name: nfs-volume
          nfs:
            server: 10.xxx.xxx.xxx
            path: /ifs/standard/take1-poc
      containers:
        - name: ldap
          image: image-id
          volumeMounts:
            - name: nfs-volume
              mountPath: /var/nfs
How do I access the mount path from my Spring Boot application to achieve file reads and writes?
If I understand you correctly, you can pass external info to a Spring Boot application through environment variables. Here is an article with more detailed info on how to do it.
Kubernetes ConfigMaps also allow us to load a file as a ConfigMap property. That gives us an interesting option of loading the Spring Boot application.properties via Kubernetes ConfigMaps.
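A minimal sketch of that idea, with a hypothetical ConfigMap name and property (neither is from the original post):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.properties: |
    # Hypothetical property telling the application where the NFS mount is located
    nfs.base-path=/var/nfs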
Also, you may want to get familiar with this documentation. It shows how to reference secrets which are also mounted so you may find it helpful in your case.
The Spring Cloud Kubernetes plug-in implements the integration between
Kubernetes and Spring Boot. In principle, you could access the
configuration data from a ConfigMap using the Kubernetes API.
Please let me know if that helped.
In my Kubernetes cluster I have created an Endpoint pointing to my Kafka cluster. The Endpoint was created successfully.
Name - kafka
Endpoint - X.X.X.X:9092
In my Spring Boot application's deployment YAML I have an environment variable BROKER_IP. I have pointed this environment variable to:
env:
  - name: BROKER_IP
    value: kafka
The Pod is in an Error state. In my bootstrap-server I am getting kafka and not the actual Endpoint that was created. Any thoughts?
UPDATE - Just tried kafka:9092 and it worked. So I am wondering: does the Endpoint map to the IP only and not the port? Is my understanding correct?
Is it possible that you forgot to create the Service object matching the Endpoints? Because you are providing the IP-port pairs yourself, the Service would need to be selectorless.
This works for me:
kind: Endpoints
apiVersion: v1
metadata:
  name: kafka
subsets:
  - addresses: [{ip: "1.2.3.4"}]
    ports: [{port: 9092}]
---
kind: Service
apiVersion: v1
metadata:
  name: kafka
spec:
  ports: [{port: 9092}]
Testing it:
$ kubectl run kafka-dns-test --image=busybox --attach --rm --restart=Never -- nslookup kafka
If you don't see a command prompt, try pressing enter.
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: kafka.default.svc.cluster.local
Address: 10.96.220.40
The lookup is successful; ignore the extra *** Can't find xxx: No answer messages.
Also, because there is a Service object you get some environment variables in your Pods (without having to declare them):
KAFKA_PORT='tcp://10.96.220.40:9092'
KAFKA_PORT_9092_TCP='tcp://10.96.220.40:9092'
KAFKA_PORT_9092_TCP_ADDR='10.96.220.40'
KAFKA_PORT_9092_TCP_PORT='9092'
KAFKA_PORT_9092_TCP_PROTO='tcp'
KAFKA_SERVICE_HOST='10.96.220.40'
KAFKA_SERVICE_PORT='9092'
But the most flexible way to use a Service is still to use the dns name (kafka in this case).
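Tying that back to the question's env block, a sketch using the DNS name plus the port (which is what the UPDATE above confirmed works):

env:
  - name: BROKER_IP
    # Service DNS name plus port; the port is not implied by the name alone
    value: kafka:9092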
We have configured a Kubernetes cluster where we deploy various Spring Boot services, and one of them is a Spring Cloud Config Server.
Our trouble is that when we start the cluster, all the services try to connect to the Config Server to download their configuration, and since the Config Server has not yet started, all the services fail. This causes Kubernetes to retry the initialization, consuming so many resources that the Config Server itself cannot start.
We are wondering if there is a way to initialize all the services in a way that does not overload the cluster, or so that they peacefully wait until the Config Server starts. As of now, all the services start and we have to wait around 20 minutes until the cluster works its way out.
Thanks in advance
You can use Init Containers to ping for the server until it is online. An example would be:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      initContainers:
        - name: wait-config-server
          image: busybox
          command: ["sh", "-c", "for i in $(seq 1 300); do nc -zvw1 config-server 8080 && exit 0 || sleep 3; done; exit 1"]
      containers:
        - name: web
          image: my-image
          ports:
            - containerPort: 80
      ...
In this example I am using an nc command to probe the server, but you can also use wget, curl, or whatever is best suited for you (a hedged wget variant is sketched below).
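For instance, a sketch of the same init container using busybox's wget instead of nc (same hypothetical config-server host and port as above):

initContainers:
  - name: wait-config-server
    image: busybox
    # Poll the config server over HTTP until it responds, then let the main containers start
    command: ["sh", "-c", "until wget -q --spider http://config-server:8080; do sleep 3; done"]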
There are various options to do the same. Choose the one that best suits you:
You can also try to apply a liveness probe or readiness probe to the config server. In this manner, all containers can wait until the config server is up and running and then try to connect to it (a readiness probe sketch follows these options).
You can use a Consul service running as a quorum of 3 or 5 instances, and design the clients to connect to Consul and wait until the config server is up and running.
You can write a startup script that will trigger the connection establishment with the config server, after which it can start the containers.
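As referenced in the first option, a minimal sketch of a readiness probe on the config server, assuming Spring Boot Actuator is on the classpath and the server listens on port 8080 (both assumptions, not from the original post):

containers:
  - name: config-server
    image: config-server-image   # placeholder image name
    ports:
      - containerPort: 8080
    readinessProbe:
      httpGet:
        # Assumes the Actuator health endpoint is exposed
        path: /actuator/health
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5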