Clean deploy of Spring Boot microservices with Config Server

We have configured a Kubernetes cluster where we deploy various Spring Boot services, one of which is a Spring Cloud Config Server.
Our trouble is that when we start the cluster, all the services try to connect to the Config Server to download their configuration. Since the Config Server has not started yet, they all fail, causing Kubernetes to retry their initialization and consume so many resources that the Config Server itself cannot start.
We are wondering if there is a way to initialize the services so that they do not overload the cluster, or so that they patiently wait until the Config Server is up. As of now, all services start at once and we have to wait around 20 minutes until the cluster works its way out.
Thanks in advance

You can use Init Containers to poll the Config Server until it is online. An example would be:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      initContainers:
      - name: wait-config-server
        image: busybox
        # Retry for up to 15 minutes (300 attempts, 3 s apart) until config-server accepts TCP connections on 8080
        command: ["sh", "-c", "for i in $(seq 1 300); do nc -zvw1 config-server 8080 && exit 0 || sleep 3; done; exit 1"]
      containers:
      - name: web
        image: my-image
        ports:
        - containerPort: 80
...
In this example I am using nc to probe the server, but you can also use wget, curl, or whatever suits you best.
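For instance, a minimal sketch of the same init container using busybox's wget is shown below; the /actuator/health path and port 8080 are assumptions to adjust to your Config Server:
      initContainers:
      - name: wait-config-server
        image: busybox
        # Poll the (assumed) health endpoint; --spider only checks reachability without downloading the body
        command: ["sh", "-c", "until wget -q --spider http://config-server:8080/actuator/health; do echo waiting for config-server; sleep 3; done"]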

There are various options to achieve the same thing. Choose the one that best suits you:
You can also apply a liveness or readiness probe to the config server. That way, the other containers can wait until the config server is up and running before trying to connect to it (a minimal readiness-probe sketch follows this list).
You can use a Consul service running as a quorum of 3 or 5 instances, and design the clients to connect to Consul and wait until the config server is up and running.
You can write a startup script that triggers the connection establishment with the config server and only then starts the containers.
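For the probe option, a minimal sketch of a readiness probe on the Config Server deployment might look like the following; the image name, the /actuator/health path, and port 8080 are assumptions to adjust to your setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: config-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: config-server
  template:
    metadata:
      labels:
        app: config-server
    spec:
      containers:
      - name: config-server
        image: my-config-server   # hypothetical image name
        ports:
        - containerPort: 8080
        readinessProbe:
          # The Pod is only marked Ready (and its Service only routes traffic to it)
          # once the health endpoint answers
          httpGet:
            path: /actuator/health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5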

Related

How to Connect to kafka on localhost (host machine) from app inside kubernetes (minikube)

I am trying to connect my Spring Boot app (running inside minikube) to Kafka on my localhost (i.e., my laptop).
I have tried many things, including headless services, services without selectors, and updating minikube's /etc/hosts, but nothing works yet.
I get an error from Spring Boot saying No resolvable bootstrap urls given in bootstrap.servers.
Can someone please point me to what I am doing wrong?
My Headless Service
apiVersion: v1
kind: Service
metadata:
  name: es-local-kafka
  namespace: demo
spec:
  clusterIP: None
---
apiVersion: v1
kind: Endpoints
metadata:
  name: es-local-kafka
subsets:
- addresses:
  - ip: "10.0.2.2"
  ports:
  - name: "kafkabroker1"
    port: 9191
  - name: "kafkabroker2"
    port: 9192
  - name: "kafkabroker3"
    port: 9193
My application properties for kafka:
kafka.bootstrap-servers=${LOCALHOST}:9191,${LOCALHOST}:9192,${LOCALHOST}:9193
My Config Map:
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: rr-config
  namespace: demo
data:
  LOCALHOST: es-local-kafka.demo.svc
It is not entirely clear whether your application is running on minikube or on the local system while trying to leverage Kafka on minikube, so here are both cases.
If your application is running on the local system and Kafka on minikube,
you can connect the application to the Kafka cluster using the minikube IP.
Here is a good example: https://github.com/d1egoaz/minikube-kafka-cluster
git clone https://github.com/d1egoaz/minikube-kafka-cluster
cd minikube-kafka-cluster
kubectl apply -f 00-namespace/
kubectl apply -f 01-zookeeper/
kubectl apply -f 02-kafka/
kubectl apply -f 03-yahoo-kafka-manager/
kubectl get svc -n kafka-ca1 (note the port of kafka, 31445)
Get the IP of minikube:
minikube ip
Now, from your local system, you can connect to Kafka on minikube with http://minikube-ip:port and you will see the Kafka manager UI in the browser.
If you are running the Spring Boot application on minikube:
If both services are running in the same namespace, you just have to use the service name to connect.
Use only the service name in Spring Boot, and pass the port as well if required:
es-local-kafka
Also try passing the full service DNS name:
<servicename>.<namespace>.svc.cluster.local
A headless service is for different purposes, and a service without a selector is odd here; in that case your service won't be able to connect to the Pods.
I eventually got a fix, and it doesn't need all the crazy stuff I was referring to in my question:
You need to make sure your Kafka broker is bound to 0.0.0.0 instead of 127.0.0.1 (localhost). By default, in a single-node Kafka broker setup, localhost is what is used. I went with this due to both the time constraint and the fact that this was just for a POC on my local machine (prod will have a specific DNS-able Kafka URL anyway, with no such localhost shenanigans needed).
In the Kafka URL in your application properties file, instead of localhost you need to give the minikube IP. This is the same IP that you get from the command minikube ip :)
Read more about how this works here: https://minikube.sigs.k8s.io/docs/handbook/host-access/
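Applied to the ConfigMap from the question, a minimal sketch would be to point LOCALHOST at the minikube IP instead of the headless-service name (192.168.49.2 below is just a typical minikube IP; substitute the output of minikube ip):
apiVersion: v1
kind: ConfigMap
metadata:
  name: rr-config
  namespace: demo
data:
  # Hypothetical value: replace with the address printed by `minikube ip`
  LOCALHOST: "192.168.49.2"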

Ambassador Edge Stack : Working with sample project but not with my project

I am trying to configure Ambassador as an API gateway in my Kubernetes cluster locally.
Installation:
Installed from https://www.getambassador.io/docs/latest/tutorials/getting-started/ (both the Windows and the Kubernetes part).
I can log in with edgectl login --namespace=ambassador localhost and see the dashboard.
Configured the sample project they provide at https://www.getambassador.io/docs/latest/tutorials/quickstart-demo/.
Here is the YAML file for the deployment of the demo app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quote
  namespace: ambassador
spec:
  replicas: 1
  selector:
    matchLabels:
      app: quote
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: quote
    spec:
      containers:
      - name: backend
        image: docker.io/datawire/quote:0.4.1
        ports:
        - name: http
          containerPort: 8080
Everything is working as expected. Now I am trying to configure it with my own project, but it is not working.
To keep things simple, I kept every configuration the same as in the Ambassador demo and only changed image: docker.io/datawire/quote:0.4.1 to image: angularapp:latest, which is a Docker image of an Angular 10 project.
But I am getting upstream connect error or disconnect/reset before headers. reset reason: connection failure
I spent one day on this problem. I reset Kubernetes from the Docker Desktop app and reconfigured, but no luck.
That error occurs when a mapping is valid, but the service it is pointing to cannot be reached for some reason. Is the deployment actually running (kubectl get deploy -A -o wide)? Is your Angular app exposing port 8080? 8080 is a pretty common Kubernetes port, but not so much in the frontend development world. If you use kubectl exec -it {{AMBASSADOR_POD}} -- sh, does curl http://quote return the expected output?
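One common culprit, sketched below under the assumption that the demo's quote Service was reused: the Service (and hence the Mapping) still targets port 8080 while an Angular image typically serves on port 80, so targetPort would need to match whatever port the Angular container really listens on:
apiVersion: v1
kind: Service
metadata:
  name: quote
  namespace: ambassador
spec:
  selector:
    app: quote
  ports:
  - name: http
    port: 80
    # Must match the port the Angular container actually listens on (assumed 80 here, not the demo's 8080)
    targetPort: 80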

Kubernetes Endpoint created for Kafka but not reflecting in POD

In my Kubernetes cluster I have created an Endpoints object pointing to the Kafka cluster. It was created successfully.
Name - kafka
Endpoint - X.X.X.X:9092
In my Spring Boot application's deployment YAML I have an environment variable BROKER_IP, which I have pointed at it:
env:
- name: BROKER_IP
  value: kafka
The Pod is in an Error state. In my bootstrap-server I am getting kafka and not the actual Endpoint that was created. Any thoughts?
UPDATE - Just tried kafka:9092 and it worked. So I am wondering: does the Endpoint map to the IP only and not to the port? Is my understanding correct?
Is it possible that you forgot to create the Service object matching the Endpoints? Because you are providing the IP-port pairs yourself, the Service needs to be selectorless.
This works for me:
kind: Endpoints
apiVersion: v1
metadata:
  name: kafka
subsets:
- addresses: [{ip: "1.2.3.4"}]
  ports: [{port: 9092}]
---
kind: Service
apiVersion: v1
metadata:
  name: kafka
spec:
  ports: [{port: 9092}]
Testing it:
$ kubectl run kafka-dns-test --image=busybox --attach --rm --restart=Never -- nslookup kafka
If you don't see a command prompt, try pressing enter.
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: kafka.default.svc.cluster.local
Address: 10.96.220.40
Successful lookup, ignore extra *** Can't find xxx: No answer messages
Also, because there is a Service object you get some environment variables in your Pods (without having to declare them):
KAFKA_PORT='tcp://10.96.220.40:9092'
KAFKA_PORT_9092_TCP='tcp://10.96.220.40:9092'
KAFKA_PORT_9092_TCP_ADDR='10.96.220.40'
KAFKA_PORT_9092_TCP_PORT='9092'
KAFKA_PORT_9092_TCP_PROTO='tcp'
KAFKA_SERVICE_HOST='10.96.220.40'
KAFKA_SERVICE_PORT='9092'
But the most flexible way to use a Service is still to use the dns name (kafka in this case).
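Tying this back to the question's deployment, a minimal sketch (an assumption based on your UPDATE) is to pass the service name plus the port in the environment variable, since the Endpoints address only supplies the IP and your Kafka client still needs a port:
env:
- name: BROKER_IP
  # Service DNS name plus port; resolves via the kafka Service/Endpoints pair above
  value: "kafka:9092"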

Prometheus: Address of discovered service is empty?

I've written a very basic Spring Boot 2 application that connects to Zookeeper for service discovery (by using spring-cloud-starter-zookeeper-discovery).
The application gets registered at /services/example-service with the following value:
{"name":"example-service","id":"cb14ad15-4d33-4f1c-a420-29980ddf2fa8","address":"bf3fb9191373","port":8080,"sslPort":null,"payload":{"#class":"org.springframework.cloud.zookeeper.discovery.ZookeeperInstance","id":"application-1","name":"example-service","metadata":{}},"registrationTimeUTC":1524120820273,"serviceType":"DYNAMIC","uriSpec":{"parts":[{"value":"scheme","variable":true},{"value":"://","variable":false},{"value":"address","variable":true},{"value":":","variable":false},{"value":"port","variable":true}]}}
The address is an id because I've deployed the stack with Docker.
My Prometheus configuration looks like this:
- job_name: 'example-service'
  metrics_path: '/actuator/prometheus'
  serverset_sd_configs:
  - servers:
    - zookeeper:2181
    paths:
    - '/services/example-service'
The service discovery page of Prometheus shows the following discovered labels:
__address__=":0" __meta_serverset_endpoint_host="" __meta_serverset_endpoint_port="0" __meta_serverset_path="/services/example-service/cb14ad15-4d33-4f1c-a420-29980ddf2fa8" __meta_serverset_shard="0" __meta_serverset_status="" __metrics_path__="/actuator/prometheus" __scheme__="http" job="example-service"
Any idea why __address__ is :0?
Serverset discovery is a particular way of using Zookeeper, which your application is not following. In this case you probably want file service discovery.
Serversets use a config like the one below:
{"serviceEndpoint":{"host":"localhost","port":9100},"additionalEndpoints":{},"status":"ALIVE"}

Kubernetes Pod container's websocket not reachable

I've created a sample Spring Boot application that exposes a websocket endpoint at localhost:8080/ws.
Basically I followed this guide, except that I am not using the .withSockJS() option.
When I run this application locally, my sample angular app can connect to the websocket.
Now I want to have both containers (spring boot app and angular app) in a single Kubernetes pod.
They both spin up when I run them. Then I expose the angular frontend's port to be able to view the app. But the logs tell me that it is not able to connect to the websocket backend via ws://localhost:8080/ws
Even when I connect to the backend container, I can see that it is up and running, but my curl websocket test also always fails.
This is my pod def:
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app.example.org
  labels:
    app: my-app-system
spec:
  containers:
  - name: backend
    image: test/my-app-backend
    ports:
    - containerPort: 8080
    env:
    - name: SPRING_PROFILES_ACTIVE
      value: "dev-docker-postgres"
    - name: JAVA_OPTIONS
      value: "-agentlib:jdwp=transport=dt_socket,address=5005,server=y,suspend=n"
  - name: frontend
    image: test/my-app-frontend
    ports:
    - containerPort: 4200
    imagePullPolicy: Always
    command: ["/bin/sh"]
    args: ["-c", "npm run kubstart"]
  imagePullSecrets:
  - name: registrykey
One more thing:
When I additionally expose the backend container's port via a NodePort service and start the Angular app locally on my machine with the service's URL, the websocket connection succeeds.
It seems I am not able to let both containers in my pod communicate with each other via ws://
Never mind...
Of course this can't work via localhost: the websocket connection is opened by the user's browser, not by the frontend container, so localhost points at the user's machine rather than the pod.
I need to expose the backend's port and access the websocket "from outside".
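A minimal sketch of that (the service name and NodePort number are assumptions): expose the backend with a NodePort Service and point the Angular app at ws://<node-ip>:30080/ws instead of localhost.
apiVersion: v1
kind: Service
metadata:
  name: my-app-backend
spec:
  type: NodePort
  selector:
    app: my-app-system
  ports:
  - port: 8080
    targetPort: 8080
    # Hypothetical NodePort; the browser then connects to ws://<node-ip>:30080/ws
    nodePort: 30080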
