Kubernetes Pod container's websocket not reachable

I've created a sample spring boot application that exposes a websocket endpoint at localhost:8080/ws.
Basically I followed this guide, except that I am not using the .withSockJS() option.
When I run this application locally, my sample angular app can connect to the websocket.
Now I want to have both containers (spring boot app and angular app) in a single Kubernetes pod.
They both spin up when I run them. Then I expose the Angular frontend's port to be able to view the app. But the logs tell me that it is not able to connect to the websocket backend via ws://localhost:8080/ws
Even when I connect to the backend container, I can see that it is up and running, but my curl websocket test always fails as well.
This is my pod def:
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app.example.org
  labels:
    app: my-app-system
spec:
  containers:
  - name: backend
    image: test/my-app-backend
    ports:
    - containerPort: 8080
    env:
    - name: SPRING_PROFILES_ACTIVE
      value: "dev-docker-postgres"
    - name: JAVA_OPTIONS
      value: "-agentlib:jdwp=transport=dt_socket,address=5005,server=y,suspend=n"
  - name: frontend
    image: test/my-app-frontend
    ports:
    - containerPort: 4200
    imagePullPolicy: Always
    command: ["/bin/sh"]
    args: ["-c", "npm run kubstart"]
  imagePullSecrets:
  - name: registrykey
One more thing:
When I additionally expose the backend container's port via a NodePort-type service and start the Angular app locally on my machine with the service's URL, the websocket connection succeeds.
It seems I am not able to get the two containers in my pod to communicate with each other via ws://

Never mind...
Of course this can't work via localhost: the Angular app runs in the user's browser, so ws://localhost:8080 points at the client machine, not at the pod.
I need to expose the backend's port and access the websocket "from outside".
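For reference, a Service along these lines would expose the backend so the browser can reach the websocket from outside the cluster (a sketch only; the service name and nodePort are assumptions, and an Ingress or LoadBalancer would work just as well):

apiVersion: v1
kind: Service
metadata:
  name: my-app-backend   # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app-system   # matches the label on the pod above
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30080      # illustrative; the frontend then connects to ws://<node-ip>:30080/ws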

Related

Ambassador Edge Stack : Working with sample project but not with my project

I am trying to configure Ambassador as an API gateway in my local Kubernetes cluster.
Installation:
Installed it from https://www.getambassador.io/docs/latest/tutorials/getting-started/ (both the Windows and the Kubernetes part)
I can log in with edgectl login --namespace=ambassador localhost and see the dashboard
Configured it with the sample project they provide at https://www.getambassador.io/docs/latest/tutorials/quickstart-demo/
Here is the YAML file for the deployment of the demo app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quote
  namespace: ambassador
spec:
  replicas: 1
  selector:
    matchLabels:
      app: quote
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: quote
    spec:
      containers:
      - name: backend
        image: docker.io/datawire/quote:0.4.1
        ports:
        - name: http
          containerPort: 8080
Everything is working as expected. Now I am trying to configure it with my own project, but it is not working.
As a simpler case, keeping every configuration the same as in the Ambassador demo, I just changed image: docker.io/datawire/quote:0.4.1 to image: angularapp:latest, which is a Docker image of an Angular 10 project.
But I am getting upstream connect error or disconnect/reset before headers. reset reason: connection failure
I spent one day on this problem. I reset my Kubernetes cluster from the Docker Desktop app and reconfigured it, but no luck.
That error occurs when a Mapping is valid, but the service it points to cannot be reached for some reason. Is the deployment actually running (kubectl get deploy -A -o wide)? Is your Angular app exposing port 8080? 8080 is a pretty common port in Kubernetes, but not so much in the frontend development world. If you run kubectl exec -it {{AMBASSADOR_POD}} -- sh, does curl http://quote return the expected output?
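If the Angular image is, say, an nginx-based build, it most likely listens on port 80 rather than 8080. A sketch of what the demo Deployment (and the quote Service from the quickstart) would need to look like in that case; the port-80 assumption is mine, so adjust it to whatever the image actually exposes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: quote
  namespace: ambassador
spec:
  replicas: 1
  selector:
    matchLabels:
      app: quote
  template:
    metadata:
      labels:
        app: quote
    spec:
      containers:
      - name: frontend
        image: angularapp:latest
        ports:
        - name: http
          containerPort: 80   # was 8080 in the demo; must match the app's real listen port
---
apiVersion: v1
kind: Service
metadata:
  name: quote
  namespace: ambassador
spec:
  selector:
    app: quote
  ports:
  - name: http
    port: 80
    targetPort: http   # points at the containerPort above; a mismatch here causes the "upstream connect error"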

Running a bash script using a Kubernetes Service

I am not sure how dumb or unreasonable this question is, but we are trying to see if we can do this in any way.
I have a .bash file, and I want to run it when I invoke a URL.
Let's say the URL is https://domainname.com/jobapi
When I invoke this in the browser, it should run the .bash script in the container.
Is this really possible?
If it is possible, I want to know whether I need to add this script as a deployment or a job.
The first step, before looking at Kubernetes, is to configure a web server to run your script. This could be a generic web server like nginx or Apache, and you could add your script as a CGI script. There are plenty of tutorials out there that explain how to write CGI scripts.
Depending on the requirements of your application, a simple HTTP hook server might be a better match. Have a look at, for example, https://github.com/adnanh/webhook.
Either way, try this out with just Docker first, before trying to create a pod and potentially a service and an ingress in Kubernetes.
In a second step, to be able to access your service (the server invoking your script), you need to create a pod, probably through a deployment, and potentially a service and an ingress for it.
Kubernetes jobs are for running a script (or other program) once. They're most useful for automating maintenance tasks for your application.
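For the webhook route mentioned above, the hooks file is what maps a URL path to your script. A minimal sketch (the id, script path and port are illustrative; the fields follow the layout documented in the adnanh/webhook README):

# hooks.yaml for github.com/adnanh/webhook, started with: webhook -hooks hooks.yaml -port 9000
# The script is then reachable at http://<host>:9000/hooks/jobapi
- id: jobapi
  execute-command: /opt/scripts/job.bash          # hypothetical path to the .bash file
  command-working-directory: /opt/scripts
  include-command-output-in-response: true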
What I would try to do is run the shell script from a PHP file; otherwise you are going to need some sort of driver to trigger the script.
So you would have the script as a regular executable, and upon the request PHP would execute it via the shell.
Actually you can make it work like an API: domain.com/job1 could execute job1, domain.com/jobn could execute jobn, and so on.
Now, the setup I'm describing would only work as a Deployment, since you want the server to be always up and ready to receive requests.
Create an Ingress (use a NodePort service if it is external-facing) which will call a Service.
The Service maps to the labels defined on the pod that runs the script. This pod can come from a Deployment or be a plain pod.
Make this Service expose that pod/Deployment.
The Deployment can run the pods with the shell script, or a single pod can run the shell script as well.
Ingress service:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: "domainame.com"
    http:
      paths:
      - path: /jobapi
        pathType: Prefix
        backend:
          serviceName: my-service
          servicePort: 8080
my-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
  - nodePort: 30007
    port: 8080
    targetPort: 8080
Run your bash script; this can be done by defining a Deployment or a pod.
Pod:
k run myapp --image=nginx --labels=app=MyApp --port=8080 -- /bin/sh -c "echo 'Im up'"
or
Deployment.yaml
controllers/nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: MyAppdep
spec:
  replicas: 2
  selector:
    matchLabels:
      app: MyApp
  template:
    metadata:
      labels:
        app: MyApp
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        command: ["/bin/sh", "-c", "echo 'test'"]
        ports:
        - containerPort: 8080

Clean deploy of Spring boot microservices with Config Server

We have configured a Kubernetes cluster where we deploy various Spring Boot services, and one of them is a Spring Cloud Config Server.
Our trouble is that when we start the cluster, all the services try to connect to the config server to download their configuration, and since the config server has not yet started, they all fail. This causes Kubernetes to retry the initialization, consuming so many resources that the config server itself cannot start.
We are wondering if there is a way to initialize the services so that they do not overload the cluster, or so that they peacefully wait until the config server starts. As of now, all services start at once and we have to wait about 20 minutes until the cluster works its way out.
Thanks in advance
You can use init containers to poll the server until it is online. An example would be:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      initContainers:
      - name: wait-config-server
        image: busybox
        command: ["sh", "-c", "for i in $(seq 1 300); do nc -zvw1 config-server 8080 && exit 0 || sleep 3; done; exit 1"]
      containers:
      - name: web
        image: my-image
        ports:
        - containerPort: 80
        ...
In this example I am using an nc command to probe the server, but you can also use wget, curl or whatever suits you best.
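A wget-based variant of the same wait loop might look like this (a sketch; it assumes the config server answers HTTP on config-server:8080, for instance on a Spring Boot actuator health endpoint):

initContainers:
- name: wait-config-server
  image: busybox
  # Keep polling until the config server responds over HTTP, then let the main container start.
  command: ["sh", "-c", "until wget -qO- http://config-server:8080/actuator/health; do echo waiting for config server; sleep 3; done"]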
There are various options to do the same. Choose the one that best suits you:
You can try applying a liveness or readiness probe to the config server. In this manner, all containers can wait until the config server is up and running and only then try to connect to it (see the probe sketch after this list).
You can use a Consul service running as a quorum of 3 or 5 instances, and design the clients to connect to Consul and wait until the config server is up and running.
You can write a startup script that triggers the connection establishment with the config server, and only after that starts the containers.
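To illustrate the probe option from the list above, the config server's own deployment could declare a readiness probe along these lines (a sketch; it assumes the Spring Boot actuator health endpoint is exposed and that the config server listens on 8888):

containers:
- name: config-server
  image: my-config-server        # hypothetical image name
  ports:
  - containerPort: 8888
  readinessProbe:
    httpGet:
      path: /actuator/health
      port: 8888
    initialDelaySeconds: 20
    periodSeconds: 5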

Minikube VM hyperkit: Spring Boot: connect to local machine

I have a minikube cluster running on macOS and a simple Spring Boot REST API that connects to Redis and MongoDB, which I have installed and running locally.
I do not want to run Redis / MongoDB in a Docker container.
I will probably run them remotely in the cloud, therefore I would probably just connect to an external IP address.
What I don't understand is which IP address I should use to connect to my localhost machine.
I start up my minikube with the hyperkit VM driver.
Edit:
I also tried to start using virtualbox:
minikube start --vm-driver=virtualbox
In my spring boot application, I've configured:
spring.data.mongodb.host = 10.0.2.2
spring.redis.host = 10.0.2.2
But still, I get connection errors:
This works when I run the application locally on my host machine.
For the sake of completeness, this is my yaml file:
---
apiVersion: v1
kind: Service
metadata:
  name: posts-api
  labels:
    app: posts-api
    env: dev
spec:
  type: NodePort
  selector:
    app: posts-api
  ports:
  - protocol: TCP
    port: 8083
    name: http
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: posts-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: posts-api
    spec:
      containers:
      - name: posts-api
        image: kimgysen/posts-api:latest
        ports:
        - containerPort: 8083
        livenessProbe:
          httpGet:
            path: /health
            port: 8083
          initialDelaySeconds: 120
          timeoutSeconds: 3
I'll give you the answer I gave to someone with the same problem (different tech):
Kubernetes pod unable to connect to rabbit mq instance running locally
Replace the IP and port number, and the Service and Endpoints names as appropriate.
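The pattern in that answer is a Service with no selector plus a manually maintained Endpoints object pointing at the host. A minimal sketch for MongoDB, assuming the host is reachable from the minikube VM at 192.168.64.1 (that IP is purely illustrative; use whatever address your host has on the minikube network):

apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  ports:
  - port: 27017
    targetPort: 27017
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mongodb        # must match the Service name so Kubernetes associates them
subsets:
- addresses:
  - ip: 192.168.64.1   # illustrative host IP as seen from inside minikube
  ports:
  - port: 27017

With that in place, the application would use spring.data.mongodb.host=mongodb instead of a raw IP.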

Eureka and Kubernetes

I am putting together a proof of concept to help identify gotchas using Spring Boot/Netflix OSS and Kubernetes together. This is also to prove out related technologies such as Prometheus and Grafana.
I have a Eureka service set up which starts with no trouble within my Kubernetes cluster. It is named discovery and was given the name "discovery-1551420162-iyz2c" when added to K8s using
For my config server, I am trying to use Eureka based on a logical URL, so in my bootstrap.yml I have
server:
  port: 8889
eureka:
  instance:
    hostname: configserver
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://discovery:8761/eureka/
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/xyz/microservice-config
and I am starting this using
kubectl run configserver --image=xyz/config-microservice --replicas=1 --port=8889
This service ends up running as configserver-3481062421-tmv4d. I then see exceptions in the config server logs as it tries to locate the Eureka instance and cannot.
I have the same setup for this using docker-compose locally with links and it starts the various containers with no trouble.
discovery:
  image: xyz/discovery-microservice
  ports:
    - "8761:8761"
configserver:
  image: xyz/config-microservice
  ports:
    - "8888:8888"
  links:
    - discovery
How can I set up something like eureka.client.serviceUrl so my microservices can locate their peers without knowing fixed IP addresses within the K8s cluster?
How can I set up something like eureka.client.serviceUrl?
You have to have a Kubernetes Service on top of the Eureka pods/deployment, which will then provide you a referable IP address and port number. Then use that referable address, rather than a pod's address, to look up the Eureka service on port 8761.
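For example, assuming the Eureka pods carry a label such as app: discovery (the label name is an assumption here), a Service like the following gives them the stable DNS name discovery, which matches the defaultZone http://discovery:8761/eureka/ used in the bootstrap.yml above:

apiVersion: v1
kind: Service
metadata:
  name: discovery
spec:
  selector:
    app: discovery   # assumed label on the Eureka pods
  ports:
  - port: 8761
    targetPort: 8761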
To address the further question about HA configuration of Eureka:
You shouldn't have more than one pod/replica of Eureka per k8s Service (remember, pods are ephemeral; you need a referable IP address/domain name for the Eureka service registry). To achieve high availability (HA), spin up more k8s Services with one pod in each.
Eureka service 1 --> a single pod
Eureka Service 2 --> another single pod
..
..
Eureka Service n --> another single pod
So now you have a referable IP/domain name (the IP of the k8s Service) for each of your Eureka instances, and they can register with each other.
Feeling like it's overkill?
If all your services are in the same Kubernetes namespace, you can achieve everything (well, almost everything, except client-side load balancing) that Eureka offers through a k8s Service + the KubeDNS add-on. Read this article by Christian Posta.
Edit
Instead of Services with one pod each, you can make use of StatefulSets as Stefan Ocke pointed out.
Like a Deployment, a StatefulSet manages Pods that are based on an
identical container spec. Unlike a Deployment, a StatefulSet maintains
a sticky identity for each of their Pods. These pods are created from
the same spec, but are not interchangeable: each has a persistent
identifier that it maintains across any rescheduling.
Regarding HA configuration of Eureka in Kubernetes:
You can (meanwhile) use a StatefulSet for this instead of creating a service for each instance. The StatefulSet guarantees stable network identity for each instance you create.
For example, the deployment could look like the following yaml (StatefulSet + headless Service).
There are two Eureka instances here, according to the DNS
naming rules for StatefulSets (assuming namespace is "default"):
eureka-0.eureka.default.svc.cluster.local and
eureka-1.eureka.default.svc.cluster.local
As long as your pods are in the same namespace, they can reach Eureka also as:
eureka-0.eureka
eureka-1.eureka
Note: The docker image used in the example is from https://github.com/stefanocke/eureka. You might want to choose or build your own.
---
apiVersion: v1
kind: Service
metadata:
  name: eureka
  labels:
    app: eureka
spec:
  ports:
  - port: 8761
    name: eureka
  clusterIP: None
  selector:
    app: eureka
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 2
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: stoc/eureka
        ports:
        - containerPort: 8761
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        # Due to camelcase issues with "defaultZone" and "preferIpAddress", _JAVA_OPTIONS is used here
        - name: _JAVA_OPTIONS
          value: -Deureka.instance.preferIpAddress=false -Deureka.client.serviceUrl.defaultZone=http://eureka-0.eureka:8761/eureka/,http://eureka-1.eureka:8761/eureka/
        - name: EUREKA_CLIENT_REGISTERWITHEUREKA
          value: "true"
        - name: EUREKA_CLIENT_FETCHREGISTRY
          value: "true"
        # The hostnames must match the eureka serviceUrls, otherwise the replicas are reported as unavailable in the eureka dashboard
        - name: EUREKA_INSTANCE_HOSTNAME
          value: ${MY_POD_NAME}.eureka
  # No need to start the pods in order. We just need the stable network identity
  podManagementPolicy: "Parallel"
@Stefan Ocke I'm trying to do the same setup, but with my own image of the Eureka server, and I keep getting this error:
Request execution failed with message: java.net.ConnectException: Connection refused (Connection refused)
2019-09-27 06:27:03.363 ERROR 1 --- [ main] c.n.d.s.t.d.RedirectingEurekaHttpClient : Request execution error. endpoint=DefaultEndpoint{ serviceUrl='http://eureka-1.eureka:8761/eureka/}
Here are my configurations:
Eureka Spring Properties:
server.port=${EUREKA_PORT}
spring.security.user.name=${EUREKA_USERNAME}
spring.security.user.password=${EUREKA_PASSWORD}
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
eureka.instance.prefer-ip-address=false
eureka.server.wait-time-in-ms-when-sync-empty=0
eureka.server.eviction-interval-timer-in-ms=15000
eureka.instance.leaseRenewalIntervalInSeconds=30
eureka.instance.leaseExpirationDurationInSeconds=30
eureka.instance.hostname=${EUREKA_INSTANCE_HOSTNAME}
eureka.client.serviceUrl.defaultZone=http://eureka-0.eureka:8761/eureka/,http://eureka-1.eureka:8761/eureka/
StatefulSet Config:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: "eureka"
  podManagementPolicy: "Parallel"
  replicas: 2
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: "my-image"
        command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/app/eureka-service.jar"]
        ports:
        - containerPort: 8761
        env:
        - name: EUREKA_PORT
          value: "8761"
        - name: EUREKA_USERNAME
          value: "theusername"
        - name: EUREKA_PASSWORD
          value: "thepassword"
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: EUREKA_INSTANCE_HOSTNAME
          value: ${MY_POD_NAME}.eureka
Service Config:
apiVersion: v1
kind: Service
metadata:
  name: eureka
  labels:
    app: eureka
spec:
  clusterIP: None
  selector:
    app: eureka
  ports:
  - port: 8761
    targetPort: 8761
Ingress Controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: eureka
          servicePort: 8761
You have to install a Kubernetes kube-dns server to resolve names to their IPs, and then expose your Eureka pods as a service. See the Kubernetes docs for more info on how to create DNS entries and services.
@random_dude, what would be the case if I create 2 or 3 replicas of Eureka? It turned out that when I start a microservice 'X', it gets registered in all Eureka replicas, but when it goes down, only one replica gets the update! The others still consider the microservice instance as running.
I got exactly this problem and it got resolved by adding an environment variable for the pod. This has the answer. A sample env variable for my pod is shown below,
I'm wondering where you put that config: is it on the service registry side (on Eureka), or on the client side (where we want to connect to Eureka)?
My current case is that all configuration is placed in a repo, the Eureka config as well, which makes the config static.
