Websockets + Spring Boot + Kubernetes

I am creating a Facebook multiplayer game and am currently evaluating my tech stack.
My game needs to use websockets and I would like to use Spring Boot, but I cannot find information on whether a websocket server will work nicely in Kubernetes. For example, if I deploy 5 instances of the server in Kubernetes pods, will load balancing/forwarding work correctly for websockets between the game clients loaded in browsers and the servers in Kubernetes, and is there any additional work needed to enable it? Each pod/server would be stateless, and the current game info for each player would be stored in and read from Redis or some other in-memory DB.
If this would not work, how can I work around it and still use Kubernetes? Maybe add one instance of RabbitMQ to the stack just for the websockets?

An adequate way to handle this would be to use "sticky sessions", where the user is pinned to a specific pod based on a cookie set by the ingress.
Here is an example of configuring the Ingress resource object to use sticky sessions:
#
# https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/affinity/cookie
#
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test-sticky
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/affinity: "cookie"
    ingress.kubernetes.io/session-cookie-name: "route"
    ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  rules:
  - host: $HOST
    http:
      paths:
      - path: /
        backend:
          serviceName: $SERVICE_NAME
          servicePort: $SERVICE_PORT
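One extra note for websockets specifically (not covered in the snippet above): the default nginx proxy timeouts will drop idle long-lived connections, so you will often also want to raise them. The values below are only illustrative, and the annotation prefix may differ depending on your ingress-nginx version:

nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"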
That being said, the proper way to handle this would be to use a message broker or a websocket implementation that supports clustering, such as socketcluster (https://socketcluster.io).
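If you go the broker route, here is a rough sketch of what "one instance of RabbitMQ just for the websockets" from the question could look like in the cluster: a single-replica Deployment plus a ClusterIP Service that every stateless websocket pod talks to. All names and ports are made up for illustration, and the rabbitmq_stomp plugin would still need to be enabled separately if you want to use it as a STOMP relay from Spring:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: game-broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: game-broker
  template:
    metadata:
      labels:
        app: game-broker
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3-management
        ports:
        - containerPort: 5672   # AMQP
        - containerPort: 61613  # STOMP (requires the rabbitmq_stomp plugin)
---
apiVersion: v1
kind: Service
metadata:
  name: game-broker
spec:
  selector:
    app: game-broker
  ports:
  - name: amqp
    port: 5672
  - name: stomp
    port: 61613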

Related

Fluentd | how to drop logs of a specific container [duplicate]

We have Fluentd running on multiple K8s clusters, and with Fluentd we use Elasticsearch to store the logs from all the remote K8s clusters.
There are a few applications for which we do not want Fluentd to push logs to Elasticsearch.
For example, there is a pod running a container named testing, or carrying the label mode: testing, and we want Fluentd to not process the logs of this container and to drop them instead.
Looking for suggestions on how we can achieve this.
Thanks
Here is an explanation of how to do that with Fluentd.
But I would like to recommend another tool developed by the same team: Fluent Bit. It is very light (it needs <1 MB of memory, whereas Fluentd needs about 40 MB) and more suitable for K8s. You can install it as a DaemonSet (a pod on each node) alongside a Fluentd deployment (1-3 replicas): every Fluent Bit pod collects the logs and forwards them to a Fluentd instance, which aggregates them and sends them to ES. In this case, you can easily filter the records using pod annotations (more info):
apiVersion: v1
kind: Pod
metadata:
  name: apache-logs
  labels:
    app: apache-logs
  annotations:
    fluentbit.io/exclude: "true"
spec:
  containers:
  - name: apache
    image: edsiper/apache_logs

Connecting spring boot to postgres statefulset in Kubernetes

I'm new to Kubernetes and I'm learning about StatefulSets. For stateful applications, where the identity of the pods matters, we use StatefulSets instead of simple Deployments so each pod can have its own persistent volume. Writes need to be directed to the master pod, while read operations can go to the slaves. So pointing at the ClusterIP service attached to the StatefulSet won't guarantee replication; instead, we need to use a headless service that points to the master.
My questions are the following :
How do I edit application.properties in the Spring Boot project to use the slaves for read operations (normal ClusterIP service) and the master for write/read operations (headless service)?
In case that is unnecessary and the headless service does this work for us, how does it work exactly, since it points to the master?
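For reference, a minimal sketch of the two kinds of services the question is contrasting (all names are made up): a regular ClusterIP service that load balances across every replica of the StatefulSet, and a headless service that gives each pod a stable DNS name (for example postgres-0.postgres-headless) that a write datasource could point at:

apiVersion: v1
kind: Service
metadata:
  name: postgres-read        # regular ClusterIP service: one virtual IP, load balanced
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-headless    # headless service: no virtual IP, per-pod DNS records
spec:
  clusterIP: None
  selector:
    app: postgres
  ports:
  - port: 5432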

Google Kubernetes Engine Spring Boot App Can't Connect To Database Within Same Network

I have a Spring Boot app deployed to GKE in the us-central1 region. There is a Postgres database that runs on a Compute Engine VM instance. Both are part of the 'default' VPC network. I can ping this database by its hostname from within one of the GKE pods. However, when the Spring Boot app launches and attempts to connect to the database using the same hostname in the properties file, like so, I get a connection timeout error and the app fails to start up:
spring.datasource.url=jdbc:postgresql://database01:5432/primary
We have similar connections to this database from other VM instances that work fine. There is also a similar setup with Kafka, and the app is likewise unable to resolve the broker hostnames. What am I missing?
If your database runs as an external database outside the GKE cluster, then you need to create an "ExternalName" service:
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  type: ExternalName
  externalName: usermgmtdb.csw6ylddqss8.us-east-1.rds.amazonaws.com
In this case, you need to know the "externalName" of your external PostgreSQL service, which is its external DNS name.
Otherwise, if you deploy PostgreSQL into your Kubernetes cluster, you need to create a PersistentVolumeClaim, a StorageClass, a PostgreSQL Deployment and a PostgreSQL ClusterIP service.
See the manifest examples here: https://github.com/stacksimplify/aws-eks-kubernetes-masterclass/tree/master/04-EKS-Storage-with-EBS-ElasticBlockStore/04-03-UserManagement-MicroService-with-MySQLDB/kube-manifests
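For orientation, here is a stripped-down sketch of the in-cluster pieces that list refers to (PersistentVolumeClaim, Deployment and ClusterIP service); all names, sizes and credentials are placeholders, and the storage class depends on your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 4Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
      - name: postgresql
        image: postgres:13
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_PASSWORD
          value: "changeme"   # placeholder; use a Secret in a real setup
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql          # the hostname the Spring Boot JDBC URL would use
spec:
  selector:
    app: postgresql
  ports:
  - port: 5432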

Implement OpenTracing in Spring Boot for Datadog

I need to implement OpenTracing (OpenTelemetry) tracing for Datadog in my Spring Boot application with a REST controller.
I have been given a Kubernetes endpoint to which I should send the traces.
Not sure I fully grasp the issue. Here are some steps to collect your traces:
Enable trace collection on Kubernetes and open the relevant port (8126): doc
Configure your app to send traces to the right container. Here is an example to adapt to your situation (doc on Java instrumentation):
env:
  - name: DD_AGENT_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
Just in case, more info on OpenTracing here.
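In case it helps, here is roughly where that env block sits inside a Deployment manifest; everything apart from the DD_AGENT_HOST/status.hostIP part is a placeholder:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-spring-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-spring-app
  template:
    metadata:
      labels:
        app: my-spring-app
    spec:
      containers:
      - name: app
        image: registry.example.com/my-spring-app:latest   # placeholder image
        env:
        # Point the tracer at the Datadog agent running on the same node.
        - name: DD_AGENT_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP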

nginx-ingress sticky-session for websocket application

I have a websocket .NET application inside a K8s cluster. I need to implement sticky sessions for the websocket using open-source NGINX.
I have read the NGINX and Kubernetes documentation:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#session-affinity
It says we can use the configuration below for sticky sessions:
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "ingresscoookie"
nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
But this does not seem to work. I have tried the example code provided by Kubernetes here: https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/affinity/cookie/ingress.yaml.
That example works for me, so I believe cookie-based session affinity does not work for websockets.
Digging further into the documentation, it says that I can use an IP hashing algorithm, so I tried the annotation below:
nginx.ingress.kubernetes.io/upstream-hash-by: "$remote_addr"
This also failed; the requests are still balanced using the default algorithm.
How can I achieve session persistence?
Stale post I know, but it might help others. Did you remove/comment out the affinity and session annotations?
This snippet works for me, but notably it doesn't work if you leave the other annotations in. (Like you, I couldn't get cookie-based affinity to work, and I needed sticky sessions because antiforgery tokens were created locally to my web services.)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/ssl-services: "hello-world-svc"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/upstream-hash-by: $remote_addr
spec:
  tls:
  - hosts:
    - nginx.mydomain.co.uk
    secretName: tls-certificate
  rules:
  - host: nginx.mydomain.co.uk
    http:
      paths:
      - path: /web1(/|$)(.*)
        backend:
          serviceName: hello-world-svc
          servicePort: 80
