I have a WebSocket .NET application inside a K8s cluster. I need to implement sticky sessions for the WebSocket using open-source NGINX.
I have read the documentation of NGINX and Kubernetes.
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#session-affinity
It says we can use the configuration below for sticky sessions:
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "ingresscoookie"
nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
but this does not seem to work. I have also tried the example code provided by Kubernetes here: https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/affinity/cookie/ingress.yaml.
That example works for me, so I believe cookie-based session affinity simply does not work for WebSockets.
Digging further into the documentation, it says I can use an IP-hashing algorithm instead, so I tried the annotation below.
nginx.ingress.kubernetes.io/upstream-hash-by: "$remote_addr"
This also failed; the requests are still balanced using the default algorithm.
How can I achieve session persistence?
Stale post, I know, but it might help others. Did you remove/comment out the affinity and session-cookie annotations?
The snippet below works for me, but notably it does not work if you've left the other annotations in. (Like you, I couldn't get cookie-based affinity to work, and I needed sticky sessions because anti-forgery tokens were created locally on my web services.)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/ssl-services: "hello-world-svc"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/upstream-hash-by: $remote_addr
spec:
  tls:
    - hosts:
        - nginx.mydomain.co.uk
      secretName: tls-certificate
  rules:
    - host: nginx.mydomain.co.uk
      http:
        paths:
          - path: /web1(/|$)(.*)
            backend:
              serviceName: hello-world-svc
              servicePort: 80
We have Fluentd running on multiple K8s clusters, and with Fluentd we are using Elasticsearch to store the logs from all remote K8s clusters.
There are a few applications for which we do not want Fluentd to push logs to Elasticsearch.
For example, there is a pod with a container named testing, or with the label mode: testing, and we want Fluentd not to process the logs of this container but to drop them instead.
Looking for suggestions on how we can achieve this.
Thanks
Here is an explanation of how to do that with Fluentd.
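For reference, here is a rough sketch of what that Fluentd-side filter could look like, assuming the Fluentd config is mounted from a ConfigMap and the kubernetes metadata filter has already enriched each record; the ConfigMap name, namespace, and match pattern are placeholders:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config          # placeholder name
  namespace: logging            # placeholder namespace
data:
  filter.conf: |
    # Drop records from a container named "testing"
    # or from pods labelled mode: testing.
    <filter kubernetes.**>
      @type grep
      <exclude>
        key $.kubernetes.container_name
        pattern /^testing$/
      </exclude>
      <exclude>
        key $.kubernetes.labels.mode
        pattern /^testing$/
      </exclude>
    </filter>
With multiple <exclude> sections, the grep filter should drop a record when either pattern matches, which covers both the container name and the pod label from the question.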
But I would like to recommend another tool developed by the same team: Fluent Bit. It is very light (it needs <1 MB of memory, whereas Fluentd needs about 40 MB) and is more suitable for K8s. You can install it as a DaemonSet (a pod on each node) alongside a Fluentd deployment (1-3 replicas): every Fluent Bit pod collects the logs and forwards them to a Fluentd instance, which aggregates them and sends them to ES. In this case, you can easily filter out records using pod annotations (more info):
apiVersion: v1
kind: Pod
metadata:
  name: apache-logs
  labels:
    app: apache-logs
  annotations:
    fluentbit.io/exclude: "true"
spec:
  containers:
    - name: apache
      image: edsiper/apache_logs
I am new to Grafana and Spring Boot. I am trying to create a Spring Boot application and use the Grafana SimpleJSON Datasource plugin to get data from my Spring Boot APIs and create graphs. I'm following the instructions here: https://grafana.com/grafana/plugins/grafana-simple-json-datasource/. Right now I am just hard-coding data into my Spring Boot app.
My question is: are there better plugins or approaches people would suggest? SimpleJSON seems to require a very specific format of JSON response, and I don't see many detailed docs online. Is there any way I can have more freedom in the JSON responses of my APIs and still set the parameters needed to plot graphs in Grafana?
Thank you.
You can use Micrometer with the Spring Boot Actuator framework to expose metrics to a time-series database such as Prometheus. Or you can simply write log files, collect them with Promtail, and store them in Loki.
At first this might seem like a lot of work to get these things running, but it might be worth it.
I found it surprisingly simple to get the whole monitoring stack running locally with docker-compose:
Add services grafana, prometheus, promtail and loki.
Configure each of them.
The docker-compose file might look like this:
version: "3"
services:
  prometheus:
    image: prom/prometheus:latest
    command:
      - --config.file=/etc/prometheus/prometheus.yml
    volumes:
      - ./config/prometheus/prometheus_local.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
  loki:
    depends_on:
      - promtail
    image: grafana/loki:latest
    volumes:
      - ./config/loki:/etc/loki
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/loki-local-config.yaml
  promtail:
    image: grafana/promtail:latest
    volumes:
      - .log:/var/log
      - ./config/promtail/promtail-docker-config.yaml:/etc/promtail/config.yml
    command: -config.file=/etc/promtail/config.yml
  grafana:
    depends_on:
      - prometheus
      - loki
    image: grafana/grafana:latest
    volumes:
      - ./config/grafana/grafana.ini:/etc/grafana/grafana.ini
      - ./config/grafana/provisioning:/etc/grafana/provisioning
      - ./config/grafana/dashboards:/etc/grafana/dashboards
    ports:
      - "3000:3000"
Sample config files for provisioning Grafana can be found in the Grafana Git repository. Loki provides sample configurations for itself and Promtail. For Prometheus, see here.
The official documentation about installing Grafana with Loki can be found here. There is also documentation about the Prometheus configuration.
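For the Grafana provisioning volume mounted above, a minimal datasource file could look like this sketch (the file path and datasource names are just conventions; the URLs use the docker-compose service names):
# ./config/grafana/provisioning/datasources/datasources.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090    # compose service name from above
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100          # compose service name from above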
Now you need to configure your application. Enable and expose the prometheus endpoint as described in the Spring Boot documentation. Configure a log file appender to write the logs to the .log directory configured above.
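For example, one simple way is a minimal application.yml like this sketch; it assumes the micrometer-registry-prometheus dependency is on the classpath and that the app writes its log file into the .log directory mounted into the Promtail container above:
# application.yml (sketch)
management:
  endpoints:
    web:
      exposure:
        include: prometheus    # exposes /actuator/prometheus for scraping
logging:
  file:
    name: .log/app.log         # picked up by Promtail via the .log volume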
Your logs will now get collected by Promtail and sent to Loki. Metrics will get stored in Prometheus. You can use PromQL and LogQL to write Grafana queries and render the results in Grafana panels.
With this solution you can add tags to your data that can later be used by Grafana.
I've followed Kong's guide for using cert-manager, and it works great out of the box. I've installed the demo server for the path '/' of "my.domain.com" with cert-manager annotations and a TLS spec. I realized that when I add other ingresses, e.g. "/api" within "my.domain.com", I don't need to repeat myself with ingress annotations, TLS spec, hostname, etc. This stopped working as soon as I removed the demo server from '/'.
Is it possible to specify default annotations / a TLS spec for ingresses without having a 'defaultBackend'?
I'm new to setting up k8s, so I might be missing something obvious.
I need to implement tracing with OpenTracing (OpenTelemetry) for Datadog in my Spring Boot application with a REST controller.
I have a given Kubernetes endpoint to which I should send traces.
Not sure I fully grasp the issue. Here are some steps to collect your traces:
Enable trace collection on Kubernetes and open the relevant port (8126) (doc).
Configure your app to send traces to the right container. Here is an example to adapt to your situation (doc on Java instrumentation):
env:
  - name: DD_AGENT_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
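For completeness, here is a rough sketch of how the application container might be wired up, assuming the node-local Datadog Agent listens on port 8126 and dd-java-agent.jar has been baked into the image at /app/dd-java-agent.jar (the container name, image, and jar path are placeholders):
containers:
  - name: my-spring-app                       # placeholder container name
    image: my-registry/my-spring-app:latest   # placeholder image
    env:
      - name: DD_AGENT_HOST                   # point traces at the node-local agent
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
      - name: JAVA_TOOL_OPTIONS               # attach the Datadog Java tracer at startup
        value: "-javaagent:/app/dd-java-agent.jar"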
Just in case, more info on OpenTracing is available here.
I am creating a Facebook multiplayer game and am currently evaluating my tech stack.
My game would need to use WebSockets, and I would like to use Spring Boot. However, I cannot find information on whether a WebSocket server will work nicely in Kubernetes. For example, if I deploy 5 instances of the server in Kubernetes pods, will load balancing/forwarding work correctly for WebSockets between the game clients loaded in browsers and the servers in Kubernetes, and is there any additional work needed to enable it? Each pod/server would be stateless, and the current game info for each player would be stored in/read from Redis or some other in-memory DB.
If this would not work, how can I work around it and still use Kubernetes? Maybe add one instance of RabbitMQ to the stack just for the WebSockets?
An adequate way to handle this would be to use "sticky sessions". This is where the user is pinned to a specific pod based on a cookie set by the ingress controller.
Here is an example of configuring the Ingress resource object to use sticky sessions:
#
# https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/affinity/cookie
#
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test-sticky
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/affinity: "cookie"
    ingress.kubernetes.io/session-cookie-name: "route"
    ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  rules:
    - host: $HOST
      http:
        paths:
          - path: /
            backend:
              serviceName: $SERVICE_NAME
              servicePort: $SERVICE_PORT
Now, with that being said, the proper way to handle this would be to use a message broker or a WebSocket implementation that supports clustering, such as SocketCluster (https://socketcluster.io).