I need to implement tracing with OpenTracing (OpenTelemetry) for Datadog in my Spring Boot application with a REST controller.
I have been given a Kubernetes endpoint to which I should send traces.
Not sure I fully grasp the issue. Here are some steps to collect your traces:
Enable trace collection on Kubernetes and open the relevant port (8126) (doc).
Configure your app to send traces to the right container. Here is an example to adapt based on your situation (doc on Java instrumentation).
env:
  - name: DD_AGENT_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
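For context, here is a sketch of how that env entry might sit in a Deployment's container spec. The names and image are placeholders, and the app is assumed to be started with the dd-java-agent attached:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-spring-app            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-spring-app
  template:
    metadata:
      labels:
        app: my-spring-app
    spec:
      containers:
        - name: app
          image: my-registry/my-spring-app:latest   # placeholder image
          # Assumed entrypoint: java -javaagent:/dd-java-agent.jar -jar app.jar
          env:
            # Points the tracer at the Datadog agent on the node
            - name: DD_AGENT_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
```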
Just in case, there is more info on OpenTracing here.
I have a few Spring Boot microservices with Actuator and exposed Prometheus metrics. For example:
# HELP process_uptime_seconds The uptime of the Java virtual machine
# TYPE process_uptime_seconds gauge
process_uptime_seconds 3074.971
But there is no application tag, so I'm not able to bind it to a certain application within a Grafana dashboard.
Also, I expect to have a few instances of some microservices, so in general it would be great to add an instance tag as well.
Is there any way to customize the standard metrics with these tags?
The best way to add tags is to use Prometheus service discovery. This keeps these tags out of your application code and keeps the application from being concerned about where it runs.
However, sometimes you absolutely need those extra tags (because the service has insight that Prometheus service discovery isn't surfacing), and in that case you can't use the Java simple client directly (the Go client does support this, though).
It turns out this feature is offered via a Micrometer feature called 'Common Tags', which wraps the Prometheus Java client. You set up your registry so the tags are applied via a config() call:
registry.config().commonTags("stack", "prod", "region", "us-east-1");
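In a Spring Boot app, one way to apply this to every registry is a MeterRegistryCustomizer bean. This is a minimal sketch, assuming Micrometer and Actuator are on the classpath; the tag values are placeholders:

```java
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.boot.actuate.autoconfigure.metrics.MeterRegistryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MetricsConfig {

    // Applies common tags to every meter in every auto-configured registry,
    // so each sample in the Prometheus scrape output carries them.
    @Bean
    public MeterRegistryCustomizer<MeterRegistry> commonTags() {
        return registry -> registry.config()
                .commonTags("application", "my-service", "region", "us-east-1");
    }
}
```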
What I usually do is use Maven filtering on the resource files (e.g. application.yml), which replaces Maven-known properties like project.artifactId. Spring's configuration then takes care of interpolating management.metrics.tags.application.
An application.yml example:
spring:
  application:
    name: ${project.artifactId}

management:
  metrics:
    tags:
      application: ${spring.application.name}
I am new to Grafana and Spring Boot. I am trying to create a Spring Boot application, and use Grafana SimpleJSON Datasource plugin to get data from my Spring Boot APIs and create graphs. I'm following instructions here https://grafana.com/grafana/plugins/grafana-simple-json-datasource/. Right now I am just hard-coding data into my Spring Boot App.
My question is - are there other, better plugins or approaches people would suggest? SimpleJSON seems to require a very specific format of JSON response, and I don't see many detailed docs online. Is there any way that I can be freer with the JSON responses of my APIs and still set the parameters needed to plot graphs in Grafana?
Thank you.
You can use Micrometer with Spring Boot Actuator framework to expose metric data to a time series database such as Prometheus. Or you can simply write log files, collect them with Promtail and store them in Loki.
At first this might seem like a lot of work to get these things running, but it might be worth it.
I found it surprisingly simple to get the whole monitoring stack running locally with docker-compose:
Add services grafana, prometheus, promtail and loki.
Configure each of them.
The docker-compose might look like this:
version: "3"
services:
  prometheus:
    image: prom/prometheus:latest
    command:
      - --config.file=/etc/prometheus/prometheus.yml
    volumes:
      - ./config/prometheus/prometheus_local.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
  loki:
    depends_on:
      - promtail
    image: grafana/loki:latest
    volumes:
      - ./config/loki:/etc/loki
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/loki-local-config.yaml
  promtail:
    image: grafana/promtail:latest
    volumes:
      - .log:/var/log
      - ./config/promtail/promtail-docker-config.yaml:/etc/promtail/config.yml
    command: -config.file=/etc/promtail/config.yml
  grafana:
    depends_on:
      - prometheus
      - loki
    image: grafana/grafana:latest
    volumes:
      - ./config/grafana/grafana.ini:/etc/grafana/grafana.ini
      - ./config/grafana/provisioning:/etc/grafana/provisioning
      - ./config/grafana/dashboards:/etc/grafana/dashboards
    ports:
      - "3000:3000"
Sample config files for provisioning Grafana can be found in the Grafana git repository. Loki provides sample configurations for itself and Promtail. For Prometheus, see here.
The official documentation about installing Grafana with Loki can be found here. There is also documentation about the Prometheus configuration.
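To give a rough idea, a minimal prometheus_local.yml that scrapes a Spring Boot app might look like the following. The job name and target are assumptions; host.docker.internal reaches an app running on the Docker host:

```yaml
scrape_configs:
  - job_name: "spring-boot-app"
    metrics_path: "/actuator/prometheus"
    scrape_interval: 15s
    static_configs:
      - targets: ["host.docker.internal:8080"]
```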
Now you need to configure your application. Enable and expose the prometheus endpoint as described in the Spring Boot documentation. Configure a log file appender to write the logs to the log directory .log configured above.
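On the application side, that amounts to something like the following application.yml fragment. This is a sketch: the endpoint property comes from the Spring Boot Actuator docs, and the file path assumes the .log directory mounted into Promtail above:

```yaml
management:
  endpoints:
    web:
      exposure:
        include: "health,prometheus"   # expose /actuator/prometheus for scraping

logging:
  file:
    name: .log/app.log                 # written where Promtail picks it up
```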
Your logs will now get collected by Promtail and sent to Loki. Metrics will get stored in Prometheus. You can use PromQL and LogQL to write Grafana queries and render the results in Grafana panels.
With this solution you can add tags to your data that can later be used by grafana.
We have a Spring Boot microservice which has to get some data from an old/legacy system. This microservice exposes a modern external REST API. Sometimes we have to issue 7-10 requests to the legacy system in order to get all the data we need for a single API call. Unfortunately we can't use Reactor / WebClient and have to stick with WebServiceTemplate to issue those "legacy" calls. We also can't use the approach from "Reactive Spring WebClient - Making a SOAP call".
What is the best way to scale such a microservice in Kubernetes? We have big concerns that the thread pool used for parallel WebServiceTemplate invocations will be depleted very fast, but I'm not sure that creating and exposing a custom metric based on active thread count / thread pool size is a good idea.
Any advice would be helpful.
Enable Prometheus exporter in Spring
Make sure the metrics are scraped. You're going to watch for a threadpool_size metric. Refer to your k8s/Prometheus distro docs to get Prometheus service discovery working for you.
Write a horizontal pod autoscaler (HPA) based on a Prometheus metric:
Set up Prometheus-Adapter and follow the HPA walkthrough.
Or follow this guide https://github.com/stefanprodan/k8s-prom-hpa
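Once the adapter surfaces the metric through the custom metrics API, the HPA itself might look like the following sketch. The deployment name, metric name, and threshold are placeholders to adapt:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: legacy-bridge-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: legacy-bridge          # placeholder deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: threadpool_size  # as exposed via Prometheus-Adapter
        target:
          type: AverageValue
          averageValue: "40"     # scale out when pods average above this
```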
Depending on which k8s distro you are using, you might have different ways to get Prometheus and Prometheus service discovery:
(example platform built-in) https://cloud.google.com/stackdriver/docs/solutions/gke/prometheus
(example product) https://docs.datadoghq.com/integrations/prometheus/
(example opensource) https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
any other prometheus solution
I have the below requirements; is there any open-source library that will cover all of them?
1. We are building a distributed microservice architecture with Spring Boot, which includes more than 100 microservices.
2. There is a lot of inter-microservice communication needed to achieve a single transaction.
3. We want to trace every microservice call, and the trace should provide the following information:
a. Transaction ID / trace ID
b. Back-end transaction status - HTTP status for REST, and likewise for SOAP.
c. Time taken for that call.
d. Request and response payload.
Currently we are achieving this using an in-house tracing framework. Is there any open-source project that will handle all this without any coding from the developer? I know we have a few options with Spring Cloud, like Zipkin and Sleuth - do these handle the above requirements?
My project has similar requirements to yours. IMHO, Spring-cloud-sleuth + Zipkin work well in my case.
For any inter-microservice communication we are using Kafka, and Spring-cloud-sleuth + Zipkin has no problem tracing all the calls, from REST -> Kafka -> more Kafka -> REST.
To enable Kafka tracing, simply add:
spring:
  sleuth:
    propagation-keys: some-key
    sampler:
      probability: 1
    messaging:
      kafka:
        enabled: true
We are also using Azure Application Insights to do centralized logging, which is well integrated with Spring Cloud.
Hope the above gives you some confidence about using Sleuth + Zipkin.
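For completeness, pointing Sleuth at a Zipkin collector is also just configuration. A sketch assuming Zipkin is reachable at its default port:

```yaml
spring:
  zipkin:
    base-url: http://localhost:9411   # where spans get reported
  sleuth:
    sampler:
      probability: 1                  # sample everything (dev setting)
```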
I am creating a Facebook multiplayer game and am currently evaluating my tech stack.
My game would need to use WebSockets, and I would like to use Spring Boot. Now, I cannot find information on whether a WebSocket server will work nicely in Kubernetes. For example, if I deploy 5 instances of the server in Kubernetes pods, will load balancing/forwarding work correctly for WebSockets between game clients loaded in browsers and servers in Kubernetes, and is there any additional work to enable it? Each pod/server would be stateless, and the current game info for each player would be stored in/read from Redis or some other in-memory DB.
If this would not work, how can I work around it and still use Kubernetes? Maybe add one instance of RabbitMQ to the stack just for the WebSockets?
An adequate way to handle this would be to use "sticky sessions". This is where the user is pinned down to a specific pod based on the setting of a cookie.
Here is an example of configuring the Ingress resource object to use sticky sessions:
#
# https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/affinity/cookie
#
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test-sticky
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/affinity: "cookie"
    ingress.kubernetes.io/session-cookie-name: "route"
    ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  rules:
  - host: $HOST
    http:
      paths:
      - path: /
        backend:
          serviceName: $SERVICE_NAME
          servicePort: $SERVICE_PORT
Now, with that being said, the proper way to handle this would be to use a message broker or a WebSocket implementation that supports clustering, such as SocketCluster (https://socketcluster.io).
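If you go the message-broker route the asker mentions (e.g. RabbitMQ with its STOMP plugin enabled), Spring can relay WebSocket messages through the broker so a message published on one pod reaches clients connected to any other pod. A minimal sketch; the endpoint path, destinations, host, and port are assumptions:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        // Browser clients connect here; any pod can accept the connection.
        registry.addEndpoint("/ws").setAllowedOriginPatterns("*").withSockJS();
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        // Relay /topic and /queue traffic through RabbitMQ's STOMP adapter,
        // so broadcasts reach clients connected to other pods.
        registry.enableStompBrokerRelay("/topic", "/queue")
                .setRelayHost("rabbitmq")   // assumed service name
                .setRelayPort(61613);       // default STOMP port
        registry.setApplicationDestinationPrefixes("/app");
    }
}
```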