We have the following requirements; is there any open source library that covers all of them?
1. We are building a distributed microservice architecture with Spring Boot that includes more than 100 microservices.
2. A single transaction can involve a lot of inter-microservice communication.
3. We want to trace every microservice call, and the trace should provide the following information:
a. Transaction ID / Trace ID
b. Back-end transaction status: the HTTP status for REST, and likewise for SOAP.
c. Time taken for that call.
d. Request and response payloads.
Currently we achieve this with a homegrown tracing framework. Is there any open source project that handles all of this without any coding from the developer? I know Spring Cloud gives us a few options such as Zipkin and Sleuth; do these handle the requirements above?
My project has similar requirements to yours. IMHO, Spring-cloud-sleuth + Zipkin work well in my case.
For inter-microservice communication we are using Kafka, and Spring Cloud Sleuth + Zipkin has no problem tracing all the calls, from REST -> Kafka -> more Kafka -> REST.
To enable Kafka tracing, simply add:
spring:
  sleuth:
    propagation-keys: some-key
    sampler:
      probability: 1
    messaging:
      kafka:
        enabled: true
We are also using Azure ApplicationInsights to do centralized logging, which is well integrated with Spring Cloud.
Hope the above gives you some confidence in using Sleuth + Zipkin.
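If you ever need the current trace ID programmatically (for example to return it to the caller as the transaction ID), Sleuth auto-configures a Brave Tracer bean you can inject. A minimal sketch, assuming the Brave-based Sleuth setup; the controller and endpoint names are illustrative:

import brave.Tracer;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TraceInfoController {

    private final Tracer tracer; // auto-configured by spring-cloud-sleuth (Brave)

    public TraceInfoController(Tracer tracer) {
        this.tracer = tracer;
    }

    // Returns the trace ID of the request currently being handled,
    // so callers can correlate their call with what shows up in Zipkin.
    @GetMapping("/trace-id")
    public String currentTraceId() {
        return tracer.currentSpan() != null
                ? tracer.currentSpan().context().traceIdString()
                : "no active span";
    }
}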
Related
In a few words:
I'm trying to decide between the default Spring for Apache Kafka stack (KafkaTemplate) and the pair ReactiveKafkaProducerTemplate / ReactiveKafkaConsumerTemplate for my Reactor-based application.
Some more context:
In the company where I work, we're developing a high-availability application that publishes a set of requests directly to a Kafka broker. Since this is an API-centric application expected to receive a few million requests per week, we decided to go with a stack based on Project Reactor with Spring WebFlux and Kotlin.
After doing some digging, I've discovered that Spring for Apache Kafka provides a simple wrapper around the Reactor Kafka implementation, but this wrapper lacks a lot of the functionality present in the default KafkaTemplate mentioned before, things like: a metrics binder out of the box (for Prometheus integration), associated factories, extensive documentation, auto-configuration, etc.
I'm trying to understand what I'm really giving up by choosing the default implementation over the reactive one. Am I giving up back-pressure functionality? Am I sacrificing the reactive stack present in my application? Will this take a toll in the future? Does anyone have experience working with a reactive stack alongside a non-reactive solution?
I also have a few concerns regarding the DLT flow facilitated in the default implementation, such as the SeekToCurrentErrorHandler strategy.
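For reference, this is roughly what the reactive producer side under discussion looks like; a sketch assuming String keys/values and a hypothetical "requests" topic, not the asker's actual code:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.kafka.core.reactive.ReactiveKafkaProducerTemplate;
import reactor.core.publisher.Mono;
import reactor.kafka.sender.SenderOptions;

public class ReactiveProducerSketch {

    private final ReactiveKafkaProducerTemplate<String, String> template;

    public ReactiveProducerSketch() {
        // Build the underlying reactor-kafka SenderOptions and hand them to
        // Spring's thin ReactiveKafkaProducerTemplate wrapper.
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        this.template = new ReactiveKafkaProducerTemplate<>(SenderOptions.create(props));
    }

    // send() returns a Mono, so publishing participates in the reactive pipeline
    // (and its back pressure) instead of blocking a request thread.
    public Mono<Void> publish(String payload) {
        return template.send("requests", payload).then();
    }
}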
We have a Spring Boot microservice which has to get some data from an old / legacy system. This microservice exposes a modern external REST API. Sometimes we have to issue 7-10 requests to the legacy system in order to get all the data we need for a single API call. Unfortunately we can't use Reactor / WebClient and have to stick with WebServiceTemplate for those "legacy" calls. We also can't use the reactive Spring WebClient (see Reactive Spring WebClient - Making a SOAP call).
What is the best way to scale such a microservice in Kubernetes? We have serious concerns that the thread pool used for parallel WebServiceTemplate invocations will be depleted very quickly, but I'm not sure that creating and exposing a custom metric based on active thread count / thread pool size is a good idea.
Any advice will be helpful.
Enable the Prometheus exporter in Spring.
Make sure the metrics are scraped. You're going to watch for a threadpool_size metric (a sketch of exposing such a metric from the app follows at the end of this answer). Refer to your k8s/Prometheus distro docs to get Prometheus service discovery working for you.
Write a horizontal pod autoscaler (HPA) based on a Prometheus metric:
Set up the Prometheus Adapter and follow the HPA walkthrough.
Or follow this guide https://github.com/stefanprodan/k8s-prom-hpa
Depending on which k8s distro you are using, you might have different ways to get Prometheus and Prometheus discovery:
(example platform built-in) https://cloud.google.com/stackdriver/docs/solutions/gke/prometheus
(example product) https://docs.datadoghq.com/integrations/prometheus/
(example opensource) https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
any other prometheus solution
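On the metric itself: if the legacy calls fan out on your own ExecutorService, Micrometer can instrument that pool directly, which gives Prometheus pool-size/active/queued gauges to drive the HPA. A rough sketch; the bean name, pool size, and metric prefix are illustrative assumptions:

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.jvm.ExecutorServiceMetrics;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LegacyClientPoolConfig {

    // Pool used to fan out the 7-10 parallel WebServiceTemplate calls per API request.
    // ExecutorServiceMetrics.monitor() wraps it so Micrometer exposes gauges such as
    // executor.pool.size, executor.active and executor.queued for Prometheus to scrape.
    @Bean
    public ExecutorService legacySoapPool(MeterRegistry registry) {
        ExecutorService pool = Executors.newFixedThreadPool(50);
        return ExecutorServiceMetrics.monitor(registry, pool, "legacy-soap-pool");
    }
}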
I am playing around with Spring Boot v2 at the moment. So far, my setup looks like this:
Spring -> Telegraf -> Kafka -> Telegraf -> influx
I am wondering whether it's possible to take out the first Telegraf between Spring and Kafka, so something like this:
Spring -> Kafka -> Telegraf -> Influx
I've looked at the Micrometer configuration and there is no config for Kafka. Also, Telegraf was pulling data from Spring, and since Kafka is a push model (i.e. you push data into Kafka), would Spring be able to push data to Kafka? If yes, how? Through HTTP POST requests?
New to the whole concept.
would Spring be able to push data to Kafka? If yes, how? Through the use of HTTP POST methods?
Kafka uses its own TCP protocol, not HTTP, so no. At least not without using the Kafka REST Proxy.
You would basically be embedding the same thing that Telegraf does into your Spring code.
It's possible, sure, but built into Micrometer? Not that I'm aware of.
Plus, it would add overhead to your app to run an internal producer thread, you'd be required to include kafka-clients with each of your monitored apps, and you'd need some safeguard to prevent your app from failing if a Kafka connection isn't possible...
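To make that concrete, here is a sketch of what "embedding it yourself" would roughly look like; it assumes a KafkaTemplate<String, String> bean, an app-metrics topic, and @EnableScheduling somewhere in the app (all hypothetical names, shown only to illustrate the extra moving parts):

import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class MetricsToKafkaPublisher {

    private final MeterRegistry registry;
    private final KafkaTemplate<String, String> kafka;

    public MetricsToKafkaPublisher(MeterRegistry registry, KafkaTemplate<String, String> kafka) {
        this.registry = registry;
        this.kafka = kafka;
    }

    // Every 10s, snapshot all meters and push them as simple text lines to Kafka.
    // This is exactly the extra producer thread / kafka-clients dependency the
    // answer warns about, included only to show what you would be taking on.
    @Scheduled(fixedRate = 10_000)
    public void publish() {
        registry.forEachMeter(meter -> {
            StringBuilder line = new StringBuilder(meter.getId().getName());
            meter.measure().forEach(m ->
                    line.append(' ').append(m.getStatistic()).append('=').append(m.getValue()));
            kafka.send("app-metrics", line.toString());
        });
    }
}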
I would suggest keeping Telegraf installed on each host machine, or at the very least running the Prometheus JMX exporter or Jolokia for your individual Java apps. From these, JMX metrics can be collected and pushed to downstream monitoring systems.
Or, as commented, you could skip Kafka, but I'm guessing you want to keep it there as a buffer.
On the other side, you can use the Kafka Connect InfluxDB sink to get optimal performance from consumer scaling.
I've been roaming the depths of the internet but I find myself unsatisfied with the examples I've found so far. Can someone point me to, or show me, a good starting point for integrating Zipkin tracing with JAX-RS clients and AMQP clients?
My scenario is quite simple and I'd expect this task to be trivial, tbh. We have a microservices-based architecture and it's time we start tracing our requests and get a global perspective of our inter-service dependencies and what the requests actually look like (we do have metrics, but I want more!). The communication is done via auto-generated JAX-RS clients, and we use RabbitTemplate for messaging.
I've seen Brave integrations with JAX-RS but they are a bit simplistic. My Zipkin server is a Spring Boot mini app using stream-rabbit, so Zipkin data is sent over RabbitMQ.
Thanks in advance.
After some discussion with Marcin Grzejszczak and Adrian Cole (Zipkin and Sleuth creators/active developers), I ended up creating a Jersey filter that acts as a bridge between Sleuth and Brave. Regarding AMQP integration, I added a new @StreamListener with a condition for Zipkin-format spans (using headers). Messages sent to the Sleuth exchange in Zipkin format are then valid and get consumed by the listener. For JavaScript (zipkin-js), I ended up creating a new AMQP logger that sends Zipkin spans to a given exchange. If someone ends up reading this and needs more detail, you're welcome to reach out to me.
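For anyone looking for the rough shape of such a conditional listener, a sketch is below; the header name and payload handling are purely hypothetical placeholders, not the exact code described above:

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.messaging.handler.annotation.Payload;

@EnableBinding(Sink.class)
public class ZipkinSpanListener {

    // Only fires when the message advertises zipkin-format spans via a header.
    // 'spanFormat' is a made-up header name; the real bridge would forward
    // these spans to the Zipkin collector/storage.
    @StreamListener(target = Sink.INPUT, condition = "headers['spanFormat'] == 'zipkin'")
    public void onZipkinSpans(@Payload byte[] spans) {
        // decode and forward the spans here
    }
}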
I am new to Spring Cloud Data Flow and Spring Cloud Stream applications.
Currently my project diagram looks like the following:
I route a POST request from an outside client through the Zuul API gateway to a microservice called Composite. Composite creates a stream using a REST POST and deploys it onto the Spring Cloud Data Flow server. As far as I know, the microservices mongodb and file run as co-existing JVM processes. If my client has to know the status of the stream and the status of the processed data, how should the Composite microservice interact with the Spring Cloud Data Flow server? Currently, when I make the POST call to deploy the stream, I don't even get a status back from the SCDF server. Does SCDF expose any hooks to look at the individual apps? Also, how can I change the flow at runtime to create a dynamic mesh?
Currently I am using Local Spring Cloud Data Flow Server for development.
Runtime platform is local
Local runtime is recommended only for development purposes. If you're preparing for production, please make sure to choose a platform variant (e.g. cf, k8s, yarn, ..) that comes with the non-functional requirements to support reliable and durable execution of all the applications running in the streaming pipeline.
As far as I know the microservices mongodb and file run as co-existing JVM processes.
If your stream definition is file | mongodb, you'd have 2 different JVMs even when using the Local runtime. They're independent Boot applications.
How should Composite Microservice interact with Spring Cloud Data Flow Server?
It's not clear what you mean by "Composite" here. All the microservice applications in SCDF communicate via messaging middleware such as Kafka or Rabbit. SCDF provides the orchestration capability to run such applications on various runtime platforms.
Currently when I make POST call to deploy the stream I dont even get the status from SCDF Server
You can use SCDF's REST APIs to query the current status of the apps, and this is platform agnostic. You can view the list of supported APIs by hitting the root URL; there's a gap in the docs, which we will fix. The following APIs could be useful for status checks.
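For example, with a local server on the default port 9393, a status check can be as simple as calling the stream definition endpoint. A sketch using RestTemplate (the stream name is made up, and the exact response shape depends on your SCDF version):

import org.springframework.web.client.RestTemplate;

public class ScdfStatusCheck {

    public static void main(String[] args) {
        RestTemplate rest = new RestTemplate();
        // GET /streams/definitions/{name} returns the stream definition along with
        // its current deployment status (e.g. deploying, deployed, failed).
        String response = rest.getForObject(
                "http://localhost:9393/streams/definitions/{name}",
                String.class,
                "file-to-mongodb");
        System.out.println(response);
    }
}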
Does SCDF expose any hooks to look at the individual apps?
Once the apps are deployed to a runtime platform, you can take advantage of Boot's actuator endpoints to explore more details such as trace, metrics, health, and env at each application level. See Boot's actuator endpoints for more details. For instance, if your mongodb app is running locally on port 23000, you can check granular metrics for this application at: http://localhost:23000/metrics.
[As an FYI: future SCDF releases would include integrating Spring Boot + Spring Cloud Sleuth metrics and visual representation of the same.]
Also how can I change the flow at runtime to create a dynamic mesh?
If you're referring to editing a running streaming pipeline with additions/deletions, we are currently exploring a design approach to support this functionality.