Measuring HTTP Performance using Micrometer in Spring Boot Application

I am developing a Spring Boot 2 application that uses Micrometer for reporting metrics. One piece of functionality sends large amounts of data to a RESTful web service.
I would like to measure the amount of data sent and the time taken to complete each request. The Timer metric gives me the time as well as the number of requests made, but how can I also include the bytes transferred in the same metric? My Grafana dashboard is supposed to plot the amount of data transferred and the time taken to transfer it.
I looked at Counters and Gauges, but they don't look like the right fit for what I am trying to do. Is there a way to add a custom field to the Grafana metric?

You'd use a DistributionSummary for that. See here and here.
Regarding instrumentation, you currently have to instrument your controllers manually or wire an aspect around them.
IIRC, at least the Tomcat metrics provide some data-in and data-out metrics, but not down to the path level.
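As a minimal sketch (the meter names and wrapper class are illustrative, not from the question), a DistributionSummary in bytes can sit next to a Timer so the dashboard gets both series per request:

    import io.micrometer.core.instrument.DistributionSummary;
    import io.micrometer.core.instrument.MeterRegistry;
    import io.micrometer.core.instrument.Timer;

    public class UploadMetrics {

        private final DistributionSummary payloadSize;
        private final Timer uploadTimer;

        public UploadMetrics(MeterRegistry registry) {
            // Bytes per request; Micrometer also tracks count, total, and max.
            this.payloadSize = DistributionSummary.builder("upload.payload.size")
                    .baseUnit("bytes")
                    .description("Bytes sent to the downstream service")
                    .register(registry);
            this.uploadTimer = Timer.builder("upload.duration")
                    .description("Time taken to send the payload")
                    .register(registry);
        }

        public void send(byte[] payload, Runnable restCall) {
            payloadSize.record(payload.length); // record the size...
            uploadTimer.record(restCall);       // ...and time the actual call
        }
    }

Giving both meters the same tags (for example a uri tag) makes it easy to correlate bytes and duration on one Grafana panel.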

Related

Does the Datadog agent get into the host's application program?

Does a Datadog agent generate metrics?
How does it collect the metrics that the host's app generates?
Does it intrude into the app's code or environment to collect metrics?
Let's say the app is a Spring Boot app that already has a set of metrics generated by Micrometer and exposed on the /metrics endpoint. How does a Datadog agent fit in here?
Now let's say the app is the same, but does not have Micrometer enabled.
How would Datadog fit in here?
Would it have the capability to generate metrics from this app? If so, how does it do that? Furthermore, in doing so, does it access the application's source code? Or does it get into the runtime and add bytecode to generate metrics by observing events?
Let's say we have an application running on the host that already generates metrics and can ship them to network-accessible storage. Can Datadog be used just to collect and visualize that data, without an agent?
Does Datadog only collect metrics that are exposed by the host's app?
The reason I am curious about these aspects is to analyze the host's vulnerability in this respect, and to understand the added infrastructure overhead, the performance overhead, and the cost involved.
At the same time, a stronger question stands: why Datadog?
Any thoughts on Dynatrace in the same respect?

Spring Boot Microservices - Design of API to get the response as a List by passing Ids

I am using Spring Boot and Spring Cloud for a microservices architecture, with things like an API Gateway, distributed config, Zipkin + Sleuth, and 12-factor methodologies, where we have a single DB server with the same schema but tables private to each service.
Now I am looking at the options below. Note: the response object is nested and returns data in a hierarchy.
Can we ask the downstream system to develop an API that accepts a list of CustomerIds and gives the response in one go?
Or can we simply call the same API multiple times, passing a single CustomerId each time, and collect the responses?
Please advise for both a complex response set and a simple response set. What would be better, considering performance and microservices principles?
I would go with option 1. This may be less RESTful, but it is more performant, especially if the list of CustomerIds is large. Following standards is certainly good, but sometimes the use case requires us to bend the standards a bit so that the system stays useful.
With option 2 you will most probably "waste" more time on the HTTP connection "dance" than on your actual use case of getting the data. Imagine having to call the same downstream service 50 times to retrieve the data for 50 CustomerIds.
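As a rough sketch of option 1 (the endpoint path, service, and payload types below are hypothetical), the downstream service could expose a batch lookup that takes the whole list in one request:

    import java.util.List;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    @RequestMapping("/customers")
    class CustomerBatchController {

        private final CustomerService customerService; // hypothetical lookup service

        CustomerBatchController(CustomerService customerService) {
            this.customerService = customerService;
        }

        // One round trip for N customers instead of N round trips.
        @PostMapping("/search")
        List<CustomerResponse> byIds(@RequestBody List<Long> customerIds) {
            return customerService.findByIds(customerIds);
        }
    }

    interface CustomerService {
        List<CustomerResponse> findByIds(List<Long> ids); // returns the nested hierarchy
    }

    class CustomerResponse {
        // placeholder for the nested response object
    }

Using POST for a read is the usual pragmatic trade-off here: it keeps a large list of ids out of the query string.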

How to display all processors in the graphical view?

Tool: Spring Cloud Data Flow
I have created a sample with Source, Processor and Sink.
The graphical view of the whole app is: [screenshot of the Source → Processor → Sink pipeline omitted]
I have an existing application that contains multiple Processors in a single project, all enabled at once using the code below:

    @EnableBinding({Processor.class, Processor1.class, Processor2.class})

Is there any possibility, or any configuration required, so that Data Flow can display all the processors from the project?
It would be really helpful if Data Flow displayed the processor with a boundary containing the multiple processors inside it ([illustrative image omitted]).
The SCDF Dashboard doesn't support this functionality yet.
Though it is possible to build multi-input/output processors (in the same app) using Spring Cloud Stream, in SCDF today the data pipelines are primarily linear, with a one-input/one-output representation.
We are exploring ideas to support this in SCDF proper. Please feel free to open a new issue with your use cases and as many details as possible about the requirements - we could use it for acceptance.
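For reference, here's a sketch of what a multi-input/output processor can look like in the annotation-based Spring Cloud Stream model (the interface and channel names are illustrative):

    import org.springframework.cloud.stream.annotation.EnableBinding;
    import org.springframework.cloud.stream.annotation.Input;
    import org.springframework.cloud.stream.annotation.Output;
    import org.springframework.messaging.MessageChannel;
    import org.springframework.messaging.SubscribableChannel;

    // Custom binding interface bundling several processor-style channel pairs.
    interface MultiProcessor {

        @Input("input1")
        SubscribableChannel input1();

        @Output("output1")
        MessageChannel output1();

        @Input("input2")
        SubscribableChannel input2();

        @Output("output2")
        MessageChannel output2();
    }

    @EnableBinding(MultiProcessor.class)
    class MultiProcessorApp {
        // @StreamListener methods would consume from input1/input2
        // and publish to output1/output2.
    }

Such an app runs fine standalone; the limitation described above is only in how the SCDF Dashboard renders it.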

Showing HTTP Request API latency using the Spring Boot Micrometer metrics

We use Prometheus to scrape Spring Boot 2.0.0 metrics and then persist them in InfluxDB.
We then use Grafana to visualize them from InfluxDB.
Our Micrometer dependencies are:
micrometer-core
micrometer-registry-prometheus
I want to be able to show a latency metric for our REST APIs.
From our Prometheus scraper I can see these metrics are generated for HTTP requests.
http_server_requests_seconds_count
http_server_requests_seconds_sum
http_server_requests_seconds_max
I understand from the Micrometer documentation, https://micrometer.io/docs/concepts#_client_side, that latency can be derived by combining two of the metrics above: totalTime / count.
However, our data source is InfluxDB, which does not support combining measurements (https://docs.influxdata.com/influxdb/v1.7/troubleshooting/frequently-asked-questions/#how-do-i-query-data-across-measurements),
so I am unable to implement that calculation in InfluxDB.
Do I need to provide my own implementation of this latency metric in the Spring Boot component, or is there an easier way to achieve this?
You can essentially join your measurements in Kapacitor, another component of the Influxdata TICK stack.
It's pretty simple with a JoinNode, possibly followed by an Eval node to calculate what you want right in place. There are plenty of examples of this in the documentation.
The real problem is different, though: you've unnecessarily overengineered your solution, and moreover you're trying to combine two products that have the same purpose but take different approaches to it.
You're already scraping things with Prometheus? Fine - stay with it and do the math there; it's simple. And Grafana works with Prometheus out of the box, too!
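For example, the average latency over a window is a one-liner in PromQL, built from exactly the two counters listed in the question (a standard idiom, not specific to this setup):

    rate(http_server_requests_seconds_sum[5m])
      / rate(http_server_requests_seconds_count[5m])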
You want your data in Influx (I can understand that; it's arguably more advanced)?
Fine! Micrometer can send it straight to Influx out of the box - and in at least two ways!
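One of those ways is the dedicated registry: add the micrometer-registry-influx dependency and configure the standard Spring Boot 2 export properties (the values below are placeholders for your environment):

    # application.properties - push metrics straight to InfluxDB
    management.metrics.export.influx.enabled=true
    management.metrics.export.influx.uri=http://localhost:8086
    management.metrics.export.influx.db=metrics
    management.metrics.export.influx.step=30s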
Personally, I don't see any reason to do what you propose - can you share one?

Spring Cloud Netflix & Spring Cloud Data Flow microservice architecture

I'm developing an application that must both handle events coming from other systems and provide a REST API. I want to split the application into microservices, and I'm trying to figure out which approach to use. Spring Cloud Netflix and the Spring Cloud Data Flow toolkit both caught my attention, but it's not clear to me whether they can be integrated, and how.
As an example, suppose we have the following functionality in the system:
1. information about users
2. card of orders
3. product catalog
4. sending various notifications
5. obtaining information about orders from third-party systems
6. processing, filtering, and transformation of order events
7. processing of various rules based on orders and sending notifications
8. sending information about user orders from third-party systems to other users using websockets (with pre-filtering)
Points 1-4: here I see a classical microservice architecture. Framework: the Spring Cloud Netflix stack.
Points 5-8: an event-driven approach fits best. Toolkit: Spring Cloud Data Flow.
The question is how to build communication between these platforms.
In particular, the POPULATE ORDER DETAILS SERVICE must transform the incoming orders and save additional information (in case it's needed) in the database. The ORDER RULE EXECUTOR SERVICE should obtain the currently saved rules, execute them, and send notifications. The WEB SOCKET SERVICE should send order information only if a particular user has set the corresponding filters, and the ORDER SAVER SERVICE should store the information about the transformed orders in the database.
1. Communication between the microservices of the two platforms could go through the API Gateway, but that raises the following questions:
Does the Spring Cloud platform allow working with microservices that way?
Performance: the number of events is huge, which could significantly slow down event processing. Is it possible to use other approaches, for example communicating not through the API Gateway but through an in-memory cache?
2. Since some functionality intersects between these services, I wonder what a "microservice" is in the understanding of the Spring Cloud Stream framework. In particular, does it make sense to have separate services? Can a microservice in Spring Cloud Stream have a REST API, work with a database, and simultaneously process events? Does such a diagram make sense, and is it possible to build such a stack at the moment?
The question is which of these approaches is more correct, and what Spring Cloud Stream means by "microservice".
Given the limited information in the post, it is hard to advise on all the matters pertaining to this type of architecture, but I'll attempt to share some specifics and point to samples. For the same reason, it is hard to solve for your needs end-to-end. From the surface, it appears you're attempting to build event-driven applications and wondering whether Spring Cloud Stream (SCSt) and Spring Cloud Data Flow (SCDF) could help.
They can, yes.
Order, User, and Catalog seem like domain objects, and they would all come together to solve a use case - for instance, querying the number of orders for a particular product, grouped by user. There are a few samples that articulate the data flow between the entities to solve similar problems. There's a live code-walkthrough of event-driven systems in action, and an example of a social-graph application, too.
Though these event-driven applications can run standalone as individual services with the help of a message broker (e.g., Kafka or RabbitMQ), you could of course also register them in SCDF and use the SCDF DSL to build a coherent data pipeline. We are expanding on more direct capabilities in SCDF for these types of use cases, but there are ways to orchestrate them today with the current abilities, too. Follow spring-cloud/spring-cloud-#2331#issuecomment-406444350 for more details.
I hope this gives you an idea. Try building something small with SCSt/SCDF, prove it out, and expand to more complex use cases.
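As a concrete starting point, a minimal event-driven processor in the annotation-based Spring Cloud Stream style (the Order payload and enrichment logic are placeholders, not from the post) could look like this:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.stream.annotation.EnableBinding;
    import org.springframework.cloud.stream.annotation.StreamListener;
    import org.springframework.cloud.stream.messaging.Processor;
    import org.springframework.messaging.handler.annotation.SendTo;

    @SpringBootApplication
    @EnableBinding(Processor.class) // binds one input and one output channel
    public class OrderDetailsApplication {

        public static void main(String[] args) {
            SpringApplication.run(OrderDetailsApplication.class, args);
        }

        // Consume an incoming order event, enrich it, and emit it downstream.
        @StreamListener(Processor.INPUT)
        @SendTo(Processor.OUTPUT)
        public Order populateDetails(Order incoming) {
            incoming.setDetailsLoaded(true); // placeholder enrichment
            return incoming;
        }

        // Placeholder event payload.
        public static class Order {
            private boolean detailsLoaded;
            public boolean isDetailsLoaded() { return detailsLoaded; }
            public void setDetailsLoaded(boolean loaded) { this.detailsLoaded = loaded; }
        }
    }

Nothing prevents the same Boot app from also exposing @RestController endpoints and talking to a database, so the combination asked about in question 2 is technically possible, even if you may still prefer separate services for clarity.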
