Micrometer With Microservice - spring-boot

I am a newbie to Micrometer. Could anyone let me know how to manage microservice metrics centrally in Spring Boot?
Where can I get information on all registered services and their metrics, and store those metrics in InfluxDB?

Assuming that you're asking "How to use Micrometer with Spring Boot for collecting metrics from heterogeneous services which have multiple instances on multiple hosts" as there is nothing special with the microservice architecture compared to the assumed environment, you need to add dimensions to metrics for hosts, application instances, and so on. You can achieve it with the common tags support. See the section for it in the Spring Boot reference guide.
UPDATED:
To answer the additional question in the comment below, I created a sample showing how to use common tags with environment variables. Note that it's on the branch common-tags-2.1.x-with-env, not master.
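For example, a minimal sketch of wiring common tags from environment variables (HOST and APP_INSTANCE are illustrative variable names; the linked sample demonstrates the same idea):

import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.actuate.autoconfigure.metrics.MeterRegistryCustomizer;
import org.springframework.context.annotation.Bean;

@Bean
public MeterRegistryCustomizer<MeterRegistry> commonTags(
        @Value("${HOST:unknown}") String host,
        @Value("${APP_INSTANCE:unknown}") String instance) {
    // Every meter in the registry gets host/instance dimensions, so series
    // from different hosts and instances can be told apart in InfluxDB.
    return registry -> registry.config().commonTags("host", host, "instance", instance);
}

The equivalent property-based form is management.metrics.tags.host=${HOST} in application.properties.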

Related

Scale SpringBoot App based on Thread Pool State

We have a Spring Boot microservice which has to get some data from an old legacy system. This microservice exposes a modern external REST API. Sometimes we have to issue 7-10 requests to the legacy system in order to get all the data we need for a single API call. Unfortunately we can't use Reactor / WebClient and have to stick with WebServiceTemplate to issue those "legacy" calls. We also can't use the approach from Reactive Spring WebClient - Making a SOAP call.
What is the best way to scale such a microservice in Kubernetes? We have serious concerns that the thread pool used for the parallel WebServiceTemplate invocations will be depleted very quickly, but I'm not sure whether creating and exposing a custom metric based on active thread count / thread pool size is a good idea.
Any advice will be helpful.
Enable the Prometheus exporter in Spring.
Make sure the metrics are scraped. You're going to watch for a threadpool_size-style metric (a sketch of exposing one follows this list). Refer to your k8s/Prometheus distro docs to get Prometheus service discovery working for you.
Write a horizontal pod autoscaler (HPA) based on a Prometheus metric:
Set up Prometheus-Adapter and follow the HPA walkthrough.
Or follow this guide https://github.com/stefanprodan/k8s-prom-hpa
Depending on which k8s distro you are using, you might have different ways to get Prometheus and Prometheus service discovery:
(example platform built-in) https://cloud.google.com/stackdriver/docs/solutions/gke/prometheus
(example product) https://docs.datadoghq.com/integrations/prometheus/
(example opensource) https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
any other Prometheus solution
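For the thread pool metric itself, here is a sketch of instrumenting the pool that backs the parallel WebServiceTemplate calls (the pool size and the "legacy.ws" name are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.jvm.ExecutorServiceMetrics;
import org.springframework.context.annotation.Bean;

@Bean
public ExecutorService legacyWsExecutor(MeterRegistry registry) {
    ExecutorService pool = Executors.newFixedThreadPool(10);
    // Wraps the pool and publishes executor.pool.size, executor.active,
    // executor.queued, etc., which Prometheus scrapes as executor_pool_size_threads and friends.
    return ExecutorServiceMetrics.monitor(registry, pool, "legacy.ws");
}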

Spring Cloud Connector Plan Information

I am using Spring Cloud Connectors to bind to databases. Is there any way to get the plan of a bound service? When I extend an AbstractCloudConfig and do
cloud().getSingletonServiceInfosByType(PostgresqlServiceInfo.class)...
I get information on the URL and how to connect to Postgres. PostgresqlServiceInfo and the others do not carry the plan data. How can I extend the service info in order to read this information from VCAP_SERVICES?
Thanks
By design, the ServiceInfo classes in Spring Cloud Connectors carry just enough information to create the connection beans necessary for an app to consume the service resources. Connectors was designed to be platform-neutral, and fields like plan, label, and tags that are available on Cloud Foundry are not captured because they might not be available on other platforms (e.g. Heroku).
To add the plan information to a ServiceInfo, you'd need to write your own ServiceInfo class that includes a field for the value, then write a CloudFoundryServiceInfoCreator to populate the value from the VCAP_SERVICES data that the framework provides as a Map. See the project documentation for more information on creating such an extension.
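A rough sketch of that extension (the class names are illustrative, and the helper calls are from memory of the Connectors extension mechanism, so check the project docs for the exact signatures):

import java.util.Map;
import org.springframework.cloud.cloudfoundry.CloudFoundryServiceInfoCreator;
import org.springframework.cloud.cloudfoundry.Tags;
import org.springframework.cloud.service.common.PostgresqlServiceInfo;

// Illustrative: a ServiceInfo subclass that also carries the plan.
public class PlanAwarePostgresqlServiceInfo extends PostgresqlServiceInfo {
    private final String plan;

    public PlanAwarePostgresqlServiceInfo(String id, String uri, String plan) {
        super(id, uri);
        this.plan = plan;
    }

    public String getPlan() {
        return plan;
    }
}

// Populates the plan from the raw VCAP_SERVICES entry, where "plan" is a top-level key.
class PlanAwarePostgresqlServiceInfoCreator
        extends CloudFoundryServiceInfoCreator<PlanAwarePostgresqlServiceInfo> {

    PlanAwarePostgresqlServiceInfoCreator() {
        super(new Tags("postgresql"), "postgres");
    }

    @Override
    public PlanAwarePostgresqlServiceInfo createServiceInfo(Map<String, Object> serviceData) {
        String uri = getUriFromCredentials(getCredentials(serviceData));
        return new PlanAwarePostgresqlServiceInfo((String) serviceData.get("name"), uri,
                (String) serviceData.get("plan"));
    }
}

You'd then register the creator for ServiceLoader discovery under META-INF/services, as described in the extension docs.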
Another (likely easier) option is to use the newer java-cfenv project instead of Spring Cloud Connectors. java-cfenv supports Cloud Foundry only, and gives access to the full set of information in VCAP_SERVICES. See the project documentation for an example of how you can use this library.
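A minimal sketch using java-cfenv (assuming the bound service's label is postgres; adjust for your broker):

import io.pivotal.cfenv.core.CfEnv;
import io.pivotal.cfenv.core.CfService;

public class PlanLookup {
    public static void main(String[] args) {
        CfEnv cfEnv = new CfEnv();                         // parses VCAP_SERVICES
        CfService service = cfEnv.findServiceByLabel("postgres");
        System.out.println("plan: " + service.getPlan()); // e.g. "small"
    }
}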

Passing data between tasks in a Spring Cloud Composed Task

I have been working with Spring Cloud Data Flow and I have created a composed task which involves 3 tasks.
My question is: is it possible to share information between those tasks? What would be a good pattern to do this?
I know that Spring Cloud Data Flow executes composed tasks as Spring Batch jobs, so I was thinking it might be feasible to share information using the execution context.
As answered in the other thread, we don't provide an out-of-the-box opinion here, for the reasons described there.
However, if you have a requirement for it, you could use a shared database or a pub/sub broker for sharing information, as in the sketch below.
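For instance, a minimal sketch of the shared-database approach (the table name and schema are illustrative, not something SCDF provides out of the box):

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@Component
public class SharedContextRepository {

    private final JdbcTemplate jdbc;

    public SharedContextRepository(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    // An upstream task calls put(...) before it completes ...
    public void put(String executionId, String key, String value) {
        jdbc.update(
            "INSERT INTO task_shared_context (execution_id, ctx_key, ctx_value) VALUES (?, ?, ?)",
            executionId, key, value);
    }

    // ... and a downstream task in the composed graph reads it back.
    public String get(String executionId, String key) {
        return jdbc.queryForObject(
            "SELECT ctx_value FROM task_shared_context WHERE execution_id = ? AND ctx_key = ?",
            String.class, executionId, key);
    }
}

Since all the tasks launched by SCDF can point at the same datasource, each task only needs a shared key (for example, an execution id) to correlate rows.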

Merge Spring Boot actuator and Micrometer data on one endpoint

I have a number of applications that are using the Spring Boot Actuator to publish metrics to the /metrics endpoint.
I have some other applications that are also using Micrometer to publish metrics to a /prometheus endpoint.
And finally, I have a cloud provider that will only allow me to pull metrics from a single endpoint. They have many pre-built Grafana dashboards, but most are targeted at the Actuator variable names, while some are targeted at the Micrometer variable names.
Micrometer puts out the same data, but it uses different names than Actuator, e.g. "jvm_memory" instead of "mem".
I would really like to find a way to merge both of these data sources so that they dump data to a single endpoint, and all of my Grafana dashboards would just work with all of the applications.
But I'm at a loss as to the best way to do this. Is there a way to tell Micrometer to use /metrics as a data source, so that any time it is polled it includes those metrics as well?
Any thoughts are greatly appreciated.
The best solution probably depends on the complexity of your dashboard. You might just configure a set of gauges to report the value under a different name and then only use the Micrometer scrape endpoint. For example:
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.Tags;
import io.micrometer.core.instrument.binder.MeterBinder;
import org.springframework.context.annotation.Bean;

@Bean
public MeterBinder mapToOldNames() {
    return r -> {
        // Re-publish the dimensional "jvm.memory.used" gauges under the old
        // Actuator-style name "mem", summed across the heap/non-heap areas.
        r.gauge("mem", Tags.empty(), r, r2 -> r2.find("jvm.memory.used").gauges()
                .stream().mapToDouble(Gauge::value).sum());
    };
}
Notice how in this case we are taking a memory gauge that is dimensional in Micrometer (split across the different heap/non-heap memory areas) and rolling it up into one gauge to match the old naming.
For Spring Boot 1.5 you could do something like the Prometheus simpleclient_spring_boot module does.
You collect the PublicMetrics beans from the actuator-metrics context and expose/register them as gauges/counters in the Micrometer MeterRegistry. This in turn exposes those Actuator metrics under your Prometheus scrape endpoint.
I assume you'd filter out the non-functional metrics that are duplicates of the Micrometer ones, so the main candidates to carry over are functional/business metrics. But if you have the chance to actually migrate the code to Micrometer, I'd say that's the better approach.
I haven't tried this; I just remembered having seen the concept.
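A rough sketch of that bridge (untested; the bean and method names are illustrative):

import java.util.Collection;
import io.micrometer.core.instrument.Tags;
import io.micrometer.core.instrument.binder.MeterBinder;
import org.springframework.boot.actuate.endpoint.PublicMetrics;
import org.springframework.context.annotation.Bean;

@Bean
public MeterBinder publicMetricsBridge(Collection<PublicMetrics> publicMetrics) {
    return registry -> publicMetrics.forEach(pm ->
            pm.metrics().forEach(metric ->
                    // Re-read the PublicMetrics source on every scrape so the gauge
                    // reports current values instead of freezing the startup snapshot.
                    registry.gauge(metric.getName(), Tags.empty(), pm,
                            p -> p.metrics().stream()
                                    .filter(m -> m.getName().equals(metric.getName()))
                                    .mapToDouble(m -> m.getValue().doubleValue())
                                    .findFirst().orElse(Double.NaN))));
}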

How to monitor streaming apps Inside SCDF?

I am new to Spring Cloud Data Flow and Spring Cloud Stream applications.
Currently my project diagram looks like the following:
I route a POST request from an outside client through a Zuul API gateway to a microservice called Composite. Composite creates a stream using a REST POST and deploys it onto the Spring Cloud Data Flow server. As far as I know, the microservices mongodb and file run as co-existing JVM processes. If my client has to know the status of the stream and the status of the processed data, how should the Composite microservice interact with the Spring Cloud Data Flow server? Currently, when I make the POST call to deploy the stream, I don't even get the status back from the SCDF server. Does SCDF expose any hooks to look at the individual apps? Also, how can I change the flow at runtime to create a dynamic mesh?
Currently I am using the local Spring Cloud Data Flow server for development.
Runtime platform is local
The local runtime is recommended only for development purposes. If you're preparing for production, please make sure to choose a platform variant (e.g. cf, k8s, yarn, ...) that comes with the non-functional requirements to support reliable and durable execution of all the applications running in a streaming pipeline.
As far as I know the microservices mongodb and file run as co-existing JVM processes.
If your stream definition is file | mongodb, you'd have two different JVMs even when using the local runtime. They're independent Boot applications.
How should Composite Microservice interact with Spring Cloud Data Flow Server?
It's not clear what you mean by "composite" here. All the microservice applications in SCDF communicate via messaging middleware such as Kafka or RabbitMQ. SCDF provides the orchestration capability to deploy such applications to various runtime platforms.
Currently when I make POST call to deploy the stream I dont even get the status from SCDF Server
You can use SCDF's REST APIs to query the current status of the apps, and they are platform agnostic. You can view the list of supported APIs by hitting the root URL of the server (there's a gap in the docs here; we will fix it). The stream-definition APIs in particular are useful for status checks.
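For example, a quick sketch of the Composite service polling SCDF for a stream's status (port 9393 is the SCDF server default; the exact response fields vary by SCDF version):

import org.springframework.web.client.RestTemplate;

public class StreamStatusClient {
    public static void main(String[] args) {
        RestTemplate rest = new RestTemplate();
        // GET /streams/definitions/{name} returns the stream definition
        // along with its current status (e.g. deploying, deployed, failed).
        String body = rest.getForObject(
                "http://localhost:9393/streams/definitions/{name}", String.class, "mystream");
        System.out.println(body);
    }
}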
Does SCDF expose any hooks to look at the individual apps?
Once the apps are deployed on a runtime platform, you can take advantage of Boot's actuator endpoints to explore more details such as trace, metrics, health, and env at each application level. See Boot's actuator endpoints for more details. For instance, if your mongodb app is running locally on port 23000, you can check granular metrics for this application at: http://localhost:23000/metrics.
[As an FYI: future SCDF releases would include integrating Spring Boot + Spring Cloud Sleuth metrics and visual representation of the same.]
Also how can I change the flow #runtime to create a dynamic mesh?
If you're referring to editing a running streaming pipeline with additions/deletions, we are currently exploring a design approach to support this functionality.
