Spring Boot 2.0.0.M6: Show all metrics with one request

With Spring Boot 2.0.0.M6 and the new actuator metrics endpoint, when I request
GET /application/metrics
only the names of the metrics are shown:
{
"names" : [ "data.source.active.connections", "jvm.buffer.memory.used", "jvm.memory.used", "jvm.buffer.count", "logback.events", "process.uptime", "jvm.memory.committed", "data.source.max.connections", "http.server.requests", "system.load.average.1m", "jvm.buffer.total.capacity", "jvm.memory.max", "process.start.time", "cpu", "data.source.min.connections" ]
}
Clearly I can access a specific metric using
GET /application/metrics/jvm.memory.used
But is there a way to see all metrics with one request?

That's how the metrics endpoint behaves in the Spring Boot 2.0.0.M* releases. There are only two read operations defined in the endpoint class:
ListNamesResponse listNames()
Resolves to GET /application/metrics
MetricResponse metric(@Selector String requiredMetricName,
                      @Nullable List<String> tag)
Resolves to GET /application/metrics/jvm.memory.used
Metrics support has changed quite dramatically in 2.x (now backed by Micrometer), and the Spring Boot 2.x upgrade guide is lacking any details on metrics at the moment, but it's a work in progress, so presumably more details will arrive as Spring Boot 2.0 gets closer to a GA release.
I suspect the move from hierarchical metrics to dimensional metrics resulted in the maintainers deeming the 1.x (hierarchical) metrics display to be no longer viable/suitable.
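In the meantime, if you really do need every metric value in one response, you can register your own actuator endpoint that walks the MeterRegistry. This is only a minimal sketch, not part of Spring Boot's own API: the endpoint id "allmetrics", the flattened name-to-value response shape, and the reliance on MeterRegistry.getMeters()/measure() are my assumptions; with the M6 defaults it would be served at /application/allmetrics once the endpoint is exposed.

import java.util.Map;
import java.util.stream.Collectors;

import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.stereotype.Component;

// Hypothetical endpoint: returns every meter name with its first measurement value.
@Component
@Endpoint(id = "allmetrics")
public class AllMetricsEndpoint {

    private final MeterRegistry registry;

    public AllMetricsEndpoint(MeterRegistry registry) {
        this.registry = registry;
    }

    @ReadOperation
    public Map<String, Double> allMetrics() {
        return registry.getMeters().stream()
                .collect(Collectors.toMap(
                        meter -> meter.getId().getName(),
                        meter -> meter.measure().iterator().next().getValue(),
                        (left, right) -> left)); // keep the first value when a name repeats across tags
    }
}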

Spring Boot 2 has removed this functionality by default, but if this is a requirement of your application, then this custom implementation will serve your purpose: https://github.com/csankhala/spring-metrics-grabber
Add the dependencies to your application from your local repository:
compile("org.springframework.boot:spring-metrics-grabber:1.0.0-SNAPSHOT");
compile("org.springframework.boot:spring-boot-starter-actuator");
Add MetricxEndpoint.class to the @SpringBootApplication scan path:
import org.springframework.metricx.controller.MetricxEndpoint;

@SpringBootApplication(scanBasePackageClasses = { MetricxEndpoint.class, YourSpringBootApplication.class })
public class YourSpringBootApplication {
    public static void main(String[] args) {
        new SpringApplication(YourSpringBootApplication.class).run(args);
    }
}
All metrics will be published on the '/metricx' endpoint with both name and value.
Pattern search is also supported, e.g. '/metricx/jvm.*'.

Related

Spring Data Gemfire: TTL expiration annotation is not working; how to set TTL using annotations?

In Gfsh, I was able to do: create region --name=employee --type=REPLICATE --enable-statistics=true --entry-time-to-live-expiration=900.
We have a requirement to create a Region in Java using the @EnableEntityDefinedRegions annotation. When I use describe in Gfsh the Regions are showing, but entry time-to-live expiration (TTL) is not being set using the approaches below.
Any idea how to set TTL in Java?
Spring Boot 2.5.x and spring-gemfire-starter 1.2.13.RELEASE are used in the app.
@EnableStatistics
@EnableExpiration(policies = {
    @EnableExpiration.ExpirationPolicy(regionNames = "Employee", timeout = 60, action = ExpirationActionType.DESTROY)
})
@EnableEntityDefinedRegions
public class BaseApplication {
....
@Region("Employee")
public class Employee {
or
@EnableStatistics
@EnableExpiration
public class BaseApplication {
----
@Region("Employee")
@TimeToLiveExpiration(timeout = "60", action = "DESTROY")
@Expiration(timeout = "60", action = "DESTROY")
public class Employee {
....
or
using the bean creation approach is also not working; I get the error "operation is not supported on a client cache"
@EnableEntityDefinedRegions
// @PeerCacheApplication for peer cache; the Region is not being created in PCC GemFire
public class BaseApplication {
---
}
#Bean(name="employee")
PartitionedRegionFactoryBean<String, Employee> getEmployee
(final GemFireCache cache,
RegionAttributes<String, Employee> peopleRegionAttributes) {
PartitionedRegionFactoryBean<String, Employee> getEmployee = new PartitionedRegionFactoryBean<String, Employee>();
getEmployee.setCache(cache);
getEmployee.setAttributes(peopleRegionAttributes);
getEmployee.setCache(cache);
getEmployee.setClose(false);
getEmployee.setName("Employee");
getEmployee.setPersistent(false);
getEmployee.setDataPolicy( DataPolicy.PARTITION);
getEmployee.setStatisticsEnabled( true );
getEmployee.setEntryTimeToLive( new ExpirationAttributes(60) );
return getEmployee;
}
@Bean
@SuppressWarnings("unchecked")
RegionAttributesFactoryBean EmployeeAttributes() {
    RegionAttributesFactoryBean EmployeeAttributes = new RegionAttributesFactoryBean();
    EmployeeAttributes.setKeyConstraint(String.class);
    EmployeeAttributes.setValueConstraint(Employee.class);
    return EmployeeAttributes;
}
First, Spring Boot for Apache Geode (SBDG) 1.2.x is already EOL because Spring Boot 2.2.x is EOL (see details on support). SBDG follows Spring Boot's support lifecycle and policies.
Second, SBDG 1.2.x is based on Spring Boot 2.2.x. See the Version Compatibility Matrix for further details. We will not support mismatched dependency versions. While mismatched dependency versions may work in certain cases (mileage varies depending on your use case), version combinations not explicitly stated in the Version Compatibility Matrix are nonetheless unsupported. Also see the documentation on this matter.
Now, regarding your problem with TTL Region entry expiration policies...
SBDG auto-configuration creates an Apache Geode ClientCache instance by default (see docs). You cannot create a PARTITION Region using a ClientCache instance.
If your Spring Boot application is intended to be a peer Cache instance in an Apache Geode cluster (server-side), then you must explicitly declare your intention by overriding SBDG's auto-configuration, like so:
@PeerCacheApplication
@SpringBootApplication
class MySpringBootApacheGeodePeerCacheApplication {
// ...
}
TIP: See the documentation on creating peer Cache applications using SBDG.
Keep in mind that when you override SBDG's auto-configuration, you may necessarily and implicitly become responsible for other aspects of Apache Geode's configuration, e.g. Security! Heed the warning.
On the other hand, if your intent is to truly enable your Spring Boot/SBDG application as a cache "client" (i.e. a ClientCache instance, the default), then TTL Region entry expiration policies do not make sense on client PROXY Regions, which is the default DataPolicy (EMPTY) for client Regions when using the Spring Data for Apache Geode (SDG) @EnableEntityDefinedRegions annotation (see Javadoc). This is because Apache Geode client PROXY Regions do not store any data locally. All data access operations are forwarded to the server/cluster.
Even if you alter the configuration to use client CACHING_PROXY Regions, the TTL Region expiration policies will only take effect locally. You must configure your corresponding server/cluster Regions, separately (e.g. using Gfsh).
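If a local-only TTL on the client is really what you want, here is a minimal sketch (the package name example.app.model and the 60-second timeout are placeholders, not from your project): switch the entity-defined client Regions from the default PROXY to CACHING_PROXY so that annotation-based expiration has local entries to act on.

import org.apache.geode.cache.client.ClientRegionShortcut;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.EnableEntityDefinedRegions;
import org.springframework.data.gemfire.config.annotation.EnableExpiration;

@SpringBootApplication
@EnableExpiration
@EnableEntityDefinedRegions(basePackages = "example.app.model",
    clientRegionShortcut = ClientRegionShortcut.CACHING_PROXY)
public class BaseApplication {
    // @TimeToLiveExpiration(timeout = "60", action = "DESTROY") stays on the Employee entity class
}

Remember, this only expires the client's local copies; the matching server/cluster Regions still need their own expiration configuration (e.g. via Gfsh, as in your create region command).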
Also, even though you can push cluster configuration from the client using SDG's @EnableClusterConfiguration (doc, Javadoc), or alternatively and preferably, SBDG's @EnableClusterAware annotation (doc, Javadoc; which is meta-annotated with SDG's @EnableClusterConfiguration), this functionality only pushes Region and Index configuration to the cluster, not expiration policies.
See the SBDG documentation on expiration for further details. This doc also leads to SDG's documentation on expiration, and specifically Annotation-based expiration configuration.
I see that the SBDG docs are not real clear on the matter of expiration, so I have filed an Issue ticket in SBDG to make this more clear.

Use Micrometer with OpenFeign in spring-boot application

The OpenFeign documentation says that it supports Micrometer. How does the integration work? I could not find anything except this little documentation.
I have a FeignClient in a spring boot application
@FeignClient(name = "SomeService", url = "xxx", configuration = FeignConfiguration.class)
public interface SomeService {
    @GET
    @Path("/something")
    Something getSomething();
}
with the configuration
public class FeignConfiguration {
    @Bean
    public Capability capability() {
        return new MicrometerCapability();
    }
}
and the micrometer integration as a dependency
<dependency>
    <groupId>io.github.openfeign</groupId>
    <artifactId>feign-micrometer</artifactId>
    <version>10.12</version>
</dependency>
The code makes a call but I could not find any new metrics via the actuator overview, expecting some general information about my HTTP requests. What part is missing?
Update
I added the support for this to spring-cloud-openfeign. After the next release (2020.0.2), if Micrometer is set up, the only thing you need to do is put feign-micrometer on your classpath.
Old answer
I'm not sure if you do, but I recommend using spring-cloud-openfeign, which auto-configures Feign components for you. Unfortunately, it seems it does not auto-configure Capability (that's one reason why your solution does not work), so you need to do it manually; please see the docs for how to do it.
I was able to make this work combining the examples in the OpenFeign and Spring Cloud OpenFeign docs:
@Import(FeignClientsConfiguration.class)
class FooController {

    private final FooClient fooClient;

    public FooController(Decoder decoder, Encoder encoder, Contract contract, MeterRegistry meterRegistry) {
        this.fooClient = Feign.builder()
                .encoder(encoder)
                .decoder(decoder)
                .contract(contract)
                .addCapability(new MicrometerCapability(meterRegistry))
                .target(FooClient.class, "https://PROD-SVC");
    }
}
What I did:
Used spring-cloud-openfeign
Added feign-micrometer (see feign-bom)
Created the client in the way you can see above
Importing FeignClientsConfiguration and passing MeterRegistry to MicrometerCapability are vital
After these, and calling the client, I had new metrics:
feign.Client
feign.Feign
feign.codec.Decoder
feign.codec.Decoder.response_size
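Assuming the actuator metrics endpoint is exposed and the default /actuator base path is in use, you can then inspect one of these metrics directly, for example:
GET /actuator/metrics/feign.Client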

Spring Boot 2 integrate Brave MySQL-Integration into Zipkin

I am trying to integrate the Brave MySql Instrumentation into my Spring Boot 2.x service to automatically let its interceptor enrich my traces with spans concerning MySql-Queries.
The current Gradle dependencies are the following:
compile 'io.zipkin.zipkin2:zipkin:2.4.5'
compile('io.zipkin.reporter2:zipkin-sender-okhttp3:2.3.1')
compile('io.zipkin.brave:brave-instrumentation-mysql:4.14.3')
compile('org.springframework.cloud:spring-cloud-starter-zipkin:2.0.0.M5')
I already configured Sleuth successfully to send traces concerning HTTP-Request to my Zipkin-Server and now I wanted to add some spans for each MySql-Query the service does.
The TracingConfiguration is this:
@Configuration
public class TracingConfiguration {

    /** Configuration for how to send spans to Zipkin */
    @Bean
    Sender sender() {
        return OkHttpSender.create("https://myzipkinserver.com/api/v2/spans");
    }

    /** Configuration for how to buffer spans into messages for Zipkin */
    @Bean
    AsyncReporter<Span> spanReporter() {
        return AsyncReporter.create(sender());
    }

    @Bean
    Tracing tracing(Reporter<Span> spanListener) {
        return Tracing.newBuilder()
                .spanReporter(spanReporter())
                .build();
    }
}
The Query-Interceptor works properly, but my problem now is that the spans are not added to the existing trace but are each added to a new one.
I guess it's because of the creation of a new sender/reporter in the configuration, but I have not been able to reuse the existing one created by the Spring Boot auto-configuration.
That would moreover remove the necessity to redundantly define the Zipkin URL (because it is already defined for Zipkin in my application.yml).
I already tried autowiring the Zipkin-Reporter to my bean, but all I got was a SpanReporter - but the Brave Tracer builder requires a Reporter<Span>.
Do you have any advice for me how to properly wire things up?
Please use the latest snapshots. Sleuth in the latest snapshots uses Brave internally, so the integration will be extremely simple.
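Once Sleuth is in charge (it auto-configures Brave's Tracing and the Zipkin reporter from application.yml), you can usually drop the custom TracingConfiguration entirely; the Brave MySQL statement interceptor picks up the current tracing context on its own, so its spans join the active trace instead of starting a new one. A minimal sketch, assuming MySQL Connector/J 5.x and the statement interceptor documented by brave-instrumentation-mysql (host, database and service name are placeholders):

spring:
  zipkin:
    base-url: https://myzipkinserver.com
  datasource:
    url: jdbc:mysql://localhost:3306/mydb?statementInterceptors=brave.mysql.TracingStatementInterceptor&zipkinServiceName=mydb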

Implement multi-tenanted application with Keycloak and springboot

We use 'KeycloakSpringBootConfigResolver' for reading the Keycloak configuration from the Spring Boot properties file instead of keycloak.json.
Now there are guidelines to implement a multi-tenant application using keycloak by overriding 'KeycloakConfigResolver' as specified in http://www.keycloak.org/docs/2.3/securing_apps_guide/topics/oidc/java/multi-tenancy.html.
The steps defined here can only be used with keycloak.json.
How can we adapt this to a Spring Boot application such that Keycloak properties are read from the Spring Boot properties file and multi-tenancy is achieved?
You can access the Keycloak config you specified in your application.yaml (or application.properties) if you inject org.keycloak.representations.adapters.config.AdapterConfig into your component.
@Component
public class MyKeycloakConfigResolver implements KeycloakConfigResolver {

    private final AdapterConfig keycloakConfig;

    public MyKeycloakConfigResolver(org.keycloak.representations.adapters.config.AdapterConfig keycloakConfig) {
        this.keycloakConfig = keycloakConfig;
    }

    @Override
    public KeycloakDeployment resolve(OIDCHttpFacade.Request request) {
        // make a defensive copy before changing the config
        AdapterConfig currentConfig = new AdapterConfig();
        BeanUtils.copyProperties(keycloakConfig, currentConfig);
        // change stuff here, for example compute the realm
        return KeycloakDeploymentBuilder.build(currentConfig);
    }
}
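How the realm is computed is entirely application-specific. As an illustration only (it assumes the tenant is the first path segment of the request URI; extractTenantFrom is a hypothetical helper you would write yourself), the body of resolve() could look like this:

// e.g. https://host/tenant-a/api/orders -> realm "tenant-a" (hypothetical mapping)
String tenant = extractTenantFrom(request.getURI());
AdapterConfig currentConfig = new AdapterConfig();
BeanUtils.copyProperties(keycloakConfig, currentConfig);
currentConfig.setRealm(tenant);
return KeycloakDeploymentBuilder.build(currentConfig);

In practice you would also want to cache the built KeycloakDeployment per realm, since KeycloakDeploymentBuilder.build(...) is not free.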
After several trials, the only feasible option for Spring Boot is to have multiple instances of the Spring Boot application running with different Spring 'profiles'.
Each application instance can have its own Keycloak properties (as it is under a different profile), including the realm; see the sketch below.
The challenge is to have an upgrade path for all instances for version upgrades/bug fixes, but I guess there are multiple strategies already implemented (not part of this discussion).
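For example (the property names come from the Keycloak Spring Boot adapter; the realm, resource and URL values are placeholders), each profile carries its own adapter settings:

# application-tenant-a.properties
keycloak.auth-server-url=https://keycloak.example.com/auth
keycloak.realm=tenant-a
keycloak.resource=my-app

# application-tenant-b.properties
keycloak.auth-server-url=https://keycloak.example.com/auth
keycloak.realm=tenant-b
keycloak.resource=my-app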
There is a ticket regarding this problem: https://issues.jboss.org/browse/KEYCLOAK-4139?_sscc=t
Comments for that ticket also talk about possible workarounds intervening in servlet setup of the service used (Tomcat/Undertow/Jetty), which you could try.
Note that the documentation you linked in your first comment is super outdated!

Need matching class for LoggersMvcEndpoint in spring-boot 2.1.9 release

I am upgrading my project from spring-boot 1.5.12.release to 2.1.9.release. I am unable to find LoggersMvcEndpoint (https://docs.spring.io/spring-boot/docs/1.5.12.RELEASE/api/org/springframework/boot/actuate/endpoint/mvc/LoggersMvcEndpoint.html) in the latest version.
In one of my controllers I had this. Can someone help me fix it?
public class LoggerController extends CloudRestTemplate {

    @Autowired
    LoggersMvcEndpoint loggerAPI;

    @Override
    public Object getFromInternalApi(final String param) {
        return StringUtils.isEmpty(param) ? loggerAPI.invoke() : loggerAPI.get(param);
    }

    @Override
    public Object postToInternalApi(final String param, final Object request) {
        return loggerAPI.set(param, (Map<String, String>) request);
    }
}
As per Spring docs here
Endpoint infrastructure
Spring Boot 2 brings a brand new endpoint
infrastructure that allows you to define one or several operations in
a technology independent fashion with support for Spring MVC, Spring
WebFlux and Jersey! Spring Boot 2 will have native support for Jersey
and writing an adapter for another JAX-RS implementation should be
easy as long as there is a way to programmatically register resources.
The new @Endpoint annotation declares this type to be an endpoint with
a mandatory, unique id. As we will see later, a bunch of properties
will be automatically inferred from that. No additional code is
required to expose this endpoint at /applications/loggers or as a
org.springframework.boot:type=Endpoint,name=Loggers JMX MBean.
Refer to the documentation; it will help you further.
And for your info, LoggersMvcEndpoint was there until 2.0.0.M3 (https://docs.spring.io/spring-boot/docs/2.0.0.M3/api/org/springframework/boot/actuate/endpoint/mvc/LoggersMvcEndpoint.html); however, there is no reference to its deprecation in the release notes of the subsequent version, 2.0.0.M4:
https://docs.spring.io/spring-boot/docs/2.0.0.M4/api/deprecated-list.html#class
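For the controller in the question, the closest replacement in Spring Boot 2.x is org.springframework.boot.actuate.logging.LoggersEndpoint. The following is only a sketch: it assumes the loggers endpoint is enabled so the bean is injectable, and that the request body carries the new level under a "configuredLevel" key (my assumption, mirroring the actuator's own JSON).

import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.actuate.logging.LoggersEndpoint;
import org.springframework.boot.logging.LogLevel;
import org.springframework.util.StringUtils;

public class LoggerController extends CloudRestTemplate {

    @Autowired
    LoggersEndpoint loggerAPI;

    @Override
    public Object getFromInternalApi(final String param) {
        // loggers() replaces invoke(); loggerLevels(name) replaces get(name)
        return StringUtils.isEmpty(param) ? loggerAPI.loggers() : loggerAPI.loggerLevels(param);
    }

    @Override
    @SuppressWarnings("unchecked")
    public Object postToInternalApi(final String param, final Object request) {
        // configureLogLevel(name, level) replaces set(name, map); the level is now a LogLevel enum
        Map<String, String> body = (Map<String, String>) request;
        loggerAPI.configureLogLevel(param, LogLevel.valueOf(body.get("configuredLevel")));
        return null;
    }
}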
