Spring Boot 2: integrate Brave MySQL instrumentation into Zipkin - spring-boot

I am trying to integrate the Brave MySQL instrumentation into my Spring Boot 2.x service so that its interceptor automatically enriches my traces with spans for MySQL queries.
The current Gradle dependencies are the following:
compile 'io.zipkin.zipkin2:zipkin:2.4.5'
compile('io.zipkin.reporter2:zipkin-sender-okhttp3:2.3.1')
compile('io.zipkin.brave:brave-instrumentation-mysql:4.14.3')
compile('org.springframework.cloud:spring-cloud-starter-zipkin:2.0.0.M5')
I already configured Sleuth successfully to send traces concerning HTTP requests to my Zipkin server, and now I want to add some spans for each MySQL query the service does.
The TracingConfiguration is this:
@Configuration
public class TracingConfiguration {

    /** Configuration for how to send spans to Zipkin */
    @Bean
    Sender sender() {
        return OkHttpSender.create("https://myzipkinserver.com/api/v2/spans");
    }

    /** Configuration for how to buffer spans into messages for Zipkin */
    @Bean
    AsyncReporter<Span> spanReporter() {
        return AsyncReporter.create(sender());
    }

    @Bean
    Tracing tracing(Reporter<Span> spanListener) {
        return Tracing.newBuilder()
            .spanReporter(spanReporter())
            .build();
    }
}
The query interceptor works properly, but my problem now is that the spans are not added to the existing trace; each is added to a new one.
I guess it's because of the creation of a new sender/reporter in the configuration, but I have not been able to reuse the existing one created by the Spring Boot auto-configuration.
That would moreover remove the need to redundantly define the Zipkin URL (because it is already defined for Zipkin in my application.yml).
I already tried autowiring the Zipkin reporter to my bean, but all I got is a SpanReporter - but the Brave Tracer builder requires a Reporter<Span>.
Do you have any advice on how to properly wire things up?

Please use the latest snapshots. Sleuth in the latest snapshots uses Brave internally, so integration will be extremely simple.
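For illustration, a minimal sketch of what the wiring can look like once the Brave-based Sleuth (Spring Cloud Sleuth 2.0+) is on the classpath. Sleuth then auto-configures brave.Tracing and the Zipkin reporter from spring.zipkin.base-url, so the hand-rolled Sender/Reporter/Tracing beans can be deleted and the MySQL spans join the existing trace. The JDBC URL property follows the brave-instrumentation-mysql README; the verification bean is only an optional sanity check, not required wiring:

import brave.Tracing;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch, assuming Spring Cloud Sleuth 2.0+ (Brave-based) auto-configures brave.Tracing
// and the Zipkin reporter from spring.zipkin.base-url in application.yml.
@Configuration
public class TracingConfiguration {

    // Optional: inject the Sleuth-managed Tracing to confirm a single tracer is in play.
    // brave-instrumentation-mysql picks it up via Tracing.current() once the interceptor
    // is registered on the JDBC URL, e.g.
    // jdbc:mysql://host:3306/db?statementInterceptors=brave.mysql.TracingStatementInterceptor
    @Bean
    CommandLineRunner verifyTracing(Tracing tracing) {
        return args -> System.out.println("MySQL spans will report through: " + tracing);
    }
}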

Related

Spring Data Gemfire: TTL expiration annotation is not working; how to set TTL using annotations?

In Gfsh, I was able to do: create region --name=employee --type=REPLICATE --enable-statistics=true --entry-time-to-live-expiration=900.
We have a requirement to create the Region in Java using the @EnableEntityDefinedRegions annotation. When I use describe in Gfsh the Regions are showing, but the entry time-to-live expiration (TTL) is not being set using the ways below.
Any idea how to set TTL in Java?
Spring Boot 2.5.x and spring-gemfire-starter 1.2.13.RELEASE are used in the app.
@EnableStatistics
@EnableExpiration(policies = {
    @EnableExpiration.ExpirationPolicy(regionNames = "Employee", timeout = 60, action = ExpirationActionType.DESTROY)
})
@EnableEntityDefinedRegions
public class BaseApplication {
....

@Region("Employee")
public class Employee {
or
@EnableStatistics
@EnableExpiration
public class BaseApplication {
----

@Region("Employee")
@TimeToLiveExpiration(timeout = "60", action = "DESTROY")
@Expiration(timeout = "60", action = "DESTROY")
public class Employee {
....
or
Using the bean creation way is also not working; I am getting the error "operation is not supported on a client cache".
@EnableEntityDefinedRegions
// @PeerCacheApplication - for a peer cache, the Region is not being created in PCC GemFire
public class BaseApplication {
---
}
@Bean(name = "employee")
PartitionedRegionFactoryBean<String, Employee> getEmployee(final GemFireCache cache,
        RegionAttributes<String, Employee> peopleRegionAttributes) {
    PartitionedRegionFactoryBean<String, Employee> getEmployee = new PartitionedRegionFactoryBean<String, Employee>();
    getEmployee.setCache(cache);
    getEmployee.setAttributes(peopleRegionAttributes);
    getEmployee.setClose(false);
    getEmployee.setName("Employee");
    getEmployee.setPersistent(false);
    getEmployee.setDataPolicy(DataPolicy.PARTITION);
    getEmployee.setStatisticsEnabled(true);
    getEmployee.setEntryTimeToLive(new ExpirationAttributes(60));
    return getEmployee;
}

@Bean
@SuppressWarnings("unchecked")
RegionAttributesFactoryBean EmployeeAttributes() {
    RegionAttributesFactoryBean employeeAttributes = new RegionAttributesFactoryBean();
    employeeAttributes.setKeyConstraint(String.class);
    employeeAttributes.setValueConstraint(Employee.class);
    return employeeAttributes;
}
First, Spring Boot for Apache Geode (SBDG) 1.2.x is already EOL because Spring Boot 2.2.x is EOL (see details on support). SBDG follows Spring Boot's support lifecycle and policies.
Second, SBDG 1.2.x is based on Spring Boot 2.2.x. See the Version Compatibility Matrix for further details. We will not support mismatched dependency versions: while mismatched versions may work in certain cases (mileage varies depending on your use case), version combinations not explicitly stated in the Version Compatibility Matrix are nonetheless unsupported. Also see the documentation on this matter.
Now, regarding your problem with TTL Region entry expiration policies...
SBDG auto-configuration creates an Apache Geode ClientCache instance by default (see docs). You cannot create a PARTITION Region using a ClientCache instance.
If your Spring Boot application is intended to be a peer Cache instance in an Apache Geode cluster (server-side), then you must explicitly declare your intention by overriding SBDG's auto-configuration, like so:
@PeerCacheApplication
@SpringBootApplication
class MySpringBootApacheGeodePeerCacheApplication {
    // ...
}
TIP: See the documentation on creating peer Cache applications using SBDG.
Keep in mind that when you override SBDG's auto-configuration, you may implicitly become responsible for other aspects of Apache Geode's configuration, e.g. Security! Heed the warning.
On the other hand, if your intent is to truly enable your Spring Boot/SBDG application as a cache "client" (i.e. a ClientCache instance, the default), then TTL Region entry expiration policies do not make sense on client PROXY Regions, which is the default DataPolicy (EMPTY) for client Regions when using the Spring Data for Apache Geode (SDG) @EnableEntityDefinedRegions annotation (see Javadoc). This is because Apache Geode client PROXY Regions do not store any data locally. All data access operations are forwarded to the server/cluster.
Even if you alter the configuration to use client CACHING_PROXY Regions, the TTL Region expiration policies will only take effect locally. You must configure your corresponding server/cluster Regions, separately (e.g. using Gfsh).
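For example, a minimal sketch of the CACHING_PROXY variant (local-only expiration), reusing the annotations from the question; the clientRegionShortcut attribute belongs to SDG's @EnableEntityDefinedRegions, and the corresponding server/cluster Regions would still need their own expiration configuration:

// Sketch, assuming a ClientCache application (the SBDG default). CACHING_PROXY client
// Regions hold data locally, so the annotation-based TTL applies on the client only.
@SpringBootApplication
@EnableStatistics
@EnableExpiration
@EnableEntityDefinedRegions(clientRegionShortcut = ClientRegionShortcut.CACHING_PROXY)
public class BaseApplication {
    // ...
}

@Region("Employee")
@TimeToLiveExpiration(timeout = "60", action = "DESTROY")
public class Employee {
    // ...
}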
Also, even though you can push cluster configuration from the client using SDG's @EnableClusterConfiguration (doc, Javadoc), or alternatively and preferably, SBDG's @EnableClusterAware annotation (doc, Javadoc; which is meta-annotated with SDG's @EnableClusterConfiguration), this functionality only pushes Region and Index configuration to the cluster, not expiration policies.
See the SBDG documentation on expiration for further details. This doc also leads to SDG's documentation on expiration, and specifically Annotation-based expiration configuration.
I see that the SBDG docs are not really clear on the matter of expiration, so I have filed an issue ticket in SBDG to make this clearer.

Use Micrometer with OpenFeign in spring-boot application

The OpenFeign documentation says that it supports Micrometer. How does the integration work? I could not find anything except this little documentation.
I have a FeignClient in a spring boot application
@FeignClient(name = "SomeService", url = "xxx", configuration = FeignConfiguration.class)
public interface SomeService {

    @GET
    @Path("/something")
    Something getSomething();
}
with the configuration
public class FeignConfiguration {

    @Bean
    public Capability capability() {
        return new MicrometerCapability();
    }
}
and the micrometer integration as a dependency
<dependency>
<groupId>io.github.openfeign</groupId>
<artifactId>feign-micrometer</artifactId>
<version>10.12</version>
</dependency>
The code makes a call but I could not find any new metrics via the actuator overview, expecting some general information about my HTTP requests. What part is missing?
Update
I added support for this to spring-cloud-openfeign. After the next release (2020.0.2), if Micrometer is set up, the only thing you need to do is put feign-micrometer on your classpath.
Old answer
I'm not sure if you do, but I recommend using spring-cloud-openfeign, which auto-configures Feign components for you. Unfortunately, it seems it does not auto-configure Capability (that's one reason why your solution does not work), so you need to do it manually; please see the docs for how to do it.
I was able to make this work by combining the examples in the OpenFeign and Spring Cloud OpenFeign docs:
@Import(FeignClientsConfiguration.class)
class FooController {

    private final FooClient fooClient;

    public FooController(Decoder decoder, Encoder encoder, Contract contract, MeterRegistry meterRegistry) {
        this.fooClient = Feign.builder()
                .encoder(encoder)
                .decoder(decoder)
                .contract(contract)
                .addCapability(new MicrometerCapability(meterRegistry))
                .target(FooClient.class, "https://PROD-SVC");
    }
}
What I did:
Used spring-cloud-openfeign
Added feign-micrometer (see feign-bom)
Created the client in the way you can see above
Importing FeignClientsConfiguration and passing MeterRegistry to MicrometerCapability are vital
After these, and calling the client, I had new metrics:
feign.Client
feign.Feign
feign.codec.Decoder
feign.codec.Decoder.response_size

Spring Boot 2.3.0.M4, Cassandra, and SSL

I have been using ClusterBuilderCustomizer to customize the SSL connection between my Spring Boot application (2.2.5.RELEASE) and the Cassandra database. After migrating to Spring Boot 2.3.0.M4, my code no longer compiles as the ClusterBuilderCustomizer doesn't exist anymore.
As per Spring Boot 2.3.0 release notes, it has been replaced with DriverConfigLoaderBuilderCustomizer and CqlSessionBuilderCustomizer. Does anyone have a working example on how to use any of these customizer classes with SSL?
You just need to declare two beans having these types:
@Bean
public CqlSessionBuilderCustomizer cqlSessionBuilderCustomizer() {
    return cqlSessionBuilder -> cqlSessionBuilder
            .withNodeStateListener(new MyNodeStateListener())
            .withSchemaChangeListener(new MySchemaChangeListener());
}

@Bean
public DriverConfigLoaderBuilderCustomizer driverConfigLoaderBuilderCustomizer() {
    return loaderBuilder -> loaderBuilder
            .withDuration(DefaultDriverOption.REQUEST_TIMEOUT, Duration.ofSeconds(10));
}
Use CqlSessionBuilderCustomizer to pass runtime objects to the session builder, e.g. node state listeners or schema change listeners.
Use DriverConfigLoaderBuilderCustomizer to programmatically customize the driver configuration. See the driver docs for more information on how to programmatically configure the driver.
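For the SSL part of the question specifically, a minimal sketch (trust-store path and password are hypothetical): build an SSLContext yourself and hand it to the session through a CqlSessionBuilderCustomizer, since the builder accepts it via withSslContext:

@Bean
public CqlSessionBuilderCustomizer sslCustomizer() {
    return cqlSessionBuilder -> cqlSessionBuilder.withSslContext(createSslContext());
}

private SSLContext createSslContext() {
    // Hypothetical trust store location and password; adjust to your environment.
    try (InputStream trustStoreStream = new FileInputStream("/path/to/truststore.jks")) {
        KeyStore trustStore = KeyStore.getInstance("JKS");
        trustStore.load(trustStoreStream, "changeit".toCharArray());

        TrustManagerFactory trustManagerFactory =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        trustManagerFactory.init(trustStore);

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, trustManagerFactory.getTrustManagers(), null);
        return sslContext;
    }
    catch (Exception e) {
        throw new IllegalStateException("Unable to create SSLContext for Cassandra", e);
    }
}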

Reload property value when external property file changes - spring-boot

I am using Spring Boot, and I have two external properties files so that I can easily change their values.
But I hope the Spring app will reload the changed values when the files are updated, just like reading from the files directly. Since a property file is easy enough to meet my need, I hope I don't necessarily need a DB or file.
I use two different ways to load property values; the code samples look like:
@RestController
public class Prop1Controller {

    @Value("${prop1}")
    private String prop1;

    @RequestMapping(value = "/prop1", method = RequestMethod.GET)
    public String getProp() {
        return prop1;
    }
}

@RestController
public class Prop2Controller {

    @Autowired
    private Environment env;

    @RequestMapping(value = "/prop2/{sysId}", method = RequestMethod.GET)
    public String prop2(@PathVariable String sysId) {
        return env.getProperty("prop2." + sysId);
    }
}
I will boot my application with
-Dspring.config.location=conf/my.properties
I'm afraid you will need to restart the Spring context.
I think the only way to achieve your need is to enable Spring Cloud. There is a refresh endpoint /refresh which refreshes the context and beans.
I'm not quite sure if you need a spring-cloud-config-server (it's a microservice and very easy to build) where your config is stored (Git or SVN), or if it is also usable just with the application.properties file in the application.
Here you can find the doc to the refresh scope and spring cloud.
You should be able to use Spring Cloud for that
Add this as a dependency
compile group: 'org.springframework.cloud', name: 'spring-cloud-starter', version: '1.1.2.RELEASE'
And then use the @RefreshScope annotation
A Spring #Bean that is marked as #RefreshScope will get special treatment when there is a configuration change. This addresses the problem of stateful beans that only get their configuration injected when they are initialized. For instance if a DataSource has open connections when the database URL is changed via the Environment, we probably want the holders of those connections to be able to complete what they are doing. Then the next time someone borrows a connection from the pool he gets one with the new URL.
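A minimal sketch of that approach applied to the first controller from the question (assuming spring-cloud-starter and the actuator are on the classpath; the bean is rebuilt with the new value after POSTing to the refresh endpoint):

@RefreshScope
@RestController
public class Prop1Controller {

    @Value("${prop1}")
    private String prop1;

    @RequestMapping(value = "/prop1", method = RequestMethod.GET)
    public String getProp() {
        // re-evaluated after a refresh because the scoped bean is recreated
        return prop1;
    }
}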
Also relevant if you have Spring Actuator
For a Spring Boot Actuator application there are some additional management endpoints:
POST to /env to update the Environment and rebind @ConfigurationProperties and log levels
POST to /refresh for re-loading the bootstrap context and refreshing the @RefreshScope beans
Spring Cloud Doc
(1) Spring Cloud's RestartEndpoint
You may use the RestartEndpoint: Programmatically restart Spring Boot application / Refresh Spring Context
RestartEndpoint is an Actuator endpoint, bundled with spring-cloud-context.
However, RestartEndpoint will not monitor for file changes; you'll have to handle that yourself.
(2) devtools
I don't know if this is for a production application or not. You may hack devtools a little to do what you want.
Take a look at this other answer I wrote for another question: Force enable spring-boot DevTools when running Jar
Devtools monitors for file changes:
Applications that use spring-boot-devtools will automatically restart
whenever files on the classpath change.
Technically, devtools is built to only work within an IDE. With the hack, it also works when launched from a jar. However, I would not do that for a real production application; you decide if it fits your needs.
I know this is an old thread, but it will help someone in the future.
You can use a scheduler to periodically refresh properties.
// MyApplication.java
@EnableScheduling

// application.properties
management.endpoint.refresh.enabled = true

// ContextRefreshConfig.java
@Autowired
private RefreshEndpoint refreshEndpoint;

@Scheduled(fixedDelay = 60000, initialDelay = 10000)
public Collection<String> refreshContext() {
    final Collection<String> properties = refreshEndpoint.refresh();
    LOGGER.log(Level.INFO, "Refreshed Properties {0}", properties);
    return properties;
}

// add spring-cloud-starter to the pom file.
Attributes annotated with @Value are refreshed if the bean is annotated with @RefreshScope.
Configurations annotated with @ConfigurationProperties are refreshed without @RefreshScope (see the sketch below).
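As a small sketch of that second point (prefix and field name are hypothetical), a @ConfigurationProperties bean is simply re-bound on a context refresh:

@Component
@ConfigurationProperties(prefix = "myapp")
public class MyAppProperties {

    // bound from myapp.message and re-bound on every refresh, no @RefreshScope needed
    private String message;

    public String getMessage() { return message; }
    public void setMessage(String message) { this.message = message; }
}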
Hope this will help.
You can follow the ContextRefresher.refresh() implementation:
public synchronized Set<String> refresh() {
Map<String, Object> before = extract(
this.context.getEnvironment().getPropertySources());
addConfigFilesToEnvironment();
Set<String> keys = changes(before,
extract(this.context.getEnvironment().getPropertySources())).keySet();
this.context.publishEvent(new EnvironmentChangeEvent(context, keys));
this.scope.refreshAll();
return keys;
}

Need matching class for LoggersMvcEndpoint in spring-boot 2.1.9 release

I am upgrading my project from spring-boot 1.5.12.release to 2.1.9.release. I am unable to find LoggersMvcEndpoint (https://docs.spring.io/spring-boot/docs/1.5.12.RELEASE/api/org/springframework/boot/actuate/endpoint/mvc/LoggersMvcEndpoint.html) in the latest version.
In one of my controllers I had this. Can someone help me fix this?
public class LoggerController extends CloudRestTemplate {

    @Autowired
    LoggersMvcEndpoint loggerAPI;

    @Override
    public Object getFromInternalApi(final String param) {
        return StringUtils.isEmpty(param) ? loggerAPI.invoke() : loggerAPI.get(param);
    }

    @Override
    public Object postToInternalApi(final String param, final Object request) {
        return loggerAPI.set(param, (Map<String, String>) request);
    }
}
As per Spring docs here
Endpoint infrastructure
Spring Boot 2 brings a brand new endpoint infrastructure that allows you to define one or several operations in a technology-independent fashion with support for Spring MVC, Spring WebFlux and Jersey! Spring Boot 2 will have native support for Jersey, and writing an adapter for another JAX-RS implementation should be easy as long as there is a way to programmatically register resources.
The new @Endpoint annotation declares this type to be an endpoint with a mandatory, unique id. As we will see later, a bunch of properties will be automatically inferred from that. No additional code is required to expose this endpoint at /applications/loggers or as a org.springframework.boot:type=Endpoint,name=Loggers JMX MBean.
Refer to the documentation; it will help you further.
And for your information, LoggersMvcEndpoint was there until 2.0.0.M3 (https://docs.spring.io/spring-boot/docs/2.0.0.M3/api/org/springframework/boot/actuate/endpoint/mvc/LoggersMvcEndpoint.html); however, there is no reference to its deprecation in the release notes of the subsequent version, 2.0.0.M4:
https://docs.spring.io/spring-boot/docs/2.0.0.M4/api/deprecated-list.html#class
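If you just need an in-process replacement for the injected endpoint, a possible migration sketch is below, assuming the Spring Boot 2.x actuator is on the classpath: org.springframework.boot.actuate.logging.LoggersEndpoint exposes loggers(), loggerLevels(name) and configureLogLevel(name, level). CloudRestTemplate and the request-body shape come from the question; the "configuredLevel" key is an assumption about what your callers send:

public class LoggerController extends CloudRestTemplate {

    @Autowired
    LoggersEndpoint loggerApi;

    @Override
    public Object getFromInternalApi(final String param) {
        return StringUtils.isEmpty(param) ? loggerApi.loggers() : loggerApi.loggerLevels(param);
    }

    @Override
    @SuppressWarnings("unchecked")
    public Object postToInternalApi(final String param, final Object request) {
        Map<String, String> body = (Map<String, String>) request;
        // LogLevel is org.springframework.boot.logging.LogLevel; expects e.g. "DEBUG"
        loggerApi.configureLogLevel(param, LogLevel.valueOf(body.get("configuredLevel")));
        return body;
    }
}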
