What do I want?
To use caching and metrics together.
Why?
Faster responses.
Some metric data to evaluate things like total hits, average duration, minimum duration, maximum duration, etc.
I tried:
@CacheResult
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-cache</artifactId>
</dependency>
and
@SimplyTimed
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-smallrye-metrics</artifactId>
</dependency>
I applied both of them like this:
@GET
@CacheResult(cacheName = "someData")
@SimplyTimed
@Produces(MediaType.APPLICATION_JSON)
public List<String> getSome() {
    return ... some data;
}
Both work as expected (on the first call)... sweet!
The thing is, because caching only runs the method on the first call, subsequent calls are handled straight through the cache and the metrics are no longer recorded.
I know that the quarkus-cache extension is still in preview. As far as I know, MicroProfile has no business with caching.
And yes...
"Micrometer is the recommended approach to metrics for Quarkus. Use the SmallRye Metrics extension when it’s required to retain MicroProfile specification compatibility."
So far I haven't found any objective/elegant solution through Micrometer. From what I've seen, I would have to abandon quarkus-cache and quarkus-smallrye-metrics and work manually with Caffeine and Micrometer metrics.
Which brings me to the question: is there any way for quarkus-cache to keep metrics recording working out of the box, whether with quarkus-smallrye-metrics annotations or any similar annotation-based metrics framework?
As explained above, the constraints of quarkus-cache and quarkus-smallrye-metrics "look" mutually exclusive. I understand that's a tough call. Please, go easy on me, ok?
Since code generation with these libraries happens at build time, the order in which you apply the annotations sometimes matters, and in rare cases, as with Lombok, even the order of the dependencies does.
So, as a wild guess, it might be worth trying to add the metrics annotation first:
@GET
@SimplyTimed //<-------------------------------/ like this
@CacheResult(cacheName = "someData") //<------/
@Produces(MediaType.APPLICATION_JSON)
public List<String> getSome() {
    return ... some data;
}
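If reordering the annotations doesn't help, another option is the manual route the question already hints at: move @CacheResult onto a separate CDI bean and record the timing yourself around the call, so the timer fires even on cache hits. A minimal sketch using a Micrometer MeterRegistry (bean, cache, and timer names are made up for illustration; imports omitted as elsewhere in this post):

@ApplicationScoped
public class SomeDataService {

    // The cache interceptor wraps only this bean's method...
    @CacheResult(cacheName = "someData")
    public List<String> fetch() {
        return List.of("some", "data"); // placeholder payload
    }
}

@Path("/some")
public class SomeResource {

    @Inject
    SomeDataService service;

    @Inject
    MeterRegistry registry; // provided by the quarkus-micrometer extension

    // ...so this timer records every request, cached or not.
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<String> getSome() {
        return registry.timer("someData.requests").record(service::fetch);
    }
}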
I currently have a Spring Boot based application with no active cache. Our application is heavily dependent on key-value configurations which we maintain in an Oracle DB. Currently, without a cache, every time I want to get any value from that table it is a database call. This is, expectedly, causing a lot of overhead due to the high number of transactions to the DB. Hence the need for a cache arrived.
On searching for caching solutions for Spring Boot, I mostly found links where objects are cached while a CRUD operation is performed via the application code itself, using annotations like @Cacheable, @CachePut, @CacheEvict, etc., but this is not applicable for me. I have master data of key-value pairs in the DB; any change needs approvals, so users do not get direct access, and once approved the change is made directly in the DB.
I want these key-values to be loaded at startup time and kept in memory, so I tried to implement that using @PostConstruct and the ConcurrentHashMap class, something like this:
public ConcurrentHashMap<String, String> cacheMap = new ConcurrentHashMap<>();

@PostConstruct
public void initialiseCacheMap() {
    List<MyEntity> list = myRepository.findAll();
    for (MyEntity entity : list) {
        cacheMap.put(entity.getKey(), entity.getValue());
    }
}
In my service class, whenever I want to get something, I first check whether the data is available in the map; if not, I check the DB.
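That check-then-fallback read can look something like this (a sketch only; findByKey is a hypothetical repository method, not from my actual code):

public String getValue(String key) {
    // Read-through: serve from the in-memory map first,
    // fall back to the DB only on a miss.
    String value = cacheMap.get(key);
    if (value == null) {
        value = myRepository.findByKey(key).getValue(); // hypothetical finder
        cacheMap.put(key, value);
    }
    return value;
}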
My purpose is fulfilled and I was able to drastically improve the performance of the application: a certain set of transactions that earlier took 6.28 seconds to complete now finishes in a mere 562 milliseconds! However, there is just one problem which I am not able to figure out:
@PostConstruct is called once by Spring, on startup, after dependency injection. This means I have no way to re-trigger the cache build without a restart, i.e. application downtime, which unfortunately is not acceptable. Further, as of now, I do not have the liberty to use any existing caching frameworks or libraries like Ehcache or Redis.
How can I achieve periodic refreshing of this cache (let's say every 30 minutes) with only plain old Java/Spring classes/libraries?
Thanks in advance for any ideas!
You can do this in several ways, but one way to achieve it is something in the direction of:
private const val everyThirtyMinute = "0 0/30 * * * ?"

@Component
class TheAmazingPreloader {

    @Scheduled(cron = everyThirtyMinute)
    @EventListener(ApplicationReadyEvent::class)
    fun refreshCachedEntries() {
        // the preloading happens here
    }
}
Then you have the preloading bits when the application has started, and also the refreshing mechanism in place that triggers, say, every 30 minutes.
You will need to add the following annotation on some @Configuration class or the @SpringBootApplication class:
@EnableScheduling
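Inside refreshCachedEntries(), one way to avoid readers ever seeing a half-built map is to populate a fresh map on the side and then swap it in. A sketch building on the question's Java code (the volatile field and names are illustrative):

private volatile Map<String, String> cacheMap = new ConcurrentHashMap<>();

public void refreshCachedEntries() {
    // Build the replacement map off to the side...
    Map<String, String> fresh = new ConcurrentHashMap<>();
    for (MyEntity entity : myRepository.findAll()) {
        fresh.put(entity.getKey(), entity.getValue());
    }
    // ...then publish it with a single volatile write, so readers
    // always see either the old or the new complete snapshot.
    cacheMap = fresh;
}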
I am working on a Quarkus app that uses the SmallRye MicroProfile Fault Tolerance implementation.
We have configured fault tolerance on the client definitions via the annotations API (@Retry, @Bulkhead, etc.) and it seems to work, but we don't get any feedback about what is happening. Ideally we would like to get some sort of callback, but even just having logs would help as a first step.
The REST clients look something like this:
@RegisterRestClient(configKey = "foo-backend")
@Path("/backend")
interface FooClient {

    @POST
    @Retry(maxRetries = 4, delay = 900)
    @ExponentialBackoff
    @Timeout(value = 3000)
    fun getUser(payload: GetFooUserRequest): GetFooUserResponse
}
Looking at the logs, even though we trace all communication, I cannot see any event, even if I manually stop foo-backend and start it again before the retries run out.
Our logging config looks like this right now, but still nothing:
quarkus.rest-client.logging.scope=request-response
quarkus.rest-client.logging.body-limit=2048
quarkus.log.category."org.jboss.resteasy.reactive.client.logging".level=DEBUG
Is there a way to get callbacks when a fault tolerance event happens? Or a setting which logs them? I would also be interested in knowing when our circuit breakers are triggered or when a bulkhead fills up. Logging them would be good enough for now, but ideally I would like to somehow listen for them.
You can enable DEBUG logging for the io.smallrye.faulttolerance category, and you should get all the information you need.
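For example, following the same application.properties style as the logging config in the question:

quarkus.log.category."io.smallrye.faulttolerance".level=DEBUG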
Specifically for circuit breakers, you can register state change listeners for circuit breakers that have been given a name using @CircuitBreakerName: just inject CircuitBreakerMaintenance and use onStateChange. See https://smallrye.io/docs/smallrye-fault-tolerance/5.6.0/usage/extra.html#_circuit_breaker_maintenance
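A minimal sketch of what such a listener could look like (the breaker name, logger, and bean are illustrative, not from the original post):

@ApplicationScoped
public class CircuitBreakerStateLogger {

    private static final Logger LOG = Logger.getLogger(CircuitBreakerStateLogger.class);

    @Inject
    CircuitBreakerMaintenance maintenance;

    void onStart(@Observes StartupEvent event) {
        // "foo-cb" must match a @CircuitBreakerName("foo-cb") on some guarded method
        maintenance.onStateChange("foo-cb", state ->
                LOG.info("Circuit breaker 'foo-cb' changed state to " + state));
    }
}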
There's unfortunately nothing similar for bulkheads yet.
In our Spring Boot application (2.0.4.RELEASE), we use Zipkin to integrate distributed tracing.
When creating the integration manually with a 10% sampling rate, meaning with a @Configuration like this:
@Configuration
public class ZipkinConfiguration {

    @Value("${grpc.zipkin.endpoint:}")
    private String zipkinEndpoint;

    @Bean
    public SpanCustomizer currentSpanCustomizer(Tracing tracing) {
        return CurrentSpanCustomizer.create(tracing);
    }

    @Bean
    public Tracing tracing(@Value("${spring.application.name}") String serviceName) {
        return Tracing.newBuilder().localServiceName(serviceName).spanReporter(spanReporter()).build();
    }

    private Reporter<Span> spanReporter() {
        return AsyncReporter.create(sender());
    }

    private Sender sender() {
        return OkHttpSender.create(zipkinEndpoint);
    }
}
our application has a 50th percentile latency of about 19ms and a 99.9th percentile of about 90ms at around 10 requests per second.
When integrating Sleuth 2.0.2.RELEASE instead, like this in Gradle:
compile "org.springframework.cloud:spring-cloud-starter-sleuth:2.0.2.RELEASE"
compile "org.springframework.cloud:spring-cloud-sleuth-zipkin:2.0.2.RELEASE"
the performance drops massively to a p50 of 49ms and a p999 of 120ms.
I tried disabling the different parts of the Sleuth integration (spring.sleuth.async.enabled, spring.sleuth.reactor.enabled, etc.).
Disabling all these integrations brings the performance to p50: 25ms, p999: 103 ms. Just having Sleuth adds about 15-25% of overhead.
It turns out that the one thing with the significant impact is setting spring.sleuth.log.slf4j.enabled to false. If all other integrations are enabled, but this is disabled, the performance stays within the Sleuth overhead mentioned above, although nothing is logged.
So my question is:
Is there a way to avoid the overhead added by Sleuth (compared to "manual" tracing), and especially the overhead added by the SLF4J integration?
The option is to disable the SLF4J integration, as you mentioned. When a new span or scope is created, we go through SLF4J to put data in the MDC, and that unfortunately takes time. Disabling it saves that cost.
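Concretely, that is the same switch the question already identified, set in application.properties:

spring.sleuth.log.slf4j.enabled=false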
I've been working with Spring Boot 2.0.0.RC1 using the WebFlux starter (spring-boot-starter-webflux). I created a simple controller that returns an infinite Flux. I would like the Publisher to do its work only if there is a client (Subscriber) present. Let's say I have a controller like this one:
@RestController
public class Demo {

    @GetMapping(value = "/")
    public Flux<String> getEvents() {
        return Flux.create((FluxSink<String> sink) -> {
            while (!sink.isCancelled()) {
                // TODO e.g. fetch data from somewhere
                sink.next("DATA");
            }
            sink.complete();
        }).doFinally(signal -> System.out.println("END"));
    }
}
Now, when I run that code and access the endpoint http://localhost:8080/ with Chrome, I can see the data. However, once I close the browser, the while loop continues, since no cancel event has been fired. How can I terminate/cancel the streaming as soon as I close the browser?
From this answer I quote that:
Currently with HTTP, the exact backpressure information is not transmitted over the network, since the HTTP protocol doesn't support this. This can change if we use a different wire protocol.
I assume that, since backpressure is not supported by the HTTP protocol, it means that no cancel request will be made either.
Investigating a little further by analyzing the network traffic, I saw that the browser sends a TCP FIN as soon as I close it. Is there a way to configure Netty (or something else) so that a half-closed connection triggers a cancel event on the publisher, making the while loop stop?
Or do I have to write my own adapter similar to org.springframework.http.server.reactive.ServletHttpHandlerAdapter where I implement my own Subscriber?
Thanks for any help.
EDIT:
An IOException will be raised on the attempt to write data to the socket if there is no client, as you can see in the stack trace.
But that's not good enough, since it might take a while before the next chunk of data is ready to be sent, and therefore it takes the same amount of time to detect that the client is gone. As pointed out in Brian Clozel's answer, it is a known issue in Reactor Netty. I tried to use Tomcat instead by adding the dependency to the pom.xml, like this:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
</dependency>
Although it replaces Netty with Tomcat, it does not seem to be reactive: the browser does not show any data, yet there is no warning/info/exception in the console. Is spring-boot-starter-webflux in this version (2.0.0.RC1) supposed to work together with Tomcat?
Since this is a known issue (see Brian Clozel's answer), I ended up using one Flux to fetch my real data and another one to implement a ping/heartbeat mechanism, merging the two with Flux.merge().
Here you can see a simplified version of my solution:
@RestController
public class Demo {

    public interface Notification {}

    public static class MyData implements Notification {
        …
        public boolean isEmpty() {…}
    }

    @GetMapping(value = "/", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<ServerSentEvent<? extends Notification>> getNotificationStream() {
        return Flux.merge(getEventMessageStream(), getHeartbeatStream());
    }

    private Flux<ServerSentEvent<Notification>> getHeartbeatStream() {
        return Flux.interval(Duration.ofSeconds(2))
                .map(i -> ServerSentEvent.<Notification>builder().event("ping").build())
                .doFinally(signalType -> System.out.println("END"));
    }

    private Flux<ServerSentEvent<MyData>> getEventMessageStream() {
        return Flux.interval(Duration.ofSeconds(30))
                .map(i -> {
                    // TODO e.g. fetch data from somewhere,
                    // if there is no data return an empty object
                    return data;
                })
                .filter(data -> !data.isEmpty())
                .map(data -> ServerSentEvent
                        .builder(data)
                        .event("message").build());
    }
}
I wrap everything up as ServerSentEvent<? extends Notification>. Notification is just a marker interface. I use the event field of the ServerSentEvent class to distinguish between data and ping events. Since the heartbeat Flux sends events constantly and at short intervals, the time it takes to detect that the client is gone is at most the length of that interval. Remember, I need this because it might take a while before I get some real data that can be sent, and as a result it might also take a while before the server detects that the client is gone. This way, it detects the disconnect as soon as it can't send the ping (or possibly the message event).
One last note on the marker interface, which I called Notification. It is not strictly necessary, but it provides some type safety. Without it, we could write Flux<ServerSentEvent<?>> instead of Flux<ServerSentEvent<? extends Notification>> as the return type of getNotificationStream(), or make getHeartbeatStream() return Flux<ServerSentEvent<MyData>>. However, that would allow any object to be sent, which I don't want. As a consequence, I added the interface.
I'm not sure why it behaves like this, but I suspect it is because of the choice of generation operator. I think using the following would work:
return Flux.interval(Duration.ofMillis(500))
        .map(input -> "DATA");
According to Reactor's reference documentation, you're probably hitting the key difference between generate and push (a quite similar approach using generate would probably work as well).
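For reference, a minimal sketch of the generate variant alluded to above (purely illustrative, producing the same constant payload):

return Flux.generate(sink -> {
    // generate emits one element per downstream request and
    // re-checks demand between invocations, so it stops being
    // called as soon as the subscriber cancels.
    sink.next("DATA");
});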
My comment was referring to the backpressure information (how many elements a Subscriber is willing to accept), but the success/error information is communicated over the network.
Depending on your choice of web server (Reactor Netty, Tomcat, Jetty, etc.), closing the client connection might result in:
a cancel signal being received on the server side (I think this is supported by Netty)
an error signal being received by the server when it tries to write to a connection that's been closed (I believe the Servlet spec does not provide that callback, so we're missing the cancel information).
In short: you don't need to do anything special, it should be supported already, but your Flux implementation might be the actual problem here.
Update: this is a known issue in Reactor Netty
I have a Spring MVC REST service that returns data in XML. I would like to cache this XML response. How can I achieve this? Is it possible to do this using mvc:interceptors?
You could make this work, but I think there are better solutions.
First, if you want to use Spring MVC interceptors, you'll use the postHandle method to store something in your cache and the preHandle method to check the cache and possibly circumvent processing. The question is what to store in the cache. You would need to store the complete response, which means you would have to easily get the full response from your ModelAndView in postHandle. This may or may not be easy, depending on how you're doing things.
You're most likely better off using a different caching mechanism altogether. I recommend caching at the web server level, especially since you're looking to cache at the interceptor level, which sits right "next to" the web server; I don't see any benefit in re-inventing the wheel there. Apache has a cache module, and so does nginx. Varnish is pretty awesome too.
I should also mention that you should not cache until you've determined that you need to (don't optimize prematurely); doing so is a waste of your time and effort. Secondly, once you've determined that you do have performance issues that need to be fixed (and that caching is the correct solution), you should cache the right data in the right place.
Now, say you've determined that you do have a performance problem and some sort of caching is a good solution. The next thing to determine is what can be cached. If, for every URL, you return the same data, then caching at the web server (Apache, nginx, Varnish, etc.) level is your best bet.
Often, though, two clients will hit the same URL and get different data. This is most easily seen on a site like Facebook: I see different data when I'm logged in than my friend sees. In that case, you will not be able to cache at the web server level and will need to cache inside your application, which usually means caching at the database level.
I couldn't disagree more with the optimization part of that answer.
Web requests are inherently slow, as you're loading data from a remote location, possibly a few thousand miles away. Each call must suffer a full TCP round-trip time for at least the packets themselves, plus possibly the connect and fin exchanges per request; the connect alone is a three-packet synchronous exchange before you start to transfer data.
US coast-to-coast latency is about 50ms on a good day, so every connection suffers a 150ms penalty, which for most implementations is incurred on every request.
Caching the response on the client side removes this latency entirely, and if the service sets correct caching headers on its responses, it is trivial. If it doesn't, you'll have to define a caching policy yourself, which for the most part isn't particularly difficult: most API calls are either real-time or not.
In my opinion, caching REST responses isn't premature optimization, it's common sense.
Don't use Spring cache; it is not what you need. You need to reduce the load on your server, not speed up the inner execution of your Spring application.
Try some HTTP-level caching strategies instead.
You can add one of these HTTP headers to your responses:
# cache expires in 3600 seconds
Cache-Control: private, max-age=3600
# hash of your content
ETag: "e6811cdbcedf972c5e8105a89f637d39-gzip"
# vary the cached response by an HTTP request header
Vary: User-Agent
Detailed description of caching techniques
Spring example
@RequestMapping(value = "/resource/1.pdf", produces = "application/octet-stream")
public ResponseEntity<InputStreamResource> getAttachement(@RequestParam(value = "id") Long fileId)
{
    InputStreamResource isr = new InputStreamResource(javaInputStream);
    HttpHeaders headers = new HttpHeaders();
    // other headers
    headers.setCacheControl("private, max-age=3600");
    return new ResponseEntity<>(isr, headers, HttpStatus.OK);
}
I use this and it works with awesome speed.
It's really easy to use Spring + Ehcache:
1) Controller:
@Cacheable("my.json")
@RequestMapping("/rest/list.json")
public ResponseEntity list(
        @RequestParam(value = "page", defaultValue = "0", required = false) int pageNum,
        @RequestParam(value = "search", required = false) String search) throws IOException {
    ...
}
2) In ehcache.xml, something like this:
<cache name="my.json" maxElementsInMemory="10000" eternal="true" overflowToDisk="false"/>
3) Configure Spring. I'm using the Spring Java config style:
@Configuration
@EnableCaching
public class ApplicationConfiguration {

    @Bean
    public EhCacheManagerFactoryBean ehCacheManagerFactoryBean() throws MalformedURLException {
        EhCacheManagerFactoryBean ehCacheManagerFactoryBean = new EhCacheManagerFactoryBean();
        ehCacheManagerFactoryBean.setConfigLocation(new ClassPathResource("ehcache.xml"));
        return ehCacheManagerFactoryBean;
    }

    @Bean
    @Autowired
    public EhCacheCacheManager cacheManager(EhCacheManagerFactoryBean ehcache) {
        EhCacheCacheManager ehCacheCacheManager = new EhCacheCacheManager();
        ehCacheCacheManager.setCacheManager(ehcache.getObject());
        return ehCacheCacheManager;
    }
}
At the application level, I would go with a plain Java cache such as Ehcache. Ehcache is pretty easy to integrate with methods on Spring beans: you can annotate your service methods with @Cacheable and you're done. Check it out at Ehcache Spring Annotations.
At the HTTP level, Spring MVC provides a useful ETag filter. But I think it would be better if you could configure this kind of caching at the server level rather than at the app level.
As of Spring 3.1, you can use the @Cacheable annotation. There is also support for conditional caching, and some sibling annotations like @CachePut, @CacheEvict and @Caching for more fine-grained control.
Spring currently supports two different cache managers out of the box: one backed by a ConcurrentHashMap and one backed by Ehcache.
Lastly, don't forget to read the details about how to enable the annotations.
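As a minimal sketch of that enablement step (the cache name is illustrative; the ConcurrentHashMap-backed manager shown is the simple built-in option mentioned above):

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        // Simple in-memory manager backed by ConcurrentHashMap;
        // swap in an Ehcache-backed manager if you need eviction policies.
        return new ConcurrentMapCacheManager("xmlResponses");
    }
}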