Using Quarkus Cache with Reactive and Mutiny correctly

I'm trying to migrate my project to Quarkus Reactive with Hibernate Reactive Panache and I'm not sure how to deal with caching.
My original method looked like this:
@Transactional
@CacheResult(cacheName = "subject-cache")
public Subject getSubject(@CacheKey String subjectId) throws Exception {
    return subjectRepository.findByIdentifier(subjectId);
}
The Subject is loaded from the cache, if available, by the cache key "subjectId".
Migrating to Mutiny would look like this:
@CacheResult(cacheName = "subject-cache")
public Uni<Subject> getSubject(@CacheKey String subjectId) {
    return subjectRepository.findByIdentifier(subjectId);
}
However, it can't be right to store the Uni object in the cache.
There is also the option to inject the cache as a bean; however, the value loader function passed to cache.get cannot return a Uni:
@Inject
@CacheName("subject-cache")
Cache cache;

// Does not work: the cache.get value loader must return a Subject, not a Uni<Subject>.
public Uni<Subject> getSubject(String subjectId) {
    return cache.get(subjectId, s -> subjectRepository.findByIdentifier(subjectId));
}

// This works, but needs a blocking call to the repository so the response can be wrapped in a new Uni.
public Uni<Subject> getSubject(String subjectId) {
    return cache.get(subjectId, s -> subjectRepository.findByIdentifier(subjectId).await().indefinitely());
}
Can the @CacheResult annotation be used with Uni / Multi so that everything is handled correctly under the hood?

Your example with @CacheResult on a method that returns a Uni should actually work. The implementation will automatically "strip" the Uni type and only store the Subject in the cache.

The problem with caching Unis is that, depending on how the Uni is created, multiple subscriptions can trigger some code multiple times. To avoid this, you have to memoize the Uni like this:
@CacheResult(cacheName = "subject-cache")
public Uni<Subject> getSubject(@CacheKey String subjectId) {
    return subjectRepository.findByIdentifier(subjectId)
            .memoize().indefinitely();
}
This ensures that every subscription to the cached Uni always returns the same value (item or failure) without re-executing any part of the original Uni flow.
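If you prefer the programmatic approach from the question, newer Quarkus releases also offer a non-blocking variant of the value loader. The sketch below is only an illustration: it assumes a Quarkus version whose io.quarkus.cache.Cache exposes getAsync (check your version), and SubjectService / SubjectRepository are illustrative names:
import io.quarkus.cache.Cache;
import io.quarkus.cache.CacheName;
import io.smallrye.mutiny.Uni;
import javax.enterprise.context.ApplicationScoped; // jakarta.* on newer Quarkus
import javax.inject.Inject;

@ApplicationScoped
public class SubjectService {

    @Inject
    @CacheName("subject-cache")
    Cache cache;

    @Inject
    SubjectRepository subjectRepository;

    // getAsync takes a loader that returns a Uni, so no blocking call is needed;
    // the cache stores the resolved Subject, not the Uni itself.
    public Uni<Subject> getSubject(String subjectId) {
        return cache.getAsync(subjectId, id -> subjectRepository.findByIdentifier(id));
    }
}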

Related

AsyncCassandraOperations examples

I am reading up on AsyncCassandraOperations to perform async inserts to improve performance, based on another post here. But I am unable to find much help on Google or in the Spring Data documentation.
Previously I was using CassandraRepository for all data extraction and inserts/updates, which I found to be super slow. As per the recommendation I am now using AsyncCassandraOperations for the insert operation alone, but it won't work: I encounter a "required a bean of type 'org.springframework.data.cassandra.core.AsyncCassandraOperations'" error.
What would be the correct way to use AsyncCassandraOperations, please?
@Autowired private MyRepository repository_name;
@Autowired private AsyncCassandraOperations acops;

public void persist(List<POJO> l_POJO) {
    System.out.println("Enter Persist: " + new java.util.Date());
    List<POJO> l_POJO_stale = repository_name.findByCol1AndStale("sample", false);
    l_POJO_stale.forEach(s -> s.setStale(true));
    l_POJO_stale.forEach(s -> acops.update(s));
    try {
        acops.insert(l_POJO);
    } catch (Exception e) {
        System.out.println("Error in persisting new data");
    }
}
I don't know whether Spring Boot is used; if so, the AsyncCassandraOperations bean (AsyncCassandraTemplate is the implementation class) should be created automatically.
If the error says you need an AsyncCassandraOperations bean, the straightforward way is to create one as shown below.
@Bean
AsyncCassandraTemplate asyncCassandraTemplate(Session session) {
    return new AsyncCassandraTemplate(session);
}
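Note that the async operations return futures rather than completed results, so failures surface in a callback, not in a surrounding try/catch. A minimal sketch, assuming the spring-data-cassandra 2.x signature where insert returns a ListenableFuture (pojo is an illustrative variable):
// Handle success/failure of the async insert via the returned future
// (org.springframework.util.concurrent.ListenableFuture).
ListenableFuture<POJO> future = acops.insert(pojo);
future.addCallback(
        inserted -> System.out.println("Inserted: " + inserted),
        throwable -> System.err.println("Insert failed: " + throwable));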
Since you are using the Spring Data Repository interface, you can also use the ReactiveCrudRepository interface to update or insert entity objects into Cassandra, which is shown in this Spring Data example project, as an alternative to using the AsyncCassandraTemplate class.
In the case of using ReactiveCrudRepository, and regarding what you want to do, your code needs the following changes:
change the return type of WRRepository.findByCol1AndCol2AndCol3(String, boolean, String) from List<WRpojo> to Flux<WRpojo>, in order to fully utilize the reactive functionality;
change the return type of persist(List<WRpojo>) from boolean to Mono<Void>, making the result non-blocking too;
change your persist(List<WRpojo>) to the following.
public Mono<Void> persist(List<WRpojo> l_wr) {
    Flux<WRpojo> l_old_wr = objWRRepository.findByCol1AndCol2AndCol3("1", false, "2")
            .doOnNext(s -> s.setStale(true));
    return objWRRepository.saveAll(l_old_wr).thenMany(objWRRepository.saveAll(l_wr)).then();
}
In reactive programming we basically don't block any code, which means the returned Mono<Void> should be subscribed somewhere downstream. If you do want to block and wait for all operations to complete, you can call block() on the Mono<Void>, which is not recommended.
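For illustration, a quick sketch of the two ways a caller might consume the returned Mono<Void> (newPojos is a hypothetical list of entities):
// Non-blocking: trigger the pipeline and let completion propagate downstream.
persist(newPojos).subscribe();

// Blocking (discouraged outside tests or batch jobs): wait for completion.
persist(newPojos).block();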

How to get certain fields from spring boot health endpoint

I have successfully created a Spring Boot app that returns all the basic actuator endpoints. Now I want to return just a few fields from an endpoint in my request. For instance, return status from the /health page to my REST call. How do I filter this or make my REST call more specific?
The actual requirement is to return a few fields from /env and /health of different apps in one call, which I am able to do by returning all fields for both /env and /health. I just need to return specific fields from them. Also, can I use in-memory JSON objects, and if so, how should I do it?
Finally I figured out how to do it. The incoming JSON object consists of fields of LinkedHashMap type, so I consumed its field values by key:
LinkedHashMap response = (LinkedHashMap) restTemplate.getForObject("http://localhost:8080/env", Object.class);
EnvProperties variables = new EnvProperties(response);
Wrapper POJO for all fields:
public EnvProperties(LinkedHashMap body) {
    this.sysProperties = new SysEnvProperties((LinkedHashMap) body.get("systemProperties"));
}
POJO for this field:
public SysEnvProperties(LinkedHashMap body) {
    this.javaVersion = body.get("java.version").toString();
}
Later, a new JSON string is created:
@Override
public String toString() {
    String s = null;
    try {
        // mapper is a Jackson ObjectMapper held as a field of the wrapper class
        s = mapper.writeValueAsString(this);
    } catch (JsonProcessingException e) {
        e.printStackTrace();
    }
    return s;
}
I repeated the same for the fields of interest, creating a POJO for each. Finally, I called these fields through a similar wrapper class whose toString method returned the expected JSON object with the desired fields only.
You can create a custom health endpoint or a custom health checker too.
For example:
@Component
public class CustomHealthCheck extends AbstractHealthIndicator {

    @Override
    protected void doHealthCheck(Health.Builder bldr) throws Exception {
        // TODO implement some check
        boolean running = true;
        if (running) {
            bldr.up();
        } else {
            bldr.down();
        }
    }
}
For further reading:
http://www.christianmenz.ch/programmieren/spring-boot-health-checks/
http://briansjavablog.blogspot.be/2017/09/health-checks-metric-s-more-with-spring.html
http://www.baeldung.com/spring-boot-actuators
You can find a tutorial here. However, the interfaces you want to look into implementing are:
org.springframework.boot.actuate.endpoint.Endpoint
Similar to creating a Controller. This is your /custom-health endpoint.
org.springframework.boot.actuate.metrics.CounterService
You can count integer-value metrics which will be available at /metrics.
org.springframework.boot.actuate.metrics.GaugeService
Or, you can measure double-value metrics which will be available at /metrics.
org.springframework.boot.actuate.health.HealthIndicator
Add metrics to the /health endpoint.
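As a rough illustration of the Endpoint route, here is a hedged sketch of a custom endpoint that exposes only the overall status. It assumes the Spring Boot 1.x actuator API (the endpoint infrastructure changed in Boot 2.x), and StatusOnlyEndpoint is a made-up name:
import java.util.Collections;
import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.actuate.endpoint.Endpoint;
import org.springframework.boot.actuate.endpoint.HealthEndpoint;
import org.springframework.stereotype.Component;

@Component
public class StatusOnlyEndpoint implements Endpoint<Map<String, Object>> {

    @Autowired
    private HealthEndpoint healthEndpoint;

    @Override
    public String getId() {
        return "statusonly"; // served at /statusonly
    }

    @Override
    public boolean isEnabled() {
        return true;
    }

    @Override
    public boolean isSensitive() {
        return false;
    }

    @Override
    public Map<String, Object> invoke() {
        // Delegate to the full health report and keep only the status code.
        return Collections.singletonMap("status",
                healthEndpoint.invoke().getStatus().getCode());
    }
}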

Spring Data Solr @Transactional Commits

I currently have a setup where data is inserted into a database as well as indexed into Solr. These two steps are wrapped in a Spring-managed transaction via the @Transactional annotation. What I've noticed is that spring-data-solr issues an update with the following parameters whenever the transaction is closed: params{commit=true&softCommit=false&waitSearcher=true}
@Transactional
public void save(Object toSave) {
    dbRepository.save(toSave);
    solrRepository.save(toSave);
}
The rate of commits into Solr is fairly high, so ideally I'd like to send data to the Solr index and have Solr auto-commit at regular intervals. I have autoCommit (and autoSoftCommit) set in my solrconfig.xml, but since spring-data-solr is sending those commit parameters, it does a hard commit every time.
I'm aware that I can drop down to the SolrTemplate API and issue commits manually, but I would like to keep the solrRepository.save call within a Spring-managed transaction if possible. Is there a way to modify the parameters that are sent to Solr on commit?
After putting an IDE debug breakpoint in org.springframework.data.solr.repository.support.SimpleSolrRepository, here:
private void commitIfTransactionSynchronisationIsInactive() {
    if (!TransactionSynchronizationManager.isSynchronizationActive()) {
        this.solrOperations.commit(solrCollectionName);
    }
}
I discovered that wrapping my code in @Transactional (plus the other details needed for the framework to begin/end the code as a transaction) doesn't achieve what we expect with Spring Data for Apache Solr. The stack trace shows the proxy and transaction interceptor classes for our code's transactional scope, but it also shows the framework starting its own nested transaction with another proxy and transaction interceptor of its own. When the framework exits the CrudRepository.save() method my code calls, the commit to Solr is done by the framework's nested transaction, before our outer transaction exits. So the attempt to batch-process many saves with one commit at the end, instead of one commit for every save, is futile. It seems, for this area of my code, I'll have to use SolrJ to save (update) my entities to Solr and then follow my transaction's exit with a commit.
If using Spring Data Solr, I found that the SolrTemplate bean lets you 'batch' updates when adding data to the Solr index. With the SolrTemplate bean you can use the saveBeans method, which adds a whole collection to the index and does not commit until the end of the transaction. In my case, I started out using solrClient.add() and iterating over the collection, which took up to 4 hours to get it saved to the index, as it commits after every single save. By using solrTemplate.saveBeans(Collection<?>), it finishes in just over 1 second, as the commit covers the entire collection. Here is a code snippet:
@Resource
SolrTemplate solrTemplate;

public void doReindexing(List<Image> images) {
    if (images != null) {
        /* CMSSolrImage is a class with @SolrDocument mappings.
         * The List<Image> images is a collection pulled from my database
         * that I want indexed in Solr.
         */
        List<CMSSolrImage> sImages = new ArrayList<CMSSolrImage>();
        for (Image image : images) {
            CMSSolrImage sImage = new CMSSolrImage(image);
            sImages.add(sImage);
        }
        solrTemplate.saveBeans(sImages);
    }
}
The way I've done something similar is to create a custom repository implementation of the save methods.
Interface for the repository:
public interface FooRepository extends SolrCrudRepository<Foo, String>, FooRepositoryCustom {
}
Interface for the custom overrides:
public interface FooRepositoryCustom {
    Foo save(Foo entity);
    Iterable<Foo> save(Iterable<Foo> entities);
}
Implementation of the custom overrides:
public class FooRepositoryImpl implements FooRepositoryCustom {

    private SolrOperations solrOperations;

    public FooRepositoryImpl(SolrOperations fooSolrOperations) {
        this.solrOperations = fooSolrOperations;
    }

    @Override
    public Foo save(Foo entity) {
        Assert.notNull(entity, "Cannot save 'null' entity.");
        registerTransactionSynchronisationIfSynchronisationActive();
        this.solrOperations.saveBean(entity, 1000);
        commitIfTransactionSynchronisationIsInactive();
        return entity;
    }

    @Override
    public Iterable<Foo> save(Iterable<Foo> entities) {
        Assert.notNull(entities, "Cannot insert 'null' as a List.");
        if (!(entities instanceof Collection<?>)) {
            throw new InvalidDataAccessApiUsageException("Entities have to be inside a collection");
        }
        registerTransactionSynchronisationIfSynchronisationActive();
        this.solrOperations.saveBeans((Collection<? extends Foo>) entities, 1000);
        commitIfTransactionSynchronisationIsInactive();
        return entities;
    }

    private void registerTransactionSynchronisationIfSynchronisationActive() {
        if (TransactionSynchronizationManager.isSynchronizationActive()) {
            registerTransactionSynchronisationAdapter();
        }
    }

    private void registerTransactionSynchronisationAdapter() {
        TransactionSynchronizationManager.registerSynchronization(SolrTransactionSynchronizationAdapterBuilder
                .forOperations(this.solrOperations).withDefaultBehaviour());
    }

    private void commitIfTransactionSynchronisationIsInactive() {
        if (!TransactionSynchronizationManager.isSynchronizationActive()) {
            this.solrOperations.commit();
        }
    }
}
and you also need to provide a SolrOperations bean for the right Solr core:
@Configuration
public class FooSolrConfig {

    @Bean
    public SolrOperations getFooSolrOperations(SolrClient solrClient) {
        return new SolrTemplate(solrClient, "foo");
    }
}
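A quick usage sketch of the intended effect (batchImport and fooRepository are illustrative names): inside an active transaction, the custom fragment only registers a synchronization, so Solr sees a single commit when the transaction completes.
@Transactional
public void batchImport(List<Foo> foos) {
    for (Foo foo : foos) {
        fooRepository.save(foo); // no per-save commit; commitWithin=1000 ms acts as a safety net
    }
    // a single Solr commit fires via the registered transaction synchronization on exit
}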
Footnote: auto commit is (to my mind) conceptually incompatible with a transaction. An auto commit is a promise from Solr that it will try to start writing the data to disk within a certain time limit. Many things might stop that from actually happening, however: an ill-timed power or hardware failure, errors between the document and the schema, etc. But the client won't know that Solr failed to keep its promise, and the transaction will see a success when it actually failed.

Spring Cache Abstraction: How to Deal With java.util.Optional<T>

We have a lot of code in our code base that's similar to the following interface:
public interface SomethingService {

    @Cacheable(value = "singleSomething")
    Optional<Something> fetchSingle(int somethingId);

    // more methods...
}
This works fine as long as we're only using local caches. But as soon as we're using a distributed cache like Hazelcast, things start to break because java.util.Optional<T> is not serializable and thus cannot be cached.
What I've come up with so far to solve this problem:
(1) Removing java.util.Optional<T> from the method definitions and instead checking for the trusty null.
(2) Unwrapping java.util.Optional<T> before caching the actual value.
I want to avoid (1) because it would involve a lot of refactoring, and I have no idea how to accomplish (2) without implementing my own org.springframework.cache.Cache.
What other options do I have? I would prefer a generic (Spring) solution that would work with most distributed caches (Hazelcast, Infinispan, ...) but I would accept a Hazelcast-only option too.
A potential solution would be to register a serializer for the Optional type. Hazelcast has a flexible serialization API and you can register a serializer for any type.
For more information see the following example:
https://github.com/hazelcast/hazelcast-code-samples/tree/master/serialization/stream-serializer
So something like this:
public class OptionalSerializer implements StreamSerializer<Optional> {

    @Override
    public void write(ObjectDataOutput out, Optional object) throws IOException {
        if (object.isPresent()) {
            out.writeObject(object.get());
        } else {
            out.writeObject(null);
        }
    }

    @Override
    public Optional read(ObjectDataInput in) throws IOException {
        Object result = in.readObject();
        return result == null ? Optional.empty() : Optional.of(result);
    }

    @Override
    public int getTypeId() {
        return 1; // TODO: must be a unique, positive type id across your custom serializers
    }

    @Override
    public void destroy() {
    }
}
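The serializer then has to be registered with the Hazelcast instance. A minimal sketch of the programmatic registration, assuming the Hazelcast 3.x configuration API (verify against your version):
import com.hazelcast.config.Config;
import com.hazelcast.config.SerializerConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

// Register OptionalSerializer for Optional before creating the instance.
SerializerConfig serializerConfig = new SerializerConfig()
        .setImplementation(new OptionalSerializer())
        .setTypeClass(Optional.class);

Config config = new Config();
config.getSerializationConfig().addSerializerConfig(serializerConfig);
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);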
However, the solution isn't perfect, because the Optional wrapper becomes part of the actual storage. So internally the Optional is also stored, and this can lead to problems with e.g. queries.

Play Framework cache annotation

Can somebody explain, with a sample, how the cache annotation works in Play Framework 2 in Java?
I would like to cache the result of the method with its parameters; something like this:
@Cache(userId, otherParam)
public static User getUser(int userId, String otherParam) {
    // return a User from the database if it isn't in the cache
}
Maybe a tutorial is available?
Thanks for your help.
The @Cached annotation doesn't work for every method call. It only works for Actions and, moreover, you can't use parameters as a cache key (the key is only a static String). If you want to know how it works, look at the play.cache.CachedAction source code.
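For contrast, a minimal sketch of what @Cached does support: caching a whole action result under a static key (the names index and homePage are illustrative):
// Caches the rendered Result under the fixed key "homePage" for one hour.
@Cached(key = "homePage", duration = 3600)
public static Result index() {
    return ok("this response is cached as a whole");
}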
Instead, you will have to use either Cache.get(), checking whether the result is null and then calling Cache.set(), or Cache.getOrElse() with a Callable, with code like:
public static User getUser(int userId, String otherParam) {
    return Cache.getOrElse("user-" + userId + "-" + otherParam, new Callable<User>() {
        @Override
        public User call() throws Exception {
            return getUserFromDatabase(userId, otherParam);
        }
    }, DURATION);
}
Be careful when you construct your cache keys to avoid naming collisions, as the keys are shared across the whole application.
