Reactor changes in Spring Boot 2 M4

I have updated from Spring Boot 2.0.0.M3 to 2.0.0.M4, which updates Reactor from 3.1.0.M3 to 3.1.0.RC1. This causes my code to break in a number of places.
Mono.and() now returns Mono<Void>, where previously it returned Mono<Tuple>
This is also the case for Mono.when()
The following code compiles with the older versions, but not with the new version:
Mono<String> m1 = Mono.just("A");
Mono<String> m2 = Mono.just("B");
Mono<String> andResult = m1.and(m2).map(t -> t.getT1() + t.getT2());
Mono<String> whenResult = Mono.when(m1, m2).map(t -> t.getT1() + t.getT2());
Have there been any changes to how this should work?

The when and and variants that produce a Tuple have been replaced with zip/zipWith, their exact equivalents in the Flux API, in order to align the APIs. The remaining when and and methods, which are found only in Mono, are now purely about combining the completion signals, discarding the onNext values (hence they return a Mono<Void>).
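A minimal sketch of the new completion-only semantics:
// Reactor 3.1: when/and combine only completion signals; the values are discarded
Mono<Void> bothDone = Mono.when(Mono.just("A"), Mono.just("B"));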

I switched to Mono.zip(...):
mono1.and(mono2).map(...)
=>
Mono.zip(mono1, mono2).map(...)
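In full, the migrated snippet from the question would look like this:
Mono<String> m1 = Mono.just("A");
Mono<String> m2 = Mono.just("B");
// Mono.zip emits a Tuple2 with both values once both sources complete
Mono<String> zipResult = Mono.zip(m1, m2).map(t -> t.getT1() + t.getT2());
// the instance-method equivalent of the old m1.and(m2)
Mono<String> zipWithResult = m1.zipWith(m2).map(t -> t.getT1() + t.getT2());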

Related

Spring Boot: start a manual transaction to support optimistic locking for a RabbitMQ event listener

I am using a Spring Boot application which accepts requests either from the web or from a RabbitMQ listener.
Spring Boot version: 1.5.18.RELEASE
I want to prevent stale object updates, so I use @Version to enable Hibernate optimistic locking. It works fine when a request comes from the web.
After searching, I found this is because of OpenEntityManagerInViewInterceptor (reference: https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/orm/jpa/support/OpenEntityManagerInViewInterceptor.html).
Now my service also accepts requests from a RabbitMQ listener and updates the entity. To support that, I use the @org.springframework.transaction.annotation.Transactional annotation on the service method.
Here is the sample code.
1.  class LeadController {
2.      .......
3.      updateViaWebRequest(lead); // works fine
4.      .......
5.  }
6.
7.  class LeadService {
8.      // dependencies
9.
10.     public void updateViaWebRequest(Lead lead) {
11.         Lead persistedLead = leadRepo.get(lead.id);
12.         // update values in persistedLead using lead
13.         leadRepo.save(lead);
14.         return;
15.     }
16.
17.     @org.springframework.transaction.annotation.Transactional
18.     public void updateViaRabbitMQ(Lead lead) {
19.         Lead persistedLead = leadRepo.get(lead.id);
20.         // update values in persistedLead using lead
21.         // save Lead
22.         leadRepo.save(lead);
23.         return;
24.     }
25. }
26.
27. class LeadListener {
28.     ...........
29.
30.     @Override
31.     public void onMessage(Message message) {
32.         // convert message into Lead class
33.         updateViaRabbitMQ(lead);
34.     }
35.     ...........
36. }
Now assume that at line no. 20 some other web request comes in and updates the lead record, so the version is updated in the database.
Expected behaviour:
Line no. 22 should raise an optimistic locking error. (Instead, it saves the object successfully.)
Does anyone know how to prevent this?
You are not actually using the version included in the Lead entity that you pass in, and it seems you don't fully understand how the entity lifecycle works, given that you save lead after you have already read persistedLead from the database.
If you want optimistic locking to work, you will have to merge the detached entity, i.e. call just leadRepo.save(lead); and skip the other code before it. If you don't want to do that, you will have to check the versions yourself and throw an exception, e.g.:
Lead persistedLead = leadRepo.get(lead.id);
// compare with equals(): @Version fields are typically wrapper types
if (!persistedLead.version.equals(lead.version)) {
    throw new OptimisticLockException();
}
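For the first option (merging the detached entity), a minimal sketch, assuming leadRepo is a Spring Data JPA repository:
@org.springframework.transaction.annotation.Transactional
public void updateViaRabbitMQ(Lead lead) {
    // save() merges the detached entity; Hibernate checks the incoming @Version
    // against the database row and throws an optimistic locking failure on mismatch
    leadRepo.save(lead);
}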

Jdbi transaction - multiple methods - Resources should be closed

Suppose I want to run two SQL queries in a transaction. I have code like the below:
jdbi.useHandle(handle -> handle.useTransaction(h -> {
    var id = handle.createUpdate("some query")
            .executeAndReturnGeneratedKeys()
            .mapTo(Long.class)
            .findOne()
            .orElseThrow(() -> new IllegalStateException("No id"));
    handle.createUpdate("INSERT INTO SOMETABLE (id) " +
            "VALUES (:id, xxx);")
            .bind("id", id) // bind the generated key to the :id parameter
            .execute();
}));
Now, as the complexity grows, I want to extract each update into its own method:
jdbi.useHandle(handle -> handle.useTransaction(h -> {
    var id = someQuery1(h);
    someQuery2(id, h);
}));
...with someQuery1 looking like:
private Long someQuery1(Handle handle) {
    return handle.createUpdate("some query")
            .executeAndReturnGeneratedKeys()
            .mapTo(Long.class)
            .findOne()
            .orElseThrow(() -> new IllegalStateException("No id"));
}
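...and someQuery2 being the corresponding extraction of the insert (a hypothetical sketch):
private void someQuery2(Long id, Handle handle) {
    handle.createUpdate("INSERT INTO SOMETABLE (id) VALUES (:id, xxx);")
            .bind("id", id)
            .execute();
}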
Now when I refactor to the latter, I get a SonarQube blocker bug on handle.createUpdate in someQuery1, stating:
Resources should be closed
Connections, streams, files, and other classes that implement the Closeable interface or its super-interface, AutoCloseable, need to be closed after use.
I was under the impression that, because I'm using jdbi.useHandle (and passing the same handle to the called methods), a callback would be used and the handle released immediately upon return. As per the Jdbi docs:
Both withHandle and useHandle open a temporary handle, call your
callback, and immediately release the handle when your callback
returns.
Any help / suggestions appreciated.
TIA
SonarQube doesn't know any specifics of the Jdbi implementation and simply triggers on an AutoCloseable/Closeable not being closed. Just suppress the Sonar issue and/or file a feature request with the SonarQube team to improve this behavior.
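For example, a suppression sketch, assuming the rule in question is java:S2095 ("Resources should be closed"):
// the handle's lifecycle is owned by jdbi.useHandle(...), not by this method,
// so the rule is a false positive here
@SuppressWarnings("java:S2095")
private Long someQuery1(Handle handle) {
    return handle.createUpdate("some query")
            .executeAndReturnGeneratedKeys()
            .mapTo(Long.class)
            .findOne()
            .orElseThrow(() -> new IllegalStateException("No id"));
}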

Couchbase upsert/insert silently failing with TTL

I am trying to upsert 10 documents using Spring Boot. It fails to upsert a few documents when a TTL is set. There is no error or exception. If I do not provide a TTL, it works as expected.
In addition to that, if I increase the TTL to a different value, then all the documents get created.
On the other hand, if I reduce the TTL, a few more documents fail to insert.
I tried to insert one of the failed documents (a single document out of the 10) from another POC with the same TTL, and the document got created.
public Flux<JsonDocument> upsertAll(final List<JsonDocument> jsonDocuments) {
    return Flux
            .from(keys())
            .flatMap(key -> Flux
                    .fromIterable(jsonDocuments)
                    .parallel()
                    .runOn(Schedulers.parallel())
                    .flatMap(jsonDocument -> {
                        final String arg = String.format("upsertAll-%s", jsonDocument);
                        return Mono
                                .just(asyncBucket
                                        .upsert(jsonDocument, 1000, TimeUnit.MILLISECONDS)
                                        .doOnError(error -> log.error(jsonDocument.content(), error, "failed to upsert")))
                                .map(obs -> Tuples.of(obs, jsonDocument.content()))
                                .map(tuple2 -> log.observableHandler(tuple2))
                                .map(observable1 -> Tuples.of(observable1, jsonDocument.content()))
                                .flatMap(tuple2 -> log.monoHandler(tuple2));
                    })
                    .sequential());
}
List<JsonDocument> jsonDocuments = new LinkedList<>();
dbService.upsertAll(jsonDocuments)
.subscribe();
Can someone please suggest how to resolve this issue?
Due to an oddity in the Couchbase server API, TTL values of less than 30 days are treated differently from larger values: a value up to 30 days is interpreted as a number of seconds from now, while a larger value is interpreted as an absolute Unix epoch timestamp.
In order to get consistent behavior with Couchbase Java SDK 2.x, you'll need to adjust the TTL value before passing it to the SDK:
// adjust TTL for Couchbase Java SDK 2.x
public static int adjustTtl(int ttlSeconds) {
    return ttlSeconds < TimeUnit.DAYS.toSeconds(30)
            ? ttlSeconds
            : (int) (ttlSeconds + (System.currentTimeMillis() / 1000));
}
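Hypothetical usage with the SDK 2.x API from the question (in 2.x the TTL is carried on the document itself; the id and content here are made up):
int ttl = adjustTtl((int) TimeUnit.DAYS.toSeconds(60)); // e.g. a 60-day TTL
JsonDocument doc = JsonDocument.create("doc-id", ttl, JsonObject.create().put("field", "value"));
asyncBucket.upsert(doc);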
In Couchbase Java SDK 3.0.6 this is no longer required; just pass a Duration and the SDK will adjust the value behind the scenes if necessary.
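With SDK 3.x that would look roughly like this (collection and content are assumed names):
// expiry is a java.time.Duration; the SDK adjusts the raw value behind the scenes
collection.upsert("doc-id", content,
        UpsertOptions.upsertOptions().expiry(Duration.ofDays(60)));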

Ignite 2.4.0 - SqlQuery results do not match with results of query from H2 console

We implemented a caching solution using Ignite 2.0.0 for a data structure that looks like this:
public class EntityPO {
    @QuerySqlField(index = true)
    private Integer accessZone;
    @QuerySqlField(index = true)
    private Integer appArea;
    @QuerySqlField(index = true)
    private Integer parentNodeId;
    private Integer dbId;
}
List<EntityPO> nodes = new ArrayList<>();
SqlQuery<String, EntityPO> sql =
        new SqlQuery<>(EntityPO.class, "accessZone = ? and appArea = ? and parentNodeId is not null");
sql.setArgs(accessZoneId, appArea);

CacheConfiguration<String, EntityPO> cacheconfig = new CacheConfiguration<>(cacheName);
cacheconfig.setCacheMode(CacheMode.PARTITIONED);
cacheconfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheconfig.setIndexedTypes(String.class, EntityPO.class);
cacheconfig.setOnheapCacheEnabled(true);
cacheconfig.setBackups(numberOfBackUpCopies);
cacheconfig.setName(cacheName);
cacheconfig.setQueryParallelism(1);
cache = ignite.getOrCreateCache(cacheconfig);
We have a method that looks for nodes in a particular accessZone and appArea. This method works fine in 2.0.0. We upgraded to the latest version, 2.4.0, and the method no longer returns anything (zero records). We enabled the H2 debug console, ran the same query there, and can see the expected records (at least 3k of them). Downgrading the library back to 2.0.0 makes the code work again. Please let me know if you need more information to help with this question.
Results from the H2 console: [screenshot: H2 Console Results]
If you use persistence, please check the baseline topology for your cluster.
Baseline Topology is a major feature introduced in version 2.4.
Briefly, the baseline topology is the set of server nodes that can store data. Most probably the cause of your issue is that one or several server nodes are not in the baseline.
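A rough sketch of activating the cluster and resetting the baseline to the current topology from code (Ignite 2.4 API; whether this is appropriate depends on your cluster):
Ignite ignite = Ignition.ignite();
// persistent clusters start deactivated; activate before cache operations
ignite.cluster().active(true);
// include all server nodes from the current topology in the baseline
ignite.cluster().setBaselineTopology(ignite.cluster().topologyVersion());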

Spring amqp: Remove all bindings for specific queue

Is there a way to remove all bindings for specific queue using spring-amqp?
There's a workaround: first delete the queue, and then redeclare it:
amqpAdmin.deleteQueue("testQueue");
amqpAdmin.declareQueue(new Queue("testQueue"));
but this is a pretty ugly solution and I'd like to avoid it.
You can use the REST API to list the bindings and amqpAdmin.removeBinding() for those you want to remove.
EDIT
Here's the code using a Java 8 Stream - you can do the same thing by iterating over the list if you are not using Java 8...
RabbitManagementTemplate rmt = new RabbitManagementTemplate("http://localhost:15672/api/", "guest", "guest");
rmt.getBindings().stream()
        .filter(b -> b.getDestination().equals("q1") && b.isDestinationQueue())
        .forEach(b -> {
            System.out.println("Deleting " + b);
            amqpAdmin.removeBinding(b);
        });
Result:
Deleting Binding [destination=q1, exchange=, routingKey=q1]
Deleting Binding [destination=q1, exchange=ex1, routingKey=foo]
Deleting Binding [destination=q1, exchange=ex2, routingKey=foo]
(when q1 was bound to the default exchange and 2 others).
The RabbitAdmin amqpAdmin is used to do the deletes.
