How does reactive Java access data from Couchbase? - spring-boot

I'm working with Spring 5 and reactive programming with Couchbase. Can anyone explain in detail how reactive Java pulls in data from Couchbase? How does Couchbase provide reactive support? Thanks in advance.

The Couchbase Java client is implemented with RxJava 1, at least through the 2.x versions of the SDK.
If you look at, for example, a document get operation, you can turn the operation into an observable stream by inserting a call to async(). That is, bucket.async().get(id) returns an Observable<JsonDocument>.
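As a rough illustration, here is a minimal sketch of the difference between the blocking and the asynchronous get in SDK 2.x (the hostname, bucket name, and document id are placeholders, not from the original question):

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.document.JsonDocument;
import rx.Observable;

public class AsyncGetExample {
    public static void main(String[] args) {
        CouchbaseCluster cluster = CouchbaseCluster.create("localhost"); // placeholder host
        Bucket bucket = cluster.openBucket("travel-sample");             // placeholder bucket

        // Blocking get: the calling thread waits for the document.
        JsonDocument blocking = bucket.get("airline_10");
        System.out.println(blocking.content());

        // Asynchronous get: returns an RxJava 1 Observable that emits the document
        // when the network response arrives; nothing blocks here.
        Observable<JsonDocument> async = bucket.async().get("airline_10");
        async.subscribe(doc -> System.out.println(doc.content()));

        cluster.disconnect();
    }
}
```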

Related

Calling Hibernate in Spring Cloud Stream

I'm new to Spring Cloud Stream.
Say I have a Spring Cloud Stream app that listens to some topic from Kafka using @StreamListener("input-channel").
I want to do some calculation and send the result to another topic, but in the middle of the processing I also need to call Hibernate (via Spring Data JPA) to persist some data to my MySQL database.
Is it valid to call Hibernate in the middle of stream processing? Is there another pattern for doing it?
Yes, it's a database call, so why not? People do it all the time.
Also, @StreamListener has been deprecated for 3 years now and has already been removed from newer versions, so please transition to the functional programming model; a minimal sketch follows below.
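For illustration only, here is a rough sketch of a functional-model handler that persists via a Spring Data JPA repository in the middle of processing. The ProcessedEvent entity, its repository, and the binding names are assumptions (and older Boot versions would use javax.persistence instead of jakarta.persistence):

```java
import java.util.function.Function;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jpa.repository.JpaRepository;

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;

// Hypothetical JPA entity, used only to illustrate the mid-stream persistence step.
@Entity
class ProcessedEvent {
    @Id @GeneratedValue Long id;
    String payload;

    protected ProcessedEvent() { }                          // required by JPA
    ProcessedEvent(String payload) { this.payload = payload; }
}

interface ProcessedEventRepository extends JpaRepository<ProcessedEvent, Long> { }

@Configuration
public class ProcessingConfig {

    // Bound by convention, e.g.:
    // spring.cloud.stream.bindings.process-in-0.destination=input-topic
    // spring.cloud.stream.bindings.process-out-0.destination=output-topic
    @Bean
    public Function<String, String> process(ProcessedEventRepository repository) {
        return payload -> {
            repository.save(new ProcessedEvent(payload)); // Hibernate/JPA call mid-processing
            return payload.toUpperCase();                  // result is sent to the output topic
        };
    }
}
```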

Spring Cloud Stream with Project Reactor Stability

I want to use Spring Cloud Stream for consuming and processing Apache Kafka topics and writing the results to MongoDB. I saw that there is an option to use the library so that functions are either reactive or imperative. In most Spring projects the imperative way is the default, but to my understanding, in Spring Cloud Stream the reactive paradigm is the default.
I wonder which is considered the more "stable" API, i.e. which is recommended for enterprise use?
The reactive API is stable and yes, we provide support for it. In other words, you can write functions using the reactive API (e.g., Function<Flux, Flux>).
However, I want to be very clear that support for the API does not mean support for the full stack of reactive capabilities, since those actually depend on sources and targets which are not reactive.
That said, with Kafka you can rely on native reactive support provided by Kafka itself and Spring Cloud Stream using Kafka Streams binder - https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/3.1.5/reference/html/spring-cloud-stream-binder-kafka.html#_kafka_streams_binder
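For reference, a minimal sketch of a reactive Spring Cloud Stream function (the function name "uppercase" and the destination properties in the comments are assumptions):

```java
import java.util.function.Function;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import reactor.core.publisher.Flux;

@Configuration
public class ReactiveStreamConfig {

    // Bound by convention, e.g.:
    // spring.cloud.stream.bindings.uppercase-in-0.destination=input-topic
    // spring.cloud.stream.bindings.uppercase-out-0.destination=output-topic
    @Bean
    public Function<Flux<String>, Flux<String>> uppercase() {
        // The whole stream is handed to the function once; operators are applied reactively.
        return flux -> flux.map(String::toUpperCase);
    }
}
```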

Elasticsearch high/low-level REST client vs Spring RestTemplate

I am in a dilemma over whether to use Spring's RestTemplate or Elasticsearch's own high/low-level REST client when searching in ES. Does the ES client provide any advantage, such as HTTP connection pooling or better performance, compared to Spring's RestTemplate? Which of the two takes less time to get a response from the server? Can someone please explain this?
The biggest advantage of using Spring Data Elasticsearch is that you don't have to bother with things like converting your requests/request bodies/responses from your POJO domain classes to and from the JSON needed by Elasticsearch. You just use the methods defined on the ElasticsearchOperations interface, which is implemented by the *Template classes.
Or, going one abstraction layer up, use the Repository interfaces that all the Spring Data modules provide to store and search/retrieve your data.
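As an illustration of that abstraction, a minimal sketch of a Spring Data Elasticsearch repository (the Article entity, the "articles" index, and the query method are assumptions):

```java
import java.util.List;

import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

@Document(indexName = "articles")
class Article {
    @Id
    private String id;
    private String title;
    // getters/setters omitted for brevity
}

// Spring Data derives the Elasticsearch query from the method name;
// no JSON request body has to be written or parsed by hand.
interface ArticleRepository extends ElasticsearchRepository<Article, String> {
    List<Article> findByTitle(String title);
}
```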
Firstly, this is a very broad question, and I'm not sure it suits the SO guidelines.
But my two cents:
The High Level Client uses the Low Level Client, which does provide connection pooling.
The High Level Client manages the marshalling and unmarshalling of the Elasticsearch query body and response, so it might be easier to work with its APIs.
On the other hand, if you are used to writing Elasticsearch queries as JSON bodies (e.g. in the Kibana console or other REST API tools), you might find it a bit difficult to translate between the JSON body and the Java classes used for creating the query.
I generally overcome this by logging the query generated by the Java API so that I can use it with the Kibana console or other REST API tools.
Regarding which one is more efficient: the choice of library will not matter enough to noticeably affect response times.
If you want to use Spring's reactive features and make use of WebClient, the ES libraries do provide support for async search.
Update:
Please check the answer by Wim Van den Brande below. He mentions a very valid point: the TransportClient has been deprecated in favour of the REST API.
So it will be interesting to see how RestTemplate-based approaches and Spring Data Elasticsearch update their APIs to replace the TransportClient.
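To make the marshalling and logging points concrete, a minimal sketch of a search with the High Level REST Client (the host, the "articles" index, and the "title" field are placeholders); printing the SearchSourceBuilder gives the JSON body you can paste into the Kibana console:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class HighLevelClientSearch {
    public static void main(String[] args) throws Exception {
        // The high-level client wraps the low-level client, which handles HTTP connection pooling.
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")))) {

            SearchSourceBuilder source = new SearchSourceBuilder()
                    .query(QueryBuilders.matchQuery("title", "spring"));
            SearchRequest request = new SearchRequest("articles").source(source);

            // Log the generated query JSON so it can be reused in Kibana or other REST tools.
            System.out.println(source);

            SearchResponse response = client.search(request, RequestOptions.DEFAULT);
            System.out.println(response.getHits().getTotalHits());
        }
    }
}
```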
One important remark and caveat with regard to the usage of Spring Data Elasticsearch: currently, Spring Data Elasticsearch doesn't support communication via the High Level REST Client API. It uses the TransportClient. Please note, the TransportClient is deprecated as of Elasticsearch 7.0.0 and is expected to be removed in Elasticsearch 8.0!
FYI, this statement has already been confirmed by another post: Elasticsearch Rest Client with Spring Data Elasticsearch

How to design reactive microservices that have external blocking API calls?

I have some microservices which should work on top of the WebFlux framework. Each service has its own API returning Mono or Flux. We are using MongoDB, which is supported by Spring (Spring Data MongoDB Reactive).
The problem is an external blocking API which I have to use in my system.
I have one solution: I can just wrap the blocking API calls in a dedicated thread pool and use it with CompletableFuture.
Is there anything else that could solve my problem? I think that the brand new RSocket cannot solve it.
1. If possible, change your blocking API calls to reactive ones using the WebClient class (a minimal WebClient sketch follows the references below).
References:
Reference guide
WebClient API
A simple, complete sample
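A minimal sketch of a non-blocking call with WebClient (the base URL and path are placeholders):

```java
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class ReactiveApiClient {

    private final WebClient webClient = WebClient.create("https://external-service.example.com");

    public Mono<String> fetchItem(String id) {
        return webClient.get()
                .uri("/items/{id}", id)
                .retrieve()
                .bodyToMono(String.class); // no thread is blocked while waiting for the response
    }
}
```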
2. If the blocking API can't be changed to a reactive one, we should have a dedicated, well-tuned thread pool and isolate the blocking code there.
There is also an example here.
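For point 2, a minimal sketch of isolating a blocking call on a dedicated Reactor scheduler (callBlockingApi is a stand-in for the real external blocking call; the pool sizes are arbitrary):

```java
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

public class BlockingBridge {

    // A bounded, dedicated thread pool so blocking work never starves the event loop.
    private static final Scheduler BLOCKING_POOL =
            Schedulers.newBoundedElastic(10, 100, "blocking-api");

    public Mono<String> fetchFromLegacyApi(String id) {
        return Mono.fromCallable(() -> callBlockingApi(id)) // deferred until subscription
                   .subscribeOn(BLOCKING_POOL);             // executed on the dedicated pool
    }

    // Stand-in for the external blocking API (e.g. a slow HTTP or JDBC call).
    private String callBlockingApi(String id) {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "result-for-" + id;
    }
}
```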
I don't see why you cannot wrap a blocking API call in a Flux or a Mono. You can also integrate Akka with Spring if the actor model seems easier to you.
RSocket should be a perfect fit; here are some good tutorials to get you started:
https://www.baeldung.com/spring-boot-rsocket
https://spring.io/blog/2020/04/06/getting-started-with-rsocket-spring-boot-channels

Checkpointing with Spring AWS Integration

According to the Spring release notes, spring-integration-aws 1.1.0.M1 does not include a DynamoDB MetadataStore implementation. There is still the ConcurrentMetadataStore interface, which is a key-value based store, and based on the implementation I suppose it maps streams to the latest sequence number read. But it does not use any data store to persist and retrieve checkpoints.
I am using Spring Integration for Kinesis consumption and need to implement checkpointing. I am wondering if I need to do it manually by connecting to DynamoDB and always updating checkpoints, or whether there is another way of doing it using the Spring framework?
P.S.: I can't use the Spring Cloud KinesisBinderConfiguration as I dynamically consume events from a list of configurable streams.
Thank you
If you are not talking about Spring Cloud Stream and the AWS Kinesis Binder implementation, then I don't see any blockers for you to upgrade your solution to Spring Integration AWS 2.0 and go ahead with the already provided DynamoDbMetaDataStore.
Or, if it is too hard for you to move to Spring Integration 5.0, then you can simply consider copying the implementation into your own class and injecting it into the KinesisMessageDrivenChannelAdapter: https://github.com/spring-projects/spring-integration-aws/blob/master/src/main/java/org/springframework/integration/aws/metadata/DynamoDbMetaDataStore.java
Although it is actually available in 1.1.0.RELEASE, so I don't see a reason for you to stick with 1.1.0.M1: https://spring.io/blog/2017/11/27/spring-integration-for-aws-1-1-ga-available
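A rough sketch of wiring the provided DynamoDbMetaDataStore into the Kinesis adapter, assuming Spring Integration AWS 2.0. The stream name, table name, and bean setup are placeholders; check the spring-integration-aws reference documentation for the exact constructor and setter signatures:

```java
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBAsync;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBAsyncClientBuilder;
import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter;
import org.springframework.integration.aws.metadata.DynamoDbMetaDataStore;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.messaging.MessageChannel;

@Configuration
public class KinesisCheckpointConfig {

    @Bean
    public AmazonKinesis amazonKinesis() {
        return AmazonKinesisClientBuilder.defaultClient();
    }

    @Bean
    public AmazonDynamoDBAsync dynamoDb() {
        return AmazonDynamoDBAsyncClientBuilder.defaultClient();
    }

    @Bean
    public DynamoDbMetaDataStore checkpointStore(AmazonDynamoDBAsync dynamoDb) {
        // Persists shard checkpoints in a DynamoDB table instead of in memory.
        return new DynamoDbMetaDataStore(dynamoDb, "my-checkpoints-table"); // placeholder table
    }

    @Bean
    public MessageChannel kinesisChannel() {
        return new DirectChannel();
    }

    @Bean
    public KinesisMessageDrivenChannelAdapter kinesisAdapter(AmazonKinesis kinesis,
                                                             DynamoDbMetaDataStore checkpointStore) {
        KinesisMessageDrivenChannelAdapter adapter =
                new KinesisMessageDrivenChannelAdapter(kinesis, "my-stream"); // placeholder stream
        adapter.setCheckpointStore(checkpointStore);
        adapter.setOutputChannel(kinesisChannel());
        return adapter;
    }
}
```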
