Calling Hibernate in Spring Cloud Stream - spring-boot

I'm new to Spring Cloud Stream.
Say I have a Spring Cloud Stream app that listens to some topic from Kafka using @StreamListener("input-channel").
I want to do some calculation and send the result to another topic, but in the middle of the processing I also need to call Hibernate (via Spring Data JPA) to persist some data to my MySQL database.
Is it valid to call Hibernate in the middle of stream processing? Is there another pattern for doing this?

Yes, it's a database call, so why not? People do it all the time.
Also, @StreamListener has been deprecated for 3 years now and has already been removed from the newer versions, so please transition to the functional programming model.
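For illustration, a minimal sketch of the functional model, assuming a hypothetical MyEntityRepository / MyEntity pair on the Spring Data JPA side and a plain String payload:

```java
import java.util.function.Function;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StreamConfig {

    // Replaces @StreamListener: consumes from the input binding,
    // persists mid-stream via Spring Data JPA, and returns the
    // result, which goes to the output binding.
    @Bean
    public Function<String, String> process(MyEntityRepository repository) { // hypothetical repository
        return payload -> {
            repository.save(new MyEntity(payload)); // the Hibernate call in the middle of processing
            return payload.toUpperCase();           // stand-in for the actual calculation
        };
    }
}
```

With spring.cloud.function.definition=process set, the binder maps the input and output topics to the process-in-0 and process-out-0 destinations in the application properties.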

Related

Spring Boot application with Redis cache without database

I am using a Spring Boot application and I have the below requirement.
I would like to cache all the rows from a particular table (here I have to convert each row into a particular XML format and maintain that in the cache). If any update happens to a row by another application, my application will receive a message from a Kafka topic, and I want to replace the existing XML in the cache with the latest XML message from the Kafka topic.
Here I want to use Redis as the cache, but so far all the examples I have seen deal with a database only. So I want to know how I can populate the cache from a Kafka topic message (or any XML message). Is it possible to cache an XML message in Spring Boot using Redis? Can somebody share an idea or a practical example?
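One possible shape for this, sketched with Spring Kafka and StringRedisTemplate; the topic name, key prefix, and key-extraction helper are all assumptions for illustration:

```java
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class XmlCacheUpdater {

    private final StringRedisTemplate redis;

    public XmlCacheUpdater(StringRedisTemplate redis) {
        this.redis = redis;
    }

    // Each Kafka message carries the latest XML for a row; overwrite the
    // cached entry under that row's key.
    @KafkaListener(topics = "row-updates") // placeholder topic name
    public void onRowUpdate(String xml) {
        String rowKey = extractRowKey(xml);
        redis.opsForValue().set("row:" + rowKey, xml);
    }

    private String extractRowKey(String xml) {
        // Hypothetical helper: assumes the payload carries an <id>...</id> element.
        int start = xml.indexOf("<id>") + 4;
        int end = xml.indexOf("</id>");
        return xml.substring(start, end);
    }
}
```

The initial cache population could be a one-time scan of the table at startup using the same set operation; after that, the listener keeps the cache in sync.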

How to initialize a continuously running stream using Alpakka, Spring Boot & Akka Streams?

All,
I am developing an application which uses the Alpakka Spring Boot integration to read data from Kafka. I have most of the code ready; the only place I am stuck is how to initialize a continuously running stream, as this is going to be a backend application and won't have any API to be called from.
As far as I know, Alpakka's Spring integration is basically designed around exposing Akka Streams via a Spring HTTP controller. So I'm not sure what purpose bringing Spring into this serves, since there's quite an impedance mismatch between the way an Akka application tends to work and the way a Spring application tends to work.
Assuming you're talking about Alpakka Kafka, the most idiomatic thing to do would be to start a stream fed by an Alpakka Kafka Source in your main method; it will run until it fails or is killed. You may want to wrap the consumer and business logic in a RestartSource to ensure that the stream restarts in the event of failure (note that one should generally expect messages whose offsets had not yet been committed to be processed again, as Kafka in typical cases can only guarantee at-least-once processing).
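A minimal sketch of that shape, assuming Alpakka Kafka's Java DSL on Akka 2.6; the broker address, group id, and topic name are placeholders:

```java
import java.time.Duration;

import akka.actor.ActorSystem;
import akka.kafka.ConsumerSettings;
import akka.kafka.Subscriptions;
import akka.kafka.javadsl.Consumer;
import akka.stream.RestartSettings;
import akka.stream.javadsl.RestartSource;
import akka.stream.javadsl.Sink;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ContinuousConsumer {

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("continuous-consumer");

        ConsumerSettings<String, String> settings =
                ConsumerSettings.create(system, new StringDeserializer(), new StringDeserializer())
                        .withBootstrapServers("localhost:9092") // placeholder broker
                        .withGroupId("my-group");               // placeholder group id

        RestartSettings restart = RestartSettings.create(
                Duration.ofSeconds(1), Duration.ofSeconds(30), 0.2);

        // The RestartSource re-creates the consumer with backoff on failure;
        // the stream runs until the JVM exits.
        RestartSource.onFailuresWithBackoff(restart, () ->
                        Consumer.plainSource(settings, Subscriptions.topics("my-topic"))
                                .map(record -> {
                                    // Business logic goes here; expect re-delivery of
                                    // uncommitted messages after a restart.
                                    System.out.println(record.value());
                                    return record;
                                }))
                .runWith(Sink.ignore(), system);
    }
}
```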

Spring Cloud Stream - query topic without consuming a KTable/KStream explicitly?

I'm using the Spring Cloud Stream library in a Java application. I want to use the Kafka Streams binder for a state store. The application will post messages to a topic, and I wish to use the Kafka Streams InteractiveQueryService to retrieve data from the same topic. Is it possible to perform such queries as-is, or do I need to first consume the topic as a KTable/KStream and materialize it before I can perform queries? I don't have any requirement to perform KTable/KStream processing on the topic; I just want to query the topic contents. I'm hoping there is some way to implicitly materialize it as a state store.
Interactive Queries is a feature that allows you to query client-side state stores. It's not a feature that allows you to query topics.
Hence, if you have data in a topic that you want to query using Interactive Queries, you need to load the data into a state store within Kafka Streams.
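A sketch of loading the topic into a state store with the Kafka Streams binder and querying it, assuming String keys and values; the function name, its materialize-in-0 binding, and the store name "topic-store" are illustrative:

```java
import java.util.function.Consumer;

import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;

@Component
public class TopicQueryExample {

    private final InteractiveQueryService queryService;

    public TopicQueryExample(InteractiveQueryService queryService) {
        this.queryService = queryService;
    }

    // Materializes the bound topic (materialize-in-0) into a local state
    // store so Interactive Queries can see it; no further processing needed.
    @Bean
    public Consumer<KStream<String, String>> materialize() {
        return stream -> stream.toTable(
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("topic-store"));
    }

    public String lookup(String key) {
        ReadOnlyKeyValueStore<String, String> store = queryService.getQueryableStore(
                "topic-store", QueryableStoreTypes.<String, String>keyValueStore());
        return store.get(key);
    }
}
```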

How does reactive Java access data from Couchbase?

I'm working with Spring 5 and reactive programming with Couchbase. Can anyone explain in detail how reactive Java pulls in data from Couchbase? How does Couchbase provide reactive support? Thanks in advance.
The Couchbase Java client is implemented with RxJava 1, at least through the 2.x versions.
If you look at, for example, a document get operation, you can turn the operation into an observable stream by inserting a call to async(). That is, bucket.async().get(id) returns an Observable<JsonDocument>.
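For example, a short sketch against the 2.x SDK; the bucket and document id are assumed to exist:

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.document.JsonDocument;
import rx.Observable;

public class AsyncGetExample {

    // Switch from the blocking API to the RxJava 1 based one via async().
    static void printDocument(Bucket bucket, String id) {
        Observable<JsonDocument> doc = bucket.async().get(id);
        doc.subscribe(
                d -> System.out.println(d.content()),  // emitted once the get completes
                Throwable::printStackTrace,            // propagated on error
                () -> System.out.println("done"));     // completes after at most one item
    }
}
```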

Checkpointing with Spring AWS Integration

According to the Spring release notes, spring-integration-aws 1.1.0.M1 does not include a DynamoDB MetadataStore implementation. There is still the ConcurrentMetadataStore interface, which is a key-value based store, and based on the implementation I suppose it maps streams to the latest sequence number read. But it does not use any data store to persist and retrieve checkpoints.
I am using Spring Integration for Kinesis consuming and need to implement checkpointing. I am wondering if I need to do it manually by connecting to DynamoDB and always updating checkpoints, or whether there is another way of doing it using the Spring framework?
P.S.: I can't use the Spring Cloud KinesisBinderConfiguration as I dynamically consume events from a list of configurable streams.
Thank you
If you are not talking about Spring Cloud Stream and the AWS Kinesis Binder implementation, then I don't see any blockers for you to upgrade your solution to Spring Integration AWS 2.0 and go ahead with the already-provided DynamoDbMetaDataStore.
Or, if it is too hard for you to move to Spring Integration 5.0, then you can simply consider copying the implementation into your own class and injecting it into the KinesisMessageDrivenChannelAdapter: https://github.com/spring-projects/spring-integration-aws/blob/master/src/main/java/org/springframework/integration/aws/metadata/DynamoDbMetaDataStore.java
Although it is really available in 1.1.0.RELEASE - I don't see a reason for you to stick with 1.1.0.M1: https://spring.io/blog/2017/11/27/spring-integration-for-aws-1-1-ga-available
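A sketch of that wiring against Spring Integration AWS 2.0; the table name, stream names, and channel name are placeholders:

```java
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBAsync;
import com.amazonaws.services.kinesis.AmazonKinesis;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.aws.inbound.kinesis.CheckpointMode;
import org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter;
import org.springframework.integration.aws.metadata.DynamoDbMetaDataStore;
import org.springframework.integration.metadata.ConcurrentMetadataStore;

@Configuration
public class KinesisConfig {

    // Checkpoints are persisted to DynamoDB via the provided metadata store.
    @Bean
    public ConcurrentMetadataStore checkpointStore(AmazonDynamoDBAsync dynamoDb) {
        return new DynamoDbMetaDataStore(dynamoDb, "my-checkpoint-table");
    }

    // The adapter accepts any number of streams, so a configurable list works.
    @Bean
    public KinesisMessageDrivenChannelAdapter kinesisAdapter(
            AmazonKinesis kinesis, ConcurrentMetadataStore checkpointStore) {
        KinesisMessageDrivenChannelAdapter adapter =
                new KinesisMessageDrivenChannelAdapter(kinesis, "stream-1", "stream-2");
        adapter.setCheckpointStore(checkpointStore);
        adapter.setCheckpointMode(CheckpointMode.batch);
        adapter.setOutputChannelName("kinesisChannel");
        return adapter;
    }
}
```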
