Currently I'm using SCS (Spring Cloud Stream) with an almost default configuration for sending and receiving messages between microservices.
I recently read this post:
https://www.confluent.io/blog/enabling-exactly-kafka-streams
and wonder whether it will work if we just set the property "processing.guarantee" to "exactly-once" through the properties in a Spring Boot application?
In the context of your question, you should look at Spring Cloud Stream as just a delegate between the target system (e.g., Kafka) and your code. The binders that enable such delegation are usually implemented in such a way that they propagate whatever functionality is supported by the target system.
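For reference, this is what that setting looks like at the plain Kafka Streams level, which is ultimately what the binder has to hand it down to (a minimal sketch: the application id, broker address and topic names are made up, and note that Kafka Streams itself spells the value "exactly_once" with an underscore). With the Kafka Streams binder, such Kafka Streams properties are typically supplied through the binder's configuration properties in your Spring Boot configuration; check the binder documentation for the exact property prefix in your version.

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class ExactlyOnceSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "eos-demo");          // hypothetical application id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // The property from the Confluent post; the binder's job is simply to pass it
        // through to the underlying Kafka Streams instance.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic"); // trivial pass-through topology

        new KafkaStreams(builder.build(), props).start();
    }
}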
I have an application which listens to an ActiveMQ queue and starts a Batch Job when it receives a message.
I'd like to use Spring Cloud Data Flow to provide a UI, but I can't find information on how to configure it.
Since it uses Spring Boot, I should be able to replicate how my application currently works (use a REST API to make it listen to ActiveMQ and start the job when receiving a message), but I can't find anything on how to make it start the batch job in Cloud Data Flow.
You have a few options here.
Option 1: Launch your application as-is and manually send a message to launch the task.
Any arbitrary Spring Boot application can be launched from Dataflow (simply register it as type = "App").
Taken from https://github.com/spring-cloud/spring-cloud-dataflow/blob/main/spring-cloud-dataflow-docs/src/main/asciidoc/streams.adoc#register-a-stream-application:
Registering an application by using --type app is the same as registering a source, processor or sink. Applications of the type app can be used only in the Stream Application DSL (which uses double pipes || instead of single pipes | in the DSL) and instructs Data Flow not to configure the Spring Cloud Stream binding properties of the application. The application that is registered using --type app does not have to be a Spring Cloud Stream application. It can be any Spring Boot application. See the Stream Application DSL introduction for more about using this application type.
You would have to trigger the task launch from your code. You can use the Data Flow REST client to do this. You can get an idea of how to do that by looking at https://github.com/spring-cloud/spring-cloud-dataflow/tree/main/spring-cloud-dataflow-tasklauncher/spring-cloud-dataflow-tasklauncher-sink.
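As a rough sketch of what that could look like with the Data Flow REST client (DataFlowTemplate comes from spring-cloud-dataflow-rest-client; the server URI, the task name, and the surrounding listener wiring are made up, and the exact constructor and launch(...) signatures vary a bit between Data Flow versions):

import java.net.URI;
import java.util.Collections;

import org.springframework.cloud.dataflow.rest.client.DataFlowTemplate;

public class TaskLaunchingListener {

    // Points at your running Data Flow server (assumed URI).
    private final DataFlowTemplate dataFlow =
            new DataFlowTemplate(URI.create("http://localhost:9393"));

    // Call this from your existing ActiveMQ listener when a message arrives.
    public void onMessage(String payload) {
        // "my-batch-task" must already be registered as a task in Data Flow.
        dataFlow.taskOperations().launch(
                "my-batch-task",
                Collections.emptyMap(),                            // deployment properties
                Collections.singletonList("payload=" + payload));  // command-line arguments
    }
}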
Option 2: Use pre-built stream applications to model the same flow as your application.
The app you describe can be logically modeled as a Spring Cloud Stream application.
There is a JMS source (provides messages to signal the need to kickoff task/batch job)
There is a TaskLauncher sink (receives messages and kicks off the task/batch job)
This app can actually be constructed with little effort by using the pre-packaged applications to model this flow.
JMS Source
Dataflow Tasklauncher Sink
If you have to register these applications in the UI - they can be found at:
maven://org.springframework.cloud.stream.app:jms-source-kafka:3.1.1
maven://org.springframework.cloud:spring-cloud-dataflow-tasklauncher-sink-kafka:2.9.2
Stream definition:
jms-source | dataflow-tasklauncher-sink
The READMEs for the above source and sink give details about the configuration options.
Option 3: Custom Spring Cloud Stream app with function composition
The previous option would create two separate apps. However, if you want to keep the logic in a single app, then you can look into creating a custom Spring Cloud Stream app that uses function composition and leverages the pre-built reusable Java functions that the apps in option 2 are built upon (see the sketch after the links below).
JMS Supplier
TaskLauncherFunction
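To illustrate the shape of such an app (these are hypothetical stand-in functions, not the actual pre-built JMS supplier and task-launcher function, whose types you should take from the stream-applications project):

import java.util.function.Consumer;
import java.util.function.Supplier;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class ComposedTaskLaunchingApp {

    public static void main(String[] args) {
        SpringApplication.run(ComposedTaskLaunchingApp.class, args);
    }

    // Stand-in for the pre-built JMS supplier reading from ActiveMQ.
    @Bean
    public Supplier<String> jmsSupplier() {
        return () -> "launch-request";
    }

    // Stand-in for the task-launcher function that kicks off the task/batch job.
    @Bean
    public Consumer<String> taskLauncher() {
        return request -> System.out.println("Launching task for: " + request);
    }
}

With the functional programming model, the two beans would then be composed through spring.cloud.function.definition (e.g. jmsSupplier|taskLauncher); verify supplier-to-consumer composition against the spring-cloud-function version you are on.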
All,
I am developing an application which uses the Alpakka Spring Boot integration to read data from Kafka. I have most of the code ready; the only place I am stuck is how to initialize a continuously running stream, as this is going to be a backend application and won't have any API to be called from.
As far as I know, Alpakka's Spring integration is basically designed around exposing Akka Streams via a Spring HTTP controller. So I'm not sure what purpose bringing Spring into this serves, since there's quite an impedance mismatch between the way an Akka application tends to work and the way a Spring application tends to work.
Assuming you're talking about using Alpakka Kafka, the most idiomatic thing to do would be to just start a stream fed by an Alpakka Kafka Source in your main method; it will run until it fails or is killed. You may want to use a RestartSource around the consumer and business logic to ensure that in the event of failure the stream restarts (note that one should generally expect messages whose offsets had not yet been committed to be processed again, as Kafka can typically only guarantee at-least-once processing).
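For illustration, a bare-bones main method along those lines with the Alpakka Kafka Java DSL might look like this (the topic, group id, and bootstrap servers are placeholders, and the RestartSource/RestartSettings API differs slightly between Akka versions):

import java.time.Duration;

import akka.actor.ActorSystem;
import akka.kafka.ConsumerSettings;
import akka.kafka.Subscriptions;
import akka.kafka.javadsl.Consumer;
import akka.stream.RestartSettings;
import akka.stream.javadsl.RestartSource;
import akka.stream.javadsl.Sink;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaConsumerMain {

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("kafka-consumer");

        ConsumerSettings<String, String> settings =
                ConsumerSettings.create(system, new StringDeserializer(), new StringDeserializer())
                        .withBootstrapServers("localhost:9092")
                        .withGroupId("my-group");

        // Wrap the consumer and business logic in a RestartSource so the stream
        // is restarted with backoff if it fails.
        RestartSource.onFailuresWithBackoff(
                        RestartSettings.create(Duration.ofSeconds(1), Duration.ofSeconds(30), 0.2),
                        () -> Consumer.plainSource(settings, Subscriptions.topics("events"))
                                .map(msg -> {
                                    System.out.println(msg.value()); // business logic goes here
                                    return msg;
                                }))
                .runWith(Sink.ignore(), system);
        // Nothing else is needed: the stream keeps running until the JVM exits.
    }
}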
My requirement, for starters, is to send a string from one Spring Boot application to another using AMQP.
I am new to this and I have gone through the Spring Boot guide, so I know the basic fundamentals of Queue, Exchange, Binding, Container and Listener.
However, the guide shows the steps when the AMQP message is sent and received within the same application.
I am a little confused about where to start if I want to achieve this type of communication between two different Spring Boot applications.
What properties are needed for that, etc.?
Let me know if any details are required.
Just divide the application into two:
One without the receiver, and...
Another without the sender.
Make sure your application, configuration, etc. stay the same. With Spring Boot's built-in RabbitMQ support, you will be able to run both just fine.
The next step is to call the sender from your business logic as and when needed.
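A minimal sketch of the two sides (the exchange, routing key, and queue names are made up; the queue/exchange/binding beans from the guide go into whichever application you want to own the declarations, typically the receiver):

// ----- Sender application -----
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Service;

@Service
public class GreetingSender {

    private final RabbitTemplate rabbitTemplate;

    public GreetingSender(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public void send(String text) {
        // "app.exchange" and "app.greeting" are placeholder exchange/routing-key names.
        rabbitTemplate.convertAndSend("app.exchange", "app.greeting", text);
    }
}

// ----- Receiver application (a separate Spring Boot project) -----
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class GreetingReceiver {

    // "app.greeting.queue" is a placeholder queue bound to app.exchange with key app.greeting.
    @RabbitListener(queues = "app.greeting.queue")
    public void receive(String text) {
        System.out.println("Received: " + text);
    }
}

Both applications point at the same broker through the standard spring.rabbitmq.* properties (host, port, username, password).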
According to the Spring release notes, spring-integration-aws 1.1.0.M1 does not include a DynamoDB MetadataStore implementation. There is still the ConcurrentMetadataStore interface, which is a key-value based store, and based on the implementation I suppose it maps streams to the latest sequence number read. But it does not use any data store to persist or retrieve checkpoints.
I am using Spring Integration for Kinesis consumption and need to implement checkpointing. I am wondering if I need to do it manually by connecting to DynamoDB and always updating checkpoints, or whether there is another way of doing it using the Spring framework.
P.S.: I can't use Spring Cloud KinesisBinderConfiguration because I dynamically consume events from a list of configurable streams.
Thank you
If you are not talking about Spring Cloud Stream and the AWS Kinesis Binder implementation, then I don't see any blockers for you to upgrade your solution to Spring Integration AWS 2.0 and go ahead with the already provided DynamoDbMetaDataStore.
Or, if it is too hard for you to move to Spring Integration 5.0, you can simply copy the implementation into your own class and inject it into the KinesisMessageDrivenChannelAdapter: https://github.com/spring-projects/spring-integration-aws/blob/master/src/main/java/org/springframework/integration/aws/metadata/DynamoDbMetaDataStore.java
Although it is really available in 1.1.0.RELEASE - I don't see a reason for you to stick with 1.1.0.M1: https://spring.io/blog/2017/11/27/spring-integration-for-aws-1-1-ga-available
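Wiring it in could look roughly like this (a sketch only: the DynamoDbMetaDataStore constructor arguments and the adapter's checkpoint-store setter should be checked against the spring-integration-aws version you end up on, and the stream and table names are placeholders):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBAsync;
import com.amazonaws.services.kinesis.AmazonKinesis;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter;
import org.springframework.integration.aws.metadata.DynamoDbMetaDataStore;
import org.springframework.integration.metadata.ConcurrentMetadataStore;

@Configuration
public class KinesisCheckpointConfig {

    @Bean
    public ConcurrentMetadataStore checkpointStore(AmazonDynamoDBAsync dynamoDb) {
        // The store keeps "stream/shard -> last sequence number" entries in a DynamoDB table.
        return new DynamoDbMetaDataStore(dynamoDb, "my-checkpoint-table");
    }

    @Bean
    public KinesisMessageDrivenChannelAdapter kinesisAdapter(AmazonKinesis kinesis,
            ConcurrentMetadataStore checkpointStore) {
        KinesisMessageDrivenChannelAdapter adapter =
                new KinesisMessageDrivenChannelAdapter(kinesis, "stream-a", "stream-b");
        adapter.setCheckpointStore(checkpointStore);
        adapter.setOutputChannelName("kinesisChannel");
        return adapter;
    }
}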
The case for event-driven microservices such as Spring Cloud Stream is their asynchronous nature, which I agree makes them more scalable.
But I have an issue regarding how to code them in a way where I don't lose certain key features that I have access to in synchronous services.
In a servlet-based microservice, I make full use of servlet context variables and servlet-based Spring autowiring functions.
For example, I rely heavily on HTTP headers to carry metadata between microservices without having to impact the payload. But in Spring Cloud Stream using Kafka, Kafka doesn't support message headers of any kind! I lose that immediately if I use SCS. Putting them into the payload causes all sorts of changes in my model classes if I define the attributes explicitly. Yes, I can use a simple HashMap to simulate the HTTP header object, but it really seems like reinventing the wheel to me.
On the autowiring side: I maintain an audit log record per request, which I implement by declaring a request-scoped HashMap bean and autowiring it into any method in the servlet's call stack that needs to append data to the audit log. Basically it's just a global variable to hold some data within a single request. But in SCS, again, I lose that because bean scopes that rely on servlets are not available.
So far, there seems to be a lot of trade-offs that I have to make just to make Spring Cloud Stream work for me.
I thought about an alternative approach where I use SCS just to create an entry point, but the Source method would just get the event, use a Processor to construct an HTTP request, and send the request along to an HTTP endpoint. But why go through all that trouble then?
Hoping that some more experienced devs would be able to shed some light on how they leverage SCS.
#feicipet Thanks for the detailed question. Let me try to address some of your concerns in the order you have listed them:
+1
+1
I am not sure why you are referring to it as servlet-based instead of Spring-based. Those are features provided by Spring, but read on...
Spring Cloud Stream doesn't use Kafka; the end user does, while Spring Cloud Stream provides the Kafka binder, allowing Spring Cloud Stream to integrate with Kafka. Furthermore, while Kafka indeed did not support headers prior to version 0.11, Spring Cloud Stream has always supported and will continue to support headers even with Kafka pre-0.11, embedding them in the Message and then extracting them on the consumer side into the proper Message headers, completely transparently to the end user. In other words, one could assume that Kafka did support headers by simply using Spring Cloud Stream. With Kafka 0.11+, headers are supported natively and we have adjusted to that with the same level of transparency.
So you don't need to put anything in the payload. Just create an appropriate Message<payload, headers> and SCSt will take care of the rest regardless of the broker (Kafka, Rabbit, Foo, etc.).
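For instance (the header name and payload here are arbitrary):

import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

public class HeaderCarryingExample {

    public Message<String> buildMessage(String payload, String correlationId) {
        // The header travels with the message: with Kafka pre-0.11 the binder embeds it
        // in the record, with 0.11+ it maps to native Kafka headers.
        return MessageBuilder.withPayload(payload)
                .setHeader("x-correlation-id", correlationId)
                .build();
    }
}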
Yes, you do, simply due to the fact that, as you alluded to earlier, SCSt promotes an asynchronous and stateless architecture. However, I do not agree that what you are trying to accomplish is unaccomplishable. Rather, it is not accomplishable the way you are describing, but there are other ways to maintain context, and I would be more than glad to discuss it as a separate topic.
I would not call them trade-offs, rather a difference in architecture, one that has its benefits, but it is not a one-size-fits-all architecture and therefore its viability should be discussed within the context of a concrete use case.
+1. You don't have to separate it into a Source and a Processor. You can simply create a custom Source app with an exposed REST endpoint and custom processing logic (see the sketch below). However, we are currently working on enhancements in the framework to ensure that you could do the same with the existing starter apps.
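A sketch of that kind of custom Source, using the annotation-based programming model that was current at the time (the endpoint path is made up):

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@EnableBinding(Source.class)
@RestController
public class HttpSourceController {

    private final Source source;

    public HttpSourceController(Source source) {
        this.source = source;
    }

    @PostMapping("/events")
    public void publish(@RequestBody String body) {
        // Any custom processing logic happens here before sending to the output binding.
        source.output().send(MessageBuilder.withPayload(body).build());
    }
}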
Obviously we have touched on many points here and some of them would probably need to be debated further, but I hope this clears up some of your concerns.
Cheers