Different polling delay for different suppliers in Spring Cloud Stream Function - spring-boot

I'm trying to implement suppliers using Spring Cloud Function and Kafka. I need one supplier to publish every 10 seconds and another to publish every 30 seconds. I can see from the documentation that I can change the delay using the spring.cloud.stream.poller.fixed-delay property. Reference
But I need to set a different delay for each topic. Is there any way to do that?

From the spring-cloud-function perspective there isn't any kind of polling, as that is not the responsibility of the framework.
From the spring-cloud-stream perspective, which uses spring-cloud-function, there is indeed the mechanism you have described. However, keep in mind that spring-cloud-stream is primarily designed to support the concept of microservices (it is not a general messaging framework), and in microservices we embrace the "do one thing, and do it well, without affecting others" approach. So having more than one supplier kind of goes against this model.
If you are building a general-purpose messaging app, then I'd suggest using the Spring Integration framework, which provides all the necessary hooks to accomplish what you need but will require a bit more configuration detail.
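For illustration, here is a minimal Spring Integration sketch in the spirit of that suggestion, assuming the Java DSL and the Kafka outbound adapter from spring-integration-kafka: two independent flows, each with its own fixed-delay poller, publishing to different Kafka topics. The topic names and payloads are hypothetical placeholders.

import java.time.Duration;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.kafka.dsl.Kafka;
import org.springframework.kafka.core.KafkaTemplate;

@Configuration
public class PollingFlows {

    // Publishes to "topic-a" every 10 seconds
    @Bean
    public IntegrationFlow everyTenSeconds(KafkaTemplate<String, String> kafkaTemplate) {
        return IntegrationFlows
                .fromSupplier(() -> "payload-for-topic-a",
                        c -> c.poller(Pollers.fixedDelay(Duration.ofSeconds(10))))
                .handle(Kafka.outboundChannelAdapter(kafkaTemplate).topic("topic-a"))
                .get();
    }

    // Publishes to "topic-b" every 30 seconds
    @Bean
    public IntegrationFlow everyThirtySeconds(KafkaTemplate<String, String> kafkaTemplate) {
        return IntegrationFlows
                .fromSupplier(() -> "payload-for-topic-b",
                        c -> c.poller(Pollers.fixedDelay(Duration.ofSeconds(30))))
                .handle(Kafka.outboundChannelAdapter(kafkaTemplate).topic("topic-b"))
                .get();
    }
}

Each flow has its own poller, so each supplier gets its own cadence without touching the single global spring.cloud.stream.poller property.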

Related

Separate microservice just for microservices orchestration?

I have a few microservices where each microservice has REST endpoints for CRUD operations.
I have to create a workflow that will start from one microservice with some initial input, but later outputs from a microservice can be used as input to other microservices. There can be some synchronous and asynchronous calls to these REST APIs.
I have looked at some workflow engines, but I do not think I can create my workflow without writing any Java code.
Should I write a separate microservice just for microservice orchestration? This orchestration microservice would know the exact workflow and could be configured with the inputs required to start the workflow; it could also use a third-party workflow engine like Camunda to store the workflow definition.
Is it correct thinking to have a separate microservice just for microservice orchestration? Until now, the existing microservices have had no idea about the other microservices. There is a chance that the output from one microservice will need to be massaged before being used as input for another microservice.
I have looked at some workflow engines, but I do not think I can create my workflow without writing any Java code.
This depends on your business processes and the complexity of your workflow. Usually, yes, you will need to write some code to achieve it.
Should I write a separate microservice just for microservice orchestration? This orchestration microservice would know the exact workflow and could be configured with the inputs required to start the workflow; it could also use a third-party workflow engine like Camunda to store the workflow definition.
Yes, you can do that. I did something similar on a system using microservices. This would be a very good idea in the long run, as you could also configure your workflow per environment. For example, on your development machine you would have a slightly different workflow/configuration, which is practical for developers or QA testing their solutions. On the other hand, on staging/production you can pre-define customer setups/orchestrations which you can reuse any time you get new customers or users.
Is it correct thinking to have a separate microservice just for microservice orchestration? Until now, the existing microservices have had no idea about the other microservices. There is a chance that the output from one microservice will need to be massaged before being used as input for another microservice.
Yes, you can do that without problems, although I would be careful with the name "orchestration", as it has another meaning in the context of microservice architecture (Docker, Docker Swarm, Kubernetes). A similar example would be some kind of end-to-end-test or cross-microservice testing microservice, which tests cross-microservice business operations and asserts the results. Business operations usually involve more than one microservice, so in order to test them you can use this approach: the testing microservice calls APIs from multiple microservices and verifies the results and scenarios against your business rules. Another example would be something like a seeder microservice (which seems to be very similar to what you are trying to do here). The seeder microservice would be responsible for seeding (creating) test data in your microservices. This test data is the basic setup/configuration data you need for your microservice business processes to work. It is very handy for development machines or test environments where you need to set up an environment quickly: using the seeder microservice you can easily set up the environment, do your work or tests, and then dispose of the environment (data) as needed. This is especially useful for development machine setups, but it can also be used on shared test environments and so on. Both of these examples are microservices that serve your needs and make it easier to work with your system.
One final note regarding this:
Until now, the existing microservices have had no idea about the other microservices.
They should be abstracted from each other in the sense that they are not aware of each other's internal implementation or data (separate databases), but they should communicate with each other in order to perform business operations, which are sometimes cross-microservice. Think of the typical example of a payment microservice and an order microservice in an online shop. So it is fine that they know about each other and communicate, but this communication has to be designed very carefully in order to avoid some common pitfalls.
They usually communicate with each other through direct calls over HTTP (or some other protocol) or through a message queue like Apache Kafka or RabbitMQ. You can read more about it in this answer.
Yes, you should cover the orchestration part in a separate service. And yes, go with a BPMN 2 process engine as the orchestrator, as you already suspected. This may include writing a little code, mostly for data mapping or connectors.
Benefits include, for instance, out-of-the-box support for:
state management and long-running processes / persistence for data
versioning (!)
retries and error handling
tooling to modify state and data in case something went wrong
timeouts, parallel execution (if necessary)
scalability
graphical process model
audit trail
end-to-end visibility in monitoring tools based on the BPMN 2 model
ability to include business rules tasks (DMN) for more complex rules
combination of push and pull communication patterns and sync/async communication
business-IT alignment via BPMN 2
support for the various BPMN 2 events
standardization (skills, security, software quality, features)
...
This is a great related article about the WHY, using airline ticket booking as an example:
https://blog.bernd-ruecker.com/3-common-pitfalls-in-microservice-integration-and-how-to-avoid-them-3f27a442cd07
This is about the important design considerations in case you go with a process engine:
https://blog.bernd-ruecker.com/the-microservice-workflow-automation-cheat-sheet-fc0a80dc25aa
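To make the idea more concrete, here is a minimal, hedged sketch assuming Camunda 7 embedded in a Spring Boot orchestration service: the service starts a BPMN process by key, and a JavaDelegate does the data "massaging" between two downstream microservices. The process key, variable names, bean name, and service URL are hypothetical placeholders, not part of any real model.

import java.util.Map;

import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class OrderWorkflowStarter {

    private final RuntimeService runtimeService;

    public OrderWorkflowStarter(RuntimeService runtimeService) {
        this.runtimeService = runtimeService;
    }

    // Starts the (hypothetical) "orderFulfilment" process defined in a BPMN model
    public void start(String orderId) {
        runtimeService.startProcessInstanceByKey("orderFulfilment",
                Map.<String, Object>of("orderId", orderId));
    }
}

// Referenced from a service task in the BPMN model via the bean name "mapPaymentInput"
@Component("mapPaymentInput")
class MapPaymentInputDelegate implements JavaDelegate {

    private final RestTemplate rest = new RestTemplate();

    @Override
    public void execute(DelegateExecution execution) {
        String orderId = (String) execution.getVariable("orderId");
        // Hypothetical call to the order microservice; its response is "massaged"
        // into the shape the payment microservice expects.
        Map<?, ?> order = rest.getForObject("http://order-service/orders/" + orderId, Map.class);
        execution.setVariable("paymentAmount", order.get("total"));
    }
}

The process model itself (sequence of service tasks, retries, timers) stays in BPMN, so the Java code is limited to data mapping and connectors, as noted above.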
I faced a similar problem to the one in the original question. I had a few microservices with simple dependencies that needed to be managed, and I went down the path of writing my own microservice, https://github.com/pedro-r-marques/workflow, to manage the dependencies and do the orchestration. It uses a YAML definition file to describe the dependencies and RabbitMQ for message passing. One could also replace the use of RabbitMQ with REST API calls fronted by a load balancer.

Data Migration using Spring

We are beginning the process of re-architecting the systems within our company.
One of the key components of the work is a new data model which better meets our requirements.
A major part of the initial phase of the work is to design and build a data migration tool.
This will take data from one or more existing systems and migrate it to the new model.
Some requirements:
Transformation of data to the new model
Enrichment of data, with default values or according to business rules
Integration with existing systems to pull data
Integration with Salesforce CRM which is being introduced into the company.
Logging and notification about failures
Within the Spring world, which is the best Spring project to use as the underlying framework for such a data migration tool?
My initial thoughts are to look at implementing the tool using Spring Integration.
This would:
Through the XML or DSL, allow the high-level data flow to be seen, understood, and edited (possibly using a visual tool such as an STS plugin). Being able to view the high-level flow in this way is a big advantage.
Provide connectors to work with different data sources.
Allow transformer components to be built to migrate data formats.
Provide routers to route data in the new model to endpoints which connect with the target systems.
However, are there other Spring projects, such as Spring Data or Spring Batch, which are a better match for the requirements?
I would very much appreciate feedback and ideas.
I would certainly start with spring-integration, which exposes a bare-bones implementation of Enterprise Integration Patterns, and those patterns are at the core of most, if not all, of the requirements you listed.
It is also an exceptionally good problem-modelling tool, which helps you better understand the problem and then envision its implementation as one cohesive integration flow.
Later on, once you have a clear understanding of how things work, it would be extremely simple to take it to the next level by introducing the other frameworks you mentioned/tagged, such as spring-cloud-dataflow and spring-cloud-stream.
Overall this question is rather broad, so consider following the pointers above, getting started, and then raising more concrete questions.
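As a rough illustration of that starting point, here is a minimal sketch using the Spring Integration Java DSL. The record types, the polled source, and the two outbound handlers are hypothetical placeholders standing in for a legacy source, the new-model store, and the Salesforce integration; it only shows the pull/transform/enrich/route shape described above.

import java.time.Duration;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.core.MessageSource;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.messaging.MessageHandler;

@Configuration
public class CustomerMigrationFlow {

    record LegacyCustomerRow(String id, String name, String tier, boolean salesforceOwned) {}
    record NewCustomer(String id, String name, String tier, boolean salesforceOwned) {}

    @Bean
    public IntegrationFlow customerMigration(MessageSource<LegacyCustomerRow> legacySource,
                                             MessageHandler salesforceOutbound,
                                             MessageHandler newModelOutbound) {
        return IntegrationFlows
                // Pull records from the legacy system on a schedule
                .from(legacySource, c -> c.poller(Pollers.fixedDelay(Duration.ofSeconds(5))))
                // Transform to the new model and enrich with a default value
                .transform(LegacyCustomerRow.class, row -> new NewCustomer(
                        row.id(), row.name(),
                        row.tier() != null ? row.tier() : "STANDARD",
                        row.salesforceOwned()))
                // Route each record to Salesforce or to the new-model store
                .<NewCustomer, Boolean>route(NewCustomer::salesforceOwned,
                        r -> r.subFlowMapping(true, sf -> sf.handle(salesforceOutbound))
                              .subFlowMapping(false, sf -> sf.handle(newModelOutbound)))
                .get();
    }
}

If volumes are large or restartability matters, the same transformers can be reused inside Spring Batch steps, which is where the two projects complement each other.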

What is the typical use case of Spring Application Events?

There is an amazing mechanism in Spring: Spring Application Events. I see that it helps to build loosely coupled applications and to implement the observer and reactor patterns.
My question is: what, in a Spring application architecture, is the trigger that makes Spring Application Events absolutely unavoidable? After all, any relationship between application classes can be built using events only, just as it can be built using class associations and hierarchy only (I'm talking about a monolithic service now).
Maybe it's more of an architectural question, but what is the threshold at which it becomes necessary to consider events between objects inside a Spring application?
Can cases be clearly identified where Spring Application Events are absolutely needed?
Spring Security publishes events on certain occurrences; see the list of events.
In some cases it's much better to publish a message than to invoke a method in the classical way, e.g. if you need to invoke the same function in multiple places in the code just to notify another object.
It allows us to build a system that uses an event-driven architecture. Read more
It helps in solving producer-consumer problems.
You can send emails by publishing an event object to the ApplicationEventPublisher; see Spring Higher-Order Components and @EnableEmailSending.
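As a small, self-contained illustration (OrderCreatedEvent, OrderService, and WelcomeEmailListener are hypothetical names): one component publishes an application event via ApplicationEventPublisher, and another reacts with @EventListener, without either class referencing the other.

import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

record OrderCreatedEvent(String orderId) {}

@Service
class OrderService {

    private final ApplicationEventPublisher publisher;

    OrderService(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    public void createOrder(String orderId) {
        // ... persist the order ...
        // Notify interested components without a direct dependency on them
        publisher.publishEvent(new OrderCreatedEvent(orderId));
    }
}

@Component
class WelcomeEmailListener {

    @EventListener
    public void on(OrderCreatedEvent event) {
        // React to the event, e.g. send a confirmation email
        System.out.println("Sending confirmation for order " + event.orderId());
    }
}

The threshold, roughly, is when a direct method call would couple the caller to side effects it should not know about; with events, new listeners can be added without touching OrderService.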

Spring Boot 2. Async API. CompletableFuture vs. Reactive

My application relies heavily on asynchronous web services.
It is built with Spring Boot 1.5.x, which allows me to use the standard Java 8 CompletableFuture<T> in order to produce deferred async responses. For more info see
https://nickebbitt.github.io/blog/2017/03/22/async-web-service-using-completable-future
Spring Boot 2.0.x now comes with a starter pack that can utilize the reactive paradigm. Spring WebFlux is the framework which implements reactive HTTP.
Since I have my API implemented as described in the first paragraph, will I gain much by redoing my services to use the non-blocking reactive approach? In a nutshell, I'll have a non-blocking API either way, right?
Is there an example of how to convert an async API that is based on CompletableFuture<T> to Mono<T>/Flux<T>?
I was thinking of getting rid of the servlet-based server altogether (Jetty in my case) and going with Netty + Reactor.
Needless to say, I am new to the whole reactive paradigm.
I would like to hear your opinions.
I have two things to say:
Q: Is there an example of how to convert an async API that is based on CompletableFuture to Mono/Flux?
A:
1) You have to configure the endpoint in a slightly different way: https://docs.spring.io/spring/docs/current/spring-framework-reference/web-reactive.html
2) CompletableFuture to Mono/Flux example: Mono.fromFuture(...)
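For instance, a minimal sketch (UserService and User are hypothetical placeholders) of exposing an existing CompletableFuture-based service through a WebFlux endpoint with Mono.fromFuture:

import java.util.concurrent.CompletableFuture;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;

@RestController
class UserController {

    private final UserService userService; // existing async service, unchanged

    UserController(UserService userService) {
        this.userService = userService;
    }

    @GetMapping("/users/{id}")
    Mono<User> user(@PathVariable String id) {
        // Wrap the CompletableFuture so WebFlux can subscribe to it
        return Mono.fromFuture(userService.findById(id));
    }

    interface UserService {
        CompletableFuture<User> findById(String id);
    }

    record User(String id, String name) {}
}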
As for the question "will I gain much by redoing my services to use the non-blocking reactive approach": the general answer is provided in the documentation (https://docs.spring.io/spring-framework/docs/current/reference/html/web-reactive.html#webflux-performance), and it is no.
Performance has many characteristics and meanings. Reactive and non-blocking generally do not make applications run faster. They can, in some cases (for example, if using the WebClient to run remote calls in parallel). On the whole, it requires more work to do things the non-blocking way, and that can slightly increase the required processing time.
The key expected benefit of reactive and non-blocking is the ability to scale with a small, fixed number of threads and less memory. That makes applications more resilient under load, because they scale in a more predictable way. In order to observe those benefits, however, you need to have some latency (including a mix of slow and unpredictable network I/O). That is where the reactive stack begins to show its strengths, and the differences can be dramatic.
This is the general answer, but the specifics will depend on your case, and you must measure and see. I would start by recreating a simple part of the application and checking the performance of both approaches in an isolated environment.
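To make the "WebClient remote calls in parallel" case from that quote concrete, here is a small hedged sketch (the base URL, paths, and ProfileClient name are made up): two remote calls run concurrently and their results are combined without blocking a thread.

import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

class ProfileClient {

    // Hypothetical backend base URL
    private final WebClient client = WebClient.create("http://example-backend");

    Mono<String> profilePage(String userId) {
        Mono<String> user = client.get().uri("/users/{id}", userId)
                .retrieve().bodyToMono(String.class);
        Mono<String> orders = client.get().uri("/users/{id}/orders", userId)
                .retrieve().bodyToMono(String.class);
        // Both requests are in flight at the same time; the result is combined
        // only when both complete.
        return Mono.zip(user, orders, (u, o) -> u + o);
    }
}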

Will Reactor provide remoting?

I'm trying to find out whether we should use Akka or Reactor for our next project. One of the most important questions is whether the future framework of choice will provide remoting. As far as I can see, Akka offers this just the way we'd like to have it.
In the GitHub wiki, unfortunately, the TCP server/client sections are blank, and I couldn't find any other information about that yet.
Will Reactor provide remoting?
I don't think Akka and Reactor are an apples-to-apples comparison. Reactor is purposely minimal, with only a couple of external dependencies. It gives you a basic set of tools with which to write event-driven applications, but by design it does not enforce a particular model. It would actually not take that long to implement a Dynamo system using Reactor components. Everything that would be needed is there, and it would likely only take writing a tutorial to show how to wire things together.
The Dynamo model that Akka uses is a proven system. Basho has done a fantastic implementation of it in Riak, and it's great to see Akka following their lead in that respect. If we were to implement a Reactor clustering system, it would likely follow the Dynamo model. But since a Reactor is basically just event handlers and topic pub/sub, your Consumers can do any remote communication you want. They can integrate with HTTP, AMQP, Redis, whatever. There's no need for special APIs for this sort of thing because they're just events. You can code up an AMQP client application in about 10 minutes and be publishing data from RabbitMQ into a Reactor application.
We might very well at some point have different clustering implementations for different purposes. The Dynamo model might work well for some, while others would want a simple Redis-based system. Or maybe one could leverage the components already in Reactor to work with the Java Chronicle to do disk-based clustering, something you can do right now just by wiring up the right Consumers. But those will live in external modules that can be added to Reactor. reactor-core itself will likely never have an opinionated clustering solution, simply because it doesn't fit the purpose of those core components: a foundational framework for event-driven applications on the JVM.
(I'm working on the TcpClient/TcpServer wiki docs right now, so hopefully those will be filled in for the M2 release of Reactor, which will be happening very soon.)

Resources