I am using Kinesis to consume a stream inside a Spring Boot app. I'm using the KCL provided by AWS for this; to start it, you create a com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker instance and call .run() on it. I do not wish to use spring-integration for this task.
I want to build an abstraction here so that developers can create several workers and have them run automatically during or after application startup, and then have the respective .shutdown() method called on application termination. Right now I'm doing this by creating a @Component for each Worker, calling run() in @PostConstruct and shutdown() in @PreDestroy.
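For context, the current setup looks roughly like this (a sketch only - the class name is made up, the Worker is assumed to be built elsewhere and injected, and run() is handed to its own thread because it blocks):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import org.springframework.stereotype.Component;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker;

@Component
public class MyStreamConsumer {

    private final Worker worker;
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public MyStreamConsumer(Worker worker) {
        this.worker = worker;
    }

    @PostConstruct
    public void start() {
        // Worker implements Runnable and run() blocks, so run it on its own thread
        executor.submit(worker);
    }

    @PreDestroy
    public void stop() {
        // signal the KCL worker to stop, then release the thread
        worker.shutdown();
        executor.shutdown();
    }
}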
Is there a better way?
All,
I am developing an application which uses the Alpakka Spring Boot integration to read data from Kafka. I have most of the code ready; the only place I am stuck is how to initialize a continuously running stream, as this is going to be a backend application and won't have any API to trigger it.
As far as I know, Alpakka's Spring integration is basically designed around exposing Akka Streams via a Spring HTTP controller. So I'm not sure what purpose bringing Spring into this serves, since there's quite an impedance mismatch between the way an Akka application tends to work and the way a Spring application tends to work.
Assuming you're talking about using Alpakka Kafka, the most idiomatic thing to do would be to just start a stream fed by an Alpakka Kafka Source in your main method and it will run until killed or it fails. You may want to use a RestartSource around the consumer and business logic to ensure that in the event of failure the stream restarts (note that one should generally expect messages for which the offset commit hadn't happened to be processed again, as Kafka in typical cases can only guarantee at-least-once processing).
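A minimal sketch of that idea (treat it as a sketch only: the exact signatures of RestartSettings, RestartSource and the Alpakka Kafka javadsl vary by Akka/Alpakka version, and handle() stands in for your business logic):

import java.time.Duration;
import org.apache.kafka.common.serialization.StringDeserializer;
import akka.actor.ActorSystem;
import akka.kafka.ConsumerSettings;
import akka.kafka.Subscriptions;
import akka.kafka.javadsl.Consumer;
import akka.stream.RestartSettings;
import akka.stream.javadsl.RestartSource;

public class Main {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("kafka-consumer");

        ConsumerSettings<String, String> settings =
            ConsumerSettings.create(system, new StringDeserializer(), new StringDeserializer())
                .withBootstrapServers("localhost:9092")
                .withGroupId("my-group");

        // restart the consumer with backoff if it fails; messages whose offsets were not
        // committed may be processed again (at-least-once)
        RestartSource
            .onFailuresWithBackoff(
                RestartSettings.create(Duration.ofSeconds(1), Duration.ofSeconds(30), 0.2),
                () -> Consumer.plainSource(settings, Subscriptions.topics("my-topic")))
            .runForeach(record -> handle(record.value()), system);
    }

    private static void handle(String value) {
        // business logic goes here
    }
}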
I am working on a micro-service developed using Spring Boot. I have implemented the following layers:
Controller layer: Invoked when user sends API request
Service layer: Processes the request. Either sends a request to a third-party service or sends a request to the database
Repository layer: Used to interact with the database.
Methods in all of the above layers return a CompletableFuture. I have the following questions related to this setup:
Is it good practice to return a CompletableFuture from all methods across all layers?
Is it always recommended to use the @Async annotation when using CompletableFuture? What happens when I use the default fork-join pool to process the requests?
How can I configure the threads for the above methods? Would it be a good idea to configure a thread pool per layer? What other configurations should I consider here?
Which metrics should I focus on while optimizing performance for this micro-service?
If the work your application is doing can be done on the request thread without too much latency, I would recommend it. You can always move to an async model if you find that your web server is running out of worker threads.
The @Async annotation basically helps with scheduling. If you can, use it - it can keep the code free of references to the thread pool on which the work will be scheduled. As for what thread actually does your async work, that's really up to you. If you can, use your own pool. That will make sure you can add instrumentation and expose configuration options that you may need once your service is running.
Technically you will have two pools in play. One that Spring will use to consume the result of your future, and another that you will use to do the async work. If I recall correctly, Spring Boot will configure its pool if you don't already have one, and will log a warning if you didn't explicitly configure one. As for your worker threads, start simple. Consider using Spring's ThreadPoolTaskExecutor.
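A minimal sketch of that, assuming a made-up pool name and sizes that you would tune for your own workload:

import java.util.concurrent.CompletableFuture;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.stereotype.Service;

@Configuration
@EnableAsync
class AsyncConfig {

    @Bean(name = "serviceExecutor")
    ThreadPoolTaskExecutor serviceExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(8);
        executor.setMaxPoolSize(32);
        executor.setQueueCapacity(500);
        executor.setThreadNamePrefix("svc-");
        return executor;
    }
}

@Service
class MyService {

    @Async("serviceExecutor")
    public CompletableFuture<String> process(String request) {
        // runs on the serviceExecutor pool, not the request thread
        return CompletableFuture.completedFuture(doWork(request));
    }

    private String doWork(String request) {
        return request; // placeholder for the real business logic
    }
}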
Regarding which metrics to monitor, start first by choosing how you will monitor. Using something like Spring Sleuth coupled with Spring Actuator will give you a lot of information out of the box. There are a lot of services that can collect all the metrics Actuator generates into time-series databases, which you can then use to analyze performance and get some ideas on what to tweak.
One final recommendation: Spring WebFlux is designed from the start to be async. It has a learning curve for sure, since reactive code is very different from the usual MVC stuff. However, that framework has also thought about all the questions you are asking, so it might be better suited for your application, especially if you want to make everything async by default.
I have a use case in a Spring Boot application where we get a request, send an acknowledgement back, and then start a new executor task in the background which will do some processing and send back some result.
Now I am having some doubts while creating the runnable task. I want a new instance of this runnable task to be submitted to the executor service for every request.
Could someone clarify whether keeping the scope as "prototype" will serve my purpose, or whether the scope should be "request"? And if the latter is correct, is the default context in Spring Boot web-aware?
I also need to pass some parameters into the runnable task. Any pointers would be appreciated for both of the above problems.
TA
Spring can manage threads for you using the @Async annotation. This can be much simpler than managing them yourself if you are already using Spring.
You can read about it here: https://www.baeldung.com/spring-async
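A rough sketch of how that could look for your case - the method parameters take the place of whatever you would have passed into the runnable, and all the names here are made up (you also need @EnableAsync on a configuration class):

import java.util.UUID;
import org.springframework.http.ResponseEntity;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@Service
class BackgroundProcessor {

    @Async
    public void processRequest(String requestId, String payload) {
        // Spring submits each call to its task executor, so every request gets its own task;
        // no prototype- or request-scoped runnable bean is needed
    }
}

@RestController
class RequestController {

    private final BackgroundProcessor processor;

    RequestController(BackgroundProcessor processor) {
        this.processor = processor;
    }

    @PostMapping("/requests")
    public ResponseEntity<String> accept(@RequestBody String payload) {
        String requestId = UUID.randomUUID().toString();
        processor.processRequest(requestId, payload); // returns immediately
        return ResponseEntity.accepted().body(requestId);
    }
}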
I saw some code use a shutdown hook like this:
Runtime.getRuntime().addShutdownHook(new Thread() {
    @Override
    public void run() {
        // close the Spring context (applicationContext is a ConfigurableApplicationContext)
        applicationContext.stop();
        // close the thread pool
        threadPool.shutdownNow();
    }
});
Is there anything useful about doing it like this?
I thought that when the JVM exits, the threads would be shut down immediately and the Spring context would close too.
What should we do when we need to call System.exit()?
It really depends on your application and the lifecycle of your objects and those threads you appear to have outside of your context. If you are running the Spring container inside a standalone Java process, then trapping the shutdown hook like this is one way to do that. Another way is to have it listen on a TCP port and send a command to begin the shutdown process. If you are running in a web container like Tomcat, then you should follow the standard webapp shutdown procedure, which Spring supports with context listeners.
I would also consider redesigning your app so that the threads are all managed by a bean that lives inside your Spring container - for instance a bean configured with start/stop methods that uses an Executor for thread pooling. This way, your shutdown is ONLY shutting down the Spring container, and Spring provides very good support for orderly shutdown of beans; one of those beans is your object holding the threads within the Executor. It's a much cleaner way than trying to integrate Spring beans with external threads.
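Something along these lines (the names and pool size are made up) - the executor lives inside a Spring-managed bean, so closing the context shuts it down in an orderly way, and Spring Boot already registers a JVM shutdown hook that closes the context for you by default:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import org.springframework.stereotype.Component;

@Component
public class WorkerPool {

    private ExecutorService executor;

    @PostConstruct
    public void start() {
        executor = Executors.newFixedThreadPool(4);
        // submit your long-running tasks here, e.g. executor.submit(myTask)
    }

    public void submit(Runnable task) {
        executor.submit(task);
    }

    @PreDestroy
    public void stop() {
        // called by Spring when the context closes, so no hand-rolled shutdown hook is needed
        executor.shutdownNow();
    }
}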
Hope this helps.
Is there an easy/lightweight way to add persistence to Spring's JavaMailSender and have it operate asynchronously? Does Spring provide any "built-in" support for this? I'm currently looking at queues with JMS, but they seem like overkill for the task at hand (looking at ActiveMQ and RabbitMQ). Is there a lightweight JMS option?
Your approach with JMS is fine. Unfortunately, persistence and asynchronous processing are not such simple tasks and you will have to code a bit.
However, have a look at Spring Integration; it provides built-in support for JMS inbound adapters and e-mail outbound adapters - all you have to do is connect the pieces via its XML DSL.
If you want to make any method in Spring asynchronous, all you need to do is configure the task namespace in the XML config via <task:annotation-driven/>. Then you just annotate the method with @Async and it will run in its own thread. Note that an async call will run in its own transaction, as Spring grabs a new thread from its internal pool to service the call. If you do this, then you don't need JMS for asynchronous processing.
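With <task:annotation-driven/> in place, the mail side could look something like this (a sketch only; MailService is a made-up name, and it does not add persistence - it just moves the send off the calling thread):

import org.springframework.mail.SimpleMailMessage;
import org.springframework.mail.javamail.JavaMailSender;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

@Service
public class MailService {

    private final JavaMailSender mailSender;

    public MailService(JavaMailSender mailSender) {
        this.mailSender = mailSender;
    }

    @Async
    public void send(SimpleMailMessage message) {
        // runs on a thread from Spring's task executor, so the caller returns immediately
        mailSender.send(message);
    }
}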