Spring struts2 asynchronous task - spring

I need to implement this functionality but I don't know how to design and proceed with it. Please help me.
I have to import a CSV file from the web UI. I use Struts2 (MVC) + Spring (dependency injection).
I have done this task, but now I have to do the import asynchronously, i.e. it should support many imports at a time. How can I do that?
I have done some R&D and found Quartz. Can I use this?
Consider that I have two buttons. Clicking the first button goes to a page where I can import a cat1-type CSV file containing 20k items. Clicking the second button goes to another page where I can import a cat2-type CSV file containing 20k items.
How can I implement this? Right now only one import can run at a time, but I want it to be asynchronous.

Judging from your requirements, I don't think Quartz will be needed. Quartz is a scheduler, and what you need is Spring's asynchronous task execution facilities.
Essentially, when the async method is called, control returns to the caller immediately and the invocation is handed off to Spring's TaskExecutor, which from then on controls the execution of the given method logic.
A high-level overview of your options is as follows: you will need to register a TaskExecutor bean implementation in the Spring context, and your asynchronous method will have to perform the handoff in one of two ways, either by
(XML config) wiring the TaskExecutor as a collaborator into the Spring bean containing the method you intend to execute asynchronously and calling the TaskExecutor's execute() method (see the sketch right after this list), or
(annotation config) marking the intended method with the @Async annotation. Be advised that a TaskExecutor implementation must still be registered in the Spring context.
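A minimal sketch of the first option's handoff code, assuming a hypothetical CsvImportService with a TaskExecutor wired in through configuration (the names are illustrative, not from the question):

import org.springframework.core.task.TaskExecutor;

public class CsvImportService {

    private TaskExecutor taskExecutor; // injected as a collaborator, e.g. via XML config

    public void setTaskExecutor(TaskExecutor taskExecutor) {
        this.taskExecutor = taskExecutor;
    }

    public void importCsvAsync(final byte[] fileContent) {
        // Hand the work off to the executor and return to the caller immediately.
        taskExecutor.execute(new Runnable() {
            public void run() {
                // ...parse and persist the CSV rows here...
            }
        });
    }
}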
Also, take note that should you want to return something from your asynchronous task, the return type must be Java's Future<T> interface (or an implementation of it), which is a requirement since TaskExecutor is built on the java.util.concurrent.Executor interface.
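And a sketch of the annotation route, assuming annotation-driven async execution is enabled (via <task:annotation-driven executor="..."/> in Spring 3.0 XML, or @EnableAsync in later versions); the service name is again illustrative:

import java.util.concurrent.Future;

import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.AsyncResult;
import org.springframework.stereotype.Service;

@Service
public class AsyncCsvImportService {

    // Returns to the caller (e.g. the Struts2 action) immediately;
    // the import itself runs on the configured TaskExecutor's thread pool.
    @Async
    public Future<Integer> importCsv(String category, byte[] fileContent) {
        int importedRows = 0;
        // ...parse the CSV and persist the rows here...
        return new AsyncResult<Integer>(importedRows);
    }
}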
I can't comment on Struts, though, as I've never worked with it, but as far as I can tell, Struts should have no part in realizing asynchronicity - the heavy lifting is done by Spring alone.
For a more thorough and complete look on mentioned subject, I suggest starting with the following links:
http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/htmlsingle/spring-framework-reference.html#scheduling
http://static.springsource.org/spring/docs/3.0.x/javadoc-api/
http://docs.oracle.com/javase/1.5.0/docs/api/index.html?java/util/concurrent/Executor.html

Related

Axon Event only picked up by one instance of my service instead of all (fan-out)

We have a Spring micro-service connected to Axon, with an @Component and an @EventHandler function on this class. Like this:
@Component
@AllowReplay(false)
class AClass {
    @EventHandler
    fun on(someEvent: SomeEvent) {
        // ...some handling here
    }
}
The event gets picked up normally and everything works fine, but when we run multiple instances of our service only one instance picks up the event. I understand that this has to do with the way the event processors work (https://docs.axoniq.io/reference-guide/axon-framework/events/event-processors), but I need all instances of the service to pick up the event. How can I configure this?
It pretty much depends on how you are configuring and using it.
So, by 'pick up' events, the way you describe it, I assume you mean events being handled.
In that case, and with another assumption here that you are using some sort of Tracking Event Processor (TEP), this is where this logic and responsibility lives.
In essence, a TEP is responsible for 'tracking' which events it has already received, so that it does not react to them twice.
In your scenario, it seems like your apps/instances are sharing the same database (hence sharing the same tokens), and that's why you see this behaviour.
About your 'workaround': you are just assigning names to a Processing Group (which can also be done with an annotation like this: @ProcessingGroup("processing-group-name-you-want")). If you do not assign a name, the package name is used as the default.
Every Processing Group has a tracking token behind it. In this case, you get multiple tokens and 'react' to the same event multiple times.
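For reference, a Java sketch of assigning the processing group via that annotation (the group name is an assumption):

import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;
import org.springframework.stereotype.Component;

@Component
@ProcessingGroup("my-projection") // assumed name; the package name is used if this is omitted
public class SomeEventHandler {

    @EventHandler
    public void on(SomeEvent event) {
        // ...handle the event...
    }
}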
For more info about Processing Groups, I recommend this answer.
The answer that works for me, although not the prettiest: Axon 'merges' processors based on package name so multiple instances of a service result in a single processor.
By using unique processing groups you can trick Axon into not 'merging' the different processors into one.
@Autowired
fun configure(configurer: EventProcessingConfigurer) {
    configurer.assignProcessingGroup { t: String? -> "${t}_${UUID.randomUUID()}" }
}

Should we use @Scheduled together with a controller method (like @PostMapping) in Spring Boot?

I'm using Spring Batch, but I can't find any document or tutorial about Spring's @Scheduled showing it being used alongside a controller method (annotated with @GetMapping, @PostMapping).
For example, I have this controller method
@Scheduled(cron = "0 */5 * ? * *")
@PostMapping("/create-progress-data")
public ResponseEntity<?> createSomething(@RequestBody Something request) { }
I can easily create another method that does the same thing as the body of createSomething and put it in a @Component or a @Service, but between doing that and just applying @Scheduled on top of the controller method, I don't know which one is better.
I can see that:
Using @Scheduled: the code is minimal and works just fine, but we're kind of using the controller method in the wrong way. However, I don't see how it violates the single-responsibility principle.
Creating another method and putting it in another @Component or @Service: this separates the duties of the controller method and the cron job, but duplicates the code.
PS: I need to implement it like this because we need to support two ways of triggering the job: either via an API call (the controller method) or periodically (with @Scheduled).
Please note that in this case the code of the controller method and of the expected cron job is the same.
I believe it is better to separate the two ways of launching your jobs. You can use the controller for API-based, on-demand job launches, and use the @Scheduled way for a background process that launches jobs on the defined schedule.
That said, you should take into consideration the concurrency policy for your job definitions. For example, what should your system do if a job launch request comes through the API and tries to launch the same job that has already been launched by the schedule (and could still be running at that time)? There are other use cases, but those depend on your requirements.
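A rough sketch of that separation, reusing the question's Something type and a hypothetical ProgressDataService, and assuming scheduling is enabled via @EnableScheduling (the default payload used by the scheduled run is also an assumption):

import org.springframework.http.ResponseEntity;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Shared logic lives in one place, so neither entry point duplicates it.
@Service
class ProgressDataService {
    public void createProgressData(Something request) {
        // ...launch the batch job / create the data here...
    }
}

// Entry point 1: on-demand launch through the API.
@RestController
class ProgressDataController {
    private final ProgressDataService service;

    ProgressDataController(ProgressDataService service) {
        this.service = service;
    }

    @PostMapping("/create-progress-data")
    public ResponseEntity<Void> createSomething(@RequestBody Something request) {
        service.createProgressData(request);
        return ResponseEntity.accepted().build();
    }
}

// Entry point 2: periodic launch on the schedule.
@Component
class ProgressDataScheduler {
    private final ProgressDataService service;

    ProgressDataScheduler(ProgressDataService service) {
        this.service = service;
    }

    @Scheduled(cron = "0 */5 * ? * *")
    public void createPeriodically() {
        service.createProgressData(new Something()); // assumes a default/empty payload is meaningful for the scheduled run
    }
}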

why does spring cloud stream's @InboundChannelAdapter accept no parameters?

I'm trying to use Spring Cloud Stream to send and receive messages on Kafka. The examples for this use a simple case with timestamps as the messages. I'm trying to go just one step further into a real-world application when I ran into this blocker in the InboundChannelAdapter docs:
"A method annotated with @InboundChannelAdapter can't accept any parameters"
I was trying to use it like so:
@InboundChannelAdapter(value = ChannelManager.OUTPUT)
public EventCreated createCustomerEvent(String customerId, String thingId) {
    return new EventCreated(customerId, thingId);
}
What usage am I missing? I imagine that when you want to create an event, you have some data that you want to use for that event, and so you would normally pass that data in via parameters. But "a method annotated with @InboundChannelAdapter can't accept any parameters". So how are you supposed to use this?
I understand that @InboundChannelAdapter comes from Spring Integration, which Spring Cloud Stream extends, and so Spring Integration may have a different context in which this makes sense. But it seems unintuitive to me (as does using an _INBOUND_ChannelAdapter for an output/producer/source).
Well, first of all, the @InboundChannelAdapter is defined exactly in Spring Integration, and Spring Cloud Stream doesn't extend it. That's false; not sure where you picked up that info...
This annotation builds something like a SourcePollingChannelAdapter, which provides a poller based on the scheduler and periodically calls MessageSource.receive(). Since there is no caller context and the end user can't affect that poller's behavior with their own arguments, the requirement for an empty parameter list is obvious.
The @InboundChannelAdapter is the beginning of the flow and it is active: it does its logic in the background, without your events.
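For illustration, the typical shape of an @InboundChannelAdapter source is a parameterless method driven by a poller, roughly like this (the channel name and interval are assumptions):

import java.text.SimpleDateFormat;
import java.util.Date;

import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;

@InboundChannelAdapter(value = "output", poller = @Poller(fixedDelay = "5000"))
public String timerMessageSource() {
    // Invoked by the framework every 5 seconds; the return value becomes the message payload.
    return new SimpleDateFormat("HH:mm:ss").format(new Date());
}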
If you would like to call some method with parameters and thereby trigger some flow, you should consider using @MessagingGateway: http://docs.spring.io/spring-integration/reference/html/messaging-endpoints-chapter.html#messaging-gateway-annotation
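A minimal sketch of that gateway approach, assuming a request channel named "eventsChannel" that is bound to your output:

import org.springframework.integration.annotation.MessagingGateway;

@MessagingGateway(defaultRequestChannel = "eventsChannel")
public interface EventGateway {

    // Calling this method from your own code builds a message from the
    // argument and sends it to the request channel.
    void send(EventCreated event);
}

Your business code can then call eventGateway.send(new EventCreated(customerId, thingId)) wherever it actually has the data.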
How are you expecting to call that method? I think there was a miscommunication with your statement "stream extends integration", and Artem probably understood that we extend @InboundChannelAdapter.
So, if you are actively calling this method, as it appears since you do have arguments that are passed to it, why not just use your source channel to send the data?
Usually sources do not require arguments, as they are either push-based, like the Twitter stream source that taps into Twitter, listens for events and pushes them to the source channel, or they are polled, in which case they are invoked on an interval defined via a poller.
As Artem pointed out, if your intention is to call this method from your business flow, and deal with the return value while triggering a message flow, then check the docs link from his answer.
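If you go the "send to the source channel yourself" route, a minimal Spring Cloud Stream sketch could look like this (assuming the conventional Source binding and the question's EventCreated class):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.stereotype.Service;

@Configuration
@EnableBinding(Source.class) // binds the Source.OUTPUT channel to the configured Kafka destination
class StreamConfig { }

@Service
class EventPublisher {

    @Autowired
    private Source source;

    // Called from your own business code whenever you have the data in hand.
    public void publish(String customerId, String thingId) {
        source.output().send(MessageBuilder.withPayload(new EventCreated(customerId, thingId)).build());
    }
}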

Grails service transactional behaviour

In a Grails app, the default behaviour of service methods is that they are transactional and the transaction is automatically rolled-back if an unchecked exception is thrown. However, in Groovy one is not forced to handle (or rethrow) checked exceptions, so there's a risk that if a service method throws a checked exception, the transaction will not be rolled back. On account of this, it seems advisable to annotate every Grails service class
@Transactional(rollbackFor = Throwable.class)
class MyService {
    void writeSomething() {
    }
}
Assume I have other methods in MyService, one of which only reads the DB and another which doesn't touch the DB at all. Are the following annotations correct?
@Transactional(readOnly = true)
void readSomething() {}

// Maybe this should be propagation = Propagation.NOT_SUPPORTED instead?
@Transactional(propagation = Propagation.SUPPORTS)
void dontReadOrWrite() {}
In order to answer this question, I guess you'll need to know what my intention is:
If an exception is thrown from any method and there's a transaction in progress, it will be rolled back. For example, if writeSomething() calls dontReadOrWrite(), and an exception is thrown from the latter, the transaction started by the former will be rolled back. I'm assuming that the rollbackFor class-level attribute is inherited by individual methods unless they explicitly override it.
If there's no transaction in progress, one will not be started for methods like dontReadOrWrite
If no transaction is in progress when readSomething() is called, a read-only transaction will be started. If a read-write transaction is in progress, it will participate in this transaction.
Your code is right as far as it goes: you do want to use the Spring @Transactional annotation on individual methods in your service class to get the granularity you're looking for; you're right that you want SUPPORTS for dontReadOrWrite (NOT_SUPPORTED will suspend an existing transaction, which won't buy you anything based on what you've described and will cost your software extra cycles, so there's pain for no gain); and you're right that you want the default propagation behavior (REQUIRED) for readSomething.
But an important thing to keep in mind with Spring transactional behavior is that Spring implements transaction management by wrapping your class in a proxy that does the appropriate transaction setup, invokes your method, and then does the appropriate transaction tear-down when control returns. And (crucially), this transaction-management code is only invoked when you call the method on the proxy, which doesn't happen if writeSomething() directly calls dontReadOrWrite() as in your first bullet.
If you need different transactional behavior on a method that's called by another method in the same class, you've got two choices that I know of if you want to keep using Spring's @Transactional annotations for transaction management:
Move the method being called by the other into a different service class, which will be accessed from your original service class via the Spring proxy.
Leave the method where it is. Declare a member variable in your service class of the same type as your service class's interface and make it @Autowired, which will give you a reference to your service class's Spring proxy object. Then, when you want to invoke your method with the different transactional behavior, do it on that member variable rather than directly, and the Spring transaction code will fire as you want it to.
Approach #1 is great if the two methods really aren't related anyway, because it solves your problem without confusing whoever ends up maintaining your code, and there's no way to accidentally forget to invoke the transaction-enabled method.
Approach #2 is usually the better option, assuming that your methods are all in the same service for a reason and that you wouldn't really want to split them out. But it's confusing to a maintainer who doesn't understand this wrinkle of Spring transactions, and you have to remember to invoke it that way in each place you call it, so there's a price to it. I'm usually willing to pay that price to not splinter my service classes unnaturally, but as always, it'll depend on your situation.
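A bare-bones Java/Spring sketch of approach #2 (the same idea carries over to a Grails service; the names are illustrative):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class MyService {

    // Reference to this service's own Spring proxy, so that nested calls
    // go through the transaction interceptor instead of bypassing it.
    @Autowired
    private MyService self;

    @Transactional(rollbackFor = Throwable.class)
    public void writeSomething() {
        // Calling through the proxy, so SUPPORTS is honoured for the nested call.
        self.dontReadOrWrite();
    }

    @Transactional(propagation = Propagation.SUPPORTS)
    public void dontReadOrWrite() {
        // ...
    }
}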
I think that what you're looking for is more granular transaction management, and using the @Transactional annotation is the right direction for that. That said, there is a Grails Transaction Handling Plugin that can give you the behavior you're looking for. The caveat is that you will need to wrap your service method calls in a DomainClass.withTransaction closure and supply the non-standard behavior you're looking for as a parameter map to the withTransaction() method.
As a note, on the back end this does exactly what you're talking about above by using the @Transactional annotation to change the behavior of the transaction at runtime. The plugin documentation is excellent, so I don't think you'll find yourself without sufficient guidance.
Hope this is what you're looking for.

Controller (Spring Managed Bean) Scope Question: Singleton, Request or Session?

The question is a bit long since it's conceptual. I hope it's not a bad read :)
I'm working on a performance-critical Spring MVC/Tiles web app (10,000 users typical load). We load an "update employee" screen, where an employee details screen (bound to an employee business object) is loaded for updates via a MultiActionController. There are multiple tabs on this screen, but only tab1 has the updatable data. The rest of the tabs are read-only stuff, basically for reference.
Needless to say, we've decided to load these read-only tabs lazily, i.e., when each tab is activated, we fire a one-time ajax call to fetch the data from the server. We don't load everything via the update view's loading method. Remember: this is one-time, read-only data.
Now, I'm in a dilemma. I've made another multiaction controller, named "AjaxController" for handling these ajax calls. Now, my questions:
What should be the best scope for this controller?
Thoughts: If I make it request scoped, then 10,000 users together can create 10,000 instances of this bean: memory problem there. If I make it session scoped, then one will be created per user session. That means, when 10,000 users log in to the app, regardless of whether they hit the AjaxController methods, they will each have a bean in possession.
Then, is singleton the best scope for this controller?
Thoughts: A singleton bean will be created when spring boots, and this very instance will be provided throughout. Sounds good.
Should the handler methods (like fetchTab7DataInJsonFormat) be static and attached to the class?
Thoughts: In this case, can having static methods semantically conflict with the scope? For example: does scope="session"/"request" + static methods make sense? I ask because even though each user session has its own AjaxController bean, the handler methods are actually attached to the class, not the instances. Also, does scope="singleton" + static handler methods make sense?
Can I implement the singleton design pattern into AjaxController manually?
Thoughts: What if I control the creation: do the GoF singleton basically. Then what can the scope specification do? Scope session/request surely can't create multiple instances can they?
If, by whatever mechanism (bean specification/design pattern/static methods), I do manage to have one single instance of AjaxController: will these STATIC methods need to be synchronized? I think not, because even though STATIC handler methods can talk to services (which talk to DB/WS/MQ etc.) that take time, I think each request thread entering the static method will return on its own thread, right? It's not like user1 enters the static method, then user2 enters it before user1 has returned, and they both get some garbled data? This is probably silly, but I want to be sure.
I'm confused. I basically want exactly one single instance of the controller bean servicing all requests for all clients.
Critical note: the AjaxController bean is not INJECTED anywhere else; it exists in isolation. Its methods are hit via ajax calls.
If I were doing this, I would definitely make the LazyLoadController singleton without having static methods in it and without any state in it.
Also, you definitely shouldn't instantiate singletons manually, it's better to use Spring's common mechanism and let the framework control everything.
The overall idea is to avoid using any static methods and/or persistent data in controllers. The right mechanism is to use some service bean for generating the data for each request, so the controller acts as a request parameter dispatcher that fetches the data into the view. No mutable state or concurrently unsafe stuff should be allowed in a controller. If some components are user-specific, Spring's AOP system provides injection of the components based on session/request scope.
That's about good practice in doing things like that. There's something to clarify before I can give a more specific answer for your case. Did I understand it right that the typical use case will be that AjaxController passes some of the requests to LazyLoadController to get tab data? Please provide details about that in a comment or in your question, so I may update my answer.
The thing that is wrong with having static methods in a controller is that you have to manage concurrency safety by yourself, which is not just error-prone but will also reduce overall performance. Spring runs every request in its own thread, so if two concurrent calls need to use some static method and there are shared resources (so you need to use a synchronized statement or locks), one of the threads will have to wait for the other to finish working in the protected block. On the other hand, if you use stateless services and avoid holding data that may be shared across multiple calls, you get increased performance and no need to deal with concurrent data access.
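For illustration, a stateless singleton controller along those lines might look roughly like this (the handler mapping, method signature and parameter names are assumptions; the annotation-based style stands in for the MultiActionController from the question):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;

// Stateless service that does the actual work; safe to share across threads.
@Service
class TabDataService {
    public String fetchTab7DataAsJson(String employeeId) {
        // ...query the read-only data and serialize it to JSON...
        return "{}";
    }
}

// Default (singleton) scope, no static methods, no mutable fields:
// one instance safely serves all concurrent ajax requests.
@Controller
class AjaxController {

    private final TabDataService tabDataService;

    @Autowired
    AjaxController(TabDataService tabDataService) {
        this.tabDataService = tabDataService;
    }

    @RequestMapping("/ajax/tab7")
    @ResponseBody
    public String fetchTab7DataInJsonFormat(@RequestParam("employeeId") String employeeId) {
        return tabDataService.fetchTab7DataAsJson(employeeId);
    }
}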
