In Dropwizard, I need to implement asynchronous jobs and poll their status.
I have two endpoints for this in my resource:
#Path("/jobs")
#Component
public class MyController {
#POST
#Produces(MediaType.APPLICATION_JSON)
public String startJob(#Valid MyRequest request) {
return 1111;
}
#GET
#Path("/{jobId}")
#Produces(MediaType.APPLICATION_JSON)
public JobStatus getJobStatus(#PathParam("id") String jobId) {
return JobStatus.READY;
}
}
I am considering using Quartz to start the job, but only a single time and without repeating, and returning the trigger status when the status is requested. But using Quartz for non-scheduled work seems odd.
Is there a better approach for this? Does Dropwizard itself provide better tools? I'd appreciate any advice.
UPDATE: I also looked at https://github.com/gresrun/jesque, but I cannot find any way to poll the status of a running job.
You can use the Managed interface. In the snippet below I am using a ScheduledExecutorService to execute jobs, but you can use Quartz instead if you like. I prefer working with ScheduledExecutorService as it is simpler and easier.
The first step is to register your managed service.
environment.lifecycle().manage(new JobExecutionService());
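For context, this registration usually happens in your Dropwizard Application's run method. Here is a minimal sketch; the MyApplication and MyConfiguration names are illustrative (not from the original answer), and the imports assume Dropwizard 1.x/2.x package names:
import io.dropwizard.Application;
import io.dropwizard.setup.Environment;

public class MyApplication extends Application<MyConfiguration> {

    @Override
    public void run(MyConfiguration configuration, Environment environment) {
        // Register the managed service so it starts and stops with the server lifecycle.
        environment.lifecycle().manage(new JobExecutionService());
    }

    public static void main(String[] args) throws Exception {
        new MyApplication().run(args);
    }
}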
The second step is to write it.
/**
 * A wrapper around the ScheduledExecutorService so all jobs can start when the server starts, and
 * automatically shut down when the server stops.
 * @author Nasir Rasul {@literal nasir@rasul.ca}
 */
public class JobExecutionService implements Managed {

    private final ScheduledExecutorService service = Executors.newScheduledThreadPool(2);

    @Override
    public void start() throws Exception {
        System.out.println("Starting jobs");
        service.scheduleAtFixedRate(new HelloWorldJob(), 1, 1, TimeUnit.SECONDS);
    }

    @Override
    public void stop() throws Exception {
        System.out.println("Shutting down");
        service.shutdown();
    }
}
and the job itself
/**
 * A very simple job which just prints the current time in milliseconds.
 * @author Nasir Rasul {@literal nasir@rasul.ca}
 */
public class HelloWorldJob implements Runnable {

    /**
     * When an object implementing interface <code>Runnable</code> is used
     * to create a thread, starting the thread causes the object's
     * <code>run</code> method to be called in that separately executing
     * thread.
     * <p>
     * The general contract of the method <code>run</code> is that it may
     * take any action whatsoever.
     *
     * @see Thread#run()
     */
    @Override
    public void run() {
        System.out.println(System.currentTimeMillis());
    }
}
As mentioned in the comment below, if you use Runnable, you can call Thread.getState(). Please refer to Get a List of all Threads currently running in Java. You may still need some intermediary pieces depending on how you're wiring your application.
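To tie this back to the original two endpoints (start a job via POST, poll its status via GET), one simple intermediary piece is an in-memory registry of submitted jobs. The sketch below is only an illustration; the JobRegistry class, its method names, and the string statuses are made up for the example and are not part of Dropwizard or the original answer.
import io.dropwizard.lifecycle.Managed;
import java.util.UUID;
import java.util.concurrent.*;

public class JobRegistry implements Managed {

    private final ExecutorService executor = Executors.newFixedThreadPool(2);
    private final ConcurrentMap<String, Future<?>> jobs = new ConcurrentHashMap<>();

    // Called from the POST endpoint: submits the job and returns an id the client can poll.
    public String submit(Runnable job) {
        String jobId = UUID.randomUUID().toString();
        jobs.put(jobId, executor.submit(job));
        return jobId;
    }

    // Called from the GET endpoint: derives a coarse status from the Future.
    public String status(String jobId) {
        Future<?> future = jobs.get(jobId);
        if (future == null) {
            return "UNKNOWN";
        }
        return future.isDone() ? "READY" : "RUNNING";
    }

    @Override
    public void start() {
        // Nothing to do on startup; the executor is created eagerly.
    }

    @Override
    public void stop() {
        executor.shutdown();
    }
}
The resource would then call registry.submit(...) from startJob and registry.status(jobId) from getJobStatus.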
I'm working on a Spring Boot app that includes a task that's executed on a schedule. It typically takes about two to three minutes to run.
@Scheduled(cron = "* */30 * * * *")
public void stageOfferUpdates() throws SQLException {
    ...
}
We have a requirement to be able to kick off the execution of that task at any time by calling a REST endpoint. Is there a way my @GET method can programmatically kick this off and immediately return an HTTP 200 OK?
So you just want to trigger an async task without waiting for results. Because you are using Spring, the #Async annotation is an easy way to achieve the goal.
@Async
public void asyncTask() {
    stageOfferUpdates();
}
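One caveat worth adding: @Async only takes effect when Spring's async support is enabled and the method is invoked through the Spring proxy (i.e. from another bean, not via a call within the same class). A minimal configuration sketch, assuming a standard Spring Boot application; the AsyncConfig class name is illustrative:
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;

@Configuration
@EnableAsync
public class AsyncConfig {
    // No beans are strictly required here; @EnableAsync switches on Spring's
    // async method execution so that @Async methods run on a task executor
    // instead of the caller's thread.
}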
Couldn't you just run the method in another thread:
executor.execute(() -> {
    stageOfferUpdates();
});
and then proceed and return 200?
I'm using JobRunr 5.1.4 in my Spring Boot application. I have a simple service declaring a recurring job which allows for some retries. A single failing job run is not that relevant for me. Instead, I'm interested in getting notified after all runs, i.e. the initial job including all of its retries, have failed.
I thought JobRunr's JobServerFilter would be a good fit, but the onProcessed() method never gets triggered in case of an exception, only after a successful job run. And the ApplyStateFilter gets triggered on every state change, far too often for my requirement, leaving me clueless whether a change to the FAILED state was the last in a series of jobs belonging together (the initial job plus the allowed retries).
A simple example would look like this:
@Service
public class JobScheduler {

    @Job(name = "My Recurring Job", retries = 2, jobFilters = ExceptionFilter.class)
    @Recurring(id = "my-recurring-job", cron = "*/10 * * * *")
    public void recurringJob() {
        throw new RuntimeException("foo");
    }
}
A basic implementation of my JobFilter looks like this:
@Component
public class ExceptionFilter implements JobServerFilter, ApplyStateFilter {

    @Override
    public void onProcessing(Job job) {
        log.info("onProcessing: {}", job.getJobName());
        log.info(job.getJobState().getName().name());
    }

    @Override
    public void onProcessed(Job job) {
        log.info("onProcessed: {}", job.getJobName());
        log.info(job.getJobState().getName().name());
    }

    @Override
    public void onStateApplied(Job job, JobState jobState1, JobState jobState2) {
        log.info("onStateApplied: {}", job.getJobName());
        log.info("jobState1: {}", jobState1.getName().name());
        log.info("jobState2: {}", jobState2.getName().name());
    }
}
Is this use case even possible with JobRunr? Or does anyone have an idea how to solve this issue in a different way?
Thank you very much in advance for your support.
I think you're on the right track with onStateApplied from ApplyStateFilter.
You can use the following approach:
@Override
public void onStateApplied(Job job, JobState oldState, JobState newState) {
    if (isFailed(newState) && maxAmountOfRetriesReached(job)) {
        // your logic here
    }
}
onProcessed is not triggered because your job was not processed successfully (due to the failure).
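The two helper methods in that snippet are not spelled out in the answer; a possible implementation is sketched below. The calls job.getJobStates() and job.getAmountOfRetries() are assumptions modelled on JobRunr's own RetryFilter, so verify them against the Job API of the version you are using.
private boolean isFailed(JobState state) {
    return StateName.FAILED.equals(state.getName());
}

// Assumption: the retries are exhausted once the job history contains more FAILED
// states than the configured number of retries (the initial attempt plus all retries).
private boolean maxAmountOfRetriesReached(Job job) {
    long failedCount = job.getJobStates().stream()
            .filter(state -> StateName.FAILED.equals(state.getName()))
            .count();
    return failedCount > job.getAmountOfRetries();
}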
I have a scheduled task that works perfectly, like this:
@Scheduled(cron="*/5 * * * * MON-FRI")
public void doSomething() {
    // something that should execute on weekdays only
}
I want to create a REST endpoint that would start this task outside of its normal schedule.
How would I programmatically fire-and-forget this task?
You could do something really simple.
Your schedule:
@Component
@RequiredArgsConstructor
public class MySchedule {

    private final MyClassThatHasTheProcessing process;

    @Scheduled(cron = "*/5 * * * * MON-FRI")
    public void doSomething() {
        // the actual process is made by the method doHeavyProcessing
        process.doHeavyProcessing();
    }
}
Your controller:
@RestController
@RequestMapping(path = "/task")
@RequiredArgsConstructor
public class MyController {

    private final MyClassThatHasTheProcessing process;

    // the executor used to asynchronously execute the task
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    @PostMapping
    public ResponseEntity<Object> handleRequestOfStartTask() {
        // send the Runnable (implemented using a lambda) to the ExecutorService for async execution
        executor.execute(() -> {
            process.doHeavyProcessing();
        });
        // the return happens before process.doHeavyProcessing() is completed
        return ResponseEntity.accepted().build();
    }
}
This will keep your scheduled task working while also letting you trigger the task on demand by hitting your endpoint.
An HTTP 202 Accepted will be returned and the request thread released, while the ExecutorService delegates the process.doHeavyProcessing execution to another thread. In other words, it runs in a 'fire and forget' style, because the thread serving the request returns even before the delegated task has terminated.
If you don't know what an ExecutorService is, this may help.
This can be done by writing something like the code below:
@Controller
public class MyController {

    @Autowired
    MyService myService;

    @GetMapping(value = "/fire")
    @ResponseStatus(HttpStatus.OK)
    @ResponseBody
    public String fire() {
        myService.fire();
        return "Done!!";
    }
}

@Service
public class MyService {

    @Async
    @Scheduled(cron="*/5 * * * * MON-FRI")
    public void fire() {
        // your logic here
    }
}
I've got this simple bean for PerformanceMonitorInterceptor
@Configuration
@EnableAspectJAutoProxy
@Aspect
public class PerfMetricsConfiguration {

    /**
     * Monitoring pointcut.
     */
    @Pointcut("execution(* com.lapots.breed.judge.repository.*Repository.*(..))")
    public void monitor() {
    }

    /**
     * Creates instance of performance monitor interceptor.
     * @return performance monitor interceptor
     */
    @Bean
    public PerformanceMonitorInterceptor performanceMonitorInterceptor() {
        return new PerformanceMonitorInterceptor(true);
    }

    /**
     * Creates instance of performance monitor advisor.
     * @return performance monitor advisor
     */
    @Bean
    public Advisor performanceMonitorAdvisor() {
        AspectJExpressionPointcut pointcut = new AspectJExpressionPointcut();
        pointcut.setExpression("com.lapots.breed.judge.repository.PerfMetricsConfiguration.monitor()");
        return new DefaultPointcutAdvisor(pointcut, performanceMonitorInterceptor());
    }
}
It is supposed to trace any method invocation on interfaces whose names end with Repository.
I set the logging level in application.properties:
logging.level.org.springframework.aop.interceptor.PerformanceMonitorInterceptor=TRACE
But during execution it doesn't write anything to the console. What's the problem?
I was facing a similar issue; after changing useDynamicLogger to false, the issue was fixed.
@Bean
public PerformanceMonitorInterceptor performanceMonitorInterceptor() {
    return new PerformanceMonitorInterceptor(false);
}
Faced the same issue. As Manzoor suggested, passing false to PerformanceMonitorInterceptor solves the problem.
Why? When you call new PerformanceMonitorInterceptor(true), the logger name used inside PerformanceMonitorInterceptor will be the name of the intercepted class, e.g. com.lapots.breed.judge.repository.SomeClass.
So in your particular case the following logging configuration is required:
logging.level.com.lapots.breed.judge.repository=TRACE
Otherwise you do not see any logs, the breakpoint on PerformanceMonitorInterceptor.invokeUnderTrace() will not be hit, and you spend a lot of time thinking you have a wrong AOP configuration (while actually it's fine), when you simply did not set the logging level for the proper class/package.
I am using Spring logging (SLF4J). Instead of setting the PerformanceMonitorInterceptor logger to TRACE, I set the com.lapots.breed.judge.repository logger to TRACE.
This started printing logs for me.
I did this because the method below in AbstractTraceInterceptor checks whether TRACE is enabled on the class being executed (the repository), not on PerformanceMonitorInterceptor.
protected boolean isLogEnabled(Log logger) {
    return logger.isTraceEnabled();
}
I just tried this: I simply added the line below to application.properties and it works:
logging.level.org.springframework.aop.interceptor.PerformanceMonitorInterceptor=trace
I have a simple gRPC client as follows:
/**
 * Client that calls gRPC.
 */
public class Client {

    private static final Context.Key<String> URI_CONTEXT_KEY =
            Context.key(Constants.URI_HEADER_KEY);

    private final ManagedChannel channel;
    private final DoloresRPCStub asyncStub;

    /**
     * Construct client for accessing gRPC server at {@code host:port}.
     * @param host
     * @param port
     */
    public Client(String host, int port) {
        this(ManagedChannelBuilder.forAddress(host, port).usePlaintext(true));
    }

    /**
     * Construct client for accessing gRPC server using the existing channel.
     * @param channelBuilder {@link ManagedChannelBuilder} instance
     */
    public Client(ManagedChannelBuilder<?> channelBuilder) {
        channel = channelBuilder.build();
        asyncStub = DoloresRPCGrpc.newStub(channel);
    }

    /**
     * Closes the client.
     * @throws InterruptedException
     */
    public void shutdown() throws InterruptedException {
        channel.shutdown().awaitTermination(5, TimeUnit.SECONDS);
    }

    /**
     * Main async method for communication between client and server.
     * @param responseObserver user's {@link StreamObserver} implementation to handle
     * responses received from the server.
     * @return {@link StreamObserver} instance to provide requests into
     */
    public StreamObserver<Request> downloading(StreamObserver<Response> responseObserver) {
        return asyncStub.downloading(responseObserver);
    }

    public static void main(String[] args) {
        Client cl = new Client("localhost", 8999); // fail??
        StreamObserver<Request> requester = cl.downloading(new StreamObserver<Response>() {
            @Override
            public void onNext(Response value) {
                System.out.println("On Next");
            }

            @Override
            public void onError(Throwable t) {
                System.out.println("Error");
            }

            @Override
            public void onCompleted() {
                System.out.println("Completed");
            }
        }); // fail ??
        System.out.println("Start");
        requester.onNext(Request.newBuilder().setUrl("http://my-url").build()); // fail?
        requester.onNext(Request.newBuilder().setUrl("http://my-url").build());
        requester.onNext(Request.newBuilder().setUrl("http://my-url").build());
        requester.onNext(Request.newBuilder().setUrl("http://my-url").build());
        System.out.println("Finish");
    }
}
I don't start any server and just run the main method. I would expect the program to fail on:
client creation
client.downloading call
or observer.onNext
but surprisingly (for me), the code runs successfully; only the messages get lost. The output is:
Start
Finish
Error
Because of the asynchronous nature, Finish can be printed even before an error is propagated, at least through the response observer. Is that the desired behavior? I can't afford to lose any messages. Am I missing something?
Thank you, Adam
This is the intended behavior. As you mentioned the API is asynchronous and so errors must generally be asynchronous as well. gRPC does not guarantee message delivery and in the case of a streaming RPC failure does not indicate which messages were received by the remote side. The advanced ClientCall API calls this out.
If you need stronger guarantees it must be added at the application-level, such as with replies or with a Status of OK. As an example, in gRPC + Image Upload I mention using a bidirectional stream for acknowledgements.
Creating a ManagedChannelBuilder does not error because the channel is lazy: it only creates a TCP connection when necessary (and reconnects when necessary). Also since most failures are transient, we wouldn't want to prevent all future RPCs on the channel just because your client happened to start when the network was broken.
Since the API is asynchronous already, grpc-java can purposefully throw away messages when sending even when it knows an error has occurred (i.e., it chooses not to throw). Thus almost all errors are delivered to the application via onError().
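If the immediate goal is just to observe the failure before main() exits (rather than to guarantee delivery), a common pattern is to block until the call reaches a terminal callback. This is a minimal sketch, not from the original answer, reusing the Client, Request and Response types from the question:
// Illustrative sketch: wait for onError()/onCompleted() so the asynchronous
// failure is seen before the program finishes instead of being lost at shutdown.
CountDownLatch done = new CountDownLatch(1);

StreamObserver<Request> requester = cl.downloading(new StreamObserver<Response>() {
    @Override
    public void onNext(Response value) {
        System.out.println("On Next");
    }

    @Override
    public void onError(Throwable t) {
        System.out.println("Error: " + t);
        done.countDown(); // terminal callback
    }

    @Override
    public void onCompleted() {
        System.out.println("Completed");
        done.countDown(); // terminal callback
    }
});

requester.onNext(Request.newBuilder().setUrl("http://my-url").build());
requester.onCompleted();

// Block until the RPC terminates (either completed or failed), up to 10 seconds.
done.await(10, TimeUnit.SECONDS);
As the answer explains, this only makes the error visible; application-level acknowledgements are still needed if no message may be lost.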