GWT RPC - Parallel asynchronous calls - ajax

I have a list of promises that need to be executed in parallel and asynchronously. Say I have:
List<Promise<X>> list;
Once all the parallel requests complete, I need to make another request, say "Y". Here is my GWT code:
GQuery.when(list).done(...).fail(..)
But the above doesn't seem to work. How can I pass a list of promises to GQuery? Is the above syntax valid?

If you create a sample GWT project in Eclipse, a simple asynchronous RPC call is created. You can take that as a template and change it the way you need. In the callback of the request you can issue your "Y" request.
// Set up the callback object.
AsyncCallback<List<Promise<X>>> callback = new AsyncCallback<List<Promise<X>>>() {
    public void onFailure(Throwable caught) {
        // TODO: Do something with errors.
    }

    public void onSuccess(List<Promise<X>> result) {
        // TODO: Do something with the result.
    }
};
You should also read the documentation, at least...
http://www.gwtproject.org/doc/latest/tutorial/RPC.html
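As for passing a list of promises to GQuery: if your gwtquery version exposes GQuery.when as varargs (GQuery.when(Promise...)) rather than accepting a List, one option is to unwrap the list into an array first. A minimal sketch, assuming that varargs API; verify it against your gwtquery version:

import java.util.List;

import com.google.gwt.query.client.Function;
import com.google.gwt.query.client.GQuery;
import com.google.gwt.query.client.Promise;

public class WhenAllExample {
    // Fires the follow-up "Y" request once every promise in the list settles.
    static void whenAll(List<Promise> list) {
        // GQuery.when is assumed to take varargs, so unwrap the list into an array.
        Promise[] promises = list.toArray(new Promise[list.size()]);
        GQuery.when(promises).done(new Function() {
            @Override
            public void f() {
                // All parallel requests completed; make the "Y" request here.
            }
        }).fail(new Function() {
            @Override
            public void f() {
                // At least one request failed.
            }
        });
    }
}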

Related

Mono returned by ServerRequest.bodyToMono() method not extracting the body if I return ServerResponse immediately

I am using web reactive in Spring WebFlux. I have implemented a handler function for a POST request, and I want the server to return immediately. So I have implemented the handler as below:
public class Sample implements HandlerFunction<ServerResponse> {
    public Mono<ServerResponse> handle(ServerRequest request) {
        Mono<String> bodyMono = request.bodyToMono(String.class);
        bodyMono.map(str -> {
            System.out.println("body got is " + str);
            return str;
        }).subscribe();
        return ServerResponse.status(HttpStatus.CREATED).build();
    }
}
But the print statement inside the map function is not getting called, which means the body is not being extracted.
If I do not return the response immediately and use
return bodyMono.then(ServerResponse.status(HttpStatus.CREATED).build())
then the map function is getting called.
So, how can I do processing on my request body in the background?
Please help.
EDIT
I tried using flux.share() like below:
Flux<String> bodyFlux = request.bodyToMono(String.class).flux().share();
Flux<String> processFlux = bodyFlux.map(str -> {
    System.out.println("body got is");
    try {
        Thread.sleep(1000);
    } catch (Exception ex) {
    }
    return str;
});
processFlux.subscribeOn(Schedulers.elastic()).subscribe();
return bodyFlux.then(ServerResponse.status(HttpStatus.CREATED).build());
In the above code, sometimes the map function gets called and sometimes it doesn't.
As you've found, you can't just arbitrarily subscribe() to the Mono returned by bodyToMono(), since in that case the body simply doesn't get passed into the Mono for processing. (You can verify this by putting a single() call in that Mono; it'll throw an exception, since no element will be emitted.)
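For instance, a minimal sketch of that check (the String body type matches the question's handler; the rest is illustrative):

// If the body is never passed into the Mono, single() signals an error
// ("no element emitted") instead of delivering the body.
request.bodyToMono(String.class)
    .single()
    .subscribe(
        body -> System.out.println("got body: " + body),
        err -> System.err.println("no element emitted: " + err));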
So, how can I do processing on my request body in the background?
If you really still want to just use reactor to do a long task in the background while returning immediately, you can do something like:
return request.bodyToMono(String.class).doOnNext(str -> {
    Mono.just(str).publishOn(Schedulers.elastic()).subscribe(s -> {
        System.out.println("proc start!");
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("proc end!");
    });
}).then(ServerResponse.status(HttpStatus.CREATED).build());
This approach immediately publishes the emitted element to a new Mono, set to publish on an elastic scheduler, that is then subscribed in the background. However, it's kind of ugly, and it's not really what reactor is designed to do. You may be misunderstanding the idea behind reactor / reactive programming here:
It's not written with the idea of "returning a quick result and then doing stuff in the background" - that's generally the purpose of a work queue, often implemented with something like RabbitMQ or Kafka. Its raison d'être is instead to be non-blocking, so a single thread is never idly blocked waiting for something else to complete.
The map() method isn't designed for side effects; it's designed to transform each object into another. For side effects, you want doOnNext() instead.
Reactor uses a single thread by default, so the "additional processing" in your map() method would still block that thread.
If your application is for anything more than quick demo purposes, and/or you need to make heavy use of this pattern, then I'd seriously consider setting up a proper work queue instead.
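For illustration only, here is a minimal in-process stand-in for that hand-off pattern; a real deployment would publish to a broker such as RabbitMQ or Kafka rather than use an in-memory queue:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal in-process stand-in for a work queue, for illustration only.
public class WorkQueueDemo {
    private static final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws InterruptedException {
        // One background worker draining the queue.
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String job = queue.take(); // blocks until work arrives
                    System.out.println("processing " + job);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();

        // A request handler would just enqueue and return immediately.
        queue.put("request-body-1");
        queue.put("request-body-2");
        Thread.sleep(200); // give the worker a moment before the demo exits
    }
}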
This is not possible.
Web servers (including Reactor Netty, Tomcat, etc.) clean up and recycle resources when request processing is done. This means that when your controller handler is done, the HTTP resources, the request itself, reusable buffers, etc. are recycled or closed. At that point, you cannot read from the request body anymore.
In your case, you need to read and buffer the whole request body first, then return a response and kick off a task for processing that request in a separate execution.
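A minimal sketch of that buffering-then-hand-off approach, assuming a String body; the ExecutorService here is just a stand-in for whatever separate execution mechanism you choose, and the fields belong inside your handler class:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Reactor/Spring types (Mono, ServerRequest, ServerResponse, HttpStatus)
// are assumed available in a WebFlux project.
private final ExecutorService executor = Executors.newSingleThreadExecutor();

public Mono<ServerResponse> handle(ServerRequest request) {
    return request.bodyToMono(String.class)           // read and buffer the whole body first
        .doOnNext(body -> executor.submit(() -> {     // then hand it off to another thread
            System.out.println("processing " + body); // hypothetical long-running task
        }))
        .then(ServerResponse.status(HttpStatus.CREATED).build());
}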

Long computation AJAX causing duplicate controller Play Framework controller action calls

Basic Problem:
If I make an AJAX call to a controller method that performs a long computation (60 seconds or greater), I get a duplicate thread that comes in and follows the same path of execution (as best as I can tell from stack trace dumps, and only one; the duplication doesn't continue with a third thread). This appears to happen only when the controller action is called via AJAX. It can easily be replicated by creating a dummy controller method with nothing in it but a Thread.sleep() call that returns when finished.
I've tested this with a method that's loaded without an AJAX call, and it doesn't produce the rogue thread. I tried various forms of AJAX calls (several jQuery methods and plain JavaScript) and got the same result with each. I initially thought it might be a threading problem, so I implemented the dummy method using Promises (http://www.playframework.com/documentation/2.1.x/JavaAsync and http://www.playframework.com/documentation/2.1.x/JavaAkka) and AsyncResult, but it had no effect.
I know that the two threads are using the same execution context. Is that causing the problem here? Is it avoidable by moving the long computation to another context? Any ideas as to where this second, duplicate thread is coming from?
Controller Method (Long Computation):
public static Result test()
{
    Logger.debug("*** TEST Controller entry: threadId=" + Thread.currentThread().getId());
    StackTraceElement[] stack = Thread.currentThread().getStackTrace();
    for (StackTraceElement e : stack)
    {
        Logger.debug("***" + e.toString());
    }
    Promise<Void> promiseString = Akka.future(
        new Callable<Void>() {
            public Void call() {
                try
                {
                    Logger.debug("*** going to sleep: threadId=" + Thread.currentThread().getId());
                    Thread.sleep(90000);
                }
                catch (InterruptedException e)
                {
                    // swallow it whole and move on
                }
                return null;
            }
        }
    );
    Promise<Result> promiseResult = promiseString.map(
        new Function<Void, Result>() {
            public Result apply(Void voidParam) {
                return ok("done");
            }
        }
    );
    return async(promiseResult);
}

How can I handle an HttpHostConnectException from OpenDolphin client send?

Is there a way to handle the situation when a message is not delivered to the server? The Dolphin log reports the situation clearly, but I'd like to catch it from code. I was looking for some method like onError to override, analogous to onFinished:
clientDolphin.send(message, new OnFinishedHandlerAdapter() {
    @Override
    public void onFinished(List<ClientPresentationModel> presentationModels) {
        // Do something useful
    }
});
But there is nothing like that. Also, wrapping the send call in try/catch does not work (not surprising, since send does not block its caller).
I think there must be some easy way to get informed about an undelivered message, but I can't see it.
Thanks in advance for answers!
You can assign an onException handler to the ClientConnector - and you are actually supposed to do so. The exception handler is passed the exception object that was thrown during the asynchronous send action.
Below is the default handler, which even tells you what you should do ;-)
Closure onException = { Throwable up ->
    def out = new StringWriter()
    up.printStackTrace(new PrintWriter(out))
    log.severe("onException reached, rethrowing in UI Thread, consider setting ClientConnector.onException\n${out.buffer}")
    uiThreadHandler.executeInsideUiThread { throw up } // not sure whether this is a good default
}
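A rough sketch of wiring in your own handler from Java code; this assumes the connector is reachable via getClientConnector() and exposes onException as a settable Groovy Closure property, as the default handler above suggests. Verify both against your OpenDolphin version:

import groovy.lang.Closure;

// Assumption: ClientConnector exposes a settable onException Closure,
// and ClientDolphin exposes the connector via getClientConnector().
clientDolphin.getClientConnector().setOnException(new Closure<Void>(null) {
    public Void doCall(Throwable up) {
        // e.g. tell the user the server is unreachable instead of rethrowing
        System.err.println("send failed: " + up.getMessage());
        return null;
    }
});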

How can a JSF/ICEfaces component's parameters be updated immediately?

I have an ICEfaces web app which contains a component with a property linked to a backing bean variable. In theory, when the variable's value is programmatically modified, the component sees the change and updates its appearance/properties accordingly.
However, it seems that the change in the variable isn't "noticed" by the component until the end of the JSF cycle (which, from my basic understanding, is the render response phase).
The problem is, I have a long file-copy operation to perform, and I would like the inputText component to show a periodic status update. However, since the component is only updated during the render response phase, it doesn't show any output until the Java methods have finished executing, and then it shows all the changes accumulated at once.
I have tried using FacesContext.getCurrentInstance().renderResponse() and other functions, such as PushRenderer.render(String ID) to force XmlHttpRequest to initialize early, but no matter what, the appearance of the component does not change until the Java code finishes executing.
One possible solution that comes to mind is to have an invisible button somewhere that is automatically "pressed" by the bean when step 1 of the long operation completes, and by clicking it, it calls step 2, and so on and so forth. It seems like it would work, but I don't want to spend time hacking together such an inelegant solution when I would hope that there is a more elegant solution built into JSF/ICEfaces.
Am I missing something, or is resorting to ugly hacks the only way to achieve the desired behavior?
Multithreading was the missing link, in conjunction with PushRenderer and PortableRenderer (see http://wiki.icesoft.org/display/ICE/Ajax+Push+-+APIs).
I now have three threads in my backing bean: one for executing the long operation, one for polling the status, and one "main" thread for spawning the new threads and returning UI control to the client browser.
Once the main thread kicks off both the execution and polling threads, it terminates and completes the original HTTP request. My PortableRenderer is declared as PortableRenderer portableRenderer; and my init() method (called by the class constructor) contains:
PushRenderer.addCurrentSession("fullFormGroup");
portableRenderer = PushRenderer.getPortableRenderer();
For the threading part, I used implements Runnable on my class, and for handling multiple threads in a single class, I followed this StackOverflow post: How to deal with multiple threads in one class?
Here's some source code. I can't reveal the explicit source I've used, but this is a boiled-down version that doesn't reveal any confidential information. I haven't tested it, and I wrote it in gedit, so it might have a syntax error or two, but it should at least get you started in the right direction.
public void init()
{
    // This method is called by the constructor.
    // It doesn't matter where you define the PortableRenderer, as long as it's before it's used.
    PushRenderer.addCurrentSession("fullFormGroup");
    portableRenderer = PushRenderer.getPortableRenderer();
}

public void someBeanMethod(ActionEvent evt)
{
    // This is a backing bean method called by some UI event (e.g. clicking a button).
    // Since it is part of a JSF/HTTP request, you cannot call portableRenderer.render here.
    copyExecuting = true;

    // Create a status thread and start it
    Thread statusThread = new Thread(new Runnable() {
        public void run() {
            try {
                // message and progress are both linked to components, which change on a portableRenderer.render("fullFormGroup") call
                message = "Copying...";
                // initiates render. Note that this cannot be called from a thread which is already part of an HTTP request
                portableRenderer.render("fullFormGroup");
                do {
                    progress = getProgress();
                    portableRenderer.render("fullFormGroup"); // render the updated progress
                    Thread.sleep(5000); // sleep for a while until it's time to poll again
                } while (copyExecuting);
                progress = getProgress();
                message = "Finished!";
                portableRenderer.render("fullFormGroup"); // push a render one last time
            } catch (InterruptedException e) {
                System.out.println("Child interrupted.");
            }
        }
    });
    statusThread.start();

    // create a thread which runs the copy and triggers the termination of statusThread
    Thread copyThread = new Thread(new Runnable() {
        public void run() {
            File someBigFile = new File("/tmp/foobar/large_file.tar.gz");
            scriptResult = copyFile(someBigFile); // this will take a long time, which is why we spawn a new thread
            copyExecuting = false; // this will cause the statusThread's do..while loop to terminate
        }
    });
    copyThread.start();
}
I suggest looking at our Showcase Demo:
http://icefaces-showcase.icesoft.org/showcase.jsf?grp=aceMenu&exp=progressBarBean
Under the list of Progress Bar examples is one called Push. It uses Ajax Push (a feature provided with ICEfaces) to do what I think you want.
There is also a tutorial on this page called Easy Ajax Push that walks you through a simple example of using Ajax Push.
http://www.icesoft.org/community/tutorials-samples.jsf

NodeJS wait for callback to finish on event emit

I have an application written in NodeJS with Express and am attempting to use EventEmitter to create a kind of plugin architecture, with plugins hooking into the main code by listening to emitted events.
My problem comes when a plugin function makes an async request (to get data from Mongo in this case). This causes the plugin code to finish and return control to the original emitter, which then completes execution before the async request in the plugin finishes.
E.g.:
Main App:
// We want to modify the request object in the plugin
self.emit('plugin-listener', request);
Plugin:
// Plugin function listening to 'plugin-listener'; 'request' is an arg
console.log(request);

// Call to DB (async)
this.getFromMongo(some_data, function(response) {
    // this may not get called until the plugin function has finished!
});
My reason for avoiding a callback to the main code from the getFromMongo function is that there may be 0 or many plugins listening to the event. Ideally I want some way to wait for the DB stuff to finish before returning control to the main app.
Many Thanks
Using the EventEmitter for plugin/middleware management is not ideal, because you cannot ensure that the listeners are executed sequentially if they have asynchronous code. This is especially a problem when the listeners interact with each other or with the same data.
That's why, for example, connect/express middleware functions are stored in an array and executed one after the other instead of via an EventEmitter; each needs to call a next() function when it is done with its task.
You can't mix asynchronous calls with synchronous behavior. If you're going to stick with the event emitter (which may not be ideal for you, as Klovadis pointed out), you'll need to have your plugin emit an event that triggers a function in the main app containing the code that you want to 'wait' to execute. You would also have to keep track of all the plugin calls you made that you are still waiting on, so that your main code doesn't run until all the plugins have finished their MongoDB callbacks.
var callList = ['pluginArgs1', 'pluginArgs2', 'pluginArgs3'];
for (var i = 0; i < callList.length; i++) {
    self.emit('plugin-listener', callList[i], i);
}
self.on('plugin-callback', function(i) {
    callList.splice(i, 1);
    if (callList.length < 1) {
        // we're done, do something
    }
});
I had the same kind of decision to make about some events that I sometimes need to wait for before returning the response to the client, and sometimes not (when not in an HTTP request context).
The easiest way for me was to add a callback as the last argument of the event.
Stuff.emit('do_some_stuff', data, data2, callback);
In the event handler, check if there is a callback:
Stuff.on('do_some_stuff', function(data, data2, callback) {
    // stuff to do
    // ...
    if (typeof callback === "function") return callback(err, result);
});
I know that mixing events and callbacks can be messy, but that works fine for what I need.
The other solution I see is the one proposed by @redben: add an emit at the end of the event handler. The problem in an HTTP context is that you need unique keys so your events don't get mixed up if they do different stuff per user.
I haven't tried it myself, but you could use a property on the event's data object as an array of functions to be executed by the code that emitted the event:
Listeners
foo.on('your-event', function(data) {
    console.log(data);
    // Then add the asynchronous code to a callbacks array
    // in the event data object
    data.callbacks.push(function(next) {
        getFromMongo(some_data, function(err, result) { next(err); });
    });
});
Emitter
self.emit('your-event', data);
// listeners have modified the data object;
// some might have added callbacks to data.callbacks
// (suppose you use async)
async.series(data.callbacks);
This seems quite dangerous, but I have to do it anyway...
const ee = new EventEmitter();

if (ee.listeners("async-event").length > 0) {
    await new Promise((resolve) => {
        ee.emit("async-event", data1, data2, resolve);
    });
}
Otherwise, just emit the event back-and-forth.
