I need some ideas on how to implement a simple undo function. It's a Spring web app with a JS frontend and the typical Spring REST API. The undo feature should work very similarly to the undo in Google Keep, e.g. the user clicks delete, the UI appears to delete the object, a small popup appears in the corner with an undo link for approximately 10 seconds, after which it disappears and undo is no longer available.
My initial thoughts were to use an async message queue. Perhaps a commandQueue and an undoCommandQueue. When a message enters the commandQueue it is delayed for 10 seconds. After the delay a service activator receives the message and checks whether there is a corresponding undo message in the undoCommandQueue with a matching ID; if not, it proceeds with the delete.
If possible (and not significantly more complicated) I'd like to avoid using a message queue. Perhaps something like an async delete method that has a 10-second sleep built in. But then how do I notify this async method, once it's done sleeping, that a completely separate undo REST API call was made in the meantime?
I know the command pattern is the de facto approach for undo, but given that I only want one level of undo for a very short amount of time, that seems like overkill. The simplest solution would likely be to delay the delete API call altogether using JavaScript on the frontend, but that is an issue in some scenarios, like immediately closing the browser (so the API never gets called).
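To make that frontend-delay idea concrete, here is a minimal sketch (TypeScript; the endpoint and UI helper names are made up, and the browser-close caveat above still applies): the client removes the item from the UI immediately, keeps a cancellable timer per item, and only sends the real DELETE when the timer fires.

// Hypothetical sketch of the frontend-delay approach (endpoint names are made up).
// If the tab closes before the timeout fires, the DELETE request is never sent.
const pendingDeletes = new Map<string, ReturnType<typeof setTimeout>>();

function deleteWithUndo(noteId: string): void {
  hideNoteInUi(noteId);                       // optimistic UI removal
  showUndoToast(noteId, 10_000);              // "Undo" popup for ~10 seconds

  const timer = setTimeout(async () => {
    pendingDeletes.delete(noteId);
    await fetch(`/api/notes/${noteId}`, { method: 'DELETE' }); // real delete
  }, 10_000);

  pendingDeletes.set(noteId, timer);
}

function undoDelete(noteId: string): void {
  const timer = pendingDeletes.get(noteId);
  if (timer !== undefined) {
    clearTimeout(timer);                      // cancel the pending delete
    pendingDeletes.delete(noteId);
    restoreNoteInUi(noteId);
  }
}

// UI helpers assumed to exist elsewhere.
declare function hideNoteInUi(id: string): void;
declare function restoreNoteInUi(id: string): void;
declare function showUndoToast(id: string, ms: number): void;

A server-side variant would look much the same: a scheduled task keyed by ID instead of setTimeout, which the undo endpoint looks up and cancels.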
Anyone done anything like this before? Any suggestions? Thanks!
I've read a lot about Event::queue but I just can't get my head around it. I have something like:
Event::listen('send_notification');
and in the controller I use
Event::fire('send_notification');
But because this takes some time before the user is sent somewhere else, I instead want to use
Event::queue('send_notification');
To fire the event after the user has been redirected, but I don't know how.
(In app/config/app.php I have the queue driver set to sync.)
EDIT:
A small note about firing the event: you can do all your work just like normal, put all the Event::flush() calls in a filter, and then call that filter through ->after() or afterFilter().
First, let me make something clear: Event::queue has nothing to do with the Queue facade and the queue driver in the config. It won't enable you to fire the event after the request has finished.
But you can delay the firing of an event and therefore "prepare" it.
The usage is pretty basic. Obviously you need one or more Event::listen calls (well, it works without them, but then it makes no sense at all):
Event::listen('send_notification', function($text){
// send notification
});
Now we queue the event:
Event::queue('send_notification', array('Hello World'));
And finally, fire it by calling flush
Event::flush('send_notification');
In your comment you asked about flushing multiple events at once. Unfortunately that's not really possible. You have to call flush() multiple times
Event::flush('send_notification');
Event::flush('foo');
Event::flush('bar');
If you have a lot of events to flush you might need to think about your architecture and whether it's possible to combine some of those into one event with multiple listeners.
Flushing the Event after redirect
Event::queue can't be used to fire an event after the request lifecycle has ended. You have to use "real" queues for that.
I have a class that implements a file-monitoring service to detect when a file I am interested in has been changed by something other than my application. I use the standard technique of opening the file (with the O_EVTONLY flag) and binding the file descriptor to a Grand Central Dispatch source of type DISPATCH_SOURCE_TYPE_VNODE. When I get an event, I notify my main thread with NSNotificationCenter's postNotificationName:object:userInfo: which calls an observer in my app delegate. So far so good. It works great. But, in general, if the triggering event is an attributes change (i.e. the DISPATCH_VNODE_ATTRIB flag is set on return from dispatch_source_get_data()) then I usually get two closely-spaced events. The behaviour is easily exhibited if I touch(1) the object I am monitoring. I hypothesise this is due to the file's mtime and atime being set non-atomically although I can't verify this. This can lead to spurious notifications being sent to my observer and this raises the possibility of race conditions etc.
What is the best way of dealing with this? I thought of storing a timestamp for the last event received and only sending a notification if the current event is later than this timestamp by some amount (a few tens of milliseconds?) Does this sound like a reasonable solution?
You can't ever escape the "race condition" in this situation, because the notification of your GCD event source in your process is not synchronous with the other process's modification of the underlying file. So, no matter what, you must always be tolerant of the possibility that the change you're being notified for could already be "gone."
As for coalescing, do whatever makes sense for your app. There are two obvious strategies. You can act immediately on a received event, and then drop subsequent events received in some time window on the floor, or you can delay every event for some time period during which you will drop other events for the same file on the floor. It really just depends on what's more important, acting quickly, or having a higher likelihood of a quiescent state (knowing that you can never be sure things are quiescent.)
The only thing I would add is to suggest that you do all your coalescing before dispatching anything to the main thread. The main thread has things like tracking loops, etc., that will make it harder to get time-based coalescing right in certain cases.
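Stripped of the GCD specifics, both strategies above are just time-based coalescing. A rough, platform-neutral sketch (TypeScript used here purely for illustration; handle stands in for whatever work you do per file-change event):

// Strategy 1: act immediately, then drop further events for a quiet window.
function leadingEdgeCoalescer(handle: () => void, windowMs: number) {
  let blockedUntil = 0;
  return () => {
    const now = Date.now();
    if (now >= blockedUntil) {
      blockedUntil = now + windowMs;
      handle();                 // act on the first event right away
    }                           // later events inside the window are dropped
  };
}

// Strategy 2: delay every event, dropping earlier ones while new ones arrive.
function trailingEdgeCoalescer(handle: () => void, windowMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return () => {
    if (timer !== undefined) clearTimeout(timer);  // drop the pending one
    timer = setTimeout(() => {
      timer = undefined;
      handle();                 // act once things have been quiet for windowMs
    }, windowMs);
  };
}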
Background
I'm working on a web application utilizing AJAX to fetch content/data and what have you - nothing out of the ordinary.
On the server side certain events can happen that the client-side JavaScript framework needs to be notified about, and vice versa. These events are not always related to the user's immediate actions. It is not an option to wait for the next page refresh to include them in the document, or to stick them in some hidden fields, because the user might never submit a form.
Right now it is designed in such a way that events to and from the server ride along with the user's requests. For instance, if the user clicks a 'view details' link, this fires a request to the server to fetch some HTML or JSON with details about the clicked item. Along with this request, or rather the response, a server-side (invoked) event will be returned with the content.
Question/issue 1:
I'm unsure how to control the queue of events going to the server. They can ride along with user-invoked requests, but if those never occur the events will get lost. I imagine having a timer set up to send these events to the server in case the user does not perform some action. What do you think?
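A hedged sketch of that idea (all names are made up): keep a client-side outbox that normal requests drain, with a timer as a fallback so queued events are not lost while the user is idle.

// Hypothetical client-side event outbox, piggy-backed on normal requests
// and flushed by a fallback timer when the user is idle.
interface ClientEvent { type: string; payload?: unknown; }

const outbox: ClientEvent[] = [];
const FLUSH_INTERVAL_MS = 15_000;   // fallback flush period (assumption)

export function queueEvent(event: ClientEvent): void {
  outbox.push(event);
}

// Called by the normal AJAX helper: attach any pending events to the request.
export function drainOutbox(): ClientEvent[] {
  return outbox.splice(0, outbox.length);
}

// Fallback: if no user-triggered request has drained the outbox, push it
// to a dedicated endpoint on a timer so events cannot get lost.
setInterval(() => {
  const events = drainOutbox();
  if (events.length > 0) {
    void fetch('/events', {              // endpoint name is an assumption
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(events),
    });
  }
}, FLUSH_INTERVAL_MS);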
Question/issue 2:
With regard to the responses, some being requested as HTML and some as JSON, it is a bit tricky, as I would have to somehow wrap all this data to allow both formalized (and unrelated) events and perhaps HTML content, depending on the request, to be returned to the client. Any suggestions? Anything I should be aware of, for instance when returning HTML content wrapped in a JSON bundle?
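One possible shape for the response, sketched with hypothetical names: a JSON envelope in which the requested content (HTML fragment or data) and any unrelated server-side events travel side by side, and the client dispatches each part separately.

// Hypothetical response envelope: requested content plus unrelated server events.
interface ServerEvent { type: string; payload?: unknown; }

interface Envelope {
  html?: string;          // HTML fragment, if HTML was requested
  data?: unknown;         // JSON data, if data was requested
  events: ServerEvent[];  // server-side events riding along with the response
}

async function requestDetails(itemId: string): Promise<void> {
  const res = await fetch(`/items/${itemId}/details`);   // URL is an assumption
  const envelope: Envelope = await res.json();

  if (envelope.html !== undefined) {
    document.getElementById('details')!.innerHTML = envelope.html;
  }
  for (const event of envelope.events) {
    handleServerEvent(event);       // hypothetical client-side event dispatcher
  }
}

declare function handleServerEvent(event: ServerEvent): void;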
Update:
Do you know of any framework that uses an approach like this, that I can look at for inspiration (that is a framework that wraps events/requests in a package along with data)?
I am tackling a similar problem to yours at the moment. On your first question, I was thinking of implementing some sort of timer on the client side that makes an asynchronous call for the content on expiry.
On your second question, I normally just return JSON representing the data I need, and then present it by manipulating the DOM. I prefer to keep things consistent.
As for best practices, I can't say for sure that what I am doing complies with any best practice, but it works for our present requirements.
You might also want to consider the performance impact of having multiple clients making asynchronous calls to your web server at regular intervals.
I am trying to create a real-time, collaborative application, like Google Wave for example.
When user1 writes something, it shows up on user2's screen at the same time.
I started a little research and found some ways to do this with Ajax:
1. Every X seconds, send a request to the server to check what is "happening".
2. Timeout/long request (long polling). Problem: I read that I can only do this with IE8.
Are there other options? What is the best way to do this?
And regarding option 2, is it true that I can only do this with IE8?
Yosy
The whole point of this kind of AJAX setup is that the server can wait for notifications from each client and notify all the other clients when something happens. There's no need for polling. Look up keywords like Comet and Bayeux. Dojo has a good implementation.
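For reference, a bare-bones long-polling loop looks roughly like this (TypeScript sketch; the URL and timings are assumptions). Comet libraries such as the Dojo/Bayeux implementation mentioned above wrap this pattern up for you.

// Hypothetical long-polling loop: the server holds the request open until
// something happens (or a timeout), then the client immediately reconnects.
async function pollForUpdates(onUpdate: (update: unknown) => void): Promise<void> {
  // Loop forever; each iteration is one long-held request.
  for (;;) {
    try {
      const res = await fetch('/updates?wait=30', { cache: 'no-store' });
      if (res.status === 200) {
        onUpdate(await res.json());   // e.g. apply the other user's edits
      }
      // 204 / server-side timeout: nothing happened, just reconnect.
    } catch {
      // Network hiccup: back off briefly before reconnecting.
      await new Promise(resolve => setTimeout(resolve, 2_000));
    }
  }
}

// Usage: pollForUpdates(update => applyRemoteEdit(update));  // applyRemoteEdit is hypothetical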
I'm not sure what you are referring to in 2, but if I were going to implement something like this, I'd do what you explain in 1. Basically your server will be keeping track of the conversation, and the clients will constantly ask for updates.
Another possible option would be Flash, but I don't know much about it other than that it would be capable, so you're on your own for researching that.
Some notes on keeping things running quickly in option 1:
- Remember you only have two "ajax" calls to work with on the client side (you can only have two requests out at once), so keep track of the calls that are out. Make use of abort() if a call takes too long or its response is not going to be valid anymore (see the sketch below).
- Get the most out of your calls: if you need to send text to the server, use the response to get an update on the current "conversation".
I am working on a non-document-based Core Data application.
I would like changes to be saved as they happen. This is what the user expects in this type of application. It is also what Apple has implemented in iPhoto or iTunes.
A brute force approach would be to set up a timer to save frequently. The save method triggered by the timer would then swallow all validation errors so as not to bother the user. Only upon quitting would the user be bugged to arrange the data so it can save. IMHO, that approach stinks.
So I am thinking there must be a way to somehow hook saving into something like the NSEditor protocol. Every time the user (or a controller) finishes editing data, the application delegate should somehow be notified and trigger a save operation. The thing is, I don't quite know where to look.
I would think that for more complicated operations, which may require some cross-validations to go through, I would present the user with a bit of interface tied to a dedicated NSManagedObjectContext.
At the end of each event in an AppKit app, Core Data will run -processPendingChanges for you.
One side-effect of this is that if you've registered with your NSManagedObjectContext to receive change notifications, you'll get called at the end of each event.
So, for instance, in your notification handler, you could just tell the context to save.
However, you might be paranoid about doing a save on a context while in a callback from that same context, so you'd probably feel better if you used performSelector:withObject:afterDelay: with @selector(save:) to push the save until after -processPendingChanges is done.
You could even cancel the previous pending perform of the -save: selector and have the delay be something like 5 seconds, so if the user or app is in the middle of a BUNCH of changes they'll all coalesce into a single save.
And, in fact, that's exactly how Delicious Library 1.0-1.09 worked.
-Wil