Safari extension context menu item command event is firing twice

I have developed an extension for Safari which uses a context menu.
In the code, I am listening to the command event of the context menu item using:
safari.application.addEventListener("command", commandHandler, false);
In the commandHandler() function, I added an alert statement for debugging. That showed me that commandHandler() fires twice every time I click the context menu item.
I also added a toolbar item, which fires the command event when clicked. The handler attached to that item also fires twice per click.
Does anybody know of this issue and how to resolve it?

Without more information, this sounds like insufficient filtering. That is, you're receiving every command message without checking which command it is or where it came from, so your handler sees two messages per click whose origin you haven't identified.
Safari's extension event model lets you register multiple listeners against the same event type, and a single user action can generate more than one distinct event. Because you added a listener for the entire "command" type on safari.application, you receive every command that passes through that level. Events can fire more than once when, for example, objects are nested (A contains B, and both notify) or a gesture produces multiple events (a mousedown followed by a mouseup).
Apple provides guidance on handling this scenario: bind the handler to a specific target, or check for a specific command, which is what you should do here. If that's not enough, there is further documentation on how the event system works to help you define your events properly.
Following the guidance should allow you to work through this issue by properly binding your events to your object and only operating on the events you need. Everything else should simply be ignored by your event handler.

Related

Implementing a Custom Cocoa Event Tracking Loop

I'm working on a custom cross platform UI library that needs a synchronous "ShowPopup" method that shows a popup, runs an event loop until it's finished and automatically cancels when clicking outside the popup or pressing escape. Keyboard, mouse and scroll wheel events need to be dispatched to the popup but other events (paint, draw, timers etc...) need to be dispatched to their regular targets while the loop runs.
Edit: for clarification, by popup I mean a menu-style popup window, not an alert/dialog etc...
On Windows I've implemented this fairly simply by calling GetMessage/DispatchMessage and filtering and dispatching messages as appropriate. Works fine.
I have much less experience with Cocoa/OS X, however, and I find the whole event loop/dispatch paradigm a bit confusing. I've seen the following article, which explains how to implement a mouse tracking loop that is very similar to what I need:
http://stpeterandpaul.ca/tiger/documentation/Cocoa/Conceptual/EventOverview/HandlingMouseEvents/chapter_5_section_4.html
but... there are some things about this that concern me.
The linked article states: "the application’s main thread is unable to process any other requests during an event-tracking loop and timers might not fire". Might not? Why not, when not, how to make sure they do?
The docs for nextEventMatchingMask:untilDate:inMode:dequeue: states "events that do not match one of the specified event types are left in the queue.". That seems a little odd. Does this mean that if an event loop only asks for mouse events then any pressed keys will be processed once the loop finishes? That'd be weird.
Is it possible to peek at a message in the event queue without removing it? e.g.: the Windows version of my library uses this to close the popup when it's clicked outside, but leaves the click event in the queue so that clicking outside the popup on another button doesn't require a second click.
I've read and re-read about run loop modes but still don't really get it. A good explanation of what these are for would be great.
Are there any other good examples of implementing an event loop for a popup. Even better would be pseudo-code for what the built in NSApplication run loop does.
Another way of putting all this... what's the Cocoa equivalent of Windows' PeekMessage(..., PM_REMOVE), PeekMessage(..., PM_NOREMOVE) and DispatchMessage().
Any help greatly appreciated.
What exactly is a "popup" as you're using the term? That term means different things in different GUI APIs. Is it just a modal dialog window?
Update for edits to question:
It seems you just want to implement a custom menu. Apple provides a sample project, CustomMenus, which illustrates that technique. It's a companion to one of the WWDC 2010 session videos, Session 145, "Key Event Handling in Cocoa Applications".
Depending on exactly what you need to achieve, you might want to use an NSAlert. Alternatively, you can use a custom window and just run it modally using the -runModalForWindow: method of NSApplication.
To meet your requirement of ending the modal session when the user clicks outside of the window, you could use a local event monitor. There's even an example of just such functionality in the (modern, current) Cocoa Event Handling Guide: Monitoring Events.
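For concreteness, here is a rough Swift sketch of that combination; popupWindow is a placeholder for your popup's window, and the Escape handling (key code 53) is my own addition rather than anything from the guide:

import AppKit

// Sketch: run `popupWindow` modally and end the session when the user clicks
// outside it or presses Escape. `popupWindow` is a hypothetical NSWindow.
func showPopup(_ popupWindow: NSWindow) {
    let monitor = NSEvent.addLocalMonitorForEvents(
        matching: [.leftMouseDown, .rightMouseDown, .keyDown]) { event in
        if event.type == .keyDown && event.keyCode == 53 {      // 53 = Escape
            NSApp.stopModal()
            return nil                                          // swallow the key press
        }
        if event.type != .keyDown && event.window !== popupWindow {
            NSApp.stopModal()                                    // cancel on an outside click
        }
        return event                                             // deliver everything else normally
    }
    _ = NSApp.runModal(for: popupWindow)                         // blocks until stopModal()
    if let monitor = monitor { NSEvent.removeMonitor(monitor) }
    popupWindow.orderOut(nil)
}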
All of that said, here are (hopefully no longer relevant) answers to your specific questions:
The linked article states: "the application’s main thread is unable to process any other requests during an event-tracking loop and timers might not fire". Might not? Why not, when not, how to make sure they do?
Because timers are scheduled in a particular run loop mode or set of modes. See the answer to question 4, below. You would typically use the event-tracking mode when running an event-tracking loop, so timers which are not scheduled in that mode will not run.
You could use the default mode for your event-tracking loop, but it really isn't a good idea. It might cause unexpected re-entrancy.
Assuming your pop-up is similar to a modal window, you should probably use NSModalPanelRunLoopMode.
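As a minimal Swift sketch of that scheduling (the mode names are the real AppKit ones; the timer itself is just an illustration):

import AppKit

// Sketch: a repeating timer that should keep firing during tracking/modal loops.
// Timer.scheduledTimer(...) registers only in the default mode, so create the
// timer manually and add it to every mode your loop may run in.
let repaintTimer = Timer(timeInterval: 0.5, repeats: true) { _ in
    print("tick")                                             // e.g. advance an animation
}
RunLoop.current.add(repaintTimer, forMode: .default)
RunLoop.current.add(repaintTimer, forMode: .modalPanel)       // NSModalPanelRunLoopMode
RunLoop.current.add(repaintTimer, forMode: .eventTracking)    // NSEventTrackingRunLoopMode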
The docs for nextEventMatchingMask:untilDate:inMode:dequeue: states "events that do not match one of the specified event types are left in the queue." That seems a little odd. Does this mean that if an event loop only asks for mouse events then any pressed keys will be processed once the loop finishes? That'd be weird.
Yes, that's what it means. It's up to you to prevent that weird outcome. If you were to read a version of the Cocoa Event Handling Guide from this decade, you'd find there's a section on how to deal with this. ;-P
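One tool for that, and I believe the one that section discusses (treat this as a sketch rather than a quote from the guide), is NSApplication's discardEvents(matching:before:):

import AppKit

// Sketch: after a mouse-only tracking loop finishes, throw away any key events
// that piled up so they aren't delivered once the loop exits. `lastEvent` stands
// for the last event your tracking loop actually processed.
NSApp.discardEvents(matching: [.keyDown, .keyUp], before: lastEvent)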
Is it possible to peek at a message in the event queue without removing it? e.g.: the Windows version of my library uses this to close the popup when it's clicked outside, but leaves the click event in the queue so that clicking outside the popup on another button doesn't require a second click.
Yes. Did you notice the "dequeue:" parameter of nextEventMatchingMask:untilDate:inMode:dequeue:? If you pass NO for that, then the event is left in the queue.
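In Swift terms, a peek might look like this (popupWindow and closePopup() are hypothetical names):

import AppKit

// Sketch: look at the next mouse-down without dequeuing it. With dequeue: false
// the event stays in the queue, so a button under the cursor still receives the
// click after the popup has closed.
if let next = NSApp.nextEvent(matching: .leftMouseDown,
                              until: Date.distantPast,        // don't wait; just peek
                              inMode: .eventTracking,
                              dequeue: false),
   next.window !== popupWindow {
    closePopup()                                              // the click itself stays queued
}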
I've read and re-read about run loop modes but still don't really get it. A good explanation of what these are for would be great.
It's hard to know what to tell you without knowing what you're confused about and how the Apple guide failed you.
Are you familiar with handling multiple asynchronous communication channels using a loop around select(), poll(), epoll(), or kevent()? It's kind of like that, but a bit more automated. Not only do you build a data structure which lists the input sources you want to monitor and what specific events on those input sources you're interested in, but each input source also has a callback associated with it. Running the run loop is like calling one of the above functions to wait for input but also, when input arrives, calling the callback associated with the source to handle that input. You can run a single turn of that loop, run it until a specific time, or even run it indefinitely.
With run loops, the input sources can be organized into sets. The sets are called "modes" and identified by name (i.e. a string). When you run a run loop, you specify which set of input sources it should monitor by specifying which mode it should run in. The other input sources are still known to the run loop, but just ignored temporarily.
The -nextEventMatchingMask:untilDate:inMode:dequeue: method is, more or less, running the thread's run loop internally. In addition to whatever input sources were already present in the run loop, it temporarily adds an input source to monitor events from the windowing system, including mouse and key events.
Are there any other good examples of implementing an event loop for a popup. Even better would be pseudo-code for what the built in NSApplication run loop does.
There's old Apple sample code, which is actually their implementation of GLUT. It provides a subclass of NSApplication and overrides the -run method. When you strip away some stuff that's only relevant for application start-up or GLUT, it's pretty simple. It's just a loop around -nextEventMatchingMask:... and -sendEvent:.
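Stripped to its essentials, that override amounts to something like this Swift sketch (a real override also deals with activation, autorelease pools, termination, and so on):

import AppKit

// Sketch of the core of the GLUT-style -run override: fetch the next event of
// any type, hand it to sendEvent(_:), repeat.
class MinimalApplication: NSApplication {
    override func run() {
        finishLaunching()
        while true {
            guard let event = nextEvent(matching: .any,
                                        until: Date.distantFuture,
                                        inMode: .default,
                                        dequeue: true) else { continue }
            sendEvent(event)
            updateWindows()
        }
    }
}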

Monitor Appointment Cancel Event

I am trying to run some code upon cancellation of an AppointmentItem; however, two of the events that I tried to capture fire more than once (Application.Send and AppointmentItem.Write; BeforeDelete doesn't fire at all). This made me rethink my logic and look for a suitable place to implement it. I couldn't find a reason why the two events fire twice in my case, as I am using an inspector wrapper to register these events on a new inspector window and unregister them in the inspector's close event.
Please note that I want to monitor all possible scenarios where an Appointment can be canceled/deleted.
Why do you even need any inspector events? Monitor the Application.ItemSend event, check if you get a MeetingItem object as an argument, check that the message class is "IPM.Schedule.Meeting.Resp.Neg" or Class = 55 (OlObjectClass.olMeetingResponseNegative).

How is a modal dialog implemented?

For a long time I have been wondering how modal dialogs are implemented.
Let me take Qt as an example. (Nearly every GUI toolkit has this mechanism.)
In the main event loop, a slot is called, and in this slot a modal dialog is opened. Until the dialog is closed, the slot doesn't return control to the main event loop, so I thought the main event loop would be blocked and become unresponsive. Apparently this is not true: when you open a modal dialog, the main window in the background is still working, repainting its UI or continuing to display a curve or some graph. It just stops accepting any user input.
I did an experiment: instead of opening a modal dialog in the slot, I started a new thread there and waited for the thread to finish in that slot. This definitely blocked the main event loop.
How is a modal dialog implemented, after all? How does it keep the main event loop unblocked while blocking the calling slot?
There is only ever a need for a single event loop, and it does not block when a modal dialog appears. Though, I suppose, different toolkits may handle this differently. You would need to consult the documentation to know for sure. Conceptually, however, it all works in the same way.
Every event has a source where the event occurred. When a modal dialog appears, the event loop either ignores or redirects all events that originate outside of the dialog. There's really no magic to it. Generally speaking, it's like an if statement in the event loop code that says "if (modal_is_shown() and !event_is_in_modal_window()) {ignore_and_wait_for_next_event()}". Of course, the logic is a bit more complicated, but that's the gist of it.
If you're looking for examples here's another one:
In Tk, there is only ever one event loop. Modal behavior (it doesn't have to be a dialog; it can also be a tooltip, textbox, etc.) is simply implemented by making the main window ignore mouse and keyboard events. All other events, like redraws, can still be serviced because the event loop is still running.
Tk implements this via the [grab] function. Calling grab on a UI object makes it the only object able to respond to keyboard and mouse events, essentially blocking all other objects. This doesn't mess with the event loop; it merely temporarily disables event handlers until the grab is released.
It should be noted that Unix-like operating systems running X also have grabs built into the windowing system, so this isn't necessarily implemented only by UI toolkit libraries but is sometimes also a built-in feature of the OS. Again, it is implemented by simply blocking/disabling events rather than instantiating separate event loops. I believe this also used to be the case for the older Mac OS before OS X; I'm not sure about OS X or Windows, though. Even though modality is often implemented by the OS itself, toolkits like Qt and Tk often implement their own mechanisms to standardize behavior across platforms.
So the conclusion is, it is not necessary to block the main event loop to implement modality. You just need to block the events and/or event handlers.
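To make that "if statement" concrete, here is a sketch in AppKit/Swift terms; modalPanel is a placeholder for the dialog's window, and a real toolkit does considerably more bookkeeping than this:

import AppKit

// Sketch: one event loop that keeps running while `modalPanel` is up, but drops
// keyboard/mouse input aimed at any other window. Everything else is delivered
// as usual, which is why the rest of the UI stays alive behind the dialog.
let inputTypes: Set<NSEvent.EventType> = [.leftMouseDown, .leftMouseUp,
                                          .rightMouseDown, .rightMouseUp,
                                          .keyDown, .keyUp, .scrollWheel]
while modalPanel.isVisible {
    guard let event = NSApp.nextEvent(matching: .any,
                                      until: Date.distantFuture,
                                      inMode: .default,
                                      dequeue: true) else { continue }
    if inputTypes.contains(event.type) && event.window !== modalPanel {
        continue                      // ignore input directed outside the dialog
    }
    NSApp.sendEvent(event)            // deliver everything else normally
}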
The answer by https://stackoverflow.com/users/893/greg-hewgill is correct.
However, reading the follow-up discussion between him and https://stackoverflow.com/users/188326/solotim, I feel that there is room for further clarification, by means of prose and some pseudo-code.
I'll handle the prose part with a fact-list:
The main message loop does not run until the modal activity is finished
However, events are still delivered while the modal activity is running
This is because there is a nested event loop within the modal activity.
So far I have just repeated Greg's answer; please bear with me, as it is for continuity's sake. Below is where I hope to contribute additional, useful info.
The nested event loop is part of the GUI toolkit, and as such, it knows the callback functions related to every window in existence
When the nested event loop raises an event (such as a repaint event directed to the main window), it invokes the callback function associated with that event. Note that "callback" here may refer to a method of a class representing a window, in object-oriented systems.
the callback function does what is needed (e.g., repaint), and returns right back to the nested message loop (the one within the modal activity)
Last, but not least, here's pseudo-code to hopefully illustrate further, using a fictitious "GuiToolkit":
void GuiToolkit::RunModal( ModalWindow *m )
{
    // nested event loop: runs until the modal window is closed
    while( !GuiToolkit::IsFinished() && m->IsOpen() )
    {
        GuiToolkit::ProcessEvent(); // this will call
                                    // MainWindow::OnRepaint
                                    // as needed, via the virtual
                                    // method of the base class
                                    // NonModalWindow::OnRepaint
    }
}

class AboutDialog: public ModalWindow
{
};

class MainWindow: public NonModalWindow
{
public:
    virtual void OnRepaint()
    {
        ...
    }

    virtual void OnAboutBox()
    {
        AboutDialog about;
        GuiToolkit::RunModal(&about); // blocks here!!
    }
};

int main()
{
    MainWindow window;
    GuiToolkit::Register( &window ); // GuiToolkit knows how to
                                     // invoke methods of window

    // main event loop
    while( !GuiToolkit::IsFinished() )
    {
        GuiToolkit::ProcessEvent(); // this will call
                                    // MainWindow::OnAboutBox
                                    // at some point
    }
}
In general, a modal dialog box of this type is implemented by running its own message loop instead of your application's message loop. Messages directed to your main window (such as timer or paint messages) will still get delivered, even during the modal operation.
In some situations, you may have to be careful that you don't recursively do the same thing repeatedly. For example, if you trigger a modal dialog box on a timer message combined with some persistent flag, you'll want to make sure you don't keep bringing up the same dialog box repeatedly whenever the timer message fires.

LWUIT touch device issue

I need to capture the event that an app throws when you click on the screen, on a list. When I click on the screen, actionPerformed(ActionEvent e) returns -1; I suppose that is the default event.
On non-touch devices, the event launched by pressing the central button is Canvas.FIRE; why not on touch devices?
How can I do that?
The ActionEvent's source argument will be the list. Action events are designed to abstract away the trigger for the action (e.g. key/touch), since that is irrelevant. There is no need to distinguish the trigger, since you can always extract the list's selected item and use that.
There are use cases where one would like to know the location touched within the cell renderer, but that is a special case unrelated to the question.

How does Flash's dispatchEvent really work?

The docs say that EventDispatcher's dispatchEvent "...dispatches an event into the event flow". The phrase is nice-looking but doesn't really explain anything.
Say we have two listeners waiting for an event "A" on object "a"; what behaviour should we expect on calling:
a.dispatchEvent("A")?
Would both listeners be called immediately, before the return from dispatchEvent? Or will they be queued in some internal Flash Player queue and processed on entering the next frame? Can we rely on some defined behaviour of the Flash Player here, or is the behaviour undefined? How should one read "dispatches an event into the event flow"? The question is important because in practice it affects the control flow of the code.
It all depends on your display list hierarchy.
Flash's event dispatch follows a three-phase flow through the display list:
The Stage will be the first object notified, and then the event will trickle down the display list until it reaches its target. This phase is called the capture phase. To enable it, set useCapture to true on an event listener. Do note that it's pointless to do so unless the object listening is a parent of the object targeting the event. This is called event intercepting.
The next phase is the target phase. This is the behavior most commonly known with events. The targeted display object (the one that has a listener for the event) will receive the event and carry out the code in the listener.
The final phase is called the bubbling phase. This is when the event bubbles up the display list after the event has been received. Event bubbling is very important for dispatching custom events, as you'll need to know how to listen for events dispatched by an object's children.
When dispatching an event, I generally use this syntax (Event.CHANGE is just a common example):
Object.dispatchEvent(new Event(Event.CHANGE, true, false));
The Object is the object you're dispatching from. The first constructor parameter is the event type you're dispatching, the second is the bubbles flag, and the last is the cancelable flag. Event.cancelable is used to prevent the default action of an event (e.g. a mouse click) via Event.preventDefault().
Reference:
Chapter 21 of Colin Moock's Essential ActionScript 3.0
EventDispatcher.dispatchEvent()
Event.cancelable
Just use Signals instead :P
https://github.com/robertpenner/as3-signals/wiki
No but really, they're very easy to use and understand, a great addition to the AS3 toolbox.
You can also learn a lot about how native AS3 events work by reading Robert Penner's critiques (scroll down to the bottom of the wiki page).
