I'm creating an FPS demo using an engine called Gameplay. I'm currently trying to define a captureMouse() function in the engine so the player can look around the map. I've already been able to pin the cursor to the center of the window and make it invisible, but as I move the mouse the screen (camera) seems to "vibrate" as it moves around. After a lot of tinkering with X11 functions, I figured out that the XWarpPointer() function I'm using to warp the cursor back to the center of the window is adding a "mouse moved" event to the event queue.
X11 Question: How can I identify and remove an event from the event queue before it is captured by the event cycle?
Question: Has anyone had a similar problem and solved it in a different manner? If so, what did you do?
I'm sorry if I'm not being clear. I have no extensive knowledge of X11, but I really need to add this to the engine so I can, in turn, add it to my game.
I guess you're using XtAppMainLoop to handle your events.
XtAppMainLoop is essentially a loop that calls XtAppNextEvent followed by XtDispatchEvent.
Replace the XtAppMainLoop with your own loop that calls XtAppNextEvent to get the next event, then check the event's type (the type field of the XEvent structure).
If you want to handle the event, call XtDispatchEvent; do nothing to ignore it.
The loop needs to exit when XtAppGetExitFlag returns true (or add your own exit flag).
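A minimal sketch of that filtering loop, assuming the vibration comes from the synthetic MotionNotify that XWarpPointer() queues: any motion event landing exactly on the window center is treated as the warp's echo and dropped. centerX and centerY are hypothetical names for your window's center.

```cpp
#include <X11/Intrinsic.h>

/* Replaces XtAppMainLoop(app). Drops the MotionNotify event that
 * XWarpPointer() queues when it re-centers the cursor, and dispatches
 * everything else normally. */
void runEventLoop(XtAppContext app, int centerX, int centerY)
{
    XEvent ev;
    while (!XtAppGetExitFlag(app)) {
        XtAppNextEvent(app, &ev);
        /* The warp's echo lands exactly on the window center. */
        if (ev.type == MotionNotify &&
            ev.xmotion.x == centerX && ev.xmotion.y == centerY)
            continue;                /* ignore it */
        XtDispatchEvent(&ev);        /* handle it */
    }
}
```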
Related
I'm working on a custom cross-platform UI library that needs a synchronous "ShowPopup" method that shows a popup, runs an event loop until it's finished, and automatically cancels when the user clicks outside the popup or presses escape. Keyboard, mouse, and scroll wheel events need to be dispatched to the popup, but other events (paint, draw, timers, etc...) need to be dispatched to their regular targets while the loop runs.
Edit: for clarification, by popup, I mean this kind of menu style popup window, not an alert/dialog etc...
On Windows I've implemented this fairly simply by calling GetMessage/DispatchMessage and filtering and dispatching messages as appropriate. Works fine.
I have much less experience with Cocoa/OS X, however, and I'm finding the whole event loop/dispatch paradigm a bit confusing. I've seen the following article, which explains how to implement a mouse-tracking loop that is very similar to what I need:
http://stpeterandpaul.ca/tiger/documentation/Cocoa/Conceptual/EventOverview/HandlingMouseEvents/chapter_5_section_4.html
but... there are some things about this that concern me.
The linked article states: "the application’s main thread is unable to process any other requests during an event-tracking loop and timers might not fire". Might not? Why not, when not, how to make sure they do?
The docs for nextEventMatchingMask:untilDate:inMode:dequeue: state that "events that do not match one of the specified event types are left in the queue." That seems a little odd. Does this mean that if an event loop only asks for mouse events, then any pressed keys will be processed once the loop finishes? That'd be weird.
Is it possible to peek at a message in the event queue without removing it? E.g. the Windows version of my library uses this to close the popup when the user clicks outside it, but leaves the click event in the queue so that clicking outside the popup on another button doesn't require a second click.
I've read and re-read about run loop modes but still don't really get it. A good explanation of what these are for would be great.
Are there any other good examples of implementing an event loop for a popup? Even better would be pseudo-code for what the built-in NSApplication run loop does.
Another way of putting all this... what's the Cocoa equivalent of Windows' PeekMessage(..., PM_REMOVE), PeekMessage(..., PM_NOREMOVE) and DispatchMessage().
Any help greatly appreciated.
What exactly is a "popup" as you're using the term? That term means different things in different GUI APIs. Is it just a modal dialog window?
Update for edits to question:
It seems you just want to implement a custom menu. Apple provides a sample project, CustomMenus, which illustrates that technique. It's a companion to one of the WWDC 2010 session videos, Session 145, "Key Event Handling in Cocoa Applications".
Depending on exactly what you need to achieve, you might want to use an NSAlert. Alternatively, you can use a custom window and just run it modally using the -runModalForWindow: method of NSApplication.
To meet your requirement of ending the modal session when the user clicks outside of the window, you could use a local event monitor. There's even an example of just such functionality in the (modern, current) Cocoa Event Handling Guide: Monitoring Events.
All of that said, here are (hopefully no longer relevant) answers to your specific questions:
The linked article states: "the application’s main thread is unable to process any other requests during an event-tracking loop and timers might not fire". Might not? Why not, when not, how to make sure they do?
Because timers are scheduled in a particular run loop mode or set of modes. See the answer to question 4, below. You would typically use the event-tracking mode when running an event-tracking loop, so timers which are not scheduled in that mode will not run.
You could use the default mode for your event-tracking loop, but it really isn't a good idea. It might cause unexpected re-entrancy.
Assuming your pop-up is similar to a modal window, you should probably use NSModalPanelRunLoopMode.
The docs for nextEventMatchingMask:untilDate:inMode:dequeue: state that "events that do not match one of the specified event types are left in the queue." That seems a little odd. Does this mean that if an event loop only asks for mouse events, then any pressed keys will be processed once the loop finishes? That'd be weird.
Yes, that's what it means. It's up to you to prevent that weird outcome. If you were to read a version of the Cocoa Event Handling Guide from this decade, you'd find there's a section on how to deal with this. ;-P
Is it possible to peek at a message in the event queue without removing it? E.g. the Windows version of my library uses this to close the popup when the user clicks outside it, but leaves the click event in the queue so that clicking outside the popup on another button doesn't require a second click.
Yes. Did you notice the "dequeue:" parameter of nextEventMatchingMask:untilDate:inMode:dequeue:? If you pass NO for that, then the event is left in the queue.
I've read and re-read about run loop modes but still don't really get it. A good explanation of what these are for would be great.
It's hard to know what to tell you without knowing what you're confused about and how the Apple guide failed you.
Are you familiar with handling multiple asynchronous communication channels using a loop around select(), poll(), epoll(), or kevent()? It's kind of like that, but a bit more automated. Not only do you build a data structure which lists the input sources you want to monitor and what specific events on those input sources you're interested in, but each input source also has a callback associated with it. Running the run loop is like calling one of the above functions to wait for input but also, when input arrives, calling the callback associated with the source to handle that input. You can run a single turn of that loop, run it until a specific time, or even run it indefinitely.
With run loops, the input sources can be organized into sets. The sets are called "modes" and identified by name (i.e. a string). When you run a run loop, you specify which set of input sources it should monitor by specifying which mode it should run in. The other input sources are still known to the run loop, but just ignored temporarily.
The -nextEventMatchingMask:untilDate:inMode:dequeue: method is, more or less, running the thread's run loop internally. In addition to whatever input sources were already present in the run loop, it temporarily adds an input source to monitor events from the windowing system, including mouse and key events.
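To make the select()/poll() analogy concrete, here's a rough C++ sketch (not Apple's implementation; all names are invented): input sources carry callbacks, and "modes" are just named sets of sources. Running the loop in a mode polls only that set, leaving the other sources registered but ignored.

```cpp
#include <poll.h>
#include <cstddef>
#include <functional>
#include <map>
#include <string>
#include <vector>

// An input source: something to watch, plus the callback to run
// when input arrives on it.
struct Source {
    int fd;
    std::function<void()> callback;
};

struct RunLoop {
    // "Modes" are just named sets of sources.
    std::map<std::string, std::vector<Source>> modes;

    // Run one turn of the loop in the given mode. Sources registered in
    // other modes stay known to the loop but are temporarily ignored.
    void runOnce(const std::string& mode) {
        std::vector<Source>& sources = modes[mode];
        std::vector<pollfd> fds;
        for (const Source& s : sources)
            fds.push_back({s.fd, POLLIN, 0});
        if (poll(fds.data(), fds.size(), -1) > 0)
            for (std::size_t i = 0; i < fds.size(); ++i)
                if (fds[i].revents & POLLIN)
                    sources[i].callback();  // dispatch to the source's handler
    }
};
```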
Are there any other good examples of implementing an event loop for a popup? Even better would be pseudo-code for what the built-in NSApplication run loop does.
There's old Apple sample code, which is actually their implementation of GLUT. It provides a subclass of NSApplication and overrides the -run method. When you strip away some stuff that's only relevant for application start-up or GLUT, it's pretty simple. It's just a loop around -nextEventMatchingMask:... and -sendEvent:.
Is there any general-purpose programming equivalent of low-level interrupts in microcontrollers/embedded systems?
I am vaguely familiar with the concept of events (mouse events and the like) which seems similar but not general enough.
Is there a mechanism (native or otherwise), specifically in C/C++, to handle custom events, that is, events whose triggering is decided by a user-defined condition, like, say, when the mouse pointer moves into a particular region, or when a particular user action occurs?
To provide some context, I am working on an OpenCV based interactive project where I would like to trigger specific actions when the user points to a particular place on the screen.
It seems like a considerable waste of computation to check whether the pointer is currently at such-and-such location on the screen in each iteration of the video stream, and I would like to automate function calls based on a pre-defined condition.
Or is there any other (more efficient) mechanism by which I could improve this procedure?
Thanks.
There is no interrupt programming in C/C++ as there is on a microprocessor or microcontroller.
If your screen is a touch screen, then try to get hold of your operating system's SDK or API to get notified when a touch happens. (The OS internally maintains an interrupt table for touch, keypad presses, and mouse movement. We can only program the logic we want to execute on such an event, nothing more than that.)
If it's not touch, then you have to monitor the position of the user with a sensor, usually a camera (a webcam). For that, you have to check each frame of the camera to decide the position of the user.
EDIT:
What you mentioned is the correct way. It's better to check each frame, or else the response of your system will be sluggish. You can set a counter to 1, increase it with each frame, and reset it on reaching any desired value. This is almost equivalent to an infinite loop.
Or you can accept some key from the keyboard to break out of the loop (OpenCV has such functions, e.g. waitKey).
A slightly more advanced approach is to grab frames from the camera in a different thread than the main thread of your program. Then all you need to do is start and stop that thread, as sketched below.
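A minimal sketch of that threaded-grab approach using OpenCV's C++ API (error handling omitted; the window name and Esc-to-quit choice are arbitrary):

```cpp
#include <opencv2/opencv.hpp>
#include <atomic>
#include <mutex>
#include <thread>

std::atomic<bool> running{true};
std::mutex frameMutex;
cv::Mat latestFrame;

// Runs on its own thread: pull frames from the camera continuously.
void grabLoop() {
    cv::VideoCapture cap(0);                // default camera
    cv::Mat frame;
    while (running && cap.read(frame)) {
        std::lock_guard<std::mutex> lock(frameMutex);
        frame.copyTo(latestFrame);          // publish the newest frame
    }
}

int main() {
    std::thread grabber(grabLoop);          // start the grabbing thread
    for (;;) {
        {
            std::lock_guard<std::mutex> lock(frameMutex);
            if (!latestFrame.empty())
                cv::imshow("camera", latestFrame);
        }
        if (cv::waitKey(30) == 27) break;   // Esc exits (OpenCV's key check)
    }
    running = false;                        // stop the grabbing thread
    grabber.join();
}
```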
You can probably figure out why I am asking this question. Even if not, it's very simple.
My question is whether it is possible to detect the use of SetCursorPos() on one's own application, without scanning other running applications for any calls to this API.
For example, if I have my cursor in a window and I call SetCursorPos(), can this window in anyway know that the cursor placement is not directly from the mouse (raw input)?
I am not oblivious to the fact that you can 'know' whether a mouse input is raw simply by checking how the position alters; for example, if the position changes from (100, 100) to (500, 500) without moving through each individual location between these two, then, with certainty, something has altered the mouse position.
If any of you know of a way to produce 'raw mouse input' without any application being able to tell the difference between the output from a function and that from a mouse--if there is such a difference--then that'd suffice, too.
Of course, whenever I move my mouse, the operating system detects this and moves the cursor accordingly. In principle, I should be able to alter this low-level functionality at will?
There is no way for a window to directly determine how the mouse was moved. External applications could be using SetCursorPos(), but they could also be using lower-level functions like mouse_event() or SendInput() instead. By the time the notification reaches the target window, the OS has already normalized the data and any source information is lost.

If you really needed to detect the use of SetCursorPos() or other functions, you would have to directly hook into those functions in every running process. Alternatively, you might try registering for "Raw Input" via RegisterRawInputDevices() and see if you get a corresponding notification from the mouse hardware directly, assuming those simulating functions do not trigger Raw Input notifications as well.
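A minimal sketch of that Raw Input registration (standard Win32 calls; whether injected moves show up as hardware deltas is something to verify on your target system):

```cpp
#include <windows.h>

// Call once after creating your window.
bool registerRawMouse(HWND hwnd)
{
    RAWINPUTDEVICE rid;
    rid.usUsagePage = 0x01;             // HID_USAGE_PAGE_GENERIC
    rid.usUsage     = 0x02;             // HID_USAGE_GENERIC_MOUSE
    rid.dwFlags     = RIDEV_INPUTSINK;  // deliver even when not focused
    rid.hwndTarget  = hwnd;
    return RegisterRawInputDevices(&rid, 1, sizeof(rid)) == TRUE;
}

// In your window procedure, handle WM_INPUT:
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_INPUT) {
        RAWINPUT raw;
        UINT size = sizeof(raw);
        if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &raw,
                            &size, sizeof(RAWINPUTHEADER)) != (UINT)-1 &&
            raw.header.dwType == RIM_TYPEMOUSE) {
            // Hardware-level deltas; compare against observed cursor jumps.
            LONG dx = raw.data.mouse.lLastX;
            LONG dy = raw.data.mouse.lLastY;
            (void)dx; (void)dy;
        }
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```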
I have been all over the web looking for an answer to this, and my question is this:
How does a GUI framework work? For instance, how does Qt work? Are there any books or websites on the topic of writing a GUI framework from scratch? And does the framework have to call methods from the operating system's GUI framework?
-- Thank you to anyone who takes the time to answer this question, and forgive me if I misspelled anything.
In the old days we did a lot of GUI programming from scratch. It is not as hard as it seems, but it requires a few weeks to come up with results.
First you need a good drawing library. Minimal functionality for this library is drawing clipped rectangles (using patterns), lines, bitmaps, and fonts. You can cheat by creating fonts as bitmaps, and a clipped rectangle is just a bunch of horizontal lines.
Now you need at least drivers for the mouse, keyboard, and a timer (if not already provided by the operating system). In general, you will need to detect keys, modifier keys (such as Shift, etc.), mouse moves, and mouse clicks. Basic timer functions will allow you to detect double clicks.
Then you need to create a window data structure. This data structure needs to have coordinates (i.e. a rectangle), a link to the parent window (if it is not a top window), and a window function, i.e. the function that will be called when this window should handle some events.
Once you can draw on screen, you need some rectangle algebra functions. You need at least a good function to calculate the intersection of rectangles, and a quick resolution of relative to absolute coordinates. For example, if your child window has a parent, then its x and y should recursively be added to the parent's x and y until you reach the top window.
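A compact sketch of the window structure and rectangle helpers just described (all names are illustrative, not from any particular framework):

```cpp
// Window: a rectangle, a link to the parent, and a window function.
struct Rect { int x, y, w, h; };

struct Window {
    Rect    frame;                                    // relative to parent
    Window* parent;                                   // null for a top window
    void  (*proc)(Window*, int msg, int p1, int p2);  // window function
};

// Intersection of two rectangles; the result is empty if w <= 0 or h <= 0.
Rect intersect(const Rect& a, const Rect& b)
{
    int x1 = a.x > b.x ? a.x : b.x;
    int y1 = a.y > b.y ? a.y : b.y;
    int x2 = a.x + a.w < b.x + b.w ? a.x + a.w : b.x + b.w;
    int y2 = a.y + a.h < b.y + b.h ? a.y + a.h : b.y + b.h;
    return { x1, y1, x2 - x1, y2 - y1 };
}

// Relative-to-absolute resolution: recursively add each parent's x and y.
void toScreen(const Window* w, int& x, int& y)
{
    for (; w != nullptr; w = w->parent) {
        x += w->frame.x;
        y += w->frame.y;
    }
}
```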
At this point you have your:
- primitive graphical functions,
- window structure,
- mouse driver, keyboard driver, and timer,
- rectangle arithmetic.
Now you can write your main event harvesting function. This function will run all the time. Its purpose will be to detect events and send messages to the correct windows. What is an event? Well, when you start your program, store the mouse x and y coordinates. Then, in a loop, check if they have changed. If they have changed, find the window at that position ... and send a WM_MOUSEMOVE event to it. Your harvesting function (a minimal version is sketched after this list) should handle:
- mouse moves
- mouse clicks
- mouse double clicks (remember last click and position, measure time and decide if it is a double click or not)
- timer events
- keyboard buffer changes
...
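A bare-bones version of that harvesting loop; read_mouse(), window_at(), and post_message() are hypothetical driver/queue calls you would write yourself:

```cpp
enum Msg { MSG_MOUSEMOVE, MSG_LBUTTONDOWN /* , ... */ };

struct Window;                                        // your window structure
void read_mouse(int* x, int* y, int* buttons);        // hypothetical driver call
Window* window_at(int x, int y);                      // hypothetical hit test
void post_message(int id, Window* w, int p1, int p2); // hypothetical queue call

void harvest_events()
{
    int lastX, lastY, buttons;
    read_mouse(&lastX, &lastY, &buttons);             // initial mouse state
    for (;;) {
        int x, y, b;
        read_mouse(&x, &y, &b);
        if (x != lastX || y != lastY) {
            // The mouse moved: hit-test the window tree and notify.
            post_message(MSG_MOUSEMOVE, window_at(x, y), x, y);
            lastX = x; lastY = y;
        }
        // ...the same pattern handles clicks, double clicks (remember the
        // last click time and position), timer events, and keyboard input.
    }
}
```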
Now you should be able to send events to windows. But you really need a mechanism for it. It is a combination of a message queue and a window procedure. It usually works like this: each window has a window procedure which commonly accepts four arguments: a message id (i.e. is it a mouse move, is it a paint message), a window handle, parameter 1, and parameter 2. You can call this window procedure directly using something like a send_message function. Or you can send this window a message via a post_message function. This will put the message in the queue, and the window will process messages one by one, eventually receiving this one. So why should you call some messages directly and put others in the queue? Because of priority. You see, a keyboard click can wait some time before being processed. But a window redraw must complete immediately to prevent flicker and wrong data on screen.
So your harvest_events function sends messages to windows using post_message and send_message. And your windows receive queued messages through a typical message pump like this:
while ((pmsg = get_message()) != NULL) send_message(pmsg->id, pmsg->hwnd, pmsg->p1, pmsg->p2);
get_message simply obtains the next message from the queue, and the loop hands it to send_message. Simple, huh? Well, not quite so. This way you would only deliver driver messages to windows, but you also need some functions to redraw windows, move them, etc. When you create move_window, resize_window, show_window, and hide_window functions, your window coordinates will change. Parts of other windows will be uncovered (if a top window is moved or closed). You need to calculate which windows are affected by coordinate changes and send a paint message to those windows (to repaint only the parts that were uncovered - remember, you have clipping drawing functions, so this will work).
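Here is a sketch of that queue-plus-procedure machinery, with the same id-first argument order as the pump above (illustrative names):

```cpp
#include <queue>

struct Window {
    void (*proc)(int id, Window*, int p1, int p2);  // window procedure
};

struct Message { int id; Window* wnd; int p1, p2; };
static std::queue<Message> message_queue;

// send_message: call the window procedure immediately -- for high-priority
// messages such as repaints that must complete right away.
void send_message(int id, Window* w, int p1, int p2)
{
    w->proc(id, w, p1, p2);
}

// post_message: put the message in the queue for later -- for low-priority
// messages such as keyboard clicks that can wait.
void post_message(int id, Window* w, int p1, int p2)
{
    message_queue.push({id, w, p1, p2});
}

// The message pump: drain the queue, handing each message to send_message.
void run_pump()
{
    while (!message_queue.empty()) {
        Message m = message_queue.front();
        message_queue.pop();
        send_message(m.id, m.wnd, m.p1, m.p2);
    }
}
```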
The move_window, resize_window, show_window, and hide_window functions introduce new messages: msg_paint, msg_move, msg_resize, msg_hide...
Last, you need to maintain the hierarchy of windows. Your top window should be the desktop. It should have child windows (application top windows). These windows may have further child windows (buttons, edit boxes, etc.). The obvious structure for holding these is a window tree. When you detect a mouse click, you have to traverse the window tree and do it in a smart way (finding out who has focus, who is modal, etc.) to send the message to the right window. And when you draw, you also must traverse all children to see who is uncovered and who is not. Last but not least, you need to handle the mouse cursor's rectangle as a top window to prevent the mouse from flickering as windows are redrawn or (using timers and msg_paint events) animated.
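A sketch of the recursive hit test over that window tree: find the deepest child containing the point, converting the point into each child's coordinate space as we descend (names again illustrative):

```cpp
#include <vector>

struct Window {
    int x, y, w, h;                     // frame, relative to the parent
    std::vector<Window*> children;      // desktop -> app windows -> controls
};

// Children are tested topmost (last added, drawn last) first.
Window* window_at(Window* win, int px, int py)
{
    if (px < win->x || py < win->y ||
        px >= win->x + win->w || py >= win->y + win->h)
        return nullptr;                 // the point is outside this window
    int lx = px - win->x, ly = py - win->y;
    for (auto it = win->children.rbegin(); it != win->children.rend(); ++it)
        if (Window* hit = window_at(*it, lx, ly))
            return hit;
    return win;                         // no child claimed it, so it's ours
}
```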
That's roughly it.
A GUI framework like Qt generally works by taking the existing OS's primitive objects (windows, fonts, bitmaps, etc), wrapping them in more platform-neutral and less clunky classes/structures/handles, and giving you the functionality you'll need to manipulate them. Yes, that almost always involves using the OS's own functions, but it doesn't HAVE to -- if you're designing an API to draw an OpenGL UI, for example, most of the underlying OS's GUI stuff won't even work, and you'll be doing just about everything on your own.
Either way, it's not for the faint of heart. If you have to ask how a GUI framework works, you're not even close to ready to design one. You're better off sticking with an existing framework and extending it to do the spiffy stuff it doesn't do already.
I've done lots of stuff with PyGTK; however, I'm deciding to learn PyQt. I'm stuck at QGraphicsView: I have absolutely no idea how to get signals from the items I place on the graphics view, primarily mouse events. How do I get mouse events from individual items in a scene?
QGraphicsItem is not a QObject and cannot send signals, nor receive slots. Instead, you must handle events. You can do that either through an event filter, by sub-classing the view or scene to intercept events, or simply by sub-classing the items themselves and implementing the event handling functions (see the protected member functions in the documentation). Perhaps this example can be of interest: http://doc.trolltech.com/4.6/graphicsview-diagramscene.html
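A minimal sketch of the item-subclassing route (shown in C++; in PyQt you would override the same mousePressEvent method on your item subclass):

```cpp
#include <QGraphicsRectItem>
#include <QGraphicsSceneMouseEvent>
#include <QDebug>

// A rectangle item that reacts to mouse presses by overriding the
// protected event handler, as suggested above.
class ClickableRect : public QGraphicsRectItem {
public:
    using QGraphicsRectItem::QGraphicsRectItem;
protected:
    void mousePressEvent(QGraphicsSceneMouseEvent* event) override {
        qDebug() << "item clicked at" << event->pos();
        QGraphicsRectItem::mousePressEvent(event);  // keep default behavior
    }
};
```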
Right after you create an item, connect the signals you want from it to the instance of the widget that contains it. (Note that this only works if your items derive from QGraphicsObject or QGraphicsWidget; a plain QGraphicsItem has no signals.)
Another option is to give up on signals and have your QGraphicsItem instance directly call a method of its parent by keeping a reference to it. This is less pretty than using signals but, ultimately, it gets the job done.