Mouse moved events are not detected by NSView - Cocoa

I am trying to make a simple application in which there is an empty red rectangle, and whenever the mouse moves over the upper half of the rectangle's border the cursor becomes a closed hand.
I started by selecting the Foundation command-line tool project, made a transparent NSWindow, and embedded an NSView containing the rectangle in it. I made the window accept mouse-moved events (via -setAcceptsMouseMovedEvents) and overrode -canBecomeKeyWindow and -canBecomeMainWindow to return YES. But somehow none of the -mouseMoved events are being received by the NSView.
When I put the same code into a Cocoa application project, creating my window in the -applicationDidFinishLaunching method, my view did receive -mouseMoved events.
Why does it not receive mouse-moved events when I use the Foundation command-line utility project?
I have also observed that whenever I make a window (Carbon or Cocoa) from a Foundation command-line utility project, the window doesn't become key even when I click the title bar: the title bar stays light grey instead of becoming dark grey. Why is this happening?
I have overridden -canBecomeKeyWindow and -canBecomeMainWindow of NSWindow to return YES.

I would agree with what Joshua has already said. Any application that is going to show a user interface, be it a faceless background process or one which shows up in the Dock, should be in the form of an application bundle, not a plain old Mach-O executable like the one the Foundation tool template creates.
Also, there are reasons why views do not respond to mouseMoved: events by default:
- Mouse-moved events can quickly flood the event queue.
- There is generally little reason to use mouseMoved:, as tracking areas are far more effective and efficient.
A while back, I wrote a little test app that demonstrates the differences between these two approaches:
Moving your mouse around the upper view for roughly 20 seconds results in 1000 events, while the lower view, which uses tracking areas, produces fewer than 50.
Sample GitHub project: https://github.com/NSGod/MouseMoved-vs-TrackingAreas
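For the cursor behaviour described in the question, a minimal sketch of the tracking-area approach could look like this (in an NSView subclass; _trackingArea is an assumed ivar, and the options shown are one reasonable choice, not the only one):

// Re-register the tracking area whenever the view's geometry changes.
// _trackingArea is an assumed ivar of this NSView subclass.
- (void)updateTrackingAreas
{
    [super updateTrackingAreas];
    if (_trackingArea) {
        [self removeTrackingArea:_trackingArea];
        [_trackingArea release]; // omit under ARC
    }
    // Use the sub-rect you care about (e.g. the upper half) instead of bounds.
    _trackingArea = [[NSTrackingArea alloc]
        initWithRect:[self bounds]
             options:(NSTrackingMouseEnteredAndExited | NSTrackingActiveAlways)
               owner:self
            userInfo:nil];
    [self addTrackingArea:_trackingArea];
}

- (void)mouseEntered:(NSEvent *)event
{
    [[NSCursor closedHandCursor] push];
}

- (void)mouseExited:(NSEvent *)event
{
    [NSCursor pop];
}

If all you need is the cursor change, the NSTrackingCursorUpdate option together with -cursorUpdate: is an even lighter alternative.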
Again, as Joshua mentioned, it would be helpful if you could describe what you're trying to accomplish. If your app needs to be a background app (LSUIElement == 1), and present an interface without appearing in the Dock, then there are ways to do that (as Josh mentioned, a command-line, non-bundled app is not the way).

You have no event loop to detect events and pass them to your window because your program does not start an NSApplication. See the main.m file of a typical Cocoa application.
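For reference, the main.m of the standard Cocoa application template is essentially just this:

#import <Cocoa/Cocoa.h>

int main(int argc, const char *argv[])
{
    // NSApplicationMain creates the shared NSApplication instance, loads
    // the main nib, and runs the event loop that delivers mouse and
    // keyboard events to your windows and views. Without it (or an
    // explicit [NSApp run]), no events are ever dispatched.
    return NSApplicationMain(argc, argv);
}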
It might be helpful to describe what you're trying to accomplish by taking this approach. My guess is you're building a daemon but want a GUI interface to manage the otherwise "headless" daemon. That or you're building a new login management system. In either case, there are specific ways to do both and this isn't it. :-)

Related

Keep part of a window always visible

It is possible to use the SetWindowPos API on Windows to keep a window always on top of other windows, and there are many questions on StackOverflow dealing with this.
Is it possible to keep only part of a window always visible, i.e. specify a clipping region inside an existing window and keep only that part visible?
A use case would be the following (on Windows):
User clicks on icon to run app.
User highlights a portion of the screen to focus on (similar to the Snipping Tool on Windows 7)
The highlighted part of the screen remains always visible, even when other windows/programs are moved over the selected region.
I know the issues that would spring up with other applications that are also set to be topmost. Just curious if this is even possible?
Even if you change part of your window to be transparent to what's below (with a clipping region), it's still going to take all the mouse clicks, etc. that occur over the transparent part.
Your best bet is to create a new smaller window and make it top-most while hiding the main one.
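A hedged sketch of that suggestion in Win32 C ("RegionViewerClass" is a placeholder window class you'd register yourself):

#include <windows.h>

HWND CreateRegionViewer(HINSTANCE hInst, RECT region)
{
    // Borderless popup covering the selected region, with no taskbar button.
    HWND hwnd = CreateWindowEx(
        WS_EX_TOPMOST | WS_EX_TOOLWINDOW,
        TEXT("RegionViewerClass"),   // placeholder: register this class yourself
        TEXT(""),
        WS_POPUP | WS_VISIBLE,
        region.left, region.top,
        region.right - region.left,
        region.bottom - region.top,
        NULL, NULL, hInst, NULL);

    // Reassert topmost placement without moving, resizing, or activating.
    SetWindowPos(hwnd, HWND_TOPMOST, 0, 0, 0, 0,
                 SWP_NOMOVE | SWP_NOSIZE | SWP_NOACTIVATE);
    return hwnd;
}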

Change Background of WP7 Application if Theme is altered

I need to change the background image of my application in code-behind when the user changes the theme from "Light" to "Dark" or vice versa. I assume this should be done in the Page Loaded event.
@TimDams pointed you to one of the nice ways to detect which theme is currently set, but I didn't notice any information there on how to detect a change of theme during the application's runtime. The user could start your app, jump back to the menu, change the theme, and return to your app. While you may think your app will be tombstoned and then restarted and renavigated to your page with the full cycle of all page loads, it is not 100% true.
Firstly, PageLoaded is NOT a good place to do the initial check-and-set-styles, because if that event gets called, the page has probably already rendered once. If I remember correctly, PageLoaded is invoked just after the first render. If this is true, then you will have to detect the colors earlier, for example in LayoutUpdated (warning: this event is a great spammer; it gets called gazillions of times, so attach a single-shot handler that instantly detaches itself on the first invocation; a sketch follows). Maybe you will be able to do it in the Page's constructor, just after InitializeComponent, or in OnApplyTemplate or MeasureOverride, or at least ArrangeOverride; the visuals should be mostly or fully available there.
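A sketch of that self-detaching handler (ApplyThemeColors is a hypothetical method standing in for your own detect-and-set-styles code):

// Single-shot LayoutUpdated handler: detaches itself on first invocation.
EventHandler handler = null;
handler = (s, e) =>
{
    this.LayoutUpdated -= handler;
    ApplyThemeColors();   // hypothetical: detect the theme, set your brushes
};
this.LayoutUpdated += handler;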
Buuuut. I've intentionally 'bolded-out' the word "initial". With Mango, multitasking is getting more and more common, but even the pre-Mango 7.0 version does not guarantee that your app will be tombstoned. From my observations in early 7.0, for example, starting the MediaPlayer from the WebBrowser component does not tombstone your app :) If you have some time to read, check WP7 recover from Tombstone and return to page for details on "pause" vs "tombstone".
Anyway, if your app gets "paused" and the user switches themes in the meantime, I think (I've NOT checked) that your page will in most cases just be temporarily hidden, and upon returning to the screen it will probably not be re-created and will not be re-(Page)Loaded. If that is true, then you will have a not-so-easy problem to solve, because your app may be paused, the OS rethemed, and your app unpaused at virtually any moment, and the only events you will get in the meantime are the global App.Deactivated and App.Activated events. It is possible that none of the per-page events will fire [but I've not checked; before you do anything I suggest below, CHECK IT].
If this pessimistic view is really true, then in those events you will have to detect the current theme (-> Tim's post) and then somehow inform your current Page whether the theme changed. If you have your ViewModel at least a bit separated from the rest of the app (as it should be :) ), you have an easy option: create in that ViewModel a set of properties (dependency properties or INotifyPropertyChanged) like Brush Background, Brush Foreground, Brush Highlight, and whatever else you need, and instead of hardcoding colours in your XAML, bind to those properties. You may even want to create a separate class for all those Brushes and other Styles, say public class MyCurrentAppTheme, keep the properties there, and expose such an object from the ViewModel. Just bind your colors to something that is "logically global" and easily accessible from the App.Activated event handler. Having done that, in App.Activated detect the current theme and, if it changed, go through all the colors kept in the VM and set them appropriately. Voilà, your whole app gets recoloured properly.
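A hedged sketch of such a theme class (the class name and the two brushes shown are just the illustrative ones from above):

using System.ComponentModel;
using System.Windows.Media;

public class MyCurrentAppTheme : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private Brush background;
    public Brush Background
    {
        get { return background; }
        set { background = value; Raise("Background"); }
    }

    private Brush foreground;
    public Brush Foreground
    {
        get { return foreground; }
        set { foreground = value; Raise("Foreground"); }
    }

    private void Raise(string name)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(name));
    }
}

In XAML you would then write, e.g., Background="{Binding Theme.Background}", assuming your ViewModel exposes an instance of this class as a Theme property.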
But mind that there might still be some transient blinks and flashes between the rendering of the cached old theme, refreshing the databound objects, and redrawing the fresh theme. I hope not, but I sense it may occur, especially when returning from the fast-switch tool (long back press): I think the device captures a 'last screenshot' of your app in the backbuffer and uses it throughout the time the app is 'minimized' to do the transition animations, show the fast-app-switch overview, and so on. Again, I've not checked, but I doubt the page contents are 'live' during such animations; it would be very demanding on CPU/GPU resources. Does anyone know anything about that? It could be a nice test to have a looping animation on the page, switch out, and check in the fast-switch overview whether the animation moves or is halted! :)
If your application is tombstoned, all your controls will be recreated and the new theme will be applied. If you'd like to manage your light/dark specific styles in a similar vein to normal themes, you might want to take a look at a custom ResourceDictionary I blogged about a while back.
Unfortunately, as of Mango there is a bug (?) related to fast application switching that causes the theme to remain unchanged in your application. The bug is outlined in this question and its linked posts:
Is there a bug when changing themes when app is deactivated and reactivated in Windows Phone Mango
My ResourceDictionary is still useful for the initial startup, but it appears, unfortunately, that nothing can be done to work around the Mango bug.
No event exists for this. You need to figure it out manually: compare the PhoneBackgroundBrush color to see whether the user has the dark or light theme, and check it against your last stored value.
Were you able to do some tests on App.Activated/Deactivated?
I decided to use a different path to solve the issue of dynamic theme change.
I've assigned only system resource colors to all text and buttons.
For the icons inside the buttons in the window, instead of using PNG icon images, I drew the icons in XAML and assigned them a Foreground color from the system resources.
For the buttons in the SystemMenuBar there is no issue, as they are always on a dark grey background, so the black PNG images work just fine.
Hoping this helps.
You can check if the dark theme is in use with this simple check:
public static bool CurrentThemeIsDark
{
    get
    {
        return (Visibility)Application.Current.Resources["PhoneDarkThemeVisibility"]
               == Visibility.Visible;
    }
}
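Building on that check, a hedged sketch of catching a theme switch from App.Activated (the field is illustrative; Application_Activated is the handler the WP7 project template wires up):

using System.Windows;
using Microsoft.Phone.Shell;

// Inside App.xaml.cs; initialize lastThemeWasDark at launch as well.
private bool lastThemeWasDark;

private void Application_Activated(object sender, ActivatedEventArgs e)
{
    bool darkNow =
        (Visibility)Application.Current.Resources["PhoneDarkThemeVisibility"]
        == Visibility.Visible;

    if (darkNow != lastThemeWasDark)
    {
        lastThemeWasDark = darkNow;
        // The theme changed while the app was deactivated:
        // reapply your brushes/background images here.
    }
}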

Draw over screen with Quartz

I'm trying to work out the best way to draw over the top of all other items on the screen on OS X. I don't want to impede the user's ability to interact with their applications, but I want to 'annotate' them. I want to be able to draw up to 20 different annotations. The top half of this screenshot from Gizmodo happens to nicely show the kind of thing I want to do: http://gizmodo.com/assets/resources/2006/07/04%20Safari.jpg (sorry, I'm too new to post it as an image)
The questions I think I need to answer are:
Should I create a single window for each drawing and draw to that? If so, how do I minimise overhead?
What kind of window or other context should I use, given that I don't want any window decoration?
I don't think I want the overhead of creating 20 windows, but I also don't know that I want to create a full-screen, invisible window to contain my context (I presume a subclassed NSView), because I fear that will a) cause problems interacting with what's below, and b) break the niceties of only redrawing when necessary (my actual drawing will likely only cover 10% of the screen).
I've not worked with Quartz2d before, so I just can't get my head around how to get the 'right' context to draw on from the documentation. Any help would be appreciated.
Thanks,
Who
You can only draw into a window. You can make a transparent, borderless window but it will need to be at the front.
You could set its level to something like NSPopUpMenuWindowLevel to make it draw above other windows (this will be very annoying to users), but clicking on it will:
a) activate your app and bring it to the front
b) Prevent the app underneath receiving mouse events
Is that what you want?
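If it is, a minimal sketch of such an overlay window (myAnnotationView is a placeholder for your own NSView subclass; -setIgnoresMouseEvents: is shown as one way to let clicks fall through if you don't need them):

NSWindow *overlay = [[NSWindow alloc]
    initWithContentRect:[[NSScreen mainScreen] frame]
              styleMask:NSBorderlessWindowMask
                backing:NSBackingStoreBuffered
                  defer:NO];
[overlay setOpaque:NO];
[overlay setBackgroundColor:[NSColor clearColor]];  // fully transparent backing
[overlay setLevel:NSPopUpMenuWindowLevel];          // float above normal windows
[overlay setIgnoresMouseEvents:YES];                // clicks pass to windows below
[overlay setContentView:myAnnotationView];          // placeholder: your drawing view
[overlay orderFront:nil];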

How does a GUI Framework work?

I have been all over the web looking for an answer to this, and my question is this:
How does a GUI framework work? For instance, how does Qt work? Are there any books or websites on the topic of writing a GUI framework from scratch? And does the framework have to call methods from the operating system's GUI framework?
-- Thank you to anyone who takes the time to answer this question, and forgive me if I misspelled anything.
In the old days we did a lot of GUI programming from scratch. It is not as hard as it seems, but it requires a few weeks to come up with results.
First you need a good drawing library. Minimal functionality for this library is drawing clipped rectangles (using patterns), lines, bitmaps, and fonts. You can cheat by creating fonts as bitmaps, and clipped rectangle is just a bunch of horizontal lines.
Now you need at least drivers for the mouse, keyboard, and timer (if not already provided by the operating system). In general, you will need to detect keys and modifier keys (such as Shift), mouse moves, and mouse clicks. Basic timer functions will allow you to detect double clicks.
Then you need to create a window data structure. This data structure needs to have coordinates, i.e. a rectangle, a link to the parent window (if not the top window), and a window function, i.e. the function that will be called when this window should handle some events.
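A hedged sketch of such a structure in C (field names are illustrative; the child/sibling links anticipate the window tree discussed below):

/* Illustrative window structure, as described above. */
typedef struct window {
    int x, y, width, height;         /* rectangle, relative to the parent  */
    struct window *parent;           /* NULL for the top (desktop) window  */
    struct window *first_child;      /* head of the child list             */
    struct window *next_sibling;     /* next window in the sibling list    */
    void (*window_proc)(struct window *self,
                        int msg_id, int param1, int param2);
} window_t;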
Once you can draw on screen, you need some rectangle algebra functions. You need at least a good function to calculate the intersection of rectangles, and a quick resolution of relative to absolute coordinates. For example, if your child window has a parent, then its x and y should recursively be added to the parent's x and y until you reach the top window.
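Illustrative versions of both helpers, building on the window_t above:

typedef struct { int left, top, right, bottom; } rect_t;

/* Returns 1 and writes the overlap to *out if a and b intersect. */
int rect_intersect(rect_t a, rect_t b, rect_t *out)
{
    out->left   = a.left   > b.left   ? a.left   : b.left;
    out->top    = a.top    > b.top    ? a.top    : b.top;
    out->right  = a.right  < b.right  ? a.right  : b.right;
    out->bottom = a.bottom < b.bottom ? a.bottom : b.bottom;
    return out->left < out->right && out->top < out->bottom;
}

/* Walk up the parent chain, summing offsets, to get absolute coordinates. */
void window_to_screen(const window_t *w, int *abs_x, int *abs_y)
{
    *abs_x = *abs_y = 0;
    for (; w != NULL; w = w->parent) {
        *abs_x += w->x;
        *abs_y += w->y;
    }
}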
At this point you have your:
- primitive graphical functions,
- window structure,
- mouse driver, keyboard driver, and timer,
- rectangle arithmetic.
Now you can write your main event-harvesting function. This function will run all the time. Its purpose is to detect events and send messages to the correct windows. What is an event? Well, when you start your program, store the mouse x and y coordinates. Then, in a loop, check whether they have changed. If they have, find the window at that position and send a WM_MOUSEMOVE event to it (a sketch follows the list below). Your harvesting function should handle:
- mouse moves
- mouse clicks
- mouse double clicks (remember last click and position, measure time and decide if it is a double click or not)
- timer events
- keyboard buffer changes
...
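A hedged sketch of the harvesting loop (mouse_position and window_at stand in for your driver and hit-testing code; post_message is sketched after the next paragraph):

/* Illustrative event harvester; WM_MOUSEMOVE is assumed defined elsewhere. */
void harvest_events(window_t *desktop)
{
    int last_x, last_y;
    mouse_position(&last_x, &last_y);        /* assumed driver call */

    for (;;) {
        int x, y;
        mouse_position(&x, &y);
        if (x != last_x || y != last_y) {
            window_t *w = window_at(desktop, x, y);  /* hit-test the tree */
            post_message(w, WM_MOUSEMOVE, x, y);
            last_x = x;
            last_y = y;
        }
        /* ...likewise for clicks, double clicks, timers, keyboard... */
    }
}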
Now you should be able to send events to windows, but you really need a mechanism for it. It is a combination of a message queue and a window procedure. It usually works like this: each window has a window procedure which commonly accepts four arguments: a message id (i.e. is it a mouse move, is it a paint message), a window handle, parameter 1, and parameter 2. You can call this window procedure directly using something like a send_message function, or you can send the window a message via a post_message function, which puts the message in the queue; the window processes messages one by one, eventually receiving this one. So why call some messages directly and put others in the queue? Because of priority. You see, a keyboard click can wait some time before being processed, but a window redraw must complete immediately to prevent flicker and wrong data on screen.
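A hedged sketch of those pieces (a fixed-size ring buffer keeps it short; a real queue would check for overflow):

/* Illustrative message record and queue. */
typedef struct msg {
    int id;             /* e.g. WM_MOUSEMOVE, msg_paint, ...  */
    window_t *hwnd;     /* destination window                 */
    int p1, p2;         /* message parameters                 */
} msg_t;

#define QUEUE_SIZE 256
static msg_t queue[QUEUE_SIZE];
static int q_head, q_tail;

/* Deliver immediately: call the window procedure right now. */
void send_message(int id, window_t *hwnd, int p1, int p2)
{
    hwnd->window_proc(hwnd, id, p1, p2);
}

/* Deliver later: append to the queue for the message pump. */
void post_message(window_t *hwnd, int id, int p1, int p2)
{
    msg_t *m = &queue[q_tail];
    q_tail = (q_tail + 1) % QUEUE_SIZE;
    m->id = id; m->hwnd = hwnd; m->p1 = p1; m->p2 = p2;
}

/* Returns the next queued message, or NULL if the queue is empty. */
msg_t *get_message(void)
{
    msg_t *m;
    if (q_head == q_tail) return NULL;
    m = &queue[q_head];
    q_head = (q_head + 1) % QUEUE_SIZE;
    return m;
}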
So your harvest_events function sends messages to windows using post_message and send_message, and your window message pump gets them using a typical loop like this:
while ((pmsg = get_message()) != NULL)   /* note the parentheses around the assignment */
    send_message(pmsg->id, pmsg->hwnd, pmsg->p1, pmsg->p2);
get_message simply obtains the next message from the queue, and the loop hands it to send_message. Simple, huh? Well, not quite. This way you would only deliver driver messages to windows, but you also need some functions to redraw windows, move them, etc. When you create move_window, resize_window, show_window, and hide_window functions, your window coordinates will change and parts of other windows will be uncovered (if a top window is moved or closed). You need to calculate which windows are affected by the coordinate changes and send a paint message to those windows (to repaint only the parts that were uncovered; remember, you have clipping drawing functions, so this will work).
These functions introduce messages such as msg_paint, msg_move, msg_resize, msg_hide...
Last, you need to maintain the hierarchy of windows. Your top window should be the desktop. It should have child windows (application top windows), and these windows may have further child windows (buttons, edit boxes, etc.). The obvious structure for holding these is a window tree. When you detect a mouse click you have to traverse the window tree, and do it in a smart way (finding out who has focus, who is modal, etc.) to send the message to the right window. And when you draw, you also must traverse all children to see who is uncovered and who is not. Last but not least, you need to handle the mouse cursor's rectangle as a top window to prevent the mouse from flickering as windows are redrawn or (using timers and msg_paint events) animated.
That's roughly it.
A GUI framework like Qt generally works by taking the existing OS's primitive objects (windows, fonts, bitmaps, etc), wrapping them in more platform-neutral and less clunky classes/structures/handles, and giving you the functionality you'll need to manipulate them. Yes, that almost always involves using the OS's own functions, but it doesn't HAVE to -- if you're designing an API to draw an OpenGL UI, for example, most of the underlying OS's GUI stuff won't even work, and you'll be doing just about everything on your own.
Either way, it's not for the faint of heart. If you have to ask how a GUI framework works, you're not even close to ready to design one. You're better off sticking with an existing framework and extending it to do the spiffy stuff it doesn't do already.

How does one use onmousedown/onmouseup correctly?

Whenever I write mouse handling code, the onmousedown/onmouseup/onmousemove model always seemed to force me to produce unnecessarily complex code that would still end up causing all sorts of UI bugs.
The main problem, which I see even in major pieces of software these days, is the "ghost mouse" event where you drag outside the window and then let go. Once you return to the window, the application still thinks you have the mouse down even though the button is up. This is especially annoying when you're trying to highlight something that goes to the border of the screen.
Is there a RIGHT way to write mouse code or is the entire model just flawed?
Ordinarily one captures the mouse events on mouse down, so that mouse move and mouse up go through your code regardless of the cursor moving out of your application window.
More recently this has become a problem when running a VM or remote session: it's difficult for apps inside these to track the mouse outside the virtual machine's screen area, which is represented by a window on the host.
I'm not sure what environment you're attempting to track mouse buttons in, but the best way to handle this is to have a mouse listener that tracks onmouseup 100% of the time after you've detected onmousedown.
That way, it doesn't matter what screen region the user releases the mouse button in. It will reset no matter where it happens.
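In browser JavaScript (which the onmousedown/onmouseup names suggest), a hedged sketch of that pattern: the move/up listeners are registered on document, so the release is seen even when the pointer has left the element or the window (element is a placeholder for your drag target):

var dragging = false;

element.addEventListener('mousedown', function (e) {
    dragging = true;
    // Listen on document, not on the element, so mouseup is caught
    // even if the pointer leaves the element or the window.
    document.addEventListener('mousemove', onDragMove);
    document.addEventListener('mouseup', onDragEnd);
});

function onDragMove(e) {
    if (dragging) {
        // update the selection/highlight from e.clientX, e.clientY
    }
}

function onDragEnd(e) {
    dragging = false;
    document.removeEventListener('mousemove', onDragMove);
    document.removeEventListener('mouseup', onDragEnd);
}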
