I have a full-screen transparent window that displays above the main menu of my app. It has ignoresMouseEvents set to NO. To receive mouse clicks nonetheless, I added this code:
[NSEvent addLocalMonitorForEventsMatchingMask:NSLeftMouseDownMask handler:^(NSEvent *event) {
    [self click:event];
    return event;
}];
Every time the user clicks while my app is active, the click: method below is thus called:
- (BOOL)click:(NSEvent *)event {
    NSPoint coordinate = [event locationInWindow];
    float ycoord = coordinate.y;
    float menuheight = [[NSApp mainMenu] menuBarHeight];
    float windowheight = [[NSApp mainWindow] frame].size.height;
    if (ycoord >= windowheight - menuheight && ![[NSApp mainWindow] ignoresMouseEvents]) {
        [[NSApp mainWindow] setIgnoresMouseEvents:YES];
        [NSApp sendEvent:event];
        NSLog(@"click");
        [[NSApp mainWindow] setIgnoresMouseEvents:NO];
        return YES;
    }
    return NO;
}
As you can see, it sets the main window's ignoresMouseEvents property to YES if the click was on the main menu bar, then calls NSApplication's sendEvent:, and finally sets ignoresMouseEvents back to NO.
However, even though the log does say 'click' when the main menu bar is clicked, the click has no effect. Clicking a menu title (e.g. 'File') does not open the corresponding menu (in this case, the File menu).
What am I doing wrong?
The window that an event is targeted toward is decided by the window server before your app even receives it. It is not decided at the time of the call to -sendEvent:. The primary effect of -setIgnoresMouseEvents: is to inform the window server, not Cocoa's internals, how to dispatch mouse events.
Except for something like event taps, once you've received an event, it's too late to re-target it.
Note, for example, that the NSEvent already has an associated -window before your call to -sendEvent:. -sendEvent: is just going to use that to dispatch it.
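You can see this for yourself by logging the event's window inside the monitor block (a quick diagnostic, not part of any fix):

// Inside the local monitor block: the event is already bound to a window.
NSLog(@"targeted window: %@, main window: %@", [event window], [NSApp mainWindow]);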
If you want to allow clicks in the menu bar, you should either size your window so it doesn't overlap the menu bar or you should set its window level to be behind the menu bar.
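For example, a minimal sketch of both approaches, where overlayWindow stands in for your transparent window (the name is mine, not from the question):

// Option 1: size the window so it stops just below the menu bar.
NSRect screenFrame = [[NSScreen mainScreen] frame];
CGFloat menuBarHeight = [[NSApp mainMenu] menuBarHeight];
NSRect belowMenuBar = screenFrame;
belowMenuBar.size.height -= menuBarHeight;
[overlayWindow setFrame:belowMenuBar display:YES];

// Option 2: keep the full-screen frame, but put the window at a level
// below the menu bar, so the window server routes menu-bar clicks to the
// menu bar instead of to the overlay.
[overlayWindow setLevel:NSMainMenuWindowLevel - 1];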
I added a context menu to an IKImageBrowserView.
When a user right-clicks (mouse) or two-finger-clicks (trackpad) an image in the IKImageBrowserView, the selection changes to this image, and the context menu appears.
When a user control-clicks (mouse or trackpad), the selection does not change, and the context menu appears.
As the context menu applies to the selected image, I would prefer that the selection change when the context menu is invoked.
Can I make the IKImageBrowserView change selection also on control-click (mouse and trackpad)?
Can I attach the context menu not to the IKImageBrowserView but to a single element/image of an IKImageBrowserView?
If you make a subclass of IKImageBrowserView and override menuForEvent:, you can accomplish this:
- (NSMenu *)menuForEvent:(NSEvent *)event {
    NSUInteger idx = [self indexOfItemAtPoint:[self convertPoint:[event locationInWindow] fromView:nil]];
    if (idx == NSNotFound) {
        return nil;
    }
    if (![self.selectionIndexes containsIndex:idx]) {
        [self setSelectionIndexes:[NSIndexSet indexSetWithIndex:idx] byExtendingSelection:NO];
    }
    return self.menu;
}
I'm trying to create a method that toggles between fullscreen and a window. I'm trying to do this from within a class inherited from NSOpenGLView, essentially following this blog post. That works once, going from windowed to fullscreen; trying to go back fails in various ways: the window's contents don't get updated, or I don't even manage to switch back to the window and the fullscreen just blanks out. Trying to go back and forth a few times anyway (I mapped it to the 'f' key), the program often locks up, and in the worst case I have to restart my computer.
I've attached the code for the method below; for debugging purposes, I've made the 'fullscreen' rectangle much smaller, so that if things freeze, the application never actually covers the whole screen.
The fullscreen example among the Apple developer examples suggests using a controller, and does not go fullscreen from within the inherited NSOpenGLView.
My questions:
Should I use a controller instead, and from there switch between windowed and fullscreen (creating a separate fullscreen view each time)? Or should both methods work?
If both methods should work, which one is preferred?
If both methods can work, what am I doing wrong in the current implementation?
Or is there a third, better method?
Note that for both references, I'll have to assume that things haven't changed for 10.8 (both references seem to apply to 10.6).
Code follows:
@implementation MyOpenGLView

[...]

- (void)toggleFullscreen
{
    mainWindow = [self window];
    if (isFullscreen) {
        [fullscreenWindow close];
        [mainWindow setAcceptsMouseMovedEvents:YES];
        [mainWindow setContentView:self];
        [mainWindow makeKeyAndOrderFront:self];
        [mainWindow makeFirstResponder:self];
        isFullscreen = false;
    } else {
        [mainWindow setAcceptsMouseMovedEvents:NO];
        //NSRect fullscreenFrame = [[NSScreen mainScreen] frame];
        NSRect fullscreenFrame = { {300, 300}, {300, 300} };
        fullscreenWindow = [[NSWindow alloc] initWithContentRect:fullscreenFrame
                                                       styleMask:NSBorderlessWindowMask
                                                         backing:NSBackingStoreBuffered
                                                           defer:NO];
        if (fullscreenWindow) {
            [fullscreenWindow setAcceptsMouseMovedEvents:YES];
            [fullscreenWindow setTitle:@"Full screen"];
            [fullscreenWindow setReleasedWhenClosed:YES];
            [fullscreenWindow setContentView:self];
            [fullscreenWindow makeKeyAndOrderFront:self];
            //[fullscreenWindow setOpaque:YES];
            //[fullscreenWindow setHidesOnDeactivate:YES];
            // Set the window level to be just above the menu bar:
            //[fullscreenWindow setLevel:NSMainMenuWindowLevel+1];
            // Set the window level to be just below the screen saver:
            [fullscreenWindow setLevel:NSScreenSaverWindowLevel-1];
            [fullscreenWindow makeFirstResponder:self];
            isFullscreen = true;
        } else {
            NSLog(@"Error: could not switch to full screen.");
        }
    }
}

[...]

@end
I now think this can't be done, and should not be done. When windowed, the rendering context is a window, which is a different beast from the screen you render to when fullscreen.
Thus, when switching between the two, things have to be set up again every time you switch.
It is possible to simply use the native fullscreen option present in the newest OS X versions. This will (presumably) enlarge the containing window to full screen size while removing the frame, borders and buttons. Thus, you're still rendering to a window, though it looks fullscreen.
I'm not sure whether this option makes things slower: there's a window layer in between, which could make it slower than rendering directly to a screen.
For the curious, implementing native fullscreen is ridiculously easy (at least in 10.8 and 10.9): in Xcode, select the .xib file, select the (main) window in the editor's sidebar, then open the Attributes inspector on the right. There you'll find a "Full Screen" setting with the options Unsupported, Primary Window and Auxiliary Window. Selecting Primary Window automatically adds the fullscreen toggle to the window.
Even neater: now select the main menu -> View menu in the sidebar, find the "Full Screen Menu Item" in the object library at the bottom (there's a search bar for it), drag it into the View menu in the editor, and voilà, it gets a keyboard shortcut and automatically connects to the window's fullscreen toggle (select the new View menu item and look at the Connections inspector to see it's already hooked up for you).
A nice way to test all this is to grab the fullscreen example I linked in my question and edit it as suggested above. Using the default Control-Command-F shortcut to toggle fullscreen will show the OpenGL view and the frame with text below it in a fullscreen window. Using the fullscreen option as coded in the example will instead make the OpenGL view take over the entire screen, without any extra (Cocoa) frames, buttons or text.
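If you prefer doing it in code rather than in the .xib, the same native fullscreen support can, as far as I know, be enabled programmatically (a minimal sketch; window is whatever window hosts the OpenGL view):

// Opt the window into native full screen (10.7+); this is the code
// equivalent of choosing "Primary Window" in Interface Builder.
[window setCollectionBehavior:[window collectionBehavior] | NSWindowCollectionBehaviorFullScreenPrimary];

// Toggle between windowed and full screen, same as the menu item does.
[window toggleFullScreen:nil];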
I'm curious about this too, specifically your first two bullet-point questions.
This doesn't address those, but regarding your third question about the bug: I think you can get away with just changing the properties of the same window (this works for me):
- (void)toggleFullscreen
{
    if (isFullscreen) {
        NSRect windowFrame = [[NSScreen mainScreen] visibleFrame];
        [mainWindow setStyleMask:NSTitledWindowMask | NSClosableWindowMask |
                                 NSMiniaturizableWindowMask | NSResizableWindowMask];
        [mainWindow setFrame:windowFrame display:YES];
        [mainWindow setAcceptsMouseMovedEvents:YES];
        [mainWindow setLevel:NSNormalWindowLevel];
        [mainWindow setTitle:@"SimpleOculus"];
        [mainWindow makeKeyAndOrderFront:self];
        [mainWindow makeFirstResponder:self];
        isFullscreen = false;
    } else {
        NSRect fullscreenFrame = [[NSScreen mainScreen] frame];
        [mainWindow setStyleMask:NSBorderlessWindowMask];
        [mainWindow setFrame:fullscreenFrame display:YES];
        [mainWindow setAcceptsMouseMovedEvents:YES];
        [mainWindow setLevel:NSScreenSaverWindowLevel-1];
        [mainWindow makeKeyAndOrderFront:self];
        [mainWindow makeFirstResponder:self];
        isFullscreen = true;
    }
}
When changing the Dock's position, Cocoa fires an NSApplicationDidChangeScreenParametersNotification.
The problem is that, according to the Apple docs, it should be posted only in the following situation:
Posted when the configuration of the displays attached to the computer
is changed. The configuration change can be made either
programmatically or when the user changes settings in the Displays
control panel. The notification object is sharedApplication. This
notification does not contain a userInfo dictionary.
So if you update your application windows when a new display is attached (e.g. changing/moving the frame of some HUD window), you will also get a spurious notification coming from the Dock.
Also, there is no userInfo dictionary attached to this notification, so I had no way to check whether it came from the Dock or from a new display.
So how should this be handled?
A possible solution is to check the [NSScreen mainScreen] size when the notification is fired.
If this NSSize has changed, the notification comes from a new display being attached, not from the Dock:
static NSSize mainScreenSize;

- (void)handleApplicationDidChangeScreenParameters:(NSNotification *)notification {
    NSSize screenSize = [[NSScreen mainScreen] frame].size;
    if (screenSize.width != mainScreenSize.width || screenSize.height != mainScreenSize.height) {
        // Screen size changed, so a display was attached or removed.
        mainScreenSize = [[NSScreen mainScreen] frame].size;
        [myWindowController updateContent];
        [[myWindow contentView] setNeedsDisplay:YES]; // update custom window
    }
}
The notification is fired because the main screen's visibleFrame (which excludes the space occupied by the Dock) depends on the position of the Dock.
So if the visibleFrame of the main screen changes, you can be sure that the notification is the result of the Dock being moved.
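Building on that, a sketch of a handler that distinguishes the two cases by caching both rects (reusing myWindowController and myWindow from the question's code; NSEqualRects does the comparison):

static NSRect lastMainScreenFrame;
static NSRect lastMainScreenVisibleFrame;

- (void)handleApplicationDidChangeScreenParameters:(NSNotification *)notification {
    NSRect frame = [[NSScreen mainScreen] frame];
    NSRect visibleFrame = [[NSScreen mainScreen] visibleFrame];
    if (!NSEqualRects(frame, lastMainScreenFrame)) {
        // The screen itself changed: a display was attached, removed or reconfigured.
        [myWindowController updateContent];
        [[myWindow contentView] setNeedsDisplay:YES];
    } else if (!NSEqualRects(visibleFrame, lastMainScreenVisibleFrame)) {
        // Only the visible area changed: the Dock was moved, resized or hidden.
    }
    lastMainScreenFrame = frame;
    lastMainScreenVisibleFrame = visibleFrame;
}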
I am programmatically generating mouse clicks when the user presses a certain keyboard key (Caps Lock).
I post a left mouse down when Caps Lock is switched on, then a left mouse up when Caps Lock is switched off.
This behaves correctly in the sense that if I, for example, place the mouse over a window title bar, press Caps Lock, move the mouse, then press Caps Lock again, the window correctly moves; i.e. I 'drag' the window as if I had held the left mouse button down while moving the mouse.
However, there is one problem: the window does not move while I am moving the mouse; it only jumps to the final position after I press Caps Lock a second time, i.e. after I have 'released' the mouse button.
What do I need to do to ensure the screen is refreshed during the mouse move?
Interestingly, I also hooked into
[NSEvent addGlobalMonitorForEventsMatchingMask:NSLeftMouseDraggedMask
and found that my NSLog statement only produced output after I released the left mouse button (the real left mouse button).
The mouse click code is below; I can post all the code if necessary, there isn't much of it.
// Simulate mouse down.
// Get the current mouse position; release the throwaway event afterwards.
CGEventRef ourEvent = CGEventCreate(NULL);
CGPoint point = CGEventGetLocation(ourEvent);
CFRelease(ourEvent);
NSLog(@"Location? x = %f, y = %f", (float)point.x, (float)point.y);
// CGEventCreateMouseEvent already sets the event type, so no separate
// CGEventSetType call is needed; release the source to avoid leaking it.
CGEventSourceRef source = CGEventSourceCreate(kCGEventSourceStateCombinedSessionState);
CGEventRef theEvent = CGEventCreateMouseEvent(source, kCGEventLeftMouseDown, point, kCGMouseButtonLeft);
CGEventPost(kCGHIDEventTap, theEvent);
CFRelease(theEvent);
CFRelease(source);

// Simulate mouse up (in the Caps Lock "off" code path).
// Get the current mouse position again.
CGEventRef ourEvent = CGEventCreate(NULL);
CGPoint point = CGEventGetLocation(ourEvent);
CFRelease(ourEvent);
NSLog(@"Location? x = %f, y = %f", (float)point.x, (float)point.y);
CGEventSourceRef source = CGEventSourceCreate(kCGEventSourceStateCombinedSessionState);
CGEventRef theEvent = CGEventCreateMouseEvent(source, kCGEventLeftMouseUp, point, kCGMouseButtonLeft);
CGEventPost(kCGHIDEventTap, theEvent);
CFRelease(theEvent);
CFRelease(source);
If you want to be able to drag windows, the problem is that you also need to post a kCGEventLeftMouseDragged event while the simulated button is down.
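Something along these lines, posted from a mouse-moved handler while the simulated button is down (this mirrors the question's code, with the drag event type swapped in):

// Get the current cursor position from a throwaway event.
CGEventRef posEvent = CGEventCreate(NULL);
CGPoint point = CGEventGetLocation(posEvent);
CFRelease(posEvent);

// Post a drag so the window server treats the movement as part of
// the simulated button press.
CGEventRef dragEvent = CGEventCreateMouseEvent(NULL, kCGEventLeftMouseDragged, point, kCGMouseButtonLeft);
CGEventPost(kCGHIDEventTap, dragEvent);
CFRelease(dragEvent);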
Simply call beginEventMonitoring to start listening for caps lock key events and mouse move events. The event handlers will simulate a left mouse press and movement just as you wanted. Here is a link to my blog where you can download a full working example for Xcode 4: http://www.jakepetroules.com/2011/06/25/simulating-mouse-events-in-cocoa
The example is in the public domain, do whatever you like with it. :)
According to Apple (NSEvent documentation), "Enable access for assistive devices" needs to be checked in System Preferences > Universal Access for this to work, but I have it unchecked and it wasn't an issue. Just a heads up.
Please let me know if you have any further issues and I will try my best to help.
// Begin listening for caps lock key presses and mouse movements
- (void)beginEventMonitoring
{
    // Determines whether the caps lock key was initially down before we started
    // listening for events (kVK_CapsLock comes from Carbon's Events.h)
    wasCapsLockDown = CGEventSourceKeyState(kCGEventSourceStateHIDSystemState, kVK_CapsLock);

    capsLockEventMonitor = [NSEvent addGlobalMonitorForEventsMatchingMask:NSFlagsChangedMask handler:^(NSEvent *event)
    {
        // Determines whether the caps lock key was pressed and posts a mouse down
        // or mouse up event depending on its state
        BOOL isCapsLockDown = ([event modifierFlags] & NSAlphaShiftKeyMask) != 0;
        if (isCapsLockDown && !wasCapsLockDown)
        {
            [self simulateMouseEvent:kCGEventLeftMouseDown];
            wasCapsLockDown = true;
        }
        else if (wasCapsLockDown)
        {
            [self simulateMouseEvent:kCGEventLeftMouseUp];
            wasCapsLockDown = false;
        }
    }];

    mouseMovementEventMonitor = [NSEvent addGlobalMonitorForEventsMatchingMask:NSMouseMovedMask handler:^(NSEvent *event)
    {
        [self simulateMouseEvent:kCGEventLeftMouseDragged];
    }];
}

// Cease listening for caps lock key presses and mouse movements
- (void)endEventMonitoring
{
    if (capsLockEventMonitor)
    {
        [NSEvent removeMonitor:capsLockEventMonitor];
        capsLockEventMonitor = nil;
    }

    if (mouseMovementEventMonitor)
    {
        [NSEvent removeMonitor:mouseMovementEventMonitor];
        mouseMovementEventMonitor = nil;
    }
}

- (void)simulateMouseEvent:(CGEventType)eventType
{
    // Get the current mouse position; release the throwaway event afterwards
    CGEventRef positionEvent = CGEventCreate(NULL);
    CGPoint mouseLocation = CGEventGetLocation(positionEvent);
    CFRelease(positionEvent);

    // Create and post the event; release the source to avoid leaking it
    CGEventSourceRef source = CGEventSourceCreate(kCGEventSourceStateHIDSystemState);
    CGEventRef event = CGEventCreateMouseEvent(source, eventType, mouseLocation, kCGMouseButtonLeft);
    CGEventPost(kCGHIDEventTap, event);
    CFRelease(event);
    CFRelease(source);
}
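For completeness, here is how the monitoring might be wired up from an app delegate (a sketch; eventSimulator is a hypothetical instance of the class containing the methods above):

- (void)applicationDidFinishLaunching:(NSNotification *)notification {
    [eventSimulator beginEventMonitoring]; // start listening at launch
}

- (void)applicationWillTerminate:(NSNotification *)notification {
    [eventSimulator endEventMonitoring]; // remove the global monitors
}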
I'm trying to implement a color picker in my Cocoa app. (Yes, I know about NSColorPanel. I don't like it very much. The point of rolling my own is that I think I can do better.)
Here's a picture of the current state of my picker.
(source: ryanballantyne.name)
The wells surrounding the color wheel are NSColorWell subclasses. They are instantiated programmatically and added to the color wheel view (an NSView subclass) by calling addSubview: on the color wheel view.
I want to make it so that you can drag the color wells around by their grab handles. The start of that journey is making the cursor change to an open hand when the mouse hovers over the handles. Sadly, I can't use a cursor rect for this because most of my views are rotated. I must therefore use mouseMoved events and do the hit detection myself.
Here's the mouse event code I'm trying to make work:
- (void)mouseMoved:(NSEvent *)event
{
    NSLog(@"I am over here!\n");
    [super mouseMoved:event];
    NSPoint eventPoint = [self convertPoint:[event locationInWindow] fromView:nil];
    BOOL isInHandle = [grabHandle containsPoint:eventPoint];
    if (isInHandle && [NSCursor currentCursor] != [NSCursor openHandCursor]) {
        [[NSCursor openHandCursor] push];
    }
    else if (!isInHandle) {
        [NSCursor pop];
    }
}

- (void)mouseEntered:(NSEvent *)event
{
    [[self window] setAcceptsMouseMovedEvents:YES];
}

- (void)mouseExited:(NSEvent *)event
{
    [[self window] setAcceptsMouseMovedEvents:NO];
    [NSCursor pop];
}

- (BOOL)acceptsFirstResponder
{
    return YES;
}

- (BOOL)resignFirstResponder
{
    return YES;
}
I find that my mouseMoved method is never called. Ditto for entered and exited. However, when I implement mouseDown, that one does get called, so at least some events are getting to me, just not the ones I want.
Any ideas? Thanks!
mouseEntered: and mouseExited: don't track entering/exiting your view directly; they track entering/exiting any tracking areas you've established in your view. The relevant methods are -addTrackingRect:owner:userData:assumeInside: and -removeTrackingRect:. Just pass [self bounds] for the first parameter if you want your whole view to be tracked. If your app is 10.5+ only, you should probably use NSTrackingArea instead as it directly supports getting mouse-moved events only inside the tracking area.
Keep in mind that 1) tracking rects have the same somewhat odd behavior as cursor rects w/r/t rotated views, and 2) if your bounds change (not merely your frame) you'll probably need to re-establish your tracking rect, so save the tracking rect's tag to remove it later.
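For the 10.5+ route, a minimal NSTrackingArea setup might look like the following (assuming ARC and a trackingArea instance variable added for the purpose); overriding updateTrackingAreas keeps the area in sync when the bounds change:

- (void)updateTrackingAreas
{
    [super updateTrackingAreas];

    // Drop the old area before installing one matching the current bounds.
    if (trackingArea) {
        [self removeTrackingArea:trackingArea];
    }
    trackingArea = [[NSTrackingArea alloc] initWithRect:[self bounds]
                                                options:(NSTrackingMouseEnteredAndExited |
                                                         NSTrackingMouseMoved |
                                                         NSTrackingActiveInKeyWindow)
                                                  owner:self
                                               userInfo:nil];
    [self addTrackingArea:trackingArea];
}

With NSTrackingMouseMoved in the options, mouseMoved: is delivered while the cursor is inside the area without having to toggle the window's setAcceptsMouseMovedEvents: yourself.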