(OS X) Detecting when front app goes into fullscreen mode

I'm writing a "UIElement" app that shows a status window on the side of the screen, similar to the Dock.
Now, when a program takes over the entire screen, I need to hide my status window, just like the Dock does.
What are my options to detect this and the inverse event?
I'd like to avoid polling via a timed event, and I also cannot use undocumented tricks (such as those suggested here).
What doesn't work:
Registering a Carbon Event Handler for the kEventAppSystemUIModeChanged event isn't sufficient - it works to detect VLC's full screen mode, but not for modern Cocoa apps that use the new fullscreen widget at the top right corner of their windows.
Similarly, following Apple's instructions about the NSApplication presentationOptions API by observing changes to the currentSystemPresentationOptions property does not help either - again, it only reports VLC's fullscreen mode, not apps using the window's top-right fullscreen widget.
Monitoring changes to the screen configuration using CGDisplayRegisterReconfigurationCallback does not work, because no callbacks are delivered for these fullscreen modes.

Based on @Chuck's suggestion, I've come up with a solution that works somewhat, but may not be foolproof.
The solution is based on the assumption that 10.7's new fullscreen mode for windows moves these windows to a new Screen Space. Therefore, we subscribe to notifications for changes to the active space. In that notification handler, we check the window list to detect whether the menubar is included. If it is not, it probably means that we're in a fullscreen space.
Checking for the presence of the "Menubar" window is the best test I could come up with based on Chuck's idea. I don't like it too much, though, because it makes assumptions about the naming and presence of internally managed windows.
Here's the test code that goes inside AppDelegate.m, which also includes the test for the other app-wide fullscreen mode:
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    NSApplication *app = [NSApplication sharedApplication];

    // Observe full screen mode from apps setting SystemUIMode
    // or invoking 'setPresentationOptions'
    [app addObserver:self
          forKeyPath:@"currentSystemPresentationOptions"
             options:NSKeyValueObservingOptionNew
             context:NULL];

    // Observe full screen mode from apps using a separate space
    // (i.e. those providing the fullscreen widget at the right
    // of their window title bar).
    [[[NSWorkspace sharedWorkspace] notificationCenter]
        addObserverForName:NSWorkspaceActiveSpaceDidChangeNotification
                    object:NULL
                     queue:NULL
                usingBlock:^(NSNotification *note)
    {
        // The active space changed.
        // Now we need to detect if this is a fullscreen space.
        // Let's look at the windows...
        NSArray *windows = CFBridgingRelease(CGWindowListCopyWindowInfo(
            kCGWindowListOptionOnScreenOnly, kCGNullWindowID));
        //NSLog(@"active space change: %@", windows);

        // We detect full screen spaces by checking if there's a menubar
        // in the window list. If not, we assume it's in fullscreen mode.
        BOOL hasMenubar = NO;
        for (NSDictionary *d in windows) {
            if ([d[@"kCGWindowOwnerName"] isEqualToString:@"Window Server"]
                && [d[@"kCGWindowName"] isEqualToString:@"Menubar"]) {
                hasMenubar = YES;
                break;
            }
        }
        NSLog(@"fullscreen: %@", hasMenubar ? @"No" : @"Yes");
    }];
}

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context
{
    if ([keyPath isEqual:@"currentSystemPresentationOptions"]) {
        // A value of 4 indicates fullscreen mode.
        NSLog(@"currentSystemPresentationOptions: %@",
              [change objectForKey:NSKeyValueChangeNewKey]);
    }
}

Since my earlier answer doesn't work for detecting full screen mode in other apps, I did some experimentation. Starting from the solution that Thomas Tempelmann came up with of checking for the presence of the menu bar, I found a variation that I think is more reliable.
The problem with checking for the menu bar is that in full screen mode you can move the mouse cursor to the top of the screen to make the menu bar appear, even though you're still in full screen mode. I did some crawling through the CGWindow info and discovered that when I enter full screen, there is a window named "Fullscreen Backdrop" owned by the "Dock", and it's not there when not in full screen mode.
This is on Catalina (10.15.6) in an Xcode playground, so it should be tested in a real app, and on Big Sur (or whatever the current OS is, when you're reading this).
Here's the code (in Swift... easier to quickly test something):
func isFullScreen() -> Bool
{
    guard let windows = CGWindowListCopyWindowInfo(.optionOnScreenOnly, kCGNullWindowID) else {
        return false
    }
    for window in windows as NSArray
    {
        guard let winInfo = window as? NSDictionary else { continue }
        if winInfo["kCGWindowOwnerName"] as? String == "Dock",
           winInfo["kCGWindowName"] as? String == "Fullscreen Backdrop"
        {
            return true
        }
    }
    return false
}
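If you need to react to changes rather than poll, one option (a sketch of mine, not from the original answer) is to re-run this check from the space-change notification used in the accepted answer; startFullScreenMonitoring is a hypothetical name:
func startFullScreenMonitoring()
{
    // Re-run the window-list check whenever the active space changes
    // (fullscreen windows live on their own spaces since 10.7).
    _ = NSWorkspace.shared.notificationCenter.addObserver(
        forName: NSWorkspace.activeSpaceDidChangeNotification,
        object: nil,
        queue: .main)
    { _ in
        print("fullscreen: \(isFullScreen() ? "Yes" : "No")")
    }
}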

EDIT NOTE: This answer unfortunately doesn't provide a solution for detecting full screen in a different app, which is what the OP was asking. I'm leaving it because it does answer the question of detecting when the same app goes full screen - for example, in a generic library that needs to know when to automatically update the keyEquivalent and title of an explicitly added "Enter Full Screen" menu item rather than relying on Apple's automatically added menu item.
Although this question is quite old now, I've had to detect full screen mode in Swift recently. While it's not as simple as querying some flag in NSWindow, as we would hope for, there is an easy and reliable solution that has been available since macOS 10.7.
Whenever a window is about to enter full screen mode, NSWindow sends a willEnterFullScreenNotification notification, and when it is about to exit full screen mode, it sends willExitFullScreenNotification. So you add observers for those notifications. In the following code, I use them to set a global Boolean flag.
import Cocoa
/*
 Since notification closures can be run concurrently, we need to guard against
 races on the Boolean flag. We could use DispatchSemaphore, but it's kind of
 overkill for guarding a simple read/write to a Boolean variable.
 os_unfair_lock is appropriate for nanosecond-level contention. If the wait
 could be milliseconds or longer, DispatchSemaphore is the thing to use.
 This extension just makes the lock easier and safer to use.
*/
extension os_unfair_lock
{
    mutating func withLock<R>(block: () throws -> R) rethrows -> R
    {
        os_unfair_lock_lock(&self)
        defer { os_unfair_lock_unlock(&self) }
        return try block()
    }
}
fileprivate var fullScreenLock = os_unfair_lock()
public fileprivate(set) var isFullScreen: Bool = false
// Call this function in the app delegate's applicationDidFinishLaunching method
func initializeFullScreenDetection()
{
    _ = NotificationCenter.default.addObserver(
        forName: NSWindow.willEnterFullScreenNotification,
        object: nil,
        queue: nil)
    { _ in
        fullScreenLock.withLock { isFullScreen = true }
    }

    _ = NotificationCenter.default.addObserver(
        forName: NSWindow.willExitFullScreenNotification,
        object: nil,
        queue: nil)
    { _ in
        fullScreenLock.withLock { isFullScreen = false }
    }
}
Since observer closures can be run concurrently, I use os_unfair_lock to guard against races on the isFullScreen flag. You could use DispatchSemaphore, though it's a bit heavyweight for just guarding a Boolean flag. Back when the question was first asked, OSSpinLock would have been the equivalent, but it has been deprecated since 10.12.
Just make sure to call initializeFullScreenDetection() in your application delegate's applicationDidFinishLaunching() method.
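For completeness, here's a minimal sketch of that call site (the logFullScreenState action is purely illustrative):
import Cocoa

@main
class AppDelegate: NSObject, NSApplicationDelegate
{
    func applicationDidFinishLaunching(_ notification: Notification)
    {
        initializeFullScreenDetection()
    }

    // Illustrative: read the flag from anywhere, e.g. a menu action.
    @IBAction func logFullScreenState(_ sender: Any?)
    {
        print("isFullScreen = \(isFullScreen)")
    }
}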

NSButton subclass that responds to right clicks

I have an NSButton subclass that I would like to make work with right mouse button clicks. Just overriding -rightMouseDown: won't cut it, as I would like the same kind of behaviour as for regular clicks (e.g. the button is pushed down, the user can cancel by leaving the button, the action is sent when the mouse is released, etc.).
What I have tried so far is overriding -rightMouse{Down,Up,Dragged}, changing the events to indicate left mouse button clicks and then sending them to -mouse{Down,Up,Dragged}. Now this would clearly be a hack at best, and as it turns out Mac OS X did not like it at all. I can click the button, but upon release, the button remains pushed in.
I could mimic the behaviour myself, which shouldn't be too complicated. However, I don't know how to make the button look pushed in.
Before you say "Don't! It's an unconventional Mac OS X behaviour and should be avoided": I have considered this and a right click could vastly improve the workflow. Basically the button cycles through 4 states, and I would like a right click to make it cycle in reverse. It's not an essential feature, but it would be nice. If you still feel like saying "Don't!", then let me know your thoughts. I appreciate it!
Thanks!
EDIT: This was my attempt at changing the event (you can't change the type, so I made a new one, copying all the information across - I know this is the framework clearly telling me Don't Do This, but I gave it a go, as you do):
// I've contracted all three for brevity
- (void)rightMouse{Down,Up,Dragged}:(NSEvent *)theEvent {
    NSEvent *event = [NSEvent mouseEventWithType:NSLeftMouse{Down,Up,Dragged}
                                        location:[theEvent locationInWindow]
                                   modifierFlags:[theEvent modifierFlags]
                                       timestamp:[theEvent timestamp]
                                    windowNumber:[theEvent windowNumber]
                                         context:[theEvent context]
                                     eventNumber:[theEvent eventNumber]
                                      clickCount:[theEvent clickCount]
                                        pressure:[theEvent pressure]];
    [self mouse{Down,Up,Dragged}:event];
}
UPDATE: I noticed that -mouseUp: was never sent to NSButton, but if I changed it to an NSControl, it was. I couldn't figure out why until Francis McGrew pointed out that NSButton runs its own event-handling loop. That also explains why I could reroute -rightMouseDown: earlier yet the button wouldn't come back up on release: the button was fetching new events on its own, which I couldn't intercept and convert from right to left mouse button events.
NSButton is entering a mouse tracking loop. To change this you will have to subclass NSButton and create your own custom tracking loop. Try this code:
- (void)rightMouseDown:(NSEvent *)theEvent
{
    NSEvent *newEvent = theEvent;
    BOOL mouseInBounds = NO;
    while (YES)
    {
        mouseInBounds = NSPointInRect([newEvent locationInWindow],
                                      [self convertRect:[self frame] fromView:nil]);
        [self highlight:mouseInBounds];
        newEvent = [[self window] nextEventMatchingMask:
                                      NSRightMouseDraggedMask | NSRightMouseUpMask];
        if (NSRightMouseUp == [newEvent type])
        {
            break;
        }
    }
    if (mouseInBounds) [self performClick:nil];
}
This is how I do it; hopefully it will work for you.
I've turned a left mouse click-and-hold into a fake right mouse down on a path control. I'm not sure this will solve all your problems, but I found that the key difference when I did this was changing the timestamp:
NSEvent *event = [NSEvent mouseEventWithType:NSLeftMouseDown
                                    location:[theEvent locationInWindow]
                               modifierFlags:[theEvent modifierFlags]
                                   timestamp:CFAbsoluteTimeGetCurrent()
                                windowNumber:[theEvent windowNumber]
                                     context:[theEvent context]
                              // I was surprised to find eventNumber didn't seem to need to be faked
                                 eventNumber:[theEvent eventNumber]
                                  clickCount:[theEvent clickCount]
                                    pressure:[theEvent pressure]];
The other thing is that depending on your button type, its state may be the value that is making it appear pushed or not, so you might try poking at that.
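For example, here's a quick sketch (my names, assuming a plain NSButton) of the two knobs that affect the pushed-in look:
import Cocoa

func setPushedAppearance(_ button: NSButton, pushed: Bool)
{
    // highlight(_:) redraws the button as pressed or released
    // without changing its value.
    button.highlight(pushed)
}

func toggleStateDrivenLook(_ button: NSButton)
{
    // For push-on/push-off style button types, the state itself is
    // what makes the button draw as pushed in.
    button.state = (button.state == .on) ? .off : .on
}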
UPDATE: I think I've figured out why rightMouseUp: never gets called. Per the -[NSControl mouseDown:] docs, the button starts tracking the mouse when it gets a mouseDown event, and it doesn't stop tracking until it gets mouseUp. While it's tracking, it can't do anything else. I just tried, for example, at the end of a custom mouseDown::
[self performSelector:@selector(mouseUp:) withObject:myFakeMouseUpEvent afterDelay:1.0];
but this gets put off until a normal mouseUp: gets triggered some other way. So, if you've clicked the right mouse button, you can't (with the mouse) send a leftMouseUp, thus the button is still tracking, and won't accept a rightMouseUp event. I still don't know what the solution is, but I figured that would be useful information.
Not much to add to the answers above, but for those working in Swift, you may have trouble finding the constants for the event mask, buried deep in the documentation, and still more trouble finding a way to combine (OR) them in a way that the compiler accepts, so this may save you some time. Is there a neater way? This goes in your subclass -
// Add a new property, by analogy with the action property.
var rightAction: Selector = nil

override func rightMouseDown(var theEvent: NSEvent!) {
    var newEvent: NSEvent!
    let maskUp = NSEventMask.RightMouseUpMask.rawValue
    let maskDragged = NSEventMask.RightMouseDraggedMask.rawValue
    let mask = Int( maskUp | maskDragged )  // cast from UInt
    do {
        newEvent = window!.nextEventMatchingMask(mask)
    }
    while newEvent.type == .RightMouseDragged
My loop has become a do..while, as it always has to execute at least once; I never liked writing while true, and I don't need to do anything with the dragging events.
I had endless trouble getting meaningful results from convertRect(), perhaps because my controls were embedded in a table view. Thanks to Gustav Larsson above for the last part, which I ended up with as follows -
    let whereUp = newEvent.locationInWindow
    let p = convertPoint(whereUp, fromView: nil)
    let mouseInBounds = NSMouseInRect(p, bounds, flipped)  // bounds, not frame
    if mouseInBounds {
        // Assuming rightAction and target have been allocated.
        sendAction(rightAction, to: target)
    }
}

NSTimer - set up plain vanilla -- doesn't fire

Compiling in Xcode 3.1.1 for an OS X 10.5.8 target, 32-bit i386 build.
I have a modal run loop, running for the NSWindow wloop, which contains the NSView vloop. The modal loop is started first. It starts, runs, and stops as expected. Here's the start:
[NSApp runModalForWindow: wloop];
Then, when the user presses the left mouse button, I do this:
if (ticking == 0) // ticking is set to zero in its definition, so starts that way
{
    ticking = 1;     // don't want to do this more than once per loop
    tickCounter = 0;
    cuckCoo = [NSTimer scheduledTimerWithTimeInterval: 1.0f / 10.0f // 10x per second
                                               target: self        // method is here in masterView
                                             selector: @selector(onTick:)
                                             userInfo: nil         // not used
                                              repeats: YES];       // should repeat
}
Checking the return of the call, I do get a timer object, and can confirm that the timer call is made when I expect it to be.
Now, according to the docs, the resulting NSTimer, stored globally as "cuckCoo", should be added to the current run loop automagically. The current run loop is definitely the modal one - at this time other windows are locked out and only the window with the intended mouse action is taking messages.
The method that this calls, "onTick", is very simple (because I can't get it to fire), located in the vloop NSView code, which is where all of this is going on:
- (void)onTick:(NSTimer *)theTimer
{
    tickCounter += 1;
    NSLog(@"Timer started");
}
Then when it's time to stop the modal loop (which works fine, btw), I do this:
[cuckCoo invalidate];
[NSApp stop: nil];
ticking = 0;
cuckCoo = NULL;
NSLog(@"tickCounter=%ld", tickCounter);
ticking and tickCounter are both global longs.
I don't get the NSLog message from within onTick, and tickCounter remains at zero as reported by the NSLog at the close of the runloop.
All this compiles and runs fine. I just never get any ticks. I'm at a complete loss. Any ideas, anyone?
The problem is related to this statement "The current run loop is definitely the modal one". In Cocoa, each thread has at most one runloop, and each runloop can be run in a variety of "modes". Typical modes are default, event tracking, and modal. Default is the mode the loop normally runs in, while event tracking is typically used to track a drag session of the mouse, and modal is used for things like modal panels.
When you invoke -[NSTimer scheduledTimerWithTimeInterval:target:selector:userInfo:repeats:] it does schedule the timer immediately, but it only schedules it for the default runloop mode, not the modal runloop mode. The idea behind this is that the app generally shouldn't continue to run behind a modal panel.
To create a timer that fires during a modal runloop, you can use -[NSTimer initWithFireDate:interval:target:selector:userInfo:repeats:] and then -[NSRunLoop addTimer:forMode:].
The answer specific to...
[NSApp runModalForWindow: wloop];
...is, after the modal run loop has been entered:
NSRunLoop *crl;
cuckCoo = [NSTimer timerWithTimeInterval: 1.0 / 10
                                  target: self
                                selector: @selector(onTick:)
                                userInfo: nil
                                 repeats: YES];
crl = [NSRunLoop currentRunLoop];
[crl addTimer: cuckCoo forMode: NSModalPanelRunLoopMode];
(crl is obtained separately for clarity.) The onTick method has the form:
- (void)onTick:(NSTimer *)theTimer
{
    // do something tick-tocky
}

Programmatically open Mac Help menu

I'm integrating a GTK# application into Mac OS X. GTK on Mac OS X is a wrapper over some Cocoa and Carbon fundamentals. We have some platform-specific stuff directly using Carbon global menu APIs (it's more low-level and flexible than Cocoa, and we don't need to be 64-bit).
It seems that GTK swallows up all the keyboard events before Carbon dispatches them as commands. This makes sense, because there is no mapping of Carbon commands into the GTK world. In general, this isn't a problem, because we have a global key event handler and dispatch everything via our own command system. However, this seems to be preventing Cmd-? from opening the Help search menu, and I cannot find a way to do this programmatically.
Menu Manager's MenuSelect function is promising, but I haven't figured out a way to determine the coordinate automatically, and for some reason it only works when I hit the combination twice...
Alternatively, a way to dispatch the Cmd-? keystroke to Carbon's command handling or synthesize the command event directly would be good, but I haven't had any luck in that area.
Carbon's ProcessHICommand isn't any use without a command ID, and I can't figure out what it is (if there is one).
Regarding Cocoa, I can get hold of the NSWindow and call InterpretKeyEvents, but I haven't had any success synthesizing the NSEvent - it just beeps. The event I'm using is:
var evt = NSEvent.KeyEvent (NSEventType.KeyDown, System.Drawing.PointF.Empty,
                            NSEventModifierMask.CommandKeyMask | NSEventModifierMask.ShiftKeyMask,
                            0, win.WindowNumber, NSGraphicsContext.CurrentContext, "?", "?",
                            false, (ushort) keycode);
Keycode is determined from a GTK keymap to be 44. I confirmed that the keycode was correct using a plain MonoMac (Cocoa) app but InterpretKeyEvents did not work with the event in that app either. And I can't find any selector associated with the command.
You can use accessibility APIs to fake a press on the menu item.
// Find our app's Help menu title so we can match it below.
NSString *helpMenuTitle = [[[[NSApplication sharedApplication] mainMenu]
                               itemWithTag:HELP_MENU_TAG] title];

// Get the accessibility element for our own menu bar.
AXUIElementRef appElement = AXUIElementCreateApplication(getpid());
AXUIElementRef menuBar;
AXError error = AXUIElementCopyAttributeValue(appElement,
                                              kAXMenuBarAttribute,
                                              (CFTypeRef *)&menuBar);
if (error) {
    return;
}

CFIndex count = -1;
error = AXUIElementGetAttributeValueCount(menuBar, kAXChildrenAttribute, &count);
if (error) {
    CFRelease(menuBar);
    return;
}

NSArray *children = nil;
error = AXUIElementCopyAttributeValues(menuBar, kAXChildrenAttribute, 0, count,
                                       (CFArrayRef *)&children);
if (error) {
    CFRelease(menuBar);
    return;
}

// Walk the top-level menus until we find the one whose title matches
// our Help menu, then fake a press on it.
for (id child in children) {
    AXUIElementRef element = (AXUIElementRef)child;
    id title;
    AXError error = AXUIElementCopyAttributeValue(element,
                                                  kAXTitleAttribute,
                                                  (CFTypeRef *)&title);
    if ([title isEqualToString:helpMenuTitle]) {
        AXUIElementPerformAction(element, kAXPressAction);
        CFRelease(title);
        break;
    }
    CFRelease(title);
}

CFRelease(menuBar);
[children release];
Alternatively, you could call an AppleScript (GUI scripting) script from C / Objective-C that essentially does the same pointing and clicking a user would do, to open the Help menu programmatically.
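A sketch of that approach from Swift (assuming the app has been granted Accessibility/GUI-scripting permission; the process name and the "Help" menu title are assumptions and may need localizing):
import Cocoa

func openHelpMenuViaGUIScripting(processName: String)
{
    // GUI-script System Events to click the Help menu, just as a user would.
    let source = """
        tell application "System Events"
            tell process "\(processName)"
                click menu bar item "Help" of menu bar 1
            end tell
        end tell
        """
    var error: NSDictionary?
    _ = NSAppleScript(source: source)?.executeAndReturnError(&error)
    if let error = error {
        NSLog("GUI scripting failed: %@", error)
    }
}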

Show NSSegmentedControl menu when segment clicked, despite having set action

I have an NSSegmentedControl on my UI with 4 buttons. The control is connected to a method that will call different methods depending on which segment is clicked:
- (IBAction)performActionFromClick:(id)sender {
    NSInteger selectedSegment = [sender selectedSegment];
    NSInteger clickedSegmentTag = [[sender cell] tagForSegment:selectedSegment];
    switch (clickedSegmentTag) {
        case 0: [self showNewEventWindow:nil];  break;
        case 1: [self showNewTaskWindow:nil];   break;
        case 2: [self toggleTaskSplitView:nil]; break;
        case 3: [self showGearMenu];            break;
    }
}
The fourth segment has a menu attached to it in the awakeFromNib method. I'd like this menu to drop down when the user clicks the segment. At this point, it will only drop down if the user clicks and holds on the segment. From my research online, this is because of the connected action.
I'm presently working around it by using some code to get the origin point of the segmented control and popping up the context menu using NSMenu's popUpContextMenu:withEvent:forView:, but this is pretty hacktastic and looks bad compared to the standard behavior of having the menu drop down below the segmented control cell.
Is there a way I can have the menu drop down as it should after a single click rather than doing the hacky context menu thing?
Subclass NSSegmentedCell, override method below, and replace the cell class in IB. (Requires no private APIs).
- (SEL)action
{
    // this allows connected menu to popup instantly (because no action is returned for menu button)
    if ([self tagForSegment:[self selectedSegment]] == 0) {
        return nil;
    } else {
        return [super action];
    }
}
I'm not sure of any built-in way to do this (though it really is a glaring hole in the NSSegmentedControl API).
My recommendation is to continue doing what you're doing, popping up the context menu. However, instead of just using the segmented control's origin, you can position the menu directly under the segment (like you want) by doing the following:
NSPoint menuOrigin = [segmentedControl frame].origin;
// Segment indices are zero-based, so the last of the four segments is 3.
menuOrigin.x = NSMaxX([segmentedControl frame]) - [segmentedControl widthForSegment:3];
// Use menuOrigin where you _were_ just using [segmentedControl frame].origin
It's not perfect or ideal, but it should get the job done and give the appearance/behavior your users expect.
(as an aside, NSSegmentedControl really needs a -rectForSegment: method)
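On 10.6 and later, NSMenu's popUp(positioning:at:in:) is another way to do the same workaround: it anchors the menu at an arbitrary point in a view, so it can sit right below the segment. A sketch with illustrative names (the y coordinate assumes a flipped coordinate system):
import Cocoa

func showGearMenu(_ menu: NSMenu, below control: NSSegmentedControl)
{
    let last = control.segmentCount - 1
    // Approximate the last segment's x origin; note that
    // width(forSegment:) returns 0 for auto-sized segments.
    let x = control.bounds.maxX - control.width(forSegment: last)
    let origin = NSPoint(x: x, y: control.bounds.maxY)
    _ = menu.popUp(positioning: nil, at: origin, in: control)
}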
This is the Swift version of the answer by J Hoover and the mod by Adam Treble. The override was not as intuitive as I thought it would be, so this will hopefully help someone else.
override var action: Selector {
    get {
        if self.menuForSegment(self.selectedSegment) != nil {
            return nil
        }
        return super.action
    }
    set {
        super.action = newValue
    }
}
widthForSegment: returns zero if the segment auto-sizes. If you're not concerned about undocumented APIs, there is a rectForSegment:inFrame: method:
- (NSRect)rectForSegment:(NSInteger)segment
                 inFrame:(NSRect)frame;
But to answer the original question, an easier way to get the menu to pop up immediately is to subclass NSSegmentedCell and return 0 from the (again, undocumented):
- (double)_menuDelayTimeForSegment:(NSInteger)segment;

Distinguishing a single click from a double click in Cocoa on the Mac

I have a custom NSView (it's one of many and they all live inside an NSCollectionView — I don't think that's relevant, but who knows). When I click the view, I want it to change its selection state (and redraw itself accordingly); when I double-click the view, I want it to pop up a larger preview window for the object that was just double-clicked.
My first attempt looked like this:
- (void)mouseUp:(NSEvent *)theEvent {
    if ([theEvent clickCount] == 1) {
        [model setIsSelected:![model isSelected]];
    } else if ([theEvent clickCount] == 2) {
        if ([model hasBeenDownloaded]) {
            [mainWindowController showPreviewWindowForPicture:model];
        }
    }
}
which mostly worked fine. Except, when I double-click the view, the selection state changes and the window pops up. This is not exactly what I want.
It seems like I have two options. I can either revert the selection state when responding to a double-click (undoing the errant single-click) or I can finagle some sort of NSTimer solution to build in a delay before responding to the single click. In other words, I can make sure that a second click is not forthcoming before changing the selection state.
This seemed more elegant, so it was the approach I took at first. The only real guidance I found from Google was on an unnamed site with a hyphen in its name. This approach mostly works with one big caveat.
The outstanding question is "How long should my NSTimer wait?". The unnamed site suggests using the Carbon function GetDblTime(). Aside from being unusable in 64-bit apps, the only documentation I can find for it says that it returns clock ticks, and I don't know how to convert those into seconds for NSTimer.
So what's the "correct" answer here? Fumble around with GetDblTime()? "Undo" the selection on a double-click? I can't figure out the Cocoa-idiomatic approach.
Delaying the changing of the selection state is (from what I've seen) the recommended way of doing this.
It's pretty simple to implement:
- (void)mouseUp:(NSEvent *)theEvent
{
    if ([theEvent clickCount] == 1) {
        [model performSelector:@selector(toggleSelectedState)
                    withObject:nil
                    afterDelay:[NSEvent doubleClickInterval]];
    }
    else if ([theEvent clickCount] == 2) {
        if ([model hasBeenDownloaded]) {
            [NSObject cancelPreviousPerformRequestsWithTarget:model];
            [mainWindowController showPreviewWindowForPicture:model];
        }
    }
}
(Notice that in 10.6, the double click interval is accessible as a class method on NSEvent)
If your single-click and double-click operations are really separate and unrelated, you need to use a timer on the first click and wait to see if a double-click is going to happen. That is true on any platform.
But that introduces an awkward delay in your single-click operation that users typically don't like. So you don't see that approach used very often.
A better approach is to have your single-click and double-click operations be related and complementary. For example, if you single-click an icon in Finder it is selected (immediately), and if you double-click an icon it is selected and opened (immediately). That is the behavior you should aim for.
In other words, the consequences of a single-click should be related to your double-click command. That way, you can deal with the effects of the single-click in your double-click handler without having to resort to using a timer.
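A sketch of that pattern applied to the OP's view (the model handling is reduced to an isSelected flag, and openPreview is a placeholder): the double-click branch simply builds on whatever the first click already did, so no timer is involved:
import Cocoa

class PictureItemView: NSView
{
    var isSelected = false

    override func mouseUp(with event: NSEvent)
    {
        if event.clickCount == 1 {
            // First click: toggle the selection immediately.
            isSelected.toggle()
            needsDisplay = true
        } else if event.clickCount == 2 {
            // Second click: ensure selection, then open. This absorbs
            // the first click's toggle instead of undoing it with a timer.
            isSelected = true
            needsDisplay = true
            openPreview()
        }
    }

    func openPreview()
    {
        // Show the larger preview window here.
    }
}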
Personally, I think you need to ask yourself why you want this non-standard behaviour.
Can you point to any other application which treats the first click in a double-click as being different from a single-click? I can't think of any...
Add two properties to your custom view.
// CustomView.h
@interface CustomView : NSView {
    @protected
    id m_target;
    SEL m_doubleAction;
}
@property (readwrite) id target;
@property (readwrite) SEL doubleAction;
@end
Override the mouseUp: method in your custom view.
// CustomView.m
#pragma mark - MouseEvents

- (void)mouseUp:(NSEvent *)event {
    if (event.clickCount == 2) {
        if (m_target && m_doubleAction && [m_target respondsToSelector:m_doubleAction]) {
            [m_target performSelector:m_doubleAction];
        }
    }
}
Register your controller as the target with a doubleAction.
// CustomController.m
- (id)init {
    self = [super init];
    if (self) {
        // Register self for double click events.
        [(CustomView *)m_myView setTarget:self];
        [(CustomView *)m_myView setDoubleAction:@selector(doubleClicked:)];
    }
    return self;
}
Implement what should be done when a double click happens.
// CustomController.m
- (void)doubleClicked:(id)sender {
    // DO SOMETHING.
}
@Dave DeLong's solution in Swift 4.2 (Xcode 10, macOS 10.13), amended for use with event.location(in: view):
var singleClickPoint: CGPoint?

override func mouseDown(with event: NSEvent) {
    singleClickPoint = event.location(in: self)
    perform(#selector(GameScene.singleClickAction), with: nil,
            afterDelay: NSEvent.doubleClickInterval)
    if event.clickCount == 2 {
        RunLoop.cancelPreviousPerformRequests(withTarget: self)
        singleClickPoint = nil
        // do whatever you want on double-click
    }
}

@objc func singleClickAction() {
    guard let singleClickPoint = singleClickPoint else { return }
    // do whatever you want on single-click
}
The reason I'm not using singleClickAction(at point: CGPoint) and calling it with event.location(in: self) is that any point I pass in - including CGPoint.zero - ends up arriving in singleClickAction as (0.0, 9.223372036854776e+18). I will be filing a radar for that, but for now, bypassing the perform argument is the way to go. (Other objects seem to work just fine, but CGPoints do not.)
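If you do want to hand the point through perform, one workaround to try (an untested assumption on my part): perform(_:with:afterDelay:) passes its argument as an object, so boxing the struct in an NSValue should let it survive the trip:
import Cocoa

extension NSView
{
    @objc func singleClick(at value: NSValue)
    {
        // Recover the original point from the box.
        let point = value.pointValue
        print("single click at \(point)")
    }

    func scheduleSingleClick(with event: NSEvent)
    {
        let boxed = NSValue(point: convert(event.locationInWindow, from: nil))
        perform(#selector(singleClick(at:)),
                with: boxed,
                afterDelay: NSEvent.doubleClickInterval)
    }
}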
