NSScreen visibleFrame only accounting for menu bar area on main screen

I noticed that NSScreen's visibleFrame method isn't subtracting the menu bar dimensions on my non-main screens. Say I have the following code:
DB("Cocoa NSScreen rects:");
NSArray *screens = [NSScreen screens];
for(NSUInteger i = 0; i < [screens count]; ++i) {
NSScreen *screen = [screens objectAtIndex:i];
CGRect r = [screen visibleFrame];
const char *suffix = "";
if(screen == [NSScreen mainScreen])
suffix = " (main screen)";
DB(" %lu. (%.2f, %.2f) + (%.2f x %.2f)%s", (unsigned long)i, r.origin.x, r.origin.y, r.size.width, r.size.height, suffix);
}
I run it on my Mac, which has a menu bar on every monitor. I then get the following output:
Cocoa NSScreen rects:
0. (4.00, 0.00) + (1276.00 x 777.00) (main screen)
1. (3200.00, 9.00) + (1200.00 x 1920.00)
2. (1280.00, 800.00) + (1920.00 x 1200.00)
The size of the menu bar and (hidden) dock appears to have been correctly subtracted from the main screen's visible area - but the menu bars on my external monitors have not been accounted for! (Assuming the menu bar is 23 pixels high on every screen, I would expect screen 1 to be something like 1200x1897 and screen 2 to be around 1920x1177.)
Aside from wondering how big the screen is - and there you'll just have to trust me, I'm afraid! - what am I doing wrong? How do I get accurate screen bounds?
(OS X Yosemite 10.10.3)

Until the program creates an NSWindow - which this program, as far as I can tell, never does - the reported screen bounds appear to be inaccurate. So, before the program fetches the screen bounds, it now runs this bit of code:
if(!ever_created_hack_window) {
    NSWindow *window = [[NSWindow alloc] initWithContentRect:NSMakeRect(0, 0, 100, 100)
                                                   styleMask:NSTitledWindowMask
                                                     backing:NSBackingStoreBuffered
                                                       defer:YES
                                                      screen:nil];
    [window release];
    window = nil;

    ever_created_hack_window = YES;
}
(ever_created_hack_window is just a global BOOL.)
Once this has been done, I get the screen dimensions I expect:
0. (4.00, 0.00) + (1276.00 x 777.00) (main screen)
1. (3200.00, 9.00) + (1200.00 x 1897.00)
2. (1280.00, 800.00) + (1920.00 x 1177.00)
Additionally, it now correctly picks up changes in main screen.
(This could be stuff that is set up by calling NSApplicationMain. This program doesn't do that, either.)

Related

NSScreen get the projector/TV Out/AirPlay screen?

What is the best way to get the NSScreen instance that is most likely to be a projector or AirPlay display (or even TV-Out)? I'm writing presentation software and need to know which screen most likely represents the "presentation" screen.
Some options came to mind:
A. Use the second screen if there is one. Of course this won't give good results if there are more than two screens attached.
NSScreen *projectorScreen = [NSScreen mainScreen];
NSArray *screens = [NSScreen screens];
if(screens.count > 1) {
    projectorScreen = screens[1];
}
B. Use the first screen if it's not the main screen. The reasoning is that in mirroring cases, the first screen should be the one with the highest pixel depth.
NSScreen *projectorScreen = [NSScreen mainScreen];
NSArray *screens = [NSScreen screens];
if(screens.count > 1) {
    if(screens[0] != projectorScreen) {
        projectorScreen = screens[0];
    }
}
C. Use the lowest-indexed screen that is not the main screen. The rationale is just to choose any screen that is not the main screen.
NSScreen *projectorScreen = [NSScreen mainScreen];
NSArray *screens = [NSScreen screens];
if(screens.count > 1) {
    for(NSScreen *screen in screens) {
        if(screen != projectorScreen) {
            projectorScreen = screen;
            break;
        }
    }
}
D. Use NSScreen's deviceDescription dictionary and find the biggest screen in real-world coordinates. That is, divide NSDeviceSize's width and height by NSDeviceResolution; theoretically this should yield dimensions in inches, and hence an area in square inches. However, I'm not fully convinced that the OS knows the real-world size of each screen. A sketch of this option is below.
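For concreteness, here is what (D) might look like. This is only an illustrative sketch, assuming NSDeviceResolution reports dots per inch; the names are mine:

// Option D sketch (hypothetical): estimate each screen's physical area
// from NSDeviceSize (pixels) and NSDeviceResolution (dots per inch).
NSScreen *biggestScreen = nil;
CGFloat biggestArea = 0;
for(NSScreen *screen in [NSScreen screens]) {
    NSDictionary *desc = [screen deviceDescription];
    NSSize pixels = [[desc objectForKey:NSDeviceSize] sizeValue];
    NSSize dpi = [[desc objectForKey:NSDeviceResolution] sizeValue];
    CGFloat area = (pixels.width / dpi.width) * (pixels.height / dpi.height); // square inches
    if(area > biggestArea) {
        biggestArea = area;
        biggestScreen = screen;
    }
}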
Any other suggestions?
Granted, there isn't any 100% correct heuristic, but then again, picking the correct screen most of the time should be sufficient.
Looks like option (D) is the best, with some changes. Apparently OS X has a pretty good idea about the real-world size of the display and you can get it via CGDisplayScreenSize. It's then pretty straightforward to pick the largest one and assume that's the presentation screen.
Granted, this doesn't accurately measure projectors, but my informal testing shows that the function returns pretty good pixels-per-inch values for each screen:
Macbook Air 13": {290, 180} mm, 126 ppi
Apple Cinema Display: {596, 336} mm, 109 ppi
An Epson Overhead Projector: {799, 450} mm, 61 ppi
(the above were converted with a constant 25.4 millimeters per inch).
Here's the code that I used:
#import <Foundation/Foundation.h>
#import <AppKit/AppKit.h>
#import <ApplicationServices/ApplicationServices.h>

int main(int argc, char *argv[]) {
    @autoreleasepool {
        NSArray *screens = [NSScreen screens];
        __block CGFloat biggestArea = 0;
        __block NSScreen *presentationScreen = nil;
        __block NSUInteger presentationScreenIndex = 0;
        [screens enumerateObjectsUsingBlock:^(NSScreen *screen, NSUInteger idx, BOOL *stop) {
            NSDictionary *description = [screen deviceDescription];
            NSSize displayPixelSize = [[description objectForKey:NSDeviceSize] sizeValue];
            CGSize displayPhysicalSize = CGDisplayScreenSize(
                [[description objectForKey:@"NSScreenNumber"] unsignedIntValue]);
            NSLog(@"Screen %d Physical Size: %@ ppi is %0.2f", (int)idx, NSStringFromSize(displayPhysicalSize),
                  (displayPixelSize.width / displayPhysicalSize.width) * 25.4f); // there being 25.4 mm in an inch
            CGFloat screenArea = displayPhysicalSize.width * displayPhysicalSize.height;
            if(screenArea > biggestArea) {
                presentationScreen = screen;
                biggestArea = screenArea;
                presentationScreenIndex = idx;
            }
        }];
        NSLog(@"Presentation screen: index: %d %@", (int)presentationScreenIndex, presentationScreen);
    }
    return 0;
}

ConvertRect:toView: gives bad rect when app dragged to second monitor

I'm trying to take a screenshot of a view.
I'm using the code at the following link, which is working fine with one exception:
cocoa: how to render view to image?
The problem I have is that if I drag the application window to my second monitor the screen capture grabs the wrong rect. Essentially the rect has been displaced vertically, or is possibly using an origin in the top left rather than bottom left.
The odd thing is that the app works fine on the launch monitor, but when I drag it to the second monitor (without closing and restarting the app) the rect capture goes wrong. If I drag the app back to the launch monitor everything starts working again.
The primary monitor and secondary monitor have different resolutions.
The code that converts the rect is as follows:
NSRect originRect = [aView convertRect:[aView bounds] toView:[[aView window] contentView]];
NSRect rect = originRect;
rect.origin.y = 0;
rect.origin.x += [aView window].frame.origin.x;
rect.origin.y += [[aView window] screen].frame.size.height - [aView window].frame.origin.y - [aView window].frame.size.height;
rect.origin.y += [aView window].frame.size.height - originRect.origin.y - originRect.size.height;
Does anyone know why this is calculating correctly on the launch monitor, but miscalculating on secondary monitors?
The problem must be related to the different resolutions, but I can't see why the call to convertRect:toView (or subsequent calculations) isn't working.
BTW, I'm developing this on 10.8.4 and targeting 10.7.
Thanks
Darren.
The issue was that the screen size was always being taken from the current monitor, when it should have been taken from the primary monitor.
rect.origin.y += [[aView window] screen].frame.size.height - [aView window].frame.origin.y - [aView window].frame.size.height;
rect.origin.y += [aView window].frame.size.height - originRect.origin.y - originRect.size.height;
replaced with:
NSArray *screens = [NSScreen screens];
NSScreen *primaryScreen = [screens objectAtIndex:0];
rect.origin.y = primaryScreen.frame.size.height - [aView window].frame.origin.y - originRect.origin.y - originRect.size.height;
I have added a full answer to the original post I referenced at the top of this one:
cocoa: how to render view to image?
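For reference, the view-to-image technique in that linked post boils down to AppKit's display-caching API. A minimal sketch (my paraphrase, with a hypothetical helper name, not the exact answer there):

#import <AppKit/AppKit.h>

// Hypothetical helper: render aView into a bitmap using AppKit's
// display-caching API, then write it out as a PNG.
static void SaveViewSnapshot(NSView *aView, NSString *path) {
    NSRect bounds = [aView bounds];
    NSBitmapImageRep *rep = [aView bitmapImageRepForCachingDisplayInRect:bounds];
    [aView cacheDisplayInRect:bounds toBitmapImageRep:rep];
    NSData *png = [rep representationUsingType:NSPNGFileType properties:nil];
    [png writeToFile:path atomically:YES];
}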

How to accept a mouse click for one portion and let clicks through the rest of an NSWindow

I have the code below in a subclassed NSWindow. There is an animated view which can be scaled, and I want to accept a click when it lands on the right spot and reject it (click through) when it is outside.
The code below works nicely, except that the window does not let clicks through.
- (void)mouseDragged:(NSEvent *)theEvent {
    if (allowDrag) {
        NSRect screenVisibleFrame = [[NSScreen mainScreen] visibleFrame];
        NSRect windowFrame = [self frame];
        NSPoint newOrigin = windowFrame.origin;

        // Get the mouse location in window coordinates.
        NSPoint currentLocation = [theEvent locationInWindow];

        // Update the origin with the difference between the new mouse location and the old mouse location.
        newOrigin.x += (currentLocation.x - initialMouseLocation.x);
        newOrigin.y += (currentLocation.y - initialMouseLocation.y);

        if ((newOrigin.y + windowFrame.size.height) > (screenVisibleFrame.origin.y + screenVisibleFrame.size.height)) {
            newOrigin.y = screenVisibleFrame.origin.y + (screenVisibleFrame.size.height - windowFrame.size.height);
        }

        // Move the window to the new location
        [self setFrameOrigin:newOrigin];
    }
}

- (void)mouseDown:(NSEvent *)theEvent
{
    screenResolution = [[NSScreen mainScreen] frame];
    initialMouseLocation = [theEvent locationInWindow];

    float scale = [[NSUserDefaults standardUserDefaults] floatForKey:@"widgetScale"] / 100;
    float pX = initialMouseLocation.x;
    float pY = initialMouseLocation.y;
    float fX = self.frame.size.width;
    float fY = self.frame.size.height;
    if (pX > (fX - fX*scale)/2 && pX < (fX + fX*scale)/2 && pY > (fY + fY*scale)/2) {
        allowDrag = YES;
    } else {
        allowDrag = NO;
    }
}
In Cocoa, you have two basic choices: 1) you can make a whole window pass clicks through with [window setIgnoresMouseEvents:YES], or 2) you can make parts of your window transparent and clicks will pass through by default.
The limitation is that the window server makes the decision about which app to deliver the event to once. After it has delivered the event to your app, there is no way to make it take the event back and deliver it to another app.
One possible solution might be to use Quartz Event Taps. The idea is that you make your window ignore mouse events, but set up an event tap that will see all events for the login session. If you want to make an event that's going through your window actually stop at your window, you process it manually and then discard it. You don't let the event continue on to the app it would otherwise reach. I expect that this would be very tricky to do right. For example, you don't want to intercept events for another app's window that is in front of yours.
If at all possible, I recommend that you use the Cocoa-supported techniques. I would think you would only want clicks to go through your window where it's transparent anyway, since otherwise how would the user know what they are clicking on?
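To make the event-tap idea concrete, here is a hypothetical sketch; the function names and the hot-region rect are mine, not from the answer. It watches session-wide left-mouse-down events and returns NULL to swallow the ones it claims. Note that creating this kind of tap generally requires accessibility/assistive access to be granted:

#import <ApplicationServices/ApplicationServices.h>

static CGRect gHotRegion; // set to your clickable area, in global (top-left origin) coordinates

// Claim left-clicks inside the hot region; pass everything else through.
// Returning NULL from the callback discards the event.
static CGEventRef ClickTapCallback(CGEventTapProxy proxy, CGEventType type,
                                   CGEventRef event, void *refcon) {
    CGPoint where = CGEventGetLocation(event);
    if(CGRectContainsPoint(gHotRegion, where)) {
        // Handle the click ourselves, then swallow it so no other app sees it.
        return NULL;
    }
    return event;
}

static void InstallClickTap(void) {
    CFMachPortRef tap = CGEventTapCreate(kCGSessionEventTap,
                                         kCGHeadInsertEventTap,
                                         kCGEventTapOptionDefault,
                                         CGEventMaskBit(kCGEventLeftMouseDown),
                                         ClickTapCallback, NULL);
    if(!tap)
        return; // tap creation fails without the appropriate permission

    CFRunLoopSourceRef source = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopCommonModes);
    CGEventTapEnable(tap, true);
    CFRelease(source);
}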
Create a transparent overlay child window to accept control, and make the main window ignore mouse events with -setIgnoresMouseEvents:YES, as Ken directed. A sketch of the arrangement is below.
I used this trick in my app, "Overlay".
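A rough sketch of that arrangement (a hypothetical helper of mine; hotRect stands in for whatever part of the widget should be clickable):

#import <AppKit/AppKit.h>

// The main window ignores clicks entirely, and a transparent borderless
// child window over hotRect (screen coordinates) accepts them instead.
static NSWindow *AddClickableOverlay(NSWindow *mainWindow, NSRect hotRect) {
    [mainWindow setIgnoresMouseEvents:YES];

    NSWindow *overlay = [[NSWindow alloc] initWithContentRect:hotRect
                                                    styleMask:NSBorderlessWindowMask
                                                      backing:NSBackingStoreBuffered
                                                        defer:NO];
    [overlay setOpaque:NO];
    [overlay setBackgroundColor:[NSColor clearColor]];
    [overlay setLevel:[mainWindow level]];
    [mainWindow addChildWindow:overlay ordered:NSWindowAbove];
    return overlay;
}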

NSScreen visibleFrame not subtracting menu bar area

I am using the visibleFrame method of NSScreen, and it appears that it is not subtracting the menu bar area from the visible rectangle (the Dock is positioned at the bottom of the screen). This is evident from the output of the code below:
NSRect visibleFrame = [screenInfo visibleFrame];
NSLog(@"\nx=%f , y=%f\nw=%f , h=%f", visibleFrame.origin.x, visibleFrame.origin.y, visibleFrame.size.width, visibleFrame.size.height);

NSRect screenFrame1 = [screenInfo frame];
NSLog(@"\nx=%f , y=%f\nw=%f , h=%f", screenFrame1.origin.x, screenFrame1.origin.y, screenFrame1.size.width, screenFrame1.size.height);
and the Output is as below:
Visible Rect
x=0.000000 , y=80.000000
w=1920.000000 , h=1000.000000
Screen Rect
x=0.000000 , y=0.000000
w=1920.000000 , h=1080.000000
We can infer from the above output that the height of the Dock is 80 (because the Y coordinate of the origin of the visible rectangle is 80). So the height of the visible rectangle is supposed to be (height of screen - height of Dock - height of menu bar), which comes out to 1080 - 80 - height of menu bar, and that should be less than 1000 under any circumstances. But as seen in the output above, it is exactly 1000. This means that the height of the menu bar has not been subtracted. Is this a bug in visibleFrame, or am I making a mistake somewhere?
Figured out the answer myself. I was using [[NSScreen alloc] init]; to get the NSScreen object, whereas I should have used [NSScreen mainScreen];
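For illustration, a corrected version of the lookup (my sketch; note that mainScreen is the screen containing the key window, while the first element of [NSScreen screens] is the one carrying the menu bar):

// Use an NSScreen that actually corresponds to a display; a bare
// [[NSScreen alloc] init] is not attached to any display.
NSScreen *screenInfo = [NSScreen mainScreen]; // or [[NSScreen screens] objectAtIndex:0]
NSLog(@"visible: %@ full: %@",
      NSStringFromRect([screenInfo visibleFrame]),
      NSStringFromRect([screenInfo frame]));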

Cocoa Move Mouse

I'm writing a Mac OS X application on Snow Leopard. I have a step method which is fired at a regular interval by an NSTimer. In this method I would like to move the mouse to the center of the screen, with no buttons being pressed or released. Here's what I have:
-(void) step: (NSTimer *) timer
{
    NSRect bounds = [self bounds];
    CGPoint point = CGPointMake(bounds.origin.x + bounds.size.width / 2.0f, bounds.origin.y + bounds.size.height / 2.0f);
    CGEventCreateMouseEvent(NULL, kCGEventLeftMouseDragged, point, 0);
}
This doesn't do anything. Can somebody tell me what's wrong?
It sounds like CGWarpMouseCursorPosition is precisely what you're after (it moves the pointer without generating events - see the Quartz Display Services Reference for more info).
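For illustration, the step: method rewritten around that call. This is a sketch, assuming the goal is the center of the main display rather than the view's own bounds:

-(void) step: (NSTimer *) timer
{
    // CGWarpMouseCursorPosition takes Quartz global coordinates, whose
    // origin is the top-left of the main display, so compute the center
    // from CGDisplayBounds rather than from the view's bounds.
    CGRect displayBounds = CGDisplayBounds(CGMainDisplayID());
    CGPoint center = CGPointMake(CGRectGetMidX(displayBounds), CGRectGetMidY(displayBounds));
    CGWarpMouseCursorPosition(center);

    // Re-associate the cursor with the mouse so physical movement isn't
    // suppressed immediately after the warp.
    CGAssociateMouseAndMouseCursorPosition(true);
}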
