Cocoa Move Mouse

I'm writing a Mac OS X application on Snow Leopard. I have a step method which is fired at a regular interval by an NSTimer. In this method I would like to move the mouse to the center of the screen, with no buttons being pressed or released. Here's what I have:
-(void) step: (NSTimer *) timer
{
    NSRect bounds = [self bounds];
    CGPoint point = CGPointMake(bounds.origin.x + bounds.size.width / 2.0f,
                                bounds.origin.y + bounds.size.height / 2.0f);
    CGEventCreateMouseEvent(NULL, kCGEventLeftMouseDragged, point, 0);
}
This doesn't do anything. Can somebody tell me what's wrong?

It sounds like CGWarpMouseCursorPosition is precisely what you're after: it moves the pointer without generating events (see the Quartz Display Services Reference for more info). Incidentally, your original code does nothing because CGEventCreateMouseEvent only creates the event; it is never posted with CGEventPost, and it also leaks, since it is never released.
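For illustration, here is a minimal sketch of the step: method using CGWarpMouseCursorPosition, assuming "the center of the screen" means the center of the main display (the original code used the view's bounds, which would need converting to global coordinates first):

#import <ApplicationServices/ApplicationServices.h>

-(void) step: (NSTimer *) timer
{
    // Center of the main display in global (top-left-origin) coordinates.
    CGRect display = CGDisplayBounds(CGMainDisplayID());
    CGPoint center = CGPointMake(CGRectGetMidX(display), CGRectGetMidY(display));
    // Warps the pointer without synthesizing any button press/release events.
    CGWarpMouseCursorPosition(center);
}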

Related

NSScreen visibleFrame only accounting for menu bar area on main screen

I noticed that NSScreen's visibleFrame method isn't subtracting the menu bar dimensions on my non-main screens. Say I have the following code:
DB("Cocoa NSScreen rects:");
NSArray *screens = [NSScreen screens];
for(NSUInteger i = 0; i < [screens count]; ++i) {
NSScreen *screen = [screens objectAtIndex:i];
CGRect r = [screen visibleFrame];
const char *suffix = "";
if(screen == [NSScreen mainScreen])
suffix = " (main screen)";
DB(" %lu. (%.2f, %.2f) + (%.2f x %.2f)%s", (unsigned long)i, r.origin.x, r.origin.y, r.size.width, r.size.height, suffix);
}
I run it on my Mac, which has a menu bar on every monitor. I then get the following output:
Cocoa NSScreen rects:
0. (4.00, 0.00) + (1276.00 x 777.00) (main screen)
1. (3200.00, 9.00) + (1200.00 x 1920.00)
2. (1280.00, 800.00) + (1920.00 x 1200.00)
The size of the menu bar and (hidden) Dock appears to have been correctly subtracted from the main screen's visible area - but the menu bars on my external monitors have not been accounted for! (Assuming the menu bar is 23 pixels high on every screen, I would expect screen 1 to be something like 1200x1897 and screen 2 to be around 1920x1177.)
Aside from wondering how big the screen is - and there you'll just have to trust me, I'm afraid! - what am I doing wrong? How do I get accurate screen bounds?
(OS X Yosemite 10.10.3)
Until the program creates an NSWindow - which this program, as far as I can tell, never does - the reported screen bounds appear to be inaccurate. So, before the program fetches the screen bounds, it now runs this bit of code:
if(!ever_created_hack_window) {
    NSWindow *window = [[NSWindow alloc] initWithContentRect:NSMakeRect(0,0,100,100)
                                                   styleMask:NSTitledWindowMask
                                                     backing:NSBackingStoreBuffered
                                                       defer:YES
                                                      screen:nil];
    [window release];
    window = nil;
    ever_created_hack_window = YES;
}
(ever_created_hack_window is just a global BOOL.)
Once this has been done, I get the screen dimensions I expect:
0. (4.00, 0.00) + (1276.00 x 777.00) (main screen)
1. (3200.00, 9.00) + (1200.00 x 1897.00)
2. (1280.00, 800.00) + (1920.00 x 1177.00)
Additionally, it now correctly picks up changes to the main screen.
(This could be state that is normally set up by calling NSApplicationMain. This program doesn't do that, either.)
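(A further untested guess, not part of the original workaround: simply establishing AppKit's connection to the window server before querying the screens might have the same effect as the throwaway window.)

// Hypothetical lighter-weight alternative to the hack window:
// initialize the shared application before asking NSScreen anything.
[NSApplication sharedApplication];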

iOS particle emitter - how to make it faster?

I have 9 blocks on screen (BlockView is just a view subclass with some properties to keep track of things), and I want to add a smoke particle emitter behind the top of each block, so smoke rises from each one. I create a view to hold the block and the particle emitter, and bring the block to the front of the container's subviews. However, this makes my device (iPhone 6) incredibly laggy, and it becomes very difficult to move the blocks with a pan gesture.
SmokeParticles.sks: birthrate of 3 (max set to 0), lifetime of 10 (100 range), position range set in code.
My code for adding a particle emitter to each view is below (I'm not very good with particle emitters so any advice is appreciated! :D)
- (void)addEffectForSingleBlock:(BlockView *)view
{
    CGFloat spaceBetweenBlocksHeight = (self.SPACE_TO_WALLS * self.view.frame.size.height + self.SPACE_BETWEEN_BLOCKS * self.view.frame.size.width + self.WIDTH_OF_BLOCK * self.view.frame.size.height) - (self.HEIGHT_OF_BLOCK * self.view.frame.size.height + self.SPACE_TO_WALLS * self.view.frame.size.height);
    view.alpha = 1.0;

    // Re-parent the block into a larger container view.
    CGRect frame2 = [view convertRect:view.bounds toView:self.view];
    UIView *viewLarge = [[UIView alloc] initWithFrame:frame2];
    [self.view addSubview:viewLarge];
    CGRect frame1 = [view convertRect:view.bounds toView:viewLarge];
    view.frame = frame1;
    [viewLarge addSubview:view];

    // One SKView + SKScene per block, hosting the smoke emitter.
    SKEmitterNode *burstNode = [self particleEmitterWithName:@"SmokeParticles"];
    CGRect frame = CGRectMake(view.bounds.origin.x - self.SPACE_BETWEEN_BLOCKS * self.view.frame.size.width,
                              view.bounds.origin.y - spaceBetweenBlocksHeight,
                              view.bounds.size.width + self.SPACE_BETWEEN_BLOCKS * self.view.frame.size.width,
                              view.bounds.size.height / 2);
    SKView *skView = [[SKView alloc] initWithFrame:frame];
    [viewLarge addSubview:skView];
    SKScene *skScene = [SKScene sceneWithSize:skView.frame.size];
    [skScene addChild:burstNode];
    [viewLarge bringSubviewToFront:view];
    [burstNode setParticlePositionRange:CGVectorMake(skView.frame.size.width / 5, skView.frame.size.height / 100.0)];
    skView.allowsTransparency = YES;
    skScene.backgroundColor = [UIColor clearColor];
    skView.backgroundColor = [UIColor clearColor];
    [skView presentScene:skScene];
    [burstNode setPosition:CGPointMake(skView.frame.size.width / 2, -skView.frame.size.height * 0.25)];
}
I realize that this is an old question, but I recently learned something that could be helpful to others and decided to share it here because it is relevant (I think).
I'll assume your BlockView is a subclass of UIView (if it is not, this will not help you, sorry). A view performs a lot of unnecessary work each frame (for example, each view checks whether someone tapped on it), so when creating a game you should use as few UIViews as possible. That's why the other commenters recommended using a single SKView and making each block an SKSpriteNode, which is not a view. But if you need some other kind of object, or you do not want to use SpriteKit (or SceneKit for 3D objects), try using CALayers inside one single UIView. One case where you would prefer CALayers over SpriteKit is backwards compatibility with older iOS versions, since SpriteKit requires iOS 7.
Mr. John Blanco explains the CALayer approach very well in his View vs. Layers (including Clock Demo).
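In that spirit, here is a minimal sketch of replacing each per-block SKView with a CAEmitterLayer attached to the block's own layer. blockView and the @"spark" texture name are placeholders, and the numbers are guesses at the .sks values quoted above:

#import <QuartzCore/QuartzCore.h>

CAEmitterLayer *smoke = [CAEmitterLayer layer];
smoke.emitterPosition = CGPointMake(CGRectGetMidX(blockView.bounds), 0);
smoke.emitterSize = CGSizeMake(blockView.bounds.size.width / 5.0, 1.0);
smoke.emitterShape = kCAEmitterLayerLine;

CAEmitterCell *puff = [CAEmitterCell emitterCell];
puff.contents = (__bridge id)[[UIImage imageNamed:@"spark"] CGImage]; // any small, soft texture
puff.birthRate = 3;               // matches the .sks birthrate above
puff.lifetime = 10.0;
puff.velocity = 20;
puff.emissionLongitude = -M_PI_2; // drift upward (UIKit's y-axis points down)
puff.alphaSpeed = -0.1f;          // fade out over each particle's life
smoke.emitterCells = @[puff];

[blockView.layer addSublayer:smoke];

Because the layer is drawn by Core Animation inside the existing view, there is no extra SKView (or any view at all) per block competing with the pan gesture.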

ConvertRect:toView: gives bad rect when app dragged to second monitor

I'm trying to take a screenshot of a view.
I'm using the code at the following link, which is working fine with one exception:
cocoa: how to render view to image?
The problem I have is that if I drag the application window to my second monitor the screen capture grabs the wrong rect. Essentially the rect has been displaced vertically, or is possibly using an origin in the top left rather than bottom left.
The odd thing is that the app works fine on the launch monitor, but when I drag it to the second monitor (without closing and restarting the app) the rect capture goes wrong. If I drag the app back to the launch monitor everything starts working again.
The primary monitor and secondary monitor have different resolutions.
The code that converts the rect is as follows:
NSRect originRect = [aView convertRect:[aView bounds] toView:[[aView window] contentView]];
NSRect rect = originRect;
rect.origin.y = 0;
rect.origin.x += [aView window].frame.origin.x;
rect.origin.y += [[aView window] screen].frame.size.height - [aView window].frame.origin.y - [aView window].frame.size.height;
rect.origin.y += [aView window].frame.size.height - originRect.origin.y - originRect.size.height;
Does anyone know why this is calculating correctly on the launch monitor, but miscalculating on secondary monitors?
The problem must be related to the different resolutions, but I can't see why the call to convertRect:toView: (or the subsequent calculations) isn't working.
BTW, I'm developing this on 10.8.4 and targeting 10.7.
Thanks
Darren.
The issue was that the screen size was always being taken from the current monitor, when it should have been taken from the primary monitor.
rect.origin.y += [[aView window] screen].frame.size.height - [aView window].frame.origin.y - [aView window].frame.size.height;
rect.origin.y += [aView window].frame.size.height - originRect.origin.y - originRect.size.height;
replaced with:
NSArray *screens = [NSScreen screens];
NSScreen *primaryScreen = [screens objectAtIndex:0];
rect.origin.y = primaryScreen.frame.size.height - [aView window].frame.origin.y - originRect.origin.y - originRect.size.height;
I have added a full answer to the original post I referenced at the top of this one:
cocoa: how to render view to image?
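As a side note (this is not part of the original fix): on 10.7 and later, NSWindow can perform the window-to-screen conversion itself, which avoids most of the manual arithmetic. A sketch:

// Convert the view's bounds to window coordinates, then let AppKit map
// them to global screen coordinates for whichever screen holds the window.
NSRect windowRect = [aView convertRect:[aView bounds] toView:nil];
NSRect screenRect = [[aView window] convertRectToScreen:windowRect];
// The result is still lower-left-origin; a top-left-origin capture API
// would still need flipping against the primary screen's height.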

How to accept a mouse click on one portion of an NSWindow and let clicks pass through the rest

I have the code below for a subclassed NSWindow. There is an animated view which can be scaled, and I want to accept the click when it lands on the right spot and reject it (click through) when it is outside.
The code below works nicely, except that the window does not let clicks through.
- (void)mouseDragged:(NSEvent *)theEvent {
    if (allowDrag) {
        NSRect screenVisibleFrame = [[NSScreen mainScreen] visibleFrame];
        NSRect windowFrame = [self frame];
        NSPoint newOrigin = windowFrame.origin;
        // Get the mouse location in window coordinates.
        NSPoint currentLocation = [theEvent locationInWindow];
        // Update the origin with the difference between the new mouse location and the old mouse location.
        newOrigin.x += (currentLocation.x - initialMouseLocation.x);
        newOrigin.y += (currentLocation.y - initialMouseLocation.y);
        if ((newOrigin.y + windowFrame.size.height) > (screenVisibleFrame.origin.y + screenVisibleFrame.size.height)) {
            newOrigin.y = screenVisibleFrame.origin.y + (screenVisibleFrame.size.height - windowFrame.size.height);
        }
        // Move the window to the new location.
        [self setFrameOrigin:newOrigin];
    }
}

- (void)mouseDown:(NSEvent *)theEvent
{
    screenResolution = [[NSScreen mainScreen] frame];
    initialMouseLocation = [theEvent locationInWindow];
    float scale = [[NSUserDefaults standardUserDefaults] floatForKey:@"widgetScale"] / 100;
    float pX = initialMouseLocation.x;
    float pY = initialMouseLocation.y;
    float fX = self.frame.size.width;
    float fY = self.frame.size.height;
    if (pX > (fX - fX*scale)/2 && pX < (fX + fX*scale)/2 && pY > (fY + fY*scale)/2) {
        allowDrag = YES;
    } else {
        allowDrag = NO;
    }
}
In Cocoa, you have two basic choices: 1) you can make a whole window pass clicks through with [window setIgnoresMouseEvents:YES], or 2) you can make parts of your window transparent and clicks will pass through by default.
The limitation is that the window server decides only once which app an event is delivered to. After it has delivered the event to your app, there is no way to make it take the event back and deliver it to another app.
One possible solution might be to use Quartz Event Taps. The idea is that you make your window ignore mouse events, but set up an event tap that will see all events for the login session. If you want to make an event that's going through your window actually stop at your window, you process it manually and then discard it. You don't let the event continue on to the app it would otherwise reach. I expect that this would be very tricky to do right. For example, you don't want to intercept events for another app's window that is in front of yours.
If at all possible, I recommend that you use the Cocoa-supported techniques. I would think you would only want clicks to go through your window where it's transparent anyway, since otherwise how would the user know what they are clicking on?
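To make the event-tap idea concrete, here is a rough sketch only, with all the caveats above; windowContainsPoint is a hypothetical helper, and session-wide taps generally require the process to be trusted for accessibility:

#import <ApplicationServices/ApplicationServices.h>

static CGEventRef tapCallback(CGEventTapProxy proxy, CGEventType type,
                              CGEventRef event, void *userInfo)
{
    CGPoint where = CGEventGetLocation(event);
    if (type == kCGEventLeftMouseDown && windowContainsPoint(where)) {
        // Handle the click ourselves, then swallow the event so it
        // never reaches the application underneath.
        return NULL;
    }
    return event; // let everything else pass through untouched
}

// At setup time:
CFMachPortRef tap = CGEventTapCreate(kCGSessionEventTap, kCGHeadInsertEventTap,
                                     kCGEventTapOptionDefault,
                                     CGEventMaskBit(kCGEventLeftMouseDown),
                                     tapCallback, NULL);
CFRunLoopSourceRef source = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0);
CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopCommonModes);
CGEventTapEnable(tap, true);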
You can also create a transparent overlay child window to accept the clicks, and call -setIgnoresMouseEvents:YES on the main window as Ken described.
I used this trick in my app "Overlay".
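A minimal sketch of that child-window arrangement, where mainWindow and clickableRect are placeholders for your own window and the region that should stay clickable:

// The main window ignores clicks entirely; they pass through to
// whatever is underneath.
[mainWindow setIgnoresMouseEvents:YES];

// A transparent, borderless child window covers just the clickable
// region and receives events normally.
NSWindow *overlay = [[NSWindow alloc] initWithContentRect:clickableRect
                                                styleMask:NSBorderlessWindowMask
                                                  backing:NSBackingStoreBuffered
                                                    defer:NO];
[overlay setOpaque:NO];
[overlay setBackgroundColor:[NSColor clearColor]];
[mainWindow addChildWindow:overlay ordered:NSWindowAbove];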

Flipped NSView mouse coordinates

I have a subclass of NSView that re-implements a number of the mouse event functions. For instance in mouseDown to get the point from the NSEvent I use:
NSEvent *theEvent; // <- argument to function
NSPoint p = [theEvent locationInWindow];
p = [self convertPoint:p fromView:nil];
However the coordinates seem to be flipped, (0, 0) is in the bottom left of the window?
EDIT: I have already overridden the isFlipped method to return YES, but it has only affected drawing. Sorry, I can't believe I forgot to mention that straight away.
What do you mean by flipped? The Mac uses an LLO (lower-left-origin) coordinate system for everything.
EDIT: I can't reproduce this with a simple project. I created a single NSView implemented like this:
@implementation FlipView

- (BOOL)isFlipped {
    return YES;
}

- (void)mouseDown:(NSEvent *)theEvent {
    NSPoint p = [theEvent locationInWindow];
    p = [self convertPoint:p fromView:nil];
    NSLog(@"%@", NSStringFromPoint(p));
}

@end
I received the coordinates I would expect. Removing isFlipped switched the orientation back, as expected. Do you have a simple project that demonstrates your problem?
I found this so obnoxious that one day I just sat down and refused to get up until I had something that worked perfectly. Here it is, called via...
-(void) mouseDown:(NSEvent *)click {
    NSPoint mD = [NSScreen wtfIsTheMouse:click
                          relativeToView:self];
}
which invokes a category on NSScreen:
@implementation NSScreen (FlippingOut)

+ (NSPoint)wtfIsTheMouse:(NSEvent *)event
          relativeToView:(NSView *)view {
    // +currentScreenForMouseLocation is assumed to be another helper
    // category method that returns the screen under the pointer.
    NSScreen *now = [NSScreen currentScreenForMouseLocation];
    return [now flipPoint:[now convertToScreenFromLocalPoint:event.locationInWindow
                                              relativeToView:view]];
}

- (NSPoint)flipPoint:(NSPoint)aPoint {
    return (NSPoint) { aPoint.x,
                       self.frame.size.height - aPoint.y };
}

- (NSPoint)convertToScreenFromLocalPoint:(NSPoint)point
                          relativeToView:(NSView *)view {
    NSPoint winP, scrnP, flipScrnP;
    if (self) {
        winP = [view convertPoint:point toView:nil];
        scrnP = [[view window] convertBaseToScreen:winP];
        flipScrnP = [self flipPoint:scrnP];
        flipScrnP.y += [self frame].origin.y;
        return flipScrnP;
    }
    return NSZeroPoint;
}

@end
Hope this can prevent just one minor freakout.. somewhere, someday. For the children, damnit. I beg of you.. for the children.
This code worked for me:
NSPoint location = [self convertPoint:theEvent.locationInWindow fromView:nil];
location.y = self.frame.size.height - location.y;
This isn't "flipped", necessarily, that's just how Quartz does coordinates. An excerpt from the documentation on Quartz 2D:
A point in user space is represented by a coordinate pair (x,y), where x represents the location along the horizontal axis (left and right) and y represents the vertical axis (up and down). The origin of the user coordinate space is the point (0,0). The origin is located at the lower-left corner of the page, as shown in Figure 1-4. In the default coordinate system for Quartz, the x-axis increases as it moves from the left toward the right of the page. The y-axis increases in value as it moves from the bottom toward the top of the page.
I'm not sure what your question is, though. Are you looking for a way to get the "flipped" coordinates? If so, you can subclass your NSView, overriding the -(BOOL)isFlipped method to return YES.
