Alternative to using CABasicAnimation callbacks? - CALayer

CAAnimation does not provide a mechanism for assigning callback functions other than the standard "animationDidStart:"/"animationDidStop:" methods.
I have a custom UIControl that utilizes 2 CALayers that overlap. The purpose of this control is similar to an old fashioned sonar. The top layer's contents contains an image that gets rotated constantly (call this layer "wand"). Beneath that layer is a "spriteControl" layer that renders blips as the wand passes over them.
The objects that the blips represent are pre-fetched and organized into invisible CAShapeLayers by the spriteControl. I am using a CABasicAnimation to rotate the wand 10 degrees at a time, then utilizing the "animationDidStop:" method to invoke a method on the spriteControl that takes the current rotation value of the wand layer (a.k.a. heading) and animates the alpha setting from 1.0 to 0.0 to simulate the blip-and-fade-out effect. Finally, the process starts over again, indefinitely.
While this approach of using the CAAnimation callbacks ensures that the timing of the wand reaching a "ping" position (i.e. 10deg, 20deg, 270deg, etc.) always coincides with the lighting of the blips in the other layer, there is the issue of stopping, recalculating, and restarting the animation every 10 degrees.
I could spawn an NSTimer to fire a method that queries the angle of the wand's presentation layer to get the heading value. However, this makes it more difficult to keep the wand and the blip highlighting in sync, and/or causes some blips to get skipped altogether. This approach is discussed a bit here: How can I callback as a CABasicAnimation is animating?
So my question is whether or not there is anything I can do to improve the performance of the wand layer rotation without reimplementing the control using OpenGL ES. (I realize that this would be easily solved in an OpenGL environment, however, to use it here would require extensive redesign that simply isn't worth it.) While the performance issue is minor, I can't shake the feeling that there is something simple and obvious that I could do that would allow the wand to animate indefinitely without pausing to perform expensive rotation calculations in between.
Here is some code:
- (void)rotateWandByIncrement
{
    if (wandShouldStop)
        return;

    CGFloat newRotationDegree = (wandRotationDegree + WAND_INCREMENT_DEGREES);
    if (newRotationDegree >= 360)
        newRotationDegree = 0;

    CATransform3D rotationTransform = CATransform3DMakeRotation(DEGREES_TO_RADIANS(newRotationDegree), 0, 0, 1);

    CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"transform"];
    animation.toValue = [NSValue valueWithCATransform3D:rotationTransform];
    animation.duration = WAND_INCREMENT_DURATION;
    animation.fillMode = kCAFillModeForwards;
    animation.removedOnCompletion = NO;
    animation.delegate = self;

    [wandLayer addAnimation:animation forKey:@"transform"];
}

- (void)animationDidStart:(CAAnimation *)theAnimation
{
    if (wandShouldStop)
        return;

    NSInteger prevWandRotationDegree = wandRotationDegree - WAND_INCREMENT_DEGREES;
    if (prevWandRotationDegree < 0)
        prevWandRotationDegree += 360;

    // Pulse the spriteControl
    [[self spriteControl] pulseRayAtHeading:prevWandRotationDegree];
}

- (void)animationDidStop:(CAAnimation *)theAnimation finished:(BOOL)flag
{
    // update the rotation var
    wandRotationDegree += WAND_INCREMENT_DEGREES;
    if (wandRotationDegree >= 360)
        wandRotationDegree = 0;

    // This applies the rotation value to the model layer so that
    // subsequent animations start where the previous one left off
    CATransform3D rotationTransform = CATransform3DMakeRotation(DEGREES_TO_RADIANS(wandRotationDegree), 0, 0, 1);
    [CATransaction begin];
    [CATransaction setDisableActions:YES];
    [wandLayer setTransform:rotationTransform];
    [CATransaction commit];

    //[wandLayer removeAnimationForKey:@"transform"];
    [self rotateWandByIncrement];
}

Let's say it takes 10 seconds for the radar to make one complete rotation.
To get the wand to rotate indefinitely, attach a CABasicAnimation to it with its duration property set to 10 and its repeatCount property set to 1e100f.
The blips can each be animated using their own CAKeyframeAnimation instance. I won't write the details, but for each blip, you specify an array of opacity values (I assume opacity is how you're fading out the blips) and an array of time percentages (see Apple's documentation).
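Here's a rough sketch of that approach (wandLayer, blipLayer, and blipHeadingDegrees are placeholder names borrowed from the question, not a drop-in implementation):

// 1. Rotate the wand indefinitely with a single repeating animation.
CABasicAnimation *spin = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
spin.fromValue = @0;
spin.toValue = @(2 * M_PI);
spin.duration = 10.0;          // one full sweep every 10 seconds
spin.repeatCount = HUGE_VALF;  // effectively forever
[wandLayer addAnimation:spin forKey:@"spin"];

// 2. Fade each blip with its own keyframe animation whose period matches the wand,
//    so its peak lands at the moment the wand passes over it. This sketch assumes
//    the blip isn't at the very end of the sweep (i.e. t + 0.15 <= 1.0).
CGFloat t = blipHeadingDegrees / 360.0;  // fraction of a sweep at which the wand reaches this blip
CAKeyframeAnimation *pulse = [CAKeyframeAnimation animationWithKeyPath:@"opacity"];
pulse.values   = @[@0.0, @0.0, @1.0, @0.0, @0.0];   // stay dark, light up, fade out, stay dark
pulse.keyTimes = @[@0.0, @(t), @(t + 0.02), @(t + 0.15), @1.0];
pulse.duration = 10.0;                    // same period as the wand
pulse.repeatCount = HUGE_VALF;
[blipLayer addAnimation:pulse forKey:@"pulse"];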

Related

Opposite of glscissor in Cocos2D?

I found a class called ClippingNode that I can use on sprites to only display a specified rectangular area: https://github.com/njt1982/ClippingNode
One problem is that I need to do exactly the opposite, meaning I want the inverse of that. I want everything outside of the specified rectangle to be displayed, and everything inside to be taken out.
In my test I'm using the position of a sprite, which will update every frame, so the clipping rect will need to be redefined each frame.
CGRect menuBoundaryRect = CGRectMake(lightPuffClass.sprite.position.x, lightPuffClass.sprite.position.y, 100, 100);
ClippingNode *clipNode = [ClippingNode clippingNodeWithRect:menuBoundaryRect];
[clipNode addChild:darkMapSprite];
[self addChild:clipNode z:100];
I noticed the ClippingNode class allocs internally, but I'm not using ARC (the project is too big and complex to update to ARC), so I'm wondering what I'll need to release, and where.
I've tried a couple of masking classes, but whatever I mask fits over the entire sprite (my sprite covers the entire screen). Additionally, the mask will need to move, so I thought glScissor would be a good alternative if I can get it to do the inverse.
You don't need anything beyond what Cocos2D gives you out of the box.
You have to define a CCClippingNode with a stencil, and then set it to be inverted, and you're done. I added a carrot sprite to show how to add sprites in the clipping node in order for it to be taken into account.
@implementation ClippingTestScene
{
    CCClippingNode *_clip;
}
And the implementation part
_clip = [[CCClippingNode alloc] initWithStencil:[CCSprite spriteWithImageNamed:@"white_board.png"]];
_clip.alphaThreshold = 1.0f;
_clip.inverted = YES;
_clip.position = ccp(self.boundingBox.size.width/2, self.boundingBox.size.height/2);
[self addChild:_clip];

_img = [CCSprite spriteWithImageNamed:@"carrot.png"];
_img.position = ccp(-10.0f, 0.0f);
[_clip addChild:_img];
You have to set an extra flag for this to work, though; Cocos will print what you need to do in the console (a sketch of the typical setup follows).
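For reference, the flag it asks for is a stencil buffer on the GL view. Assuming a cocos2d 3.x-style setup (this is an assumption; follow whatever the console message actually says for your version), the app delegate would request a combined depth/stencil format:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // GL_DEPTH24_STENCIL8_OES includes the stencil bits that CCClippingNode needs.
    [self setupCocos2dWithOptions:@{ CCSetupDepthFormat: @(GL_DEPTH24_STENCIL8_OES) }];
    return YES;
}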
I once used CCScissorNode.m from https://codeload.github.com/NoodlFroot/ClippingNode/zip/master
The implementation (not the inverse you are looking for) was something like this:
CGRect innerClippedLayer = CGRectMake(SCREENWIDTH/14, SCREENHEIGHT/6, 275, 325);
CCScissorNode *tmpLayer = [CCScissorNode scissorNodeWithRect:innerClippedLayer];
[self addChild:tmpLayer];
So in your case, if you know the rectangle that you don't want to show (i.e. the area to invert) and you know the screen area, you can deduct the rectangle from the screen area. This gives you the inverse area. I have not done this myself; maybe tomorrow I can post some code.
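In the meantime, here's a rough sketch of that idea using the same CCScissorNode class as above (the hole rect and sprite name are placeholders, not tested code): the visible region is covered by four scissored pieces surrounding the rectangle you want cut out.

CGRect hole = CGRectMake(100, 100, 100, 100);  // the rectangle you want left empty (placeholder values)
CGRect pieces[4] = {
    CGRectMake(0, 0, CGRectGetMinX(hole), SCREENHEIGHT),                                               // left of the hole
    CGRectMake(CGRectGetMaxX(hole), 0, SCREENWIDTH - CGRectGetMaxX(hole), SCREENHEIGHT),               // right of the hole
    CGRectMake(CGRectGetMinX(hole), 0, hole.size.width, CGRectGetMinY(hole)),                          // below the hole
    CGRectMake(CGRectGetMinX(hole), CGRectGetMaxY(hole), hole.size.width, SCREENHEIGHT - CGRectGetMaxY(hole)) // above the hole
};
for (int i = 0; i < 4; i++) {
    CCScissorNode *piece = [CCScissorNode scissorNodeWithRect:pieces[i]];
    // Each piece gets its own copy of the full-screen content, positioned to line up with the screen.
    CCSprite *content = [CCSprite spriteWithFile:@"dark_map.png"];  // hypothetical sprite name
    content.position = ccp(SCREENWIDTH / 2, SCREENHEIGHT / 2);
    [piece addChild:content];
    [self addChild:piece];
}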

Trying to mix UIKit Dynamics UICollision and core animation to scale a box's bounds

From a previous question I had.
CGAffineTransformScale does not send it's scale when using UIDynamicAnimator
I'm trying to apply an animated scale effect to a box that has a smaller box sitting on top of it.
Both boxes have a UICollisionBehavior. Eventually I want the bottom box, the one that is getting larger, to scale fast enough to provide a velocity that will make the top box react with a bounce upward.
Below is the scaling code. The bounds is instantly set to its final target value while the image in the view is still scaling. I want the bounds and the image to scale at the same time so I can see the expected reactions happening.
- (IBAction)tapBoxThing:(UITapGestureRecognizer *)sender {
    /* Make sure no translation is applied to this image view */
    _boxView.transform = CGAffineTransformIdentity;

    /* Begin the animation */
    [UIView beginAnimations:nil context:NULL];

    /* Make the animation 5 seconds long */
    [UIView setAnimationDuration:5.0f];

    [_gravity removeItem:_boxView];
    [_collision removeItem:_boxView];

    // Right here, the box will animate the image in the view to slowly fill in the new bounds.
    // But I want the bounds to animate; it's currently instantly scaling to the final amount.
    // Core Animation states that frame is not animatable but bounds is.
    CGRect frameZ = _boxView.bounds;
    frameZ.size.height += 60.0f;
    frameZ.size.width += 60.0f;
    _boxView.bounds = frameZ;

    [_gravity addItem:_boxView];
    [_collision addItem:_boxView];

    [UIView commitAnimations];
}

SceneKit smooth camera movement

What is the standard method for smooth camera movement within SceneKit (OpenGL)?
Manually changing x and y isn't smooth enough, yet using Core Animation creates "pulsing" movement. The docs on SceneKit seem to be very limited, so any examples would be appreciated. I'm currently doing this:
- (void)keyDown:(NSEvent *)theEvent {
    int key = [theEvent keyCode];
    int x = cameraNode.position.x;
    int y = cameraNode.position.y;
    int z = cameraNode.position.z;
    int speed = 4;

    if (key == 123) {        // left
        x -= speed;
    } else if (key == 124) { // right
        x += speed;
    } else if (key == 125) { // down
        y -= speed;
    } else if (key == 126) { // up
        y += speed;
    }

    // move the camera
    [SCNTransaction begin];
    [SCNTransaction setAnimationDuration:1.0];
    // Change properties
    cameraNode.position = SCNVector3Make(x, y, z);
    [SCNTransaction commit];
}
To minimise the pulsing movements (due to the key repeat) you can use an "easeOut" timingFunction:
//move the camera
[SCNTransaction begin];
[SCNTransaction setAnimationDuration: 1.0];
[SCNTransaction setAnimationTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseOut]];
// Change properties
cameraNode.position = SCNVector3Make(x, y, z);
[SCNTransaction commit];
That said, the best thing to do here is probably to manage a target position (a vector3) yourself and update the position of the camera at every frame to go to this target smoothly.
I've been experimenting with this. The best I've found so far is to record the state of the input keys in internal state, modified by keyDown: and keyUp:, and run an NSTimer to apply them. The timer uses the actual, measured time delta between firings to determine how far to move the camera. That way irregular timings don't have too much effect (and I can call my method to update the camera position at any time without worrying about changing its movement speed).
It takes some work to make this behave correctly, though. keyDown: and keyUp: have some obnoxious behaviours when it comes to game input. For example, repeating keys. Also, they may fire even after your view loses focus or your app goes to the background, if keys are held down across the transition. Etc. Not insurmountable, but annoying.
What I haven't yet done is add acceleration and deceleration, which I think will aid the perception of it being smooth. Otherwise it feels pretty good.
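A minimal sketch of that key-state + timer idea, assuming OS X/AppKit key handling and illustrative ivar names (leftKeyDown, movementTimer, lastTickTime, setKeyWithCode:pressed: and friends are not from the original project):

- (void)keyDown:(NSEvent *)theEvent {
    if ([theEvent isARepeat])
        return;                                           // ignore key-repeat events
    [self setKeyWithCode:[theEvent keyCode] pressed:YES]; // hypothetical helper that flips the flags below
}

- (void)keyUp:(NSEvent *)theEvent {
    [self setKeyWithCode:[theEvent keyCode] pressed:NO];
}

- (void)startMovementTimer {
    lastTickTime = CACurrentMediaTime();
    movementTimer = [NSTimer scheduledTimerWithTimeInterval:1.0 / 60.0
                                                     target:self
                                                   selector:@selector(tick:)
                                                   userInfo:nil
                                                    repeats:YES];
}

- (void)tick:(NSTimer *)timer {
    CFTimeInterval now = CACurrentMediaTime();
    CGFloat dt = now - lastTickTime;                      // actual elapsed time, not the nominal interval
    lastTickTime = now;

    CGFloat speed = 4.0;                                  // units per second
    SCNVector3 p = cameraNode.position;
    if (leftKeyDown)  p.x -= speed * dt;
    if (rightKeyDown) p.x += speed * dt;
    if (downKeyDown)  p.y -= speed * dt;
    if (upKeyDown)    p.y += speed * dt;

    // Apply without implicit animation so the per-frame updates stay crisp.
    [SCNTransaction begin];
    [SCNTransaction setAnimationDuration:0.0];
    cameraNode.position = p;
    [SCNTransaction commit];
}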
I move the camera using this code:
let lerpX = (heroNode.position.x - followCamera.position.x) * 0.05
let lerpZ = (heroNode.position.z - followCamera.position.z) * 0.05
followCamera.position.x += lerpX
followCamera.position.z += lerpZ

Smooth animation in Cocos2d for iOS

I move a simple CCSprite around the screen of an iOS device using this code:
[self schedule:@selector(update:) interval:0.0167];

- (void)update:(ccTime)delta {
    CGPoint currPos = self.position;
    currPos.x += xVelocity;
    currPos.y += yVelocity;
    self.position = currPos;
}
This works however the animation is not smooth. How can I improve the smoothness of my animation?
My scene is exceedingly simple (just has one full-screen CCSprite with a background image and a relatively small CCSprite that moves slowly).
I've logged the ccTime delta and it's not consistent (it's almost always greater than my specified interval of 0.0167... sometimes up to a factor of 4x).
I've considered tailoring the motion in the update method to the delta time (larger delta => larger movement etc). However given the simplicity of my scene it's seems there's a better way (and something basic that I'm probably missing).
The scheduler will try to call your selector at the interval you request, but if there is other work going on it can fire earlier or later (hence the inconsistency).
Instead, multiply your xVelocity and yVelocity by delta - this should scale the velocities into a far smoother motion.
For example:
- (void)update:(ccTime)delta {
    CGPoint currPos = self.position;
    currPos.x += (xVelocity * delta);
    currPos.y += (yVelocity * delta);
    self.position = currPos;
}
Try using the default [self scheduleUpdate] mechanism rather than scheduling your own selector at a fixed interval, and see if that makes a difference. This method is designed for what you are doing and may be smoother.
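For illustration, a small sketch of what that looks like (assuming a cocos2d 2.x-style CCNode subclass with the xVelocity/yVelocity ivars from the question):

- (id)init
{
    if ((self = [super init])) {
        [self scheduleUpdate];   // cocos2d will now call -update: once per frame
    }
    return self;
}

- (void)update:(ccTime)delta
{
    CGPoint currPos = self.position;
    currPos.x += xVelocity * delta;   // velocities expressed in points per second
    currPos.y += yVelocity * delta;
    self.position = currPos;
}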

Time Machine style Navigation

I've been doing some programming for iPhone lately and now I'm venturing into the iPad domain. The concept I want to realise relies on a navigation that is similar to Time Machine in OS X. In short, I have a number of views that can be panned and zoomed, like any normal view. However, the views are stacked upon each other using a third dimension (in this case depth). The user will then navigate to any view by, in this case, picking a letter, whereupon the app will fly through the views until it reaches the view of the selected letter.
My question is: can somebody give the complete final code for how to do this? Just kidding. :) What I need is a push in the right direction, since I'm unsure how to even start doing this, and whether it is at all possible using the frameworks available. Any tips are appreciated.
Thanks!
Core Animation—or more specifically, the UIView animation model that's built on Core Animation—is your friend. You can make a Time Machine-like interface with your views by positioning them in a vertical line within their parent view (using their center properties), having the ones farther up that line be scaled slightly smaller than the ones below (“in front of”) them (using their transform properties, with the CGAffineTransformMakeScale function), and setting their layers’ z-index (get the layer using the view’s layer property, then set its zPosition) so that the ones farther up the line appear behind the others. Here's some sample code.
// Animate an array of views into a stack at an offset position (0 has the first view in the
// stack at the front; higher values move "into" the stack).
// Shortcut taken here: the views' layers' z-indices are not set; this works if the backmost
// views are added first, but otherwise you'll need to set the zPosition values before doing this.
int offset = 0;
[UIView animateWithDuration:0.3 animations:^{
    CGFloat maxScale = 0.8;      // frontmost visible view will be at 80% scale
    CGFloat minScale = 0.2;      // farthest-back view will be at 20% scale
    CGFloat centerX = 160;       // horizontal center
    CGFloat frontCenterY = 280;  // vertical center of frontmost visible view
    CGFloat backCenterY = 80;    // vertical center of farthest-back view
    for (int i = 0; i < [viewStack count]; i++)
    {
        float distance = (float)(i - offset) / [viewStack count];
        UIView *v = [viewStack objectAtIndex:i];
        v.transform = CGAffineTransformMakeScale(maxScale + (minScale - maxScale) * distance,
                                                 maxScale + (minScale - maxScale) * distance);
        v.alpha = (i - offset > 0) ? (1 - distance) : 0; // views that have disappeared behind the screen get no opacity; views still visible fade as their distance increases
        v.center = CGPointMake(centerX, frontCenterY + (backCenterY - frontCenterY) * distance);
    }
}];
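As a small aside, here's a hedged sketch of the zPosition step the comment above skips, using the same viewStack array (purely illustrative):

for (int i = 0; i < [viewStack count]; i++) {
    UIView *v = [viewStack objectAtIndex:i];
    v.layer.zPosition = -i;   // views farther "into" the stack render behind the nearer ones
}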
And here's what it looks like, with a couple of randomly-colored views:
Do you mean something like this on the right?
If yes, it should be possible. You would have to arrange the views as in the image and animate them going forwards and backwards. As far as I know, there aren't any frameworks for this.
It's called Cover Flow and is also used in iTunes to view artwork/albums. Apple appears to have bought the technology from a third party and also to have patented it. However, if you google for iOS cover flow you will get plenty of hits and code to point you in the right direction.
I have not looked, but it may be in the iOS library; I do not know for sure.
