CAAnimation that calls a method in periodic animation-progress intervals? - cocoa

Say I want to animate a ball rolling 1000 pixels to the right, specifying a timing function in the process – something like this:
UIView *ball = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 30, 30)];
CABasicAnimation *anim =
    [CABasicAnimation animationWithKeyPath:@"transform.translation.x"];
anim.toValue = [NSNumber numberWithFloat:ball.frame.origin.x + 1000.0]; // move 1000 pixels to the right
anim.duration = 10.0;
anim.timingFunction = [CAMediaTimingFunction functionWithControlPoints:
    0.1 :0.0 :0.3 :1.0]; // accelerate fast, decelerate slowly
[ball.layer addAnimation:anim forKey:@"myMoveRightAnim"];
What I ultimately want is to have a method, say -(void)animationProgressCallback:(float)progress, be called during the animation at regular intervals of the animation's progress, measured as the absolute "distance" between the start and end values, i.e. ignoring the timing function.
I'll try to explain with the above example of the ball rolling 1000 px to the right (progress charted on the y axis, where 100% = 1000 px):
I want my callback method to be invoked whenever the ball has progressed another 250 pixels. Because of the timing function, the first 250 pixels might be reached at t0 = 2 seconds, half the total distance just t1 = 0.7 seconds later (fast acceleration kicks in), the 750 px mark another t2 = 1.1 seconds later, and the remaining t3 = 5.2 seconds are needed to reach the 100% (1000 px) mark.
What would be great, but isn't provided:
If the animation called a delegate method in animation-progress intervals as described, I wouldn't need to ask this question… ;-)
Ideas how to solve the problem:
One solution I can think of is to calculate the bezier curve's values, map those to the ti values (we know the total animation duration), and, when the animation is started, sequentially perform our animationProgressCallback: selector with those delays manually.
Obviously, this is insane (calculating bezier curves manually??) and, more importantly, unreliable (we can't rely on the animation thread and the main thread staying in sync – or can we?).
Any ideas??
Looking forward to your ideas!

Actually, the manual approach you describe is reliable. Core Animation is time based, so you can use the animation's delegate to be notified when the animation really starts.
And about calculating the bezier path... well, look at it this way: it could be worse. If you wanted to implement a surface in OpenGL ES you would have to calculate a cubic Bézier surface! lol. Your case is only one dimension; it's not that hard if you know the maths.
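To make that concrete, here is a rough sketch in Swift (rather than the question's Objective-C) of the one-dimensional maths: evaluate the cubic Bézier timing curve defined by the control points 0.1 :0.0 :0.3 :1.0 and solve for the times at which the animated value crosses 25%, 50%, 75% and 100% progress. The bisection helper and the printed scheduling hints are illustrative assumptions, not Core Animation API.
import Foundation

// Rough sketch: control points of the timing function, P0 = (0,0), P1 = (c1x,c1y), P2 = (c2x,c2y), P3 = (1,1).
let (c1x, c1y, c2x, c2y) = (0.1, 0.0, 0.3, 1.0)

// Standard cubic Bézier with endpoints (0,0) and (1,1), parameter s in 0...1.
func bezier(_ s: Double, _ p1: Double, _ p2: Double) -> Double {
    let u = 1.0 - s
    return 3 * u * u * s * p1 + 3 * u * s * s * p2 + s * s * s
}

// Find the curve parameter s at which the value (y) axis reaches `progress`.
// Bisection works here because the value component is monotonic for these control points.
func parameter(forProgress progress: Double) -> Double {
    var lo = 0.0, hi = 1.0
    for _ in 0..<64 {
        let mid = (lo + hi) / 2
        if bezier(mid, c1y, c2y) < progress { lo = mid } else { hi = mid }
    }
    return (lo + hi) / 2
}

let duration = 10.0
for progress in [0.25, 0.5, 0.75, 1.0] {
    let s = parameter(forProgress: progress)
    let delay = bezier(s, c1x, c2x) * duration // time (x) axis, scaled to seconds
    print("fire animationProgressCallback: roughly \(delay) seconds after the animation starts (\(Int(progress * 100))%)")
}
Starting those delays from the animationDidStart: delegate callback, rather than from the moment you add the animation to the layer, is what keeps them aligned with the animation clock.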

Related

Three.js: Is Quaternion.slerp() slower than Vector3.lerpVectors()?

I'm animating a camera in my scene and have been playing with Quaternion.slerp() and Vector3.lerpVectors().
With lerpVectors, I have a function that takes in a Vector3 to look at, and a duration to animate across, e.g.
rotateCameraToDestination(new Vector(1,1,1), 3);
With slerp, I have an identical function that takes in a Quaternion to orient to, and also a duration to animate across, e.g.
rotateCameraToDestination(new Quaternion().setFromRotationMatrix(m), 3);
Both durations are 3 seconds (both using delta time to animate during render loop) yet when I test the slerp function, I generally have to increase the duration to something like 30 or 300.
Do the two methods interpolate at different rates?
Any explanation would be appreciated; I need to leave appropriate documentation on any functions I develop. Cheers!
EDIT
As mentioned in my comment, I think it's because I was updating the start position for my lerp function each frame, so it starts quickly and then ends slowly; the slerp ran at a consistent rate because I wasn't changing the start quaternion each frame.
So they just appeared to run at different rates.
Crudely represented like this:
LERP:
l-------l-------l-----l----l----l---l---l--l--l--l-l-llll
SLERP:
l---l---l---l---l---l---l---l---l---l---l---l---l---l---l
Sorry for hassle guys :(
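For anyone hitting the same thing, here is a minimal sketch in plain Swift (not the Three.js API; easeOutStep and constantRateValue are made-up names) of why the two update styles feel so different:
// 1) Re-lerping from the *current* value toward the target each frame:
//    the step shrinks as the value approaches the target, so motion starts fast and eases out.
func easeOutStep(current: Double, target: Double, alpha: Double) -> Double {
    return current + (target - current) * alpha
}

// 2) Interpolating between a *fixed* start and end while t advances from 0 to 1:
//    equal increments of t give equal increments of the value, i.e. a constant rate.
func constantRateValue(start: Double, end: Double, t: Double) -> Double {
    return start + (end - start) * t
}

var v = 0.0
for _ in 0..<10 { v = easeOutStep(current: v, target: 1.0, alpha: 0.3) }                      // big steps first, then tiny ones
for frame in 0...10 { _ = constantRateValue(start: 0.0, end: 1.0, t: Double(frame) / 10.0) }  // even steps
Slerp between two fixed quaternions with an advancing t behaves like the second pattern, which is why it looked slower than the re-based lerp.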

How to align while loop operations and paint event?

I am currently working with painting and displaying on a Cartesian coordinate system. In my game, I have fast moving objects, bullets, for which I use the following formula to determine position:
x += speed * Cos(theta);
y += speed * Sin(theta);
where theta is an angle in radians and speed scales how far the object moves per update, at the cost of overall continuity: the larger speed gets, the larger the "jump" between the previous and the next calculated x,y point.
I had to use this formula with a 'high speed' object, so instead of using a timer, which is limited to millisecond (0.001 s) resolution, I utilized a while loop:
while (true) {
    if (currentTime - oldTime > setInterval) {
        // x, y and intersection operations
    }
    if (currentTime - oldTime > setInterval) {
        // paint operations
    }
    sleep(0, nanoseconds); // sleeps the thread (or, if you're a C kind of guy, the "task")
}
I want the x,y and intersection operations to happen at a much faster rate than the paint event, which I plan to have occur 30–125 times a second (basically the hertz range of a monitor).
Actual Questions:
What would be the most efficient rate for the x,y and intersection operations, so that they would perform at a rate consistent across different CPUs (from a dusty single core @ 1.6 GHz to a fancy shmancy hex-core @ 4.0 GHz)?
Is there a better angle position formula than mine for these operations?
*Note: my method of painting the object has nothing to do with my problems, in case you were wondering.
Have a timer fire every time the screen refreshes (60 Hz?). In its handler you calculate where the object is at that point in time and draw it at the determined location.
Whenever you want to find out where the object currently is, you run the physics simulation until it has caught up with the point in time you want to render. This way the object is animated at exactly the point in time it should be at.
Define the frequency at which the physics simulation runs. You can pick 60 Hz as well, or any integer multiple of it. Run the physics engine with that fixed time increment (which is 1/frequency). When you want to render, find out how many physics ticks are missing and run them one by one.
This scheme is completely robust against missing or superfluous timer ticks, and the CPU clock rate does not matter either. The object is always rendered at the precise position it should be in.
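A minimal sketch of that scheme, in plain Swift with placeholder names (stepPhysics, render) rather than any particular engine's API:
import Foundation

let physicsHz = 240.0            // physics runs faster than rendering
let dt = 1.0 / physicsHz         // fixed physics time step
var simulatedTime = 0.0          // how far the simulation has advanced
var x = 0.0, y = 0.0             // bullet position
let speed = 900.0, theta = 0.3   // pixels per second, radians

func stepPhysics() {
    // advance the position by a fixed increment; speed is per second, so scale by dt
    x += speed * cos(theta) * dt
    y += speed * sin(theta) * dt
    // ... intersection tests would go here ...
}

func render(atTime now: Double) {
    // run however many fixed ticks are needed to catch up with `now`, then draw;
    // the result is independent of frame rate and CPU speed
    while simulatedTime + dt <= now {
        stepPhysics()
        simulatedTime += dt
    }
    // draw the object at (x, y) here
}

// e.g. called from a 60 Hz display timer:
let start = Date()
render(atTime: Date().timeIntervalSince(start))
The paint rate and the physics rate stay decoupled: the display timer only decides how often render runs, while dt alone decides how far the bullet moves per physics tick.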

How does 'rate' property of JavaFX Animation class behave?

I have two rectangle shapes which are in translate transition as below
//First rectangle animation
TranslateTransition translateTransition1 = new TranslateTransition();
translateTransition1.setNode(rect1);
translateTransition1.setFromX(10);
translateTransition1.setFromY(0);
translateTransition1.setToX(10);
translateTransition1.setToY(300);
translateTransition1.setCycleCount(8);
//translateTransition.setAutoReverse(true);
translateTransition1.play();
translateTransition1.setRate(0.1);
//Second rectangle animation
TranslateTransition translateTransition2 = new TranslateTransition();
translateTransition2.setNode(rect2);
translateTransition2.setFromX(10);
translateTransition2.setFromY(-300); // This is the only difference
translateTransition2.setToX(10);
translateTransition2.setToY(300);
translateTransition2.setCycleCount(8);
//translateTransition.setAutoReverse(true);
translateTransition2.play();
translateTransition2.setRate(0.1);
Both of these animations have a rate of 0.1, but they move at different speeds when I run the application.
As per the Oracle documentation, the 'rate' property defines the speed and direction of the animation, so why do the two TranslateTransitions not have the same speed even though I set both rates to 0.1?
Also, what is the actual behaviour of the 'rate' property?
The Definition of Rate
Rate is not the velocity or speed in pixels per second of the translated object in the transition.
Think of rate like this (courtesy of Louis Tully in Ghostbusters):
I see you were exercising. So was I. I taped '20 Minute Workout' and played it back at high speed so it only took ten minutes and I got a really good workout.
Rate is like a fast forward, slow motion or rewind feature on a video recorder.
The Definition of Speed
I don't want to set a duration. I just want the two rectangles to move at the same speed.
Speed is distance over time.
If you want your rectangles to move at the same speed, make them move the same distance for the same duration.
Issues with your sample code
If you don't set a duration, one will be assigned for you. The default duration is 400 milliseconds, which is probably not what you want.
The rectangles in your question are moving at different speeds because you have asked them to travel different distances in the same time span.
Example
OK, you have probably got it now, but here is an example just in case.
TranslateTransition translateTransition1 = new TranslateTransition(
Duration.seconds(1), rect1
);
translateTransition1.setFromX(0);
translateTransition1.setToX(100);
translateTransition1.setInterpolator(Interpolator.LINEAR);
translateTransition1.play();
TranslateTransition translateTransition2 = new TranslateTransition(
Duration.seconds(2), rect2
);
translateTransition2.setFromX(0);
translateTransition2.setToX(100);
translateTransition2.setInterpolator(Interpolator.LINEAR);
translateTransition2.play();
rect2.setTranslateY(200);
So there are two rectangles:
rect1 moves a total distance of 100 pixels in one second, so its speed of travel is 100 pixels per second.
rect2 moves a total distance of 100 pixels in two seconds, so its speed of travel is 50 pixels per second.
A linear interpolator is used so that the transitions occur at constant velocity (i.e. a given rectangle does not accelerate or decelerate while it is moving).
If you want both rectangles to move at the same speed, you could set the duration of the second transition to one second, so it matches the duration of the first transition (their distance travelled already matches).
Alternatively, if you invoke translateTransition2.setRate(2), the second animation will play twice as fast, thus finishing in half of its duration. This will double the speed of travel from 50 pixels per second to 100 pixels per second, matching the speed of the first rectangle.

Game Development: How Do Game Developers Maintain Game Speed Regardless of FPS?

Say it took a whole second for a character to jump in a game; how would a game developer go about keeping that jump time at 1 second if the FPS is 10 fps, 30 fps, 100 fps, etc.? If you get me: how would you stop a game's FPS from affecting the gameplay speed, basically?
I presume there's a certain method of doing this, so I was wondering what exactly it is?
Thankssss,
Alex!
Normally by using a timer to record how much time has passed since the last frame was rendered. There are many articles and samples on the subject available via Google:
Achieving Frame Rate Independent Game Movement
Constant game speed independent of variable FPS in OpenGL with GLUT?
Fixed time step vs Variable time step
Fix Your Timestep!
Of course if your FPS is allowed to be anything then you will end up with unrealistic simulations. For this reason there is the concept of a "fixed time step".
The "Fix Your Timestep!" (and the previous articles linked on that page) in particular is a good read on this subject.
Short answer to a large subject:
I guess your game should drive its "animation" not by frame sequence number but by the time elapsed from a reference point.
1) Example: a 1-second jump drawn with only 3 frames should show draw #1 at t0, draw #2 between t+0.25 and t+0.75, and draw #3 between t+0.75 and t+1.
2) Example: if your move/animation is determined by a formula like positionX(int relativeFrameNumber), you should consider changing your function to use time instead, e.g. positionX(long relativeTimeInMilliseconds),
or, with a small change in your game loop:
3) place a "wait" in your loop that is calibrated against a continuously computed (or fixed) frame-rate performance.
Hope that helps.
Many physics engines pass around a delta time in an update() method of some kind.
void update(float dt)
This delta value represents the current frame step relative to a fixed frame rate (say, 60 fps). For example, if dt is 1.0 then we're at 60 fps, if dt is 2.0 then we're at 30 fps, and if dt is 0.5 then we are at 120 fps, etc.
To move (in your case, jump) a character at the same speed for any frame rate, multiply dt by the object's velocity vector to keep the character jumping at the same speed.
void update(float dt)
{
    // myChar.Speed is in meters per second
    myChar.Vel += myChar.Direction.Normalized() * myChar.Speed * dt;
}
Note: Different calculation is required for quadratic physics.
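As a hedged illustration of that last note, here is a small Swift sketch (the JumpState name and the numbers are made up) of a dt-scaled jump under constant gravity using semi-implicit Euler, a common way to handle the quadratic case:
struct JumpState {
    var height = 0.0          // metres above the ground
    var verticalSpeed = 4.9   // metres per second, the initial jump impulse (~1 second jump)
}

let gravity = -9.81           // metres per second squared

// Semi-implicit Euler: update velocity first, then position, both scaled by dt,
// so the jump lasts roughly the same wall-clock time at 10 fps or 100 fps
// (the trajectory is only approximate, which is why the quadratic case needs extra care).
func update(_ state: inout JumpState, dt: Double) {
    state.verticalSpeed += gravity * dt
    state.height += state.verticalSpeed * dt
    if state.height <= 0 {    // landed
        state.height = 0
        state.verticalSpeed = 0
    }
}

var jumper = JumpState()
update(&jumper, dt: 1.0 / 60.0)  // call once per frame with the measured frame time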

High resolution and high framerate mouse coordinates on OSX? (Or other solution?)

I'd like to get mouse movements in high resolution and high framerate on OSX.
"High framerate" = 60 fps or higher (preferably > 120)
"High resolution" = Subpixel values
Problem
I've got an opengl view running at about the monitor refresh rate, so it's ~60 fps. I use the mouse to look around, so I've hidden the mouse cursor and I'm relying on mouse delta values.
The problem is the mouse events come in at much too low framerate, and values are snapped to integer (whole pixels). This causes a "choppy" viewing experience. Here's a visualization of mouse delta values over time:
mouse delta X
^ xx
2 | x x x x xx
| x x x x xx x x x
0 |x-x-x--xx-x-x-xx--x-x----x-xx-x-----> frame
|
-2 |
v
This is a typical (shortened) curve created from the user moving the mouse a little bit to the right. Each x represents the deltaX value for one frame, and since deltaX values are rounded to whole numbers, this graph is actually quite accurate. As we can see, the deltaX value will be 0.000 one frame, and then 1.000 the next, but then it will be 0.000 again, and then 2.000, and then 0.000 again, then 3.000, 0.000, and so on.
This means that the view will rotate 2.000 units one frame, then rotate 0.000 units the next, and then rotate 3.000 units. This happens while the mouse is being dragged with more or less constant speed. Needless to say, this looks like crap.
So, how can I 1) increase the event framerate of the mouse, and 2) get subpixel values?
So far
I've tried the following:
- (void)mouseMoved:(NSEvent *)theEvent {
    CGFloat dx = [theEvent deltaX];
    CGFloat dy = [theEvent deltaY];
    // ...
    actOnMouse(dx, dy);
}
Well, this one was obvious. dx here is float, but values are always rounded (0.000, 1.000 etc.). This creates the graph above.
So the next step was to try to tap the mouse events before they enter the WindowServer, I thought. So I've created a CGEventTap:
eventMask = (1 << kCGEventMouseMoved);
eventTap = CGEventTapCreate(kCGHIDEventTap, kCGHeadInsertEventTap,
                            0, eventMask, myCGEventCallback, NULL);
// ...
myCGEventCallback(...) {
    double dx = CGEventGetDoubleValueField(event, kCGMouseEventDeltaX);
    double dy = CGEventGetDoubleValueField(event, kCGMouseEventDeltaY);
}
Still the values are n.000, although I believe the rate of event firing is a little higher. But it's still not at 60 fps. I still get the chart above.
I've also tried setting the mouse sensitivity really high and then scaling the values down on my side. But it seems OSX adds some sort of acceleration or something—the values get really "unstable" and consequently unusable, and the rate of fire is still too low.
With no luck, I've been starting to follow the mouse events down the rabbit hole, and I've arrived at IOKit. This is scary for me. It's the mad hatter. The Apple documentation gets weird and seems to say "if you're this deep down, all you really need is header files".
So I have been reading header files. And I've found some interesting tidbits.
In <IOKit/hidsystem/IOLLEvent.h> on line 377 there's this struct:
struct { /* For mouse-down and mouse-up events */
UInt8 subx; /* sub-pixel position for x */
UInt8 suby; /* sub-pixel position for y */
// ...
} mouse;
See, it says sub-pixel position! Ok. Then on line 73 in <IOKit/hidsystem/IOLLParameter.h>
#define kIOHIDPointerResolutionKey "HIDPointerResolution"
Hmm.
All in all, I get the feeling OSX knows about sub-pixel mouse coordinates deep down, and there just has to be a way to read raw mouse movements every frame, but I've just no idea how to get those values.
Questions
Erh, so, what am I asking for?
Is there a way of getting high framerate mouse events in OSX? (Example code?)
Is there a way of getting sub-pixel mouse coordinates in OSX? (Example code?)
Is there a way of reading "raw" mouse deltas every frame? (I.e. not relying on an event.)
Or, how do I get NXEvents or set HIDParameters? Example code? (So I can dig deeper into this on my own...)
(Sorry for long post)
(This is a very late answer, but one that I think is still useful for others that stumble across this.)
Have you tried filtering the mouse input? This can be tricky because filtering tends to be a trade-off between lag and precision. However, years ago I wrote an article for a game development site explaining how I filtered my mouse movements. The link is http://www.flipcode.com/archives/Smooth_Mouse_Filtering.shtml.
Since that site is no longer under active development (and may go away) here is the relevant excerpt:
In almost every case, filtering means averaging. However, if we simply average the mouse movement over time, we'll introduce lag. How, then, do we filter without introducing any side-effects? Well, we'll still use averaging, but we'll do it with some intelligence. And at the same time, we'll give the user fine-control over the filtering so they can adjust it themselves.
We'll use a non-linear filter of averaged mouse input over time, where the older values have less influence over the filtered result.
How it works
Every frame, whether you move the mouse or not, we put the current mouse movement into a history buffer and remove the oldest history value. So our history always contains X samples, where X is the "history buffer size", representing the most recent sampled mouse movements over time.
If we used a history buffer size of 10, and a standard average of the entire buffer, the filter would introduce a lot of lag. Fast mouse movements would lag behind 1/6th of a second on a 60FPS machine. In a fast action game, this would be very smooth, but virtually unusable. In the same scenario, a history buffer size of 2 would give us very little lag, but very poor filtering (rough and jerky player reactions.)
The non-linear filter is intended to combat this mutually-exclusive scenario. The idea is very simple. Rather than just blindly average all values in the history buffer equally, we average them with a weight. We start with a weight of 1.0. So the first value in the history buffer (the current frame's mouse input) has full weight. We then multiply this weight by a "weight modifier" (say... 0.2) and move on to the next value in the history buffer. The further back in time (through our history buffer) we go, the values have less and less weight (influence) on the final result.
To elaborate, with a weight modifier of 0.5, the current frame's sample would have 100% weight, the previous sample would have 50% weight, the next oldest sample would have 25% weight, the next would have 12.5% weight and so on. If you graph this, it looks like a curve. So the idea behind the weight modifier is to control how sharply the curve drops as the samples in the history get older.
Reducing the lag means decreasing the weight modifier. Reducing the weight modifier to 0 will provide the user with raw, unfiltered feedback. Increasing it to 1.0 will cause the result to be a simple average of all values in the history buffer.
We'll offer the user two variables for fine control: the history buffer size and the weight modifier. I tend to use a history buffer size of 10, and just play with the weight modifier until I'm happy.
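A minimal sketch of that weighted-history filter, written in Swift under the assumptions above (the MouseFilter name and the default history size of 10 and weight modifier of 0.5 are just illustrative choices):
struct MouseFilter {
    var history: [Double]        // most recent sample first
    let weightModifier: Double   // how sharply older samples lose influence

    init(historySize: Int = 10, weightModifier: Double = 0.5) {
        history = Array(repeating: 0.0, count: historySize)
        self.weightModifier = weightModifier
    }

    // Call once per frame with the raw delta, even when it is 0.
    mutating func filtered(rawDelta: Double) -> Double {
        history.insert(rawDelta, at: 0)   // newest sample gets full weight
        history.removeLast()              // drop the oldest sample

        var weight = 1.0
        var weightedSum = 0.0
        var weightTotal = 0.0
        for sample in history {
            weightedSum += sample * weight
            weightTotal += weight
            weight *= weightModifier      // older samples count less and less
        }
        return weightedSum / weightTotal  // normalised weighted average
    }
}

// Usage: feed each frame's deltaX and deltaY through its own filter instance.
var filterX = MouseFilter()
let smoothDX = filterX.filtered(rawDelta: 2.0)
Lowering weightModifier toward 0 approaches raw, unfiltered input; raising it toward 1 approaches a plain average of the whole history buffer, exactly as described above.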
If you are using the IOHIDDevice callbacks for the mouse you can use this to get a double value:
double doubleValue = IOHIDValueGetScaledValue(inIOHIDValueRef, kIOHIDTransactionDirectionTypeOutput);
The possibility of subpixel coordinates exists because Mac OS X is designed to be resolution independent. A square of 2x2 hardware pixels on a screen could represent a single virtual pixel in software, allowing the cursor to be placed at (x + 0.5, y + 0.5).
On any actual Mac using normal 1x scaling, you will never see subpixel coordinates because the mouse cursor cannot be moved to a fractional pixel position on the screen--the quantum of mouse movement is precisely 1 pixel.
If you need to get access to pointer device delta information at a lower level than the event dispatching system provides then you'll probably need to use the user-space USB APIs.
