Can't change animation frame number in Unity3D

I am trying to create a button animation in Unity.
When I create the clip,
the frame number stays at 60 by default.
I then change the frame number, but after moving the mouse pointer it goes back to 60.
I tried again and again, deleting the clip and recreating it,
but it had no effect;
it's still the same.
For better understanding:
1. When I create the clip
2. Changing the frame number from 60 to 0
3. After moving the mouse pointer, it goes back to 60

You're trying to change the total frame number.
Yes, it's like this in Unity 5.6 and later versions, so it can be confusing.
If you look above your Samples field (the one showing 60), you can see a place where 0 is typed:
that's your current frame,
and you can make your desired animation by changing the current time frame.

That number is the sampling rate of the animation, i.e. how many frames of that animation clip are "executed" in a second.
60 means the animation runs at 60 fps, or 1 frame every 16.6 ms, so the general formula is:
Sample = number of frames / second
Hence, you can't set that value to 0: an animation that runs at 0 fps is a still frame.
To get to a specific frame of the animation, you need to move the red vertical line or click on a specific time on the timeline bar.
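If it helps to see the arithmetic, here is a tiny standalone sketch (plain C++, not Unity code; the values are just the example numbers from above) converting a sample rate into the duration of one frame:

    #include <cstdio>

    int main() {
        int sampleRate = 60;                      // samples (frames) per second
        double msPerFrame = 1000.0 / sampleRate;  // duration of one frame in milliseconds
        std::printf("%d fps -> %.1f ms per frame\n", sampleRate, msPerFrame);  // prints: 60 fps -> 16.7 ms
        return 0;
    }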

Related

In Chrome DevTools Performance, why does the FPS always show 60 fps with dropped frames, when actually it isn't 60?

I'm studying "Analyze runtime performance" using the demo https://googlechrome.github.io/devtools-samples/jank/
When I open the FPS meter (using Frame Rendering Stats in the Rendering tab), it shows 10.8 fps.
However, when I record using Performance, there are lots of red blocks and a few green blocks, roughly one green block for every 20 red blocks. Yet every block shows 60 fps, which is totally wrong.
Since the red blocks are dropped frames, and I noticed that during the 20 red blocks the animation never changes, I imagined the real value might be 60 / 20 = 3 fps. This seems reasonable, as there are in fact only 3 green blocks inside 1000 ms. But that still doesn't equal 10.8 fps. It really confuses me!
Supplement:
The documentation shows 12 fps (see the image here), but it doesn't show any red dropped frames. Maybe that's because dropped frames were introduced in 2020 (https://docs.google.com/document/d/1-KP3fAjemdm7lnvCll9T1Lg-ikCzRPI-oNCp8fHAQCc/edit#heading=h.neguedjcao67) and the document is outdated?
I tried https://googlechrome.github.io/devtools-samples/jank/ and set the CPU to "6x slowdown" in the Performance tab. The animation became significantly janky, while the frame rate stayed very close to 60 fps.
It seems the FPS meter is not accurate.

Camera and screen sync

Problem description:
We have a camera that is sending video of a live sports game at 30 frames per second.
On the other side we have a screen that immediately displays every frame that arrives.
Assumptions:
* Frames will arrive in order.
1. What will the experience be for a person watching the screen?
2. What can we do to improve it?
Your playback will have a very variable framerate, which will cause visible artifacts during any smooth movement.
To remedy this, you need to implement an image FIFO that covers more time than your worst-case delay difference (ideally at least 2x more). So if your delay varies between 100 ms and 300 ms at 30 fps, the minimal FIFO size is:
n = 2 * (300 - 100) * 0.001 * 30 = 12 images
Now the reproduction should work like this (a minimal cyclic-buffer sketch follows the steps):
1. init playback
Simply start acquiring images into the FIFO until the FIFO is half full (it holds images covering the biggest delay difference).
2. playback
Any incoming image is inserted into the FIFO at the time of receipt (unless the FIFO is full, in which case you either wait until there is room for the new image or skip the frame). Meanwhile, in some thread or timer that runs in parallel, you fetch an image from the FIFO every 1/30 seconds and render it (if the FIFO is empty, you reuse the last image, and you can even go back to step 1).
3. playback stop
Once the FIFO has been empty for longer than some threshold (no new frames are incoming), you stop the playback.
The FIFO size reserve and the point at which to start playback depend on the timing properties of the image source (so that it neither overruns nor underruns the FIFO).
In case you need to implement your own FIFO class, a cyclic buffer of constant size is your friend (so you do not need to copy all the stored images on FIFO in/out operations).
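For illustration, here is a minimal C++ sketch of such a constant-size cyclic FIFO (the Frame type and all names here are placeholders, not from the original question):

    #include <cstddef>
    #include <optional>
    #include <vector>

    struct Frame { /* decoded image data would live here */ };

    // Constant-size cyclic buffer: push/pop only move indices,
    // so the stored frames never have to be shifted around.
    class FrameFifo {
    public:
        explicit FrameFifo(std::size_t capacity) : buf_(capacity) {}

        bool push(const Frame& f) {              // false when full: caller waits or drops the frame
            if (count_ == buf_.size()) return false;
            buf_[(head_ + count_) % buf_.size()] = f;
            ++count_;
            return true;
        }

        std::optional<Frame> pop() {             // empty when underrun: caller repeats the last frame
            if (count_ == 0) return std::nullopt;
            Frame f = buf_[head_];
            head_ = (head_ + 1) % buf_.size();
            --count_;
            return f;
        }

        bool halfFull() const { return count_ >= buf_.size() / 2; }  // playback start condition
        bool empty()    const { return count_ == 0; }

    private:
        std::vector<Frame> buf_;
        std::size_t head_  = 0;  // index of the oldest buffered frame
        std::size_t count_ = 0;  // number of buffered frames
    };

With the 100-300 ms example above you would construct FrameFifo fifo(12); and begin rendering once fifo.halfFull() returns true.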

Unity : 30fps vs 60fps on 30fps animation

I'm testing this on mobile. I have a 30-frame animation with a 30 fps frame rate, and I build it to my mobile twice, with an in-game target frame rate of 30 and of 60. Since the animation frame rate is time-based in Unity, the animation lasts exactly 1 second in both builds.
This is what I assume happens:
1) On the 30 fps build, the animation plays 1 frame on each in-game frame, so it takes 1 second.
2) On the 60 fps build, the animation plays 1 frame on every 2 in-game frames, so it takes 1 second as well.
My question is, why does the 60 fps build look better compared with the 30 fps build, since the animation just plays 30 frames throughout 1 second?
There isn't anything else in the scene, only the animation, so nothing else would create the feeling that 60 fps looks better.
I hope you can all understand the question. If anyone has the same feeling, or you can test it yourself to see if you feel the same, feel free to comment, answer or discuss. Thanks.
I think I might have the answer: since the animation is time-based, Unity fills in the empty keyframes better at 60 fps. Example: set a position keyframe on the 1st frame, then set another position keyframe at the 30th frame; Unity effectively plays this as a 60 fps animation, since there are so many empty keyframes to interpolate.
I'm not sure if this is the exact answer. If someone can confirm this, or if there is no other answer, I'll mark this as the answer.
I find the question very vague and lacking specifics about what kind of animation is being discussed and how it is being used. 2D or 3D? Sprites or geometry? Frames of bitmap graphics, or tweened motion created within Unity?
As an example: if your animation were a bitmap sprite animation and the sprite never changed coordinate positions, then 30 frames of bitmap animation played over a duration of 1 second would appear EXACTLY THE SAME in a 60 fps build as in a 30 fps build.
If you're also moving the sprite from one XY position to another, then the MOTION would appear smoother in the 60 fps build due to interpolation of the coordinate positions (as you said in your own answer). But your bitmap frames still play exactly the same (1 image per 1/30th of a second).
Similarly, if the animation discussed here is geometry, then everything is based on interpolation of the shapes, and thus yes, 60 fps allows for 60 unique frames of animation.
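To make the interpolation point concrete, here is a toy sketch (plain C++, not Unity code; the keyframe values are made up) of sampling a position keyed at t = 0 s and t = 1 s at the two render rates. The 60 fps loop produces twice as many distinct positions from the same two keyframes:

    #include <cstdio>

    // Linear interpolation between a key at (t=0, x=0) and a key at (t=1, x=10).
    float sampleX(float t) { return 10.0f * t; }

    int main() {
        for (int f = 0; f < 30; ++f)   // 30 fps: 30 distinct sampled positions
            std::printf("30fps frame %2d: x = %.3f\n", f, sampleX(f / 30.0f));
        for (int f = 0; f < 60; ++f)   // 60 fps: 60 distinct positions, smoother motion
            std::printf("60fps frame %2d: x = %.3f\n", f, sampleX(f / 60.0f));
        return 0;
    }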

Get mousewheel anywhere on screen? (Python or C++)

In the Windows API,
x,y = win32api.GetCursorPos()
...will get the mouse position regardless of whether it's inside your window or whether your program even has a GUI. (MSDN) (Python question)
Is there a similar function to get the scroll wheel's (mouse wheel's) current rotation?
The mouse wheel's rotation is not an absolute value like the cursor position. Rather, the wheel position is reported as a delta from the previous wheel position, either positive or negative, expressed as a multiple of 120 (120 = 1 line). So if the user scrolls up three lines, the delta might be +360, whereas if they scroll down three lines the delta might be -360.
You can keep an internal variable that you update every time your app gets a WM_MOUSEWHEEL message, which lets you track the cumulative change in rotation since your app started. So if a user scrolls up 10 lines and down 20 lines, the cumulative delta would be -1200.
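For illustration, a minimal C++ sketch of that accumulator inside a Win32 window procedure (the window setup is omitted, and the variable name is just an example):

    #include <windows.h>

    static int g_cumulativeDelta = 0;  // running wheel total since the app started

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
        switch (msg) {
        case WM_MOUSEWHEEL:
            // GET_WHEEL_DELTA_WPARAM yields a multiple of WHEEL_DELTA (120) per notch:
            // positive for scrolling up, negative for scrolling down.
            g_cumulativeDelta += GET_WHEEL_DELTA_WPARAM(wParam);
            return 0;
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }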

High resolution and high framerate mouse coordinates on OSX? (Or other solution?)

I'd like to get mouse movements in high resolution and high framerate on OSX.
"High framerate" = 60 fps or higher (preferably > 120)
"High resolution" = Subpixel values
Problem
I've got an OpenGL view running at about the monitor refresh rate, so it's ~60 fps. I use the mouse to look around, so I've hidden the mouse cursor and I'm relying on mouse delta values.
The problem is that the mouse events come in at much too low a framerate, and the values are snapped to integers (whole pixels). This causes a "choppy" viewing experience. Here's a visualization of mouse delta values over time:
mouse delta X
^ xx
2 | x x x x xx
| x x x x xx x x x
0 |x-x-x--xx-x-x-xx--x-x----x-xx-x-----> frame
|
-2 |
v
This is a typical (shortened) curve created by the user moving the mouse a little to the right. Each x represents the deltaX value for one frame, and since deltaX values are rounded to whole numbers, this graph is actually quite accurate. As you can see, the deltaX value will be 0.000 one frame, then 1.000 the next, but then it will be 0.000 again, then 2.000, then 0.000 again, then 3.000, then 0.000, and so on.
This means that the view rotates 2.000 units one frame, then 0.000 units the next, and then 3.000 units. This happens while the mouse is being dragged at more or less constant speed. Needless to say, this looks like crap.
So, how can I 1) increase the event framerate of the mouse, and 2) get subpixel values?
So far
I've tried the following:
- (void)mouseMoved:(NSEvent *)theEvent {
    CGFloat dx, dy;
    dx = [theEvent deltaX];
    dy = [theEvent deltaY];
    // ...
    actOnMouse(dx, dy);
}
Well, this one was obvious. dx here is a float, but the values are always rounded (0.000, 1.000, etc.). This creates the graph above.
So the next step was to try to tap the mouse events before they enter the WindowServer. For that I created a CGEventTap:
CGEventMask eventMask = CGEventMaskBit(kCGEventMouseMoved);
eventTap = CGEventTapCreate(kCGHIDEventTap, kCGHeadInsertEventTap,
                            kCGEventTapOptionDefault, eventMask,
                            myCGEventCallback, NULL);
// ...
CGEventRef myCGEventCallback(CGEventTapProxy proxy, CGEventType type,
                             CGEventRef event, void *refcon) {
    double dx = CGEventGetDoubleValueField(event, kCGMouseEventDeltaX);
    double dy = CGEventGetDoubleValueField(event, kCGMouseEventDeltaY);
    // ...
    return event;
}
The values are still n.000, although I believe the rate of event firing is a little higher. But it's still not at 60 fps; I still get the chart above.
I've also tried setting the mouse sensitivity really high and then scaling the values down on my side. But it seems OSX adds some sort of acceleration or something: the values get really "unstable" and consequently unusable, and the rate of fire is still too low.
With no luck, I've been starting to follow the mouse events down the rabbit hole, and I've arrived at IOKit. This is scary for me. It's the mad hatter. The Apple documentation gets weird and seems to say "if you're this deep down, all you really need is header files".
So I have been reading header files. And I've found some interesting tidbits.
In <IOKit/hidsystem/IOLLEvent.h> on line 377 there's this struct:
struct { /* For mouse-down and mouse-up events */
    UInt8 subx; /* sub-pixel position for x */
    UInt8 suby; /* sub-pixel position for y */
    // ...
} mouse;
See, it says sub-pixel position! OK. Then, on line 73 of <IOKit/hidsystem/IOLLParameter.h>:
#define kIOHIDPointerResolutionKey "HIDPointerResolution"
Hmm.
All in all, I get the feeling OSX knows about sub-pixel mouse coordinates deep down, and there just has to be a way to read raw mouse movements every frame, but I've just no idea how to get those values.
Questions
Erh, so, what am I asking for?
Is there a way of getting high framerate mouse events in OSX? (Example code?)
Is there a way of getting sub-pixel mouse coordinates in OSX? (Example code?)
Is there a way of reading "raw" mouse deltas every frame? (I.e. not relying on an event.)
Or, how do I get NXEvents or set HIDParameters? Example code? (So I can dig deeper into this on my own...)
(Sorry for the long post.)
(This is a very late answer, but one that I think is still useful for others that stumble across this.)
Have you tried filtering the mouse input? This can be tricky, because filtering tends to be a trade-off between lag and precision. However, years ago I wrote an article for a game development site that explained how I filtered my mouse movements. The link is http://www.flipcode.com/archives/Smooth_Mouse_Filtering.shtml.
Since that site is no longer under active development (and may go away), here is the relevant excerpt:
In almost every case, filtering means averaging. However, if we simply average the mouse movement over time, we'll introduce lag. How, then, do we filter without introducing any side-effects? Well, we'll still use averaging, but we'll do it with some intelligence. And at the same time, we'll give the user fine-control over the filtering so they can adjust it themselves.
We'll use a non-linear filter of averaged mouse input over time, where the older values have less influence over the filtered result.
How it works
Every frame, whether you move the mouse or not, we put the current mouse movement into a history buffer and remove the oldest history value. So our history always contains X samples, where X is the "history buffer size", representing the most recent sampled mouse movements over time.
If we used a history buffer size of 10, and a standard average of the entire buffer, the filter would introduce a lot of lag. Fast mouse movements would lag behind 1/6th of a second on a 60FPS machine. In a fast action game, this would be very smooth, but virtually unusable. In the same scenario, a history buffer size of 2 would give us very little lag, but very poor filtering (rough and jerky player reactions.)
The non-linear filter is intended to combat this mutually-exclusive scenario. The idea is very simple. Rather than just blindly averaging all values in the history buffer equally, we average them with a weight. We start with a weight of 1.0, so the first value in the history buffer (the current frame's mouse input) has full weight. We then multiply this weight by a "weight modifier" (say... 0.2) and move on to the next value in the history buffer. The further back in time (through our history buffer) we go, the less weight (influence) the values have on the final result.
To elaborate, with a weight modifier of 0.5, the current frame's sample would have 100% weight, the previous sample would have 50% weight, the next oldest sample would have 25% weight, the next would have 12.5% weight and so on. If you graph this, it looks like a curve. So the idea behind the weight modifier is to control how sharply the curve drops as the samples in the history get older.
Reducing the lag means decreasing the weight modifier. Reducing the weight modifier to 0 will provide the user with raw, unfiltered feedback. Increasing it to 1.0 will cause the result to be a simple average of all values in the history buffer.
We'll offer the user two variables for fine control: the history buffer size and the weight modifier. I tend to use a history buffer size of 10, and just play with the weight modifier until I'm happy.
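Here is a minimal C++ sketch of that weighted-history filter under the scheme described above (the class and member names are mine, not from the article):

    #include <cstddef>
    #include <deque>

    // Non-linear weighted average of recent mouse deltas: the newest sample
    // has weight 1.0, and each older sample's weight is multiplied by
    // weightModifier, exactly as the excerpt describes.
    class MouseFilter {
    public:
        MouseFilter(std::size_t historySize, float weightModifier)
            : size_(historySize), modifier_(weightModifier) {}

        // Call once per frame with the raw delta; returns the filtered delta.
        float filter(float rawDelta) {
            history_.push_front(rawDelta);               // newest sample first
            if (history_.size() > size_) history_.pop_back();

            float sum = 0.0f, weightSum = 0.0f, weight = 1.0f;
            for (float sample : history_) {
                sum += sample * weight;
                weightSum += weight;
                weight *= modifier_;                     // older samples count less
            }
            return weightSum > 0.0f ? sum / weightSum : 0.0f;
        }

    private:
        std::size_t size_;
        float modifier_;
        std::deque<float> history_;
    };

With weightModifier = 0 this degenerates to raw, unfiltered input (only the newest sample has non-zero weight), and with weightModifier = 1 it becomes a plain average of the whole history, matching the two extremes described above.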
If you are using the IOHIDDevice callbacks for the mouse, you can use this to get a double value:
double doubleValue = IOHIDValueGetScaledValue(inIOHIDValueRef, kIOHIDValueScaleTypePhysical);
The possibility of subpixel coordinates exists because Mac OS X is designed to be resolution independent. A square of 2x2 hardware pixels on a screen could represent a single virtual pixel in software, allowing the cursor to be placed at (x + 0.5, y + 0.5).
On any actual Mac using normal 1x scaling, you will never see subpixel coordinates, because the mouse cursor cannot be moved to a fractional pixel position on the screen; the quantum of mouse movement is precisely 1 pixel.
If you need to get access to pointer device delta information at a lower level than the event dispatching system provides then you'll probably need to use the user-space USB APIs.