Sun transits the sky in only 10 hours - macos

I'm writing a toy app to experiment with some Core Animation features, including animating along a path (that's where the Sun's movement comes in) and manipulating time.
https://github.com/boredzo/WatchCompass
(Never mind the button, which isn't implemented yet.)
The sun and watch face are CALayers, each containing a static image. The hour hand is a CAShapeLayer within the watch face layer, with its anchor point set to one end ((NSPoint){ 0.5, 1.0 }).
The sun is animated using a CAKeyframeAnimation along a path. The ellipse shows the path; you can see that they're not lined up for some reason, but that's a different question.
The hour hand's transform.rotation.z is animated using a CABasicAnimation, as described in this answer.
The problem—at least the one I'm asking about in this question—is the difference in duration.
Both animations are set to exactly the same duration, but the sun arrives back at its starting position a full two clock-hours before the hour hand does.
Of course, eventually the clock's duration will be exactly half the sun's duration (or its speed set to 2), since a clock only has 12 hours. If I do that, then the hour hand falls 4 clock-hours behind the sun, rather than 2.
So, given that both animations have the same duration, or the duration of the clock's animation is an even multiple of the sun's animation, why does the clock take longer?
For that matter, although I'm not complaining, why does the sun wait for the clock to catch up?

This appears to be due to the fact that you aren’t specifying a value for the keyTimes property of the keyframe animation. Per the documentation:
For the best results, the number of elements in the array should match the number of elements in the values property or the number of control points in the path property. If they do not, the timing of your animation might not be what you expect.
Indeed, setting keyTimes to @[ @0, @0.25, @0.5, @0.75, @1 ] appears to correct this.

Related

A better way to roll dice

I have a game that requires the player to roll two dice. As this is a multiplayer game, the way I currently do this is to have 6 animations (1 for each possible outcome of a die). When the player clicks a button, it sends a request to my server code. My server code determines each die's outcome and sends the results to the client. The client then plays the corresponding animations.
This works ok, but has some issues. For instance, if the server sends back two of the same values (two 6's, for example) then the animations don't work correctly. As both animations are the same, they overlay each other, and it looks like only one die was rolled.
Is there a better way to do this? Instead of animations, using "real" dice? If that's the case, I always need to be sure to "pre-determine" the outcome of the dice roll, on the server. I also need to make sure the dice don't fall off the table or jostle any of the other player pieces on the board.
Thanks for any ideas.
The server only needs to care about the value result, not running physics calculations.
Set up 12 different rolling animations:
Six for the first die
Six for the second die
Each one should always end with the same modeled face pointing upwards (the starting position isn't relevant, only the ending position). For the later steps you'll probably want to adjust the model's UV coordinates to use a very tall or very wide texture (or just a slice of a square one), so that the six faces aren't laid out like a typical unwrapped cube but sit all in a line: 1-2-3-4-5-6.
The next step is picking a random animation to play. You've already got code to run a given animation, just set it to pick randomly instead of based on the die-roll-value from the server:
int animNum = Random.Range(0, 6); // UnityEngine.Random: the int overload's max is exclusive, so this gives 0-5
Finally, the fun bit: adjusting the texture so that the desired face shows when the animation is done. I'm going to assume that you arrange your faces along the top edge of your square texture. The key call is Material.SetTextureOffset().
int showFace = Random.Range(0, 6); // in the real game, this value should come from the server
die.GetComponent<Renderer>().material.SetTextureOffset("_MainTex", new Vector2(showFace / 6f, 0f));
This will set the texture offset such that the desired face shows on top; you'll even be able to see it changing in the inspector. It works because the UVs are arranged so that each face uses the next chunk of the texture over, and because textures wrap around when the offset runs past the edge (unless the texture is set to Clamp in its import settings, which you don't want here).
Note that this will cause a new material instance to be instantiated (which is not very performant). If you want to avoid this, you'll have to use a material property block instead.
You could simulate the physics on the server, keep track of the positions and the orientations of the dice for the duration of the animation, and then send the data over to the client. I understand it's a lot of data for something so simple, but that's one way you can get the rolls to appear realistic and synced between all clients.
If only Unity's physics was deterministic that would be a whole lot easier.

Why is WheelDelta = 120?

I'm working with mouse events, specifically OnMouseWheel. Many code samples compute the distance the view changes (or the zoom, for instance in a 3D application) as Distance = Sign(WheelDelta) * Constant or Distance = WheelDelta / WHEEL_DELTA or something of that kind, assuming that WheelDelta is always a multiple of 120 (the WHEEL_DELTA constant = 120).
I have already found that on touch interfaces / tablets the input may depend on the scrolling length.
I was wondering why Microsoft set the default WheelDelta to 120, and not 100 or 10 or anything else. In what other cases may the wheel delta be something other than 120?
The Qt Documentation elaborates a bit more on why it is actually 120:
QPoint QWheelEvent::angleDelta() const
Returns the distance that the wheel is rotated, in eighths of a degree. A
positive value indicates that the wheel was rotated forwards away from the
user; a negative value indicates that the wheel was rotated backwards toward
the user.
Most mouse types work in steps of 15 degrees, in which case the delta value is
a multiple of 120; i.e., 120 units * 1/8 = 15 degrees.
However, some mice have finer-resolution wheels and send delta values that are
less than 120 units (less than 15 degrees). To support this possibility, you
can either cumulatively add the delta values from events until the value of 120
is reached, then scroll the widget, or you can partially scroll the widget in
response to each wheel event.
https://doc.qt.io/qt-5/qwheelevent.html#angleDelta
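To make that "accumulate until 120" advice concrete, here is a rough TypeScript sketch; WHEEL_NOTCH and WheelAccumulator are illustrative names I've made up, not part of any API:
// Sum up fine-grained deltas and only report whole 120-unit notches.
const WHEEL_NOTCH = 120;
class WheelAccumulator {
  private remainder = 0;
  // Feed raw deltas (possibly much smaller than 120 on high-resolution wheels)
  // and get back the number of whole notches to scroll; the leftover carries
  // over to the next event.
  add(delta: number): number {
    this.remainder += delta;
    const notches = Math.trunc(this.remainder / WHEEL_NOTCH);
    this.remainder -= notches * WHEEL_NOTCH;
    return notches; // positive = away from the user, negative = toward the user
  }
}
// A wheel sending 30-unit deltas scrolls one notch only after four events:
const acc = new WheelAccumulator();
console.log([30, 30, 30, 30].map(d => acc.add(d))); // [0, 0, 0, 1]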
Wheel deltas are no longer fixed to multiples of 120. As I understand it, the value 120 was chosen to allow for finer resolutions in the future, which is obviously now.
See this article from MSDN
The question is marked as answered already, but I thought I might provide some more information.
If I understand correctly, WHEEL_DELTA is actually 40, not 120; the 120 comes from the mouse driver multiplying the raw WHEEL_DELTA value by the number of lines to scroll, which is 3 by default. You can obtain the scroll line count using
My.Computer.Mouse.WheelScrollLines
This can most easily be seen using a NumericUpDown control, which on scroll adjusts the value by the increment multiplied by that line count.
Just messing with my wheel mouse, there are 18 detents in a full revolution, which is 20 degrees per detent (sure, I know that's a small sample size of the mice in the world...!). A value of 40 would suggest they felt half-degrees were fine enough, though this last paragraph is supposition.
EDIT: Not one to spread misinformation: on further study, WHEEL_DELTA is in fact 120, and NumericUpDown proved to be a false positive. Nonetheless, the rest of the discussion is valid, if one applies a factor of three to the logic.
As you have noticed, laptop touchpads can scroll too (either two-finger scrolling or a scroll zone on the right-hand side), in which case there can be lots of events with very small wheelDelta values (either needing integration, or perhaps timeouts to prevent too many redraws).
Also different OS's or configurations or devices can have different meanings for scrolling - pixels, lines, or pages. e.g. DOM event.deltaMode
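For the DOM case, deltaMode tells you which unit deltaY is expressed in, and you can normalize before scrolling. A rough TypeScript sketch (the line and page heights below are assumptions for illustration, not spec values):
const LINE_HEIGHT = 16;   // assumed pixels per "line"
const PAGE_HEIGHT = 800;  // assumed pixels per "page"
function wheelDeltaInPixels(e: WheelEvent): number {
  switch (e.deltaMode) {
    case WheelEvent.DOM_DELTA_PIXEL: return e.deltaY;               // already pixels
    case WheelEvent.DOM_DELTA_LINE:  return e.deltaY * LINE_HEIGHT; // lines
    case WheelEvent.DOM_DELTA_PAGE:  return e.deltaY * PAGE_HEIGHT; // pages
    default: return e.deltaY;
  }
}
window.addEventListener("wheel", (e) => {
  console.log(`scroll by ${wheelDeltaInPixels(e)} px`);
});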
Finally some devices (mice and touchpads) also allow horizontal scrolling.
The above is more specific to browser DOM events, but the same issues may apply to Win events too.
Edit:
From the Firefox MDN docs there are three events you are probably interested in: WM_MOUSEWHEEL, WM_MOUSEHWHEEL, and WM_GESTURE (panning on touch devices).
A search of the Mozilla Bugzilla database shows a variety of problems with some Synaptics and ALPS touchpad drivers sending WM_VSCROLL instead of WM_MOUSEWHEEL (may be relevant if supporting touchpads).
If you want horizontal mouse scrolling support, this article from a flash dev says: [mousewheel support] was added in Vista so if you are using XP or 2000 you need IntelliType Pro and/or IntelliPoint installed for WM_MOUSEHWHEEL support.
@Krom: more speculation and loose facts, but maybe useful to others :-)

Is it better to count n ticks or run two separate timers

I'm making a snake game, and on every tick the snake moves. Because the snake moves a whole unit on each tick, the animation is jumpy. The game is on an invisible grid, so the snake can only change directions at specific points.
Would it be considered better practice to have one timer that moves the snake a pixel at a time, counting ticks with a variable, and on every nth tick run the code to change direction? Or is it better to have two separate timers: one to move the snake a pixel at a time and another to change the snake's direction?
Essentially, you're creating a "game loop" to update the display list (in ActionScript) and re-draw your view. When the game gets complex, Timers and listening for ENTER_FRAME events will be constrained by the Flash Player's screen-refresh settings (i.e. its FPS) and by how much the CPU is being asked to process.
For frame-rate-independent animation, you should probably use a combination of ENTER_FRAME to drive updates and getTimer() calls to time your animations accurately (to the millisecond) and normalize the experience across a variety of platforms.
Basically, listen for the ENTER_FRAME event, check the current milliseconds since the last update and if it exceeds your refresh-rate (in terms of milli's), fire off the animation/state update: check for snake collision detection with a "direction-block" - handle if true, then update the snake's movement / position / state.
Flash updates the display list according to its frame-rate settings and whatever CPU / machine-dependent constraints apply. The key, I've found, is to make sure you're normalizing the speed of updates to make the experience consistent. Timers have their uses, but can cause memory and performance issues. ENTER_FRAME is synced to the main timeline / frame-rate settings.
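The above is ActionScript-specific (ENTER_FRAME plus getTimer()), but the normalization idea is language-neutral. Here is a rough fixed-timestep sketch in TypeScript, where STEP_MS, update() and render() are placeholders:
const STEP_MS = 100;           // advance game state every 100 ms, regardless of FPS
let accumulator = 0;
let last = performance.now();
function update(): void { /* move the snake one cell, handle "direction-block" collisions */ }
function render(): void { /* redraw the view */ }
function frame(now: number): void {
  accumulator += now - last;
  last = now;
  while (accumulator >= STEP_MS) { // catch up if frames arrived late
    update();
    accumulator -= STEP_MS;
  }
  render();                       // draw as often as frames come in
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);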
For a dated, but interesting discussion 3 years ago, check out this post from actionscript.org.

Time delays and Model View Controller

I am implementing a turn based game, there are two sides and each side has several units, at each specific moment only one unit can move across the board.
Since only one unit can move at a time, after I figure out where it should go, as far as the simulation is concerned it can instantly be teleported there. But when playing the game you would want to see the unit moving, so that you realise who moved and where it went.
The question is: would you put the movement algorithm (e.g. interpolating between 2 points in N seconds) in the model, and have the view show the unit at the interpolated position without even knowing that it is moving? Or would you teleport the unit and notify the view that it should show the unit moving however it sees fit?
If you would take the second approach, how would you keep the simulation from running too far ahead of the view, would you put the view in command of resuming the simulation after the movement ended?
Thanks in advance, Xtapodi.
Ah, yet another example that reminds us that MVC was never originally designed for real-time graphics. ;)
I would store the current position and the previous position in the model. When the object moves, the current position is copied into the previous position, the new position is copied into the current position, and a notification is sent to the view that the model has changed. The view can then interpolate between the old and the new position accordingly. It can speed up, slow down, or even remove the interpolation entirely based on the specific view settings, without requiring any extra data to be stored within the model.
Rather than storing the current position and the previous position, you could instead just store the last move with each unit, and the move itself contains the previous position. This is probably more versatile if you ever need to store extra information about a move.
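Here is a rough TypeScript sketch of that arrangement; Unit, BoardView and the other names are illustrative, not from any framework:
interface Point { x: number; y: number; }
class Unit {
  previous: Point;
  constructor(public current: Point) { this.previous = current; }
  // The simulation moves instantly; it only records where the unit came from.
  moveTo(target: Point, notify: (u: Unit) => void): void {
    this.previous = this.current;
    this.current = target;
    notify(this); // tell the view the model changed
  }
}
class BoardView {
  constructor(private durationMs = 300) {}
  // The view owns the animation: it interpolates between previous and current.
  onUnitMoved(u: Unit): void {
    const start = performance.now();
    const step = (now: number) => {
      const t = Math.min((now - start) / this.durationMs, 1);
      const x = u.previous.x + (u.current.x - u.previous.x) * t;
      const y = u.previous.y + (u.current.y - u.previous.y) * t;
      this.draw(u, { x, y });
      if (t < 1) requestAnimationFrame(step);
    };
    requestAnimationFrame(step);
  }
  private draw(_u: Unit, at: Point): void { console.log(`draw at ${at.x}, ${at.y}`); }
}
// Wiring it up: the model teleports instantly; the view animates the difference.
const view = new BoardView();
const unit = new Unit({ x: 0, y: 0 });
unit.moveTo({ x: 4, y: 2 }, (u) => view.onUnitMoved(u));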
What you probably want is to have the unit image move each frame. How far to move the image each frame is similar to your interpolation.
unitsPerFrame = totalUnits / (framesPerSecond * totalSeconds)
So if I want to move an image from position 0 to position 60 in 2 seconds and my framerate is 30, I need to move 60 units in 60 frames, therefore my speed is 1. So each frame, I move the image 1 unit, and if moving the unit will take me beyond my destination, simply set my location to my destination.
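A rough TypeScript sketch of that per-frame stepping, including the snap to the destination when the next step would overshoot (the numbers match the example above):
const totalUnits = 60;
const framesPerSecond = 30;
const totalSeconds = 2;
const unitsPerFrame = totalUnits / (framesPerSecond * totalSeconds); // = 1
let position = 0;
const destination = 60;
function tick(): void {
  const step = Math.sign(destination - position) * unitsPerFrame;
  if (Math.abs(destination - position) <= Math.abs(step)) {
    position = destination; // the next step would overshoot: snap to the destination
  } else {
    position += step;
  }
}
for (let frame = 0; frame < framesPerSecond * totalSeconds; frame++) tick();
console.log(position); // 60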

How do I make controls/elements move with inertia?

Modern UIs are starting to give their elements nice inertia when moving. Tabs slide in, pages transition, and even some listboxes and scroll elements have inertia to them (the iPhone, for example). What is the best algorithm for this? It is more than just gravity, as they speed up and then slow down as they fall into place. I have tried various formulae for speeding up to a maximum (terminal) velocity and then slowing down, but nothing I have tried "feels" right. It always feels a little bit off. Is there a standard for this, or is it just a matter of playing with various numbers until it looks/feels right?
You're talking about two different things here.
One is momentum - giving things residual motion when you release them from a drag. This is simply about remembering the velocity of a thing when the user releases it, then applying that velocity to the object every frame and also reducing the velocity every frame by some amount. How you reduce velocity every frame is what you experiment with to get the feel right.
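A rough TypeScript sketch of that momentum loop; the damping factor is exactly the knob you experiment with to get the feel right:
const DAMPING = 0.95;   // closer to 1 = longer glide (tune by feel)
const MIN_SPEED = 0.01; // stop animating once the motion is imperceptible
let position = 0;
let velocity = 12;      // px per frame, remembered at the moment of release
function coastFrame(): boolean {
  position += velocity; // apply the residual velocity
  velocity *= DAMPING;  // and reduce it a little every frame
  return Math.abs(velocity) > MIN_SPEED;
}
while (coastFrame()) { /* redraw the object at `position` here */ }
console.log(position.toFixed(1)); // where it comes to rest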
The other thing is ease-in and ease-out animation. This is about smoothly accelerating/decelerating objects when you move them between two positions, instead of just linearly interpolating. You do this by simply feeding your 'time' value through a sigmoid function before you use it to interpolate an object between two positions. One such function is
smoothstep(t) = 3*t*t - 2*t*t*t [0 <= t <= 1]
This gives you both ease-in and ease-out behaviour. However, you'll more commonly see only ease-out used in GUIs. That is, objects start moving snappily, then slow to a halt at their final position. To achieve that you just use the right half of the curve, i.e.
smoothstep_eo(t) = 2*smoothstep((t+1)/2) - 1
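Both formulas drop straight into code. A rough TypeScript sketch that maps the eased time onto a start/end position (the lerp helper is mine, not part of the formulas above):
function smoothstep(t: number): number {
  return 3 * t * t - 2 * t * t * t;       // ease-in and ease-out, 0 <= t <= 1
}
function smoothstepEaseOut(t: number): number {
  return 2 * smoothstep((t + 1) / 2) - 1; // right half of the curve only
}
function lerp(a: number, b: number, t: number): number {
  return a + (b - a) * t;
}
// Move from x = 0 to x = 200: ease-in/out vs. ease-out only.
for (const t of [0, 0.25, 0.5, 0.75, 1]) {
  console.log(lerp(0, 200, smoothstep(t)), lerp(0, 200, smoothstepEaseOut(t)));
}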
Mike F's got it: you apply a time-position function to calculate the position of an object with respect to time (don't muck around with velocity; it's only useful when you're trying to figure out what algorithm you want to use.)
Robert Penner's easing equations and demo are superb; like the jQuery demo, they demonstrate visually what the easing looks like, but they also give you a position time graph to give you an idea of the equation behind it.
What you are looking for is interpolation. Roughly speaking, there are functions that vary from 0 to 1 and, when scaled and translated, create nice-looking movement. This is quite often used in Flash and there are tons of examples. (Note: in Flash, interpolation has picked up the name "tweening", and the most popular type of interpolation is known as "easing".)
Have a look at this to get an intuitive feel for the movement types:
SparkTable: Visualize Easing Equations.
When applied to movement, scaling, rotation and other animations, these equations can give a sense of momentum, or friction, or bouncing, or elasticity. For an example applied to animation, have a look at Robert Penner's easing demo. He is the author of the most popular series of animation functions (I believe Adobe's built-in ones are based on his). This type of transition works equally well on alphas (for fades).
There is a bit of method to the usage. easeInOut starts slow, speeds up and then slows down. easeOut starts fast and slows down (like friction), and easeIn starts slow and speeds up (like momentum). Depending on the feel you want, you choose the appropriate one. Then you choose between Sine, Expo, Quad and so on for the strength of the effect. The others are easy to work out from their names (e.g. Bounce bounces, Back goes a little further then comes back like an elastic).
Here is a link to the equations from the popular Tweener library for AS3. You should be able to rewrite these in JavaScript (or any other language) with little to no trouble.
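As a taste, here are two commonly used Penner-style curves sketched in TypeScript, normalized so t runs from 0 to 1 (the Tweener originals take time/begin/change/duration parameters instead):
function easeOutQuad(t: number): number {
  return t * (2 - t);                    // fast start, decelerating to rest
}
function easeInOutCubic(t: number): number {
  return t < 0.5
    ? 4 * t * t * t                      // accelerate through the first half
    : (Math.pow(2 * t - 2, 3) + 2) / 2;  // decelerate through the second half
}
// Apply them the same way as any easing function:
// position = start + (end - start) * easeOutQuad(elapsed / duration)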
It's playing with the numbers. What feels good is good.
I've tried to develop magic formulas myself for years. In the end the ugly hack always felt best. Just make sure you somehow time your animations properly and don't rely on some kind of redraw/refresh rate. These tend to change based on the OS.
I'm no expert on this either, but I believe they are done with quadratic formulas that, when given the correct parameters, start fast or slow and dramatically increase or decrease towards the end until a certain point is reached.

Resources