Adobe After Effects CC Transformation Speed

I have a logo moving from the center of the screen to the left-hand side of the screen. It's a fairly smooth transformation; however, I want the motion to start slowly, speed up to a maximum, then slow back down before completing its transformation. How can this be achieved? I have Googled but not had any luck; I think I'm just not using the correct search terms.
Thanks, Nick

Right-click on a keyframe. You have a number of options available, including Keyframe Assistant > "Easy Ease". This gives you "slow-ins" and "slow-outs" on your keyframes. To increase the eases, right-click on a keyframe, select "Keyframe Velocity...", and if the keyframe is the outgoing one (the first one), enter a greater value for the "outgoing influence" (say, 70%). Play around with these values. As usual for anything in AE, there are several ways to accomplish a task. Two other ways of doing this are a) using the tangent controls in the Comp Viewer window, and b) using the graph editor (see graphic).
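If it helps to see the idea outside of the UI: what Easy Ease does underneath is replace linear interpolation with an S-shaped curve, with zero velocity at both keyframes and maximum velocity in the middle. This isn't After Effects code, just a rough sketch of that math in Python (the pixel values are made up):

    # Sketch of the ease-in-out idea behind "Easy Ease" (not AE code).
    # t goes from 0 to 1 over the duration of the move.
    def ease_in_out(t):
        # Cubic smoothstep: zero velocity at t=0 and t=1, fastest at t=0.5.
        return t * t * (3 - 2 * t)

    def position(t, start_x, end_x):
        # Interpolate the layer's x position along the eased curve.
        return start_x + (end_x - start_x) * ease_in_out(t)

    # Example: logo moving from screen center (x=960) to the left (x=100).
    for i in range(11):
        t = i / 10
        print(f"t={t:.1f}  x={position(t, 960, 100):.1f}")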

Related

Detecting When A Moving Shape Is Clicked (For A Game)

I'm a CS scrub and I thought I understood everything until now. I'm being prompted to make anything I want, so I'm choosing to make a little game I've been thinking about for a few months. The problem is that I don't know where to start. We've been using JavaFX and we've done some animation but I don't fully understand everything. I don't really understand mouse events but the idea I have depends on them. Anyway, here's the idea:
The main (2D) game is about reducing fractions.
Imagine the window being split horizontally into top and bottom.
Now, imagine boxes spawning out of view and moving into view toward the horizontal center line. The boxes will only be moving vertically and each box has a random integer. When a box gets to the center, it'll stay there and allow for other boxes to land on top of it (or below it if it came from the bottom).
Boxes are eliminated by "reducing" a number in the numerator with a number in the denominator. Several boxes may be selected on one side before reducing them with the other side. Here's a picture that might help convey what I want to do:
Crude Three Frames of the Gameplay Drawn In Paint
Hopefully that all makes sense.
I've been trying to use an extension of Rectangle but I don't really know what else to do. I figure I'll need to create some ArrayLists to keep track of the boxes and some other lists to keep track of factors as well. Anyway, any help would be fantastic. Thank you guys very much!

Arrange blocks by 2D property without overlap

My app needs to show several buttons, without overlap, and preferably without scrolling or zooming. They must be big enough to poke with a finger and read the text. Button width depends on its text length, and the height is constant. The screen size is known.
Each button represents a food about which I know some nutritional information. I'll calculate a protein:carb ratio and a fat content, both ranging from 0% to 100%.
I want to put the buttons close to a position that reflects their nutritional content: e.g. protein-rich at the top, carby at the bottom, fatty on the right and lean on the left. So cake would be bottom right and meats would be somewhere on the top edge.
Often, there'll be overlap and I'll have to nudge them away from each other.
The puzzle is to invent an algorithm for that nudging. The desiderata in order of priority are:
1) Readable and pokeable size, no overlap.
2) No scrolling or zooming required, although it'll happen when there are so many buttons that they could never fit on the screen even if we didn't care where they were.
3) Buttons should be close to where the user would look based on knowing the nutritional content of the food.
Incidentally, I'm using JS on a smartphone, not Prolog or the like.
(There are some seeming dupes, but no solutions. One is about diagonal stalks, another just advocates throwing it at a game engine, but most are devoid of answers.)
The MArVL group at Monash University does work on constraint-based layout. Some of their software might be applicable to your problem.
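Before reaching for a full constraint solver, it may be worth trying a naive iterative pass that just pushes overlapping pairs apart; it often gets you most of the way. A rough sketch (in Python for illustration, though the question uses JS; the box fields and iteration cap are arbitrary, and it doesn't clamp boxes to the screen):

    # Naive "nudge apart" pass for axis-aligned boxes (a sketch, not the
    # MArVL constraint-based approach). Each box starts at its ideal
    # nutrient-derived position; overlapping pairs are pushed apart along
    # the axis of least overlap until no overlaps remain.
    def nudge_apart(boxes, iterations=200):
        # boxes: list of dicts with x, y (top-left corner), w, h
        for _ in range(iterations):
            moved = False
            for i in range(len(boxes)):
                for j in range(i + 1, len(boxes)):
                    a, b = boxes[i], boxes[j]
                    dx = (a["x"] + a["w"] / 2) - (b["x"] + b["w"] / 2)
                    dy = (a["y"] + a["h"] / 2) - (b["y"] + b["h"] / 2)
                    overlap_x = (a["w"] + b["w"]) / 2 - abs(dx)
                    overlap_y = (a["h"] + b["h"]) / 2 - abs(dy)
                    if overlap_x > 0 and overlap_y > 0:  # boxes intersect
                        # Push along the axis needing the smaller correction.
                        if overlap_x < overlap_y:
                            shift = overlap_x / 2 if dx >= 0 else -overlap_x / 2
                            a["x"] += shift
                            b["x"] -= shift
                        else:
                            shift = overlap_y / 2 if dy >= 0 else -overlap_y / 2
                            a["y"] += shift
                            b["y"] -= shift
                        moved = True
            if not moved:
                break
        return boxes

This keeps each box near its ideal spot (priority 3) while guaranteeing no overlap (priority 1); it doesn't enforce the no-scrolling constraint, which would need a shrink-or-scroll fallback on top.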

PowerPoint: Animate arrow between rectangles

I'm not sure whether it is the right place to ask a PowerPoint question. So, if it isn't, don't be too harsh with me, please.
I have two rectangles created using the drawing tools on a PowerPoint slide. The two rectangles are connected by an arrow with magnetic connectors.
Now I want to move one of the rectangles using an animation, and in a second step the other one.
That's easy so far.
But I also want the magnetic connectors to stay tied to the rectangles during the animation.
Is this possible somehow?
(I'm unfortunately not sure whether I always use the correct ppt terms above, since I only have a German installation of ppt.)
Thanks!
I don't think there is an easy solution, as the connector won't move along with an animation of the connected shape.
However, if the required movement isn't too complex, you could try to replicate the behavior with a set of animations:
1. Move the rectangle using a motion path.
2. Grow the connector shape horizontally or vertically.
3. Move the connector in the required direction using another motion path, which must be adjusted to the growth rate of the grow animation.
All animations need to start simultaneously ("start with previous"), and smooth start/end need to be set to 0 sec for the motion paths. A sample of the stretch effect (2.+3.) can be found here: http://pptheaven.mvps.org/experimental.html ("Zoom Test").

Computer vision to calculate the digit (finger) ratio

If someone scans their right hand pressed against the glass of a scanner, the result would look like this:
(without the orange and white annotations). How could we determine someone's 2D:4D ratio from an image of their hand?
You've already tagged this opencv which is great - I'd highly recommend taking a look at openFrameworks and the openCV addon, as the basic examples there will give you some great starting blocks for this.
The general approach I would take is to first distill the image into light and dark areas, detect the edges of the hand and fingers, and then simplify your data until you have lines representing the edges and tips of the fingers. Finally, take the lower inseam between the 2nd and 3rd fingers, stopping at the tip of the 2nd, and the inseam between the 3rd and 4th, stopping at the tip of the 4th, which should give you your 2D:4D ratio.
First, you'll need to process your images to get to black and white images openCV can easily handle. You may have to play with various thresholds to get both the outline of the hand and the inseams of the fingers to be detected. (You may even need two passes to detect both the outline and inseams)
While there are many approaches to feature detection, OpenCV will generally return arrays of "blobs" detected. With the right thresholds, I believe you would be able to reliably and simply find contiguous horizontal blobs (or nearly contiguous, allowing for some distance between nearby blobs) for the inseams of each finger.
A simple algorithm for detecting the inseams would be to walk through the detected blobs starting from the top left and proceeding left-to-right through the image, as if reading a page. Assemble an array of detected horizontal lines from the blobs in your image, and play with various image processing thresholds, minimum accepted line length, and distance allowances between detected blobs which you still consider part of the same line until you're satisfied you're detecting the finger edges well.
Once you have detected the horizontal lines, you can process the blobs again, looking for the vertical lines that represent the tips of the fingers (stopping when you hit the previously detected horizontal lines).
Finally, find the lines which represent the correct inseams, measure them until they intersect with the appropriate fingertips, and you should have your ratio!
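To make the first step concrete, here's a minimal sketch of the binarize-and-detect stage using OpenCV's Python bindings ("hand.png" is a placeholder for the scanned image, and the thresholds would need the tuning described above):

    import cv2

    # Minimal sketch of the first step: binarize the scan and pull out the
    # hand outline. "hand.png" is a placeholder for your scanned image.
    img = cv2.imread("hand.png", cv2.IMREAD_GRAYSCALE)

    # Otsu picks the light/dark threshold automatically; tweak it, or do a
    # second pass with a different threshold to catch the finger inseams.
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Contours stand in for the "blobs" discussed above; the largest one
    # should be the hand outline. (OpenCV 4.x return signature.)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hand = max(contours, key=cv2.contourArea)
    print("hand outline has", len(hand), "points")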
Interesting question. I'd go about it this way:
First, binarize the image by Otsu's thresholding. Then find the skeleton of the image using a Medial-Axis Transform (MAT). This would mean doing a distance transform on the image, then using adaptive thresholding to get the local maxima in the distance transform. This gives a rough and ready skeleton of your image. Sample code from here.
The obtained hand skeleton may be slightly disconnected, in which case the OpenCV morphology "CLOSE" (not "OPEN") operation can connect it into a single skeleton. Checking the convexity defects of the resulting hand should then give an estimate.
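A rough sketch of that pipeline in OpenCV's Python bindings, with the caveat that the local-maxima step here is a crude stand-in for proper adaptive thresholding ("hand.png" and the 0.7 ridge factor are placeholders):

    import cv2
    import numpy as np

    # Binarize, take the distance transform, keep pixels near the local
    # ridge of the transform (approximate medial axis), then CLOSE to
    # reconnect small breaks.
    img = cv2.imread("hand.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)

    # Crude local-maxima test: compare each pixel's distance value to the
    # maximum in its neighborhood (found by dilation).
    local_max = cv2.dilate(dist, np.ones((7, 7), np.uint8))
    skeleton = np.uint8((dist > 0.7 * local_max) & (binary > 0)) * 255

    # CLOSE (dilate then erode) to reconnect the skeleton.
    kernel = np.ones((3, 3), np.uint8)
    skeleton = cv2.morphologyEx(skeleton, cv2.MORPH_CLOSE, kernel)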

Automatic tracking algorithm

I'm trying to write a simple tracking routine to track some points on a movie.
Essentially I have a series of movies, each 100 frames long, showing some bright spots on a dark background.
I have ~100-150 spots per frame, and they move over the course of the movie. I would like to track them, so I'm looking for some efficient (but possibly not overkilling to implement) routine to do that.
A few more details:
the spots are a few (e.g. 5x5) pixels in size
the movements are not big. A spot generally does not move more than 5-10 pixels from its original position. The movements are generally smooth.
the "shape" of these spots is generally fixed: they don't grow or shrink, BUT they become less bright as the movie progresses.
the spots don't move in a particular direction. They can move right, then left, then right again.
the user will select a region around each spot and then this region will be tracked, so I do not need to automatically find the points.
As the videos are b/w, I thought I should rely on brightness. For instance, I thought I could move the region around and calculate the correlation of the region's area in the previous frame with that at various positions in the next frame. I understand that this is a quite naïve solution, but do you think it may work? Does anyone know specific algorithms that do this? It doesn't need to be super fast; as long as it is accurate, I'm happy.
Thank you
nico
Sounds like a job for Blob detection to me.
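For a concrete starting point, OpenCV ships a ready-made blob detector; a minimal sketch ("frame.png" is a placeholder, and the parameter values are guesses to tune for ~5x5-pixel spots):

    import cv2

    # Configure the stock blob detector for small bright spots.
    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor = True
    params.blobColor = 255          # bright spots on a dark background
    params.filterByArea = True
    params.minArea = 10             # roughly matches ~5x5-pixel spots
    params.maxArea = 200

    detector = cv2.SimpleBlobDetector_create(params)
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    keypoints = detector.detect(frame)
    print([(kp.pt, kp.size) for kp in keypoints])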
I would suggest Pearson's product-moment correlation. Given a model (which could be any template image), you can measure the correlation of the template with any section of the frame.
The result is a correlation coefficient that tells you how well the samples match the template. It is especially applicable to 2D cases.
It has the advantage of being independent of the samples' absolute values, since the result depends on the covariance of the samples about their means.
Once you detect a high correlation, you can search the successive frames in the neighborhood of the original position, and select the best correlation factor.
The size and rotation of the template do matter, but as I understand it that is not an issue here. You can customize the detection for any shape, since the template image could represent any configuration.
Here is a single-pass algorithm implementation that I've used and that works correctly.
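For what it's worth, this kind of mean-subtracted, normalized correlation is available out of the box in OpenCV as template matching with TM_CCOEFF_NORMED. A sketch, where prev_region, frame, and the search radius are all assumptions to adapt (the mean subtraction is also what makes it tolerant of the spots dimming over time):

    import cv2

    # Correlation tracking sketch: prev_region is the user-selected patch
    # from the previous frame, frame is the next frame (both grayscale),
    # and (x, y) is the patch's previous top-left corner.
    def track(frame, prev_region, x, y, search=10):
        h, w = prev_region.shape
        # Search only a small window around the old position, since spots
        # move at most 5-10 pixels per frame.
        y0, y1 = max(0, y - search), y + h + search
        x0, x1 = max(0, x - search), x + w + search
        window = frame[y0:y1, x0:x1]

        # TM_CCOEFF_NORMED is a mean-subtracted, normalized correlation,
        # i.e. essentially the Pearson coefficient at every offset.
        scores = cv2.matchTemplate(window, prev_region, cv2.TM_CCOEFF_NORMED)
        _, best, _, loc = cv2.minMaxLoc(scores)
        return x0 + loc[0], y0 + loc[1], best  # new corner + confidence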
This has got to be a well-researched topic, and I suspect there won't be any 100% accurate solution.
Some links which might be of use:
Learning patterns of activity using real-time tracking. A paper by two guys from MIT.
Kalman Filter. Especially the Computer Vision part.
Motion Tracker. A student project, which also has code and sample videos I believe.
Of course, this might be overkill for you, but hope it helps giving you other leads.
Simple is good. I'd start doing something like:
1) over a small rectangle that surrounds a spot:
2) apply a weighted average to all the pixel coordinates in the area
3) call the averaged X and Y values the object's position
4) while scanning these pixels, do something to approximate the bounding box size
5) repeat on the next frame with a slightly enlarged bounding box so you don't clip a spot that moves
The weight for the average should go to zero for pixels below some threshold. Number 4 can be as simple as tracking the min/max position of anything brighter than the same threshold.
This will of course have issues with spots that overlap or cross paths. But for some reason I keep thinking you're tracking stars with some unknown camera motion, in which case this should be fine.
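A sketch of steps 1-4 with NumPy (the function name, box bounds, and threshold are placeholders; step 5 is just calling this again on the next frame with the returned box enlarged a little):

    import numpy as np

    # Weighted centroid of bright pixels inside a search rectangle.
    # frame is a 2D grayscale array; (x0, y0)-(x1, y1) is the rectangle.
    def centroid(frame, x0, y0, x1, y1, threshold=50):
        patch = frame[y0:y1, x0:x1].astype(float)
        # Weight goes to zero for pixels below the threshold (step 2).
        weights = np.where(patch > threshold, patch, 0.0)
        total = weights.sum()
        if total == 0:
            return None  # lost the spot
        ys, xs = np.indices(patch.shape)
        # Step 3: the weighted-average coordinates are the position.
        cx = (xs * weights).sum() / total + x0
        cy = (ys * weights).sum() / total + y0
        # Step 4: min/max of everything above the threshold is the box.
        on = weights > 0
        bx0, bx1 = xs[on].min() + x0, xs[on].max() + x0
        by0, by1 = ys[on].min() + y0, ys[on].max() + y0
        return (cx, cy), (bx0, by0, bx1, by1)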
I'm afraid that blob tracking is not simple, not if you want to do it well.
Start with blob detection as genpfault says.
Now you have spots on every frame and you need to link them up. If the blobs are moving independently, you can use some sort of correspondence algorithm to link them up. See for instance http://server.cs.ucf.edu/~vision/papers/01359751.pdf.
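Since the spots here move at most 5-10 pixels per frame, the simplest workable correspondence scheme is greedy nearest-neighbor matching with a distance gate; a sketch (names and the gate value are placeholders):

    import math

    # Greedy nearest-neighbor linking: match each detection in the new
    # frame to the closest unclaimed track, closest pairs first.
    def link(tracks, detections, max_dist=10):
        # tracks: list of (x, y) last-known positions; detections: (x, y).
        pairs = sorted(
            (math.dist(t, d), i, j)
            for i, t in enumerate(tracks)
            for j, d in enumerate(detections)
        )
        used_t, used_d, links = set(), set(), {}
        for dist, i, j in pairs:
            if dist > max_dist:
                break  # remaining pairs are all farther away
            if i not in used_t and j not in used_d:
                links[i] = j
                used_t.add(i)
                used_d.add(j)
        return links  # track index -> detection index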
Now you may have collisions. You can use a mixture of Gaussians to try to separate them, give up and let the tracks cross, or use any other before-and-after information to resolve the collisions (e.g. if A and B collide, and A is brighter before and will be brighter after, you can keep track of A; if A and B move along predictable trajectories, you can use that too).
Or you can collaborate with a lab that does this sort of stuff all the time.
