Modern UIs are starting to give their elements nice inertia when moving. Tabs slide in, pages transition, and even some listboxes and scroll elements have nice inertia to them (the iPhone, for example). What is the best algorithm for this? It is more than just gravity, as they speed up and then slow down as they fall into place. I have tried various formulae for speeding up to a maximum (terminal) velocity and then slowing down, but nothing I have tried "feels" right. It always feels a little bit off. Is there a standard for this, or is it just a matter of playing with various numbers until it looks/feels right?
You're talking about two different things here.
One is momentum - giving things residual motion when you release them from a drag. This is simply about remembering the velocity of a thing when the user releases it, then applying that velocity to the object every frame and also reducing the velocity every frame by some amount. How you reduce velocity every frame is what you experiment with to get the feel right.
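Here's a minimal sketch of that momentum idea in Java - the class name, fields and friction constant are illustrative, and the multiplicative decay is exactly the knob you tune for feel:

public class MomentumScroller {
    private double position;  // current scroll offset in pixels
    private double velocity;  // pixels per frame, captured at drag release
    private static final double FRICTION = 0.95; // decay per frame; tune for feel

    // Called once, when the user releases the drag.
    public void release(double releaseVelocity) {
        this.velocity = releaseVelocity;
    }

    // Called every frame after release.
    public void update() {
        position += velocity;  // apply residual motion
        velocity *= FRICTION;  // reduce velocity each frame
        if (Math.abs(velocity) < 0.1) {
            velocity = 0;      // snap to rest once motion is negligible
        }
    }
}

Note that a fixed per-frame decay ties the feel to your frame rate; if frame times vary, scale the decay by elapsed time instead.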
The other thing is ease-in and ease-out animation. This is about smoothly accelerating/decelerating objects when you move them between two positions, instead of just linearly interpolating. You do this by simply feeding your 'time' value through a sigmoid function before you use it to interpolate an object between two positions. One such function is
smoothstep(t) = 3*t*t - 2*t*t*t [0 <= t <= 1]
This gives you both ease-in and ease-out behaviour. However, you'll more commonly see only ease-out used in GUIs. That is, objects start moving snappily, then slow to a halt at their final position. To achieve that you just use the right half of the curve, i.e.
smoothstep_eo(t) = 2*smoothstep((t+1)/2) - 1
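In Java, a direct translation of those two formulas, plus an illustrative helper showing how the eased time drives the interpolation between two positions:

public final class Easing {
    // smoothstep(t) = 3t^2 - 2t^3, for t in [0, 1]
    static double smoothstep(double t) {
        return 3 * t * t - 2 * t * t * t;
    }

    // Right half of the smoothstep curve: fast start, eased stop.
    static double smoothstepEaseOut(double t) {
        return 2 * smoothstep((t + 1) / 2) - 1;
    }

    // Interpolate between start and end using the eased time.
    static double animate(double start, double end, double t) {
        return start + (end - start) * smoothstepEaseOut(t);
    }
}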
Mike F's got it: you apply a time-position function to calculate the position of an object with respect to time (don't muck around with velocity; it's only useful when you're trying to figure out what algorithm you want to use.)
Robert Penner's easing equations and demo are superb; like the jQuery demo, they demonstrate visually what the easing looks like, but they also give you a position time graph to give you an idea of the equation behind it.
What you are looking for is interpolation. Roughly speaking, there are functions that vary from 0 to 1 and when scaled and translated create nice looking movement. This is quite often used in Flash and there are tons of examples: (NOTE: in Flash interpolation has picked up the name "tweening" and the most popular type of interpolation is known as "easing".)
Have a look at this to get an intuitive feel for the movement types:
SparkTable: Visualize Easing Equations.
When applied to movement, scaling, rotation and other animations, these equations can give a sense of momentum, friction, bouncing or elasticity. For an example applied to animation, have a look at Robert Penner's easing demo. He is the author of the most popular series of animation functions (I believe Adobe's built-in ones are based on his). This type of transition works equally well on alphas (for fade-ins).
There is a bit of method to the usage. easeInOut starts slow, speeds up, and then slows down. easeOut starts fast and slows down (like friction), and easeIn starts slow and speeds up (like momentum). Depending on the feel you want, you choose the appropriate one. Then you choose between Sine, Expo, Quad and so on for the strength of the effect. The others are easy to work out by their names (e.g. Bounce bounces, and Back goes a little further then comes back like an elastic).
Here is a link to the equations from the popular Tweener library for AS3. You should be able to rewrite these in JavaScript (or any other language) with little to no trouble.
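For reference, one of the simplest of those equations, easeOutQuad, looks like this in the four-argument form Penner's equations conventionally use (t = elapsed time, b = start value, c = total change, d = duration):

// easeOutQuad: decelerating from full speed to zero.
static double easeOutQuad(double t, double b, double c, double d) {
    t /= d;                      // normalize time to [0, 1]
    return -c * t * (t - 2) + b;
}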
It's playing with the numbers. What feels good is good.
I've tried to develop magic formulas myself for years. In the end the ugly hack always felt best. Just make sure you somehow time your animations properly and don't rely on some kind of redraw/refresh rate. These tend to change based on the OS.
I'm no expert on this either, but I believe they are done with quadratic formulas that, when given the correct parameters, start fast or slow and dramatically increase or decrease towards the end until a certain point is reached.
First, a disclaimer. I'm well aware of the std answer for X vs Y - "it depends". However, I'm working on a very general purpose product, and I'm trying to figure out "it depends on what". I'm also not really able to test the wide variety of hardware, so try-and-see is an imperfect measure at best.
I've been doing some googling, and I've found very little reference to using an offline render target/surface for hit-testing. I'm not sure of the nomenclature; what I'm talking about is using very simple shaders to render a geometry ID (for example) to a buffer, then reading the pixel value under the mouse to see what geometry is directly under the mouse pointer.
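For what it's worth, the core trick of that technique is just packing a geometry/node ID into the color each primitive is rendered with, then decoding the pixel you read back from the pick buffer. A rough illustration (the names are mine, not from any API):

final class PickEncoding {
    // 24-bit ID -> opaque RGB color; room for ~16.7M distinct objects.
    static int idToColor(int id) {
        return 0xFF000000 | (id & 0x00FFFFFF);
    }

    // Pixel read back from the pick buffer -> geometry ID.
    static int colorToId(int pixel) {
        return pixel & 0x00FFFFFF;
    }
}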
I have, however, found 101 different tutorials on doing triangle intersection, a la D3DXIntersect and the DirectX "Pick" sample.
I'm a little curious about this - I would have thought using HW was the standard method. By all rights, it should be many orders of magnitude faster, and should scale far better.
I'm relatively new to graphics programming, so here are my assumptions, for you to disabuse.
1) A simple shader that does geometry transform & writes a Node + UV value should be nearly free.
2) The main cost in the HW pick method would be the buffer fetch, when getting the rendered surface back off the GPU for the CPU to read over. I have no idea how costly this is. µs? ms? Seconds? Minutes?
3) This may be obvious, but I am assuming that Triangle Intersection (D3DXIntersect) is only possible on the CPU.
4) A possible cost people want to avoid is the cost of the extra render target(s) (z-buffer + surface). I'm guessing about 10 megs for 1280x1024 (a standard screen size?). This is acceptable to me, although if I could render a smaller surface (trading accuracy for memory) I would do so (is that possible?).
This all leads to a few thoughts.
1) For very simple scenes, triangle intersection may be faster. Quite what counts as simple/complex is hard to guess at this point. I'm looking at possibly 100s of tris up to 10000s. Probably not much more than that.
2) The HW buffer needs to be rendered regardless of whether or not it's used (in my case). However, it can be reused without cost (i.e. click-drag, where the mouse tracks across a static scene).
2a) Possibly, triangle intersection may be preferable if my scene updates every frame, or if I have limited mouse interaction.
Now I've finished writing, I see a similar question has been asked: (3D Graphics Picking - What is the best approach for this scenario). My problem with this is (a) why would you need to re-render your picking surface for click-drag as your scene hasn't actually changed, and (b) wouldn't it still be faster than triangle intersection?
I welcome thoughts, criticism, and any manner of side-tracking :-)
I'm developing in J2ME and using a Canvas to draw some images.
Now, my question is: what is the difference in drawing speed between the sample snippets below?
drawing after clipping to the image rectangle:
g.clipRect(x, y, myImage.getWidth(), myImage.getHeight());
g.drawImage(myImage, x , y, Graphics.TOP | Graphics.LEFT);
g.setClip(0, 0, screenWidth, screenHeight);
drawing without clip:
g.drawImage(myImage, x, y, Graphics.TOP | Graphics.LEFT);
Is the first one faster? I'm drawing on screen a lot.
Well, the direct answer to your question would be Mu, I'm afraid - because you appear to be approaching the issue from the wrong direction.
Thing is, the clipping API is not intended for performance considerations / optimizations. You can find full coverage of its purpose in the API documentation (available online); it does not state anything related to performance impact:
Clipping
The clip is the set of pixels in the destination of the Graphics object that may be modified by graphics rendering operations.
There is a single clip per Graphics object. The only pixels modified by graphics operations are those that lie within the clip. Pixels outside the clip are not modified by any graphics operations.
Operations are provided for intersecting the current clip with a given rectangle and for setting the current clip outright...
Attempting to use the clipping API for imaginary performance considerations will make your code a nightmare to understand for future maintainers. Note this future maintainer may be you yourself, just a few weeks / months / years later - I for one have had my nose broken on my own code written some time ago without a clearly understandable intent - trust me, it hurts the same as messing with poor code written by anyone else.
Don't get me wrong - there is a chance that clipping may have a substantial performance impact in some particular case on a specific device - why not, everything is possible given the variety of MIDP implementations. Know what? There is even a chance of it having the opposite impact on some other device, why not.
If (if) that happens, if (if) you'll somehow get a clear, solid, tested and proven justification of specific performance impact - then (then), go ahead, implement whatever tricks necessary to reach required performance, no matter how perverse they may be (BTDTGTTS). Until then, though, drop any baseless assumptions that just may come to your mind.
Until then... Just. Drop. It.
Developers love to optimize code and with good reason. It is so satisfying and fun. But knowing when to optimize is far more important. Unfortunately, developers generally have horrible intuition about where the performance problems in an application will actually be... Most performance tuning reminds me of the old joke about the guy who's looking for his keys in the kitchen even though he lost them in the street, because the light's better in the kitchen... (Brian Goetz)
This will almost certainly vary between platforms, and will depend on how much you're actually drawing.
I suggest you measure performance yourself by logging the number of paints per second, or the average duration of a paint method, and painting this on screen.
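Something like this inside your Canvas subclass would do (a rough sketch; the fields are illustrative):

private int frames;
private int fps;
private long windowStart = System.currentTimeMillis();

protected void paint(Graphics g) {
    // ... your normal drawing here ...
    frames++;
    long now = System.currentTimeMillis();
    if (now - windowStart >= 1000) { // once per second
        fps = frames;
        frames = 0;
        windowStart = now;
    }
    g.setColor(0xFFFFFF);
    g.drawString(fps + " fps", 0, 0, Graphics.TOP | Graphics.LEFT);
}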
Drawing without clip should be faster on any platform for the simple reason that you are not calling two clip methods. But I might ask, why are you using clip to begin with?
You usually use clipping when you have an animation sprite or an icon variation in the same file. In that case you could instead create a file for each frame/icon. It will increase your JAR file size and use more heap space to hold these images in memory, but they will be drawn faster.
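For reference, the typical sprite-sheet clip looks something like this - a sketch assuming a horizontal strip of equal-width frames, and the same screenWidth/screenHeight values as the snippets in the question:

// Draw one frame out of a horizontal strip packed into a single image.
void drawFrame(Graphics g, Image strip, int frame, int frameWidth, int x, int y) {
    g.clipRect(x, y, frameWidth, strip.getHeight());
    // Shift the strip left so the desired frame lands inside the clip.
    g.drawImage(strip, x - frame * frameWidth, y, Graphics.TOP | Graphics.LEFT);
    g.setClip(0, 0, screenWidth, screenHeight); // restore the full clip
}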
Let's say I throw a cube and it falls on the ground with 45, 45, 0 rotations (on its corner). Now in a 'perfect' world, the cube wouldn't consist of atoms, it would be 'perfect', there would be no wind (or any lesser movement of air), etc. And in the end the cube would stay on its corner. But we don't live in such a boring 'perfect' world, and physics emulators should take this into account, and they do quite nicely. So the cube falls on its side.
Now my question is, how random is that? Does the cube always fall on its left side? Or maybe it depends on Math.random()? Or maybe it depends on the current time? Or maybe it depends on some custom random function that takes not time, but the parameters of the objects on stage, as its seed?
The reason I am asking is that, if the randomness isn't based on time, I could probably cache the results of collisions (when objects stop) for their particular initial positions to optimize my animation. If I cached the whole animation, I wouldn't care, but if I only cached the end result, I could be surprised that two exactly identical situations can evaluate to different results, and then the other wouldn't fit my cached version.
I could just check the source of the Math.random functions, but that would be a shallow method, as the code is surely optimized and such sophisticated randomization isn't needed there; I personally would use something like fallLeft = time % 2. Also, the code could change over time.
Couldn't find anything about AwayPhysics here, so it's probably something new for everyone - that's why I added the parentheses part; the world won't explode if I assume one thing and it happens that in AwayPhysics it's the opposite. I just want to know what the standard is.
I, personally, don't use pre-made physics engines. Instead, when I want one, I write it myself, so I know how they work inside. The reason the cube tips over is that the physics engine is inaccurate. It can only approximate things like trig functions, square roots, integrals, et cetera, so it estimates them to a few digits of accuracy (15 in JavaScript). If you have the case of, say, two perfect circles stacked on top of each other, the angle between them (pi/2) would slowly change to some seemingly random value based on the way the program approximates pi. Eventually, this tiny error would grow as the circles rolled off each other, and the top one would just fall. So, in answer to your question, the cube should fall the same way each time if thrown in the same way, but the direction in which it always falls is effectively random.
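To see the kind of rounding error involved, here's a trivial Java example - a physics step compounds errors like this on every frame:

public class FloatDrift {
    public static void main(String[] args) {
        double sum = 0.0;
        for (int i = 0; i < 10; i++) {
            sum += 0.1; // 0.1 has no exact binary representation
        }
        System.out.println(sum);        // 0.9999999999999999
        System.out.println(sum == 1.0); // false
    }
}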
I have a web cam that takes a picture every N seconds. This gives me a collection of images of the same scene over time. I want to process that collection of images as they are created to identify events like someone entering into the frame, or something else large happening. I will be comparing images that are adjacent in time and fixed in space - the same scene at different moments of time.
I want a reasonably sophisticated approach. For example, naive approaches fail for outdoor applications. If you count the number of pixels that change, for example, or the percentage of the picture that has a different color or grayscale value, that will give false positive reports every time the sun goes behind a cloud or the wind shakes a tree.
I want to be able to positively detect a truck parking in the scene, for example, while ignoring lighting changes from sun/cloud transitions, etc.
I've done a number of searches, and found a few survey papers (Radke et al., for example) but nothing that actually gives algorithms that I can put into a program I can write.
Use color spectrum analysis without luminance: when the sun goes behind a cloud for a while, you will get a similar result, since the colors do not change (much).
Don't go for big changes, but quick changes. If the luminance of the image changes by -10% over 10 minutes, it's the usual evening effect. But when the change is -5%, 0, +5% within seconds, it's a quick change.
Don't forget to adjust the reference values.
Split the image into smaller regions. Then, when all the regions change the same way, you know it's a global change, like an eclipse or whatever, but if only one region's parameters are changing, then something is happening there (a rough sketch of this idea follows these tips).
Use masks to create smart regions. If you're watching a street, filter out the sky, the trees (blown by wind), etc. You may set up different trigger values for different regions. The regions should overlap.
A special case of the region is the line. A line (a narrow region) contains fewer, and more homogeneous, pixels than a flat area. Mark, say, a green fence; it's easy to detect whether someone crosses it, as it makes a bigger change in the line than in a flat area.
If you can, change the IRL world. Repaint the fence in a strange color to create a color spectrum which can be identified more easily. Paint tags on the floor and wall which can be OCRed by the program, so you can detect whether something hides them.
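A rough sketch of the region comparison in Java (grid size and threshold are arbitrary; you would tune both, and keep per-region reference values as suggested above):

import java.awt.image.BufferedImage;

public class RegionDiff {
    // Flags each grid cell whose average per-channel difference between
    // the previous and current frame exceeds the threshold.
    public static boolean[][] changedCells(BufferedImage prev, BufferedImage cur,
                                           int gridW, int gridH, double threshold) {
        boolean[][] changed = new boolean[gridH][gridW];
        int cellW = cur.getWidth() / gridW;
        int cellH = cur.getHeight() / gridH;
        for (int gy = 0; gy < gridH; gy++) {
            for (int gx = 0; gx < gridW; gx++) {
                double diff = avgDiff(prev, cur, gx * cellW, gy * cellH, cellW, cellH);
                changed[gy][gx] = diff > threshold;
            }
        }
        return changed;
    }

    // Mean absolute difference of R, G and B over one cell.
    private static double avgDiff(BufferedImage a, BufferedImage b,
                                  int x0, int y0, int w, int h) {
        long sum = 0;
        for (int y = y0; y < y0 + h; y++) {
            for (int x = x0; x < x0 + w; x++) {
                int pa = a.getRGB(x, y);
                int pb = b.getRGB(x, y);
                sum += Math.abs(((pa >> 16) & 0xFF) - ((pb >> 16) & 0xFF))
                     + Math.abs(((pa >> 8) & 0xFF) - ((pb >> 8) & 0xFF))
                     + Math.abs((pa & 0xFF) - (pb & 0xFF));
            }
        }
        return sum / (double) (w * h);
    }
}

If every cell flags at once, treat it as a global (lighting) change; if only one or two flag, something local happened.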
I believe you are looking for Template Matching.
Also I would suggest you look into OpenCV.
We had to contend with many of these issues in our interactive installations. It's tough to not get false positives without being able to control some of your environment (sounds like you will have some degree of control). In the end we looked at combining some techniques and we created an open piece of software named OpenTSPS (Open Toolkit for Sensing People in Spaces - http://www.opentsps.com). You can look at the C++ source in github (https://github.com/labatrockwell/openTSPS/).
We use 'progressive background relearning' to adjust to the changing background over time. Progressive relearning is particularly useful in variable lighting conditions - e.g. if lighting in a space changes from day to night. This, in combination with blob detection, works pretty well, and the only way we have found to improve on it is to use 3D cameras like the Kinect, which cast out IR and measure it.
There are other algorithms that might be relevant, like SURF (http://achuwilson.wordpress.com/2011/08/05/object-detection-using-surf-in-opencv-part-1/ and http://en.wikipedia.org/wiki/SURF) but I don't think it will help in your situation unless you know exactly the type of thing you are looking for in the image.
Sounds like a fun project. Best of luck.
The problem you are trying to solve is very interesting indeed!
I think that you would need to attack it in parts:
As you already pointed out, a sudden change in illumination can be problematic. This is an indicator that you probably need to achieve some sort of illumination-invariant representation of the images you are trying to analyze.
There are plenty of techniques lying around; one I have found very useful for illumination invariance (applied to face recognition) is DoG filtering (Difference of Gaussians).
The idea is that you first convert the image to grayscale. Then you generate two blurred versions of this image by applying a Gaussian filter, one a little more blurry than the other (you could use a sigma of 1.0 and of 2.0, respectively). Then you subtract the pixel intensities of the more-blurry image from the less-blurry image. This operation enhances edges and produces a similar image regardless of strong variations in illumination intensity. These steps can be performed very easily using OpenCV (as others have stated). This technique has been applied and documented here.
This paper adds an extra step involving contrast equalization. In my experience this is only needed if you want to obtain "visible" images from the DoG operation (pixel values tend to be very low after the DoG filter and are viewed as black rectangles onscreen), and performing a histogram equalization is an acceptable substitute if you want to be able to see the effect of the DoG filter.
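If you'd rather not pull in OpenCV, here's a sketch of the DoG steps in plain Java 2D, assuming an 8-bit grayscale input and the 1.0/2.0 sigmas suggested above. The +128 offset is just one way of keeping the low DoG values visible, per the remark about black rectangles:

import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

public class DoGFilter {
    public static BufferedImage differenceOfGaussians(BufferedImage gray) {
        BufferedImage blur1 = blur(gray, 1.0f); // less blurry
        BufferedImage blur2 = blur(gray, 2.0f); // more blurry
        BufferedImage out = new BufferedImage(
                gray.getWidth(), gray.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
        for (int y = 0; y < gray.getHeight(); y++) {
            for (int x = 0; x < gray.getWidth(); x++) {
                int a = blur1.getRaster().getSample(x, y, 0);
                int b = blur2.getRaster().getSample(x, y, 0);
                // Subtract the more-blurry from the less-blurry image,
                // offset by 128 so the result is visible on screen.
                int v = Math.min(255, Math.max(0, (a - b) + 128));
                out.getRaster().setSample(x, y, 0, v);
            }
        }
        return out;
    }

    // Gaussian blur via a normalized convolution kernel.
    private static BufferedImage blur(BufferedImage src, float sigma) {
        int radius = (int) Math.ceil(3 * sigma);
        int size = 2 * radius + 1;
        float[] data = new float[size * size];
        float sum = 0;
        for (int y = -radius; y <= radius; y++) {
            for (int x = -radius; x <= radius; x++) {
                float g = (float) Math.exp(-(x * x + y * y) / (2 * sigma * sigma));
                data[(y + radius) * size + (x + radius)] = g;
                sum += g;
            }
        }
        for (int i = 0; i < data.length; i++) {
            data[i] /= sum; // normalize so overall brightness is preserved
        }
        return new ConvolveOp(new Kernel(size, size, data),
                ConvolveOp.EDGE_NO_OP, null).filter(src, null);
    }
}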
Once you have illumination-invariant images you can focus on the detection part. If your problem can afford a static camera that can be trained for a certain amount of time, then you could use a strategy similar to alarm motion detectors. Most of them work with an average thermal image - basically they record the average temperature of the "pixels" of a room view, and trigger an alarm when the heat signature varies greatly from one "frame" to the next. Here you wouldn't be working with temperatures, but with average, light-normalized pixel values. This would allow you to build up, over time, which areas of the image tend to have movement (e.g. the leaves of a tree in a windy environment), and which areas are fairly stable. Then you could trigger an alarm when a large number of pixels already flagged as stable show a strong variation from one frame to the next.
If you can't afford training your camera view, then I would suggest you take a look at the TLD tracker of Zdenek Kalal. His research is focused on object tracking with a single frame as training. You could probably use the semistatic view of the camera (with no foreign objects present) as a starting point for the tracker and flag a detection when the TLD tracker (a grid of points where local motion flow is estimated using the Lucas-Kanade algorithm) fails to track a large amount of gridpoints from one frame to the next. This scenario would probably allow even a panning camera to work as the algorithm is very resilient to motion disturbances.
Hope these pointers are of some help. Good luck and enjoy the journey! =D
Use one of the standard measures, like Mean Squared Error, to find the difference between two consecutive images. If the MSE is beyond a certain threshold, you know that there is some motion.
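A straightforward MSE in Java might look like this (the luma weights are the standard Rec. 601 ones; the threshold you compare against is empirical):

import java.awt.image.BufferedImage;

public class FrameMSE {
    // Mean squared error between two equally sized frames.
    public static double mse(BufferedImage a, BufferedImage b) {
        double sum = 0;
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                double d = luma(a.getRGB(x, y)) - luma(b.getRGB(x, y));
                sum += d * d;
            }
        }
        return sum / (a.getWidth() * a.getHeight());
    }

    // Convert an RGB pixel to grayscale using Rec. 601 weights.
    private static double luma(int rgb) {
        return 0.299 * ((rgb >> 16) & 0xFF)
             + 0.587 * ((rgb >> 8) & 0xFF)
             + 0.114 * (rgb & 0xFF);
    }
}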
Also read about Motion Estimation.
If you know that the image will remain relatively static, I would recommend:
1) Look into neural networks. You can use them to learn what defines someone within the image, or what is not a someone, in the image.
2) Look into motion detection algorithms; they are used all over the place.
3) Is your camera capable of thermal imaging? If so, it may be worthwhile to look for hotspots in the images. There may be existing algorithms to turn your webcam into a thermal imager.
What is the principle behind creating a rain effect or water drops, regardless of any particular language? I've seen a few impressive rain and water effects done in Flash, but how do they actually work?
Rain Effect Example
Rain Drop Water Effect Example
You are asking the question as if the two examples were related, but you actually have two separate effects:
1) simulating drops of rain as seen in the air (drop trails; simple, but the realism depends very much on the lighting)
For this you need to simulate the following events:
for each time step:
create new drops
move existing drops vertically down
remove (or/and animate) the drops hitting the ground
As pointed out in other answers, new drops (their size and position) can be created with various algorithms.
As for speed, the drops move at a constant speed.
Finally, to show the trails you need to look at simple projections.
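Put together, part 1 is little more than this (a bare-bones Java sketch; the spawn rate and speeds are arbitrary):

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Random;

public class RainSim {
    static class Drop { double x, y, speed; }

    private final List<Drop> drops = new ArrayList<Drop>();
    private final Random rng = new Random();
    private final int width, height;

    RainSim(int width, int height) {
        this.width = width;
        this.height = height;
    }

    void step() {
        // 1) create new drops at the top, at random x positions
        for (int i = 0; i < 3; i++) {
            Drop d = new Drop();
            d.x = rng.nextInt(width);
            d.y = 0;
            d.speed = 8 + rng.nextDouble() * 4; // near-constant fall speed
            drops.add(d);
        }
        // 2) move existing drops vertically down
        // 3) remove drops that hit the ground (a splash could spawn here)
        for (Iterator<Drop> it = drops.iterator(); it.hasNext(); ) {
            Drop d = it.next();
            d.y += d.speed;
            if (d.y >= height) {
                it.remove();
            }
        }
    }
}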
2) simulating splash waves (water simulation; in the example, a reflective surface is shown)
For this you only need to know where the drops fall and how big they are, the rest is wave propagation. However that's only really visible if there is a reflection and that can be a bit tricky.
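The usual cheap stand-in for a real wave equation is the classic two-buffer height-field ripple; here's a sketch, where the damping constant is a tuning knob:

public class Ripple {
    private float[][] cur, prev;
    private final int w, h;

    Ripple(int w, int h) {
        this.w = w;
        this.h = h;
        cur = new float[w][h];
        prev = new float[w][h];
    }

    // Call when a drop lands: push the surface down at (x, y).
    void splash(int x, int y, float strength) {
        cur[x][y] -= strength;
    }

    void step() {
        for (int x = 1; x < w - 1; x++) {
            for (int y = 1; y < h - 1; y++) {
                // Averaging the neighbours propagates the wave outward;
                // subtracting the previous height makes it oscillate.
                float v = (cur[x - 1][y] + cur[x + 1][y]
                         + cur[x][y - 1] + cur[x][y + 1]) / 2f - prev[x][y];
                prev[x][y] = v * 0.99f; // damping
            }
        }
        float[][] t = cur; cur = prev; prev = t; // swap buffers
    }
}

After each step you render cur as a height field, or use it to offset texture lookups for the reflection.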
NOTES:
There are many things that determine realism, but mostly it boils down to detail.
For example, rain is usually seen clearly only in unusual lighting conditions - close to lamps or against a high-contrast background. Otherwise it is quite bleak.
Also, the details of the interaction - splashing on the surfaces it hits, which can leave bubbles (if you're close enough to notice) or create waves.
Another example: if you look at this tutorial - which is not really realistic, but does illustrate one point - you will see that even though the rain looks more like snow, it exposes the 'flatness' of your first example (which has absolutely no depth).
So, it is all about detail.
Try to model what you have in terms of events that you have to simulate, and then solve each one separately - for example, using fractals for seeding rain might be overkill, but if you model your work nicely you can start with random seeding and later substitute more accurate/complex methods.
Here's a paper by Mandelbrot and Lovejoy which is one of the most cited works on developing fractal models to represent rain.
The second one (Rain Drop Water Effect Example) is probably done with a wave equation simulator.
They probably use particle effects mostly.
An old school way that is dirt cheap is to use palette cycling. Basically, you setup a ramp of colors and move one color into the next in fixed intervals. The moving colors give the illusion of motion. I've worked on games where rain, wind, snow, waterfalls, fire, etc. have all been animated using palette cycling. It's a dying art, but it still works. :)
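A sketch of the idea in Java - the pixels store palette indices that never change; the animation is just rotating the palette on a fixed timer:

public class PaletteCycle {
    private final int[] palette; // e.g. a ramp of blues for water
    private int offset;          // advanced at a fixed interval

    PaletteCycle(int[] ramp) {
        this.palette = ramp;
    }

    // Call on a fixed timer, e.g. every 100 ms.
    void tick() {
        offset = (offset + 1) % palette.length;
    }

    // Resolve a stored palette index to the current on-screen color.
    int color(int index) {
        return palette[(index + offset) % palette.length];
    }
}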