bicubic keyframe interpolation - animation

I am trying to implement what is presented in this very interesting tech conference talk about animation:
http://www.gdcvault.com/play/1020583/Animation-Bootcamp-An-Indie-Approach
As a quick summary, it is about making a pose-based animation system. Instead of playing back authored animations, we play fixed poses and interpolate between them to create the motion.
I am successfully doing linear interpolation between poses by lerping the model-space translation and slerping the model-space rotation between two poses.
However, at 7:30 it is proposed to interpolate using bicubic interpolation, in order to ensure position and speed continuity.
I spent some time thinking and researching it, but I am not really sure what it means.
Could anyone provide a bit of guidance on this question?

I found the solution here:
http://archive.gamedev.net/archive/reference/articles/article1497.html
This link describes how to perform cubic interpolation.
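For concreteness, here is a minimal sketch (my own, not the article's code) of Catmull-Rom cubic interpolation applied to one joint's model-space translation; the key values below are made up, and rotations would still need a spherical analogue such as squad.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Cubic (Catmull-Rom) interpolation between p1 and p2 for t in [0, 1].

    The tangents come from the neighbouring keys, which is what keeps the
    velocity continuous across keyframes, unlike plain lerp.
    """
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t3)

# Four consecutive key translations of one joint (made-up values).
keys = [np.array(v, dtype=float) for v in ([0, 0, 0], [1, 2, 0], [3, 2, 1], [4, 0, 1])]
pos = catmull_rom(*keys, t=0.25)  # a quarter of the way from key 1 to key 2
```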

Related

Removing skew/distortion based on known dimensions of a shape

I have an idea for an app that takes a printed page with a square in each of the four corners and allows you to measure objects on the paper, given that at least two squares are visible. I want a user to be able to take a picture from a less than perfect angle and still have the objects measured accurately.
I'm unable to figure out exactly how to find information on this subject due to my lack of knowledge in the area. I've been able to find examples of OpenCV code that does some interesting transforms and the like, but I've yet to figure out how to phrase what I'm asking in simpler terms.
Does anyone know of papers or mathematical concepts I can look up to get further into this project?
I'm not quite sure how or whom to ask other than the people on this forum; sorry for the somewhat vague question.
What you describe is very reminiscent of augmented reality marker tracking. Maybe you can start by searching these words on a search engine of your choice.
A single marker, if designed correctly, can be identified without being confused with other markers AND can be used to determine how the surface is positioned in 3D space in front of the camera.
But that's all very difficult and advanced stuff, and I'd strongly advise NOT trying to implement something like this yourself; it would take years of research. The practical route is to use a ready-made open-source library that outputs the data you need for your app.
Such a library may not even exist; in that case you'll have to buy one. Given how niche your problem is, that would be perfectly plausible.
Here I cover only the programming aspect; if you want, you can dig into the mathematical side starting from these examples. Most of the functions you need are available in OpenCV. Here are some examples in Python:
To detect the printed paper, you can use the cv2.findContours function. The outermost contour is probably the paper, but you need to test on actual images. https://docs.opencv.org/3.1.0/d4/d73/tutorial_py_contours_begin.html
If the paper is tilted (not photographed at a perfect angle), you can find the rotation with cv2.minAreaRect, which returns the angle of the contour found above. https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html (part 7b).
If you want to rotate the paper back, use cv2.warpAffine. https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.html
To detect the objects on the paper, there are several methods. The easiest is to use the contours above. If the objects have distinct colors, you can detect them with a color filter. https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html
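A rough sketch of how those pieces could fit together (the image path and threshold are made-up starting points, not a tested pipeline):

```python
import cv2

img = cv2.imread("page.jpg")                      # hypothetical input photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# [-2] picks the contour list in both the OpenCV 3 and OpenCV 4 return styles.
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
paper = max(contours, key=cv2.contourArea)        # largest contour as a paper guess

# minAreaRect gives the rotated bounding box: centre, (width, height), angle.
(cx, cy), (w, h), angle = cv2.minAreaRect(paper)

# Undo the rotation with an affine warp so the paper is roughly axis-aligned.
rotation = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
deskewed = cv2.warpAffine(img, rotation, (img.shape[1], img.shape[0]))
```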

Is it possible to detect that motion is happening from only a single image (no reference is given)?

I have searched around the internet and have only seen motion detection done on video or on two consecutive images. I wonder whether it is possible to detect motion from a single image (like jumping, running, or swimming). By motion I mean any significant body movement. If it can be done, please tell me the algorithm and ways to learn it. Thank you.
As others have commented, for the general case you probably can't. But there are still avenues to explore if you have control over some of the parameters.
One idea that comes to mind is detecting motion blur from fast movement. You can accentuate that if you have control over the camera type/exposure.
You can find academic papers on the subject, and can start with:
https://www.google.com/search?q=detecting+motion+blur+in+one+image
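As a tiny starting point, here is a sketch of a generic sharpness measure (variance of the Laplacian) in OpenCV; note that this flags blur in general, not motion blur specifically, and any threshold would have to be tuned for your camera setup:

```python
import cv2

image = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input

# Low variance of the Laplacian means few sharp edges, i.e. a blurry image.
blur_score = cv2.Laplacian(image, cv2.CV_64F).var()
print("sharpness score (lower = blurrier):", blur_score)
```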
A technique that can be helpful to you is called scene understanding. Basically, you train a deep neural net on images with labels that describe each image. That way you can recognize that a person is running, swimming, or doing some other activity.
There is a good presentation about the subject by Prof. LeCun.
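One possible shape of that approach, sketched with PyTorch/torchvision (my choice of tools, not the answer's); the dataset path, the class folders, and the single training pass are all assumptions:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical folder-per-class dataset: data/activities/running, .../swimming, ...
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
dataset = datasets.ImageFolder("data/activities", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

# Start from a pretrained backbone and replace the classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:        # one pass only, just to show the loop
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```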
What you are implying is an implicit comparison with an image of a person standing in a "stable/not moving" way. So there is a two-image comparison there nonetheless.

Raytracing via diffusion algorithm

Many resources about raytracing say things like:
"shoot rays, find the first obstacle to cut it"
"shoot secondary rays..."
"or, do it reverse and approximate/interpolate"
I haven't seen any algorithm that uses diffusion. Let's assume a point light is a cell that has more density than the other cells (all of space is divided into cells); every step/iteration of lighting/tracing makes that source cell diffuse into its neighbours using a velocity field, then into their neighbours, and so on. After a satisfactory number of iterations (such as 30-40), the density of each cell is used to light the objects in that cell.
(Figure: point light and velocity field.)
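To make the idea concrete, here is a toy sketch of just the diffusion step described above on a small grid; it ignores the velocity field and obstacles, and the grid size and blend factor are arbitrary:

```python
import numpy as np

N = 32
density = np.zeros((N, N, N))
density[N // 2, N // 2, N // 2] = 1000.0   # the point light as one dense cell

for _ in range(40):                         # "30-40 iterations"
    # Average of the six axis-aligned neighbours of every cell.
    spread = (np.roll(density, 1, 0) + np.roll(density, -1, 0) +
              np.roll(density, 1, 1) + np.roll(density, -1, 1) +
              np.roll(density, 1, 2) + np.roll(density, -1, 2)) / 6.0
    density = 0.5 * density + 0.5 * spread  # blend toward the neighbour average

# density[x, y, z] would then be read back as the light level for that cell.
```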
But the grid would have to be something like 1000x1000x1000, which would take too much time and memory to compute. Maybe computing just a 10x10x10 grid and, when an obstacle is found, partitioning that area into 100x100x100 (in a dynamic kd-tree fashion) could help generate lighting/shadows at an acceptable resolution? Especially for vertex-based illumination rather than per-triangle.
Has anyone tried this approach?
Note: the velocity field is there to make light diffuse mostly outwards (not 100% but 99%, to allow some global illumination). A finite-element method can make this embarrassingly parallel.
Edit: any object hit by a positive density becomes an obstacle that generates a new velocity field around its surface, so light cannot pass through the object but can be mirrored in another direction (if it is a lens-like object, light diffuses through it with more difficulty). That way the reflected light can affect other objects, given a higher iteration limit.
The same kd-tree can be used in object-collision algorithms :)
To be taken with a grain of salt: a neural network could be trained for advection & diffusion on a 30x30x30 grid and used in a "GPU (OpenCL/CUDA) --> neural network --> finite element method --> shadows" pipeline.
There are a couple of problems with this as it stands.
The first problem is that, fundamentally, a photon in the Newtonian sense doesn't react or change based on the density of other photons around it. So using a density field and trying to make light follow the classic Navier-Stokes style solutions (which is what you're trying to do, based on the density-field explanation you gave) would produce incorrect results. It would also, given enough iterations, result in complete entropy over the scene, which is also not what happens to light.
Even if you were to get rid of the density problem, you're still left with the problem of multiple photons going in different directions within the same cell, which is required for global illumination and diffuse lighting.
So, stripping away the problem portions of your idea, what you're left with is a particle system for photons :P
Now, to be fair, pseudo-particle systems are currently used for global illumination solutions. This sort of thing is called photon mapping, but it's only simple to implement a direct lighting solution with it :P

Where can I find materials about edge detection, and which algorithm is good for a virtual wardrobe application?

I am trying to build an application called virtual wardrobe where I am planning to capture the image of a human and then allow him to select different clothing and instantly see his virtual image wearing that clothing.
I do not have much knowledge of how to go about this idea. I read some material and found a few edge-detection algorithms.
Sobel seems to be fast but not very accurate, while Canny is more accurate but slow.
There are a few other algorithms, such as gradient-based and Laplacian methods, but I don't know much about those.
Are there good course materials available to understand these algorithms in details?
Also, for this application, would it be better to have an algorithm that is faster but less accurate, or slower but more accurate?
I do not have much knowledge about this, so any help is appreciated.
Thank you in advance.
Not sure if you have all the other components, but I think edge detection alone might not work well in many cases. Here are possible directions/techniques that you might find useful:
Foreground detection: detecting which part of the image is the user; this might work better than pure edge detection if your background is not simple.
Face detection: detecting which part of the image is the user's face. This allows the clothing to fit the user better, especially for sunglasses or hats.
Skin color model: can be used as a basic alternative to face detection.
Object tracking: if your input is a video, you can also use object tracking to speed up the other detection steps.
And you might also consider other techniques such as human posture recognition or eye-tracking, but they are more complicated than the above items.
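As a concrete illustration of the face-detection item above, here is a minimal OpenCV sketch using a Haar cascade; the image path is hypothetical, and the bundled frontal-face cascade is just one possible model:

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("user.jpg")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Returns a list of (x, y, w, h) rectangles, one per detected face.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```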
I can suggest one solution. If you have images of various outfits, treat them as target images and replace the face of the target image with the face of the source image, i.e. the user. For that you essentially have to build a face-replacement app. To detect the face in the source image, run face detection first and then retrieve the face boundary from the source image. For this you can use various algorithms, of which I am suggesting a few:
Canny Edge Detection followed by longest edge detection.
Skin Color Thresholding followed by shrinking and growing algorithm.
Adaptive Active Contour Model (Snake Algorithm)
Canny is a bit slow; if you want a fast result, go for skin color thresholding.
For an accurate result you can use the Snake Algorithm, which is useful for detecting the face boundary even if the face has shadows in it.
Read up on detecting the face boundary using Canny Edge Detection.
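For reference, a rough sketch of the skin color thresholding plus shrink/grow idea; the YCrCb bounds are commonly quoted rule-of-thumb values, not something from this answer:

```python
import cv2
import numpy as np

img = cv2.imread("user.jpg")                       # hypothetical input image

# Convert to YCrCb and keep pixels inside a commonly used skin-tone range.
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
lower = np.array([0, 133, 77], dtype=np.uint8)
upper = np.array([255, 173, 127], dtype=np.uint8)
mask = cv2.inRange(ycrcb, lower, upper)

# Shrink then grow the mask (erode/dilate) to remove speckle noise.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.erode(mask, kernel, iterations=1)
mask = cv2.dilate(mask, kernel, iterations=2)

skin_only = cv2.bitwise_and(img, img, mask=mask)
```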

How can I compensate illumination changes in iris images other than using Retinex theory?

I want to make an effective illumination compensation on iris images, and I want this compensation to be based on color, i.e. illumination compensation using color rather than texture. I have corrected my images for various mechanical errors, but I want a simple algorithm to compensate for the illumination based on color. Any ideas?
Try subtracting a low-pass copy of the same image?
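A minimal sketch of that suggestion, assuming a Gaussian blur as the low-pass filter (the kernel size and the grayscale input are my assumptions):

```python
import cv2

img = cv2.imread("iris.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input

# The low-pass copy approximates the slowly varying illumination.
low_pass = cv2.GaussianBlur(img, (51, 51), 0)

# Subtracting it (and re-centering around mid-gray) leaves mostly the detail.
compensated = cv2.addWeighted(img, 1.0, low_pass, -1.0, 128)
```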
What you are interested in is white balancing (i.e. achieving color constancy). One of the simplest algorithms is the Gray-World algorithm and I would try that one first because it's very easy to implement (even though it's not very precise).
You also might want to try some Retinex based algorithms. If so, visit this site: http://www.fer.unizg.hr/ipg/resources/color_constancy/
It contains C++ implementations of several Retinex-based color constancy algorithms.
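As an illustration of how simple Gray-World is (this is my own sketch, not code from the linked site): scale each channel so that its mean matches the overall mean.

```python
import cv2
import numpy as np

img = cv2.imread("iris_color.png").astype(np.float64)   # hypothetical input

# Gray-World assumption: the average colour of the scene should be gray,
# so scale each channel to bring its mean to the global mean.
channel_means = img.reshape(-1, 3).mean(axis=0)
global_mean = channel_means.mean()
balanced = img * (global_mean / channel_means)

balanced = np.clip(balanced, 0, 255).astype(np.uint8)
```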
