With a camera inside a cylinder I capture an image. I want to detect whether there is any deformation due to a collision from outside, and also on which side the collision occurs. The image of the cylinder's interior contains a lot of dots which form a grid. What is the best way to do this?
A simple way to detect a collision is to subtract the reference image (taken without any collision) from the current image. If the result isn't "zero", something changed and probably a collision occurred. But this doesn't tell me on which side the cylinder deformed.
I already tried to project the points onto a plane, but I couldn't get it to work.
In this link you can find a question posted by me about the projection problem: Projection of a image from inside a cylinder to a plane 2D [Matlab]. That question contains all the details about this problem.
An idea is to use regionprops on the image and see which part of the image deformed, but I want to do something a little more complex: I want to measure the deformation, to get an idea of how much it deformed during the impact. This is why I thought about projecting onto a plane and measuring the distance the points moved. Do you have any idea how to do this in a simpler way? How can I do it?
Can someone help me, please?
Here's a little code/pseudo-code to try to help. In words:
I would subtract the before and after images and take the absolute value of the difference image. Then I would apply some sort of threshold to decide whether the difference is just noise or a real change. Next I would find the center of mass (weighted by the magnitude of the difference), which can be done easily with the Image Processing Toolbox (regionprops). The center of mass of the variation would be a good estimate of where a "collision" occurred, i.e. where the cylinder deformed.
So that would be something along the lines of:
%Convert to double so the subtraction does not clip on unsigned integer images
diffIm = abs(double(originalIm) - double(afterIm));
%Choose a threshold above the noise level
threshold = someNumber;
%Zero out differences that are just noise (this keeps the image shape intact)
diffIm(diffIm <= threshold) = 0;
%Tell regionprops that the whole image is one region by passing it an array of ones the size of the image, and diffIm as the measurement image
props = regionprops(ones(size(diffIm)), diffIm, 'WeightedCentroid');
%WeightedCentroid is the center of mass, and it is weighted by the grayscale image diffIm
You now have the location of the centroid of deformation in image space, and all you would need is a map to convert that to cylinder space (if you need that); otherwise you could just plot the centroid over the original image for a visual indication of where the code thinks the collision occurred.
On another note, if you have control of your experimental setup, I would expect that a checkerboard pattern would give you better results than the dots (because the dots are very spaced out, and if the collision only affects the white space between them you might not be able to detect it at all). A checkerboard means you have more edges that can be displaced, which is the brunt of what gets detected anyway. A checkerboard may also be easier to map to a plane, if you are still trying to do that, because you know all the edges are either parallel or intersecting at right angles, and evenly spaced.
Is this an FOV issue with my perspective camera? In my scene, spheres look like eggs/oval shaped rather than spheres when they reach the edges of the screen. Anyone know why this happens?
It sounds like you've encountered one of the unfortunate realities of 3D.
In any 3-dimensional scene, the view from a given point is most naturally thought of as a sphere. When we render a scene, we're rendering a piece of that sphere, but we need to somehow convert that piece of a sphere into a flat rectangle, since our computer screens are flat, not round.
So, in order to render a 3D scene as a rectangle, the software needs to use a projection. For 3D rendering, the most common projection is probably a rectilinear projection, also called a gnomonic projection. (On Wikipedia, see "Rectilinear lens" for a discussion of rectilinear projections in photography, and "Gnomonic projection" for a discussion of rectilinear projections in mapmaking.)
The biggest advantage of a rectilinear projection is that straight lines in the scene appear as straight lines in the rendering. A big disadvantage is that objects far from the center are distorted: small circles get turned into large ovals.
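To put a number on that distortion (this is just the standard math of the gnomonic projection, not anything specific to three.js): a point at angle θ off the view axis lands on the image plane at radius
r = f · tan(θ)
where f is the distance to the image plane. The radial scale is therefore dr/dθ = f / cos²(θ), while the tangential scale is only f / cos(θ). A small sphere drawn at angle θ off-center is therefore stretched by a factor of about 1 / cos(θ) along the direction pointing away from the screen center. At θ = 45°, the edge of a 90° field of view, that is already a stretch of roughly 1.4, which matches the egg shapes described in the question.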
This phenomenon is an unalterable mathematical fact that no software will ever be able to overcome. However, there are things you may be able to do to mitigate the situation. One option is to use a narrower field of view. Another option is to use a different projection; the answers here have a few suggestions for how to do that: Three.js - Fisheye effect
I am aware that real-time face detection needs a lot of CPU time, too much to implement it in a game (which is my goal). Therefore I am looking for a way to improve my FPS.
In the game, there should only be two faces. Those faces are nearly always on the same positions. One in the left lower middle of the screen, the other one in the right lower middle.
I CAN assume that there are ALWAYS exactly 2 faces which, like I said before, are roughly in the same positions as in the frame before.
My idea was to tell the algorithm WHERE it has to search.
First frame:
calculate where the faces are on the screen; the coordinates of the faces are stored for the next frame.
Following frames:
use the coordinates from the previous frame to start looking for faces in the area around the stored position. If nothing is found, increase the distance from the stored position and search again.
Doing so would greatly improve my performance; however, I didn't find any way to tell the algorithm where it has to look for faces.
Is there a way to do so?
Thanks.
If you want to use the OpenCV algorithm without modifying it, you can extract a sub-image around the location of the faces in the previous frame. That way the OpenCV face detector performs its sliding-window search on a much smaller region. Then you remap the face position into the full-frame coordinate system. If your faces do not move too fast, you can run this only every n frames and interpolate the position between detection frames for a further speed-up.
To get the subImg you can use:
cv::Rect roi(xTl,yTl,w,h);
cv::Mat subImg = img(roi);
where xTl, yTl are the top-left coordinates of the search window and w, h its width and height.
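Putting the whole loop together, here is a rough sketch (my own illustration, not part of the original answer; the cascade file name and variable names are placeholders):
cv::CascadeClassifier faceCascade;
faceCascade.load("haarcascade_frontalface_default.xml"); // placeholder cascade file

// Search window around the face position from the previous frame,
// clamped to the image bounds
cv::Rect roi(xTl, yTl, w, h);
roi &= cv::Rect(0, 0, img.cols, img.rows);
cv::Mat subImg = img(roi);

// Detect only inside the sub-image
cv::Mat gray;
cv::cvtColor(subImg, gray, cv::COLOR_BGR2GRAY);
std::vector<cv::Rect> faces;
faceCascade.detectMultiScale(gray, faces);

// Remap detections from ROI coordinates back to full-frame coordinates
for (cv::Rect &face : faces)
{
    face.x += roi.x;
    face.y += roi.y;
}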
Alternatively once you detect the faces, you can use MeanShift/CamShift tracker (or other trackers) to find the position in every frame:
http://docs.opencv.org/trunk/doc/py_tutorials/py_video/py_meanshift/py_meanshift.html .
I'm new to MATLAB, and I'm not sure how to detect the spirality and the spiral center in an image using MATLAB.
For example I need to detect the spiral center of the galaxy.
Question: how do I model the concept of spirality in these kinds of spiral images, for example...
Thank you.
original images taken from here:
storm
galaxy
Optical flow
is the moving intensity/color of a scene, not an image of an object! This concept is taken from flying insect vision; insects use it to determine flight direction (compensating for wind drift), and for navigation, collision avoidance and landing.
Spiral image
In your case you should look at geometry + density analysis instead (nothing to do with optical flow). Here are a few things that pop into my head for your case:
1. Make a density map: find the biggest density, or the density center.
2. Vectorise the whole thing: find the center mathematically, or look for the joint of the arms, or look for the eye of the storm. You can also vectorise the gaps; if they are curved and rotated relative to each other, then you have a spiral.
3. Make a gap occurrence map: count the number of gaps per square area. The bigger the count, the closer you are to the center (but beware, right inside the center area there can be 0 gaps). Find the positions with the maximum gap count and compute the average middle of all of them. To improve accuracy you could segment the gaps beforehand and count only distinct gaps per area.
[Notes]
I would go for option 3; it is the simplest of them all, just a few for loops (see the sketch below). You can also combine several of the approaches to improve accuracy. Use proper filtering and color reduction/thresholding before detection, e.g. sharpening, artifact reduction, smoothing, erosion/dilation, ...
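For illustration, a minimal C++/OpenCV sketch of option 3 (this is my own rough implementation, not Spektre's; the input file name, cell size and the 90% cut-off are arbitrary choices):
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

int main()
{
    cv::Mat gray = cv::imread("spiral.png", cv::IMREAD_GRAYSCALE); // placeholder input
    if (gray.empty()) return 1;

    // Binarize so the spiral arms become white (255) and the gaps black (0)
    cv::Mat bw;
    cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    const int cell = 32; // side length of one "square area"
    std::vector<cv::Point> centers;
    std::vector<int> counts;
    int maxGaps = 0;

    for (int y = 0; y + cell <= bw.rows; y += cell)
        for (int x = 0; x + cell <= bw.cols; x += cell)
        {
            // a gap "occurs" wherever a scanline switches from arm to background
            int gaps = 0;
            for (int r = y; r < y + cell; r++)
            {
                const uchar *p = bw.ptr<uchar>(r);
                for (int c = x + 1; c < x + cell; c++)
                    if (p[c - 1] > 0 && p[c] == 0) gaps++;
            }
            centers.push_back(cv::Point(x + cell / 2, y + cell / 2));
            counts.push_back(gaps);
            if (gaps > maxGaps) maxGaps = gaps;
        }

    // average the positions of all cells close to the maximum gap count
    double sx = 0, sy = 0;
    int n = 0;
    for (size_t i = 0; i < counts.size(); i++)
        if (counts[i] * 10 >= maxGaps * 9) { sx += centers[i].x; sy += centers[i].y; n++; }
    if (n > 0)
        std::cout << "rough spiral center: " << sx / n << ", " << sy / n << std::endl;
    return 0;
}
Averaging the high-count cells, rather than taking a single maximum, follows the note above that the area right at the center may contain no gaps at all, so it keeps the estimate near the middle.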
How can I work programmatically with spell effects in a game?
I have an effect that looks like this: spell effect
I want to know how I can change the scale of the spell,
for example change its color with distance,
or make the effect turn transparent while you run and then fade out after 10 seconds.
Or look at the spell and weapon in this pic: when you have a small weapon and you add an animation effect to it (like a fire or ice effect),
how do you change the animation based on the size of the weapon?
I have no idea how I have to implement that.
Thanks in advance.
[Edit by Spektre] According to the comments I would change the question text to something like this:
I need/want to program a spell effect visualization for game or whatever ...
what are the common/usual approaches to do this (algorithms,graphics techniques)
want to implement Ice/Fire/??? effects
in form of rays/waves/field/cones...
ideally by a single configurable effect routine (I guess this)
also what are the exact names for some of these effects so I can do a search for them myself
I want to use Unity3D environment
This is the way; I want to know how I can make a dynamic spell effect.
Well, there are many approaches to this. I am no expert in the field and not a Unity3D user, so I will stick to the basics:
particle system
It is an engine which visualizes particles (many small moving objects). It is used for many effects like rocket exhaust, fire, lightning, changing glow, and many more. The trick is the use of blending, so each particle is usually a semi-transparent ball: more transparent on the outside and more solid on the inside. When you draw many particles close together, they blend into the desired continuous effect.
Each particle can be drawn as a single textured QUAD or TRIANGLE. The texture can be white with transparency coded in the alpha channel, so the color of the effect can be set in code without changing the texture. Color, size and movement pattern can differ between particles and also change over time; these three parameters define how the effect looks. For example, say you want to cast an electric ray from caster to target: the movement pattern is a LINE. Now distort that line a little with some random numbers so the LINE becomes a POLYLINE, and at the vertices of this POLYLINE you can occasionally release a particle in a random direction with a limited lifetime, so you get something like sparks (do not forget to shrink them over time so they dissipate). You also have to experiment with the speed and the size/color of the main particle stream along the POLYLINE until it looks right. Some effects need to combine a few different particle streams.
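To make the spawn/update side of this concrete, here is a bare-bones CPU-side sketch in C++ (my own illustration only; rendering of the blended textured quads is left out, and all the constants are made up):
#include <vector>
#include <cstdlib>

struct Vec3 { float x, y, z; };

struct Particle
{
    Vec3  pos;   // current position
    Vec3  vel;   // movement per second
    float size;  // quad size, shrinks over time
    float alpha; // transparency, fades over time
    float life;  // remaining lifetime in seconds
};

static float frand() { return rand() / (float)RAND_MAX; } // 0..1

// Spawn sparks at the vertices of the (already distorted) polyline
void spawnSparks(const std::vector<Vec3> &polyline, std::vector<Particle> &out)
{
    for (const Vec3 &v : polyline)
    {
        Particle p;
        p.pos   = v;
        p.vel   = { frand() - 0.5f, frand() - 0.5f, frand() - 0.5f }; // random direction
        p.size  = 0.1f;
        p.alpha = 1.0f;
        p.life  = 0.3f + 0.4f * frand(); // short, random lifetime
        out.push_back(p);
    }
}

// Advance all particles by dt seconds and drop the dead ones
void update(std::vector<Particle> &particles, float dt)
{
    for (size_t i = 0; i < particles.size(); )
    {
        Particle &p = particles[i];
        p.pos.x += p.vel.x * dt;
        p.pos.y += p.vel.y * dt;
        p.pos.z += p.vel.z * dt;
        p.size  *= 0.95f;     // dissipate: shrink...
        p.alpha -= dt / 0.5f; // ...and fade
        p.life  -= dt;
        if (p.life <= 0.0f || p.alpha <= 0.0f)
            { particles[i] = particles.back(); particles.pop_back(); }
        else
            ++i;
    }
}
In practice you would let the engine's own particle system (Unity3D has one built in) do this work, but the knobs are the same: spawn pattern, lifetime, and size/color/alpha over time, drawn with blending.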
Search keywords: particle system,RGBA texture,blending,interpolation
This picture is taken from the link posted in the question. It is a nice example of two particle systems: a yellow straight LINE particle stream and a green POLYLINE particle stream. There are also some green sparks around the main stream.
texture animation
You can have a little cyclic movie (frame by frame) in an array of textures: you draw the current texture on a plane (usually a single QUAD or TRIANGLE) and after some time switch to the next one, and usually after the last texture you start again from the first. You can also use BLENDing or STENCIL techniques to draw only the effect area. If the textures are white (colorless), then the color can be modulated in code. This is mostly used for explosions, fire, ...
This is a simple explosion movie example; it is not a cyclic animation, so after the last frame the effect stops (an explosion has a finite duration).
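The frame-selection logic itself is tiny; here is a sketch (frameTime and frameCount are assumed to come from your own texture loader):
// Pick which texture of the animation to draw at a given time.
// cyclic = true  -> loop forever (e.g. a burning fire)
// cyclic = false -> finite effect such as an explosion
int frameIndex(float elapsedSeconds, float frameTime, int frameCount, bool cyclic)
{
    int i = (int)(elapsedSeconds / frameTime);
    if (cyclic)
        return i % frameCount;        // wrap around to the first frame
    return (i < frameCount) ? i : -1; // -1 means the animation is finished
}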
I have a math/vector/matrix question that I can't seem to work out.
I have 4 points in 3D space that represent the bounds of a surface.
I have written a raycast algorithm to get the intersection location of the mouse "ray" with the rectangle in the 3D scene.
The rectangle in the scene has a rotation and translation matrix applied to it so it can be moved anywhere in the scene, and my raycast system does correctly get the ray hit location on the surface.
My problem is that I now need to take the ray-hit location, which is in world space, and work out where the hit is on the 2D surface of the rectangle.
I cannot work out how to do this.
Your hit point is in the world space. To get the point in the same coordinate system as the original 4 points, just calculate the inverse of the rotation and translation matrix and multiply this inverse matrix by the hit point.
The resulting point will be in the same coordinate system as the 4 points that represent the bounds of the surface.
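As a concrete example, a small sketch using GLM (the library and function names are my choice, not the answerer's):
#include <glm/glm.hpp>

// model = the rotation + translation matrix applied to the rectangle
// hitWS = the ray/rectangle intersection point in world space
// Returns the hit point in the rectangle's local space, i.e. the space in
// which the original 4 corner points were defined.
glm::vec3 worldToLocal(const glm::mat4 &model, const glm::vec3 &hitWS)
{
    glm::vec4 local = glm::inverse(model) * glm::vec4(hitWS, 1.0f); // w = 1 for a point
    return glm::vec3(local);
}

// If the rectangle's corners were defined in, say, the XY plane, then local.x
// and local.y are the 2D surface coordinates and local.z should be ~0
// (up to floating-point error).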