MATLAB - Projecting a quadrilateral to a rectangle [duplicate] - image

This question already has answers here:
Warping an image using control points
(2 answers)
How to warp an image into a trapezoidal shape in MATLAB
(1 answer)
Closed 5 years ago.
I have an image of a quadrilateral (it is actually a rectangle, but photographed from an unknown perspective). Assume it looks something like this:
Also, assume I've found the coordinates for the quadrilateral's corners (hence the black arrows, which are NOT part of the actual image).
What I want is, given the 4 corners' coordinates, to take only the quadrilateral and project it to a rectangle of a given size (say, height h and width w).
Meaning, the result should be:
I've tried several things (imwarp, for example) but can't get the result I'm looking for.
How do I go about doing this in MATLAB?
EDIT: I should mention that so far, using MATLAB's fitgeotrans and imwarp transformed the image, but didn't create a new image containing just the shape.
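For reference, here is a minimal sketch of that fitgeotrans/imwarp approach which does produce an image containing just the shape. The trick is the 'OutputView' argument: an imref2d sized h-by-w makes imwarp render only the target rectangle. The variable names and the corner ordering (top-left, top-right, bottom-right, bottom-left) are assumptions, not from the original question.

movingPoints = [x1 y1; x2 y2; x3 y3; x4 y4];  % detected quad corners: TL, TR, BR, BL
fixedPoints  = [1 1; w 1; w h; 1 h];          % where those corners should land
tform = fitgeotrans(movingPoints, fixedPoints, 'projective');
outView = imref2d([h w]);                     % output raster is exactly h-by-w
rectified = imwarp(I, tform, 'OutputView', outView);  % I is the source image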

Related

python get coordinates (pixels) of corresponding points from clicks [duplicate]

This question already has answers here:
Store mouse click event coordinates with matplotlib
(3 answers)
Detecting mouse event in an image with matplotlib
(2 answers)
Determine button clicked subplot in matplotlib
(2 answers)
Closed 5 years ago.
I have two images, and I want to do the following using Python 3.6:
1. plot both images side by side,
2. click on one image and output the corresponding pixel coordinates,
3. click on the second image and get the corresponding pixel coordinates.
Note: this question is different from the linked ones because it has to work on two images, by finding the axes in which the click event happens.
I think the best way is to have a function that identifies which axes I am clicking on and gets the clicked coordinates. I couldn't find many resources covering two images; most of what I found deals with a single image.
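For what it's worth, a minimal matplotlib sketch of exactly that idea: the click event's inaxes attribute identifies which subplot was clicked. The random arrays are stand-ins for the two real images.

import numpy as np
import matplotlib.pyplot as plt

img1 = np.random.rand(100, 100)  # placeholder images; load your own instead
img2 = np.random.rand(100, 100)

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(img1)
ax2.imshow(img2)

def onclick(event):
    # event.inaxes tells us which axes (if any) the click landed in
    if event.inaxes is ax1:
        print('image 1 pixel:', event.xdata, event.ydata)
    elif event.inaxes is ax2:
        print('image 2 pixel:', event.xdata, event.ydata)

fig.canvas.mpl_connect('button_press_event', onclick)
plt.show()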

How can I measure the height of the plant in the given picture? [duplicate]

This question already has an answer here:
How to get the real life size of an object from an image, when not knowing the distance between object and the camera?
(1 answer)
Closed 6 years ago.
I have attached my picture, and I have many pictures of the same plant captured over time. The angle of all the pictures is the same, because the camera is mounted on a pole that looks over these plants. From this image, I want to know the height of the plant by calculating the distance between the camera and the yellow spot (i.e. the tip of the plant) minus the distance between the camera and the red spot (i.e. a point on the ground). In short:
plant height = dist(camera, yellow spot) - dist(camera, red spot)
I have gone through the MATLAB documentation and many papers, but I could not figure out how to get the distance between the camera and those red and yellow spots in the image. Could somebody please explain? I have been struggling with this for many days.
As cagatayodabasi pointed out in the comments, it can't be done this way. You have to either:
1. horizontally translate your camera and take a second picture, obtaining two different points of view, or
2. use two cameras, ideally aligned along the horizontal axis.
In either case, you have to work with a stereo system. This link on Mathworks could help you.
In either case you should start with camera calibration, then compute the disparity, which is strongly linked to the distance from the camera; that distance is (or should be) your "scale factor" (the greater the distance, the smaller the perceived height).
In fact, if you have a yellow spot at the top of the plant (I am trying to understand your method) and a red spot at the bottom, what you obtain from the difference is the "apparent height" of the plant in pixels, not its real height (that depends on the scale factor I mentioned above).
The "spot distance" method is unclear without an image, but maybe (as Mark Setchell pointed out, if you cannot post an image) you can link the paper or the page from which you took inspiration for your code.

Looking for algorithm to map an image to 4 sides polygon

This is more a math question than a programming question, aside from the fact that I must implement it using Delphi inside a graphics application.
Assuming I have a picture of a sheet of paper. The actual sheet of paper is of course a rectangular area. When the picture is shown on a computer screen the rectangular area is no more rectangular because when the picture was taken, the camera was not perfectly positioned above the sheet of paper. There is all kinds of perspective effects which result in deformations.
My application needs to tweak the image so that the original rectangular area is displayed as a rectangular area on screen.
Most photo-processing software has an interactive tool to do that: the user draws a rectangular area on screen around the rectangular object, then drags each corner to deform the displayed area until the real area appears rectangular. What I'm looking for is the algorithm to do that computation.
You need to split the problem into two steps: find the edges or corners of the sheet, then remap the pixels.
Finding the corners or edges is a really hard problem, since they might be invisible, outside of the picture, obstructed, bent, or deformed. Assuming you have a very simple setup (black uniform background, white paper, very little distortion), you could run an edge-detection kernel over the image and then find the 4 outer edges. If you find the edges you can intersect them to find the corners, and the other way around.
Once you find the corners, run an interpolation over the image to map the pixels onto the rectangle you want. You should be able to get the graphics engine to do this for you if you provide the coordinates of the corners as texture coordinates for the rectangle and map the image as a texture.
I made it sound simple, but you will encounter many parameters to set and experiment with.
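Under the simple-setup assumption above, the edge-plus-intersection step might look like this in MATLAB (a sketch only: it assumes the 4 strongest Hough lines really are the sheet borders):

BW = edge(rgb2gray(I), 'canny');            % binary edge map of the photo I
[H, theta, rho] = hough(BW);                % straight-line parameter space
peaks = houghpeaks(H, 4);                   % 4 strongest lines = sheet borders (assumed)
borderLines = houghlines(BW, theta, rho, peaks);
% Adjacent borders intersect at the sheet's corners; solving each pair of
% line equations (rho = x*cos(theta) + y*sin(theta)) gives the 4 corners.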
It seems (because you mentioned bilinear interpolation) that you need perspective transformations.
There is an implementation of perspective transformations (mapping of an arbitrary convex quad to a rectangle and vice versa) in the Anti-Grain Geometry library, and a Delphi port of it exists.
With agg_trans_perspective one can calculate the perspective transformation matrix and then apply it to map coordinates from one quad to another.
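For reference, the math behind that matrix (the standard projective mapping, not anything specific to AGG): a point at $(x, y)$ in one quad is sent to

$$u = \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + 1}, \qquad v = \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + 1},$$

so each of the 4 corner correspondences contributes two linear equations in the eight unknowns $h_{11}, \dots, h_{32}$, and the matrix is recovered by solving the resulting 8x8 linear system.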

Collision detection algorithm - Image inside a cylinder [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
With a camera inside a cylinder I capture an image. I want to detect whether there is some deformation due to a collision outside, and also on which side the collision occurs. The image inside the cylinder contains a lot of dots which form a grid. What is the best way to do this?
A simple way to detect the collision is to subtract the image without collision from the real image. If the result isn't "zero", something changed and probably a collision occurred. But this doesn't tell me on which side the cylinder deformed.
I already tried to do a projection of the points onto a plane, but I couldn't do it.
In this link you can find a question posted by me about the projection problem: Projection of a image from inside a cylinder to a plane 2D [Matlab]
That link has all the information about this problem.
An idea is to use regionprops on the image and see which part of the image deformed, but I want to do something a little more complex: I want to measure the deformation, to get an idea of how much it deformed during the impact. This is why I thought about doing a projection onto the plane and measuring the distance the points moved. Do you have any idea how to do this in a simpler way?
Can someone help me, please?
Here's a little code/pseudo-code to try to help. In words:
I would subtract the before and after images and take the absolute value of the difference image. Then, I would apply some threshold to decide whether the difference is just noise rather than a real change. Next, I would find the center of mass (weighted by the magnitude of the difference), which can be done easily with the Image Processing Toolbox (regionprops). The center of mass of the variation is a good estimate of where a "collision" occurred, i.e. where the cylinder deformed.
So that would be something along the lines of:
diffIm = imabsdiff(originalIm, afterIm);   % absolute difference; plain subtraction of uint8 images would clip negatives
threshold = someNumber;                    % tune to your sensor noise
diffIm(diffIm <= threshold) = 0;           % suppress sub-threshold pixels while keeping the image shape
% Tell regionprops that the whole image is one region by passing it an array of
% ones the size of the image, with diffIm as the measurement image (grayscale assumed)
props = regionprops(ones(size(diffIm)), diffIm, 'WeightedCentroid');
% WeightedCentroid is the center of mass, weighted by the grayscale image diffIm
You now have the location of the centroid of deformation in your image space, and all you would need is a map to convert that to cylinder space (if you needed that), otherwise you could just plot the centroid over the original image for a visual output of where the code expects the collision occurred.
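That visual check could be as simple as (continuing the variables above):

imshow(originalIm); hold on
plot(props.WeightedCentroid(1), props.WeightedCentroid(2), 'r+', 'MarkerSize', 12)
title('Estimated collision location'); hold off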
On another note, if you have control of your experimental setup, I would expect that a checkerboard pattern would give you better results than the dots (because the dots are very spread out, and if the collision only affects the white space you might not be able to detect it at all). A checkerboard means you have more edges that can be displaced, which is the brunt of what gets detected anyway. A checkerboard may also be easier to map to a plane if you were still trying to do that, because you would know that all the edges are either parallel or intersecting at right angles, and evenly spaced.

Map image/texture to a predefined uneven surface (t-shirt with folds, mug, etc.)

Basically I am trying to achieve this: superimpose an arbitrary image onto a predefined uneven surface (see the examples below).
I do not have a lot of experience with image processing or 3D algorithms, so here is the best method I can think of so far:
1. Predefine a set of coordinates (say, for a 10x10 grid, 100 coordinates starting with (0,0), (0,10), (0,20), etc.). There will be 9x9 = 81 cells.
2. Record the transformation for each individual coordinate on the t-shirt image, e.g. (0,0) becomes (51,31), (0,10) becomes (51,35), etc.
3. Triangulate the original image into 81x2 = 162 triangles (2 triangles per cell). Transform each triangle of the image based on the coordinate transformations obtained in step 2 and draw it on the t-shirt image.
Problems/questions I have:
I don't know how to smooth out each triangle so that the image on the t-shirt does not look ragged.
Is there a better way to do this? I want to make sure I'm not reinventing the wheel before I proceed with an implementation.
Thanks!
This is called digital image warping. There was a popular graphics text on it in the 1990s (probably from somebody's thesis). You can also find an article on it from Dr. Dobb's Journal.
Your process is essentially correct. If you work pixel by pixel, rather than trying to use triangles, you'll avoid some of the problems you're facing. Scan across the pixels in the target bitmap, and apply the local transformation based on the cell you're in to determine the coordinates of the corresponding pixel in the source bitmap. Copy that pixel over.
For a smoother result, do your coordinate transformations in floating point and interpolate the pixel value from the source image using something like bilinear interpolation.
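A minimal MATLAB sketch of that inverse mapping, where mapX and mapY are hypothetical tgtH-by-tgtW arrays holding, for each target pixel, the floating-point source column and row derived from the per-cell transformations; interp2 performs the bilinear interpolation:

% mapX, mapY: per-pixel source coordinates (hypothetical, built from step 2's grid)
warped = zeros(tgtH, tgtW, size(srcIm, 3), 'like', srcIm);
for ch = 1:size(srcIm, 3)                  % warp each color channel separately
    warped(:,:,ch) = interp2(double(srcIm(:,:,ch)), mapX, mapY, 'linear', 0);
end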
It's not really a solution to the problem, just a workaround:
If you have a 3D model that represents the t-shirt, you can use DirectX/OpenGL and apply your image as a texture on the t-shirt.
Then you can ask it to render the picture you want from any point of view.
