I have four cameras, each feeding me a different portion of a basketball court. Due to the slight offset of the cameras' physical locations and the lens distortion around the edges of each camera's image, I cannot simply stitch the videos together without some kind of correction.
I've looked into ffmpeg's perspective filter, as well as the lenscorrection filter. In the former case it was only able to create a trapezoid, not the curved image I want. In the latter case, using negative values for k1 and k2 seemed to be heading in the right direction, but it either distorted the top and bottom of the image to the point of being nonsensical noise, or it zoomed in on the image so much that I lost important details.
For the sample picture below, ultimately I want the midcourt line (the blue vertical line on the right side) to be vertical, and I want the mess of wires on the white desk at the bottom to remain visible and identifiable.
Given a video which looks like the following:
I wish to produce something like the following:
This image was made using the "Curve Bend" filter in GIMP, but I just eye-balled it, so it's not perfect. Ideally, once I get the exact parameters, the midcourt line will be perfectly vertical.
When using the lenscorrection filter, no values for k1 and k2 seemed to get the effect I want:
Negative k1, negative k2:
Negative k1, positive k2:
Positive k1, negative k2:
Positive k1, positive k2:
In general:
negative / negative distorted the image beyond recognition
negative / positive looked alright, but the midcourt line was off the screen and it wasn't clear if any distortion had been applied
positive / negative looked the best, but while the top and bottom curved in, the middle of the left and right edges actually bulged out, leaving the midcourt line distorted
positive / positive was the opposite of the desired effect
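For reference, a parameter sweep like the one above can be scripted roughly as follows. This is a minimal sketch: the input file name and the k1/k2 grids are placeholders, and cx/cy/k1/k2 are the option names documented for ffmpeg's lenscorrection filter.

```python
import subprocess

# Hypothetical sketch: extract one corrected frame per (k1, k2) pair so the
# results can be skimmed by eye. "court.mp4" and the value grids are placeholders.
for k1 in (-0.6, -0.3, 0.3, 0.6):
    for k2 in (-0.2, -0.1, 0.1, 0.2):
        vf = f"lenscorrection=cx=0.5:cy=0.5:k1={k1}:k2={k2}"
        out = f"lenscorrection_k1{k1}_k2{k2}.png"
        subprocess.run(
            ["ffmpeg", "-y", "-i", "court.mp4", "-vf", vf, "-frames:v", "1", out],
            check=True,
        )
```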
I wrote a post about this subject. Strangely enough I was also trying to undistort video of a basketball court.
There are a few options:
Try to find a standard projection (e.g. fisheye, stereographic, etc) that roughly matches the projection that your camera produces, and look up or measure the field of view of your camera. At that point you can use the new v360 filter to correct the image to a rectilinear projection (which is the one where straight lines in real life remain straight in the image).
Either find a database entry in lensfun for your camera, or create one (there are instructions in their documentation), or send pictures and ask the maintainers to do it. Then you can use the lensfun filter to accurately correct the distortion.
Lensfun is probably the best option if you want it to be accurate, but depending on your camera you might find v360 produces good enough results, and it's significantly faster.
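As a rough sketch of what the two options might look like (all projection names, field-of-view values and camera/lens strings below are placeholder guesses to be replaced with ones matching the actual camera, and the option names should be double-checked against your ffmpeg build's documentation):

```python
import subprocess

# Option 1: assume the camera is roughly a fisheye with a guessed field of view
# and remap it to a rectilinear ("flat") projection with the v360 filter.
# All field-of-view numbers here are placeholders.
subprocess.run([
    "ffmpeg", "-y", "-i", "court.mp4",
    "-vf", "v360=input=fisheye:output=flat:ih_fov=120:iv_fov=90:h_fov=100:v_fov=70",
    "court_v360.mp4",
], check=True)

# Option 2: let the lensfun filter undo the distortion using a database entry
# matching the actual camera. Make/model/lens strings are placeholders.
subprocess.run([
    "ffmpeg", "-y", "-i", "court.mp4",
    "-vf", "lensfun=make=CameraMaker:model=CameraModel:lens_model=LensModel:mode=geometry",
    "court_lensfun.mp4",
], check=True)
```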
Short answer: No. FFMPEG does not have a curve bend function. That said, curve bend is not the proper solution, anyway. A lens correction is necessary, the parameters supplied were just way off.
Ultimately I just wrote a script to dump thousands of images using lensfun with different lenses, then skimmed them for one that looked good.
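A brute-force sweep of that kind could be sketched roughly like this (the camera/lens strings are placeholders; real ones would be enumerated from the lensfun database, and the lensfun filter option names should be checked against the ffmpeg documentation):

```python
import subprocess

# Placeholder (make, model, lens) triples; in practice these would be
# enumerated from the lensfun database rather than hard-coded.
candidates = [
    ("CameraMakerA", "ModelA", "LensA"),
    ("CameraMakerB", "ModelB", "LensB"),
]

for i, (make, model, lens) in enumerate(candidates):
    vf = f"lensfun=make={make}:model={model}:lens_model={lens}:mode=geometry"
    subprocess.run(
        ["ffmpeg", "-y", "-i", "court.mp4", "-vf", vf,
         "-frames:v", "1", f"lensfun_{i:04d}.png"],
        check=True,
    )
```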
Related
I don't know much about image processing so please bear with me if this is not possible to implement.
I have several sets of aerial images of the same area originating from different sources. The pictures have been taken during different seasons, under different lighting conditions, etc. Unfortunately some images look patchy and suffer from discolorations, or are partially obstructed by clouds or pixelated, as for example picture1 and picture2.
I would like to take as input several images of the same area and (by somehow averaging them) produce one picture of improved quality. I know some C/C++ so I could use some image processing library.
Can anybody propose any image processing algorithm to achieve it or knows any research done in this field?
I would try with a "color twist" transform, i.e. a 3x3 matrix applied to the RGB components. To implement it, you need to pick color samples in areas that are split by a border, on both sides. You should find three significantly different reference colors (hence six samples). This will allow you to write the nine linear equations needed to determine the matrix coefficients.
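A minimal sketch of that estimation step, assuming the paired color samples are already available as rows of two small arrays (the sample values below are made up):

```python
import numpy as np

# Each row is one sampled RGB color; row i of `bad` and row i of `good` were
# picked on opposite sides of the same border. At least three distinct pairs.
bad = np.array([[180.0, 160.0, 120.0],
                [ 90.0, 140.0,  70.0],
                [200.0, 200.0, 190.0]])
good = np.array([[150.0, 170.0, 130.0],
                 [ 70.0, 150.0,  80.0],
                 [180.0, 210.0, 200.0]])

# Solve good ≈ bad @ M.T for the 3x3 color-twist matrix M (least squares,
# so extra sample pairs just make the fit more robust).
M_T, *_ = np.linalg.lstsq(bad, good, rcond=None)
M = M_T.T

def apply_color_twist(region, M):
    """Apply the 3x3 twist to every pixel of an H x W x 3 8-bit region."""
    h, w, _ = region.shape
    out = region.reshape(-1, 3).astype(float) @ M.T
    return np.clip(out, 0, 255).reshape(h, w, 3)
```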
Then you will correct the altered areas by means of this color twist. As the geometry of these areas is intertwined with the field patches, I don't see a better way than contouring the regions by hand.
In the case of the second picture, the limits of the regions are blurred so that you will need to blur the region mask as well and perform blending.
In any case, don't expect a perfect repair of those problems as the transform might be nonlinear, and completely erasing the edges will be difficult. I also think that colors are so washed out at places that restoring them might create ugly artifacts.
For the sake of illustration, here is a quick attempt with Photoshop using a manual HLS adjustment (less powerful than a color twist).
The first thing I thought of was a kernel matrix of sorts.
Do a first pass of the photo and use an edge detection algorithm to determine the borders between the photos. This should be fairly trivial; however, you will need to eliminate any overlap/fading (it looks like there's a bit in picture 2), and you'll see why in a minute.
Do a second pass right along each border you've detected, and assume that the pixel on either side of the border should be the same color. Determine the difference between the red, green and blue values and average them along the entire length of the line, then divide it by two. The image with the lower red, green or blue value gets this new value added. The one with the higher red, green or blue value gets this value subtracted.
On either side of this line, every pixel should now be the exact same. You can remove one of these rows if you'd like, but if the lines don't run the length of the image this could cause size issues, and the line will likely not be very noticeable.
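A rough sketch of that second pass for a single straight vertical border, assuming an 8-bit RGB image and a known border column (a real border from the edge-detection pass would be traced pixel by pixel):

```python
import numpy as np

def blend_vertical_seam(img, col):
    """Equalize colors across a vertical border at column `col`.

    `img` is an H x W x 3 8-bit array; pixels left of `col` come from one
    photo, pixels from `col` onwards come from the other.
    """
    left = img[:, col - 1, :].astype(float)
    right = img[:, col, :].astype(float)

    # Per-channel difference averaged along the whole border, then halved.
    half_diff = (left - right).mean(axis=0) / 2.0

    out = img.astype(float)
    out[:, :col, :] -= half_diff   # the brighter side gives up half the gap
    out[:, col:, :] += half_diff   # the darker side gains the other half
    return np.clip(out, 0, 255)
```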
This could be made far more complicated by generating a filter by passing along this line - I'll leave that to you.
The issue with this could be areas where there was development, fall colors, etc.; these might mess with your algorithm, but there's only one way to find out!
For a hobby project I'm attempting to align photos and create 3D pictures. I basically have 2 cameras on a rig that I use to take pictures. I then automatically attempt to align the images so that you get a 3D SBS (side-by-side) image.
They are high resolution images, which means a lot of pixels to process. Because I'm not really patient with computers, I want things to go fast.
Originally I worked with code based on image stitching and feature extraction. In practice I found these algorithms to be too inaccurate and too slow. The main reason is that you have different levels of depth here, so you cannot do a one-to-one match of features. Most of the code already works fine, including vertical alignment.
For this question, you can assume that different ISO exposure levels / color correction and vertical alignment of the images are both taken care of.
What is still missing is a good algorithm for correcting the angle of the pictures. I noticed that left-right pictures usually vary a small number of degrees (think +/- 1.2 degrees difference) in angle, which is enough to get a slight headache. As a human you can easily spot this by looking at sharp differences in color and lining them up.
The irony here is that as a human you can spot immediately whether it's correct or not, but somehow I'm not able to teach this to a machine. :-)
I've experimented with edge detectors, the Hough transform and a large variety of home-brew algorithms, but so far I've found all of them to be both too slow and too inaccurate for my purposes. I've also attempted to iteratively align vertically while changing the angle slightly, so far without any luck.
Please note: Accuracy is perhaps more important than speed here.
I've added an example image here. It's actually both the left and right eye, alpha-blended. If you look closely, you can see that the lamp at the top has two ellipses, and that the chairs don't exactly line up at the top. It might seem negligible, but at full screen resolution on a projector you will easily see the difference. This also shows the level of accuracy that is required; it's quite a lot.
The shift in 'x' direction will give the 3D effect. Basically, if the shift is 0, it's on the screen, if it's <0 it's behind the screen and if it's >0 it's in front of the screen. This also makes matching harder, since you're not looking for a 'stitch'.
Basically the two cameras 'look' in the same direction (perpendicular, as in the second picture here: http://www.triplespark.net/render/stereo/create.html ).
The difference originates from the camera being on a slightly different angle. This means the rotation is uniform throughout the picture.
I have once used the following amateur approach.
Assume that the second image has a rotation + vertical shift mismatch. This means that we need to apply some transform for the second image which can be expressed in matrix form as
x' = a*x + b*y + c
y' = d*x + e*y + f
that is, every pixel that has coordinates (x,y) on the second image, should be moved to a position (x',y') to compensate for this rotation and vertical shift.
We have a strict requirement that a=e, b=-d and d*d+e*e=1 so that it is indeed a rotation plus shift, with no zoom, slanting, etc. This notation also allows for a horizontal shift, but that is easy to fix after the angle and vertical shift correction.
Now select several common features on both images (I did the selection by hand, as just 5-10 seemed enough; you can try to apply some automatic feature detection mechanism). Assume the i-th feature has coordinates (x1[i], y1[i]) on the first image and (x2[i], y2[i]) on the second. We expect that after our transformation the features have y-coordinates that are as equal as possible, that is, we want (ideally)
y1[i]=y2'[i]=d*x2[i]+e*y2[i]+f
Having enough (>=3) features, we can determine d, e and f from this requirement. In fact, if you have more than 3 features, you will most probably not be able to find a common d, e and f for all of them, but you can apply the least-squares method to find the d, e and f that make y2' as close to y1 as possible. You can also account for the requirement that d*d+e*e=1 while finding d, e and f, though as far as I remember I got acceptable results even without accounting for it.
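A minimal numpy sketch of that least-squares step (the feature coordinates below are made-up placeholders for hand-picked correspondences):

```python
import numpy as np

# Matched feature coordinates: (x1, y1) in the first image, (x2, y2) in the
# second. These numbers are placeholders for hand-picked correspondences.
x1 = np.array([120.0, 640.0, 1010.0, 300.0, 820.0])
y1 = np.array([ 80.0, 150.0,  400.0, 560.0, 700.0])
x2 = np.array([118.0, 637.0, 1005.0, 297.0, 816.0])
y2 = np.array([ 95.0, 160.0,  405.0, 575.0, 710.0])

# Solve y1 ≈ d*x2 + e*y2 + f in the least-squares sense.
A = np.column_stack([x2, y2, np.ones_like(x2)])
(d, e, f), *_ = np.linalg.lstsq(A, y1, rcond=None)

# Optionally re-normalize so that d*d + e*e = 1 (pure rotation + shift).
n = np.hypot(d, e)
d, e = d / n, e / n

a, b = e, -d                      # the rotation constraints from above
angle_deg = np.degrees(np.arctan2(d, e))
print(f"rotation ≈ {angle_deg:.3f} degrees, vertical shift f ≈ {f:.2f}")
```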
After you have determined d, e and f, you have the requirement a=e and b=-d. This leaves only c unknown, which is horizontal shift. If you know what the horizontal shift should be, you can find c from there. I used the background (clouds on a landscape, for example) to get c.
When you know all the parameters, you can do one pass on the image and correct it. You might also want to apply some anti-aliasing, but that's a different question.
Note also that you can in a similar way introduce quadratic correction to the formulas to account for additional distortions the camera usually has.
However, that's just a simple algorithm I came up with when I faced the same problem some time ago. I did not do much research, so I'll be glad to know if there is a better or well-established approach or even a ready software.
I am currently working on OCR software and my idea is to use templates to try to recognize data inside invoices.
However, scanned invoices can have several 'flaws':
Not all invoices based on a single template are correctly aligned under the scanner.
People can write on invoices
etc.
Example of an invoice: (I had to google one; sadly I cannot add a more concrete version, as client data is obviously confidential)
I find my data in the invoices based on the x-values of the text.
However I need to know the scale of the invoice and the offset from left/right, before I can do any real calculations with all data that I have retrieved.
What have I tried so far?
1) Making the image monochrome and using the left and right bounds of the first appearance of a black pixel. This fails due to the fact that people can write on invoices.
2) Dividing the invoice up into vertical sections and using the sections that have the highest number of black pixels. This fails because the distribution is not always uniform amongst similar templates.
I could really use your help on (1) how to identify important points in invoices and (2) what I should focus on as the important points.
I hope the question is clear enough as it is quite hard to explain.
Detecting rotation
I would suggest you start by detecting straight lines.
Look (perhaps randomly) for small areas with high contrast, i.e. mostly white but with a fair amount of very black pixels as well. Then try to fit a line to these black pixels, e.g. using the least-squares method. Drop the outliers, and fit another line to the remaining points. Iterate this as required. Evaluate how good the fit is, i.e. how many of the pixels in the observed area are really close to the line, and how far that line extends beyond the observed area. Do this process for a number of regions, and you should get a weighted list of lines.
For each line, you can compute the direction of the line itself and the direction orthogonal to that. One of these numbers can be chosen from an interval [0°, 90°), the other will be 90° plus that value, so storing one is enough. Take all these directions, and find one angle which best matches all of them. You can do that using a sliding window of e.g. 5°: slide across that (cyclic) region and find a value where the maximal number of lines are within the window, then compute the average or median of the angles within that window. All of this computation can be done taking the weights of the lines into account.
Once you have found the direction of lines, you can rotate your image so that the lines are perfectly aligned to the coordinate axes.
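A simplified sketch of the same idea, using OpenCV's probabilistic Hough transform in place of the local line fitting described above (the thresholds are arbitrary starting points):

```python
import cv2
import numpy as np

def estimate_and_fix_rotation(gray):
    """Estimate the dominant line direction (mod 90 degrees) and deskew the image."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=5)
    if lines is None:
        return gray, 0.0

    angles = []
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        # Fold every direction into (-45, 45] so horizontal and vertical
        # lines vote for the same skew angle.
        angle = (angle + 45.0) % 90.0 - 45.0
        angles.append(angle)

    skew = float(np.median(angles))
    h, w = gray.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), skew, 1.0)
    deskewed = cv2.warpAffine(gray, M, (w, h),
                              flags=cv2.INTER_LINEAR, borderValue=255)
    return deskewed, skew
```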
Detecting translation
Assuming the image wasn't scaled at any point, you can then try to use a FFT-based correlation of the image to match it to the template. Convert both images to gray, pad them with zeros till the originals take up at most 1/2 the edge length of the padded image, which preferably should be a power of two. FFT both images in both directions, multiply them element-wise and iFFT back. The resulting image will encode how much the two images would agree for a given shift relative to one another. Simply find the maximum, and you know how to make them match.
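A bare-bones numpy sketch of that correlation step (note that to get correlation rather than convolution, one spectrum is multiplied by the complex conjugate of the other):

```python
import numpy as np

def fft_translation(template_gray, scan_gray):
    """Estimate the (dy, dx) shift to apply to the scan to align it to the template.

    Both inputs are 2-D float arrays. They are zero-padded to a common
    power-of-two size at least twice their extent, as suggested above.
    """
    h = max(template_gray.shape[0], scan_gray.shape[0])
    w = max(template_gray.shape[1], scan_gray.shape[1])
    size = 1 << int(np.ceil(np.log2(2 * max(h, w))))

    a = np.zeros((size, size))
    b = np.zeros((size, size))
    a[:template_gray.shape[0], :template_gray.shape[1]] = template_gray
    b[:scan_gray.shape[0], :scan_gray.shape[1]] = scan_gray

    # Cross-correlation via the FFT: multiply one spectrum by the conjugate
    # of the other, then transform back.
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real

    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices above size/2 correspond to negative shifts (wrap-around).
    if dy > size // 2:
        dy -= size
    if dx > size // 2:
        dx -= size
    return dy, dx
```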
Added text will cause no problems at all. This method will work best for large areas, like the company logo and gray background boxes. Thin lines will provide a poorer match, so in those cases you might have to blur the picture before doing the correlation, to broaden the features. You don't have to use the blurred image for further processing; once you know the offset you can return to the rotated but unblurred version.
Now you know both rotation and translation, and assumed no scaling or shearing, so you know exactly which portion of the template corresponds to which portion of the scan. Proceed.
If rotation is solved already, I'd just sum up all pixel color values horizontally and vertically to a single horizontal / vertical "line". This should provide clear spikes where you have horizontal and vertical lines in the form.
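In numpy terms that is just two sums; a small sketch (the spike threshold is an arbitrary choice):

```python
import numpy as np

def projection_profiles(gray):
    """Sum ink along rows and columns of a grayscale scan (dark lines on light).

    Returns the row/column indices whose profile spikes well above average,
    which should correspond to the horizontal/vertical form lines.
    """
    ink = 255.0 - gray.astype(float)
    row_profile = ink.sum(axis=1)   # spikes at horizontal lines
    col_profile = ink.sum(axis=0)   # spikes at vertical lines

    def spikes(profile):
        return np.flatnonzero(profile > profile.mean() + 3.0 * profile.std())

    return spikes(row_profile), spikes(col_profile)
```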
P.S. I generated a corresponding horizontal image with GIMP's scaling capabilities, attached below (it's a bit hard to see because it's only one pixel high and may get scaled down because it's > 700 px wide; the url is http://i.stack.imgur.com/Zy8zO.png ).
To give you some background as to what I'm doing: I'm trying to quantitatively record variations in flow of a compressible fluid via image analysis. One way to do this is to exploit the fact that the index of refraction of the fluid is directly related to its density. If you set up some kind of image behind the flow, the distortion in the image due to refractive index changes throughout the fluid field leads you to a density gradient, which helps to characterize the flow pattern.
I have a set of routines that do this successfully with a regular 2D pattern of dots. The dot pattern is slightly distorted, and by comparing the position of the dots in the distorted image with that in the non-distorted image, I get a displacement field, which is exactly what I need. The problem with this method is resolution. The resolution is limited to the number of dots in the field, and I'm exploring methods that give me more data.
One idea I've had is to use a regular grid of horizontal and vertical lines. This image will distort the same way, but instead of getting only the displacement of a dot, I'll have the continuous distortion of a grid. It seems like there must be some standard algorithm or procedure to compare one geometric grid to another and infer some kind of displacement field. Nonetheless, I haven't found anything like this in my research.
Does anyone have some ideas that might point me in the right direction? FYI, I am not a computer scientist -- I'm an engineer. I say that only because there may be some obvious approach I'm neglecting due to coming from a different field. But I can program. I'm using MATLAB, but I can read Python, C/C++, etc.
Here are examples of the type of images I'm working with:
Regular: Distorted:
--------
I think you are looking for the Digital Image Correlation algorithm.
Here you can see a demo.
Here is a MATLAB implementation.
From Wikipedia:
Digital Image Correlation and Tracking (DIC/DDIT) is an optical method that employs tracking & image registration techniques for accurate 2D and 3D measurements of changes in images. This is often used to measure deformation (engineering), displacement, and strain, but it is widely applied in many areas of science and engineering.
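For anyone who wants to experiment outside Mathematica, the core block-matching idea behind DIC can be sketched as plain per-block template matching. This is not the refined subpixel analysis used for the figures in the edits below; the block and search sizes are arbitrary.

```python
import cv2
import numpy as np

def block_displacements(reference, distorted, block=32, search=10):
    """Crude DIC sketch: for each block of the reference image, find where it
    moved to in the distorted image by normalized cross-correlation."""
    ref = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY) if reference.ndim == 3 else reference
    dis = cv2.cvtColor(distorted, cv2.COLOR_BGR2GRAY) if distorted.ndim == 3 else distorted

    h, w = ref.shape
    field = []
    for y in range(search, h - block - search, block):
        for x in range(search, w - block - search, block):
            templ = ref[y:y + block, x:x + block]
            window = dis[y - search:y + block + search,
                         x - search:x + block + search]
            res = cv2.matchTemplate(window, templ, cv2.TM_CCOEFF_NORMED)
            _, _, _, max_loc = cv2.minMaxLoc(res)
            dx = max_loc[0] - search       # horizontal displacement of this block
            dy = max_loc[1] - search       # vertical displacement of this block
            field.append((x, y, dx, dy))
    return field
```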
Edit
Here I applied the DIC algorithm to your distorted image using Mathematica, showing the relative displacements.
Edit
You may also easily identify the maximum displacement zone:
Edit
After some work (quite a bit, frankly) you can come up to something like this, representing the "displacement field", showing clearly that you are dealing with a vortex:
(Darker and bigger arrows mean more displacement (velocity).)
Post me a comment if you are interested in the Mathematica code for this one. I think my code is not going to help anybody else, so I omit posting it.
I would also suggest a line tracking algorithm would work well.
Simply start at the first pixel row of the image and follow each of the vertical lines downwards (you only need the first row to get the starting points). The tracking can be done with a simple scheme that moves orthogonally to the gradient of the line, i.e. it follows the line. When you reach a crossing with a horizontal line you can measure that point (in x, y coordinates) and compare it to the corresponding crossing point in your distorted image.
Since your grid is regular, you know that the n-th measured crossing point on the m-th vertical black line corresponds in both images. Then you simply compare the two points by computing their distance. Do this for each line of your grid and you will get how far each crossing point of the grid is displaced.
This line-following approach is also used in basic edge-linking algorithms and in the Canny edge detector.
(These are just theoretical ideas and I cannot provide you with a ready-made algorithm, but I guess it should work well on distorted images like the ones you have there... maybe it is helpful for you.)
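A small sketch of that comparison step, assuming the crossing points of both images have already been extracted in the same row-by-row order (for example by the line tracking described above):

```python
import numpy as np

def crossing_displacements(ref_points, dist_points, rows, cols):
    """Compare grid crossing points between the reference and distorted image.

    `ref_points` and `dist_points` are (rows*cols, 2) arrays of (x, y)
    crossings, both ordered the same way (row by row, left to right).
    Returns per-crossing displacement vectors and their magnitudes.
    """
    ref = np.asarray(ref_points, dtype=float).reshape(rows, cols, 2)
    dis = np.asarray(dist_points, dtype=float).reshape(rows, cols, 2)

    displacement = dis - ref                      # (rows, cols, 2) vectors
    magnitude = np.linalg.norm(displacement, axis=2)
    return displacement, magnitude
```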
I'm searching for a certain object in my photograph:
Object: Outline of a rectangle with an X in the middle. It looks like a rectangular checkbox. That's all. So, no fill, just lines. The rectangle will have the same ratios of length to width but it could be any size or any rotation in the photograph.
I've looked at a whole bunch of image recognition approaches, but I'm trying to determine the best one for this specific task. Most importantly, the object is made of lines and is not a filled shape. Also, there is no perspective distortion, so the rectangular object will always have right angles in the photograph.
Any ideas? I'm hoping for something that I can implement fairly easily.
Thanks all.
You could try using a corner detector (e.g. Harris) to find the corners of the box, the ends and the intersection of the X. That simplifies the problem to finding points in the right configuration.
Edit (response to comment):
I'm assuming you can find the corner points in your image, the 4 corners of the rectangle, the 4 line endings of the X and the center of the X, plus a few other corners in the image due to noise or objects in the background. That simplifies the problem to finding a set of 9 points in the right configuration, out of a given set of points.
My first try would be to look at each corner point A. Then I'd iterate over the points B close to A. Now if I assume that (e.g.) A is the upper left corner of the rectangle and B is the lower right corner, I can easily calculate where I would expect the other corner points to be in the image. I'd use some nearest-neighbor search (or a library like FLANN) to see if there are corners where I'd expect them. If I can find a set of points that matches these expected positions, I know where the symbol would be, if it is present in the image.
You have to try whether that is good enough for your application. If you get too many false positives (sets of corners of other objects that accidentally form a rectangle + X), you could check if there are lines (i.e. high contrast in the right direction) where you would expect them. And you could check if there is low contrast where there are no lines in the pattern. This should be relatively straightforward once you know the points in the image that correspond to the corners/line endings in the object you're looking for.
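A sketch of that idea using a KD-tree for the nearest-neighbor check (scipy's cKDTree stands in for FLANN here; for readability the sketch assumes an axis-aligned rectangle whose X endings coincide with its corners, so handling rotation would mean deriving the expected points from the direction of the A-to-B diagonal instead):

```python
import numpy as np
from scipy.spatial import cKDTree

def find_symbol(corners, tol=3.0):
    """Look for the rectangle + X pattern among detected corner points.

    `corners` is an (N, 2) array of (x, y) corner detections. A real
    implementation would also check the known width/height ratio and limit
    the B candidates to points near A instead of testing all pairs.
    """
    tree = cKDTree(corners)
    for a in corners:
        for b in corners:
            w, h = b[0] - a[0], b[1] - a[1]
            if w <= 0 or h <= 0:
                continue  # require A upper-left, B lower-right
            # Expected points: the other two rectangle corners and the
            # center of the X (the X endings coincide with the corners here).
            expected = np.array([
                [b[0], a[1]],                  # upper right
                [a[0], b[1]],                  # lower left
                [a[0] + w / 2, a[1] + h / 2],  # center of the X
            ])
            dist, _ = tree.query(expected)
            if np.all(dist < tol):
                return a, b
    return None
```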
I'd suggest the Generalized Hough Transform. It seems you have a fairly simple, fixed shape. The generalized Hough transform should be able to detect that shape at any rotation or scale in the image. You may need to threshold the original image, or pre-process it in some way, for this method to be useful though.
You can use local features to identify the object in an image. Feature detection wiki
For example, you can calculate features on some reference image which contains only the object you're looking for and save the results, let's say, to a plain text file. After that you can search for the object just by comparing newly calculated features (on images of complex scenes containing the object) with the reference ones.
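For example, with OpenCV that "compute once, compare later" flow could look roughly like this (ORB is just one possible choice of local feature, and the file names and match threshold are placeholders):

```python
import cv2

# Compute descriptors once for the reference image that contains only the object.
orb = cv2.ORB_create(nfeatures=1000)
ref_img = cv2.imread("reference_object.png", cv2.IMREAD_GRAYSCALE)
ref_kp, ref_des = orb.detectAndCompute(ref_img, None)
# (ref_des could be saved to disk, e.g. with numpy, and loaded later.)

# Later: compare a new scene against the stored reference descriptors.
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
scene_kp, scene_des = orb.detectAndCompute(scene, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(ref_des, scene_des), key=lambda m: m.distance)

# A large number of low-distance matches suggests the object is present.
good = [m for m in matches if m.distance < 50]
print(f"{len(good)} good matches")
```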
Here's some good resource on local features:
Local Invariant Feature Detectors: A Survey