matching jigsaw puzzle pieces

I have nothing useful to do and was playing with a jigsaw puzzle like this:
(example image: http://manual.gimp.org/nl/images/filters/examples/render-taj-jigsaw.jpg)
and I was wondering if it'd be possible to make a program that assists me in putting it together.
Imagine that I have a small puzzle, say 4x3 pieces, where the little tabs and blanks are non-uniform: different pieces have tabs at different heights, of different shapes and sizes. What I'd do is take pictures of all of these pieces, let a program analyze them, and store their attributes somewhere. Then, when I pick up a piece, I could ask the program which pieces should be its 'neighbours'; or, if I have to fill in a blank, it would tell me what the missing piece(s) should look like.
Unfortunately I've never done anything with image processing or pattern recognition, so I'd like to ask you for some pointers: how do I recognize a jigsaw piece (basically a square with tabs and holes) in a picture?
Then I'd probably need to rotate it into a canonical orientation, scale it to a standard size, and then measure the tab/blank on each side, as well as each side's slope, if any.
I know it would be too time-consuming to scan/photograph a 1000-piece puzzle and actually use this; it's just a pet project where I'd learn something new.

Data acquisition
(This is known as the Chroma Key, Blue Screen or Background Color method.)
Find a well-lit room, with the least lighting variation across the room.
Find a color (hue) that is rarely used in the entire puzzle / picture.
Get colored paper of exactly that color.
Place as many puzzle pieces on the colored paper as will fit.
You can sort the pieces into batches and use the batch as a hint for the computer later on.
Make sure the pieces do not overlap or touch each other.
Do not worry about orientation yet.
Take a picture and transfer it to the computer.
Color calibration may be needed because the Chroma Key background may have upset the built-in color balance of the digital camera.
Acquisition data processing
Get some computer vision software
OpenCV, MATLAB, C++, Java, Python Imaging Library, etc.
Perform connected-component labeling on the chroma-key color in the image (see the sketch after this list).
Ask for the contours of the holes of the connected component; those holes are the puzzle pieces.
Fix errors in the detected list.
Choose the indexing vocabulary (cf. Ira Baxter's post) and measure the pieces.
If the pieces are rectangular, find the corners first.
If the pieces are slightly-off quadrilaterals, the side lengths (measured corner to corner) are also a valuable signature.
Search for "Shape Context" on SO or Google or here.
Finally, get the color histogram of the piece, so that you can query pieces by color later.
To make them searchable, put them in a database, so that you can query pieces with any combination of indexing vocabulary.
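For illustration, here is a minimal OpenCV sketch of the connected-component step; the file name, green-screen HSV bounds and area threshold are placeholder assumptions to tune, not values from the question:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Hypothetical input: photo of pieces on a green chroma-key background.
    cv::Mat img = cv::imread("pieces_on_green.jpg");

    // Segment the background by hue; these HSV bounds are placeholders to tune.
    cv::Mat hsv, bg;
    cv::cvtColor(img, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(40, 60, 60), cv::Scalar(80, 255, 255), bg);

    // The pieces are the "holes" in the background component:
    // invert the mask and label its connected components.
    cv::Mat inv = 255 - bg;
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(inv, labels, stats, centroids);

    // Keep components large enough to be real pieces (area threshold is a guess).
    for (int i = 1; i < n; ++i)
        if (stats.at<int>(i, cv::CC_STAT_AREA) > 500)
            std::cout << "piece " << i << " at ("
                      << centroids.at<double>(i, 0) << ", "
                      << centroids.at<double>(i, 1) << ")\n";
}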

A step back to the problem itself. Building a puzzle can be easy (P) or hard (NP), depending on whether each piece fits only one neighbour or several. If there is only one fit for each edge, then you just find, for each piece/side, its neighbour, and you're done (O(#pieces*#sides)). If some pieces fit several different neighbours, then completing the whole puzzle may need backtracking (because a wrong choice can get you stuck).
However, the first problem to solve is how to represent pieces. If you want to represent arbitrary shapes, then you can probably use transparency or masks to represent which areas of a tile are actually part of the piece. If you use square shapes then the problem may be easier. In the latter case, you can consider the last row of pixels on each side of the square and match it with the most similar row of pixels that you find across all other pieces.
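For illustration, a minimal OpenCV-style sketch of that border matching; the helper names are made up, and SSD is just one possible similarity measure:

#include <opencv2/opencv.hpp>

// Sum of squared differences between two one-pixel-wide strips; lower = more similar.
double edgeCost(const cv::Mat& a, const cv::Mat& b) {
    cv::Mat diff;
    cv::absdiff(a, b, diff);
    diff.convertTo(diff, CV_64F);
    cv::Scalar s = cv::sum(diff.mul(diff));
    return s[0] + s[1] + s[2];
}

// Cost of placing tile b to the right of tile a: compare a's last column of
// pixels with b's first column. Tiles are equally sized BGR patches (cv::Mat).
double rightLeftCost(const cv::Mat& a, const cv::Mat& b) {
    return edgeCost(a.col(a.cols - 1), b.col(0));
}

For each tile edge you would evaluate this cost against every other tile and keep the lowest; if several candidates tie, that is exactly the multiple-fit case where backtracking becomes necessary.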
You can use the second approach to actually help you solve a real puzzle, even though you use square tiles. Real puzzles are normally built on an NxM grid of pieces. When scanning the image from the box, you split it into the same NxM grid of square tiles, and get the system to solve that. The problem is then to visually map the actual squiggly piece you hold in your hand to a tile inside the system (hard when the pieces are small and uniformly coloured). But you get the same problem if you represent arbitrary shapes internally.

Related

any way to algorithmically remove discolorations from aerial imagery

I don't know much about image processing so please bear with me if this is not possible to implement.
I have several sets of aerial images of the same area originating from different sources. The pictures have been taken during different seasons, under different lighting conditions, etc. Unfortunately some images look patchy and suffer from discolorations, are partially obstructed by clouds, or are pixelated, as for example picture1 and picture2
I would like to take as input several images of the same area and (by somehow averaging them) produce one picture of improved quality. I know some C/C++, so I could use an image processing library.
Can anybody propose an image processing algorithm to achieve this, or does anyone know of research done in this field?
I would try a "color twist" transform, i.e. a 3x3 matrix applied to the RGB components. To implement it, you need to pick color samples in areas that are split by a border, on both sides. You should find three significantly different reference colors (hence six samples). This gives you the nine linear equations needed to determine the nine matrix coefficients.
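For illustration, here is a minimal OpenCV sketch of estimating and applying such a color twist; the sample values and file names are placeholders, and real code should sample and average several pixels per reference color:

#include <opencv2/opencv.hpp>

int main() {
    // Three reference colors sampled on both sides of a border
    // (placeholder values; each column is one color sample).
    cv::Matx33d altered( 90, 140, 200,
                         60, 150,  80,
                         50,  70, 190 );
    cv::Matx33d wanted (110, 160, 220,
                         75, 170,  95,
                         65,  85, 205 );

    // M * altered = wanted: nine linear equations in the nine
    // matrix coefficients, solved here as M = wanted * altered^-1.
    cv::Matx33d M = wanted * altered.inv();

    // Correct a hand-contoured altered region with the color twist.
    cv::Mat region = cv::imread("altered_region.png");
    region.convertTo(region, CV_32FC3);
    cv::transform(region, region, M);
    region.convertTo(region, CV_8UC3);
    cv::imwrite("corrected_region.png", region);
}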
Then you will correct the altered areas by means of this color twist. As the geometry of these areas is intertwined with the field patches, I don't see a better way than contouring the regions by hand.
In the case of the second picture, the limits of the regions are blurred so that you will need to blur the region mask as well and perform blending.
In any case, don't expect a perfect repair of those problems as the transform might be nonlinear, and completely erasing the edges will be difficult. I also think that colors are so washed out at places that restoring them might create ugly artifacts.
For the sake of illustration, here is a quick attempt with Photoshop using a manual HLS adjustment (less powerful than a color twist).
The first thing I thought of was a kernel matrix of sorts.
Do a first pass of the photo and use an edge detection algorithm to determine the borders between the photos. This should be fairly trivial; however, you will need to eliminate any overlap/fading (it looks like there's a bit in picture 2) - you'll see why in a minute.
Do a second pass right along each border you've detected, and assume that the pixels on either side of the border should be the same color. Determine the difference between the red, green and blue values, average it along the entire length of the line, then divide it by two. The image with the lower red, green or blue value gets this new value added; the one with the higher value gets it subtracted (see the sketch below).
On either side of this line, the pixels should now match exactly. You can remove one of these rows if you'd like, but if the lines don't run the full length of the image this could cause size issues, and the line will likely not be very noticeable.
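For illustration, a minimal OpenCV sketch of that second pass for a single vertical seam; the seam column c is assumed known from the first pass, and real borders may of course be diagonal or curved:

#include <opencv2/opencv.hpp>

// Equalize a vertical seam at column c of a BGR image: average the per-channel
// difference along the border, then move each side halfway toward the other.
void equalizeSeam(cv::Mat& img, int c) {
    cv::Vec3d d(0, 0, 0);
    for (int y = 0; y < img.rows; ++y) {
        cv::Vec3b l = img.at<cv::Vec3b>(y, c);
        cv::Vec3b r = img.at<cv::Vec3b>(y, c + 1);
        for (int ch = 0; ch < 3; ++ch)
            d[ch] += double(l[ch]) - double(r[ch]);
    }
    d *= 1.0 / img.rows;  // average per-channel difference along the seam

    // Shift each side by half the average difference (signs cancel correctly
    // whichever side is brighter).
    cv::Mat left  = img(cv::Rect(0, 0, c + 1, img.rows));
    cv::Mat right = img(cv::Rect(c + 1, 0, img.cols - c - 1, img.rows));
    left  -= cv::Scalar(d[0] / 2, d[1] / 2, d[2] / 2);
    right += cv::Scalar(d[0] / 2, d[1] / 2, d[2] / 2);
}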
This could be made far more complicated by generating a filter by passing along this line - I'll leave that to you.
The catch would be areas where there was development, fall colors, etc. between the photos; that might mess with your algorithm, but there's only one way to find out!

How to count the number of spots in this image?

I am trying to count the number of hairs transplanted in the following image. So practically, I have to count the number of spots I can find in the center of image.
(I've uploaded the inverted image of a bald scalp on which new hairs have been transplanted because the original image is bloody and absolutely disgusting! To see the original non-inverted image click here. To see the larger version of the inverted image just click on it). Is there any known image processing algorithm to detect these spots? I've found out that the Circle Hough Transform algorithm can be used to find circles in an image, I'm not sure if it's the best algorithm that can be applied to find the small spots in the following image though.
P.S. According to one of the answers, I tried to extract the spots using ImageJ, but the outcome was not satisfactory enough:
I opened the original non-inverted image (Warning! it's bloody and disgusting to see!).
Split the channels (Image > Color > Split Channels) and selected the blue channel to continue with.
Applied Closing filter (Plugins > Fast Morphology > Morphological Filters) with these values: Operation: Closing, Element: Square, Radius: 2px
Applied White Top Hat filter (Plugins > Fast Morphology > Morphological Filters) with these values: Operation: White Top Hat, Element: Square, Radius: 17px
However I don't know what to do exactly after this step to count the transplanted spots as accurately as possible. I tried to use (Process > Find Maxima), but the result does not seem accurate enough to me (with these settings: Noise tolerance: 10, Output: Single Points, Excluding Edge Maxima, Light Background):
As you can see, some white spots have been ignored and some white areas which are not actually hair-transplant spots have been marked.
What set of filters do you advise to accurately find the spots? Using ImageJ seems a good option since it provides most of the filters we need. Feel free however, to advise what to do using other tools, libraries (like OpenCV), etc. Any help would be highly appreciated!
I think you are trying to solve the problem in a slightly wrong way. It might sound groundless, so I'd better show my results first.
Below I have a crop of your image on the left and the discovered transplants on the right. Green is used to highlight areas with more than one transplant.
The overall approach is very basic (I will describe it later), but it still provides close-to-accurate results. Please note, it was a first try, so there is a lot of room for enhancements.
Anyway, let's get back to the initial statement that your approach is wrong. There are several major issues:
the quality of your image is awful
you say you want to find spots, but actually you are looking for hair transplant objects
you completely ignore the fact that the average head is far from flat
it does look like you think filters will add some important details to your initial image
you expect algorithms to do magic for you
Let's review all these items one by one.
1. Image quality
It might be a very obvious statement, but before the actual processing you need to make sure you have the best possible initial data. You might spend weeks trying to find a way to process the photos you have without any significant achievements. Here are some problematic areas:
I bet it is hard for you to "read" those crops, despite the fact that you have the most advanced object-recognition algorithms in your brain.
Also, your time is expensive and you still need best possible accuracy and stability. So, for any reasonable price try to get: proper contrast, sharp edges, better colors and color separation.
2. Better understanding of the objects to be identified
Generally speaking, you have 3D objects to be identified, so you can analyze shadows in order to improve accuracy. BTW, it is almost like Mars surface analysis :)
3. The form of the head should not be ignored
Because of the form of the head you have distortions. Again, in order to get proper accuracy those distortions should be corrected before the actual analysis. Basically, you need to flatten analyzed area.
3D model source
4. Filters might not help
Filters do not add information, but they can easily remove some important details. You've mentioned the Hough transform, so here is an interesting question: Find lines in shape
I will use this question as an example. Basically, you need to extract geometry from a given picture. The lines in that shape look a bit complex, so you might decide to use skeletonization.
All of a sudden, you have an even more complex geometry to deal with and virtually no chance of understanding what was actually in the original picture.
5. Sorry, no magic here
Please be aware of the following:
You must try to get better data in order to achieve better accuracy and stability. The model itself is also very important.
Results explained
As I said, my approach is very simple: the image was posterized and then I used a very basic algorithm to identify areas of a specific color.
Posterization can be done in a cleverer way, the area detection can be improved, etc. For this PoC I just have a simple rule to highlight areas with more than one implant. Once the areas are identified, a bit more advanced analysis can be performed (a rough sketch follows below).
Anyway, better image quality will let you use even a simple method like this and get proper results.
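Since the actual code stays private, here is an independent, minimal OpenCV sketch of the described pipeline; the posterization depth, target color and "twice the average area" rule are assumptions, not the original values:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>

int main() {
    cv::Mat img = cv::imread("scalp.png");  // hypothetical input image

    // Crude posterization: keep only the top two bits of each channel.
    cv::Mat poster = img & cv::Scalar(0xC0, 0xC0, 0xC0);

    // Pick the posterized color that corresponds to transplants (placeholder
    // value) and label its connected areas.
    cv::Mat mask, labels, stats, centroids;
    cv::inRange(poster, cv::Scalar(0xC0, 0xC0, 0xC0), cv::Scalar(0xC0, 0xC0, 0xC0), mask);
    int n = cv::connectedComponentsWithStats(mask, labels, stats, centroids);

    // Areas much bigger than the average probably hold more than one transplant.
    double mean = 0;
    for (int i = 1; i < n; ++i) mean += stats.at<int>(i, cv::CC_STAT_AREA);
    mean /= std::max(n - 1, 1);
    for (int i = 1; i < n; ++i)
        if (stats.at<int>(i, cv::CC_STAT_AREA) > 2 * mean)
            std::cout << "area " << i << " looks like multiple transplants\n";
}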
Finally
How did the clinic manage to get Yondu as client? :)
Update (tools and techniques)
Posterization - GIMP (default settings, min colors)
Transplant identification and visualization - Java program, no libraries or other dependencies
Once the areas are identified, it is easy to find the average size, compare each area to it, and mark significantly bigger areas as multiple transplants.
Basically, everything is done "by hand": horizontal and vertical scans; intersections give areas. The vertical lines are sorted and used to restore the actual shape. The solution is homegrown and the code is a bit ugly, so I do not want to share it, sorry.
The idea is pretty obvious and well explained (at least I think so). Here is an additional example with a different scan step:
Yet another update
A small piece of code, developed to verify a very basic idea, has evolved a bit, so now it can handle 4K video segmentation in real time. The idea is the same: horizontal and vertical scans, areas defined by intersecting lines, etc. Still no external libraries, just a lot of fun and a bit more optimized code.
Additional examples can be found on YouTube: RobotsCanSee
or follow the progress in Telegram: RobotsCanSee
I've just tested this solution using ImageJ, and it gave a good preliminary result:
On the original image, for each channel
Small (radius 1 or 2) closing in order to get rid of the hairs (the black part in the middle of the white one)
White top-hat of radius 5 in order to detect the white part around each black hair.
Small closing/opening in order to clean a little bit the image (you can also use a median filter)
Ultimate erode in order to count the number of white blobs remaining. You can also certainly use a LoG (Laplacian of Gaussian) or a distance map (see the sketch below).
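For illustration, an OpenCV translation of this recipe; the ultimate erode is approximated with a distance map plus local maxima, and the kernel sizes just mirror the radii suggested above:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // One color channel of the original image, as in the recipe above.
    cv::Mat ch = cv::imread("blue_channel.png", cv::IMREAD_GRAYSCALE);

    // Small closing to remove the black hair inside each white spot.
    cv::Mat k3 = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
    cv::morphologyEx(ch, ch, cv::MORPH_CLOSE, k3);

    // White top-hat of radius 5 to keep only the small bright structures.
    cv::Mat k11 = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(11, 11));
    cv::morphologyEx(ch, ch, cv::MORPH_TOPHAT, k11);

    // Binarize, then use a distance map in place of the ultimate erode.
    cv::Mat bin, dist;
    cv::threshold(ch, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    cv::distanceTransform(bin, dist, cv::DIST_L2, 3);

    // Count blobs as local maxima of the distance map (the floor of 1 is a guess).
    cv::Mat dil;
    cv::dilate(dist, dil, cv::Mat());
    cv::Mat peaks = (dist == dil) & (dist >= 1.0);
    cv::Mat labels;
    std::cout << "spots ~ " << cv::connectedComponents(peaks, labels) - 1 << "\n";
}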
[EDIT]
You don't detect all the white spots using the maxima function because, after the closing, some zones are flat, so the maximum is not a point but a zone. At this point, I think that an ultimate opening or an ultimate erode would give you the center of each white spot. But I am not sure that there is a function/plugin doing it in ImageJ. You can take a look at Mamba or SMIL.
An H-maxima (after the white top-hat) may also clean up your results a little more and improve the contrast between the white spots.
As Renat mentioned, you should not expect algorithms to do magic for you; however, I'm hopeful you can come up with a reasonable estimate of the number of spots. Here, I'm going to give you some hints and resources; check them out and call me back if you need more information.
First, I'm kind of hopeful about morphological operations, but I think a perfect pre-processing step may dramatically boost the accuracy they yield, so I want to put my finger on the pre-processing step. Thus I'm going to work with this image:
That's the idea:
Collect and concentrate the mass around the spot locations. What do I mean by concentrating the masses? Let's open the book from the other side: as you see, the provided image contains some salient spots surrounded by noisy gray-level dots.
By dots, I mean pixels that are not part of a spot but whose gray value is larger than zero (pure black); they appear around the spots. It is clear that if you clear away these noisy dots, you will surely come up with a good estimate of the spots using other processing tools such as morphological operations.
Now, how to make the image sharper? What if we could make the dots move toward their nearest spots? This is what I mean by concentrating the masses over the spots. Doing so, only the prominent spots will remain in the image, and hence we have made a significant step toward counting them.
How to do the concentrating thing? Well, the idea I just explained is presented in this paper, whose code is luckily available. See section 2.2. The main idea is to use a random walker that walks on the image forever. The formulation is stated such that the walker visits the prominent spots far more often, and that leads to identifying them. The algorithm is modeled as a Markov chain, and the equilibrium hitting times of the ergodic Markov chain hold the key to identifying the most salient spots.
What I described above is just a hint; you should read that short paper to get the detailed version of the idea. Let me know if you need more info or resources.
It is a pleasure to think about such interesting problems. Hope it helps.
You could do the following:
Threshold the image using cv::threshold
Find connected components using cv::findContours
Reject the connected components larger than a certain size, since you seem to be concerned with small circular regions only.
Count all the valid connected components.
Hopefully, this gives you a decent approximation of the actual number of spots.
To be statistically more accurate, you could repeat 1-4 for a range of thresholds and take the average (see the sketch below).
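For illustration, a minimal OpenCV sketch of steps 1-5 with the threshold sweep; the threshold range and the size limit are placeholder guesses:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::Mat gray = cv::imread("spots.png", cv::IMREAD_GRAYSCALE);

    int total = 0, runs = 0;
    for (int t = 100; t <= 200; t += 20) {  // range of thresholds to average over
        cv::Mat bin;
        cv::threshold(gray, bin, t, 255, cv::THRESH_BINARY);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        // Reject components above a size limit (the limit is a guess).
        int count = 0;
        for (const auto& c : contours)
            if (cv::contourArea(c) < 400.0)
                ++count;
        total += count;
        ++runs;
    }
    std::cout << "estimated spots: " << total / runs << "\n";
}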
This is what you get after applying unsharpen radius 22, amount 5, threshold 2 to your image.
This increases the contrast between the dots and the surrounding areas. I used the ballpark assumption that the dots are somewhere between 18 and 25 pixels in diameter.
Now you can take a local maximum of white as a "dot" and fill it in with a black circle until the circular neighborhood of the dot (a circle of radius 10-12) erases the dot. This should let you "pick off" the dots that are joined in clusters of more than 2. Then look for local maxima again. Rinse and repeat (see the sketch below).
The actual "dot" areas are in stark contrast to the surrounding areas, so this should let you pick them off as well as you would by eyeballing it.

Best approach for "game" problematic

I have to tell you, I'm completely NEW to XNA, and I know NOTHING about vertices, multisampling, etc...
However, I love programming and Windows Phone so much that I wanted to start an immense challenge... creating an XNA game! :D
ok, let's stop the story and start explaining..
I'm making a big game... which I've worked on for almost a month; now it's almost done..
I have just one big issue, which is the central point of the game... think of it as a 1024x800 puzzle: each point of the puzzle can be clicked, and when it's clicked it must change color..
so we have 2 hard things to do:
1) UNDERSTAND WHICH PIECE I'VE CLICKED
2) COLOR THIS PIECE
I thought about 2 approaches
FIRST APPROACH
1 PNG big puzzle background 1024x800
N PNGs, one per puzzle piece, with a transparent layer around the piece (each piece image is 1024x800, with the piece in its correct position)
by merging the N+1 PNGs we have the complete puzzle. Now it's REALLY easy to work out which piece I've clicked: I just have to cycle through the N textures, and the one that doesn't have a transparent pixel at the clicked point is the piece!
then, to color it, I just have to color the texture in the draw part.
it's easy; the problem is that if I have to drag and zoom 50*4 PNGs, it's really slow :(
SECOND APPROACH
1 big puzzle PNG, 1024x800, with all pieces merged; EACH piece has a different fill color, for example 1) 250,250,250 2) 245,245,245 etc.
After the PNG has been loaded, I calculate the pixel indexes of each piece using GetData and store them in an array per piece
getting the selected piece is easy: I just calculate y*width+x and find the piece with that value in its array.
the problem is coloring: I have to iterate the array, change the RGB and finally do a SetData. It takes 1 second to color a piece. That MAY be acceptable... but I want better...
the second approach is way MUCH MUCH MUCH faster when dragging and zooming, because it's only 1 PNG, and I can use bigger resolutions thanks to this; it's by far the better approach
any suggestions??
Thanks,
Luca
So you have a 1024x800 puzzle, with each pixel being a piece? You are not going to want to make 819,200 image files or define 819,200 individual behaviors, one per pixel.
You need to (well, should) use object-oriented concepts for this. Here is a basic idea of the path I think you should take:
Define a "Piece" class. This Piece represents a game piece.
In the Piece class, define a rectangle which represents this piece on the board. If your piece is an irregular shape (like an S tetromino, etc.), define multiple rectangles. If you have multiple different types of pieces (and different shapes), you will want to use inheritance: make each different type of piece inherit from the Piece class, but define its own rectangles. However, if your 'pieces' are just pixels on the screen, just make a 1x1 Rectangle.
Fill the rectangles with a color using SpriteBatch. You can use a single white pixel png and stretch it to fill the rectangle(s) as well as specify the color.
Use the rectangles to hit-test your pieces with touch. If a touch falls within your rectangle, you have selected that piece. If your pieces are only 1 pixel in size, it's even easier: simply look up the piece at the touch position's X and Y values.
Now for the game board: depending on the shape and possible positions of the pieces, you will need to select a way to store them. If all your pieces were squares, you could store them in a 2D array. Otherwise you might use a map, where the key is the piece's position and the value is the piece itself.
A basic summary of this approach: don't try to store your board in a single texture. Writing/reading texture data is slow, and textures are not made to act as data structures. Instead, store your pieces as individual objects. Iterate through each piece and draw it to the screen (see the sketch below).
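The game itself is XNA/C#, but the data layout is language-independent; here is a minimal sketch with made-up names:

#include <vector>

// All names here are hypothetical; in XNA you would use its own Rectangle type.
struct Rectangle { int x, y, w, h; };

struct Piece {
    std::vector<Rectangle> rects;  // one or more rectangles forming the piece
    unsigned color = 0xFFFFFFFFu;  // fill color, drawn by stretching a 1x1 white texture

    // Hit test: a touch selects this piece if it falls inside any rectangle.
    bool contains(int px, int py) const {
        for (const Rectangle& r : rects)
            if (px >= r.x && px < r.x + r.w && py >= r.y && py < r.y + r.h)
                return true;
        return false;
    }
};

// For 1x1 "pixel pieces", skip the hit test: index a flat board by position.
inline Piece& pieceAt(std::vector<Piece>& board, int boardWidth, int x, int y) {
    return board[y * boardWidth + x];
}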

simple case of optical flow

General: I'm hoping that the use-case I'm about to describe is a simple case of an optical flow problem and since I don't have much knowledge on the subject, I was wondering if anyone has any suggestions on how I can approach solving my problem.
Research I've already done: I have begun reading the High Accuracy Optical Flow Estimation Based on a Theory for Warping paper and am planning to look over the Particle Video paper. I have found a MATLAB implementation of High Accuracy Optical Flow. However, the papers (and the code) describe concepts that are very involved and may require a lot of time for me to dig into and understand. I am hoping that the solution to my problem is simpler.
Problem: I have a sequence of images. The images depict a material breakage process, where the material and background are black and the cracks are white. I am interested in traversing the sequence of images in reverse in an attempt to map all of the cracks that have formed in the breakage process to the first black image. You can think of the material as a large puzzle and I am trying to put the pieces back together in the reverse order that they broke.
In each image, there can be some cracks that are just emerging and/or some cracks that have been fully formed (and thus created a fragment). Throughout the breakage process, some fragments may separate and break further. The fragments can also move farther away from one another (the change is slight between subsequent frames).
Desired Output: All of the cracks/lines in the sequence mapped to the first image in the sequence.
Additional Notes: Images are available in grayscale format (i.e. original) as well as in binary format, where the cracks have been outlined in white and the background is completely black. See below for some image examples.
The top row shows the original images and the bottom row shows the binary images. As you can see, the crack that goes down the middle grows wider and wider as the image sequence progresses. Thus, the bottom crack moves together with the lower fragment. When traversing the sequence in reverse, I hope to algorithmically realize that the middle crack comes together as one (and map it correctly to the first image), and also map the bottom crack correctly, keeping its correct correspondence (size and position) with the bottom fragment.
A sequence typically contains about 30~40 images, so I've just shown the beginning subset. Also, although these images don't show it, it is possible that a particular image only contains the beginning of the crack (i.e. its initial appearance) and in subsequent images it gets longer and longer and may join with other cracks.
Language: Although not necessary, I would like to implement the solution using MATLAB (just because most of the other code that relates to the project has been done in MATLAB). However, if OpenCV may be easier, I am flexible in my language/library usage.
Any ideas are greatly appreciated.
Traverse forward rather than reverse, and don't use optical flow. Use the fracture lines to segment the black parts, track the centroid of each black segment over time. Whenever a new fracture line appears that cuts across a black segment, split the segment into two and continue tracking each segment separately.
From this you should be able to construct a tree structure representing the segmentation of the black parts over time. The fracture lines can be added as metadata to this tree, perhaps assigning fracture lines to the segment node in which they first appeared.
I would advise you to follow your initial idea of backtracking the cracks. You kind of know what the cracks look like, so you can track all the points that belong to a crack. Just track all the white points with an optical flow tracker; start with the Lucas-Kanade tracker and see where you get (see the sketch below). The high-accuracy optical flow method is a global one and more general: it will track all the pixels in the image while trying to keep some smoothness everywhere. LK is a local method that will just use a small window around each point to do the tracking. The problem is that apart from the cracks all the pixels are plain black, so there is nothing to track there; you'd just waste time trying to track something that you can't track and don't need to track.
If the lines are very straight you might run into what's called the aperture problem and get inaccurate results. You can also try some shape fitting/deformation based on snakes.
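For illustration, a minimal OpenCV sketch of that setup: points are seeded only on crack pixels (using the binary image as a mask) and tracked backwards from frame t to frame t-1; the file names and parameters are placeholders:

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Reverse traversal: track from frame t back into frame t-1.
    cv::Mat cur  = cv::imread("frame_t.png", cv::IMREAD_GRAYSCALE);
    cv::Mat prev = cv::imread("frame_t_minus_1.png", cv::IMREAD_GRAYSCALE);

    // Seed points only on the white (crack) pixels, using the binary image
    // as a mask so the plain black areas are never sampled.
    cv::Mat mask = cv::imread("binary_t.png", cv::IMREAD_GRAYSCALE);
    std::vector<cv::Point2f> pts, moved;
    cv::goodFeaturesToTrack(cur, pts, 500, 0.01, 5, mask);

    // Lucas-Kanade: each point is tracked with a small local window.
    std::vector<uchar> ok;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(cur, prev, pts, moved, ok, err);

    // ok[i] != 0 means pts[i] in frame t was found at moved[i] in frame t-1.
}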
I agree with damian. Most optical flow methods, like HAOF, rely on the first-order Taylor approximation of the intensity constancy constraint equation I(x,t)=I(x+v,t+dt). That means the solution depends on image derivatives, where the gradient determines the motion vector's magnitude and angle, i.e. you need a certain amount of texture. However, the very low texture of your non-binarised images could be enough. You could try histogram equalization to increase the contrast of your input data, but it is important to apply the same transformation to both input images, e.g. as follows:
// Stack the two frames side by side so a single equalization is applied
// to both, keeping their gray levels comparable.
cv::Mat equalizeMat(grayInp1.rows, grayInp1.cols * 2, CV_8UC1);
grayInp1.copyTo(equalizeMat(cv::Rect(0, 0, grayInp1.cols, grayInp1.rows)));
grayInp2.copyTo(equalizeMat(cv::Rect(grayInp1.cols, 0, grayInp2.cols, grayInp2.rows)));

// Joint histogram equalization over the combined image.
cv::equalizeHist(equalizeMat, equalizeMat);

// Split the equalized halves back into the two input frames.
equalizeMat(cv::Rect(0, 0, grayInp1.cols, grayInp1.rows)).copyTo(grayInp1);
equalizeMat(cv::Rect(grayInp1.cols, 0, grayInp2.cols, grayInp2.rows)).copyTo(grayInp2);
// estimate optical flow

Recognizing distortions in a regular grid

To give you some background as to what I'm doing: I'm trying to quantitatively record variations in flow of a compressible fluid via image analysis. One way to do this is to exploit the fact that the index of refraction of the fluid is directly related to its density. If you set up some kind of image behind the flow, the distortion in the image due to refractive index changes throughout the fluid field leads you to a density gradient, which helps to characterize the flow pattern.
I have a set of routines that do this successfully with a regular 2D pattern of dots. The dot pattern is slightly distorted, and by comparing the position of the dots in the distorted image with that in the non-distorted image, I get a displacement field, which is exactly what I need. The problem with this method is resolution. The resolution is limited to the number of dots in the field, and I'm exploring methods that give me more data.
One idea I've had is to use a regular grid of horizontal and vertical lines. This image will distort the same way, but instead of getting only the displacement of a dot, I'll have the continuous distortion of a grid. It seems like there must be some standard algorithm or procedure to compare one geometric grid to another and infer some kind of displacement field. Nonetheless, I haven't found anything like this in my research.
Does anyone have some ideas that might point me in the right direction? FYI, I am not a computer scientist -- I'm an engineer. I say that only because there may be some obvious approach I'm neglecting due to coming from a different field. But I can program. I'm using MATLAB, but I can read Python, C/C++, etc.
Here are examples of the type of images I'm working with:
Regular: Distorted:
I think you are looking for the Digital Image Correlation algorithm.
Here you can see a demo.
Here is a Matlab Implementation.
From Wikipedia:
Digital Image Correlation and Tracking (DIC/DDIT) is an optical method that employs tracking & image registration techniques for accurate 2D and 3D measurements of changes in images. This is often used to measure deformation (engineering), displacement, and strain, but it is widely applied in many areas of science and engineering.
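DIC implementations differ a lot; a bare-bones version is plain block matching between the regular and distorted images, e.g. with cv::matchTemplate. The window and search sizes below are placeholders:

#include <opencv2/opencv.hpp>

// Displacement of one subwindow: take a patch around (x, y) in the regular
// image and find its best match in a search area of the distorted image.
// The caller must keep both rectangles inside the images.
cv::Point2f patchShift(const cv::Mat& regular, const cv::Mat& distorted, int x, int y) {
    const int half = 16, search = 8;  // patch and search radii are placeholders

    cv::Rect patch(x - half, y - half, 2 * half, 2 * half);
    cv::Rect area(x - half - search, y - half - search,
                  2 * (half + search), 2 * (half + search));

    cv::Mat score;
    cv::matchTemplate(distorted(area), regular(patch), score, cv::TM_CCOEFF_NORMED);

    cv::Point best;
    cv::minMaxLoc(score, nullptr, nullptr, nullptr, &best);
    return cv::Point2f(float(best.x - search), float(best.y - search));
}

Evaluating patchShift on a grid of sample points directly yields the displacement (vector) field shown in the figures below.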
Edit
Here I applied the DIC algorithm to your distorted image using Mathematica, showing the relative displacements.
Edit
You may also easily identify the maximum displacement zone:
Edit
After some work (quite a bit, frankly) you can come up with something like this, representing the "displacement field" and showing clearly that you are dealing with a vortex:
(Darker and bigger arrows mean more displacement (velocity))
Post me a comment if you are interested in the Mathematica code for this one. I think my code is not going to help anybody else, so I omit posting it.
I would also suggest that a line-tracking algorithm would work well.
Simply start at the first pixel row of the image and follow each of the vertical lines downwards. (You only need to start at the first row to get the starting points.) This can be done with a simple pattern that moves orthogonally to the gradient of the line, i.e. follows the line. When you reach a crossing with a horizontal line, you can measure that point (in x,y coordinates) and compare it to the corresponding crossing point in your distorted image.
Since your grid is regular, you know that the n-th measured crossing point on the m-th vertical black line corresponds in both images. Then you simply compare the two points by computing their distance. Do this for every line on your grid and you will get how far each crossing point of the grid is distorted (see the sketch at the end of this answer).
This line-following algorithm is also used in basic edge-linking algorithms and in the Canny edge detector.
(These are just theoretical ideas and I cannot provide you with a finished algorithm, but I guess it should work easily on distorted images like the ones you have there... and maybe it is helpful for you.)
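To make the comparison step concrete, here is a minimal sketch, assuming the crossings of both grids have already been extracted in matching (m-th line, n-th crossing) order:

#include <opencv2/opencv.hpp>
#include <vector>

// Given the grid crossings of the regular and distorted images, extracted in
// the same (m-th line, n-th crossing) order, the displacement field is just
// the pointwise difference.
std::vector<cv::Point2f> displacementField(const std::vector<cv::Point2f>& regular,
                                           const std::vector<cv::Point2f>& distorted) {
    std::vector<cv::Point2f> field(regular.size());
    for (size_t i = 0; i < regular.size(); ++i)
        field[i] = distorted[i] - regular[i];  // how far this crossing moved
    return field;
}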
