Re-arrange the picture - algorithm

This question was asked in a recent interview. Please suggest something:
A 16x16 picture is divided into 4x4 pieces (16 pieces) and shuffled. Suggest an algorithm to rearrange it back into the original picture.

If it's a software-engineering type of problem and you divide the picture yourself, you can cheat and store each piece's original location along with the piece. ;)
They're probably looking for a pattern-matching solution, though. Perhaps compare the outermost row or column of pixels on each side (top/bottom/left/right) against the opposite sides of the other pieces (with a certain tolerance). Each side gets a score against each candidate, and you progressively match the best-scoring pairs until all pieces are placed.
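A rough sketch of that scoring idea in Python/NumPy (the helper names are hypothetical; each piece is assumed to be a small NumPy array of pixels):

import numpy as np

def edge_score(piece_a, piece_b):
    # Sum of squared differences between piece_a's right edge (last column)
    # and piece_b's left edge (first column); lower = better match.
    # Top/bottom matching works the same way with the last/first rows.
    a = piece_a[:, -1].astype(float)
    b = piece_b[:, 0].astype(float)
    return float(np.sum((a - b) ** 2))

def right_neighbour_scores(pieces):
    # Score every ordered pair (i, j): how plausible is it that piece j
    # sits immediately to the right of piece i?
    scores = {}
    for i, pa in enumerate(pieces):
        for j, pb in enumerate(pieces):
            if i != j:
                scores[(i, j)] = edge_score(pa, pb)
    return scores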

Without going into the pixel-matching algorithms, I think I would take a bottom-up, dynamic-programming approach here: first find 8 pairs of pieces that are most likely adjacent, then try to build the whole picture up from those smaller subsets.
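As a sketch of that first step (assuming a pairwise score table like the one above, keyed by ordered pairs, where lower means "more likely adjacent"), greedily picking the 8 most plausible disjoint pairs might look like this:

def best_disjoint_pairs(scores, n_pairs=8):
    # scores maps (i, j) -> match score, lower is better.
    # Pick the n_pairs best-scoring pairs such that no piece is used twice.
    used = set()
    pairs = []
    for (i, j), _ in sorted(scores.items(), key=lambda kv: kv[1]):
        if i in used or j in used:
            continue
        pairs.append((i, j))
        used.update((i, j))
        if len(pairs) == n_pairs:
            break
    return pairs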

I hope each of these pieces will have some identification (like a number used to order/rearrange them). I can think of this problem as analogous to the reception of UDP packets (UDP packets may arrive out of order, and then we need to reorder them).
So any sorting algorithm should work.
Please correct me if I have misunderstood the question.

Assuming nothing is available except the pixels of the pieces, this is a great approach to solving it probabilistically:
http://people.csail.mit.edu/taegsang/JigsawPuzzle.html

Related

Detecting the presence of small details in an image

I'd like to detect regions in an image which contain a comparatively large amount of small details, but equally I need to ignore strong edges. For example, I would like to (approximately) identify regions of small text on a poster which is located on a building, but I also want to ignore the strong edges of the building itself.
I guess I'm probably looking for specific frequency bands, so approaches that spring to mind include: hand-tuning a convolution kernel (or kernels) until I hit what I need, using specific DCT coefficients, or applying a histogram to directional filter responses. But perhaps I'm missing something more obvious?
To answer a question in the comments below, I'm developing in MATLAB.
I'm open to any suggestions for how to achieve this - thanks!
Here is something unscientific, but maybe not bad to get folks talking. I start with this image:
and use the excellent, free ImageMagick to divide it up into 400x400-pixel tiles, like this:
convert -crop 400x400 cinema.jpg tile%d.jpg
Now I measure the entropy of each tile, and sort by increasing entropy:
for f in tile*.jpg; do
convert $f -print '%[entropy] %f\n' null:
done | sort -n
and I get this output:
0.142574 tile0.jpg
0.316096 tile15.jpg
0.412495 tile9.jpg
0.482801 tile5.jpg
0.515268 tile4.jpg
0.534078 tile18.jpg
0.613911 tile12.jpg
0.629857 tile14.jpg
0.636475 tile11.jpg
0.689776 tile17.jpg
0.709307 tile10.jpg
0.710495 tile16.jpg
0.824499 tile6.jpg
0.826688 tile3.jpg
0.849991 tile8.jpg
0.851871 tile1.jpg
0.863232 tile13.jpg
0.917552 tile7.jpg
0.971176 tile2.jpg
So, if I look at the last 3 (i.e. those with the most entropy), I get:
The question itself is too broad for a non-paper-worthy answer on my side. That being said, I can offer you some advice on narrowing the question down.
First off, go to Google Scholar and search for the keywords your work revolves around. In your case, one of them would probably be edge detection.
Look through the most recent papers (no more than 5 years old) for work that satisfies your needs. If you don't find anything, expand the search criteria or try different terms.
If you have something more specific, please edit your question and let me know.
Always remember to split the big question into smaller chunks and then split them into even smaller chunks, until you have a plate of delicious, manageable bites.
EDIT: From what I've gathered, you're interested in an edge detection and feature selection algorithm? Here are a couple of helpful links, which might prove useful:
-MATLAB feature detection
-MATLAB edge detection
Also, this MATLAB edge detection write-up, which is part of their extensive documentation, will hopefully prove useful enough for you to dig through the MATLAB Image Processing Toolbox documentation for specific answers to your question.
You'll find Maximally Stable Extremal Regions (MSER) useful for this. You should be able to impose an area constraint to filter out large MSERs and then calculate an MSER density, for example by dividing the image into tiles as Mark did in his answer.
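For example, a rough sketch of that idea with OpenCV in Python (the question is about MATLAB, where detectMSERFeatures with its 'RegionAreaRange' option plays the same role; the file name, tile size and area limits below are placeholders to be tuned):

import cv2
import numpy as np

img = cv2.imread('poster.jpg', cv2.IMREAD_GRAYSCALE)   # placeholder file name

mser = cv2.MSER_create()
mser.setMinArea(10)     # drop tiny noise specks
mser.setMaxArea(500)    # the area constraint: drop large regions such as strong building edges
regions, _ = mser.detectRegions(img)

# Count MSER centroids per tile to get a "small detail" density map,
# in the spirit of the entropy-per-tile idea above.
tile = 100
h, w = img.shape
density = np.zeros((h // tile + 1, w // tile + 1), dtype=int)
for pts in regions:
    cx, cy = pts.mean(axis=0).astype(int)   # rough centre of the region
    density[cy // tile, cx // tile] += 1

# Tiles with a high density of small MSERs are candidates for "lots of fine detail".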

Tiling Algorithm

I'm faced with a problem where I have to solve puzzles.
E.g., I have a (variable) area of 20x20 (meters, for example). There is a given set of pieces of various sizes, such as 4x3, 4x2, 1x5, etc. These pieces can also be rotated, to add more pain to my problem. The point of the puzzle is to fill the entire 20x20 area with the given pieces.
What would be a good starting algorithm to achieve such a feat?
I'm thinking of using a heuristic that calculates the open space (for efficiency purposes).
Thanks in advance
That's an Exact Cover problem, with a nice structure too, usually, depending on the pieces. I don't know about any heuristic algorithms, but there are several exact options that should work well.
As usual with Exact Covers, you can use Dancing Links, a way to implement Algorithm X efficiently.
Less generally, you can probably solve this with zero-suppressed decision diagrams. It depends on the tiles though. As a bonus, you can represent all possible solutions and count them or generate one with some properties, all without ever explicitly storing the entire (usually far too large) set of solutions.
BDDs would work about as well, though using more nodes to accomplish the same thing (the solutions are very sparse, in the sense that each uses few of the possible tile placements; ZDDs like that, whereas BDDs prefer symmetry over sparseness).
Or you could turn it into a SAT problem; then you get less information (no solution count, for example), but it can be faster if there are easy solutions.
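For illustration, here is a naive set-based sketch of Algorithm X applied to a toy tiling instance (no Dancing Links, so it only scales to small examples; columns are the grid cells plus one "piece i is used" constraint per piece):

def solve_exact_cover(columns, rows):
    # columns: set of constraints still to cover
    # rows: dict mapping row name -> set of columns that row covers
    if not columns:
        return []                                   # everything covered
    # branch on the column with the fewest candidate rows
    col = min(columns, key=lambda c: sum(1 for r in rows.values() if c in r))
    for name, covered in rows.items():
        if col not in covered:
            continue
        remaining = {n: r for n, r in rows.items() if not (r & covered)}
        sub = solve_exact_cover(columns - covered, remaining)
        if sub is not None:
            return [name] + sub
    return None                                     # dead end

def tile_placements(grid_w, grid_h, pieces):
    # One row per possible placement of each piece (rotations allowed).
    rows = {}
    for i, (w, h) in enumerate(pieces):
        for pw, ph in {(w, h), (h, w)}:
            for x in range(grid_w - pw + 1):
                for y in range(grid_h - ph + 1):
                    cells = {(x + dx, y + dy) for dx in range(pw) for dy in range(ph)}
                    rows[(i, pw, ph, x, y)] = cells | {('piece', i)}
    return rows

# Toy example: fill a 4x4 area with a 4x2 piece and two 4x1 pieces.
pieces = [(4, 2), (4, 1), (4, 1)]
columns = {(x, y) for x in range(4) for y in range(4)} | {('piece', i) for i in range(len(pieces))}
print(solve_exact_cover(columns, tile_placements(4, 4, pieces)))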

Calculating efficient use of window casing (trim)

I'm working on an app that's going to estimate building material for my business. The part I'm working on right now deals specifically with trim that goes around windows.
The best way to explain this is to give an example:
Window trim is purchased in lengths of 14 feet (168 inches). Let's say I have 5 rectangular windows of various sizes, each of which requires 4 pieces of trim (top, bottom, left and right). I'm trying to build an algorithm that will determine the best way to cut these pieces with the least amount of waste.
I've looked into using permutations to calculate every possible outcome and keep track of waste, but the number of permutations was beyond the trillions once I got past 5 windows (20 different pieces of trim).
Does anyone have any insight on how I might do this?
Thanks.
You are looking at a typical case of the cutting stock problem.
I find this lecture from the University of North Carolina (PDF) rather clear. It is oriented towards implementation, with an example worked throughout, and has few prerequisites -- maybe just looking up a few acronyms. There are also 2 hours of video lectures from the University of Madras on the topic, if you want more details at a reasonably slow pace.
The method relies on solving the knapsack problem several times; you can grab a knapsack solver directly from Rosetta Code if you don't want to work through a second linear optimization problem.
In short, you want to select some ways (how many pieces of each length) in which to cut stock (in your case the window trim), and how many times to use each way.
You start with a trivial set: for each length you need, make a way of cutting with just that size. You then iterate: the knapsack problem gives the least favourable way to cut stock from your current configuration, and the simplex method then "removes" this combination from your set of ways to cut stock, by pivoting.
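As a sketch of the knapsack subproblem (a plain unbounded-knapsack DP; the "values" would be the dual prices coming out of the LP, and the numbers in the usage line are made up for illustration):

def best_pattern(stock_len, lengths, values):
    # Find the cutting pattern (piece counts) with maximum total value
    # that fits into one stock length. Lengths must be integers (e.g. inches).
    best = [0.0] * (stock_len + 1)
    choice = [None] * (stock_len + 1)
    for cap in range(1, stock_len + 1):
        for i, piece in enumerate(lengths):
            if piece <= cap and best[cap - piece] + values[i] > best[cap]:
                best[cap] = best[cap - piece] + values[i]
                choice[cap] = i
    counts = [0] * len(lengths)          # recover the chosen pattern
    cap = stock_len
    while cap > 0 and choice[cap] is not None:
        counts[choice[cap]] += 1
        cap -= lengths[choice[cap]]
    return counts, best[stock_len]

# e.g. 168" stock, required piece lengths of 30", 42" and 60",
# with hypothetical dual prices 1.0, 1.4 and 2.1:
print(best_pattern(168, [30, 42, 60], [1.0, 1.4, 2.1]))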
To optimize the casements on windows and bi-fold doors for the company I worked for, I used this simple matrix - I simply took the most common openings and decided what would be the most reasonable and optimal cut lengths.
For example, a 3050 window could be trimmed, with some waste, by using one 8' cut and one 12' cut.

Matlab - distinguish overlapping low-contrast objects in an RGB or grayscale image

I have a big problem detecting objects within an image - I know this topic has already been discussed at length in many forums, but I spent the last 4 days searching for an answer and was not able to find one.
In fact, I have a picture of a branch (http://cl.ly/image/343Y193b2m1c). My goal is to count every single needle in this picture. So I have to face several problems:
Separate the branch with its needles from the background (which in this case is no problem).
Select the borders of the needles. This is a huge problem; I tried different ways, including all of the edge() detection methods, but the problem is always the same - the borders around the needles are not closed - which leads to the last problem:
Needles are overlapping! This results in "squares between the needles" which, if I use imfill() or a similar function, get filled in instead of the needles. And the places where the needles are concentrated (many needles in one place) are nearly impossible to distinguish.
I tried watershed, I tried to enhance the contrast, k-means clustering, I tried imerode, imdilate and related functions with subsequent edge detection. I also tried to filter and smooth the picture a bit in order to "unsharpen" the needles so that not every small change in color is recognized as a border (which is another problem).
I am relatively new to MATLAB, so I don't know what I have to look for. I tried to follow the MATLAB tutorial used for nuclei detection, but with this I can only get all the green objects (all needles at once).
I hope this question did not come up before - if it did, I apologize deeply for the double post. If anybody has an idea what to do or what methods to use, it would be awesome and would save this really bad beginning of the week.
Thank you very much in advance,
Phillip
Distinguishing overlapping objects is very, very hard, particularly if you do not know how many objects you have to distinguish. Your brain is much better at distinguishing overlapping objects than any segmentation algorithm I'm aware of, since it is able to integrate a lot of information that is difficult to encode. Therefore: If you're not able to distinguish some of the features yourself, forget about doing it via code.
Having said that, there may be a way for you to be able to get an approximate count of the needles: If you can segment the image pixels into two classes: "needle" versus "not needle", and you know how much area in your picture is covered by a needle (it may help to include a ruler when you take the picture), you can then divide number of "needle"-pixels by the number of pixels covered by a single needle to estimate the total number of needles in the image. This will somewhat underestimate the needle count due to overlaps, and it will underestimate more the denser the needles are (due to more overlaps), but it should allow you to compare automatically between branches with lots of needles and branches with few needles, as well as to identify changes in time, should that be one of your goals.
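A minimal sketch of that estimate (NumPy here for brevity, though the question is about MATLAB; the binary "needle" mask and the per-needle pixel area are assumed inputs, coming from your segmentation and from a calibration object such as a ruler):

import numpy as np

def estimate_needle_count(needle_mask, pixels_per_needle):
    # needle_mask: boolean array, True where a pixel was classified as "needle"
    # pixels_per_needle: calibrated area of one needle in pixels
    # Overlapping needles make this an underestimate, as noted above.
    return needle_mask.sum() / float(pixels_per_needle)

# e.g. with a crude green-dominance segmentation of an RGB image `img`:
# needle_mask = (img[:, :, 1] > img[:, :, 0]) & (img[:, :, 1] > img[:, :, 2])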
I agree with @Jonas - you've got yourself one HUGE problem.
Let me make a few suggestions.
First, along @Jonas' direction, instead of getting an accurate count, another way of getting a rough estimate is by counting the tips of the needles. Obviously, not all the tips are clearly visible. But, if you can get a clear mask of the branch, it might be relatively easy to identify the tips of the needles using some of the morphological operations you mentioned yourself.
Second, is there any way you can get more information? For example, if you could have depth information it might help a little in distinguishing the needles from one another (it will not completely solve the task but it may help). You may get depth information from stereo - that is, taking two pictures of the branch while moving the camera a bit. If you have a Kinect device at your disposal (or some other range-camera) you can get a depth map directly...

How to calculate minimal waste when tailoring tubes

I have a rather mathematical problem I need to solve:
The task is to cut a predefined number of tubes out of fixed-length stock tubes with a minimum amount of waste material.
So let's say I want to cut 10 tubes of 1 m and 20 tubes of 2.5 m out of stock tubes with a standardized length of 6 m.
I'm not sure what an algorithm for this kind of problem would look like.
I was thinking of creating a list of variations of the different-sized tubes, fitting them into the standard-sized tubes, and then choosing the variation with the minimal waste.
First, I'm not sure whether there are other, better ways to attack the problem.
Second, I could not figure out HOW I would create such a list of variations.
Any help is greatly appreciated, thanks!
I believe you are describing the cutting stock problem. Some additional information can be found here.
This is known as the Cutting Stock problem. Wikipedia has a number of references that might help you find clues to an algorithm that works.
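To make the "variations list" part of the question concrete, here is a rough sketch that enumerates every feasible way of cutting one stock tube (lengths in millimetres so the arithmetic stays integral; 6000 mm stock with 1000 mm and 2500 mm pieces matches the example in the question):

def cutting_patterns(stock_len, piece_lens):
    # Enumerate every combination of piece counts that fits into one stock tube,
    # returning (counts, waste) pairs.
    patterns = []
    def extend(idx, remaining, counts):
        if idx == len(piece_lens):
            if any(counts):                       # skip the empty pattern
                patterns.append((tuple(counts), remaining))
            return
        for n in range(remaining // piece_lens[idx] + 1):
            extend(idx + 1, remaining - n * piece_lens[idx], counts + [n])
    extend(0, stock_len, [])
    return patterns

# e.g. cutting_patterns(6000, [1000, 2500]) includes ((1, 2), 0):
# one 1 m piece plus two 2.5 m pieces with zero waste.
# Choosing how many times to use each pattern to meet the demands (10 and 20)
# with the fewest stock tubes is then the cutting stock problem from the answers above.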
