Split an image into multiple sub images using C

I am looking for a C library that, given an image of size x, will split it into multiple sub-images so that I can send every sub-image to a dedicated CPU to detect segments on it using region growing or whatever.

Do you really want to split the image?
If you are using multi-core CPUs, it's better to load the image once and then run processing threads on it (I assume the processing only reads the image), each with its own x, y, width, height parameters.
If you have more hosts, there is a dispatcher, and it has to do several operations with the image: decompress, split, compress the parts, transmit the parts. I assume the processing hosts are on the same local network. If you can broadcast the image to this local network, so that all the hosts receive it at once, it would be a performance gain: the dispatcher does not have to split and re-send parts, and each processing task just picks the appropriate part (x, y, width, height) of the whole image it received. I don't know what image format you are using, but I'm pretty sure you don't have to decompress the whole image; at least vertically you should be able to skip unwanted regions. (Split the image into full-width regions to avoid decompressing unneeded areas.)
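As a minimal sketch of the shared-image, many-threads idea, assuming an 8-bit grayscale image already decoded into a w*h buffer and a hypothetical process_region() standing in for your segmentation routine (region growing or otherwise):

/* One shared read-only image, one thread per horizontal strip. */
#include <pthread.h>
#include <stdlib.h>

typedef struct {
    const unsigned char *pixels;   /* shared, read-only image data */
    int img_w, img_h;              /* full image dimensions        */
    int x, y, w, h;                /* sub-region for this thread   */
} region_job;

/* Hypothetical per-region segmentation; reads pixels, writes its own output. */
extern void process_region(const unsigned char *pixels, int img_w, int img_h,
                           int x, int y, int w, int h);

static void *worker(void *arg)
{
    region_job *job = arg;
    process_region(job->pixels, job->img_w, job->img_h,
                   job->x, job->y, job->w, job->h);
    return NULL;
}

/* Split the image into n horizontal strips (assumes n <= img_h). */
void segment_in_strips(const unsigned char *pixels, int img_w, int img_h, int n)
{
    pthread_t *threads = malloc(n * sizeof *threads);
    region_job *jobs = malloc(n * sizeof *jobs);
    int strip = (img_h + n - 1) / n;

    for (int i = 0; i < n; i++) {
        int y = i * strip;
        int h = (y + strip > img_h) ? img_h - y : strip;
        jobs[i] = (region_job){ pixels, img_w, img_h, 0, y, img_w, h };
        pthread_create(&threads[i], NULL, worker, &jobs[i]);
    }
    for (int i = 0; i < n; i++)
        pthread_join(threads[i], NULL);

    free(jobs);
    free(threads);
}

In practice you would give each strip a few rows of overlap so that region growing near the strip boundary sees its neighborhood.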

Merging the results from the separate segmentation outputs will be the hard part. What if you happen to split right through a segment? You'd get a segment from each split image, and you'd have to merge them together. There will be uncertain cases, so you'll have to pick a metric to decide when to merge two regions.
If this is a concern, you might want to try out the Seam Carving algorithm to generate splits that aren't likely to intersect a region edge. Photoshop's Content-Aware Resize tool uses seam carving to find horizontal and vertical paths in an image that are not visually important.
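For illustration, the core of seam carving is a simple dynamic program over a per-pixel energy map (e.g. gradient magnitude). This is a sketch of finding one vertical seam; the array layout and function name are my own, not Photoshop's:

#include <stdlib.h>

/* energy: row-major w*h array; seam_x: output array of h x-coordinates.
 * Splitting along this path is less likely to cut through an important
 * region than a straight vertical cut would be. */
void find_vertical_seam(const double *energy, int w, int h, int *seam_x)
{
    double *cost = malloc((size_t)w * h * sizeof *cost);

    for (int x = 0; x < w; x++)                /* first row: just the energy  */
        cost[x] = energy[x];

    for (int y = 1; y < h; y++) {              /* accumulate cheapest parents */
        for (int x = 0; x < w; x++) {
            double best = cost[(y - 1) * w + x];
            if (x > 0     && cost[(y - 1) * w + x - 1] < best) best = cost[(y - 1) * w + x - 1];
            if (x < w - 1 && cost[(y - 1) * w + x + 1] < best) best = cost[(y - 1) * w + x + 1];
            cost[y * w + x] = energy[y * w + x] + best;
        }
    }

    /* cheapest endpoint in the last row, then walk back up */
    int best_x = 0;
    for (int x = 1; x < w; x++)
        if (cost[(h - 1) * w + x] < cost[(h - 1) * w + best_x]) best_x = x;
    seam_x[h - 1] = best_x;

    for (int y = h - 2; y >= 0; y--) {
        int x = seam_x[y + 1], nx = x;
        if (x > 0     && cost[y * w + x - 1] < cost[y * w + nx]) nx = x - 1;
        if (x < w - 1 && cost[y * w + x + 1] < cost[y * w + nx]) nx = x + 1;
        seam_x[y] = nx;
    }
    free(cost);
}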

As pointed out by japreiss, merging the resulting segmentation results may be your hardest part.
If you are using a graph-cut based image segmentation algorithm, you may augment your code with this approach, which provides a principled way of performing the operations in parallel and combining them in an optimal way.

While agreeing with Shai and japreiss, and underlining that since your goal is image segmentation you'll have boundary issues (because you need neighborhood information), for the image manipulation part I'd suggest something like
libpng: http://www.libpng.org/pub/png/libpng.html
And take a look at these StackOverflow questions:
How to encode PNG to buffer using libpng?
Using libpng to "split" an image into segments (this isn't properly answered yet)
When you have your buffer filled with the image values, reading and writing portions of it should not be tricky at all.
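For example, assuming the libpng rows have been copied into one contiguous row-major RGBA buffer, pulling out a sub-image is just a per-row memcpy (the helper name and pixel layout are assumptions):

#include <string.h>
#include <stdlib.h>

/* Copy the w x h rectangle at (x, y) out of a src_w-wide RGBA buffer. */
unsigned char *extract_rect(const unsigned char *src, int src_w,
                            int x, int y, int w, int h)
{
    const int bpp = 4;                             /* bytes per pixel (RGBA) */
    unsigned char *dst = malloc((size_t)w * h * bpp);

    for (int row = 0; row < h; row++)
        memcpy(dst + (size_t)row * w * bpp,
               src + ((size_t)(y + row) * src_w + x) * bpp,
               (size_t)w * bpp);

    return dst;                                    /* caller frees */
}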

Related

What can I do to get the most out of png compression for an image with animation frames?

I'm looking to do some javascript powered animation via image clipping. Here's an example of what I'm talking about: http://www.def-logic.com/_dhtml/freejack/hero1.gif
I know PNG uses a kind of prediction in its compression; what would be the best way to lay out an image like the one above so that I get the most out of the compression? I'm especially interested in cases where the frames are even more similar than the ones above, since then there is a lot of potential for compression due to redundancy.
For example, is there specific size of tile that would work well?
Not really. PNG prediction is strictly local (it uses the three neighbouring pixels), and the prediction ("filter") strategy can be chosen on a per-line basis.
That kind of redundancy is not really detectable by PNG compression (nor by JPG or practically any other format, actually).
If you have the freedom to choose the distribution of tiles (few or many per row), you can try varying that; it can have some small influence (an image with many short lines instead of a few long lines gives the encoder more opportunities to select different filters), but again, I'd bet the difference will be very small.
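To show just how local the prediction is, this is the Paeth filter from the PNG specification: each byte is predicted only from its left (a), above (b) and upper-left (c) neighbours, so redundancy between distant animation frames is invisible to the compressor.

#include <stdlib.h>

int paeth_predictor(int a, int b, int c)
{
    int p = a + b - c;                 /* initial estimate            */
    int pa = abs(p - a);               /* distances to each neighbour */
    int pb = abs(p - b);
    int pc = abs(p - c);

    if (pa <= pb && pa <= pc) return a;
    if (pb <= pc)             return b;
    return c;
}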

How can I get the rectangular areas of difference between two images?

I feel like I have a very typical problem with image comparison, and my googles are not revealing answers.
I want to transmit still images of a desktop every X amount of seconds. Currently, we send a new image if the old and new differ by even one pixel. Very often only something very minor changes, like the clock or an icon, and it would be great if I could just send the changed portion to the server and update the image (way less bandwidth).
The plan I envision is to get a rectangle of an area that has changed. For instance, if the clock changed, screen capture the smallest rectangle that encompasses the changes, and send it to the server along with its (x, y) coordinate. The server will then update the old image by overlaying the rectangle at the coordinate specified.
Is there any algorithm or library that accomplishes this? I don't want it to be perfect, let's say I'll always send a single rectangle that encompasses all the changes (even if many smaller rectangles would be more efficient).
My other idea was to get a diff between the new and old images that's saved as a series of transformations. Then, I would just send the series of transformations to the server, which would then apply this to the old image to get the new image. Not sure if this is even possible, just a thought.
Any ideas? Libraries I can use?
Compare every pixel of the previous frame with the corresponding pixel of the next frame, and keep track of which pixels have changed?
Since you are only looking for a single box to encompass all the changes, you actually only need to keep track of the min-x, min-y (not necessarily from the same pixel), max-x, and max-y. Those four values will give you the edges of your rectangle.
Note that this job (comparing the two frames) should really be off-loaded to the GPU, which could do this significantly faster than the CPU.
Note also that what you are trying to do is essentially a home-grown lossless streaming video compression algorithm. Using one from an existing library would not only be much easier, but also probably much more performant.
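If you do go the home-grown route, a minimal sketch of the bounding-box scan described above could look like this, assuming both frames are the same size and stored as 32-bit RGBA pixels:

#include <stdint.h>

typedef struct { int min_x, min_y, max_x, max_y; } rect;

/* Returns 1 and fills *out with the smallest rectangle enclosing all changed
 * pixels; returns 0 if the frames are identical. */
int changed_rect(const uint32_t *prev, const uint32_t *next,
                 int w, int h, rect *out)
{
    int found = 0;
    out->min_x = w; out->min_y = h; out->max_x = -1; out->max_y = -1;

    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            if (prev[y * w + x] != next[y * w + x]) {
                if (x < out->min_x) out->min_x = x;
                if (y < out->min_y) out->min_y = y;
                if (x > out->max_x) out->max_x = x;
                if (y > out->max_y) out->max_y = y;
                found = 1;
            }
        }
    }
    return found;   /* if 1, send the rectangle (min_x,min_y)-(max_x,max_y) */
}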
This is from an algorithms point of view; not sure if it is easier to implement.
Basically, XOR the two images and compress the result using any information-theoretic algorithm (Huffman coding?).
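A rough sketch of that idea: identical pixels XOR to zero, so the diff is mostly long zero runs, which any entropy coder compresses well. Here a trivial run-length encoding of zero bytes stands in for the real coder; this is an illustration, not a usable codec.

#include <stddef.h>

/* Returns the number of bytes written to out (caller sizes out generously).
 * Decoding is unambiguous: a 0 byte is always a run marker followed by the
 * run length, because literal XOR bytes are never 0. */
size_t xor_rle(const unsigned char *a, const unsigned char *b,
               size_t n, unsigned char *out)
{
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        unsigned char d = a[i] ^ b[i];
        if (d == 0) {                         /* encode a run of zeros */
            size_t run = 0;
            while (i < n && (a[i] ^ b[i]) == 0 && run < 255) { i++; run++; }
            out[o++] = 0;
            out[o++] = (unsigned char)run;
        } else {                              /* literal differing byte */
            out[o++] = d;
            i++;
        }
    }
    return o;
}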
I know I am very late responding, but I found this question today.
I have done some analysis on image differencing, but the code was written for Java. Kindly look at the link below; it may help:
How to find rectangle of difference between two images
The code finds the differences and keeps the rectangles in a LinkedList. You can use the LinkedList containing the rectangles to patch the differences onto the base image.
Cheers!

Restoring an old manuscript with image processing

Say I have this old manuscript. What I am trying to do is process the manuscript so that all the characters present in it can be perfectly recognized. What are the things I should keep in mind, and what methods are there for approaching such a problem?
Please help, thank you.
Some graphics applications have macro recorders (e.g. Paint Shop Pro). They can record a sequence of operations applied to an image and store them as macro script. You can then run the macro in a batch process, in order to process all the images contained in a folder automatically. This might be a better option, than re-inventing the wheel.
I would start by playing around with the different functions manually, in order to see what they do to your image. There are an awful number of things you can try: Sharpening, smoothing and remove noise with a lot of different methods and options. You can work on the contrast in many different ways (stretch, gamma correction, expand, and so on).
In addition, if your image has a yellowish background, then working on the red or green channel alone would probably lead to better results, because the blue channel will have poor contrast.
Do you mean that you want to make it easier for people to read the characters, or are you trying to improve image quality so that optical character recognition (OCR) software can read them?
I'd recommend that you select a specific goal for readability. For example, you might want readers to be able to read the text 20% faster if the image has been processed. If you're using OCR software to read the text, set a read rate you'd like to achieve. Having a concrete goal makes it easier to keep track of your progress.
The image processing book Digital Image Processing by Gonzalez and Woods (3rd edition) has a nice example showing how to convert an image like this to a black-on-white representation. Once you have black text on a white background, you can perform a few additional image processing steps to "clean up" the image and make it a little more readable.
Sample steps:
Convert the image to black and white (grayscale)
Apply a moving average threshold to the image (see the sketch after this list). If the characters are usually about the same size in an image, then you shouldn't have much trouble selecting values for the two parameters of the moving average threshold algorithm.
Once the image has been converted to just black characters on a white background, try simple operations such as morphological "close" to fill in small gaps.
Present the original image and the cleaned image to adult readers, and time how long it takes for them to read each sample. This will give you some indication of the improvement in image quality.
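A sketch of moving-average thresholding in the spirit of the two-parameter algorithm mentioned above: each pixel is compared against b times the running mean of the last n pixels along the scan line, so dark strokes survive even when the background brightness drifts. The textbook version scans the rows in a zigzag; this version scans left to right only, and n and b are assumptions to tune (n roughly a few stroke widths, b a little below 1).

void moving_average_threshold(const unsigned char *gray, unsigned char *bin,
                              int w, int h, int n, double b)
{
    for (int y = 0; y < h; y++) {
        double sum = 0.0;
        int count = 0;
        for (int x = 0; x < w; x++) {
            int i = y * w + x;
            sum += gray[i];
            if (count < n) count++;
            else sum -= gray[i - n];          /* drop pixel that left the window */
            double mean = sum / count;
            bin[i] = (gray[i] <= b * mean) ? 0 : 255;   /* dark text -> black */
        }
    }
}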
A technique called the Stroke Width Transform (SWT) has been discussed on SO previously. It can be used to extract character strokes from even very complex backgrounds. The SWT is harder to implement, but it can work for quite a wide variety of images:
Stroke Width Transform (SWT) implementation (Java, C#...)
The texture in the paper could present a problem for many algorithms. However, there are techniques for denoising images based on the Fast Fourier Transform (FFT), an algorithm you can use to find 1D or 2D sinusoidal patterns in an image (e.g. grid patterns). About halfway down the following page you can see examples of FFT-based techniques for removing periodic noise:
http://www.fmwconcepts.com/misc_tests/FFT_tests/index.html
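As a very rough illustration of the idea (not the exact technique on that page), here is a sketch using FFTW3 (link with -lfftw3 -lm): strong isolated peaks away from the DC term usually correspond to a repeating texture such as the paper grain, and here any coefficient whose magnitude exceeds thresh outside a small low-frequency guard band is simply zeroed. Real notch filters are more careful; thresh and guard are assumptions to tune per image.

#include <fftw3.h>
#include <math.h>
#include <stdlib.h>

/* img: grayscale image as row-major doubles, filtered in place. */
void suppress_periodic_noise(double *img, int w, int h,
                             double thresh, int guard)
{
    int cw = w / 2 + 1;                           /* width of the r2c spectrum */
    fftw_complex *spec = fftw_malloc(sizeof(fftw_complex) * h * cw);
    fftw_plan fwd = fftw_plan_dft_r2c_2d(h, w, img, spec, FFTW_ESTIMATE);
    fftw_plan inv = fftw_plan_dft_c2r_2d(h, w, spec, img, FFTW_ESTIMATE);

    fftw_execute(fwd);

    for (int y = 0; y < h; y++) {
        int fy = (y <= h / 2) ? y : y - h;        /* signed vertical frequency */
        for (int x = 0; x < cw; x++) {
            if (abs(fy) <= guard && x <= guard)   /* keep low frequencies */
                continue;
            double re = spec[y * cw + x][0], im = spec[y * cw + x][1];
            if (sqrt(re * re + im * im) > thresh) {
                spec[y * cw + x][0] = 0.0;
                spec[y * cw + x][1] = 0.0;
            }
        }
    }

    fftw_execute(inv);
    for (int i = 0; i < w * h; i++)
        img[i] /= (double)(w * h);                /* FFTW does not rescale */

    fftw_destroy_plan(fwd);
    fftw_destroy_plan(inv);
    fftw_free(spec);
}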
If you find a technique that works for the images you're testing, I'm sure a number of people would be interested to see the unprocessed and processed images.

Data Structure for large and detailed maps

Does anyone have a recommendation for data structures for relatively large maps with high resolution, something like 400 miles x 400 miles at 10-15 ft resolution? Using a 2D array, that would be roughly 2M x 2M cells.
The map only needs to store the elevation and terrain (earth, water, rock, etc.), and I don't think storing tiles is a good strategy.
Thank you!
It depends on what you need to do with it: view it, store it, analyze it, etc...
One thing I can say, however, is that the file will be HUGE at your stated resolution, and you should consider splitting it up into at least a few tiles, better yet into 1 mile x 1 mile tiles.
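If you roll your own storage, a minimal sketch of a lazily allocated tile grid could look like this; the tile size and field types are assumptions, and tiles are only allocated when a cell inside them is first touched, so empty ocean costs nothing.

#include <stdlib.h>
#include <stdint.h>

#define TILE 512                       /* 512 x 512 cells per tile */

typedef struct {
    int16_t elev;                      /* elevation in feet        */
    uint8_t terrain;                   /* earth, water, rock, ...  */
} cell;

typedef struct {
    int tiles_x, tiles_y;              /* grid size in tiles       */
    cell **tiles;                      /* tiles_x * tiles_y slots, NULL = unallocated */
} tile_map;

tile_map *map_create(int cells_x, int cells_y)
{
    tile_map *m = malloc(sizeof *m);
    m->tiles_x = (cells_x + TILE - 1) / TILE;
    m->tiles_y = (cells_y + TILE - 1) / TILE;
    m->tiles = calloc((size_t)m->tiles_x * m->tiles_y, sizeof *m->tiles);
    return m;
}

cell *map_at(tile_map *m, int x, int y)            /* allocate tile on demand */
{
    int tx = x / TILE, ty = y / TILE;
    cell **slot = &m->tiles[(size_t)ty * m->tiles_x + tx];
    if (!*slot)
        *slot = calloc((size_t)TILE * TILE, sizeof(cell));
    return &(*slot)[(size_t)(y % TILE) * TILE + (x % TILE)];
}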
The list of raster formats supported by GDAL could serve as a good starting point for exploring various formats, keeping in mind that many software packages (GRASS, ArcGIS, etc.) use GDAL to read and write most raster formats. Note also that some file formats have maximum sizes which may prevent you from using them for your very large file.
For analysis and non-viewable storage, HDF5 format might be of interest.
If you want people to see the data as a map over the web, then creating small image tile overlays will be the fastest approach to sharing such a large dataset.

Detecting if two images are visually identical

Sometimes two image files may be different on a file level, but a human would consider them perceptively identical. Given that, now suppose you have a huge database of images, and you wish to know if a human would think some image X is present in the database or not. If all images had a perceptive hash / fingerprint, then one could hash image X and it would be a simple matter to see if it is in the database or not.
I know there is research around this issue, and some algorithms exist, but is there any tool, like a UNIX command line tool or a library I could use to compute such a hash without implementing some algorithm from scratch?
edit: relevant code from findimagedupes, using ImageMagick
try $image->Sample("160x160!");          # shrink to a fixed size, ignoring aspect ratio
try $image->Modulate(saturation=>-100);  # drop colour information
try $image->Blur(radius=>3,sigma=>99);   # heavy blur to suppress fine detail and noise
try $image->Normalize();                 # stretch the contrast
try $image->Equalize();                  # equalize the histogram
try $image->Sample("16x16");             # shrink to the final fingerprint size
try $image->Threshold();                 # binarize
try $image->Set(magick=>'mono');         # 1-bit-per-pixel format
($blob) = $image->ImageToBlob();         # the raw bits are the fingerprint
edit: Warning! ImageMagick $image object seems to contain information about the creation time of an image file that was read in. This means that the blob you get will be different even for the same image, if it was retrieved at a different time. To make sure the fingerprint stays the same, use $image->getImageSignature() as the last step.
findimagedupes is pretty good. You can run "findimagedupes -v fingerprint images" to let it print "perceptive hash", for example.
Cross-correlation or phase correlation will tell you if the images are the same, even with noise, degradation, and horizontal or vertical offsets. Using the FFT-based methods makes this much faster than a brute-force spatial comparison.
The usual algorithm doesn't work for images that are not the same scale or rotation, though. You could pre-rotate or pre-scale them, but that's really processor intensive. Apparently you can also do the correlation in a log-polar space and it will be invariant to rotation, translation, and scale, but I don't know the details well enough to explain that.
MATLAB example: Registering an Image Using Normalized Cross-Correlation
Wikipedia calls this "phase correlation" and also describes making it scale- and rotation-invariant:
The method can be extended to determine rotation and scaling differences between two images by first converting the images to log-polar coordinates. Due to properties of the Fourier transform, the rotation and scaling parameters can be determined in a manner invariant to translation.
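For the translation-only case, a minimal FFTW3 sketch of phase correlation might look like this (grayscale double buffers of equal size assumed; no windowing or sub-pixel refinement; link with -lfftw3 -lm):

#include <fftw3.h>
#include <math.h>

void phase_correlate(const double *a, const double *b, int w, int h,
                     int *dx, int *dy)
{
    int cw = w / 2 + 1, n = w * h;
    double *ta = fftw_malloc(sizeof(double) * n);
    double *tb = fftw_malloc(sizeof(double) * n);
    fftw_complex *fa = fftw_malloc(sizeof(fftw_complex) * h * cw);
    fftw_complex *fb = fftw_malloc(sizeof(fftw_complex) * h * cw);

    for (int i = 0; i < n; i++) { ta[i] = a[i]; tb[i] = b[i]; }

    fftw_plan pa = fftw_plan_dft_r2c_2d(h, w, ta, fa, FFTW_ESTIMATE);
    fftw_plan pb = fftw_plan_dft_r2c_2d(h, w, tb, fb, FFTW_ESTIMATE);
    fftw_execute(pa);
    fftw_execute(pb);

    /* cross-power spectrum: FA * conj(FB), normalized to unit magnitude */
    for (int i = 0; i < h * cw; i++) {
        double re = fa[i][0] * fb[i][0] + fa[i][1] * fb[i][1];
        double im = fa[i][1] * fb[i][0] - fa[i][0] * fb[i][1];
        double mag = sqrt(re * re + im * im) + 1e-12;
        fa[i][0] = re / mag;
        fa[i][1] = im / mag;
    }

    fftw_plan pinv = fftw_plan_dft_c2r_2d(h, w, fa, ta, FFTW_ESTIMATE);
    fftw_execute(pinv);

    /* peak location = relative shift (wrapped: large values mean negative shift;
     * the sign convention depends on which image you treat as the reference)  */
    int best = 0;
    for (int i = 1; i < n; i++)
        if (ta[i] > ta[best]) best = i;
    *dx = best % w;  if (*dx > w / 2) *dx -= w;
    *dy = best / w;  if (*dy > h / 2) *dy -= h;

    fftw_destroy_plan(pa); fftw_destroy_plan(pb); fftw_destroy_plan(pinv);
    fftw_free(ta); fftw_free(tb); fftw_free(fa); fftw_free(fb);
}

A sharp, high peak in the correlation surface indicates a match; a flat surface indicates the images are unrelated.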
A colour histogram is good for matching the same image after it has been resized, resampled, etc.
If you want to match different people's photos of the same landmark, it's trickier - look at Haar classifiers. OpenCV is a great free library for image processing.
I don't know the algorithm behind it, but Microsoft Live Image Search just added this capability. Picasa also has the ability to identify faces in images, and groups faces that look similar. Most of the time, it's the same person.
Some machine learning technology like a support vector machine, neural network, naive Bayes classifier or Bayesian network would be best at this type of problem. I've written one each of the first three to classify handwritten digits, which is essentially image pattern recognition.
Resize the image to a 1x1 pixel... if they are exactly equal, there is a small probability they are the same picture.
Now resize it to a 2x2 pixel image; if all 4 pixels are exactly equal, there is a larger probability they are the same.
Then 3x3; if all 9 pixels are exactly equal, there's a good chance, etc.
Then 4x4; if all 16 pixels are exactly equal, the chance is better still.
Etc...
Doing it this way, you can make efficiency improvements: if the 1x1 pixel grid is off by a lot, why bother checking the 2x2 grid? Etc.
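A sketch of that coarse-to-fine idea, assuming grayscale input: compare box-averaged thumbnails at 1x1, 2x2, 4x4, ... and stop as soon as any cell differs by more than tol, so obviously different images are rejected cheaply. max_grid and tol are assumptions to tune, and max_grid should stay well below the smaller image dimension.

#include <math.h>

/* Average of the block of pixels at grid position (gx, gy) on an n x n grid. */
static double block_mean(const unsigned char *img, int w, int h,
                         int n, int gx, int gy)
{
    int x0 = gx * w / n, x1 = (gx + 1) * w / n;
    int y0 = gy * h / n, y1 = (gy + 1) * h / n;
    double sum = 0.0;
    for (int y = y0; y < y1; y++)
        for (int x = x0; x < x1; x++)
            sum += img[y * w + x];
    return sum / ((double)(x1 - x0) * (y1 - y0));
}

/* Returns 1 if the images still look alike at every tested resolution. */
int likely_same(const unsigned char *a, int aw, int ah,
                const unsigned char *b, int bw, int bh,
                int max_grid, double tol)
{
    for (int n = 1; n <= max_grid; n *= 2)
        for (int gy = 0; gy < n; gy++)
            for (int gx = 0; gx < n; gx++)
                if (fabs(block_mean(a, aw, ah, n, gx, gy) -
                         block_mean(b, bw, bh, n, gx, gy)) > tol)
                    return 0;                  /* diverged: bail out early */
    return 1;
}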
If you have lots of images, a color histogram could be used to get a rough measure of closeness before doing a full image comparison of every image against every other one (which is O(n^2)).
There is DPEG, "The" Duplicate Media Manager, but its code is not open. It's a very old tool - I remember using it in 2003.
You could use diff to see if they are REALLY different; I guess it would remove lots of useless comparisons. Then, for the algorithm, I would use a probabilistic approach: what are the chances that they look the same? I'd base that on the amount of RGB in each pixel. You could also find some other metrics, such as luminosity and things like that.
