Data structure for pixel selections in a picture

Is there a convenient data structure for storing a pixel selection in a picture?
By pixel selection I mean a set of pixels you obtain with selection tools such as those in image-editing software (rectangle, lasso, magic wand, etc.). There can be holes, and in the general case the selection is (much) smaller than the picture itself.
The objective is to be able to save/load selections, display only the selected pixels in a separate view (the size of the bounding box), use selections in specific algorithms (typically algorithms requiring segmentation), etc. It should use as little memory as possible, since the objective is to store a lot of them in a DB.
Solutions I found so far:
a boolean array (size of the picture / 8)
a list of (uint16, uint16) coordinates => inefficient if there are many pixels in the selection
an array of lists: one list of pixel runs per line

A boolean array will take W x H bits for the raster plus extra accounting (such as ROI limits). This is roughly proportional to the area of the bounding box.
A list of pixel coordinates will take about 32 bits (2 × 16 bits) per selected pixel. This is pretty large compared to the boolean array, except when the selection is very hollow.
Another useful representation is run-length encoding (RLE), which counts the contiguous pixels row by row. This representation takes about 16 bits per run, or, said differently, 16 / n bits per pixel when the average run length is n pixels. This works fine for large filled shapes, but poorly for isolated pixels.
Finally, you can also consider storing just the outlines of the shapes, either as a list of pixels (32 bits per pixel) or as a Freeman chain code (only 3 bits per pixel), which can be a significant saving compared to the full enumeration.
As you can see, the choice is not easy, because the efficiency of the different representations depends strongly on the shape of the selection. Another important aspect is the ease with which a given representation can be used for the targeted processing of the selection.
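To make the RLE option concrete, here is a minimal sketch (function and variable names are illustrative, not from any particular library) that converts between a boolean mask and a list of per-row runs:

```python
# Sketch: run-length encoding of a boolean selection mask, row by row.
# Each run is stored as (row, start_col, length).

def rle_encode(mask):
    """mask: list of rows of 0/1 values. Returns a list of (row, start, length) runs."""
    runs = []
    for y, row in enumerate(mask):
        x = 0
        while x < len(row):
            if row[x]:
                start = x
                while x < len(row) and row[x]:
                    x += 1
                runs.append((y, start, x - start))
            else:
                x += 1
    return runs

def rle_decode(runs, width, height):
    """Rebuild the boolean mask from its runs."""
    mask = [[0] * width for _ in range(height)]
    for y, start, length in runs:
        for x in range(start, start + length):
            mask[y][x] = 1
    return mask
```

Each run costs a few integers regardless of its length, which is why this pays off for large filled shapes and loses to a plain coordinate list for scattered single pixels.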


How to count black cells without iteration (bitmap?)

Been stuck on a problem for a while, hope some of you have ideas.
Given a binary matrix of size N*M (values 0/1), come up with an approach to count the number of 1s that is more efficient than simply iterating over the matrix.
The key, in my opinion, is a bitmap. I thought about allocating a new N*M matrix and manipulating the two... but I haven't got a solution yet.
Any ideas?
From a theoretical point of view, unless the matrix has special properties, you must test all the N.M elements and this can be achieved by a loop. So this construction is optimal and unbeatable.
In practice, maybe you are looking for a way to get some speedup from a naïve implementation that handles a single element at a time. The answer will be highly dependent on the storage format of the elements and the processor architecture.
If the bits are packed 8 per byte, you can set up a lookup table of bit counts for every possible byte value. This yields a potential speedup of 8x.
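A minimal sketch of that lookup-table idea (names are illustrative): precompute the popcount of every possible byte once, then count eight pixels per step instead of one:

```python
# Sketch: counting set bits in a packed bitmap via a 256-entry lookup table.
# Each byte holds 8 pixels, so one table lookup handles 8 pixels at once.

POPCOUNT = [bin(b).count("1") for b in range(256)]  # bit count of every byte value

def count_ones(packed):
    """packed: a bytes object with 8 pixels per byte. Returns the number of 1 bits."""
    return sum(POPCOUNT[b] for b in packed)
```

The table costs only 256 entries and is built once; after that, the work per image is proportional to the number of bytes rather than the number of bits.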
If you know that the black zones are simply connected (no holes), then it is not necessary to visit their interior; a contouring algorithm will suffice, though you still have to scan the white areas. This lets you break the N.M limit and reduce the cost to Nw + Lb, where Nw is the number of white pixels and Lb the total length of the black outlines.
If in addition you know that there is a single, simply connected black zone, and you know a black outline pixel, the complexity drops to Lb, which can be significantly smaller than N.M.

Split non-rectangular image into same-sized blocks

I'm looking for an algorithm to chunk a non-rectangular image (i.e. an image with transparency) into blocks of (for example) 16x16 pixels. The blocks may overlap, but the goal is to get the smallest number of blocks.
Example
Summary
Blocks must have equal sizes
Blocks may overlap
The smallest number of blocks is the goal
Thank you in advance
This is a special case of set cover. You could try an integer program solver, but there may just be too many possible blocks. The integer program would be amenable to column generation/branch and price, but that's an advanced technique and would require some experimentation to get it right.
I think you could do pretty well with a greedy algorithm that repeatedly chooses the block covering as many still-uncovered pixels as possible, including at least one boundary pixel.
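The greedy heuristic can be sketched as follows. This is a toy implementation with illustrative names that brute-forces every block position on each round, so it is only meant for small images, and it does not use the boundary-pixel restriction mentioned above:

```python
# Sketch of the greedy heuristic: repeatedly place the BxB block that covers
# the most still-uncovered opaque pixels. Overlap between blocks is allowed;
# block positions are clamped so every block lies inside the image.

def greedy_cover(mask, B):
    """mask: 2D list of 0/1 (1 = opaque pixel). Returns a list of (top, left) blocks."""
    h, w = len(mask), len(mask[0])
    uncovered = {(y, x) for y in range(h) for x in range(w) if mask[y][x]}
    blocks = []
    while uncovered:
        best, best_gain = None, 0
        for top in range(max(1, h - B + 1)):
            for left in range(max(1, w - B + 1)):
                gain = sum(1 for y in range(top, min(top + B, h))
                             for x in range(left, min(left + B, w))
                             if (y, x) in uncovered)
                if gain > best_gain:
                    best, best_gain = (top, left), gain
        top, left = best
        blocks.append(best)
        for y in range(top, min(top + B, h)):
            for x in range(left, min(left + B, w)):
                uncovered.discard((y, x))
    return blocks
```

The standard greedy set-cover analysis gives a logarithmic approximation guarantee, which in practice is often close to optimal for blob-like shapes.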

Image size influence when comparing histograms in OpenCV

I'm using compareHist() function to compare the histograms of two images.
My question is: does the size of the image have a considerable influence on the results? Should I resize the images or normalize the histograms before comparing? I'm using the CV_COMP_CORREL method.
You have to normalize histograms before comparison.
Imagine you have non-normalized histograms, e.g. one with bin values in the interval [0..1000] and the other in [0..1]. How can you compare them? Every mathematical operation, such as addition, makes no sense, because what would the result of that addition mean?
Once the histograms are normalized, in practice the size of an image does not really matter.
By that I mean: if you have an image A, scale it, let's say, by a factor of two, and get an image B, then if you compute hist(A) and hist(B) and normalize both, the histograms will be practically the same. This is because if you scale an image by a factor k, and image A has n pixels of color c, then image B has approximately k*k*n pixels of color c (depending on the interpolation). So every color count also "scales" proportionally, and if you normalize hist(A) and hist(B), the results will be approximately the same (this also holds if your bins have sizes greater than 1, like 16, etc.).
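A quick sketch of this effect using plain NumPy instead of OpenCV (a nearest-neighbour 2x upscale, so the pixel counts grow by exactly k*k = 4): the raw histograms differ by a factor of 4, but the normalized histograms match.

```python
# Sketch: upscale an image 2x by pixel repetition and compare normalized histograms.
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32))               # toy grayscale image
big = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)   # 2x scale => 4x the pixels

hist_a = np.bincount(img.ravel(), minlength=256).astype(float)
hist_b = np.bincount(big.ravel(), minlength=256).astype(float)

hist_a /= hist_a.sum()   # normalize: each histogram now sums to 1
hist_b /= hist_b.sum()
```

With a real interpolating resize the match would be approximate rather than exact, which is the "practically the same" in the answer above.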

What is sparsity in image processing?

I am new to image processing and I don't know the use of basic terms. I know the basic definition of sparsity, but can anyone please elaborate on the definition in terms of image processing?
Well Sajid, I was actually doing image processing a few months ago, and I found a website that gave me what I thought was the best definition of sparsity.
Sparsity and density are terms used to describe the percentage of cells in a database table that are not populated and populated, respectively. The sum of the sparsity and density should equal 100%. A table that is 10% dense has 10% of its cells populated with non-zero values. It is therefore 90% sparse – meaning that 90% of its cells are either not filled with data or are zeros.
I took this in the context of on/off for black and white image processing. If many pixels were off, then the pixels were sparse.
As The Obscure Question said, sparsity is when a vector or matrix is mostly zeros. To see a real world example of this, just look at the wavelet transform, which is known to be sparse for any real-world image.
(all the black values are 0)
Sparsity has powerful impacts. It can transform the multiplication of two NxN matrices, normally an O(N^3) operation, into one whose cost scales with the number k of non-zero elements. Why? Because it's a well-known fact that for all x, x * 0 = 0.
What does sparsity mean? In the problems I've been exposed to, it means similarity in some domain. For example, natural images are largely the same color in areas (the sky is blue, the grass is green, etc). If you take the wavelet transform of that natural image, the output is sparse through the recursive nature of the wavelet (well, at least recursive in the Haar wavelet).
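A minimal sketch of why sparsity helps computationally (names are illustrative): store only the non-zero entries and touch nothing else during a matrix-vector product.

```python
# Sketch: a sparse matrix stored as {(row, col): value}. A matrix-vector
# product then touches only the k non-zero entries instead of all n*n cells.

def sparse_matvec(nonzeros, vec, n):
    """nonzeros: dict mapping (row, col) -> value; vec: length-n list of floats."""
    out = [0.0] * n
    for (r, c), v in nonzeros.items():
        out[r] += v * vec[c]   # zero entries are simply never visited
    return out
```

This is the same principle that libraries exploit with formats such as CSR: the work is proportional to the number of non-zeros, not to the full dimensions.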

Fast and simple image hashing algorithm

I need a (preferably simple and fast) image hashing algorithm. The hash value is used in a lookup table, not for cryptography.
Some of the images are "computer graphic" - i.e. solid-color filled rects, rasterized texts and etc., whereas there are also "photographic" images - containing rich color spectrum, mostly smooth, with reasonable noise amplitude.
I'd also like the hashing algorithm to be applicable to specific image parts. I mean, the image can be divided into a grid of cells, and the hash function of each cell should depend only on the contents of that cell. That way one can quickly spot whether two images have common areas (in case they're aligned appropriately).
Note: I only need to know if two images (or their parts) are identical. That is, I don't need to match similar images, there's no need in feature recognition, correlation, and other DSP techniques.
I wonder what is the preferred hashing algorithm.
For "photographic" images, just XOR-ing all the pixels within a grid cell is more or less OK. The probability of the same hash value for different images is pretty low, especially because the presence of (nearly white) noise breaks any potential symmetries. Plus, the spectrum of such a hash function looks good (any value is possible with nearly the same probability).
But such a naive algorithm may not be used with "artificial" graphics. Identical pixels, repeating patterns, and geometrical offset invariance are very common in such images. XOR-ing all the pixels will give 0 for any image with an even number of identical pixels.
Using something like CRC-32 looks somewhat promising, but I'd like to figure out something faster. I thought about an iterative formula, where each new pixel mutates the current hash value, like this:
hashValue = (hashValue * /*something*/ | newPixelValue) % /* huge prime */
Taking the result modulo a prime number should give good dispersion, so I'm leaning toward this option. But I'd like to know if there are better variants.
Thanks in advance.
Have a look at this tutorial on the pHash algorithm, http://www.hackerfactor.com/blog/index.php?/archives/432-Looks-Like-It.html, which is used to find closely matching images.
If you want to make it very fast, you should consider taking a random subset of the pixels to avoid reading the entire image. Next, compute a hash function on the sequence of values at those pixels. The random subset should be selected by a deterministic pseudo-random number generator with fixed seed so that identical images produce identical subsets and consequently identical hash values.
This should work reasonably well even for artificial images. However, if you have images that differ from each other in only a small number of pixels, this is going to give hash collisions; more iterations give better reliability. If that is the case, for instance if your image set is likely to contain pairs differing in a single pixel, you must read every pixel to compute the hash value. Taking a simple linear combination with pseudo-random coefficients would be good enough even for artificial images.
A simple version of the algorithm in Python:
import random

NUM_ITERATIONS = 100  # more iterations => fewer collisions

def hash_image(pixels):
    # pixels: flat sequence of pixel values
    generator = random.Random(2847)  # fixed seed => same pixel subset on every call
    value = 0
    for _ in range(NUM_ITERATIONS):
        pixel = pixels[generator.randrange(len(pixels))]
        value += pixel * generator.randrange(1 << 31)  # pseudo-random coefficient
    return value
