If the input is an image, the output should be a scrambled or jumbled version of the original. There is no need to divide the image into sub-blocks and scramble each region individually; the entire image can be scrambled in one operation. How can I achieve this in MATLAB?
If what you mean by "scrambled" is randomly re-arranging the pixels in your image, you can create a random permutation vector that is as long as the total number of pixels in your image, reshape it so that it's the same size as the image, then use it to index into the image. Specifically, use randperm to help you do this. As an example, let's use cameraman.tif, which is on MATLAB's system path. Do something like this:
im = imread('cameraman.tif');   % load the example image
vec = randperm(numel(im));      % random permutation of all pixel indices
vec = reshape(vec, size(im));   % shape the permutation like the image
out = im(vec);                  % the scrambled image
If you want to undo the scrambling, vec contains the column-major indices of where the pixels need to be remapped to in the original image. Therefore, simply allocate a blank image, then use vec to index into this blank image and copy out back in. This takes the pixels of the scrambled image and writes them back to the locations indexed by vec, which undoes the scrambling we did before. Therefore:
reconstruct = zeros(size(im), class(im));   % blank image of the same size and class
reconstruct(vec) = out;                     % write each pixel back to where it came from
(The original image, the scrambled image, and the reconstructed image are shown in the original post. The scrambled version looks like pure noise, while the reconstruction matches the original exactly.)
The key to scrambling and unscrambling the image is to have that vector generated by randperm. If you don't have this, then there's no way for you to reconstruct the original image.
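If you'd rather not store the whole vector, one option (a sketch, assuming a secret numeric seed is acceptable as the key instead) is to seed MATLAB's random number generator before calling randperm, so the identical permutation can be regenerated at unscrambling time:

key = 12345;                                   % secret numeric seed (hypothetical value)
rng(key);                                      % seed the generator before scrambling
vec = reshape(randperm(numel(im)), size(im));
out = im(vec);                                 % scramble

rng(key);                                      % re-seed to regenerate the identical permutation
vec2 = reshape(randperm(numel(im)), size(im)); % vec2 is equal to vec
reconstruct = zeros(size(im), class(im));
reconstruct(vec2) = out;                       % unscramble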
Good luck!
I am trying to find the algorithm, or even the general idea, behind some of the effects used in Photoshop. Specifically, the palette knife effect that simplifies the colors of an image, turning a photograph into something like a painting made of flat color regions (example images are shown in the original post).
I want each group of similarly colored pixels to turn into a simple block of one or two colors (in real time), as happens in Photoshop. Any ideas for a method to do this are appreciated.
Following tucuxi's suggestion, I can run a clustering algorithm such as k-means to pick K main colors for each image (each frame of the video) and then change each pixel's color to the closest of the K representatives. I am going to put the code here, and I appreciate any suggestions for improving it.
Since you want to choose representative colors, you can proceed as follows:
choose K colors from among N total present in the image
for each pixel in the image, replace it with its nearest color within the K chosen
To achieve step 1, you can run a clustering algorithm such as k-means over the color space. In a W×H image, you have W×H pixels, each with a color. You choose K random colors to act as centroids, assign the closest pixels to each, re-compute the centroids, and after a while you finish up with K different colors that more or less represent the most important colors of the image (in terms of being not too far from all the others). Note that this is only one possible clustering algorithm; a lot of literature exists on alternatives and their relative merits.
Step 2 is comparatively much easier: for each original pixel, calculate the distance to each of the K chosen colors, and replace it with the closest one.
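As a rough sketch of both steps in MATLAB (assuming the Statistics and Machine Learning Toolbox's kmeans is available; K = 8 and the demo image are arbitrary choices):

im = im2double(imread('peppers.png'));            % any RGB image
pixels = reshape(im, [], 3);                      % one row per pixel, columns are R,G,B
K = 8;                                            % number of representative colors (assumption)
[idx, centroids] = kmeans(pixels, K);             % step 1: cluster the color space
quantized = reshape(centroids(idx, :), size(im)); % step 2: replace each pixel by its centroid
imshow(quantized);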
I have an RGB image that I converted to binary to do some processing (I'm working on watermarking), and during watermark extraction I want to convert it back to its original RGB colors. Is that possible? I did some research on the internet but unfortunately couldn't find a good answer. Thank you in advance.
Binarization is a lossy operation that discards information. There is no way to hallucinate this information back, unless you use additional external input, e.g. the original image.
You can always use your binary image as a mask, for example for operating on the original RGB image.
If you want an RGB-formatted binary image, just copy the binary image into the three RGB channels. You will need to decide what RGB value the nonzero binary pixels are converted to, e.g. 1, 255, the class maximum, etc.
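A minimal sketch of both ideas (assuming a uint8 RGB image; imbinarize needs a reasonably recent release, older ones have im2bw):

rgb = imread('peppers.png');
bw = imbinarize(rgb2gray(rgb));                % logical mask, same height and width

% Use the binary image as a mask on the original RGB image:
masked = rgb .* uint8(repmat(bw, [1 1 3]));    % non-mask pixels become black

% Or build an RGB-formatted version of the binary image itself:
bwRGB = uint8(repmat(bw, [1 1 3])) * 255;      % nonzero pixels become (255,255,255)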
I have two images, I and J. I take X = fft(I) and Y = fft(J) to get their Fourier transforms, then I take the phase of X and the magnitude of Y.
The problem is that I need to combine the phase of X with the magnitude of Y to form a new image, and use ifft to reconstruct this new image. How can I do that in MATLAB?
The magnitude and phase of the 2D Fourier spectrum can be expressed as the absolute value and phase angle of a complex number. For images in MATLAB, this is a 2D complex array. You can create a 2D complex array merging the magnitude and phase like this:
FreqDomain = abs(Y).*exp(1i*angle(X));
and feed it back into ifft2.
Note: use fft2 to calculate 2D FFTs of images.
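Putting it together, a minimal end-to-end sketch (assuming I and J are same-size grayscale images; the two demo images here are both 256-by-256):

I = im2double(imread('cameraman.tif'));
J = im2double(imread('rice.png'));

X = fft2(I);                                   % 2D FFTs, not 1D fft
Y = fft2(J);

FreqDomain = abs(Y) .* exp(1i * angle(X));     % magnitude of Y, phase of X
newImage = real(ifft2(FreqDomain));            % small imaginary residue is numerical noise

imshow(newImage, []);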
Edit: in fact there is a full example of exactly what you are asking on this page: http://matlabgeeks.com/tips-tutorials/how-to-do-a-2-d-fourier-transform-in-matlab/
I'm trying to implement a paper, but I got stuck on a part called the elevation filter! Here is that part of the article (shown as an image in the original post):
Does anyone know how to write it in MATLAB?
What you are asking is strongly related to what in image processing is called the watershed transform (see also the Wikipedia article).
In the watershed approach, a grayscale image is seen as a topographic relief that is flooded with water. Doing so, different regions of the image can be separated according to the way the different basins join as they fill with water.
If watershed is your final aim, the Image Processing Toolbox has an implementation of it: watershed.
In principle, in your problem, given a local minimum q, the value height(p) for p close to q solves the minimization problem

height(p) = inf_{\gamma} \int_{\gamma} || \nabla I(\gamma(s)) || ds

where the infimum is taken over curves \gamma joining p and q, and I is your image.
For more mathematical details, you can consider, for instance, this paper.
For implementation details, MATLAB's implementation, for instance, should be available as MEX code.
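As a minimal usage sketch (assuming the Image Processing Toolbox; applying watershed to the gradient magnitude, which is the usual choice of relief for segmentation):

img = im2double(imread('cameraman.tif'));
g = imgradient(img);     % gradient magnitude as the topographic relief
L = watershed(g);        % label matrix: 0 on ridge lines, 1..K inside basins
imshow(label2rgb(L));    % visualize the basins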
You can make use of the Image Processing Toolbox function watershed to compute your elevation. I'll start with a simple example: a 300-by-300 matrix to represent your height data, shown in the first row of the figure below:
height = repmat(uint8([1:100 99:-1:50 51:1:150 149:-1:100]), 300, 1);  % a two-valley profile repeated down 300 rows
Each row has the same profile in this case. We can compute a basin matrix using watershed:
basin = watershed(height)+1;  % label matrix of basins; +1 shifts the 0-labeled watershed lines to 1
This is shown in the second row of the figure below. Note that the crest pixels are assigned a default value of 1 because they fall on the edges of the basins (watershed labels them 0, and the +1 maps them to 1). You'll have to decide for yourself how you want to deal with these; here I end up just lumping all the edges together into a pseudo-basin.
Next, we can use accumarray to compute a minima matrix (shown in the third row of the figure below) that maps the minimum value of each basin to all the points in that basin:
minValues = accumarray(basin(:), height(:), [], @min);  % minimum height within each basin
minima = minValues(basin);                              % map each basin minimum back to every pixel
And finally, the elevation can be computed like so (result shown in last row of figure below):
elevation = height - minima;  % height of each pixel above its basin's minimum
(The figure, with rows showing the height, basin, minima, and elevation matrices as images, appears in the original post.)
What's the best and most flexible algorithm to detect any black (or otherwise colored) pixel in a given image file?
Say I'm given an image file that could have, say, a blue background. Any non-blue pixel, including a white pixel, is counted as a "mark". The function returns true if there are X pixels that deviate from the rest by a certain threshold.
I thought it'd be fastest to simply iterate through every pixel and see if its color matches the last one. But if pixel (0,0) is the deviant one and every other pixel is the same color (and I want to allow at least a couple of deviant pixels before considering an image to be "marked"), this won't work, or won't be terribly efficient.
M1: scan the entire image.
M2: XOR the image with the color you are comparing against. For example, if the background is (r,g,b) = (112,233,35), XOR the red channel with 112, the green channel with 233, and the blue channel with 35 (assuming a 24-bit image, i.e. 8 bits per channel). In the resulting image, matching pixels become zero and deviating pixels show up bright, so find the brightest pixels and go back to those locations in the original image to check.
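A sketch of M2 in MATLAB (assuming a uint8 RGB image and a known background color; the threshold and pixel-count values are arbitrary assumptions to tune):

im = imread('peppers.png');
bg = uint8(cat(3, 112, 233, 35));              % background color (example values from above)
diffImg = bitxor(im, repmat(bg, size(im,1), size(im,2)));
deviation = max(diffImg, [], 3);               % per-pixel brightness of the XOR result
marked = nnz(deviation > 32) >= 10;            % "marked" if at least X = 10 pixels deviate strongly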
You could always check nearby pixels: if pixel (3,3) is black, then check (2,2), (2,3), (2,4), (3,2), (3,4) and so on, and have a threshold for how far away in color value those nearby pixels can be before the black pixel counts as a deviation. The same goes for any color. In a normal image, every pixel should have some neighboring pixels with similar colors; there are very few exceptions to this in medium- to high-resolution pictures.
Edit: a threshold that works can differ from image to image depending on a multitude of things, so you'll likely have to fine-tune it to get good results.
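A rough sketch of the neighborhood idea (assuming a grayscale image and ordfilt2 from the Image Processing Toolbox; a pixel counts as deviant only if it differs from all eight of its neighbors by more than a threshold, whose value here is a guess to fine-tune):

im = double(rgb2gray(imread('peppers.png')));
thresh = 30;                                   % needs per-image fine-tuning (assumption)

nhood = [1 1 1; 1 0 1; 1 1 1];                 % the 8 neighbors, excluding the center
maxN = ordfilt2(im, 8, nhood);                 % largest neighbor value at each pixel
minN = ordfilt2(im, 1, nhood);                 % smallest neighbor value at each pixel

% Deviant if brighter or darker than every neighbor by more than thresh:
deviant = (im - maxN > thresh) | (minN - im > thresh);
numDeviant = nnz(deviant);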