Read RGB image into binary and display it as RGB in Matlab - algorithm

This question is based on an earlier one, Understanding image steganography by LSB substitution method.
In order to make the code efficient and reduce the mean square error (MSE), the suggestion was: "read the file as is and convert it to bits with de2bi(fread(fopen(filename)), 8). Embed these bits to your cover image with the minimum k factor required, probably 1 or 2. When you extract your secret, you'll be able to reconstruct the original file." This is what I have been trying, but I am going wrong somewhere because I am not getting any display. However, the MSE has indeed reduced. Basically, I am confused about how to convert the image to binary, perform the algorithm on that data, and display the image after extraction.
Can somebody please help?

I've made some modifications to your code to get this to work regardless of what the actual image is. However, both images need to be either colour or grayscale. There were also some errors in your code that would not allow me to run it on my version of MATLAB.
Firstly, you aren't reading in the images properly. You're opening up a byte stream for the images, then using imread on the byte stream to read in the image. That's wrong - just provide a path to the actual file.
Secondly, the images are already in uint8, so you can perform the permuting and shifting of bits natively on this.
The rest of your code is the same as before, except for the image resizing: you don't need to specify the number of channels. Also, there was a syntax error with bitcmp. I used 'uint8' instead of the value 8, as my version of MATLAB requires that you specify a string for the assumed data type. I'm assuming the value 8 meant 8 bits, so it makes sense to put 'uint8' here.
I'll also read your images directly from Stack Overflow. I'll assume the dinosaur image is the cover while the flower is the message:
%%% Change
x = imread('https://i.stack.imgur.com/iod2d.png'); % cover message
y = imread('https://i.stack.imgur.com/Sg5mr.png'); % message image
n = input('Enter the no of LSB bits to be substituted- ');
%%% Change
S = uint8(bitor(bitand(x,bitcmp(2^n-1,'uint8')),bitshift(y,n-8))); %Stego
E = uint8(bitand(255,bitshift(S,8-n))); %Extracted
origImg = double(y); %message image
distImg = double(E); %extracted image
[M N d] = size(origImg);
distImg1=imresize(distImg,[M N]); % Change
figure(1),imshow(x);title('1.Cover image')
figure(2),imshow(y);title('2.Message to be hidden')
figure(3),imshow(S,[]);title('3.Steganographic image')
figure(4),imshow(E,[]); title('4.Extracted image');
This runs for me and I manage to reconstruct the message image. Choosing the number of bits to be about 4 gives you a good compromise between the cover and message image.

Loading the byte stream instead of the pixel array of the secret will result in a smaller payload. How much smaller depends on the image format and how repetitive the colours are.
imread() requires a filename and loads a pixel array if said filename is a valid image file. Loading the byte stream of the file and passing that to imread() makes no sense. What you want is this:
% read in the byte stream of a file
fileID = fopen(filename);
secretBytes = fread(fileID);
fclose(fileID);
% write it back to a file
fileID = fopen(filename, 'w');
fwrite(fileID, secretBytes);
fclose(fileID);
Note that the cover image is loaded as a pixel array, because you'll need to modify it.
The size of your payload is length(secretBytes) * 8 and this must fit in your cover image. If you decide to embed k bits per pixel, for all your colour planes, the following requirement must be met
length(secretBytes) * 8 <= prod(size(coverImage)) * k
If you want to embed in only one colour plane, regardless of whether your cover medium is RGB or greyscale, you need to modify that to
length(secretBytes) * 8 <= size(coverImage,1) * size(coverImage,2) * k
If this requirement isn't met (a minimal capacity check is sketched after this list), you can choose to
stop the process
ask the user for a smaller file to embed
increase k
include more colour planes, if available
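For illustration, a minimal capacity check along those lines (reusing the variable names from the snippets in this answer) might be:
payload = length(secretBytes) * 8;   % bits required by the secret
capacity = numel(coverImage) * k;    % bits available at k bits per pixel, all planes
if payload > capacity
    error('Payload of %d bits exceeds capacity of %d bits.', payload, capacity);
end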
The following is a prototype for embedding in one colour plane in the least significant bit only (k = 1).
HEADER_LEN = 24;
coverImage = imread('lena.png');
secretBytes = uint8('Hello world'); % this could be any byte stream
%% EMBEDDING
coverPlane = coverImage(:,:,1); % this assumes an RGB image
bits = de2bi(secretBytes,8)';
bits = [de2bi(numel(bits), HEADER_LEN) bits(:)'];
nBits = length(bits);
coverPlane(1:nBits) = bitset(coverPlane(1:nBits),1,bits);
coverImage(:,:,1) = coverPlane;
%% EXTRACTION
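% note: coverPlane was modified in place at the embedding step, so it already holds the stego plane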
nBits = bi2de(bitget(coverPlane(1:HEADER_LEN),1));
extBits = bitget(coverPlane(HEADER_LEN+1:HEADER_LEN+nBits),1);
extractedBytes = bi2de(reshape(extBits',8,length(extBits)/8)')';
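As a quick round-trip check (assuming the embedding section above has just run), the extracted bytes should reproduce the original text:
char(extractedBytes) % should display 'Hello world'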
Along with your message bytes you have to embed the length of the secret, so the extractor knows how many bits to extract.
If you embed with k > 1 or in more than one colour plane, the logic becomes more complicated and you have to be careful how you implement the changes.
For example, you can choose to embed in one colour plane at a time until you run out of bits to hide, or you can flatten the whole pixel array with coverImage(:), which will run through all the pixels of one colour plane before moving on to the next, until you run out of bits.
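As a rough sketch of the flattening approach (reusing bits and nBits from the prototype above; note that coverImage(:) is column-major, so it runs plane by plane):
flat = coverImage(:);                               % all red values first, then green, then blue
flat(1:nBits) = bitset(flat(1:nBits), 1, bits(:));  % bits(:) keeps both operands column-shaped
coverImage = reshape(flat, size(coverImage));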
If you embed with k > 1, you have to pad your bits vector with 0s until its length is divisible by k. Then you can combine your bits in groups of k with
bits = bi2de(reshape(bits',k,length(bits)/k)')';
And to embed them, you want to resort back to using bitand() and bitor().
coverPlane(1:nBits) = bitor(bitand(coverPlane(1:nBits), bitcmp(2^k-1,'uint8')), bits);
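Putting these pieces together, a minimal sketch for k = 2 in one colour plane might look like this (the 24-bit header from the k = 1 prototype is omitted for brevity; coverPlane and secretBytes are as defined above):
k = 2;
bits = de2bi(secretBytes, 8)';
bits = bits(:)';
bits = [bits zeros(1, mod(-length(bits), k))];        % pad with 0s to a multiple of k
groups = bi2de(reshape(bits', k, length(bits)/k)')';  % combine the bits in groups of k
n = length(groups);
coverPlane(1:n) = bitor(bitand(coverPlane(1:n), bitcmp(2^k-1, 'uint8')), uint8(groups));
% extraction: recover the k-bit groups, then unpack them back into bits
extGroups = double(bitand(coverPlane(1:n), 2^k-1));
extBits = reshape(de2bi(extGroups, k)', 1, []);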
There are more details, like extracting exactly 24 bits for the message length, and I can't stress enough that you have to think very carefully about how you implement all of these things. You can't just stitch together parts from different code snippets and expect everything to do what you want it to do.

Related

A proper way to convert 2D Array into RGB or GrayScale image for precision difference

I have a 2D CNN model where I perform a classification task. My images all come from sensor data after conversion.
So, normally, my way is to convert them into images using the following approach:
newsize = (9, 1000)
pic = acc_normalized[0]
img = Image.fromarray(np.uint8(pic*255), 'L')
img = img.resize(newsize)
image_path = "Images_Accel"
image_name = "D1." + str(2)
img.save(f"{image_path}/{image_name}.jpeg")
This is what I obtain:
However, their precision is sort of important. For instance, some of the numerical values are like:
117.79348187327987 or 117.76568758022673.
As you can see in the line above, they differ only in the decimal digits; when I use uint8, both become 117 when converted to image pixels, so they look the same, right? But I'd like to make them different. In some cases, the difference is even at the 8th or 10th decimal place.
So when I try to use mode F in the Image.fromarray line and save as .jpeg, it gives me an error saying that PIL cannot write mode F to JPEG.
Then I tried to first convert them to RGB like the following:
img = Image.fromarray(pic, 'RGB')
I am not wrapping pic in np.float32 or multiplying it by 255 here; I use it as is. Then I convert this image to grayscale. This is what I got for the RGB image:
After converting RGB into grayscale:
As you can see, there seems to be a critical difference between the first picture and the last one. So, what would be the proper way to use them in 2D CNN classification? Or should I convert them into RGB, choose grayscale in the CNN implementation, and use a channel of 1? My image dimensions are 1000x9. I can even change this dimension, e.g. to 250x36 or 100x90; it doesn't matter too much. By the way, in the CNN network I am able to get more than 90% test accuracy when I use the first type of image.
The main problem here is which image conversion method will let me take those precision differences across the pixels into account. Would you give me some idea?
---- EDIT -----
Using .tiff format I made some quick comparisons.
First of all, my data looks like the following;
So, if I convert this first reading into an image using the following code, where I use np.float64 and mode L, it gives me a grayscale image:
newsize = (9, 1000)
pic = acc_normalized[0]
img = Image.fromarray(np.float64(pic), 'L')
img = img.resize(newsize)
image_path = "Images_Accel"
image_name = "D1." + str(2)
img.save(f"{image_path}/{image_name}.tiff")
It gives me this image;
Then, the first 15x9 matrix looks like the following image. The contradiction is that if you take a closer look at the numerical array, for instance the (1,4) element, it is completely black where the numerical value is 0.4326132099074307. For grayscale images, black means close to 0 and white means close to 1. However, if the conversion were a row-wise operation, there is another value closer to 0 and I would expect to see black at the (1,5) location instead. If it were a column-wise operation, there would again be something wrong. As I said, this data has already been normalized and varies between 0 and 1. So, what is the logic by which the array is converted into an image? What kind of operation does it perform?
Secondly, if I first get an RGB image of the data and then convert it into a grayscale image, why am I not getting exactly the same image as the one I obtained first? Shouldn't the image coming from direct grayscale conversion (mode L, np.float64) and the one coming from the RGB-based route (first RGB, then converted to grayscale) be the same? There is a difference in the black and white pixels of those images, and I do not know why.
---- EDIT 2 ----
.tiff image with F mode and np.float32 gives the following;
I don't really understand your question, but you seem to want to store image differences that are less than 1, i.e. less than the resolution of integer values.
To do so, you need to use an image format that can store floats. JPEG, PNG, GIF, TGA and BMP cannot store floats. Instead, use TIFF, EXR or PFM formats which can handle floats.
Alternatively, you can create 16-bit PNG images wherein each pixel can store values in the range 0..65535. So, say the maximum difference you wanted to store was 60: you could calculate the difference, multiply it by 1000, round it to make an integer in the range 0..60000, and store that as a 16-bit PNG.
You could record the scale factor as a comment within the image if it is variable.
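As a rough sketch of that scaling idea (here in MATLAB to match the rest of this page; diffs stands for a hypothetical array of float differences no larger than 60):
scaled = uint16(round(diffs * 1000));   % 0..60 floats become 0..60000 integers
imwrite(scaled, 'diffs16.png', 'BitDepth', 16);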

Detecting black spots on image - Image Segmentation

I'm trying to segment an image with Color-Based Segmentation Using K-Means Clustering. I already created 3 clusters, and the cluster number 3 is like this image:
This cluster has 3 different colors. And I want to only display the black spots of this image. How can I do that?
The image is 500x500x3 uint8.
Those "holes" look like they are well defined with the RGB values all being set to 0. To make things easy, convert the image to grayscale, then threshold the image so that any intensities less than 5 set the output to white. I use a threshold of 5 instead to ensure that we capture object pixels in their entirety taking variations into account.
Once that's done, you can use the function bwlabel from the image processing toolbox (I'm assuming you have it as you're dealing with images) where the second output tells you how many distinct white objects there are.
Something like this could work:
im = imread('http://i.stack.imgur.com/buW8C.png');
im_gray = rgb2gray(im);
holes = im_gray < 5;
[~,count] = bwlabel(holes);
I read in the image directly from StackOverflow, convert the image to grayscale, then determine a binary mask where any intensity that is less than 5, set the output to white or true. Once we have this image, we can use bwlabel's second output to determine how many objects there are.
I get this:
>> count
count =
78
As an illustration, if we show the image where the holes appear, I get this:
>> imshow(holes);
The amount of "holes" is a bit misleading though. If you specifically take a look at the bottom right of the image, there are some noisy pixels that don't belong to any of the "holes" so we should probably filter that out. As such, a simple morphological opening filter with a suitable sized structure will help remove spurious noisy islands. As such, use imopen combined with strel to define the structuring element (I'll choose a square) as well as a suitable size of the structuring element. After, use the structuring element and filter the resulting image and you can use this image to count the number of objects.
Something like this:
%// Code the same as before
im = imread('http://i.stack.imgur.com/buW8C.png');
im_gray = rgb2gray(im);
holes = im_gray < 5;
%// Further processing
se = strel('square', 5);
holes_process = imopen(holes, se);
%// Back to where we started
[~,count] = bwlabel(holes_process);
We get the following count of objects:
>> count
count =
62
This seems a bit more realistic. I get this image now instead:
>> imshow(holes_process);

Comment image's "source code"

When you open an image in a text editor you get some characters which don't really make sense (at least not to me). Is there a way to add comments to that text, so the file would not appear damaged when opened with an image viewer?
So, something like this:
Would turn into this:
The way to go is setting metadata fields if your image format supports any.
For example, for a PNG you can set a comment field when exporting the file or with a separate tool like exiftool:
exiftool -comment="One does not simply put text into the image data" test.png
If the purpose of the text is to ensure ownership then take a look at digital watermarking.
If you are looking to actually encode information in your image you should use steganography ( https://en.wikipedia.org/wiki/Steganography )
The wiki article runs you through the basics and shows a picture of a cat hidden in a picture of trees as an example of how you can hide information. In the case of hiding text you can do the following:
Encoding
Come up with your phrase: for argument's sake I'll use the word Hidden
Convert that text to a numeric representation - for simplicity I'll assume ASCII conversion of characters, but you don't have to
"Hidden" = 72 105 100 100 101 110
Convert the numeric representation to Binary
72 = 01001000 / 105 = 01101001 / 100 = 01100100 / 101 = 01100101 / 110 = 01101110
For each letter, convert the 8-bit binary representation into four 2-bit values that we shall call mA, mR, mG, mB, for reasons that will become clear shortly
72 = 01 00 10 00 => 1 0 2 0 = mA mR mG mB
Open an image file for editing: I would suggest using C# to load the image and then use Get/Set Pixels to edit them (How to manipulate images at the pixel level in C# )
Use the last 2 bits of each colour channel of each pixel to encode your message. For example, to encode H in the first pixel of an image you can use the C# code at the end of these instructions
Once all letters of the word - one per pixel - have been encoded in the image, you are done.
Decoding
Use the same basic process in reverse.
You walk through the image one pixel at a time
You take the 2 least significant bits of each color channel in the pixel
You concatenate the LSBs together in alpha, red, green, blue order.
You convert the concatenated bits into an 8 bit representation and then convert that binary form to base 10. Finally, you perform a look up on the base 10 number in an ASCII chart, or just cast the number to a char.
You repeat for the next pixel
The thing to remember is that the technique I described allows you to encode information in the image without a human observer noticing, because it only manipulates the last 2 bits of each colour channel of a pixel, and human eyes cannot really tell the difference between colours in the range of [(252,252,252,252) => (255,255,255,255)].
But as food for thought, I will mention that a computer can, with the right algorithms, and there is active research into improving the ability of computers to pick this sort of thing out.
So if you only want to put in a watermark then this should work, but if you want to actually hide something you have to encrypt the message and then perform the steganography on the encrypted binary. Since encrypted data is MUCH larger than plain-text data, it will require an image with far more pixels.
Here is the code to encode H into the first pixel of your image in C#.
//H=72 and needs the following message Alpha, Red, Green, Blue components
int mA = 1;
int mR = 0;
int mG = 2;
int mB = 0;
Bitmap myBitmap = new Bitmap("YourImage.bmp");
//pixel 0,0 is the first pixel
Color pixelColor = myBitmap.GetPixel(0, 0);
//clear the 2 least significant bits of each channel by ANDing with 252, then OR in the message bits
pixelColor = Color.FromArgb((pixelColor.A & 252) | mA,
                            (pixelColor.R & 252) | mR,
                            (pixelColor.G & 252) | mG,
                            (pixelColor.B & 252) | mB);
myBitmap.SetPixel(0, 0, pixelColor);

How to average multiple images using Octave and matrix manipulation to reduce noise?

UPDATE
Here is my code that is meant to add up the two matrices using element-by-element addition and then divide by two.
function [ finish ] = stackAndMeanImage (initFrame, finalFrame)
cd 'C:\Users\Disc-1119\Desktop\Internships\Tracking\Octave\highway\highway (6-13-2014 11-13-41 AM)';
pkg load image;
i = initFrame;
f = finalFrame;
astr = num2str(i);
tmp = imread(astr, 'jpg');
d = f - i
for a = 1:d
a
astr = num2str(i + 1);
read_tmp = imread(astr, 'jpg');
read_tmp = rgb2gray(read_tmp);
tmp = tmp + read_tmp;
tmp = tmp / 2;
end
imwrite(tmp, 'meanimage.JPG');
finish = 'done';
end
Here are two example input images
http://imgur.com/5DR1ccS,AWBEI0d#1
And here is one output image
http://imgur.com/aX6b0kj
I am really confused as to what is happening. I have not implemented what the other answers have said yet though.
OLD
I am working on an image processing project where I am manually choosing images that are 'empty', i.e. contain only the background, so that my algorithm can compute the differences and then do some more analysis. I have a simple piece of code that computes the mean of two images, which I have converted to grayscale matrices, but it only works for two images: when I take the mean of two, then take that mean and average it with the next image, and repeat, I end up with a washed-out white image that is absolutely useless. You can't even see anything.
I found that there is a function in MATLAB called imfuse that is able to average images. I was wondering if anyone knew the process that imfuse uses to combine images; I am happy to implement it in Octave. Or does anyone know of, or has anyone already written, a piece of code that achieves something similar? Again, I am not asking anyone to write code for me, just wondering what the process is and whether there are pre-existing functions out there, which I have not found in my research.
Thanks,
AeroVTP
You should not end up with a washed-out image. Instead, you should end up with an image which is, technically speaking, temporally low-pass filtered. What this means is that half of the information content is from the last image, one quarter from the second-to-last image, one eighth from the third-to-last image, etc.
Actually, the effect in a moving image is similar to a display with slow response time.
If you are ending up with a white image, you are doing something wrong. nkjt's guess of type challenges is a good one. Another possibility is that you have forgotten to divide by two after summing the two images.
One more thing... If you are doing linear operations (such as averaging) on images, your image intensity scale should be linear. If you just use the RGB values or some grayscale values simply calculated from them, you may get bitten by the nonlinearity of the image. This property is called the gamma correction. (Admittedly, most image processing programs just ignore the problem, as it is not always a big challenge.)
As your project calculates differences of images, you should take this into account. I suggest using linearised floating point values. Unfortunately, the linearisation depends on the source of your image data.
On the other hand, averaging is often the most efficient way of reducing noise. So there you are on the right track, assuming the images are similar enough.
However, after having a look at your images, it seems that you may actually want to do something else than to average the image. If I understand your intention correctly, you would like to get rid of the cars in your road cam to give you just the carless background which you could then subtract from the image to get the cars.
If that is what you want to do, you should consider using a median filter instead of averaging. What this means is that you take for example 11 consecutive frames. Then for each pixel you have 11 different values. Now you order (sort) these values and take the middle (6th) one as the background pixel value.
If your road is empty most of the time (at least 6 frames of 11), then the 6th sample will represent the road regardless of the colour of the cars passing your camera.
If you have an empty road, the result from the median filtering is close to averaging. (Averaging is better with Gaussian white noise, but the difference is not very big.) But your averaging will be affected by white or black cars, whereas median filtering is not.
The problem with median filtering is that it is computationally intensive. I am very sorry I speak very broken and ancient Octave, so I cannot give you any useful code. In MATLAB or PyLab you would stack, say, 11 images into an M x N x 11 array and then use a single median command along the depth axis. (When I say intensive, I do not mean it couldn't be done in real time with your data. It can, but it is much more complicated than averaging.)
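For what it's worth, the stacking idea might look like this minimal MATLAB/Octave sketch (frames being a hypothetical cell array of equally sized grayscale frames):
stack = cat(3, frames{:});      % M x N x 11 array of consecutive frames
background = median(stack, 3);  % per-pixel median along the depth axis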
If you have really a lot of traffic, the road is visible behind the cars less than half of the time. Then the median trick will fail. You will need to take more samples and then find the most typical value, because it is likely to be the road (unless all cars have similar colours). There it will help a lot to use the colour image, as cars look more different from each other in RGB or HSV than in grayscale.
Unfortunately, if you need to resort to this type of processing, the path is slightly slippery and rocky. Average is very easy and fast, median is easy (but not that fast), but then things tend to get rather complicated.
Another BTW that came into my mind: if you want to have a rolling average, there is a very simple and effective way to calculate it with an arbitrary length (an arbitrary number of frames to average):
# N is the number of images to average
# P[i] are the input frames
# S is a sum accumulator (sum of N frames)
# calculate the sum of the first N frames
S <- 0
I <- 0
while I < N
S <- S + P[I]
I <- I + 1
# save_img() saves an averaged image
while there are images to process
save_img(S / N)
S <- -P[I-N] + S + P[I]
I <- I + 1
Of course, you'll probably want to use for loops and the += and -= operators, but the idea is there. For each frame you only need one subtraction, one addition, and one division by a constant (which can be turned into a multiplication, or even a bitwise shift in some cases, if you are in a hurry).
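A rough Octave translation of that pseudocode, assuming frames is a hypothetical cell array of grayscale images stored as doubles in the 0..255 range, might be:
N = 11;                          % number of frames to average
S = zeros(size(frames{1}));      % sum accumulator
for i = 1:N                      % sum of the first N frames
  S = S + frames{i};
end
for i = N+1:numel(frames)
  imwrite(uint8(S / N), sprintf('avg%04d.jpg', i-N));  % save an averaged image
  S = S - frames{i-N} + frames{i};                     % slide the window one frame
end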
I may have misunderstood your problem, but I think what you're trying to do is the following: read all images into a matrix and then use mean(). This is provided that you are able to fit them all in memory.
function [finish] = stackAndMeanImage (ini_frame, final_frame)
pkg load image;
dir_path = 'C:\Users\Disc-1119\Desktop\Internships\Tracking\Octave\highway\highway (6-13-2014 11-13-41 AM)';
nframes = final_frame - ini_frame;
imgs = cell (1, 1, nframes);
## read all images into a cell array
current_frame = ini_frame;
for n = 1:nframes
fname = fullfile (dir_path, sprintf ("%i", current_frame++));
imgs{n} = rgb2gray (imread (fname, "jpg"));
endfor
## create 3D matrix out of all frames and calculate mean across 3rd dimension
imgs = cell2mat (imgs);
avg = mean (imgs, 3);
## mean returns double precision so we cast it back to uint8 after
## rescaling it to range [0 1]. This assumes that images were all
## originally uint8, but since they are jpgs, that's a safe assumption
avg = im2uint8 (avg ./255);
imwrite (avg, fullfile (dir_path, "meanimage.jpg"));
finish = "done";
endfunction

changing bits per pixel in MATLAB

How does one change the bits per pixel of an image loaded into MATLAB? I use the file dialog and the imread function to load the image into a matrix. I just need to change that image's bits per pixel, giving the user the ability to choose anywhere from 1 bit to 8 bits. I know how to let the user choose the number of bits; I just don't know how to apply the change. How does one do that? (By the way, I'm on MATLAB R2012a.)
The way I understand it, you want to do something like this:
imdata = rgb2gray(imread('ngc6543a.jpg')); % convert the sample image to a grayscale uint8 image
figure('name', 'Before');
imagesc(imdata);
colormap('gray');
numberOfBits = input('Enter number of bits:\n');
maxValue = 2^numberOfBits - 1;
newImage = imdata * (maxValue / 256);
figure('name', 'After');
imagesc(newImage);
colormap('gray');
The image ngc6543a.jpg is a sample image that ships with MATLAB, so you can run this code immediately as it is.
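If you then want to save or display the reduced image at the usual 8-bit range, one possible addition (reusing newImage and maxValue from above) is:
imwrite(newImage * (255 / maxValue), 'reduced.png');  % stretch the quantized levels back to 0..255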
This documentation page contains lots of information about what you want to do: Reducing the Number of Colors in an Image.
A simple example is the following (pretty much taken straight from that page), which will dither the image and produce a colour map (slightly different to the OP's answer - not sure which one you want to do):
>> RGB = imread('peppers.png');
>> [x,map] = rgb2ind(RGB, 2); % Reduce to a 2-colour image
>> imagesc(x)
>> colormap(map)
You should choose the number of colours based on the maximum number of colours that the given number of bits can hold.
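For instance, to tie this to a bits-per-pixel choice (numberOfBits here is an assumed variable, as in the earlier snippet):
>> numberOfBits = 4;                          % anywhere from 1 to 8
>> [x,map] = rgb2ind(RGB, 2^numberOfBits);    % at most 2^numberOfBits colours
>> imagesc(x)
>> colormap(map)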
