Unscrambling rotation of JPEG concentric pixel blocks in an image

As part of a 'Capture The Flag' challenge, the attached JPEG was scrambled to obscure its content. The image ("flagpoles.jpg") is 1600 by 1600 pixels, and the concentric lines appear to be blocks 10 pixels wide (it resembles a Frank Stella painting). It appears the original image has been split into four portions which are arranged symmetrically around the center.

I have been trying to write a Python script to work through the pixels and unscramble the concentric squares. My attempts keep producing one of two useless results: either no change at all, or further scrambling. I think this might be because I am working on the entire image, and it might be better to try to unscramble only part of it.

Here is the code I have. At the moment it only processes half of the pixels, because I am trying to match up portions of the picture with each other. I tried sending the blocks to the other side of the image to try to match them up, but there is no improvement. Any assistance in getting a clear picture would be gratefully received.
from PIL import Image
import math
im = Image.open("flagpoles.jpg", "r")
pic = im.load()
def rot(A, r, x1, y1):
    myArray = []
    for i in range(r):
        myArray.append([])
        for j in range(r):
            myArray[i].append(pic[x1+i, y1+j])
    for i in range(r):
        for j in range(r):
            pic[x1+i,y1+j] = myArray[r-1-i][r-1-j]
xres = 800
yres = 800
blocksize = 10
for i in range(blocksize, blocksize+1):
    for j in range(int(math.floor(float(xres)/float(blocksize+2+i)))):
        for k in range(int(math.floor(float(yres)/float(blocksize+2+i)))):
            rot(pic, blocksize+2+i, j*(blocksize+2+i), k*(blocksize+2+i))
im.save("hopeful.png")
print("Finished!")

The image seems to consist of concentric square boxes of width 10 pixels, each rotated by 90° relative to the previous one. After every four rotations, the pixels are once again oriented in the same direction.
You can easily undo this by making a copy of the image and repeatedly rotating by 270° while cropping away a 10 px border. Paste these rotated images back into the corresponding locations to retrieve the original image.
from PIL import Image
step_size = 10
angle_step = 270
img = Image.open("flagpoles.jpg", "r")
img.load()
w = img.width
assert img.height == w
img_tmp = img.copy() # Copy of the image that we're going to rotate
offset = 0 # Coordinate where the rotated images should be pasted
cropmax = w - step_size # Maximum coordinate of cropping region
while cropmax > step_size:
    # Rotate the copy of the image
    img_tmp = img_tmp.rotate(angle_step)
    # Paste it into the original image
    img.paste(img_tmp, (offset, offset))
    # Crop a 10 px border away from the copy
    img_tmp = img_tmp.crop((step_size, step_size, cropmax, cropmax))
    # Update the crop position and width for the next iteration
    cropmax -= step_size * 2
    offset += step_size
img.save("fixed.jpg")

Related

Place image in black pixels of another image

I have an image (a white background with 1-5 black dots) called main.jpg (the main image).
I am trying to place another image (secondary.jpg) at every black dot found in the main image.
In order to do that:
1. I found the black pixels in the main image.
2. I resized the secondary image to the specific size that I want.
3. I plotted the image at every coordinate that I found in step one (the black pixel should be the center coordinates of the secondary image).
Unfortunately, I don't know how to do the third step.
For example:
main image: (image omitted)
secondary image: (image omitted)
output: (image omitted; the dots are behind the chairs and are the image center points)
This is my code:
mainImage=imread('main.jpg')
secondaryImage=imread('secondary.jpg')
secondaryImageResized = resizeImage(secondaryImage)
[m n]=size(mainImage)
for i=1:n
    for j=1:m
        % if it's black pixel
        if (mainImage(i,j)==1)
            outputImage = plotImageInCoordinates(secondaryImageResized, i, j)
            % save this image
            imwrite(outputImage,map,'clown.bmp')
        end
    end
end
% resize the image to 250 x 350 (rows x columns)
function [ Image ] = resizeImage(img)
    Image = imresize(img, [250 350]);
end

function [outputImage] = plotImageInCoordinates(image, x, y)
    % Do something
end
Any help appreciated!
Here's an alternative without convolution. One intricacy that you must take into account is that if you want to place each image at the centre of each dot, you must determine where the top left corner is and index into your output image so that you draw the desired object from the top left corner to the bottom right corner. You can do this by taking each black dot location and subtracting by half the width horizontally and half the height vertically.
Now onto your actual problem. It's much more efficient if you loop through the set of points that are black, not the entire image. You can do this by using the find command to determine the row and column locations that are 0. Once you do this, loop through each pair of row and column coordinates, do the subtraction of the coordinates and then place it on the output image.
I will impose an additional requirement where the objects may overlap. To accommodate for this, I will accumulate pixels, then find the average of the non-zero locations.
Your code modified to accommodate this is as follows. Take note that because you are using JPEG compression, you will have compression artifacts, so regions that are 0 may not necessarily be 0. I will threshold at an intensity of 128 to ensure that zero regions are actually zero. You will also have situations where objects go outside the boundaries of the image. To accommodate for this, pad the image with half the object's width on the left and right and half its height on the top and bottom, then crop it after you're done placing the objects.
mainImage=imread('https://i.stack.imgur.com/gbhWJ.png');
secondaryImage=imread('https://i.stack.imgur.com/P0meM.png');
secondaryImageResized = imresize(secondaryImage, [250 300]);
% Find half height and width
rows = size(secondaryImageResized, 1);
cols = size(secondaryImageResized, 2);
halfHeight = floor(rows / 2);
halfWidth = floor(cols / 2);
% Create a padded image that contains our main image. Pad with white
% pixels.
rowsMain = size(mainImage, 1);
colsMain = size(mainImage, 2);
outputImage = 255*ones([2*halfHeight + rowsMain, 2*halfWidth + colsMain, size(mainImage, 3)], class(mainImage));
outputImage(halfHeight + 1 : halfHeight + rowsMain, ...
halfWidth + 1 : halfWidth + colsMain, :) = mainImage;
% Find a mask of the black pixels
mask = outputImage(:,:,1) < 128;
% Obtain black pixel locations
[row, col] = find(mask);
% Reset the output image so that they're all zeros now. We use this
% to output our final image. Also cast to ensure accumulation is proper.
outputImage(:) = 0;
outputImage = double(outputImage);
% Keeps track of how many times each pixel was hit by the object
% This is so that we can find the average at each location.
counts = zeros([size(mask), size(mainImage, 3)]);
% For each row and column location in the image
for i = 1 : numel(row)
    % Get the row and column locations
    r = row(i); c = col(i);
    % Offset to get the top left corner
    r = r - halfHeight;
    c = c - halfWidth;
    % Place onto final image
    outputImage(r:r+rows-1, c:c+cols-1, :) = outputImage(r:r+rows-1, c:c+cols-1, :) + double(secondaryImageResized);
    % Accumulate the counts
    counts(r:r+rows-1, c:c+cols-1, :) = counts(r:r+rows-1, c:c+cols-1, :) + 1;
end
% Find average - Any values that were not hit, change to white
outputImage = outputImage ./ counts;
outputImage(counts == 0) = 255;
outputImage = uint8(outputImage);
% Now crop and show
outputImage = outputImage(halfHeight + 1 : halfHeight + rowsMain, ...
halfWidth + 1 : halfWidth + colsMain, :);
close all; imshow(outputImage);
% Write the final output
imwrite(outputImage, 'finalimage.jpg', 'Quality', 100);
We get the final composited image (output omitted here).
Edit
I wasn't told that your images had transparency. Therefore what you need to do is use imread but ensure that you read in the alpha channel. We then check whether one exists and, if it does, set any fully transparent pixels to white. You can do that with the following code. Ensure this gets placed at the very top of your code, replacing the images being loaded in:
mainImage=imread('https://i.stack.imgur.com/gbhWJ.png');
% Change - to accommodate for transparency
[secondaryImage, ~, alpha] = imread('https://i.imgur.com/qYJSzEZ.png');
if ~isempty(alpha)
    m = alpha == 0;
    for i = 1 : size(secondaryImage,3)
        m2 = secondaryImage(:,:,i);
        m2(m) = 255;
        secondaryImage(:,:,i) = m2;
    end
end
secondaryImageResized = imresize(secondaryImage, [250 300]);
% Rest of your code follows...
% ...
The code above has been modified to read in the basketball image. The rest of the code remains the same, and we get the corresponding composited result (output image omitted).
You can use convolution to achieve the desired effect. This will place a copy of im everywhere there is a black dot in imz.
% load secondary image
im = double(imread('secondary.jpg'))/255.0;
% create some artificial image with black indicators
imz = ones(500,500,3);
imz(50,50,:) = 0;
imz(400,200,:) = 0;
imz(200,400,:) = 0;
% create output image
imout = zeros(size(imz));
imout(:,:,1) = conv2(1-imz(:,:,1),1-im(:,:,1),'same');
imout(:,:,2) = conv2(1-imz(:,:,2),1-im(:,:,2),'same');
imout(:,:,3) = conv2(1-imz(:,:,3),1-im(:,:,3),'same');
imout = 1-imout;
% output
imshow(imout);
Also, you probably want to avoid saving main.jpg as a .jpg since it results in lossy compression and will likely cause issues with any method that relies on exact pixel values. I would recommend using .png which is lossless and will also likely compress better than .jpg for synthetic images where the same colors repeat many times.
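For example, a minimal way to save the dot image losslessly (the variable and file names here are just illustrative):
% PNG is lossless, so exact pixel values such as pure-black dots survive
imwrite(imz, 'main.png');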

Scale image object to match another object's scale

I have two sets of images of different size for each set. The first set is images of 400x400 pixels with real picture objects.
The second set is 319x319, with image silhouettes of different scale than the real picture objects.
What I want to achieve is basically to have the silhouettes replaced by the real picture objects (i.e. the beaver) from the first set, so the end result will be 319x319 images with real picture objects. Here is an example (images omitted):
The first set images cannot simply be resized to 319x319, since the beaver will not match the silhouette. There are about 100 images with different "beaver size to beaver's silhouette size" relationships. Is there a way to automate this procedure?
So far, I've tried @cxw's suggestion up to step 2. Here is the code of EllipseDirectFit I used, and here is my code to plot the images with the ellipse fits. I don't know how to proceed to steps 3-5. I think that, from the EllipseDirectFit function, 2*abs(A(1)) should be the ellipse's major axis. (NOTE: 'a1.bmp' is the real image and 'b1.bmp' is the silhouette.)
In case anyone else has the same problem as me, I'm posting the code that solved my problem. I followed cxw's suggestion and fitted an ellipse to both the real and the silhouette pictures, then resized the real picture based on the ratio of the silhouette ellipse's major axis to the real-image ellipse's major axis. This made the real image object match the size of the silhouette image object (i.e. the beaver). Then I either cropped or added border pixels to match the resolution I needed (i.e. 319x319).
% fetching the images
realList = getAllFiles('./real_images'); % getAllFiles => StackOverflow function
silhList = getAllFiles('./silhouettes');
for qq = 1:numel(realList)
    % Name of the file to save
    str = realList{qq}(15:end);

    a = imread(realList{qq}); % assign real image
    background_Ra = a(1,1,1); % getting the background colors
    background_Ga = a(1,1,2);
    background_Ba = a(1,1,3);
    % finding the points (x,y) to pass to fit_ellipse
    [x1,y1] = find(a(:,:,1)~=background_Ra | a(:,:,2)~=background_Ga | a(:,:,3)~=background_Ba);
    % fitting an ellipse to these points
    z1 = fit_ellipse(x1,y1); % Mathworks file exchange function

    b = imread(silhList{qq}); % assign silhouette image
    background_R2b = b(1,1,1); % getting the background colors
    background_G2b = b(1,1,2);
    background_B2b = b(1,1,3);
    % finding the points (x,y) to pass to fit_ellipse
    [x2,y2] = find(b(:,:,1)~=background_R2b & b(:,:,2)~=background_G2b & b(:,:,3)~=background_B2b);
    % fitting an ellipse to these points
    z2 = fit_ellipse(x2,y2);

    % ratio of silhouette's ellipse major axis to real image's ellipse
    % major axis
    ellaxratio = z2.long_axis/z1.long_axis;
    % resizing based on ellaxratio, so that the real image object size will
    % now fit the silhouette's image object size
    c = imresize(a,ellaxratio); c = rgb2gray(c);
    bordercolor = c(end,end);

    % if the resulting image is smaller, add pixels around it until they
    % match with the silhouette image resolution
    if size(c) < 319
        while size(c) < 319
            % 'addborder' is a Mathworks file exchange function
            c = addborder(c(:,:,1),1, bordercolor ,'outer');
        end
    % if the resulting image is larger, crop pixels until they match
    elseif size(c) > 319
        while size(c) > 319
            c = c(2:end-1,2:end-1);
        end
    end

    % in a few cases, the resulting resolution is 318x318, instead of
    % 319x319, so a small adjustment won't hurt.
    if size(c) ~= 319
        c = imresize(c,[319 319]);
    end

    % saving..
    imwrite(c,['./good_fits/' str '.bmp'])
end
I don't have code for this, but here's how I would proceed, just off-hand. There's almost certainly a better way :) .
For each of the real image and the silhouette image:
Get the X, Y coordinates of the pixels that aren't the background. Edit: example tested in Octave:
background_R = img(1,1,1)
background_G = img(1,1,2)
background_B = img(1,1,3)
[xs,ys]=find(img(:,:,1)~=background_R | img(:,:,2)~=background_G | img(:,:,3)~=background_B)
The logical OR is because the image can differ from the background in any color component.
Fit an ellipse to the X, Y coordinate pairs you found. E.g., use this routine from File Exchange. (Actually, I suppose you could use a circle fit or any other shape fit you wanted, as long as size and position are the only differences between the non-background portions of the images.)
Now you have ellipse parameters for the real image and the silhouette image. Assuming the aspect ratios are the same, those ellipses should differ only in center and scale.
Resize the real image (imresize) based on the ratio of silhouette ellipse major axis length to real image ellipse major axis length. Now they should be the same size.
Find the centers. Using the above fit routine,
A=EllipseDirectFit(...)
% switch to Mathworld notation from http://mathworld.wolfram.com/Ellipse.html
ma=A(1); mb=A(2)/2; mc=A(3); md=A(4)/2; mf=A(5)/2; mg=A(6);
center_x = (mc*md - mb*mf)/(mb^2 - ma*mc)
center_y = (ma*mf - mb*md)/(mb^2 - ma*mc)
Move the real image data in a 3-d matrix so that the ellipse centers
coincide. For example,
cx_silhouette = ... (as above, for the silhouette image)
cy_silhouette = ...
cx_real = ... (as above, for the *resized* real image)
cy_real = ...
shifted = zeros(size(silhouette_image)) % where we're going to put the real image
deltax = cx_silhouette - cx_real
deltay = cy_silhouette - cy_real
% if deltax==deltay==0, you're done with this step. If not:
portion = resized_real_image(max(deltay,0):319-abs(deltay), max(deltax,0):319-abs(deltax), :); % or something like that - grab the overlapping part of the resized real image
shifted(max(deltay,0):min(deltay+319,319), max(deltax,0):min(deltax+319,319), :) = portion; % or something like that - slide the portion of the resized real image in x and y. Now _shifted_ should line up with the silhouette image.
Using the background color (or the black silhouette — same difference) as a mask, copy pixels from the resized, moved real image into the silhouette image.
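A minimal MATLAB sketch of that final masking step, assuming shifted and silhouette_image are the arrays from the previous step, the silhouette is dark on a light background, and these variable names are only illustrative:
% Wherever the silhouette is black, take pixels from the shifted real image;
% everywhere else keep the silhouette image's (white) background.
mask = silhouette_image(:,:,1) < 128;   % black silhouette pixels
result = silhouette_image;
for ch = 1:size(result, 3)
    tmp = result(:,:,ch);
    src = shifted(:,:,ch);
    tmp(mask) = src(mask);              % copy masked pixels from the real image
    result(:,:,ch) = tmp;
end
imshow(result);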
Hope this helps!

Re-sizing a rectangular image to a square image

I have an image, size 213 x 145 pixels. I want to resize it to 128 x 128 pixels for example. I've already tried the code below:
i = imread ('alif1.png');
I = imresize (i, [128 128], 'bilinear');
OR
i = imread ('alif1.png');
I = imresize (i, [128 128], 'lanczos3');
It gave me a square image, but the image became disproportionate because the aspect ratio was not preserved.
I want to resize the image to a square shape without distorting or stretching it; instead, the white background should be padded or cropped. I still can't figure out the right code. I hope anyone could help.
any help will be very much appreciated :)
I = imread('alif1.png');
Crop image, specifying crop rectangle.
I2 = imcrop(I,[75 68 128 128]);
Size and position of the crop rectangle, specified as a four-element position vector of the form [xmin ymin width height].
For more detail, see the MATLAB documentation for imcrop and the linked blog post.
If you want to resize (not crop) the image and keep the aspect ratio (so you don't lose any part of the image AND it doesn't get distorted), you can first add margins to make the image square.
You can achieve this using the function padarray, or just by creating a new image of zeros and then adding your image at the appropriate coordinates.
Once your image is square, you can resize it to 128x128 using imresize.
In order to add the margins, you will have to see where to add them (top & bottom OR left & right).
Also, since padarray adds the same amount of margin on both sides, you have to check whether the number of pixels you need is even. If it's odd, first add a final row (or column) of zeros to your image.
So basically you have three options:
1. Make the image square without preserving the aspect ratio (which is what you already tried).
2. Crop the image as suggested by @ShvetChakra and @bla (but you will lose some image info).
3. Add margins to the image and resize (but you will end up with a square image with margins).
Magic doesn't exist so "you must choose, but choose wisely"
(Quote from Indiana Jones and the Last Crusade).
EDIT:
% Example with a 5x2 image, so an extra column will be added
% in order to use padarray.
im = [1 2; 3 4; 5 6; 7 8; 9 10];
nrows = size(im,1);
ncols = size(im,2);
d = abs(ncols-nrows);      % difference between ncols and nrows
if (mod(d,2) == 1)         % if the difference is an odd number
    if (ncols > nrows)     % we add a row at the end
        im = [im; zeros(1, ncols)];
        nrows = nrows + 1;
    else                   % we add a col at the end
        im = [im zeros(nrows, 1)];
        ncols = ncols + 1;
    end
end
if ncols > nrows
    im = padarray(im, [(ncols-nrows)/2 0]);
else
    im = padarray(im, [0 (nrows-ncols)/2]);
end
% Here im is a 5x5 matrix, not perfectly centered
% because we added an odd number of columns: 3
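For completeness, here is a rough sketch of option 3 applied to an actual image: pad with white to a square, then resize to 128x128. This assumes the background is white, as stated in the question; the output file name is just illustrative.
img = imread('alif1.png');                    % the 213 x 145 image from the question
[nrows, ncols, ~] = size(img);
d = abs(ncols - nrows);                       % total padding needed along the short dimension
padA = floor(d/2);                            % pre-padding
padB = ceil(d/2);                             % post-padding (one extra pixel if d is odd)
if ncols > nrows                              % wider than tall: pad rows
    img = padarray(img, [padA 0], 255, 'pre');
    img = padarray(img, [padB 0], 255, 'post');
else                                          % taller than wide: pad columns
    img = padarray(img, [0 padA], 255, 'pre');
    img = padarray(img, [0 padB], 255, 'post');
end
out = imresize(img, [128 128], 'bilinear');   % already square, so no distortion
imwrite(out, 'alif1_square.png');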

How can I save/show the cropped image and uncropped image in MATLAB

I am having an issue with cropping an image. My task includes an image that I have to crop at given x,y coordinates, which I have tried and succeeded at.
Now I want to show/save both images: the cropped one, and also the image it was cropped from (which should have the cropped area removed, as if a small cavity had been subtracted from the image).
My Code:
B = imread('B1.jpg');
% figure,imshow(B)
GimageB = rgb2gray(B);
% figure, imshow(GimageB)
J = imcrop(B,[284 235 95 80]);
figure, imshow(J)
To show the image without the "extracted" area, fill that area with zero!
img=rgb2gray(imread('http://weknowyourdreams.com/images/cat/cat-03.jpg'));
img2 = imcrop(img,[500 600 700 800]);
img3=img;
% fill the area with zero (note the index order: MATLAB indexes (row, col),
% while the imcrop rectangle is [xmin ymin width height], i.e. (x, y))
img3(600:600+800, 500:500+700)=0;
figure()
imshow(img3)
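If you also want to save both results rather than only display them, a couple of imwrite calls on the variables above should do (the output file names are illustrative):
% Save the cropped patch and the image with the cropped area blanked out
imwrite(img2, 'cropped.png');
imwrite(img3, 'remaining_with_hole.png');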

How to translate and scale the image?

My image looks like this:
The given imrgb is a 320x512x3 double and color_map is a 64x3 double. After using
[X, map] = rgb2ind(imrgb, color_map);
I get X as a 320x512 uint8 array. The image is too big for further processing. My question is how to translate and scale the image to a standard size of 32x32 pixels without losing the important information (I mean the non-black parts of the image are all important information)?
Here is one solution where I make each brain tile a 32x32 image. The comments explain the code, but the basic idea is:
1. Use blockproc to split the large image into a 5x8 grid, because it has 5 rows of brains and 8 columns of brains. I call each of these images a tile.
2. Resize each tile to 32x32.
3. Use mat2cell to split the new small tiles into individual images and display them.
Here is the code:
im = rgb2gray(imrgb);
max_rows = 32;
max_cols = 32;
%I assume every picture has 40 brains, 5 rows and 8 columns
rows_brains = 5;
cols_brains = 8;
[m n] = size(im);
%define the resize function to take the 'block_struct' image and resize
%it to max_rows x max_cols
fun = @(block_struct) imresize(block_struct.data,[max_rows max_cols]);
%blockproc will split the image into tiles. Each tile should hold one brain
%image. Then we resize that tile to a 32x32 tile using the resize function
%we defined earlier
I2 = blockproc(im,[m/rows_brains n/cols_brains],fun);
%split the image with small tiles into individual pictures
%each cell of indiv_brains will contain a 32x32 image of only one brain
indiv_brains = mat2cell(I2,max_rows*ones(1,rows_brains),max_cols*ones(1,cols_brains));
%displays all the brains
figure(1);
for ii=1:1:rows_brains
    for jj=1:1:cols_brains
        subplot(rows_brains, cols_brains, (ii-1)*cols_brains + jj);
        imshow(indiv_brains{ii,jj});
    end
end
And the result: each of these individual images is 32x32 (output figure omitted).
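If you also need each 32x32 brain as a separate file, a small follow-up loop over indiv_brains should work (the file-name pattern is just an example):
% Write each 32x32 tile to its own file
for ii = 1:rows_brains
    for jj = 1:cols_brains
        imwrite(indiv_brains{ii,jj}, sprintf('brain_r%d_c%d.png', ii, jj));
    end
end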

Resources