Below is my current working code in Python, using PIL, for highlighting the differences between two images. The problem is that the rest of the image is blacked out.
I want to show the background as well, along with the highlighted differences.
Is there any way I can keep the background visible but lighter, and just highlight the differences?
from PIL import Image, ImageChops

# maps 0 -> 0 and every non-zero value -> 255 (a simple binarizing table)
point_table = ([0] + ([255] * 255))

def black_or_b(a, b):
    diff = ImageChops.difference(a, b)
    diff = diff.convert('L')
    # diff = diff.point(point_table)
    new = diff.convert('RGB')
    new.paste(b, mask=diff)
    return new

a = Image.open('i1.png')
b = Image.open('i2.png')
c = black_or_b(a, b)
c.save('diff.png')
Example output: https://drive.google.com/file/d/0BylgVQ7RN4ZhTUtUU1hmc1FUVlE/view?usp=sharing
PIL does have some handy image manipulation methods, but also a lot of shortcomings when one wants to start doing serious image processing. Most Python literature will recommend you to switch to using NumPy over your pixel data, which will give you full control. Other imaging libraries such as leptonica, gegl and vips all have Python bindings and a range of nice functions for image composition/segmentation.
In this case, the thing is to imagine how one would get to the desired output in an image manipulation program: you'd have a black (or other color) shade to place over the original image, and over this, paste the second image, using a threshold of the differences as a mask to the second image (i.e. a pixel either is equal or is different; all intermediate values should be rounded to "different").
I modified your function to create such a composition:
from PIL import Image, ImageChops, ImageDraw

point_table = ([0] + ([255] * 255))

def new_gray(size, color):
    img = Image.new('L', size)
    dr = ImageDraw.Draw(img)
    dr.rectangle((0, 0) + size, color)
    return img

def black_or_b(a, b, opacity=0.85):
    diff = ImageChops.difference(a, b)
    diff = diff.convert('L')
    # Hack: there is no threshold in PIL,
    # so we add the difference with itself to do
    # a poor man's thresholding of the mask:
    # (the values for equal pixels - 0 - don't add up)
    thresholded_diff = diff
    for repeat in range(3):
        thresholded_diff = ImageChops.add(thresholded_diff, thresholded_diff)
    size = diff.size
    mask = new_gray(size, int(255 * opacity))
    shade = new_gray(size, 0)
    new = a.copy()
    new.paste(shade, mask=mask)
    # To have the original image show partially
    # on the final result, simply put "diff" instead of thresholded_diff below
    new.paste(b, mask=thresholded_diff)
    return new
a = Image.open('a.png')
b = Image.open('b.png')
c = black_or_b(a, b)
c.save('c.png')
Here's a solution using libvips:
import sys
from gi.repository import Vips
a = Vips.Image.new_from_file(sys.argv[1], access = Vips.Access.SEQUENTIAL)
b = Vips.Image.new_from_file(sys.argv[2], access = Vips.Access.SEQUENTIAL)
# a != b makes an N-band image with 0/255 for false/true ... we have to OR the
# bands together to get a 1-band mask image which is true for pixels which
# differ in any band
mask = (a != b).bandbool("or")
# now pick pixels from a or b with the mask ... dim false pixels down
diff = mask.ifthenelse(a, b * 0.2)
diff.write_to_file(sys.argv[3])
With PNG images, most CPU time is spent in PNG read and write, so vips is only a bit faster than the PIL solution.
libvips does use a lot less memory, especially for large images. libvips is a streaming library: it can load, process and save the result all at the same time; it does not need to have the whole image loaded into memory before it can start work.
For a 10,000 x 10,000 RGB tif, libvips is about twice as fast and needs about 1/10th the memory.
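The gi.repository binding used above has largely been superseded by the pyvips package; a roughly equivalent sketch, assuming pyvips is installed (pip install pyvips):

    import sys
    import pyvips

    a = pyvips.Image.new_from_file(sys.argv[1], access='sequential')
    b = pyvips.Image.new_from_file(sys.argv[2], access='sequential')

    # pixels which differ in any band become 255 in a one-band mask
    mask = (a != b).bandbool('or')

    # keep pixels from a where they differ, dim everything else
    diff = mask.ifthenelse(a, b * 0.2)
    diff.write_to_file(sys.argv[3])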
If you're not wedded to the idea of using Python, there are a few really simple solutions using ImageMagick:
“Diff” an image using ImageMagick
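For instance, ImageMagick's compare tool does almost exactly what the question asks for out of the box: it renders the unchanged background as a faded version of the first image and paints differing pixels red. A minimal sketch invoking it (here from Python, via subprocess), assuming ImageMagick is installed and on the PATH:

    import subprocess

    # `compare` exits with status 1 when the images differ,
    # so a non-zero exit code is not an error here
    subprocess.run(['compare', 'i1.png', 'i2.png', 'diff.png'], check=False)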
I was trying to find the SNR for a set of images that I have, but my two methodologies of doing so produce two different answers and I'm not sure which is right. I was wondering if one of them is just straight up the wrong way of doing this, or if neither way is correct.
I am trying to characterize the SNR of a set of images that I'm processing. I have one set of data with images and darkfields. From these pieces of data I subtracted the darkfield from the image and got "corrected_images".
Since I know SNR is (mean of signal)/(std of noise), in my first methodology I worked with the corrected image and the background noise image: I took the mean of every pixel on the spectrum (from the corrected image) with a value greater than 1 as the signal, and the overall std of the background noise image as the noise. The plot for this methodology is in blue.
In my second methodology I used a single uncorrected image and basically considered every pixel above 50 as signal and every pixel below 50 as noise. This gives the orange values for SNR.
# -*- coding: utf-8 -*-
"""
Spyder Editor
This is a temporary script file.
"""
from PIL import Image
from matplotlib import pyplot as plt
import numpy as np
import os

name = r"interpolating_streaks/corrected"
name2 = r"interpolating_streaks/averages"
file = os.listdir(name)
file2 = os.listdir(name2)

wv1 = []
signal = []
snr = []
noise = []

# Methodology 1: corrected images vs. darkfield noise
for fname in file:
    wv = fname[:3]
    wv1.append(wv)
    corrected_image = Image.open(name + "/" + fname)  # opens the image
    streak = np.array(corrected_image)
    dark_image = Image.open(name2 + '/d' + wv + '_averaged.tif')
    dark = np.array(dark_image)
    darkavg = dark.mean(axis=0)
    avg = streak.mean(axis=0)
    # note: signal is never reset, so it accumulates across files
    for px in avg:
        if px >= 1:
            signal.append(px)
    noiser = np.std(darkavg)
    signalr = np.mean(signal)
    snr.append(signalr / noiser)
plt.plot(wv1, snr)

# Methodology 2: threshold a single uncorrected image at 50
signal = []
noise = []
snr = []
for fname in file2:
    if fname[0] != 'd':
        image = Image.open(name2 + '/' + fname)
        im = np.array(image)
        im_avg = im.mean(axis=0)
        # again, signal and noise accumulate across files
        for px in im_avg:
            if px <= 50:
                noise.append(px)
            else:
                signal.append(px)
        snr.append(np.mean(signal) / np.std(noise))
plt.plot(wv1, snr)
I would expect the SNR values to be the same, and I know for my camera the SNR has to be below 45 dB (though I'm pretty sure this methodology for SNR doesn't output decibels).
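For reference, converting an amplitude ratio to decibels is just 20*log10 of the ratio, so the comparison against the 45 dB spec could be a one-liner (a sketch):

    import numpy as np

    def snr_db(signal_mean, noise_std):
        # factor of 20 because this is an amplitude ratio, not a power ratio
        return 20 * np.log10(signal_mean / noise_std)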
Here are my current results:
https://imgur.com/a/Vgecyp1
I'm trying to build a simple image classifier using scikit-learn. I'm hoping to avoid having to resize and convert each image before training.
Question
Given two images that have different formats and sizes (1.jpg and 2.png), how can I avoid a ValueError while fitting the model?
I have one example where I train using only 1.jpg, which fits successfully.
I have another example where I train using both 1.jpg and 2.png, and a ValueError is produced.
This example will fit successfully:
import numpy as np
from sklearn import svm
import matplotlib.image as mpimg

target = [1, 2]
images = np.array([
    # target 1
    [mpimg.imread('./1.jpg'), mpimg.imread('./1.jpg')],
    # target 2
    [mpimg.imread('./1.jpg'), mpimg.imread('./1.jpg')],
])

n_samples = len(images)
data = images.reshape((n_samples, -1))
model = svm.SVC()
model.fit(data, target)
This example will raise a ValueError.
Observe the different 2.png image in target 2.
import numpy as np
from sklearn import svm
import matplotlib.image as mpimg

target = [1, 2]
images = np.array([
    # target 1
    [mpimg.imread('./1.jpg'), mpimg.imread('./1.jpg')],
    # target 2
    [mpimg.imread('./2.png'), mpimg.imread('./1.jpg')],
])

n_samples = len(images)
data = images.reshape((n_samples, -1))
model = svm.SVC()
model.fit(data, target)
# ValueError: setting an array element with a sequence.
1.jpg
2.png
For this, I would really recommend using the tools in Keras that are specifically designed to preprocess images in a highly scalable and efficient way.
from keras.preprocessing.image import ImageDataGenerator
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np
1 Determine the target size of your new pictures
h, w = 150, 150   # desired height and width
batch_size = 32
N_images = 100    # total number of images
Keras works in batches, so batch_size just determines how many pictures at once will be processed (this does not impact your end result, just the speed).
2 Create your Image Generator
train_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    'Pictures_dir',
    target_size=(h, w),
    batch_size=batch_size,
    class_mode='binary')
The object that is going to do the image extraction is ImageDataGenerator. It has the method flow_from_directory, which I believe might be useful for you here. It will read the content of the folder Pictures_dir and expect your images to be in folders by class (e.g. Pictures_dir/class0 and Pictures_dir/class1). The generator, when called, will then yield images from these folders and also import their labels (in this example, 'class0' and 'class1'), as sketched below.
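As a sketch, the expected layout would be something like this (the folder and file names are hypothetical):

    Pictures_dir/
        class0/
            img_001.jpg
            img_002.png
            ...
        class1/
            img_101.jpg
            ...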
There are plenty of other arguments to this generator, you can check them out in the Keras documentation (especially if you want to do data augmentation).
Note: this will take any image, be it PNG or JPG, as you requested
If you want to get the mapping from class names to label indices, do:
train_generator.class_indices
# {'class0': 0, 'class1': 1}
You can check what is going on with
plt.imshow(train_generator[0][0][0])
3 Extract all resized images from the Generator
Now you are ready to extract the images from the ImageGenerator:
def extract_images(generator, sample_count):
    images = np.zeros(shape=(sample_count, h, w, 3))
    labels = np.zeros(shape=(sample_count,))
    i = 0
    for images_batch, labels_batch in generator:  # we are looping over batches
        start = i * batch_size
        # the final batch may be smaller than batch_size, so clip to sample_count
        end = min(start + len(images_batch), sample_count)
        images[start:end] = images_batch[:end - start]
        labels[start:end] = labels_batch[:end - start]
        i += 1
        if end >= sample_count:
            # we must break once every image has been seen,
            # because generators yield indefinitely in a loop
            break
    return images, labels
images, labels = extract_images(train_generator, N_images)
print(labels[0])
plt.imshow(images[0])
Now you have your images all at the same size in images, and their corresponding labels in labels, which you can then feed into any scikit-learn classifier of your choice.
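To connect this back to the original code, here is a sketch reusing the images and labels arrays from the extraction step above with the same SVC setup:

    from sklearn import svm

    # flatten each (h, w, 3) image into one feature vector per sample
    data = images.reshape((len(images), -1))
    model = svm.SVC()
    model.fit(data, labels)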
It's difficult because of the math operations behind the scenes (the details are out of scope); even if you manage to do it, say by building your own algorithm, you still would not get the desired result.
I had this issue once with faces of different sizes. Maybe this piece of code gives you a starting point.
from PIL import Image
import face_recognition

def face_detected(file_address=None, prefix='detect_'):
    if file_address is None:
        raise FileNotFoundError('File address required')
    image = face_recognition.load_image_file(file_address)
    face_location = face_recognition.face_locations(image)
    if face_location:
        face_location = face_location[0]
        UP = int(face_location[0] - (face_location[2] - face_location[0]) / 2)
        DOWN = int(face_location[2] + (face_location[2] - face_location[0]) / 2)
        LEFT = int(face_location[3] - (face_location[3] - face_location[2]) / 2)
        RIGHT = int(face_location[1] + (face_location[3] - face_location[2]) / 2)
        if UP - DOWN != LEFT - RIGHT:  # use !=, not "is not", to compare integers
            # pad the narrower dimension so the crop is square
            height = UP - DOWN
            width = LEFT - RIGHT
            delta = width - height
            LEFT -= int(delta / 2)
            RIGHT += int(delta / 2)
        pil_image = Image.fromarray(image[UP:DOWN, LEFT:RIGHT, :])
        pil_image.thumbnail((50, 50), Image.ANTIALIAS)
        pil_image.save(prefix + file_address)
        return True
    pil_image = Image.fromarray(image)
    pil_image.thumbnail((200, 200), Image.ANTIALIAS)
    pil_image.save(prefix + file_address)
    return False
Note: I wrote this a long time ago, so it may not be best practice.
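Usage is just a call with a path; face.jpg here is a hypothetical file name:

    # writes detect_face.jpg next to the original if a face was found
    found = face_detected('face.jpg')
    print('face found:', found)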
I'm trying to extract a (very) large number of subimages from a large grayscale TIF file and save each image as a GIF, PNG, or even another TIF using MATLAB. I'm able to display the individual images using imshow(sub(:,:,1),cmap), but when I try to write the data to an image file, the generated files are just white squares of 101x101 px. Using the cmap argument in imwrite produces the same result, as does changing the image format (I've tried PNG, TIF, GIF, and JPG with no luck). The file a.tif is 16-bit according to the properties dialog in Windows Explorer. Any help is appreciated; I'm really at wit's end with this.
% Import coordinates array and correct for multiplication by 10
datafile = 'data.xlsx';
coords = xlsread(datafile,1,'G2:H13057');
x = coords(:,1) ./ 10;
y = coords(:,2) ./ 10;
r = 50;

[img, cmap] = imread('a.tif');  % import the image
s = 2*r+1;  % side length of each submatrix in the array (size of each subimage)
sub = zeros(s,s,num);  % 3D array; each submatrix is the r-px box around a point
subrgb = zeros(s,s,num);

for i = 1:4
    sub(:,:,i) = img((y(i)-r):(y(i)+r), (x(i)-r):(x(i)+r));
    filename = sprintf('dot_%d.png', i);
    imwrite(sub(:,:,i), filename, 'png');
end
Try changing the line:
sub = zeros(s,s,num);
to:
sub = zeros(s,s,num,class(img));
I assume that the problem is that sub is of type double. imwrite expects double-precision image data to be in the range [0, 1], so your 16-bit intensity values all saturate to white; keeping sub in the same integer class as img avoids that.
Good luck
I have a 4-channel image (.png, .tif) like this one:
I am using OpenCV, and I would like to add padding of type BORDER_REFLECT around the flower. copyMakeBorder is not useful, since it adds padding to the edges of the image.
I can add certain padding if I split the image in bgr + alpha and apply dilate with BORDER_REFLECT option on the bgr image, but that solution spoils all the pixels of the flower.
Is there any way to perform a selective BORDER_REFLECT padding addition on a ROI defined by a binary mask?
EDIT:
The result I expect is something like (sorry I painted it very quickly with GIMP) :
I painted two black lines to delimit the old and new contour of the flower after the padding, but of course those lines should not appear in the final result. The padding region (between the two black lines) must be composed of mirrored pixels from the flower (I painted it yellow to make it understandable).
A simple Python script to resize the image and copy the original over the enlarged one will do the trick.
import cv2

img = cv2.imread('border_reflect.png', cv2.IMREAD_UNCHANGED)
pad = 20
sh = img.shape
# note: cv2.resize expects dsize as (width, height), while shape is (rows, cols)
imgpad = cv2.resize(img, (sh[1] + pad, sh[0] + pad))
# copy the opaque pixels of the original over the enlarged image
imgpad[pad:pad+sh[0], pad:pad+sh[1], :][img[:,:,3]==255] = img[img[:,:,3]==255]
cv2.imwrite("padded_image.png", imgpad)
Here is the result
But that doesn't look very 'centered'. So I modified the code to detect and account for the offsets while copying.
import cv2

img = cv2.imread('border_reflect.png', cv2.IMREAD_UNCHANGED)
pad = 20
sh = img.shape
imgpad = cv2.resize(img, (sh[1] + pad, sh[0] + pad))

def get_roi(img):
    cimg = img[:,:,3].copy()
    contours, hierarchy = cv2.findContours(cimg, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    # Remove the tiny pixel noises that get detected as contours
    contours = [cnt for cnt in contours if cv2.contourArea(cnt) > 10]
    # take the bounding rect of the largest remaining contour
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    roi = img[y:y+h, x:x+w]
    return roi

roi = get_roi(img)
roi2 = get_roi(imgpad)
sh = roi.shape
sh2 = roi2.shape
# integer offsets that center the original ROI inside the enlarged one
o = ((sh2[0]-sh[0]) // 2, (sh2[1]-sh[1]) // 2)
roi2[o[0]:o[0]+sh[0], o[1]:o[1]+sh[1], :][roi[:,:,3]==255] = roi[roi[:,:,3]==255]
cv2.imwrite("padded_image.png", imgpad)
Looks much better now
The issue has already been addressed and solved here:
http://answers.opencv.org/question/90229/add-padding-to-object-in-4-channel-image/
I have a 3D array, of which the first two dimensions are spatial, so say (x,y). The third dimension contains point-specific information.
print H.shape # --> (200, 480, 640) spatial extents (200,480)
Now, by selecting a certain plane in the third dimension, I can display an image with
imdat = H[:,:,100] # shape (200, 480)
img = ax.imshow(imdat, cmap='jet',vmin=imdat.min(),vmax=imdat.max(), animated=True, aspect='equal')
I want to now rotate the cube, so that I switch from (x,y) to (y,x).
H = np.rot90(H) # could also use H.swapaxes(0,1) or H.transpose((1,0,2))
print H.shape # --> (480, 200, 640)
Now, when I call:
imdat = H[:,:,100] # shape (480,200)
img.set_data(imdat)
ax.relim()
ax.autoscale_view(tight=True)
I get weird behavior. The image along the rows displays the data up to the 200th row, and then it is black until the end of the y-axis (480). The x-axis extends from 0 to 200 and shows the rotated data. After another 90-degree rotation, the image displays correctly (just rotated 180 degrees, of course).
It seems to me that after rotating the data, the axis limits (or image extents?) or something is not refreshing correctly. Can somebody help?
PS: to indulge in bad hacking, I also tried to regenerate a new image (by calling ax.imshow) after each rotation, but I still get the same behavior.
Below I include a solution to your problem. The method resetExtent uses the data and the image to explicitly set the extent to the desired values. Hopefully I correctly emulated the intended outcome.
import matplotlib.pyplot as plt
import numpy as np
def resetExtent(data, im):
    """
    Using the data and axes from an AxesImage, im, force the extent and
    axis values to match the shape of data.
    """
    ax = im.axes  # (im.get_axes() on very old matplotlib versions)
    nrows, ncols = data.shape
    # extent is (left, right, bottom, top); x spans the columns (shape[1])
    if im.origin == 'upper':
        im.set_extent((-0.5, ncols - 0.5, nrows - 0.5, -0.5))
        ax.set_xlim((-0.5, ncols - 0.5))
        ax.set_ylim((nrows - 0.5, -0.5))
    else:
        im.set_extent((-0.5, ncols - 0.5, -0.5, nrows - 0.5))
        ax.set_xlim((-0.5, ncols - 0.5))
        ax.set_ylim((-0.5, nrows - 0.5))

def main():
    fig = plt.gcf()
    ax = fig.gca()

    H = np.zeros((200, 480, 10))
    # make a distinguishing corner in the data
    H[100:, ...] = 1
    H[100:, 240:, :] = 2

    imdat = H[:, :, 5]
    im = ax.imshow(imdat, cmap='jet', vmin=imdat.min(),
                   vmax=imdat.max(), animated=True,
                   aspect='equal',
                   # origin='lower'
                   )
    resetExtent(imdat, im)
    fig.savefig("img1.png")

    H = np.rot90(H)
    imdat = H[:, :, 0]
    im.set_data(imdat)
    resetExtent(imdat, im)
    fig.savefig("img2.png")

if __name__ == '__main__':
    main()
This script produces two images:
First un-rotated:
Then rotated:
I thought just explicitly calling set_extent would do everything resetExtent does, because it should adjust the axes limits if 'autoscale' is True. But for some unknown reason, calling set_extent alone does not do the job.
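For reference, the bare call one might expect to be sufficient after set_data (which, per the above, does not do the job on its own) would look like this for the default origin='upper':

    # extent is (left, right, bottom, top)
    im.set_extent((-0.5, imdat.shape[1] - 0.5, imdat.shape[0] - 0.5, -0.5))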