I have a list of images in a directory. I am trying to extract a column from each image (each image is 403 px by 1288 px by 3 bands) and sequentially build an array from these columns using numpy append, which I then want to save as an image. I'm trying to use numpy and Pillow to make an image from this appended array.
I have researched the Pillow and Numpy documentation.
#!/usr/bin/python3
import numpy as np
from numpy import array
from PIL import Image
import os, time, sys, subprocess

savpath = 'C:/data/marsobot/spectral/pushbroom/zwoexperiments/fullsuntheframes/'
os.chdir('C:/data/marsobot/spectral/pushbroom/zwoexperiments/fullsuntheframes/')

toappendarr = np.empty([403, 1288, 3])
for root, dirs, files in os.walk(".", topdown=False):
    for name in files:
        img = Image.open(name)
        arr = array(img)
        value = arr[:, 300, 1]
        toappendarr = np.append(toappendarr, value, axis=1)
        print(toappendarr.shape)

imgout = Image.fromarray(arr)
imgout.save("output.jpg")
I expected an image but instead I got:
ValueError: all the input arrays must have same number of dimensions
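The error comes from appending a 1-D column (shape `(403,)`) to a 3-D array along `axis=1`; `np.append` requires the inputs to have the same number of dimensions, and the `np.empty` seed would also leave garbage values at the front of the result. One way around both, sketched here with synthetic frames in place of the real files, is to collect the columns in a list and stack them once at the end:

```python
import numpy as np

# Synthetic stand-ins for the loaded frames (403 x 1288 x 3), so the
# shapes are easy to follow without the directory walk
frames = [np.random.randint(0, 255, (403, 1288, 3), dtype=np.uint8)
          for _ in range(5)]

columns = []
for arr in frames:
    columns.append(arr[:, 300, :])  # all 3 bands of column 300, shape (403, 3)

# Stack the columns side by side: one output column per input frame
out = np.stack(columns, axis=1)
print(out.shape)  # (403, 5, 3)
```

`Image.fromarray(out)` then produces a saveable image, since the stacked array keeps the uint8 dtype and a proper (height, width, bands) layout.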
The following code gives me black images and I can't understand why:
Imports:
import numpy as np
from PIL import Image
Code:
arr2 = np.zeros((200,200), dtype=int)
arr2[80:120,80:120]=1
im = Image.fromarray(arr2,mode="1")
im.save("C:/Users/Admin/Desktop/testImage.jpg")
I think you want something more like this, using Boolean True and False:
import numpy as np
from PIL import Image
# Create black 1-bit array
arr2 = np.full((200,200), False, dtype=bool)
# Set some bits white
arr2[80:120,80:120]=True
im = Image.fromarray(arr2)
im.save('a.png')
print(im)
<PIL.Image.Image image mode=1 size=200x200 at 0x103FF2770>
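An alternative to the boolean approach, assuming the same 200x200 example: scale the mask to 0/255 uint8 so PIL infers greyscale mode "L" on its own, then convert to 1-bit if a mode "1" image is really needed. Passing `mode="1"` together with an int array is the pitfall, since the raw buffer is then misread.

```python
import numpy as np
from PIL import Image

# 0/255 uint8 mask; PIL infers mode "L" without an explicit mode argument
arr2 = np.zeros((200, 200), dtype=np.uint8)
arr2[80:120, 80:120] = 255

im = Image.fromarray(arr2).convert("1")  # convert to 1-bit black and white
print(im.mode, im.size)  # 1 (200, 200)
```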
I am trying to convert 100 images into a numpy array, which in turn will be fed into my neural network.
My NN's training data was a 4D numpy array (No. of images, 32, 32, 3).
When using the code below to read images and feed them into model.predict(), I get the following error:
"Error when checking input: expected conv2d_input to have 4 dimensions, but got array with shape (100, )"
This is the code I have written:
new_data = []
files = glob.glob(r"load images")
for myFile in files:
    #print(myFile)
    image = cv2.imread(myFile)
    new_data.append(np.asarray(image))
#new_data = np.array(new_data)
print('new_data shape:', np.array(new_data).shape)
The output is "new_data shape: (100,)".
I am expecting new_data to have dimensions (100, 32, 32, 3). Please help on how to achieve this.
Thanks,
Mrinal
Thanks for all the responses. The issue was that the images were not all the same size. I resized them all to 32*32 and did an np.reshape().
Below is the revised code
files = glob.glob(r"files\*.png*")
for myFile in files:
    image = cv2.imread(myFile)
    img = cv2.resize(image, (32, 32))  # Resize the test images to 32*32
    new_data.append(img)
new_data = np.reshape(new_data, (len(new_data), 32, 32, 3))
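If every image has already been resized to the same shape, `np.stack` infers the leading dimension, so the explicit reshape with a hard-coded shape is not strictly needed. A small sketch with dummy arrays in place of the loaded files:

```python
import numpy as np

# Dummy stand-ins for the resized cv2 images
new_data = [np.zeros((32, 32, 3), dtype=np.uint8) for _ in range(100)]

batch = np.stack(new_data)  # leading axis inferred from the list length
print(batch.shape)  # (100, 32, 32, 3)
```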
You can use the Pillow library directly for this:
from PIL import Image
from numpy import asarray
image = Image.open('kolala.jpeg')
# convert image to numpy array
data = asarray(image)
print(type(data))
print(data.shape)
image2 = Image.fromarray(data)
print(type(image2))
So my problem is generating an animation from the list img_array. The code below is basically used to get an image from the folder, annotate it, and then save it into the array. Was wondering if anyone would have any suggestions on how to convert the images in the image array into an animation. Any help is appreciated! TIA.
I tried FFmpeg and what not but none of them seem to work. I also tried VideoWriter in OpenCV, but when I tried to open the file I got a message that the file type is not supported or corrupt.
import cv2
import numpy as np
import glob
import matplotlib.pyplot as plt
import matplotlib.animation as animation  # needed for ArtistAnimation below
from skimage import io
import trackpy as tp
import pims
import pylab as pl

##########
pixel_min = 23
min_mass = 5000
Selector1 = [1, 2, 3, 4, 5, 6, 7, 11]
##########

frames = pims.ImageSequence('/Users/User/Desktop/eleventh_trial_2/*.tif', as_grey=True)
f1 = tp.locate(frames[0], pixel_min, minmass=min_mass)
plt.figure(1)
ax3 = tp.annotate(f1, frames[0])
ax = plt.subplot()
ax.hist(f1['mass'], bins=20)
ax.set(xlabel='mass', ylabel='count')
f = tp.batch(frames[:], pixel_min, minmass=min_mass)
#f = tp.batch(frames[lower_frame:upper_frame], pixel, minmass=min_mass)
t = tp.link_df(f, 10, memory=3)

##############
min_mass = 8000  #12000 #3000#2000 #6000#3000
pixel_min = 23
count = 0
img_array = []
for filename in glob.glob('/Users/User/Desktop/eleventh_trial_2/*.tif'):
    img = cv2.imread(filename)
    height, width, layers = img.shape
    size = (width, height)
    img2 = io.imread(filename, as_gray=True)
    fig, ax = plt.subplots()
    ax.imshow(img)
    #ax=pl.text(T1[i,1]+13,T1[i,0],str(int(T1[i,9])),color="red",fontsize=18)
    T1 = t.loc[t['frame'] == count]
    T1 = np.array(T1.sort_values(by='particle'))
    for i in Selector1:
        pl.text(T1[i, 1] + 13, T1[i, 0], str(int(T1[i, 9])), color="red", fontsize=18)
        circle2 = plt.Circle((T1[i, 1], T1[i, 0]), 5, color='r', fill=False)
        ax.add_artist(circle2)
    count = count + 1
    img_array.append(fig)
ani = animation.ArtistAnimation(fig, img_array, interval=50, blit=True, repeat_delay=1000)
When I run this I don't get an error, but I can't save ani either, having tried OpenCV's VideoWriter in the past.
I found a workaround, although not the most efficient one. I saved the figures in a separate directory using os and plt.savefig(), and then used ImageJ to automatically convert the sequentially numbered figures into an animation. It isn't efficient, but it gets the job done. I am still open to more efficient answers. Thanks.
I was trying out one of the sample Python scripts available from the Scikit Image web site. The script demonstrates Otsu segmentation at a local level. It works with pictures loaded using data.page() but not using io.imread. Any suggestions?
https://scikit-image.org/docs/dev/auto_examples/applications/plot_thresholding.html#sphx-glr-auto-examples-applications-plot-thresholding-py
Picture file
Actual output - the Local thresholding window is empty
As you can see, Global thresholding has worked, but Local thresholding has failed to produce any results.
Strangely, if I use data.page() then everything works fine.
Script
from skimage import io
from skimage.color import rgb2gray
import matplotlib.pyplot as plt
from skimage.filters import threshold_otsu,threshold_local
import matplotlib
from skimage import data
from skimage.util import img_as_ubyte
filename="C:\\Lenna.png"
mypic= img_as_ubyte (io.imread(filename))
#image = data.page() #This works - why not io.imread ?
imagefromfile=io.imread(filename)
image = rgb2gray(imagefromfile)
global_thresh = threshold_otsu(image)
binary_global = image > global_thresh
block_size = 35
local_thresh = threshold_local(image, block_size, offset=10)
binary_local = image > local_thresh
fig, axes = plt.subplots(nrows=3, figsize=(7, 8))
ax = axes.ravel()
plt.gray()
ax[0].imshow(image)
ax[0].set_title('Original')
ax[1].imshow(binary_global)
ax[1].set_title('Global thresholding')
ax[2].imshow(binary_local)
ax[2].set_title('Local thresholding')
for a in ax:
    a.axis('off')
plt.show()
If you load Lenna.png and print its shape, you will see it is a 4-channel RGBA image rather than a 3-channel RGB image:
print(mypic.shape)
(512, 512, 4)
I am not sure which parts of your code apply to which image, so I am not sure where to go next, but I guess you want to just get the RGB part and discard the alpha:
RGB = mypic[...,:3]
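The slice generalizes to any RGBA array; a quick check with a synthetic image standing in for Lenna.png:

```python
import numpy as np

# Synthetic RGBA image in place of the loaded PNG
mypic = np.zeros((512, 512, 4), dtype=np.uint8)

RGB = mypic[..., :3]  # keep R, G, B; drop the alpha channel
print(RGB.shape)  # (512, 512, 3)
```

rgb2gray then receives the 3-channel array it expects, which is what data.page() (already single-channel) sidesteps.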
I'm trying to build a simple image classifier using scikit-learn. I'm hoping to avoid having to resize and convert each image before training.
Question
Given two different images that are different formats and sizes (1.jpg and 2.png), how can I avoid a ValueError while fitting the model?
I have one example where I train using only 1.jpg, which fits successfully.
I have another example where I train using both 1.jpg and 2.png and a ValueError is produced.
This example will fit successfully:
import numpy as np
from sklearn import svm
import matplotlib.image as mpimg
target = [1, 2]
images = np.array([
    # target 1
    [mpimg.imread('./1.jpg'), mpimg.imread('./1.jpg')],
    # target 2
    [mpimg.imread('./1.jpg'), mpimg.imread('./1.jpg')],
])
n_samples = len(images)
data = images.reshape((n_samples, -1))
model = svm.SVC()
model.fit(data, target)
This example will raise a ValueError.
Observe the different 2.png image in target 2.
import numpy as np
from sklearn import svm
import matplotlib.image as mpimg
target = [1, 2]
images = np.array([
    # target 1
    [mpimg.imread('./1.jpg'), mpimg.imread('./1.jpg')],
    # target 2
    [mpimg.imread('./2.png'), mpimg.imread('./1.jpg')],
])
n_samples = len(images)
data = images.reshape((n_samples, -1))
model = svm.SVC()
model.fit(data, target)
# ValueError: setting an array element with a sequence.
1.jpg
2.png
For this, I would really recommend using the tools in Keras that are specifically designed to preprocess images in a highly scalable and efficient way.
from keras.preprocessing.image import ImageDataGenerator
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np
1 Determine the target size of your new pictures
h,w = 150,150 # desired height and width
batch_size = 32
N_images = 100 #total number of images
Keras works in batches, so batch_size just determines how many pictures at once will be processed (this does not impact your end result, just the speed).
2 Create your Image Generator
train_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    'Pictures_dir',
    target_size=(h, w),
    batch_size=batch_size,
    class_mode='binary')
The object that is going to do the image extraction is ImageDataGenerator. It has the method flow_from_directory which I believe might be useful for you here. It will read the content of the folder Pictures_dir and expect your images to be in folders by class (eg: Pictures_dir/class0 and Pictures_dir/class1). The generator, when called, will then create images from these folders and also import their label (in this example, 'class0' and 'class1').
There are plenty of other arguments to this generator, you can check them out in the Keras documentation (especially if you want to do data augmentation).
Note: this will take any image, be it PNG or JPG, as you requested
If you want to get the mapping from class names to label indices, do:
train_generator.class_indices
# {'class0': 0, 'class1': 1}
You can check what is going on with
plt.imshow(train_generator[0][0][0])
3 Extract all resized images from the Generator
Now you are ready to extract the images from the ImageGenerator:
def extract_images(generator, sample_count):
    images = np.zeros(shape=(sample_count, h, w, 3))
    labels = np.zeros(shape=(sample_count))
    i = 0
    for images_batch, labels_batch in generator:  # we are looping over batches
        images[i*batch_size : (i+1)*batch_size] = images_batch
        labels[i*batch_size : (i+1)*batch_size] = labels_batch
        i += 1
        if i*batch_size >= sample_count:
            # we must break after every image has been seen once,
            # because generators yield indefinitely in a loop
            break
    return images, labels
images, labels = extract_images(train_generator, N_images)
print(labels[0])
plt.imshow(images[0])
Now you have your images all at the same size in images, and their corresponding labels in labels, which you can then feed into any scikit-learn classifier of your choice.
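Once the images share one shape, the flatten-and-fit step from the question works without the ValueError; a sketch with random arrays standing in for the extracted Keras batch (the shapes and class count here are illustrative):

```python
import numpy as np
from sklearn import svm

h, w = 150, 150
images = np.random.rand(10, h, w, 3)    # stand-in for extract_images() output
labels = np.array([0, 1] * 5)

data = images.reshape(len(images), -1)  # scikit-learn expects 2-D input
model = svm.SVC()
model.fit(data, labels)
print(data.shape)  # (10, 67500)
```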
It's difficult because of the math operations behind the scenes (the details are out of scope); even if you managed to build your own algorithm, you still would not get the desired result.
I had this issue once with faces of different sizes; maybe this piece of code gives you a starting point.
from PIL import Image
import face_recognition
def face_detected(file_address=None, prefix='detect_'):
    if file_address is None:
        raise FileNotFoundError('File address required')
    image = face_recognition.load_image_file(file_address)
    face_location = face_recognition.face_locations(image)
    if face_location:
        face_location = face_location[0]
        UP = int(face_location[0] - (face_location[2] - face_location[0]) / 2)
        DOWN = int(face_location[2] + (face_location[2] - face_location[0]) / 2)
        LEFT = int(face_location[3] - (face_location[3] - face_location[2]) / 2)
        RIGHT = int(face_location[1] + (face_location[3] - face_location[2]) / 2)
        if UP - DOWN != LEFT - RIGHT:  # "is not" compared identity, not value
            height = UP - DOWN
            width = LEFT - RIGHT
            delta = width - height
            LEFT -= int(delta / 2)
            RIGHT += int(delta / 2)
        pil_image = Image.fromarray(image[UP:DOWN, LEFT:RIGHT, :])
        pil_image.thumbnail((50, 50), Image.ANTIALIAS)
        pil_image.save(prefix + file_address)
        return True
    pil_image = Image.fromarray(image)
    pil_image.thumbnail((200, 200), Image.ANTIALIAS)
    pil_image.save(prefix + file_address)
    return False
Note: I wrote this a long time ago; it may not be best practice.