How to save a bidimensional list as an image in Python?

I have a big bidimensional list of integer values. Each value represents a pixel and needs to map to a color, and obviously similar values need to have similar colors. Here is an example of my list:
list=[[0,10,3,9,23,0], [7,0,0,0,0,10], [12,1,2,7,11,12], [0,0,0,34,1,9]]
"list" is a rectangle of 4 rows, and each row has 6 columns. The value 0 needs to map to no color; in other words, 0 is the transparent color. I tried to use PIL but didn't obtain the right result. Here is the test code:
from PIL import Image
list=[[0,10,3,9,23,0], [7,0,0,0,0,10], [12,1,2,7,11,12], [0,0,0,34,1,9]]
new=Image.new("P", (4,6))
new.putdata(list)
new.save('test.tif')

The failure happens in new.putdata(list): putdata expects a flat sequence of pixel values, and a list of lists is not treated as one.
The fix is to flatten your 2D list into a 1D sequence. One way to do this is:
sequence = [list[x][y] for x in range(len(list)) for y in range(len(list[0]))]
Note also that Image.new takes the size as (width, height), so for 4 rows of 6 columns it should be (6, 4) rather than (4, 6). With both changes, the following code should work properly:
from PIL import Image
list=[[0,10,3,9,23,0], [7,0,0,0,0,10], [12,1,2,7,11,12], [0,0,0,34,1,9]]
new=Image.new("P", (6,4))
sequence = [list[x][y] for x in range(len(list)) for y in range(len(list[0]))]
new.putdata(sequence)
new.save('test.tif')
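For bigger lists, the same flattening can be done without index arithmetic. Below is a minimal alternative sketch (not part of the original answer): it renames the variable to data so it does not shadow the built-in list, flattens with itertools.chain.from_iterable, and uses mode "L" (8-bit greyscale) simply so that each raw value maps directly to a grey level.
from itertools import chain
from PIL import Image
# same data as above, renamed to avoid shadowing the built-in list
data = [[0, 10, 3, 9, 23, 0],
        [7, 0, 0, 0, 0, 10],
        [12, 1, 2, 7, 11, 12],
        [0, 0, 0, 34, 1, 9]]
# flatten row by row, preserving the left-to-right, top-to-bottom pixel order
sequence = list(chain.from_iterable(data))
new = Image.new("L", (6, 4))  # size is (width, height)
new.putdata(sequence)
new.save('test_grey.tif')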

Related

How to read image from pixel data?

I am trying to read an image from the CIFAR-10 dataset in MATLAB. The data is given in 10000x3072 format, in which each row contains the RGB values of one image. I used:
img= reshape(data(1, 1:1024), [32,32]);
image(img)
but it shows a garbage image instead of anything meaningful. How can I read the image from the .mat files in this dataset: https://www.cs.toronto.edu/~kriz/cifar-10-matlab.tar.gz
According to this page, the format of data is:
data -- a 10000x3072 numpy array of uint8s. Each row of the array stores a 32x32 colour image. The first 1024 entries contain the red channel values, the next 1024 the green, and the final 1024 the blue. The image is stored in row-major order, so that the first 32 entries of the array are the red channel values of the first row of the image.
Using your code:
img= reshape(data(1, 1:1024), [32,32]);
you should get the red channel of the first image in column-major order (i.e. transposed). To get a full RGB image with the correct orientation, you'll want to use:
img = reshape(data(1, 1:3072), [32,32,3]); % get 3-channel RGB image
img = permute(img, [2 1 3]); % exchange rows and columns
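As an aside (not part of the original answer), the dataset description above is written in terms of NumPy, and the equivalent unpacking in Python looks roughly like this; the random array is a hypothetical stand-in for one unpickled 10000x3072 batch:
import numpy as np
# hypothetical stand-in for one CIFAR-10 batch: 10000 rows of 3072 uint8 values
data = np.random.randint(0, 256, size=(10000, 3072), dtype=np.uint8)
img = data[0].reshape(3, 32, 32)   # split the row into the R, G and B planes
img = img.transpose(1, 2, 0)       # reorder to 32 x 32 x 3 (height, width, channel)
Because NumPy reshapes in row-major order, no extra swap of rows and columns is needed there, unlike the permute step in MATLAB.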

Change array shape of an image in python

When I read a colour image with OpenCV, its dimensions are 256x256x3, but I need to pass it to my neural network as a 3x256x256 array. How do I change the array shape while retaining the pixel locations and BGR channel order?
You can transpose the array, but move only the channel axis: arr.T would reverse all three axes and swap rows with columns, whereas np.transpose(arr, (2, 0, 1)) just brings the channels to the front and leaves the pixel locations alone. As an example, with a 10 x 10 picture:
import numpy as np
# my picture: 10 x 10 pixels with 3 channels
wrong_format = np.arange(300).reshape(10, 10, 3)
correct_format = np.transpose(wrong_format, (2, 0, 1))  # now 3 x 10 x 10
If it works properly, then correct_format[0, 1, 1] should be equal to wrong_format[1, 1, 0]. And we can see that it is:
correct_format[0, 1, 1] == wrong_format[1, 1, 0]
True
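As a side note (not from the original answer), np.moveaxis does the same reordering and is easy to invert for display; a small sketch:
import numpy as np
wrong_format = np.arange(300).reshape(10, 10, 3)
correct_format = np.moveaxis(wrong_format, -1, 0)  # channels first, same as transpose(2, 0, 1)
back = np.moveaxis(correct_format, 0, -1)          # back to height x width x channels
assert (back == wrong_format).all()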

Python regionprops scikit-image

I am using scikit-image to get the "regionprops" of a segmented image. I then wish to replace each of the segment labels with its corresponding statistic (e.g. eccentricity).
from skimage import segmentation
from skimage.measure import regionprops
#a segmented image
labels = segmentation.slic(img1, compactness=10, n_segments=200)
propimage = labels
#props loop
for region in regionprops(labels1, properties='eccentricity'):
    eccentricity = region.eccentricity
    propimage[propimage == region] = eccentricity
This runs, but the propimage values do not change from their original labels
I have also tried:
for i in range(0, max(labels)):
    prop = regions[i].eccentricity  # the way to calc a single prop
    propimage[i] = prop
This delivers this error
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
I am a recent migrant from MATLAB, where I have implemented this, but the data structures used are completely different.
Can anyone help me with this?
Thanks
Use ndimage from scipy: its sum() function can aggregate over your label array.
import numpy as np
from scipy import ndimage as nd
# label_file is assumed to be the (label_array, num_labels) pair returned by nd.label
sizes = nd.sum(label_file[0] > 0, labels=label_file[0], index=np.arange(0, label_file[1]))
You can then evaluate the distribution with numpy.histogram and so on.
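That said, what the question actually asks for (painting each label with its eccentricity) can be done directly with regionprops, since every region object carries its integer label. A minimal sketch, with a small hand-made label image standing in for the SLIC output:
import numpy as np
from skimage.measure import regionprops
# hypothetical stand-in for the SLIC label image in the question
labels = np.array([[1, 1, 2, 2, 2, 2],
                   [1, 1, 2, 2, 2, 2],
                   [3, 3, 3, 3, 3, 3],
                   [3, 3, 3, 3, 3, 3]])
propimage = np.zeros(labels.shape, dtype=float)  # float output, kept separate from the labels
for region in regionprops(labels):
    # region.label is the integer label, region.eccentricity the statistic
    propimage[labels == region.label] = region.eccentricity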

Matlab: Coding with arrays and writing data to excel sheet

I have obtained the blood vessels of an eye in an image variable ves. I found the number of connected components (8-connectivity) as blobs. For each blob I need to calculate the Area, Major axis length and Centroid and store these values in a matrix testfv (each row corresponding to one property). For a single blob, Area returns a 1x1 struct, Centroid returns a 1x2 struct, and MajorAxisLength returns a 1x1 struct. So I guess that, depending on the number of blobs, the number of cells required to store the Areas, Centroids and MajorAxisLengths varies, and using just one testfv to store these values as I have done would be wrong.
Is it possible? This is the code I tried (I assumed that testfv has 25 columns, which allows me to store up to 8 blobs' info):
[labeledImage numberOfBlobs] = bwlabel(ves, 8);
col=numberOfBlobs*2;
testfv = zeros(3,col);
for i=1:col
    blobMeasurements = regionprops(labeledImage, 'Area');
    testfv(1,col) = [blobMeasurements.Area];
    blobMeasurements = regionprops(labeledImage, 'MajorAxisLength');
    testfv(2,col) = [blobMeasurements.MajorAxisLength];
    blobMeasurements = regionprops(labeledImage, 'Centroid');
    testfv(3,col) = [blobMeasurements.Centroid];
end
I am getting the following error:
??? Subscripted assignment dimension mismatch.
Error in ==> alpha1 at 191
testfv(1,col) = [blobMeasurements.Area];
Also, I need to write the data of the testfv matrix to an Excel sheet. How do I do that?
I would really appreciate the help, as I am new to MATLAB.

Finding area of the image

I used the connected component labelling algorithm (bwconncomp) to label the different parts of a binary image in MATLAB. Now I need to calculate the area of the different labels and remove the labels with a smaller area. Can I use the default area-finding command, or are there any specific commands for that in MATLAB? Help...
From the documentation:
CC = bwconncomp(BW) returns the connected components CC found in BW. The binary image BW can have any dimension. CC is a structure with four fields...
The final field in CC is PixelIdxList, which is:
[a] 1-by-NumObjects cell array where the kth element in the cell array is a vector containing the linear indices of the pixels in the kth object.
You can find the area of each label by looking at the length of the corresponding entry in the cell array. Something like:
areas_in_pixels = cellfun(@length, CC.PixelIdxList);
The PixelIdxList is a cell array, each member of which contains the linear indexes of the pixels present in that connected component. The line of code above finds the length of each cell in the cell array - i.e. the number of pixels in each connected component.
I've used cellfun to keep the code short and efficient. A different way of writing the same thing would be something like:
areas_in_pixels = nan(1, length(CC.PixelIdxList));
for i = 1:length(CC.PixelIdxList)
    areas_in_pixels(i) = length(CC.PixelIdxList{i});
end
For each connected component, you can then find the size of that component in pixels by accessing an element in areas_in_pixels:
areas_in_pixels(34)  % area of connected component number 34
If you don't want to write lots of code like the above, you can just use MATLAB's built-in functions to get the area. Label your components, and from the properties of each component you can find out its area. Suppose Bw is the binary image:
[B,L] = bwboundaries(Bw,'noholes');
stats = regionprops(L,'Area','perimeter');
for k = 1:length(B)
    area(k) = stats(k).Area;
end
You can make this better still by avoiding the for loop with the following:
[B,L] = bwboundaries(Bw,'noholes');
stats = regionprops(L,'Area','perimeter');
area = [stats.Area];
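For readers doing the same thing in Python, a rough scikit-image/NumPy sketch of the per-label area computation (not part of the original answers; the tiny binary image is made up for illustration):
import numpy as np
from skimage import measure
# hypothetical binary image standing in for Bw
bw = np.zeros((8, 8), dtype=bool)
bw[1:3, 1:3] = True   # a 4-pixel blob
bw[5:8, 4:8] = True   # a 12-pixel blob
labels = measure.label(bw, connectivity=2)   # connectivity=2 is 8-connectivity in 2D
areas = np.bincount(labels.ravel())[1:]      # pixel count per label, skipping background
Small components can then be dropped by masking out the labels whose area falls below a threshold.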
