Different images in Image.show() and Image.save() in PIL

I generate a PIL image from a NumPy array. The image shown by the show function differs from what is saved by the save function called directly after show. Why might that be the case? How can I solve this issue? I am using the TIFF file format and viewing both images in the Windows Photos app.
from PIL import Image
import numpy as np
orig_img = Image.open('img.tif')
dent = Image.open('mask.tif')
img_np = np.asarray(orig_img)
dent_np = np.asarray(dent)
dented = img_np*0.5 + dent_np*0.5
im = Image.fromarray(dented)
im.show('dented')
im.save("dented_2.tif", "TIFF")
Edit: I figured out that the save function saves correctly if the pixel values in the NumPy array called 'dented' are normalized to the 0-1 range. However, the show function then shows the image as completely black.

I suspect the problem is related to the dtype of your variable dented: the arithmetic with 0.5 promotes the array to float64. Try:
print(img_np.dtype, dented.dtype)
As a possible solution, you could use:
im = Image.fromarray(dented.astype(np.uint8))
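Putting it together, a minimal sketch of the corrected workflow (assuming img.tif and mask.tif are ordinary 8-bit images of the same size):
from PIL import Image
import numpy as np

orig_img = Image.open('img.tif')
dent = Image.open('mask.tif')

img_np = np.asarray(orig_img)
dent_np = np.asarray(dent)

dented = img_np * 0.5 + dent_np * 0.5          # float64 at this point
im = Image.fromarray(dented.astype(np.uint8))  # back to uint8, so show() and save() agree

im.show('dented')
im.save("dented_2.tif", "TIFF")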
You don't actually need to go to NumPy to do the maths and then convert back if you want the mean of two images, because you can do that directly with PIL:
from PIL import ImageChops
mean = ImageChops.add(imA, imB, scale=2.0)

Related

Making interactive plot with python: images with url link

I would like to plot some images I web-scraped from a website, and I want to make an interactive plot in a Jupyter notebook/Colab where the interactivity is being able to click on the images, which takes you to the URL where I got the image.
I get the images and have them as follows:
im = Image.open(requests.get(df_info['Img'][i], stream=True).raw)
Then I found some code using ImageTk that looks like the following. But the problem is that when I set the command option, I'm feeding it a function open that requires an argument (the URL).
from tkinter import Tk, Canvas, Button
from PIL import Image, ImageTk
import webbrowser
import requests

root = Tk()
canvas = Canvas(root, width=600, height=600)
canvas.pack()

def open(url):
    webbrowser.open(url)

img_file = Image.open(requests.get(df_info['Img'][0], stream=True).raw)
img_file = img_file.resize((150, 150))
img = ImageTk.PhotoImage(img_file)
b1 = Button(canvas, image=img, command=open).pack()
root.mainloop()
I am not sure I have to use this ImageTk framework, either. It would be great if there was a way to just use matplotlib functions as well. But the key is to have that clickability on the pictures in the Jupyter notebook.
Could someone please help me?
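In case the Tkinter route is kept: Button's command expects a zero-argument callable, so one common workaround is to bind the argument with a lambda. A minimal sketch (df_info is the scraped DataFrame from above; the URL used here is only a placeholder for whatever link the click should open):
from tkinter import Tk, Canvas, Button
from PIL import Image, ImageTk
import webbrowser
import requests

root = Tk()
canvas = Canvas(root, width=600, height=600)
canvas.pack()

url = df_info['Img'][0]  # placeholder: the link the click should open
img_file = Image.open(requests.get(df_info['Img'][0], stream=True).raw).resize((150, 150))
img = ImageTk.PhotoImage(img_file)

# bind the URL at definition time so the callback takes no arguments
Button(canvas, image=img, command=lambda u=url: webbrowser.open(u)).pack()

root.mainloop()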

Handling matplotlib.figure.Figure using openCV

I am using the following code to find the spectrogram of a signal and save it.
spec,freq,t,im = plt.specgram(raw_signal,Fs=100,NFFT=100,noverlap=50)
plt.axis('off')
figure = plt.gcf()
figure.set_size_inches(12, 1)
plt.savefig('spectrogram',bbox_inches = 'tight',pad_inches=0)
But I have multiple spectrograms like this, and the end product I need is a concatenation of all of them. Right now I am saving all the individual images with plt.savefig() as above, reading them back with cv2.imread(), and concatenating them. I don't think this process is very good, so is there any other way I can do this without saving and re-reading?
One possible idea I have is, somehow converting matplotlib.figure.Figure into a format that can be handled by OpenCV (specifically cv2). However, it should also not have white padding.
You can get the image as an array using buffer_rgba (don't forget to draw the figure first). Then, for OpenCV, you need to convert the image from RGB to OpenCV's BGR channel ordering.
import matplotlib.pyplot as plt
import numpy as np
import cv2

raw_signal = np.random.random(1000)

spec, freq, t, im = plt.specgram(raw_signal, Fs=100, NFFT=100, noverlap=50)
plt.axis('off')
figure = plt.gcf()
figure.set_size_inches(12, 1)
figure.set_dpi(50)
figure.canvas.draw()

# crop the RGBA buffer to the axes' window extent so the white figure padding is excluded
b = figure.axes[0].get_window_extent()
img = np.array(figure.canvas.buffer_rgba())
img = img[int(b.y0):int(b.y1), int(b.x0):int(b.x1), :]

# convert matplotlib's RGBA to OpenCV's BGRA channel ordering
img = cv2.cvtColor(img, cv2.COLOR_RGBA2BGRA)

cv2.imshow('OpenCV', img)
cv2.waitKey(0)
Top: matplotlib, bottom: OpenCV.
Don't save the figure. Matplotlib happens to have a convenience function for displaying time-series data this way, but that is not how you deal with spectrograms; any handling of spectrogram "pictures" is a kludge.
Use scipy.signal.spectrogram to get the actual spectrogram data.
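A minimal sketch of that approach, with parameters chosen to mirror the plt.specgram call above (adjust to your data):
import numpy as np
from scipy import signal

raw_signal = np.random.random(1000)

# Sxx is the spectrogram as a plain 2D array: frequency bins x time bins
f, t, Sxx = signal.spectrogram(raw_signal, fs=100, nperseg=100, noverlap=50)

# multiple spectrograms can then be concatenated directly as arrays,
# e.g. np.hstack([Sxx1, Sxx2]), without ever writing image files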

Read the picture as a grayscale numpy array, and save it back

I tried the following, expecting to see the grayscale version of the source image:
from PIL import Image
import numpy as np
img = Image.open("img.png").convert('L')
arr = np.array(img.getdata())
field = np.resize(arr, (img.size[1], img.size[0]))
out = field
img = Image.fromarray(out, mode='L')
img.show()
But for some reason, the whole image is pretty much a lot of dots with black in between. Why does this happen?
When you create the NumPy array from the image data of your Pillow object, be advised that the default dtype of the array is a native integer type (e.g. int32), not uint8. I'm assuming that your data is actually uint8, as most images seen in practice are. Therefore, you must explicitly ensure that the array has the same type as the data in your image. Simply put, make the array uint8 when you get the image data, which is the fourth line of your code [1]:
arr = np.array(img.getdata(), dtype=np.uint8) # Note the dtype input
[1] Note that I've added two lines at the beginning of your code to import the necessary packages so that it runs (albeit with a local image).
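For completeness, a minimal sketch of the pipeline from the question with that one change applied (assuming img.png is an ordinary 8-bit image):
from PIL import Image
import numpy as np

img = Image.open("img.png").convert('L')
arr = np.array(img.getdata(), dtype=np.uint8)        # keep the pixel data as uint8
field = np.resize(arr, (img.size[1], img.size[0]))   # rows = height, columns = width

out = Image.fromarray(field, mode='L')
out.show()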

Python 3.4: Where is the image library?

I am trying to display images with only built-in functions, and there are plenty of Tkinter examples online. However, none of the following imports work:
import Image          # none of these exist
import tkinter.Image
import _tkinter.Image
# etc.
However, tkinter does exist; a hello-world with buttons worked fine.
I am on a MacBook Pro (OS X 10.6.8) and using PyCharm.
Edit: The best way so far (a little slow but tolerable):
Get the pixel array as a 2D list (you can use a third-party .py to load your image).
Now you make the data string from the pixels like this (this is the weirdest format I have seen; why not a simple 2D array?). This may be sideways, so you may get an error for non-square images; I will have to check.
Imports:
from tkinter import *
import tkinter
data = list()  # the image is x pixels by y pixels
y = len(pixels)
x = len(pixels[0])
for i in range(y):
    data.append('{')  # each row of pixels is wrapped in braces
    for j in range(x):
        data.append(pixels[i][j] + " ")
    data.append("} ")
data = "".join(data)
Now you can create an image and use put:
# PhotoImage is builtin (tkinter).
# It does NOT need PIL, Pillow, or any other externals.
im = PhotoImage(width=x, height=y)
im.put(data)
Finally, attach it to the canvas:
canvas = tkinter.Canvas(width=x, height=y)
canvas.pack()
canvas.create_image(x/2, y/2, image=im)  # x/2 and y/2 are the center
tkinter.mainloop()  # enter the main loop and the image will be drawn
The image must be kept in a global reference, or else it may not show up because the garbage collector gets greedy.
PIL hasn't been updated since 2009, and its Python 3 support has been terminally stuck at "later."
Instead, try Pillow, a fork of PIL that provides Python 3 support.
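A minimal sketch of displaying an image in tkinter via Pillow (assuming Pillow is installed with pip install pillow; 'img.png' is a placeholder path):
import tkinter
from PIL import Image, ImageTk

root = tkinter.Tk()
photo = ImageTk.PhotoImage(Image.open("img.png"))  # 'img.png' is a placeholder
label = tkinter.Label(root, image=photo)           # keep 'photo' referenced so it is not garbage-collected
label.pack()
root.mainloop()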

adding contrast to an image in python

I am trying to work through a process where I take an astronomical FITS file, subtract a master flat file, and then alter the contrast of the resultant image.
The first part has been done successfully, but my image lacks contrast. Here's my code:
from astropy.io.fits import getdata
import numpy
import numpy as np
import scipy
import Image
import PIL
import os
os.chdir("/localdir/")
from scipy import misc
import ImageEnhance
image = getdata('23484748.fts')
flat = getdata('Masterflat.fit')
normalized_flat = flat / numpy.mean(flat)
calibrated_image = image / normalized_flat
pix=numpy.fliplr(calibrated_image)
# the problem starts about here. How do I alter the contrast of pix?
from matplotlib import pyplot as plt
misc.imsave('saved image.gif', pix) # uses the Image module (PIL)
plt.imshow(pix, interpolation='nearest')
plt.show()
Now, before you tell me all about PIL functions, Matplotlib, etc.: I have tried these without success.
I have tried to use Image.fromarray to convert my NumPy array into an image, but the resultant image displays as pure white.
How can I take my numpy array (pix) and change its contrast?
For testing purposes I have put the two sample files at http://members.optusnet.com.au/berrettp/
Thanking you
Peter
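For what it's worth, a minimal sketch of one common approach, percentile-based contrast stretching on the NumPy array itself (an illustration, not a confirmed solution for these particular FITS files; it assumes pix is the float array produced by the calibration step above):
import numpy as np
from matplotlib import pyplot as plt

lo, hi = np.percentile(pix, (1, 99))               # clip the darkest/brightest 1% of pixels
stretched = np.clip((pix - lo) / (hi - lo), 0, 1)  # rescale to the 0-1 range
pix8 = (stretched * 255).astype(np.uint8)          # 8-bit array, safe to hand to PIL or imshow

plt.imshow(pix8, cmap='gray', interpolation='nearest')
plt.show()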
