The best way to set the opacity of an image in Wand?

What is the best way to set the opacity of an image in Wand?
I'm using the most recent versions of ImageMagick (7.0.8-27 Q16 x64 2019-02-09) and Wand (0.5.1) on a Windows 7 computer.
I don't want to use transparent_color().
I want to set the alpha channel of an image for alpha-blended overlaying or compositing.
transparentize() does not set the opacity of an image. It merely darkens the image.
I've tried the following code, but it produced an error.
from wand.image import Image, CHANNELS
from wand.api import library
imageOverlay = Image(filename='mona-lisa.png')
imageOverlay.alpha_channel = 'opaque'
library.MagickSetImageOpacity(imageOverlay.wand, 0.2)
imageOverlay.save(filename='test_transparency.png')
library.MagickSetImageOpacity(imageOverlay.wand, 0.2)
TypeError: 'NoneType' object is not callable
I've also tried the following code, but it produced an error.
from wand.image import Image, CHANNELS
from wand.api import library
imageOverlay = Image(filename='mona-lisa.png')
imageOverlay.alpha_channel = 'opaque'
library.MagickEvaluateImage(imageOverlay.wand, 'multiply', 0.2, CHANNELS['alpha'])
imageOverlay.save(filename='test_transparency.png')
library.MagickEvaluateImage(imageOverlay.wand, 'multiply', 0.2, CHANNELS['alpha'])
ctypes.ArgumentError: argument 2: wrong type
In Wand, what's the most compact code for setting every alpha-channel pixel to a certain value (e.g. 0.2)?

Thanks to fmw42's comment, I now have a block of Wand code that uniformly sets the pixel values of the alpha channel.
from wand.image import Image
imageOverlay = Image(filename='mona-lisa.png')
imageOverlay.alpha_channel = True
imageOverlay.evaluate(operator='set', value=imageOverlay.quantum_range*0.2, channel='alpha')
imageOverlay.save(filename='test_transparency.png')
The question has been answered.
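For the alpha-blended compositing mentioned at the top, here is a minimal sketch of one way to use the result (untested; it assumes a hypothetical base image named background.png and relies on Wand's composite() performing 'over' blending, which honours the overlay's alpha channel):
from wand.image import Image

with Image(filename='background.png') as base, \
     Image(filename='mona-lisa.png') as overlay:
    overlay.alpha_channel = True
    # set every alpha pixel to 20% of quantum_range, as in the answer above
    overlay.evaluate(operator='set',
                     value=overlay.quantum_range * 0.2,
                     channel='alpha')
    base.composite(overlay, left=0, top=0)
    base.save(filename='test_composite.png')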

Related

Bokeh rotated image blocks underlying image

I'm placing a rotated image on top of another image with a different anchor point in the same figure. However, the top image partially covers the bottom image, as shown below. Is there a way to remove the black border of the rotated image?
Sample code here:
from bokeh.server.server import Server
from bokeh.application import Application
from bokeh.application.handlers.function import FunctionHandler
from bokeh.plotting import figure, ColumnDataSource, show
from bokeh.layouts import column
from bokeh.models.tools import PanTool, BoxZoomTool, WheelZoomTool, \
    UndoTool, RedoTool, ResetTool, SaveTool, HoverTool
import numpy as np
from collections import namedtuple
from scipy import ndimage
def make_document(doc):
    p = figure(match_aspect=True)
    Anchor = namedtuple('Anchor', ['x', 'y'])
    img1 = np.random.rand(256, 256)
    anchor1 = Anchor(x=0, y=0)
    img2 = np.random.rand(256, 256)
    anchor2 = Anchor(x=100, y=100)
    img2 = ndimage.rotate(img2, 45, reshape=True)
    p.image(image=[img1], x=anchor1.x, y=anchor1.y,
            dw=img1.shape[0], dh=img1.shape[1], palette="Greys256")
    p.image(image=[img2], x=anchor2.x, y=anchor2.y,
            dw=img2.shape[0], dh=img2.shape[1], palette="Greys256")
    doc.add_root(column(p, sizing_mode='stretch_both'))
apps = {'/': make_document}
server = Server(apps)
server.start()
server.io_loop.add_callback(server.show, "/")
try:
    server.io_loop.start()
except KeyboardInterrupt:
    print('keyboard interruption')
print('Done')
When you rotate an image, the new empty regions (black triangles on your image) are by default initialized with 0 (check out the mode and cval options at https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.rotate.html).
If you have a value that you know for sure will never be used in an image, you can pass it as cval. Then, you should be able to manually create a color mapper that maps that value to a transparent pixel and use the mapper instead of the palette (the arg name would be color_mapper).
If you don't have such a value, then you will have to use image_rgba and just make sure that whatever cval you decide to use will result in a transparent pixel.
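A minimal sketch of that first suggestion, meant to slot into make_document above (untested; it assumes the random image values stay in [0, 1], so -1 can serve as the never-used cval, and that p and anchor2 are the objects from the question's code):
from bokeh.models import LinearColorMapper

img2 = np.random.rand(256, 256)
# fill the new corner regions with -1 instead of 0
img2 = ndimage.rotate(img2, 45, reshape=True, cval=-1)

# anything below low is drawn with low_color, i.e. fully transparent
mapper = LinearColorMapper(palette="Greys256", low=0, high=1,
                           low_color=(0, 0, 0, 0))
p.image(image=[img2], x=anchor2.x, y=anchor2.y,
        dw=img2.shape[0], dh=img2.shape[1], color_mapper=mapper)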

skimage threshold_local does not work with pictures loaded using io.imread

I was trying out one of the sample Python scripts available on the scikit-image website. The script demonstrates Otsu segmentation at a local level. It works with pictures loaded using data.page(), but not with pictures loaded using io.imread(). Any suggestions?
https://scikit-image.org/docs/dev/auto_examples/applications/plot_thresholding.html#sphx-glr-auto-examples-applications-plot-thresholding-py
Picture file
Actual output - the Local thresholding window is empty
As you can see, global thresholding has worked, but local thresholding has failed to produce any results.
Strangely, if I use data.page() then everything works fine.
Script
from skimage import io
from skimage.color import rgb2gray
import matplotlib.pyplot as plt
from skimage.filters import threshold_otsu, threshold_local
import matplotlib
from skimage import data
from skimage.util import img_as_ubyte
filename="C:\\Lenna.png"
mypic = img_as_ubyte(io.imread(filename))
#image = data.page() #This works - why not io.imread ?
imagefromfile=io.imread(filename)
image = rgb2gray(imagefromfile)
global_thresh = threshold_otsu(image)
binary_global = image > global_thresh
block_size = 35
local_thresh = threshold_local(image, block_size, offset=10)
binary_local = image > local_thresh
fig, axes = plt.subplots(nrows=3, figsize=(7, 8))
ax = axes.ravel()
plt.gray()
ax[0].imshow(image)
ax[0].set_title('Original')
ax[1].imshow(binary_global)
ax[1].set_title('Global thresholding')
ax[2].imshow(binary_local)
ax[2].set_title('Local thresholding')
for a in ax:
    a.axis('off')
plt.show()
If you load the Lenna.png and print its shape, you will see it is a 4-channel RGBA image rather than a 3-channel RGB image.
print mypic.shape
(512, 512, 4)
I am not sure which parts of your code apply to which image, so I am not sure where to go next, but I guess you want to just get the RGB part and discard the alpha:
RGB = mypic[...,:3]
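A minimal sketch of how that could be folded into the posted script (untested; it keeps the same C:\\Lenna.png path and uses img_as_ubyte so the value range matches what data.page() returns):
from skimage import io
from skimage.color import rgb2gray
from skimage.util import img_as_ubyte

imagefromfile = io.imread("C:\\Lenna.png")
rgb = imagefromfile[..., :3]            # discard the alpha channel
image = img_as_ubyte(rgb2gray(rgb))     # 2-D uint8 array, same range as data.page()
The rest of the thresholding code should then behave the same way it does for data.page().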

How to force python to write 3 channel png image

I am writing some scripts to do image processing (preparing large batches of image data for use in a convolutional neural network). As part of that process, I am tiling a single large image into many smaller images. The single large image is a 3-channel (RGB) .png image. However, when I use matplotlib.image.imsave to save the image, it becomes 4-channel. A minimal working example of the code is below (note: Python 2.7).
#!/usr/bin/env python
import matplotlib.image as mpimg
original_image = mpimg.imread('3-channel.png')
print original_image.shape
mpimg.imsave('new.png', original_image)
unchanged_original_image = mpimg.imread('new.png')
print unchanged_original_image.shape
The output of which is:
(300, 200, 3)
(300, 200, 4)
My question is: why does matplotlib.image.imsave force the 4th channel to be there? And (most importantly), what can I do to make sure only the three color channels (RGB) are saved?
The example image I created is below:
If it doesn't need to be matplotlib, you could use scipy.misc.toimage():
import matplotlib.image as mpimg
import scipy.misc
original_image = mpimg.imread("Bc11g.png")
print original_image.shape
# prints (200L, 300L, 3L)
mpimg.imsave('Bc11g_new.png', original_image)
unchanged_original_image = mpimg.imread('Bc11g_new.png')
print unchanged_original_image.shape
# prints (200L, 300L, 4L)
#now use scipy.misc
scipy.misc.toimage(original_image).save('Bc11g_new2.png')
unchanged_original_image2 = mpimg.imread('Bc11g_new2.png')
print unchanged_original_image2.shape
# prints (200L, 300L, 3L)
Note that scipy.misc.toimage is deprecated as of SciPy 1.0.0 and will be removed in 1.2.0: https://docs.scipy.org/doc/scipy-1.2.1/reference/generated/scipy.misc.toimage.html
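Since scipy.misc.toimage is now gone from SciPy, here is a hedged alternative sketch (assuming Pillow is available; Bc11g.png and the expected shape are taken from the answer above). Pillow writes exactly the channels it is given, so a 3-channel uint8 array stays 3-channel:
import numpy as np
import matplotlib.image as mpimg
from PIL import Image

original_image = mpimg.imread('Bc11g.png')        # float RGB array in [0, 1]
rgb8 = (original_image * 255).astype(np.uint8)    # convert back to 8-bit
Image.fromarray(rgb8, mode='RGB').save('Bc11g_new3.png')

print(mpimg.imread('Bc11g_new3.png').shape)
# prints (200, 300, 3)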

Python regionprops scikit-image

I am using scikit-image to get the "regionprops" of a segmented image. I then wish to replace each of the segment labels with its corresponding statistic (e.g. eccentricity).
from skimage import segmentation
from skimage.measure import regionprops
#a segmented image
labels = segmentation.slic(img1, compactness=10, n_segments=200)
propimage = labels
# props loop
for region in regionprops(labels, properties='eccentricity'):
    eccentricity = region.eccentricity
    propimage[propimage == region] = eccentricity
This runs, but the propimage values do not change from their original labels
I have also tried:
for i in range(0, max(labels)):
    prop = regions[i].eccentricity  # the way to calc a single prop
    propimage[i] = prop
This produces the following error:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
I am a recent migrant from matlab where I have implemented this, but the data structures used are completely different.
Can any one help me with this?
Thanks
Use ndimage from scipy: its sum() function can operate using your label array.
from scipy import ndimage as nd
import numpy as np
sizes = nd.sum(label_file[0] > 0, labels=label_file[0], index=np.arange(0, label_file[1]))
You can then evaluate the distribution with numpy.histogram and so on.
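For the original goal of painting each superpixel with its eccentricity, here is a minimal sketch (untested; img1 is the input image from the question, and the key point is to compare the label image against region.label, the integer id, rather than against the RegionProps object itself):
import numpy as np
from skimage import segmentation
from skimage.measure import regionprops

labels = segmentation.slic(img1, compactness=10, n_segments=200)
propimage = np.zeros(labels.shape, dtype=float)
for region in regionprops(labels):
    # region.label is the integer id used in the label image
    propimage[labels == region.label] = region.eccentricity
Note that regionprops treats label 0 as background, so depending on the skimage version slic may need start_label=1 (or labels + 1) to keep the first segment.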

Is it possible to have black and white and color image on same window by using opencv?

Is it possible to have a black-and-white and a color image in the same window using the OpenCV library? How can I have both of these images in the same window?
fraxel's answer has solved the problem with the old cv interface. I would like to show it using the cv2 interface, just to show how easy this is in the new cv2 module. (Maybe it will be helpful for future visitors.) Below is the code:
import cv2
import numpy as np
im = cv2.imread('kick.jpg')
img = cv2.imread('kick.jpg',0)
# Convert the grayscale image to a 3-channel image, so that they can be stacked together
imgc = cv2.cvtColor(img,cv2.COLOR_GRAY2BGR)
both = np.hstack((im,imgc))
cv2.imshow('imgc',both)
cv2.waitKey(0)
cv2.destroyAllWindows()
And below is the output I got:
Yes it is. Here is an example, with explanation in the comments:
import cv
#open color and b/w images
im = cv.LoadImageM('1_tree_small.jpg')
im2 = cv.LoadImageM('1_tree_small.jpg',cv.CV_LOAD_IMAGE_GRAYSCALE)
#set up our output and b/w in rgb space arrays:
bw = cv.CreateImage((im.width,im.height), cv.IPL_DEPTH_8U, 3)
new = cv.CreateImage((im.width*2,im.height), cv.IPL_DEPTH_8U, 3)
#create a b/w image in rgb space
cv.Merge(im2, im2, im2, None, bw)
#set up and add the color image to the left half of our output image
cv.SetImageROI(new, (0,0,im.width,im.height))
cv.Add(new, im, new)
#set up and add the b/w image to the right half of output image
cv.SetImageROI(new, (im.width,0,im.width,im.height))
cv.Add(new, bw, new)
cv.ResetImageROI(new)
cv.ShowImage('double', new)
cv.SaveImage('double.jpg', new)
cv.WaitKey(0)
It's in Python, but easy to convert to whatever you need.
A small improvement to the code, in more modern style: use concatenate instead of hstack, which is discontinued (stack can also be used).
import cv2
import numpy as np
im = cv2.imread('kick.jpg')
img = cv2.imread('kick.jpg',0)
# Convert the grayscale image to a 3-channel image, so that they can be stacked together
imgc = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
both = np.concatenate((im, imgc), axis=1)  # axis=1: horizontal, axis=0: vertical
cv2.imshow('imgc',both)
cv2.waitKey(0)
cv2.destroyAllWindows()
import cv2
img = cv2.imread("image.jpg" , cv2.IMREAD_GRAYSCALE)
cv2.imshow("my image",img)
cv2.waitKey(0)
cv2.destroyAllWindows()
# The image file should be in the application folder.
# The displayed window will be titled 'my image'.
# The last line frees up the window's memory.
