Sizing a frame to an image in wxPython

I'm making a program with an image for a backdrop, and what I'm trying to do is get the frame to fit the image precisely.
It's easy to initialize the frame with the dimensions of the image:
wx.Frame.__init__(self, parent, title=title, size=(500, 300))
but because this also accounts for the borders and header of the window, this isn't entirely accurate. Short of manually adjusting the pixel size (which wouldn't be consistent cross-OS anyway), what can I do?
Edit: I've found an answer, but it looks like I can't self-answer for a few hours. In the meantime...
Backdrop = wx.Bitmap("image.png")
self.SetClientSize((Backdrop.GetWidth(), Backdrop.GetHeight()))
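Put into a frame's __init__, that fix might look roughly like this (a minimal sketch; the class name is just a placeholder):
import wx

class BackdropFrame(wx.Frame):
    def __init__(self, parent, title):
        wx.Frame.__init__(self, parent, title=title)
        backdrop = wx.Bitmap("image.png")
        # SetClientSize sizes the drawable area, so window borders and the
        # title bar are excluded automatically on every platform
        self.SetClientSize((backdrop.GetWidth(), backdrop.GetHeight()))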

You could accomplish the same thing with a sizer, which would also make things easier if you ever need to include other items alongside the image and control how they scale with the frame.
Here's a basic example of a frame that resizes itself to fit an image.
import wx

class Frame(wx.Frame):
    def __init__(self, parent, id, title, img_path):
        wx.Frame.__init__(self, parent, id, title,
                          style=wx.DEFAULT_FRAME_STYLE ^ wx.RESIZE_BORDER)
        image = wx.StaticBitmap(self, wx.ID_ANY)
        image.SetBitmap(wx.Bitmap(img_path))
        sizer = wx.BoxSizer()
        sizer.Add(image)
        self.SetSizerAndFit(sizer)
        self.Show(True)

app = wx.App()
frame = Frame(None, wx.ID_ANY, 'Image', '/path/to/file.png')
app.MainLoop()

Related

Matplotlib image displaying in place of button

I'm having trouble using Matplotlib. My aim is to create a program that displays an image, with buttons that allow it to be edited.
I started with the button for picking an image, and I've already hit a problem. I want the image to load in the center of the window, but it loads in place of the button.
How do I create an axes with a fixed position, and how do I select it to display the image?
import matplotlib.pyplot as plt
from matplotlib.widgets import Button
import tkinter.filedialog as dialog

class Index(object):
    def load(self, event):
        filename = dialog.askopenfilename()
        img = plt.imread(filename)
        plt.imshow(img)

callback = Index()
axload = plt.axes([0.59, 0.05, 0.1, 0.075])
bload = Button(axload, 'Load')
bload.on_clicked(callback.load)
plt.show()
plt.close()
Okay, I found the answer myself.
To create a new axes:
ax = plt.subplot(111)
and then, to use it instead of the current one, simply call
ax.imshow(img)
instead of
plt.imshow(img)
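For context, here is a minimal sketch (not part of the original answer) of how that fix might slot into the program above. It uses an explicit axes position rather than subplot(111) so the image does not overlap the Load button; the position values are placeholders:
import matplotlib.pyplot as plt
from matplotlib.widgets import Button
import tkinter.filedialog as dialog

# A dedicated axes for the image, created up front so imshow() never
# draws into the button's axes (position values are placeholders)
ax_img = plt.axes([0.1, 0.2, 0.8, 0.7])
ax_img.axis('off')

class Index(object):
    def load(self, event):
        filename = dialog.askopenfilename()
        img = plt.imread(filename)
        ax_img.imshow(img)   # draw into the image axes, not the current one
        plt.draw()           # refresh the figure after the button callback

callback = Index()
axload = plt.axes([0.59, 0.05, 0.1, 0.075])
bload = Button(axload, 'Load')
bload.on_clicked(callback.load)
plt.show()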

Overlaying a box on label image using Tkinter

I am using Tkinter and the grid() layout manager to create a GUI. I am showing the image in my GUI using a label, on a tabbed window:
label2 = ttk.Label(tab2)
image2 = PhotoImage(file="lizard.gif")
label2['image'] = image2
label2.grid(column=0, row=0, columnspan=3)
For illustration, let's say the image is 300 x 900. If I know a set of coordinates within the image, how can I overlay a shaded box on the image defined by those known coordinates (A, B, C, D, shown here just for illustration)?
Let me give you a step-by-step solution.
You can use a tkinter.Label() to display your image as you did, or you can choose other widgets. For this situation, let's use the tkinter.Canvas() widget instead (the same reasoning is valid if you choose to use tkinter.Label()).
Technical issues:
Your problem contains 2 main sub-problems to resolve:
How to overlay 2 images the way you want.
How to display an image using tkinter.Canvas()
To be able to read an image in JPEG format, you need to use a specific PIL (or its Pillow fork) method and class:
PIL.Image.open()
PIL.ImageTk.PhotoImage()
This is done with these lines in the program below:
self.im = Image.open(self.saved_image)
self.photo = ImageTk.PhotoImage(self.im)
And then display self.photo in the self.canvas widget we opted for:
self.canvas.create_image(0,0, anchor=tk.N+tk.W, image = self.photo)
Second, to reproduce the effect you want, use the cv2.addWeighted() OpenCV method. But I feel you have already done that, so I'll just show you the portion of the program that does it:
self.img = cv2.imread(self.image_to_read)
self.overlay = self.img.copy()
cv2.rectangle(self.overlay, (500,50), (400,100), (0, 255, 0), -1)
self.opacity = 0.4
cv2.addWeighted(self.overlay, self.opacity, self.img, 1 - self.opacity, 0, self.img)
cv2.imwrite( self.saved_image, self.img)
Program design:
I use 2 methods:
- __init__(): Prepare the frame and call the GUI initialization method.
- initialize_user_interface(): Draw the GUI and perform the previous operations.
But for scalability reasons, it is better to create a separate method to handle the different operations of the image.
Full program (OpenCV + tkinter)
Here is the source code (I used Python 3.4):
'''
Created on Apr 05, 2016
#author: Bill Begueradj
'''
import tkinter as tk
from PIL import Image, ImageTk
import cv2
import numpy as np
import PIL

class Begueradj(tk.Frame):
    '''
    classdocs
    '''
    def __init__(self, parent):
        '''
        Prepare the frame and call the GUI initialization method.
        '''
        tk.Frame.__init__(self, parent)
        self.parent = parent
        self.initialize_user_interface()

    def initialize_user_interface(self):
        """Draw a user interface allowing the user to type
        """
        self.parent.title("Bill BEGUERADJ: Image overlay with OpenCV + Tkinter")
        self.parent.grid_rowconfigure(0, weight=1)
        self.parent.grid_columnconfigure(0, weight=1)
        self.image_to_read = 'begueradj.jpg'
        self.saved_image = 'bill_begueradj.jpg'
        self.img = cv2.imread(self.image_to_read)
        self.overlay = self.img.copy()
        cv2.rectangle(self.overlay, (500, 50), (400, 100), (0, 255, 0), -1)
        self.opacity = 0.4
        cv2.addWeighted(self.overlay, self.opacity, self.img, 1 - self.opacity, 0, self.img)
        cv2.imwrite(self.saved_image, self.img)
        self.im = Image.open(self.saved_image)
        self.photo = ImageTk.PhotoImage(self.im)
        self.canvas = tk.Canvas(self.parent, width=580, height=360)
        self.canvas.grid(row=0, column=0)
        self.canvas.create_image(0, 0, anchor=tk.N + tk.W, image=self.photo)

def main():
    root = tk.Tk()
    d = Begueradj(root)
    root.mainloop()

if __name__ == "__main__":
    main()
Demo: (screenshot of the running program omitted)
You will need to use a canvas widget. That will allow you to draw an image, and then overlay a rectangle on it.
Although the above answers were wonderfully in depth, they did not fit my exact situation (specifically the use of Python 2.7, etc.). However, this solution gave me exactly what I was looking for:
canvas = Canvas(tab2, width=875, height=400)
image2=PhotoImage(file='lizard.gif')
canvas.create_image(440,180,image=image2)
canvas.grid(column=0, row=0, columnspan=3)
The rectangle is added over the canvas using:
x1, y1, x2, y2 = 3, 10, 30, 20
canvas.create_rectangle(x1, y1, x2, y2, fill="blue", stipple="gray12")
stipple comes from this example, to help add transparency to the rectangle.
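As a side note (not from the original answer), Tk also ships denser built-in stipple bitmaps if you want the box to look more opaque:
# Other built-in Tk stipple bitmaps, from lightest to densest shading:
#   "gray12", "gray25", "gray50", "gray75"
canvas.create_rectangle(x1, y1, x2, y2, fill="blue", stipple="gray50")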

Is there a max image size (pixel width and height) within wx where png images lose their transparency?

Initially, I loaded in 5 .png files with transparent backgrounds using wx.Image(), and every single one kept its transparent background and looked the way I wanted it to on the canvas (it kept the background of the canvas). These PNG images were about (200, 200) in size. I then loaded a PNG image with a transparent background that was about (900, 500) in size onto the canvas, and the transparency became a black box around the image. Next, I opened the image up with GIMP and exported the transparent image at a smaller size. When I loaded that image into Python, it kept its transparency. Is there a max image size (pixel width and height) within wx where PNG images lose their transparency? Any info would help. Keep in mind that I can't resize the picture before it is loaded into wxPython; if I do that, it will have already lost its transparency.
import wx
import os

def opj(path):
    return apply(os.path.join, tuple(path.split('/')))

def saveSnapShot(dcSource):
    size = dcSource.Size
    bmp = wx.EmptyBitmap(size.width, size.height)
    memDC = wx.MemoryDC()
    memDC.SelectObject(bmp)
    memDC.Blit(0, 0, size.width, size.height, dcSource, 0, 0)
    memDC.SelectObject(wx.NullBitmap)
    img = bmp.ConvertToImage()
    img.SaveFile('path to new image created', wx.BITMAP_TYPE_JPEG)

def main():
    app = wx.App(None)
    testImage = wx.Image(opj('path to original image'), wx.BITMAP_TYPE_PNG).ConvertToBitmap()
    draw_bmp = wx.EmptyBitmap(1500, 1500)
    canvas_dc = wx.MemoryDC(draw_bmp)
    background = wx.Colour(208, 11, 11)
    canvas_dc.SetBackground(wx.Brush(background))
    canvas_dc.Clear()
    canvas_dc.DrawBitmap(testImage, 0, 0)
    saveSnapShot(canvas_dc)

if __name__ == '__main__':
    main()
I don't know if I got this right, but if I convert your example from MemoryDC to PaintDC, I can fix the transparency issue. The key was to pass True as useMask in the DrawBitmap method. If the useMask parameter is omitted, it defaults to False and no transparency is used.
The documentation is here: http://www.wxpython.org/docs/api/wx.DC-class.html#DrawBitmap
I hope this is what you wanted to do...
import wx

class myFrame(wx.Frame):
    def __init__(self, testImage):
        wx.Frame.__init__(self, None, size=testImage.Size)
        self.Bind(wx.EVT_PAINT, self.OnPaint)
        self.testImage = testImage
        self.Show()

    def OnPaint(self, event):
        dc = wx.PaintDC(self)
        background = wx.Colour(255, 0, 0)
        dc.SetBackground(wx.Brush(background))
        dc.Clear()
        # dc.DrawBitmap(self.testImage, 0, 0)       # black background
        dc.DrawBitmap(self.testImage, 0, 0, True)   # transparency on, now red

def main():
    app = wx.App(None)
    testImage = wx.Image(r"path_to_image.png", wx.BITMAP_TYPE_PNG).ConvertToBitmap()
    Frame = myFrame(testImage)
    app.MainLoop()

if __name__ == '__main__':
    main()
(Edit) OK, I think your original example can be fixed in a similar way:
memDC.Blit(0, 0, size.width, size.height, dcSource, 0, 0, useMask=True)
canvas_dc.DrawBitmap(testImage, 0, 0, useMask=True)
Just making sure that useMask is True was enough to fix the transparency issue in your example, too.

Building a GUI using wxPython

I added an image to a panel in my GUI. I want this image to fit the panel, so that its length is the same as the panel's length. How can I do this?
I did the following in my code, and the image appears at the top of the panel as I want, but I want to resize the image to increase its length.
import wx

class myMenu(wx.Frame):
    def __init__(self, parent, id, title):
        wx.Frame.__init__(self, parent, id, title, size=(900, 700))
        panel = wx.Panel(self, -1)
        panel.SetBackgroundColour('#4f3856')
        img = r'C:\Users\DELL\Desktop\Implementation\img1.jpg'
        bmp = wx.Bitmap(img)
        btmap = wx.StaticBitmap(panel, wx.ID_ANY, bmp, (0, 0))
If you want to scale the image, you'll probably want to open it as a wx.Image rather than a wx.Bitmap. You can then scale it using wx.Image's Scale(width, height, quality) method: http://www.wxpython.org/docs/api/wx.Image-class.html#Scale
The real problem is that you want the image to resize every time the window does. That means you'll need to bind the wx.EVT_SIZE event to some method in your class (say onSize). Then every time onSize is called, you'll need to (see the sketch after this list):
Find the current window size,
Scale the wx.Image to that size,
Convert it to a wx.Bitmap using wx.BitmapFromImage,
Call SetBitmap on your wx.StaticBitmap, passing the new bitmap.
See http://zetcode.com/wxpython/events/ for a basic introduction to event handling in wxPython, including an example with the wx.EVT_SIZE.
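Here is a rough, untested sketch of those steps, assuming the full-size wx.Image is kept as an attribute; the method name onSize, the attribute names, and the file path are just placeholders:
import wx

class myMenu(wx.Frame):
    def __init__(self, parent, id, title):
        wx.Frame.__init__(self, parent, id, title, size=(900, 700))
        self.panel = wx.Panel(self, -1)
        self.panel.SetBackgroundColour('#4f3856')
        # Keep the full-resolution wx.Image so repeated rescaling never degrades it
        self.image = wx.Image(r'C:\path\to\img1.jpg')   # placeholder path
        self.bitmap = wx.StaticBitmap(self.panel, wx.ID_ANY,
                                      wx.BitmapFromImage(self.image), (0, 0))
        self.panel.Bind(wx.EVT_SIZE, self.onSize)        # re-scale whenever the panel resizes

    def onSize(self, event):
        size = event.GetSize()                                    # 1. current panel size
        scaled = self.image.Scale(size.width, size.height,
                                  wx.IMAGE_QUALITY_HIGH)          # 2. scale the wx.Image
        self.bitmap.SetBitmap(wx.BitmapFromImage(scaled))         # 3./4. convert and set
        event.Skip()

if __name__ == '__main__':
    app = wx.App(False)
    frame = myMenu(None, wx.ID_ANY, 'Resizable image')
    frame.Show()
    app.MainLoop()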

Speed up image display

I am using PIL (the Python Imaging Library) to crop a very large image and present the cropped area in the interface. The problem I'm having is that the process takes too long. When the user clicks on the image to crop it, the cropped image takes quite a long time to show up in the sizer I attach it to.
I tried doing this two ways. First, I saved the cropped area as an image to disk and loaded it on the fly into the sizer. The second attempt was to create an empty image, convert the PIL image into a wx image, and load that into the sizer. Surprisingly, the first method of writing to disk feels faster than the second method of keeping it in memory. Here are the code samples:
First method:
area = image_object.crop(self.cropxy)
area.save(CROP_IMAGE, 'jpeg')
crop_image = wx.Image(CROP_IMAGE, wx.BITMAP_TYPE_JPEG).ConvertToBitmap()
crop_bitmap = wx.StaticBitmap(self.crop_panel, bitmap=crop_image, name="Cropped Image")
crop_bitmap.CenterOnParent()
crop_bitmap.Refresh()
Second method:
area = image_object.crop(self.cropxy)
image = wx.EmptyImage(area.size[0], area.size[1])
image.SetData(area.convert("RGB").tostring())
crop_image = wx.BitmapFromImage(image)
crop_bitmap = wx.StaticBitmap(self.crop_panel, bitmap=crop_image, name="Cropped Image")
crop_bitmap.CenterOnParent()
crop_bitmap.Refresh()
Is there a better way to do this so that the image won't show up so slowly?
So, in order to solve something elsewhere in the interface, I decided to pre-load the wxImage objects when I queue up my images. I never had to before, when they were much smaller.
Anyway, I found some code on Google that converts between wxImage objects and PIL objects. With it, I can convert the in-memory wxImage object to a PIL object, crop it, and convert it back to an image just in time to display it. This is blazing fast by comparison: you've hardly taken your finger off the mouse and the crop is already showing.
Here are the conversion routines:
def pil_to_image(self, pil, alpha=True):
""" Method will convert PIL Image to wx.Image """
if alpha:
image = apply( wx.EmptyImage, pil.size )
image.SetData( pil.convert( "RGB").tostring() )
image.SetAlphaData(pil.convert("RGBA").tostring()[3::4])
else:
image = wx.EmptyImage(pil.size[0], pil.size[1])
new_image = pil.convert('RGB')
data = new_image.tostring()
image.SetData(data)
return image
def image_to_pil(self, image):
""" Method will convert wx.Image to PIL Image """
pil = Image.new('RGB', (image.GetWidth(), image.GetHeight()))
pil.fromstring(image.GetData())
return pil
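For illustration, the in-memory crop path described above might then look roughly like this (a sketch, assuming self.wx_image holds the preloaded wx.Image and self.cropxy the crop box; those attribute names are placeholders, not from the original post):
# Convert the preloaded wx.Image to PIL, crop in memory, then convert back for display
pil_image = self.image_to_pil(self.wx_image)
area = pil_image.crop(self.cropxy)
crop_image = wx.BitmapFromImage(self.pil_to_image(area, alpha=False))
crop_bitmap = wx.StaticBitmap(self.crop_panel, bitmap=crop_image, name="Cropped Image")
crop_bitmap.CenterOnParent()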