How to get an OpenCV image into an enaml space - is it possible? (macOS)

Is it possible to have enaml as a target for OpenCV?
I'm thinking about how to set up the GUI and what to use.
Nothing too complicated: I need to be able to set a bitmap background and draw rectangles and circles over it, but also be able to select and move these graphics objects.
I would also like not to have to manage all these elements myself when I stretch the window, etc.; they should adjust automatically, since they would be defined in some "absolute" space. I think I could easily make this work for the bitmaps (even ones from memory) by overriding request_image in an ImageProvider object (even though I see some strange caching happening in the provider/enaml view).
The problem I'm having now with OpenCV (OS X, 64-bit) is that even when I get resizing to work with the Qt backend and CV_WINDOW_NORMAL, the content does not stretch.
I like OpenCV because it easily gives me basic UI functions.
On the other hand, I have started to like enaml, so I'm wondering whether anyone has managed to get these two to work together.
I figure that if the link with MPL works, coupling with OpenCV should be possible too :)
Thanks!

If you can get your image into argb32 or png format, you can use an Enaml ImageView to display it.
Take a look at the ImageView example:
https://github.com/nucleic/enaml/blob/master/examples/widgets/image_view.enaml

This should do it:
from enaml.image import Image
from cv2 import imread, imencode

# Load the image with OpenCV (a BGR NumPy array) and re-encode it as PNG bytes.
open_cv_image = imread('./cat.png')
png_image = imencode('.png', open_cv_image)[1].tostring()  # .tobytes() on newer NumPy

# Wrap the PNG bytes in an enaml Image; an ImageView can display this directly.
enaml_image = Image(data=png_image)
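For context, a minimal sketch of an .enaml view that displays the resulting Image might look like the following; the file name, the Main window, and its img attribute are hypothetical names chosen for this illustration, and the view would be launched in the usual way (for example with a QtApplication from a plain Python script, or via enaml-run).

# my_view.enaml (hypothetical file)
from enaml.widgets.api import Window, Container, ImageView

enamldef Main(Window):
    # Holds the enaml Image built from the OpenCV buffer above.
    attr img
    Container:
        ImageView:
            image << img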

Related

Can I set an image, pixel by pixel, in Apps Script?

Preferably, I'd like to use an array, iterating over each pixel and setting the R G B values.
And I don't think that I can use HTML canvas in any way. I'm hoping to build it right on top of a Google Doc without additional libraries or references to external websites.
Everything I have found on the Image class is about positioning or resizing, and not helpful for actually setting the image.
ImageItem.setImage() looks promising, but its documentation is not particularly descriptive.
You can implement your own encoding algorithm (or port someone else's) and transform your pixel array into an image blob compatible with the ImageItem.setImage() method.

KineticJS shape over image

I want to draw some shapes over an image.
After the image is loaded and added to the layer, I use the moveToBottom() function, which works for shapes but doesn't seem to work with images.
I've tried to use moveToTop() on the shapes, but still no luck.
Important note: I have to keep them on the same layer, so the obvious solution to put the image in another layer is not an option.
http://jsfiddle.net/hukNL/
This concept shows that layering functions work, so the error is somewhere else in your code.
First of all, you want to be using the newest KineticJS (4.3.1). Then, if you are dragging images, make sure you disable the dragOnTop option featured in the newer releases, which temporarily moves a dragged node into its own layer. Lastly, if nothing else works, you can debug manually by checking the z-index of each item with:
.getZIndex()
Also, if you would like more help, post some code so others can help you debug it.

Drawing large images for ipad

I am developing an application for viewing images.
I used Apple's PhotoScroller example to implement this application.
In my application I want to be able to draw on the image.
My idea was to put a UIView with a transparent background on top of it and draw the lines via touch events. This solution turned out to be very slow, because the generated images are very large, around 3700x2000 pixels.
I also tried a solution based on Apple's GLPaint example, which uses OpenGL, but it has a size limitation of 2048x2048 pixels.
Anyone have any idea or example of how I implement this?
I think you should try and tile your image.
One option is using CATiledLayer. Have a look at this short tutorial.
Or you could try using CGContextDrawTiledImage to get this done. This post from S.O. might help you get started.

How to add a colored filter effect on an image?

I am building an Eclipse RCP application, based on eclipse 3.5.
I'd like to modify an image at runtime. The image is loaded and will be used as an icon, but depending on the situation, I'd like to add a filter on the image to give it a red or orange color, depending on some user-configured value.
It's the image transformation that I'm interested in. I already know how to get the image and ask a component to display it.
Has anybody done that? Thanks for your help :)
There are many possible ways to do just that. You can use ImageIO to load the image as a BufferedImage, then get its Graphics2D and modify it as you wish. When you are finished modifying it, you can reassign the newly created image to the component that holds the original image, and that's it.
You can of course look for libraries that allow easier image manipulation, maybe JMagick or something similar.
You can use DecoratingLabelProvider with a suitable ILabelDecorator. See also the FAQ "What is a label decorator?"

Is it possible to intercept the image data of Mac OSX Screen?

I would like to access the whole contents of a Mac OS X screen, not to take a screenshot, but to modify the final (or as close to final as possible) rendering of the screen.
Can anyone point me in the direction of any Cocoa/Quartz (or other) API documentation on this? In short, I would like to access and manipulate part of the OS X render pipeline, not for just one app but for the whole screen.
Thanks
Ross
Edit: I have found CGGetDisplaysWithOpenGLDisplayMask. I'm wondering whether I can use OpenGL shaders on the main screen.
You can't install a shader on the screen, as you can't get to the screen's underlying GL representation. You can, however, access the pixel data.
Take a look at CGDisplayCreateImageForRect(). In the same documentation set you'll find information about registering callback functions to find out when certain areas of the screen are being updated.
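Since the earlier enaml/OpenCV answer on this page uses Python, here is a minimal sketch of reading pixel data from the main display with CGDisplayCreateImageForRect via the pyobjc Quartz bindings (an assumption; the original discussion is about the C API, and on recent macOS versions this also requires the Screen Recording permission). It only demonstrates reading pixels, not intercepting the render pipeline.

from Quartz import (CGMainDisplayID, CGDisplayCreateImageForRect, CGRectMake,
                    CGImageGetWidth, CGImageGetHeight,
                    CGImageGetDataProvider, CGDataProviderCopyData)

# Grab a 200x200 region at the top-left corner of the main display.
display_id = CGMainDisplayID()
image = CGDisplayCreateImageForRect(display_id, CGRectMake(0, 0, 200, 200))

# Copy the raw pixel bytes (typically BGRA) out of the resulting CGImage.
pixel_data = CGDataProviderCopyData(CGImageGetDataProvider(image))
print(CGImageGetWidth(image), CGImageGetHeight(image), len(pixel_data))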