I am looking for a method exactly like -[NSWorkspace iconForFile:] but which returns the icon in a higher resolution if possible. In particular, I have an app which makes use of QuickLook to display previews of files, and I'd like it to fall back to the file icon if no quick look plugin is available. Using the iconForFile: method, however, yields a small 32x32 icon. Is there a better method around? One that returns an NSImage or CGImageRef is preferred, but less accessible methods might be fine too.
The image returned by -[NSWorkspace iconForFile:] contains multiple representations, including higher-resolution ones.
If you draw it at 512x512, it will automatically pick the appropriate representation.
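For example, a minimal sketch (the `path` variable is a placeholder for your file's path): bump the icon's reported size and draw it large, and AppKit selects the largest suitable representation.

NSImage *icon = [[NSWorkspace sharedWorkspace] iconForFile:path];
[icon setSize:NSMakeSize(512, 512)];
// Inside a view's drawRect: (or any other valid graphics context),
// drawing now uses the 512x512 (or closest available) representation.
[icon drawInRect:NSMakeRect(0, 0, 512, 512)
        fromRect:NSZeroRect
       operation:NSCompositingOperationSourceOver
        fraction:1.0];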
Here is how to make the icon bigger:
NSImage *icon = [[NSWorkspace sharedWorkspace] iconForFile:yourPath];
[icon setSize:NSMakeSize(64, 64)];
That's it. Good luck!
Related
If I have the full path of an application's bundle or an NSBundle instance for that application, how can I get an NSImage representation of that app's icon?
Specifically, I'd like to get the best available version of the icon for a particular set of dimensions. For example, if the available versions of an app's icon are 32x32, 64x64, and 128x128 points and I request one for 96x96 points, I would get the 128x128 version.
This will need to work on both retina and non-retina displays, so the resulting NSImage will have to include both retina and non-retina representations of the icon.
I've found the iconForFile: method of NSWorkspace but unfortunately it only gives back a 32x32 pixel icon, so that's not useful for my purposes.
I could also read in the bundle's CFBundleIconFile and CFBundleIconName keys from its info dictionary, and then pull the correct image from its icns file or assets catalog, but that seems like a lot of work. I'm not sure where to begin with getting images from either of those file types. I hope there's an easier way!
I was incorrect about iconForFile:. While the NSImage it returns does report its size as being 32x32, it contains image representations for all of the various icon sizes for that application, including much larger ones. It's a simple matter of choosing the one I want, or using the bestRepresentationForRect:context:hints: method.
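A minimal sketch of that approach (assuming `appPath` holds the bundle's path and a target size of 96x96 points):

NSImage *icon = [[NSWorkspace sharedWorkspace] iconForFile:appPath];
// Ask for the representation that best covers a 96x96 point rect...
NSImageRep *best = [icon bestRepresentationForRect:NSMakeRect(0, 0, 96, 96)
                                           context:nil
                                             hints:nil];
// ...or simply resize the image: it keeps all of its representations,
// so drawing it at 96x96 uses the best one for the current screen.
[icon setSize:NSMakeSize(96, 96)];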
I know that +[NSImage imageNamed:] loads the retina (@2x) image on a Retina Mac and the non-retina image on a non-Retina Mac, but that only applies to the bundle's Resources directory. Is there another method that does something similar for an arbitrary, user-defined directory?
If the image does not exist in two sizes, with one named using the @2x naming convention, then you will need to make a best-guess judgment call.
You may also be able to read some useful metadata about the image, such as its DPI, to figure out what makes sense.
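One possible approach, sketched here under the assumption that both a 1x file and an @2x file exist in some arbitrary `directory` (the file names are placeholders): load both files and combine their representations into a single NSImage, so AppKit can pick the right one per screen.

NSString *basePath   = [directory stringByAppendingPathComponent:@"picture.png"];
NSString *retinaPath = [directory stringByAppendingPathComponent:@"picture@2x.png"];

NSImage *combined = [[NSImage alloc] initWithContentsOfFile:basePath];
NSImageRep *retinaRep = [NSImageRep imageRepWithContentsOfFile:retinaPath];
if (retinaRep) {
    // Report the @2x rep at the 1x point size so it is treated as a
    // high-resolution variant of the same logical image.
    retinaRep.size = combined.size;
    [combined addRepresentation:retinaRep];
}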
Is it possible to use enaml as a display target for OpenCV?
I'm thinking about how to set up the GUI and what to use.
Nothing too complicated, I need to be able to set some bitmap background, draw rectangles and circles over it, but also have the possibility to select/move these graphics objects.
Also, I would like not to have to manage all these elements myself when I stretch the window, etc.; they should handle this automatically, since they would be defined in some "absolute" space. I think I could easily make it work for the bitmaps (even from memory) by overriding request_image in an ImageProvider object (even though I see some strange caching happening between the provider and the enaml view).
The problem I'm having now with OpenCV (OS X, 64-bit) is that even when I get resizing to work with the Qt backend and CV_WINDOW_NORMAL, the content does not stretch.
I like OpenCV because I easily get basic UI functions.
On the other hand, I've started to like enaml, so I'm wondering whether anyone has managed to get these two to work together.
Since the link with MPL works, I suspect that coupling with OpenCV should be possible too :)
Thanks!
If you can get your image into argb32 or png format, you can use an Enaml ImageView to display it.
Take a look at the ImageView example:
https://github.com/nucleic/enaml/blob/master/examples/widgets/image_view.enaml
This should do it:
from enaml.image import Image
from cv2 import imread, imencode

# Read the file with OpenCV, re-encode it as PNG bytes, and wrap the bytes
# in an enaml Image that an ImageView can display.
open_cv_image = imread('./cat.png')
png_image = imencode('.png', open_cv_image)[1].tobytes()
enaml_image = Image(data=png_image)
I am trying to modify the default I-beam cursor image. I'm using [[[NSCursor IBeamCursor] image] representations], passing each one through a CIFilter and adding it to a new image. However, the resulting cursor looks as though it is rendering the low-resolution images.
The High Resolution Guidelines say:
For custom cursors, you can pass a multirepresentation TIFF to the NSCursor class method initWithImage:hotSpot:.
So I would expect this to work. Additionally, if I get the -TIFFRepresentation of the original image and my modified image, and write them to disk, they both look like multi-page TIFF files with the same size images. What could I be doing wrong?
I have a somewhat-temporary solution: manually call -setSize: on each image representation, dividing the pixel height and width by the screen's scale factor. However, this technique doesn't seem like it will work ideally with multiple screens.
You're right on. I've been debugging this all day and I'm pretty sure I've got it nailed. I'm not doing exactly the same thing you are (my images are loaded from a file) but the end result is exactly the same.
The trick is to set the first representation of the multi-representation image to the non-retina size. If you are loading your cursors from an image file, you must take this extra step to adjust the size of the representations to match. It doesn't work 'out-of-the-box' as you would expect.
I've tested this on a machine with two monitors and dragging the window from the retina display to the non-retina display acts as it should, displaying the high/low resolution images for the cursor.
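A minimal sketch of that adjustment (assuming `image` already holds both the 1x and 2x representations, e.g. 24x24 and 48x48 pixels, and a hypothetical hot spot):

NSSize pointSize = NSMakeSize(24, 24); // the non-retina size, in points
for (NSImageRep *rep in image.representations) {
    // Every rep reports the same point size; the 2x rep keeps its larger
    // pixel dimensions, so AppKit treats it as the retina variant.
    rep.size = pointSize;
}
image.size = pointSize;

NSCursor *cursor = [[NSCursor alloc] initWithImage:image
                                           hotSpot:NSMakePoint(4, 4)];
[cursor set];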
I had a similar problem a while ago: I had my cursor in a PDF, and it always drew as if it was a pixel image at 1:1 size, blown up. There's a solution to that in NSCursor: Using high-resolution cursors with cursor zoom (or retina).
Maybe someone can use that technique to solve this problem? My guess is creating an image with the same size but a different CTM marks it as the same size but Retina. What @jtbrandes is doing probably marks it as a different size and non-Retina. So you're effectively losing the scale factor information. If you create an image with a CTM in the hints, maybe you can draw the filtered images into it and it'll be detected right.
I would like to write a program that can mirror a portion of the main display into a new window. Ideally this new window could then be displayed on an external monitor. I have seen a utility for a flight sim that does this on a PC (a multifunction display extractor).
Click here for a screenshot of the program (MFD Extractor)
This would be a live window, i.e. a constantly updated video display, not just a static graphic.
I have looked at screen magnifiers and VNC clients for ideas, but I think I need to write something from scratch. I have tried to do some reading on OS X programming, but where do I start in terms of gaining access to the display? I somehow need to extract the graphics from a particular program. Is it best to go near the final output stage (the individual pixels sent to the display) or somewhere nearer the window management stage?
Any ideas or pointers would be much appreciated. I just need somewhere to start from.
Regards,
There are a few ways to do this:
Quartz Display Services will let you get access to the video memory for a screen.
Quartz Window Services (a.k.a. CGWindow) will let you create an image of everything that lies below a window. If you create a borderless, transparent, empty, high-level window whose frame occupies an entire screen, everything below it will be everything on that screen. (Of course, you could create a smaller window in order to copy a section of the screen.)
There's also a way to do it using OpenGL that I never fully understood. That technique is demonstrated by a couple of code samples, OpenGLScreenSnapshot and OpenGLCaptureToMovie. It's more or less obsoleted by CGWindow, though.
Each of those will get you an image that you can then show or write to a file or something.
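As one illustration, here is a minimal sketch of the Quartz Display Services route (the rect values are placeholders): grab a region of the main display as a CGImage and wrap it in an NSImage. For a live mirror you would repeat this on a timer or display link and redraw.

CGRect region = CGRectMake(100, 100, 400, 300); // portion of the screen to mirror
CGImageRef capture = CGDisplayCreateImageForRect(CGMainDisplayID(), region);
if (capture) {
    NSImage *frame = [[NSImage alloc] initWithCGImage:capture size:NSZeroSize];
    CGImageRelease(capture);
    // Hand `frame` to an image view, write it to a file, etc.
}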
To show an image, use NSImageView or IKImageView. If you want to magnify it, IKImageView has a zoomFactor property, but if you want nearest-neighbor scaling (like Pixie, DigitalColor Meter, or xScope), I think you'll need to write a custom view for that (but even that isn't all that hard).