I feed the image browser view with image filenames and it manages loading them.
Is there a way to retrieve the CGImageRef of those images from the browser after it loads them? I'd like to do some Core Animation with them when the user clicks on an image.
Probably the simplest way to do this is to use the NSView method -bitmapImageRepForCachingDisplayInRect: to build an NSBitmapImageRep for the area, then -cacheDisplayInRect:toBitmapImageRep: to draw into it, and finally NSBitmapImageRep's -CGImage method to get a CGImageRef.
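A minimal sketch of that sequence; imageBrowser here stands in for whatever view you want to capture (pass a smaller rect to grab just one cell):
NSRect rect = [imageBrowser visibleRect];
NSBitmapImageRep *rep = [imageBrowser bitmapImageRepForCachingDisplayInRect:rect];
[imageBrowser cacheDisplayInRect:rect toBitmapImageRep:rep];
// The CGImage is owned by the bitmap rep; CGImageRetain it if it must outlive the rep.
CGImageRef cgImage = [rep CGImage];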
I'm developing a plugin that implements an image-viewer application using FireBreath on Mac OS X; the image is located in the local filesystem. Please help me implement the getDrawingPrimitive function. I have the following code snippet:
FB::PluginWindowMac *wnd = dynamic_cast<FB::PluginWindowMac*>(win);
wnd->getDrawingPrimitive();
/* code related to OpenGL */
CALayer* layer = [CALayer new];
[layer setContents:(id)[ImageHandler setImageWithURL:@"somePath.jpg"]];
setContents expects an argument of type CGImageRef.
Is this the right way to set the image to a CALayer object?
On calling wnd->StartAutoInvalidate(sometimelapse), a call is made to the onDraw function, depending on the Drawing Model and Event Model selected. There you obtain the window context, of type CGContextRef, which can be used to draw whatever is required. To draw an image, convert it to a CGImageRef and draw it into the context with CGContextDrawImage.
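As an illustration, here is a minimal sketch of that last step; ctx, image, and the window size are assumptions standing in for whatever your onDraw handler actually receives:
// Hypothetical values: use the real plugin window size from your draw event.
CGFloat width = 640.0, height = 480.0;
CGRect bounds = CGRectMake(0, 0, width, height);

CGContextSaveGState(ctx);
// Depending on the drawing model, the context may have a top-left origin;
// this flip keeps the image right side up in that case.
CGContextTranslateCTM(ctx, 0, height);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGContextDrawImage(ctx, bounds, image);
CGContextRestoreGState(ctx);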
I want to create an NSImage of an NSScrollView object, so I can use the flat graphic for animation purposes.
When I render my scrollview object into a graphic and add it back to my window, it works, but it looks bad, as if it had been scaled to 99% or something. I want the image to be 100% pixel accurate, not scaled. (Note: the image isn't actually scaled; it's the same size, it just looks as if it's been poorly rescaled: the text looks rough and poor compared to the view onscreen in the scrollview.)
My code:
(scrollView is my NSScrollView object)
// Capture the scroll view's contents as PDF data.
NSData *pdf = [scrollView dataWithPDFInsideRect:[scrollView bounds]];
// Wrap the data in an NSImage and show it in an image view of the same size.
NSImage *image = [[NSImage alloc] initWithData:pdf];
NSImageView *imageView = [[NSImageView alloc] initWithFrame:[scrollView bounds]];
[imageView setImage:image];
[mainGUIPanel addSubview:imageView];
I've tried a heap of things: messing with pixel sizes and bounds, using IB to create the destination NSView and putting the image inside that, but I just cannot get the image to not look bad. Any ideas?
Edit:
I tried writing the PDF data to a file and viewing it, and it looked fine. So the image data is being captured correctly; it's only on the display that it looks as if it's being scaled somewhat.
Edit2:
Also tried getting the bitmap like this:
NSBitmapImageRep *bitmap = [scrollView bitmapImageRepForCachingDisplayInRect:[scrollView bounds]];
[scrollView cacheDisplayInRect:[scrollView bounds] toBitmapImageRep:bitmap];
NSImage * image = [[NSImage alloc] initWithSize:[bitmap size]];
[image addRepresentation: bitmap];
Same results - the bitmap looks exactly the same, bad and scaled when displayed.
This leads me to believe that capturing the bitmap data either way works fine, it's creating the view and rendering the image that is doing the scaling. How can I make sure that the view and image are shown at the correct size and scaling?
Edit3:
Ok, I started a new blank project and set this up, and it works perfectly - the new imageview is identical to the grabbed bitmap. So I suspect my issue is stemming from some rendering/compositing issue when drawing the bitmap to the view. Investigating further...
It turns out the issue stems from the scrollView that I am rendering from. It has a transparent background (Draw Background is off in IB), and the text in the scrollView looks good. If I turn Draw Background on, with a transparent background color, the text is rendered badly, exactly as it is when I capture the image programmatically.
So, in my app, even though Draw Background is off, the scrollView image is captured as though Draw Background is on. So I need to understand why the text is rendered badly when Draw Background is on and set to transparent, and hopefully this will lead me towards a solution.
I also tried creating an NSClipView with background drawing turned off and putting the bitmap view into that, but it still renders the same. I can't find a way to render the transparent image to the screen without horrible artifacting.
OK, I've found a solution. Instead of grabbing the transparent-background scrollView object itself, I grab the parent view (essentially the window background) and restrict the bounds to the size of the scrollView object.
This captures both the background and the contents of the scrollView, and it displays correctly without any transparency issues.
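For reference, a minimal sketch of that approach; it assumes the scroll view's superview is the window background being captured:
// Capture the parent view, limited to the scroll view's frame, so the opaque
// background is composited behind the text instead of the transparent fill.
NSView *parent = [scrollView superview];
NSRect captureRect = [scrollView frame]; // in the parent's coordinate space
NSBitmapImageRep *rep = [parent bitmapImageRepForCachingDisplayInRect:captureRect];
[parent cacheDisplayInRect:captureRect toBitmapImageRep:rep];
NSImage *image = [[NSImage alloc] initWithSize:captureRect.size];
[image addRepresentation:rep];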
I have an NSImage that is a template image (that is, [NSImage isTemplate] returns YES).
When I use it inside an NSImageView, it is drawn correctly as a template image.
However, if I draw it manually using drawInRect:fromRect:operation:fraction:, it is drawn as a flat black image.
How can I draw an NSImage manually, and still get the 'template' effect?
The special drawing of template images is part of CoreUI and not public.
You can imitate the effect with Quartz though. Details and code can be found in this answer here on SO:
https://stackoverflow.com/a/7138497/100848
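As a rough approximation only (not the exact CoreUI effect; see the linked answer for a fuller version), you can tint the template art yourself by compositing a color over it and drawing the result instead:
NSSize imageSize = [image size];
NSRect destRect = NSMakeRect(0, 0, imageSize.width, imageSize.height);
NSImage *tinted = [[NSImage alloc] initWithSize:imageSize];
[tinted lockFocus];
// Draw the template art, then composite a gray over it; SourceAtop respects
// the image's alpha, so only the glyph itself picks up the tint.
[image drawInRect:destRect fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
[[NSColor colorWithCalibratedWhite:0.35 alpha:1.0] set];
NSRectFillUsingOperation(destRect, NSCompositeSourceAtop);
[tinted unlockFocus];
// Now draw `tinted` with drawInRect:fromRect:operation:fraction: as before.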
I have an application that is set in landscape mode because of the content it contains. One of the things I want to do is take a picture of a piece of paper. I present the UIImagePickerController locked in portrait mode because it fits the paper size. After the user takes the picture I load that image as a background on a UIWebView. The reason I use a webview is because sometimes I need to load a .pdf there as well. Anyway, I'm setting the background using CSS. Here is the code...
//img is the path to an image.
myHtml = [NSString stringWithFormat:
#"<html><head>"
"</head>"
"<body><img src='%#'></body></html>", img];
[resume loadHTMLString:myHtml baseURL:baseURL];
The problem is, the image is displayed in landscape when the app returns to the UIWebView. Everything else is normal as far as text etc. Is there some reason that images are rotated 90 degrees to fit properly or something? I have tried pretty much everything with no luck.
The other thing is that when I retake a picture and reload the webView, the old image remains.
You need to correct the orientation of the image using its EXIF header, if orientation info is available there. Identifying the picture's orientation is the hard part; rotating the image can be done easily with CSS: -webkit-transform: rotate(-90deg);
It does have to do with the EXIF data. While a WebKit transform may be fine for a one-off use in the web view, if you want to use the image later and have it always be the right orientation, I'd use the categories given here:
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
The article does a great job explaining exactly why the rotation occurs and the code does a nice job of 'fixing' the 'problem' so that you can then make use of the image without having to do extra things or worry about whether it'll be displayed correctly (even though the UIImageView takes the orientation into account).
You can resize with this to the same size it originally was, and it should fix the orientation issue.
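The core idea, in a minimal sketch (the full category in the article handles resizing and the individual orientations more carefully):
// Redrawing the image into a fresh context bakes the EXIF orientation into the
// pixel data, so the result always reports UIImageOrientationUp.
UIImage *NormalizedImage(UIImage *image) {
    if (image.imageOrientation == UIImageOrientationUp) return image;
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}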
I have an app that currently has this line:
[myView setWantsLayer:YES];
in order to draw a GUI element via NSBezierPath. The line is required; otherwise, when the user types in an adjacent (and overlapping) NSTextField, the contents of myView shudder.
I discovered that calling CoreAnimation loads the OpenGL framework, but does not unload it. See this question.
I think I can get around this by drawing the NSBezierPath into an NSImage and then displaying the NSImage in lieu of the NSBezierPath, but I haven't found a single source that shows how to go about this.
Edit:
I should note that I want to create this image BEFORE the NSBezierPath is displayed, so solutions that draw an existing view into an NSImage are not useful.
Question:
Can someone point me in the right direction for converting NSBezierPath to an NSImage?
You can draw anything directly into an NSImage, and you can create a blank NSImage. So, create an image whose size is the size of the bounds of the path, translate the path so that it's at the origin of the image, and then lock focus on the image, draw the path, and unlock focus.
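A minimal sketch of that recipe (the oval path and black fill are just placeholders; allow for line width if you stroke the path instead of filling it):
NSBezierPath *path = [NSBezierPath bezierPathWithOvalInRect:NSMakeRect(10.0, 10.0, 80.0, 40.0)];
NSRect pathBounds = [path bounds];

// Shift the path so its bounding box sits at the image's origin.
NSAffineTransform *transform = [NSAffineTransform transform];
[transform translateXBy:-pathBounds.origin.x yBy:-pathBounds.origin.y];
NSBezierPath *shiftedPath = [transform transformBezierPath:path];

NSImage *image = [[NSImage alloc] initWithSize:pathBounds.size];
[image lockFocus];
[[NSColor blackColor] set];
[shiftedPath fill];
[image unlockFocus];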