I am a newbie to Cocoa, and I have a couple of questions regarding NSImage.
Question 1:
Changing the bounds origin of an image doesn't seem to have any effect. I expected the image to be drawn from the newly set origin, but that doesn't seem to be the case. Am I missing something?
code:
NSImage* carImage = [NSImage imageNamed:@"car"];
[self.imageView setImage:carImage];
//Following line has no effect:
self.imageView.bounds = CGRectMake(self.imageView.bounds.origin.x + 100,
                                   self.imageView.bounds.origin.y,
                                   self.imageView.bounds.size.width,
                                   self.imageView.bounds.size.height);
Note: imageView is an IBOutlet
Question 2:
I was trying to crop an image, but it doesn't seem to be working; I can still see the complete image. What is it that I am missing?
code:
NSRect sourceRect = CGRectMake(150, 25, 100, 50);
NSRect destRect = CGRectMake(0, 0, 100, 50);
NSImage* carImage = [NSImage imageNamed:@"car"];
[carImage drawInRect:destRect fromRect:sourceRect operation:NSCompositeSourceOver fraction:1.0];
[self.imageView setImage:carImage];
Thanks
Changing the bounds origin of an image doesn't seem to have any effect. …
//Following line has no effect:
self.imageView.bounds = CGRectMake(self.imageView.bounds.origin.x + 100,
                                   self.imageView.bounds.origin.y,
                                   self.imageView.bounds.size.width,
                                   self.imageView.bounds.size.height);
That's an image view, not an image.
The effect of changing the bounds of a view depends on what the view does to draw. Effectively, this means you shouldn't change the bounds of a view that isn't an instance of a view class you created, since you can't predict exactly how an NSImageView will draw its image (presumably, since it's a control, it involves its cell, but more than that, I wouldn't rely on).
More generally, it's pretty rare to change a view's bounds origin. I don't remember having ever done it, and I can't think of a reason off the top of my head to do it. Changing its bounds size will scale, not crop.
I was trying to crop an image, but it doesn't seem to be working; I can still see the complete image. What is it that I am missing?
[carImage drawInRect:destRect fromRect:sourceRect operation:NSCompositeSourceOver fraction:1.0];
[self.imageView setImage:carImage];
Telling an image to draw does not change anything about the image. It will not “crop the image” such that the image will thereafter be smaller or larger. You are telling it to draw, nothing more.
Consequently, the statement after that sets the image view's image to the whole image, exactly as if you hadn't told the image to draw, because telling it to draw made no difference.
What telling an image to draw does is exactly that: It tells the image to draw. There are only two correct places to do that:
In between lockFocus and unlockFocus messages to a view or image (or after setting the current NSGraphicsContext).
Within a view's drawRect: method.
Anywhere else, you should not tell any Cocoa object to draw.
One correct way to crop an image is to create a new image of the desired/adjusted size, lock focus on it, draw the desired portion of the original image into it, and unlock focus on the new image. You will then have both the original and a cropped version.
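For example, here is a minimal sketch of that lock-focus approach, reusing the rectangle from the question (the image name is a placeholder):
NSImage* carImage = [NSImage imageNamed:@"car"];
NSRect sourceRect = NSMakeRect(150, 25, 100, 50); // the portion of the original to keep

NSImage* cropped = [[NSImage alloc] initWithSize:sourceRect.size];
[cropped lockFocus];
// Draw the desired portion of the original into the new image's full bounds.
[carImage drawInRect:NSMakeRect(0, 0, sourceRect.size.width, sourceRect.size.height)
            fromRect:sourceRect
           operation:NSCompositeSourceOver
            fraction:1.0];
[cropped unlockFocus];

// `cropped` now holds just that portion; carImage is unchanged.
[self.imageView setImage:cropped];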
Another correct way would be to create your own custom image view that has two properties: One owning an image to draw, and the other holding a rectangle. When told to draw, this custom view would tell the image to draw the given rectangle into the view's bounds. You would then always hold the original image and simply draw only the desired section.
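A rough sketch of that idea, assuming a simple NSView subclass with hypothetical image and cropRect properties:
@interface CroppingImageView : NSView
@property (strong) NSImage *image;  // the original, untouched image
@property (assign) NSRect cropRect; // the section of the image to display
@end

@implementation CroppingImageView
- (void)drawRect:(NSRect)dirtyRect {
    // Draw only the desired section of the image into the view's bounds.
    [self.image drawInRect:[self bounds]
                  fromRect:self.cropRect
                 operation:NSCompositeSourceOver
                  fraction:1.0];
}
@end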
Related
The designer gave me a picture like this:
But when I use the drawInRect: API to draw the picture into the context, it comes out like this:
The size of the rect is exactly the size of the image, and the image comes in @1x and @2x versions.
The difference is very clear: the picture is blurry and there is a gray line at the right edge of the image. My iMac has a Retina display.
================================================
I have found the reason:
// First draw: no translation has been applied yet, and the image is sharp.
[self.headLeftImage drawInRect:NSMakeRect(100,
                                          100,
                                          self.headLeftImage.size.width,
                                          self.headLeftImage.size.height)];

CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGContextSaveGState(context);
// Translate by the view's center point, which may be non-integral.
CGContextTranslateCTM(context, self.center.x, self.center.y);

// Second draw: after the translation, the image comes out blurry.
[self.headLeftImage drawInRect:NSMakeRect(100,
                                          100,
                                          self.headLeftImage.size.width,
                                          self.headLeftImage.size.height)];
CGContextRestoreGState(context);
In the first draw the image is not blurry, but after the translation it is, just like in the picture:
The problem is that you're translating the context to a non-integral pixel location. Then, the draw is honoring your request to put the image at a non-integral position, which causes it to be anti-aliased and color in some pixels partially.
You should convert the center point to device space, integral-ize it (e.g. by using floor()), and then convert it back. Use CGContextConvertPointToDeviceSpace() and CGContextConvertPointToUserSpace() to do the conversions. That does the right thing for Retina and non-Retina displays.
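A sketch of that pixel snapping, assuming the translation point is self.center as in the code above:
CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGContextSaveGState(context);

// Convert the point to device space, snap it to whole pixels,
// then convert it back to user space before translating.
CGPoint deviceCenter = CGContextConvertPointToDeviceSpace(context, self.center);
deviceCenter.x = floor(deviceCenter.x);
deviceCenter.y = floor(deviceCenter.y);
CGPoint snappedCenter = CGContextConvertPointToUserSpace(context, deviceCenter);

CGContextTranslateCTM(context, snappedCenter.x, snappedCenter.y);
// ... draw the image here ...
CGContextRestoreGState(context);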
I want to create an NSImage of an NSScrollView object, so I can use the flat graphic for animation purposes.
When I render my scrollview object into a graphic and add it back to my window, it works but looks really bad like it's been scaled to 99% or something. I want the image to not be scaled and 100% pixel accurate. (Note: the image isn't scaled, it's the same size, it just looks like it's been poorly rescaled - the text looks rough and poor compared to the view onscreen in the scrollview)
My code:
(scrollView is my NSScrollView object)
NSData *pdf = [scrollView dataWithPDFInsideRect:[scrollView bounds]];
NSImage *image = [[NSImage alloc] initWithData:pdf];
NSImageView *imageView = [[NSImageView alloc] initWithFrame:[scrollView bounds]];
[imageView setImage: image];
[mainGUIPanel addSubview: imageView];
I've tried a heap of things, messed with pixel sizes, bounds, used IB to create the destination NSView and put the image inside that but just cannot get the image to not look bad. Any ideas?
Edit:
I tried writing the PDF data to a file and viewing it, and it looked fine. So the image is being captured correctly; it's just on the display that it looks like it's being scaled.
Edit2:
Also tried getting the bitmap like this:
NSBitmapImageRep *bitmap = [scrollView bitmapImageRepForCachingDisplayInRect:[scrollView bounds]];
[scrollView cacheDisplayInRect:[scrollView bounds] toBitmapImageRep:bitmap];
NSImage * image = [[NSImage alloc] initWithSize:[bitmap size]];
[image addRepresentation: bitmap];
Same results - the bitmap looks exactly the same, bad and scaled when displayed.
This leads me to believe that capturing the bitmap data either way works fine, it's creating the view and rendering the image that is doing the scaling. How can I make sure that the view and image are shown at the correct size and scaling?
Edit3:
Ok, I started a new blank project and set this up, and it works perfectly - the new imageview is identical to the grabbed bitmap. So I suspect my issue is stemming from some rendering/compositing issue when drawing the bitmap to the view. Investigating further...
It turns out the issue stems from the scrollView that I am rendering from. It has a transparent background ("Draw Background" is off in IB), and the text in the scrollView looks good. If I turn "Draw Background" on, with a transparent background color, the text is rendered badly, exactly as it is when I capture the image programmatically.
So, in my app, even though Draw Background is off, the scrollView image is captured as though Draw Background is on. So I need to understand why the text is rendered badly when Draw Background is on and set to transparent, and hopefully this will lead me towards a solution.
I also tried creating an NSClipView with background drawing turned off and putting the bitmap view into that, but it still renders the same. I can't find a way to render the transparent image to the screen without horrible artifacting.
Ok, I've found a solution. Instead of getting a grab of the transparent background scrollview object itself, I'm instead getting a grab of the parent view (essentially the window background), and restricting the bounds to the size of the scrollview object.
This captures both the background, and the contents of the scrollview, and displays correctly without any issues of transparency.
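A sketch of that workaround, reusing the names from the code above and assuming the scrollView's superview is the window background:
// Capture the parent view, restricted to the scrollView's frame, so the
// background and the scroll view's contents are composited together.
NSView *parent = [scrollView superview];
NSData *pdf = [parent dataWithPDFInsideRect:[scrollView frame]];
NSImage *image = [[NSImage alloc] initWithData:pdf];

NSImageView *imageView = [[NSImageView alloc] initWithFrame:[scrollView frame]];
[imageView setImage:image];
[mainGUIPanel addSubview:imageView];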
I have a UIScrollView with a UIImageView inside it. The user can pan and zoom the image inside the scrollView.
I also have a UIImageView in the main view (above the scrollView), and the user can move that image around the screen.
I'm using CISourceOverCompositing to combine them both:
- (UIImage *)compositeFinalImage {
    CIImage *foregroundImage = [CIImage imageWithCGImage:foregroundImageView.image.CGImage];

    CIFilter *composite = [CIFilter filterWithName:@"CISourceOverCompositing"];
    [composite setValue:foregroundImage forKey:@"inputImage"];
    [composite setValue:[CIImage imageWithCGImage:backgroundImage.CGImage] forKey:@"inputBackgroundImage"];

    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *finalImage = [composite valueForKey:@"outputImage"];

    return [UIImage imageWithCGImage:[context createCGImage:finalImage fromRect:finalImage.extent] scale:1.0 orientation:userImageOrientation];
}
...but the location and scale of the image are lost when I make it a CIImage, and the foreground image always ends up at the bottom left.
So I tried to use imageByApplyingTransform: to move the position of the foreground image (and eventually I will have to apply a scale transform as well). I get the position from the image view, but the coordinates need to be in the native resolution of the background image itself. The background image is moving around (which I guess means I have to take the contentOffset into account somehow), the background image has a certain scale, and the foreground image has a certain scale...
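Roughly, this is the kind of transform I've been attempting inside compositeFinalImage (the offset and scale values here are placeholders; the real ones would have to come from the scroll view's contentOffset and zoomScale and the image view frames):
// Hypothetical values, for illustration only.
CGFloat scale = 2.0;
CGPoint offset = CGPointMake(120.0, 340.0);

CGAffineTransform t = CGAffineTransformMakeTranslation(offset.x, offset.y);
t = CGAffineTransformScale(t, scale, scale);
CIImage *positionedForeground = [foregroundImage imageByApplyingTransform:t];
[composite setValue:positionedForeground forKey:@"inputImage"];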
It seems weird that it's needed to re-produce all of the transformation with regards to different scale, rotation and position variables of each image...
This is the basic idea of what I'm trying to do (the left side is in the main view coordinates, while the right side is in native image coordinates).
Any help will be much appreciated!
Thanks!
I have an NSImageView that encloses a dynamically generated NSImage. When I change the image displayed, I would like to dynamically resize the NSImageView so that it precisely wraps the new image, and also have the enclosing window resize so that the space between the NSImageView and every other part of the window remains constant. (Note that the image view's scaling is set to none, as I want its image to always be shown at its physical size.) To illustrate, suppose I begin with a small image in my image view. If I replace it with a large image, I wish for both the NSImageView and enclosing window to resize to accommodate it, without affecting the sizing or spacing of any other element.
Currently, I call the following method whenever the magnification level is changed via the stepper or associated text field. Though regenerating the image and loading it into the NSImageView works fine, resizing the NSImageView and enclosing window do not.
- (void)updateMagnification:(NSUInteger)newMagnification {
    // Keep values of stepper and associated text field synchronized.
    [self.magnificationStepper setIntegerValue:newMagnification];
    [self.magnificationTextField setIntegerValue:newMagnification];

    // Regenerate image based on newMagnification and display in image view.
    [self.qrGenerator generateWithBlockPixelWidth:newMagnification];
    self.imageView.image = self.qrGenerator.image;

    // Adjust frame size of image view.
    NSLog(@"Old size: frame=%@ image=%@", NSStringFromSize(self.imageView.frame.size), NSStringFromSize(self.imageView.image.size));
    [self.imageView setFrameSize:NSMakeSize(self.imageView.image.size.width, self.imageView.image.size.height)];
    NSLog(@"New size: frame=%@ image=%@", NSStringFromSize(self.imageView.frame.size), NSStringFromSize(self.imageView.image.size));

    //[self.window setViewsNeedDisplay:YES];
    //[self.imageView setNeedsDisplay:YES];
    //[self.imageView.superview setNeedsDisplay:YES];
}
Regardless of whether I increase or decrease the magnification value, causing the image to grow smaller or larger, the size of both the NSImageView and window remains constant. The three setNeedsDisplay: calls that are commented out have no effect even if they're uncommented -- they were my attempt to determine if the problem was related to the controls not redrawing once their size was adjusted, but the calls had no effect. Curiously, my NSLog calls indicate that the imageView's frame does indeed take the requested size, for they yield this output:
2012-06-12 11:02:50.651 Presenter[4660:603] Old size: frame={422, 351} image={168, 168}
2012-06-12 11:02:50.651 Presenter[4660:603] New size: frame={168, 168} image={168, 168}
The actual display, of course, does not change.
Interestingly, changing the imageView's frame style to "none," either in Interface Builder or programmatically with [self.imageView setImageFrameStyle:NSImageFrameNone], gives me behaviour closer to what I desire. Making the image larger, so that it would otherwise be clipped by the image view, does indeed result in the image view and window growing larger. From this point, however, making the image smaller does not result in the image view or window resizing accordingly. "None" is the only image frame style that displays this somewhat correct behaviour -- all four of the bordered styles (i.e., bevel [the default], button, groove, and photo) show the entirely incorrect behaviour described above.
I came across someone with a similar problem. Oddly, he only observed the problematic behaviour when his image view's frame style was set to NSImageFrameNone, when this is the only value that gives me somewhat-correct behaviour. I tried modifying the frame style to a non-none value before the resize and to none afterward, as this resolved the other person's problem, but for me, this yielded the same behaviour as when I simply set the frame style to "none" initially.
Any help you provide will be much appreciated. Thanks!
I have an app that currently has this line:
[myView setWantsLayer:YES];
in order to draw a GUI element via NSBezierPath. This line is required; otherwise, when the user types in an adjacent (and overlapping) NSTextField, the contents of myView shudder.
I discovered that calling CoreAnimation loads the OpenGL framework, but does not unload it. See this question.
I think I can get around this by drawing the NSBezierPath into an NSImage and then displaying the NSImage in lieu of the NSBezierPath, but I haven't found a single source that shows how to go about this.
Edit:
I should note that I want to save this BEFORE the NSBezierPath is displayed, so solutions that draw an existing view to an NSImage are not useful.
Question:
Can someone point me in the right direction for converting NSBezierPath to an NSImage?
You can draw anything directly into an NSImage, and you can create a blank NSImage. So, create an image whose size is the size of the bounds of the path, translate the path so that it's at the origin of the image, and then lock focus on the image, draw the path, and unlock focus.
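A minimal sketch of those steps (the oval path and fill color here are just placeholders):
// Placeholder path; use whatever path you need to render.
NSBezierPath *path = [NSBezierPath bezierPathWithOvalInRect:NSMakeRect(20, 20, 100, 60)];
NSRect bounds = [path bounds];

// Shift the path so its bounds' origin lands at the image's origin.
NSAffineTransform *transform = [NSAffineTransform transform];
[transform translateXBy:-bounds.origin.x yBy:-bounds.origin.y];
[path transformUsingAffineTransform:transform];

// Create a blank image of the path's size and draw the path into it.
NSImage *image = [[NSImage alloc] initWithSize:bounds.size];
[image lockFocus];
[[NSColor blackColor] set]; // assumed color
[path fill];                // or -stroke, as appropriate
[image unlockFocus];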