NSImage drawInRect: but the image is blurry? - macos

The designer gave me a picture like this.
But when I use the drawInRect: API to draw the picture into the context, it comes out like this.
The size of the rect is exactly the size of the image, and the image has @1x and @2x variants.
The difference is very clear: the picture is blurry, and there is a gray line on the right edge of the image. My iMac has a Retina display.
================================================
I have found the reason. First I draw the image without any translation:
[self.headLeftImage drawInRect:NSMakeRect(100,
                                          100,
                                          self.headLeftImage.size.width,
                                          self.headLeftImage.size.height)];
Then I draw it again after translating the context:
CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGContextSaveGState(context);
CGContextTranslateCTM(context, self.center.x, self.center.y);
[self.headLeftImage drawInRect:NSMakeRect(100,
                                          100,
                                          self.headLeftImage.size.width,
                                          self.headLeftImage.size.height)];
CGContextRestoreGState(context);
In the first draw the image is not blurry, but after the translation it is. Just like in the picture:

The problem is that you're translating the context to a non-integral pixel location. The draw then honors your request to put the image at a non-integral position, which causes it to be anti-aliased, coloring some pixels only partially.
You should convert the center point to device space, integral-ize it (e.g. by using floor()), and then convert it back. Use CGContextConvertPointToDeviceSpace() and CGContextConvertPointToUserSpace() to do the conversions. That does the right thing for Retina and non-Retina displays.
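A minimal sketch of that fix, assuming the self.center and self.headLeftImage names from the code above (untested):
CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGContextSaveGState(context);
// Convert the desired origin to device (pixel) space, snap it to whole
// pixels with floor(), then convert it back to user space.
CGPoint devicePoint = CGContextConvertPointToDeviceSpace(context, self.center);
devicePoint.x = floor(devicePoint.x);
devicePoint.y = floor(devicePoint.y);
CGPoint userPoint = CGContextConvertPointToUserSpace(context, devicePoint);
CGContextTranslateCTM(context, userPoint.x, userPoint.y);
[self.headLeftImage drawInRect:NSMakeRect(100,
                                          100,
                                          self.headLeftImage.size.width,
                                          self.headLeftImage.size.height)];
CGContextRestoreGState(context);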

Related

image_map in POV-Ray not working as expected

I would like to map an image I have onto the face of a box in POV-Ray.
The image's dimensions are 1500x1125.
(Example Image)
So I set up a scene with a light source above a camera looking at a box:
camera{location <3,1.8,0> look_at <3,1.8,1>}
light_source{<3,20,0> color rgb <1,1,1>}
box{<0,0,0> <1,0.75,1> texture{pigment{image_map{png "Test1.png"}}} translate <2.5,1.425,3>}
The box's dimensions are 1x0.75 (z is not relevant), which is the same 4:3 ratio as the image.
However, when the scene is rendered, the width of the image maps perfectly onto the box but some of the height is cut off. The image does not look stretched, and I am confused as to why it does not fit.
IIRC, POV-Ray will always read images as if they had a 1:1 aspect ratio.
If you insert a scale inside your pigment statement, before using it, that should fix it:
box{
  <0,0,0> <1,0.75,1>
  texture{
    pigment{
      image_map{png "Test1.png"}
      scale <1, 0.75, 1>
    }
  }
  translate <2.5,1.425,3>
}
(I apologize for not having tested this right now to be really sure.)

Make an image color (not bitmap) transparent

I'm drawing an image on a PictureBox using the Graphics.DrawImage() method. The background of the drawn image should be transparent so that it isn't drawn on top of the other things drawn by the Graphics object. The code looks like this:
Image Icon = Image.FromFile(@"C:\Dieroller\Die Icons\d4.jpg");
//Icon.SetColorToTransparent(White); //This is what I wish would work.
Graphics.DrawImage(Icon, Location);
Any suggestions?

Composite 2 UIImageViews using Core Image, when one ImageView is in a ScrollView

I have a UIScrollView with a UIImageView inside it. The user can pan and zoom the image inside the scrollView.
I also have a UIImageView in the main view (above the scrollView), and the user can move that image around the screen.
I'm using CISourceOverCompositing to combine them both:
-(UIImage *)compositeFinalImage {
    CIImage *foregroundImage = [CIImage imageWithCGImage:foregroundImageView.image.CGImage];
    CIFilter *composite = [CIFilter filterWithName:@"CISourceOverCompositing"];
    [composite setValue:foregroundImage forKey:@"inputImage"];
    [composite setValue:[CIImage imageWithCGImage:backgroundImage.CGImage] forKey:@"inputBackgroundImage"];
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *finalImage = [composite valueForKey:@"outputImage"];
    return [UIImage imageWithCGImage:[context createCGImage:finalImage fromRect:finalImage.extent]
                               scale:1.0
                         orientation:userImageOrientation];
}
...but the location and scale of the image are lost when I convert it to a CIImage, and the foreground image always ends up at the bottom left.
So I tried to use imageByApplyingTransform: to move the foreground image into position (eventually I will have to apply a scale transform as well). I get the position from the image view, but the coordinates need to be in the native resolution of the background image itself. The background image is moving around (which I guess means I have to take the contentOffset into account somehow), the background image has a certain scale, and the foreground image has a certain scale...
It seems weird that I need to reproduce all of the transformations with regard to the different scale, rotation and position variables of each image...
This is the basic idea of what I'm trying to do (the left side is in main-view coordinates, while the right side is in native image coordinates); a rough sketch of the conversion I mean follows.
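Here is that sketch, untested, using the names from my code above and assuming the scroll view is called scrollView:
CGPoint screenPoint = foregroundImageView.center;
// Map from main-view coordinates into the scroll view's content coordinates.
CGPoint contentPoint = CGPointMake(screenPoint.x + scrollView.contentOffset.x,
                                   screenPoint.y + scrollView.contentOffset.y);
// Undo the scroll view's zoom to get back to the background image view's points.
CGFloat zoom = scrollView.zoomScale;
CGPoint pointInImagePoints = CGPointMake(contentPoint.x / zoom, contentPoint.y / zoom);
// Convert points to native pixels, flipping Y because Core Image uses a
// bottom-left origin while UIKit uses top-left.
CGFloat pixelScale = backgroundImage.scale;
CGPoint pointInImagePixels = CGPointMake(pointInImagePoints.x * pixelScale,
                                         (backgroundImage.size.height - pointInImagePoints.y) * pixelScale);
CGAffineTransform transform = CGAffineTransformMakeTranslation(pointInImagePixels.x,
                                                               pointInImagePixels.y);
CIImage *positionedForeground = [foregroundImage imageByApplyingTransform:transform];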
Any help will be much appreciated!
Thanks!

Why is Cocos2d Scaling my Image Up?

This is very simple code, but I don't know why Cocos2D keeps scaling my background image up by 2x.
I'm using the Cocos2d Hello World template. I haven't done anything to the code except delete everything inside of - (id) init
I then added this:
//ADD BACKGROUND
CGSize winSize = [[CCDirector sharedDirector] winSize];
CCSprite *background = [CCSprite spriteWithFile:@"justAbackground.png"];
background.position = ccp(winSize.width/2, winSize.height/2);
[self addChild:background];
When I build and run, it is double the size of what the image is supposed to be.
If I add:
background.scale = .5;
It is the exact size it's supposed to be.
The image's pixel dimensions are exactly the same as the iPhone screen's.
What am I missing here?
Thanks in advance.
Maybe you're confused by point vs pixel coordinates?
On a regular iPhone the point & pixel dimensions are equal and both amount to 480x320 pixels/points. On a Retina device the point coordinates remain 480x320 but the pixel coordinates are doubled to 960x640.
Now if you want to display a regular image using pixel coordinates on a Retina device, you must have Retina display mode disabled. Otherwise cocos2d will scale up any image without the -hd suffix to point dimensions.
The alternative is to have Retina display mode enabled and save your image with the -hd suffix (justAbackground-hd.png) with double the resolution.
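For that second option, a minimal sketch (assuming cocos2d-iphone 1.x/2.x, where CCDirector offers enableRetinaDisplay:) of turning Retina mode on in the app delegate:
CCDirector *director = [CCDirector sharedDirector];
// Ask cocos2d to use pixel resolution on Retina devices; with this enabled,
// "justAbackground-hd.png" (double resolution) is picked up automatically.
if (![director enableRetinaDisplay:YES]) {
    CCLOG(@"Retina display not supported on this device");
}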

Change bounds origin + cropping an image

I am a newbie to Cocoa, and I have a few doubts regarding NSImage.
Question1:
Changing the bounds origin of an image doesn't seem to have any effect. I expected the image to be drawn from the newly set origin, but that doesn't seem to be the case. Am I missing something?
code:
NSImage* carImage = [NSImage imageNamed:@"car"];
[self.imageView setImage:carImage];
//Following line has no effect:
self.imageView.bounds = CGRectMake(self.imageView.bounds.origin.x + 100,
                                   self.imageView.bounds.origin.y,
                                   self.imageView.bounds.size.width,
                                   self.imageView.bounds.size.height);
Note: imageView is an IBOutlet
Question2:
I was trying to crop an image, but it doesn't seem to be cropped; I can still see the complete image. What is it that I am missing?
code:
NSRect sourceRect = NSMakeRect(150, 25, 100, 50);
NSRect destRect = NSMakeRect(0, 0, 100, 50);
NSImage* carImage = [NSImage imageNamed:@"car"];
[carImage drawInRect:destRect fromRect:sourceRect operation:NSCompositeSourceOver fraction:1.0];
[self.imageView setImage:carImage];
Thanks
Changing the bounds origin of an image doesn't seem to have any effect. …
//Following line has no effect:
self.imageView.bounds = CGRectMake(self.imageView.bounds.origin.x + 100,
                                   self.imageView.bounds.origin.y,
                                   self.imageView.bounds.size.width,
                                   self.imageView.bounds.size.height);
That's an image view, not an image.
The effect of changing the bounds of a view depends on what the view does to draw. Effectively, this means you shouldn't change the bounds of a view that isn't an instance of a view class you created, since you can't predict exactly how an NSImageView will draw its image (presumably, since it's a control, it involves its cell, but beyond that I wouldn't rely on anything).
More generally, it's pretty rare to change a view's bounds origin. I don't remember ever having done it, and I can't think of a reason to do it off the top of my head. Changing a view's bounds size will scale its contents, not crop them.
I was trying to crop an image, but it doesn't seem to be cropping the image, I can see the complete image. What is that I am missing ?
[carImage drawInRect:destRect fromRect:sourceRect operation:NSCompositeSourceOver fraction:1.0];
[self.imageView setImage:carImage];
Telling an image to draw does not change anything about the image. It will not “crop the image” such that the image will thereafter be smaller or larger. You are telling it to draw, nothing more.
Consequently, the statement after that sets the image view's image to the whole image, exactly as if you hadn't told the image to draw, because telling it to draw made no difference.
What telling an image to draw does is exactly that: It tells the image to draw. There are only two correct places to do that:
In between lockFocus and unlockFocus messages to a view or image (or after setting the current NSGraphicsContext).
Within a view's drawRect: method.
Anywhere else, you should not tell any Cocoa object to draw.
One correct way to crop an image is to create a new image of the desired/adjusted size, lock focus on it, draw the desired portion of the original image into it, and unlock focus on the new image. You will then have both the original and a cropped version.
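A minimal sketch of that approach, reusing the sourceRect from your code (the names are illustrative):
NSRect sourceRect = NSMakeRect(150, 25, 100, 50); // the portion of the original to keep
NSImage *croppedImage = [[NSImage alloc] initWithSize:sourceRect.size];
[croppedImage lockFocus];
// Draw just the desired portion of the original into the new image.
[carImage drawInRect:NSMakeRect(0, 0, sourceRect.size.width, sourceRect.size.height)
            fromRect:sourceRect
           operation:NSCompositeSourceOver
            fraction:1.0];
[croppedImage unlockFocus];
// carImage is unchanged; croppedImage now holds only the cropped section.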
Another correct way would be to create your own custom image view that has two properties: One owning an image to draw, and the other holding a rectangle. When told to draw, this custom view would tell the image to draw the given rectangle into the view's bounds. You would then always hold the original image and simply draw only the desired section.
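And a sketch of that second approach, with a hypothetical CroppingImageView (the class and property names are mine, not an existing API):
@interface CroppingImageView : NSView
@property (strong) NSImage *image;     // the full, original image
@property (assign) NSRect sectionRect; // the part to show, in image coordinates
@end

@implementation CroppingImageView
- (void)drawRect:(NSRect)dirtyRect {
    // Draw only the chosen section of the image into the view's bounds.
    [self.image drawInRect:self.bounds
                  fromRect:self.sectionRect
                 operation:NSCompositeSourceOver
                  fraction:1.0];
}
@end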
