Why is Cocos2d Scaling my Image Up?

This is very simple code, but I do not know why Cocos2D keeps scaling my background image up by 2x.
I'm using the Cocos2d Hello World template. I haven't done anything to the code except delete everything inside of - (id) init
I then added this:
//ADD BACKGROUND
CGSize winSize = [[CCDirector sharedDirector] winSize];
CCSprite *background = [CCSprite spriteWithFile:@"justAbackground.png"];
background.position = ccp(winSize.width/2, winSize.height/2);
[self addChild:background];
When I build and run, the image is double the size it's supposed to be.
If I add:
background.scale = .5;
It is the exact size it's supposed to be.
The image's pixel dimensions are exactly the same as the iPhone's screen.
What am I missing here?
Thanks in advance.

Maybe you're confused by point vs pixel coordinates?
On a regular iPhone the point & pixel dimensions are equal and both amount to 480x320 pixels/points. On a Retina device the point coordinates remain 480x320 but the pixel coordinates are doubled to 960x640.
Now if you want to display a regular image using pixel coordinates on a Retina device, you must have Retina display mode disabled. Otherwise cocos2d will scale up any image without the -hd suffix to point dimensions.
The alternative is to have Retina display mode enabled and save your image with the -hd suffix (justAbackground-hd.png) at double the resolution.
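For reference, here is roughly how that mode is toggled in the cocos2d 1.x template's AppDelegate (a sketch; the log message is just illustrative):
CCDirector *director = [CCDirector sharedDirector];
// With Retina mode on, cocos2d loads justAbackground-hd.png on Retina
// devices and scales any non -hd image up to point size; with it off,
// images map 1:1 to screen pixels.
if (![director enableRetinaDisplay:YES]) {
    CCLOG(@"Retina display not supported on this device");
}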

Related

How to fill the CCLayer with a background image?

After adding my image from the resources, I noticed that it appears only at the bottom left of the screen and at a reduced size.
How can I fill my CCLayer with the image as a background?
I tried adding the image as a child and changing its content size, but with no success:
CCSprite sprite = new CCSprite("img.png");
AddChild(sprite);
You can set AnchorPoint to AnchorMiddle, so the image is centered in the CCLayer:
CCSprite sprite = new CCSprite ("ship.png");
sprite.AnchorPoint = CCPoint.AnchorMiddle;
AddChild(sprite);
If you want the image to fill the CCLayer, it seems there is no way to change the size of the sprite itself, so you may need to use a larger image.
https://learn.microsoft.com/en-us/xamarin/graphics-games/cocossharp/entities#creating-the-ship-entity

HiDPI / backingScaleFactor on MacOS, how to get actual value?

I'm trying to get the display scaling setting for the current screen so that I can correctly scale inside my OpenGL and/or Vulkan window to match the setting that the OS has. The documentation says to use
CGFloat scale = [window backingScaleFactor];
However, this returns 1.0 for no scaling at all, and 2.0 for any amount of scaling.
CGFloat scale = [[window screen] backingScaleFactor];
does the same.
NSRect resolution = [[window screen] frame];
will give you the virtual resolution of the current screen. In System Preferences -> Displays -> Scaled, this is the "Looks like" value. For a 3840x2160 screen I have, the possible values are 1920x1080, 2560x1440, 3008x1692, 3360x1890, and 3840x2160, depending on the Scaled setting you have chosen. On my MBP's built-in screen, which has a native resolution of 2880x1800, the "Looks like" values can be 1024x640, 1280x800, 1440x900, 1680x1050, and 1920x1200. The docs say to use
NSRect test = {0, 0, 1000, 1000};
NSRect backing = [view convertRectToBacking:test]; // convertRectToBacking: is an NSView instance method
However this just multiplies the supplied NSRect by the backingScaleFactor.
This is someone trying to get the real resolution of the screen.
So, I want either the real backingScaleFactor, or the native screen resolution for a given NSScreen. Any ideas?
The -backingScaleFactor is giving you the real backing scale factor. The backing store is not the same size as the physical pixel dimensions. Your app renders to the backing store and that is displayed to screen, but that often involves another scaling operation. The right thing for an OpenGL or Vulkan app to do is to render to the backing store resolution, not the physical pixel resolution.
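Following that advice, a minimal sketch for sizing a GL drawable from the backing store (assuming view is the NSView hosting your context):
// Backing-store size in pixels, not the panel's physical resolution.
NSRect backingBounds = [view convertRectToBacking:[view bounds]];
glViewport(0, 0, (GLsizei)backingBounds.size.width,
           (GLsizei)backingBounds.size.height);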

NSImage drawInRect: but the image is blurry?

A designer gave me a picture like this:
But when I use the drawInRect: API to draw the picture into the context, it comes out like this:
The size of the rect is exactly the size of the image, and the image is provided at @1x and @2x.
The difference is very clear: the picture is blurry and there is a gray line at the right edge of the image. My iMac has a Retina display.
================================================
I have found the reason:
// First draw, before any translation: the image renders sharp.
[self.headLeftImage drawInRect:NSMakeRect(100,
                                          100,
                                          self.headLeftImage.size.width,
                                          self.headLeftImage.size.height)];

// Second draw, after translating the CTM to the view's center: blurry.
CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGContextSaveGState(context);
CGContextTranslateCTM(context, self.center.x, self.center.y);
[self.headLeftImage drawInRect:NSMakeRect(100,
                                          100,
                                          self.headLeftImage.size.width,
                                          self.headLeftImage.size.height)];
CGContextRestoreGState(context);
In the first draw the image is not blurred, but after the translation it is, just like in the picture:
The problem is that you're translating the context to a non-integral pixel location. Then, the draw is honoring your request to put the image at a non-integral position, which causes it to be anti-aliased and color in some pixels partially.
You should convert the center point to device space, integral-ize it (e.g. by using floor()), and then convert it back. Use CGContextConvertPointToDeviceSpace() and CGContextConvertPointToUserSpace() to do the conversions. That does the right thing for Retina and non-Retina displays.
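Applied to the question's code, the fix might look like this (a sketch reusing the question's headLeftImage and center):
CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGContextSaveGState(context);
// Snap the translation point to whole device pixels before translating.
CGPoint center = CGContextConvertPointToDeviceSpace(context, self.center);
center.x = floor(center.x);
center.y = floor(center.y);
center = CGContextConvertPointToUserSpace(context, center);
CGContextTranslateCTM(context, center.x, center.y);
[self.headLeftImage drawInRect:NSMakeRect(100,
                                          100,
                                          self.headLeftImage.size.width,
                                          self.headLeftImage.size.height)];
CGContextRestoreGState(context);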

Set Colour Of Vector Image In Xcode 6+

I saw one of the Apple Videos mention that you can colour images via the code. All my searches on how to do this came up blank.
If I have a black vector image (pdf) saved inside Images.xcassets, how can I colour that image at run time?
Ideally it would be something simple like [UIImage setVectorColor:UIColorBlue] but I'm sure there could be more to it!
You have to set each vector image's Render As option to Template Image in the xcassets catalog. (http://i.stack.imgur.com/oTuDC.png)
Afterwards, you can set the color on the UIImageView that contains your image with:
[imageView setTintColor:[UIColor redColor]];
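If you'd rather not touch the asset catalog, the same effect is available in code (a sketch; "vectorIcon" is a placeholder asset name):
// Force template rendering programmatically instead of in the xcassets.
UIImage *icon = [[UIImage imageNamed:@"vectorIcon"]
                 imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
UIImageView *imageView = [[UIImageView alloc] initWithImage:icon];
imageView.tintColor = [UIColor blueColor];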

Composite 2 UIImageViews using Core Image, when one ImageView is in a ScrollView

I have a UIScrollView with a UIImageView inside it. The user can pan and zoom the image inside the scrollView.
I also have a UIImageView in the main view (above the scrollView), and the user can move that image around the screen.
I'm using CISourceOverCompositing to combine them both:
-(UIImage *)compositeFinalImage {
    CIImage *foregroundImage = [CIImage imageWithCGImage:foregroundImageView.image.CGImage];
    CIFilter *composite = [CIFilter filterWithName:@"CISourceOverCompositing"];
    [composite setValue:foregroundImage forKey:@"inputImage"];
    [composite setValue:[CIImage imageWithCGImage:backgroundImage.CGImage] forKey:@"inputBackgroundImage"];
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *finalImage = [composite valueForKey:@"outputImage"];
    CGImageRef cgImage = [context createCGImage:finalImage fromRect:finalImage.extent];
    UIImage *result = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:userImageOrientation];
    CGImageRelease(cgImage); // createCGImage: returns a +1 reference
    return result;
}
...but the location and scale of the image are lost when I convert it to a CIImage, and the foreground image always ends up at the bottom left.
So I tried to use imageByApplyingTransform: to move the foreground image (eventually I will have to apply a scale transform as well). I can get the position from the image view, but the coordinates need to be in the native resolution of the background image itself. The background image is moving around (which I guess means I have to take the contentOffset into account somehow), the background image has a certain scale, and the foreground image has a certain scale...
It seems weird that I need to reproduce all of these transformations with respect to the scale, rotation, and position of each image...
This is the basic idea of what I'm trying to do (the left side is in the main view coordinates, while the right side is in native image coordinates).
Any help will be much appreciated!
Thanks!
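For what it's worth, here is a sketch of the translation bookkeeping the question describes, assuming the scroll view sits at the main view's origin and the background image maps point-for-point to its image view before zooming (scrollView, foregroundImageView, and backgroundImage as in the code above; scaling and rotation are left out):
CGFloat zoom = scrollView.zoomScale;
CGPoint offset = scrollView.contentOffset;
CGRect fg = foregroundImageView.frame;
// Main-view coordinates -> scroll content -> background image points.
CGFloat x = (fg.origin.x + offset.x) / zoom;
CGFloat yTop = (fg.origin.y + offset.y) / zoom;
// Core Image's origin is bottom-left, UIKit's is top-left: flip Y.
CGFloat y = backgroundImage.size.height - yTop - fg.size.height / zoom;
CIImage *positioned = [foregroundImage imageByApplyingTransform:
                       CGAffineTransformMakeTranslation(x, y)];
The positioned image would then be passed as inputImage in place of foregroundImage.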
