Basically I'm trying to retinafy my game
So my application isn't universal; it's specifically for the iPhone. I'd just like it to also look its best when running on the iPad.
I've created a simple method to load the high-resolution images when running on the iPad in scaled 2x mode, and it's working.
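Roughly along these lines (a sketch, not my exact code; the model check and the "-ipad" suffix are just the naming convention I'm using here):

    #import <UIKit/UIKit.h>

    // An iPhone-only app running scaled on the iPad still reports the phone
    // interface idiom, so check the hardware model string instead.
    static BOOL runningOnIPad(void) {
        return [[[UIDevice currentDevice] model] hasPrefix:@"iPad"];
    }

    // Load a higher-resolution variant when running on the iPad; the "-ipad"
    // suffix is this sketch's convention, not a UIKit one.
    static UIImage *gameImageNamed(NSString *name) {
        if (runningOnIPad()) {
            UIImage *hd = [UIImage imageNamed:[name stringByAppendingString:@"-ipad"]];
            if (hd) return hd;   // fall back to the standard asset if missing
        }
        return [UIImage imageNamed:name];
    }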
However, my problem isn't the images; it's my UILabels and UIButtons. When the app is scaled up, the text becomes slightly blurry.
They look fine at 1x scale; it's just the 2x scale on the iPad.
Is there a property I can set, or a way I can redraw the UILabels/UIButtons, so that they render more clearly?
When an iPhone app is running on an iPad and scaled 2x, you can't draw to each individual pixel. It is literally an enlarged 320x480-pixel screen (or 640x960 if you are on an iPad 3, which simulates an iPhone with a retina display).
Hope that helps.
Related
I have a game that I'm porting from iOS to macOS. On iOS, everything works as it should on retina and non-retina devices. On the Mac, it's a different story.
I have two class properties: upperCameraPos and lowerCameraPos.
The game is in a window, and the scene is twice the height of the window's content view (an SKView). When the game starts, the camera node is positioned at lowerCameraPos, revealing the bottom half of the scene. When a button is pressed, SKAction's moveTo:duration: method is used to scroll the scene up, revealing the top half. On retina and 4K Macs, this works as expected.
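For concreteness, roughly the setup described (upperCameraPos/lowerCameraPos are my properties; everything else here is illustrative):

    #import <SpriteKit/SpriteKit.h>

    // Attach a camera and start it on the bottom half of the scene.
    SKCameraNode *cameraNode = [SKCameraNode node];
    [scene addChild:cameraNode];
    scene.camera = cameraNode;
    cameraNode.position = self.lowerCameraPos;

    // On the button press, scroll up to reveal the top half:
    [cameraNode runAction:[SKAction moveTo:self.upperCameraPos duration:0.5]];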
However, on non-retina Macs (tested on late ’07 iMac & late ’08 MacBook), the same code for setting the camera’s initial position does not reveal the lower half of the scene, but rather shows the middle of the scene (which is the camera's default position).
I’ve done a considerable amount of searching and haven’t found any explanation for this. Does anyone know if there are any issues between retina & non-retina when positioning a camera node in a scene on Mac?
I figured out what I was doing wrong. I had the scene's scale mode set to SKSceneScaleModeAspectFill. After changing the scale mode to SKSceneScaleModeResizeFill, the camera node now scrolls to the correct parts of the scene regardless of whether the screen is retina or non-retina.
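In code, the entire fix was the scale mode:

    // With resize-fill, scene points map 1:1 onto view points, so camera
    // positions land in the same place on retina and non-retina screens.
    scene.scaleMode = SKSceneScaleModeResizeFill;   // was SKSceneScaleModeAspectFill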
I've read the book Retinafy Me, which basically says to double the size of your images: when a doubled image is then displayed on a retina screen at half its pixel size, it will look great.
My problem is that the original images I have can't be doubled in size, i.e. the images are 750px wide and are to be displayed 500px wide. What do I do? Is a 1.5x image better looking than a 1x image on a retina screen, or is it just needlessly adding to the file size?
I've tried using the 1.5x images (750px displayed at 500px), and they looked good on a retina screen in an Apple Store where I checked them out. But I couldn't do a definitive comparison: I don't have a retina screen of my own, and I can't find any info about this anywhere.
A pixel is still a pixel on a retina screen. The difference is simply that a retina screen uses four physical pixels to display each logical pixel when the image has no additional information (i.e. no extra pixels of its own). This is why the suggestion is to double the image size.
Essentially, your image will be rendered by stretching it to 1000px wide, since 500 logical pixels at 2x are backed by 1000 physical pixels. When stretched, a 750px-wide image will still look better than a 500px-wide one. You can try this for yourself in Paint.
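In iOS terms, the same arithmetic (a sketch; the 500px display width is the question's figure):

    // A 500-point slot on a 2x retina screen is backed by 1000 physical
    // pixels, so a 750px source is stretched ~1.33x instead of the full
    // 2x a 500px source would need.
    CGFloat scale = [UIScreen mainScreen].scale;        // 2.0 on retina, else 1.0
    CGFloat displayWidthPoints = 500.0;
    CGFloat backingPixels = displayWidthPoints * scale; // 1000 on retina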
There is a pretty good explanation of how it works on http://crypt.codemancers.com/posts/2013-11-03-image-rendering-on-hd-screens/
Hope this helps.
When creating an iPhone 5 storyboard, what images do you use inside of Xcode? Since the iPhone 5 is retina-only, all of your graphics have to be doubled, and yet the coordinates are all half of that.
So if I have an image in the middle of the screen that is 50x50 in Xcode, then my iPhone 5 image has to be 100x100. If I name that image image100.png, do I still create a UIImageView on the screen at 50x50 and give it image100.png? That appears to work, and if so, is the rule that when creating an iPhone 5 storyboard you just have to remember to cut all of your heights and widths in half?
Yes, that's the rule.
It's all a matter of getting used to the idea of resolution independence. Points are not pixels. On the iPhone 5 we want image resolutions of 2 pixels per point; on a non-retina screen we want our image resolutions at 1 pixel per point.
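For what it's worth, if you name the assets with the standard @2x suffix instead, UIKit does the halving for you (names here are illustrative):

    // The frame is in points; with image50.png (50x50 px) and image50@2x.png
    // (100x100 px) in the bundle, imageNamed: picks the @2x file on retina.
    UIImageView *imageView =
        [[UIImageView alloc] initWithFrame:CGRectMake(135, 215, 50, 50)];
    imageView.image = [UIImage imageNamed:@"image50"];
    [self.view addSubview:imageView];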
When I make my app for the 4-inch screen, I just stretch it out! I don't want to do that; I want it to look the same when running on the iPhone 4 and the iPhone 5.
You need to make a portion of your image stretchable. For example, if you have a rounded-corner background, you can stretch its centre without deforming the corners. The same goes for any other type of background (I assume you have a background that needs to be stretched). Once you decide what portion of it you want to deform, you can use UIImage's stretchableImageWithLeftCapWidth:topCapHeight: to avoid stretching the rest; see the sketch below.
If you're not prepared to compromise and stretch a portion of your image, I'm afraid you'll have to use two different images: one for the 4-inch screen (iPhone 5) and one for the 3.5-inch screens (iPhone 4 and earlier).
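For example (a sketch; the asset name and cap sizes are illustrative):

    // Keep fixed 10px caps on the left and top; the pixel column/row just
    // past each cap is what stretches, so the corners stay crisp.
    UIImage *background = [[UIImage imageNamed:@"panelBackground"]
        stretchableImageWithLeftCapWidth:10 topCapHeight:10];

    // The same image now fills either screen size without deformed corners.
    UIImageView *panel = [[UIImageView alloc] initWithFrame:self.view.bounds];
    panel.image = background;
    [self.view addSubview:panel];

(On iOS 5 and later, resizableImageWithCapInsets: supersedes this method and lets you set all four caps.)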
I'm working on a WP7 app and can't find any way to stretch an image only at a specific point (repeating that row of pixels). For example, if I have a box with rounded corners, I want to stretch it to fill a specific area while only stretching a few pixels in the centre (horizontally and vertically) so that the corners are unmodified.
In Android you can do this with a 9-patch image, and in iOS UIImage provides APIs such as stretchableImageWithLeftCapWidth:topCapHeight:.
What's the equivalent for WP7?
Thanks.
There is, currently, no direct equivalent for WP7.
You'll have to adjust the image as you need to for your requirements.
As applying rounded corners to everything doesn't match the Metro design principles for the phone, I expect there is little reason for, or likelihood of, such functionality becoming part of the core APIs.