Managing a UIImageView programmatically for iPhone 5 and iPhone 4 - uiimageview

I have an issue managing a UIImageView from a XIB file across the iPhone 5 and iPhone 4 screen heights.
I tried to manage the UIImageView in code like this:
CGFloat screenHeight = [UIScreen mainScreen].bounds.size.height;
if ([UIScreen mainScreen].scale == 2.f && screenHeight == 568.0f) {
    backgroundImage.autoresizingMask = UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleWidth;
    frameView.autoresizingMask = UIViewAutoresizingFlexibleHeight;
    backgroundImage.image = [UIImage imageNamed:@"bg-568h@2x.png"];
    //frameView.frame = CGRectMake(16, 0, 288, 527);
    frameView.image = [UIImage imageNamed:@"setframe-568h@2x.png"];
} else {
    backgroundImage.image = [UIImage imageNamed:@"bg@2x.png"];
    frameView.image = [UIImage imageNamed:@"setframe@2x.png"];
}
Please suggest what the issue might be. frameView is a UIImageView which holds a white image.
Thanks

I had the same issue and below is what I did to make it work for me.
I have images used in a couple of apps which needed to be resized for the new 4-inch display. I wrote the code below to automatically resize images as needed without hard-coding the height of the view. This code assumes the image was sized in the NIB to the full height of the given frame, as if it were a background image filling the whole view. In the NIB the UIImageView should not be set to stretch: stretching would distort the image, since only the height changes while the width stays the same. What you need to do instead is adjust the height and the width by the same delta and then shift the image to the left to re-center it. This chops off a little on both sides while expanding the image to the full height of the given frame.
I call it this way...
[self resizeImageView:self.backgroundImageView intoFrame:self.view.frame];
I do this in viewDidLoad normally if the image is set in the NIB. But I also have images which are downloaded at runtime and displayed that way. These images are cached with EGOCache, so I have to call the resize method either after setting the cached image into the UIImageView or after the image is downloaded and set into the UIImageView.
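For the cached case, a minimal sketch of what that call order looks like (not my production code; the cache key is made up, and I'm assuming EGOCache's globalCache and imageForKey: API here):

// Sketch: after pulling a cached image out of EGOCache (key is hypothetical),
// set it on the image view first, then resize to fit the 4-inch display.
UIImage *cachedImage = [[EGOCache globalCache] imageForKey:@"backgroundImage"];
if (cachedImage) {
    self.backgroundImageView.image = cachedImage;
    [self resizeImageView:self.backgroundImageView intoFrame:self.view.frame];
}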
The code below does not specifically care what the height of the display is. It could work with any display size, perhaps even to handle resizing images on rotation, though it assumes the new height is always greater than the original height. To support a greater width, the code would need to be adjusted to handle that scenario as well.
- (void)resizeImageView:(UIImageView *)imageView intoFrame:(CGRect)frame {
    // resizing is not needed if the height is already the same
    if (frame.size.height == imageView.frame.size.height) {
        return;
    }
    CGFloat delta = frame.size.height / imageView.frame.size.height;
    CGFloat newWidth = imageView.frame.size.width * delta;
    CGFloat newHeight = imageView.frame.size.height * delta;
    CGSize newSize = CGSizeMake(newWidth, newHeight);
    CGFloat newX = (imageView.frame.size.width - newWidth) / 2; // re-center image for the broader width
    CGRect imageViewFrame = imageView.frame;
    imageViewFrame.size.width = newWidth;
    imageViewFrame.size.height = newHeight;
    imageViewFrame.origin.x = newX;
    imageView.frame = imageViewFrame;
    // now resize the image itself
    assert(imageView.image != nil);
    imageView.image = [self imageWithImage:imageView.image scaledToSize:newSize];
}
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    // Scale 0.0 means "use the device's main screen scale", so the result
    // is rendered at Retina resolution where available.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

Related

OpenGL ES 1 + iOS 8 = layer.bounds messed?

Since I installed Xcode 6.0.1, my OpenGL ES 1 layer is displayed incorrectly on every simulated device (as well as on real hardware: an iPhone 4S with iOS 8) – wrong size and position of the layer.
Changing the glViewport parameters doesn't make any difference. I can actually comment the call out and it looks the same.
PARTIAL SOLUTION:
I checked and then unchecked the "Use Auto Layout" box so that Xcode updated my window to the newer version requirements. Now everything looks okay on the iPhone 4S, but the size of the window on other devices is still wrong.
Has anyone got their OpenGL ES 1 code updated for the new devices?
One possible workaround is to get the dimensions (width and height) and decide on the real width depending on which dimension is bigger, something like this:
CGRect screenBounds = [[UIScreen mainScreen] bounds];
float scale = [UIScreen mainScreen].scale;
float width = screenBounds.size.width;
float height = screenBounds.size.height;
NSLog(@"scale: %f, width: %f, height: %f", scale, width, height);
float w = width > height ? width : height;
if (scale == 2.0f && w == 568.0f) { ...
I have a similar problem with my OpenGL ES 1 app.
The following code used to always return the renderbuffer size in portrait mode (shouldAutorotate is NO, so autorotation is disabled in my app):
glGetRenderbufferParameterivOES( GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth );
glGetRenderbufferParameterivOES( GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight );
But now (Xcode 6.0.1, iOS 8) the size depends on the device orientation, so I get the wrong renderbuffer size.
The "Use Auto Layout" check + uncheck didn't help me.
I've managed to display the render buffer properly by accounting for the fact that [[UIScreen mainScreen] bounds].size is orientation-dependent on iOS 8 and by creating the view programmatically. So my app delegate looks like this:
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    CGRect screenBound = [[UIScreen mainScreen] bounds];
    CGSize screenSize = screenBound.size;
    screenHeight = screenSize.height;
    screenWidth = screenSize.width;
    window = [[UIWindow alloc] initWithFrame:CGRectMake(0, 0, screenWidth, screenHeight)];
    window.bounds = CGRectMake(0, 0, screenWidth, screenHeight);
    MainViewController = [[UIViewController alloc] init];
    glView = [[EAGLView alloc] initWithFrame:CGRectMake(0, 0, screenWidth, screenHeight)];
    glView.bounds = CGRectMake(0, 0, screenWidth, screenHeight);
    MainViewController.view = glView;
    window.rootViewController = MainViewController;
    [window makeKeyAndVisible];
    [glView setupGame];
    [glView startAnimation];
    return YES;
}
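On iOS 8 there is also UIScreen's nativeBounds, which is reported in pixels and is always portrait-up regardless of interface orientation, so it can serve as an orientation-independent reference. A small sketch:

// iOS 8+: nativeBounds is in pixels and always portrait-oriented.
CGRect nativeBounds = [UIScreen mainScreen].nativeBounds;
CGFloat widthInPoints = nativeBounds.size.width / [UIScreen mainScreen].nativeScale;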

Getting the right coordinates of the visible part of a UIImage inside of UIScrollView

I have a UIImage which is inside a UIScrollView so I can zoom-in and zoom-out, crop, move around, etc.
How do I get the coordinates of the visible part of the UIImage inside the UIScrollView?
I want to crop the image at its native resolution (using GPUImage), but I need the x, y, width and height of the rectangle.
I use a scroll view to enable zooming of a UIImage. A cropping button presents an overlaid, resizable cropping view. The following is the code I use to ensure the user-defined crop box in the UIScrollView gets added with the correct coordinates to the UIImageView (it can then be used to crop the image).
To find the x and y coordinates, multiply the scroll view's contentOffset by the inverse of the zoomScale:
float origX = 0;
float origY = 0;
float widthCropper = 0;
float heightCropper = 0;
if (_scrollView.contentOffset.x > 0) {
    origX = _scrollView.contentOffset.x * (1 / _scrollView.zoomScale);
}
if (_scrollView.contentOffset.y > 0) {
    origY = _scrollView.contentOffset.y * (1 / _scrollView.zoomScale);
}
If you need to create a properly sized cropping box for what is displayed in the scroll view, you will also need to adjust the width and the height for the zoom factor:
widthCropper = (_scrollView.frame.size.width * (1/_scrollView.zoomScale));
heightCropper = (_scrollView.frame.size.height * (1/_scrollView.zoomScale));
and to add this properly sized rectangle as a cropping view to the UIImageView:
CGRect cropRect = CGRectMake((origX + (SIDE_MARGIN/2)), (origY + (SIDE_MARGIN / 2)), (widthCropper - SIDE_MARGIN), (heightCropper - SIDE_MARGIN));
_cropView = [[UIView alloc]initWithFrame:cropRect];
[_cropView setBackgroundColor:[UIColor grayColor]];
[_cropView setAlpha:0.7];
[_imageView addSubview:_cropView];
[_imageView setCropView:_cropView];
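To get from there to the native-resolution rectangle the question asks about, you can scale those values from view points into image pixels. A sketch, assuming the image view's bounds match the image's size in points at zoomScale 1:

// Sketch: convert the visible rect from view points to image pixels
// (pixelsPerPoint is > 1 when the image is larger than the view).
CGFloat pixelsPerPoint = _imageView.image.size.width / _imageView.bounds.size.width;
CGRect nativeCropRect = CGRectMake(origX * pixelsPerPoint,
                                   origY * pixelsPerPoint,
                                   widthCropper * pixelsPerPoint,
                                   heightCropper * pixelsPerPoint);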

iOS UIScrollView - buttons exceed boundaries

I'm adding buttons to my scroll view in code, but when I run the app I see all of the buttons, and they extend past the scroll view's boundaries instead of only the ones that fit being visible.
In the attached screenshot you can see that the scrollbar stays inside the boundaries of the scroll view; only the buttons exceed them.
Also, why do I need self.recentFriendsScrollView.delegate = self;?
Here is my code:
// recentOpponents is an array
NSInteger xOffset = 0;
CGFloat size = 38;
CGFloat padding = 5;
self.recentFriendsScrollView.delegate = self;
for (User *user in recentOpponents) {
    UIButton *tagButton = [UIButton buttonWithType:UIButtonTypeRoundedRect];
    tagButton.backgroundColor = [UIColor lightGrayColor];
    tagButton.frame = CGRectMake(xOffset, 8, size, size);
    [self.recentFriendsScrollView addSubview:tagButton];
    xOffset += size;
    xOffset += padding;
}
[self.recentFriendsScrollView setContentSize:CGSizeMake(xOffset, 50.0f)];
[screenshot: buttons extending past the scroll view]
Thanks
D
Your scroll view's frame is not set properly.
If you set it programmatically, post the frame rect. If it is set via the nib, check your resizing flags again.
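A quick way to check both possibilities (just a sketch): log the frame at runtime, and make sure the scroll view clips its subviews:

NSLog(@"scroll view frame: %@", NSStringFromCGRect(self.recentFriendsScrollView.frame));
self.recentFriendsScrollView.clipsToBounds = YES; // subviews outside the bounds are no longer drawn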

drawRect performance

I need to draw lots of polygons (500k to a million) on the iPad. After experimenting, I can only get about 1 fps, if that. This is just an example; my real code has some good-sized polygons.
Here are a few questions:
Why don't I have to add the Quartz framework to my project?
If many of the polygons repeat, can I leverage that with views, or are they too heavyweight?
Any alternatives? QTPaint can handle this but dips into the GPU. Is there anything like QT for iOS?
Can OpenGL increase 2D performance for this kind of drawing?
Example drawRect:
// X-Y array of boxes
- (void)drawRect:(CGRect)rect
{
    int reset = [self pan].x;
    int markX = reset;
    int markY = [self pan].y;
    CGContextRef context = UIGraphicsGetCurrentContext();
    for (int i = 0; i < 1000; i++) // 1,000,000 boxes in total
    {
        for (int j = 0; j < 1000; j++)
        {
            CGContextMoveToPoint(context, markX, markY);
            CGContextAddLineToPoint(context, markX, markY + 10);
            CGContextAddLineToPoint(context, markX + 10, markY + 10);
            CGContextAddLineToPoint(context, markX + 10, markY);
            CGContextAddLineToPoint(context, markX, markY);
            CGContextStrokePath(context);
            markX += 12;
        }
        markY += 12;
        markX = reset;
    }
}
The pan just moves the array of boxes around on screen with a pan gesture. Any help or hints would be greatly appreciated.
The key issue with your example is that it is not optimized. Whenever drawRect: is called, the device is rendering all 1,000,000 squares. Worse still, it's making 6,000,000 calls to those APIs in the loop. If you want to refresh this view at even a modest 30fps, that is 180,000,000 calls / second.
With your 'simple' example, the size of the draw area is 12,000px × 12,000px; the maximum area you can display on the iPad's display is 768×1024 (assuming full-screen portrait). Therefore, the code is wasting a lot of CPU resources drawing outside the visible area. UIKit has ways of handling this scenario with relative ease.
When managing content that is significantly larger than the visible area, you should limit drawing to only what is visible. UIKit has a couple of ways of handling this; a UIScrollView in combination with a view backed by a CATiledLayer is your best bet.
Steps:
Disclaimer: This is specifically an optimization of your example code above
Create a new View Based Application iPad project
Add a reference to the QuartzCore.framework
Create a new class, say MyLargeView, subclassed from UIView and add the following code:
#import <QuartzCore/QuartzCore.h>

@implementation MyLargeView

- (void)awakeFromNib {
    CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
    tiledLayer.tileSize = CGSizeMake(512.0f, 512.0f);
}

// Set the layer's class to be CATiledLayer.
+ (Class)layerClass {
    return [CATiledLayer class];
}

- (void)drawRect:(CGRect)rect {
    // Drawing code: only draws what is specified by the rect parameter
    CGContextRef context = UIGraphicsGetCurrentContext();

    // set up some constants for the objects being drawn
    const CGFloat width = 10.0f;             // width of rect
    const CGFloat height = 10.0f;            // height of rect
    const CGFloat xSpace = 4.0f;             // space between cells (horizontal)
    const CGFloat ySpace = 4.0f;             // space between cells (vertical)
    const CGFloat tWidth = width + xSpace;   // total width of cell
    const CGFloat tHeight = height + ySpace; // total height of cell

    CGFloat xStart = floorf(rect.origin.x / tWidth);  // first visible cell (column)
    CGFloat yStart = floorf(rect.origin.y / tHeight); // first visible cell (row)
    CGFloat xCells = rect.size.width / tWidth + 1;    // number of horizontal visible cells
    CGFloat yCells = rect.size.height / tHeight + 1;  // number of vertical visible cells

    for (int x = xStart; x < (xStart + xCells); x++) {
        for (int y = yStart; y < (yStart + yCells); y++) {
            CGFloat xpos = x * tWidth;
            CGFloat ypos = y * tHeight;
            CGContextMoveToPoint(context, xpos, ypos);
            CGContextAddLineToPoint(context, xpos, ypos + height);
            CGContextAddLineToPoint(context, xpos + width, ypos + height);
            CGContextAddLineToPoint(context, xpos + width, ypos);
            CGContextAddLineToPoint(context, xpos, ypos);
            CGContextStrokePath(context);
        }
    }
}

@end
Edit the view controller nib and add a UIScrollView to the view
Add a UIView to the UIScrollView and make sure it fills the UIScrollView
Change the class to MyLargeView
Set frame size of MyLargeView to 12,000×12,000
Finally, open up the view controller .m file and add the following override:
// Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
- (void)viewDidLoad {
    [super viewDidLoad];
    UIScrollView *scrollView = [self.view.subviews objectAtIndex:0];
    scrollView.contentSize = CGSizeMake(12000, 12000);
}
If you look at the drawRect: call, it is only drawing into the area specified by the rect parameter, which will correspond to the tile size (512×512) for the CATiledLayer we configured in the awakeFromNib method. This will scale to a 1,000,000×1,000,000 pixel canvas.
Alternatives to look at are the ScrollViewSuite example, specifically 3_Tiling.
OpenGL is GPU hardware accelerated on iOS devices. Core Graphics drawing is not, and can be many, many times slower when dealing with a large number of small graphics primitives (lines).
For lots of small squares, just writing them into a bitmap in C code is faster than Core Graphics line drawing. Then draw the bitmap to the view once when done. But OpenGL would be even faster still.
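A rough sketch of that bitmap approach (not the original poster's code; the dimensions and grid layout are illustrative): write pixels straight into a bitmap context's buffer in plain C, then grab the result once as a UIImage.

// Sketch: rasterize a 12px grid of 10x10 outlined squares directly into
// a bitmap context's RGBA buffer, then wrap the result in a UIImage.
size_t width = 1024, height = 1024, bytesPerRow = width * 4;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow,
                                                   colorSpace, kCGImageAlphaPremultipliedLast);
uint8_t *pixels = CGBitmapContextGetData(bitmapContext);
memset(pixels, 0, height * bytesPerRow); // start from a fully transparent buffer
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        size_t gx = x % 12, gy = y % 12; // position within a 12px cell
        if (gx < 10 && gy < 10 && (gx == 0 || gx == 9 || gy == 0 || gy == 9)) {
            pixels[y * bytesPerRow + x * 4 + 3] = 255; // opaque black border pixel
        }
    }
}
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(bitmapContext);
CGColorSpaceRelease(colorSpace);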
Regarding point 4: OpenGL should handle that fine. Check whether you can reuse those objects and whether you can move some of the logic into GLSL code.
On OpenGL performance optimization (in the context of WebGL, but most of it should apply): http://www.youtube.com/watch?v=rfQ8rKGTVlg
I don't know the details of iOS history, so this may not have been an option when the question was first posted. However, I wanted to call out CAShapeLayer as a simple option when dealing with path performance problems. "iOS Core Animation: Advanced Techniques" (find it on Google Books) says CAShapeLayer "uses hardware-accelerated drawing", which I take to mean it's a GPU-based implementation. The same book has a good usage example in chapter 6, which boils down to this:
Create a CAShapeLayer
Configure its lineWidth, fillColor, strokeColor, etc.
Add the layer as a sublayer of your view's containerView.layer
To draw a path, just set it to the layer's "path" property
This made a gigantic performance difference in my app, as measured by Instruments. If your performance problem is path-based, don't wade into OpenGL before you've tried CAShapeLayer.
I encountered the same problem. After endless searching on Google, CAShapeLayer finally saved me! Here are the detailed steps:
Create a view with CAShapeLayer as its layer type by overriding UIView's + (Class)layerClass method
Configure the layer's lineWidth, fillColor, strokeColor, etc.
Create a UIBezierPath instance
To draw a path, use the UIBezierPath instance to add lines, curves, or arcs; once you have finished drawing, just set bezierPath.CGPath as the layer's path property
Here is a simple demo that draws a curve when you touch the demo view:
// SimpleShapeLayerView.m
// (_bezierPath is a UIBezierPath instance variable of the view.)
- (instancetype)init {
    self = [super init];
    if (self) {
        _bezierPath = [UIBezierPath bezierPath];
        CAShapeLayer *shapeLayer = (CAShapeLayer *)self.layer;
        shapeLayer.lineWidth = 5;
        shapeLayer.lineJoin = kCALineJoinRound;
        shapeLayer.lineCap = kCALineCapRound;
        shapeLayer.strokeColor = [UIColor yellowColor].CGColor;
        shapeLayer.fillColor = [UIColor blueColor].CGColor;
    }
    return self;
}

+ (Class)layerClass {
    return [CAShapeLayer class];
}

- (void)customDrawShape {
    CAShapeLayer *shapeLayer = (CAShapeLayer *)self.layer;
    [_bezierPath removeAllPoints];
    [_bezierPath moveToPoint:CGPointMake(10, 10)];
    [_bezierPath addQuadCurveToPoint:CGPointMake(2, 2) controlPoint:CGPointMake(50, 50)];
    shapeLayer.path = _bezierPath.CGPath;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesBegan:touches withEvent:event];
    [self customDrawShape];
}

Creating a UIImage from a rotated UIImageView

I have a UIImageView with an image in it. I have rotated the image prior to display by setting the transform property of the UIImageView to CGAffineTransformMakeRotation(angle) where angle is the angle in radians.
I want to be able to create another UIImage that corresponds to the rotated version that I can see in my view.
I am almost there; by rotating the image context I get a rotated image:
- (UIImage *)rotatedImageFromImageView:(UIImageView *)imageView
{
    UIImage *rotatedImage;

    // Get the width and height of the bounding rectangle
    CGRect boundingRect = [self getBoundingRectAfterRotation:imageView.bounds byAngle:angle];

    // Create a graphics context the size of the bounding rectangle
    UIGraphicsBeginImageContext(boundingRect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Rotate and translate the context
    CGAffineTransform ourTransform = CGAffineTransformIdentity;
    ourTransform = CGAffineTransformConcat(ourTransform, CGAffineTransformMakeRotation(angle));
    CGContextConcatCTM(context, ourTransform);

    // Draw the image into the context
    CGContextDrawImage(context, CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height), imageView.image.CGImage);

    // Get an image from the context (release the intermediate CGImage to avoid leaking it)
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    rotatedImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    // Clean up
    UIGraphicsEndImageContext();
    return rotatedImage;
}
However, the image is not rotated about its centre. I have tried all kinds of transforms concatenated with my rotation to get it to rotate around the centre, but to no avail. Am I missing a trick? Is this even possible, since I am rotating the context, not the image?
Getting desperate to make this work now, so any help would be appreciated.
Dave
EDIT: I've been asked several times for my boundingRect code, so here it is:
- (CGRect)getBoundingRectAfterRotation:(CGRect)rectangle byAngle:(CGFloat)angleOfRotation {
    // Calculate the width and height of the bounding rectangle using basic trig
    CGFloat newWidth = rectangle.size.width * fabs(cosf(angleOfRotation)) + rectangle.size.height * fabs(sinf(angleOfRotation));
    CGFloat newHeight = rectangle.size.height * fabs(cosf(angleOfRotation)) + rectangle.size.width * fabs(sinf(angleOfRotation));

    // Calculate the position of the origin
    CGFloat newX = rectangle.origin.x + ((rectangle.size.width - newWidth) / 2);
    CGFloat newY = rectangle.origin.y + ((rectangle.size.height - newHeight) / 2);

    // Return the rectangle
    return CGRectMake(newX, newY, newWidth, newHeight);
}
OK, at last I seem to have done it. Any comments on the correctness would be welcome... it needed a translate, a rotate, a scale and an offset from the drawing rect position to make it work. The code is here:
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformTranslate(transform, boundingRect.size.width / 2, boundingRect.size.height / 2);
transform = CGAffineTransformRotate(transform, angle);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);

// Draw the image into the context, centred on the (translated) origin
CGContextDrawImage(context, CGRectMake(-imageView.image.size.width / 2, -imageView.image.size.height / 2, imageView.image.size.width, imageView.image.size.height), imageView.image.CGImage);

// Get an image from the context (again, release the intermediate CGImage to avoid a leak)
CGImageRef cgImage = CGBitmapContextCreateImage(context);
rotatedImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
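As a side note, if the rotation angle isn't stored anywhere, it can be recovered from the image view's transform. A small sketch, assuming the transform is a rotation (possibly combined with a uniform scale):

// Extract the rotation angle in radians from the view's affine transform.
CGFloat angle = atan2f(imageView.transform.b, imageView.transform.a);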
