OpenGL ES high resolution on iPhone 4 - opengl-es

I created an empty iOS project and then added a custom GLView class, which is then added to the window in the AppDelegate. I have the following questions:
1) How do I enable hi-res retina mode on the iPhone 4? Currently I am using the following code to check for the device:
CGRect screenBounds = [[UIScreen mainScreen] bounds];
self.window = [[[UIWindow alloc] initWithFrame:screenBounds] autorelease];
// Override point for customization after application launch.
_view = [[GLView alloc] initWithFrame:screenBounds];
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
    NSLog(@"iPad detected");
}
else {
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)] && [[UIScreen mainScreen] scale] == 2) {
        NSLog(@"iPhone4 detected");
        _view.contentScaleFactor = [[UIScreen mainScreen] scale];
    }
    else {
        NSLog(@"iPhone detected");
    }
}
self.window.backgroundColor = [UIColor whiteColor];
//self.window.rootViewController = [[[UIViewController alloc] initWithNibName:nil bundle:nil] autorelease];
[self.window addSubview:_view];
But even after setting the content scale factor, it draws poor-quality polygons with jagged edges, as shown in the image below:
http://farm8.staticflickr.com/7358/8725549609_e2ed1e0e2a_b.jpg
Is there any way to set the resolution to 960x640 instead of the default 480x320?
Please note that I cannot use "someImage@2x.png", because I am generating the images at runtime in the render buffer.
2) The second problem I am having is this warning message:
"Application windows are expected to have a root view controller at the end of application launch"
Thank you for your time.

As for the first question: I do not know the pipeline of your GLView initializer, but the content scale factor must be set before the render buffer is created (usually before the renderbufferStorage:fromDrawable: call). To check that the buffer dimensions are correct (they should be 960x640), use:
GLint width;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
Even if the buffer is retina-sized and the dimensions are correct, the polygons may still look jagged if you do not use any kind of anti-aliasing. The easiest way to get an anti-aliased GL view on iOS is probably multisampling; try searching for glResolveMultisampleFramebufferAPPLE() (you will need a few more lines besides this one, though).
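To make the ordering concrete, here is a minimal sketch, assuming a typical CAEAGLLayer-backed view; _context and _colorRenderbuffer are placeholder ivar names, and ES 1.1 uses the OES-suffixed function names:

// Set the scale factor BEFORE allocating renderbuffer storage.
self.contentScaleFactor = [[UIScreen mainScreen] scale]; // 2.0 on iPhone 4

// renderbufferStorage:fromDrawable: sizes the buffer from the layer,
// so with a scale of 2 a 480x320 point layer yields a 960x640 pixel buffer.
glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderbuffer);
[_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];

And the resolve step of the multisampling approach, again only a sketch: it assumes a multisample framebuffer (_msaaFramebuffer, another placeholder) was created beforehand with glRenderbufferStorageMultisampleAPPLE:

// Draw the frame into the multisample FBO, then resolve it into the
// single-sample FBO (_framebuffer) that backs the CAEAGLLayer.
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, _msaaFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, _framebuffer);
glResolveMultisampleFramebufferAPPLE();

glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderbuffer);
[_context presentRenderbuffer:GL_RENDERBUFFER];

As for the warning in question 2, the commented-out rootViewController line in your own code points at the usual fix: create a view controller, make _view its view (or a subview of it), and assign it to self.window.rootViewController instead of adding _view directly to the window.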

Related

NSView image corruption when dragging from scaled view

I have a custom subclass of NSView that implements drag/drop for copying the image in the view to another application. The relevant code in my class looks like this:
#pragma mark -
#pragma mark Dragging Support

- (NSImage *)imageWithSubviews
{
    NSSize imgSize = self.bounds.size;
    NSBitmapImageRep *bir = [[NSBitmapImageRep alloc] initWithFocusedViewRect:[self frame]];
    [self cacheDisplayInRect:[self frame] toBitmapImageRep:bir];
    NSImage *image = [[NSImage alloc] initWithSize:imgSize];
    [image addRepresentation:bir];
    return image;
}

- (void)mouseDown:(NSEvent *)theEvent
{
    NSSize dragOffset = NSMakeSize(0.0, 0.0); // not used in the method below, but required.
    NSPasteboard *pboard;
    NSImage *image = [self imageWithSubviews];

    pboard = [NSPasteboard pasteboardWithName:NSDragPboard];
    [pboard declareTypes:[NSArray arrayWithObject:NSTIFFPboardType]
                   owner:self];
    [pboard setData:[image TIFFRepresentation]
            forType:NSTIFFPboardType];

    [self dragImage:image
                 at:self.bounds.origin
             offset:dragOffset
              event:theEvent
         pasteboard:pboard
             source:self
          slideBack:YES];
    return;
}

#pragma mark -
#pragma mark NSDraggingSource Protocol

- (NSDragOperation)draggingSession:(NSDraggingSession *)session sourceOperationMaskForDraggingContext:(NSDraggingContext)context
{
    return NSDragOperationCopy;
}

- (BOOL)ignoreModifierKeysForDraggingSession:(NSDraggingSession *)session
{
    return YES;
}
This works as expected until I resize the main window. The main window only increases height/width in equal increments, to maintain the proper aspect ratio of this view. The view properly displays its content on the screen when the window is resized.
The problem comes when I resize the window by more than about 25%. While the view still displays as expected, the image that is dragged off of it (into Pages, for example) is corrupt. It appears to have a portion of the image repeated on top of itself.
Here is what it looks like normally:
And here is what it looks like when dragged to Pages after resizing the main window to make it large (downsized to show here -- imagine it at 2-3x the size of the first image):
Note that I highlighted the corrupt area with a dotted rectangle.
A few more notes:
I have my bounds set to NSMakeRect(-200,-200,400,400) because it makes the symmetrical drawing a bit easier. When the window resizes, I recalculate the bounds to keep (0,0) in the center of the NSView. The NSView is always square.
Finally, the Apple docs state the following about the bitmapImageRep parameter of cacheDisplayInRect:toBitmapImageRep::
"An NSBitmapImageRep object. For pixel-format compatibility, bitmapImageRep should have been obtained from bitmapImageRepForCachingDisplayInRect:."
I've tried using bitmapImageRepForCachingDisplayInRect:, but then all I see is the lower-left quadrant of the pyramid in the upper-right quadrant of the image. That makes me think that I need to add an offset for the capture of the bitmapImageRep, but I've been unable to determine how to do that.
Here's what the code for imageWithSubviews looks like when I try that:
- (NSImage *)imageWithSubviews
{
    NSSize imgSize = self.bounds.size;
    NSBitmapImageRep *bir = [self bitmapImageRepForCachingDisplayInRect:[self bounds]];
    [self cacheDisplayInRect:[self bounds] toBitmapImageRep:bir];
    NSImage *image = [[NSImage alloc] initWithSize:imgSize];
    [image addRepresentation:bir];
    return image;
}
And this is how the resulting image appears:
That is a view of the lower left quadrant being drawn in the upper-right corner.
What is causing the corruption when I drag from the NSView after enlarging the window? How do I fix that and/or change my implementation of the methods listed above to avoid the problem?
More info:
When I change the imageWithSubviews method to:
- (NSImage *)imageWithSubviews
{
    NSSize imgSize = self.bounds.size;
    NSBitmapImageRep *bir = [[NSBitmapImageRep alloc] initWithFocusedViewRect:[self frame]];
    [self cacheDisplayInRect:[self bounds] toBitmapImageRep:bir];
    NSImage *image = [[NSImage alloc] initWithSize:imgSize];
    [image addRepresentation:bir];
    return image;
}
I get a corrupted image without scaling, where the bottom-left quadrant of the image is drawn again on top of the top-right quadrant, like this:
What in the world am I doing wrong?
Solution:
While it does not address the core problem of drawing with NSBitmapImageRep, the following -imageWithSubviews prevents the corruption and outputs the correct image:
- (NSImage *)imageWithSubviews
{
    NSData *pdfData = [self dataWithPDFInsideRect:[self bounds]];
    NSImage *image = [[NSImage alloc] initWithData:pdfData];
    return image;
}
Based on the debugging above, we determined the problem was in -imageWithSubviews.
Instead of generating the image data for the view with -cacheDisplayInRect:toBitmapImageRep:, switching to -dataWithPDFInsideRect: fixed the issue.
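Since the view already renders itself correctly as PDF, a related option (not part of the original fix, just a sketch) is to offer the PDF data on the pasteboard alongside the TIFF in -mouseDown: above, so vector-aware targets such as Pages can paste at full resolution:

// Declare both PDF and TIFF; consumers pick the richest type they support.
[pboard declareTypes:[NSArray arrayWithObjects:NSPDFPboardType, NSTIFFPboardType, nil]
               owner:self];
[pboard setData:[self dataWithPDFInsideRect:[self bounds]] forType:NSPDFPboardType];
[pboard setData:[image TIFFRepresentation] forType:NSTIFFPboardType];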

Capturing an offline NSView to an NSImage

I'm trying to make a custom animation for replacing an NSView with another.
For that reason I need to get an image of the NSView before it appears on the screen.
The view may contain layers and NSOpenGLView subviews, so standard options like initWithFocusedViewRect: and bitmapImageRepForCachingDisplayInRect: do not work well in this case (they did not capture layer or OpenGL content well in my experiments).
I am looking for something like CGWindowListCreateImage that is able to "capture" an offline NSWindow, including layers and OpenGL content.
Any suggestions?
I created a category for this:
@implementation NSView (PecuniaAdditions)

/**
 * Returns an offscreen view containing all visual elements of this view for printing,
 * including CALayer content. Useful only for views that are layer-backed.
 */
- (NSView *)printViewForLayerBackedView
{
    NSRect bounds = self.bounds;
    int bitmapBytesPerRow = 4 * bounds.size.width;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 bounds.size.width,
                                                 bounds.size.height,
                                                 8,
                                                 bitmapBytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    if (context == NULL)
    {
        NSLog(@"getPrintViewForLayerBackedView: Failed to create context.");
        return nil;
    }

    [[self layer] renderInContext: context];
    CGImageRef img = CGBitmapContextCreateImage(context);
    NSImage *image = [[NSImage alloc] initWithCGImage: img size: bounds.size];

    NSImageView *canvas = [[NSImageView alloc] initWithFrame: bounds];
    [canvas setImage: image];

    CFRelease(img);
    CFRelease(context);

    return canvas;
}

@end
This code is primarily for printing NSViews that contain layered child views. It might help you too.
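A possible way to use the category for the animation in the question (a sketch; offscreenView is a placeholder, and pulling the image back out of the returned NSImageView is an assumption about how you would consume it):

// Snapshot the not-yet-visible view, then animate with the image.
NSImageView *snapshot = (NSImageView *)[offscreenView printViewForLayerBackedView];
NSImage *beforeImage = [snapshot image];
// Use beforeImage as the first frame of the replacement animation.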

Zoom & Pan in UIImageView inside UIScrollView

I have a UIScrollView filled with UIImageViews. The UIScrollView has paging enabled so users can "flip" through the images. Each UIImageView has a UIPinchGestureRecognizer for pinch zooming and a UIPanGestureRecognizer for panning the image when zoomed in.
As you have probably noticed, what I'd like to achieve is just like what iBooks does.
However, I am having a difficult time getting this to work.
In my "BookPageViewController", I set up the UIScrollView and fill it with images from a folder, based on data (page numbers, file names, etc.) from SQLite.
_pageScrollView = [[UIScrollView alloc] initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];

NSUInteger pageCount = [pages count];
for (int i = 0; i < pageCount; i++) {
    CGFloat x = i * self.view.frame.size.width;

    // Page Settings
    UIImageView *pageView = [[UIImageView alloc] initWithFrame:CGRectMake(x, 0, self.view.frame.size.width, self.view.frame.size.height)];
    pageView.backgroundColor = [UIColor greenColor];
    pageView.image = [pages objectAtIndex:i];
    pageView.userInteractionEnabled = YES;
    pageView.tag = i;

    // Gesture Recognisers
    UIPinchGestureRecognizer *pinchGr = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(pinchImage:)];
    pinchGr.delegate = self;
    [pageView addGestureRecognizer:pinchGr];

    UIPanGestureRecognizer *panGr = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panImage:)];
    [pageView addGestureRecognizer:panGr];

    // Finally add this page to the ScrollView
    [_pageScrollView addSubview:pageView];
}
_pageScrollView.contentSize = CGSizeMake(self.view.frame.size.width * pageCount, self.view.frame.size.height);
_pageScrollView.pagingEnabled = YES;
_pageScrollView.delegate = self;
[self.view addSubview:_pageScrollView];
And with the help of other good questions here on Stack Overflow, I put together this:
- (void)pinchImage:(UIPinchGestureRecognizer *)pgr {
    [self adjustAnchorPointForGestureRecognizer:pgr];

    if ([pgr state] == UIGestureRecognizerStateBegan || [pgr state] == UIGestureRecognizerStateChanged) {
        if ([pgr scale]) {
            NSLog(@"SCALE: %f", [pgr scale]);
            [pgr view].transform = CGAffineTransformScale([[pgr view] transform], [pgr scale], [pgr scale]);
            [pgr setScale:1];
        } else {
            NSLog(@"[PAMPHLET PAGE]: The image cannot be scaled.");
        }
    }
}
My problem is that when I zoom in on one of the UIImageViews with the pinch gesture, the image extends over the next image and hides it. I believe there should be a way to limit the zoom in/out and the size of the UIImageView (not the UIImage itself), but I don't know where to go from here. I have also tried code to limit the scale, something like:
([pgr scale] > 1.0f && [pgr scale] < 1.014719) || ([pgr scale] < 1.0f && [pgr scale] > 0.98f)
but it didn't work...
I know this is not hard for iOS professionals, but I'm quite new to Objective-C, and this is my first time developing a real application. If this is not good practice, I would also like to hear other ideas for achieving what iBooks does (e.g. put a UIImageView in a UIScrollView, then put that UIScrollView inside another UIScrollView)...
Sorry for another beginner question, but I really need help.
Thanks in advance.
Without trying it out myself, my first guess would be that the transform on the UIImageView also transforms its frame. One way to solve that would be to put the UIImageView in another UIView, and put that UIView in the UIScrollView. Your gesture recognizers and the transform would still be on the UIImageView. Make sure the UIView's clipsToBounds property is set to YES.
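A minimal sketch of that arrangement, adapted from the loop in the question (containerView is a placeholder name; the gesture recognizer setup is unchanged):

// Wrap each page's image view in a clipping container so the
// transformed image cannot spill over onto neighbouring pages.
UIView *containerView = [[UIView alloc] initWithFrame:CGRectMake(x, 0, self.view.frame.size.width, self.view.frame.size.height)];
containerView.clipsToBounds = YES;

UIImageView *pageView = [[UIImageView alloc] initWithFrame:containerView.bounds];
pageView.image = [pages objectAtIndex:i];
pageView.userInteractionEnabled = YES;
[pageView addGestureRecognizer:pinchGr]; // transform still targets the image view
[pageView addGestureRecognizer:panGr];

[containerView addSubview:pageView];
[_pageScrollView addSubview:containerView];

To limit the zoom range, one option is to track a running scale across pinch events and only apply the transform while it stays within, say, 1.0 to 3.0.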

Setting image property of UIImageView causes major lag

Let me tell you about the problem I am having and how I tried to solve it. I have a UIScrollView which loads subviews as one scrolls from left to right. Each subview has 10-20 images around 400x200 each. When I scroll from view to view, I experience quite a bit of lag.
After investigating, I discovered that after unloading all the views and trying it again, the lag was gone. I figured that the synchronous caching of the images was the cause of the lag. So I created a subclass of UIImageView which loaded the images asynchronously. The loading code looks like the following (self.dispatchQueue returns a serial dispatch queue).
- (void)loadImageNamed:(NSString *)name {
    dispatch_async(self.dispatchQueue, ^{
        UIImage *image = [UIImage imageNamed:name];

        dispatch_sync(dispatch_get_main_queue(), ^{
            self.image = image;
        });
    });
}
However, after changing all of my UIImageViews to this subclass, I still experienced lag (I'm not sure if it was lessened or not). I boiled down the cause of the problem to self.image = image;. Why is this causing so much lag (but only on the first load)?
Please help me. =(
EDIT 3: iOS 15 now offers UIImage.prepareForDisplay(completionHandler:).
image.prepareForDisplay { decodedImage in
    imageView.image = decodedImage
}
or
imageView.image = await image.byPreparingForDisplay()
EDIT 2: Here is a Swift version that contains a few improvements. (Untested.)
https://gist.github.com/fumoboy007/d869e66ad0466a9c246d
EDIT: Actually, I believe all that is necessary is the following. (Untested.)
- (void)loadImageNamed:(NSString *)name {
    dispatch_async(self.dispatchQueue, ^{
        // Determine path to image depending on scale of device's screen,
        // fallback to 1x if 2x is not available
        NSString *pathTo1xImage = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
        NSString *pathTo2xImage = [[NSBundle mainBundle] pathForResource:[name stringByAppendingString:@"@2x"] ofType:@"png"];

        NSString *pathToImage = ([UIScreen mainScreen].scale == 1 || !pathTo2xImage) ? pathTo1xImage : pathTo2xImage;

        UIImage *image = [[UIImage alloc] initWithContentsOfFile:pathToImage];

        // Decompress image
        if (image) {
            UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
            [image drawAtPoint:CGPointZero];
            image = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
        }

        // Configure the UI with pre-decompressed UIImage
        dispatch_async(dispatch_get_main_queue(), ^{
            self.image = image;
        });
    });
}
ORIGINAL ANSWER: It turns out that it wasn't self.image = image; directly. The UIImage image loading methods don't decompress and process the image data right away; they do it when the view refreshes its display. So the solution was to go a level lower to Core Graphics and decompress and process the image data myself. The new code looks like the following.
- (void)loadImageNamed:(NSString *)name {
    dispatch_async(self.dispatchQueue, ^{
        // Determine path to image depending on scale of device's screen,
        // fallback to 1x if 2x is not available
        NSString *pathTo1xImage = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
        NSString *pathTo2xImage = [[NSBundle mainBundle] pathForResource:[name stringByAppendingString:@"@2x"] ofType:@"png"];

        NSString *pathToImage = ([UIScreen mainScreen].scale == 1 || !pathTo2xImage) ? pathTo1xImage : pathTo2xImage;

        UIImage *uiImage = nil;

        if (pathToImage) {
            // Load the image
            CGDataProviderRef imageDataProvider = CGDataProviderCreateWithFilename([pathToImage fileSystemRepresentation]);
            CGImageRef image = CGImageCreateWithPNGDataProvider(imageDataProvider, NULL, NO, kCGRenderingIntentDefault);

            // Create a bitmap context from the image's specifications
            // (Note: We need to specify kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little
            // because PNGs are optimized by Xcode this way.)
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            CGContextRef bitmapContext = CGBitmapContextCreate(NULL, CGImageGetWidth(image), CGImageGetHeight(image), CGImageGetBitsPerComponent(image), CGImageGetWidth(image) * 4, colorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);

            // Draw the image into the bitmap context
            CGContextDrawImage(bitmapContext, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);

            // Extract the decompressed image
            CGImageRef decompressedImage = CGBitmapContextCreateImage(bitmapContext);

            // Create a UIImage
            uiImage = [[UIImage alloc] initWithCGImage:decompressedImage];

            // Release everything
            CGImageRelease(decompressedImage);
            CGContextRelease(bitmapContext);
            CGColorSpaceRelease(colorSpace);
            CGImageRelease(image);
            CGDataProviderRelease(imageDataProvider);
        }

        // Configure the UI with pre-decompressed UIImage
        dispatch_async(dispatch_get_main_queue(), ^{
            self.image = uiImage;
        });
    });
}
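Usage of the subclass would then look something like this (AsyncImageView, scrollView, and pageFrame are placeholder names; loadImageNamed: is the method above):

// The subclass decodes on its serial queue and assigns the
// ready-to-draw image on the main queue, so scrolling stays smooth.
AsyncImageView *imageView = [[AsyncImageView alloc] initWithFrame:pageFrame];
[scrollView addSubview:imageView];
[imageView loadImageNamed:@"page7"];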
I think the problem could be the images themselves. For example, in one of my projects I had 10 images, 640x600 each, layered on top of each other with alpha transparency. When I tried to push or pop a view controller from that view controller, it lagged a lot.
When I left only a few images, or used much smaller images, there was no lag.
P.S. Tested with SDK 4.2 / iOS 5 on an iPhone 4.

EAGLView transparency frames per second

I have a Cocos2d project and I want a constant background throughout the app. In the applicationDidFinishLaunching method of its delegate, I have replaced the line:
[viewController setView:glView];
with
[[viewController view] addSubview:glView];
because I have added subviews to the RootViewController's view in its initWithNibName:bundle: method, and those changes are lost if the view is replaced with glView.
I have also changed the pixelFormat of glView from kEAGLColorFormatRGB565 to kEAGLColorFormatRGBA8. When I make that change, glView becomes transparent and I can see through it, but the fps drops dramatically. If I don't make that change, the view doesn't become transparent, but I don't see the huge drop in fps. I'm talking about a significant drop in fps, from 59.0-60.0 to about 35.0-42.0.
I am using this code right below the addSubview line above to make the view transparent:
glClearColor(0, 0, 0, 0);
director.openGLView.backgroundColor = [UIColor clearColor];
director.openGLView.opaque = NO;
The last two lines are the culprits; commenting them out (both, not just one) eliminates the large drop in fps, while commenting out the glClearColor line has no effect on fps.
The whole applicationDidFinishLaunching method looks like this:
- (void)applicationDidFinishLaunching:(UIApplication *)application {
    window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];

    if (![CCDirector setDirectorType:kCCDirectorTypeDisplayLink])
        [CCDirector setDirectorType:kCCDirectorTypeDefault];

    CCDirector *director = [CCDirector sharedDirector];

    // Init the View Controller
    viewController = [[RootViewController alloc] initWithNibName:nil bundle:nil];
    viewController.wantsFullScreenLayout = YES;

    // Create the EAGLView manually
    //  1. Create a RGB565 format. Alternative: RGBA8
    //  2. depth format of 0 bit. Use 16 or 24 bit for 3d effects, like CCPageTurnTransition
    //
    EAGLView *glView = [EAGLView viewWithFrame:[window bounds]
                                   pixelFormat:kEAGLColorFormatRGBA8
                                   depthFormat:0];

    // attach the openglView to the director
    [director setOpenGLView:glView];

    if (![director enableRetinaDisplay:YES])
        CCLOG(@"Retina Display Not supported");

#if GAME_AUTOROTATION == kGameAutorotationUIViewController
    [director setDeviceOrientation:kCCDeviceOrientationPortrait];
#else
    [director setDeviceOrientation:kCCDeviceOrientationPortrait];
#endif

    [director setAnimationInterval:1.0/60];
    [director setDisplayFPS:YES];

    // make the OpenGLView a child of the view controller
    [[viewController view] addSubview:glView];

    //***make glView transparent***
    glClearColor(0, 0, 0, 0);
    director.openGLView.backgroundColor = [UIColor clearColor];
    director.openGLView.opaque = NO;

    // make the View Controller a child of the main window
    [window addSubview:viewController.view];
    [window makeKeyAndVisible];

    // Default texture format for PNG/BMP/TIFF/JPEG/GIF images
    // It can be RGBA8888, RGBA4444, RGB5_A1, RGB565
    // You can change anytime.
    [CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA8888];

    // Removes the startup flicker
    [self removeStartupFlicker];

    // Run the intro Scene
    [[CCDirector sharedDirector] runWithScene:[MainMenu scene]];
}
Any ideas as to why this is happening? I can provide more code if need be.
If you're testing this on a 1st or 2nd generation device, the drop in framerate is to be expected. Nothing you can do about it. These devices are heavily fillrate-limited, and a transparent 32-bit GL view is just asking too much of the device.
If this happens on a 3rd or even 4th generation device, then there's got to be something wrong but I couldn't begin to tell what that might be.
If you're testing the performance on the Simulator, don't. It's irrelevant.
