This is my first post here - but I've been a reader for a long time. Thank you so much for this site! :-)
I am currently working on a port of my XNA-based 2D engine from WP7 to iOS (5). I would prefer not to use OpenGL directly, because I'd rather invest my time in gameplay than in low-level graphics code. So I would be very happy to get an answer that doesn't involve OpenGL (directly).
The problem:
When I add a UIImageView to another UIView, there is a short delay before the UIImageView gets drawn. I assume this is due to the caching the UIView class performs before everything is converted internally to OpenGL and drawn.
What I want:
Tell the UIView (superview) to perform all necessary calculations for all subviews and then draw them all at once.
Currently the behaviour I observe is: calculate uiimageview_1, draw uiimageview_1, calculate uiimageview_n, draw uiimageview_n, ...
Dummy code of what I want:
// put code here to tell superview to pause drawing
for (int i = 0; i < 400; i++)
{
    [superview addSubview:imageView[i]]; // imageView: an array of 400 UIImageViews
}
// put code here to tell superview to draw now
Possible workaround (but coming from C# & Windows, I have no idea how to implement it efficiently in Objective-C on iOS): I am afraid this code is inefficient, because large blocks of RAM would have to be transferred (per frame!) on Retina displays at native resolution:
for (int i = 0; i < 400; i++)
{
    [superview addSubview:imageView[i]];
}
// put code here to get a bitmap in ram from superview
// return bitmap and draw it in a view for the scenery/canvas
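In case it helps, here is roughly what I mean, as a sketch: it snapshots the superview into a bitmap via renderInContext: (canvasView is a placeholder for the scenery/canvas view):

#import <QuartzCore/QuartzCore.h>

// Render the superview's layer into a single UIImage that a canvas
// view can then display.
UIGraphicsBeginImageContextWithOptions(superview.bounds.size, NO,
                                       [UIScreen mainScreen].scale);
[superview.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
canvasView.image = snapshot; // canvasView: a hypothetical UIImageView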
Any help on how to approach this "popping" problem would be highly appreciated.
Thanks
Answering this will be a bit tricky: the exact behavior of Cocoa Touch is undocumented (an 'implementation detail', says Apple), so recommendations can only be based on guesswork and experience.
Instead of working with UIViews, you might want to try Core Animation directly. It abstracts 2D drawing and compositing operations without the overhead of a full UI framework such as UIKit, and it is also much easier to use than programming OpenGL directly. UIKit uses Core Animation to do its drawing, but augments it in many ways. This 'augmentation' might be exactly the reason why you're hitting performance problems.
Have a look at the tutorial and judge for yourself.
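For what it's worth, batching additions is straightforward with bare CALayers: everything changed inside one explicit CATransaction is committed together. A minimal sketch (containerView, the positions, and spriteImage are placeholders):

#import <QuartzCore/QuartzCore.h>

// Add all sprite layers inside a single transaction so they appear
// together in the same frame.
[CATransaction begin];
[CATransaction setDisableActions:YES]; // suppress implicit fade-in animations

for (int i = 0; i < 400; i++) {
    CALayer *sprite = [CALayer layer];
    sprite.frame = CGRectMake(20.0 * (i % 20), 20.0 * (i / 20), 20.0, 20.0);
    sprite.contents = (id)spriteImage.CGImage; // spriteImage: a pre-loaded UIImage
    [containerView.layer addSublayer:sprite];
}

[CATransaction commit]; // all 400 layers show up at once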
First of all, forget about UIKit's views. If you're trying to port an XNA game to iOS, I'd recommend familiarizing yourself with Cocos2D. It's more convenient than raw OpenGL, and it'll give you the performance you need.
I'm looking for a way to do frame-by-frame, programmatically drawn animations in a macOS application (not keyframe property animation). I have tried drawing into CALayers using the drawLayer:inContext: delegate method and calling setNeedsDisplay to draw each frame, but I'm getting poor performance that way. Is there a recommended way to do this type of animation in Cocoa?
A good way to do entirely custom animations is by using CADisplayLink (iOS) or CVDisplayLink (macOS). CVDisplayLink is basically a timer that fires as often as the display refreshes.
You can then calculate your own timing functions based on the values you get from CVDisplayLink. The API is still C, so it is a bit cumbersome to use, especially from Swift, but once you understand how it works, it works like a charm.
I have only had good experiences with CVDisplayLink, especially with layers; they are really performant. I was able to animate 1000+ CVDisplayLink-driven layers at 60 fps without any problems.
If you need any help in using the API, feel free to ask!
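On iOS, for example, a CADisplayLink setup can be as small as this sketch (the animatedLayer property and the motion computed in step: are placeholders):

#import <QuartzCore/QuartzCore.h>

// Create a display link that calls -step: once per display refresh.
- (void)startAnimating {
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(step:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

// Derive your own timing from link.timestamp, then set layer properties
// directly, with implicit animations disabled for the frame.
- (void)step:(CADisplayLink *)link {
    CGFloat progress = fmod(link.timestamp, 2.0) / 2.0; // 0..1 every 2 seconds
    [CATransaction begin];
    [CATransaction setDisableActions:YES];
    self.animatedLayer.position = CGPointMake(300.0 * progress, 100.0);
    [CATransaction commit];
}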
Alternative:
If you want to use a more modern API, I can recommend SpriteKit. It has some nice animation APIs as well, and they perform really well. Apple itself uses it to draw more complex views (like the Memory Debugger in Xcode).
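As a taste of the API, a pulsing sprite takes only a few lines (a sketch; the asset name and the scene variable are made up):

#import <SpriteKit/SpriteKit.h>

// A node that fades out and back in forever, driven by SKAction.
SKSpriteNode *node = [SKSpriteNode spriteNodeWithImageNamed:@"sprite"]; // hypothetical asset
SKAction *pulse = [SKAction sequence:@[[SKAction fadeAlphaTo:0.3 duration:0.5],
                                       [SKAction fadeAlphaTo:1.0 duration:0.5]]];
[node runAction:[SKAction repeatActionForever:pulse]];
[scene addChild:node]; // scene: an SKScene presented by an SKView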
I'm using Storyboard with DoubleAnimation to translate Y of a StackPanel from top to bottom.
It works fine. But what I want is to accelerate it as it approaches the bottom (like the status panel on Android).
I read this tutorial here, but it seems to apply to Silverlight only.
How can I do that?
What you're looking for is an easing class.
Easing classes change an animation's speed over time, so an animation can start out slow and speed up, or the opposite.
Here's the documentation on ExponentialEase. It's pretty straightforward:
http://msdn.microsoft.com/en-us/library/system.windows.media.animation.exponentialease(v=vs.95).aspx
However, if you want more fine-tuning, you'll have to use key frames and set the time difference between them.
First, check out this very handy extension to CALayer from elsewhere on SO. It helps you determine if a point in a layer's contents-assigned CGImageRef is or isn't transparent.
n.b.: There is no guarantee about a layer's contents being representable or responding as if it were a CGImageRef. (This can have implications for broader use of the extension referenced above, granted.) In my case, however, I know that the layers I'm testing have contents that were assigned a CGImageRef. (Hopefully this can't change out from under me after assignment! Plus I notice that contents is retained.)
OK, back to the problem at hand. Here's how I'm using the extension. For starters, I've changed the selector from containsPoint: to containsNonTransparentPoint: (I need to keep the original method around.)
Now, I have a UIImageView subclass that uses seven CALayer objects. These are used for opacity-based animations (pulsing/glowing effects and on/off states). Each of those seven layers has a known CGImageRef in its contents that effectively "covers" (air quotes) one part of the entire view with its own swath of color. The rest of each image in its respective layer is transparent.
In the subclass, I register for single tap gestures. When one arrives, I walk through my layers to see which one was effectively tapped (that is, which one has a non-transparent point where I tapped, first one found wins) and then I can do whatever needs doing.
Here's how I handle the gesture:
- (IBAction)handleSingleTap:(UIGestureRecognizer *)sender {
    CGPoint tapPoint = [sender locationInView:sender.view];

    // Flip y so 0,0 is at lower left. (Required by layer method below.)
    tapPoint.y = sender.view.bounds.size.height - tapPoint.y;

    // Figure out which layer was effectively tapped. First match wins.
    for (CALayer *layer in myLayers) {
        if ([layer containsNonTransparentPoint:tapPoint]) {
            NSLog(@"%@ tapped at (%.0f, %.0f)", layer.name, tapPoint.x, tapPoint.y);
            // We got our layer! Do something useful with it.
            return;
        }
    }
}
The good news? All of this works beautifully on the iPhone Simulator with iOS 4.3.2. (FWIW, I'm on Lion running Xcode 4.1.)
However, on my iPhone 4 (with iOS 4.3.3), it doesn't even come close! None of my taps seem to match up with any of the layers I'd expect them to.
Even if I try the suggestion to use CGContextSetBlendMode when drawing into the 1x1 pixel context, no dice.
I am hoping it's pilot error, but I have yet to figure out what the disparity is. The taps do have a pattern but not a discernible one.
Perhaps there's a data boundary issue. Perhaps I have to do something other than flip the y coordinate to the lower-left of the image. Just not sure yet.
If anyone can please shed some light on what might be amiss, I would be most appreciative!
UPDATE, 22 September 2011: First ah-ha moment acquired! The problem isn't Simulator-vs-iPhone. It's Retina vs. Non-Retina! The same symptoms occur in the Simulator when using the Retina version. Perhaps the solution centers around scaling (the CTM?) in some way/shape/form. The Quartz 2D Programming Guide also advises that "iOS applications should use UIGraphicsBeginImageContextWithOptions." I feel like I'm very close to the solution here!
OK! First, the problem wasn't Simulator-vs-iPhone. Rather, it was Retina vs. Non-Retina. The same symptoms occur in the Simulator when using the Retina version. Right away, one starts to think the solution has to do with scaling.
A very helpful post over on the Apple Dev Quartz 2D forum (along similar "be mindful of scaling" lines) steered me toward a solution. Now, I'm the first to admit, this solution is NOT pretty, but it does work for Retina and Non-Retina cases.
With that, here's the revised code for the aforementioned CALayer extension:
//
// Checks image at a point (and at a particular scale factor) for transparency.
// Point must be with origin at lower-left.
//
BOOL ImagePointIsTransparent(CGImageRef image, CGFloat scale, CGPoint point) {
    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 1,
                                                 NULL, kCGImageAlphaOnly);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context, CGRectMake(-point.x, -point.y,
        CGImageGetWidth(image)/scale, CGImageGetHeight(image)/scale), image);
    CGContextRelease(context);
    CGFloat alpha = pixel[0]/255.0;
    return (alpha < 0.01);
}
@implementation CALayer (Extensions)

- (BOOL)containsNonTransparentPoint:(CGPoint)point scale:(CGFloat)scale {
    if (CGRectContainsPoint(self.bounds, point)) {
        if (!ImagePointIsTransparent((CGImageRef)self.contents, scale, point))
            return YES;
    }
    return NO;
}

@end
In short, we need to know about the scale. If we divide the image width and height by that scale, ta-dah, the hit test now works on Retina and Non-Retina devices!
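At the call site, the tap handler from the question then just passes the screen scale along, roughly like this:

CGFloat scale = [UIScreen mainScreen].scale; // 2.0 on Retina, 1.0 otherwise
for (CALayer *layer in myLayers) {
    if ([layer containsNonTransparentPoint:tapPoint scale:scale]) {
        // This layer was effectively tapped; do something useful with it.
        return;
    }
}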
What I don't like about this is the mess I've had to make of that poor selector, now called containsNonTransparentPoint:scale:. As mentioned in the question, there is never any guarantee what a layer's contents will contain. In my case I am taking care to only use this on layers with a CGImageRef in there, but this won't fly in a more general/reusable case.
All this makes me wonder if CALayer is not the best place for this particular extension after all, at least in this new incarnation. Perhaps CGImage, with some layer smarts thrown in, would be cleaner. Imagine doing a hit test on a CGImage but returning the name of the first layer that had non-transparent content at that point. There's still the problem of not knowing which layers have CGImageRefs in them, so some hinting might be required. (Left as an exercise for yours truly and the reader!)
UPDATE: After some discussion with a developer at Apple, messing with layers in this fashion is in fact ill-advised. Contrary to what I previously learned (incorrectly?), multiple UIImageViews encapsulated within a UIView are the way to go here. (I always remember learning that you want to keep your views to a minimum. Perhaps in this case it isn't as big a deal.) Nevertheless, I'll keep this answer here for now, but will not mark it as correct. Once I try out and verify the other technique, I will share that here!
Is there any Cocoa control that is capable of drawing tile maps with multiple layers and multiple texture sources, and that can also map touches to individual tiles? Multiple-layer support is not a hard requirement but an optional feature (the views could still be stacked). Hardware acceleration is not needed at all.
So far I have toyed around with NSMatrix, IKImageBrowser and NSCollectionView, but none of them felt like a good solution for the problem. Ideally I need a control similar to the one in Tiled.app. Is there anything, third party or built-in, or do I have to handcraft this control?
I fear you will be hard-pressed to find a ready-to-use control for managing tile maps.
If I had to implement something like that on my own, I would consider using CATiledLayer, since this is the closest thing to a tile map control that I know of.
From CATiledLayer Reference:
CATiledLayer is a subclass of CALayer providing a way to asynchronously provide tiles of the layer's content, potentially cached at multiple levels of detail.
There is a nice sample by Bill Dudney (the author of "Core Animation for MacOS and the iPhone"). This sample could provide a solid foundation for your own project, though it only displays a single PDF, allowing you to zoom in on the area you clicked (which requires rereading the tile at a different level of detail).
Another interesting introduction can be found here. This is more step-by-step, so you might start with this.
Finally, there is a nice article on Cocoa is my Girlfriend; although it focuses on iOS, you may find it relevant anyway.
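To give a feel for it, a minimal CATiledLayer setup might look like this sketch (mapColumns, mapRows and the tile-drawing code are placeholders; on the Mac the container view also needs setWantsLayer:YES):

// Set up a tiled layer whose delegate draws each tile on demand.
- (void)setUpTiledLayer {
    CATiledLayer *tiledLayer = [CATiledLayer layer];
    tiledLayer.tileSize = CGSizeMake(32.0, 32.0);
    tiledLayer.frame = CGRectMake(0.0, 0.0, 32.0 * mapColumns, 32.0 * mapRows);
    tiledLayer.delegate = self;
    [self.view.layer addSublayer:tiledLayer];
}

// Called once per visible tile, potentially on a background thread.
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    CGRect tileRect = CGContextGetClipBoundingBox(ctx);
    // Map tileRect back to map coordinates and draw that tile's image here.
}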
Cocos2D supports building Mac applications now.
Article on Cocos2D stating this: http://www.cocos2d-iphone.org/archives/1444
See here for how to do it: http://chris-fletcher.com/tag/cocos2d-os-x/
See here for how to use TMX tile maps with Cocos2D to build tile-based maps: http://www.cocos2d-iphone.org/wiki/doku.php/prog_guide:tiled_maps
This means you can use the power of Cocos2D, and you will have to write much less code to get where you want with a tile-based map.
If you don't want to use Cocos2D:
It seems you would have to code it yourself, but it shouldn't be too hard to do.
First, create your .TMX file using the tile editor Tiled.app; then parse the XML using a standard XML library for Cocoa.
To lay out the tiles, use a UIView as the overall container, and create a tile class (extending UIView) that holds your tile's display information and responds to clicks. Allow a click delegate to be assigned, and make your view controller the click delegate for all tiles, so you can handle clicks in one place with the clicked tile passed to you.
Loop through your XML data and create and position the tiles in the overall UIView, using the tiles' width/height and your tile map's rows/columns, as in the sketch below.
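A rough sketch of that layout step (TileView, its delegate protocol, rows/columns and the tile size are all hypothetical names):

#import <UIKit/UIKit.h>

@protocol TileViewDelegate;

// One view per map tile; reports taps to its click delegate.
@interface TileView : UIView
@property (nonatomic, assign) NSInteger row;
@property (nonatomic, assign) NSInteger column;
@property (nonatomic, assign) id<TileViewDelegate> clickDelegate;
@end

@protocol TileViewDelegate <NSObject>
- (void)tileTapped:(TileView *)tile;
@end

@implementation TileView
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [self.clickDelegate tileTapped:self];
}
@end

// In the view controller (which adopts TileViewDelegate), lay out the grid:
CGFloat tileSize = 32.0; // read from the TMX file in practice
for (NSInteger row = 0; row < rows; row++) {
    for (NSInteger col = 0; col < columns; col++) {
        CGRect frame = CGRectMake(col * tileSize, row * tileSize,
                                  tileSize, tileSize);
        TileView *tile = [[TileView alloc] initWithFrame:frame];
        tile.row = row;
        tile.column = col;
        tile.clickDelegate = self;
        [containerView addSubview:tile];
    }
}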
I think in about a day or two you could have the tile map rendered and clickable, using the standard TMX format, which will let you edit your map in Tiled.app.
The TMX standard is covered here: https://github.com/bjorn/tiled/wiki/TMX-Map-Format
route-me might fit the bill.
How difficult would it be to use Core Animation to make an NSView slide in and out of view like a sheet? Generally speaking, what would be involved in accomplishing this? I've been reading through the CA documentation, but it's been hard for me to pinpoint which parts are relevant to what I want to do, since I have no experience with the framework.
Any tips at all would be much appreciated.
Thanks.
Since you're talking about an NSView, you're probably using Cocoa's animation support, not CA directly. In that case, you just need to set the view's frame through the view's animator proxy:
[theView setFrame:offscreenFrame];        // start offscreen, no animation
[[theView animator] setFrame:finalFrame]; // animates the slide into place
Unfortunately, Cocoa view animation interacts badly with the more advanced features of CA, like setting an easing. You might have more luck using NSViewAnimation instead, which is not Core Animation-backed and allows for a little more flexibility.
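For example, an NSViewAnimation version of the same slide, with an ease-in/ease-out curve, might look like this sketch (theView, offscreenFrame and finalFrame as above):

// Slide theView from offscreenFrame to finalFrame with easing.
NSDictionary *slide = [NSDictionary dictionaryWithObjectsAndKeys:
    theView, NSViewAnimationTargetKey,
    [NSValue valueWithRect:offscreenFrame], NSViewAnimationStartFrameKey,
    [NSValue valueWithRect:finalFrame], NSViewAnimationEndFrameKey,
    nil];
NSViewAnimation *animation = [[NSViewAnimation alloc]
    initWithViewAnimations:[NSArray arrayWithObject:slide]];
[animation setDuration:0.3];
[animation setAnimationCurve:NSAnimationEaseInOut];
[animation setAnimationBlockingMode:NSAnimationNonblocking]; // don't block the main thread
[animation startAnimation];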