I want to achieve a 3D object rotation in Xcode that uses a set of prerendered PNGs instead of OpenGL. This would allow for much more complex 3D animations in terms of polygons and lighting effects. So far I have managed to build a subclass of UIView that contains a UIScrollView, and in the scrollViewDidScroll delegate method it scrubs through 360 PNG images depending on the content offset of the UIScrollView. This does exactly what I want to achieve, except for a few major problems.
I've tried three different methods to swap the images.
Method 1:
When the view is initialized I put all the UIImages into UIImageViews, add those to the screen with alpha = 0, and then set the respective image view's alpha to 1 in scrollViewDidScroll.
Method 2:
When the view is initialized I put all the UIImages into an array and put a single UIImageView on screen. In scrollViewDidScroll I set the respective image from the array as the image of the UIImageView.
Method 3:
When the view is initialized I save all the image names in an array and put a single UIImageView on screen. In scrollViewDidScroll I create a UIImage with the name from the array for the respective index and set this as the image of my UIImageView.
All three are very memory-consuming and will eventually cause memory warnings or crashes. While method 3 is slightly less memory-hungry, it's also a lot laggier; a rough sketch of it is below.
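The relevant part of method 3 looks roughly like this (a minimal sketch; the imageView property, the frame count, and the "rotation_%03ld" naming scheme are placeholders):

// method 3, roughly; the naming scheme and self.imageView are placeholders
- (void)scrollViewDidScroll:(UIScrollView *)scrollView {
    NSInteger frameCount = 360;
    CGFloat scrollableWidth = scrollView.contentSize.width - scrollView.bounds.size.width;
    if (scrollableWidth <= 0) return;
    CGFloat fraction = MAX(0.0, MIN(1.0, scrollView.contentOffset.x / scrollableWidth));
    NSInteger index = (NSInteger)roundf(fraction * (frameCount - 1));
    NSString *name = [NSString stringWithFormat:@"rotation_%03ld", (long)index];
    NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
    // imageWithContentsOfFile: avoids the imageNamed: cache, but hits the disk on every scroll step
    self.imageView.image = [UIImage imageWithContentsOfFile:path];
}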
Is there any way to do this that is both memory-efficient and fast, without having to use OpenGL?
Edit: Theodore Gray achieved this in his Elements app in an awesome way and I can't find out how. See here: http://www.youtube.com/watch?v=nHiEqf5wb3g&feature=player_embedded
I have an NSCollectionViewItem with an NSImageView (32x32) to which I supply a @1x image of the same size.
It looks perfect in Interface Builder, but when the app is built, the resolution looks quite off. Is there any particular reason for this?
Just to add that the image in the asset catalog also has an @2x version.
EDIT: Still investigating this issue, but I have just noticed that if the collection view which contains the collection item (which in turn contains the NSImageView) is enclosed by a bordered NSScrollView, the images are perfect (i.e. non-blurry).
It turns out that if you draw images into frames where either the x/y coordinates or the width and height have fractional values, you end up with blurry images. Passing the drawing rect through NSIntegralRect fixes that.
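In other words, snap the rect to whole points before drawing; for example (a sketch, where image and drawingRect stand in for your own values):

NSRect drawingRect = NSMakeRect(10.25, 4.5, 32.0, 32.0); // fractional origin -> blurry
NSRect snappedRect = NSIntegralRect(drawingRect);        // snapped to whole points -> crisp
[image drawInRect:snappedRect
         fromRect:NSZeroRect
        operation:NSCompositeSourceOver
         fraction:1.0
   respectFlipped:YES
            hints:nil];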
I've been working hard on this problem, searching the internet for 3 days, and I'm now running out of resources.
I'm currently porting an iOS app to macOS (deployment target 10.11). The problem:
I have a view hierarchy as below:
NSScrollView
    documentView
        grouping view
            tiling view one
                array of NSImageViews (each one being a tile)
            tiling view two
                array of NSImageViews (each one being a tile)
The two tiling views overlap completely; depending on the UI, one may be hidden, or the second one has its opacity set below 1.0 to blend the two tiled views.
Because of the opacity requirement, as well as performance, the views are CALayer-backed. This is done from IB, where the top NSScrollView has Core Animation checked; from there, the whole view tree is (implicitly) layer-backed.
This works as expected: scrolling, magnifying, etc.
Then I need to make an image out of the document view to use as the contents of an SCNMaterial (3D view).
On iOS, renderInContext on the documentView's layer works as expected and allows an image to be created.
On AppKit the context stays transparent, and so does the image: it is a valid object, but it looks as if it were filled with clearColor.
If documentView.canDrawSubviewsIntoLayer is set at creation, the view tree renders fine. This can't be the solution, though, since it prevents the opacity setting from working.
Even when one tiling view is hidden (so there is no opacity compositing), rendering fails.
I read that some kinds of views are not rendered; I don't use them. There are no filters and no masks, besides a default masksToBounds setting across the whole view tree. I don't know why or where it is set. I tried to unset it on all the views at creation, with no success: it somehow gets set again on the grouping view below the documentView. This may be the problem, but why is this property out of my control?
The alternative way to render a view tree, bitmapImageRepForCachingDisplayInRect: / cacheDisplayInRect:toBitmapImageRep:, behaves the same way: OK with canDrawSubviewsIntoLayer, broken otherwise. Apple's code examples for making a texture out of a view use one of these two methods.
There are plenty of posts, mainly on SO, complaining about CALayer's renderInContext: and showing code to custom-render a layer tree. Nevertheless, most are quite old, and by now there must be a simple, standard way to achieve this.
Edit: among other attempts, I tried setting wantsLayer on each view, with no success.
Well, as often happens when you post a request for help, you finally find the answer yourself… Here it is:
Given the view tree listed in the question, I managed to render it by setting canDrawSubviewsIntoLayer on each tiling view. This way, the opacity compositing between layers works AND the views are rendered.
I'm posting this as an answer because it solves the problem.
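In code, the working setup looks roughly like this (a sketch; tilingViewOne, tilingViewTwo, documentView, and material stand in for the objects from the hierarchy above):

// set on each tiling view (not on the documentView), at creation time
tilingViewOne.canDrawSubviewsIntoLayer = YES;
tilingViewTwo.canDrawSubviewsIntoLayer = YES;

// later, render the document view into an image for the 3D material
NSBitmapImageRep *rep = [documentView bitmapImageRepForCachingDisplayInRect:documentView.bounds];
[documentView cacheDisplayInRect:documentView.bounds toBitmapImageRep:rep];
NSImage *image = [[NSImage alloc] initWithSize:documentView.bounds.size];
[image addRepresentation:rep];
material.diffuse.contents = image; // SCNMaterialProperty accepts an NSImage directly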
As for WHY this works, here is my guess, though it is not an authoritative answer: each NSImageView tile (a subview of the tiling view) is positioned by its frame origin. This is not a transform on the layer, but a position of the frame in the coordinates of the tiling view. This is a difference from the iOS version of the code, where the tiles are positioned by a transform. I'm going to test further and see whether using a transform rather than a frame origin makes a difference to renderInContext.
Edit: after more testing, it appears that the iOS version applies both a transform AND an offset to each tile.
So the only clue is that the layer system gets lost when rendering into a context on OS X when some subviews have an offset?
Summary:
The key to the solution is to find the right view on which to set canDrawSubviewsIntoLayer.
I have an app; it's a small game using OpenGL ES with GLKit.
Now I'm wondering how it works when I want to draw text on my screen (if that is even possible). How can I do it?
I draw all of my game objects using images (wrapped in some kind of sprite). It's possible to scale, move, and rotate them, and everything works fine. But figuring out how to print text on that GLKView is getting me deep into problems.
I don't want to use UIImages, because I also don't know how to present UIImages on a GLKView.
There are a number of ways to do what you want:
1) Have an image containing all the text glyphs you need. For example, if your application is in English, you'd have the 26 uppercase and 26 lowercase letters in the image. Upload that texture to the GPU and use the proper texture coordinates, or glTexSubImage2D(), to pull out the glyphs you need. (It's not clear to me if this is what you meant by not wanting a UIImage. It doesn't have to be a UIImage, though that's probably easiest.)
2) Every time you need to display text, draw it on the CPU on the fly, and upload the entire word, phrase, or sentence as a texture. You could create a CGBitmapContext and use Core Graphics to draw text to it. Then upload it using glTexImage2D().
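If it helps, here is a rough sketch of that approach, drawing the string with Core Text into a CGBitmapContext and uploading the pixels with glTexImage2D() (the string, font, and dimensions are placeholders, and error handling is omitted):

// requires <CoreText/CoreText.h> and the OpenGL ES headers
size_t width = 256, height = 64;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// draw the string with Core Text (the origin is bottom-left in Core Graphics)
CTFontRef font = CTFontCreateWithName(CFSTR("Helvetica"), 24.0, NULL);
NSAttributedString *text =
    [[NSAttributedString alloc] initWithString:@"Score: 42"
                                    attributes:@{ (id)kCTFontAttributeName : (__bridge id)font,
                                                  (id)kCTForegroundColorAttributeName : (__bridge id)[UIColor whiteColor].CGColor }];
CTLineRef line = CTLineCreateWithAttributedString((__bridge CFAttributedStringRef)text);
CGContextSetTextPosition(ctx, 4.0, 20.0);
CTLineDraw(line, ctx);
CFRelease(line);
CFRelease(font);

// upload the raw bitmap; flip the texture coordinates (or the context) if the text appears upside down
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, CGBitmapContextGetData(ctx));
CGContextRelease(ctx);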
3) Get the individual glyphs out of the fonts and draw them directly using the bezier curves that make up the glyphs. This allows for 3D extrusion, too. However, this option is the most time-consuming to code and probably the least performant. It also involves dealing with the many small problems that fonts have (like degenerate segments and incorrect winding orders). If you want to go down this path, I think Core Text can help.
There are at least two clean ways to do this, depending on your requirements.
While documentation advises against compositing over a CAEAGLLayer (GLKView), it works quite well, at least in recent iOS versions, when transparent content is layered on top of the CAEAGLLayer. For example, try dropping a UITextView, with opaque set to false and a clear background color, on top of a GLKView in your Storyboard in Interface Builder in the Apple GLKit template or your app. In my test on an iPhone 5, frame rendering time remained around 1ms, even while scrolling in the text view. If your text needs are static, or you don't want the user to interact with the text, use CATextLayer as a child layer of your EAGLLayer instead of a view.
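For the static, non-interactive case, the CATextLayer route can be as small as this (a sketch; self.glkView, the string, and the frame are placeholders):

CATextLayer *textLayer = [CATextLayer layer];
textLayer.string = @"Score: 42";
textLayer.fontSize = 24.0;
textLayer.foregroundColor = [UIColor whiteColor].CGColor;
textLayer.alignmentMode = kCAAlignmentLeft;
textLayer.contentsScale = [UIScreen mainScreen].scale; // avoids blurry text on Retina screens
textLayer.frame = CGRectMake(20.0, 20.0, 200.0, 32.0);
[self.glkView.layer addSublayer:textLayer];            // composited over the CAEAGLLayer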
The second approach is to render the text into a texture. You can then composite the text onto your view by disabling the depth buffer and rendering the texture on a full screen rectangle. Look at UIGraphicsBeginImageContextWithOptions to see how to render to an offscreen image with Quartz. UIGraphicsGetImageFromCurrentImageContext allows you to retrieve the UIImage to use as a texture.
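A sketch of that texture route, using GLKTextureLoader to do the upload (the string, font, and size are placeholders, and the EAGLContext must be current):

CGSize size = CGSizeMake(256.0, 64.0);
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0); // 0.0 = use the screen scale
[@"Score: 42" drawAtPoint:CGPointMake(4.0, 4.0)
           withAttributes:@{ NSFontAttributeName            : [UIFont boldSystemFontOfSize:24.0],
                             NSForegroundColorAttributeName : [UIColor whiteColor] }];
UIImage *textImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

NSError *error = nil;
GLKTextureInfo *textureInfo = [GLKTextureLoader textureWithCGImage:textImage.CGImage
                                                           options:nil
                                                             error:&error];
// bind textureInfo.name and draw it on a full-screen quad with the depth test disabled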
I need to let the user navigate back and forward through different images (from 10 to 20) with a tap.
All the images are 1024x768 JPGs, so I don't need to resize or transform them.
There are no animations between them (I'll switch them with removeFromSuperview and addSubview:).
All I want is to avoid loading times or unresponsive touches, so I was thinking about these possible solutions:
1. Load each image singly on tap;
2. Load 3 images: the previous, the current, and the next one;
3. Load an array of UIImageViews with all the images and iterate through it;
I will avoid imageNamed: and use imageWithContentsOfFile: or imageWithData: instead.
Which solution do you think is the best one?
Could solutions 1 and 3 cause performance issues?
Method 1 will be fine: iOS devices can load full-screen images really fast, especially if your images don't have an alpha channel. It depends on the images, but it takes around 0.05 seconds for a typical PNG. This means that users will not notice the wait when you swap images after a tap, especially if you add a fade transition between images.
Things get harder if the user can scroll through the images. In that case, I would advise using a UITableView. It behaves like a UIScrollView and loads/unloads pages quickly and smoothly.
To get a horizontal table view, you can use this code, which works perfectly: https://github.com/alekseyn/EasyTableView
If your upper limit is 20 images, just preload an array of UIImages and set the UIImageView.image property in response to a touch. Don't worry about swapping views; reusing a single UIImageView will be fine.
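A minimal sketch of that approach, assuming properties like self.images, self.currentIndex, and self.imageView, plus a tap gesture recognizer already wired up:

// preload once, e.g. in viewDidLoad; imageWithContentsOfFile: skips the imageNamed: cache
- (void)preloadImagesNamed:(NSArray *)imageNames {
    NSMutableArray *images = [NSMutableArray arrayWithCapacity:imageNames.count];
    for (NSString *name in imageNames) {
        NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"jpg"];
        UIImage *image = [UIImage imageWithContentsOfFile:path];
        if (image) [images addObject:image];
    }
    self.images = images;
}

// on each tap, just swap the image shown by the single UIImageView
- (void)handleTap:(UITapGestureRecognizer *)recognizer {
    self.currentIndex = (self.currentIndex + 1) % self.images.count;
    self.imageView.image = self.images[self.currentIndex];
}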
I wouldn't worry about performance unless the upper limit rises much higher. If it does, a dynamic cache like option 2 would be a better choice, though it means more programming.
If you are concerned about performance in an iPad application destined for the App Store, always remember to test on a first-generation iPad, since there was a major performance jump after the original.
I have actually done this before. With large images, you are going to have to be careful with memory. I originally loaded all of my images into an NSArray, but I experienced memory warnings and crashes.
My implementation uses a UIScrollView with paging. I have two arrays: one contains all of the image names, and the other is mutable and contains only a few UIImageViews. I record the current 'page' that the scroll view is on, and when I land on an image I ensure that the mutable array contains that image and the two images on either side of it (and remove any other images from the array).
The problem with this implementation is that you keep having to read images from disk, which is slow on the main thread. So, when I initially create the UIImageViews I add a UIActivityIndicatorView to them. Then I pass my array of UIImageViews to a method running in the background that actually loads the UIImage and then has the respective UIImageView set the image on the main thread, like so:
// called before you try to load an image on a background thread
[imageView addObserver:self forKeyPath:@"image" options:NSKeyValueObservingOptionNew|NSKeyValueObservingOptionOld context:nil];

// called on the background thread after you load the image from disk
[imageView performSelectorOnMainThread:@selector(setImage:) withObject:fullImage waitUntilDone:NO];

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    UIImageView *imageView = (UIImageView *)object;
    [[imageView viewWithTag:1] removeFromSuperview]; // this is the activity indicator
    [imageView removeObserver:self forKeyPath:@"image"];
}
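For completeness, the background loading step that feeds performSelectorOnMainThread: might look roughly like this (a sketch; loadImagesInBackground: and the imageNameForImageView: helper are hypothetical names for whatever mapping you use):

// kicked off from the main thread with performSelectorInBackground:withObject:
- (void)loadImagesInBackground:(NSArray *)imageViews {
    @autoreleasepool {
        for (UIImageView *imageView in imageViews) {
            NSString *name = [self imageNameForImageView:imageView]; // hypothetical helper
            NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:nil];
            UIImage *fullImage = [UIImage imageWithContentsOfFile:path];
            // setting the image on the main thread fires the KVO observer above,
            // which removes the activity indicator
            [imageView performSelectorOnMainThread:@selector(setImage:) withObject:fullImage waitUntilDone:NO];
        }
    }
}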
I have created CALayer objects and am able to animate their movement around the screen. However, now I want to animate them to cycle through a set of images in a loop to create an animation (like an animated GIF).
I'm fairly new to programming and very new to Cocoa so code examples welcome.
I have 15 PNG images.
EDIT: I have code that creates an NSArray of 15 CGImageRef objects.
Have the object that owns the array and layer (I presume there is an object that owns both) also own a timer, which sends the object a message to change the image displayed in the layer. The same object should also have an instance variable containing an index into the array.
To respond to the timer message, check whether there are any images in the array, and, if so, divide the index by the count of the array and take the remainder (% operator). The result is the index to access; get the image from that index in the array and change the image in the layer, then add 1 to the computed index and assign it back to the variable.
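A minimal sketch of that timer approach (the property names, the layer, and the 1/15-second interval are assumptions; frameImages is the NSArray of CGImageRefs from the question, bridged to id):

// the owner of the layer, the image array, and the index
- (void)startAnimating {
    self.frameIndex = 0;
    self.frameTimer = [NSTimer scheduledTimerWithTimeInterval:1.0 / 15.0
                                                       target:self
                                                     selector:@selector(showNextFrame:)
                                                     userInfo:nil
                                                      repeats:YES];
}

- (void)showNextFrame:(NSTimer *)timer {
    if (self.frameImages.count == 0) return;                      // nothing to show
    NSUInteger index = self.frameIndex % self.frameImages.count;  // wrap around, like a GIF loop
    [CATransaction begin];
    [CATransaction setDisableActions:YES];                        // skip the implicit cross-fade
    self.spriteLayer.contents = self.frameImages[index];          // the bridged CGImageRef is valid layer contents
    [CATransaction commit];
    self.frameIndex = index + 1;
}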