Some (random) textures rendering upside down in OpenGL? - xcode

I'm using GLKit (OpenGL ES2) in Objective-C to make an iPhone game.
I haven't had much experience with this, so it wouldn't surprise me if I were doing something wrong. I'm generating a heap of objects that are all mostly the same: they're instances of a single class, and I create more objects simply by making more instances. Somehow, though, some of the objects have their textures mapped upside down.
I don't see how this is possible, since the same code is used for every object, yet some of them are correct and some aren't.
Any help/ideas would be greatly appreciated.

Check the [UIImage imageOrientation] property to see if the image is flipped. It could be one of these values:
UIImageOrientationUp
UIImageOrientationDown
UIImageOrientationLeft
UIImageOrientationRight
UIImageOrientationUpMirrored
UIImageOrientationDownMirrored
UIImageOrientationLeftMirrored
UIImageOrientationRightMirrored
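For example, here's a minimal Swift sketch (the same UIKit calls exist in Objective-C) of normalizing the orientation by redrawing the image before uploading it as a texture, so the pixel data no longer depends on imageOrientation:
import UIKit

// Minimal sketch: normalize a UIImage's orientation before using its pixels
// as an OpenGL texture. Redrawing the image bakes the orientation flag into
// the actual pixel data.
func normalizedImage(from image: UIImage) -> UIImage {
    guard image.imageOrientation != .up else { return image }

    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    defer { UIGraphicsEndImageContext() }
    image.draw(in: CGRect(origin: .zero, size: image.size))
    return UIGraphicsGetImageFromCurrentImageContext() ?? image
}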

It turns out this was caused by regenerating the texture each time I needed it rather than reusing it.
It's much better practice to generate the texture only once and reuse it anyway, so the solution is simply to do that.
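As a rough illustration of the idea, here's a Swift sketch of caching the loaded GLKTextureInfo and handing the same instance to every object (the class name, dictionary, and "png" file type are just placeholders):
import GLKit

// Minimal sketch: load each texture once and reuse the GLKTextureInfo,
// instead of calling the loader every time an object is created.
final class TextureCache {
    private var cache: [String: GLKTextureInfo] = [:]

    func texture(named name: String) -> GLKTextureInfo? {
        if let cached = cache[name] { return cached }

        guard let path = Bundle.main.path(forResource: name, ofType: "png"),
              let info = try? GLKTextureLoader.texture(withContentsOfFile: path, options: nil)
        else { return nil }

        cache[name] = info
        return info
    }
}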

Related

Rendering CALayer multiple times

I've been trying to figure out the best way to show an existing CALayer in a secondary window, to allow real-time full-screen output on a secondary monitor. Additionally, I would like to show real-time thumbnails of the original CALayer elsewhere in my application; it seems like there should be a setup that fulfills both requirements.
So far my research has turned up the following options:
CALayer.render(in: CGContext): using the original layer and redrawing it into additional views this way, with a timer or a CVDisplayLink to redraw it every frame. (I've sketched this option below.)
Rendering the CALayer to an NSBitmapImageRep every frame and using that bitmap in NSImageViews across the application.
Using a CAMetalLayer and rendering its texture multiple times with an MTKView. I'm not really familiar with Metal; this seems like a fairly elegant solution, but I'm not sure it's necessary to go all the way down to Metal myself.
Using a CARemoteLayerServer with a CARemoteLayerClient.
This seems like an overly complicated setup for in-process sharing of a CALayer, and it feels like this approach is more suitable if I needed to share the layer across processes.
Using CAReplicatorLayer. Instead of using the replicator layer to create a grid of copies, I tried to use it to create just one copy, but it seems like you can't add a CALayer to multiple "parent layers".
All in all I've found some workable solutions, but as I'm quite a novice with Core Animation, I'm not sure which direction is the least resource-heavy, and I might still be missing an easier solution.
Has anyone tried something similar?
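For reference, here's a rough Swift sketch of what I mean by the first option; the class and property names are placeholders, and I haven't measured how expensive this is:
import Cocoa
import QuartzCore
import CoreVideo

// Rough sketch of option 1: a view that redraws another view's layer with
// CALayer.render(in:), driven by a CVDisplayLink. All names are placeholders.
final class MirrorView: NSView {
    weak var sourceLayer: CALayer?
    private var displayLink: CVDisplayLink?

    override func viewDidMoveToWindow() {
        super.viewDidMoveToWindow()
        guard displayLink == nil else { return }
        CVDisplayLinkCreateWithActiveCGDisplays(&displayLink)
        guard let link = displayLink else { return }
        CVDisplayLinkSetOutputHandler(link) { [weak self] _, _, _, _, _ in
            // Mark the view dirty once per display refresh; drawing happens on the main thread.
            DispatchQueue.main.async { self?.needsDisplay = true }
            return kCVReturnSuccess
        }
        CVDisplayLinkStart(link)
    }

    override func draw(_ dirtyRect: NSRect) {
        guard let context = NSGraphicsContext.current?.cgContext,
              let layer = sourceLayer else { return }
        layer.render(in: context)
    }
}
The same approach could presumably also drive a smaller thumbnail view; I just don't know how it compares cost-wise to the bitmap or Metal options.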

Monogame Extended Tiled

I'm making an isometric city builder using MonoGame.Extended and Tiled. I've got everything set up, and now I need to access specific tiles so I can change them at runtime as the user clicks on a tile to build an object. The problem is, I can't seem to find a map.GetLayer("Layername").GetTile(x,y) or .SetTile(x,y) function or anything similar.
What I can do is edit the XML (.tmx) file, which contains a matrix representing the map and its drawn tiles. The problem with this is that I need to rebuild the map in the content pipeline after editing for the changes to show up. I can't really build at runtime, or can I?
Thanks in advance!
Something like this will get you part way there.
var tileLayer = map.GetLayer<TiledMapTileLayer>("layername");
TiledMapTile tile;
if(tileLayer.TryGetTile(x, y, out tile))
{
// do something with tile
}
However, there's only a limited number of things you can actually do with a tile once you've got it from the map.
There's no such thing as a SetTile method because changing tile data at runtime is not currently supported. This is a limitation of the renderer, which has been optimized for rendering very large maps by building static geometry that can't be changed once it's loaded into the graphics card.
There has been some discussion about building another renderer that would handle dynamic map changes, but at this stage nothing like that has been implemented in the library. You could always have a go at implementing a simple renderer yourself; a really basic one is not as hard as you might think.
An alternative approach to dealing with this kind of problem might be to pre-process the map data before giving it to the renderer. The idea would be to effectively separate the layers of the map that are static from those that are dynamic and render the dynamic tiles as normal sprites. Just a thought, I'm not sure about the details of how this might work.
I plan to eventually revisit the Tiled API in the next major version of MonoGame.Extended. Don't hold your breath (these things can take a lot of time), but I am paying attention to the feedback and the kinds of problems people are experiencing with the existing API.
Since the map data is stored in an XML (or CSV) file that runs through the Content Pipeline, you cannot change it at runtime.
In any case, in a city builder you usually don't change existing tiles; you place objects on top of them.

3D model scn file does not rotate or scale in AR kit

I started a new Xcode project with the ARKit template and simply replaced the "ship.scn" asset with my "test.scn" file. The object is about 16.5mm wide and 4.8mm tall. The ship worked fine, of course, but my test object (which reads "test") does not rotate as I move around it or scale when I move towards or away from it, though it does stay anchored in one location.
I compared the ship and test attribute panels, and I can't find anything that is different about them, except that the ship is textured and my test text is not. What is inherently special about scn objects that would make them behave correctly in ARKit besides their size? I've read through the documentation about anchoring, but it seems like I wouldn't have to do this in code if it's already a scn object.
In case anyone wants to test the file I'm using in the ARKit template to see how it's behaving, the file is here: https://ufile.io/ey49t
I’m answering the question because I think it could help others that are making a similar mistake, but it can be closed if it doesn’t make sense.
The problem is that the model was so large that I could not see any scaling or rotation no matter how much I moved around the room. Compare the scaled size shown in the last attribute panel of the ship model (not its actual size), then scale your own model down enough that its size is actually reasonable.
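If re-exporting the asset isn't convenient, you can also just scale the node down in code. A minimal Swift sketch, where the node name "test" and the 1% factor are placeholders for whatever your asset needs:
import SceneKit

// Minimal sketch: shrink an oversized node so rotation and scaling become
// visible at room scale. Call this with the scene you load in the template.
func shrinkTestNode(in scene: SCNScene) {
    if let node = scene.rootNode.childNode(withName: "test", recursively: true) {
        node.scale = SCNVector3(x: 0.01, y: 0.01, z: 0.01) // shrink to 1% of authored size
    }
}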

Rendering OpenGL just once rather than every frame

Nearly every example I see of OpenGL ES involves it updating every frame, even if the image itself is not moving in any way.
I did some tests, and it seems to work fine to render (using drawArrays etc.) and then present the render buffer, doing these two things together just once, and then not doing either again until something on screen changes.
Is this "normal"? I just don't see it done much. Once drawn, the graphics stay on the screen without constant re-rendering.
Is this acceptable?
Yes, it is acceptable and completely valid. You also need to take into account that you must render again when the context is lost. To give an example, Android's standard OpenGL helper classes have an option to draw only when needed rather than in a loop (RENDERMODE_WHEN_DIRTY).
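On iOS with GLKit, a comparable on-demand setup might look roughly like this Swift sketch (context setup and the actual drawing code are assumed to exist elsewhere):
import UIKit
import GLKit

// Minimal sketch of on-demand rendering on iOS: the GLKView redraws only when
// it is explicitly marked dirty, instead of on a per-frame loop.
let context = EAGLContext(api: .openGLES2)!
let glView = GLKView(frame: UIScreen.main.bounds, context: context)
glView.enableSetNeedsDisplay = true   // draw(_:) runs only after setNeedsDisplay()

// ... later, whenever something on screen actually changes:
glView.setNeedsDisplay()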

How to convert an OpenGL ES texture into a CIImage

I know how to do it the other way around. But how can I create a CIImage from a texture, without having to copy into CPU memory? [CIImage imageWithData]? CVOpenGLESTextureCache?
Unfortunately, I don't think there's any way to avoid having to read back pixel data using glReadPixels(). All of the inputs for a CIImage (data, CGImageRef, CVPixelBufferRef) are CPU-side, so I don't see a fast path to deliver that to a CIImage. Your best alternative would be to use glReadPixels() to pull in the raw RGBA data from your texture and send it into the CIImage using -initWithData:options: and a kCIFormatRGBA8 pixel format.
Update (3/14/2012): On iOS 5.0, there is now a faster way to grab OpenGL ES frame data, using the new texture caches. I describe this in detail in this answer.
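For reference, a rough Swift sketch of that glReadPixels read-back path (the width, height, and framebuffer binding are assumed to be set up elsewhere):
import OpenGLES
import CoreImage
import CoreGraphics

// Rough sketch: read RGBA bytes back from the currently bound framebuffer
// and wrap them in a CIImage.
func ciImageFromCurrentFramebuffer(width: Int, height: Int) -> CIImage {
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    glReadPixels(0, 0, GLsizei(width), GLsizei(height),
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), &pixels)

    // Note: the result is typically flipped vertically relative to UIKit coordinates.
    return CIImage(bitmapData: Data(pixels),
                   bytesPerRow: width * 4,
                   size: CGSize(width: width, height: height),
                   format: kCIFormatRGBA8,
                   colorSpace: CGColorSpaceCreateDeviceRGB())
}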
However, there might be another way to achieve what you want. If you simply want to apply filters on a texture for output to the screen, you might be able to use my GPUImage framework to do the processing. It already uses OpenGL ES 2.0 as the core of its rendering pipeline, with textures as the way that frames of images or video are passed from one filter to the next. It's also much faster than Core Image, in my benchmarks.
You can supply your texture as an input here, so that it never has to touch the CPU. I don't have a stock class for grabbing raw textures from OpenGL ES yet, but you can modify the code for one of the existing GPUImageOutput subclasses to use this as a source fairly easily. You can then chain filters on to that, and direct the output to the screen or to a still image. At some point, I'll add a class for this kind of data source, but the project's still fairly new.
As of iOS 6, you can use a built-in init method for this situation:
initWithTexture:size:flipped:colorSpace:
See the docs:
http://developer.apple.com/library/ios/#DOCUMENTATION/GraphicsImaging/Reference/QuartzCoreFramework/Classes/CIImage_Class/Reference/Reference.html
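In Swift that looks roughly like the following sketch, where the texture name and size come from your own GL setup:
import OpenGLES
import CoreImage
import CoreGraphics

// Rough sketch of the iOS 6+ path: wrap an existing OpenGL ES texture name in
// a CIImage without copying the pixels back to the CPU. (This initializer has
// since been deprecated on newer systems in favor of Metal-based paths.)
func ciImage(fromTexture textureName: GLuint, width: Int, height: Int) -> CIImage {
    return CIImage(texture: textureName,
                   size: CGSize(width: width, height: height),
                   flipped: true,                // GL textures are usually bottom-up
                   colorSpace: CGColorSpaceCreateDeviceRGB())
}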
You might find these helpful:
https://developer.apple.com/library/ios/#samplecode/RosyWriter/Introduction/Intro.html
https://developer.apple.com/library/ios/#samplecode/GLCameraRipple/Listings/GLCameraRipple_RippleViewController_m.html
In general I think the image data will need to be copied from the GPU to the CPU; however, the iOS features mentioned above might make this easier and more efficient.
