What are the barriers to Voxel Cone Tracing on mobile?

After watching the Unreal Engine 4 demo, I'd love to be able to use that tech in an iPad app. I know that it's not feasible right now, as they're targeting PCs and next gen consoles, but I was wondering what has to happen in order for it to be practical on a mobile device.
Can someone knowledgeable out there comment on what needs to be added to OpenGL ES to allow it to work? I assume there must also be much more memory available to store the voxel structures; how much do you need? Is there anything else that needs to change?

The first thing you need is 3D textures; I think OpenGL ES recently added an extension for this (there's a minimal allocation sketch below).
Geometry shaders? I don't think it has those yet, but I'm sure there's a way (using traditional methods) to voxelize using the fragment shader.
I use a compute shader to create and filter my mipmaps of the 3d texture.
I'm sure the ability to have bindless textures will improve performance.
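For the 3D-texture piece, a minimal sketch in C, assuming an OpenGL ES 3.0 context (glTexStorage3D is core there); the RGBA8 format and the helper name are my own choices, and the compute-shader filtering of the mip levels is left out:

    #include <GLES3/gl3.h>
    #include <math.h>

    /* Allocate an immutable 3D texture with a full mip chain, e.g. the
     * voxelized scene that cone tracing samples at varying mip levels. */
    GLuint create_voxel_volume(GLsizei dim)
    {
        GLuint tex;
        GLsizei levels = (GLsizei)log2((double)dim) + 1; /* down to 1^3 */

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);
        glTexStorage3D(GL_TEXTURE_3D, levels, GL_RGBA8, dim, dim, dim);
        /* Trilinear filtering across mip levels is what lets a cone widen
         * cheaply as it marches through the volume. */
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER,
                        GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return tex;
    }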

Related

Anti-aliasing techniques for OpenGL ES profiles

I've been investigating and trialling different AA techniques in OpenGL on both desktop and ES profiles (ES 2.0 and above) for rendering 2D content to offscreen FBOs.
So far I've tried FSAA, FXAA, MSAA, and using the GL_LINE_SMOOTH feature.
I've not been able to find an adequate AA solution for ES profiles, where I've been limited to FSAA and FXAA because of API limitations. For example, glTexImage2DMultisample (required for MSAA) and the GL_LINE_SMOOTH functionality are unavailable.
FXAA is clever but blurs text glyphs to the point that it does more harm than good; it's only really suited to 3D scenes.
FSAA gives really good results, particularly at 2x super-sampling, but consumes too much extra video memory for me to use on most of our hardware.
I'm asking if anyone else has had much luck with any other techniques in similar circumstances, i.e. rendering anti-aliased 2D content to off-screen FBOs using an OpenGL ES profile.
Please note: I know I can ask for multi-sampling of the default frame buffer when setting up the window via GL_MULTISAMPLE on an ES profile, but this is no good to me when rendering into off-screen FBOs, where AA must be implemented by hand.
If any of my above statements are incorrect then please do jump in & put me straight, it may help!
Thank you.
For example glTexImage2DMultisample ( required for MSAA )
Why is it required? To do this "within spec", render to a multisampled renderbuffer, and then use glBlitFramebuffer to resolve it to a single-sampled surface.
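A hedged sketch of that in-spec path, assuming an OpenGL ES 3.0 context (glRenderbufferStorageMultisample and glBlitFramebuffer are core there); the 4x sample count and the function names are mine:

    #include <GLES3/gl3.h>

    /* Create an FBO backed by a multisampled renderbuffer (4x here;
     * clamp against GL_MAX_SAMPLES in real code). */
    GLuint make_msaa_fbo(GLsizei w, GLsizei h)
    {
        GLuint fbo, rbo;
        glGenRenderbuffers(1, &rbo);
        glBindRenderbuffer(GL_RENDERBUFFER, rbo);
        glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, w, h);
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, rbo);
        return fbo;
    }

    /* After rendering into msaa_fbo, blit into a single-sampled FBO;
     * the blit itself performs the multisample resolve. */
    void resolve_msaa(GLsizei w, GLsizei h, GLuint msaa_fbo, GLuint resolve_fbo)
    {
        glBindFramebuffer(GL_READ_FRAMEBUFFER, msaa_fbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolve_fbo);
        glBlitFramebuffer(0, 0, w, h, 0, 0, w, h,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
    }

After the blit, the resolve target's color attachment holds the anti-aliased image and can be sampled like any other texture.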
If you don't mind extensions, then many vendors implement this extension, which behaves like an EGL window surface with an implicit resolve. It is more efficient than the above because it avoids the round-trip to memory on tile-based hardware architectures:
https://www.khronos.org/registry/gles/extensions/EXT/EXT_multisampled_render_to_texture.txt
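A sketch of that extension path, assuming the driver exposes EXT_multisampled_render_to_texture; in production you would fetch the entry point via eglGetProcAddress rather than rely on the header prototype:

    #define GL_GLEXT_PROTOTYPES 1
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    /* Attach a plain texture with 4x multisampling; on tilers the resolve
     * happens implicitly, with no extra memory round-trip. */
    GLuint make_implicit_msaa_fbo(GLuint color_tex)
    {
        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2DMultisampleEXT(GL_FRAMEBUFFER,
                                             GL_COLOR_ATTACHMENT0,
                                             GL_TEXTURE_2D, color_tex,
                                             0 /* level */, 4 /* samples */);
        return fbo;
    }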
and GL_LINE_SMOOTH functionality are unavailable.
Do you have a compelling use case for lines? If you really need them, just triangulate them; in the kind of 3D content where MSAA really helps, lines are not commonly used.
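If you do need the occasional wide line, the triangulation is simple enough to write by hand; an illustrative helper (plain C math, no GL calls, all names mine) that expands one segment into a two-triangle quad:

    #include <math.h>

    /* Writes 6 vertices (x,y pairs) into out[12]: two triangles forming
     * a quad of the given width along the segment (x0,y0)-(x1,y1). */
    void line_to_quad(float x0, float y0, float x1, float y1,
                      float width, float out[12])
    {
        float dx = x1 - x0, dy = y1 - y0;
        float len = sqrtf(dx * dx + dy * dy);
        if (len == 0.0f) len = 1.0f;          /* degenerate-segment guard */
        float nx = -dy / len * width * 0.5f;  /* perpendicular, half-width */
        float ny =  dx / len * width * 0.5f;

        float ax = x0 + nx, ay = y0 + ny, bx = x0 - nx, by = y0 - ny;
        float cx = x1 + nx, cy = y1 + ny, ex = x1 - nx, ey = y1 - ny;

        /* Triangle 1: a, b, c */
        out[0] = ax; out[1] = ay; out[2] = bx; out[3] = by;
        out[4] = cx; out[5] = cy;
        /* Triangle 2: b, e, c */
        out[6] = bx; out[7] = by; out[8] = ex; out[9] = ey;
        out[10] = cx; out[11] = cy;
    }

Draw the resulting vertices with GL_TRIANGLES and MSAA takes care of the edges.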

Is there any way to access tango pointcloud camera image pixels in Java

So I know about setSurface, and have no problem using it as an overlay or whatever; it's on a surface control. That said, I am stumped about getting pixel data.
1) I've tried everything I can think of (the control, the root, etc.) to use the drawing cache functions to get the bits for the camera surface. Yah, no: the cached bitmap is always zeroed out.
2) I've used both SurfaceView and GLSurfaceView successfully as a setSurface target. I cannot use any other class, such as TextureView.
3) I've investigated the C API and I see the camera exposes connectOnFrameAvailable, which will give me access to the pixels.
My guess is that the internal Tango logic is just using the surface in Java to gain access to the underlying bit-transfer channel. In the C API it requires a texture ID, which makes me suspect that, at the end of the day, the camera data is shipped to the GPU pretty quickly, and I bet that CUDA lib operates on it. Given the state of things, I can't see how to get the bits on the Java side without rooting the device; just because I have a texture or a simple surface view rendering raw bits on the screen doesn't mean I can get to them.
I don't want to peel the image data back out of the GPU. I'd need to switch my busy animation from a watch to a calendar for that.
Before I dive down into the C API, is there any way I can get the camera bits in Java? I really want to be able to associate them with a specific pose, but right now I can't even figure out how to get them at all. I really want to know the location and color of a 3D point. Camera intrinsics, the point cloud, and the 2D image that generated the point cloud are all I need. But I can't do anything if I can't get the pixels, and the more questionable the relationship between an image and a pose/point-cloud pair, the sketchier any efforts will become.
If I do dive into C, will connectOnFrameAvailable give me what I need? How well synced is it with the point cloud generation? Oh, and have I got this right: the color camera is used for depth, and the fisheye is used for pose?
Can I mix Java and C, i.e. create a Tango instance in Java and then just use C for the image issue? Or am I going to have to re-implement everything in C and stop using the Tango Java jar?
will the connectOnFrameAvailable give me what I need?
Yes, it indeed returns the YUV byte buffer.
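A hedged sketch of that C-API route, going by the Tango C API headers (tango_client_api.h); verify the exact struct fields and callback signature against your SDK version:

    #include "tango_client_api.h"

    /* Called on a Tango-owned thread; the buffer is only valid for the
     * duration of the callback, so copy out what you need. */
    static void on_frame(void* context, TangoCameraId id,
                         const TangoImageBuffer* buffer)
    {
        (void)context; (void)id;
        /* buffer->data is the YUV image; buffer->timestamp is the key
         * you later match against point-cloud timestamps. */
    }

    void subscribe_to_color_frames(void)
    {
        TangoService_connectOnFrameAvailable(TANGO_CAMERA_COLOR,
                                             NULL, on_frame);
    }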
How well synced is it with the point cloud generation?
The Tango API itself doesn't provide synchronization between the color image and the depth point cloud; however, it does provide timestamps, which allow you to sync at the application level.
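That application-level sync usually reduces to nearest-timestamp matching; a minimal, Tango-agnostic sketch in C (names are mine):

    #include <stddef.h>
    #include <math.h>

    /* Return the index of the point cloud whose timestamp is closest to
     * the color frame's timestamp, or -1 if no clouds are buffered. */
    int nearest_cloud(double frame_ts, const double* cloud_ts, size_t n)
    {
        int best = -1;
        double best_dt = 0.0;
        for (size_t i = 0; i < n; ++i) {
            double dt = fabs(cloud_ts[i] - frame_ts);
            if (best < 0 || dt < best_dt) { best = (int)i; best_dt = dt; }
        }
        return best;
    }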
Color camera is used for depth, fisheye is used for pose?
Yes, you are right.
Can I mix Java and C (i.e. create a Tango instance in Java and then just use C for the image issue)
Starting two Tango instances is really not the way Tango is meant to be used; even if it works, it will be extremely hacky.
As a temporary workaround, you could probably try to use the drawing cache of the view?

Unity 3D combine texture

I have a helm, a sword, and a shield which use one texture each, so three draw calls. I want to get them to use a single texture to bring the draw calls down to one, but without combining them into one mesh, as I need to disable any of them at random; plus, the sword's and shield's positions can change when attacking or when dropped to the ground. Is it doable?
If so, how? I'm new to this, thanks.
To save on draw calls, you can use the same material for all three objects without combining their meshes. Then you create a texture file that has the three textures next to each other, and edit the UV maps for the models to use their own parts of the combined texture.
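The UV edit is just a linear remap of each model's original [0,1] UVs into that model's sub-rectangle of the atlas; a minimal sketch in C, with the rectangle values as illustrative assumptions:

    /* One model's region within the combined texture, in atlas UV space. */
    typedef struct { float u0, v0, u1, v1; } AtlasRect;

    /* Map an original [0,1] UV pair into the model's atlas rectangle. */
    void remap_uv(const AtlasRect* r, float u, float v,
                  float* out_u, float* out_v)
    {
        *out_u = r->u0 + u * (r->u1 - r->u0);
        *out_v = r->v0 + v * (r->v1 - r->v0);
    }

For example, if the helm's texture occupies the left third of the atlas, its rectangle would be { 0.0f, 0.0f, 1.0f / 3.0f, 1.0f }.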
It's possible to do, and requires what are called Texture Atlases. I believe this is often done as an optimization step with the frequently used smaller textures that comprise a scene.
I don't think the free version of Unity has built-in support for this (I might be wrong in assuming that the Pro version supports it natively), but I believe there are also plugins. A quick Google search found "Texture Packer", which appears to do what you want; the paid version is $15, but there's a free version too, so it's worth a closer look: http://forum.unity3d.com/threads/texture-packer-unity-tutorial.184596/
I don't have experience with any of these yet as I'm not at a stage where I'm trying to do this with my project, but when I get there I think Texture Packer is where I'll start.
Thanks,
Greg

Will an Octree improve performance if my scene contains less than 200K vertices?

I am finally making the move to OpenGL ES 2.0 and am taking advantage of a VBO to load all of the scene data into the graphics card's memory. However, my scene is only around 200,000 vertices in size (and I know it depends somewhat on hardware), so does anyone think an octree would make any sense in this instance? (Incidentally, because of the viewpoint, at least 60% of the scene is visible most of the time.) Clearly I am trying to avoid having to implement an octree at such an early stage of my GLSL coding life!
There is no need to worry about optimization and performance if the app you are coding is for learning purposes only. But given your question, apparently you intend to make a commercial app.
Using a VBO alone will not solve your app's performance problems, especially since you mention that you mean it to run on mobile devices. OpenGL ES has an optimized drawing option called GL_TRIANGLE_STRIP, which is particularly worthwhile for complex geometry.
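As a rough illustration (assuming a program is already bound and the attribute location and a filled VBO exist), a strip draw in ES 2.0 looks like this; n vertices give n - 2 triangles, versus one triangle per 3 vertices with GL_TRIANGLES:

    #include <GLES2/gl2.h>

    /* Draw vertex_count positions (3 floats each, tightly packed)
     * from vbo as a triangle strip. */
    void draw_strip(GLuint vbo, GLint position_attrib, GLsizei vertex_count)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableVertexAttribArray(position_attrib);
        glVertexAttribPointer(position_attrib, 3, GL_FLOAT, GL_FALSE, 0, 0);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, vertex_count);
    }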
Also worth adding to improve performance is bump mapping (so you can fake surface detail instead of spending geometry on it), in case you have textures in your model. With these two approaches your app will be remarkably improved.
Since you mention that most of your scene is visible most of the time, you should also use level of detail (LOD). To implement geometry LOD you need a different mesh for each LOD level, each with fewer polygons than the next-nearer one. You can build the geometry for each LOD yourself, or use 3D software to generate it automatically.
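The runtime half of LOD is then just a distance-based pick; a minimal sketch in C, where the mesh type and thresholds are illustrative assumptions:

    /* One LOD level: use `mesh` while the object is within max_distance. */
    typedef struct { const void* mesh; float max_distance; } LodLevel;

    /* levels must be non-empty and sorted by ascending max_distance;
     * beyond the last threshold, fall back to the coarsest mesh. */
    const void* pick_lod(const LodLevel* levels, int count, float distance)
    {
        for (int i = 0; i < count; ++i)
            if (distance <= levels[i].max_distance)
                return levels[i].mesh;
        return levels[count - 1].mesh;
    }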
Some free tools can automatically perform generic optimizations directly on your GLSL ES code, and they are really worth checking out.

Canvas 2d context or WebGL for 2D game

I'm planning on writing a game which will use a lot of sprites and images. At first I tried EaselJS, but after playing some other canvas-based games I realized it's not that fast. And when I saw BananaBread by Mozilla I thought, "if WebGL can do 3D so fast, then it can do 2D even faster". So I moved to three.js (using planes and transparent textures, and texture offsets for sprites).
The question is: is it better? Faster? Most WebGL games are 3D, so should I use the canvas 2D context for 2D and WebGL for 3D? I've also noticed that there are no libraries for WebGL in 2D (except WebGL-2D, but it's quite low-level).
Please note that compatibility is not my greatest concern as I'm not planning on releasing anything anytime soon :) .
The short answer is yes. WebGL can be quite a bit more efficient if you use it well. A naive implementation will either yield no benefit or perform worse, so if you're not already familiar with the OpenGL API you may want to stick to canvas for the time being.
A few more detailed notes: WebGL can draw textured quads (sprites) very, very fast, but if you need more advanced 2D drawing features such as path tracing, you'll want to stick to a 2D canvas, as implementing those types of algorithms in WebGL is non-trivial. The nature of your game also makes a difference in your choice. If you only have a few moving items on screen at a time, Canvas will be fairly fast and reasonably simple. If you're redrawing the entire scene every frame, however, WebGL is better suited to that type of render loop.
My recommendation? If you're just learning both, start with Canvas2D and make your game work with that. Abstract your drawing in a simple manner, such as having a DrawPlayer function rather than ctx.drawImage(playerSprite, ....), and when you reach a point where the game is functioning and you want it to run faster, or the game design dictates that you MUST use a faster drawing method, create an alternate rendering backend for all those abstract functions with WebGL. This gives you the advantage of not getting hung up on rendering tech early on (which is ALWAYS a mistake!), lets you focus on gameplay, and if you end up implementing both methods you have a great fallback for non-WebGL browsers like Internet Explorer. Chances are you won't really need the increased speed for a while anyway.
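That abstraction boils down to an interface the game code talks to, with interchangeable rendering backends behind it; a sketch in C for concreteness (all names are mine, and the same shape translates one-to-one into JavaScript objects):

    /* The game only ever calls through this table of function pointers. */
    typedef struct Renderer {
        void (*draw_player)(struct Renderer* self, float x, float y);
        void (*present)(struct Renderer* self);
    } Renderer;

    /* Game logic stays backend-agnostic. */
    void render_frame(Renderer* r, float player_x, float player_y)
    {
        r->draw_player(r, player_x, player_y);
        r->present(r);
    }

Swapping Canvas2D for WebGL then means plugging a different set of functions into the Renderer at startup, with no changes to game logic.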
WebGL can be much faster than canvas 2D. See http://blog.tojicode.com/2012/07/sprite-tile-maps-on-gpu.html as one example.
That said, I think you're mostly on your own right now. I don't know of any 2D libraries for WebGL except maybe PlayN (http://code.google.com/p/playn/), though that's in Java and uses the Google Web Toolkit to get converted to JavaScript. I'm also pretty sure it doesn't use the techniques mentioned in the blog post above, although if your game doesn't use tiles, maybe that technique isn't useful for you.
three.js is arguably not the library you want if you're planning on 2d.
