CAEmitterLayer (iOS 5) vs CCParticleSystem (cocos2d) Performance

I am about to create an app with lots and lots of particle effects, and I was planning to use the cocos2d framework. But just recently I learned that particle systems can be created natively in iOS 5.
I'd like to know how the iOS 5 particle system performs compared to the cocos2d particle system.
Anyone tried and tested?

You can check out Dazzle (at https://github.com/lichtschlag/Dazzle), a test app I wrote to exercise the new iOS 5 particle effect APIs. Frame rates can get low (around 20 fps) if you spawn too many objects. I do not know how cocos2d compares, but here is how to monitor the fps:
Build your app using the 'Profile' option, so that Instruments attaches to it.
Select the 'Core Animation' instrument (in the Graphics category). You will need an actual device for this.
Navigate to your particle code; Instruments will display the fps.
If you need to profile OpenGL code, use the appropriate instrument.
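For reference, a minimal CAEmitterLayer setup looks roughly like the sketch below (assumptions: a UIView subclass whose +layerClass returns CAEmitterLayer, and a hypothetical "spark" image in the bundle). The cell's birthRate is the main knob that drags the frame rate down:

```objc
// Hedged sketch; self.layer is assumed to be a CAEmitterLayer
// (return [CAEmitterLayer class] from the view's +layerClass).
CAEmitterLayer *emitter = (CAEmitterLayer *)self.layer;
emitter.emitterPosition = CGPointMake(self.bounds.size.width / 2.0, 0);
emitter.emitterShape    = kCAEmitterLayerLine;

CAEmitterCell *cell = [CAEmitterCell emitterCell];
cell.contents      = (id)[[UIImage imageNamed:@"spark"] CGImage]; // hypothetical asset
cell.birthRate     = 100;   // particles per second -- the main fps knob
cell.lifetime      = 3.0;
cell.velocity      = 50;
cell.emissionRange = M_PI;

emitter.emitterCells = [NSArray arrayWithObject:cell];
```

Profiling with different birthRate values, as described above, shows where the frame rate starts to drop on a given device.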

Related

Why is GPU picking in the three.js examples so slow on my system?

I am trying to write a visualization that displays thousands of cubes/spheres/cones and lets you pick one with the mouse. I can't use the approach of many named meshes plus raycasting because of all the draw calls, and it was suggested I investigate instancing.
I found two examples that do something similar here and here, but on my system they are unusably slow: less than 1 Hz. The slowdown seems to be caused by this line. Without it, picking is broken of course, but it runs at full speed. All the other examples run normally at a steady 60 Hz.
My system is a MacBook Pro with an AMD Radeon R9 M370X.
The browser is Chrome 71.0.3578.80; it behaves the same way in Safari 12.0.1.
Can anyone suggest a workaround or different approach I can investigate?
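For context on what GPU picking in those examples is doing: each instance is rendered to an off-screen target with a unique color encoding its id, and the pixel under the cursor is read back and decoded. Reading back the whole render target every frame stalls the GPU pipeline; reading a single pixel (three.js's `renderer.readRenderTargetPixels(pickingTexture, x, y, 1, 1, buf)`) is the usual fix. A minimal sketch of the id encode/decode arithmetic (the helper names here are hypothetical, not from the examples):

```javascript
// Pack an instance id into an RGB triplet for the picking material.
function encodeId(id) {
  return [(id >> 16) & 0xff, (id >> 8) & 0xff, id & 0xff];
}

// Decode the id from the single pixel read back under the cursor.
function decodeId(rgb) {
  return (rgb[0] << 16) | (rgb[1] << 8) | rgb[2];
}

console.log(decodeId(encodeId(123456))); // 123456
```

With 24 bits of id this distinguishes ~16.7 million instances, far more than the thousands of objects described above.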

Objects are not as stable as ARKit

I have used this tutorial to create a working example project. But when I move around with the device, the object also moves slightly with me (even in Lowe's Vision app), whereas ARKit keeps objects a lot more stable than Tango. Is there any guide to fixing this issue, or is Tango not ready for real-world applications (other than cases where slightly unstable objects are tolerable, as in games)?
Which "Tango" device? If it is the Dev Kit, then that three-year-old Tegra chip and older hardware is probably the bottleneck; the Phab 2 Pro can compute and track far better than the old Dev Kit, as I have compared them side by side.
I have also compared my Phab 2 Pro running a Tango C API demo against the standard ARKit demo, and the Tango has much better tracking since it has a depth camera, whereas ARKit is just good software on top of a normal RGB camera. But that depth camera loses a lot of its advantage if you clog it with the abstraction layer Unity puts on top.
Also, I am not sure how you can really quantify "more stable"; it might be the application's fault rather than the hardware's.

OpenGL development for mac in Xcode. Can I force software render?

I'm porting an OpenGL game from iPhone to Mac, and having problems with textures getting corrupted. I guess it's a memory problem.
The thing is, I've crashed the Mac three times now (it happens at random when the game launches), so this is getting difficult to debug.
Is there any way I can force software render?
Select kCGLRendererGenericID as the NSOpenGLPFARendererID value when you create the pixel format attribute list for initializing the context (initWithAttributes:).
NSOpenGLPFARendererID
Value is a nonnegative renderer ID number. OpenGL renderers that match the specified ID are preferred. Constants to select specific renderers are provided in the CGLRenderers.h header of the OpenGL framework. Of note is kCGLRendererGenericID which selects the Apple software renderer. The other constants select renderers for specific hardware vendors.
NOTE: I just saw that kCGLRendererGenericID has been deprecated; the one to use now is kCGLRendererAppleSWID.
Another tip is to start the app with the OpenGL Profiler. See here
http://lists.apple.com/archives/quartzcomposer-dev/2010/Jun/msg00090.html
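For illustration, a minimal sketch of the attribute list described above (assumptions: a plain Cocoa app; kCGLRendererAppleSWID taken as the current software renderer constant from CGLRenderers.h, as noted in the answer):

```objc
// Hedged sketch: request Apple's software renderer when building
// the pixel format for the OpenGL context.
#import <Cocoa/Cocoa.h>
#import <OpenGL/CGLRenderers.h>

NSOpenGLPixelFormatAttribute attrs[] = {
    NSOpenGLPFARendererID, kCGLRendererAppleSWID,  // software renderer
    NSOpenGLPFADoubleBuffer,
    NSOpenGLPFADepthSize, 24,
    0
};
NSOpenGLPixelFormat *pf  = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs];
NSOpenGLContext     *ctx = [[NSOpenGLContext alloc] initWithFormat:pf
                                                      shareContext:nil];
```

If the texture corruption disappears under the software renderer, that points at a driver or GPU memory issue rather than your own code.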

Same QtOpenGL code runs about 15 times slower when moving to Carbon (vs Cocoa)

I'm developing a very simple application for Mac OS X using Qt and OpenGL (and QtOpenGL), so that going cross-platform gets easier.
The application receives a variable number of video streams that have to be rendered to the screen. Each frame of these video streams is used as a texture for mapping a rectangle in 3D space (very similar to a video wall).
Apart from things such as receiving, locking, and uploading video data, and synchronizing threads, I think it is clear that this is a quite simple application.
The fact is that everything behaves fine when using the Cocoa-based Qt 4.7 binaries (the default ones) on a 10.5 Mac.
But my code has to run on all OS X versions starting from (and including) 10.4. So I tried the code on a 10.4 machine and it crashed right at startup. After a few hours of reading, I discovered that for a Qt application to target 10.4, the Carbon-based Qt has to be used. So I rebuilt the whole project against the new framework.
When the resulting binary runs, everything works well except that the application's frame rate falls to about 2 fps!! And it behaves the same on both machines (the 10.5 computer has noticeably better specs).
I've spent quite some time working on this but have not reached a solution. Any suggestions?
More information about the application and things I've tried:
The code has not been modified when recompiling against Carbon.
Only two videos (256x256 textures) are used, to make sure it's not a bandwidth limit problem (although I know it shouldn't be, because the first build worked).
The two video streams arrive over the (local) network.
When a video frame arrives, a signal is emitted and the data is uploaded to an OpenGL texture (glTexSubImage2D).
A timer triggers rendering (paintGL) about every 20 ms (~50 fps).
The render code uses the textures (updated or not) to draw the rectangles.
Rendering only when a video frame arrives won't work, because there are two (asynchronous) video streams; besides, more things have to be drawn on screen.
Only basic OpenGL commands are used (no PBOs, FBOs, VBOs, ...). The only potentially problematic thing is the use of shaders (available only from Qt 4.7 on), but their code is trivial.
I've made use of OpenGL Profiler and Instruments. Nothing special/strange was observed.
Some things I suspect (conclusions):
It's clearly not a hardware issue; the same computer behaves differently.
It gives me the feeling of a threading/locking problem, but why?
Carbon is 32-bit. The 10.5 application was 64-bit. It is not possible to develop 64-bit applications with Carbon.
To rule out 32-bit as the possible cause, I also rebuilt the first project as 32-bit. It behaved practically the same.
I've read something about Carbon having more problems than usual with context switching.
Maybe the OpenGL implementation is multithreaded and the code is not (or the opposite)? That could cause a lot of stalls.
Maybe Carbon handles events differently from Cocoa (I mean signal/event dispatching, the main loop, ...)?
OK, this (sorry for the long write-up) is my actual headache. Any suggestion or idea would be very much appreciated.
Thanks in advance.
May I ask a diagnostic question: can you make sure it's not being handed to the software renderer?
I remember that when 10.4 was released there was some confusion about Quartz Extreme, Quartz, and Carbon, with some of it disabled and hardware renderers turned off by default in some configurations, which required configuration by the end user to get things working correctly. I'm not sure whether this information is pertinent, because you say that, having targeted 10.4, the problem shows up on both the 10.4 and the 10.5 machines, yes?
It's possible (though admittedly I'm grasping at straws here) that even on 10.5 Carbon doesn't use the hardware renderers by default. I'd like to think that OS X prefers hardware renderers to software renderers in all scenarios, but it may be worth spending a little time looking into, given how thoroughly you're already looking into other options.
Good luck.
Since you are using Qt, I guess your code would also build on Windows or Linux. Have you tried your application on those platforms?
That would quickly reveal whether the problem comes from Qt or from the OS X version.

hardware acceleration / performance and linkage of different macosx graphics apis, frameworks and layers

The more I read about the different types of views/contexts/rendering backends, the more confused I get.
According to http://en.wikipedia.org/wiki/Quartz_%28graphics_layer%29,
Mac OS X offers Quartz (Extreme) as a render backend, which is itself part of Core Graphics.
In the Apple docs, and in some books too, they say that in any case you somehow use OpenGL (obviously, since this operating system uses OpenGL to render all its UI).
I currently have an application that captures real-time video from a camera (via QTKit, which is based on QuickTime but is Cocoa) and I would like to further process the frames (via Core Image, GLSL shaders, etc.).
So far so good. Now my question is: does it matter performance-wise whether you
a) draw the captured frame via Quartz (and implicitly via OpenGL), or
b) set up an OpenGL context and a display link and draw the buffered image explicitly via OpenGL?
What would be the advantages or disadvantages of going either way?
I've looked at the different examples (especially CoreImage101 and CoreVideo101) and documents from Apple's developer pages, but I can't see why they go (or have to go) that way.
And I really don't get where Core Video and Core Animation come into play.
Does going with b) automatically mean I use Core Video? And with which way can I use Core Animation?
additional info:
http://developer.apple.com/leopard/overview/graphicsandmedia.html
http://theocacao.com/document.page/306
http://lists.apple.com/archives/quartz-dev/2007/Jun/msg00059.html
P.S.: By the way, I am on Leopard, so no QuickTime X confusion yet :)
Generally speaking, OpenGL just gives you more flexibility than the higher-level APIs. If the higher-level APIs do not offer a feature you need, then it is very likely that you will need to drop down to the OpenGL layer.
If they do offer everything you need, then you should get comparable speed, perhaps with a small (almost negligible) degradation given the Objective-C overhead.
