In my app I'm using a "CIMotionBlur" CIFilter during CALayer animation. The problem is that the filter does not work properly when hardware acceleration is not available:
In OS X Safe Mode the layer becomes invisible during animation;
When using VMWare Fusion the animation is unbearably slow and makes testing the app harder;
The animation works fine without the filter. I'd like to apply the filter only when hardware acceleration is available.
What's the highest level API that would let me know when to disable the filter?
I'm going to look for clues in IOKit.
I found the answer in Technical Q&A QA1218
"How do I tell if a particular display is being hardware accelerated by Quartz Extreme?"
It's as simple as this:
NSNumber *curScreenNum = [self.window.screen.deviceDescription objectForKey:@"NSScreenNumber"];
if (CGDisplayUsesOpenGLAcceleration((CGDirectDisplayID)curScreenNum.unsignedIntValue))
{
// Do accelerated stuff
}
Works as expected in my cases.
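A sketch of how the check can gate the filter (animatedLayer here is a hypothetical stand-in for whatever layer you animate):
NSNumber *screenNum = [self.window.screen.deviceDescription objectForKey:@"NSScreenNumber"];
if (CGDisplayUsesOpenGLAcceleration((CGDirectDisplayID)screenNum.unsignedIntValue)) {
    CIFilter *blur = [CIFilter filterWithName:@"CIMotionBlur"];
    [blur setDefaults];
    animatedLayer.filters = @[blur];   // animatedLayer is hypothetical
} else {
    animatedLayer.filters = nil;       // run the animation without the filter
}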
https://developer.apple.com/library/ios/documentation/graphicsimaging/Conceptual/CoreImaging/ci_concepts/ci_concepts.html#//apple_ref/doc/uid/TP30001185-CH2-TPXREF101
I guess you could go by this to determine whether your desired filters are available. They probably shouldn't be listed if Core Image itself is not available.
If filterNamesInCategory returns the CIMotionBlur filter's name even when it is not available then you should file a bug report with Apple's bug reporter.
As a further test, you could call filterWithName and check for a non-nil result. If you get back nil, you know the filter is not available.
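Something like this (just a sketch):
CIFilter *motionBlur = [CIFilter filterWithName:@"CIMotionBlur"];
if (motionBlur == nil) {
    // The filter isn't registered on this system; skip the blur.
}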
Or are you saying that on platforms without hardware acceleration, you get back a filter but it doesn't work? If that's the case then again, time to file a bug report.
Or, are you saying that on non-hardware-accelerated platforms, you get back a filter that works, but it is too slow to be useful?
Poking around in the docs I don't see any way to tell if Core Image is hardware-accelerated for a given platform or not.
However, CI filters are built on top of OpenGL, and you CAN query the OpenGL driver to see if it's hardware-accelerated or not. My guess is that the result of such a query would also tell you whether CIFilters are hardware-accelerated or not.
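For example, on OS X you can ask CGL about the renderers for a given display (a sketch; error handling and the display choice are simplified):
// Check whether any renderer for the main display is hardware accelerated.
CGDirectDisplayID display = CGMainDisplayID();
GLuint displayMask = CGDisplayIDToOpenGLDisplayMask(display);
CGLRendererInfoObj info;
GLint rendererCount = 0;
BOOL accelerated = NO;
if (CGLQueryRendererInfo(displayMask, &info, &rendererCount) == kCGLNoError) {
    for (GLint i = 0; i < rendererCount; i++) {
        GLint value = 0;
        CGLDescribeRenderer(info, i, kCGLRPAccelerated, &value);
        if (value) { accelerated = YES; break; }
    }
    CGLDestroyRendererInfo(info);
}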
Related
I'm building a scrolling-intensive app for macOS with Core Animation. I've been using the CA Instrument to help with optimization. While doing this I noticed something odd... my app gets a better frame rate when running under the CA Instrument debugging tool than it does when running normally.
I found the underlying reason is that the CA Instrument tool sets the CA_LAYER_SURFACE environment variable to 0. Doing that changes the code path that Cocoa uses to render Core Animation layers, and as a result my app goes from 55fps to 60fps and has noticeably smoother scrolling.
Can anyone tell me more about this CA_LAYER_SURFACE flag? From the article linked above, it seems that setting it enables the old behavior. But if that's the case, why was the old behavior replaced with a new, slower behavior? What are the tradeoffs if I decide to leave this flag set to CA_LAYER_SURFACE=0 in my production app?
Thanks!
update
Most of the performance increase went away (i.e. both versions are fast) once I changed the way that I create offscreen rendering contexts as described here:
Fastest way to draw offscreen CALayer content
I've also found out a bit more about the flag, as described in the answer I posted below.
These release notes (search for "Changes to layer rendering") explain what the flag does:
AppKit Release Notes – Changes to layer rendering
The flag itself isn't documented. But setting NSView.layerUsesCoreImageFilters to YES has the same effect: your layers are rendered in process instead of out of process, which is the new default.
I don't fully understand the tradeoffs, but for me the takeaway is that you don't need the undocumented CA_LAYER_SURFACE flag; instead, just set NSView.layerUsesCoreImageFilters.
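In code that's just (a sketch; contentView stands in for whichever layer-backed view you actually use):
NSView *contentView = self.window.contentView;   // your layer-backed view
contentView.wantsLayer = YES;
contentView.layerUsesCoreImageFilters = YES;     // keeps layer rendering in process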
We've encountered a strange problem on newer laptops using built-in graphics cards.
In order to draw TrueType fonts we obtain the glyph outlines using wglUseFontOutlines and then draw them in glRenderMode(GL_FEEDBACK).
Afterwards we parse the feedback buffer. This has worked for many years.
Now we have a problem with glyphs containing holes (only on platforms with built-in graphics cards):
wglUseFontOutlines works perfectly. If we just draw the returned display lists, everything is fine. However, the token stream generated with GL_FEEDBACK is corrupt. The debugger shows nothing unusual, all functions return success, and the parsing itself works fine too. It is really the binary data generated in GL_FEEDBACK mode that is wrong.
Has anyone else encountered this problem?
And is there an alternative way to obtain the outlines and filled shapes of TrueType fonts on Windows?
I'm just guessing into the blue here: the GL_SELECT and GL_FEEDBACK rendering modes were usually not supported by widespread GPU driver OpenGL implementations. Only a handful of graphics cards from the previous century actually did support these rendering modes. Hence you would almost always fall back to a software implementation when using those modes.
However, given modern GPUs' vastly more flexible feedback mechanisms, the latest drivers could actually try to implement those rendering modes using GPU features (somewhat weird, because those modes have been removed from modern OpenGL profiles). Anyway, this could be the reason why you're experiencing these problems.
In order to draw true-type fonts we obtain the glyph outlines using wglUseFontOutlines and then draw them with in glRenderMode(GL_FEEDBACK). Afterwards we parse the feedback buffer.
That's a cool Rube Goldberg machine. Why don't you simply cut out the middleman and obtain the glyph outlines directly using the appropriate Windows GDI function (GetGlyphOutline)? This is what wglUseFontOutlines uses internally anyway.
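For illustration, the usual two-call pattern looks roughly like this (a sketch; hdc is a hypothetical device context with the font already selected, and <windows.h> is assumed):
GLYPHMETRICS gm;
MAT2 identity = { {0, 1}, {0, 0}, {0, 0}, {0, 1} };   // 2x2 identity in FIXED format
// First call with a NULL buffer asks for the required size.
DWORD size = GetGlyphOutline(hdc, 'A', GGO_NATIVE, &gm, 0, NULL, &identity);
if (size != GDI_ERROR && size > 0) {
    BYTE *buffer = (BYTE *)malloc(size);
    GetGlyphOutline(hdc, 'A', GGO_NATIVE, &gm, size, buffer, &identity);
    // buffer now holds TTPOLYGONHEADER / TTPOLYCURVE records describing the
    // contours (including holes), which you can tessellate yourself.
    free(buffer);
}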
I need to blend a few images together into a single one, pretty much as described here: OpenGL - mask with multiple textures.
I used the solution that is proposed there, but there's an issue with the glBlendFuncSeparate method.
Turns out that this method was introduced in a later OpenGL version, and according to my gl.h file the version I'm using is 1.
After much searching and reading I realized that this is what I have to work with and that I can't just upgrade my OpenGL version.
I went ahead and downloaded GLEW.
I added glew.h and glew.c to my VS10 project and defined GLEW_BUILD, and now it finally compiles without complaining about glBlendFuncSeparate. But when I run the program it crashes with an Access Violation when it tries to call the method; I guess the function pointer is NULL and the crash happens when it's invoked.
I continued reading and searching on this, and from what I understand, I need to use OpenGL extensions to make it work.
If what's written in Using OpenGL extensions On Windows is correct then I'm missing something.
Let's say I do everything it says: I "download and install the latest drivers and SDKs for your graphics card" and then compile. Even if it runs on my machine, I see no guarantee that it won't crash on someone else's machine, since they might not have done the same.
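For reference, my understanding of the sequence GLEW expects is roughly this (a sketch; the context-creation step is simplified and may be where I'm going wrong):
#include <GL/glew.h>   // must be included before gl.h

// ... create the window and GL context, then make it current (wglMakeCurrent) ...

GLenum err = glewInit();                 // must run after a context is current
if (err != GLEW_OK || !GLEW_VERSION_1_4) {
    // glBlendFuncSeparate (OpenGL 1.4) isn't available here; need a fallback
}
glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO);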
I have two questions:
Am I missing something here? This whole process seems way too complicated and environment-dependent.
Is there an alternative to glBlendFuncSeparate in this kind of scenario?
You don't need glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO); to use the trick described in OpenGL - mask with multiple textures. Yes, you can't add color directly to the alpha channel as described in that example, but you can be a little tricky.
While writing your mask, just disable writing to all color channels except alpha:
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
and set the blend function to multiply the mask's alpha into the background's alpha channel:
glBlendFunc(GL_ZERO, GL_SRC_ALPHA);
After writing the bitmask, don't forget to set your glColorMask back:
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
//-----------------------------------------------------------------------------------------------------------------------
And yes, you need a mask with the information in the alpha channel:
1) It can be done with GIMP (very simple, but requires GIMP knowledge).
2) You can write your own routine to push the color information into the alpha channel before creating the mask texture (it's very simple - just a few lines of code).
3) Or just use the GL_ALPHA format in glTexImage2D for the mask texture. That writes the bitmap's color into the texture's alpha channel.
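Putting the pieces together, the mask pass might look like this (a sketch of the steps above; maskTexture, maskPixels, width and height are hypothetical):
// Upload the mask so its data lands in the alpha channel (option 3 above).
glBindTexture(GL_TEXTURE_2D, maskTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, maskPixels);

// Pass 1: write only alpha, multiplying the mask into the destination alpha.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
glEnable(GL_BLEND);
glBlendFunc(GL_ZERO, GL_SRC_ALPHA);
// ... draw the mask quad here ...

// Restore normal color writes before drawing the rest of the scene.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);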
I've been having problems and, after spending a week trying out all kinds of solutions and tearing my hair out, I've come here to see whether anybody could help me.
I'm working on a 3D browser plugin for the Mac (I have one that works on Windows). The only fully hardware-accelerated way to do this is to use a CAOpenGLLayer (or something that inherits from it). If an NSWindow is created and you attach the layer to that window's NSView, then everything works correctly. But, for some reason, I can only get a specific number of frames (16) to render when passing the layer into the browser.
Cocoa calls my layer's drawInCGLContext for the first 16 frames. Then, for some unknown reason, it stops calling it. 16 seems like a very specific - and programmatic - number of frames and so I wondered whether anybody had any insight into why drawInCGLContext would not be called after 16 frames?
I'm reasonably sure it's not because I pass the layer into the browser - I've created a very minimal example plugin that renders a rotating quad using CAOpenGLLayer and that actually works. But the full plugin is a lot more complicated than that and I just don't know where to look anymore. I just don't know why drawInCGLContext stops being called. I've tried forcing it using CATransaction, it definitely gets sent the setNeedsDisplay message - but drawInCGLContext is never called. OpenGL doesn't report any errors either (I'm currently checking the results of all OpenGL calls). I'm confused! Help?
So, for anybody else who has this problem in the future: you're trying to draw using the OpenGL context outside of drawInCGLContext. There was a bug in the code where nearly all the drawing happened in the correct place (drawInCGLContext), but one code path led to rendering outside of it.
No errors are raised, nor does glGetError report any problems. It just stops rendering. So if this happens to you, you're almost certainly making the same mistake I made!
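As a reminder of the shape things should take, every GL call belongs inside the override (a sketch of a minimal subclass; PluginGLLayer and the clear-to-black body are stand-ins):
#import <QuartzCore/QuartzCore.h>
#import <OpenGL/gl.h>

@interface PluginGLLayer : CAOpenGLLayer
@end

@implementation PluginGLLayer
- (void)drawInCGLContext:(CGLContextObj)ctx
             pixelFormat:(CGLPixelFormatObj)pf
            forLayerTime:(CFTimeInterval)t
             displayTime:(const CVTimeStamp *)ts
{
    CGLSetCurrentContext(ctx);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... all other GL drawing happens here, never from outside this method ...

    // Let CAOpenGLLayer flush the context.
    [super drawInCGLContext:ctx pixelFormat:pf forLayerTime:t displayTime:ts];
}
@end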
I'm working on a Cocoa application which will be used for a digital-signage/kiosk style display. I've never done anything like this with Cocoa before, but I'm trying to figure out what the best approach is for building the user interface for it.
My main issue is that I need a way to have the user interface scale up or down depending on the resolution of the display. When I say scaled, I mean that I want everything, including white space, to maintain the same sizing ratio. The aspect ratio of the interface needs to remain the same (16x9), but it should always fill the entire width of the display it's on.
Sorry if I'm not being descriptive enough.
What are some thoughts?
If I follow you correctly, you want all buttons, views, etc. to get larger the bigger the screen is (which has nothing to do with the dimensions of your views). If that's the case, there's no automatic way to do this.
With Quartz Debug (part of the Xcode Tools), you can set the scaling factor (see "resolution independence"), but this would need to be adjusted manually per system. What's more, I'm not sure whether this setting persists across reboots. I leave that for you to investigate.
As far as I know, though, there's no way to adjust this programmatically as resolution independence is still not an exposed consumer feature of OS X.
If anyone is interested, I seem to have found a solution in this post: http://cocoawithlove.com/2009/02/asteroids-style-game-in-coreanimation.html
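My reading of the gist (an assumption on my part, not lifted from the post): design the whole UI inside a fixed-size root layer at a 16x9 reference resolution, then apply a scale transform so it fills the screen width. A sketch:
// Scale a 1920x1080 "design" layer to fill the width of the current screen.
CGFloat designWidth  = 1920.0;
CGFloat designHeight = 1080.0;
CGFloat screenWidth  = self.window.screen.frame.size.width;
CGFloat scale = screenWidth / designWidth;

CALayer *rootLayer = [CALayer layer];            // host all UI sublayers in here
rootLayer.bounds = CGRectMake(0, 0, designWidth, designHeight);
rootLayer.anchorPoint = CGPointMake(0, 0);       // scale from the bottom-left corner
rootLayer.position = CGPointZero;
rootLayer.transform = CATransform3DMakeScale(scale, scale, 1.0);

self.window.contentView.wantsLayer = YES;
[self.window.contentView.layer addSublayer:rootLayer];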