Is there a "Best" version of Windows that runs AIR the best?
The AIR application mainly plays MP4s and still images in a loop.
Also, when publishing for AIR 3.4 for Desktop, what are the best settings?
What should be selected from the below?
Hardware Acceleration:
None
Level 1 - Direct
Level 2 - GPU
Render Mode:
Auto
Direct
CPU
Thanks!
GPU render mode is meant for graphics-heavy content (and it generally responds faster); CPU render mode is for when your code does a lot of computation.
In my opinion, for an app that mainly plays video and still images, you should use:
Hardware Acceleration: Level 2 - GPU
Render Mode: Auto
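For reference, the Render Mode choice from the AIR publish dialog ends up as the renderMode element of the AIR application descriptor. A minimal sketch, if you edit the descriptor by hand (the file and content names here are placeholders):

    <!-- Fragment of the AIR application descriptor (e.g. MyPlayer-app.xml).
         Valid renderMode values are auto, cpu, gpu and direct;
         "auto" matches the recommendation above. -->
    <initialWindow>
        <content>MyPlayer.swf</content>
        <renderMode>auto</renderMode>
    </initialWindow>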
Related
I'm trying to do some profiling on my OpenGL ES code. Something in my GPU pipeline (a shader, I believe) is causing a huge delay. Which is the best profiler I can use? Is this one a good option? Is there one I can use directly within Visual Studio?
If you have a GPU performance issue on iOS, the best approach is to use Xcode's tools to profile directly on the device: run the app from Xcode, then do a frame capture to look at the timings of each draw call and the number of cycles used by each shader (more info here).
You can also profile on Windows if you are able to reproduce your graphics pipeline in classic OpenGL on your Windows version, but this may not be a good idea: the iPhone's GPU is very different from a classic desktop GPU, so the bottleneck might not be the same on Windows as on iOS.
To profile on Windows I would suggest either Nvidia PerfKit (if you have an Nvidia card) or AMD's GPU PerfStudio (if you have an AMD card).
There is also RenderDoc, which is a nice tool, but I'm not sure it provides much profiling information (it is geared more toward debugging graphics issues than profiling).
Is there a performance difference between DirectX in a Win8 Desktop App versus DirectX in a Win8 (Store) App?
I am not interested in XAML.
AFAIK, Store apps use a run/suspend/end lifecycle, so I suppose there could be a small performance loss through the abstraction. Am I right in this assumption?
Or is there no noticeable difference?
There is some additional startup and shutdown delay due to the animations that occur automatically (fractions of a second). Once you're in your render loop, though, performance should be equivalent; there's no extra abstraction for native app code.
So, we've got a little graphical doohickey that needs to run in a server environment without a real video card. All it really needs is framebuffer objects and maybe some vector/font anti-aliasing. It will be slow, I know. It just needs to output single frames.
I see this post about how to force software rendering mode, but it seems to apply to machines that already have OpenGL enabled cards (like NVidia).
So, since I'm wary of trying to install OpenGL on a machine three time zones away with a bunch of live production sites on it: has anybody tried this, or does anybody know how to "emulate" an OpenGL environment? Unfortunately our dev server HAS a video card, so I can't really show "what I've tried".
The relevant code is all in Cinder, but I think our actual OpenGL utilization is lightweight for this purpose.
This would run on Windows Server 2008 Standard.
I see MS has a software implementation of OpenGL 1.1, but I can't seem to find one for 2.0.
Build/find some Mesa DLLs.
It will be slow.
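If you want to confirm which implementation actually gets picked up once the Mesa DLLs are in place, one quick sanity check (a sketch, assuming a GL context already exists, e.g. called from your Cinder app's setup()) is to log the GL strings:

    // Sanity check: log which OpenGL implementation is actually in use.
    // With Mesa's software rasterizer, the strings typically mention
    // "Mesa", "softpipe" or "llvmpipe" instead of a hardware driver.
    #include <windows.h>   // must come before GL/gl.h on Windows
    #include <GL/gl.h>
    #include <cstdio>

    void logGLInfo()
    {
        std::printf( "GL_VENDOR:   %s\n", reinterpret_cast<const char*>( glGetString( GL_VENDOR ) ) );
        std::printf( "GL_RENDERER: %s\n", reinterpret_cast<const char*>( glGetString( GL_RENDERER ) ) );
        std::printf( "GL_VERSION:  %s\n", reinterpret_cast<const char*>( glGetString( GL_VERSION ) ) );
    }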
I am planning a browser based kiosk application that will utilize CSS3 transitions (mainly on opacity) on relatively high resolution (1920x1080px) images.
Doing some preliminary testing on CSS3 transitions, I have seen huge differences in rendering performance between Safari and Chrome under OS X 10.7.2, which was a bit surprising.
Could anyone give me some recommendations as to OS, browser and hardware suggestions in order to maximize performance?
You could use a stripped-down Linux with Chromium daily builds or trunk; hardware depends on your budget.
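For what it's worth, the cheapest thing to animate at that resolution is opacity alone, which browsers can usually composite on the GPU without repainting the whole 1920x1080 image. A minimal sketch of the kind of rule I mean (the class names are made up; older builds may still need the -webkit- prefix):

    /* Crossfade a full-screen 1920x1080 slide by animating only opacity. */
    .slide {
        position: absolute;
        width: 1920px;
        height: 1080px;
        opacity: 0;
        -webkit-transition: opacity 1s ease-in-out;
        transition: opacity 1s ease-in-out;
    }
    .slide.visible {
        opacity: 1;
    }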
All 2010 Macbook Pros come with two graphics cards — a low-performance built-in Intel HD one and a high-performance discrete NVIDIA one — and it switches between them on the fly depending on the needs of the running applications.
I have a simple Cocoa application that consists of just a menu bar item with an NSTextField in it. All I do is update the text field with an NSAttributedString from time to time. The trouble is that my application switches my MacBook Pro to use the high-performance NVIDIA card (I used the gfxCardStatus tool to confirm this).
What could possibly need the high-performance card? Is there a known list of reasons for an application to require the high-performance graphics card? Is there a way to force the computer to keep using the integrated card?
There is a good article about GPU switching in the newer MacBook Pros at Ars Technica.
I noticed that OS X switches to the dedicated GPU if you:
Start an application that links against OpenGL
Connect a second display
The code of gfxCardStatus is open source, and it seems the relevant part is located in switcher.m. You can take a closer look here.
In Mac OS X 10.7 you can specify a setting in your app's Info.plist to stop it from switching to discrete graphics:
https://developer.apple.com/library/mac/qa/qa1734/_index.html
Needs to be a 2011+ MacBook Pro.
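For reference, the setting that QA1734 describes is an Info.plist key. A minimal sketch of the relevant entry (it goes inside the top-level dict of your app bundle's Info.plist):

    <!-- Opt in to automatic graphics switching so merely launching the app
         does not force the machine onto the discrete GPU. -->
    <key>NSSupportsAutomaticGraphicsSwitching</key>
    <true/>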