iOS UI performance profiling

So far, I have always tested the performance (i.e. "smoothness") of my iOS user interfaces informally, by testing the user interface myself. This is obviously not a very accurate way to profile performance, so I wondered whether there are methods or tools designed to do this. Are there?

Use the Instruments tool 'Core Animation' to measure graphics (and thus UI) performance. It reports performance mostly in the form of frame rate (a formal measure of smoothness), but you can also configure it to highlight overlapping and blended views (which your GPU absolutely hates).
Also, there are some great WWDC sessions available for iOS developers on this topic.

Related

Performance of Xamarin vs. NativeScript

Hi all.
So, I'm completely new to programming, especially mobile development. Considering goals for a future app I'm designing, I've been thinking of using NativeScript. I was reading the book "The NativeScript Book" offered on NativeScript.org, and they were discussing JIT vs pre-compiling (AOT, I guess it is? Correct me if I'm wrong on that). They also showed how a framework like Xamarin compiles down to the native file (.apk for Android and .ipa for iOS), whereas NativeScript runs in a JavaScript virtual machine.
When reading this, it seems like code that has been compiled BEFORE execution, and not just at the time of execution, would have a significant advantage in terms of speed, especially on phones, which aren't nearly as capable as modern desktop computers.
Can someone address this concern for me? Likely it's because I am ignorant and don't understand things yet, so please enlighten me and help me learn.
Thanks :)
If you are new to programming, the performance of the framework should not be one of your priorities. You should start by focusing on other aspects, like portability, or simplicity of the solution. Learning to write good software should be your top priority instead of trying to write fast software.
Nonetheless, if this is really important to you, there are some benchmarks available comparing the different mobile-oriented frameworks. For instance, the NativeScript website has one:
https://www.nativescript.org/blog/nativescript-and-xamarin
Check the 'Speed' section. It states, for instance:
In these tests, you can see that Xamarin is roughly 200 ms faster than NativeScript at startup time.
But as you'll notice when comparing performance, benchmarks always target only one aspect of the solution. This one in particular targets startup time. Others target sorting of large arrays, or HTTP request speed.
It is impossible to get a global performance test indicating which solution is the best. I think one good example in that regard is the comparison of Xamarin vs. native languages (Swift/Java). Check this page:
https://www.altexsoft.com/blog/engineering/performance-comparison-xamarin-forms-xamarin-ios-xamarin-android-vs-android-and-ios-native-applications/
Xamarin is close to native solutions since it uses the native SDKs, but you will still notice big differences between Xamarin and native. So it would be even more complicated to compare Xamarin and NativeScript, which follows a completely different logic!
In my opinion, you should start with the framework you like the most, and change it based on your experience if you are not satisfied with it.

Unity3D and WebGL comparison in terms of performance and speed

I am going to develop a lesson on two platforms (first in WebGL and then a similar lesson in Unity 3D).
The aim of this research is to see which of these platforms is best in terms of performance and speed for use in e-learning environments.
My question is this:
How can I measure the performance (processor, memory, graphics card) of these platforms?
Also, I would very much appreciate it if anyone could give me ideas or suggestions to improve this research.
WebGL and Unity are not platforms. Unity is a library that has support for multiple platforms; its performance depends on what hardware it's running on. WebGL is a JavaScript API for browsers that allows them to access OpenGL ES 2.0. This also isn't a platform; it is utterly dependent on the hardware it is running on.
Sure, each incurs overhead, but they also do completely different things. Even if one is seen as faster for a particular piece of hardware, that doesn't mean that you can use it. Unity makes applications. Something you download and install. WebGL is for web pages: HTML+JavaScript. The reasons to use one are not the same reasons you would have to use the other.
Making a "WebApp" is very different from making a regular application. You generally decide first off whether you want to make a WebApp or a regular application, then use the tools that are available to the one you pick.
There are platforms that don't support WebGL. Namely, Internet Explorer. Microsoft has already stated that they aren't going to implement WebGL. So WebGL's performance on IE is effectively 0.
Also, WebGL is a low-level rendering API; Unity is a game engine. Unity provides more functionality towards making a game than WebGL, so there are productivity differences you must take into account.
Your desire to compare the performance of these is simply not the most useful criterion for deciding which one to use.
OK, your later answer clued me in to the idea that you're focusing on browser-based tools.
WebGL is not available on Internet Explorer. So again, half of your customer base is gone. However, Unity's browser plug-in is a plug-in and therefore must be downloaded by the user. Quite a few users are against that. Also, Unity's browser plug-in doesn't work on mobile systems; you would be expected to write an app for those.
So which matters more to you: reaching out to mobile users (where WebGL is available), or reaching out to Internet Explorer users? Again, this is something you need to deal with long before you answer questions of performance.
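If, despite the caveats above, you still want a like-for-like measurement, the usual metric is frame time (the inverse of frame rate). Below is a minimal sketch of the idea in C++; you would implement the same timing in C# inside Unity and in JavaScript for WebGL, and RenderFrame here is just a placeholder for the work being measured:

    #include <chrono>
    #include <cstdio>

    void RenderFrame() { /* the work under test goes here */ }

    int main()
    {
        using clock = std::chrono::steady_clock;
        for (int frame = 0; frame < 1000; ++frame)
        {
            auto start = clock::now();
            RenderFrame();
            // elapsed wall-clock time for this frame, in milliseconds
            double ms = std::chrono::duration<double, std::milli>(clock::now() - start).count();
            std::printf("frame %d: %.2f ms\n", frame, ms);
        }
        return 0;
    }

Memory and GPU usage, by contrast, are best read from each platform's own tooling (the browser's developer tools for WebGL, the built-in profiler for Unity), since neither exposes them portably.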

GUI paradigm history

I know that Apple and Microsoft were inspired by Xerox PARC in building the GUI, but my question is: from the hardware point of view, when did the switch to GUIs become possible? I remember that I've read somewhere about an OS running in 80 KB at that time. Can you explain (or give some links about) what was necessary from the hardware point of view (memory, speed) to make the GUI available?
I'm interested about the history of this paradigm switch.
TY.
PS: do you know how those 80 KB were used?
Here is an excellent article from Ars Technica on the history of UIs. Wikipedia also has an article on it.
To answer your question about when the switch was available: I think that a GUI was always available, even in the days of analogue machines. You didn't have a windowing UI back then, but even back in the '60s, there were machines which displayed primitive user interfaces.
Machines whose user interfaces ran in 64 KB of memory did so by using a lot of special modes in their hardware to display primitives without using a lot of RAM. For example, back in those days, hardware sprites were crucial because they were the only way to get good graphics with low memory. Also, don't forget about vector graphics and various other innovations which can create user interfaces without much RAM.

Best practices for Alt-Tab support in a DirectX app?

When writing DirectX applications, obviously it's desirable to support the user suspending the application via Alt-Tab in a way that's fast and error-free. What is the best set of practices for ensuring this? Things that need to be addressed include:
The best methods of detecting when your application has been alt-tabbed out of and when it has been returned to.
What DirectX resources are lost when the user alt-tabs, and the best ways to cope with this.
Major things to do and things to avoid in application architecture for purposes of alt-tab support.
Any significant differences between major DirectX versions as they apply to the above.
Interesting tricks and gotchas are also good to hear about.
I will assume you are using C++ for the purposes of my answers, but if you can afford to use C#, XNA (http://creators.xna.com/) is an excellent game platform that handles all of these issues for you.
1]
This article is helpful for the window events in the window procedure that detect when a window loses or gains focus; you could handle this in your main window: http://www.functionx.com/win32/Lesson05.htm. Also, check out the WM_ACTIVATEAPP message here: http://msdn.microsoft.com/en-us/library/ms632614(VS.85).aspx
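As a minimal sketch (g_appActive is a hypothetical flag your game loop would read; error handling omitted), the WM_ACTIVATEAPP case in your window procedure might look like this:

    #include <windows.h>

    static bool g_appActive = true; // read by the game loop to decide whether to render

    LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_ACTIVATEAPP:
            // wParam is TRUE when the application gains focus, FALSE when it loses it
            g_appActive = (wParam != FALSE);
            return 0;
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }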
2]
The graphics device is lost when the application loses focus from full-screen mode. Microsoft offers an article on how to handle this (a minimal recovery sketch also follows below): http://msdn.microsoft.com/en-us/library/bb174717(VS.85).aspx. This article also has a lost-device tutorial: http://www.codesampler.com/dx9src/dx9src_6.htm
DirectInput can also have a device-lost error state; here is a link about that: http://www.toymaker.info/Games/html/directinput.html
DirectSound can also have a device-lost error state; this article has code that handles it: http://www.eastcoastgames.com/directx/chapter2.html
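For the graphics device itself, the usual D3D9 recovery pattern looks roughly like this (a minimal sketch; OnLostDevice and OnResetDevice are hypothetical hooks where you release and recreate your D3DPOOL_DEFAULT resources):

    #include <windows.h>
    #include <d3d9.h>

    void OnLostDevice(IDirect3DDevice9* device);  // hypothetical: release D3DPOOL_DEFAULT resources
    void OnResetDevice(IDirect3DDevice9* device); // hypothetical: recreate them

    void HandleLostDevice(IDirect3DDevice9* device, D3DPRESENT_PARAMETERS& pp)
    {
        HRESULT hr = device->TestCooperativeLevel();
        if (hr == D3DERR_DEVICELOST)
        {
            Sleep(50); // device can't be reset yet; skip rendering and try again next frame
        }
        else if (hr == D3DERR_DEVICENOTRESET)
        {
            OnLostDevice(device);
            if (SUCCEEDED(device->Reset(&pp)))
                OnResetDevice(device);
        }
    }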
3]
I would make sure never to disable Alt-Tab. You probably want minimal CPU load while the application is not active, because the user probably Alt-Tabbed away to do something else, so you could completely pause the application, or reduce the frames rendered per second. If the application is minimized, you of course don't need to render anything either. After thinking about a network game, my best solution is that you should still reduce the frames rendered per second as well as the number of network packets handled, possibly even throwing away many of the packets that come in until the game is re-activated.
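A minimal sketch of a message loop that throttles when inactive (g_appActive is the flag set from WM_ACTIVATEAPP in the sketch under 1] above; UpdateAndRender is a placeholder for your per-frame work):

    MSG msg = {};
    while (msg.message != WM_QUIT)
    {
        if (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        else if (g_appActive)
        {
            UpdateAndRender(); // normal frame
        }
        else
        {
            Sleep(100); // yield the CPU while alt-tabbed away
        }
    }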
4]
Honestly I would just stick to DirectX 9.0c (or DirectX 10 if you want to limit your target operating system to Vista and newer) if at all possible :)
Finally, the DirectX SDK has numerous tutorials and samples: http://www.microsoft.com/downloads/details.aspx?FamilyID=24a541d6-0486-4453-8641-1eee9e21b282&displaylang=en
We solved it by not using a full-screen DirectX device at all; instead we used a full-screen window with the topmost flag set to make it hide the task bar. If you Alt-Tab out of that, you can remove the flag and minimize the window. The texture resources are kept alive by the window.
However, this approach doesn't handle the device-lost event that happens due to the lock screen, Ctrl+Alt+Delete, remote desktop connections, user switching, or similar. But those don't need to be handled extremely fast or efficiently (at least that was the case in our application).
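A minimal Win32 sketch of that borderless-window approach (window class registration omitted; "MyWindowClass" is a placeholder, and a non-Unicode build is assumed):

    HWND hWnd = CreateWindowEx(
        0, "MyWindowClass", "Game",
        WS_POPUP | WS_VISIBLE,             // no border or caption
        0, 0,
        GetSystemMetrics(SM_CXSCREEN),     // cover the whole screen
        GetSystemMetrics(SM_CYSCREEN),
        nullptr, nullptr, GetModuleHandle(nullptr), nullptr);

    // Keep the window above the task bar while active...
    SetWindowPos(hWnd, HWND_TOPMOST, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE);

    // ...and on Alt-Tab (e.g. in the WM_ACTIVATEAPP handler) drop the flag and minimize:
    // SetWindowPos(hWnd, HWND_NOTOPMOST, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE);
    // ShowWindow(hWnd, SW_MINIMIZE);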
All serious D3D apps should be able to handle lost devices as this is something that can happen for a variety of reasons.
In DX10 under Vista there is a new "Timeout Detection and Recovery" feature that, in my experience, makes it common for graphics devices to be reset, which causes a lost device for your app. This seems to be improving as drivers mature, but you need to handle it anyway.
In DX8 and 9 (and 10?), if you create your resources (vertex and index buffers and textures, mainly) using D3DPOOL_MANAGED, they will persist across lost devices and will not need reloading. This is because they are stored in system memory and the DX runtime copies them to video memory automatically. However, there is a performance cost due to the copying, and this is not recommended for rapidly changing vertex data. Of course, you would profile first to determine if there is a speed issue :-)
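For example, creating a vertex buffer in the managed pool is just a matter of the pool argument (minimal D3D9 sketch; the Vertex layout is an example, and device is assumed to be your existing IDirect3DDevice9*):

    struct Vertex { float x, y, z, u, v; }; // matches D3DFVF_XYZ | D3DFVF_TEX1
    const UINT vertexCount = 256;           // example size

    IDirect3DVertexBuffer9* vb = nullptr;
    HRESULT hr = device->CreateVertexBuffer(
        vertexCount * sizeof(Vertex),
        0,                            // usage; note D3DUSAGE_DYNAMIC is not allowed with managed
        D3DFVF_XYZ | D3DFVF_TEX1,
        D3DPOOL_MANAGED,              // runtime keeps a system-memory copy and restores it after Reset
        &vb, nullptr);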

How can I learn about cross-platform game development?

How do companies like Valve manage to release games on all three major gaming platforms? I am interested in the best practices regarding code sharing, specifically between Windows, Xbox 360 and PS3, since the ideal solution is to reuse as much code as possible instead of rewriting the whole thing for every platform.
It's not any different than writing platform-independent code in other contexts. Hide platform-specific details (input, window interaction, the main event loop, threading, etc) behind generic interfaces, and test regularly on all the platforms you intend to support.
Note that the Cell's threading model is unusual enough that doing threading "generically" takes some care. I am not a Valve employee and I know none of their secrets, but it's my understanding that most game developers who want to target the PS3 use a job queue that the individual Cell processors grab tasks off of as needed. This isn't necessarily the best way to use the Cell, but it generalizes nicely to more conventional threading models (like, for example, the one that the PC and the 360 both use).
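A minimal sketch of such a job queue, using std::thread workers as a stand-in for the SPUs/cores (on the actual Cell you'd schedule SPU tasks instead; all names here are mine, not Valve's):

    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    std::queue<std::function<void()>> g_jobs;
    std::mutex g_jobsMutex;

    void Worker()
    {
        for (;;)
        {
            std::function<void()> job;
            {
                std::lock_guard<std::mutex> lock(g_jobsMutex);
                if (g_jobs.empty())
                    return;                    // queue drained for this frame
                job = std::move(g_jobs.front());
                g_jobs.pop();
            }
            job();                             // run the task outside the lock
        }
    }

    // Usage per frame: push the frame's tasks into g_jobs, spawn (or wake) N
    // workers, then join them before using the results.
    void RunJobs(int workerCount)
    {
        std::vector<std::thread> workers;
        for (int i = 0; i < workerCount; ++i)
            workers.emplace_back(Worker);
        for (auto& w : workers)
            w.join();
    }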
There's a bunch of Game Developer Magazine articles and GDC talks on the subject. In fact, since you mentioned Valve, they delivered a talk describing their approach at GDC08.
This is really a huge subject that I could talk about for hours upon hours (and have), but the elevator summary is:
Determine which parts of the engine are completely platform-specific and put them behind an abstraction. File and asset loading, for example, need to be rewritten for each console; but you can hide that behind an IFileSystem interface which provides a uniform API that the game code talks to (see the sketch after this summary).
The PS3 makes this hard because its abstraction point has to be someplace completely different from the other platforms. Even game features like collision and nav will have to be written differently for the Cell.
Try to keep leaf game code (entities, AI, sim) as platform-agnostic as possible...
But accept that even the leafiest of game code will sometimes need some platform-specific #ifdefs for perf or memory or TCR reasons. A lot of UI will have to be rewritten because the manufacturers have conflicting certification requirements.
Anyone who says the words "I'm not worried about performance" or "memory isn't an issue" shouldn't be on the payroll.
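As a concrete illustration of the IFileSystem idea above (a minimal sketch; the interface and class names are hypothetical, not taken from Valve's talk):

    #include <cstddef>
    #include <cstdio>
    #include <memory>

    class IFileSystem
    {
    public:
        virtual ~IFileSystem() = default;
        virtual bool        Open(const char* path) = 0;
        virtual std::size_t Read(void* buffer, std::size_t bytes) = 0;
        virtual void        Close() = 0;
    };

    // Desktop implementation on top of stdio; a console version would wrap the
    // platform's own file/DVD APIs behind the exact same interface.
    class StdioFileSystem : public IFileSystem
    {
        std::FILE* f = nullptr;
    public:
        bool Open(const char* path) override { f = std::fopen(path, "rb"); return f != nullptr; }
        std::size_t Read(void* buffer, std::size_t bytes) override { return f ? std::fread(buffer, 1, bytes, f) : 0; }
        void Close() override { if (f) { std::fclose(f); f = nullptr; } }
    };

    // The game code only ever sees IFileSystem; the #ifdef lives in one place.
    std::unique_ptr<IFileSystem> CreateFileSystem()
    {
    #if defined(GAME_PLATFORM_CONSOLE)                // hypothetical per-platform build flag
        return std::make_unique<ConsoleFileSystem>(); // hypothetical console implementation
    #else
        return std::make_unique<StdioFileSystem>();
    #endif
    }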
This question can be divided up into two separate questions. "How can I write portable code?" and "What are the divergent requirements of mainstream gaming platforms?".
The first question is relatively easy to answer. Best practices for abstracting your non-portable code are covered in Write Portable Code:
http://books.google.ca/books?id=4VOKcEAPPO0C&printsec=frontcover
Turning theory into practice, the Quake 3 source code does a pretty good job of dividing the different platforms into separate areas for a C codebase; it's available at http://www.idsoftware.com/business/techdownloads/. However, it does not demonstrate C++ patterns such as abstract interfaces, implemented once per platform.
The second part of your question, "What are the divergent requirements of mainstream gaming platforms?" is tougher. However, it is notable that your largest areas of change are still your renderer, your audio subsystem and your networking.
Each console platform has a series of certification requirements, available under an agreement with the respective console owners. The requirements drive consistency in user experience and are not focused on gameplay or qualitative, high level issues. For instance, your game may need to display a reasonably interesting animating loading screen, and black screens are unacceptable.
Getting your hands on this documentation as soon as possible is key to making the right choices in developing for a specific console platform.
Finally, if you can't get your hands on a console devkit, I suggest you port your code to the Mac from Windows. The Mac gets you an OS port, ensuring you are not tied to Windows, as well as a processor port if you support universal binaries. This ensures your code is endian-agnostic.
If you support both PC and Mac, you will be well positioned to support a third platform, should you gain access to it in the future.
Addendum: You wrote:
"the ideal solution is to reuse as much code as possible instead of rewriting the whole thing for every platform"
In many game porting scenarios, the ideal solution is not to reuse as much code as possible, but to write the optimal code for each platform. Code can be reused between projects and is relatively inexpensive as compared to the content that the engine takes in. A more reasonable goal is to aim for lowest common denominator content that runs on all platforms without modification (a build phase that packs the content for media is okay).
It's great to do simultaneous development. You find all kinds of bugs you wouldn't find doing just one platform.
I remember that programmers in DOS had null pointers all the time, because writing to low memory didn't immediately crash the machine. When you ported to an Amiga, Atari ST, or Macintosh: boom! I remember telling a DOS programmer that he had a couple of null pointers in an already-shipped game. He thought for a couple of seconds and grinned, "That explains a few things."
Now that games have such large budgets, it's important to ship them all at the same time so you don't waste marketing and ad budgets.
My advice on simultaneous development is to pick one lead platform, but never let the other platform(s) get more than a week behind. It will become obvious as you program which parts of the code are common to all platforms and which are different. Pull out the differences into one or more platform-specific areas.
My experience is in C/C++. It's a bigger problem if you have to port across different languages (say, Java and Objective-C).
A few years ago the Opera CEO said in an interview that the key to platform-independent development is to move away from any single OS's or platform's libraries. He went on to say that they developed their own libraries, improving on the performance the OS provides.
My assumption is that big companies will have a common team plus separate Xbox, PS, Windows, and FooOS teams. Each platform needs to be tweaked differently and requires different implementation methods. I don't think they build one source for all platforms; rather, they build one for each OS, thereby improving efficiency. I remember EA used to release some console games earlier than the PC versions, and vice versa.
Another issue is that different consoles have different hardware thus requiring different programming techniques.
There are two extremes: build one source that fits all (Java, for instance), in which case you run the risk of inefficiency, or write 40 versions, one optimized for each platform.
Back when I had a friend who was into educational computer games (before The Learning Company gutted the field), he was a great fan of creating cross-platform libraries for doing everything.
This is easier for games than other apps. If you have a word processing app to run on the Mac and Windows, for example, it really does need to look and behave like a Mac app on the Mac, and a Windows app on Windows. Write a game, and it doesn't have to conform to the native behavior, look, and feel.
If you want open-source examples, you could look at the source code of the Quake 1, 2 and 3 engines. They are structured quite portably. (Of course there's no PS3 or Xbox 360 support, but the same principles apply.)
http://www.idsoftware.com/business/techdownloads/
