Is it worth developing a CPU-intensive mobile app? - performance

I am considering developing a poker robot for mobile phones that users can play against.
Needless to say, this is a very CPU-intensive application, as confirmed by the prototype.
What is the consensus on CPU-intensive mobile apps? Are they even worth developing? My concern is that people will leave negative feedback along the lines of "CPU hog. Uninstall."
I could host the CPU-intensive brain on a server, but that requires an internet connection for the user, which is undesirable.

It's not worth it if you're dropping frames, especially on mobile. Mobile hardware is very limited compared to PC or console, so you have to plan accordingly.
I'd precompute as much as possible so it can be accessed later without further processing; at runtime you then only need to look up the pre-processed data (see the sketch below). That can save quite a bit of CPU.
If you can't do that, it's not worth it unless you can keep the game within a small margin of error that won't be noticed by the average person in your audience.
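For the poker case specifically, here is a minimal sketch of the precompute-then-look-up idea. The table contents and the hand encoding are placeholders, not a real evaluator: the point is that the expensive work happens once (offline, or at install/startup), and each in-game decision becomes a cheap array lookup.

    // Minimal sketch of "precompute once, look up at play time"
    // (hypothetical data, not a real poker evaluator).
    #include <array>
    #include <cstdint>
    #include <cstdio>

    // Pretend this is the expensive part; in a real bot the hand-strength
    // tables could be computed offline and shipped with the app as an asset.
    static std::array<uint32_t, 1 << 12> buildStrengthTable() {
        std::array<uint32_t, 1 << 12> table{};
        for (uint32_t key = 0; key < table.size(); ++key) {
            table[key] = (key * 2654435761u) >> 16;   // placeholder "evaluation"
        }
        return table;
    }

    int main() {
        static const auto strength = buildStrengthTable();  // built once

        uint32_t handKey = 1234;                      // hypothetical hand encoding
        std::printf("strength(%u) = %u\n", handKey, strength[handKey]);  // cheap lookup
        return 0;
    }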

Related

Windows Timer Resolution vs Application Priority vs Processor scheduling

Please clarify once more the technical difference between these three things on MS Windows systems. The first is the timer resolution, which you may set and get via the undocumented ntdll.dll functions NtSetTimerResolution and NtQueryTimerResolution, or inspect with Sysinternals' clockres.exe tool.
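For reference, a rough sketch of querying the current resolution through that ntdll export. These functions are undocumented, so the prototype below is the commonly cited one rather than anything from the SDK headers; the values are in 100-nanosecond units.

    #include <windows.h>
    #include <cstdio>

    // Commonly cited prototype for the undocumented ntdll export.
    typedef LONG(NTAPI* NtQueryTimerResolutionFn)(PULONG Min, PULONG Max, PULONG Cur);

    int main() {
        HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");
        auto query = reinterpret_cast<NtQueryTimerResolutionFn>(
            GetProcAddress(ntdll, "NtQueryTimerResolution"));
        if (!query) return 1;

        ULONG min = 0, max = 0, cur = 0;              // 100-ns units
        if (query(&min, &max, &cur) == 0) {           // 0 == STATUS_SUCCESS
            std::printf("min %.2f ms, max %.2f ms, current %.2f ms\n",
                        min / 10000.0, max / 10000.0, cur / 10000.0);
        }
        return 0;
    }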
Raising it was one of the notorious tricks the Chrome browser used some time ago to perform better across the web. (At the moment they have kept the high-resolution trick only for the Flash plugin.) https://bugs.chromium.org/p/chromium/issues/detail?id=153139
https://randomascii.wordpress.com/2013/07/08/windows-timer-resolution-megawatts-wasted/
In fact, Visual Studio and SQL Server do the same trick in some cases. I personally feel it makes the whole system perform better and crisper, not slow down, as many people out there warn.
What is the difference between the timer resolution and the application I/O and memory priority (realtime/high/above normal/normal/low/background/etc.) you may set via Task Manager, other than the fact that the timer resolution applies to the whole system rather than a single application?
And what is the difference between those and the Processor scheduling option you can adjust via CMD > SystemPropertiesPerformance.exe -> Advanced tab? By default, the client OS versions (XP/Vista/7/8/8.1/10) favor the performance of programs, while the server versions (2k3/2k8/2k12/2k16) favor background services. How does this option interact with the two above?
timeBeginPeriod() is the documented API to do this. It is documented to affect the accuracy of Sleep(). Dave Cutler probably did not enjoy implementing it, but allowing Win 3.1 code to port made it necessary. The multimedia API back then was necessary to keep anemic hardware with small buffers going without stuttering.
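A minimal usage sketch (link winmm.lib; raise the resolution only around the section that needs it, and always pair the calls):

    #include <windows.h>
    #include <mmsystem.h>               // timeBeginPeriod / timeEndPeriod
    #pragma comment(lib, "winmm.lib")

    void timingSensitiveWork() {
        timeBeginPeriod(1);             // request a 1 ms clock interrupt period
        for (int i = 0; i < 100; ++i) {
            Sleep(1);                   // now wakes up close to every 1 ms
            // ... do the small, regularly paced chunk of work here ...
        }
        timeEndPeriod(1);               // restore; must match the begin call
    }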
Very crude, but there is no other good way to do it in the kernel. The normal state for a processor core is to be stopped on a HLT instruction, consuming (almost) no power; the only way to revive it is with a hardware interrupt. Which is what this does: it cranks up the clock interrupt rate. It normally ticks 64 times per second; you can jack it up to 1000 with timeBeginPeriod, and to 2000 with the native API.
And yes, it's pretty bad for power consumption. The clock interrupt handler also activates the thread scheduler, a fairly unsubtle chunk of code. That's the reason a Sleep() call can now wake up at (almost) the clock interrupt rate. This was tinkered with in Win 8.1, by the way; the only thing I noticed about the changes is that it is not quite as responsive anymore and a 1 msec rate can cause up to 2 msec delays.
Chrome is indeed notorious for ab/using the heck out of it. I always assumed that it provided a competitive edge for a company that does big business in mobile operating systems and battery-powered devices. The guy that started this web site noticed something was wrong. The more responsible thing to do for a browser is to bump up the rate to 10 msec, necessary to get accurate GIF animation. Multi-media playback does not need it anymore.
This otherwise has no effect at all on scheduling priorities. One detail I did not check is if the thread quantum changes correspondingly (the number of ticks a thread may own a core before being evicted, 3 for a workstation). I suspect it does.

Phone battery use with camera turned on (AR)

I am hoping this has a relatively simple answer. I've always been interested in AR, and I've been debating tinkering with a possibly AR-driven UI for mobile.
I guess the only real question is: with the camera continuously turned on, how much battery would that use? I.e., would it be too much for something like this to be worth doing?
Battery drain is one of the biggest issues with smartphones nowadays. I'm not a specialist in power consumption or battery life, but anyone who owns and uses a smartphone (not only for calls, of course) could tell you this. There are many tips on the internet for increasing battery life. Ultimately, every process running on your device needs energy, and that energy is provided by the battery.
To answer your question, I've been using smartphone cameras for AR applications for quite a long time now. The camera is a heavy process, and indeed it drains the battery faster than other processes. On the other hand, you also have to consider the other processes running on your device while your AR application is in use. For example, your app might use the device's sensors (gyroscope, GPS, etc.); those drain the battery as well. A simple test you might do is to charge your device, start the camera, and leave it until the battery dies; that's exactly how much the camera alone drains the battery (you can even measure the time). Of course, you would want to turn off everything else running on the device.
To answer your second question, it depends on how the application is built (many things can be optimized a lot!) and how it's going to be used. If the goal is for the application to be used continuously for hours and hours, then you need to wait for some other kind of technology to be invented (joking... I hope) or attach an extra power supply to your device. I think it's worth building the application and optimizing it as you go, and again at the end once everything is up and running. If the camera is the only issue, then I'm sure it's worth trying!

Lazy evaluation/initialization in GUI applications - non-disruptive ways to do it?

Suppose you are making a GUI application, and you need to load/parse/calculate a bunch of things before a user can use a certain tool, and you know what you have to do beforehand.
It then makes sense to start doing these calculations in the background over a period of time (as opposed to all in one go at start-up, or exactly when they are needed). However, doing too much in the background will hurt the responsiveness of the application.
Are there any standard practices for this kind of approach? Perhaps ways to detect low CPU load, or the user being idle, and execute code at those times? Arguments against this type of approach?
Thanks!
Without knowing your app or your audience, I can't give you specific advice.
My main argument against the approach is that unless you have a high-profile application which will see a lot of use by non-programmers, I wouldn't bother. This sounds like a lot of busy work that could be spent developing or refining features that actually allow people to do new things with your app.
That being said, if there is a reason to do it, there is nothing wrong with lazy-loading data.
The problem with waiting until idle time is that some people have programs like SETI@home installed on their computer, in which case their computer has little to no idle time. If loading at full throttle kills the responsiveness of your app, you could try injecting sleeps; this is what a lot of video games do when you specify a target frame rate, to avoid pegging the CPU. That gets the data loaded sooner than waiting for idle time would.
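A rough sketch of that sleep-injection idea (loadNextChunk is a hypothetical stand-in for whatever your app loads or parses):

    #include <atomic>
    #include <chrono>
    #include <thread>

    // Hypothetical stand-in for one small piece of the load/parse work;
    // returns false once everything is loaded.
    static bool loadNextChunk() {
        static int remaining = 50;
        return --remaining > 0;
    }

    std::atomic<bool> g_loadingDone{false};

    // Runs on a background thread started at app startup.
    static void backgroundLoader() {
        while (loadNextChunk()) {
            // Sleep between chunks so the UI thread keeps getting CPU time,
            // much like a game throttling to a target frame rate.
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
        g_loadingDone = true;
    }

    int main() {
        std::thread loader(backgroundLoader);
        // ... run the UI loop; check g_loadingDone before using the data ...
        loader.join();
        return 0;
    }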
If parts of your app depend on that data to work, and the user invokes such a part, you will have to abandon the lazy-loading approach and resume your full CPU/disk-taxing load. If it takes a long time, or makes the app unresponsive, you could display a loading dialog with a progress bar.
If your target audience will tend to have a multicore CPU and if your app startup and background initialization tasks won't contend for the same other resources (e.g. disk IO, network, ...) to the point of creating a new bottleneck, you might want to kick off a background thread to perform the initialization tasks (or even a thread per initialization task if you have several tasks that can run in parallel). That will make more efficient use of a multicore hardware architecture.
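Here is a sketch of that thread-per-task idea using std::async (the loadDictionary/loadIndex names are made up for illustration): each independent initialization task runs in parallel with startup, and the part of the app that needs a result simply waits on its future, which is also the natural place to show a progress dialog if the data isn't ready yet.

    #include <future>
    #include <string>
    #include <vector>

    // Placeholder initialization tasks; in a real app these would load/parse files.
    static std::vector<std::string> loadDictionary() { return {"alpha", "beta"}; }
    static std::vector<int>         loadIndex()      { return {1, 2, 3}; }

    int main() {
        // Kick off each independent task on its own background thread.
        auto dictFuture  = std::async(std::launch::async, loadDictionary);
        auto indexFuture = std::async(std::launch::async, loadIndex);

        // ... show the main window immediately ...

        // When the user invokes the tool that needs this data, get() either
        // returns instantly (already loaded) or blocks until it is ready -
        // the point at which you would show the progress dialog.
        auto dict  = dictFuture.get();
        auto index = indexFuture.get();
        return (dict.size() + index.size() > 0) ? 0 : 1;
    }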
You didn't specify your target platform, but it's exceedingly easy to achieve this in .NET and so I have begun doing it in many of my desktop apps.

Testing perceived performance

I recently got a shiny new development workstation. The only disadvantage of this is that the desktop apps I'm developing now run very, very fast, and so I fear that parts of the code that would be annoyingly slow on end users' machines will go unnoticed during my testing.
Is there a good way to slow down an application for testing? I've tried searching around, but all of the results I've been able to find seem pretty fiddly to set up (e.g., manually setting up a high-priority CPU-bound task on the same CPU core as the target app, or running a background process that rapidly interrupts and resumes the target app), and I don't know if the end result is actually a good representation of running on a slower computer (with its slower CPU, slower RAM, slower disk I/O...).
I don't think that this is a job for a profiler; I'm interested in the user's perception of end-to-end performance rather than in where the time goes for particular operations.
Set up a virtual machine and give it as little RAM as needed; you can also have it use 1, 2, or more CPUs. I like VirtualBox myself. Install your app and test with different RAM configs.
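For example, with VirtualBox you can throttle an existing (powered-off) VM from the command line; the VM name "TestVM" here is hypothetical, and --cpuexecutioncap additionally caps how much of each virtual CPU the guest may use:

    VBoxManage modifyvm "TestVM" --memory 512 --cpus 1
    VBoxManage modifyvm "TestVM" --cpuexecutioncap 50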
Personally, I'd get an old used crappy computer that is typical of what the users have and test on that. It should be cheap and you will see pretty fast how bad things are.
I think the only way to deal with this is through proper end-user testing, i.e. get yourself a "typical" system for testing and use that to identify any perceptible performance bottlenecks.
You can try out either Virtual PC or VMWare Player/Workstation, load an OS onto it, and then throttle back the resources. I know that with any of those tools you can reduce the memory to whatever you'd like. You can also specify the number of cores you want to use. You might even be able to adjust the clock speed in VMWare Workstation... I'm not sure.
I upvoted SQLMenace's answer, but I also think profiling needs to be mentioned: no matter how quickly the code is executing, you'll still see what's taking the most time. If you find yourself with some free time, profiling and investigating the results is a good way to spend it.

Scalable Ticketing / Festival Website

I've noticed that major music festivals (at least in Australia) and other events that experience a peak in traffic when tickets go on sale have huge problems keeping their websites running well. I've seen a few different techniques used to try to combat this, such as short sessions and virtual queues, but they don't seem to have much effect.
If you were to design a website to sell a lot of tickets in a short amount of time how would you handle scalability? What technologies and programming techniques would you use?
My experience is in the Microsoft stack, so answers in that area will be most useful to me, but I'd also like to hear how this sort of problem could be solved on other platforms.
I think the main problem is not that it's "hard" to make such a system scalable; it's that 99% of the time these sites don't have much traffic. It's not much good buying 50 front-end servers and 10 database servers if they're all idle 99% of the time.
Personally, I'd use something like Amazon EC2 or even Microsoft's new Azure service so that they can run with minimal capacity most of the time, and then ramp up just before a big event goes on sale.
