Do you think polling is a good approach for other devices, such as a disk or a network interface? (LC-3)

I'm learning LC-3, but there are some problems I cannot understand clearly.

Polling is a reasonable way to interact with a device, if the program doesn't have anything better to do than wait for that device.
This can be the case on simple systems, and some embedded systems work that way.
However, as we increase the system's workload, polling may become insufficiently responsive to the devices and/or programs, so other methods perform better.
Interrupts are an alternative to polling. Interrupt mechanisms support prioritization both in hardware and in software.
When multiple I/O devices become ready at the same time, the hardware prioritizes them, effectively by device speed. This means that fast devices can get the CPU's attention quickly in response to their data being ready.
When the CPU is servicing a low-priority device and a higher-priority device becomes ready, interrupt handling can be layered in software so that the higher-priority device interrupts the lower-priority (slower) device currently being serviced.
Interrupts also work well when many programs compete for the CPU.
In summary, the more devices and programs there are, the worse polling performs, potentially slowing the system down and even losing data from devices. Interrupts, when properly layered, can mitigate these issues.
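For a concrete picture of what polling costs, here is a minimal C++ sketch (not LC-3 assembly, and not from the original answer): a simulated "device" thread sets a ready flag after a delay while the main thread spins on it, doing no useful work. The flag, the delay, and all the names are illustrative only.

    // Illustrative polling sketch: the "CPU" busy-waits on a ready flag the way an
    // LC-3 program would poll a status register, wasting cycles until data arrives.
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    std::atomic<bool> device_ready{false};   // stands in for a status register's ready bit
    std::atomic<int>  device_data{0};        // stands in for the data register

    int main() {
        // Simulated device: becomes ready 100 ms from now.
        std::thread device([] {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            device_data.store(42);
            device_ready.store(true);
        });

        long spins = 0;
        while (!device_ready.load()) {       // polling: the CPU does nothing useful here
            ++spins;
        }
        std::printf("got %d after %ld wasted polls\n", device_data.load(), spins);

        device.join();
    }

With an interrupt-driven design, those wasted iterations disappear: the CPU runs other work (or sleeps) and is notified only when the device is actually ready.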

Related

Windows Timer Resolution vs Application Priority vs Processor scheduling

Please clarify the technical difference between these three things on MS Windows systems. The first is the timer resolution, which you can set and query via the undocumented ntdll.dll functions NtSetTimerResolution and NtQueryTimerResolution, or inspect with Sysinternals' clockres.exe tool.
This is one of the notorious tricks the Chrome browser used some time ago to perform better across the web (at the moment they have left the high-resolution trick in place for the Flash plugin only). https://bugs.chromium.org/p/chromium/issues/detail?id=153139
https://randomascii.wordpress.com/2013/07/08/windows-timer-resolution-megawatts-wasted/
In fact, Visual Studio and SQL Server pull the same trick in some cases. I personally feel it makes the whole system snappier and crisper, not slower, as many people out there warn.
What is the difference between the timer resolution and the application, I/O, and memory priorities (realtime/high/above normal/normal/low/background/etc.) that you can set via Task Manager, apart from the fact that the timer resolution applies to the whole system rather than a single application?
And what is the difference between them and the Processor scheduling option you can adjust via CMD > SystemPropertiesPerformance.exe -> Advanced tab? By default, the client OS versions (XP/Vista/7/8/8.1/10) favor the performance of programs, while the server versions (2k3/2k8/2k12/2k16) favor background services. How does this option interact with the two settings above?
timeBeginPeriod() is the documented API to do this. It is documented to affect the accuracy of Sleep(). Dave Cutler probably did not enjoy implementing it, but allowing Win 3.1 code to port made it necessary. The multimedia API back then was necessary to keep anemic hardware with small buffers going without stuttering.
It is very crude, but there is no other good way to do it in the kernel. The normal state for a processor core is to be stopped on a HLT instruction, consuming (almost) no power; the only way to revive it is with a hardware interrupt. Which is what this does: it cranks up the clock interrupt rate. The clock normally ticks 64 times per second; you can jack it up to 1000 with timeBeginPeriod, or 2000 with the native API.
And yes, that is pretty bad for power consumption. The clock interrupt handler also activates the thread scheduler, a fairly unsubtle chunk of code, which is why a Sleep() call can now wake up at (almost) the clock interrupt rate. This was tinkered with in Win 8.1, by the way; the only thing I noticed about the changes is that it is not quite as responsive anymore, and a 1 msec rate can cause delays of up to 2 msec.
Chrome is indeed notorious for ab/using the heck out of it. I always assumed it provided a competitive edge for a company that does big business in mobile operating systems and battery-powered devices. The guy who started this web site noticed something was wrong. The more responsible thing for a browser to do is to bump the rate up to 10 msec, which is necessary for accurate GIF animation. Multimedia playback does not need it anymore.
This otherwise has no effect at all on scheduling priorities. One detail I did not check is whether the thread quantum changes correspondingly (the number of ticks a thread may own a core before being evicted, 3 for a workstation). I suspect it does.
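As an illustration of the documented route mentioned above, here is a hedged C++ sketch that raises the timer resolution with timeBeginPeriod(), times a Sleep(1) call, and then restores the previous rate with timeEndPeriod(). The 1 ms request and the printed message are examples only; it links against winmm.lib.

    // Raise the system timer resolution, observe Sleep(1), then restore it.
    #include <windows.h>
    #include <mmsystem.h>
    #include <chrono>
    #include <cstdio>
    #pragma comment(lib, "winmm.lib")

    int main() {
        TIMECAPS tc;
        if (timeGetDevCaps(&tc, sizeof(tc)) != TIMERR_NOERROR) return 1;

        UINT period = tc.wPeriodMin;            // typically 1 ms
        if (timeBeginPeriod(period) != TIMERR_NOERROR) return 1;

        auto start = std::chrono::steady_clock::now();
        Sleep(1);                               // now wakes close to the requested 1 ms
        double ms = std::chrono::duration<double, std::milli>(
                        std::chrono::steady_clock::now() - start).count();
        std::printf("Sleep(1) took about %.2f ms at a %u ms timer period\n", ms, period);

        timeEndPeriod(period);                  // always undo what timeBeginPeriod did
        return 0;
    }

Run the same measurement without the timeBeginPeriod()/timeEndPeriod() pair and Sleep(1) will typically take around 15 ms, which is the default 64 Hz tick described above.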

Are beacons that work with Eddystone and iBeacon feasible?

I have concluded that Eddystone and iBeacon may each have their uses - at least if I want to have the best possible beacon support in my Android + iOS apps.
It seems to me it would be simplest to have physical beacons that work with both - but since beacons are very new to me, I am not sure if they will be feasible:
In use (i.e. no major delays in broadcast and similar)
With concerns to battery drain in the beacon
Since I have zero experience with beacons, it is difficult to evaluate the different maintenance aspects.
If you do not have day-to-day access to where the beacons are placed, is it possible to acquire beacons that can do both without requiring constant maintenance?
While both iBeacon and Eddystone work well on Android, there are real disadvantages to using Eddystone on iOS, as detection times in the background are much slower, and it is not possible to launch a non-running app into the background on iOS with Eddystone. For iOS, Eddystone is best used with foreground-only apps.
From a beacon hardware perspective, both formats are similar. Hardware beacons come in battery powered and wall-powered variants. For best responsiveness and best distance estimating capability, it is important that the beacons be configured to transmit at their highest advertising rate and power.
If you do not have physical access to the installed beacons to change batteries, use wall-powered or USB-powered beacons if at all possible. Be very wary of manufacturer claims of battery life lasting a year or more. These claims are often based on low advertising rates and power levels that save battery but adversely impact performance for many use cases. Also, if both Eddystone and iBeacon are configured for transmission at the same time, battery life is cut in half.
If you use wall- or USB-powered beacons, maintenance is minimal. The main problem is people unplugging your beacons (e.g. to free a wall socket for a vacuum cleaner or a spare USB port for another device). You can use locking covers over your beacons to help prevent this, but you cannot eliminate the problem entirely.

Phone battery use with camera turned on (AR)

I am hoping this has a relatively simple answer. I've always been interested in AR, and I've been debating tinkering with a possibly AR-driven UI for mobile.
I guess the only real question is: with the camera continuously turned on, how much battery would that use? I.e., would it be too much for something like this to be worth doing?
Battery drain is one of the biggest issues with smartphones nowadays. I'm not a specialist in power consumption or battery life, but anyone who owns and uses a smartphone (not only for calls, of course) would agree. There are many tips on the internet for increasing battery life. Fundamentally, the processes running on your device need energy, and that energy is provided by the battery.
To answer your question: I've been using smartphone cameras for AR applications for quite a long time now. The camera is a heavy process, and it does drain the battery faster than most other processes. On the other hand, you also have to consider the other processes running on your device while your AR application is in use. For example, your app might use the device's sensors (gyroscope, GPS, etc.); these drain the battery as well. A simple test you can do is to charge your device, start the camera, and leave it on until the battery dies; that shows exactly how much the camera alone drains the battery (you can even measure the time). Of course, you may want to turn off everything else running on the device first.
To answer your second question: it depends on how the application is built (many things can be optimized a lot!) and how it is going to be used. If the application is meant to be used continuously for hours and hours, then you either need to wait for some new kind of technology to be invented (joking... I hope) or attach an extra power supply to your device. I think it's worth building the application, optimizing it as you go, and again at the end when everything is up and running. If the camera is the only issue, then I'm sure it's worth trying!

Battery effects of web apps?

I am learning about mobile web apps, and they look interesting. Among other things, I am wondering whether there is a significant difference in battery consumption between native apps and web apps (PhoneGap, Intel XDK, etc.)?
There can be a significant difference due to use of the transceivers (i.e. the receiver and transmitter on your phone/tablet). On any mobile device, whether notebook, tablet, or phone, the processor and peripherals drop into power-conserving sleep states. Processor sleep states are called C-states; peripheral sleep states are called D-states. That is why battery life is greater when your phone is idle: the longer the idle periods, whether for the processor or a peripheral, the better the battery life.
What does this mean for web apps versus native apps? Native apps will use more of the processor but less of the expensive peripherals (read that as the transceivers, including GPS). Both the processor and the transceivers are power hogs. So here's the bottom line:
If your web app does a lot of cloud access, it's going to pull down the battery. This is why using the GPS to give you turn-by-turn instructions kills your battery life (and makes your phone a little heater).
If your native app never goes to sleep or gets any rest (e.g. it does polling instead of using interrupts, or if the interrupt period is too small), you'll pull down your battery.
So the ideal app balances native and web computation to
minimize processor usage (more specifically, maximize the periods when the processor is idle)
minimize peripheral usage (read that as minimize the number of web accesses)
As you can see, these goals are a little contradictory. From a designer's perspective, you want to move as much computation as possible to the cloud while keeping data as local as possible.
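To illustrate the "don't poll, wait" point above, here is a small C++ sketch of my own (not from the answer) in which a worker thread blocks on a condition variable, the software analogue of waiting for an interrupt, so the core can stay in a low-power idle state until data actually arrives. The names and the two-second delay are illustrative only.

    // Blocking wait instead of polling: the worker sleeps in the kernel and burns
    // no CPU until it is notified that data is ready.
    #include <chrono>
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <thread>

    std::mutex m;
    std::condition_variable cv;
    bool data_ready = false;

    int main() {
        std::thread worker([] {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [] { return data_ready; });  // sleeps; no busy-waiting
            std::puts("worker: woke up only when there was work to do");
        });

        std::this_thread::sleep_for(std::chrono::seconds(2));  // pretend the network reply took 2 s
        {
            std::lock_guard<std::mutex> lock(m);
            data_ready = true;
        }
        cv.notify_one();
        worker.join();
    }

A version that instead checked data_ready in a loop every few milliseconds would keep waking the processor out of its C-states, which is exactly the kind of battery drain described above.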

Dynamic frequency scaling

I would like to adjust the CPU frequency; in other words, I am looking for an API or C++ code for frequency scaling in Windows.
In Windows, you can call SetPriorityClass to set the priority of a process.
You can also set the priority of a thread by calling SetThreadPriority.
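For reference, a minimal sketch of those two calls; note that they adjust scheduling priority, not the CPU clock frequency, so by themselves they do not answer the frequency-scaling question. The chosen priority values are examples only.

    // Lower the scheduling priority of the current process and thread.
    #include <windows.h>
    #include <cstdio>

    int main() {
        // Lower the whole process's priority class...
        if (!SetPriorityClass(GetCurrentProcess(), BELOW_NORMAL_PRIORITY_CLASS))
            std::printf("SetPriorityClass failed: %lu\n", GetLastError());

        // ...and lower the current thread within that class.
        if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL))
            std::printf("SetThreadPriority failed: %lu\n", GetLastError());

        return 0;
    }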
The CPU clock speed is not something for which there are just some simple instructions to execute. The clock speed is controlled by the motherboard chipset, and that in turn is controlled by a motherboard-specific device driver.
You can get some control over the clock speed by using the Windows power-management settings. The usual way to slow things down and save energy is to choose a power plan on that basis. Modern laptop, tablet, and phone computers have extremely sophisticated frequency-scaling algorithms, but you can hint them in the direction of using less power.
You may be able to automate the operation of these Windows programs, if that's all you need.
Many motherboards come with the ability to overclock, and a utility to control it. If you have such a motherboard you may be able to find a way to automate its control program, or it may provide an API. It will not be a generic solution, but one highly specific to the motherboard. Check with your motherboard supplier.
Is there a general Windows capability to do this? Not so far as I know, but there could be something hiding in there somewhere. If it exists, it will be a privileged call to a device driver requiring admin rights. My bet is that it doesn't.
You can use PowerWriteDCValueIndex() / PowerWriteACValueIndex() together with PowerSetActiveScheme(NULL, pwrGUID).
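Here is a hedged sketch of that route: it reads the active power scheme, caps the "maximum processor state" setting at 50% for both AC and DC, and re-applies the scheme. The 50% figure is an example, writing the plan may require administrator rights, and the actual effect depends on the machine's power-management driver. It links against PowerProf.lib.

    // Cap the maximum processor state of the active power plan (error checks kept minimal).
    #include <initguid.h>   // define (not just declare) the power-setting GUIDs below
    #include <windows.h>
    #include <powrprof.h>
    #include <cstdio>
    #pragma comment(lib, "PowerProf.lib")

    int main() {
        GUID* scheme = nullptr;
        if (PowerGetActiveScheme(NULL, &scheme) != ERROR_SUCCESS) return 1;

        const DWORD maxProcessorStatePercent = 50;   // example value, in percent
        PowerWriteACValueIndex(NULL, scheme,
                               &GUID_PROCESSOR_SETTINGS_SUBGROUP,
                               &GUID_PROCESSOR_THROTTLE_MAXIMUM,
                               maxProcessorStatePercent);
        PowerWriteDCValueIndex(NULL, scheme,
                               &GUID_PROCESSOR_SETTINGS_SUBGROUP,
                               &GUID_PROCESSOR_THROTTLE_MAXIMUM,
                               maxProcessorStatePercent);

        // Re-apply the scheme so the new index takes effect.
        PowerSetActiveScheme(NULL, scheme);

        LocalFree(scheme);
        std::puts("maximum processor state capped at 50% (example)");
        return 0;
    }

This does not set an exact frequency; it adjusts the same "maximum processor state" knob exposed in the Windows power plan UI and leaves the actual clock selection to the platform's power management.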

Resources