I have been successfully using the didDiscover callback to read data, via the scan response, from a remote sensor.
I depend on an update rate in excess of 7 Hz. That is the rate I see for the first 1 minute 54 seconds after the call to scanForPeripherals; beyond that time, however, the update rate drops to around 3 Hz. The peripherals are still advertising at the same rate.
This behaviour is 100% repeatable.
I assume this is behaviour imposed by Apple to preserve battery life?
Anyone know how to disable it?
So it would appear that simply stopping and then instantly restarting the scanning process, periodically within the 115 s time frame, has the desired effect: the scanning rate stays at the pre-115 s rate.
A crude but effective solution.
My system needs to be in deep sleep mode and wake up every second. How can I predict the boot time, and make it as short as possible? I'm a bit surprised by the poor performance of the ESP32's low-power mode: 150 uA in deep sleep, and then a forced reboot on wake sounds crazy to me. Am I missing something?
Waking up takes around 200-300 ms (in my projects, with boot messages switched off). And then you have to execute initialization and application code, which doesn't make sense every second if power is an issue. The ESP32 has a lot of advantages, but it's power hungry compared to a PIC microcontroller.
In one of my projects the ESP wakes up and initiates an I2C request to a sensor, then has to wait 5 s before processing. I investigated whether sleeping for those 5 s was better for power usage, but it wasn't. Slowing down the processor speed is more effective for those moments, but consumption is still in the mA range.
I sped it up from 297 ms to 47 ms.
From menuconfig:
- Bootloader log verbosity: No output. Improves ~100 ms.
- Default log verbosity: No output. Improves ~110 ms.
- Skip image validation when exiting deep sleep: Yes. Improves ~40 ms.
In theory it can be as fast as 20 ms. Anything else to improve?
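For reference, here is a minimal ESP-IDF sketch of the wake-every-second cycle being discussed, assuming the menuconfig options above are already applied (the work done before sleeping is a placeholder):

    // Minimal ESP-IDF deep-sleep cycle: do the one-shot work, arm the RTC
    // timer, and sleep. On wake the chip reboots through the bootloader,
    // which is why the boot-time savings above matter so much.
    #include "esp_sleep.h"

    extern "C" void app_main(void)
    {
        // ... one-shot work here: read the sensor, buffer/transmit the result ...

        esp_sleep_enable_timer_wakeup(1000000ULL); // wake after 1 s (in microseconds)
        esp_deep_sleep_start();                    // enters deep sleep; never returns
    }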
I have a windowed WinAPI/OpenGL app. The scene is drawn rarely (compared to games) in WM_PAINT, mostly triggered by user input: WM_MOUSEMOVE, clicks, etc.
I noticed that when the scene has not been moving (the application is "idle") and some mouse action then starts, the first frame is drawn with an unpleasant delay, around 300 ms. The following frames are fast again.
I implemented a 100 ms timer which only calls InvalidateRect, which is later followed by WM_PAINT and the scene draw. This "fixed" the problem, but I don't like the solution.
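Roughly, the workaround looks like this (a sketch, not my exact code; the timer ID is arbitrary):

    // Keep-alive timer: fire WM_TIMER every 100 ms and just request a repaint.
    const UINT_PTR KEEPALIVE_TIMER = 1;
    SetTimer(hWnd, KEEPALIVE_TIMER, 100, NULL);

    // In the window procedure:
    case WM_TIMER:
        if (wParam == KEEPALIVE_TIMER)
            InvalidateRect(hWnd, NULL, FALSE);  // queues WM_PAINT -> scene redraw
        return 0;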
I'd like to know why this is happening, and also some tips on how to tackle it.
Does the OpenGL render context save resources when not used? Or could this be caused by some system behaviour, like processor underclocking or energy saving? (Although I noticed that the processor runs underclocked even when the app is under "load".)
This sounds like the Windows virtual memory system at work. The sum of the memory use of all active programs is usually greater than the amount of physical memory installed on your system, so Windows swaps idle processes out to disc according to whatever rules it follows, such as the relative priority of each process and the amount of time it has been idle.
You are preventing the swap-out (and the delay) by artificially making the program active every 100 ms.
If a swapped-out process is reactivated, it takes a little time to retrieve the memory contents from disc and restart the process.
It's unlikely that OpenGL is responsible for this delay.
You can improve the situation by starting your program with a higher priority.
https://superuser.com/questions/699651/start-process-in-high-priority
You can also use the VirtualLock function to prevent Windows from swapping out part of the memory, but this is not advisable unless you REALLY know what you are doing!
https://msdn.microsoft.com/en-us/library/windows/desktop/aa366895(v=vs.85).aspx
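For illustration, a hedged sketch of both suggestions applied from inside the program itself (the buffer and its size are placeholders; error handling omitted):

    #include <windows.h>

    // Raise our own priority and pin a critical buffer in physical memory.
    // Use VirtualLock sparingly: locked pages are unavailable to all other
    // processes until unlocked.
    void ReduceSwapOutRisk(void* criticalBuffer, SIZE_T bytes)
    {
        SetPriorityClass(GetCurrentProcess(), ABOVE_NORMAL_PRIORITY_CLASS);
        VirtualLock(criticalBuffer, bytes);  // fails if the working-set quota is exceeded
    }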
EDIT: You can certainly improve things by adding more memory; 4 GB sounds low for a modern PC, especially if you run Chrome with multiple tabs open.
If you want to be scientific before spending any hard-earned cash :-), open Performance Monitor and look at Cache Faults/sec. This shows the swap activity on your machine. (I have 16 GB on my PC, so this number is mostly very low.) To make sure you learn something, check Cache Faults/sec before and after the memory upgrade, so you can quantify the difference!
Finally, there is nothing wrong with the solution you already found: kick-starting the graphics app every 100 ms or so.
The problem was in the NVIDIA driver's global 3D setting "Power management mode".
The options "Optimal Power" and "Adaptive" save power and cause the problem.
Only "Prefer Maximum Performance" does the right thing.
There are plenty of examples in Windows of applications triggering code at fairly high and stable framerates without spiking the CPU.
WPF/Silverlight/WinRT applications can do this, for example. So can browsers and media players. How exactly do they do this, and what API calls would I make to achieve the same effect from a Win32 application?
Clock polling doesn't work, of course, because that spikes the CPU. Neither does Sleep(), because you only get around 50ms granularity at best.
They are using multimedia timers. You can find information on MSDN here.
Only the view is invalidated (e.g. with InvalidateRect) on each multimedia timer event. Drawing happens in the WM_PAINT / OnPaint handler; see the sketch below.
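A sketch of that arrangement, assuming an existing window handle (link against winmm.lib; the ~60 Hz period is illustrative):

    #include <windows.h>
    #pragma comment(lib, "winmm.lib")

    // Runs on a timer thread: do no drawing here, just request a repaint.
    static void CALLBACK OnTimerTick(UINT, UINT, DWORD_PTR user, DWORD_PTR, DWORD_PTR)
    {
        InvalidateRect(reinterpret_cast<HWND>(user), NULL, FALSE);
    }

    void StartFrameTimer(HWND hwnd)
    {
        timeBeginPeriod(1);                 // request 1 ms system timer resolution
        timeSetEvent(16,                    // ~60 Hz period, in milliseconds
                     1,                     // 1 ms callback resolution
                     OnTimerTick,
                     reinterpret_cast<DWORD_PTR>(hwnd),
                     TIME_PERIODIC | TIME_CALLBACK_FUNCTION);
    }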
Actually, there's nothing wrong with Sleep.
You can use a combination of QueryPerformanceCounter/QueryPerformanceFrequency to obtain very accurate timings, and on average you can create a loop which ticks forward exactly when it's supposed to.
I have never seen Sleep miss its deadline by as much as 50 ms. However, I've seen plenty of naive timers that drift: they accumulate a small delay and consequently update at noticeably irregular intervals. This is what causes uneven framerates.
If you play a very short beep on every nth frame, this is very audible.
Also, logic and rendering can be run independently of each other. The CPU might not appear to be that busy, but I bet the GPU is hard at work.
Now, about not hogging the CPU. CPU usage is just a breakdown of the CPU time spent by a process in a given sample (the thread scheduler actually tracks this). If you have a target of 30 Hz for your game, you're limited to 33 ms per frame, otherwise you'll lag behind (too slow a CPU or too slow code). If you can't hit this target you won't be running at 30 Hz, and if you finish under 33 ms you can yield processor time, effectively freeing up resources, as in the sketch below.
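To make that concrete, here is a sketch of a drift-free fixed-step loop built on QueryPerformanceCounter, sleeping away the spare time in each 33 ms budget (the 30 Hz target and loop body are illustrative):

    #include <windows.h>

    void RunAt30Hz(volatile bool& running)
    {
        LARGE_INTEGER freq, now;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&now);

        const LONGLONG ticksPerFrame = freq.QuadPart / 30;   // ~33 ms per frame
        LONGLONG nextDeadline = now.QuadPart + ticksPerFrame;

        while (running)
        {
            // Update(); Render();

            QueryPerformanceCounter(&now);
            LONGLONG remaining = nextDeadline - now.QuadPart;
            if (remaining > 0)  // finished early: yield the CPU until the deadline
                Sleep(static_cast<DWORD>(remaining * 1000 / freq.QuadPart));

            // Advance by a whole frame relative to the previous deadline, so
            // small Sleep overshoots never accumulate into drift.
            nextDeadline += ticksPerFrame;
        }
    }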
This might be an interesting read for you as well.
On a side note, instead of yielding time you could effectively be doing prep work for future computations. Some games, when not under the heaviest of loads, actually do things like sorting and memory defragmentation, a little bit here and there; it adds up in the end.
Hello, I'm trying to read some data from the serial port and record it to the hard drive. I'm using Visual C++ Express, and made an application using Windows Forms.
The program basically sends a byte ("s") every t seconds; this triggers the device connected to the serial port to send back 3 bytes. The baud rate is currently 38400 bps. The time t is controlled by the Visual C++ Timer class.
The problem I have is that if I set the timer's tick interval to 1 ms, the data is not recorded every 1 ms but around every 15 ms. I've read that the resolution of the timer may be limited to about 15 ms, but I'm not sure. Anyhow, how can I make the timer event trigger every 1 ms instead of every 15 ms? Or is there another way to read the serial port data faster? I'm looking for 500 Hz or higher.
The device connected to the serial port is a 32-bit microcontroller whose program I also control, so I can easily change it; I just can't figure out another way to make this transmission.
Thanks for any help!
Windows is not a real-time OS, and regardless of what period your timer is set to, there are no guarantees that it will be consistently maintained. Moreover, the OS clock resolution is dependent on the hardware vendor's HAL implementation and varies from system to system. Multimedia timers have higher resolution, but the real-time guarantees are still not there.
Apart from that, you need to do a little arithmetic on the timing you are trying to achieve. At 38400,N,8,1 each character takes 10 bits on the wire, so 38400 / 10 = 3840 characters per second, i.e. at most 3.84 characters per millisecond. Your timing is tight in any case, since you are pinging with one character and expecting three characters in return. You can certainly go no faster without increasing the bit rate.
A better solution would be to have the PC host send the required reporting period to the embedded target once, then have the embedded target perform its own timing so that it autonomously emits data every period until the PC requests that it stop or sends a different period. Your embedded system is far more capable of maintaining hard real-time constraints.
Alternatively, you could simply have your device perform its sample and transmit the three characters, with the timing determined entirely by the transmission time of those three characters, streaming the data constantly. This will give you a sample period of 781.25 us (1280 Hz) without any triggering from the PC, and it will be truly periodic and jitter-free. If you want a faster sample rate, simply increase the bit rate.
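A hedged sketch of that streaming approach on the target, where sample_sensor and uart_write_blocking are hypothetical stand-ins for whatever the microcontroller's SDK provides:

    #include <stdint.h>

    // Hypothetical SDK calls: fill a 3-byte reading, and block until the
    // UART has clocked the bytes out.
    void sample_sensor(uint8_t out[3]);
    void uart_write_blocking(const uint8_t* data, int len);

    void stream_forever(void)
    {
        for (;;)
        {
            uint8_t frame[3];
            sample_sensor(frame);           // acquire the 3-byte reading
            uart_write_blocking(frame, 3);  // 3 chars x 10 bits / 38400 bps = 781.25 us
            // The blocking transmit itself paces the loop, so samples leave
            // at a jitter-free 1280 Hz with no timer involved at all.
        }
    }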
The Windows Forms timer resolution is about 15-20 ms. You can try a multimedia timer; see the timeSetEvent function.
http://msdn.microsoft.com/en-us/library/windows/desktop/dd757634%28v=vs.85%29.aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/dd743609%28v=vs.85%29.aspx
Timer precision is set by the uResolution parameter (0 = maximum possible precision). In any case, you cannot get a timer callback every millisecond: Windows is not a real-time system. A sketch follows.
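Here is that usage in outline, with send_trigger_byte as a hypothetical stand-in for the app's serial-write call (link against winmm.lib):

    #include <windows.h>
    #pragma comment(lib, "winmm.lib")

    void send_trigger_byte();  // hypothetical: writes "s" to the open COM port

    // Runs on a timer thread, not the UI thread.
    static void CALLBACK OnTick(UINT, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR)
    {
        send_trigger_byte();
    }

    UINT StartPolling()
    {
        // uDelay = 1 ms period; uResolution = 0 asks for maximum precision.
        // Even so, Windows gives no hard real-time guarantee on the callback.
        return timeSetEvent(1, 0, OnTick, 0, TIME_PERIODIC | TIME_CALLBACK_FUNCTION);
    }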
Reading Jeff Wilcox on frame rate counters, I realized my application rarely hits 60 fps. I'm not satisfied with the overall performance of my app (compared to its iPhone counterpart), but the numbers seem weird to me.
When the app is doing nothing, even just after launch, it's sometimes at 0 fps. And the highest I hit is 50 fps.
Overall, my application is not blazing fast, but not really slow either. So how should I interpret the numbers? How can I spot the issue that gives my app a bad fps?
A low frame rate doesn't necessarily indicate poor performance.
If you're testing on an actual device and you see poor performance, an investigation may uncover a problem that also happens to impact the frame rate.
Don't worry too much about getting a high frame rate all the time. Focus on actual performance experienced by the user.
If the actual performance is poor and the frame rate is low, that's when you should worry about the frame rate.
What's important is testing on an actual device and what performance is like there.
Jeff Wilcox notes in his post that:
Frame rate counters may be 0 when there is no animation being updated on the thread at any particular moment. You can add a very simple, continually animating and repeating animation to your application during development & testing if you want to ensure that there is always some frame rate value available.
So the 0 fps reading does not seem to be an issue, since no screen updates need to be rendered.