Is there any relation between the advertising interval, walking speed, and window size of a moving average filter? - ibeacon

My beacons have an advertisement interval of 330 ms. I use an iOS device to scan for the advertisement packets at a rate of about 1 scan per second. I want to use a moving average filter to smooth the fluctuating RSSI values. Considering a walking speed of 1.2 m/s and the advertisement interval of 330 ms, what should the window size of the moving average filter be? Is there any mathematical relationship between them?
Thank you.

There is no one correct answer here. It is a trade-off between noise in the distance estimate and lag time.
The larger (and longer) your statistical sample, the more lag there will be in the running average. A 20 second window will tell you where you were on average over the last 20 seconds, and filter out a lot of noise. A 5 second running average will tell you where you were on average over the last 5 seconds, but with much more noise in the calculation.
How much lag you can tolerate and how much noise you can tolerate both depend on your use case. Use cases that are very time sensitive may sacrifice accuracy for the sake of less lag. Conversely, use cases needing greater accuracy may accept more lag in order to filter out more noise in the estimate.
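To make the trade-off concrete: at roughly one scan per second, a window of N samples averages over about N seconds of readings, which at 1.2 m/s corresponds to roughly 1.2 × N metres of movement. Here is a minimal sketch of such a running average over RSSI readings; the window size and the sample values are made up for illustration, not prescribed by the question:

#include <cstddef>
#include <deque>
#include <iostream>
#include <numeric>

// Simple moving average over the last `window` RSSI readings (in dBm).
class MovingAverage {
public:
    explicit MovingAverage(std::size_t window) : window_(window) {}

    double add(double rssi) {
        samples_.push_back(rssi);
        if (samples_.size() > window_) samples_.pop_front();  // drop the oldest reading
        return std::accumulate(samples_.begin(), samples_.end(), 0.0) /
               static_cast<double>(samples_.size());
    }

private:
    std::size_t window_;
    std::deque<double> samples_;
};

int main() {
    // 5 samples at ~1 scan/s is a ~5 s window, i.e. ~6 m of walking at 1.2 m/s.
    MovingAverage avg(5);
    for (double rssi : {-61.0, -72.0, -65.0, -80.0, -70.0, -68.0})
        std::cout << avg.add(rssi) << '\n';
}

Making the window longer filters out more noise but makes the estimate lag further behind the walker; making it shorter does the opposite.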

Related

USRP N320 recording the edges

When I record a signal with the USRP N320 SDR, it has some problems at the edges of the spectrum. For example, when I choose a sample rate of 50 Msps, the first 2 MHz and the last 2 MHz of the spectrum give wrong results: when the device sees a pulse at the edges, it decreases its power and shifts its frequency a little bit. The inner 46 MHz of bandwidth works perfectly.
Sample rate: 50 Msps, Properly working bandwidth: 46 MHz
Sample rate: 100 Msps, Properly working bandwidth: 90 MHz
Sample rate: 200 Msps, Properly working bandwidth: 180 MHz
I tried to filter the edges with a bandpass filter, but then I get the OOOOOO problem, even if I choose a sample rate of 50 Msps. Normally I can record successfully without the bandpass filter even at a sample rate of 200 Msps.
Is there a solution to record the edges correctly, or to filter them without dropping samples?
First off:
I tried to filter the edges with a bandpass filter, but then I get the OOOOOO problem
means that your computer isn't fast enough to apply the filter to the data stream. That can mean one of two things: you've designed a filter that's longer than necessary and could be shorter while still doing what you want, or what you want genuinely requires a filter of that length and you will need to find a faster PC (hard) or use a faster filter implementation (did you try the FFT filters?).
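As a back-of-envelope illustration of why the filter length matters (the tap count below is a made-up example, not the asker's actual filter): a direct-form FIR costs one multiply-accumulate per tap per sample, while an overlap-save FFT filter costs only a few multiplies per sample, roughly proportional to the log of the FFT size:

#include <cmath>
#include <cstdio>

int main() {
    const double fs   = 50e6;  // 50 Msps input stream
    const int    taps = 2000;  // hypothetical FIR length for a sharp band edge

    // Direct-form FIR: one multiply-accumulate per tap, per sample.
    const double fir_macs_per_s = fs * taps;  // ~1e11 MAC/s

    // Overlap-save FFT filtering with block size N >= 2 * taps costs roughly
    // 2*log2(N) + 1 complex multiplies per sample (two FFTs plus the per-bin
    // product), ignoring constant factors.
    const int    N              = 1 << static_cast<int>(std::ceil(std::log2(2.0 * taps)));
    const double fft_macs_per_s = fs * (2.0 * std::log2(static_cast<double>(N)) + 1.0);

    std::printf("direct FIR : %.2e MAC/s\n", fir_macs_per_s);  // far beyond a desktop CPU
    std::printf("FFT filter : %.2e MAC/s\n", fft_macs_per_s);  // ~1.3e9 MAC/s, feasible
}

That gap is why trying an FFT-based filter implementation is worth doing before buying a faster PC.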
For example, when I choose a sample rate of 50 Msps, the first 2 MHz and the last 2 MHz of the spectrum give wrong results.
This is not surprising! Remember that anything with an ADC needs an anti-aliasing filter on the analog side, and such filters can't be arbitrarily sharp. So, necessarily, the spectrum at the edge of your band gets somewhat dampened, and there's a bit of aliasing there. The dampening you could counteract by applying an equalizing filter on your PC (which would necessarily be more compute-intensive than what happens on the USRP), but the aliasing of the lowest frequencies onto the highest, and vice versa, caused by the finite steepness of the analog anti-aliasing filter, you cannot repair. That's the signal-processing blues for any kind of acquisition device.
There's one trick, though, which the USRP uses: when your requested sampling rate is lower than the ADC's sampling rate, the USRP can internally apply a (better!) digital filter to select that target sampling rate as bandwidth, and decimate to it.
Thus, depending on the ratio of ADC rate to output sampling rate (in UHD, the ADC rate is called the "master clock rate", MCR), there's further digital filtering and decimation going on in the digital logic inside the N320. These filters also can't be infinitely sharp – and you might be seeing that.
Generally, you'd want the decimation between the MCR and the sampling rate you've requested to be an even number, and not too large. I don't have the N320's digital signal processing architecture in my head right now, but I bet using a decimation that's a multiple of 4 or even 8 is a good move – you get to use the nicer half-band filters then.
Modern UHD also has the filter API, with which you can work with these digital filters manually; though this is rarely what you really want to do here.

Why is the actual runtime for a larger search value sometimes smaller than for a lower search value in a sorted array?

I executed a linear search on an array containing all unique elements in the range [1, 10000], sorted in increasing order, for every search value from 1 to 10000, and plotted runtime against search value.
Upon closely analysing a zoomed-in version of the plot, I found that the runtime for some larger search values is smaller than for some lower search values, and vice versa.
My best guess is that this phenomenon is related to how the CPU processes data using main memory and caches, but I don't have a firm, quantifiable reason to explain it.
Any hint would be greatly appreciated.
PS: The code was written in C++ and executed on a Linux virtual machine with 4 vCPUs on Google Cloud. The runtime was measured using the C++ chrono library.
The CPU cache size depends on the CPU model, and there are several cache levels, so your experiment should take all of those factors into account. L1 cache is small (often a few tens of KiB, sometimes as little as 8 KiB), so it is comparable to or smaller than your 10000-element array. But I don't think this is cache misses: even a cache miss costs on the order of 100 ns, which is much smaller than the gap between the lowest line and the second line, which is about 5 µs. I suppose that second line (the cloud) comes from context switching. The longer the task, the more likely a context switch is to occur during it, which is why the cloud on the right side is thicker.
Now for the zoomed-in figure. As Linux is not a real-time OS, its time measurement is not very reliable; IIRC its minimal reporting unit is a microsecond. Now, if a certain task takes exactly 15.45 microseconds, then the value reported for it depends on when it started. If the task started exactly on a clock tick, the reported time would be 15 microseconds; if it started when the internal clock was 0.1 microseconds into a tick, you would get 16 microseconds. What you see on the graph is an ideal straight line snapped onto a discrete-valued axis: the duration you get is not the actual task duration, but the real value plus the task's start offset within a microsecond (which is uniformly distributed, ~U[0, 1]), all rounded to the nearest integer.
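A quick simulation of that rounding effect (the 15.45 µs task length is taken from the example above; the timer model, with both timestamps rounded to whole microseconds, is an assumption made for illustration):

#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const double true_duration_us = 15.45;  // fixed "real" task length
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> offset(0.0, 1.0);  // start offset within 1 us, ~U[0,1]

    const int trials = 100000;
    int reported15 = 0, reported16 = 0;
    for (int i = 0; i < trials; ++i) {
        const double t_start = offset(rng);
        const double t_end   = t_start + true_duration_us;
        // Both timestamps are reported rounded to the nearest microsecond.
        const long reported = std::lround(t_end) - std::lround(t_start);
        if (reported == 15) ++reported15; else ++reported16;
    }
    std::printf("15 us: %.1f%%   16 us: %.1f%%\n",
                100.0 * reported15 / trials, 100.0 * reported16 / trials);
    // Prints roughly 55% vs 45%; the mean of the reported values recovers 15.45 us.
}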

Throughput does not change as expected when mobile UEs increase their distance

I'm trying to measure throughput while the distance between two mobile UEs changes. I'm using OMNeT++ and measuring throughput in the MAC layer.
It is expected that when the distance increases, throughput should decrease (the well-known inverse relationship), but in my simulation that doesn't happen. Does anyone have any idea why this occurs in OMNeT++?
I also attached a chart of the throughput over time.
Thanks
@thardes2 OK, I'm using OMNeT++. In my scenario there is 1 base station and 2 UEs, with no mobility during the simulation. I run the simulation 20 times (at 10, 20, 30, ..., 200 m) and measure the throughput, which is calculated in the MAC layer in LteHarqBufferRx.cc (as total received bytes / simulation time):
// Throughput sample: total bytes received so far, divided by the elapsed
// simulation time since the end of the warm-up period.
double tputSample = (double)totalRcvdBytes_ / (NOW - getSimulation()->getWarmupPeriod());
macOwner_->emit(macThroughput_, tputSample);
I used the SinglePair-UDP-Infra scenario from the D2D sample in SimuLTE and everything is working, but my issue is: why doesn't the throughput decrease when the distance increases? At some longer distances the throughput even increases! Am I calculating throughput in the wrong place?
Thank you for your help

ticks and hours in NetLogo

Is there some type of conversion from ticks to a unit of real time? I would like my program to simulate a 48-hour experiment. Does anyone have any suggestions on how to do this? Thanks!
How quickly do things change in your experiment? From system dynamics, a good rule of thumb is to have a discrete clock tick 4 times during the smallest interval of real time in which something meaningful happens in the modelled system. For example, if you would expect to see changes every minute, then you would have 4 ticks each minute (and your ABM rules about updating the system would be calculated on the basis of 15 seconds) and then run the simulation for 11,520 (=48x60x4) ticks.

Are cycles in computing equal to time?

I have a book describing energy-saving compiler algorithms, with a variable that uses "cycles" as the unit of measure for the "distance" until something happens (an HDD being put into idle mode).
But the results for the efficiency of the algorithm have just "time" on one axis of a diagram, not "cycles". So is it safe to assume (i.e., is my understanding of the cycle concept correct) that unless something like dynamic frequency scaling is used, cycles are equivalent to real physical time (seconds, for example)?
Yes: at a fixed clock frequency, cycles map directly onto real physical time. For example, a CPU with a 1 GHz clock executes 1,000,000,000 cycles per second, which is the same as 1/1,000,000,000 of a second per cycle, or in other words one cycle per nanosecond. With dynamic frequency scaling, the conversion changes according to whatever the frequency is at any particular moment.
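As a small worked example of that conversion (the clock rate and cycle count below are illustrative, not taken from the book):

#include <cstdio>

int main() {
    const double    freq_hz = 1e9;      // 1 GHz clock: one cycle per nanosecond
    const long long cycles  = 2500000;  // hypothetical "distance" until the HDD is idled

    const double seconds = cycles / freq_hz;  // only valid at a fixed clock frequency
    std::printf("%lld cycles at %.0f Hz = %.6f s (%.3f ms)\n",
                cycles, freq_hz, seconds, seconds * 1e3);
    // Prints: 2500000 cycles at 1000000000 Hz = 0.002500 s (2.500 ms)
}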
