How do I get the sampling rate of a canvas element? - html5-canvas

I am currently investigating preprocessing for on-line handwriting recognition (see http://write-math.com). One interesting property of input devices is the sampling rate, i.e. the rate at which I get onmousemove and similar events.
I can record them and see that the time delta between two events varies from 1.00 ms to 700.00 ms, but is on average 27.34 ms for this recording.
(Sampling rate is measured in points per second, so the sampling rate here ranges from 1 point / 700 ms ≈ 1.43 points/s to 1 point / 1.00 ms = 1000 points/s, or 1 point / 27.34 ms ≈ 36.6 points/s for the average case.)
Is there any way to get this information from the client directly? Are there devices where the sampling rate is known? How does JavaScript internally decide how often to fire those events? Can the "event firing rate" be increased / decreased?
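For reference, this is roughly how such deltas can be recorded (a minimal TypeScript sketch; the canvas id "sketchpad" and the helper names are illustrative):

// Record the time between consecutive mousemove events on a canvas.
const canvas = document.getElementById("sketchpad") as HTMLCanvasElement;
const deltas: number[] = [];
let lastTimestamp: number | null = null;

canvas.addEventListener("mousemove", (event: MouseEvent) => {
  // event.timeStamp is in milliseconds
  if (lastTimestamp !== null) {
    deltas.push(event.timeStamp - lastTimestamp);
  }
  lastTimestamp = event.timeStamp;
});

// Average delta in ms; the estimated sampling rate is 1000 / averageDelta() points per second.
function averageDelta(): number {
  if (deltas.length === 0) return NaN;
  return deltas.reduce((sum, d) => sum + d, 0) / deltas.length;
}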

Mousemove firing rates for a particular browser are stable over a sufficient sample size, but the timing of any individual mousemove event is affected by non-canvas activity (garbage collection, background tasks, etc.).
I don't know of any browser that allows adjustment of the mousemove timing; it is all defined internally.
Interesting project!

Related

USRP N320 recording the edges

When I record a signal with a USRP N320 SDR, there are problems at the edges of the spectrum. For example, when I choose a sample rate of 50 Msps, the first 2 MHz and the last 2 MHz of the spectrum give wrong results: when a pulse appears at the edges, its power is reduced and its frequency is shifted a little. The inner 46 MHz of bandwidth works perfectly.
Sample rate: 50 Msps, Properly working bandwidth: 46 MHz
Sample rate: 100 Msps, Properly working bandwidth: 90 MHz
Sample rate: 200 Msps, Properly working bandwidth: 180 MHz
I tried to filter the edges with a bandpass filter, but then I get the OOOOOO problem, even if I choose a sample rate of 50 Msps. Normally I can record successfully without the bandpass filter, even at a sample rate of 200 Msps.
Is there a way to record the edges correctly, or to filter them without dropping samples?
First off:
I tried to filter the edges with a bandpass filter, but then I get the OOOOOO problem
means that your computer isn't fast enough to apply the filter to the data stream. That might mean one of two things: you've designed a filter that's longer than it needs to be and could be shortened while still doing what you want, or what you want to do really requires a filter of that length and you will need to find a faster PC (hard) or use a faster filter implementation (did you try the FFT filters?).
For example, when I choose sample rate 50 Msps, 2 MHz of the start of the spectrum and 2 MHz of the end of the spectrum, gives the wrong results.
This is not surprising! Remember that anything with an ADC needs an anti-aliasing filter on the analog side, and these filters can't be arbitrarily sharp. So the spectrum at the edge of your band necessarily gets attenuated a bit, and there is a bit of aliasing there. The attenuation you could counteract by applying an equalizing filter on your PC (which would necessarily be more compute-intensive than what happens on the USRP), but the aliasing of the lowest frequencies onto the highest, and vice versa, caused by the finite steepness of the analog anti-aliasing filter, you cannot repair. That's the signal-processing blues for any kind of acquisition device.
There's one trick though, which the USRP uses: when your requested sampling rate is lower than the ADC's sampling rate, the USRP can internally apply a (better!) digital filter to select that target sampling rate as bandwidth, and decimate to that.
Thus, depending on the ADC rate to output sampling rate relationship (in UHD, the ADC rate is called "master clock rate", MCR), there's further digital filtering and decimation going on in the digital logic inside the N320. These filters also can't be infinitely sharp – and you might see that.
Generally, you'd want the decimation between the MCR and the sampling rate you've requested to be an even number, and not too large. I don't have the N320's digital signal processing architecture in my head right now, but I bet using a decimation that's a multiple of 4 or even 8 is a good move: you get to use the nicer half-band filters then.
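As a worked example (assuming, purely for illustration, a master clock rate of 200 MHz, one of the rates already appearing in the question):

$$\text{decimation} = \frac{f_\text{MCR}}{f_\text{s}} = \frac{200\ \text{Msps}}{50\ \text{Msps}} = 4$$

A decimation of 4 can be realized with two half-band stages, whereas an odd factor typically falls back on a less favorable filter.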
Modern UHD also has the filter API, with which you can work with these digital filters manually; this rarely is what you really want to do here, though.

Computing End-To-End Delay in Veins

I have read a bunch of posts on SO regarding the computation of end-to-end delay in Veins, but have not found an answer that satisfactorily explains why the delay is seemingly too low.
I am using:
Veins 4.7
Sumo 0.32.0
Omnetpp 5.3
Channel switching is turned off.
I have the following code, sending a message from the transmitting node:
if (sendMessage) {
    WaveShortMessage* wsm = new WaveShortMessage();
    sendDown(wsm);
}
The receiving node computes the delay using the wsm creation time, but I have also tried setting the timestamp on the transmitting side. The result is the same.
simtime_t delay = simTime() - wsm->getCreationTime();
delayVector.record(delay);
The sample output for the delay vector is as follows:
Item# Event# Time Value
0 165 14.400239402394 2.39402394E-4
1 186 14.500240403299 2.40403299E-4
2 207 14.600241404069 2.41404069E-4
3 228 14.700242404729 2.42404729E-4
Which means that the end-to-end delay (from creation to reception) is roughly a quarter of a millisecond, which seems quite low, a fair bit below what is typically reported in the literature. This seems to be consistent with what other people have reported as being an issue (e.g. end to end delay in Veins).
Am I missing something in this computation? I have tried putting load on the network by adding a larger number of vehicular nodes (21 nodes within a 1000x50 sandbox on a straight highway, with an average speed of 50 km/h), but the result is essentially the same; the difference is negligible. I have read several research papers that suggest that end-to-end delay should increase dramatically at high vehicular densities.
This end-to-end delay is to be expected. If your application's simulation model does not explicitly model processing delay (e.g., by an application running on a slow general purpose computer), all you would expect to delay a frame is propagation delay (lightspeed, so negligible here) and queueing delay on the MAC (time from inserting frame into TX queue until transmission finishes).
To give an example, for a 2400 bit frame sent at 6 Mbit/s this delay is roughly 0.45 ms. You are likely using slightly shorter frames, so your values appear to be reasonable.
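Spelling out the dominant part of that number:

$$t_\text{tx} = \frac{2400\ \text{bit}}{6\ \text{Mbit/s}} = 400\ \mu\text{s} = 0.4\ \text{ms},$$

with the PHY preamble and interframe spacing accounting for the remaining fraction of the roughly 0.45 ms.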
For background information, see F. Klingler, F. Dressler, C. Sommer: "The Impact of Head of Line Blocking in Highly Dynamic WLANs" (DOI 10.1109/TVT.2018.2837157), which also includes a comparison of theory vs. Veins vs. real measurements.

Throughput's change is not logical when mobile UEs are increasing their distance

I'm trying to measure throughput while the distance between two mobile UEs changes. I'm using OMNeT++ and measuring throughput in the MAC layer.
As the distance increases, the throughput should decrease (the well-known inverse relationship).
But in my simulation this doesn't happen. Does anyone have any idea why this might be, in OMNeT++?
I have also attached a chart of the throughput over time.
Thanks
@thardes2 OK, I'm using OMNeT++. In my scenario there is 1 base station and 2 UEs, without any mobility during the simulation. I run the simulation 20 times (at distances of 10, 20, 30, ..., 200 m) and measure the throughput, which is calculated in the MAC layer in LteHarqBufferRx.cc (essentially total received bytes / simulation time):
double tputSample = (double)totalRcvdBytes_ / (NOW - getSimulation()->getWarmupPeriod());
macOwner_->emit(macThroughput_, tputSample);
I used the SinglePair-UDP-Infra scenario from the D2D sample in SimuLTE and everything is working, but my issue remains: why doesn't the throughput decrease when the distance increases? At some longer distances the throughput even increases. Am I calculating throughput in the wrong place?
Thank you for your help

Is there any relation between advertising interval, walking speed, and window size of moving average filter?

My beacons have an advertising interval of 330 ms. I use an iOS device to scan for the advertisement packets, with a scanning rate of about 1 scan per second on average. I want to use a moving average filter to smooth the fluctuating RSSI values. Considering a walking speed of 1.2 m/s and the advertising interval of 330 ms, what should the window size of the moving average filter be? Is there any mathematical relationship between them?
Thank you.
There is no one correct answer here. It is a trade-off between noise in the distance estimate and lag time.
The larger (and longer) your statistical sample, the more lag there will be in a running average. A 20-second window will tell you where you were on average over the last 20 seconds and will filter out a lot of noise. A 5-second running average will tell you where you were on average over the last 5 seconds, but with much more noise in the calculation.
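To relate this to the numbers in the question: at roughly one received sample per second, a window of N samples spans about N seconds, during which someone walking at 1.2 m/s covers

$$d \approx N \cdot 1\ \text{s} \cdot 1.2\ \text{m/s} = 1.2\,N\ \text{m},$$

so a 5-sample window averages over roughly 6 m of travel and a 20-sample window over roughly 24 m.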
How much lag you can tolerate and how much noise you can tolerate both depend on your use case. Use cases that are very time-sensitive may sacrifice accuracy for the sake of less lag. Conversely, use cases needing greater accuracy may accept more lag in order to filter out more noise in the estimate.
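A minimal sketch of such a running average (TypeScript; the class name, method names, and window size are illustrative, not part of any particular SDK):

// Running (moving) average over the most recent RSSI samples.
// windowSize is in samples; at ~1 scan per second, 5 samples cover ~5 s of walking.
class MovingAverageFilter {
  private samples: number[] = [];

  constructor(private windowSize: number) {}

  // Add a new RSSI reading and return the current smoothed value.
  add(rssi: number): number {
    this.samples.push(rssi);
    if (this.samples.length > this.windowSize) {
      this.samples.shift(); // drop the oldest sample once the window is full
    }
    const sum = this.samples.reduce((acc, s) => acc + s, 0);
    return sum / this.samples.length;
  }
}

// Usage: feed each new RSSI value in as it arrives.
// const filter = new MovingAverageFilter(5);
// const smoothed = filter.add(latestRssi);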

Veins delay does not change with beacon frequency or number of nodes

I'm trying to simulate an emergency braking application using Veins and analyze its performance. Research papers on 802.11p show that as the beacon frequency and the number of vehicles increase, the delay should increase considerably due to the MAC layer behaviour of the protocol (for 50 vehicles at 8 Hz, about 300 ms average delay).
But when I simulate the application with Veins, the delay values do not differ much (they range from 1 ms to 4 ms). I've checked the MAC layer functionality and it appears that the channel is idle most of the time, so when a packet reaches the MAC layer the channel has usually already been idle for longer than the DIFS and the packet gets sent quickly. I tried increasing the packet size and reducing the bitrate; that increases the delay by a certain amount, but the drastic increase in delay due to the backoff process cannot be seen.
Do you know why this happens?
As you use 802.11p, the default data rate on the control channel is 6 Mbit/s (source: ETSI EN 302 663).
6 Mbit/s = 750 kbyte/s = 750,000 bytes/s
Your beacons contain 500 bytes, so the transmission of one beacon takes about 0.0007 seconds. As you have about 50 cars in your multi-lane scenario and, for example, they send beacons at a frequency of 10 Hz, transmitting those 500 beacons takes about 0.35 s out of every second.
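Spelled out with the numbers above:

$$t_\text{beacon} = \frac{500 \cdot 8\ \text{bit}}{6\ \text{Mbit/s}} \approx 0.67\ \text{ms}, \qquad 50\ \text{cars} \cdot 10\ \text{Hz} \cdot 0.67\ \text{ms} \approx 0.33\ \text{s of airtime per second,}$$

which is roughly the 0.35 s figure once per-frame PHY overhead is added.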
In my opinion, these are too few cars to create the effect you mention, because the channel is idle for roughly 65% of the time.
