How to calculate the received power or SNR from the rssi value in RxJanusFrameNtf? - janus

When I send a Janus frame between two nodes, the RxJanusFrameNtf on the receiving side contains an rssi value.
How do I calculate the received power or the SNR from this rssi value?

The rssi value is good for making signal-strength comparisons between different nodes. The rssi value is typically given in dB, but with an arbitrary reference. You'd need a noise measurement in dB with the same reference to convert rssi to SNR -- something that is specific to the implementation of a modem or simulation model. For example, Subnero modems publish the noise level as phy.noise, in the same units as rssi (though potentially measured over a larger bandwidth).
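As a back-of-the-envelope sketch (the numbers are made up and the snippet is not part of the UnetStack API), the conversion is just a subtraction once both values share the same reference and bandwidth:

#include <iostream>

int main() {
    // Illustrative values only: both must be in dB with the same reference,
    // e.g. the rssi from RxJanusFrameNtf and the noise level the modem reports.
    double rssi_dB  = -40.0;
    double noise_dB = -65.0;

    // SNR in dB; only meaningful if the noise was measured over the same
    // bandwidth as the signal (otherwise a bandwidth correction is needed first).
    double snr_dB = rssi_dB - noise_dB;
    std::cout << "SNR = " << snr_dB << " dB" << std::endl;
    return 0;
}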

Related

Throughput does not change as expected when mobile UEs increase their distance

I'm trying to measure throughput while the distance between two mobile UEs changes. I'm using OMNeT++ and measuring throughput at the MAC layer.
When the distance increases, the throughput should decrease (the well-known inverse relationship), but in my simulation that doesn't happen. Does anyone have any idea what could cause this in OMNeT++?
I also attached a chart of the throughput over time.
Thanks
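For context, the "well-known inverse relationship" above comes from received power falling with distance, which lowers the SNR and, through link adaptation, the achievable rate. A minimal free-space path-loss sketch over the same 10-200 m sweep (the 2.1 GHz carrier frequency is an assumed value, not taken from the scenario):

#include <cmath>
#include <cstdio>

// Free-space path loss in dB at distance d (metres) and carrier frequency f (Hz):
// FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
double fsplDb(double d_m, double f_hz) {
    const double pi = 3.14159265358979323846;
    const double c  = 3.0e8;                      // speed of light, m/s
    return 20.0 * std::log10(d_m) + 20.0 * std::log10(f_hz)
         + 20.0 * std::log10(4.0 * pi / c);
}

int main() {
    // sweep the same 10..200 m distances used in the simulation runs
    for (double d = 10.0; d <= 200.0; d += 10.0)
        std::printf("d = %5.0f m  ->  FSPL = %.1f dB\n", d, fsplDb(d, 2.1e9));
    return 0;
}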
@thardes2 OK, I'm using OMNeT++. In my scenario there is 1 base station and 2 UEs, with no mobility during the simulation. I run the simulation 20 times (at 10, 20, 30, ..., 200 m) and measure the throughput, which is calculated in the MAC layer in LteHarqBufferRx.cc (essentially total received bytes / simulation time):
// throughput since the end of the warm-up period, in bytes per second
double tputSample = (double)totalRcvdBytes_ / (NOW - getSimulation()->getWarmupPeriod());
// record the sample on the macThroughput_ statistic
macOwner_->emit(macThroughput_, tputSample);
I used the SinglePair-UDP-Infra scenario from the D2D sample in SimuLTE and everything works, but my issue is: why doesn't the throughput decrease when the distance increases? At some long distances the throughput even increases! Am I calculating throughput in the wrong place?
Thank you for your help

LabVIEW fluid flow

This is a two-part question:
I have a fluid flow sensor connected to an NI-9361 on my DAQ. For those who don't know, that's a pulse counter card. Nonetheless, from the data read from the card I'm able to calculate the fluid flowing through the device in gallons per hour, minute, second, etc. What I need to do is the following:
Calculate the total number of gallons that have flowed through the sensor since the loop began running
If possible, store that total so it can be incremented the next time the program runs
I know how to calculate this by hand; I'm just not sure how to achieve the running summation needed to total the fluid that has passed through the sensor, or how to store the variable so it can be incremented on the next program execution. I presume the latter would involve writing a TDMS file, then opening and reading back the data, unless there's a better way?
Edit:
Below is the code used to determine GPM flow through my sensor. This setup is in accordance with the 9361 manual; it executes and yields proper results.
See this link for details:
http://zone.ni.com/reference/en-XX/help/373197L-01/criodevicehelp/crio-9361/
I can extrapolate how many gallons flow per second or per sample period. The 1526.99 scalar is the flow sensor manufacturer's constant: the number of pulses per gallon passing through the sensor. The 9361 is set to frequency/period mode, so I'm calculating cycles per second and dividing by the pulses-per-gallon constant to get gallons per second (or per minute).
I suppose I could get a time reference by looking at the sample period, so I guess the better question is: how do I keep an incrementing sum?
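For the incrementing-sum part, here is a minimal C++ sketch of the equivalent logic, assuming a plain text file for persistence (the file name, loop structure and placeholder frequency value are illustrative; in LabVIEW the same idea maps to a shift register or feedback node plus file I/O such as the TDMS write mentioned above):

#include <chrono>
#include <fstream>
#include <thread>

int main() {
    const double pulsesPerGallon = 1526.99;   // manufacturer's constant from above
    double totalGallons = 0.0;

    // restore the running total saved by a previous run, if the file exists
    std::ifstream saved("total_gallons.txt");
    if (saved) saved >> totalGallons;

    const double samplePeriodS = 1.0;         // loop / sample period in seconds
    for (int i = 0; i < 10; ++i) {            // stands in for the acquisition loop
        double pulseFrequencyHz = 0.0;        // would come from the NI-9361 read

        // gallons that flowed during this sample period, added to the running total
        totalGallons += (pulseFrequencyHz * samplePeriodS) / pulsesPerGallon;

        // persist the total so the next program run can continue from it
        std::ofstream("total_gallons.txt") << totalGallons;

        std::this_thread::sleep_for(std::chrono::duration<double>(samplePeriodS));
    }
    return 0;
}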

Why is the number of SNIRLostPackets so large?

I am trying to simulate vehicular ad hoc networks using Veins 4.4, OMNeT++ 5.0 and SUMO 0.25.
There are only one-hop broadcast beacons in the VANET, and each vehicle sends 10 beacons/s. The size of each beacon is 4000 bits, and nic.mac1609_4.bitrate is 6 Mbps.
In my simulation there are 120 vehicles spread almost uniformly along the 2 km road, and the TX power of each vehicle is 200 mW. After 20 s of simulation I select the vehicles in the middle part; their SNIRLostPackets is almost 19000 and their ReceivedBroadcasts is almost 3500, so the packet error ratio is very high.
I read the code and found that if there is a single bit error in a packet, the packet is considered not correctly received. Is my finding right?
The TX power is 200 mW and the vehicles are within each other's communication range. The offered load at each vehicle is then 4000 * 10 * 120 = 4.8 Mbps < 6 Mbps, so shouldn't the number of SNIRLostPackets be much smaller?
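For reference, the offered-load figure above can be reproduced as follows; note that it counts application bits only, with no PHY preamble or MAC overhead, which is an assumption of this back-of-the-envelope calculation rather than anything taken from Veins:

#include <cstdio>

int main() {
    // back-of-the-envelope offered load: application bits only
    const double bitsPerBeacon   = 4000.0;   // beacon size from the question
    const double beaconsPerSec   = 10.0;     // per vehicle
    const int    vehiclesInRange = 120;      // all within communication range

    double offeredLoadBps = bitsPerBeacon * beaconsPerSec * vehiclesInRange;  // 4.8e6 bit/s
    std::printf("offered load = %.1f Mbps (channel bitrate 6 Mbps)\n", offeredLoadBps / 1e6);
    return 0;
}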

Default for 0 dB sound level as an absolute float value

I'm currently building something like a tiny software audio synthesizer on Windows 7 in C++. The core engine is running, and upon receiving MIDI events it plays notes, changes programs, etc. What puzzles me at the moment is where to put the 0 dB reference sound pressure level of the output channels.
Let's say the synthesizer produces a sine wave at 440 Hz with an amplitude of |0.5f|. In order to calculate the sound level in dB I need to set the reference level (0 dB). Does anyone know something like a default for this?
When decibel relative to full scale is in question, a.k.a. dBFS, zero dB is assigned to the maximum possible digital level. A quote from Wikipedia:
0 dBFS is assigned to the maximum possible digital level. For example, a signal that reaches 50% of the maximum level at any point would peak at -6 dBFS, i.e. 6 dB below full scale. All peak measurements will be negative numbers, unless they reach the maximum digital value.
First you need to be clear about units. dB on its own is a ratio, not an absolute value. As @Roman R. suggested, you can just use 0 dB to mean "full scale", and then your range will be 0 dB (max) down to some negative dB value corresponding to the minimum level you are interested in (e.g. -120 dB). However, this is just an arbitrary measurement which doesn't tell you anything about the absolute value of the signal.
In your question, though, you refer to dB SPL (SPL = Sound Pressure Level), which is an absolute unit. 0 dB SPL is typically defined as 20 µPa (RMS), which is around the threshold of human hearing, and in this case the range of interest might be, say, -20 dB SPL to +120 dB SPL. However, if you really do want to measure dB SPL and not just an arbitrary dB value, then you will need to calibrate your system to take into account microphone gain, microphone frequency response, A-D sensitivity/gain, and various other factors. This is non-trivial, but essential if you actually want to implement some kind of SPL measuring system.
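To make the dBFS case concrete, here is a minimal sketch that measures the peak level against a full-scale value of 1.0 (an assumption that matches the |0.5f| example in the question):

#include <cmath>
#include <cstdio>

int main() {
    const double fullScale = 1.0;   // maximum possible sample magnitude
    const double peak      = 0.5;   // the |0.5f| sine amplitude from the question

    // dBFS of the peak level; 0.5 of full scale comes out at about -6.02 dBFS
    double dbfs = 20.0 * std::log10(peak / fullScale);
    std::printf("%.2f dBFS\n", dbfs);
    return 0;
}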

GPS Time synchronisation

I'm parsing NMEA GPS data from a device which sends timestamps without milliseconds. As far as I have heard, these devices use a specific trigger point for sending the sentence with the .000 timestamp - AFAIK the $ of the GGA sentence.
So I parse the GGA sentence and take the timestamp when the $ is received (compensating for any further characters read in the same operation, using the serial port baud rate).
From this information I calculate the offset for correcting the system time, but when I compare the corrected time against some NTP servers, I get a constant difference of 250 ms. When I correct for this manually, I'm within a deviation of 20 ms, which is OK for my application.
But of course I'm not sure where this offset comes from, and whether it is somehow specific to the GPS mouse I'm using or to my system. Am I using the wrong $ character, or does someone know how exactly this should be handled? I know this question is very fuzzy, but any hints on what could cause this offset would be very helpful!
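For clarity, the baud-rate compensation described above can be written out explicitly. A minimal sketch, assuming roughly 10 bits per character on the serial line (the function and variable names are illustrative); the correction offset is then the whole-second GGA timestamp minus this estimated $ arrival time:

#include <chrono>
#include <cstdio>

// Estimate the host time at which the '$' of the GGA sentence actually arrived,
// given the host time when the read returned and the number of characters read
// after the '$' in the same operation.
std::chrono::steady_clock::time_point
estimateDollarTime(std::chrono::steady_clock::time_point readTime,
                   int charsAfterDollar, int baudRate) {
    // roughly 10 bits per character on the wire (start + 8 data + stop)
    double secondsPerChar = 10.0 / baudRate;
    std::chrono::duration<double> delay(charsAfterDollar * secondsPerChar);
    return readTime - std::chrono::duration_cast<std::chrono::steady_clock::duration>(delay);
}

int main() {
    auto readTime   = std::chrono::steady_clock::now();
    auto dollarTime = estimateDollarTime(readTime, 40, 4800);  // 40 chars after '$' at 4800 baud
    auto compMs = std::chrono::duration_cast<std::chrono::milliseconds>(readTime - dollarTime);
    std::printf("baud-rate compensation: %lld ms\n", static_cast<long long>(compMs.count()));
    return 0;
}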
Here is some sample data from my device, with the $ character that I take as the timing reference marked:
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPRMC,003538.000,A,5046.8555,N,00606.2913,E,0.00,22.37,160209,,,A*58
-> $ <- GPGGA,003539.000,5046.8549,N,00606.2922,E,1,07,1.5,249.9,M,47.6,M,,0000*5C
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPGSV,3,1,10,09,77,107,17,12,63,243,30,05,51,249,16,14,26,315,20*7E
$GPGSV,3,2,10,30,24,246,25,17,23,045,22,15,15,170,16,22,14,274,24*7E
$GPGSV,3,3,10,04,08,092,22,18,07,243,22*74
$GPRMC,003539.000,A,5046.8549,N,00606.2922,E,0.00,22.37,160209,,,A*56
-> $ <- GPGGA,003540.000,5046.8536,N,00606.2935,E,1,07,1.5,249.0,M,47.6,M,,0000*55
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPRMC,003540.000,A,5046.8536,N,00606.2935,E,0.00,22.37,160209,,,A*56
-> $ <- GPGGA,003541.000,5046.8521,N,00606.2948,E,1,07,1.5,247.8,M,47.6,M,,0000*5E
You have to take into account what is going on inside the GPS device:
it receives the satellite signals and calculates position, velocity and time
it prepares the NMEA message and puts it into the serial port buffer
it transmits the message
GPS devices have relatively slow CPUs (compared to modern computers), so the latency you are observing is the result of the processing the device must do between generating the position fix and the moment it begins transmitting the data.
Here is one analysis of latency in consumer-grade GPS receivers from 2005. There you can find latency measurements for specific NMEA sentences.
