Why is the number of SNIRLostPackets so large? - omnet++

I am trying to simulate vehicular ad hoc networks using Veins 4.4, OMNeT++ 5.0 and SUMO 0.25.
There are only one-hop broadcast beacons in the VANET, and each vehicle sends 10 beacons/s. Each beacon is 4000 bits, and nic.mac1609_4.bitrate is 6 Mbps.
In my simulation there are 120 vehicles spaced almost uniformly along a 2 km road, and the TX power of each vehicle is 200 mW. After 20 s of simulation, I select the vehicles in the middle part: their SNIRLostPackets is almost 19000 and their ReceivedBroadcasts is almost 3500, so the packet error ratio is very high.
I read the code and found that if there is a single bit error in a packet, the whole packet is considered not correctly received. Is my finding right?
The TX power is 200 mW and the vehicles are within each other's communication range, so the network load at each vehicle is 4000 bits × 10 Hz × 120 vehicles = 4.8 Mbps < 6 Mbps. Shouldn't the number of SNIRLostPackets therefore be much smaller?
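For a sanity check on both points, here is a minimal sketch (the uniform, independent per-bit error probability is my own illustrative assumption, not Veins' actual interference model) of the offered-load arithmetic and of why a "one bit error loses the frame" rule drives the packet error ratio up steeply for 4000-bit beacons:

    #include <cmath>
    #include <cstdio>

    int main() {
        // Offered load from the question: 120 senders, 10 beacons/s, 4000 bits each.
        const double beaconBits = 4000.0;
        const double beaconRate = 10.0;   // beacons per second per vehicle
        const double vehicles   = 120.0;
        const double bitrate    = 6e6;    // control-channel bitrate in bit/s

        const double offeredLoad = beaconBits * beaconRate * vehicles;  // 4.8e6 bit/s
        std::printf("offered load: %.1f of %.1f Mbit/s\n",
                    offeredLoad / 1e6, bitrate / 1e6);

        // "One bit error -> frame lost": with an (assumed) independent per-bit
        // error probability p, a frame survives only if every bit is correct.
        const double bers[] = {1e-5, 1e-4, 1e-3};
        for (double p : bers) {
            const double per = 1.0 - std::pow(1.0 - p, beaconBits);
            std::printf("BER %.0e -> PER %.2f\n", p, per);
        }
        return 0;
    }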

Related

Negative packet loss in omnet

I am working on a smart home system with a lot of lamps and switches and a hub, using AODV routers. I was trying to calculate the packet loss, so I computed the total sent and total received packets and took their difference as sum('packetSent:count') - sum('packetReceived:count'). But this value turns out to be negative for me. Any idea why?

How can I increase speed of vehicles in Veins?

I am using OMNeT++ 5.4.1, Veins 7.4.1, and SUMO 0.30.0.
As the maximum speed of vehicles in Veins is 13.99 m/s, I really want to increase it to 33 m/s. How can I do that? Is it possible in Veins, or should I do it in SUMO?
You may need to modify the road network. Probably the vehicles are only allowed to drive 50 km/h (observed as about 13.99 m/s, given the deviations allowed by the driver models) on the streets you are looking at. Use SUMO's netedit to raise the maximum edge speeds.
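If you prefer to edit the files directly instead of using netedit, the limit is the speed attribute, given in m/s: in a plain edge file it sits on the edge element, in a compiled network file on each lane. A sketch with made-up ids:

    <!-- hypothetical fragment of a .net.xml; speed is in m/s (33.0 is about 119 km/h) -->
    <edge id="edge0" from="n0" to="n1" priority="1">
        <lane id="edge0_0" index="0" speed="33.0" length="500.00" shape="..."/>
    </edge>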

Veins delay does not change with beacon frequency or number of nodes

I'm trying to simulate an emergency braking application using Veins and analyze its performance. Research papers on 802.11p show that as beacon frequency and the number of vehicles increase, the delay should increase considerably due to the MAC-layer delay of the protocol (for 50 vehicles at 8 Hz, about 300 ms average delay).
But when I simulate the application with Veins, the delay values do not differ much (they range from 1 ms to 4 ms). I've checked the MAC-layer functionality, and it appears that the channel is idle most of the time, so when a packet reaches the MAC layer the channel has usually been idle for longer than the DIFS and the packet gets sent quickly. I tried increasing the packet size and reducing the bitrate; that increases the delay by a certain amount, but the drastic increase in delay due to the backoff process cannot be seen.
Do you know why this happens?
As you use 802.11p, the default data rate on the control channel is 6 Mbit/s (source: ETSI EN 302 663), which is 750 kbyte/s = 750,000 bytes/s.
Your beacons contain 500 bytes, so the transmission of one beacon takes about 0.0007 seconds. With about 50 cars in your multi-lane scenario sending beacons at, for example, a 10 Hz frequency, transmitting those 500 beacons takes about 0.35 s out of every second.
In my opinion, these are too few cars to create the effect you mention, because the channel is idle for about 65% of the time.
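A back-of-the-envelope version of that arithmetic, as a sketch that ignores MAC/PHY overhead (preambles, headers, inter-frame spaces), which would push the busy fraction somewhat higher:

    #include <cstdio>

    int main() {
        const double bitrate    = 6e6;        // control-channel bitrate in bit/s
        const double beaconBits = 500 * 8.0;  // 500-byte beacons
        const double beaconHz   = 10.0;       // beacons per second per car
        const double cars       = 50.0;

        const double airtime = beaconBits / bitrate;       // ~0.00067 s per beacon
        const double busy    = airtime * beaconHz * cars;  // fraction of each second
        std::printf("airtime per beacon: %.5f s\n", airtime);
        std::printf("channel busy %.0f%%, idle %.0f%%\n",
                    100.0 * busy, 100.0 * (1.0 - busy));
        return 0;
    }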

iBeacon is receiving abnormal RSSI signal

I developed an iBeacon-based iOS app, but the RSSI it receives jumps between 0 and a normal value during beacon ranging (there is something like a pattern: one normal RSSI reading for every 4-6 zero-RSSI readings).
I am trying to make my iPhone respond in real time based on the received RSSI, but I can't do anything with such an unstable signal. I don't know whether this is caused by the hardware, a battery problem, or something else. Any idea is appreciated.
When ranging for beacons on iOS, if no beacon packets have been received in the last second (but beacon packets have been received in the last five seconds), the beacon will be included in the list of CLBeacon objects in the callback, but it will be given an rssi value of 0.
You can confirm this by turning off a beacon: you will continue to get it in ranging callbacks for about five seconds, but its RSSI will always be zero. After those five seconds, it is removed from the list.
If you are seeing it bounce back and forth between 0 and a normal value, this indicates that beacon packets are only being received every few seconds. The most likely cause is a beacon transmitter that rarely sends packets (say every 3 to 5 seconds). Some manufacturers sell beacons that do this to conserve battery life.
For best ranging performance, turn the advertising rate up to 10 Hz if your beacon manufacturer allows it, and also increase the transmitter power to maximum. This will use much more battery, but it will eliminate the dropouts you are seeing.

How do I calculate PCIe 1x, 2.0, 3.0, speeds properly?

I am honestly very lost with the speed calculations of PCIe devices.
I can understand the 33 MHz and 66 MHz clocks of PCI and PCI-X devices, but PCIe confuses me.
Could anyone explain how to calculate the transfer speeds of PCIe?
To understand the table pointed to by Paebbels, you should know how PCIe transmission works. Unlike PCI and PCI-X, PCIe is a point-to-point serial bus with link aggregation (meaning that several serial lanes are bundled to increase transfer bandwidth).
For PCIe 1.0, a single lane transmits a symbol on every edge of a 1.25 GHz clock (double data rate), which yields a transmission rate of 2.5 gigatransfers (or symbols) per second. The protocol encodes 8 bits of data in 10 symbols (8b/10b encoding) for DC balance and clock recovery. Therefore the raw transfer rate of a lane is
2.5 Gsymbols/s × 8 bits / 10 symbols = 2 Gbit/s = 250 MB/s
The raw transfer rate can be multiplied by the number of available lanes to get the full link transfer rate.
Note that the useful transfer rate is actually lower than that, because data is packetized, similar to the packetization in the Ethernet protocol layers.
A more detailed explanation can be found in this Xilinx white paper.
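The same arithmetic extends to the later generations; note that PCIe 1.x and 2.0 use 8b/10b encoding, while PCIe 3.0 switched to 128b/130b, which is why its per-lane rate almost doubles even though the line rate only rises from 5 GT/s to 8 GT/s. A minimal sketch of the per-lane raw rates (before packetization overhead):

    #include <cstdio>

    struct Gen {
        const char* name;
        double gtPerSec;  // giga-transfers (symbols) per second per lane
        double dataBits;  // data bits per encoded block
        double lineBits;  // line bits per encoded block
    };

    int main() {
        const Gen gens[] = {
            {"PCIe 1.x", 2.5,   8.0,  10.0},   // 8b/10b
            {"PCIe 2.0", 5.0,   8.0,  10.0},   // 8b/10b
            {"PCIe 3.0", 8.0, 128.0, 130.0},   // 128b/130b
        };
        for (const Gen& g : gens) {
            const double bytesPerSec = g.gtPerSec * 1e9 * g.dataBits / g.lineBits / 8.0;
            std::printf("%s: %.1f GT/s -> %.0f MB/s per lane (x16: %.0f MB/s)\n",
                        g.name, g.gtPerSec, bytesPerSec / 1e6, 16.0 * bytesPerSec / 1e6);
        }
        return 0;
    }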
