Reduce the coverage area between vehicles - OMNeT++

I need to reduce the coverage area of communication between vehicles. Should I reduce the values of these parameters in omnetpp.ini?
*.**.nic.phy80211p.sensitivity = -89dBm
*.**.nic.phy80211p.maxTXPower = 10mW
*.**.nic.phy80211p.thermalNoise = -110dBm
If not, which parameters can I modify?

If by "coverage area" you mean communication range, the short answer is yes: you can modify these parameters to reduce the communication range (which I would probably do by lowering the maximum transmission power). Alternatively, you can change the channel properties (in config.xml) by adding a corresponding analog model that has the behavior you're looking for. I recommend having a look at the Two-Ray Interference model and the Obstacle Shadowing model, which are part of Veins.

In the current Veins version (i.e., 4.5) you can also reduce the maxInterfDist of the ConnectionManager, which results in fewer vehicles overall getting an AirFrame handed to their NICs, which they then try to decode. However, this only decreases the distance of the best possible communication (i.e., without buildings in the line of sight, etc.), not the average distance, which is usually much smaller due to fading effects and buildings.

In my opinion, maxTXPower and maxInterfDist do not affect the coverage area between vehicles. You should modify "..nic.mac1609_4.txPower" and "..nic.phy80211p.sensitivity". For a better understanding, you may check the answer from Christoph Sommer in this thread: how to set the transmission range of a node under Veins 2.0?
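For illustration, a minimal omnetpp.ini sketch combining the suggestions from both answers might look as follows (the values are placeholders showing the direction of change, not calibrated recommendations):

*.**.nic.mac1609_4.txPower = 5mW          # radiate less power
*.**.nic.phy80211p.sensitivity = -80dBm   # require stronger frames for decoding
*.connectionManager.maxInterfDist = 800m  # hand AirFrames only to closer NICs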

Related

Change the transmission signal strength for a specific set of vehicles during the run-time

I started using Veins (4.4) under OMNeT++ (5.0) about a week ago.
My current task is to let vehicles adjust their transmission range according to a specific context. I have read many related questions, such as these (and in other topics/forums):
Dynamical transmission range in the ieee802.11p module
Vehicles Receive Beacon Messages outside RSU Range
How coverage distance and interference distance are affected by each other
Maximum transmission range vs maximum interference distance
Reduce the coverage area between vehicles
how to set the transmission range of a node under Veins 2.0?
My Question:
How to -really- change the transmission range of just some nodes?
From the links above, I learned that the term "transmission range" is, technically, related to the received power, noise, sensitivity threshold, etc., which define the probability of reception.
Since I am new to Veins (and OMNeT++ as well), I did a few tests and concluded the following:
The "TraCIMobility" module can adjust a node's parameters (there is one instance per vehicle), such as the ID, speed, etc.
I could also obtain a pointer to the "Mac1609_4" module (one per vehicle) and change some of its parameters, like "txPower", during simulation run-time, but it seemed to have no effect on the real communication range.
I could not instantiate the "ConnectionManager" module per vehicle (it is global), and it seemed to be solely responsible for (and to override) the effective communication range. This module can be configured in the ".ini" file, but I want different transmission powers per vehicle that, most importantly, can be changed during run-time.
The formula for calculating the transmission range is in the links above; I understand it, but there must be a way to define or change these parameters in one of the layers (even if it is in the PHY layer, e.g., something like the attached signal strength...).
Again, maybe some of my ideas are wrong; I just want to know what to change (and how) to control this transmission range.
Best regards,
You were right to increase the mac1609_4.txPower parameter to have a node send with more power (hence, the signal being decodable further away). Note, however, that (for Veins 4.4) you will also need to increase connectionManager.pMax then, as this value is used to determine the maximum distance (away from a transmitting simulation module) that a receiving simulation module will be informed about an ongoing transmission. Any receiving simulation module further away will not be influenced by the transmission (in the sense of it being a candidate for decoding, but also in the sense of it contributing to interference).
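For instance, a minimal omnetpp.ini sketch of this coupling (illustrative values, assuming the node naming of the Veins 4.4 example network):

*.node[*].nic.mac1609_4.txPower = 50mW  # per-node radiated power
*.connectionManager.pMax = 50mW         # must be at least the highest txPower used anywhere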
Also note that transmissions on an (otherwise) perfectly idle channel will reach much further than transmissions on a typically-loaded channel. If you want to obtain a good measurement of how far a transmission reaches, have some nodes create interference (by transmitting broadcasts of their own), then look at how the Frame Delivery Rate (FDR) drops as distance between sender and receiver increases.
Finally, note that both 1) the noise floor and 2) the minimum power level necessary for the simulation module of a receiver to attempt decoding a frame need to be calibrated to the WLAN card you want to simulate. The values chosen in the Veins 4.4 tutorial example are very useful for demonstrating the concepts of Veins, whereas the values of more recent versions of Veins come closer to what you would expect from a "typical" WLAN card used in some of the more recent field tests. See the paper Bastian Bloessl and Aisling O'Driscoll, "A Case for Good Defaults: Pitfalls in VANET Physical Layer Simulations," Proceedings of IFIP Wireless Days Conference 2019, Manchester, UK, April 2019 for a more detailed discussion of these parameters.
I am just giving my opinion in case someone else finds themselves in my situation:
In Veins (I am using the older version 4.4), the "ConnectionManager" is responsible for evaluating a "potential" exchange of packets; thus, its maximum power is almost always set to an upper bound.
I was confused because, after I changed the vehicles' "Mac1609_4" transmission power, the connection manager was still "graphically" showing me that packets were received by some far-away nodes. In fact this was not the case; it was just evaluating whether each packet was properly received or not (via the formula discussed in the links above).
Thus: changing the "txPower" of each vehicle really did have an effect beyond the graphics (the messages were not passed up to the upper layers).
In sum, to build a transmission-range-aware scheme, this is what must be done:
In the sender node (vehicle), similarly to the "traci" pointer that handles the mobility features, a pointer to the "Mac1609_4" module must be created, as follows:
In "tracidemo11p.h" add ->
#include "veins/modules/mac/ieee80211p/Mac1609_4.h"//added
#include "veins/base/utils/FindModule.h"//added
and as a protected variable in the class of "tracidemo11p" in the same ".h" file ->
Mac1609_4* mac;//added
In "tracidemo11p.cc" add ->
mac = FindModule<Mac1609_4*>::findSubModule(getParentModule());
Now you can manipulate "mac" just as you do "traci"; the appropriate methods are in "modules/mac/ieee80211p/Mac1609_4.cc & .h".
For our purpose, the method is:
mac->setTxPower(10); // for example; the power is given in mW
This takes effect at run-time for each node instance.
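Putting the pieces together, here is a minimal consolidated sketch of the steps above (assuming the Veins 4.4 tutorial application, where TraCIDemo11p extends BaseWaveApplLayer; treat it as an outline, not a drop-in replacement):

// TraCIDemo11p.h (sketch)
#include "veins/modules/application/ieee80211p/BaseWaveApplLayer.h"
#include "veins/modules/mac/ieee80211p/Mac1609_4.h"
#include "veins/base/utils/FindModule.h"

class TraCIDemo11p : public BaseWaveApplLayer {
protected:
    Mac1609_4* mac; // this vehicle's MAC layer
    virtual void initialize(int stage);
};

// TraCIDemo11p.cc (sketch)
void TraCIDemo11p::initialize(int stage) {
    BaseWaveApplLayer::initialize(stage);
    if (stage == 0) {
        // locate the MAC submodule inside this vehicle's compound module
        mac = FindModule<Mac1609_4*>::findSubModule(getParentModule());
        // adjust this node's transmission power (in mW) at run-time
        mac->setTxPower(10);
    }
}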
I may have described this in basic terms because I am new to OMNeT++/Veins; all of this was done in less than one week (and is shared here for new users as well).
I hope it is helpful (and correct).

RNG seed selection highly affects simulation outcomes

I'm developing a simulation for the first time in OMNeT++ 5.4, making use of the queueinglib library available with it. In particular, I've built a simple model comprising a server and some passive queues.
I've repeated the simulation several times, setting different seeds and parameters, as described in the OMNeT++ Simulation Manual. Here is my omnetpp.ini:
# specify how many times a run needs to be repeated
repeat = 100
# default: different seed for each run
# seed-set = ${runnumber}
seed-set = ${repetition} # I tried both lines
OppNet.source.interArrivalTime = exponential(${10,5,2.5}s)
This produces 300 runs, 100 repetitions for each value of the interArrivalTime distribution parameter.
However, I'm observing some "strange" behaviour: the resulting statistics are highly variable depending on the RNG seed.
For example, considering the queue lengths in my model, the majority of runs give mean values smaller than 10, while a few others give means that differ by orders of magnitude (85000, 45000?).
Does this mean that my implementation is wrong? Or is it possible that the random seed selection could influence the simulation outcomes so heavily?
Any help or hint is appreciated, thank you.
I cannot rule out that your implementation is wrong without seeing it, but it is entirely possible that you just happened to configure a chaotic scenario.
And in that case, yes, any minor change in the inputs (in this case the PRNG seed) might cause a major divergence in the results.
EDIT:
Especially considering a given (non-trivial) queuing network, if you are varying the "load" (number/rate of incoming jobs/messages, with some random distribution), you might observe different "regimes" in the results:
with very light load, there is barely any queuing
with a really heavy load (close, or at max capacity), most queues are almost always loaded, or even growing constantly without limit
and somewhere in between the two, you might get this erratic behavior, where sometimes a couple of queues suddenly grow really deep, then are emptied, then different queues are loaded up, and so on, as a function of time, the PRNG seed, or both. This is what I mean by chaotic.
But this is just speculation...
Nobody can say whether your implementation is right or wrong without seeing it. However, there are some general rules that apply to queues which you should be aware of. You say that you're varying the interArrivalTime distribution parameter. A very important concept in queueing is the traffic intensity, the ratio of the arrival rate to the service rate. If that ratio is less than one, the line length can vary a great deal, but in the long run there will be time intervals when the queue empties out, because on average the server can handle more customers than arrive. This is a stable queue. However, if that ratio is greater than one, the queue will grow without bound: the longer you run the system, the longer the line will get. The thing that surprises many people is that the line will also go to infinity asymptotically when the traffic intensity is exactly equal to one.
The other thing to know is that the closer the traffic intensity gets to one for a stable queue, the greater the variability is. That's because the average increases, but there will always be periods of line length zero as described above. In order to have zeros always present but have the average increasing, there must be times where the queue length gets above the average, which implies the variability must be increasing. Changing the random number seed gives you some visibility on the magnitude of the variance that's possible at any slice of time.
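To make this concrete with hypothetical numbers (assuming an M/M/1 queue, i.e., a single server with exponential interarrival and service times): with a mean interarrival time of 2.5 s (arrival rate 0.4/s) and a mean service time of 2 s (service rate 0.5/s), the traffic intensity is rho = 0.4/0.5 = 0.8, and the long-run mean number in the system is rho/(1 - rho) = 4. Keep the arrival rate fixed but slow the server to a mean service time of 2.4 s, and rho = 0.96, so that mean jumps to 24 - and the variability around it grows even faster. This is why runs near rho = 1 can look wildly different from seed to seed.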
Bottom line here is that you may just be seeing evidence that queues are weirder and more variable than you thought.

OMNeT++ & Veins steady state vehicle density

I want to keep a constant number of cars in my (long) simulation (OMNeT++/Veins). I do not care much about mobility, so I could probably use the built-in Veins parameter *.manager.numVehicles = 100. The thing is that if I do not specify any (or enough) vehicle flows (from SUMO), my simulation terminates instantly (because there are no events). So I create some flows (whose vehicles exit the simulation early), and Veins tops the cars back up as they disappear.
Is there a more elegant way to do this? I'd prefer to just use the numVehicles parameter, since it is easier and the cars move minimally, so they remain in the simulation for a long time.
I need steady-state vehicular density (number of vehicles fixed - even if old ones leave and new ones enter to replace them at the same instant).
Thanks,
Andreas
The autoShutdown parameter can be set to false to instruct the coupling interface to keep going even though there are no more cars in the simulation. See https://github.com/sommer/veins/blob/veins-4.5/src/veins/modules/mobility/traci/TraCIScenarioManagerLaunchd.ned#L55
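For example, a minimal omnetpp.ini sketch combining both parameters (illustrative values):

*.manager.numVehicles = 100    # keep (re)inserting vehicles up to this count
*.manager.autoShutdown = false # keep the simulation alive when no cars remain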
https://stackoverflow.com/a/71071341/7215379 has a solution. It actually is meant to keep the number of vehicles in simulation constant. You can look into the possibility of reusing it.

Vehicles Receive Beacon Messages outside RSU Range

I am working on the Veins framework inside OMNeT++. I set the RSU's properties inside the .ned file as follows:
@display("p=150,140;b=10,10,oval;r=90");
Tkenv shows a circle around the RSU, but vehicles receive beacons outside the range (circle).
How can I adjust the transmission range of the RSU to match the circle?
Adding a display string tag of r is just adding a circle to the graphical output; it does not influence the simulation.
If you define the "transmission range" as the point where the probability of reception is zero, you can calculate this point based on the transmission power at the antenna and the sensitivity of the radio (both set in omnetpp.ini), as well as the used path loss and fading models (set in config.xml). If you change these parameters and models, you are changing this "transmission range".
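As a hypothetical worked example (assuming a pure free-space model, Pr = Pt * (lambda/(4*pi*d))^2 with unit antenna gains, on the 5.89 GHz control channel, so lambda is about 5.09 cm): with Pt = 10 mW (10 dBm) and a sensitivity of -89 dBm, the allowed path loss is 99 dB, and solving for d gives d = (lambda/(4*pi)) * 10^(99/20), roughly 360 m. Any additional fading or obstacle model configured in config.xml shrinks this figure further.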
Note, however, that this range has only little relevance to frame reception probability.
Veins employs the MiXiM suite and approach to model transmissions as two-dimensional (time and frequency) functions of signal power that are modified by path loss and fading effects (both stochastic and deterministic, e.g., due to buildings).
If a frame's receive power is above the sensitivity threshold, its reception probability is computed based on dividing these functions for signal, interference, and noise to derive the SINR and, from that, the bit error rate.
Even at moderate interference levels, this means that most frames cannot be decoded even though they are well above the sensitivity threshold (simply because their SINR was too low).
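For illustration (the numbers are hypothetical): a frame arriving at -70 dBm is 19 dB above a -89 dBm sensitivity threshold, but if concurrent interference plus noise totals -65 dBm, the SINR is -5 dB, and the frame is effectively undecodable at any 802.11p bit rate.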
Just to repeat: I am warning against calculating a "transmission range" for anything other than purely informational purposes. How far you can send in theory has absolutely no relation to how far you can send on a moderately busy channel. This effect is modeled in Veins!

Algorithm to decide if digital audio data is clipping?

Is there an algorithm or some heuristic to decide whether digital audio data is clipping?
The simple answer is that if any sample has the maximum or minimum value (-32768 and +32767 respectively for 16-bit samples), you can consider it clipping. This isn't strictly true, since that value may actually be the correct value, but there is no way to tell whether +32767 really should have been +33000.
For a more complicated answer: there are sample-counting clipping detectors that require x consecutive samples to be at the max/min value before they are considered clipping (where x may be as high as 7). The theory here is that clipping in just a few samples is not audible. A sketch of such a detector follows.
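Here is a minimal sketch of such a sample-counting detector for 16-bit PCM (the isClipping name and the default run length of 3 are my own choices, not from any particular library):

#include <cstddef>
#include <cstdint>
#include <vector>

// Returns true if `runLength` or more consecutive samples sit at the
// 16-bit extremes (-32768 or +32767), the usual signature of clipping.
bool isClipping(const std::vector<int16_t>& samples, std::size_t runLength = 3) {
    std::size_t run = 0;
    for (int16_t s : samples) {
        if (s == INT16_MIN || s == INT16_MAX) {
            if (++run >= runLength) return true;
        } else {
            run = 0;
        }
    }
    return false;
}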
That said, there is audio equipment that clips quite audibly even at values below the maximum (and above the minimum). Typical advice is to master music to peak at -0.3 dB instead of 0.0 dB for this reason. You might want to consider any sample above that level to be clipping. It all depends on what you need it for.
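For 16-bit samples, that -0.3 dBFS ceiling works out to roughly 32767 * 10^(-0.3/20), i.e., about 31656, so a variant of the sketch above could flag absolute sample values above that threshold instead of only the absolute extremes.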
If you ever receive values at the maximum or minimum, then you are, by definition, clipping. Those values represent their particular value as well as all values beyond, and so they are best used as outside bounds detectors.
-Adam
For digital audio data, the term "clipping" doesn't really carry a lot of meaning other than "max amplitude". In the analog world, audio data comes from hardware that usually contains a "clipping register", which leaves open the possibility of a maximum-amplitude sample that isn't actually clipped.
What might be better suited to digital audio is to set some threshold based on the limitations of your output D/A. If you're doing VOIP, then choose some threshold typical of handsets or cell phones, and call it "clipping" if your digital audio gets above that. If you're outputting to high-end home theater systems, then you probably won't have any "clipping".
I just noticed that there are even some nice implementations.
For example in Audacity:
Analyze → Find Clipping…
What Adam said. You could also add some logic to detect maximum amplitude values over a period of time and only flag those, but the essence is to determine if/when the signal hits the maximum amplitude.
