Fix speed in OMNeT++ and SUMO

I have a question. If I have a road with three lanes, where each lane has its own speed limit (80, 100, 120 km/h), and I need to test the performance of a protocol such as AODV for VANET when the vehicle speed is 90 km/h and there are 100 vehicles, how can I do this? That is, how can I set a fixed speed for these cars? Thanks in advance.

To fix the speed of a vehicle you can simply give it the desired speed as its maximum speed and prevent it from dawdling (which it does with the default model). An input route file could look like this:
<routes>
    <!-- sigma="0" disables the dawdling of the default car-following model -->
    <vType id="t1" sigma="0" maxSpeed="90"/>
    <vehicle id="v1" type="t1" depart="0" route="r1" departSpeed="max"/>
</routes>
Please note that maxSpeed is given in m/s, so the value 90 above (as literally requested in the question) actually means 324 km/h; for 90 km/h you would use maxSpeed="25". Also, the definition of the route r1 is omitted here.
If you have multiple interacting vehicles, however, you cannot guarantee both a collision-free simulation and constant speeds at the same time, and SUMO will favor the former.
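If you prefer to pin the speed at runtime from the OMNeT++ side instead, a minimal sketch for a Veins application module could look like this (assuming the usual traciVehicle pointer from the demo apps; setSpeed and setSpeedMode are standard TraCI commands, but double-check the wrappers your Veins version provides):

// 90 km/h converted to the m/s that TraCI expects
double targetSpeed = 90.0 / 3.6; // = 25 m/s
// speed mode 0 disables SUMO's safety checks (safe gap, max accel/decel),
// so the vehicle really holds the commanded speed - it may then collide
traciVehicle->setSpeedMode(0);
traciVehicle->setSpeed(targetSpeed);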

Related

USRP N320 recording the edges

When I record a signal with a USRP N320 SDR, there are problems at the edges of the spectrum. For example, when I choose a sample rate of 50 Msps, the first 2 MHz and the last 2 MHz of the spectrum give wrong results: when a pulse falls on the edges, its power is reduced and its frequency is shifted a little. The middle 46 MHz of bandwidth works perfectly.
Sample rate: 50 Msps, Properly working bandwidth: 46 MHz
Sample rate: 100 Msps, Properly working bandwidth: 90 MHz
Sample rate: 200 Msps, Properly working bandwidth: 180 MHz
I tried to filter the edges with a bandpass filter, but that gives the OOOOOO problem (overflows), even if I choose the 50 Msps sample rate. Normally I can record successfully without the bandpass filter even when I choose a sample rate of 200 Msps.
Is there a solution to record the edges correctly, or to filter them without dropping samples?
First off:
I tried to filter the edges with a bandpass filter, but that gives the OOOOOO problem
means that your computer isn't fast enough to apply the filter to the data stream. That can mean one of two things: you've designed a filter that's longer than necessary and it could be shorter and still do what you want, or what you want really requires a filter of that length and you will need to find a faster PC (hard) or use a faster filter implementation (did you try the FFT filters?).
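To get a feeling for the numbers, fred harris' rule of thumb for equiripple FIR filters estimates the required tap count as

N ≈ (A / 22) · (f_s / Δf)

where A is the stopband attenuation in dB and Δf the transition width. The concrete requirements here are assumptions, but with A = 60 dB and Δf = 1 MHz at f_s = 50 Msps you already get N ≈ 136 taps, which is on the order of 7 billion multiply-accumulate operations per second when applied in the time domain; that is exactly the kind of load that produces overflows, and the reason FFT-based filtering helps.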
For example, when I choose a sample rate of 50 Msps, the first 2 MHz and the last 2 MHz of the spectrum give wrong results.
This is not surprising! Remember that anything with an ADC needs an anti-aliasing filter on the analog side, and these can't be arbitrarily sharp. So, necessarily, the spectrum at the edge of your band gets a bit attenuated, and there's a bit of aliasing there. The attenuation you could counteract by throwing an equalizing filter at it on your PC (which would necessarily be more compute-intensive than what happens on the USRP), but the aliasing of the lowest frequencies onto the highest, and vice versa, due to the finite steepness of the analog anti-aliasing filter, you cannot repair. That's the signal processing blues for any kind of acquisition device.
There's one trick though, which the USRP uses: when your requested sampling rate is lower than the ADC's sampling rate, the USRP can internally apply a (better!) digital filter to select that target sampling rate as bandwidth, and decimate to that.
Thus, depending on the ratio of ADC rate to output sampling rate (in UHD, the ADC rate is called the "master clock rate", MCR), there's further digital filtering and decimation going on in the digital logic inside the N320. These filters can't be infinitely sharp either, and you might be seeing that.
Generally, you'd want that decimation between MCR and the sampling rate you've requested to be an even number, and not too large. I don't have the N320's digital signal processing architecture in my head right now, but I bet using a decimation that's a multiple of 4 or even 8 is a good move; you get to use the nicer half-band filters then.
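For example, with UHD's C++ API you can pin both rates so that the decimation works out to a small even number (a sketch; the device address string and the 200 MHz master clock rate are assumptions, so check which rates your N320 supports):

#include <uhd/usrp/multi_usrp.hpp>
#include <iostream>

int main() {
    // 200e6 / 50e6 = 4: a small, even decimation
    auto usrp = uhd::usrp::multi_usrp::make(
        uhd::device_addr_t("type=n3xx,master_clock_rate=200e6"));
    usrp->set_rx_rate(50e6);
    std::cout << "actual RX rate: " << usrp->get_rx_rate() << std::endl;
    return 0;
}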
Modern UHD also has the filter API, with which you can work with these digital filters manually; this is rarely what you really want to do here, though.

How can I increase the speed of vehicles in Veins?

I am using OMNeT++ 5.4.1, Veins 4.7.1, and SUMO 0.30.0.
As the maximum speed of vehicles in my simulation is 13.99 m/s, I really want to increase it up to 33 m/s. How can I do it? Is it possible in Veins, or should I do it in SUMO?
You may need to modify the network. Probably the vehicles are only allowed to drive 50 km/h (about 13.99 m/s, considering the deviations allowed by the driver models) on the streets you are looking at. Use SUMO's netedit to raise the maximum edge speeds.
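If you only want to lift the cap for individual vehicles at runtime, something along these lines may work from a Veins application (an assumption: that your Veins version wraps the vehicle's maxSpeed TraCI command; if it does not, raise maxSpeed in the vType definition of your route file instead):

// allow this vehicle to drive up to 33 m/s (about 119 km/h)
traciVehicle->setMaxSpeed(33.0);
// note: the edge's own speed limit still applies, so if the road is capped
// at 50 km/h you must additionally raise the edge speed (e.g. via netedit)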

Computing End-To-End Delay in Veins

I have read a bunch of posts on SO regarding the computation of end-to-end delay in Veins, but have not found an answer that satisfactorily explains why the delay seems too low.
I am using:
Veins 4.7
SUMO 0.32.0
OMNeT++ 5.3
Channel switching is turned off.
I have the following code, sending a message from the transmitting node:
if (sendMessage) {
    WaveShortMessage* wsm = new WaveShortMessage();
    sendDown(wsm);
}
The receiving node computes the delay using the wsm creation time, but I have also tried setting the timestamp on the transmitting side. The result is the same.
simtime_t delay = simTime() - wsm->getCreationTime();
delayVector.record(delay);
The sample output for the delay vector is as follows:
Item#  Event#  Time              Value
0      165     14.400239402394   2.39402394E-4
1      186     14.500240403299   2.40403299E-4
2      207     14.600241404069   2.41404069E-4
3      228     14.700242404729   2.42404729E-4
This means that the end-to-end delay (from creation to reception) is roughly a quarter of a millisecond, which seems quite low, and a fair bit below what is typically reported in the literature. It also seems consistent with what other people have reported as an issue (e.g. end to end delay in Veins).
Am I missing something in this computation? I have tried adding load to the network with a high number of vehicular nodes (21 nodes within a 1000x50 m sandbox on a straight highway, with an average speed of 50 km/h), but the result stays practically the same; the difference is negligible. I have read several research papers that suggest that end-to-end delay should increase dramatically at high vehicle densities.
This end-to-end delay is to be expected. If your application's simulation model does not explicitly model processing delay (e.g., by an application running on a slow general purpose computer), all that delays a frame is propagation delay (light speed, so negligible here) and queueing delay on the MAC (the time from inserting the frame into the TX queue until its transmission finishes).
To give an example, for a 2400 bit frame sent at 6 Mbit/s this delay is roughly 0.45 ms. You are likely using slightly shorter frames, so your values appear to be reasonable.
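As a back-of-the-envelope check of that number (a sketch; the timing constants are the standard IEEE 802.11p values for a 10 MHz channel at 6 Mbit/s, and the 2400 bit payload is just the example from above):

#include <cmath>
#include <cstdio>

int main() {
    const double preamble_us = 32.0;  // PLCP preamble
    const double signal_us = 8.0;     // SIGNAL field
    const double symbol_us = 8.0;     // OFDM symbol duration (10 MHz channel)
    const int bitsPerSymbol = 48;     // data bits per symbol at 6 Mbit/s
    const int payloadBits = 2400;
    // SERVICE (16 bits) + payload + tail (6 bits), padded to whole symbols
    const int symbols = (int)std::ceil((16.0 + payloadBits + 6.0) / bitsPerSymbol);
    const double airtime_us = preamble_us + signal_us + symbols * symbol_us;
    std::printf("frame airtime: %.0f us\n", airtime_us); // ~448 us, i.e. ~0.45 ms
    return 0;
}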
For background information, see F. Klingler, F. Dressler, C. Sommer: "The Impact of Head of Line Blocking in Highly Dynamic WLANs" (DOI 10.1109/TVT.2018.2837157), which also includes a comparison of theory vs. Veins vs. real measurements.

Throughput does not change as expected when mobile UEs increase their distance

I'm trying to measure throughput while the distance between two mobile UEs is changing. I'm using OMNeT++ and measuring throughput in the MAC layer.
One would expect throughput to decrease as the distance increases (the well-known inverse relationship).
But in my simulation this doesn't happen. Does anyone have an idea what could cause this in OMNeT++?
I have also attached a chart of the throughput over time.
Thanks
@thardes2 OK, I'm using OMNeT++. In my scenario there is one base station and two UEs, without any mobility during the simulation. I run the simulation 20 times (at distances of 10, 20, 30, ..., 200 m) and measure the throughput, which is calculated in the MAC layer in LteHarqBufferRx.cc (as total received bytes / simulation time):
// cumulative average: bytes received so far, divided by the elapsed time since the warmup period
double tputSample = (double)totalRcvdBytes_ / (NOW - getSimulation()->getWarmupPeriod());
macOwner_->emit(macThroughput_, tputSample);
I used the SinglePair-UDP-Infra scenario from the D2D sample in SimuLTE and everything is working, but my issue remains: why doesn't throughput decrease when the distance increases? At some long distances the throughput even increases! Am I calculating throughput in the wrong place?
Thank you for your help

Speed of vehicle in Veins increases slowly

I'm using Veins 4.4, OMNeT++ 5.0 and SUMO 0.25. I set the vehicle speed to 0 with traciVehicle->setSpeed(0) to stop vehicles; after a certain event I set it to 20 with traciVehicle->setSpeed(20) so they cross the intersection. But for no obvious reason the speed only increases slowly until the simulation time runs out. Can I make the vehicles accelerate faster?
A vehicle in SUMO has a speedMode parameter which determines how it behaves, for instance in terms of acceleration and deceleration.
By default this parameter is set to consider all checks, like keeping a safe gap to other vehicles and respecting the maximum acceleration. When set to 0, the vehicle ignores all of these checks.
Try setting different values for the speedMode in Veins to achieve the expected vehicle behavior. You can do so using the TraCICommandInterface and its TraciVehicle; have a look at the TraCITestApp for an example, and see the sketch below. You could also play around with the maximumSpeed parameter.
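A minimal sketch (the bit meanings follow SUMO's documentation of the speed mode bit field; whether setSpeed(-1) hands control back to the car-following model depends on your SUMO version, so treat that as an assumption):

// speedMode is a bit field: bit 0 = regard safe speed, bit 1 = regard maximum
// acceleration, bit 2 = regard maximum deceleration, bit 3 = regard right of
// way at intersections, bit 4 = brake hard to avoid red lights; default is 31
traciVehicle->setSpeedMode(0);  // ignore all checks
traciVehicle->setSpeed(20);     // jump to 20 m/s (almost) immediately
// later, once the intersection has been crossed:
traciVehicle->setSpeedMode(31); // restore normal driving behavior
traciVehicle->setSpeed(-1);     // hand control back to the car-following model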
I solved the problem by regenerating my map; after that, the vehicles crossed at the expected speed. I think an unknown error had crept into my .net or .rou files while I was debugging my code.
