I'm using Veins 4.4, OMNeT++ 5.0 and SUMO 0.25. I stop vehicles by setting their speed to 0 with traciVehicle->setSpeed(0); after a certain condition I set the speed to 20 with traciVehicle->setSpeed(20) so they cross the intersection. However, the speed only increases slowly until the time runs out. Can I make the vehicles accelerate faster?
A vehicle in SUMO has a speedMode parameter which determines how it behaves, for instance in terms of acceleration and deceleration.
By default this parameter is set so that all checks are respected, such as keeping a safe gap to other vehicles and not exceeding the maximum acceleration. When set to 0, the vehicle ignores all of these checks.
Try setting different values for the speedMode in Veins to achieve the expected vehicle behavior. You can do so using the TraCICommandInterface and the TraciVehicle. Have a look at the TraCITestApp for an example. You could also play around with the maximumSpeed parameter.
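A minimal sketch of how this could look in a Veins application module (assuming the module already holds the traciVehicle pointer used in the question, and that your Veins 4.4 TraCICommandInterface provides Vehicle::setSpeedMode() as exercised in the TraCITestApp; verify the method names against your sources):
// Let the vehicle ignore SUMO's acceleration and safe-gap checks before
// commanding the new speed; speedMode is a bitmask, 0 disables all checks.
traciVehicle->setSpeedMode(0x00);
traciVehicle->setSpeed(20);       // jump (almost) immediately to 20 m/s

// ... once the intersection has been crossed, restore normal behavior ...
traciVehicle->setSpeedMode(0x1f); // 31 is SUMO's default speed mode
traciVehicle->setSpeed(-1);       // per SUMO's TraCI docs, -1 returns control to the car-following model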
I solved the problem by regenerating my map; after that, the vehicles crossed at the expected speed. I think an unknown error occurred in my (.net or .rou) files while I was debugging my code.
I want to measure the time between a) setting a target position for my axis on my computer and b) the moment that variable is set on the axis.
I set up the Scope View of TwinCAT 3 and displayed the SetPos variable (from the axis) and the setTarget variable from the EtherCAT slave. I expected a delay of about 500 µs; I measured 30 ms.
The Scope View of TwinCAT 3 shows SetPos (generated from the GUI axis functions of TwinCAT 3, reverse sequencing) and the target position variable inside my embedded system. There is no inertia involved; it is just the variable.
The delay is about 30 ms, as shown in the screenshot. The EtherCAT servo is in DC mode with a cycle time of 500 µs. The setTarget position variable is written inside the Sync0 ISR. The setTarget (PDO) variable is captured within 50 µs.
I am quite sure that this 30 ms delay comes from TwinCAT itself. Maybe something between generating the setpoint and transferring it to EtherCAT?
Thanks for your answers!
Chris
Maybe it has something to do with the setpoint generator. There are two tasks involved in the setpoint generation: the SVB task and the SAF task. Read more about it here.
The document says the following about the SVB task:
The SVB task is the setpoint generator and generates the velocity and position control profiles for the entire move of all drives according to the current position, command position, maximum velocity, acceleration and deceleration rates, and jerk of each drive. This task is typically run every 10ms and a change in any of these parameters will result in a new profile for the entire move every 10ms. As such if a drive is at the target position it will still be calculating profiles to hold it at that target position.
Therefore, possible causes might be:
limits on maximum velocity, acceleration, or jerk
the cycle time of this task.
What if you change the limits or the cycle time of this task?
I have read a number of posts on SO regarding the computation of end-to-end delay in Veins, but have not found an answer that satisfactorily explains why the delay appears to be so low.
I am using:
Veins 4.7
SUMO 0.32.0
OMNeT++ 5.3
Channel switching is turned off.
I have the following code, sending a message from the transmitting node:
if (sendMessage) {
    WaveShortMessage* wsm = new WaveShortMessage();
    sendDown(wsm);
}
The receiving node computes the delay using the WSM creation time, but I have also tried setting the timestamp on the transmitting side; the result is the same.
simtime_t delay = simTime() - wsm->getCreationTime();
delayVector.record(delay);
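For reference, the timestamp-based variant mentioned above can be sketched as follows (setTimestamp()/getTimestamp() are inherited from OMNeT++'s cMessage, so no extra message fields are needed; as noted, it yields the same values):
// Transmitting side, just before sendDown(wsm):
wsm->setTimestamp(simTime());

// Receiving side:
simtime_t delay = simTime() - wsm->getTimestamp();
delayVector.record(delay);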
The sample output for the delay vector is as follows:
Item#  Event#  Time              Value
0      165     14.400239402394   2.39402394E-4
1      186     14.500240403299   2.40403299E-4
2      207     14.600241404069   2.41404069E-4
3      228     14.700242404729   2.42404729E-4
This means that the end-to-end delay (from creation to reception) is roughly a quarter of a millisecond, which seems quite low and a fair bit below what is typically reported in the literature. This seems to be consistent with what other people have reported as an issue (e.g. end to end delay in Veins).
Am I missing something in this computation? I have tried adding load to the network by adding a larger number of vehicular nodes (21 nodes within a 1000x50 m sandbox on a straight highway, with an average speed of 50 km/h), but the result is essentially the same; the difference is negligible. I have read several research papers suggesting that end-to-end delay should increase dramatically at high vehicle densities.
This end-to-end delay is to be expected. If your application's simulation model does not explicitly model processing delay (e.g., by an application running on a slow general purpose computer), all you would expect to delay a frame is propagation delay (lightspeed, so negligible here) and queueing delay on the MAC (time from inserting frame into TX queue until transmission finishes).
To give an example, for a 2400 bit frame sent at 6 Mbit/s this delay is roughly 0.45 ms. You are likely using slightly shorter frames, so your values appear to be reasonable.
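As a rough cross-check of that number (this is not Veins code; the PHY constants below are assumptions based on the 802.11p 10 MHz OFDM PHY), the channel occupation of a single frame at 6 Mbit/s can be estimated like this:
#include <cmath>
#include <iostream>

// Back-of-the-envelope airtime of one 802.11p frame sent at 6 Mbit/s.
// Assumed PHY constants (10 MHz channel): 32 us preamble, 8 us SIGNAL field,
// 8 us per OFDM symbol carrying 48 data bits, plus 16 SERVICE and 6 tail bits.
int main() {
    const double payloadBits = 2400.0;                       // frame size from the text
    const double bitsPerSymbol = 48.0;                        // 6 Mbit/s * 8 us symbols
    const double symbols = std::ceil((16 + payloadBits + 6) / bitsPerSymbol);
    const double airtime_us = 32 + 8 + symbols * 8;           // preamble + SIGNAL + data
    std::cout << "airtime ~ " << airtime_us << " us" << std::endl; // ~448 us, i.e. ~0.45 ms
    return 0;
}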
For background information, see F. Klingler, F. Dressler, C. Sommer: "The Impact of Head of Line Blocking in Highly Dynamic WLANs" (DOI 10.1109/TVT.2018.2837157), which also includes a comparison of theory vs. Veins vs. real measurements.
I'm trying to measure throughput while the distance between two mobile UEs changes. I'm using OMNeT++ and measuring throughput at the MAC layer.
The expectation is that when the distance increases, the throughput should decrease (the well-known inverse relationship).
But in my simulation this does not happen. Does anyone have an idea why, in OMNeT++?
I have also attached the chart of throughput over time.
Thanks
@thardes2 OK, I'm using OMNeT++. In my scenario there is one base station and two UEs, without any mobility during the simulation. I run the simulation 20 times (10, 20, 30, ..., 200 m) and measure the throughput, which is calculated at the MAC layer in LteHarqBufferRx.cc (as total received bytes / simulation time):
// Throughput sample: total bytes received so far, divided by the simulation
// time elapsed since the end of the warm-up period (a cumulative average).
double tputSample = (double)totalRcvdBytes_ / (NOW - getSimulation()->getWarmupPeriod());
macOwner_->emit(macThroughput_, tputSample);
I used the SinglePair-UDP-Infra scenario from the D2D sample in SimuLTE and everything works well, but my issue is: why doesn't throughput decrease when the distance increases? At some long distances the throughput even increases! Am I calculating throughput in the wrong place?
Thank you for your help
I'm working on Veins in OMNeT++ and use TraCI commands to get the travel time of roads via this method:
double getCurrentTravelTime()
The value I get from this is very small, so I am wondering what the unit of the travel time is, and also how to get the mean speed in SUMO or Veins.
This command queries variable 0x5a from an edge. Its meaning is documented on the SUMO wiki, on page http://www.sumo.dlr.de/wiki/TraCI/Edge_Value_Retrieval:
current travel time (0x5a): double, Returns the current travel time (length/mean speed).
Where not specified otherwise, SUMO uses the international system of units, that is, the return value is in seconds.
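As a sketch, querying both values from a Veins application module could look roughly like the following (this assumes your Veins version's TraCICommandInterface exposes a Road object with getCurrentTravelTime() and getMeanSpeed(), and that the module holds the usual traci and traciVehicle pointers; check the method names against your TraCICommandInterface.h):
// Query the edge the vehicle is currently on, then ask SUMO for that edge's
// current travel time (in seconds) and mean speed (in m/s).
std::string roadId = traciVehicle->getRoadId();
double travelTime = traci->road(roadId).getCurrentTravelTime(); // seconds
double meanSpeed = traci->road(roadId).getMeanSpeed();          // m/s
EV << "edge " << roadId << ": travel time " << travelTime
   << " s, mean speed " << meanSpeed << " m/s" << std::endl;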
I am considering using INET/OMNeT++ to evaluate a routing algorithm we are working on. Since I am using the tool for the first time, I was running some examples and reading the source code.
Then I found an example that ships with INET: /inet/examples/wireless/throughput.
The problem is that I don't get the same values.
In the README file one can read:
"Throughput is measured by the "sink" submodule of the AP. It is recorded
into the output scalar file, but can also be inspected during runtime.
The Excel sheet includes throughput measured by the simulation, and compares
it to the theoretical maximum which is roughly 5.12 Mbps (at 11 Mbps bitrate
and 1000-byte packets). The theoretical value and the simulation output
are very close, the difference being less than 1 kbps."
The same value is presented in Timing.xls
However, I obtain a different value when I execute the simulation: 846266 bit/sec
Do I need to perform some additional calculation to obtain the final value of throughput?
Is that a bug?
Is the value no longer valid due to some modification in INET?
The default bitrate for the throughput example is 1 Mbps, so the value you obtained is correct.
To change the bitrate, edit this line in omnetpp.ini in the throughput directory:
**.wlan*.bitrate = 1Mbps
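For example, to compare against the roughly 5.12 Mbps figure from the README and Timing.xls (which assumes an 11 Mbps bitrate and 1000-byte packets), you would set the bitrate back to that value; the parameter path is taken from the line above and may differ in other INET versions:
**.wlan*.bitrate = 11Mbps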