How to get current simulation time in OMNeT++?

I am measuring the energy level when a packet arrives at one specific LCN from the others. I want to get the current simulation time when the packet arrives at this LCN. To do this, I used
SimTime();
but this function always gives me 0. So, how can I get the current simulation time? I need to plot the energy level of the LCN against time until the simulation ends, i.e. what is the energy level of the LCN when the time is 10? (for example)

When you call SimTime() you actually call the constructor of the class SimTime, which creates a new SimTime object with value zero; that is why you always see 0.
What you are looking for is the global function simTime().
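A minimal sketch of how this could look in a module (OMNeT++ 4.x style; the module name Lcn and the members energyLevel and energyVector are illustrative, not from the original post):

#include <omnetpp.h>

// Illustrative module: records the energy level against simulation time.
class Lcn : public cSimpleModule
{
  protected:
    double energyLevel;       // the energy value you are tracking
    cOutVector energyVector;  // each record() is stamped with the current simTime()

    virtual void initialize()
    {
        energyLevel = 100.0;  // placeholder starting energy
        energyVector.setName("energyLevel");
    }

    virtual void handleMessage(cMessage *msg)
    {
        simtime_t arrival = simTime();  // global function, not the SimTime() constructor
        EV << "Packet arrived at t = " << arrival << "\n";
        energyVector.record(energyLevel);  // (time, value) pair for plotting later
        delete msg;
    }
};

Define_Module(Lcn);

The recorded vector can then be plotted over the whole simulation, which answers the "energy level at time 10" question directly.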

Related

Dead time delay between axis command (SetPos) and execution in TwinCAT3 with EtherCAT

I want to measure the time between a) setting a target position for my axis on my computer and b) the variable being set on the axis.
I set up the scope view of TwinCAT3 and displayed the SetPos variable (from the axis) and the setTarget variable from the EtherCAT slave. I expected a delay of 500 us; I got 30 ms.
The scope view of TwinCAT3 shows SetPos (generated by the GUI axis functions of TwinCAT3, reverse sequencing) and the target position variable inside my embedded system. There is no inertia involved; it is just the variable.
The delay is about 30 ms, as shown in the screenshot. The EtherCAT servo is in DC mode with a cycle of 500 us. The setTarget position variable is written inside the Sync0 ISR. The setTarget (PDO) variable is captured within 50 us.
I am quite sure that this 30 ms delay comes from TwinCAT itself. Maybe something between generating the setpoint and transferring it to EtherCAT?
Thanks for your answers!
Chris
Maybe it has something to do with the setpoint generator. There are two tasks involved in the setpoint generation: the SVB task and the SAF task. Read more about it here.
The document says the following about the SVB task:
The SVB task is the setpoint generator and generates the velocity and position control profiles for the entire move of all drives according to the current position, command position, maximum velocity, acceleration and deceleration rates, and jerk of each drive. This task is typically run every 10ms and a change in any of these parameters will result in a new profile for the entire move every 10ms. As such if a drive is at the target position it will still be calculating profiles to hold it at that target position.
Therefore, possible causes might be:
limits on maximum velocity, acceleration or jerk
cycle time of this task.
What if you change the limits or the cycle time of this task?
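As a rough, illustrative latency budget (not measured): if the SVB task runs every 10 ms, a new command may wait up to 10 ms to be picked up by the next SVB cycle, and another cycle or two for the new profile to propagate through the SAF task down to the 500 us EtherCAT cycle. Two to three stacked 10 ms cycles already come close to the observed 30 ms.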

LabVIEW fluid flow

This is a two part question:
I have a fluid flow sensor connected to an NI-9361 on my DAQ. For those who don't know, that's a pulse counter card. Nonetheless, from the data read from the card, I'm able to calculate the fluid flowing through the device in gallons per hour, minute, second, etc. But what I need to do is the following:
Calculate the total number of gallons of fluid that have flowed through the sensor since the loop began running
If possible, store that total so that it can be incremented the next time the program runs
I know how to calculate it by hand; I'm just not sure how to achieve the running summation required to calculate the total amount of fluid that has passed through the sensor, or how to store the variable being incremented for the next program execution. I'm presuming the latter would involve writing a TDMS file, then opening and reading back the data, unless there's a better way?
Edit:
Below is the code used to determine GPM flow through my sensor. This setup is in accordance with the 9361 manual; it executes and yields proper results.
See this link for details:
http://zone.ni.com/reference/en-XX/help/373197L-01/criodevicehelp/crio-9361/
I can extrapolate how many gallons flow per second, or per sample period. The 1526.99 scalar is the flow sensor manufacturer's constant: the number of pulses per gallon passing through the sensor. The 9361 is set to frequency/period mode, so I'm calculating cycles per second and dividing by the constant (cycles per gallon) to get gallons per second or per minute.
I suppose I could get a time reference by looking at the sample period, so I guess the better question is: how do I keep an incrementing sum?
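The pattern being asked about is a running accumulator plus a persisted total. In LabVIEW that is typically a shift register (initialized from a file before the loop) for the sum, and a file write when the loop stops. A language-neutral sketch of the same pattern in C++, where the file name and the stubbed DAQ read are illustrative:

#include <fstream>
#include <iostream>

const double PULSES_PER_GALLON = 1526.99;       // manufacturer's constant from the question
const char  *TOTAL_FILE = "total_gallons.txt";  // hypothetical persistence file

int main()
{
    // Restore the total saved by the previous run (0.0 if the file does not exist yet).
    double totalGallons = 0.0;
    std::ifstream in(TOTAL_FILE);
    if (in) in >> totalGallons;

    // Acquisition loop (stubbed): each iteration converts the pulses counted
    // during one sample period into gallons and adds them to the running sum.
    for (int i = 0; i < 10; ++i) {
        double pulsesThisPeriod = 152.7;  // stand-in for a read from the NI-9361
        totalGallons += pulsesThisPeriod / PULSES_PER_GALLON;
    }

    // Persist the total so the next run can keep incrementing it.
    std::ofstream out(TOTAL_FILE);
    out << totalGallons;
    std::cout << "Total gallons so far: " << totalGallons << "\n";
}

TDMS works for this, but for a single number a plain text or INI file is enough.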

What is the unit of travel time and speed in Veins?

I'm working on Veins in OMNeT++, by using TraCI commands to get the travel time of roads using this method:
double getCurrentTravelTime().
The value I get from this is very small, so I wondered: what is the unit of travel time, and how can I get the mean speed in SUMO or Veins?
This command queries variable 0x5a from an edge. Its meaning is documented on the SUMO wiki, on page http://www.sumo.dlr.de/wiki/TraCI/Edge_Value_Retrieval:
current travel time (0x5a): double, Returns the current travel time (length/mean speed).
Where not specified otherwise, SUMO uses the international system of units, that is, the return value is in seconds.
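A quick sanity check with made-up numbers: an edge 140 m long with a current mean speed of 14 m/s returns 140 / 14 = 10 s, while a 20 m edge driven at 10 m/s returns just 2 s, so very small values are plausible for short or fast edges. The same wiki page also documents an edge retrieval for the last step mean speed, returned in m/s.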

Time delay in simulink

I am working on my graduation project. It is a digital protection relay that trips if the measured value exceeds a specific threshold. It must trip after a time delay. We use a Texas Instruments kit and Simulink to build the program.
My problem is how to create a time delay so that I can delay the trip signal by a predetermined time. The attached image shows a part of the program.
Thanks.
You can implement that by creating a counter (integrator). For example, the counter increments by 0.001 each millisecond (counter_value += 0.001). If your trip delay is 1.54 s, you then compare (>=) the value of the counter to your trip delay.
The counter is activated and reset by the boolean input signal that you want to delay.
I don't have Simulink installed, so I can't give you a picture, but the logic is sketched below.
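A minimal sketch of that on-delay pattern, assuming a 1 ms control cycle and the 1.54 s trip delay from above (all names are illustrative; in Simulink this maps to a resettable counter or integrator feeding a >= comparison):

#include <iostream>

const double CYCLE_S = 0.001;      // control loop period: 1 ms
const double TRIP_DELAY_S = 1.54;  // required trip delay

// Called once per control cycle; returns true once the fault condition
// has been continuously present for TRIP_DELAY_S seconds.
bool onDelayTrip(bool faultPresent)
{
    static double counter = 0.0;  // the "integrator"
    if (faultPresent)
        counter += CYCLE_S;       // accumulate while the fault holds
    else
        counter = 0.0;            // reset as soon as the fault clears
    return counter >= TRIP_DELAY_S;
}

int main()
{
    // Simulate two seconds of a continuously present fault.
    for (int ms = 0; ms < 2000; ++ms) {
        if (onDelayTrip(true)) {
            std::cout << "Trip at t = " << (ms + 1) * CYCLE_S << " s\n";
            break;
        }
    }
}

The reset branch matters: without it, a fault that appears and disappears repeatedly would eventually trip even though it was never present for the full delay.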

How to measure channel busy time in Veins?

Is there any function that returns the channel busy time? I use Veins 2.2 with the 802.11p MAC and decider. If there is no such function, how can the channel busy time be measured?
Channel busy time in Veins 2.2 is measured at two points: in the Phy layer and in the Mac layer. Both record a corresponding scalar value at the end of the simulation. Note that there is a difference in meaning between the two:
Mac busy time is (in almost all cases) what you want to record: it records how many seconds the Mac treated the channel as busy. Divide the scalar totalBusyTime by the total simulation time and you know the fraction of time that the Mac could not send.
Phy busy time is calculated very differently: its value busyTime increases for each frame received above the sensitivity threshold. To give an example, if exactly one frame is being received at any given time during the simulation, the value of this scalar would be 100%. If 4 frames are interfering for the whole of your simulation, the value of this scalar would be 400% (which is different from the Mac busy time you probably want).
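A quick worked example with made-up numbers: if the recorded scalar totalBusyTime is 12.5 s and the simulation ran for 50 s, the Mac considered the channel busy for 12.5 / 50 = 25% of the time.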
