Dead time delay between axis command (SetPos) and execution in TwinCAT3 with EtherCAT

I want to measure the time between (a) setting a target position for my axis on my computer and (b) that variable being set on the axis.
I set up the TwinCAT3 Scope View and displayed the SetPos variable (from the axis) and the setTarget variable from the EtherCAT slave. I expected a delay of about 500 µs; I got 30 ms.
The Scope View shows SetPos (generated via the axis functions in the TwinCAT3 GUI, reversing sequence) and the target position variable inside my embedded system. There is no inertia involved; it is just the variable.
The delay is about 30 ms, as shown in the screenshot. The EtherCAT servo is in DC mode with a cycle of 500 µs. The setTarget position variable is written inside the Sync0 ISR, and the setTarget (PDO) variable is captured within 50 µs.
I am quite sure that this 30 ms delay comes from TwinCAT itself. Maybe it is something between generating the setpoint and transferring it over EtherCAT?
Thanks for your answers!
Chris

Maybe it has something to do with the setpoint generator. There are two tasks involved in the setpoint generation: the SVB task and the SAF task. Read more about it here.
The document says the following about the SVB task:
The SVB task is the setpoint generator and generates the velocity and position control profiles for the entire move of all drives according to the current position, command position, maximum velocity, acceleration and deceleration rates, and jerk of each drive. This task is typically run every 10ms and a change in any of these parameters will result in a new profile for the entire move every 10ms. As such if a drive is at the target position it will still be calculating profiles to hold it at that target position.
Therefore, possible causes might be:
the limits on maximum velocity, acceleration, or jerk;
the cycle time of this task.
What happens if you change the limits or the cycle time of this task? A rough sanity check on the numbers is sketched below.
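A back-of-the-envelope sketch only, using the 10 ms SVB cycle quoted above and the 500 µs DC cycle from the question; real TwinCAT internals may add further buffering between the SVB and SAF tasks:

    #include <cstdio>

    // Rough sanity check, not a measurement: a command may first wait for
    // the next SVB (setpoint) cycle, then for the next SAF/EtherCAT DC
    // cycle. The figures below come from the discussion above.
    int main() {
        const double svb_cycle_ms = 10.0; // typical SVB cycle (see quote)
        const double saf_cycle_ms = 0.5;  // EtherCAT DC cycle, 500 us

        std::printf("one-cycle worst case: %.1f ms\n",
                    svb_cycle_ms + saf_cycle_ms);
        std::printf("a 30 ms delay is about %.0f SVB cycles\n",
                    30.0 / svb_cycle_ms);
        return 0;
    }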

Related

Is there real-time DC offset compensation in Ettus N310?

I am working with an Ettus N310 that is controlled by some third-party software. I don't have much insight into how they set up and control the device; I just tell it what center frequency to tune to and when to grab IQ. If I receive a signal, say a tone, at or very near the center frequency, I end up with a large DC offset that jumps around every few hundred microseconds. If I offset the signal well away from the center frequency, the DC offset is negligible. From what I see in Ettus' documentation, DC offset compensation is something that is set once when the device starts receiving, but here it looks to me like it is being done periodically while the USRP is acquiring data. If I receive a signal near the center frequency, the DC offset compensator gets confused and creates a worse bias. Is this a feature of the N310 that I am not aware of, or is this probably something the third-party controller is doing?
Yes, there's a DC offset compensation in the N310. The N310 uses an Analog Devices RFIC (the AD9371), which has these calibrations built-in. Both the AD9371 and the AD9361 (used in the USRP E3xx and B2xx series) don't like narrow-band signals close to DC due to their calibration algorithms (those chips are optimized for telecoms signals).
As you said, the RX DC offset compensation happens at initialization. At runtime, the quadrature error correction (QEC) kicks in. The manual holds a table of these calibrations: https://uhd.readthedocs.io/en/latest/page_usrp_n3xx.html#n3xx_mg_calibrations. You can try turning off the QEC tracking and see if it improves your system's performance.
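If you have direct access to UHD, a minimal C++ sketch along these lines could be used to experiment with the correction toggles. It assumes the generic multi_usrp enable/disable overloads; whether they map onto the AD9371 tracking calibrations on the N310 should be checked against the manual linked above:

    #include <uhd/usrp/multi_usrp.hpp>
    #include <iostream>
    #include <string>

    int main() {
        // Device address is an assumption; adjust to your setup.
        const std::string args = "type=n3xx";
        auto usrp = uhd::usrp::multi_usrp::make(args);

        // Disable automatic IQ-balance (QEC) correction on RX channel 0.
        // Whether this reaches the AD9371 tracking calibrations on the
        // N310 should be verified against the manual linked above.
        usrp->set_rx_iq_balance(false, 0);

        // DC offset auto-correction can be toggled the same way.
        usrp->set_rx_dc_offset(false, 0);

        std::cout << "RX QEC and DC offset auto-correction disabled\n";
        return 0;
    }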

LabVIEW fluid flow

This is a two-part question:
I have a fluid flow sensor connected to an NI-9361 on my DAQ. For those who don't know, that's a pulse counter card. Nonetheless, from the data read from the card, I'm able to calculate the fluid flowing through the device in gallons per hour, minute, second, etc. But what I need to do is the following:
Calculate the total number of gallons of fluid that has flowed through the sensor since the loop began running
If possible, store that total so that it can be incremented the next time the program runs
I know how to calculate it by hand; I'm just not sure how to achieve the running summation required to calculate the total amount of fluid that has passed through the sensor, or how to store the variable being incremented for the next program execution. I'm presuming the latter would involve writing a TDMS file, then opening and reading back the data, unless there's a better way?
Edit:
Below is the code used to determine GPM flow through my sensor. This setup is in accordance with the 9361 manual; it executes and yields proper results.
See this link for details:
http://zone.ni.com/reference/en-XX/help/373197L-01/criodevicehelp/crio-9361/
I can extrapolate how many gallons flow per second or per sample period; the 1526.99 scalar is the flow sensor manufacturer's constant, the number of pulses per gallon passing through the sensor. The 9361 is set to frequency/period mode, so I'm calculating cycles per second and dividing by the cycles-per-gallon constant to get gallons per second or per minute.
I suppose I could get a time reference by looking at the sample period, so I guess the better question is: how do I keep an incrementing sum?
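In LabVIEW the running total would typically live in a shift register (or feedback node) on the loop, with a file read before the loop and a write after it to persist the total between runs. Since a block diagram can't be pasted here, the same logic is sketched below in C++; the file name, the placeholder pulse count, and the variable names are made up for illustration:

    #include <fstream>

    // Conceptual sketch only: mirrors a LabVIEW shift register plus a
    // file read/write for persistence across runs.
    int main() {
        const double pulses_per_gallon = 1526.99; // manufacturer constant
        double total_gallons = 0.0;

        // Restore the total from the previous run, if the file exists.
        std::ifstream in("total.dat");
        if (in) in >> total_gallons;

        // Loop-body equivalent: add this sample period's flow each pass.
        double pulses_this_period = 763.5;        // placeholder reading
        total_gallons += pulses_this_period / pulses_per_gallon;

        // Persist the running total for the next program execution.
        std::ofstream out("total.dat");
        out << total_gallons;
        return 0;
    }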

How to get current simulation time in omnet++?

I am measuring the energy level when a packet arrives at one specific LCN from the others. I want to get the current simulation time when a packet arrives at this LCN. To do this, I used the
SimTime();
function, but it always gives me 0. So how can I get the current simulation time? I need to plot the energy level of the LCN against time until the simulation ends. I mean, what is the energy level of the LCN when the time is 10 (for example)?
When you call SimTime() you actually call the constructor for the class SimTime.
What you are looking for is the global function simTime().
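A minimal sketch of how this might look in a simple module; the module name, the energy bookkeeping, and the per-packet cost are made up for illustration, but simTime() and cOutVector are standard OMNeT++:

    #include <omnetpp.h>
    using namespace omnetpp;

    // Hypothetical LCN module: logs the arrival time of each packet and
    // records a made-up energy level so it can be plotted against time.
    class Lcn : public cSimpleModule {
        cOutVector energyVector{"energyLevel"};
        double energy = 100.0;            // hypothetical starting energy
      protected:
        virtual void handleMessage(cMessage *msg) override {
            simtime_t now = simTime();    // global function, not SimTime()
            EV << "packet arrived at t=" << now << "\n";
            energy -= 0.5;                // hypothetical per-packet cost
            energyVector.record(energy);  // recorded against current time
            delete msg;
        }
    };

    Define_Module(Lcn);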

Changing scheduler tick time

I want to change the scheduler tick time (the amount of time the CPU spends on each process).
Initially I read about jiffies: the jiffies variable represents the number of timer ticks since boot, and CONFIG_HZ in the configuration file represents the number of timer ticks per second. Please correct me if this is not right.
Is the CONFIG_HZ value the same as the scheduler tick time? If it is different, please guide me to where I can change the scheduler tick time.
Yes, CONFIG_HZ defines the number of ticks in one second.
Basically, the scheduler is invoked every 1/CONFIG_HZ seconds to handle task wakeup, task sleeping, and load balancing; for example, with CONFIG_HZ=250 the tick fires every 4 ms.
scheduler_tick -> this is the function that gets called every 1/CONFIG_HZ seconds.
CONFIG_HZ is defined in Kconfig, and its value is set via .config, which can be modified using menuconfig.
The global variable jiffies holds the number of ticks that have occurred since the system booted.
I'd like to clarify the terms. A jiffy is, strictly speaking, a measure of time: just as we have hours, minutes, and seconds, so we have the jiffy, and the kernel happens to work with time in jiffy units. It also happens that the scheduler is launched every jiffy (roughly speaking). To get more details I suggest looking at the "Linux Kernel Development" book: https://github.com/eeeyes/My-Lib-Books/blob/master/Linux%20Kernel%20Development%2C%203rd%20Edition.pdf

Default for 0 dB sound level as an absolute float value

I'm currently building something like a tiny software audio synthesizer on Windows 7 in C++. The core engine is running, and upon receiving MIDI events it plays notes, changes programs, etc. What puzzles me at the moment is where to put the 0 dB reference sound pressure level of the output channels.
Let's say the synthesizer produces a 440 Hz sine wave with an amplitude of |0.5f|. In order to calculate the sound level in dB I need to set the reference level (0 dB). Does anyone know of a default for this?
When decibel relative to full scale is in question, a.k.a. dBFS, zero dB is assigned to the maximum possible digital level. A quote from the Wikipedia article:
0 dBFS is assigned to the maximum possible digital level.[1] For example, a signal that reaches 50% of the maximum level at any point would peak at -6 dBFS, i.e. 6 dB below full scale. All peak measurements will be negative numbers, unless they reach the maximum digital value.
First you need to be clear about units. dB on its own is a ratio, not an absolute value. As @Roman R. suggested, you can just use 0 dB to mean "full scale", and then your range will run from 0 dB (max) down to whatever negative dB value corresponds to the minimum you are interested in (e.g. -120 dB). However, this is just an arbitrary measurement which doesn't tell you anything about the absolute value of the signal.
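For that full-scale convention the conversion is just a logarithm of the normalized amplitude; a minimal sketch (names and values are illustrative):

    #include <cmath>
    #include <cstdio>

    // Convert a normalized amplitude (full scale = 1.0) to peak dBFS.
    double peak_dbfs(double amplitude) {
        return 20.0 * std::log10(std::abs(amplitude));
    }

    int main() {
        // The 440 Hz sine from the question, peaking at |0.5|, sits
        // 6 dB below full scale.
        std::printf("0.5 FS -> %.2f dBFS\n", peak_dbfs(0.5)); // -6.02
        std::printf("1.0 FS -> %.2f dBFS\n", peak_dbfs(1.0)); //  0.00
        return 0;
    }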
In your question, though, you refer to dB SPL (SPL = Sound Pressure Level), which is an absolute unit. 0 dB SPL is typically defined as 20 µPa (RMS), which is around the threshold of human hearing, and in this case the range of interest might be, say, -20 dB SPL to +120 dB SPL. However, if you really do want to measure dB SPL and not just an arbitrary dB value, then you will need to calibrate your system to take into account microphone gain, microphone frequency response, A-D sensitivity/gain, and various other factors. This is non-trivial, but essential if you actually want to implement some kind of SPL measuring system.
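If you do go down the calibrated route, the core conversion references the measured RMS pressure to 20 µPa; a sketch with a made-up calibration factor (pascals per full-scale digital unit, which you would obtain by calibrating the whole chain):

    #include <cmath>
    #include <cstdio>

    // Made-up calibration factor: pascals per full-scale digital unit.
    const double PA_PER_FS = 2.0;

    // Convert a digital RMS level (full scale = 1.0) to dB SPL,
    // referenced to 20 uPa.
    double db_spl(double rms_fs) {
        const double p_ref = 20e-6;
        return 20.0 * std::log10(rms_fs * PA_PER_FS / p_ref);
    }

    int main() {
        // A sine peaking at 0.5 FS has an RMS of 0.5/sqrt(2) FS.
        std::printf("%.1f dB SPL\n", db_spl(0.5 / std::sqrt(2.0)));
        return 0;
    }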
