UHD B210 phase synchronisation between radios - usrp

I have two B210 radios sharing a 10 MHz external master clock and a 1 PPS time signal. I have noticed that each time I start a receive stream simultaneously on both radios, the two streams differ by a random but quantised phase offset.
For example, if the internal master clock rate is set to 16 MHz and the sample rate is 1 MHz, the phase difference is a random multiple of Pi/6. Note this is between different B210 radios; there is no such variation between the two RF channels of a single B210.
I need the radios to be in phase lock, not just frequency lock, for the measurements I am making. My workaround at the moment is to inject a calibration signal into all the radios at the start of each capture, measure this difference, and then compensate for it by adjusting the samples in software.
Is there something I have missed in the UHD API which allows me to lock the radios together so they do not have this variation?

There is no way to provide phase synchronisation between the radios, as each has its own local synthesizer.
From USRP Driver Manual:
After tuning the RF front-ends, each local oscillator may have a random phase offset due to the dividers in the VCO/PLL chains. This offset will remain constant after the device has been initialized, and will remain constant until the device is closed or re-tuned.
More here http://files.ettus.com/manual/page_sync.html
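
For reference, a minimal sketch of the software compensation step described in the question, assuming the per-capture phase offset has already been measured from the injected calibration signal (the buffer layout and the compensate_phase name are illustrative, not part of UHD):

    #include <complex.h>
    #include <math.h>
    #include <stddef.h>

    /* Rotate one radio's samples by the measured phase offset so that both
     * captures line up. phase_offset is the offset (in radians) measured from
     * the shared calibration tone at the start of the capture. */
    static void compensate_phase(float complex *samples, size_t n, float phase_offset)
    {
        const float complex rot = cexpf(-I * phase_offset);
        for (size_t i = 0; i < n; i++)
            samples[i] *= rot;
    }

Because the offset is constant until the device is closed or re-tuned, one measurement per capture is enough.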

Related

Generating multiple delayed channels in FPGA

I'm trying to find a way of implementing, on an FPGA, a multi-channel delayed signal in real time. My intention is to digitise (A/D) a continuous audio signal and split it into 10 output channels, with each channel time-delayed by a different amount. The delays vary from 10 us to 50 us between channels. I'm attempting to beamform an audio signal.
This could be done with a RAM block large enough to hold the data for the longest required delay.
So there would be a ring buffer: samples are written at a common head and read out at different offsets from the head, with the offsets matching the desired delays. Even at a few megasamples per second (unlikely for audible sound) you should be able to do that with a simple dual-port RAM block (one write port, one read port), or even with a single-port RAM.
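
A small behavioural C model of that ring-buffer scheme, just to illustrate the addressing (the channel count, buffer depth, and per-channel delay values are placeholders; in the FPGA the buffer maps onto the dual-port RAM block):

    #include <stdint.h>

    #define BUF_LEN  1024            /* must exceed the longest delay in samples */
    #define NUM_CH   10

    static int16_t buf[BUF_LEN];     /* models the dual-port RAM block */
    static unsigned head;            /* common write pointer */

    /* Per-channel delays expressed in samples; the real values follow from
     * the sample rate and the required 10..50 us channel-to-channel steps. */
    static const unsigned delay[NUM_CH] = {10, 20, 30, 40, 50, 60, 70, 80, 90, 100};

    /* Push one new input sample and produce one delayed sample per channel. */
    static void delay_line_step(int16_t in, int16_t out[NUM_CH])
    {
        buf[head] = in;
        for (unsigned ch = 0; ch < NUM_CH; ch++)
            out[ch] = buf[(head + BUF_LEN - delay[ch]) % BUF_LEN];
        head = (head + 1) % BUF_LEN;
    }

In hardware the modulo becomes free if BUF_LEN is a power of two, since the address simply wraps.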

Best Route For Input Clocks on Kintex7 FPGA

I'm looking for advice on a less than ideal situation.
I've inherited a project where we have a hardware design issue. We generate a clock to a chip, which feeds the clock back in over a non-clock-capable input. This works at up to 160 MHz, but we are looking to increase the clock, so I'm researching I/O options. This is used to clock 8 parallel data inputs.
Right now the data inputs go through a delay and an IDDR block. The output is fed to a FIFO. Our clock is still routed to a BUFG, so we have:
Data - IDELAY - IDDR - FIFO
Clock - BUFG ----^------^
I read somewhere that routing to a BUFG has a large delay, so a BUFR/BUFIO is better. Is this the case? Have I missed a better option?
When you say generating a clock to "a chip", I will assume that you mean the Kintex7 chip.
The delay is not a problem. The issue is to set up your timing constraints properly so that static timing analysis can verify that no setup or hold times are violated across all corners.
If you look at the DS182 data sheet, the AC switching characteristics will give you a rough idea of how fast the chip can perform.
However, the best approach is to let the timing analyzer inside Vivado calculate whether your desired clock frequency will be able to close timing.
You just need to make sure:
The data input is synchronous to your final clock.
If it isn't, then pass that data input through two stages of registers clocked by the final clock.
Specify your timing constraints
Run through synthesis and implementation
Check the timing to see that there are no violations.
Or maybe I did not understand something about what you are trying to do.

What is the difference between the TEST pin and the READY pin on the 8086 microprocessor?

Can you tell me the difference between the TEST pin and the READY pin on the 8086 microprocessor? Both of them seem to deal with waiting.
TEST: input is examined by the "Wait" instruction. If the TEST input is LOW execution continues, otherwise the processor waits in an "Idle" state. This input is synchronized internally during each clock cycle on the leading edge of CLK.
READY: is the acknowledgement from the addressed memory or I/O device that it will complete the data transfer. The READY signal from memory/IO is synchronized by the 8284A Clock Generator to form READY. This signal is active HIGH. The 8086 READY input is not synchronized. Correct operation is not guaranteed if the setup and hold times are not met.
If you read the description of the READY signal, the WAIT instruction is not mentioned.
The READY signal is sampled on each and every memory or I/O cycle. If a device is not capable of responding to the CPU's request in the standard bus cycle, the READY signal can be used to stretch out the cycle, giving it more time.
This is done by signalling to the CPU that the device is not READY. The CPU adds clock cycles to the bus transaction until the device is READY. These extra cycles are given the confusing name of "WAIT STATES", and have nothing to do with the WAIT instruction or the TEST line. Many years ago, makers of fast memory would brag "No wait states!"
The part about the 8284A refers to the details of ensuring that the READY input meets the timing requirements of the processor, namely the so-called setup and hold times, normally only of concern to the engineer designing the computer system.
In your question, you can see that the TEST input is explicitly sampled by the WAIT instruction. The TEST input is simply an input signal with a dedicated pin on the processor (TEST) sampled by a dedicated instruction (WAIT).
Most processors have signals similar to the READY line. The TEST line is much rarer.

[Common Clock Framework]: How to set the rate of a muxed clock if its parent clock cannot provide it?

I am studying the Common Clock Framework and have a question related to muxed clocks.
Suppose we want to set a particular rate on a muxed clock, but the clock's current parent cannot supply the desired rate (the parent runs at a lower rate).
Is there any function or mechanism that automatically switches the clock's parent (from its parent list) and sets the desired rate?
One possible solution: we can call set_parent() manually and then call set_rate(), which can set the desired rate. But what if we could just call set_rate() and have it switch the clock's parent automatically and set the desired rate?
Some clocks may up-scale a timer using a PLL, so having a parent with a lower rate doesn't mean that automatically trying to increase the parent clock is the best solution. The Common Clock Framework (CCF) is meant to allow multiple drivers/sub-systems access to a shared resource. The CCF doesn't try to be intelligent, as the way different clock trees behave is difficult to know generically.
One possible solution: we can call set_parent() manually and then call set_rate(), which can set the desired rate.
I think you mean to call get_parent() and then use set_rate()? Some of the time it is not easy to call set_parent(), as the parent may be fixed. You need to read your SOC documentation. In some cases there are multiple input clocks available; i.e., the real clock hierarchy is not a tree but a DAG, although the active hierarchy is tree-like.
But what if we could just call set_rate() and have it switch the clock's parent automatically and set the desired rate?
This might make sense for the particular SOC clock you are looking at, but not generically. There may be dozens of clocks dependent on a parent, and it may be possible to re-rate grandparents, etc. It is probably not the best choice to re-rate the system clock just because an audio driver wants a clock that is a few Hz out.
It is possible to write the clock driver so that it will re-rate the parent if a request is made on a child that doesn't work. However, this is part of the clock drivers and not the CCF generally.
Example
For instance, an SOC might have an audio clock with three input sources:
1. A dedicated 48000 kHz clock
2. A low-speed bus clock (platform general)
3. A USB clock
Option 1 gives the best sound quality at the highest power consumption. Option 2 is meant to be generic but may not match sound rates well, resulting in sub-optimal DAC/wave/sound generation. Option 3 might be good for some sort of USB sound slave, but if you are not using USB it may be expensive in power consumption.
In the case above, set_parent() may be a way to get the desired rate, if the SOC clock driver supports it.
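
As a rough sketch of that manual route from a consumer driver, using the standard linux/clk.h calls (the "audio-mux" and "audio-pll" clock names are made up for illustration, and whether clk_set_parent() is permitted depends on the SOC clock driver):

    #include <linux/clk.h>
    #include <linux/device.h>
    #include <linux/err.h>
    #include <linux/errno.h>

    /* Illustrative only: switch the audio mux to the dedicated audio PLL and
     * then ask for the wanted rate. The clock names depend on the SOC. */
    static int select_audio_clock(struct device *dev, unsigned long rate)
    {
        struct clk *mux, *parent;
        int ret;

        mux = devm_clk_get(dev, "audio-mux");      /* hypothetical clock name */
        parent = devm_clk_get(dev, "audio-pll");   /* hypothetical clock name */
        if (IS_ERR(mux) || IS_ERR(parent))
            return -ENODEV;

        ret = clk_set_parent(mux, parent);         /* manual re-parenting */
        if (ret)
            return ret;

        return clk_set_rate(mux, rate);            /* now the rate is reachable */
    }
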
There is no intelligence in the CCF; if there is some flexibility, it is in the clock driver, but this depends on the clock hardware. It is up to the programmer to read the SOC documentation and determine the best way to configure the clock tree. You should probably also examine the clock driver for your SOC and Linux version to see what it supports. You cannot generically change the clock rate of parents in a driver, as other devices may depend on them. If you need this for a particular SOC in an SOC family, you need to special-case it by examining the device tree to see which SOC the driver is running on. This is the case where you can use get_parent() and set_rate() for the particular SOC.
Reference: A question on older Linux clock structure.

How to generate a ~100 kHz clock signal in a Linux kernel module with bit-banging?

I'm trying to generate a clock signal of around 100 kHz on a GPIO pin (ARM platform, mach-davinci, kernel 2.6.27), using a tasklet with high priority to do it. The theory is simple: set the GPIO high, udelay for 5 us, set the GPIO low, wait another 5 us. But strange problems appear. First of all, I can't get this 5 us of delay; that's fine, it looks like a hardware performance problem, so I moved to a period of 40 us (which gives ~25 kHz). The second problem is worse: once per ~10 ms, udelay waits 3x longer than usual. I'm thinking that it's the heartbeat taking this time, but this is unacceptable from the point of view of the protocol which will be implemented on top of this. Is there any way to temporarily disable the heartbeat procedure, let's say, for 500 ms? Or maybe I'm doing it wrong from the beginning? Any comments?
You cannot use a tasklet for this kind of job. Tasklets can be preempted by interrupts. In some cases your tasklet can even be executed in process context!
If you absolutely have to do it this way, use an interrupt handler - get in, disable interrupts, do whatever you have to do and get out as fast as you can.
Generating the clock asynchronously in software is not the right thing to do. I can think of two alternatives that will work better:
Your processor may have a built-in clock generator peripheral that isn't already being used by the kernel or another driver. When you set one of these up, you tell it how fast to run its clock, and it just starts running out the pulses.
Get your processor's datasheet and study it.
You might not find a peripheral called a "clock" per se, but might find something similar that you can press into service, like a PWM peripheral.
The other device you are talking to may not actually require a regular clock. Some chips that need a "clock" line merely need a line that goes high when there is a bit to read, which then goes low while the data line(s) are changing. If this is the case, the 100 kHz thing you're reading isn't a hard requirement for a clock of exactly that frequency, it is just an upper limit on how fast the clock line (and thus the data line(s)) are allowed to transition.
With a CPU so much faster than the clock, you want to split this into two halves:
The "top half" sets the data line(s) state correctly, then brings the clock line up. Then it schedules the bottom half to run 5 μs later, using an interrupt or kernel timer.
In the "bottom half", called by the interrupt or timer, bring the clock line back down, then schedule the top half to run again 5 μs later.
Unless you can run your timer tasklet at a higher priority than the kernel timer, you will always be susceptible to this kind of jitter. Do you really have to do this by bit-banging? It would be far easier to use a hardware timer or PWM generator: configure the timer to run at your desired rate, set the pin to output, and you're done.
If you need software control over each bit period, you can try to work around the other tasks by setting your tasklet to run at a shorter period, say three-fourths of your 40 us delay. In the tasklet, disable interrupts and poll the clock until you get to the correct 40 us timeslot, set the I/O state, re-enable interrupts, and exit. But this effectively ties up 25% of your system in watching a clock.
