Is there real-time DC offset compensation in Ettus N310? - usrp

I am working with an Ettus N310 that is being controlled by some 3rd-party software. I don't have much insight into how it sets up and controls the device; I just tell it what center frequency to tune to and when to grab IQ. If I receive a signal, let's say a tone, at or very near the center frequency, I end up with a large DC offset that jumps around every few hundred microseconds. If I offset the signal well away from the center frequency, the DC offset is negligible. From what I see in Ettus' documentation, DC offset compensation is something that's set once when the device starts receiving, but here it looks to me like it is being done periodically while the USRP is acquiring data. If I receive a signal near the center frequency, the DC offset compensator gets confused and creates a worse bias. Is this a feature of the N310 that I am not aware of, or is it probably something that the 3rd-party controller is doing?

Yes, there is DC offset compensation in the N310. The N310 uses an Analog Devices RFIC (the AD9371), which has these calibrations built in. Both the AD9371 and the AD9361 (used in the USRP E3xx and B2xx series) don't like narrow-band signals close to DC, due to their calibration algorithms (those chips are optimized for telecom signals).
Like you said, the RX DC offset compensation happens at initialization. At runtime, the quadrature error correction (QEC) kicks in. The manual holds a table of those calibrations: https://uhd.readthedocs.io/en/latest/page_usrp_n3xx.html#n3xx_mg_calibrations. You can try turning off the QEC tracking and see if it improves your system's performance.
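If you have UHD access to the device alongside the 3rd-party software, a quick experiment is to toggle the automatic RX corrections and watch whether the jumping offset disappears. A minimal sketch using UHD's multi_usrp C++ API (the address is a placeholder; whether these calls reach the AD9371 tracking calibrations depends on your UHD version, so check against the table linked above):

    #include <uhd/usrp/multi_usrp.hpp>
    #include <string>

    int main() {
        // connect to the N310 (replace the address with your device's IP)
        auto usrp = uhd::usrp::multi_usrp::make(
            std::string("type=n3xx,addr=192.168.10.2"));

        // disable the automatic RX corrections on channel 0; if the wandering
        // DC offset stops, a runtime calibration was indeed the culprit
        usrp->set_rx_dc_offset(false, 0);  // automatic DC offset correction off
        usrp->set_rx_iq_balance(false, 0); // automatic IQ balance (QEC) off

        return 0;
    }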

Related

PWM transistor heating - Raspberry

I have a Raspberry Pi and an auxiliary PCB with transistors for driving some LED strips.
The strips' datasheet says 12 V, 13.3 W/m. I'll use 3 strips in parallel, 1.8 m each, so 13.3 × 1.8 × 3 = 71.82 W; at 12 V that's almost 6 A.
I'm using an 8 A transistor, the E13007-2.
In the project I have 5 channels of different LEDs: RGB and 2 types of white.
R, G, B, W1 and W2 are driven directly from Pi pins.
The LED strips are connected to 12 V, and to CN3/CN4 for GND (through the transistors).
Transistor schematic.
I know that's a lot of current passing through the transistors, but is there a way to reduce the heating? I think they're reaching 70-100 °C. I already had a problem with one Raspberry Pi, and I think it's getting dangerous for the application. I have some large traces on the PCB, so that's not the problem.
Some thoughts:
1 - A resistor driving the base of the transistor. Maybe it won't reduce the heating, but I think it's advisable for short-circuit protection; how can I calculate its value?
2 - The PWM has a frequency of 100 Hz; does it make any difference if I reduce this frequency?
The BJT you're using has a current gain (hFE) of roughly 20. This means the collector current is roughly 20 times the base current, or, put the other way, the base current needs to be 1/20 of the collector current, i.e. 6 A / 20 = 300 mA.
A Raspberry Pi certainly can't supply 300 mA from its IO pins, so you're operating the transistor in the linear region, which causes it to dissipate a lot of heat.
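To put rough, illustrative numbers on it (not measurements): a BJT driven into hard saturation drops maybe 0.2-0.5 V, so at 6 A it would dissipate about 0.5 V × 6 A = 3 W. Stuck partway into the linear region with, say, 3 V across it, the same load means 3 V × 6 A = 18 W, which a package like this cannot shed without a serious heat sink; the 70-100 °C you are seeing is consistent with that.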
Change your transistors to MOSFETs with a low enough threshold voltage (around 2.0 V, so there is enough conduction at the 3.3 V IO voltage) to keep it simple.
An N-channel MOSFET will run much cooler, provided you supply enough gate voltage to enhance it fully. Since this is not a high-volume item, why not simply use a MOSFET gate driver chip? Then you can use a device with a low RDS(on). Another option is the Siemens BTS660 (BTS50085B, TO-220): it is a high-side driver that you will need to drive with an open-collector or open-drain device. It will switch 5 A at room temperature with no heat sink, is rated for much more current, and comes in a TO-220-style package. It is obsolete but still available, as is its replacement. Keep in mind that MOSFETs are voltage controlled, while BJTs are current controlled.

ATTiny85 Internal Clock and One-Wire

Is the internal clock on the ATTiny85 sufficiently accurate for one-wire timing?
Per https://learn.sparkfun.com/tutorials/ws2812-breakout-hookup-guide, one-wire timing seems to need accuracy in the 0.05 µs range, so a 10% clock error on the AVR at 8 MHz would cause timing differences of about 0.0125 µs per cycle (assuming the 10% error figure is accurate, and that it's a 10% error on frequency, not a ±10% variance on each pulse).
Not a ton of margin - but is it good enough?
First of all, the WS2812 protocol is not 1-Wire.
The control protocol of the WS2812 is described in its datasheet.
The short answer is yes: the ATtiny85, and the whole AVR family, have enough clock accuracy to control a WS2812 chain. But the routine should be written in assembler, and interrupts must be disabled, to guarantee that the timing requirements are met. With careful programming, the 8 MHz internal oscillator is even enough to output different data to two WS2812 chains simultaneously.
So, when running at 8 MHz ±10%, one clock cycle is 112...138 ns.
The datasheet requires (with 150 ns tolerance):
When transmitting a "one": a high level of 550...850 ns; 6 clock cycles (672...828 ns) fits this range (5 cycles, 560...690 ns, fits as well)
followed by a low level of 450...750 ns; 5 cycles (560...690 ns)
When transmitting a "zero": a high level of 200...500 ns; 3 cycles (336...414 ns)
followed by a low level of 650...950 ns; 6 cycles (672...828 ns).
So, as you can see, even with a ±10% tolerance on the clock source, you can find an integer number of cycles that is guaranteed to fall within the required intervals.
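To make the arithmetic concrete, here is a small host-side check (ordinary C++, not AVR code; the window values are the ones quoted above) that recomputes each pulse width from the ±10% cycle time and compares it against the datasheet limits:

    #include <cstdio>

    int main() {
        const double t_min = 112e-9, t_max = 138e-9; // one cycle at 8 MHz +/-10%
        struct Window { const char* name; int cycles; double lo, hi; } w[] = {
            {"T1H (one, high)",  6, 550e-9, 850e-9},
            {"T1L (one, low)",   5, 450e-9, 750e-9},
            {"T0H (zero, high)", 3, 200e-9, 500e-9},
            {"T0L (zero, low)",  6, 650e-9, 950e-9},
        };
        for (const auto& e : w) {
            double lo = e.cycles * t_min, hi = e.cycles * t_max;
            bool ok = (lo >= e.lo) && (hi <= e.hi);
            std::printf("%-18s %d cycles -> %3.0f...%3.0f ns  %s\n", e.name,
                        e.cycles, lo * 1e9, hi * 1e9, ok ? "OK" : "OUT OF SPEC");
        }
        return 0;
    }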
Speaking from experience, it will still work even if the low level that follows a pulse is extended by a couple hundred nanoseconds.
There are known issues using the internal oscillator with a UART: the UART needs about 2% timing accuracy, while the internal oscillator can be up to 10% off with the factory setting. While it can be calibrated (the AVR has the OSCCAL register for that purpose), its frequency is also influenced by temperature.
It is worth a try, but it might not be reliable across temperature changes or a fluctuating operating voltage.
References: ATmega's internal oscillator - how bad is it, Timing accuracy on tiny2313, Tuning internal oscillator
The timing requirements of NeoPixels (WS2812B) are wide enough that the only really critical part is the minimum width of a 1 bit. The ATtiny85 at 16 MHz is plenty fast to drive a string of them from a GPIO pin. At 8 MHz, it may not work (I haven't tried yet). I just released a small Arduino sketch which allows you to control NeoPixel strings of any length on an ATtiny85 without using any RAM.
https://github.com/bitbank2/NeoPixel
For devices with hardware SPI (e.g. ATMega328p), it's better to use SPI to shift out the bits (also included in my code).
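The idea behind the SPI approach is to expand every WS2812 data bit into several carrier bits, so that the shift register generates the waveform for you. A sketch of just the encoding step (this assumes a ~2.4 MHz shift clock and three carrier bits per data bit, i.e. ~417 ns per carrier bit; keeping the output gap-free between bytes is the hard part and is omitted here):

    #include <cstdint>
    #include <vector>

    // Expand one WS2812 data byte (MSB first) into three carrier bits per
    // data bit: 0b100 encodes a zero (~417 ns high), 0b110 encodes a one
    // (~833 ns high) -- close to the datasheet windows discussed above.
    std::vector<uint8_t> encodeByte(uint8_t b) {
        uint32_t bits = 0;
        for (int i = 7; i >= 0; --i)
            bits = (bits << 3) | (((b >> i) & 1) ? 0b110u : 0b100u);
        // 24 carrier bits -> 3 bytes, MSB first
        return { uint8_t(bits >> 16), uint8_t(bits >> 8), uint8_t(bits) };
    }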

MME Audio Output Buffer Size

I am currently playing around with outputting FP32 samples via the old MME API (the waveOutXxx functions). The problem I've bumped into is that if I provide a buffer length that does not evenly divide the sample rate, audible clicks appear in the audio stream; when recorded, it looks like some samples are lost (I'm generating a sine wave for the test). Currently I am using the "magic" value of 2205 samples per buffer at a 44100 Hz sample rate.
The question is, does anybody know the reason for these dropouts and if there is some magic formula that provides a way to compute the "proper" buffer size?
The safe alignment of data buffers is given by the nBlockAlign value of the WAVEFORMATEX structure.
Software must process a multiple of nBlockAlign bytes of data at a time. Data written to and read from a device must always start at the beginning of a block. For example, it is illegal to start playback of PCM data in the middle of a sample (that is, on a non-block-aligned boundary).
For PCM formats this is the number of bytes for a single sample across all channels. Non-PCM formats have their own alignments, often equal to the length of a format-specific block, e.g. 20 ms.
Back when waveOutXxx was the primary audio API, carrying over unaligned bytes would have been an unreasonable burden and unneeded performance overhead. Nowadays this API is a compatibility layer on top of other audio APIs, and I suppose that unaligned trailing bytes are simply dropped so the rest of the content can still play, rather than the whole buffer being rejected over what is likely just a small, non-fatal inaccuracy on the caller's part.
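In practice this means deriving the buffer size from the format instead of hard-coding a magic sample count. A sketch for FP32 stereo at 44100 Hz (the ~50 ms latency target is an arbitrary choice):

    #include <windows.h>
    #include <mmreg.h> // WAVE_FORMAT_IEEE_FLOAT
    #include <cstdio>

    int main() {
        WAVEFORMATEX wfx = {};
        wfx.wFormatTag      = WAVE_FORMAT_IEEE_FLOAT; // FP32 samples
        wfx.nChannels       = 2;
        wfx.nSamplesPerSec  = 44100;
        wfx.wBitsPerSample  = 32;
        wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8; // 8 bytes/frame
        wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

        // pick a latency target and round DOWN to a whole number of blocks
        DWORD wanted  = wfx.nAvgBytesPerSec / 20; // ~50 ms worth of audio
        DWORD aligned = wanted - wanted % wfx.nBlockAlign;
        std::printf("buffer: %lu bytes = %lu frames\n", (unsigned long)aligned,
                    (unsigned long)(aligned / wfx.nBlockAlign));
        return 0;
    }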
If you fill the audio buffer with sine samples and play it looped, it will very easily click unless the buffer length is a multiple of the waveform's period, as you said... The audible click is in fact a discontinuity in the wave. A more advanced technique is to fill the buffer dynamically: set up a callback notification as the buffer pointer advances, and write the appropriate data at the appropriate offset. I would use a considerably larger buffer, as 2205 samples is too short to receive an async notification, calculate the data, and write the buffer, all while playing; but that depends on CPU power.
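Whatever buffer size you end up with, the discontinuity itself is easy to avoid: keep the oscillator phase in a variable that survives across buffer fills, instead of restarting the sine at the top of every buffer. A sketch (the function name is made up):

    #include <cmath>
    #include <vector>

    constexpr double kTwoPi = 6.283185307179586;

    // Fill 'out' with a sine wave, carrying 'phase' across calls so that
    // consecutive buffers join without a click, whatever their length is.
    void fillSine(std::vector<float>& out, double freqHz, double sampleRate,
                  double& phase) {
        const double step = kTwoPi * freqHz / sampleRate;
        for (float& s : out) {
            s = static_cast<float>(std::sin(phase));
            phase += step;
            if (phase >= kTwoPi) phase -= kTwoPi; // keep the phase bounded
        }
    }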

Default for 0db sound level as an absolute float value

I'm currently building something like a tiny software audio synthesizer on Windows 7 in C++. The core engine is running, and upon receiving MIDI events it plays notes, changes programs, etc. What puzzles me at the moment is where to put the 0 dB reference sound pressure level of the output channels.
Let's say the synthesizer produces a 440 Hz sine wave with an amplitude of |0.5f|. In order to calculate the sound level in dB I need to set the reference level (0 dB). Does anyone know something like a default for this?
When decibels relative to full scale are in question, aka dBFS, zero dB is assigned to the maximum possible digital level. A quote from Wikipedia:
0 dBFS is assigned to the maximum possible digital level. For example, a signal that reaches 50% of the maximum level at any point would peak at -6 dBFS, i.e. 6 dB below full scale. All peak measurements will be negative numbers, unless they reach the maximum digital value.
First you need to be clear about units. dB on its own is a ratio, not an absolute value. As Roman R. suggested, you can just use 0 dB to mean "full scale", and then your range will run from 0 dB (max) down to some negative dB value corresponding to the minimum level you are interested in (e.g. -120 dB). However, this is just an arbitrary measurement which doesn't tell you anything about the absolute value of the signal.
In your question, though, you refer to dB SPL (SPL = Sound Pressure Level), which is an absolute unit. 0 dB SPL is typically defined as 20 µPa (RMS), which is around the threshold of human hearing, and in this case the range of interest might be, say, -20 dB SPL to +120 dB SPL. However, if you really do want to measure dB SPL and not just an arbitrary dB value, then you will need to calibrate your system to take into account microphone gain, microphone frequency response, A-D sensitivity/gain, and various other factors. This is non-trivial, but essential if you actually want to implement some kind of SPL measuring system.
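For the dBFS convention the arithmetic is just 20·log10 of the level relative to full scale (1.0 for float samples). A small self-contained example, which also shows the peak-vs-RMS distinction that matters when you quote a level:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        // 440 Hz sine with amplitude 0.5 at 44100 Hz, as in the question
        std::vector<float> buf(4410);
        for (size_t i = 0; i < buf.size(); ++i)
            buf[i] = 0.5f * (float)std::sin(6.283185307179586 * 440.0 * i / 44100.0);

        double peak = 0.0, sumSq = 0.0;
        for (float s : buf) {
            peak = std::max(peak, (double)std::fabs(s));
            sumSq += (double)s * s;
        }
        double rms = std::sqrt(sumSq / buf.size());
        // amplitude 0.5 -> peak ~ -6.02 dBFS; the RMS of a sine is 3.01 dB lower
        std::printf("peak: %.2f dBFS, rms: %.2f dBFS\n",
                    20.0 * std::log10(peak), 20.0 * std::log10(rms));
        return 0;
    }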

GPS Time synchronisation

I'm parsing NMEA GPS data from a device which sends timestamps without milliseconds. As far as I've heard, these devices use a specific trigger point for when they send the sentence carrying the .000 timestamp - afaik the $ of the GGA sentence.
So I'm parsing the GGA sentence and take a timestamp when the $ is received (compensating for any further characters read in the same operation, using the serial port baud rate).
From this information I calculate the offset for correcting the system time, but when I compare the resulting time against some NTP servers, I get a constant difference of 250 ms. When I correct for this manually, I'm within a deviation of 20 ms, which is OK for my application.
But of course I'm not sure where this offset comes from, and whether it is somehow specific to the GPS mouse I'm using or to my system. Am I using the wrong $ character, or does someone know how exactly this should be handled? I know this question is very fuzzy, but any hints on what could cause this offset would be very helpful!
Here is some sample data from my device, with the $ character I take as the time reference marked:
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPRMC,003538.000,A,5046.8555,N,00606.2913,E,0.00,22.37,160209,,,A*58
-> $ <- GPGGA,003539.000,5046.8549,N,00606.2922,E,1,07,1.5,249.9,M,47.6,M,,0000*5C
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPGSV,3,1,10,09,77,107,17,12,63,243,30,05,51,249,16,14,26,315,20*7E
$GPGSV,3,2,10,30,24,246,25,17,23,045,22,15,15,170,16,22,14,274,24*7E
$GPGSV,3,3,10,04,08,092,22,18,07,243,22*74
$GPRMC,003539.000,A,5046.8549,N,00606.2922,E,0.00,22.37,160209,,,A*56
-> $ <- GPGGA,003540.000,5046.8536,N,00606.2935,E,1,07,1.5,249.0,M,47.6,M,,0000*55
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPRMC,003540.000,A,5046.8536,N,00606.2935,E,0.00,22.37,160209,,,A*56
-> $ <- GPGGA,003541.000,5046.8521,N,00606.2948,E,1,07,1.5,247.8,M,47.6,M,,0000*5E
You have to take into account the things that are going on inside the GPS device:
it receives the satellite signals and calculates position, velocity and time,
it prepares the NMEA message and puts it into the serial port buffer,
it transmits the message.
GPS devices have relatively slow CPUs (compared to modern computers), so the latency you are observing is the result of the processing the device must do between generating the position and the moment it begins transmitting the data.
Here is one analysis of latency in consumer-grade GPS receivers from 2005. There you can find latency measurements for specific NMEA sentences.
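As a side note on the character-time compensation mentioned in the question: at typical NMEA serial rates, one character takes a substantial fraction of a millisecond, so it is worth computing explicitly. A sketch (the 4800 baud rate and the read-ahead count are assumptions, not values from the question):

    #include <cstdio>

    int main() {
        // 8N1 framing = 10 bits on the wire per character
        const double baud = 4800.0;
        const double msPerChar = 10.0 * 1000.0 / baud; // ~2.083 ms per character
        int charsBeforeDollar = 12; // characters read in the same operation
        std::printf("subtract %.3f ms from the receive timestamp\n",
                    charsBeforeDollar * msPerChar);
        return 0;
    }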
